At a time when artificial intelligence is reshaping decision-making processes, data sovereignty is emerging as a new domain of global power, and young leaders are seeking shared solutions to global challenges, the Young Leaders Union summit brought together participants from 60 countries in Paris. From sustainable development to social justice, from the ethical dimensions of technology to the evolving definition of leadership, a wide range of multilayered themes shaped this year’s discussions.

This year, an Eczacıbaşı colleague was also part of the summit. Representing Türkiye in Paris, Ezgi Yalçın, Quality Systems Specialist at Eczacıbaşı Information and Communication Technologies (Eczacıbaşı ICT), took part in the program with a talk at the opening session, where she explored the role of artificial intelligence in everyday decision-making. For the Eczacıbaşı Life Blog, Yalçın has gathered her reflections on those four intensive days, the new perspectives the discussions opened up, and the intellectual traces the summit left behind.

Held in Paris from 1 to 4 November 2025, Young Leaders Union was a highly dynamic and multifaceted summit, bringing together young leaders who are actively working in their fields and aiming to create social impact. Designed to connect young leaders around the world and foster dialogue on the Sustainable Development Goals, justice, inclusion and leadership, Young Leaders Union (YLU) operates in collaboration with the United Nations and under the umbrella of Edutourism for Unity, an international organization based in the United Kingdom. The gathering in Paris focused particularly on the intersections of technology, justice and social transformation. Representing Türkiye and opening the summit’s first session with a talk titled “Responsible AI in Everyday Decisions” was an experience of great value for me, both professionally and personally.

As someone visiting Paris for the first time, I found that the city carries a curious balance: cosmopolitan yet calm, modern yet deeply historical, fast-paced yet composed. From the moment you arrive, you can sense the city’s distinctive atmosphere that encourages thinking, discussion and creation. Only later did I realize that this feeling was a quiet prelude to the intense conversations that would unfold throughout the summit.

The selection process itself was an important part of this journey. Although individuals apply to Young Leaders Union on their own, the evaluation does not rely solely on personal statements. Applicants are expected to demonstrate support from non-profit organizations, communities or international networks that generate social benefit. This support is designed to make visible the ecosystems candidates actively contribute to and the real-world impact of their work. Rather than functioning as a traditional reference system, the process focuses on the structures candidates are engaged with and how their work is positioned within those structures. In this way, applications are assessed not as individual declarations of intent, but through tangible impact, continuity and contribution to communities.

When I applied in the spring of 2025, I was supported by Hackathon Raptors, an international technology community based in the United Kingdom and widely known for its software focused hackathons. The community is recognized for building a strong ecosystem and network for young developers working in artificial intelligence, rapid prototyping, application development and early stage innovation. My role within this community, however, takes a slightly different angle. Beyond technical development, I focus on bringing perspectives of sustainability, ethical frameworks and risk management into the technology space. I aim to support not only ideas that function well, but also those that are governed responsibly, positioned thoughtfully and implemented with social benefit in mind. Within this context, my work with Hackathon Raptors naturally became a reference point in my application.

In the application form, I went beyond outlining my professional background. I also detailed why working on AI governance, data sovereignty, ethical technology and strong institutions is a global issue, and what I hoped to bring into discussion by participating in this summit. My intention was to share institutional experiences from Türkiye while creating a shared space for thinking with international delegates around innovation, justice, trust and sustainability. This approach appears to have resonated during the selection process, as I was given the responsibility of opening the summit’s first session.

With the opening dinner on the first evening, introductions among delegates began. Even in this short time, it became clear that young leaders from across the world had arrived with distinct stories, areas of struggle and visions. Tables quickly turned into clusters of ideas: groups discussing AI ethics, others comparing migration policies, some searching for solutions to climate change, and others sharing experiences from women’s rights movements. From the very first evening, it was evident that this summit was not a space for passive listening, but an active environment for producing ideas and challenging one another’s thinking.

“From the moment you arrive, you can sense Paris’s distinctive atmosphere that encourages thinking, discussion and creation. Only later did I realize that this feeling was a quiet prelude to the intense conversations that would unfold throughout the summit.”

Most delegates had carried out detailed individual work on the United Nations Sustainable Development Goals (SDGs) ahead of the summit. Gender equality reports for SDG 5, climate data for SDG 13, governance models for SDG 16… Everyone arrived with files prepared in line with their own areas of expertise.

The SDGs that formed the foundation of my own work were SDG 9 (Industry, Innovation and Infrastructure) and SDG 16 (Peace, Justice and Strong Institutions), because artificial intelligence is not only an innovation tool that enables new technologies to be developed, but also a governance force that transforms how institutions, governments and even societies make decisions. In my view, if SDG 9 is the “engine” of innovation, SDG 16 is the compass that determines whether this engine steers societies toward a fair path. What will shape the future role of AI lies precisely at this intersection: the balance between the power brought by technology and how that power is governed.

The summit’s first presentation and first interactive discussion were mine. From the moment I began my talk titled “Responsible AI in Everyday Decisions,” I could feel the high level of focus in the room. I spoke about how AI is increasingly becoming an invisible layer of decision-making, how bias does not stem from code itself but from the data being used, and how data sovereignty is emerging as a new strategic source of power for countries.

When I shared an example of an international technology company using an AI-driven decision mechanism in its recruitment process and the gender bias that surfaced within that algorithm, a visible sense of movement spread through the room.

As I moved on to predictive policing, an approach focused on anticipating risks through data and supporting preventive action before crimes occur, I could clearly sense the room shifting into an atmosphere ready to be filled with questions.

In the planned flow, the Q&A session was scheduled for the end of the presentation. However, the session changed direction within the very first minutes. One participant took the floor to raise debates from their own country; immediately after, another asked, without even raising a hand, “How can bias in data be regulated by the state?” The Q&A began organically, and the session quickly turned into a lively discussion platform.

Delegates from the United States, Canada and the United Kingdom contributed particularly actively. Participants from the US raised fundamental questions around the balance between security and freedom. Canadians referred to debates on transparency and public accountability. Legal experts from the UK brought the legal consequences of algorithmic discrimination to the table. The questions from these three countries quickly moved the conversation from technical aspects to ethical, political and societal ground.

The comments that raised the tension most came from two delegates from the finance sector. The statement, “Even if ethically problematic, some models may continue to be used for operational continuity,” acted like a spark in the room. Legal experts immediately pushed back. Women’s rights advocates emphasized that biased models could reproduce historical inequalities. Those working in international policy warned that such practices could damage trust between the state and its citizens. The discussion intensified, yet remained highly productive, because my intention had been precisely to spark this debate and help delegates recognize the ethical consequences of the decisions they make in everyday life.

“Technology, much like law, can have two faces: on one side innovation, on the other the reproduction of inequality.”

After my presentation, the sessions scheduled in the program continued.

The next talk was titled “Justice Has Two Faces,” delivered by a delegate of Namibian origin based in Ireland. Perhaps the most quoted sentence of the summit came from this session: “Justice has two faces, but it’s up to us to decide which one we show the world.”

The talk emphasized that although laws are written with the intention of protection, their implementation can sometimes lead to the opposite outcome. Culture, language, bureaucracy and social norms were described as invisible barriers that not only delay justice for women, migrants and disadvantaged groups, but in some cases deny it altogether. This narrative intersected almost directly with the issue of algorithmic bias discussed in my own session, because technology, much like law, can also have two faces: one side innovation, the other the reproduction of inequality.

A presentation by a participant from Tanzania, who spoke about how girls in certain regions cannot access even the most basic hygiene products, deeply affected everyone in the room. Stories like these once again revealed how stark inequality remains in parts of the world, and how technology can only help narrow this gap when guided by ethical leadership and sound governance.

The third day was dedicated to workshops. In the Digital Reality and Accountability session, the spread of misinformation and the responsibility of platforms were discussed. The Public Policy session focused on how countries use data during times of crisis. Delegates repeatedly returned to the intense debates of the previous day, with AI governance and the risks created by these systems becoming an almost invisible common thread across nearly all sessions.

One of the most striking moments of the day was the Global Leadership and Sustainable Change panel, where a new definition of leadership emerged: leadership was no longer described as a position, but as a responsibility to transform societies.

The farewell breakfast on the fourth day revealed just how deeply the connections formed over a few days had grown. People who had not known each other only days before were now creating shared spaces for thinking and beginning to discuss potential collaborations.

Young Leaders Union 2025 once again made one thing very clear to me:
Technology, ethics, culture and leadership can no longer be considered separately. They are all parts of the same whole, and when one is missing, the entire system begins to falter.

“When leaders are able to place empathy before efficiency and justice before speed, artificial intelligence moves beyond being a tool that merely optimizes processes and becomes a force that strengthens a culture of quality, information security and institutional trust.”

Throughout the summit, similar questions kept circulating in my mind:
Is it possible to establish a shared global ground for AI governance? Is data sovereignty creating a new balance of power? How can women’s and children’s rights be more strongly protected in the age of technology? How should the truth that “justice has two faces” be reflected in system design? And perhaps most importantly: can artificial intelligence truly become a tool that strengthens social trust?

Walking through the streets of Paris, I found myself thinking that the summit was not merely an event, but a prototype designed to discuss the future of technology, leadership and ethical values, bringing us together around a shared axis.

As I returned from Paris, the most dominant feeling I carried with me was this: these discussions are not purely theoretical. They directly affect how institutions are governed, how they are audited, and how they are made sustainable. When viewed through the lens of quality management systems and information security management systems, it becomes very clear that artificial intelligence can no longer be treated as an external element to these structures.

Today, the questions we encounter most frequently in audits revolve around whether processes are clearly defined, how risks are addressed, and how traceable and auditable decision mechanisms are. AI-driven decision-making systems make these questions even more critical, because we are now required to control, monitor and, when necessary, challenge not only human processes but also algorithmic decisions. At this point, artificial intelligence becomes both a new “risk domain” for quality and information security management systems and, when designed correctly, a powerful indicator of organizational maturity.

The sentence I used to close the sessions at the summit mirrored exactly the thought that stayed with me on the journey back:

Artificial intelligence may make faster decisions and appear more intelligent, but it can never grasp the value of a human choice. Responsible AI begins where algorithms end, at the point where conscience meets code.

This balance is precisely what will make the real difference within institutions today. When leaders are able to place empathy before efficiency and justice before speed, artificial intelligence moves beyond being a tool that merely optimizes processes and becomes a structure that strengthens a culture of quality, information security and institutional trust. Audits teach us one fundamental lesson: trust is not built through control alone, but through systems that are thoughtfully designed and responsibly governed.