Artificial intelligence has carried the information age into a new dimension: speed, access, and production are virtually limitless now. Within this boundlessness, the real issue is no longer “accessing information” but protecting the accuracy, security, and ethical use of that information.

The World Economic Forum’s Global Risks Report 2025 lists “misinformation and disinformation” among the most serious global risks for the next two years. The same report highlights “cyber espionage and warfare” as a rising threat category directly linked to information security. This picture makes one thing clear: information has become a strategic source of power.

While the democratization of information represents significant progress in terms of transparency and accessibility, a single click is now enough for false information to reach millions. This can distort societies’ shared sense of reality, undermine trust in institutions, and manipulate individuals’ decision-making processes. At this point, “information security” is no longer just a technical matter; it’s an individual, societal, and institutional responsibility. Every person and institution in the digital ecosystem is a critical link in the security chain.

For this Eczacıbaşı Life Blog, we spoke with Ersin Uslu, Digital Security and Risk Management Senior Manager at Eczacıbaşı Holding, about how to protect the reliability of information, and about the other new information security risks of the age of artificial intelligence.

Artificial intelligence has radically changed the way information is produced. What does this transformation mean? Why has the distinction between “having information” and “verifying information” become so important today?

Artificial intelligence has made access to information easier than ever before; in other words, it has democratized information. However, this convenience has brought with it a new complexity: a complexity of trust. The main issue is no longer accessing information, but understanding how that information is produced, by whom, and with which data.

While artificial intelligence boosts productivity, it also introduces challenges in terms of accuracy, impartiality, and source reliability. That is why it’s not just the content of information that matters, but also how it’s produced, what kind of data it’s fed, and who validates it.

For us cybersecurity professionals, this represents a new threshold of awareness. We now have to do more than protect systems; we also have to manage the security of information and the accuracy of what artificial intelligence produces.

The widespread use of the internet and the integration of technological devices into everyday life have accelerated access to information, making information both more democratic and more fragile. In a period when trust is debated more than access itself, how would you define the concept of trust?

In the digital age, trust is no longer a default feeling; it’s a verifiable process.

In the past, trust was built through relationships and experience. Today, it’s being redefined through algorithms, data flows, and identity verification mechanisms. The new form of trust is built on transparency, traceability, and accountability.

For us, trust is not merely granting access rights to systems; it’s the ability to sustainably protect the integrity, confidentiality, and availability of data.

So, trust in the digital world is no longer something that is “built once and then forgotten”; it has become a domain of governance that is constantly monitored, measured, and improved.
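To make “verifiable” a little more concrete for technically minded readers: integrity, one of the three properties mentioned above, is typically checked with cryptographic message authentication. The sketch below is a minimal Python illustration of that idea, not a description of any Eczacıbaşı system; the key and message are invented.

```python
import hashlib
import hmac

# Invented shared secret; in practice this would live in a key vault, not in code.
SECRET_KEY = b"example-key-stored-in-a-vault"

def sign(message: bytes) -> str:
    """Produce an HMAC-SHA256 tag so the message's integrity can be verified later."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time; any tampering changes it."""
    return hmac.compare_digest(sign(message), tag)

record = b"payment instruction: transfer 10,000 EUR to account X"
tag = sign(record)
assert verify(record, tag)             # untouched data verifies
assert not verify(record + b"0", tag)  # any modification fails
```

The point of the example is the workflow, not the algorithm: trust here is not asserted once but re-verified on every read, which is what “constantly monitored, measured, and improved” means in practice.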

“The breaking point for organizations will be expanding security from technical controls to being an integral part of their decision-making structure.”

Research and reports published by leading global institutions such as the World Economic Forum (WEF) and Gartner position information integrity and security as priority concerns in the age of artificial intelligence. How should organizations prepare for this era? What are the most critical breaking points? What are the top risks?

The most essential step for organizations is to redefine their digital assets and digital dependencies. Today’s risks emerge not only from technical weaknesses but also from the increasing reliance of business processes on automation and algorithmic systems.

For organizations, the most significant breaking point is where control actually resides. When AI models are integrated into critical processes, it becomes far more difficult to monitor the accuracy of decisions, data processing activities, and operational transactions. This means that preparations must go beyond technical security and include transparent workflows, strong internal audit mechanisms, and clear policies that define the limits of AI usage.

The most serious risks of this new period include misdirected automation, models that can be manipulated, AI-related vulnerabilities in the supply chain, and identity-based attacks that infiltrate operational processes. These risks can lead not only to information loss but also to physical and economic consequences.

One of the most striking examples of these risks is the 2021 Colonial Pipeline ransomware incident. Colonial Pipeline, which operates the largest fuel pipeline system on the East Coast of the United States, was forced to suspend its operations after attackers gained access through a VPN account that lacked multi-factor authentication. The incident showed that a single weakness in digital processes can have widespread effects in the physical world, such as fuel shortages, supply chain disruptions, and flight cancellations.

Clearly, it’s not enough to simply strengthen technology. Preparing for the new era requires a holistic framework that also covers access management, business continuity, supply chain assurance, role and responsibility matrices, operational resilience, and audit capacity.
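For readers who want to see what the missing control looks like in code, below is a minimal sketch of time-based one-time-password (TOTP) verification, the mechanism behind most authenticator apps, using the open-source pyotp library. The account names are invented, and this is an illustration of the technique, not a description of Colonial Pipeline’s systems.

```python
import pyotp

# Invented per-user secret, provisioned once and stored server-side.
user_secret = pyotp.random_base32()

# The user scans this URI into an authenticator app during enrollment.
enrollment_uri = pyotp.TOTP(user_secret).provisioning_uri(
    name="vpn-user@example.com", issuer_name="Example VPN"
)

def vpn_login(password_ok: bool, otp_code: str) -> bool:
    """Grant access only if the password AND the current one-time code are valid."""
    totp = pyotp.TOTP(user_secret)
    # valid_window=1 tolerates one 30-second step of clock drift.
    return password_ok and totp.verify(otp_code, valid_window=1)
```

With a check like this in place, a stolen password alone, the weakness exploited in the Colonial Pipeline case, is no longer enough to log in.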

Looking ahead, what development in the field of information security excites you the most? In which direction is technology moving to provide protection, verification, or trust-building?

The most exciting development is that security is becoming less of a passive wall and more of a living system.

In the past, information security was a defensive line that reacted to threats. Today, thanks to artificial intelligence, automation, and behavioral analytics, it’s becoming a structure that can anticipate risks before they emerge. This shift is not only changing how we protect ourselves, but also fundamentally altering our perspective on security.

Technology is no longer a passive shield; it allows us to build a security ecosystem that constantly learns and adapts. AI-driven threat intelligence and autonomous security operations, in particular, will form the backbone of security in the future.
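As a small illustration of what “anticipating risks” can mean in practice, the sketch below uses scikit-learn’s IsolationForest to learn an employee’s normal login behavior and flag logins that deviate from it. The features and numbers are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented training data: [login hour, MB transferred, failed attempts]
# describing one employee's normal behavior over time.
normal_logins = np.array([
    [9, 120, 0], [10, 95, 0], [8, 150, 1], [11, 80, 0], [9, 110, 0],
    [10, 130, 0], [9, 100, 1], [8, 90, 0], [11, 140, 0], [10, 105, 0],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(normal_logins)

# A 3 a.m. login with a huge transfer and repeated failures stands out.
suspicious = np.array([[3, 5000, 6]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```

Real behavioral-analytics platforms work on far richer signals, but the principle is the same: the model learns what “normal” looks like and raises risks before anyone has written a rule for them.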

We see this transformation as a new security approach that brings together human decision-making power with the learning capacity of technology. In short, the security of the future will not only be based on protection, but also on prediction and learning.

“Employee awareness is actually the strongest security layer for organizations. Because no matter how advanced technology becomes, most attacks still target human behavior.”

Information security is shaped not only by systems but also by people’s behavior. Where does employee awareness stand in this equation?

Employee awareness is actually the strongest security layer for organizations. Because no matter how advanced technology becomes, most attacks still target human behavior.

Cases of deepfakes produced with artificial intelligence make this reality strikingly clear. Manipulated audio and visual content that is almost indistinguishable from reality has become a new form of social engineering that affects not only the public, but also the business world.

Recently, fake payment instructions created by imitating the voice or image of executives have caused million-dollar losses at some companies. Such attacks test people’s awareness and reflexes before technological systems even come into play.
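A common countermeasure here is procedural as much as technical: never act on a high-value instruction received through a single channel, however convincing it sounds. The Python sketch below illustrates such an out-of-band confirmation rule; the threshold, function names, and workflow are hypothetical.

```python
# Illustrative policy: above a threshold, no single channel (a call, a video,
# an email) is enough to release a payment. All names here are hypothetical.

CALLBACK_THRESHOLD_EUR = 10_000  # above this, a second channel is mandatory

def confirm_via_registered_phone(requester_id: str) -> bool:
    """Call back on the number already on file for this person,
    never on a number supplied in the request itself."""
    answer = input(f"Did {requester_id} confirm by callback? [y/n] ")
    return answer.strip().lower() == "y"

def process_payment(requester_id: str, amount_eur: float) -> bool:
    if amount_eur >= CALLBACK_THRESHOLD_EUR and not confirm_via_registered_phone(requester_id):
        return False  # rejected until independently confirmed
    return True  # proceed with the normal payment workflow
```

The rule is deliberately indifferent to how realistic the voice or video was: a deepfake can fool the ear, but it cannot answer a callback on a pre-registered number.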

Security has never been ensured by technological barriers alone. Human intuition is still the most powerful tool for recognizing risks and reacting appropriately. Therefore, training, drills, and experiential programs built around real scenarios play a role at least as important as technical solutions, sometimes even more.

Because the security of the future will depend not only on the resilience of systems but also on people’s ability to question, to doubt when necessary, and to respond correctly.

Recently, identity theft, fake news, digital fraud, and other such threats have become part of everyday life. Why is it so important for individuals to be informed and for social awareness of these issues to grow? What kind of social awareness or collaboration is needed to combat these threats?

Identity theft, fake news, and digital fraud are no longer just cybersecurity problems; they’re also issues of social trust. The spread of false information or the theft of an identity doesn’t affect only one individual; it erodes everyone’s trust in institutions, in the media, and in each other.

Real digital trust is built with strong systems and shared awareness. Individuals’ everyday behaviors are the most concrete reflections of this awareness. For example, when faced with a digital fraud attempt or a cyberattack, it’s very important to see it as more than a personal threat and to report it through the right channels to help others become aware of it as well. Such individual steps can increase social awareness in a chain reaction.

Similarly, institutions and organizations shouldn’t confine their activities to technical measures. Organizing awareness campaigns, informing employees, putting in place legal and administrative mechanisms, and communicating transparently about these issues all play a major role in rebuilding trust.

A secure digital future won’t be achieved solely with advanced technologies, but with individuals who take responsibility when they detect risks, share information about threats, and help build strong social reflexes.