
REPORT

CHAPTER 5

Trust and Security in the AI Era

Charting progress and opportunities in a transforming world.


5.1 Understanding AI’s Unique Security and Trust Challenges

AI will soon become an integral element of most digital infrastructures and applications, providing them with a layer of value-added intelligence. As such, it will increasingly play a key role in automating complex tasks and supporting critical decisions. Therefore, addressing security and trust challenges is key to responsible AI adoption. In practice, it turns out that AI systems cannot be widely adopted and used unless humans trust their operation. 

AI systems fall within the broader class of digital systems, which require strong cybersecurity for robust and trustworthy operation. Nevertheless, AI systems also come with their own unique security and trust challenges, including:

  • Complex structure and operations: Unlike traditional software, AI models are inherently complex and often function as "black boxes," which makes their decision-making processes opaque. For example, most applications based on complex large language models (e.g., ChatGPT) produce outputs and reason over data in ways that are not fully transparent and understandable to human users. This lack of transparency poses significant risks, especially in high-stakes applications like healthcare or autonomous vehicles. Furthermore, users may struggle to trust systems they cannot fully understand or audit, which raises barriers to AI adoption.

  • Data protection issues due to dependency on large datasets: AI systems rely heavily on vast amounts of data for their training and operation. This dependency creates vulnerabilities, such as data breaches, unauthorized access, or data poisoning attacks. Due to these vulnerabilities, malicious actors are likely to manipulate training datasets to corrupt AI outputs. It is, therefore, very challenging to ensure the integrity and security of these datasets.

  • Ethical concerns in decision making: In many cases, AI systems operate in a biased way due to biases in their training data. Biased operations can lead to unfair or discriminatory outcomes. At the same time, the lack of accountability for decisions made by AI raises ethical concerns. For instance, it is not clear who is responsible when an autonomous system makes a mistake. These issues can severely erode public trust in AI technologies.

Given the above-listed challenges, there is a need for robust frameworks for AI security and trust. Such frameworks must foster AI transparency (e.g., based on AI explainability and AI interpretability techniques), while at the same time ensuring data security through advanced encryption methods and secure data-sharing protocols that can protect sensitive information from breaches or misuse. Most importantly, they should promote ethical standards based on clear guidelines for fairness, accountability, and inclusivity that can mitigate biases and enhance public confidence in AI. The development of such frameworks is a multistakeholder challenge involving policymakers, technologists, and industry leaders, who must collaborate to develop strategies that address these challenges in ways that promote ethical AI use without hindering innovation.

 

5.2 AI Security Solutions and Risks

5.2.1 Common Risk Factors

The above-listed security challenges of AI systems reflect a considerable number of AI-specific security risks, which must be addressed by modern security policies. Such risks include:

  • Adversarial attacks (e.g., evasion attacks): These attacks manipulate input data to deceive AI models into producing incorrect outputs or to compromise their integrity (a minimal example is sketched after this list). For example, attackers can craft subtle changes to images or text in ways that cause misclassification in AI systems. Such attacks can compromise the reliability of critical applications such as autonomous driving and medical diagnostics.

  • Data poisoning: Poisoning attacks take place during the training phase of AI systems. They involve attackers who inject malicious data into datasets to corrupt the model's learning process. This can result in biased or harmful outputs, which can in turn lead to wrong recommendations in decision-making processes like loan approvals.

  • Model theft and reverse engineering: Proprietary AI models are also vulnerable to theft through unauthorized access or reverse engineering. In such cases, attackers replicate models by analyzing input-output patterns. This can lead to intellectual property theft.

  • Privacy breaches: Many AI systems process sensitive data, which makes them targets for data breaches. Adversarial techniques like membership inference attacks allow attackers to determine whether specific data was part of a model’s training set, directly violating users’ privacy.

  • AI-powered cyberattacks: Nowadays, threat actors increasingly use AI to scale and optimize cyberattacks, such as phishing, ransomware, and denial-of-service (DoS) attacks. Over the last couple of years, generative AI (GenAI) tools have significantly enhanced the sophistication of these attacks, owing to GenAI’s ability to mimic human behavior and to create realistic fake content. As a prominent example, hackers now have access to tools like FraudGPT and WormGPT, available via the Dark Web, which are used to create and launch cyberattacks.
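The evasion attacks described in the first bullet can be surprisingly simple to mount. The following is a minimal, illustrative sketch of the Fast Gradient Sign Method (FGSM), a classic evasion technique. It assumes a differentiable PyTorch image classifier; all names (`model`, `image`, `label`) are hypothetical placeholders rather than components described in this report.

```python
# Illustrative FGSM evasion attack (assumes PyTorch is installed and `model`
# is a differentiable classifier returning logits; names are hypothetical).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Craft a small input perturbation that pushes the model toward misclassification."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon,
    # and keep pixel values in the valid [0, 1] range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Even a perturbation small enough to be imperceptible to humans can be sufficient to flip a model's prediction, which is why adversarial robustness is treated as a first-class security property in the mitigations discussed next.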

5.2.2 Mitigating AI Security Risks

Fortunately, AI vendors and security experts have access to effective solutions that can mitigate AI security risks. These solutions include:

  • Adversarial defense mechanisms: These mechanisms hinge on adversarial training processes, which expose models to deceptive inputs during development (a minimal sketch follows this list). Moreover, anomaly detection algorithms and input validation checks can be used to identify and mitigate adversarial attempts.

  • Data protection measures: It is also possible to ensure data integrity based on encryption for data at rest and in transit. In this direction, AI vendors and security solution providers can employ differential privacy techniques to anonymize datasets, while preserving their utility for training.

  • Robust model security: To ensure the robustness of AI models, security professionals can employ techniques like access controls, secure deployment pipelines, and regular updates. They can also conduct penetration testing and vulnerability assessments, which helps them identify weaknesses proactively.

  • Continuous monitoring: Another security measure involves the deployment of real-time monitoring tools to detect anomalous behavior in AI systems. In this direction, AI developers and deployers must conduct regular audits to ensure compliance with security standards and to help refine models over time.

  • AI-driven cybersecurity tools: AI is not just introducing new cybersecurity risks; it is also a powerful tool for implementing security controls. In particular, AI techniques can be leveraged for threat detection. AI-based functionalities are typically integrated within conventional data-driven cybersecurity tools, for example Security Information and Event Management (SIEM) systems, to enhance their threat detection and mitigation capabilities.

  • AI deployment at the edge: In recent years, AI models and systems have increasingly been deployed at the edge (i.e., edge AI systems) rather than within a cloud computing infrastructure. For example, TinyML systems are deployed within IoT devices. Edge AI systems limit data transfer to remote datacenters, which can significantly reduce the attack surface of an AI deployment; this is key to alleviating vulnerabilities and reducing the likelihood of data breaches. The real-world security benefits of edge AI processing can be seen in applications like intelligent camera systems for monitoring elderly individuals in healthcare settings: processing image and scene recognition directly on the device keeps sensitive visual data local and eliminates the risks associated with transmitting it to third parties for processing.
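To make the adversarial defense mechanisms in the first bullet more concrete, the sketch below shows one common form of adversarial training: each training batch is augmented with FGSM-perturbed copies so the model learns to resist them. This is a minimal illustration assuming a PyTorch classifier, an optimizer, and image batches; all names are hypothetical placeholders.

```python
# Minimal adversarial-training step (assumes PyTorch; `model`, `optimizer`,
# `images`, and `labels` are hypothetical placeholders).
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # 1. Craft adversarial versions of the batch with a single FGSM step.
    images_adv = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images_adv), labels).backward()
    images_adv = (images_adv + epsilon * images_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # 2. Train on the clean and adversarial examples together.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels) + F.cross_entropy(model(images_adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Calling this step once per batch in an ordinary training loop yields a model that has already "seen" deceptive inputs during development, which is the core idea behind adversarial training.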

5.2.3 Hardware-Based Security Solutions

While many AI security solutions focus on software-level protections, hardware-based security features provide a fundamental layer of defense that is crucial for protecting AI systems. Modern processor architectures incorporate several key security technologies that help safeguard AI workloads:

  • Memory safety technologies: Memory safety issues account for approximately 70% of all serious security vulnerabilities in computing systems, including AI applications. To address this, advanced processor architectures like Armv9 incorporate Memory Tagging Extension (MTE), which enables dynamic identification of both spatial and temporal memory safety issues. This technology is particularly important for AI-based systems that process large amounts of data and complex model architectures. For example, Google has adopted Arm MTE in Android and committed to supporting MTE across the entire Android stack, providing enhanced security for millions of AI-enabled devices.

  • Secure virtualization: As AI workloads increasingly run in virtualized environments, protecting the confidentiality and integrity of data becomes crucial. Realm Management Extensions, which form the basis of the Arm Confidential Compute Architecture, provide hardware-enforced isolation that secures data running in virtual machines from potential hypervisor compromises. This is especially critical in datacenters used for training advanced ML models, where multiple tenants may share computing resources. The same technology also helps secure edge computing systems where trained ML models are deployed.

  • Protection against code reuse attacks: Modern AI systems face sophisticated attacks that can repurpose existing code for malicious purposes. Technologies like Arm Pointer Authentication (PAC) and Branch Target Identification (BTI) provide robust protection against code reuse attacks such as return-oriented programming (ROP) and jump-oriented programming (JOP). These protections are particularly important as attackers increasingly use AI tools to develop more sophisticated attack methods. These security features are being deployed across both high-performance application processors and microcontrollers used in IoT devices.

  • Standardized security framework: Beyond individual security features, industry-led security frameworks like PSA Certified provide a comprehensive approach to device security. This framework establishes security best practices and certification processes that help ensure AI-enabled devices meet robust security standards from the silicon level up. By adhering to these standards, manufacturers can demonstrate their commitment to security while providing customers with verifiable security assurances.

These hardware-based security solutions complement software-level protections to create a comprehensive security architecture for AI systems. As AI workloads become more prevalent across different computing environments, from cloud datacenters to edge devices, the role of hardware security becomes increasingly critical in protecting sensitive AI models and data.

5.3 Data Protection in AI Applications

5.3.1 Data Protection Risks

As already outlined, AI systems require large, high-quality datasets to function properly and efficiently. In several cases, these datasets include personal or sensitive information, which introduces significant risks. For instance, sensitive data (e.g., health records and financial information) can be exposed during storage, transmission, or processing. This can lead to privacy breaches and potential misuse. Overall, AI systems face several key challenges when it comes to safeguarding data:

  • Data breaches: Large datasets are attractive targets for cyberattacks. Unauthorized access can compromise personal information, which can subsequently lead to identity theft or financial fraud.

  • Lack of transparency: In many cases, AI users lack information and clarity about how AI data are collected, stored, and used. 

  • Bias and misuse: Improper handling of data can perpetuate biases or lead to discriminatory outcomes. This is a setback to ensuring ethical AI deployments.

5.3.2 Solutions for Enhancing Data Protection

The following technologies, strategies, and measures can be used to address the above-listed data protection challenges:

  • Encryption techniques: AI vendors and system developers can employ advanced data encryption methods to safeguard sensitive data against breaches. For instance, techniques like homomorphic encryption allow computations on encrypted data without decryption, which boosts privacy and data protection. As another example, differential privacy techniques add statistical noise to datasets in order to protect individual identities while maintaining analytical utility (a minimal sketch of this follows the list).

  • Secure multiparty computation (SMPC): SMPC enables collaborative model training without exposing raw data. With this technique, each party processes encrypted inputs locally, which ensures that sensitive information remains secure.

  • Regulatory compliance: AI applications are subject to stringent data protection regulations, which safeguard data privacy. As a prominent example, frameworks like the European General Data Protection Regulation (GDPR) help ensure that data is collected, processed, and stored ethically. In this direction, AI data processors and AI system deployers can employ techniques like data minimization and anonymization. Furthermore, they can also conduct Data Protection Impact Assessments (DPIAs) in order to identify how they can stay compliant with applicable laws and regulations.
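As a concrete illustration of the differential privacy idea mentioned in the encryption bullet above, the following sketch applies the Laplace mechanism to a simple aggregate query. It is a minimal example, not a production recipe: the dataset, sensitivity, and privacy budget (epsilon) are illustrative assumptions.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# `sensitivity` is the most one individual's record can change the query result;
# `epsilon` is the privacy budget (smaller = stronger privacy, noisier answers).
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a noisy query answer satisfying epsilon-differential privacy."""
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical example: release the mean age of 1,000 people with ages in [0, 100].
ages = np.random.randint(0, 100, size=1000)
sensitivity = 100 / len(ages)  # one record can shift the mean by at most 0.1
noisy_mean = laplace_mechanism(ages.mean(), sensitivity, epsilon=0.5)
print(f"true mean: {ages.mean():.2f}, privatized mean: {noisy_mean:.2f}")
```

The released value remains useful for analysis while bounding how much any single individual's record can be inferred from it, which is exactly the privacy-utility trade-off discussed above.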

5.4 Challenges and Solutions for Trustworthy AI Systems

5.4.1 Trustworthiness Challenges

AI systems face significant trustworthiness challenges that typically stem from their lack of transparency, susceptibility to bias, and difficulties in verifying the reliability of their outputs. These issues are especially critical in high-stakes AI applications, such as healthcare, finance, and criminal justice. More specifically, the main AI trustworthiness challenges are as follows:

  • AI black boxes: Many AI systems operate as "black boxes," meaning that their decision-making processes are not transparent to users, and in many cases not even to their developers. This lack of interpretability makes it difficult to understand how models arrive at specific conclusions, which hinders accountability and trust. For example, an AI system for medical screening might suggest a specific diagnosis, yet it can hardly explain the rationale behind its decision.

  • Algorithmic bias: Bias is a pervasive issue in AI, often introduced through skewed or limited training data, which can result in discriminatory outcomes such as denying loans to certain demographic groups or unfair hiring practices. Even with efforts to mitigate it, algorithmic bias can persist due to the inherent complexities of real-world data and the varying definitions of fairness across contexts.

  • Reliability verification: Ensuring consistent and reliable outputs from AI systems is not a trivial task. Models may perform well during testing but fail in real-world scenarios due to unforeseen variables or data shifts. This unpredictability undermines user confidence and raises concerns about the robustness of AI applications.

5.4.2 Strategies to Enhance Trustworthiness

The following measures and techniques can nowadays be employed to safeguard AI trustworthiness:

  • Security by design: A security-by-design approach integrates security considerations from the hardware level up. This approach involves building security features directly into the processor architecture and silicon, rather than trying to add security measures after deployment. For example, hardware-based security features can help provide robust protection against memory-related vulnerabilities, secure the execution of AI workloads in virtualized environments, and protect against sophisticated code manipulation attacks.

  • Explainable AI (XAI): XAI is a research direction in AI that aims at making the operations and outcomes of AI models transparent and understandable to users. For instance, techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) are nowadays widely used to explain the internal operations of ML models, which fosters trust in AI systems (a minimal SHAP sketch follows this list). As another example, counterfactual techniques provide “what-if” insights into how AI model inputs influence outputs, one more approach to reinforcing trust in the operations of AI systems.

  • Bias detection and mitigation: It is also possible to address bias issues proactively, i.e., during data preparation and model training. In this direction, fairness-aware algorithms, diverse data sourcing, and regular audits can be employed to identify and reduce biases. Furthermore, organizations are prompted to adhere to standards and regulations like the European Union’s AI Act, which includes mandates for bias detection and mitigation in AI systems.

  • Rigorous testing and validation: Systematic testing across diverse scenarios is one more effective way of verifying the reliability of an AI system. This includes stress-testing models under varying conditions, monitoring them for performance drift over time, and conducting independent audits to ensure adherence to ethical guidelines.

  • Independent security certification: Independent certification frameworks play a crucial role in verifying and validating security implementations. They provide structured evaluation processes that test products against defined security requirements and help build trust by offering independent verification of security claims. They also support compliance with evolving security standards and regulations.
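To ground the XAI strategy above, the following sketch shows how per-decision feature attributions can be produced with SHAP for a tree-based classifier. It is a minimal illustration that assumes the open-source `shap` and `scikit-learn` packages are installed; the dataset and model are placeholders, not systems discussed in this report.

```python
# Minimal SHAP explanation sketch for a tabular classifier (illustrative only).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# TreeExplainer estimates Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:10])

# Each row attributes a prediction to individual input features, giving users
# and auditors a per-decision explanation they can inspect.
```

Attributions like these can be surfaced alongside a model's output so that users understand which inputs drove a specific decision, directly addressing the black-box concern raised earlier in this section.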

5.5 Evaluation Metrics for Security and Trust

To evaluate the security and trustworthiness of AI systems, organizations require robust metrics that address key vulnerabilities and ethical concerns. Such metrics help ensure that AI systems are reliable, fair, and privacy-preserving where required, and they provide a framework for assessing model performance under various conditions as well as adherence to ethical standards.

Some of the most prominent security and trust metrics for AI systems include:

  • Robustness against adversarial attacks: Adversarial robustness measures an AI model's ability to withstand maliciously crafted inputs designed to deceive it. Relevant metrics for AI resilience include: (i) attack success rate; (ii) robustness radius, i.e., the maximum perturbation a model can tolerate; and (iii) adversarial accuracy, i.e., the percentage of correct predictions on adversarial examples. A high robustness score indicates that the model is less susceptible to adversarial manipulation, which is particularly important for applications in security-sensitive domains like finance or healthcare. Two of these metrics are illustrated in the sketch after this list.

  • Resilience to data poisoning: Data poisoning resilience evaluates how well a model performs when its training data is tampered with. Relevant metrics include accuracy degradation under attack and the detection rate of poisoned samples. Models with strong defenses (e.g., anomaly detection, robust training) demonstrate minimal performance loss even when exposed to malicious data.

  • Interpretability scores: Many explainable AI (XAI) metrics assess how well a model's decisions can be understood by humans. In several cases, these measures are combined with user satisfaction indicators, which also gauge the user-friendliness of the AI explanations.

  • Fairness metrics: Fairness metrics like demographic parity and equal opportunity measure whether AI systems treat all demographic groups equitably. These metrics help identify and mitigate biases while helping to ensure that models do not perpetuate discrimination.

  • Differential privacy guarantees: Differential privacy quantifies the privacy risk associated with individual data points in a dataset. It comprises a “privacy budget” parameter (epsilon) that governs the trade-off between privacy and utility; lower values indicate stronger privacy guarantees.

  • Data leakage risk assessments: These metrics evaluate the likelihood of sensitive information being inferred from model outputs or training datasets, helping ensure that private data remains secure during use.
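To make the first of these metric families concrete, the sketch below computes attack success rate and adversarial accuracy from a model's predictions on clean and adversarially perturbed inputs. The label arrays are illustrative placeholders; any evaluation harness that produces predicted and true labels could feed these functions.

```python
# Minimal sketch of adversarial-robustness metrics (illustrative inputs).
import numpy as np

def adversarial_accuracy(y_true, y_pred_adv):
    """Share of adversarial examples the model still classifies correctly."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred_adv)))

def attack_success_rate(y_true, y_pred_clean, y_pred_adv):
    """Among inputs classified correctly before the attack, the share the attack flips."""
    y_true, y_pred_clean, y_pred_adv = map(np.asarray, (y_true, y_pred_clean, y_pred_adv))
    correct_before = y_pred_clean == y_true
    flipped = correct_before & (y_pred_adv != y_true)
    return float(flipped.sum() / max(correct_before.sum(), 1))

# Hypothetical evaluation of 5 samples:
y_true = [0, 1, 1, 0, 1]
y_clean = [0, 1, 1, 0, 0]   # 4 of 5 correct before the attack
y_adv = [1, 1, 0, 0, 0]     # 2 of those 4 are flipped by the attack
print(adversarial_accuracy(y_true, y_adv))          # 0.4
print(attack_success_rate(y_true, y_clean, y_adv))  # 0.5
```

Tracking such metrics over time, alongside the fairness and privacy measures listed above, gives organizations a quantitative view of how security and trust evolve across model versions.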

Conclusion

Overall, AI system vendors and deployers must address the challenges of security, trust, and privacy. As AI systems become increasingly integrated into critical aspects of society, ensuring their robustness, fairness, and ethical use is essential for fostering public confidence and maximizing their benefits.

Interdisciplinary collaboration plays a pivotal role when it comes to addressing AI security and trust challenges in production environments. Technologists must work alongside policymakers, ethicists, and industry leaders to establish robust frameworks that balance innovation with accountability. At the same time, there is a need for continuous advancements in security measures (e.g. XAI techniques, privacy-preserving technologies, and fairness-aware algorithms) to keep pace with evolving threats and societal expectations. As a call to action, stakeholders across sectors must prioritize trust and security as foundational principles for sustainable AI development. 

These values must be embedded into the design, implementation, and governance of AI systems to help ensure that emerging AI technologies serve as a force for good. The future of AI must empower individuals, drive innovation, and address global challenges, while at the same time safeguarding ethical standards. The path forward requires collective effort, but the rewards of trustworthy and secure AI are certainly well worth the investment.
