Reducing AI's Vulnerable Attack Surface with Edge Computing
Article #1 of the "Why Edge?" Series. Edge AI presents a multitude of security benefits across industries.
This is the first of a series of articles exploring the benefits of Edge AI for a variety of applications.
The proliferation of digital transactions, smartphones, cyber-physical systems, and other internet-connected objects in modern enterprises has led to the generation of unprecedented volumes of data. Companies are increasingly taking advantage of this data to improve the efficacy of their business processes and make better decisions via analytics. Machine learning (ML) and Artificial Intelligence (AI) algorithms play an instrumental role in collecting and analyzing enterprise data in fast, automated, and cost-efficient ways.
Most enterprise-scale ML/AI systems are deployed in conjunction with cloud computing infrastructures. This is because cloud computing eases the storage and management of large datasets, as well as the processing of many data points. Nevertheless, cloud-based AI systems are not the best choice when it comes to processing data close to the field and providing real-time performance. Moreover, cloud systems are vulnerable to significant cyber-security threats, as large volumes of data are transferred from different field systems to a single cloud infrastructure.
The edge computing paradigm alleviates the latency and security limitations of cloud computing. Edge computing brings data collection and decision-making closer to the field where data are produced. In this way, it also moves computing power close to the data sources, which reduces latency. Moreover, it limits the amount of data that are transferred from the field to the cloud, which is foundational for reducing data breaches and strengthening data protection.
Reduced latency and increased security are the reasons behind the ongoing shift of AI/ML applications from the cloud to the edge. In recent years, edge AI deployments have been gaining traction across various verticals. For instance, connected and autonomous vehicles leverage edge AI to reduce latency, as split-second decision-making can have a huge impact on passenger safety. Similarly, across the manufacturing sector, companies are taking advantage of ML algorithms at the edge to provide near real-time detection of machine failures and defective products.
The Security Benefits of Edge AI
In general, state-of-the-art AI/ML applications need strong security for a variety of reasons, including:
- Their data intensive nature, which makes them susceptible to costly breaches. This can be critical especially in applications that handle sensitive information such as healthcare and personal data management.
- Machine learning itself has security vulnerabilities due to its reliance on suitable training data for proper operation. For example, in instances known as poisoning attacks, adversaries compromise the operation of an ML system by contaminating its training data with malicious datasets. In another example, evasion attacks confuse ML/AI systems by providing them with adversarial examples, i.e., perturbed malicious inputs that look like untampered copies, yet side-track the operation of the ML algorithms.
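To make the evasion attack above concrete, here is a minimal illustrative sketch (not from the article) using a toy linear classifier: a small, targeted perturbation of the input, invisible in magnitude, is enough to flip the model's decision. The weights and input values are hypothetical, chosen only for illustration.

```python
import numpy as np

def predict(w, b, x):
    """Toy linear classifier: returns 1 if w.x + b > 0, else 0."""
    return int(np.dot(w, x) + b > 0)

def evasion_perturb(w, x, eps):
    """FGSM-style evasion step: nudge each feature of x by eps in the
    direction that lowers the classifier's score w.x + b."""
    return x - eps * np.sign(w)

# Hypothetical weights and input, for illustration only.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.4, -0.2, 0.3])   # benign input, classified as 1

x_adv = evasion_perturb(w, x, eps=0.5)

print(predict(w, b, x))      # prints 1 (original prediction)
print(predict(w, b, x_adv))  # prints 0 (flipped by a small perturbation)
```

Real evasion attacks apply the same idea to deep networks, using the gradient of the loss with respect to the input instead of the raw weight vector.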
Edge computing is recognized for its ability to provide stronger security and data protection than conventional cloud deployments. Specifically, edge AI provides the following security benefits over cloud AI deployments:
- Operation based on safer (local) data. In edge AI, data need not travel over a wide area network, which reduces the possibility of cybersecurity attacks. AI algorithms operate based on local data, which reside within edge servers or on the device itself. This makes them much more difficult to tamper with than data that are transferred across networks.
- Almost impossible for a malicious party to access all data. Edge AI datasets are distributed across various on-premises data centers or devices, which means that there is no single point of vulnerability. Edge AI systems use distributed data that reside at the edge of the network, which makes it practically impossible for an adversary to gain access to all the data by attacking one point of the network.
- Far fewer breach points. Edge AI deployments drastically limit the number of connections and the number of data transfers from the edge to the cloud. This means that they provide far fewer breach points than conventional cloud AI deployments. Moreover, the smaller number of connections facilitates the deployment of a set of carefully encrypted connections between the edge devices and the cloud data centers. In this way, the vulnerable attack surface of edge AI applications is significantly reduced.
- Flexible and cost-effective regulatory compliance. Edge AI facilitates enterprises in their efforts to comply with mandatory privacy and data protection regulations such as the GDPR (General Data Protection Regulation) in Europe. In particular, edge computing enables end-users and data providers to control their personal data rather than relying on the services of a vendor or infrastructure provider. Moreover, it enables the implementation of decentralized security policies that tend to be more resilient. In this way, the risk of non-compliance for AI/ML services providers is significantly reduced.
Increasing Resilience with Different Edge AI Deployment Paradigms
Edge AI applications are usually deployed in conjunction with cloud infrastructures, as some algorithms run in the cloud, where data points from various distributed sources and edge computing processes are aggregated. Moreover, edge AI applications come in varying deployment configurations based on the placement of ML/AI processes. For instance, AI analytics functions can run inside a microsystem (e.g., a sensor), within an edge computing cluster or gateway, or even in the cloud.
Different ML paradigms enable a wide set of edge AI deployment configurations. These ML paradigms include:
- Embedded Machine Learning, which means executing ML algorithms within embedded devices. This paradigm enables data processing and data analytics within devices embedded with powerful processors ranging from CPUs and GPUs to native neural AI processing chips. Set-top boxes, smartphones, connected cars' OBUs (On-Board Units), and many other types of fog nodes and internet-connected devices make use of this paradigm.
- TinyML, which refers to the execution of full-stack machine learning systems (e.g., deep neural networks) within very small processors such as microcontrollers and AI accelerators, amongst others. In most cases, TinyML systems serve applications that do not require complex computations. TinyML delivers the security benefits of edge AI to the highest possible degree, because data are processed within the data source itself and there is no need for data I/O (input/output) operations.
- Federated Machine Learning, a novel ML technique that trains an algorithm across multiple decentralized edge devices. This technique is very powerful as it offers the benefits of a consolidated data lake while allowing data owners to keep full control of their data locally, offering great security and performance.
In non-trivial edge AI applications, it is possible to combine more than one of the above paradigms in a solution configuration. This provides increased versatility in specifying deployment configurations that meet stringent security and data protection needs.
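The federated learning paradigm above can be sketched in a few lines. This is a minimal, self-contained illustration of federated averaging (FedAvg) on a toy linear-regression task: each simulated device trains locally on its own data and shares only its model weights, so raw data never leaves the device. The data, learning rate, and round counts are all hypothetical choices for the sketch.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=50):
    """One device's local training: gradient descent on squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(weights, sizes):
    """Server step: weighted average of device models (no raw data seen)."""
    total = sum(sizes)
    return sum(n / total * w for w, n in zip(weights, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Hypothetical per-device datasets, each drawn from the same process.
devices = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=40)
    devices.append((X, y))

w_global = np.zeros(2)
for _ in range(5):  # five federated rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in devices]
    w_global = fed_avg(local_ws, [len(y) for _, y in devices])

print(w_global)  # converges close to the true weights [2., -1.]
```

The key property for security is visible in `fed_avg`: the server only ever sees model parameters, never the per-device datasets.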
Case Study: Security and Data Protection for Healthcare Applications
The future of healthcare relies on the collection and processing of massive amounts of data from patients, including for example, data from medical and consumer devices (e.g., smart watches), clinical data, laboratory data, diagnostic devices (e.g., medical imaging), genomic databases, data about the patient’s medical history, and many more.
The analysis of this data through AI/ML algorithms is already driving significant improvements in many healthcare sectors, including prevention, diagnosis, prognosis, and treatment. For example, the analysis of clinical, genomic, imaging, and lifestyle data provides cardiologists with predictive insights on risk factors for various cardiovascular diseases. However, the regulatory approval and consumer adoption of AI/ML-enhanced services hinge on strong security and data protection measures.
Many state-of-the-art ML applications in the healthcare domain rely on cloud infrastructures, which creates considerable risks for data breaches and other cyber attacks. This is because large data volumes are transferred from the patients to the cloud, which makes it easier for malicious parties to tamper with healthcare data. In this context, the deployment of ML systems and algorithms at the edge of the network is becoming a good practice that alleviates such security concerns.
In an edge AI deployment, patient data is initially processed locally within edge devices like IoT (Internet of Things) gateways. Hence, patient data do not travel outside the network perimeter of the hospital or home care infrastructure. Nevertheless, select AI-based insights can be properly encrypted and transferred to the cloud infrastructure to enable further processing for statistical or clinical purposes (e.g., clinical trials). For example, if a patient's respiratory sounds are to be analyzed, instead of transferring the entire recording to the cloud, edge AI may be used to infer significant events: number of coughs, sneezes, wheezes, and so on. In this way, the volume of data transferred to the cloud is reduced, lowering bandwidth requirements, while the patient's data, which may include conversations, remains protected on premises or within the device.
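The respiratory-sounds example can be illustrated with a simple sketch. This hypothetical snippet stands in for an on-device sound-event model: it counts loud bursts ("coughs") in a simulated recording via thresholding and transmits only a tiny JSON summary instead of the raw audio. The signal, threshold, and event logic are illustrative; a real deployment would run a trained classifier on the device.

```python
import json
import numpy as np

def count_events(signal, threshold=0.5, min_gap=100):
    """Count bursts where the signal's amplitude crosses a threshold.

    A stand-in for an on-device ML sound-event classifier; min_gap
    (in samples) suppresses double counting within a single burst.
    """
    events, last = 0, -min_gap
    for i in np.flatnonzero(np.abs(signal) > threshold):
        if i - last >= min_gap:
            events += 1
        last = i
    return events

# Simulated 10 s recording at 1 kHz with three loud bursts ("coughs").
rng = np.random.default_rng(1)
signal = rng.normal(scale=0.05, size=10_000)
for start in (2_000, 5_000, 8_000):
    signal[start:start + 50] += 1.0

# Only this tiny summary leaves the device, not the raw recording.
summary = json.dumps({"cough_count": count_events(signal)})
print(summary)
print(len(summary), "bytes sent vs", signal.nbytes, "bytes of raw audio")
```

The size comparison at the end makes the dual benefit explicit: far less bandwidth consumed, and no raw audio (which may contain conversations) ever leaving the premises.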
Healthcare data processing infrastructures must be compliant with applicable regulations like HIPAA (the Health Insurance Portability and Accountability Act) in the US and GDPR in Europe. This can be quite challenging given that patient information comprises personally identifiable data. Fortunately, edge AI can help reduce the risks of data tampering and boost regulatory compliance. Most importantly, patients and healthcare providers can choose when to share their data and for what purpose. This eases the implementation of informed consent processes that are fundamental for compliance with data protection regulations.
Conclusion
Edge computing can provide stronger security and data protection than conventional cloud deployments, making it the right choice for security-sensitive applications such as IP protection, healthcare, and many more. This is the first in a series of articles exploring the benefits of Edge AI across applications, industries, and products.
The first article discussed reducing AI's vulnerable attack surface with edge computing.
The second article discussed Edge AI in wearables.
The third article explored how edge AI is enabling cutting-edge advances in sustainability.
The fourth article explained why Edge AI is a win for automotive.
The fifth article analyzed Computer Vision on compute-constrained embedded devices.
The sixth article explained why edge AI is essential for EV battery management.
Security with Syntiant
Syntiant combines advanced silicon solutions and deep learning models to provide ultra-low-power, high-performance deep neural network processing for edge AI applications across a wide range of consumer and industrial use cases, from earbuds to automobiles. The company's Neural Decision Processors (NDPs) are optimally designed to deploy deep learning models at the edge, where power and area are often constrained. Syntiant's NDP solutions can equip every device, from earbuds and doorbells to automobiles and healthcare wearables, with powerful deep learning capabilities, enabling real-time data processing and decision-making with near-zero latency, and delivering secure and private artificial intelligence solutions.