
New Security Protocol Protects Data from Attacks in Cloud Computing

In Short:

MIT researchers have created a new security protocol that preserves the privacy of sensitive data during deep-learning computations on cloud servers. By exploiting the quantum properties of light, it protects both patient information and proprietary model parameters from interception. The method keeps the computation secure while maintaining high accuracy, promising safer AI applications in fields like health care.


Deep-learning models are used across many domains, from health care diagnostics to financial forecasting. But these models are so computationally intensive that they must run on powerful cloud-based servers, which introduces substantial security risks. The risk is especially acute in health care, where hospitals may be reluctant to use AI tools to analyze confidential patient data because of privacy concerns.

In response to these challenges, researchers at MIT have devised a security protocol that harnesses the quantum properties of light, ensuring that data transmitted to and from cloud servers remains secure during deep-learning computations.

The protocol encodes data into the laser light used in fiber-optic communication systems and leans on a fundamental principle of quantum mechanics, making it virtually impossible for attackers to copy or intercept the information without detection.

“Deep learning models like GPT-4 possess unprecedented capabilities, yet they require enormous computational resources. Our protocol allows users to utilize these powerful models without compromising the privacy of their data or the proprietary nature of the models themselves,” stated Kfir Sulimany, an MIT postdoc in the Research Laboratory for Electronics (RLE) and lead author of a recent paper on this security protocol.

Sulimany collaborated on this research with Sri Krishna Vadlamani, an MIT postdoc; Ryan Hamerly, a former postdoc now at NTT Research, Inc.; Prahlad Iyengar, an electrical engineering and computer science (EECS) graduate student; and senior author Dirk Englund, a professor in EECS and principal investigator of both the Quantum Photonics and Artificial Intelligence Group and RLE. Their findings were presented at the Annual Conference on Quantum Cryptography.

A Two-Way Street for Security in Deep Learning

The focus of the researchers’ investigation was a cloud-based computation scenario involving two parties: a client with confidential data, such as medical images, and a central server that manages a deep learning model. The client aims to utilize the model to predict health conditions, such as determining whether a patient has cancer, without disclosing any patient information.

In this arrangement, sensitive data must be transmitted to obtain a prediction, yet patient information must stay secure. The server, for its part, wants to shield its proprietary model, which companies like OpenAI have spent significant time and resources developing.

“Both parties have something they want to hide,” Vadlamani pointed out. In traditional digital computation, a malicious actor could easily copy the data sent between the server and the client. Quantum information, however, cannot be perfectly copied, a property known as the no-cloning theorem, which the researchers exploited in their security protocol.
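
The textbook argument behind the no-cloning theorem is short enough to sketch here. As a worked illustration (standard quantum-information material, not drawn from the paper), suppose a single unitary operation U could copy any unknown quantum state into a blank register:

```latex
% Assume a cloning unitary U exists for arbitrary states |psi> and |phi>:
\[
U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
  = \lvert\psi\rangle \otimes \lvert\psi\rangle,
\qquad
U\bigl(\lvert\phi\rangle \otimes \lvert 0\rangle\bigr)
  = \lvert\phi\rangle \otimes \lvert\phi\rangle.
\]
% Unitary maps preserve inner products, so comparing the two equations gives
\[
\langle\psi\vert\phi\rangle = \langle\psi\vert\phi\rangle^{2}
\quad\Longrightarrow\quad
\langle\psi\vert\phi\rangle \in \{0,\,1\},
\]
% which fails for any pair of distinct, non-orthogonal states. No such U
% exists, so an unknown quantum state cannot be perfectly copied, and any
% attempt to duplicate light carrying information leaves a detectable trace.
```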

The protocol has the server encode the weights of a deep neural network into an optical field using laser light. A neural network, a type of deep-learning model, comprises layers of interconnected nodes, or neurons. The network applies its weights to the inputs through mathematical operations, layer by layer, until the final layer produces a prediction.
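
To make the layer-by-layer picture concrete, here is a minimal NumPy sketch of an ordinary, purely digital forward pass. The layer sizes, ReLU activation, and random parameters are illustrative stand-ins, not details of the MIT system:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(weights, biases, x):
    """Apply each layer's weights in turn; the last layer yields the prediction.

    `weights` and `biases` stand in for the server's proprietary parameters,
    the values the optical protocol is designed to keep hidden from the client.
    """
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)                  # hidden layers: linear map plus nonlinearity
    return weights[-1] @ x + biases[-1]      # final layer produces the raw prediction

# Illustrative three-layer network acting on a 4-dimensional input.
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(forward(weights, biases, rng.standard_normal(4)))
```

In the researchers’ protocol, it is weight matrices like these that the server encodes into an optical field rather than transmitting them as freely copyable digital values.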

The server transmits the network’s weights to the client, which performs operations on them to produce a result from its private data, keeping that data shielded from the server throughout. At the same time, the protocol limits the client to measuring only the result it needs; the quantum nature of light prevents the client from copying the weights.

Once the client has processed the first result, the protocol is designed to cancel out the first layer so that the client cannot learn anything more about the model.

“Instead of measuring all the incoming light from the server, the client only measures what is essential to operate the deep neural network and feed the result into the subsequent layer, thereafter sending the residual light back to the server for security checks,” explained Sulimany.

Because of the no-cloning theorem, the client unavoidably introduces small errors into the residual light when it measures the model’s result. When the server receives that light back, it can measure these errors to determine whether any information about the model has leaked. Notably, the residual light reveals nothing about the client’s data.
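
The guarantee itself comes from quantum measurement physics, but the accounting can be caricatured classically. In the toy sketch below, every function name and number is invented for illustration: an honest client extracts the single number it needs from each pulse and returns nearly all of the “light,” while a client that samples extra components returns measurably less, which the server’s check flags:

```python
import numpy as np

rng = np.random.default_rng(7)
DIM = 128                      # modes per pulse (an arbitrary illustrative size)

def client_measure(pulse, direction, n_extra_peeks=0):
    """Extract only the component of `pulse` along `direction` (the one number
    the client's layer needs) and return it with the leftover field.
    `n_extra_peeks` models a cheating client sampling extra components."""
    needed = pulse @ direction
    residual = pulse - needed * direction      # honest: remove a single component
    for _ in range(n_extra_peeks):             # cheating drains extra energy
        d = rng.standard_normal(DIM)
        d /= np.linalg.norm(d)
        residual = residual - (residual @ d) * d
    return needed, residual

def server_check(sent, residual, threshold=0.8):
    """Compare returned energy with what was sent. An honest client's single
    measurement removes only about 1/DIM of the energy; a large deficit means
    the client sampled more of the field than the protocol allows."""
    return np.linalg.norm(residual) ** 2 / np.linalg.norm(sent) ** 2 > threshold

weights = rng.standard_normal(DIM)             # stand-in for one row of weights
data = rng.standard_normal(DIM)                # stand-in for client activations
data /= np.linalg.norm(data)

for peeks in (0, 60):
    _, residual = client_measure(weights, data, n_extra_peeks=peeks)
    print(f"extra peeks = {peeks:2d} -> passes check: "
          f"{server_check(weights, residual)}")
```

In the real protocol the deficit is not a simple energy budget: by the no-cloning theorem, any measurement beyond the permitted one necessarily disturbs the residual quantum state, and it is that disturbance the server tests for.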

A Practical Protocol

Modern telecommunications systems typically rely on optical fiber to transmit information because it supports enormous bandwidth over long distances. Since this equipment already includes optical lasers, the researchers could implement their security protocol by encoding data into light, with no special hardware required.

Initial tests showed that the protocol secures both the server and the client while allowing the deep neural network to achieve 96 percent accuracy. The small amount of model information that leaks while the client operates amounts to less than 10 percent of what an adversary would need to recover the hidden parameters. In the other direction, a malicious server could obtain only about 1 percent of the information it would need to steal the client’s data.

“Our protocol guarantees security in both directions — from the client to the server and from the server back to the client,” emphasized Sulimany.

Englund remarked, “Years ago, when we developed our demonstration of distributed machine learning inference between MIT’s main campus and MIT Lincoln Laboratory, I realized that we could innovate to provide physical-layer security, building on years of research in quantum cryptography that had previously been validated on that testbed. However, overcoming various theoretical challenges was essential to determine whether privacy-guaranteed distributed machine learning could be achieved. The integration of Kfir into our team was crucial, as he possesses a unique understanding of both experimental and theoretical aspects essential for developing the unified framework supporting this work.”

Looking ahead, the researchers aim to explore applications of this protocol in federated learning, where multiple parties use their own data to train a central deep-learning model without pooling that data. They also hope to investigate its potential in quantum operations, which may offer advantages in both accuracy and security.

Eleni Diamanti, a CNRS research director at Sorbonne University in Paris, who was not involved in the study, noted that the project combines techniques from fields that rarely meet, in particular deep learning and quantum key distribution, using the latter to add a security layer to the former while pointing toward a realistic implementation. Such an approach, she observed, could significantly enhance privacy in distributed architectures, and she looks forward to seeing how the protocol performs under experimental imperfections and in practical deployment.

This research was partially supported by the Israeli Council for Higher Education and the Zuckerman STEM Leadership Program.
