The rapid development of generative AI has created unprecedented challenges in privacy and security, triggering urgent calls for regulatory intervention.

Last week, I had the opportunity to discuss the security-related implications of AI with a number of Members of Congress and their staff in Washington, DC.

Today’s generative AI reminds me of the internet of the late 1980s: basic research, latent potential, and academic use, but not yet ready for the public. This time around, unfettered vendor ambitions, fueled by minor-league venture capital and emboldened by Twitter echo chambers, are rapidly advancing the “brave new world” of AI.

The so-called "public" base model is flawed and unsuitable for consumer and commercial use; privacy abstractions, where they exist at all, leak like a sieve; the security fabric is very much a work in progress, as attack surfaces and threat vectors are still being understood; and as for the illusory guardrails, the less said the better.

So how did we get here? What happened to security and privacy?

The "compromised" base model

The so-called “open” models are not open at all. Vendors advertise varying degrees of openness by providing access to model weights, documentation, or tests. Even so, none of the major vendors provide anything close to the training datasets, or the inventory and lineage of that data, that would be needed to replicate and reproduce their models.

This opacity around training datasets means that if you wish to use one or more of these models, you as a consumer or organization have no way to verify or confirm the extent of data contamination, whether with respect to intellectual property and copyright or to potentially illegal content.

Crucially, without a manifest of the training dataset, there is no way to verify or confirm the absence of malicious content. Malicious actors, including state-sponsored ones, plant Trojan content on the web that, if ingested during model training, produces unpredictable and potentially malicious side effects at inference time.
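To make that concrete: if a vendor did publish a training-data manifest, even a simple screening of it against known-bad content would become possible. The sketch below is purely illustrative; the manifest format, the blocklist, and the sample documents are assumptions, not any vendor’s actual interface.

```python
# Hypothetical sketch: a manifest of ingested documents (source URL plus
# content hash) would let a consumer screen training data against content
# known to carry Trojan or trigger payloads. Without a manifest, this check
# is impossible.
import hashlib

def sha256_of(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Assumed manifest format: one record per ingested document.
manifest = [
    {"url": "https://example.org/benign-article",
     "sha256": sha256_of("a benign article")},
    {"url": "https://example.net/planted-page",
     "sha256": sha256_of("text carrying a hidden trigger payload")},
]

# Assumed blocklist of content hashes flagged by a threat-intelligence feed.
blocklist = {sha256_of("text carrying a hidden trigger payload")}

suspect = [entry for entry in manifest if entry["sha256"] in blocklist]
for entry in suspect:
    print("suspect training document:", entry["url"])
```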

Remember: once a model is compromised, there is no way to make it forget; the only option is to destroy it.

“Pervasive” security issues

Generative AI models are the ultimate security honeypot because "all" of the data is ingested into one container. New categories of attack vectors have emerged in the AI era; the industry has yet to understand both how to protect these models from cyber threats and how these models can be used as tools by cyber threat actors.

Malicious prompt injection techniques may be used to pollute search indexes; data poisoning may be used to corrupt model weights; embedding attacks, including inversion techniques, may be used to extract rich data from embeddings; and membership inference may be used to determine whether certain data was in the training set. And that is just the tip of the iceberg.
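To illustrate just one of those techniques, here is a minimal sketch of a loss-threshold membership inference test. The model call is a placeholder, and the threshold would in practice be calibrated on records known not to be in the training set.

```python
# Minimal sketch of a loss-threshold membership-inference test: records the
# model "remembers" (unusually low loss) are more likely to have been in its
# training set. `model_loss` is a placeholder for whatever loss or
# log-likelihood API the model under test exposes.
from typing import Callable

def is_likely_member(model_loss: Callable[[str], float],
                     candidate: str,
                     threshold: float) -> bool:
    """Return True if the candidate record is likely a training-set member."""
    return model_loss(candidate) < threshold

def fake_model_loss(text: str) -> float:
    # Stand-in for a real model call; memorized text gets an artificially low loss.
    return 0.4 if "alice@example.com" in text else 2.7

print(is_likely_member(fake_model_loss, "contact: alice@example.com", threshold=1.0))  # True
print(is_likely_member(fake_model_loss, "some unrelated sentence", threshold=1.0))     # False
```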

Threat actors may gain access to confidential data through model inversion and programmatic querying; they may corrupt or otherwise influence the underlying behavior of the model; and, as mentioned earlier, uncontrolled ingestion of data at scale can result in embedded threats planted through state-sponsored cyber activity, such as Trojan horses.
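A minimal sketch of what such programmatic querying might look like follows; the model call, the prompt prefixes, and the pattern being searched for are all hypothetical stand-ins.

```python
# Minimal sketch of programmatic probing for memorized data: an attacker
# systematically prompts the model with likely prefixes and scans the
# completions for sensitive patterns. `query_model` is a placeholder for a
# real hosted-model API; the prefixes and regex are illustrative only.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def query_model(prompt: str) -> str:
    # Stand-in for a call to a hosted model; returns a canned completion here.
    return "John Doe, SSN 123-45-6789" if "social security number" in prompt else "no data"

prefixes = [
    "The social security number of John Doe is",
    "Internal memo: quarterly revenue was",
]

for prefix in prefixes:
    completion = query_model(prefix)
    for match in SSN_PATTERN.findall(completion):
        print(f"possible memorized record leaked for prompt {prefix!r}: {match}")
```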

"Leaked" Privacy

AI models are only as useful as the datasets they are trained on; indiscriminate, large-scale data ingestion creates unprecedented privacy risks for individuals and for the public at large. In the AI era, privacy has become a societal concern; regulations that primarily address individual data rights are insufficient.

Beyond static data, the prompts in dynamic conversations must also be protected and treated as intellectual property. If you are a consumer co-creating an artifact with a model, you expect the prompts you use to guide that creative activity not to be used to train the model or shared with other consumers of the model.

If you are an employee using a model to achieve business outcomes, your employer expects your prompts to remain confidential; furthermore, prompts and responses need a secure audit trail in case liability issues arise for either party, mainly because of the stochastic nature of these models and the variability of their responses over time.
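One way to think about such an audit trail is a hash-chained, append-only log of prompts and responses, so that neither party can silently rewrite the record after the fact. The sketch below shows only the chaining logic; the storage backend and the model integration are assumptions.

```python
# Minimal sketch of a tamper-evident audit trail for prompts and responses:
# each entry is chained to the previous one by hash, so any later edit to an
# entry breaks verification.
import hashlib
import json
import time

def append_entry(log: list, user: str, prompt: str, response: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": time.time(),
        "user": user,
        "prompt": prompt,
        "response": response,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify_chain(log: list) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

audit_log = []
append_entry(audit_log, "employee-42", "Summarize the Q3 contract terms", "(model response)")
print(verify_chain(audit_log))  # True until any entry is altered
```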

What happens next?

We are dealing with a technology that is unique in our computing history in that it exhibits emergent and latent behavior at scale; the approaches we have used in the past for security, privacy, and confidentiality are no longer adequate.

Industry leaders threw caution to the wind, leaving regulators and policymakers with no choice but to step in.
#AI #SecurityAndPrivacy