White House Issues AI Executive Order
The White House has issued an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The EO's provisions include new standards for AI safety and security; mandated testing of AI models to ensure they cannot be used to create weapons; and measures addressing privacy and job displacement.
This is a very broad initiative; you could replace every mention of "Artificial Intelligence" with "New Technology" and it would read much the same. The focus needs to be on governance and essential security hygiene for AI, and on one really important area: stronger authentication to enable wider use of encryption and digital signatures, so that real information can be differentiated from AI-produced dis/misinformation.
This is a very broad directive. Capability and content filtering has been problematic in the past, eroding user trust. The administration, likely through CISA, will be issuing guidance on agency use of AI, speeding acquisition of AI products, and accelerating the hiring of AI professionals as part of a government-wide AI talent surge. Look at where you can leverage AI, possibly with a very focused training set, to help drive innovation and opportunities.
This is a very broad statement covering numerous areas and tasking a large number of US federal departments and agencies. This EO is less about "AI is evil; we need to control it" and much more about "AI is the next big thing, and the US wants to lead it." What is interesting about this EO is not only its breadth but its timing. The UK government is leading an international AI summit this week; the US made sure to release this EO the day before the summit. In addition to the EO, the US government is promoting its new https://ai.gov/ website, which is all about getting people training and jobs in AI.
The long-awaited EO has dropped. It is rather extensive, especially in the areas of research and applications, with considerable focus on US strategic national advantage. What is a bit surprising is how little there is on international cooperation or standards-making, other than multiple references to actions among "international allies and partners." Hopefully that will be corrected in multilateral discussions.
Large language models (LLMs) are the newest user interface to the computer. They enable us to express the result we want in natural language. Like every new UI before them, they make the computer a more powerful tool and open up new applications. That said, the computer remains a tool. Tools vary in quality, utility, usability, and use. The user is responsible for the selection of the tool, its application, and all the properties of the result. We forget any part of that at our peril. We should not impute authority or autonomy to the tool. While regulating the quality of the tool may be useful, it will not ensure good results. Only the user can do that.
William Hugh Murray
Read more in:
Gov Infosecurity: White House Issues Sweeping Executive Order to Secure AI