290117643 | Ai © Oleg Marushin | Dreamstime
Dreamstime M 290117643

Cybersecurity stakeholders praise AI executive order—but say it’s just a start

Oct. 31, 2023
Industrial uses of artificial intelligence have a head start on any government attempts to protect its users, Smart Industry learned in interviews this week, but sources see the Biden administration effort as well-intentioned yet incomplete.

Stakeholders in manufacturing software and cybersecurity generally lauded the executive order (EO) announced by President Biden's administration on Oct. 30 that attempts to bring AI under the umbrella of basic government oversight. But they cautioned that the EO is only a beginning in bringing some understanding to U.S. industry and the public of the technology and how it can be twisted for criminal purposes by bad actors.

Possibly the top mandate of Biden’s freshly minted, ambitious and the first-ever directive on AI is a rule that requires developers of the most powerful artificial intelligence systems to share their safety test results and other critical information with the U.S. government.

“AI is all around us,” Biden said. “To realize the promise of AI and avoid the risk, we need to govern this technology.”

See also: Gen-AI leads back to reducing downtime on the line

But Lisa Plaggemier, executive director of the nonprofit National Cybersecurity Alliance, emphasized that the leadership needs to start at the companies—OpenAI, maker of ChatGPT, Microsoft, DeepMind, IBM, Google, Nvidia, DataRobot, and Intel—that are developing the AI technology that is rapidly expanding to government, defense, consumer, and industrial IT and OT uses.

“The big tech that created this stuff is kind of left out of the equation,” Plaggemier told Smart Industry. “Theoretically, the people that created [AI technology] would know where the guardrails should be.” But the Biden EO, she added, is “a stake in the ground. It raises a lot of questions, but it starts a conversation.”

AI uses cases in IT and OT in manufacturing already are widespread, from leveraging the technology to identify possible production machine downtime and forecasting predictive maintenance, generative design, price forecasting of raw materials, robotics, edge analytics, quality assurance, inventory management, and process optimization.

'Leaning into' the Defense Production Act

In accordance with the Defense Production Act, the Biden executive order also requires that companies developing any foundation AI model that poses a serious risk to national security, national economic security, or national public health and safety to notify the U.S. government when training the model and must share the results of all “red-team” safety tests that developers perform.

The presidential directive also orders the National Institute of Standards and Technology to set standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security also will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Energy and Homeland Security departments also will address AI systems’ threats to infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.

See also: Cybersecurity: ‘Largest obstacle to adoption of smart manufacturing technologies’

The presidential EO, Plaggemier noted, “really leans into the Defense Production Act, which is normally for times of national emergency. When I think about cybersecurity, I think it’s been a national emergency for a while.”

“It's a start,” she added, “but people will say there’s no enforcement with it. The problem is the bad actors don’t have to live by the executive order. While we’re busy figuring out policy … federal policy … corporate policy … and updating our policies, the cybercriminals are unencumbered. Doing something is better than doing nothing at all. [The] important part is the exposure.”

She said digital “watermarking”—a technique that involves embedding digital marks or indicators into machine learning models or datasets to enable identification—may hold an answer toward greater AI safety. The Biden executive order calls on the U.S. Commerce Department to develop guidance for content authentication and watermarking to clearly label AI-generated content.

Rudimentary oversight on AI development

“We think the executive order is much needed from several fronts, certainly including putting more oversight (and controls) on AI development and advancement, while acknowledging the potential for AI in workforce development and training,” said Chris Kuntz, VP of strategic operations for Augmentir, which makes a connected workforce platform that uses AI to operationalize training and on-the-job support for manufacturers, digitizing skills tracking, work instructions, and factory-floor collaboration.

Kuntz lauded the Biden executive order for “putting worker and labor union concerns front and center.” He said the EO “reinforces Augmentir’s use of AI as a way to ‘augment’ workers—not replace them.”

“The EO addresses worker concerns around employers using AI to track productivity at a level that violates their federal labor rights,” Kuntz said in a statement. “Augmentir is aligned with this caution; our central focus is on workforce development and training, an issue called out in the EO.”

See also: AI powering ChatGPT and empowering manufacturers

“We applaud the Biden [EO], particularly the emphasis on ensuring the cybersecurity of models,” said Neil Serebryany, CEO and founder of San Mateo, California-based CalypsoAI, which makes advanced AI security solutions that test, validate, and protect AI systems.

“The president’s main directive is to develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy,” Serebryany added in a statement to SI. “Users need to know the AI systems they are using are secure, so building in security solutions before use is important, as is providing security at the point of inference. Securing the usage of these models means organizations will be protected in real time from threat incursions, regardless of the threat’s nature or origin.

See also: Clorox cyberattack to cost up to $593 million

Serebryany continued: “While this order is a step in the right direction, we urge the Biden administration to take steps to reinforce cybersecurity measures surrounding the utilization of foundation models, including large language models (LLMs), by encouraging the organizations deploying them to aggressively address critical security considerations. This external approach is increasingly important on a global scale as multinational and international organizations adopt SaaS applications that have models embedded within them and seek to integrate a rapidly expanding array of models into their enterprise.”

Also added Jon Siegler, chief product officer at Chicago-based LogicGate, which offers cyber risk, controls compliance and enterprise risk management services: “As President Biden's executive order highlights, the growth of artificial intelligence presents both opportunities and challenges. This balanced outlook recognizes the potential of AI for speed and innovation, while remaining cognizant of the associated risks to security and privacy by those who would seek to misuse it. It is encouraging to see the government taking such robust action to advance AI safely and ethically, promoting research and development, fostering a diverse and skilled AI workforce, and addressing ethical and security considerations.”

Other provisions of the Biden executive order include:

  • Protects against the risks of using AI to engineer dangerous biological materials: The EO mandates development of new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding.
  • Establishes an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software: This builds on the administration’s ongoing AI Cyber Challenge, and these efforts combine to harness AI’s cyber capabilities to make software and networks more secure.
  • Orders development of a National Security Memorandum: This would direct further actions on AI and security, to be developed by the National Security Council and White House chief of staff, to ensure that the U.S. military and intelligence community use AI safely, ethically, and effectively in their missions.
  • Protects Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques: This includes methods that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.
About the Author

Scott Achelpohl

I've come to Smart Industry after stints in business-to-business journalism covering U.S. trucking and transportation for FleetOwner, a sister website and magazine of SI’s at Endeavor Business Media, and branches of the U.S. military for Navy League of the United States. I'm a graduate of the University of Kansas and the William Allen White School of Journalism with many years of media experience inside and outside B2B journalism. I'm a wordsmith by nature, and I edit Smart Industry and report and write all kinds of news and interactive media on the digital transformation of manufacturing.