Regulating Emerging Technology: How Can Regulators Get a Grasp on AI?
By: Chisup Kim
Uses of Artificial Intelligence (“AI”), such as ChatGPT, are fascinating experiments with the potential to transform a user’s parameters, requests, and questions into answers. However, as malleable as these AIs are to user requests, governments and regulators have not shown the same flexibility in governing the new technology. Countries have taken drastically different approaches to AI regulation. For example, on April 11, 2023, China announced that AI products developed in China must undergo a security assessment to ensure that content upholds “Chinese socialist values and do[es] not generate content that suggests regime subversion, violence or pornography, or disrupt[ions to] economic or social order.” Italy took an even more cautious stance, outright banning ChatGPT. Yet domestically, in stark contrast to the decisive action taken by other countries, the Biden Administration has only begun to examine, in vague terms, whether there should be rules for AI tools.
In the United States, prospective AI regulators appear more focused on the application of AI tools to specific industries. For example, the Equal Employment Opportunity Commission (“EEOC”) has launched an initiative to examine whether the use of AI in employment decisions complies with federal civil rights laws. As for autonomous vehicles, the National Highway Traffic Safety Administration (“NHTSA”) has not yet given them the green light of an exemption from occupant safety standards, but it does maintain a web page open to a future with automated vehicles. Meanwhile, as regulators are still trying to grasp the technology, AI is entering every industry and field in some capacity. TechCrunch chronicled the various AI applications from Y Combinator’s Winter Demo Day; its partial list included an AI document editor, SEC-compliant robo-advisors, a generative AI photographer for e-commerce, automated sales emails, an AI receptionist to answer missed calls for small companies, and many more. While the EEOC and NHTSA have taken proactive steps within their respective fields, the widespread application of AI may demand a broader, overarching approach.
Much as it did with its proactive GDPR privacy regulations, the EU has proposed a regulatory framework for AI. The framework enumerates high-risk applications of AI and imposes stringent obligations on them, while applying tempered regulations to limited-risk and no-risk applications. Applications identified as high-risk include the use of AI in critical infrastructure, education or vocational training, law enforcement, and the administration of justice. High-risk applications would require adequate risk assessment and mitigation, logging of data with traceability, and clear notice and information provided to the user. Chatbots are considered limited risk but require adequate notice to the user that they are interacting with a machine. Lastly, the vast majority of AI applications are likely to fall into the “no risk” bucket for harmless uses, such as video games or spam filters.
If U.S. regulators fail to create a comprehensive regulatory framework for AI, they will likely fall behind on this issue, much as they have fallen behind on privacy. With privacy, the vacuum of federal guidance and the reliance on industry self-regulation pushed many states and foreign countries to adopt GDPR-like regulations. The current initiatives by the EEOC and NHTSA are laudable, but these agencies seem to be waiting for actual harm to occur before taking proactive steps to regulate the industry. For example, last year NHTSA found that Tesla’s Autopilot system, among other driver-assistance systems, was linked to nearly 400 crashes in the United States, including six fatal accidents. Waiting for the technology to come to us did not work for privacy regulation; we should not wait for AI technology to arrive either.