It Is Time to Standardize Artificial Intelligence Practices Properly

Risks and challenges highlighted by the latest LLMs and Generative AI solutions must be addressed through the power of standards.

Olivier Blais
12 min read · Apr 7, 2023

Artificial intelligence has been one of the most disruptive technologies of the past few decades, with the potential to revolutionize industries from healthcare to finance to transportation. However, as with any new technology, there are risks and challenges that must be addressed to ensure that AI is safe, reliable, and effective. These risks are more apparent to the public than ever because of Generative AI (ChatGPT, Bard, MidJourney and the like). The public is starting to push back, which is good for society but raises important questions. Progress should not come at the expense of humans; currently, several signs suggest otherwise.

Safer development and use of these solutions ultimately requires the adoption of standards, which is crucial for the benefit of humanity. It is essential to follow in the footsteps of other critical industries, which have made incredible progress thanks to standardization bodies like ISO. Laws and policies alone are insufficient to rein in bad AI systems and to create value with AI.

This article emphasizes the importance of standardization in developing and adopting AI systems. It also advocates for the adoption of international standards and frameworks, such as those offered by ISO, to guide the development and deployment of AI systems. The end goal is to ensure that AI is developed in a responsible and ethical manner and to maximize the potential benefits while minimizing the risks associated with its use.

Quick timeline.

Already in June 2022, LaMDA (a generative language AI by Google) made headlines around the globe after a Google engineer claimed it was sentient, a claim that generated pushback from the Google development team.

This event may seem like it happened years ago; let me remind you that it was only a few months ago. Since then, ChatGPT has been released and used by millions, making it one of the fastest-adopted applications in history. DALL-E 2 and MidJourney have generated millions of awe-inspiring pictures. Other impressive Generative AI solutions, like GPT-4, have also been released.

Number of days to 1M and 100M users by technology, by Kyle Hailey

These solutions have raised many questions and worries from the public about how to use Generative AI day to day and how these tools will change, and potentially replace, millions of jobs.

The privacy and safety of these solutions are also being questioned, and some have even been sanctioned. An excellent example is a recent MidJourney update: as soon as people realized that the solution's free trial could generate deepfakes, very concerning deepfakes of Trump, Macron and even the Pope began circulating.

Imagine: with MidJourney, you could generate highly realistic deepfakes for free in two minutes, with no technical background required. Yep. Scary indeed.

“Due to a combination of extraordinary demand and trial abuse, we are temporarily disabling free trials until we have our next improvements to the system deployed,” MidJourney founder David Holz said in a post this week on the company’s Discord channel (The Verge, 2023).

Is this related to the proliferation of deepfakes? Most experts believe it is.

Deepfakes of Donald Trump and Emmanuel Macron, found on Google

March 2023 was a critical moment, when the industry decided to act on this recent innovation. First, an open letter was released and signed by thousands of AI scientists and professionals, such as Yoshua Bengio, Turing Award winner and professor at the Université de Montréal, and Valérie Pisano, President and CEO of MILA, calling on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4 (Future of Life Institute, 2023).

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

At the end of March, Italy even temporarily blocked ChatGPT, as there was no way for ChatGPT to continue processing data without breaching privacy laws. The Italian SA imposed an immediate temporary limitation on the processing of Italian users' data by OpenAI, the US-based company developing and managing the platform (GPDP, 2023).

As confirmed by the tests carried out so far, the information made available by ChatGPT does not always match factual circumstances, so inaccurate personal data are processed.

So, what’s the solution to all of this?

What’s next? Should AI research be stopped indefinitely? This is unrealistic. So much money is on the line, and millions of individuals are now using these solutions. It’s like trying to stop a moving train by standing on the track.

In my opinion, one way to address these risks and challenges is through standards. Standards are guidelines or requirements that establish best practices and minimum expectations for products, services, and processes. They can help ensure that AI solutions and research are safe, reliable, and of high quality.

After all, this approach is what made research and development safe in aerospace, banking and healthcare.

The objective is to enable research into powerful yet safe, high-quality solutions for humans.

A real-world example of industrial standards: Standards for aerospace.

Standards have played a significant role in reducing the number of airplane crashes in the aerospace industry. It’s one of the most highly regulated industries in the world, and standards are a crucial part of ensuring safety in the sector.

In the 1920s and 1930s, several high-profile airplane crashes eventually led to the establishment of organizations such as the International Civil Aviation Organization (ICAO) and the International Air Transport Association (IATA). These organizations were tasked with developing standards for airplane design, maintenance, and operation, among other things.

Fatal aerospace accidents per million departures, by Aviation Safety Network

Since then, ISO has developed a range of standards that improve safety in the aerospace industry, including ISO 9001 for quality management, ISO 14001 for environmental management, and ISO 45001 for occupational health and safety management; the aerospace-specific quality management standard AS9100 builds on ISO 9001. ISO has also developed technical standards for specific aerospace components and systems. These standards help ensure that companies proactively manage safety hazards, comply with regulations, and design and manufacture components and systems to meet rigorous safety requirements.

Today, the aviation industry is one of the safest modes of transportation in the world, with a remarkably low accident rate. This is due in large part to the rigorous safety standards that have been established over the years. These standards cover everything from the design and manufacturing of aircraft to pilot training and air traffic control.

So, lots of regulations, and yet, planes are flying, and companies are making money. What can we learn from aerospace and apply to AI?

Standardization efforts in artificial intelligence.

A few tools are available to better manage standardization efforts for artificial intelligence. Guidelines, procedures, standards and policies can all be used by standardization organizations to better govern AI-based processes, products and services.

Role of different standardization tools, made by the author

AI policies and laws.

For years, governments, companies and other organizations have been developing and implementing policies to ensure that AI systems are developed and used in a responsible and ethical manner. A policy is a statement of principles established to guide decision-making, action, or conduct.

Although creating meaningful AI development and usage policies is an excellent first step, it is unclear how these policies can be enforced when a gap is perceived. Ethicists and responsible AI professionals will tell practitioners that their most prominent tool today is the courage to speak up when something seems odd. This is a problem: it takes more than courage to implement rigorous best practices and controls that are standard across the field.

We must go further than this, and this is precisely what standardization organizations do, using standards supported by procedures and guidelines.

Laws are also being voted on and implemented to support these policies. The EU AI Act in Europe and Bill C-27 in Canada are both legislative projects that will support the safety and trustworthiness of AI solutions. However, even the most comprehensive laws are impossible to enforce properly without clear standards, procedures, and guidelines.

Standardization efforts support the application of existing laws and policies. That’s the major missing piece of this puzzle.

Who can help standardize AI?

The National Institute of Standards and Technology (NIST) and ISO (the International Organization for Standardization), in partnership with the IEC (International Electrotechnical Commission), are the most important organizations working on artificial intelligence standards.

NIST has been working on AI standards for several years, focusing on metrics for evaluating AI performance and trustworthiness. It has also published the NIST AI Risk Management Framework and guidance for selecting appropriate AI algorithms, tools designed to help organizations ensure that their AI systems are reliable and trustworthy.
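
To make the idea of AI risk management concrete, here is a minimal, illustrative Python sketch of what a simple AI risk register could look like. The risk entries and the likelihood-times-impact scoring are my own simplification for illustration; the actual NIST AI Risk Management Framework is far richer than this.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a simplified, illustrative AI risk register."""
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        # Classic likelihood x impact scoring, used here only
        # to illustrate risk triage; not the NIST methodology.
        return self.likelihood * self.impact

register = [
    AIRisk("Model produces biased loan decisions", likelihood=3, impact=5),
    AIRisk("Training data contains personal information", likelihood=4, impact=4),
    AIRisk("Model drift degrades accuracy over time", likelihood=4, impact=3),
]

# Triage: the highest-scoring risks get mitigation plans first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"score={risk.score:2d}  {risk.description}")
```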

ISO, on the other hand, has been working on AI standards since 2017 through a joint technical committee (ISO/IEC JTC 1/SC 42), which has already published several standards related to AI. Additionally, ISO is working on several other AI-related standards and certifications, such as the AI Management System (AIMS) and many other tools that can help regulators, developers and stakeholders of AI systems.

One of the reasons why ISO is a good choice for AI standardization is its ability to bring together experts from around the world to develop consensus-based standards. ISO has a well-established process for developing standards, which involves input from a wide range of stakeholders, including industry, government, and academia. This process ensures that ISO standards are developed based on a consensus of experts and reflect the latest technological advancements and best practices.

Another reason why ISO is a good choice for AI standardization is its extensive experience in developing standards across various industries. ISO has been developing international standards for over 70 years, and has expertise in developing standards for a wide range of fields, from manufacturing and healthcare to environmental management and information security. This experience has given ISO a unique perspective on developing effective, practical, and widely adopted standards.

How can ISO help with the standardization of AI practices?

ISO/IEC artificial intelligence experts are developing standards that can be used to certify AI solutions. One such initiative is the AI Management System standard (AIMS, ISO/IEC 42001), which is designed to help organizations manage the risks associated with AI. AIMS provides a framework for managing the lifecycle of AI systems, from design to decommissioning, and includes requirements for risk management, data management, and human oversight. This standard is still under development but should be released very soon.
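
To give a rough feel for the "design to decommissioning" idea, here is a toy Python sketch of an AI system lifecycle in which each stage must clear an evidence gate before the system can advance. The stages and gates are my own illustrative assumptions, not the actual AIMS requirements.

```python
from enum import Enum

class Stage(Enum):
    DESIGN = 1
    DEVELOPMENT = 2
    VALIDATION = 3
    DEPLOYMENT = 4
    MONITORING = 5
    DECOMMISSIONED = 6

# Hypothetical gates: the evidence an organization might require
# before an AI system may move to the next lifecycle stage.
GATES = {
    Stage.DESIGN: "risk assessment approved",
    Stage.DEVELOPMENT: "data management plan in place",
    Stage.VALIDATION: "quality evaluation passed",
    Stage.DEPLOYMENT: "human oversight process documented",
    Stage.MONITORING: "decommissioning plan approved",
}

def advance(current: Stage, evidence: set) -> Stage:
    """Move to the next stage only if the current gate is satisfied."""
    required = GATES[current]
    if required not in evidence:
        raise ValueError(f"cannot leave {current.name}: missing '{required}'")
    return Stage(current.value + 1)

stage = advance(Stage.DESIGN, {"risk assessment approved"})
print(stage)  # Stage.DEVELOPMENT
```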

Here is why this standard should be a game changer for the standardization of AI:

  • It is very comprehensive, covering organizational context as well as planning, support, operation, evaluation, and improvement of AI systems.
  • It focuses on mitigating the risks associated with an AI system, which is a generally accepted methodology among professionals.
  • It promotes an iterative process and continuous improvement, which is ideal for the AI field, given its low standardization maturity.
  • It can act either as a central component of a governance, risk and compliance ecosystem or as the basis for conformity assessment/certification of management systems or product/service certification, which gives users flexibility.
Current AIMS Structure, from ISO/IEC 42001 workshop

In 2022, the Standards Council of Canada (SCC) moved forward with a first-of-its-kind pilot to define and test requirements for a conformity assessment program for AI management systems. SCC also plans to release a certification process based on the same standards. This will allow organizations to prove their dedication to the responsible use of AI, raising the confidence of customers and partners in their operations.

Other valuable tools are also available in the ISO toolbox. These guidelines, procedures, and frameworks, often called technical specifications, can provide more tactical help in streamlining and enhancing the development and/or usage of AI systems. Although they are usually referenced by standards, they are also instrumental individually. For example, I am currently developing a very useful technical specification (and yes, I know, I'm biased) that will guide the quality evaluation of AI systems.

This document will provide guidance for the evaluation of AI systems and will be applicable to all types of organizations engaged in the development and use of artificial intelligence.

Once available, this document will act as a framework to evaluate the quality of an AI system by validating that concrete actions have been taken and critical processes have been put in place. This very tactical framework can be used standalone or leveraged as part of conformity assessment/certification efforts.
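
To give a flavor of what "concrete actions" could look like in practice, here is a minimal sketch that checks a classifier against a few quality thresholds: accuracy, a simple fairness gap between subgroups, and pass/fail reporting. The metrics, data, and thresholds are purely illustrative assumptions on my part, not the content of the technical specification.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def fairness_gap(y_true, y_pred, group):
    """Absolute accuracy difference between two subgroups (0 and 1)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accs = [accuracy(y_true[group == g], y_pred[group == g]) for g in (0, 1)]
    return abs(accs[0] - accs[1])

# Hypothetical evaluation data: ground truth, model predictions,
# and a binary sensitive attribute for each record.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

# Illustrative thresholds; a real evaluation would set these per
# use case and cover many more quality characteristics.
acc = accuracy(y_true, y_pred)
gap = fairness_gap(y_true, y_pred, group)
print(f"accuracy:     {acc:.2f} ({'pass' if acc >= 0.70 else 'fail'})")
print(f"fairness gap: {gap:.2f} ({'pass' if gap <= 0.20 else 'fail'})")
```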

In summary, not only is ISO more than qualified to build the right tools to standardize very complex and/or critical industries, but it has already done a lot to support organizations developing and using AI systems. Moreover, AIMS is about to be made available to the public and will be leveraged for certifications and conformity assessments, which directly addresses very real and current tensions.

However, how can standardization work make a real impact in an industry?

A real-world example of standards implementation: Standards for data privacy.

As you know, data privacy is an important topic, and a lot of energy is invested in ensuring data privacy is a top priority for companies that leverage private data.

Implementing ISO standards for data privacy, such as ISO/IEC 27001:2013, can help build trust with the public by demonstrating an organization's commitment to protecting personal data. This is especially important in today's data-driven economy, where customers are increasingly concerned about the privacy and security of their data.

By implementing ISO/IEC 27001:2013, organizations can develop a comprehensive set of policies and procedures for managing information security risks and mitigating potential threats to data privacy. This includes measures such as data classification, access controls, encryption technologies, and incident response procedures.

For example, a financial institution that implements this standard would have policies and procedures in place to ensure the security of customer financial data, including measures to prevent unauthorized access, protect data in transit, and securely dispose of data when it is no longer needed.
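For a taste of what such controls can look like at the code level, here is a small sketch using Python's third-party cryptography library to encrypt a customer record at rest and to gate decryption by role. It illustrates the kinds of measures the standard calls for; it is not an excerpt from the standard, and a real deployment would keep keys in a managed key vault.

```python
from cryptography.fernet import Fernet

# Encryption at rest: in production, the key would live in a
# hardware security module or managed key vault, never in code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"account=12345; balance=10000"
encrypted = cipher.encrypt(record)

# Access control: only authorized roles may decrypt the record.
AUTHORIZED_ROLES = {"compliance_officer", "account_manager"}

def read_record(role: str, blob: bytes) -> bytes:
    """Decrypt a customer record, but only for permitted roles."""
    if role not in AUTHORIZED_ROLES:
        raise PermissionError(f"role '{role}' may not read customer data")
    return cipher.decrypt(blob)

print(read_record("account_manager", encrypted))  # original record
```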

By complying with ISO/IEC 27001:2013, organizations can demonstrate to their customers that they take data privacy seriously and have taken steps to ensure the confidentiality, integrity, and availability of their data. This can help build trust with customers, who may be more likely to choose a provider that is committed to protecting their data.

In addition, third-party certification of compliance with ISO/IEC 27001:2013 can further assure customers that an organization has implemented the necessary controls to protect their data privacy. This is why software suppliers such as Google and Microsoft undergo regular audits to ensure that they comply with this standard.

“For decades, there has been a growing focus on privacy in technology, with laws such as the EU’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act, and the Australian Privacy Principles providing guidance on how to protect and maintain user privacy. Privacy has always been a priority at Google, and we’re continuously evolving to help our customers directly address global privacy and data protection requirements. Today, we’re pleased to announce that Google Cloud is the first major cloud provider to receive an accredited ISO/IEC 27701 certification as a data processor,” Google announced on its Cloud blog.

ISO/IEC 27001 series compliance, made by Google

Overall, implementing ISO standards for data privacy can help build trust with the public by demonstrating an organization’s commitment to protecting their personal data and mitigating potential risks.

Standardization for AI is not an option.

As we enter a new and more dangerous (or dynamic, for the optimists) era of artificial intelligence, we must ensure that its development and deployment follow a standardized approach. This is crucial not only for the success of the AI industry but also for the betterment of humanity as a whole. We have already seen the benefits of standardization in industries like aerospace and data privacy, and it is time for the AI industry to follow suit.

However, simply enacting laws and policies to restrict the usage of AI is not enough. We need to encourage the creation of value with AI while also controlling the negative impacts that come with it. And this is where standardization comes in. By following standardized approaches, we can ensure that AI systems are safe, reliable, and efficient while also fostering innovation and growth.

Therefore, companies that develop or use AI systems should seek help from the International Organization for Standardization (ISO) to standardize their approaches. ISO has developed tools like AIMS that are specifically designed to help companies develop and adopt AI systems in a standardized way. And as more major companies adopt these tools, we can expect to see a significant change in how AI systems are developed and adopted.

In conclusion, standardization is not an option but a necessity for the success of AI. It is up to us to embrace standardized approaches like those mentioned in this article to ensure that AI systems are developed and deployed in a safe, reliable, and efficient manner, while also creating value for humanity.

Written by Olivier Blais

Cofounder & VP Decision Science at Moov AI and Editor of the ISO/IEC TS 25058 technical specification, Guidance for quality evaluation of AI systems.
