Technological innovation and regulation: how can we achieve ethical AI governance?

Aymeric Arnoult
14 min read · Jan 31, 2024


Economic regulation, in the sense of commercial restrictions on what business practices are and are not allowed, is a social compromise. On one hand, it restrains companies from engaging in certain practices that could have been beneficial to the firm from a strictly financial point of view. On the other hand, it is meant to protect end users, and thus citizens, from adversarial commercial practices that could turn out to be harmful for them or, even worse, for a whole part of society. It is a compromise to the extent that, if not enough regulation is applied, citizens may experience abuses from the most powerful industrial actors, such as multinational companies.

That may be the case for oil companies destroying nature for profit. It was also observed when the market of big tech Internet companies like Google and Microsoft was unregulated: huge amounts of data were collected without users' consent, sometimes for the worst usage possible (e.g. the Cambridge Analytica case, which was almost unanimously considered a scandal, at least by the majority of public opinion).

However, if too much regulation is applied, it can slow down firms' development and innovation, and even prevent some companies from developing and entering the market. This results in a loss of value for end users, which is a good utilitarian argument against too much regulation. It has been measured, for instance, that in countries with insufficient competition, the innovation rate was much slower. One can add that pushing the restriction cursor too far results in a loss of freedom for entrepreneurs and people running businesses. Yet any liberal political framework should try to minimize losses of its citizens' freedom when they serve no purpose. That is, in addition, a deontological argument against overly heavy regulation.

Considering this moral issue, people have thought about efficient ways of creating, deploying and assessing regulation. The tools we have had until now were suited and sufficient to handle cases where it was reasonably easy to evaluate the potential impacts of a particular industrial field and its innovations on society. This is why, although it took some years, it was somewhat straightforward to achieve the GDPR's enactment. Politicians, public agencies and regulators managed to find a “sweet spot” where regulation is balanced enough to protect against the misuse of technology and innovation by big tech companies, while letting those companies continue to 1) operate the services they propose and 2) constantly propose new innovations[1].

This exercise, however, becomes much harder with the problem of Artificial Intelligence (AI), understood here as the label for all automated data-processing technologies. AI conveys changes in industry and societal relations that are deeper and broader than any digital innovation we have been exposed to in recent years, since the invention of the Internet. Hence, many people, including executives, academics and politicians, have proposed different ways of handling this issue, and discussions on the topic are still ongoing. AI is therefore a good example to illustrate the dilemma of regulation: it is recent, fresh, and asks the question anew: what governance should we adopt to produce technological artefacts that serve the common good? We will see that even though discussions are still ongoing, a good deal of work has already been achieved, and some answers can already be given, which can even be extrapolated to a more general consideration of regulation in technological innovation.

Regulation: what it is

Earlier, I defined regulation as the set of commercial guidelines setting the rules in specific business areas. This is the global picture of what regulation is, but it is possible to describe it more precisely. The OECD gives a more specific description of regulation and, in particular, provides precise examples of what can be regulated. To put it differently, it describes different sub-types of regulation.

It distinguishes between economic, social and administrative regulation. Economic regulation focuses on controlling the output of companies: the prices of the items they sell, the quantities, or even the number of actors allowed on the market. This type of regulation is intended to maximize the efficiency of the market. Social regulation focuses on the well-being of each citizen: the workers of a company, but also its consumers. It can include protection of the rights of workers as well as protection of the rights of buyers. Finally, administrative regulation relates to the State's global management of the public and private sectors. Taxes, distribution systems and intellectual property fall under this category.

Although the social part is obviously the one that will be considered most deeply here, the other aspects shall not be neglected, as they also come with their share of negative effects and freedom limitations.

Regulation: the risks

Indeed, multiple risks are associated with overly heavy regulation. Most of the literature on regulation warns against rules that would be too smothering or that would come too soon in the development process of a new technology. At the same time, it is commonly admitted that regulation usually “lags” behind the innovation process, meaning that it always occurs a bit later and is reactive by nature. The discussion then bears on the length of this lag, on whether the different laws and directives should come sooner or later, and on their degree of directiveness. Let us review the problems that can occur with ill-calibrated regulation.

One concern is the lack of competition. If regulation is too strong, it is possible that too few actors will operate on a specific market. This can be the case if the State controls the number of actors allowed on that market, for instance if it keeps a monopoly or only allows a numerus clausus of actors. It can also occur if a growing number of rules prevents new actors from entering the market. For some of these specific markets, competition has been proven beneficial to the innovation process: it influences the speed of technology diffusion. This was measured, for example, in the telecommunications field. Among OECD members, access to mobile technologies grew more slowly in countries with a monopoly than in countries with a duopoly, and more slowly in duopoly situations than under open competition. In the end, the result of competition is faster access to new technologies for citizens, providing them with a tool that is unanimously considered positive.

There are also concerns about employment. By imposing a set of norms and stricter rules to comply with above a certain headcount threshold, regulation can discourage companies from hiring new people and, consequently, from seizing the opportunity to be more innovative. In France, for instance, companies have to comply with a set of norms once they reach 50 employees: special healthcare controls, worker representation, and so on. That is why many companies tend to stay right below this threshold (John Van Reenen, 2021). This also has an effect on the number of patents they produce. The discussion is a bit less clear concerning the quality of those patents.

Regulation: the benefits

The risks of non-regulation

With all those points, it may appear that regulation is highly dangerous and mostly harmful to technological innovation, and that AI is no exception. However, it would be an unfair trial of regulation not to recognize that it has obvious positive outcomes.

First, counter-intuitively, regulation is the best way to achieve the most optimal market outputs. Complete non-regulation and full freedom of actors are in fact considered ineffective; that is why this option is not really considered in the literature on how to regulate and govern AI. If we take a quick detour into the cryptocurrency field, an industry that has remained largely under-regulated, it is visible that a good amount of resources and energy has been spent not on finding new innovative technologies (although some effectively showed up), but on finding 1) ways to make money quickly and 2) ways to scam lay people. It is no accident that most cryptocurrencies are still used today for speculation or illegal trafficking. In the end, the social utility of crypto has not managed to take off during these almost fifteen years of non-existent regulation.

Other digital industry examples can be cited when it comes to harm caused by insufficient regulation. Big data and the data collection associated with the usage of the Internet and the new big digital platforms, like social media, search engines and online shops, evidently harmed privacy rights. In this case, some beneficial effects on society have been observed overall. Be that as it may, many scandals arose because of the lack of control that citizens had over their personal data. As most people were shocked by those cases, it is legitimate to consider that those side effects were mostly negative.

Concerns have also been raised regarding AI. Some negative effects and misuses are already easily visible: deepfakes, fake news, cyberattacks, terrorism, warfare, weapons, manipulation of people, espionage, erosion of democracy. One can argue that those problems are not of a different nature from the ones we evoked with cryptocurrencies or digital platforms: mere misuses. Some people, though, disagree, and state that the problems and misuses to be feared with AI are of a different nature. Matthew Scherer, for example, says that AI poses problems of a new kind, of which I chose a few. First, there is the opacity problem: the technologies used to make these algorithms work are mostly opaque to regulators, even after reading the code. Then, a foreseeability problem is also posed, as some behaviors of AI systems will be unforeseeable to their programmers and designers, resulting in what the author calls a “liability gap”. Besides, a control problem arises: it may be impossible to retain control over an AI system, whether because it operates in a fully autonomous way or because the control mechanisms have been deactivated by the AI itself. To sum up, the autonomous character of AI makes it much more delicate to rule over and to hold someone accountable for; incidentally, multiple models of responsibility and accountability have been proposed to face this problem.

Others also point out the systemic aspect of AI systems, which will be required to arbitrate between competing interests if deployed on a large, societal scale; if not implemented correctly, wrong decisions will lead some people to see their interests mistreated.

Regulation is not always harmful

There is no complete consensus on whether regulation stifles innovation. Some argue that the link is not systematically established, and even that regulation can stimulate innovation. Green technologies and data-protection technologies are taken as examples: creating new regulatory constraints pushed companies to innovate in order to meet them. This can also hold in the AI case: the obligation to make generative AI output transparent, as stated in the European Union AI Act, can require AI companies to be innovative on this topic. The obligation to prevent these same models from generating illegal content is another of those innovative challenges. This can, as stated before, become a burden for new actors entering the market if they have to comply with the new rules on a limited budget. This was the case, for instance, with the GDPR, where small actors suffered from the new privacy rules.

Therefore, regulators should be careful to let small actors and innovators develop their companies and infrastructures before burdening them with regulatory constraints.

The actual benefits

We saw that AI and its autonomous character bring up a lot of uncertainty. Facing uncertainty is precisely one of the main goals and benefits of regulation.

A well-regulated market can speed up adoption by reducing people's distrust. According to the Center for Strategic and International Studies, an American think tank that recently launched its AI Council, many people and companies hold back on adopting AI systems because they fear the associated risks and the related challenges of accountability, reliability and safety.

A good regulatory framework can also promote AI best practices and reduce uncertainty for new actors who would otherwise need to concentrate their resources on these unclear aspects. It can help companies craft what is called an RAI (responsible AI) policy, spend less time framing the contours of this policy since it would be stated in the framework, and reclaim all the value and return on investment that come from adopting an RAI program, by reducing AI system failures, anticipating them, and getting better oversight of them. Costs are reduced; generated value increases.

Last but not least, the main benefit of regulation, thanks to its social subpart, is protecting citizens. Indeed, regulatory frameworks have the ability to protect both AI workers and users, whether the latter are voluntary or involuntary users (we will get back to that later).

AI workers, first, are exposed to different sorts of data. Some of it is training data used to train models in the first place; some is reinforcement data, i.e. data coming from end users' usage, on which operators have to assess whether the model's output actually fulfilled the end user's need. Some of this data is harsh for operators to process. This was the case with Apple's operators in Ireland, some of whom had to review Siri usage. Sometimes, Siri was activated by mistake and recorded iPhone owners' activity, which turned out to be shocking in some cases, leading some of those employees to quit their job and recount their experience in order to denounce the fact that they were not protected against this type of content whatsoever.

Users, for their part, can also experience trouble with improperly regulated AI. We can think of the data-bias problem, and of how African Americans are sometimes mistreated by some AI systems, as was seen with the COMPAS automated legal system. This is a case in which users clearly identify the usage of AI, and hence the potential flaws resulting from a faulty implementation. This is not always the case. A growing share of AI systems are being invisibly integrated into our everyday usage and tools, and a growing share of the Internet runs partly on AI systems. Every search engine and recommendation engine of a big platform makes use of AI algorithms. But what if those systems are partially failing? What autonomy (or, more precisely in this case, what authors call self-determination) remains for human users? Protecting self-determination is also among the concerns of those frameworks.

To avoid these major issues, AI regulation frameworks provide some special tools. The first that comes to mind is the sharing of responsibility. In traditional ethics, responsibility for a negative outcome can be attributed to an identified person. With AI systems, it is the automated system that ultimately produces the outcome. We then need to share responsibility between the different stakeholders in order to hold someone accountable. If every stakeholder involved in the development and usage of these systems has their share of responsibility, negative outcomes should be strongly discouraged. There are also ways to have a human systematically supervise the decisions taken by a system and to prevent a wrong decision from leading to any action with consequences. This mechanism is called Human In The Loop (HITL).
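To make the HITL idea concrete, here is a minimal, hypothetical sketch of such a gate in Python. Everything in it (the `Decision` type, the confidence threshold, the reviewer stub) is an illustrative assumption, not a standard API: the point is simply that a decision below a confidence bar produces no consequence until a human signs off.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float  # the model's self-reported confidence, between 0 and 1

def ask_human_reviewer(decision: Decision) -> bool:
    """Placeholder for a real review interface: in production this would
    route the decision to a human review queue and block until answered."""
    print(f"Escalating '{decision.action}' "
          f"(confidence {decision.confidence:.2f}) for human review")
    return False  # fail safe: block the action until a human approves it

def human_in_the_loop(decision: Decision, review_threshold: float = 0.9) -> bool:
    """Return True if the action may proceed.

    High-confidence decisions pass automatically; anything below the
    threshold is escalated to a human before any consequence occurs.
    """
    if decision.confidence >= review_threshold:
        return True
    return ask_human_reviewer(decision)
```

The fail-safe default matters: when the human review step is unavailable, the sketch blocks the action rather than letting it through, which is the whole point of keeping a human in the loop.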

Involving lay people in regulation

If regulation sometimes suffers a bad reputation in some business environments, it is also because it can be perceived as somewhat authoritarian. Business owners see the State as an agent keen to keep control over the macro-economic situation, sometimes at the expense of fluid trade, and to impose its vision of the common good. On the other hand, regulatory institutions are sometimes perceived as technocratic, not taking people's concerns into account enough, and thus not democratic enough. How about involving lay people directly, alongside the other stakeholders, in the creation and governance process, to give regulation more legitimacy? Could it even enhance regulatory outcomes?

This approach is part of many people's reflections on how to handle AI governance. Extending the HITL principle that I evoked earlier, Rahwan proposes a model he calls Society In The Loop (SITL). This participatory model tries to offer a synthesis between HITL and the social contract, thus building the missing brick AI needs to be able to rule over situations where conflicting interests meet. In this situation, where AI systems are spread across every aspect of society's management, he compares AI to a government. A government shall be accountable to its citizens; so shall large AI systems. He then says that, to meet this goal, we would need to build tools to achieve this accountability. Some of them serve to learn what people actually expect from these systems. Crowdsourcing is one of them: it consists of gathering people's preferences on some ethical problems, by asking them directly during public consultations or through special tools (the author himself developed one), or by trying to infer those preferences from the amount of publicly crawlable data available on the Internet, notably on social media.
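As a toy illustration of the crowdsourcing step (not Rahwan's actual tooling), one could imagine collecting votes on ethical scenarios and aggregating them by simple majority. The scenario and option names below are invented for the example; real systems would need far more careful aggregation than a majority count.

```python
from collections import Counter

def aggregate_preferences(votes):
    """Given (scenario_id, chosen_option) votes from lay participants,
    return the majority-preferred option for each scenario."""
    tallies = {}
    for scenario, option in votes:
        tallies.setdefault(scenario, Counter())[option] += 1
    # most_common(1) yields the single most-voted option per scenario
    return {scenario: counter.most_common(1)[0][0]
            for scenario, counter in tallies.items()}

votes = [
    ("swerve_dilemma", "protect_pedestrian"),
    ("swerve_dilemma", "protect_passenger"),
    ("swerve_dilemma", "protect_pedestrian"),
]
print(aggregate_preferences(votes))
```

Even this toy makes the design question visible: majority aggregation flattens minority preferences, which is precisely why SITL-style proposals treat preference elicitation as an open research problem rather than a solved one.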

This model, along with twenty others, was compiled in a meta-analysis resulting in the “AIR” framework. This framework tries to propose a thorough regulatory approach borrowing every relevant and/or overlapping idea found in those twenty-one models. In it, a new entity comes into place: a national AI agency. The framework also includes a fully participatory part, where society can give feedback and contributions to the parliament through special online and physical channels, contribute to the agency, and respond to the agency's consultations about system behavior.

The range of techniques enabling public participation is fairly extensive. Public consultations are considered a traditional method of surveying the public, but newer approaches are also attracting interest: digitally enabled techniques, like online tools and virtual reality, are increasingly valued for public engagement.

Conclusion

Artificial Intelligence, like previous big technological innovations, is challenging to regulate. Some warn that we are taking too much time to do it, as we did with the risks of previous industrial revolutions. Others warn against moving too fast in this process.

This debate is in keeping with the more global debate on the two main ways of regulating new technologies: the precautionary principle and the innovation principle. The former states that we should not deploy any technology until we are fully sure it is harmless; the latter states the opposite, i.e. that we should not forbid or limit any technology until it is proven harmful. But AI's nature is a bit different from other technological innovations: it makes responsibility attribution difficult, and it is about to underlie many aspects of society's organisation (law, finance, medicine, education), acquiring some sort of political mandate. Yet political power should remain in the people's hands.

That is why providing a comprehensive, extensive framework to regulate and keep control over AI is crucial. Many works, especially over the last five to ten years, have contributed to this reflection. They will hopefully converge toward a somewhat universal set of rules, comparable to Human Rights, but for AI.

[1] Facebook, Apple, Microsoft and Amazon continue to innovate and propose new products such as Libra, Horizon, Vision Pro, ChatGPT, etc.

Bibliography

Alessio Tartaro, A. L. (2023). Assessing the impact of regulation and standards on innovation in the field of AI. arXiv.org.

Grandy, J. (2020). Regulators can help clear the way for entrepreneurial energy companies to innovate. The Conversation.

Greenberg, T. (2020). We need a full investigation into Siri’s secret surveillance campaign. The Guardian.

Gregory C. Allen, A. T. (2023). Advancing Cooperative AI Governance at the 2023 G7 Summit. CSIS.

The Guardian (n.d.). The Cambridge Analytica Files.

Gunashekar, S. (2021). The use of public engagement for technological innovation. RAND.

John Van Reenen, P. A. (2021). National Bureau of Economic Research.

Ki, P. (2020). How Regulatory Frameworks Drive Technological Innovations. Entrepreneur.com.

McGreal, C. (2022). ‘What we now know … they lied’: how big oil companies betrayed us all. The Guardian.

OECD. (1996). Regulatory reform and innovation

Parliament, E. (2023). EU AI Act: first regulation on artificial intelligence.

Patricia Gomes Rêgo de Almeida, C. D. (2021). Artificial Intelligence Regulation: a framework for governance. Springer Nature.

Rahwan, I. (2017). Society‑in‑the‑loop: programming the algorithmic social contract. Springer Science+Business Media.

Scherer, M. U. (2016). Regulating artificial intelligence systems: risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology.

Stirling, A. (2014). Towards Innovation Democracy? Participation, Responsibility and Precaution in Innovation Governance. SSRN.com.

Taddeo, M. &. (2018). How AI can be a force for good. Science.


Written by Aymeric Arnoult

Software engineer, involved in creative web dev and crypto, curious about AI, media and economy. Ex story writer at https://medium.com/@sun_app.