AI Systems as Digital Public Goods: Exploring the Potential of Open Source AI

This study aimed to understand the current state of AI from the perspective of social progress and development, and the role it can play in social upliftment by being recognized as an integral part of public administration. The paper explored the theme of AI as a prospective Digital Public Good. Another important aspect of the study was to examine the positions countries have adopted in public policy and in the larger geopolitical context, in particular through a case study of the European Union’s “Roadmap for Ethical AI”, to propose the possibility of a fully free and open source (FOSS) AI that is also scalable, and to list the reasons why this is not yet achievable. In its closing remarks, the paper offers recommendations for creating an AI model that meets the parameters of a Digital Public Good.

Introduction

Francis Fukuyama, in his work ‘The End of History and the Last Man’, claimed that with the end of the Cold War and the emergence of liberal capitalism as the dominant ideology, humanity had reached not just a watershed moment in history but an epoch that would mean the end of ideological evolution of any sort; technology, however, has changed that perspective. Gordon Moore, the co-founder of Intel, observed that the number of transistors in a dense integrated circuit doubles roughly every two years, a statement that reflected not only the capacity of the chips of his day but also what their capabilities were to become. Artificial Intelligence and its possible implementations have begun to sweep through the consumer technology industry and are being treated as an advance comparable to the democratization of the internet, which moved from military development to civilian usage. However, AI has had its issues during its first scaled public preview, with Google and Microsoft both running into trouble when their chatbots gave out misinformation (1) (2), leading to criticism from states and individuals alike. States have already started to take a proactive approach to AI, its possible implementations, and how the dichotomy or coexistence between humans and AI is to be defined, whether it be the EU releasing a white paper on AI and its roadmap on probable policy issues, or the FTC censuring companies for their “AI claims”. (3)
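As a concrete illustration of Moore’s observation, here is a minimal sketch in Python of the exponential growth a two-year doubling implies. The 1971 Intel 4004 baseline of roughly 2,300 transistors is an assumption chosen purely for illustration, not a figure taken from the paper:

```python
# A minimal sketch of Moore's observation: transistor counts double
# roughly every two years. The 1971 Intel 4004 baseline (~2,300
# transistors) is used here only as an illustrative starting point.

def projected_transistors(start_count: int, start_year: int, year: int) -> int:
    """Project a transistor count assuming one doubling every two years."""
    doublings = (year - start_year) / 2
    return int(start_count * 2 ** doublings)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{projected_transistors(2300, 1971, year):,}")
```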

Although the study of AI began in the 1940s within the field of computer science, its applications and implementations have far-reaching consequences in many other arenas.

In recent years, Artificial Intelligence has been contemplated by different nations. Governments all over the world are beginning to study the implementation and potential benefits of technologies based on artificial intelligence. Some countries have recognized this potential and picked their positions, with the EU releasing its roadmap for an AI policy framework, followed by the USA. Governments and institutions are rapidly responding to the demands of a changing society. Among the significant trends, facts, and preconditions that should be considered is that AI in its current state can be harmful: it gives out misinformation and is susceptible to manipulation. Government and public policy have to operate in this context, alongside the newer possibilities opened up by the development of new chips, new materials, and increased computational power. Public administration has found lucrative uses for AI support in multiple flavors of public management, such as decision-making over big data, public services, and public safety. Yet fears of relying on government systems evoke bitter reminders of the Soviet Union in the 1980s, when reliance on computer software built by the country’s best game-theory economists contributed to the collapse of the economy. Intensification-90 was meant to be the salvation of the Soviet economy: game-theory economists created and trained software that was to match the demand and supply needs of Leningrad in real time, and it ended up throwing the economy into disarray. (4)

Understanding AI

Artificial Intelligence as a concept has travelled from ancient Greek philosophy through cyberpunk literature to its current form, which emerged with Alan Turing’s thought experiments on the possibility of machines developing self-sustaining intelligence; AI has been extensively debated and discussed throughout this history. In the 21st century, AI aims to extend its help in maintaining nature and governance through intelligent machines, with the ultimate goal being the mutual coexistence of machines and humans. Terminologies like big data, deep learning, and artificial general intelligence (AGI) have defined our current understanding of the technology. Google’s research on natural language processing (NLP) changed the paradigm for machine communication in spoken languages; this debuted publicly with LaMDA at Google I/O 2021, although the seeds for it were sown earlier. (5)

The generation of AI-based software (algorithms) and hardware (machines) draws on different techniques. Some techniques are useful for learning, others for evolution, and still others are based on data analysis or robotics. In practice, AI applications mix several techniques, some implemented in software, others in hardware, or a combination of the two. Software-based techniques used in artificial intelligence include, but are not limited to, artificial neural networks, evolutionary computation (such as genetic algorithms, evolutionary strategies, and genetic programming), fuzzy logic, intelligent systems, multi-agent systems, natural language processing, expert systems, learning classifier systems, machine learning, and deep learning. Other software-side AI techniques are data mining, text mining, and sentiment analysis. Through these, organizations can implement a series of emerging technologies useful for mass process automation, cost and error reduction, increased efficiency and competitiveness, value creation, and fraud avoidance, a situation that will impact the performance and development of governments throughout the world. (6) Minimal code sketches of the four techniques discussed below appear at the end of this section.

The design of artificial neural networks is based on the learning mechanisms present in the neural networks of the human brain. In these networks, neurons are interconnected through synapses, and the strengths of those connections can be modified; this modification of connections is what represents learning.

Evolutionary computation techniques, such as genetic algorithms, are based on genetic operators such as crossover, mutation, selection, and adaptation: they are modeled on evolution and natural selection, and they represent the evolution of species in computational algorithms.

Fuzzy logic is distinct from traditional logic in that it permits a spectrum or range of possibilities, as opposed to binary values (true or false). An example of its implementation is the Likert scale, wherein each value lies within a range of possibilities and certain values can belong to two scales concurrently.

Intelligent agents are important for AI because the internal processes of AI algorithms can be represented by an architecture of intelligent agents. An intelligent agent interacts with its environment and has sensors, effectors, and responses (reactive or proactive) to the stimuli of that environment. An intelligent agent is a process of an intelligent system, and a set of interacting intelligent agents is called a multi-agent system.

Finally, some software techniques, such as text mining, sentiment analysis, expert systems, and machine learning, base their operation on underlying techniques such as genetic algorithms, artificial neural networks, and fuzzy logic. (7)
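To ground the neural-network paragraph above, here is a minimal sketch in plain Python (no framework): a single artificial neuron whose synaptic weights are strengthened or weakened on each training example, which is the “modification of connections” described above. The training task (logical AND), learning rate, and network size are illustrative assumptions, not anything prescribed by the paper:

```python
import math
import random

# A single artificial neuron: inputs combine through weighted "synapses",
# pass through a sigmoid activation, and the weights are adjusted whenever
# the output is wrong; that weight change is the learning mechanism.
# The AND task and the 0.5 learning rate are illustrative assumptions.

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(3)]  # two inputs + a bias

def neuron(x1, x2):
    s = weights[0] * x1 + weights[1] * x2 + weights[2]  # weighted sum + bias
    return 1 / (1 + math.exp(-s))                       # sigmoid activation

# Train on logical AND with a cross-entropy-style update (error * input).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(5000):
    (x1, x2), target = random.choice(data)
    err = target - neuron(x1, x2)
    for i, x in enumerate((x1, x2, 1)):   # the constant 1 is the bias input
        weights[i] += 0.5 * err * x       # strengthen or weaken the synapse

for (x1, x2), target in data:
    print(x1, x2, "->", round(neuron(x1, x2), 3), "expected", target)
```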
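Next, a minimal sketch of a genetic algorithm using the operators named above: selection, crossover, and mutation. The toy objective, evolving a bitstring of all 1s (the classic “OneMax” exercise), and all parameters are assumptions made up for the illustration:

```python
import random

# A minimal genetic algorithm: tournament selection, single-point
# crossover, and per-bit mutation evolve a population toward the toy
# goal of an all-1s bitstring. All parameters are illustrative.

random.seed(0)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(bits):                 # fitness = number of 1s in the string
    return sum(bits)

def select(pop):                   # tournament selection of size 2
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):             # single-point crossover of two parents
    point = random.randrange(1, LENGTH)
    return p1[:point] + p2[point:]

def mutate(bits, rate=0.05):       # flip each bit with a small probability
    return [b ^ 1 if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP)]

best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", LENGTH)
```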
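Third, a sketch of fuzzy membership, loosely echoing the Likert-scale example: a single value can belong to several fuzzy sets at once, each to a degree between 0 and 1. The set names and breakpoints are invented for the illustration:

```python
# A minimal fuzzy-membership sketch: unlike binary logic, a value can
# belong to several sets simultaneously, each with a degree in [0, 1].
# The "neutral"/"agree" sets and their breakpoints are illustrative
# assumptions, loosely echoing the Likert-scale example in the text.

def triangular(x, left, peak, right):
    """Degree of membership in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# A response of 3.8 on a 1-5 scale is partly "neutral" and partly
# "agree" at the same time, which is impossible in classical logic.
for score in (2.5, 3.8, 4.6):
    neutral = triangular(score, 2, 3, 4)
    agree = triangular(score, 3, 4, 5)
    print(f"score {score}: neutral={neutral:.2f}, agree={agree:.2f}")
```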
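Finally, a sketch of the sense-and-act loop of a reactive intelligent agent, with two agents sharing one environment to hint at a multi-agent system. The thermostat scenario is an invented illustration, not an example from the paper:

```python
# A minimal sketch of the sense -> act loop described above: each agent
# reads a shared environment through a "sensor" and reacts through an
# "effector". The thermostat scenario is an illustrative assumption;
# a set of such interacting agents forms a multi-agent system.

class Environment:
    def __init__(self):
        self.temperature = 18.0

class ReactiveAgent:
    def __init__(self, name, target):
        self.name, self.target = name, target

    def sense(self, env):              # sensor: observe the environment
        return env.temperature

    def act(self, env):                # effector: change the environment
        reading = self.sense(env)
        if reading < self.target:
            env.temperature += 0.5     # reactive response: heat
        elif reading > self.target:
            env.temperature -= 0.5     # reactive response: cool

env = Environment()
agents = [ReactiveAgent("heater", 21.0), ReactiveAgent("cooler", 20.0)]
for step in range(10):                 # the two agents interact via env
    for agent in agents:
        agent.act(env)
    print(f"step {step}: temperature={env.temperature:.1f}")
```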

Author: Pranay Khattar