Artificial Intelligence: Transformative Challenge for Humanity

Through the course of history, we have seen multiple changes in the way we work. Until now there have been three major revolutions: the Agricultural, the Mechanical and the Digital. Human beings have been able to adapt to these changes and, in fact, thrive during and after these transitions. However, we are now on the cusp of another revolution, one that is different from all the previous ones.

This time, the technology not only makes our lives easier but also learns from us, adapts to us and poses a unique challenge that might question the very existence of us humans. But before we delve deeper into the “buzz” word AI, it is imperative to look back and reflect on the previous shifts we experienced, to understand how they played an important role in deciding where we are currently as a society and what the future might look like.

“The Industrial Revolution has two phases: one material, the other social; one concerning the making of things, the other concerning the making of men.” (Charles A. Beard)

Agricultural Revolution

The Agricultural Revolution, also referred to as the Neolithic Revolution, was a significant turning point in human history that occurred around 10,000 B.C. This period marked the shift from nomadic hunting and gathering societies to settled agricultural communities. Humans began to cultivate crops and domesticate animals for food, which led to a more stable food supply. Common crops included wheat, barley, rice, and maize, while animals such as goats, sheep, and cattle were domesticated.

As agriculture provided a reliable food source, people began to settle in one place, leading to the development of villages and eventually complex societies. As societies became more complex, social hierarchies began to form. Roles became defined, leading to the development of specialised professions and trade. The shift to agriculture had profound effects on culture, religion, and social structures, paving the way for the rise of civilisations.

The Agricultural Revolution laid the foundation for modern society by enabling population growth, the development of cities, and the rise of complex social structures.

Mechanical Revolution

The Mechanical or the Industrial Revolution, which occurred from the late 18th century to the early 19th century, marked a period of significant technological advancement and industrialisation. This revolution was characterised by the transition from hand production methods to machine-based manufacturing processes.

The invention of machinery such as the steam engine made our lives much easier and contributed to the rise of capitalism, as industrial production led to increased efficiency and profitability. The growth of factories led to mass migration from rural areas to urban centers, as people sought jobs in industrial settings. This contributed to rapid urbanisation and the growth of cities.
It also resulted in changes in labor practices and the nature of work. While the revolution created jobs and increased wealth, it also led to harsh working conditions in factories, child labor, and environmental degradation, prompting social reform movements.

The Mechanical Revolution laid the groundwork for modern industry and fundamentally altered social, economic, and cultural aspects of life, setting the stage for the technological advancements of the 19th and 20th centuries.

Digital Revolution

The digital revolution refers to the shift from analog, mechanical, and electronic technology to digital technology that began in the late 20th century and continues to transform various aspects of life. This revolution encompasses the rise of digital computers, the internet, and related technologies, and has dramatically changed how we communicate, work, and access information.

The widespread adoption of the internet has revolutionised communication, enabling instant access to information, social networking, and global connectivity. The way we consume media has changed dramatically with the advent of digital streaming services, online gaming, and digital publications. The digital economy has led to the rise of new business models, such as e-commerce and gig economy platforms, altering traditional economic structures.

The digital revolution continues to evolve, with emerging technologies like artificial intelligence, blockchain, and augmented reality further shaping the future.

The Future: Artificial Intelligence

Over the summer of 1956, Claude Shannon, the begetter of information theory, and Herb Simon, the only person ever to win both the Nobel Memorial Prize in Economic Sciences and the Turing Award of the Association for Computing Machinery, were called together by a young researcher, John McCarthy, who wanted to discuss “how to make machines use language, form abstractions and concepts” and “solve kinds of problems now reserved for humans”. It was the first academic gathering devoted to what McCarthy dubbed “artificial intelligence”, and it set a template for the field’s next 60-odd years.

The following decades saw much intellectual ferment and argument on the topic, but by the 1980s there was wide agreement on the way forward: “expert systems” which used symbolic logic to capture and apply the best of human know-how. The Japanese government, in particular, threw its weight behind the idea of such systems and the hardware they might need. But for the most part such systems proved too inflexible to cope with the messiness of the real world. By the late 1980s AI had fallen into disrepute, a byword for overpromising and underdelivering. Those researchers still in the field started to shun the term.

A decade ago, the best AI systems in the world were unable to classify objects in images at a human level; they struggled with language comprehension and could not solve math problems. Today, AI systems routinely exceed human performance on standard benchmarks.

In his book “Work”, James Suzman argues that we are in the midst of a similarly transformative point in history. Suzman shows how automation might revolutionise our relationship with work and, in doing so, usher in a more sustainable and equitable future for our world and ourselves.
But he wrote that book in 2020, and a lot has changed since then. The dramatic rise of AI in the last couple of years has been unprecedented and has caused considerable fear among those who have heard about it and understand a little of what it is capable of. That said, the current AI technology still has significant problems. It cannot reliably deal with facts, perform complex reasoning, or explain its conclusions.

Some key takeaways from Stanford University’s annual AI Index Report are:

  • A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 38% in 2022.
  • In 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. Still other studies caution that using AI without proper oversight can lead to diminished performance.
  • AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.

AI models can neither create nor solve problems on their own (or not yet anyway). They are merely elaborate pieces of software, not sentient or autonomous. They rely on human users to invoke them and prompt them, and then to apply or discard the results. AI’s revolutionary capacity, for better or worse, still depends on humans and human judgment. Researchers are still getting a handle on what AI will and will not be able to do. So far, bigger models, trained on more data, have proved more capable. This has encouraged a belief that continuing to add more will make for better AI. Research has been done on “scaling laws” that show how model size and the volume of training data interact to improve LLMs.
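The idea behind such scaling laws is that a model's loss falls predictably as a power law in both parameter count and training-data volume. A minimal sketch of this relationship, using the Chinchilla-style functional form (an irreducible term plus two power-law penalties) with placeholder coefficients chosen purely for illustration, not fitted values:

```python
# Illustrative Chinchilla-style scaling law: predicted loss falls as a
# power law in parameter count N and training tokens D.
# The functional form follows published scaling-law work; the coefficient
# values below are placeholders for illustration, not fitted constants.

def scaling_loss(n_params: float, n_tokens: float,
                 e: float = 1.7, a: float = 400.0, b: float = 410.0,
                 alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted loss: an irreducible term e, plus penalties that shrink
    as the model gets bigger (a / N^alpha) and sees more data (b / D^beta)."""
    return e + a / n_params**alpha + b / n_tokens**beta

# Scaling up both model size and data lowers the predicted loss,
# but each term decays toward the irreducible floor e.
small = scaling_loss(1e9, 2e10)    # 1B parameters, 20B tokens
large = scaling_loss(1e10, 2e11)   # 10B parameters, 200B tokens
assert large < small
```

The diminishing-returns shape is the key point: each doubling of model size or data buys a smaller loss reduction, which is why "just add more" remains an open bet rather than a guarantee.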

Regulations

Advances in the past few years have prompted a growing concern that progress in the field is now dangerously rapid—and that something needs to be done about it. Yet there is no consensus on what should be regulated, how or by whom.
The EU has created an AI Office to ensure that big model-makers comply with its new law. America and Britain will rely on existing agencies in areas where AI is deployed, such as in health care or the legal profession. But both countries have created AI-safety institutes. Other countries, including Japan and Singapore, intend to set up similar bodies.

Meanwhile, three separate efforts are under way to devise global rules and a body to oversee them. One is the AI-safety summits and the various national AI-safety institutes, which are meant to collaborate. Another is the “Hiroshima Process”, launched in the Japanese city in May 2023 by the G7 group of rich democracies and increasingly taken over by the OECD, a larger club of mostly rich countries. A third effort is led by the UN, which has created an advisory body that is producing a report ahead of a summit in September.

Summary

AI can automate repetitive tasks, increase productivity and efficiency in various industries. It can analyse vast amounts of data quickly, identifying patterns and insights that might be missed by humans. In healthcare, it can assist in diagnosing diseases, predicting patient outcomes, and personalising treatment plans. AI-powered tools can analyse medical images and detect abnormalities with high precision.

On the other hand, however, AI systems can perpetuate existing biases if they are trained on biased data. This can lead to unfair outcomes in areas such as hiring and law enforcement. The extensive data collection required for AI raises privacy issues because of the potential for misuse. AI systems can also be vulnerable to adversarial attacks, in which inputs are manipulated to produce incorrect outputs. This poses risks in critical applications such as cybersecurity, autonomous vehicles and, possibly, future warfare.

Balancing these advantages and disadvantages is crucial as AI technology continues to develop. Responsible development and deployment of AI can help maximise benefits while mitigating potential risks.