Response to Noam Chomsky’s ChatGPT review

While it is true that machine learning models like ChatGPT have their limitations, and that they have attracted enormous public attention because of their usability in everyday life, those limitations are not as severe as Chomsky and his co-authors suggest in their recent New York Times article, “Noam Chomsky: The False Promise of ChatGPT”. This article is an attempt to explain where the authors lost the plot.

The relationship between technology and society is complex and multidimensional, influenced by a wide range of social, cultural, economic and political factors. One way to understand this relationship is through the lenses of the social shaping and the social construction of technology, theoretical frameworks developed in the field of science and technology studies (STS) that emphasize the role of social and cultural factors in the development and use of technology.

The social shaping of technology perspective, associated with the philosopher Langdon Winner, argues that technological development is moulded by social and cultural factors, including the interests, values and power relations of various social actors such as users, developers, decision makers and other stakeholders. On this view, technologies are neither neutral nor value-free, but are imbued with social and cultural meanings that reflect the interests and perspectives of their creators and users.

Similarly, the social construction of technology framework, proposed by the renowned STS researcher Trevor Pinch, emphasizes the role of social and cultural factors in how technologies are perceived and used in society. On this account, technologies are not inherently good or bad, and their meaning is not determined solely by their technical characteristics; rather, they are constructed and defined by social and cultural norms and by dimensions of social inequality such as gender, race and class.


For example, technological tools are often advertised in ways that reinforce gender stereotypes and entrench social hierarchies. Washing machines are typically marketed to women and cars to men, although both technologies can be used by people of any gender. Similarly, cryptocurrencies like Bitcoin can be used to democratize financial transactions, but they can also be used for illegal activities such as money laundering and terrorist financing. It is therefore not the technology itself that is problematic, but the way it is perceived, used, shaped and constructed by human society, with the human brain at its heart.

Today’s most popular and trendy strain of AI, machine learning-based tools like ChatGPT, is, as even Chomsky and his co-authors would acknowledge, a marvel of the human brain, not some distinct alien phenomenon. Such tools can generate language in a variety of contexts, but it is up to humans to determine how to use that language responsibly and ethically for the benefit of society as a whole.

Chomsky and his co-authors make three main arguments for the claim that ChatGPT offers false promises. The first is that machine learning programs like ChatGPT cannot produce the Borgesian revelation of understanding because they differ profoundly from the way humans reason and use language. The second is that machine learning models cannot explain the rules of English syntax, which leads to superficial and dubious predictions. The third and final major argument is that machine learning models cannot produce real intelligence because they lack the capacity for moral thinking.

To begin with, the authors tend to overlook the tool’s great usability for various sections of society and how it will strengthen the global economy in the long run. How it has revolutionized online research, now supported by GPT-4, finds almost no mention in the article. That said, it is true that this is a new and evolving technology, and how it responds depends entirely on how it is programmed and on the plethora of text it has processed during the training of its different models. Both are undoubtedly products of the human brains that live and breathe around us.

First, machine learning models do not, at this point, reason like humans, but they are able to produce language that is indistinguishable from human language in certain contexts. Recent research shows how OpenAI’s GPT-3 model can produce consistent, fluent text in a variety of genres, from news articles to poetry. Second, machine learning models do make mistakes in some cases, but they are able to learn complex language patterns that are difficult for humans to articulate, and they can produce coherent and meaningful texts in many contexts. Finally, it is true that machine learning models do not have the same capacity for ethical reasoning as humans, but they can be trained to avoid morally objectionable content. ChatGPT itself uses a content filtering system to detect and remove offensive and inappropriate content. While such filters are not always perfect, they can learn to avoid harmful content and produce text acceptable to most users.
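To make the idea of a content filtering layer concrete, here is a deliberately minimal sketch in Python. It is not OpenAI’s actual moderation system, which relies on learned classifiers rather than word lists; the blocklist terms and function names below are purely illustrative assumptions. The sketch only shows the general pattern: screening candidate outputs before they reach users.

```python
# Toy illustration of output moderation: screen generated text before showing it.
# Real systems (e.g. OpenAI's moderation layer) use trained classifiers, not
# static word lists; this keyword filter is only a conceptual stand-in.

BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens, not a real moderation list


def is_acceptable(text: str) -> bool:
    """Return True if the text contains no blocked term (case-insensitive)."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)


def filter_outputs(candidates: list[str]) -> list[str]:
    """Keep only candidate generations that pass the acceptability check."""
    return [c for c in candidates if is_acceptable(c)]


if __name__ == "__main__":
    drafts = ["A harmless sentence.", "A sentence containing slur1."]
    print(filter_outputs(drafts))  # only the harmless sentence survives
```

The point of the sketch is simply that moderation is a separate, improvable layer on top of generation, which is why filters can get better over time even when the underlying model does not change.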

Drawing an analogy with one of the most famous and reputable authors of our time, Salman Rushdie, and how his work is seen across the world, we all know that his literary output has been the subject of controversy between different societies. While some view his work as a mark of academic excellence and an exercise of his right to free speech, others view it as socially and culturally inappropriate. It is important to recognize that offensive and politically incorrect content originates from the human brain and is influenced by subjective social and cultural norms. There are instances where even we unwittingly produce content that is inappropriate for certain cultures, religions or political ideologies owing to a lack of awareness of evolving social and cultural norms. At the same time, there are also deliberate efforts to create offensive and inappropriate content, which have led to academic discourse on the subject.

The debate over the capabilities and limitations of machine learning models like ChatGPT is ongoing, and there are valid arguments on both sides. While Chomsky and his co-authors raise some genuine concerns about the promises of ChatGPT, it is important to note that these technologies are still in their infancy and will evolve over time. How these tools are used and shaped by society is crucial in determining their impact on our lives. Ultimately, before we launch into a direct critique of these evolving technologies, it is up to humans to determine how to use them responsibly and ethically for the benefit of society as a whole.

The author is a researcher in science, technology and society and focuses on digital democracy and studies on exclusion. Currently, he is working as Senior Director, Training at Bharti Institute of Public Policy, Indian School of Business. The opinions expressed are personal.
