
Social Shaping and Construction of Technology: Response to Noam Chomsky's Critique of ChatGPT

Written By: Dr Sharique Hassan Manazir

News18.com

New Delhi, India

Chomsky and his co-authors make three main arguments for the claim that ChatGPT delivers false promises. (Photo: AFP)

While Chomsky and his co-authors have raised some concerns about the false promises of ChatGPT, it is important to note that these technologies are still in their early stages and are evolving with time

While it is true that machine learning models like ChatGPT have limitations, and that they have attracted enormous social attention because of their wide usability in day-to-day life, those limitations are not as severe as Chomsky and his co-authors suggest in their recent article, 'Noam Chomsky: The False Promise of ChatGPT', published in The New York Times. This article is an attempt to explain where the authors lost the plot.

The relationship between technology and society is complex and multidimensional, influenced by a wide range of social, cultural, economic, and political factors. One way to understand this relationship is through the lenses of the social shaping and social construction of technology, theoretical frameworks developed in the field of Science and Technology Studies (STS) that emphasise the role of social and cultural factors in shaping technological development and use.

According to the philosopher Langdon Winner, "technology itself is neutral, but it depends on the way that societies use it." The social shaping of technology perspective goes further, arguing that technological development is moulded by social and cultural factors, including the interests, values, and power relations of various social actors such as users, designers, policymakers, and other stakeholders. On this view, technologies are neither neutral nor value-free; they are imbued with social and cultural meanings that reflect the interests and perspectives of their creators and users.

Similarly, the social construction of technology, propounded by the STS scholar Professor Trevor Pinch, emphasises the role of social and cultural factors in shaping the way technologies are perceived and used in society. On this account, technologies are not inherently good or bad, nor are they defined solely by their technical features; they are constructed and defined by social and cultural norms and by factors such as gender, race, class, and other dimensions of social inequality.

For example, technological tools are often advertised in ways that reinforce gender stereotypes and social hierarchies: washing machines are marketed to women and cars to men, even though both can be used by people of any gender. Similarly, cryptocurrencies like Bitcoin can be used to democratise financial transactions, but they can also be used for illegal activities such as money laundering and terrorism financing. It is therefore not the technology itself that is the problem, but the way it is perceived, utilised, shaped, and constructed by human society, with the human brain at its core.

Machine learning-based tools like ChatGPT, "the most popular and fashionable strain of A.I." as Chomsky and his co-authors would put it, are likewise wonders of the human brain, not some separate extraterrestrial phenomenon. ChatGPT, too, can be used to generate language in a variety of contexts, but it is up to humans to determine how to use that language responsibly and ethically to benefit society at large.

Chomsky and his co-authors make three main arguments for the claim that ChatGPT delivers false promises. The first is that machine learning programs like ChatGPT cannot produce the Borgesian revelation of understanding because they differ profoundly from how humans reason and use language. The second is that machine learning models cannot explain the rules of English syntax, leading to superficial and dubious predictions. The third and final argument is that machine learning models cannot produce true intelligence because they lack the capacity for moral thinking.

To begin with, the authors overlook the wide usability of the tool across various sections of society and how it may bolster the global economy in the long run. The way it has revolutionised online search, now backed by GPT-4, hardly finds any mention in the article. That said, it is a relatively new and evolving technology, and the way it responds depends entirely on how it is programmed and on the plethora of text it has gone through while its various models were trained. Both are, without an iota of doubt, the outcome of the human brains living and breathing around us.

At this stage, machine learning models do not reason like humans, but they are capable of producing language that is indistinguishable from human language in some contexts. Recent research shows how OpenAI's GPT-3 model can produce coherent and fluent text in a variety of genres, from news articles to poetry. Secondly, machine learning models may make errors in some cases, but they can learn complex patterns in language that are difficult for humans to articulate, and they can produce texts that are coherent and meaningful in many contexts. Lastly, it is true that machine learning models do not yet have the same capacity for ethical reasoning as humans, but they are capable of learning to avoid morally objectionable content. ChatGPT itself uses a content filtering system to detect and remove offensive and inappropriate content. Though not always perfect, such systems can learn to avoid harmful content and produce text that is acceptable to most users.
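The content-filtering idea described above can be sketched, purely as a toy illustration, with a keyword-based check. The blocked-term list, the `moderate` function name, and the accept/reject logic are all assumptions made for this sketch; real moderation systems such as ChatGPT's rely on trained classifiers over many harm categories, not on word lists.

```python
# Toy sketch of a content filter: flag text containing blocked terms.
# Hypothetical placeholder terms stand in for genuinely harmful words.
BLOCKED_TERMS = {"slur1", "threat1"}

def moderate(text: str) -> dict:
    """Return whether the text is allowed and which terms triggered a flag."""
    # Normalise: lowercase each word and strip trailing punctuation.
    words = {w.strip(".,!?").lower() for w in text.split()}
    hits = sorted(words & BLOCKED_TERMS)
    return {"allowed": not hits, "flagged_terms": hits}

print(moderate("A perfectly ordinary sentence."))
print(moderate("This contains slur1, sadly."))
```

A production filter would replace the set intersection with a classifier score and a per-category threshold, but the surrounding decision, generate text only when the verdict is "allowed", stays the same.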

Consider an analogy with Salman Rushdie, one of the most famous authors of our time, and the way his work is received across the globe. His literary work has long been a subject of controversy among different societies: some consider it a hallmark of literary excellence and an exercise of his right to freedom of speech, while others view it as socially and culturally inappropriate. It is important to acknowledge that offensive and politically incorrect content stems from the human brain and is shaped by subjective social and cultural norms. There are instances when even we unintentionally produce content that is inappropriate for certain cultures, religions, or political ideologies owing to a lack of awareness of unfamiliar social and cultural norms. At the same time, there are also deliberate efforts to create offensive and inappropriate content, and this has led to academic discourse on the subject.

The debate over the capabilities and limitations of machine learning models like ChatGPT is ongoing, and there are valid arguments on both sides. While Chomsky and his co-authors have raised some concerns about the false promises of ChatGPT, it is important to note that these technologies are still in their early stages and are evolving with time. The way these tools are utilised and shaped by society is crucial in determining their impact on our lives. Ultimately, it is up to humans to determine how to use these tools responsibly and ethically to benefit society at large before we get into outright criticism of these evolving technologies.

The author is a Science, Technology & Society scholar with a research focus on Digital Democracy and Exclusion Studies. Currently, he is working as Senior Manager, Training, at the Bharti Institute of Public Policy, Indian School of Business. Views expressed are personal.


first published:March 18, 2023, 13:21 IST
last updated:March 18, 2023, 13:21 IST