
Preview release of ChatGPT shows potential of artificial intelligence

Last November, the artificial intelligence (AI) research laboratory OpenAI launched a free prototype of its text-based human conversation simulator called ChatGPT. Over the past four months, more than 100 million users across a wide range of disciplines have been experimenting with the preview version of ChatGPT.

Users are testing the system in fields such as scientific and journalistic research, essay and legal brief writing, software development, math problem solving and language translation, to name just a few. Some of the more creative uses of ChatGPT have included writing limericks, fixing software bugs and songwriting.

ChatGPT is designed to generate natural language responses to questions, provide recommendations and write copy. It has numerous applications and the potential to transform the way people interact with technology and each other.

The breakthrough system is built on an advanced architecture known as the generative pre-trained transformer (GPT). GPTs are a family of large language models (LLMs) developed by OpenAI and trained on vast collections of text.

The “pre-training” in GPTs refers to an initial learning phase over a large text corpus, during which the language model learns to predict the next word in a passage. This general foundation allows the model to perform well across many tasks without depending on task-specific data.
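While ChatGPT's own model is proprietary, the next-word principle can be illustrated with GPT-2, an earlier OpenAI model that is openly available through the Hugging Face transformers library. The following is a minimal sketch of next-word prediction, not OpenAI's actual code:

```python
# Minimal sketch of next-word prediction using the open GPT-2 model
# (an earlier member of the GPT family, standing in for ChatGPT here).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary entry, per position

# Probabilities for the token that follows the prompt
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)).strip():>10s}  {prob:.3f}")
```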

Much as Google autocompletes web search entries, ChatGPT works by predicting what text should come next. Its answers are generated autoregressively: the model produces a response one token (a word or word fragment) at a time, with each choice conditioned on everything written so far, streaming the output on the fly.
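A sketch of that generation loop, again using the open GPT-2 model as a stand-in, shows why output can be streamed word by word:

```python
# Sketch of autoregressive generation: the model repeatedly predicts the
# next token and appends it to the sequence, so output streams as it is made.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("Artificial intelligence is", return_tensors="pt").input_ids
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits
    next_id = logits[0, -1].argmax()                   # greedy: most likely token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tokenizer.decode(int(next_id)), end="", flush=True)  # stream each token
```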

The limitations of ChatGPT, as described by OpenAI, are its tendency to sometimes write “plausible-sounding but incorrect or nonsensical answers” and its inclination to be “excessively verbose” and overuse certain phrases. When asked an ambiguous question, the system will also often guess at an answer as opposed to “asking a clarifying question.”

Whatever its drawbacks, ChatGPT represents a significant step forward in AI technology. In December, Ethan Mollick, writing in Harvard Business Review, called ChatGPT a tipping point for artificial intelligence: “While versions of GPT have been around for a while, this model has crossed a threshold: It’s genuinely useful for a wide range of tasks … While previous generations of the system could technically do these things, the quality of the outputs was much lower than that produced by an average human. The new model is much better, often startlingly so.”

The initial release of ChatGPT was based on GPT-3.5. On March 14, OpenAI announced the release of GPT-4, which a research paper by Microsoft scientists, posted on the arXiv repository, said demonstrates performance that is “strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT.”

The authors state that early experiments with GPT-4 show that it exhibits “sparks of artificial general intelligence”: that is, an ability not just to answer specific questions but to simulate aspects of thinking, such as reasoning, perceiving and acting.

There is no doubt that ChatGPT and GPT-4 show how artificial intelligence technologies are increasing productivity. Because they consolidate functions previously carried out by groups of people into a single automated process, tasks can now be completed quickly and accurately by a computer.

While the mass adoption of personal computers beginning in the 1980s had a dramatic impact on productivity, the adaptive and learning capabilities of artificial intelligence tools like GPTs mean that productivity stands to rise far more steeply over a much shorter period of time.

For example, ChatGPT is already a powerful tool for software developers. Using its natural language processing capability, it can model what a developer is trying to accomplish and provide corresponding code snippets. It can also automate repetitive and time-consuming tasks without the mistakes and inconsistencies typical of manual coding.
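As one illustration, a developer can request a snippet programmatically through OpenAI's chat API; this sketch uses the openai Python package as it existed at the time of writing (the 0.x series), with a placeholder key:

```python
# Sketch: asking the model for a code snippet via OpenAI's chat API.
# Requires the `openai` package (0.x era) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a Python function that removes duplicates "
                   "from a list while preserving order.",
    }],
)
print(response.choices[0].message.content)  # the generated code snippet
```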

ChatGPT can quickly summarize and simplify complex computer code, and it can generate comments and documentation that are often more consistent and informative than the notes developers write by hand.

Artificial intelligence was pioneered in the mid-20th century, with important contributions made by scientists such as Alan Turing, Marvin Minsky and John McCarthy. Turing, a British mathematician and computer scientist, is widely considered a founding father of AI. In 1950, he proposed the Turing Test, a measure of a computer’s ability to exhibit intelligent behavior equivalent to that of a human.

Turing’s idea was groundbreaking and set the stage for decades of research on machine learning and natural language processing. He proposed the test in his 1950 paper “Computing Machinery and Intelligence,” in which he discussed the potential for machines to mimic human intelligence through the use of algorithms and programming.

Marvin Minsky, an American cognitive and computer scientist, was a pioneer of AI who, along with John McCarthy, founded the Artificial Intelligence Laboratory at MIT in 1959. Minsky was interested in machine perception, the ability of machines to understand and interpret visual and sensory information. McCarthy, who is often credited with coining the term “artificial intelligence” in 1956, created the Lisp programming language, which became a favored language for AI research.

ChatGPT can be described as the latest generation in a lineage of text-based chatbots that began in the 1960s. ELIZA, developed by Joseph Weizenbaum in 1966, used pattern matching and substitution to simulate human conversation; its best-known script played the part of a psychotherapist, matching canned responses to the patient’s statements.
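The flavor of ELIZA's approach can be conveyed in a few lines. This toy sketch uses modern Python and invented rules, not Weizenbaum's original script:

```python
# Toy ELIZA-style responder: pattern matching plus substitution of the
# matched fragment into a canned reply, echoing Weizenbaum's 1966 method.
import re

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default when nothing matches

print(eliza_reply("I feel anxious about my work"))
# -> Why do you feel anxious about my work?
```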

In 1988, Rollo Carpenter created the chatbot Jabberwacky to simulate entertaining human conversation, extending pattern matching with an additional level of variability to account for the context of the questions being asked.

One of the breakthroughs of the 1980s was the development of rule-based systems for natural language processing. These systems relied on sets of hand-crafted rules to analyze and generate natural language responses, but they were limited in their ability to handle complex and ambiguous language.

In 1995, the Artificial Linguistic Internet Computer Entity (ALICE) went online, adding heuristics (the kinds of shortcuts humans often use to solve problems) to the previously developed pattern matching methods. In the 1990s, statistical approaches gained popularity in natural language processing, allowing systems to learn from large datasets of text. This led to the development of probabilistic models able to handle a wider range of language inputs and generate more accurate outputs.
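The statistical idea is simple at its core: count how often words follow one another and turn the counts into probabilities. A toy bigram model (a deliberately simplified sketch, not a production system) makes this concrete:

```python
# Toy bigram model: estimates the probability of the next word from
# co-occurrence counts in a (tiny) text corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

bigram_counts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[prev][word] += 1

def next_word_probs(prev: str) -> dict:
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
# -> {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```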

In the 2000s, with the development of neural network architectures, deep learning emerged as a powerful technique for natural language processing. These models were able to learn and represent complex patterns in language data, leading to significant improvements in language processing accuracy and efficiency.
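A neural language model replaces those hand-counted statistics with learned, dense representations. The sketch below, with hypothetical sizes and untrained weights, shows the basic shape of such a model in PyTorch: word embeddings feed a hidden layer that scores every vocabulary word as a possible next word:

```python
# Minimal neural language model skeleton: embeddings -> hidden layer ->
# one score per vocabulary word. Sizes are illustrative; weights untrained.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim, context = 1000, 32, 64, 3

model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),        # word IDs -> dense vectors
    nn.Flatten(),                               # concatenate the context window
    nn.Linear(context * embed_dim, hidden_dim),
    nn.ReLU(),
    nn.Linear(hidden_dim, vocab_size),          # score every candidate next word
)

context_ids = torch.randint(0, vocab_size, (1, context))  # three word IDs
next_word_scores = model(context_ids)
print(next_word_scores.shape)  # torch.Size([1, 1000])
```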

Siri, an intelligent personal assistant that uses spoken natural language to perform tasks such as reading text messages, playing music, scheduling events and searching the web for answers to questions, debuted as an iPhone app in 2010; Apple acquired it that year and built it into the iPhone 4S in 2011. This simulation of audible human conversation was also offered by Google, with Google Now in 2012 (later Google Assistant), and Amazon, with Alexa in 2014.

In addition to ChatGPT's software, the hardware it runs on is a critical factor in the speed and accuracy of its responses, as well as in the number of queries it can handle simultaneously. That hardware consists of a large number of interconnected processors, or nodes, working together to handle the computational workload.

The platform also includes specialized processors optimized for machine learning and deep learning workloads as well as high-speed networking and storage technologies that enable fast data transfer and retrieval.

Finally, the advances made in AI, as manifested in ChatGPT, are the product of a collaborative effort among researchers, engineers, and innovators from around the world. The development of AI is truly a global effort, with contributions from individuals and organizations in many different countries.

AI is a field that requires a multidisciplinary approach, bringing together experts from computer science, mathematics, neuroscience, psychology, linguistics, and other related fields. Advances in hardware, software, and data infrastructure have also been made possible by global collaboration and cooperation.

Many countries have made significant investments in AI research and development, and international organizations and conferences such as those of the Association for Computing Machinery (ACM) provide a platform for researchers and practitioners around the world to share their work and collaborate on new ideas.

However, while ChatGPT brings together the accomplishments of the past 75 years of computer technology on a world scale and possesses socially transformative potential, it remains ensconced within capitalism, with its system of private property for profit and its nation-state political structures.

The immediate concern of Wall Street, which has driven the valuation of OpenAI to $29 billion following a $10 billion investment by Microsoft in January, is to ensure that technology oligarchs such as Elon Musk, Sam Altman, Peter Thiel and Reid Hoffman have a clear path to realizing a return on their financial commitment to the company.

The expectation is that the core technology of ChatGPT will be sold to corporations across all industries as a means of cutting costs and eliminating jobs. In the present economic environment of inflation, rising interest rates and falling share values on Wall Street, this prospect is without question an attractive one for corporate executives, boards of directors and investors.

According to a study by researchers at the University of Pennsylvania, half of the tasks performed by auditors, interpreters and writers can be performed more quickly by AI tools. A report published by McKinsey & Company estimates that 25 percent of work across all occupations could be automated by 2030 and 60 percent of 800 occupations listed by the Bureau of Labor Statistics could have one-third of their work tasks automated in the coming decades.

Meanwhile, as with all other high-tech innovation under capitalism, the power of ChatGPT and artificial intelligence is expected to fetch substantial contracts with the Pentagon and defense departments around the globe.

With AI technologies already in use to automate battlefield operations in the imperialist wars of the twenty-first century, including unmanned drone air assaults and targeted assassinations, the power of GPT decision-making is being actively pursued by the US military.

According to an article in Defense One, Lauren Barrett Knausenberger, the chief information officer of the Air Force, said, “I think that there is a lot of benefit to the DOD of being able to find information, of being able to find who’s in charge, of being able to rapidly pull together information in general because we do waste a lot of time like with taskers, for instance.”

Another report, published by Vice, said the Pentagon used ChatGPT to write a February 8 news release about the launch of a new counter-drone task force. In other words, the Pentagon is leveraging the potential of AI to automate decision-making and to deliver pro-militarist propaganda.

The only way that the progressive content and global power of artificial intelligence technologies such as ChatGPT can be achieved, and “the potential to transform the way we interact with technology and each other” (as the system describes itself) realized, is through the revolutionary socialist reorganization of society by the working class.
