The highly anticipated release of OpenAI’s latest language model, GPT-4, has finally arrived, marking a significant step forward for artificial intelligence (AI). Unlike its predecessor, ChatGPT, GPT-4 can describe images in response to written commands, and OpenAI says it surpasses ChatGPT with “advanced reasoning capabilities.”
GPT-4 is expected to reshape work and daily life, but its release has also raised concerns about potential risks, from the outsourcing of jobs to the erosion of trust in online content.
OpenAI has delayed the release of the image-description feature over concerns about abuse, and the version of GPT-4 currently available to subscribers is text-only. OpenAI policy researcher Sandhini Agarwal explained that the model could potentially identify individuals in a picture, opening the door to mass surveillance. OpenAI’s blog post also acknowledged that GPT-4 has limitations, including perpetuating social biases and offering bad advice.
Despite these concerns, GPT-4 has generated significant interest in the field of AI, and its launch has fueled the ongoing AI arms race. Microsoft has invested billions of dollars in OpenAI’s technology, and companies including Google hope such systems will become a secret weapon for their workplace software and search engines.
However, critics argue that the rush to exploit untested, unregulated, and unpredictable technology could deceive people, undermine artists’ work, and lead to real-world harm. The pace of progress demands an urgent response to potential pitfalls, said former OpenAI researcher Irene Solaiman, who is now the policy director at Hugging Face, an open-source AI company.
GPT-4 is expected to address some shortcomings of its predecessor, but it is not without flaws. AI evangelists argue that “GPT-4 is better than anyone expects,” while OpenAI’s CEO, Sam Altman, has tried to temper expectations, saying that speculation about its capabilities has reached impossible heights.