On Tuesday, OpenAI released GPT-4, the latest version of the AI language model behind ChatGPT, which has been making significant waves in the tech industry. GPT-4 can accept images as input, meaning it can look at a photo and provide the user with general information about it.
A larger body of training data allows the language model to provide more accurate information and to write code in all major programming languages. On the Uniform Bar Exam, GPT-4 scored in the 90th percentile, while its predecessor scored in the 10th. It can now read, analyze, and generate up to 25,000 words of text, and it appears far more capable than the previous model.
ChatGPT, which was only released a few months ago, is already considered the fastest-growing consumer application in history. The app hit 100 million monthly active users in just a few months. TikTok took nine months to reach that many users and Instagram took nearly three years, according to a UBS study.
“While less capable than humans in many real-world scenarios, [GPT-4] exhibits human-level performance on various professional and academic benchmarks,” OpenAI wrote in its press release, adding that the language model scored a 700/800 on the math SAT.
Despite these impressive results, OpenAI CEO Sam Altman has acknowledged that the program still has flaws and limitations, and that it can seem more impressive on first use than it does with further usage. Several artificial intelligence models, including ChatGPT, have been drawing attention for their potential impact on fields such as education. While some students are using them for assistance with writing assignments, educators remain divided on whether these systems are disruptive or could be useful educational tools.
These systems have also been prone to generating inaccurate information — Google’s AI, “Bard,” notably made a factual error in its first public demo. This is a flaw OpenAI hopes to improve upon: GPT-4 is 40% more likely to produce accurate information than its previous version, according to the company.
Misinformation and potentially biased information are subjects of concern. AI language models are trained on large datasets, which can sometimes contain bias in terms of race, gender, religion, and more. This can result in the AI language model producing biased or discriminatory responses.
Many have pointed out malicious ways people could use models like ChatGPT, such as running phishing scams or deliberately spreading misinformation to disrupt important events like elections.
OpenAI says it “spent months making [ChatGPT] safer,” adding the company is working with “over 50 experts for early feedback in domains including AI safety and security.”
According to OpenAI, GPT-4 is 82% less likely to provide users with “disallowed content,” which refers to illegal or morally objectionable content.