OpenAI has introduced GPT-4.5, its latest and most advanced AI model, now available to ChatGPT Pro users and developers worldwide. Positioned as a research preview, GPT-4.5 builds on OpenAI’s ongoing efforts to enhance AI reliability, accuracy, and creative intelligence.
The model represents a major step in scaling, combining improvements in both pre-training and post-training. Expanded unsupervised learning strengthens its ability to recognize patterns, draw connections, and interpret user intent with greater nuance.
A key improvement is a reduced tendency to hallucinate, that is, to generate incorrect or misleading information. On OpenAI's SimpleQA benchmark, GPT-4.5 recorded a hallucination rate of 37.1%, a significant improvement over GPT-4o's 61.8%, alongside an accuracy of 62.5% that surpasses previous OpenAI models.
Users will notice a more natural conversational experience, as GPT-4.5 demonstrates an improved ability to understand implicit expectations, interpret subtle cues, and exhibit higher emotional intelligence. These enhancements make it particularly valuable for writing, programming, problem-solving, and professional queries.
Trained on Microsoft Azure AI supercomputers, GPT-4.5 has undergone extensive safety evaluations. OpenAI employed new supervision techniques combined with traditional supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). Additionally, the model was tested under OpenAI’s Preparedness Framework to assess risks before deployment.
GPT-4.5 is now accessible to ChatGPT Pro users on the web, mobile, and desktop. OpenAI plans to extend availability to Plus and Team users next week, followed by Enterprise and Edu users the week after. Developers can also integrate GPT-4.5 through OpenAI’s API offerings, including the Chat Completions API, Assistants API, and Batch API. However, due to its high computational requirements, OpenAI is still evaluating whether GPT-4.5 will remain a long-term API option.
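For developers, calling GPT-4.5 through the Chat Completions API follows the same pattern as other OpenAI chat models. The sketch below uses the official OpenAI Python SDK; the model identifier "gpt-4.5-preview" is an assumption for illustration and should be checked against OpenAI's current model listing.

```python
# Minimal sketch of a Chat Completions request to GPT-4.5 via the official
# OpenAI Python SDK (pip install openai). Assumes OPENAI_API_KEY is set in
# the environment. The model name "gpt-4.5-preview" is an assumed identifier
# for the research preview; verify it against OpenAI's model list.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumed identifier, not confirmed by this article
    messages=[
        {"role": "system", "content": "You are a concise writing assistant."},
        {"role": "user", "content": "Summarize this draft in three bullet points."},
    ],
)

# Print the assistant's reply text
print(response.choices[0].message.content)
```

The same model identifier would, under that assumption, be usable with the Assistants API and Batch API endpoints the article mentions, since those accept the same model names as Chat Completions.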
Despite its advancements, GPT-4.5 does not yet support multimodal features like Voice Mode, video, and screen sharing within ChatGPT. OpenAI remains in an exploratory phase with the model, actively seeking feedback from users and developers to assess new capabilities and potential applications.