
Here’s everything OpenAI announced in the past 12 days

A laptop screen shows the home page for ChatGPT, OpenAI's artificial intelligence chatbot.
Rolf van Root / Unsplash

OpenAI kicked off its inaugural “12 Days of OpenAI” media blitz on December 5, each day unveiling new features, models, subscription tiers, and capabilities for its growing ChatGPT product ecosystem during a series of live-stream events.


Here’s a quick rundown of everything the company announced.

Day 1: OpenAI unleashes its o1 reasoning model and introduces ChatGPT Pro

OpenAI o1 and o1 pro mode in ChatGPT — 12 Days of OpenAI: Day 1

OpenAI kicked off the festivities with a couple of major announcements. First, the company revealed the full version of its new o1 reasoning model and announced that it would be immediately available, albeit with usage limits, to its $20/month Plus tier subscribers. To get full use of the new model (as well as every other model that OpenAI offers, plus unlimited access to Advanced Voice Mode), users will need to spring for OpenAI’s newest, and highest, subscription package: the $200/month Pro tier.

Day 2: OpenAI expands its Reinforcement Fine-Tuning Research program

Reinforcement Fine-Tuning—12 Days of OpenAI: Day 2

On the event’s second day, the OpenAI development team announced that it is expanding its Reinforcement Fine-Tuning Research program, which allows developers to train the company’s models as subject matter experts that “excel at specific sets of complex, domain-specific tasks,” according to the program’s website. Though it is geared more toward institutes, universities, and enterprises than individual users, the company plans to make the program’s API available to the public early next year.
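
To make the idea of “training a model as a subject matter expert” a bit more concrete, here is a rough conceptual sketch in Python of the kind of graded, domain-specific examples reinforcement fine-tuning relies on. OpenAI hasn’t published the public API or data format yet, so the field names and the toy grader below are illustrative assumptions, not the company’s actual schema.

```python
# Conceptual sketch only: reinforcement fine-tuning rewards a model for reaching
# verifiably correct answers on narrow, domain-specific tasks. The data shape and
# grader below are illustrative assumptions, not OpenAI's published schema.

# Hypothetical domain examples paired with known-correct reference answers.
training_examples = [
    {"prompt": "Which gene is most commonly mutated in cystic fibrosis?",
     "reference_answer": "CFTR"},
    {"prompt": "What enzyme deficiency causes phenylketonuria?",
     "reference_answer": "phenylalanine hydroxylase"},
]


def grade(model_answer: str, reference_answer: str) -> float:
    """Toy grader: score 1.0 when the model's answer contains the reference,
    0.0 otherwise. A production grader would be far more nuanced."""
    return 1.0 if reference_answer.lower() in model_answer.lower() else 0.0


# During reinforcement fine-tuning, scores like this become the reward signal
# that nudges the model toward expert-level answers in the target domain.
print(grade("The CFTR gene on chromosome 7.", "CFTR"))  # 1.0
```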

Day 3: OpenAI’s Sora video generator has finally arrived. Huzzah?

Sora–12 Days of OpenAI: Day 3

On the third day of OpenAI, Sam Altman gave to me: Sora video generation. Yeah, OK, so the cadence for that doesn’t quite work but hey, neither does Sora. OpenAI’s long-awaited and heavily hyped video generation model, first previewed back in February, made its official debut on December 9 to middling reviews. Turns out that two years into the AI boom, being the leading company in the space and only rolling out 20-second clips at 1080p doesn’t really move the needle, especially when many of its competitors already offer similar performance without requiring a $20- or $200-per-month subscription.

Day 4: OpenAI expands its Canvas

Canvas—12 Days of OpenAI: Day 4

OpenAI followed up its Sora revelations with a set of improvements to its recently released Canvas feature, the company’s answer to Anthropic’s Artifacts. During its Day 4 live stream, the OpenAI development team revealed that Canvas is now integrated directly into the GPT-4o model, making it natively available to users at all price tiers, including free. You can now run Python code directly within the Canvas space, enabling the chatbot to analyze it and offer suggestions for improvement, and the feature can also be used when building custom GPTs.

Day 5: ChatGPT teams up with Apple Intelligence

ChatGPT x Apple Intelligence—12 Days of OpenAI: Day 5

On day five, OpenAI announced that it is working with Apple to integrate ChatGPT into Apple Intelligence, specifically Siri, allowing users to invoke the chatbot directly through iOS. Apple had announced that this would be a thing back when it first unveiled Apple Intelligence but, with the release of iOS 18.2, that functionality is now a reality. If only Apple’s users actually wanted to use Apple’s AI.

Day 6: Advanced Voice Mode now has the power of sight and can speak Santa

Santa Mode & Video in Advanced Voice—12 Days of OpenAI: Day 6

2024 was the year that Advanced Voice Mode got its eyes. OpenAI announced on Day 6 of its live-stream event that its conversational chatbot model can now view the world around it through a mobile device’s video camera or via screen sharing. This will enable users to ask the AI questions about their surroundings without having to describe the scene or upload a photo of what they’re looking at. The company also released a seasonal voice for AVM that mimics Jolly Old St. Nick, just in case you don’t have time to drive your kids to the mall to meet the real one in person.

Day 7: OpenAI introduces Projects for ChatGPT

Projects—12 Days of OpenAI: Day 7

OpenAI closed out the first week of announcements with one that is sure to bring a smile to the face of every boy and girl: folders! Specifically, the company revealed its new smart folder system, dubbed “Projects,” which allows users to better organize their chat histories and uploaded documents by subject.

Day 8: ChatGPT Search is now available to everybody

Search—12 Days of OpenAI: Day 8

OpenAI’s ChatGPT Search function, which debuted in October, is now available to all logged-in users, regardless of their subscription tier. The feature works by searching the internet for information about the user’s query, scraping the info it finds from relevant websites, and then synthesizing that data into a conversational answer. It essentially eliminates the need to click through a search results page and is functionally similar to what Perplexity AI offers, allowing ChatGPT to compete with the increasingly popular app. Be warned, however: a recent study has shown the feature to be “confidently wrong” in many of its answers.
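
To picture the search-then-summarize flow described above, here’s a minimal sketch built on the OpenAI Python SDK. It isn’t how ChatGPT Search is actually implemented; the web_search() stub and the choice of gpt-4o are assumptions made purely to illustrate the pattern.

```python
# Minimal sketch of a retrieve-then-synthesize pipeline like the one described
# above. This is NOT how ChatGPT Search is implemented; web_search() is a
# stand-in for whatever search backend you have access to, and the gpt-4o model
# choice is an assumption for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def web_search(query: str) -> list[dict]:
    """Placeholder search backend returning url/snippet pairs."""
    return [
        {"url": "https://example.com/chatgpt-search",
         "snippet": "OpenAI made ChatGPT Search available to all logged-in users."},
    ]


def answer_with_sources(query: str) -> str:
    results = web_search(query)
    context = "\n\n".join(f"[{r['url']}]\n{r['snippet']}" for r in results[:5])
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided sources and cite the URLs."},
            {"role": "user",
             "content": f"Sources:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content


print(answer_with_sources("Who can use ChatGPT Search now?"))
```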

Day 9: the full o1 model comes to OpenAI’s API

Dev Day Holiday Edition—12 Days of OpenAI: Day 9

Like being gifted a sweater from not one but two aunts, OpenAI revealed on day nine that it is allowing select developers to access the full version of its o1 reasoning model through the API. The company is also rolling out updates to its Realtime API, a new model customization technique called Preference Fine-Tuning, and new SDKs for Go and Java.
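
For developers who do get access, calling the full o1 model looks much like any other chat completion request in the OpenAI Python SDK. A minimal sketch, assuming your account has been granted the model and that the launch-day “o1” identifier applies:

```python
# Minimal sketch of calling the full o1 model through the API. Assumes your
# account has been granted access; "o1" is the launch-day model name and the
# exact snapshot available to you may differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",
    messages=[
        {"role": "user",
         "content": "A train leaves at 3:40 pm and arrives at 6:05 pm. "
                    "How long is the trip, in minutes?"},
    ],
)
print(response.choices[0].message.content)
```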

Day 10: 1-800-ChatGPT

1-800-CHAT-GPT—12 Days of OpenAI: Day 10

In an effort to capture that final market segment that it couldn’t already reach — specifically, people without internet access — OpenAI has released the 1-800-ChatGPT (1-800-242-8478) chat line. Dial in from any landline or mobile number within the U.S. to speak with the AI’s Advanced Voice Mode for up to 15 minutes per month for free.

Day 11: ChatGPT now works with even more coding apps

Work with Apps—12 Days of OpenAI: Day 11

Last month, OpenAI granted the Mac-based desktop version of ChatGPT the ability to interface directly with a number of popular coding applications, allowing its AI to pull snippets directly from them rather than requiring users to copy and paste the code into its chatbot’s prompt window. On Thursday, the company announced that it is drastically expanding the number of coding apps and IDEs that ChatGPT can collaborate with. And it’s not just coding apps; ChatGPT now also works with conventional text programs like Apple Notes, Notion, and Quip. You can even launch Advanced Voice Mode in a separate window as you work, asking questions and getting suggestions from the AI about your current project.

Day 12: OpenAI teases its upcoming o3 and o3-mini reasoning models

OpenAI o3 and o3-mini—12 Days of OpenAI: Day 12

For the 12th day of OpenAI’s live-stream event, CEO Sam Altman made a final appearance to discuss what the company has in store for the new year — specifically, its next-generation reasoning models, o3 and o3-mini. The naming scheme is a bit odd (done to avoid a trademark conflict with the U.K. telecom O2), but the upcoming models reportedly offer superior performance on some of the industry’s most challenging math, science, and coding benchmark tests — even compared to o1, the full version of which was formally released less than a fortnight ago. The company is currently offering o3-mini as a preview to researchers for safety testing and red-teaming trials, though there’s no word yet on when everyday users will be able to try the models for themselves.

Curiously, the live streams did not feature anything solid on the next generation of GPT. Don’t worry, though: we’re keeping an eye on everything you need to know about GPT-5.

Andrew Tarantola
Former Computing Writer