On November 6, OpenAI will host a "DevDay", and some of it will be livestreamed.
This might be a landmark event (by some preliminary indications).
In particular, my prediction is that the ability to fine-tune GPT-4 will be opened to the public, that this functionality will enable the creation of specialized systems much more powerful than GPT-4 itself, and that some of the magic will be demonstrated during the livestream.
We'll see if this prediction turns out to be true. I record some relevant links and information in the comments.
Livestream link: www.youtube.com/watch?v=U9mJuUkhUzk

Update: One can watch the recording (45 min). If you'd like the transcript, you need to switch manually from "auto-generated" to a closed-caption track like CC1 or DTVCC1.
Tons of upgrades (GPT-4 Turbo with 128K context and many other things, including making API engineering easier) and major API price cuts (if they manage that without quality degradation, that would be a major step forward).
With fine-tuning: they are opening fine-tuning for GPT-3.5 Turbo with 16K context, and inviting active fine-tuning users to apply to the experimental GPT-4 fine-tuning program. So they are going very cautiously, and I grade my prediction as only 50% correct: they are in the process of opening it to the public, but they are wary of its potential and will go slowly. They have also chosen not to showcase fine-tuning at all; they showcased all kinds of things, but they don't want to encourage fine-tuning too much at this moment, because it is so uncontrollable.
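For reference, fine-tuning for GPT-3.5 Turbo expects training data as a JSONL file of chat-format examples, one JSON object per line with a "messages" list. A minimal sketch of preparing such a file (the example conversation content is hypothetical, purely for illustration):

```python
import json

# Each fine-tuning example is one JSON object per line ("JSONL"),
# holding a chat-format "messages" list. The content below is a
# hypothetical illustration, not from the DevDay announcement.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a terse assistant."},
            {"role": "user", "content": "What is long context good for?"},
            {"role": "assistant", "content": "Long documents and transcripts."},
        ]
    },
]

# Serialize to the JSONL text that would be uploaded before
# creating a fine-tuning job.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
print(jsonl)
```

Such a file would then be uploaded and referenced when creating a fine-tuning job against the gpt-3.5-turbo model.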
openai.com/blog/new-models-and-developer-products-announced-at-devday

openai.com/blog/introducing-gpts