An OpenAI event on Nov 6
Nov. 1st, 2023 03:36 pm
On November 6, OpenAI will host a "DevDay", and some of it will be livestreamed.
This might be a landmark event (by some preliminary indications).
In particular, my prediction is that the ability to fine-tune GPT-4 will be opened to the public, that this functionality will enable the creation of specialized systems much more powerful than GPT-4 itself, and that some of the magic will be demonstrated during the livestream.
We'll see whether this prediction comes true. I record some relevant links and information in the comments.
Livestream link: www.youtube.com/watch?v=U9mJuUkhUzk
Update: The recording (45 min) is available. If you'd like the transcript, you need to switch the captions manually from "auto-generated" to the broadcast caption track (CC1 or DTVCC1).
Many upgrades (GPT-4 Turbo with 128K context, plus a host of other things, including changes that make API engineering easier) and major API price cuts (if they manage that without quality degradation, that would be a major step forward).
On fine-tuning: they are opening fine-tuning for GPT-3.5 Turbo with 16K context and inviting active fine-tuning users to apply to the experimental GPT-4 fine-tuning program. So they are proceeding very cautiously, and I grade my prediction as only 50% correct: they are in the process of opening GPT-4 fine-tuning to the public, but they are afraid of its potential and will go slowly. They also chose not to showcase fine-tuning at all; they showcased all kinds of things, but they don't want to encourage fine-tuning too much at this moment, because it is so uncontrollable.
openai.com/blog/new-models-and-developer-products-announced-at-devday
openai.com/blog/introducing-gpts
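For context on what this fine-tuning entails: OpenAI's GPT-3.5 Turbo fine-tuning endpoint (opened in August 2023) accepts training data as chat-format JSONL, one conversation per line. A minimal sketch of one such record; the message contents are made-up examples:

```python
import json

# One training record in the chat-format JSONL that OpenAI's GPT-3.5
# Turbo fine-tuning accepts. The conversation content is illustrative.
record = {
    "messages": [
        {"role": "system", "content": "You answer in terse bullet points."},
        {"role": "user", "content": "Why did the deploy fail?"},
        {"role": "assistant", "content": "- missing env var\n- stale cache"},
    ]
}

# The training file is a sequence of such lines, one JSON object each.
jsonl_line = json.dumps(record)
```

Training cost then scales with the total token count of all such records (times the number of epochs), which is why corpus size matters in the pricing discussion below in the comments.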
no subject
Date: 2023-11-01 07:44 pm (UTC)
https://twitter.com/sama/status/1699492275209003425
"on november 6, we’ll have some great stuff to show developers! (no gpt-5 or 4.5 or anything like that, calm down, but still i think people will be very happy…)"
no subject
Date: 2023-11-01 07:47 pm (UTC)
https://devday.openai.com/
There will be a livestream of a part of this event (and an in-person part, which is full, so it's too late to join that). The keynote is at 10am Pacific (1pm East Coast; note that both the US and most of Europe will be off Summer time by then: Europe switched last weekend, and the US switches this weekend).
no subject
Date: 2023-11-01 07:48 pm (UTC)
https://twitter.com/sama/status/1719020057311986149
"openai devday is in a week--11/6 at 10 am
we have some great new stuff for you!
will be livestreamed on http://openai.com"
no subject
Date: 2023-11-01 07:54 pm (UTC)
https://dmm.dreamwidth.org/75336.html
What we know is that people were reporting the ability to reach GPT-4 level in specialized subfields with GPT-3.5 fine-tuning, so we can expect levels far above GPT-4 from fine-tuning GPT-4 itself.
There is just one month left to fulfill the promise to open this fine-tuning this Fall.
The current price structure can serve as a guide to what's coming (fine-tuned models are available only via the API at this point; I have no idea whether they plan to make them available through the ChatGPT interface).
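Since fine-tuned models are reachable only through the API, a call addresses one by its own model id (the "ft:"-prefixed form OpenAI's GPT-3.5 fine-tuning produces). A minimal sketch of the request body for the Chat Completions endpoint; the model id, organization name, and prompts are hypothetical:

```python
import json

def chat_payload(model, system_prompt, user_prompt):
    """Build the JSON body for a POST to /v1/chat/completions."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

# "ft:<base model>:<org>::<suffix>" is the id shape fine-tuning returns;
# this particular id and the prompts are made up for illustration.
payload = chat_payload(
    "ft:gpt-3.5-turbo-0613:my-org::abc123",
    "You are a domain-specialized assistant.",
    "Summarize the latest run.",
)
body = json.dumps(payload)
```

The only difference from calling the base model is the model id; the rest of the request is the same, which is why the per-token usage prices below are directly comparable.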
no subject
Date: 2023-11-01 08:05 pm (UTC)
https://openai.com/pricing
GPT-4:
  8K context: Input $0.03 / 1K tokens; Output $0.06 / 1K tokens
  32K context: Input $0.06 / 1K tokens; Output $0.12 / 1K tokens
GPT-3.5 Turbo:
  4K context: Input $0.0015 / 1K tokens; Output $0.002 / 1K tokens
  16K context: Input $0.003 / 1K tokens; Output $0.004 / 1K tokens
GPT-3.5 Turbo fine-tuned, 4K context:
  Training $0.0080 / 1K tokens; Input usage $0.0120 / 1K tokens; Output usage $0.0160 / 1K tokens
So, training is not all that expensive (though the total depends on the size of your fine-tuning corpus), but the usage cost at 4K context is 8 times higher than for the standard GPT-3.5 Turbo 4K version, for both input ($0.012 vs. $0.0015) and output ($0.016 vs. $0.002) per 1K tokens.
So we might expect a similar 8x pricing factor for the use of a fine-tuned GPT-4, with training being reasonably affordable but usage being relatively expensive...
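The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. The 8x extrapolation to GPT-4 and the 2M-token corpus size are this comment's assumptions, not published figures:

```python
# Prices quoted above, in USD per 1K tokens.
GPT35_4K = {"input": 0.0015, "output": 0.002}                     # base GPT-3.5 Turbo, 4K
GPT35_FT = {"train": 0.0080, "input": 0.0120, "output": 0.0160}   # fine-tuned, 4K
GPT4_8K  = {"input": 0.03, "output": 0.06}                        # base GPT-4, 8K

def call_cost(prices, input_tokens, output_tokens):
    """USD cost of one API call at the given per-1K-token prices."""
    return (prices["input"] * input_tokens
            + prices["output"] * output_tokens) / 1000

# Usage premium of fine-tuned GPT-3.5 over the base 4K model:
input_factor = GPT35_FT["input"] / GPT35_4K["input"]      # 8x on input
output_factor = GPT35_FT["output"] / GPT35_4K["output"]   # 8x on output

# One-off training cost for a hypothetical 2M-token corpus, 3 epochs:
train_cost = GPT35_FT["train"] * 2_000_000 * 3 / 1000

# Speculative fine-tuned GPT-4 usage prices IF the same 8x factor held
# (pure extrapolation; OpenAI has published no such figure):
GPT4_FT_GUESS = {k: 8 * v for k, v in GPT4_8K.items()}
```

At those speculative rates, a call with 1K input and 1K output tokens to a fine-tuned GPT-4 would cost about $0.72, versus $0.09 on base GPT-4 with 8K context, so usage cost would dominate training cost fairly quickly.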
no subject
Date: 2023-11-05 03:34 pm (UTC)
https://twitter.com/OpenAI/status/1720833435541946468
https://www.youtube.com/watch?v=U9mJuUkhUzk
A note of caution from OpenAI
Date: 2023-11-08 08:19 pm (UTC)
Model customization
GPT-4 fine tuning experimental access
We're creating an experimental access program for GPT-4 fine-tuning. Preliminary results indicate that GPT-4 fine-tuning requires more work to achieve meaningful improvements over the base model compared to the substantial gains realized with GPT-3.5 fine-tuning. As quality and safety for GPT-4 fine-tuning improves, developers actively using GPT-3.5 fine-tuning will be presented with an option to apply to the GPT-4 program within their fine-tuning console.