Fine-tuning is available for the 4K-context version of GPT-3.5 Turbo. OpenAI says that, for a specific narrow application, a fine-tuned model can often match or even outperform GPT-4.
The price to fine-tune a model is very reasonable (though it depends on the size of your training set).
The cost to use the resulting model is much higher than using the base (non-fine-tuned) GPT-3.5, though it is still cheaper than using GPT-4.
Naturally, you can only use a fine-tuned model via the API, not via the standard web interface (see the sketch below).
OpenAI says the ability to fine-tune the 16K-context model, and eventually GPT-4, is coming.
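For reference, here is a minimal sketch of the fine-tuning workflow, assuming the `openai` Python SDK (1.x-style client); the file name, organization, and model ID are placeholders, not values from this post.

```python
# Minimal sketch of fine-tuning GPT-3.5 Turbo via the API (placeholders used throughout).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL file of chat-format training examples.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

# 2. Start a fine-tuning job; training cost scales with the number of tokens in the set.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)

# 3. Once the job succeeds, call the resulting "ft:" model through the same
#    chat completions endpoint -- API only, there is no web UI for it.
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-0613:my-org::abc123",  # placeholder fine-tuned model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```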