OpenAI announces fine-tuning for GPT-3.5 Turbo, giving developers more customization options



Today, OpenAI announced on its official website the launch of fine-tuning for GPT-3.5 Turbo, along with related API updates. Support for fine-tuning GPT-4, which can comprehend images in addition to text, is expected later this fall. Any user can now train their own fine-tuned GPT model on their data in four steps: prepare the data, upload the file, create a fine-tuning job, and use the fine-tuned model. This lets users train customized models for their own use cases. Compared with using the base model alone, the main gains from fine-tuning are better steerability, more reliable output formatting, and responses tailored to a custom tone.
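The four steps map onto the OpenAI API directly. Below is a minimal sketch using the openai Python package as it existed at launch; the API key, file name, and fine-tuned model ID are placeholders, not values from the announcement.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Step 1: prepare the data — a JSONL file of chat-formatted examples, e.g.
# {"messages": [{"role": "system", "content": "You are a helpful assistant."},
#               {"role": "user", "content": "Hello"},
#               {"role": "assistant", "content": "Hi! How can I help?"}]}

# Step 2: upload the file for fine-tuning
training_file = openai.File.create(
    file=open("training_data.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

# Step 3: create the fine-tuning job against GPT-3.5 Turbo
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# Step 4: once the job completes, call the resulting model by its "ft:" ID
response = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo:my-org::abc123",  # placeholder model ID
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```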



Unlike open-source models, OpenAI places significant emphasis on safety: all user-uploaded training data goes through a moderation review, and data containing prohibited content cannot be used for training. In terms of pricing, fine-tuning incurs both training and usage fees.



For GPT-3.5 Turbo, for instance, training costs $0.008 per 1,000 tokens; once the fine-tuned model is in use, its input and output tokens are billed at their own rates (a rough cost sketch follows below). OpenAI has also partnered with Scale to help businesses build fine-tuned models on their proprietary data.
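As a rough illustration of how the training portion of the bill adds up, here is a small sketch assuming training is billed per token per epoch; the token count and epoch count are illustrative assumptions, not figures from the announcement.

```python
TRAINING_PRICE_PER_1K_TOKENS = 0.008  # USD, the GPT-3.5 Turbo training rate above

def estimate_training_cost(tokens_in_file: int, epochs: int) -> float:
    """Estimate the training fee as (file tokens x epochs) at the per-1K rate."""
    return tokens_in_file * epochs / 1000 * TRAINING_PRICE_PER_1K_TOKENS

# Example: a 100,000-token training file run for 3 epochs (both values assumed)
print(f"${estimate_training_cost(100_000, 3):.2f}")  # -> $2.40
```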