Building a Flask Application to Send Requests to ChatGPT
In this blog post, we will explore how to use Flask to send requests to OpenAI’s ChatGPT API and get responses. This tutorial is beginner-friendly and assumes basic knowledge of Python and Flask.
Prerequisites
Before we get started, make sure you have the following:
- pyenv installed to manage Python versions. You can install it following the instructions here.
- Python 3.7+ managed by pyenv. Install it with:
pyenv install 3.9.7  # Replace with your desired Python version
pyenv global 3.9.7   # Set the global version
- Flask library installed. You can install it using pip:
pip install flask
- OpenAI Python client library installed. You can install it using pip:
pip install openai
- python-dotenv installed to manage environment variables. Install it using pip:
pip install python-dotenv
- An API key for OpenAI. You can get one by signing up at OpenAI’s API page.
- (Optional) A virtual environment set up for your project to keep dependencies organized:
python -m venv venv
source venv/bin/activate  # On Windows, use venv\Scripts\activate
These prerequisites are covered in detail in a previous post; I suggest you read it if you need step-by-step instructions.
Step 1: Setting Up the Flask App
Create a new Python file, for example app.py, and initialize a Flask application. Store your OpenAI API key in a .env file for better security.
The .env File
Create a .env file in your project directory and add your API key:
OPENAI_API_KEY=your-api-key-here
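As a quick sanity check, you can confirm that the key is actually picked up from .env before wiring it into Flask. A minimal sketch (it prints only the last few characters rather than the full key):
from dotenv import load_dotenv
import os

load_dotenv()  # Read variables from the .env file in the current directory
key = os.getenv("OPENAI_API_KEY")
print("Key loaded:", bool(key), "| ends with:", key[-4:] if key else "n/a")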
Flask App Code
from flask import Flask, request, jsonify
import openai
import os
from dotenv import load_dotenv
# Load environment variables from .env file
load_dotenv()
app = Flask(__name__)
# Set OpenAI API key from environment variable
openai.api_key = os.getenv("OPENAI_API_KEY")
if not openai.api_key:
    raise ValueError("OPENAI_API_KEY environment variable is not set")
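Note that the fuller listing in Step 2 uses the client-based interface introduced in version 1.0 of the openai package rather than the module-level openai.api_key setting shown above. If you are on that version, the equivalent setup is a minimal sketch like this:
from openai import OpenAI
from dotenv import load_dotenv
import os

# Load environment variables from .env file
load_dotenv()

api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise ValueError("OPENAI_API_KEY environment variable is not set")

# The client object carries the key; no module-level configuration is needed
client = OpenAI(api_key=api_key)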
Step 2: Define a Route to Handle User Input
We’ll set up a route where users can send their messages to ChatGPT and receive a response.
# Import required libraries
from flask_cors import CORS
from flask import Flask, request, jsonify # Flask web framework and utilities
import openai # OpenAI library for API access
from openai import OpenAI # OpenAI client
from dotenv import load_dotenv # For loading environment variables
import os # For accessing environment variables
# Load environment variables from .env file
load_dotenv()
# Get OpenAI API key from environment variables
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# Initialize chat history with system message
messages_history = [
{"role": "system", "content": "You are a helpful assistant."}]
# Initialize Flask application
app = Flask(__name__)
CORS(app) # Enable CORS for all routes
# Create OpenAI client instance with API key
client = OpenAI(api_key=OPENAI_API_KEY)
# Define chat endpoint that accepts POST requests
@app.route("/chat", methods=["POST"])
def chat():
    # Get JSON data from request
    data = request.get_json()
    user_input = data.get("message")

    # Validate if message exists in request
    if not user_input:
        return jsonify({"error": "Message required"}), 400

    # Add user message to chat history
    messages_history.append({"role": "user", "content": user_input})

    try:
        # Make API call to OpenAI
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # Use GPT-3.5 model
            messages=messages_history,
            max_tokens=1024,  # Maximum length of response
            n=1,              # Number of responses to generate
            stop=None,        # No custom stop sequence
            temperature=0.8,  # Controls randomness (0=deterministic, 1=creative)
        )

        # Extract response text
        chat_response = response.choices[0].message.content

        # Add assistant response to chat history
        messages_history.append({"role": "assistant", "content": chat_response})

        # Return response to client
        return jsonify({"message": chat_response})

    # Handle different types of OpenAI API errors
    except openai.AuthenticationError:
        return jsonify({"error": "Authentication failed. Check your API key."}), 401
    except openai.RateLimitError:
        return jsonify({"error": "Rate limit exceeded. Please try again later."}), 429
    except openai.OpenAIError as e:
        return jsonify({"error": f"An error occurred: {str(e)}"}), 500

# Run the Flask application in debug mode if script is run directly
if __name__ == "__main__":
    app.run(debug=True)
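One thing to keep in mind: messages_history is a single module-level list, so every client that calls /chat shares (and keeps extending) the same conversation, and the prompt sent to the API grows with every exchange. That is fine for a demo, but if you want to cap the prompt size, a minimal sketch that trims the history before each API call (the window of 20 messages is an arbitrary choice) could look like this:
MAX_HISTORY = 20  # Arbitrary window size; tune it to your token budget

def trimmed_history(history):
    # Keep the initial system prompt plus the most recent exchanges
    system = history[:1]
    recent = history[1:][-MAX_HISTORY:]
    return system + recent

# In the route, pass the trimmed view to the API instead of the full list:
# response = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=trimmed_history(messages_history),
#     max_tokens=1024,
#     temperature=0.8,
# )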
Step 3: Running the Flask App
Make sure your app.py file ends with the following block (it is already included in the listing above); this is what runs the application:
if __name__ == '__main__':
    app.run(debug=True)
Run the Flask app by executing:
python app.py
Your app will be accessible at http://127.0.0.1:5000.
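Keep in mind that app.run(debug=True) starts Flask's built-in development server, which is not meant for production use. If you later want to serve the app with a production WSGI server, one option is waitress (an assumption here: install it with pip install waitress); a minimal sketch:
from waitress import serve

from app import app  # Import the Flask app defined in app.py

# Serve the application on port 5000
serve(app, host="127.0.0.1", port=5000)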
Step 4: Testing the Application
You can use tools like Postman, curl, or a frontend application to send a POST request to the /chat endpoint. For example, using curl:
curl -X POST http://127.0.0.1:5000/chat \
-H "Content-Type: application/json" \
-d '{"message": "Hello, how are you?"}'
The response will look something like this:
{
"message": "I am just a program, but I'm here to help! How can I assist you today?"
}
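If you prefer testing from Python instead of curl, an equivalent request using the requests library (assuming you install it with pip install requests) might look like this:
import requests

# Send a chat message to the local Flask endpoint
resp = requests.post(
    "http://127.0.0.1:5000/chat",
    json={"message": "Hello, how are you?"},  # Sent as JSON, matching the curl example
)

print(resp.status_code)
print(resp.json())  # Expected shape: {"message": "..."}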
Step 5: Expanding the Application
Here are a few ways to improve the app:
- Error Handling: Add better error messages for scenarios the code does not yet cover, such as malformed JSON or request timeouts.
- Frontend Integration: Create a simple HTML page where users can input messages and see responses dynamically.
- Customization: Modify the system prompt to adjust ChatGPT’s behavior.
- Logging: Log requests and responses for debugging or analytics purposes.
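For the logging idea above, a minimal sketch using Python's standard logging module (the file name and format are just examples) could look like this:
import logging

# Configure a simple file logger; adjust the filename and level to taste
logging.basicConfig(
    filename="chat.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

# Inside the /chat route, after computing chat_response:
# logging.info("user: %s", user_input)
# logging.info("assistant: %s", chat_response)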
Conclusion
In this tutorial, we built a simple Flask application to send user inputs to OpenAI’s ChatGPT and receive responses. This is just a starting point, and you can expand it in many ways to suit your use case. Happy coding!