AI Chatbot using OpenAI API and Tkinter GUI - Advanced Project
Build a desktop AI Chatbot with OpenAI integration and a responsive GUI
1) Project Overview
This project builds a desktop AI Chatbot with:
- A simple, responsive Tkinter GUI (chat history, input field, send button)
- A threaded backend that sends user messages to the OpenAI API and returns model responses
- Conversation context handling (keeps a short message history)
- Basic persistence of the conversation to a local file (optional)
- Error handling, request timeouts, and safe UI updates from background threads
Real-world use cases: a personal desktop assistant, rapid prototyping of conversational agents, an educational demo for AI system integration, or a basis for building specialized domain bots (support, tutoring, docs assistant).
Technical goals: integrate a real LLM API, manage conversation state, build a responsive GUI, handle errors and rate limits, and provide a robust, developer-friendly codebase.
2) Key Technologies & Libraries
| Technology | Purpose |
|---|---|
| Python 3.10+ | Core programming language (recommended version) |
| openai | Official OpenAI Python client for API calls |
| tkinter | Built-in GUI framework for desktop applications |
| threading | Built-in library to avoid blocking GUI during network calls |
| queue | Built-in thread-safe message passing (optional) |
| python-dotenv | Optional convenience for local .env management |
Install required packages:
pip install openai python-dotenv
3) Learning Outcomes
After completing this project you will be able to:
- Integrate a modern LLM (OpenAI) into a Python application
- Build a responsive desktop GUI and safely update it from background threads
- Manage contextual conversation history for an LLM
- Implement robust error handling for network/HTTP/API issues
- Persist chat sessions locally and design for future features (voice, streaming, plugins)
4) Step-by-Step Explanation (high-level)
- Create project scaffold: files, virtual environment, install dependencies
- Set your OpenAI API key in environment (OPENAI_API_KEY) or .env
- Design GUI using Tkinter: chat display (read-only Text), input Entry, Send button, menu for Save/Clear/Settings
- Wrap API calls in a function that accepts the message history and returns assistant text; run it on a threading.Thread so the GUI never blocks (see the minimal sketch after this list)
- Keep message history as a list of {role, content} dicts; trim history to a safe size before sending
- Handle API exceptions (network errors, timeouts, rate limits) and display friendly messages to the user
- Persist chats to disk (JSON log) when requested
- Test thoroughly: empty input, long responses, intermittent network
- Optional enhancements: streaming responses, voice I/O, plugin architecture, authentication, custom prompt templates
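Before reading the full script in Section 5, it helps to see the core non-blocking pattern in isolation. The sketch below is illustrative only: ask_model is a local stand-in for the real OpenAI call, and the label/echo details are not part of the final program. It keeps the history as a list of {role, content} dicts, trims it, and hands the slow call to a worker thread that reports back via root.after.

import threading
import tkinter as tk

MAX_HISTORY_MESSAGES = 12

def ask_model(history):
    # Stand-in for the real OpenAI call in Section 5 (kept local so the sketch runs offline)
    return f"(echo) {history[-1]['content']}"

root = tk.Tk()
label = tk.Label(root, text="waiting...")
label.pack(padx=20, pady=20)

history = [{"role": "system", "content": "You are a helpful assistant."},
           {"role": "user", "content": "Hello"}]

def show_reply(text):
    # Runs on the Tkinter main thread via root.after, so touching widgets is safe here
    label.config(text=text)

def worker(snapshot):
    snapshot = snapshot[-MAX_HISTORY_MESSAGES:]   # trim history before sending
    reply = ask_model(snapshot)                   # slow call stays off the main thread
    root.after(0, show_reply, reply)              # hand the result back to the GUI thread

threading.Thread(target=worker, args=(list(history),), daemon=True).start()
root.mainloop()

The full script in Section 5 follows exactly this shape, with the echo replaced by the OpenAI API call.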
5) Full Working and Verified Python Code
Save as ai_chatbot_tkinter.py. This script is self-contained and tested for correctness (read inline comments). It expects OPENAI_API_KEY in the environment; if not found, it prompts you once and stores it in memory for the session.
"""
ai_chatbot_tkinter.py
AI Chatbot with OpenAI API integrated into a Tkinter GUI.
Requirements:
pip install openai python-dotenv
Before running:
export OPENAI_API_KEY="sk-..."
(or use a .env file with OPENAI_API_KEY=... if using python-dotenv)
"""
import os
import sys
import threading
import time
import json
from datetime import datetime
from pathlib import Path
import tkinter as tk
from tkinter import scrolledtext, simpledialog, messagebox, filedialog
# Optionally load a .env file (OPENAI_API_KEY=...) if python-dotenv is installed
try:
    from dotenv import load_dotenv
    load_dotenv()
except ImportError:
    pass
# Try to import the OpenAI client. The package name is `openai`.
try:
from openai import OpenAI
except Exception as e:
raise RuntimeError(
"OpenAI Python client not found. Install with: pip install openai"
) from e
# ------------------------- Configuration -------------------------
# Default model (you can change to another available model in your account)
MODEL = "gpt-4o-mini" # you may change to gpt-4o, gpt-4, gpt-5 if available to you
# Max conversation messages to include in request (keeps context size reasonable)
MAX_HISTORY_MESSAGES = 12
# Timeouts and retries
API_TIMEOUT_SECONDS = 30
RETRY_ON_FAILURE = 2
RETRY_DELAY = 2 # seconds
# Chat log file (optional)
LOG_DIR = Path("chat_logs")
LOG_DIR.mkdir(exist_ok=True)
# ------------------------- OpenAI client setup -------------------------
def get_api_key():
"""
Get the API key from environment or prompt the user once.
(We avoid storing keys to disk automatically.)
"""
key = os.getenv("OPENAI_API_KEY")
if key:
return key.strip()
# Prompt user in console (GUI is also possible)
try:
root = tk.Tk()
root.withdraw()
key = simpledialog.askstring("OpenAI API Key", "Enter your OpenAI API key:", show="*")
root.destroy()
except Exception:
# fallback to console input
key = input("Enter your OpenAI API key: ").strip()
if not key:
raise RuntimeError("OpenAI API key is required to run this chatbot.")
return key
API_KEY = get_api_key()
client = OpenAI(api_key=API_KEY)
# ------------------------- Chat backend -------------------------
def call_openai_api(messages):
"""
Call OpenAI chat completion endpoint using the SDK.
messages: list of dicts like [{"role":"user","content":"..."} ...]
Returns assistant text (str).
"""
# Trim history (drop older messages) to keep size within token limits
if len(messages) > MAX_HISTORY_MESSAGES:
messages = messages[-MAX_HISTORY_MESSAGES:]
for attempt in range(RETRY_ON_FAILURE + 1):
try:
# Using Chat Completions style via the OpenAI Python SDK.
response = client.chat.completions.create(
model=MODEL,
messages=messages,
timeout=API_TIMEOUT_SECONDS
)
            # Extract the assistant text robustly
            if getattr(response, "choices", None):
                choice = response.choices[0]
                # In the current SDK, the content is at choice.message.content
                message = getattr(choice, "message", None)
                assistant_text = (message.content or "") if message else getattr(choice, "text", "")
else:
assistant_text = str(response)
return assistant_text.strip()
except Exception as exc:
last_err = exc
# Simple retry with delay for transient network issues
if attempt < RETRY_ON_FAILURE:
time.sleep(RETRY_DELAY)
continue
raise last_err
# ------------------------- GUI Application -------------------------
class ChatApp:
def __init__(self, root):
self.root = root
self.root.title("AI Chatbot โ Tkinter + OpenAI")
self.root.geometry("800x600")
# Chat history messages (role, content)
self.messages = [
{"role": "system", "content": "You are a helpful assistant."}
]
# Top menu
menubar = tk.Menu(root)
file_menu = tk.Menu(menubar, tearoff=0)
file_menu.add_command(label="Save Chat...", command=self.save_chat)
file_menu.add_command(label="Clear Chat", command=self.clear_chat)
file_menu.add_separator()
file_menu.add_command(label="Exit", command=root.quit)
menubar.add_cascade(label="File", menu=file_menu)
settings_menu = tk.Menu(menubar, tearoff=0)
settings_menu.add_command(label="Set System Prompt...", command=self.set_system_prompt)
menubar.add_cascade(label="Settings", menu=settings_menu)
root.config(menu=menubar)
# Chat display (read-only)
self.chat_display = scrolledtext.ScrolledText(root, wrap=tk.WORD, state=tk.DISABLED, font=("Segoe UI", 10))
self.chat_display.pack(fill=tk.BOTH, expand=True, padx=8, pady=8)
# Entry frame
entry_frame = tk.Frame(root)
entry_frame.pack(fill=tk.X, padx=8, pady=6)
self.input_var = tk.StringVar()
self.input_entry = tk.Entry(entry_frame, textvariable=self.input_var, font=("Segoe UI", 11))
self.input_entry.pack(side=tk.LEFT, fill=tk.X, expand=True, padx=(0, 6))
self.input_entry.bind("" , lambda event: self.on_send())
send_btn = tk.Button(entry_frame, text="Send", width=12, command=self.on_send)
send_btn.pack(side=tk.RIGHT)
# Status bar
self.status_var = tk.StringVar(value="Ready")
status = tk.Label(root, textvariable=self.status_var, anchor=tk.W)
status.pack(fill=tk.X, padx=8, pady=(0, 6))
# Focus input
self.input_entry.focus()
# GUI helpers
def append_chat(self, speaker: str, text: str):
"""Append message to chat display safely from main thread."""
self.chat_display.config(state=tk.NORMAL)
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
self.chat_display.insert(tk.END, f"{speaker} ({timestamp}):\n", "bold")
self.chat_display.insert(tk.END, f"{text}\n\n")
self.chat_display.see(tk.END)
self.chat_display.config(state=tk.DISABLED)
def set_status(self, text: str):
self.status_var.set(text)
# Menu actions
def save_chat(self):
try:
fname = filedialog.asksaveasfilename(defaultextension=".json", filetypes=[("JSON files", "*.json"), ("All files", "*.*")])
if not fname:
return
# Compose a simple log
log = {
"saved_at": datetime.now().isoformat(),
"messages": self.messages
}
with open(fname, "w", encoding="utf-8") as f:
json.dump(log, f, indent=2, ensure_ascii=False)
messagebox.showinfo("Saved", f"Chat saved to {fname}")
except Exception as e:
messagebox.showerror("Error", f"Could not save chat: {e}")
def clear_chat(self):
if messagebox.askyesno("Clear Chat", "Are you sure you want to clear the chat?"):
self.chat_display.config(state=tk.NORMAL)
self.chat_display.delete(1.0, tk.END)
self.chat_display.config(state=tk.DISABLED)
# Reset messages except system prompt
sys_prompt = next((m for m in self.messages if m["role"] == "system"), {"role":"system","content":"You are a helpful assistant."})
self.messages = [sys_prompt]
self.set_status("Chat cleared.")
def set_system_prompt(self):
prompt = simpledialog.askstring("System Prompt", "Set the system prompt for the assistant:", initialvalue=self.messages[0]["content"])
if prompt:
self.messages[0]["content"] = prompt
self.set_status("System prompt updated.")
# Send button handler
def on_send(self):
user_text = self.input_var.get().strip()
if not user_text:
return
# Clear entry
self.input_var.set("")
self.append_chat("You", user_text)
# Append to history
self.messages.append({"role": "user", "content": user_text})
# Start background thread to call API
thread = threading.Thread(target=self.background_request, args=(list(self.messages),), daemon=True)
thread.start()
    def background_request(self, messages_snapshot):
        # Runs on a worker thread; all widget/StringVar updates are scheduled
        # back onto the Tkinter main thread with root.after.
        self.root.after(0, lambda: self.set_status("Thinking..."))
        try:
            assistant_text = call_openai_api(messages_snapshot)
        except Exception as e:
            # Show a friendly error message in the chat area
            err_msg = f"Error: {e}"
            self.root.after(0, lambda: self.append_chat("Assistant", err_msg))
            self.root.after(0, lambda: self.set_status("Error"))
            return
        # Append the assistant response to history, then update the display on the main thread
        self.messages.append({"role": "assistant", "content": assistant_text})
        self.root.after(0, lambda: self.append_chat("Assistant", assistant_text))
        self.root.after(0, lambda: self.set_status("Ready"))
# ------------------------- Run application -------------------------
def main():
root = tk.Tk()
app = ChatApp(root)
root.mainloop()
if __name__ == "__main__":
    main()
Key points about this code:
- Uses threading.Thread for non-blocking API calls
- Keeps a short conversation history before each API call (MAX_HISTORY_MESSAGES)
- Graceful handling of a missing openai package, plus a one-time API key prompt
- Chat persistence: you can save the conversation as JSON using the File > Save Chat... menu
- MODEL variable is configurable; choose a model available on your account
6) Sample Output / Results
When you run the program, you will see a window with:
- A scrollable chat area where messages appear like:
You (2025-10-28 09:12:01):
Hello!
Assistant (2025-10-28 09:12:05):
Hi - how can I help you today?
- Typing a question like "Explain gradient descent in 2 sentences." returns a short, concise explanation from the assistant.
- The status bar shows "Thinking..." while waiting for the API.
Example terminal/GUI flow (representative):
You: What is transfer learning?
Assistant: Transfer learning is a technique where a model pre-trained on one task is adapted to another by reusing learned features...
If the network fails, the assistant prints a friendly error message (e.g., Error: <details>) in the chat area.
7) Possible Enhancements (next steps)
To make this project more advanced:
- Streaming responses: Use streaming endpoints so partial tokens are displayed as they arrive for a more interactive feel (see the streaming sketch after this list).
- Voice Input & Output: Add speech_recognition and pyttsx3/gTTS to support voice conversations.
- Authentication & Multi-user: Add a login layer and per-user conversation storage.
- Context Management: Build a retrieval-augmented generation (RAG) pipeline to provide the model with documents relevant to the conversation.
- Custom prompts & templates: Provide intent-specific system prompts (e.g., tutor, code assistant).
- Web Frontend: Convert to Flask or FastAPI backend and React/HTML frontend for remote access and deployment.
- Safety & Moderation: Add content-filtering middleware before sending/after receiving messages.