Initial version of the GUI for our T-SQL AI framework.

2026-02-19 14:46:03 +01:00
commit c28cce4086
8 changed files with 708 additions and 0 deletions

.gitignore (vendored, new file, +6)

@@ -0,0 +1,6 @@
__pycache__/
*.pyc
.venv/
.env
.DS_Store
*.log

LICENSE (new file, +21)

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2026
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (new file, +73)

@@ -0,0 +1,73 @@
# JR SQL AI GUI (Ollama)
A lean, stable GUI for **Arch Linux / Hyprland (Wayland)** that makes it convenient to use an Ollama model (e.g. `jr-sql-expert:latest`).
## Features
- Two panes:
  - **Prompt/context** (left)
  - **Answer** (right), including **Markdown rendering**
- **Streaming** (the answer arrives live)
- Buttons:
  - **Send to AI**
  - **Copy answer** (Markdown)
  - **Copy SQL only** (extracts SQL from ```sql``` blocks or SQL-looking code fences; see the example below)
  - **Update model (pull)** via the Ollama API
  - **Update Ollama runtime** (optional) via Docker (`docker pull` + `docker restart`)
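For example, from an answer containing a fence like the one below (a hypothetical T-SQL snippet), **Copy SQL only** copies just the statement:
```sql
SELECT TOP (10) OrderID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '2026-01-01';
```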
## Requirements
- A running Ollama server (e.g. a Docker container on `127.0.0.1:11434`)
- Python 3.11+ recommended
## Installation (Arch Linux)
### System packages (minimal)
```bash
sudo pacman -S pyside6 python-requests
```
### Optional: venv (isolated)
```bash
python -m venv .venv
source .venv/bin/activate
pip install PySide6 requests
```
## Start
```bash
python sql_ai_gui.py
```
If Qt/Wayland misbehaves (rare), force the platform:
```bash
QT_QPA_PLATFORM=wayland python sql_ai_gui.py
```
## Configuration
Via environment variables (optional):
- `OLLAMA_BASE_URL` (default: `http://127.0.0.1:11434`)
- `OLLAMA_MODEL` (default: `jr-sql-expert:latest`)
- `OLLAMA_DOCKER_CONTAINER` (default: `ollama`)
Example:
```bash
OLLAMA_MODEL="jr-sql-expert:latest" OLLAMA_BASE_URL="http://127.0.0.1:11434" python sql_ai_gui.py
```
## Security
- By default, everything talks to `127.0.0.1` only.
- The runtime update uses `docker`. If your user lacks Docker permissions, it will fail.
## Repo layout
- `sql_ai_gui.py` the app (single file)
- `docs/` short docs (Markdown)
- `requirements.txt` if you prefer to install via pip
## License
MIT (see LICENSE)

docs/installation.md (new file, +27)

@@ -0,0 +1,27 @@
# Installation
## Arch Linux (Pacman)
```bash
sudo pacman -S pyside6 python-requests
```
## venv (optional)
```bash
python -m venv .venv
source .venv/bin/activate
pip install PySide6 requests
```
## Start (Wayland/Hyprland)
```bash
python sql_ai_gui.py
```
If needed:
```bash
QT_QPA_PLATFORM=wayland python sql_ai_gui.py
```
## Minimal Ollama (Docker) setup
```bash
docker run -d --name ollama --restart unless-stopped -p 127.0.0.1:11434:11434 -v /mnt/data/ollama:/root/.ollama ollama/ollama:latest
```
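A quick check that the server is reachable (it should return a JSON list of installed models):
```bash
curl -s http://127.0.0.1:11434/api/tags
```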

docs/troubleshooting.md (new file, +23)

@@ -0,0 +1,23 @@
# Troubleshooting
## "Model-Liste konnte nicht geladen werden"
- Prüfe Ollama:
- `curl -s http://127.0.0.1:11434/api/tags | head`
- Prüfe Port-Bind:
- `ss -ltnp | grep 11434`
## GUI starts, but no window appears under Hyprland
- Start with:
```bash
QT_QPA_PLATFORM=wayland python sql_ai_gui.py
```
- Alternatively, test via XWayland:
```bash
QT_QPA_PLATFORM=xcb python sql_ai_gui.py
```
## Runtime update button failed
- Make sure Docker is available:
  - `docker ps`
- Permissions:
  - Your user should be in the `docker` group (or use a sudo-based wrapper); see the command below.
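A typical way to add the group membership (takes effect after re-login):
```bash
sudo usermod -aG docker "$USER"
# log out and back in afterwards (or run `newgrp docker`)
```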

docs/usage.md (new file, +22)

@@ -0,0 +1,22 @@
# Usage
## Send a prompt
1. Paste the prompt/context on the left
2. Pick a model (dropdown)
3. Optional: enable/disable streaming
4. **Send to AI**
## Copy SQL only
- Extracts SQL from:
  - ```sql``` code fences
  - or "SQL-looking" code fences (keywords such as SELECT/FROM/WHERE ...); see the sketch below
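When several blocks are found, they are joined with an SQL comment separator; a sketch of the copied result:
```sql
SELECT 1;

-- ----------------------------------------

SELECT 2;
```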
## Update model
- The **Update model (pull)** button calls `/api/pull` (see the shell equivalent below).
- Progress is shown in the status bar at the bottom while it runs.
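The equivalent call from the shell, if you want to test a pull without the GUI:
```bash
curl http://127.0.0.1:11434/api/pull -d '{"name": "jr-sql-expert:latest"}'
```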
## Runtime update (Ollama)
- The **Update Ollama runtime** button runs:
  - `docker pull ollama/ollama:latest`
  - `docker restart ollama`
- Prerequisite: Docker installed + permissions (docker group)

requirements.txt (new file, +2)

@@ -0,0 +1,2 @@
PySide6
requests

sql_ai_gui.py (new executable file, +534)

@@ -0,0 +1,534 @@
#!/usr/bin/env python3
"""
JR SQL AI GUI (Ollama): a lightweight, Arch/Hyprland-friendly GUI.
- Left: Prompt/context
- Right: Rendered Markdown answer + raw markdown
- Buttons: Send, Copy, Copy SQL only, Model pull, Ollama runtime update (Docker)
"""
import json
import os
import re
import shutil
import subprocess
import sys
from dataclasses import dataclass
from typing import Optional, List
import requests
from PySide6.QtCore import Qt, QThread, Signal, QTimer
from PySide6.QtGui import QFont
from PySide6.QtWidgets import (
QApplication,
QComboBox,
QHBoxLayout,
QLabel,
QLineEdit,
QMainWindow,
QMessageBox,
QPushButton,
QPlainTextEdit,
QSplitter,
QStatusBar,
QVBoxLayout,
QWidget,
QCheckBox,
QTextBrowser,
)
# -----------------------------
# Config (defaults)
# -----------------------------
DEFAULT_OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "http://127.0.0.1:11434")
DEFAULT_MODEL = os.environ.get("OLLAMA_MODEL", "jr-sql-expert:latest")
DEFAULT_DOCKER_CONTAINER_NAME = os.environ.get("OLLAMA_DOCKER_CONTAINER", "ollama")
# -----------------------------
# Helpers
# -----------------------------
def is_docker_available() -> bool:
return shutil.which("docker") is not None
def run_cmd(cmd: list[str], timeout: int = 600) -> tuple[int, str, str]:
proc = subprocess.run(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
timeout=timeout,
text=True,
)
return proc.returncode, proc.stdout, proc.stderr
def human_error(e: Exception) -> str:
return f"{type(e).__name__}: {e}"
SQL_KW_RE = re.compile(
r"\b(select|from|where|join|group|order|having|insert|update|delete|create|alter|drop|with|merge)\b",
re.IGNORECASE,
)
FENCE_RE = re.compile(r"```(\w+)?\s*\n(.*?)\n```", re.DOTALL)
def extract_sql_blocks(markdown_text: str) -> List[str]:
"""
Extract SQL from markdown fenced code blocks.
Included, in document order:
1) blocks tagged sql (or tsql/t-sql/mssql)
2) any other fenced block that looks like SQL (contains common keywords)
"""
blocks = []
for m in FENCE_RE.finditer(markdown_text):
lang = (m.group(1) or "").strip().lower()
body = (m.group(2) or "").strip()
if not body:
continue
if lang == "sql":
blocks.append(body)
elif lang in ("tsql", "t-sql", "mssql"):
blocks.append(body)
else:
if SQL_KW_RE.search(body):
blocks.append(body)
return blocks
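# Example: an answer containing a fence tagged `sql` with body "SELECT 1;"
# yields ["SELECT 1;"]; an untagged fence is only included if SQL_KW_RE
# matches its body.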
def build_sql_only_text(blocks: List[str]) -> str:
if not blocks:
return ""
return "\\n\\n-- ----------------------------------------\\n\\n".join(blocks) + "\\n"
# -----------------------------
# Workers (threads)
# -----------------------------
@dataclass
class GenerateParams:
base_url: str
model: str
prompt: str
stream: bool = True
class GenerateWorker(QThread):
chunk = Signal(str) # streaming chunk
done = Signal(str) # full response
error = Signal(str)
def __init__(self, params: GenerateParams):
super().__init__()
self.params = params
def run(self) -> None:
try:
url = self.params.base_url.rstrip("/") + "/api/generate"
payload = {
"model": self.params.model,
"prompt": self.params.prompt,
"stream": self.params.stream,
}
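# With stream=True, /api/generate responds with NDJSON: one JSON object
# per line, e.g. {"response": "...", "done": false}; the last has "done": true.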
with requests.post(url, json=payload, stream=self.params.stream, timeout=(5, 600)) as r:
r.raise_for_status()
if not self.params.stream:
data = r.json()
self.done.emit(data.get("response", ""))
return
full = []
for line in r.iter_lines(decode_unicode=True):
if not line:
continue
obj = json.loads(line)
part = obj.get("response", "")
if part:
full.append(part)
self.chunk.emit(part)
if obj.get("done", False):
break
self.done.emit("".join(full))
except Exception as e:
self.error.emit(human_error(e))
class PullModelWorker(QThread):
status = Signal(str)
done = Signal()
error = Signal(str)
def __init__(self, base_url: str, model: str):
super().__init__()
self.base_url = base_url
self.model = model
def run(self) -> None:
try:
url = self.base_url.rstrip("/") + "/api/pull"
payload = {"name": self.model, "stream": True}
with requests.post(url, json=payload, stream=True, timeout=(5, 1800)) as r:
r.raise_for_status()
for line in r.iter_lines(decode_unicode=True):
if not line:
continue
obj = json.loads(line)
st = obj.get("status")
total = obj.get("total")
completed = obj.get("completed")
if st and total and completed:
self.status.emit(f"{st}: {completed}/{total}")
elif st:
self.status.emit(st)
self.done.emit()
except Exception as e:
self.error.emit(human_error(e))
class UpdateOllamaDockerWorker(QThread):
status = Signal(str)
done = Signal()
error = Signal(str)
def __init__(self, container_name: str):
super().__init__()
self.container_name = container_name
def run(self) -> None:
try:
if not is_docker_available():
raise RuntimeError("docker not found in PATH")
self.status.emit("docker pull ollama/ollama:latest …")
code, out, err = run_cmd(["docker", "pull", "ollama/ollama:latest"], timeout=1800)
if code != 0:
raise RuntimeError(err.strip() or out.strip() or f"docker pull failed (code {code})")
self.status.emit(f"Restarting container '{self.container_name}'")
code, out, err = run_cmd(["docker", "restart", self.container_name], timeout=120)
if code != 0:
raise RuntimeError(err.strip() or out.strip() or f"docker restart failed (code {code})")
self.status.emit("Done.")
self.done.emit()
except Exception as e:
self.error.emit(human_error(e))
# -----------------------------
# Main Window
# -----------------------------
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.setWindowTitle("JR SQL AI GUI (Ollama)")
self._gen_worker: Optional[GenerateWorker] = None
self._pull_worker: Optional[PullModelWorker] = None
self._update_worker: Optional[UpdateOllamaDockerWorker] = None
self._raw_markdown: str = ""
self._render_timer = QTimer(self)
self._render_timer.setInterval(250) # throttle UI updates
self._render_timer.timeout.connect(self._render_markdown_throttled)
root = QWidget()
self.setCentralWidget(root)
layout = QVBoxLayout(root)
# Top bar
top = QHBoxLayout()
layout.addLayout(top)
top.addWidget(QLabel("Ollama URL:"))
self.base_url = QLineEdit(DEFAULT_OLLAMA_BASE_URL)
self.base_url.setMinimumWidth(260)
top.addWidget(self.base_url, 2)
top.addWidget(QLabel("Model:"))
self.model = QComboBox()
self.model.setEditable(True)
self.model.addItem(DEFAULT_MODEL)
self.model.setCurrentText(DEFAULT_MODEL)
self.model.setMinimumWidth(220)
top.addWidget(self.model, 1)
self.btn_refresh_models = QPushButton("Load models")
self.btn_refresh_models.clicked.connect(self.refresh_models)
top.addWidget(self.btn_refresh_models)
self.chk_stream = QCheckBox("Streaming")
self.chk_stream.setChecked(True)
top.addWidget(self.chk_stream)
# Split view
splitter = QSplitter(Qt.Horizontal)
layout.addWidget(splitter, 1)
# Left: prompt
left = QWidget()
left_l = QVBoxLayout(left)
left_l.addWidget(QLabel("Prompt / context"))
self.prompt = QPlainTextEdit()
self.prompt.setPlaceholderText("Paste prompt + context here …")
self.prompt.setFont(QFont("Monospace", 10))
left_l.addWidget(self.prompt, 1)
btn_row = QHBoxLayout()
self.btn_send = QPushButton("Send to AI")
self.btn_send.clicked.connect(self.on_send)
btn_row.addWidget(self.btn_send)
self.btn_clear = QPushButton("Clear")
self.btn_clear.clicked.connect(lambda: self.prompt.setPlainText(""))
btn_row.addWidget(self.btn_clear)
left_l.addLayout(btn_row)
# Right: response
right = QWidget()
right_l = QVBoxLayout(right)
right_l.addWidget(QLabel("Answer (rendered Markdown)"))
self.response_view = QTextBrowser()
self.response_view.setOpenExternalLinks(True)
self.response_view.setFont(QFont("Monospace", 10))
right_l.addWidget(self.response_view, 1)
self.response_raw = QPlainTextEdit()
self.response_raw.setReadOnly(True)
self.response_raw.setFont(QFont("Monospace", 10))
self.response_raw.setPlaceholderText("Raw answer (for copy/debug).")
self.response_raw.setMaximumHeight(140)
right_l.addWidget(self.response_raw)
right_btn_row = QHBoxLayout()
self.btn_copy = QPushButton("Copy answer")
self.btn_copy.clicked.connect(self.copy_response)
right_btn_row.addWidget(self.btn_copy)
self.btn_copy_sql = QPushButton("Copy SQL only")
self.btn_copy_sql.clicked.connect(self.copy_sql_only)
right_btn_row.addWidget(self.btn_copy_sql)
self.btn_model_pull = QPushButton("Update model (pull)")
self.btn_model_pull.clicked.connect(self.on_pull_model)
right_btn_row.addWidget(self.btn_model_pull)
self.btn_runtime_update = QPushButton("Update Ollama runtime")
self.btn_runtime_update.clicked.connect(self.on_update_runtime)
self.btn_runtime_update.setEnabled(is_docker_available())
right_btn_row.addWidget(self.btn_runtime_update)
right_l.addLayout(right_btn_row)
splitter.addWidget(left)
splitter.addWidget(right)
splitter.setSizes([520, 760])
self.status = QStatusBar()
self.setStatusBar(self.status)
self.status.showMessage("Bereit.")
QTimer.singleShot(300, self.refresh_models)
# -------------- UI helpers --------------
def ui_busy(self, busy: bool) -> None:
for w in [self.btn_send, self.btn_model_pull, self.btn_refresh_models, self.btn_runtime_update, self.btn_copy_sql]:
w.setEnabled(not busy)
self.prompt.setEnabled(not busy)
self.base_url.setEnabled(not busy)
self.model.setEnabled(not busy)
self.chk_stream.setEnabled(not busy)
def msg_error(self, title: str, text: str) -> None:
QMessageBox.critical(self, title, text)
def msg_info(self, title: str, text: str) -> None:
QMessageBox.information(self, title, text)
# -------------- Model list --------------
def refresh_models(self) -> None:
base = self.base_url.text().strip().rstrip("/")
if not base:
return
try:
r = requests.get(base + "/api/tags", timeout=(3, 15))
r.raise_for_status()
data = r.json()
models = [m.get("name") for m in data.get("models", []) if m.get("name")]
if models:
current = self.model.currentText()
self.model.clear()
self.model.addItems(models)
if current in models:
self.model.setCurrentText(current)
else:
self.model.setCurrentIndex(0)
self.status.showMessage(f"{len(models)} Modelle geladen.", 2500)
else:
self.status.showMessage("Keine Modelle gefunden (api/tags leer).", 5000)
except Exception as e:
self.status.showMessage(f"Model-Liste konnte nicht geladen werden: {human_error(e)}", 8000)
# -------------- Send / Generate --------------
def on_send(self) -> None:
prompt = self.prompt.toPlainText().strip()
if not prompt:
self.msg_info("Hinweis", "Bitte erst einen Prompt/Kontext eingeben.")
return
base = self.base_url.text().strip()
model = self.model.currentText().strip()
if not base or not model:
self.msg_info("Hinweis", "Bitte Ollama URL und Model setzen.")
return
self._raw_markdown = ""
self.response_raw.setPlainText("")
self.response_view.setMarkdown("")
self.status.showMessage("Sende Anfrage …")
self.ui_busy(True)
params = GenerateParams(
base_url=base,
model=model,
prompt=prompt,
stream=self.chk_stream.isChecked(),
)
self._gen_worker = GenerateWorker(params)
self._gen_worker.chunk.connect(self._on_chunk)
self._gen_worker.done.connect(self._on_done)
self._gen_worker.error.connect(self._on_gen_error)
self._gen_worker.start()
if self.chk_stream.isChecked():
self._render_timer.start()
def _on_chunk(self, s: str) -> None:
self._raw_markdown += s
self.response_raw.setPlainText(self._raw_markdown)
self.response_raw.verticalScrollBar().setValue(self.response_raw.verticalScrollBar().maximum())
def _render_markdown_throttled(self) -> None:
if self._raw_markdown:
self.response_view.setMarkdown(self._raw_markdown)
def _on_done(self, full: str) -> None:
self._render_timer.stop()
if not self.chk_stream.isChecked():
self._raw_markdown = full
self.response_raw.setPlainText(full)
self.response_view.setMarkdown(self._raw_markdown)
self.status.showMessage("Fertig.", 2500)
self.ui_busy(False)
def _on_gen_error(self, err: str) -> None:
self._render_timer.stop()
self.ui_busy(False)
self.status.showMessage("Fehler.", 5000)
self.msg_error("Ollama Fehler", err)
# -------------- Copy actions --------------
def copy_response(self) -> None:
QApplication.clipboard().setText(self._raw_markdown or self.response_raw.toPlainText())
self.status.showMessage("Antwort (Markdown) in Clipboard kopiert.", 2500)
def copy_sql_only(self) -> None:
md = self._raw_markdown or self.response_raw.toPlainText()
blocks = extract_sql_blocks(md)
sql_text = build_sql_only_text(blocks)
if not sql_text:
self.msg_info("Kein SQL gefunden", "Ich habe in der Antwort keine SQL-Codeblöcke gefunden.")
return
QApplication.clipboard().setText(sql_text)
self.status.showMessage(f"SQL kopiert ({len(blocks)} Block/Blöcke).", 3000)
# -------------- Pull model --------------
def on_pull_model(self) -> None:
base = self.base_url.text().strip()
model = self.model.currentText().strip()
if not base or not model:
self.msg_info("Hinweis", "Bitte Ollama URL und Model setzen.")
return
self.ui_busy(True)
self.status.showMessage(f"Pull: {model}")
self._pull_worker = PullModelWorker(base, model)
self._pull_worker.status.connect(lambda s: self.status.showMessage(f"Pull: {s}"))
self._pull_worker.done.connect(self._on_pull_done)
self._pull_worker.error.connect(self._on_pull_err)
self._pull_worker.start()
def _on_pull_done(self) -> None:
self.ui_busy(False)
self.status.showMessage("Model pull abgeschlossen.", 4000)
self.refresh_models()
def _on_pull_err(self, err: str) -> None:
self.ui_busy(False)
self.msg_error("Model pull fehlgeschlagen", err)
# -------------- Update runtime (Docker) --------------
def on_update_runtime(self) -> None:
if not is_docker_available():
self.msg_info("Nicht verfügbar", "docker ist nicht im PATH gefunden.")
return
msg = (
"This will run:\n"
"  docker pull ollama/ollama:latest\n"
f"  docker restart {DEFAULT_DOCKER_CONTAINER_NAME}\n\n"
"Note: you need Docker permissions (docker group).\n"
"Continue?"
)
if QMessageBox.question(self, "Update Ollama?", msg) != QMessageBox.Yes:
return
self.ui_busy(True)
self.status.showMessage("Ollama Runtime Update …")
self._update_worker = UpdateOllamaDockerWorker(DEFAULT_DOCKER_CONTAINER_NAME)
self._update_worker.status.connect(lambda s: self.status.showMessage(s))
self._update_worker.done.connect(self._on_update_done)
self._update_worker.error.connect(self._on_update_err)
self._update_worker.start()
def _on_update_done(self) -> None:
self.ui_busy(False)
self.status.showMessage("Ollama Runtime Update fertig.", 5000)
QTimer.singleShot(700, self.refresh_models)
def _on_update_err(self, err: str) -> None:
self.ui_busy(False)
self.msg_error("Ollama Runtime Update fehlgeschlagen", err)
def main() -> int:
# For Wayland/Hyprland you can force:
# QT_QPA_PLATFORM=wayland python sql_ai_gui.py
app = QApplication(sys.argv)
app.setApplicationName("JR SQL AI GUI")
w = MainWindow()
w.resize(1250, 760)
w.show()
return app.exec()
if __name__ == "__main__":
raise SystemExit(main())