In this episode, Ben and Anthony demonstrate how to integrate Ollama, a local LLM runtime, directly into your SQL Server workflows.
What's Inside:
- Full setup of Ollama with Docker and local REST access
- SQL Server calls out to the Ollama API using sp_invoke_external_rest_api
- Real-world use cases: text generation, summarization, classification
- Security and latency benefits of keeping everything on-prem
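As a rough illustration of the pattern covered in the episode, a T-SQL call to a local Ollama endpoint might look like the sketch below. This is a hypothetical example, not code from the episode: the model name (`llama3`), the default Ollama URL and port (`http://localhost:11434/api/generate`), and the JSON response path are all assumptions, and the parameter names follow the convention of Azure SQL's external REST invocation procedure.

```sql
-- Hypothetical sketch: ask a local Ollama instance to summarize text from T-SQL.
-- Assumes Ollama is listening on its default port 11434 and that the
-- REST invocation procedure discussed in the episode is available.
DECLARE @response NVARCHAR(MAX);
DECLARE @payload NVARCHAR(MAX) = N'{
    "model": "llama3",
    "prompt": "Summarize this support ticket in one sentence: ...",
    "stream": false
}';

EXEC sp_invoke_external_rest_api
    @url      = N'http://localhost:11434/api/generate',
    @method   = N'POST',
    @payload  = @payload,
    @response = @response OUTPUT;

-- Extract the generated text from the JSON response (path is an assumption).
SELECT JSON_VALUE(@response, '$.result.response') AS generated_text;
```

Because both SQL Server and Ollama run on the same network, no data leaves the premises, which is the security and latency benefit the episode highlights.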
▶ Watch on YouTube