reepblue Posted Saturday at 07:36 PM

In today's weekend workshop, we discussed how we could use an LLM for Lua debugging and error checking. I brought up that I'd like the option to use my own AI instead of a possible official server or ChatGPT's server; I want an option to redirect the address and port number, like I can do in OpenWebUI.

This is how I'm doing it. I first installed Ollama from their website: http://ollama.com

I then browse and pull the model I want (and can run within my server's limitations): https://ollama.com/search

You can then communicate with it via curl at localhost:11434. (It's showing a Docker address in my setup because OpenWebUI is running in Docker.)

Here's the Ollama GitHub: https://github.com/ollama/ollama

And here's OpenWebUI if you want to take a peek: https://openwebui.com

Pretty much, if you're looking into integrating AI, I'd like to be able to point it at my own services.
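To illustrate the "redirect the address and port" idea, here's a minimal Python sketch against Ollama's documented /api/generate endpoint. The model name ("llama3.2") and the alternate host in the usage comment are placeholders, not anything official; swap them for whatever you pulled and wherever your server lives:

```python
import json
import urllib.request

def build_generate_request(prompt, model="llama3.2", host="localhost:11434"):
    """Build an HTTP request for Ollama's /api/generate endpoint.

    The host:port is a parameter on purpose -- that's the whole point:
    the default is Ollama's localhost:11434, but you can redirect it
    to your own server (or a Docker address) instead.
    """
    url = f"http://{host}/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

def ask(prompt, **kw):
    """Send the prompt to the configured server and return the reply text."""
    with urllib.request.urlopen(build_generate_request(prompt, **kw)) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama server with the model pulled first:
    #   ollama pull llama3.2
    # Point it at a different box by passing host="192.168.1.50:11434" etc.
    print(ask("Why does this Lua snippet error? print(nil .. 'x')"))
```

An editor or engine integration built this way only needs to expose that one host setting to let users pick their own AI backend over an official one.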