
In today's weekend workshop, we discussed how we could use an LLM for Lua debugging and error checking. I brought up that I'd like the option to use my own AI instead of a possible official server or ChatGPT's servers. I want an option to redirect the address and port number, like I can do in OpenWebUI.


This is how I'm doing it.

I first installed ollama from their website.

http://ollama.com

I then browse and pull the model I want (and that fits within my server's limitations).

https://ollama.com/search
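Concretely, pulling and checking a model looks like this (the model name here is just an example; choose one that fits your hardware):

```shell
# Pull a model from the Ollama library.
# "llama3.2" is an example name; substitute whatever fits your server.
ollama pull llama3.2

# Show the models you have pulled locally.
ollama list
```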

You can then communicate with it using curl at localhost:11434. (In my setup it shows a Docker address because OpenWebUI runs in Docker.)
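Here's a sketch of that curl call, assuming a local Ollama install on the default port and an example model name. /api/generate is Ollama's completion endpoint, and changing the host and port variables is exactly the kind of redirection I'd want an official integration to allow:

```shell
# Default Ollama endpoint is localhost:11434; change these two
# variables to redirect to your own server.
OLLAMA_HOST="localhost"
OLLAMA_PORT="11434"

# The model name is just an example; use whatever you pulled.
curl "http://${OLLAMA_HOST}:${OLLAMA_PORT}/api/generate" -d '{
  "model": "llama3.2",
  "prompt": "Explain this Lua error: attempt to index a nil value",
  "stream": false
}'
```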

Here's the ollama GitHub.

https://github.com/ollama/ollama

And here's OpenWebUI if you want to take a peek.

https://openwebui.com
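If you want to try the same setup, OpenWebUI's docs describe running it in Docker along these lines (port mapping, volume name, and flags may differ for your setup, so treat this as a sketch):

```shell
# Run Open WebUI in Docker, exposed on port 3000 of the host.
# host.docker.internal lets the container reach an Ollama server
# running on the host machine.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```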

Pretty much, if you're looking to integrate AI, I'd like the option to use my own services. :)

