Using a local server for LLaVA vision

Is there a way of using a local server running LLaVA rather than sending it to an online service?

Python examples of a local LLaVA server:

AmebaPro2_server/AmebaPro2_Whisper_Llava_server.py

AmebaPro2_server/AmebaPro2_Whisper_LlavaNext_server.py

I had to make another account because your forum blocked me for spam after asking that, and I couldn't respond. I don't need a local LLaVA server; that is not what I'm asking for at all.

I have llama3.2-vision:latest running via Ollama on my server; Ollama can run lots of different models. I want to send an image to it rather than using ChatGPT or Google Gemini.

The API is here:
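For reference, below is a minimal Python sketch of the kind of request I mean, assuming the standard Ollama /api/generate endpoint on its default port 11434; the server address, image filename and prompt are placeholders to adjust for your own setup.

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://192.168.1.10:11434/api/generate"  # address of the machine running Ollama (placeholder)
IMAGE_PATH = "capture.jpg"                              # hypothetical image file to send

# Ollama's generate endpoint accepts base64-encoded images for vision models.
with open(IMAGE_PATH, "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "llama3.2-vision:latest",
    "prompt": "Describe this image.",
    "images": [image_b64],
    "stream": False,
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read().decode("utf-8"))

# With stream disabled, the model's answer is returned in the "response" field.
print(result.get("response", ""))
```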

Hi @Geo_Muir,

For sending an image to your customized server, you may refer to the existing examples that use the HTTP client to send a POST request and save the image on the server, for example: HTTP Post Image and MP4 — Ameba Arduino AIoT Documentation v1.1 documentation. A rough server-side sketch follows below.
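As a rough sketch only (not one of the shipped AmebaPro2_server examples), a small Python HTTP server on your machine could accept the image posted by the board and forward it to the local Ollama model. It assumes the board posts raw JPEG bytes in the request body; the listening port and Ollama address are placeholders to adapt.

```python
import base64
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"  # Ollama running on the same machine (placeholder)
LISTEN_PORT = 5000                                   # port the board posts the image to (placeholder)

class ImageHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw JPEG bytes posted by the board (assumed body format).
        length = int(self.headers.get("Content-Length", 0))
        image_bytes = self.rfile.read(length)

        # Forward the image to the local Ollama vision model.
        payload = {
            "model": "llama3.2-vision:latest",
            "prompt": "Describe this image.",
            "images": [base64.b64encode(image_bytes).decode("utf-8")],
            "stream": False,
        }
        req = urllib.request.Request(
            OLLAMA_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            answer = json.loads(resp.read().decode("utf-8")).get("response", "")

        # Return the model's answer to the board as plain text.
        print(answer)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(answer.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", LISTEN_PORT), ImageHandler).serve_forever()
```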

Thank you.

