Using a local server for AI LLaVA vision

Is there a way of using a local server running LLaVA rather than sending it to an online service?

Python examples of a local LLaVA server:

AmebaPro2_server/AmebaPro2_Whisper_Llava_server.py

AmebaPro2_server/AmebaPro2_Whisper_LlavaNext_server.py

I had to make another account because your forum blocked me for spam after asking that, and I couldn't respond. I don't need a local LLaVA server; that is not what I'm asking for at all.

I have llama3.2-vision:latest running via Ollama on my server; Ollama can run lots of different models. I want to send an image to it rather than using ChatGPT or Google Gemini.

The API is here:
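
For reference, a minimal sketch of sending one image to Ollama's REST API from Python. It assumes Ollama is listening on its default port 11434, and capture.jpg is just a placeholder for whatever image you want analysed:

import base64

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # adjust the host to your server

# Read the image and base64-encode it, which is what Ollama's API expects
with open("capture.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "llama3.2-vision:latest",
    "prompt": "Describe this image.",
    "images": [image_b64],   # list of base64-encoded images
    "stream": False,         # return the whole answer in one JSON object
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])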

Hi @Geo_Muir,

For sending an image to your customized server, you may refer to the existing examples that use the HTTP client to send a POST request and save the image on the server, for example: HTTP Post Image and MP4 — Ameba Arduino AIoT Documentation v1.1 documentation
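
A rough sketch of what the receiving side could look like in Python, assuming the board posts the raw JPEG bytes in the request body (as in the HTTP Post Image example) and that Ollama runs on the same machine at its default port. The /upload path and port 5000 are placeholders, not part of the example linked above:

import base64

import requests
from flask import Flask, request

app = Flask(__name__)
OLLAMA_URL = "http://localhost:11434/api/generate"

@app.route("/upload", methods=["POST"])
def upload():
    # Assumes the board posted the JPEG bytes directly as the request body
    image_b64 = base64.b64encode(request.get_data()).decode("utf-8")
    payload = {
        "model": "llama3.2-vision:latest",
        "prompt": "Describe this image.",
        "images": [image_b64],
        "stream": False,
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    answer = resp.json()["response"]
    print(answer)
    return answer, 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)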

Thank you.



I got this working with Ollama, no thanks to this forum suspending my account, since asking questions apparently counts as spam. I would share, but that might be spam…

Hi @geofrancis

Apologies for the inconvenience. The suspension was triggered by the auto-detect security system. I've checked and confirmed that your account is no longer suspended, so you can resume posting. We've taken note of this issue; please let us know if it happens again.

I was blocked in March, over six months ago, for asking how to get this working with an open-source project. I messaged your admins asking for help and was ignored.