NextCloud + LocalAI

Summary: make sure to name your LocalAI model gpt-3.5-turbo.

I installed LocalAI in a container and enabled the AI integration app in NextCloud.
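
For context, the container setup looked roughly like the following. This is a minimal sketch, not my exact commands: the pod name, port mapping, image tag, and in-container models path are illustrative (check the LocalAI image documentation for the current models path):

# Create a pod so the containers inside it share localhost
podman pod create --name ai-pod -p 8080:8080

# Run LocalAI inside the pod, mounting a local models directory
# (image tag and /build/models path are assumptions; verify against
# the LocalAI docs for the image you use)
podman run -d --pod ai-pod --name localai \
  -v $PWD/models:/build/models \
  quay.io/go-skynet/local-ai:latest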

A couple of notes on getting things to work:

  • Since I installed LocalAI inside a Podman pod, the LocalAI instance is available to NextCloud on localhost; because NextCloud refuses to contact local addresses by default, on the NextCloud side I therefore had to add 'allow_local_remote_servers' => true, to config/config.php (see the config snippet after the model file below).
  • I downloaded the luna-ai-llama2-uncensored.Q4_K_M.gguf model following the instructions from LocalAI; this created the template files chat.tmpl and completion.tmpl, plus a YAML file which I used as the basis for the following model definition:
# LocalAI model definition (a YAML file in the models directory)
context_size: 1024
name: gpt-3.5-turbo   # the name NextCloud requests; see the note below
parameters:
  model: luna-ai-llama2-uncensored.Q4_K_M.gguf
  temperature: 0.9
  top_k: 80
  top_p: 0.7
template:
  chat: chat                # refers to chat.tmpl
  completion: completion    # refers to completion.tmpl

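For reference, here is the relevant piece of NextCloud's config/config.php; the boolean true is the form the NextCloud admin documentation uses (the rest of the array is elided):

<?php
$CONFIG = array (
  // ... existing settings ...

  // Permit requests to localhost/loopback addresses; without this,
  // NextCloud refuses to contact the LocalAI instance in the same pod.
  'allow_local_remote_servers' => true,
);
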
The main point here is that the NextCloud integration currently expects the model to be named gpt-3.5-turbo, which I only discovered after spending some time troubleshooting.
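
A quick way to verify the naming outside of NextCloud is to query LocalAI's OpenAI-compatible API directly (this assumes LocalAI's default port 8080; adjust to your pod's port mapping):

# List the models LocalAI has loaded; gpt-3.5-turbo should appear
curl http://localhost:8080/v1/models

# Send a test chat completion using the name NextCloud will request
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'

If the second request returns a completion, NextCloud should be able to use the same endpoint once allow_local_remote_servers is set.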