Misc. bug: Web UI does not select the loaded model by default #20382

@strawberrymelonpanda

Description

Name and Version

ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3090, compute capability 8.6, VMM: yes
version: 8247 (ae87863)
built with GNU 13.3.0 for Linux x86_64

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

llama-server

Command line

llama-server \
--models-preset ./presets.ini \
--models-max 1

Problem description & steps to reproduce

An annoying usability issue in the Web UI: I always have to choose the loaded model manually, because the model select does not default to any of the models that are already loaded.

From #18655 (comment)

When using the llama-server router mode, it would be welcome if the Web UI defaulted to at least one of the loaded models (excluding those marked as no-webui). The UI appears to know which models are loaded, but the model must still be selected manually from the dropdown.

[Screenshots]

Left: The Web UI initial screen, no model selected despite Qwen 3.5 being loaded.
Right: The dropdown knows that Qwen 3.5 is loaded.
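The requested behavior could look roughly like the sketch below: pick the first loaded model that is not hidden from the Web UI. This is only an illustration; the `loaded` and `noWebui` field names are assumptions, not the actual server or Web UI API.

```typescript
// Hypothetical shape of a model entry as the Web UI might see it.
// Field names are illustrative only.
interface ModelEntry {
  name: string;
  loaded: boolean;   // the router reports this model as currently loaded
  noWebui?: boolean; // preset marked as hidden from the Web UI
}

// Return the name of a sensible default model for the dropdown:
// the first loaded model not marked no-webui, or null if none qualifies.
function defaultModel(models: ModelEntry[]): string | null {
  const candidate = models.find((m) => m.loaded && !m.noWebui);
  return candidate ? candidate.name : null;
}
```

With this rule, the initial screen in the left screenshot would start with "Qwen 3.5" already selected instead of an empty dropdown.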

First Bad Commit

No response

Relevant log output

No response
