## Native vendor layout (vendor/native)

The `llama.cpp` and standalone `ggml` trees used by `xlai-sys-llama` and `xlai-sys-ggml` live as git submodules under:

- `vendor/native/llama.cpp`
- `vendor/native/ggml`
After cloning, initialize them (for example):
```bash
git submodule update --init --recursive vendor/native/llama.cpp vendor/native/ggml
```

### Override source paths (optional)
- `GGML_SRC`: absolute path to a `ggml` checkout for `xlai-sys-ggml` (defaults to `vendor/native/ggml`).
- `LLAMA_CPP_SRC`: absolute path to a `llama.cpp` checkout for `xlai-sys-llama` (defaults to `vendor/native/llama.cpp`).
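To illustrate how such an override typically behaves, here is a minimal sketch of the lookup logic: use the environment variable when set, otherwise fall back to the vendored submodule. The function name `ggml_src_dir` is hypothetical; the actual build script in `xlai-sys-ggml` may resolve the path differently.

```rust
use std::env;
use std::path::PathBuf;

/// Resolve the ggml source directory: prefer GGML_SRC when set,
/// otherwise fall back to the vendored submodule path.
/// (Illustrative sketch only; the real build.rs may differ.)
fn ggml_src_dir() -> PathBuf {
    env::var("GGML_SRC")
        .map(PathBuf::from)
        .unwrap_or_else(|_| PathBuf::from("vendor/native/ggml"))
}

fn main() {
    // With GGML_SRC unset, this prints the default submodule path.
    println!("{}", ggml_src_dir().display());
}
```

Because `GGML_SRC` expects an absolute path, set it explicitly (e.g. `GGML_SRC=/abs/path/to/ggml cargo build -p xlai-sys-ggml`) rather than relying on relative paths.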
## Dual native stacks
Enabling both local chat (`xlai-sys-llama`, which bundles ggml with llama.cpp) and native QTS (`xlai-sys-ggml`) links two native ggml implementations into one binary. The build scripts emit a `cargo:warning` when `xlai-facade` enables both `llama` and `qts`, or when `xlai-native` enables `qts`. If you hit duplicate symbols or other linker issues, prefer separate processes or a single native stack.
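One way to stay on a single stack is to enable only the feature you need at build time. This is a sketch: the feature names `llama` and `qts` come from the warning described above, but whether they are default features of `xlai-facade` (and hence whether `--no-default-features` is required) is an assumption.

```shell
# Build with only the llama.cpp stack enabled (hypothetical flag set;
# adjust if llama/qts are not default features of xlai-facade).
cargo build -p xlai-facade --no-default-features --features llama

# Or only the native QTS stack:
cargo build -p xlai-facade --no-default-features --features qts
```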