
fix: build compatibility with latest llama.cpp (b8390+) #597

Open
CreatiCoding wants to merge 1 commit into withcatai:master from CreatiCoding:fix/gemma4-llamacpp-b8831-compat


Conversation

@CreatiCoding

Description of change

Building from source against recent llama.cpp releases fails with two distinct C++/CMake errors. This PR fixes both so that source download --release latest followed by source build works out of the box.

This is independent of #591 (which handles TypeScript-layer Gemma 4 support). Once #591 is merged, users who want to use Gemma 4 today still need these build fixes, because 3.18.1 bundles llama.cpp b8390, while Gemma 4's gemma4 architecture only shipped in a later llama.cpp release — so source download --release latest is the only path to it.

Context: I hit this while trying to run Gemma 4 via source download --release latest (which pulled b8831).


1. llama/addon/addon.cpp:239 - std::atomic_bool copy-initialization

AppleClang 17 (Xcode 17) rejects copy-initialization from bool because std::atomic has a deleted copy constructor:

error: copying variable of type 'std::atomic_bool' (aka 'atomic<bool>')
       invokes deleted constructor
    static std::atomic_bool loaded = false;
                            ^        ~~~~~

Fixed by switching to brace-initialization:

static std::atomic_bool loaded{false};

This is a minimal, standards-conforming change that is backward-compatible with all supported C++11+ compilers.


2. llama/CMakeLists.txt:132 - common target renamed to llama-common

Upstream llama.cpp renamed the common CMake target to llama-common (the produced library is now libllama-common.dylib). The current link declaration resolves to a bare -lcommon flag, which the linker cannot find:

ld: library 'common' not found
clang++: error: linker command failed with exit code 1

Fixed by linking against the renamed target:

target_link_libraries(${PROJECT_NAME} "llama-common")
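For setups that may check out either an older llama.cpp (with the original common target) or a newer one, a guarded link declaration is one possible generalization. This is a sketch for illustration only; the PR itself simply switches to the renamed target:

```cmake
# Link against whichever "common" target the checked-out llama.cpp
# defines: newer releases export llama-common, older ones export common.
if(TARGET llama-common)
    target_link_libraries(${PROJECT_NAME} "llama-common")
else()
    target_link_libraries(${PROJECT_NAME} "common")
endif()
```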

How this was verified

  • npm run test:typescript — passes
  • npm run lint — passes
  • npm run test:standalone — passes (28 files, 169 tests)
  • npx --no node-llama-cpp source download --release latest followed by source build — succeeds on macOS 15 arm64 (Apple M-series), AppleClang 17, Node.js 22.15, Metal backend
  • A minimal end-to-end reproduction (loading gemma-4-E4B-it-Q8_0.gguf via LlamaChatSession with getLlama("lastBuild")) loads the model and streams responses correctly
Test output screenshot

  1. npm run test:typescript / npm run lint / npm run test:standalone
     (screenshot)
  2. npx --no node-llama-cpp source download --release latest
➜ npx --no node-llama-cpp source download --release latest
Repo: ggml-org/llama.cpp
Release: latest
GPU: Metal

✔ Fetched llama.cpp info
✔ Removed existing llama.cpp directory
✔ Cloned ggml-org/llama.cpp (GitHub)
◷ Compiling llama.cpp

(... compile output truncated ...)

[ 94%] Built target llama-common
[ 94%] Building CXX object CMakeFiles/llama-addon.dir/addon/AddonGrammarEvaluationState.cpp.o
[ 95%] Building CXX object CMakeFiles/llama-addon.dir/addon/AddonGrammar.cpp.o
[ 95%] Building CXX object CMakeFiles/llama-addon.dir/addon/AddonModelData.cpp.o
[ 96%] Building CXX object CMakeFiles/llama-addon.dir/addon/AddonModel.cpp.o
[ 96%] Building CXX object CMakeFiles/llama-addon.dir/addon/AddonContext.cpp.o
[ 98%] Building CXX object CMakeFiles/llama-addon.dir/addon/AddonSampler.cpp.o
[ 98%] Building CXX object CMakeFiles/llama-addon.dir/addon/addonGlobals.cpp.o
[ 98%] Building CXX object CMakeFiles/llama-addon.dir/addon/AddonModelLora.cpp.o
[ 98%] Building CXX object CMakeFiles/llama-addon.dir/addon/addon.cpp.o
[ 98%] Building CXX object CMakeFiles/llama-addon.dir/addon/globals/addonLog.cpp.o
[ 99%] Building CXX object CMakeFiles/llama-addon.dir/addon/globals/addonProgress.cpp.o
[ 99%] Building CXX object CMakeFiles/llama-addon.dir/addon/globals/getGpuInfo.cpp.o
[100%] Building CXX object CMakeFiles/llama-addon.dir/addon/globals/getMemoryInfo.cpp.o
[100%] Building CXX object CMakeFiles/llama-addon.dir/addon/globals/getSwapInfo.cpp.o
[100%] Linking CXX shared library Release/llama-addon.node
[100%] Built target llama-addon
✔ Compiled llama.cpp

Pull-Request Checklist

  • Code is up-to-date with the master branch
  • npm run format to apply eslint formatting
  • npm run test passes with this change
  • This pull request links relevant issues as Fixes #0000 — N/A (no existing issue tracks these specific build errors; related context in feat: Gemma 4 support #591)
  • There are new or updated unit tests validating the change — N/A (build-level fix; verified by existing CI compilation step and full test:standalone suite)
  • Documentation has been updated to reflect this change — N/A (no public API or docs affected)
  • The new commits and pull request title follow conventions explained in pull request guidelines

Two changes required when building against latest llama.cpp
(tested with b8831, which contains Gemma 4 support):

1. llama/addon/addon.cpp:239 - use brace-initialization for
   std::atomic_bool. AppleClang 17 rejects copy-initialization
   from bool because std::atomic has a deleted copy constructor:

     error: copying variable of type 'std::atomic_bool'
            invokes deleted constructor

2. llama/CMakeLists.txt:132 - the 'common' target was renamed to
   'llama-common' upstream. The old -lcommon flag fails to resolve
   because the actual dylib is now libllama-common.dylib.

Verified:
- npm run test:typescript passes
- npm run lint passes
- npm run test:standalone passes
- source download --release latest + source build succeeds on
  macOS 15 arm64 (M-series), AppleClang 17, Node.js 22.15, Metal

@giladgd giladgd left a comment


Thanks for the PR!

Comment thread: package-lock.json

@giladgd giladgd Apr 25, 2026


Please remove the changes in this file as they're unrelated
