fix: build compatibility with latest llama.cpp (b8390+) #597
Open
CreatiCoding wants to merge 1 commit into withcatai:master from
Conversation
Two changes are required when building against the latest llama.cpp
(tested with b8831, which contains Gemma 4 support):
1. llama/addon/addon.cpp:239 - use brace-initialization for
std::atomic_bool. AppleClang 17 rejects copy-initialization
from bool because std::atomic has a deleted copy constructor:
error: copying variable of type 'std::atomic_bool'
invokes deleted constructor
2. llama/CMakeLists.txt:132 - the 'common' target was renamed to
'llama-common' upstream. The old -lcommon flag fails to resolve
because the actual dylib is now libllama-common.dylib.
Verified:
- npm run test:typescript passes
- npm run lint passes
- npm run test:standalone passes
- source download --release latest + source build succeeds on
macOS 15 arm64 (M-series), AppleClang 17, Node.js 22.15, Metal
labKnowledge approved these changes (Apr 22, 2026)

giladgd (Member) requested changes (Apr 25, 2026):
Please remove the changes in this file as they're unrelated
Description of change
Building from source against recent llama.cpp releases fails with two distinct C++/CMake errors. This PR fixes both so that `source download --release latest` followed by `source build` works out of the box.

This is independent of #591 (which handles TypeScript-layer Gemma 4 support). Once #591 is merged, users who want to use Gemma 4 today will still need these build fixes: 3.18.1 bundles llama.cpp b8390, while Gemma 4's `gemma4` architecture only shipped in a later llama.cpp release, so `source download --release latest` is the only path to it.

Context: I hit this while trying to run Gemma 4 via `source download --release latest` (which pulled b8831).

1. `llama/addon/addon.cpp:239`: `std::atomic_bool` copy-initialization

AppleClang 17 (Xcode 17) rejects copy-initialization from `bool` because `std::atomic` has a deleted copy constructor:

    error: copying variable of type 'std::atomic_bool' invokes deleted constructor

Fixed by switching to brace-initialization, as sketched below.
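A minimal sketch of the pattern, assuming a plain declaration; the variable name `abortFlag` is illustrative, not the actual identifier at addon.cpp:239:

```cpp
#include <atomic>

// Before: copy-initialization from bool. AppleClang 17 rejects this because
// std::atomic's copy constructor is deleted:
//     error: copying variable of type 'std::atomic_bool' invokes deleted constructor
// std::atomic_bool abortFlag = false;

// After: brace-initialization constructs the atomic in place, with no copy involved.
std::atomic_bool abortFlag{false};
```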
This is a minimal, standards-conforming change that is backward-compatible with all supported C++11+ compilers.
2. `llama/CMakeLists.txt:132`: `common` target renamed to `llama-common`

Upstream llama.cpp renamed the `common` CMake target to `llama-common` (the produced library is now `libllama-common.dylib`). The current link declaration resolves to a bare `-lcommon` flag, which the linker cannot find. Fixed by linking against the renamed target, as sketched below.
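A sketch of the CMakeLists.txt change; the addon target name (`addon` here) is illustrative, and only the `llama-common` target name comes from upstream:

```cmake
# Before: the 'common' target no longer exists upstream, so CMake emits a bare
# -lcommon flag that the linker cannot resolve:
#     target_link_libraries(addon PRIVATE common)

# After: link against the renamed target (built as libllama-common.dylib on macOS).
target_link_libraries(addon PRIVATE llama-common)
```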
How this was verified
- `npm run test:typescript`: passes
- `npm run lint`: passes
- `npm run test:standalone`: passes (28 files, 169 tests)
- `npx --no node-llama-cpp source download --release latest` followed by `source build`: succeeds on macOS 15 arm64 (Apple M-series), AppleClang 17, Node.js 22.15, Metal backend
- `gemma-4-E4B-it-Q8_0.gguf` loaded via `LlamaChatSession` (with `getLlama("lastBuild")`): the model loads and streams responses correctly (see the sketch below)
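For reference, a sketch of that smoke test, assuming the node-llama-cpp v3 API; the model path is illustrative:

```typescript
import {getLlama, LlamaChatSession} from "node-llama-cpp";

// "lastBuild" loads the binary produced by `source build`
// instead of the bundled prebuilt binaries.
const llama = await getLlama("lastBuild");
const model = await llama.loadModel({
    modelPath: "models/gemma-4-E4B-it-Q8_0.gguf" // illustrative path
});
const context = await model.createContext();
const session = new LlamaChatSession({
    contextSequence: context.getSequence()
});

// Stream the response to verify token generation works end to end.
const answer = await session.prompt("Hello", {
    onTextChunk(chunk) {
        process.stdout.write(chunk);
    }
});
console.log("\nDone:", answer.length, "characters");
```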
[Test output screenshots: `npm run test:typescript` / `npm run lint` / `npm run test:standalone` / `npx --no node-llama-cpp source download --release latest`]

Pull-Request Checklist
- Code is up-to-date with the master branch
- `npm run format` to apply eslint formatting
- `npm run test` passes with this change
- Fixes #0000: N/A (no existing issue tracks these specific build errors; related context in feat: Gemma 4 support #591)
- New or updated unit tests validating the change (covered by the existing `test:standalone` suite)