Replies: 19 comments 76 replies
-
This would be awesome! Even people who use a MacBook Air or Pro as a daily-driver laptop would benefit if they had Immich running on a low-capability server, say a Pi or something. Maybe convert this into a feature request instead of a 'Q&A'.
-
To my understanding, Docker on macOS runs inside a Linux VM, and it isn't possible to pass Apple's Metal hardware acceleration through to it.
-
You would have to run the machine-learning service directly, outside of Docker. Once started, it should be listening on port 3003. Keep in mind that none of the models have been tested for CoreML compatibility, so you might need to experiment with different models to find one that works well.
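Once it's running, a quick sanity check is to hit the service's health route from the same machine. Assumption on my part: the `/ping` endpoint exposed by the containerized ML service is also present when running natively.

```shell
# Query the (assumed) health endpoint of the natively running ML service;
# prints "unreachable" if nothing is listening on port 3003.
URL="http://localhost:3003/ping"
RESP=$(curl -s --max-time 2 "$URL" || echo "unreachable")
echo "ML service response: $RESP"
```

If the service is up you should see a response body instead of "unreachable".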
-
Hi all, if it could help to do some tests, I can also use a Mac mini M4 (not Pro) to see how it works :)
-
Hello Immich community, I've tried this setup as follows but am getting an error. Can anyone help? The error I receive:
-
This enabled me to start ML locally:

brew update
brew upgrade
softwareupdate --install-rosetta --agree-to-license
brew reinstall gcc gfortran scipy pyenv uv
git clone git@github.com:immich-app/immich.git
cd immich/machine-learning
uv sync --extra cpu
source .venv/bin/activate
python -m immich_ml

Next time, the run command is:

cd immich/machine-learning
source .venv/bin/activate
python -m immich_ml
-
@HeroSizy Hi, thanks for the script. Though a dumb question: why do we want/need to run it daily?
-
Say I have Immich running on a Linux server and have set up this machine learning on my Mac. In the Immich machine-learning settings, I have the Mac URL listed first and then the Linux server URL. How can I tell that it is working? Is there a log somewhere that will tell me? Thanks @fredrike and @HeroSizy for helping me get it set up!
-
Is there a list of supported models? I've switched from the default to https://huggingface.co/immich-app/ViT-SO400M-16-SigLIP2-384__webli and now I seem to be getting traceback errors. Edit: errors on an M1 Mac due to
So I'm hitting the CoreMLExecutionProvider dimension cap of 16,384, which is a hardware/software limitation of CoreML. I've switched this back to CPU and I'm getting fast, good performance, so idk if there was any point in setting all this up just to do it via CPU? At least it's out of the container for better performance on resources. Would be great to have another discussion open on here about the best models supported by CoreML.
-
Has anyone benchmarked the performance of this setup vs docker cpu-only? In some brief testing on a base M1 using this native setup, I'm seeing around 45% CPU usage (not much GPU, although I believe I've read in the past that GPU usage on Apple Silicon is not accurately reflected). Using the ViT-L-16-SigLIP-256__webli model, it is processing basically 1 asset per second. Definitely better than running on a NAS, but I expected a little faster. Edit: that was with the default concurrency of 2. On the M1, it seems like 6 about maxes out the CPU. I went ahead and gave docker a try as well for comparison. Very unscientific, take these with a grain of salt. Definitely possible I missed a step above such that hw accel is not kicking in. Results:
Not sure why docker would be faster, but either way I think I will go that route for simplicity's sake. Speed should certainly be sufficient for most. The MBA is likely also throttling due to heat; an M4 mini should do better in that regard.
-
So I now have an M3 instead of an M1, so I decided to give it a go again. After cloning the repository I did:

uv sync --python=$(which python3.12) --extra cpu
source .venv/bin/activate
python -m immich_ml

I had the error "'[::]' does not appear to be an IPv4 or IPv6 address", but editing the bind address fixed that. Now it processes pictures, but it uses the CPU, not the GPU. Did I miss a step somewhere?
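For anyone hitting the same "[::]" bind error: rather than editing source, the host and port can usually be overridden via environment variables. The names here are an assumption — recent Immich releases document `IMMICH_HOST`/`IMMICH_PORT` for the ML service, while older ones used `MACHINE_LEARNING_HOST`/`MACHINE_LEARNING_PORT` — so check the settings class in your checkout:

```shell
# Assumed env vars (IMMICH_HOST / IMMICH_PORT); verify against your version's
# settings before relying on them. 0.0.0.0 avoids the unparseable "[::]" default.
export IMMICH_HOST=0.0.0.0
export IMMICH_PORT=3003
echo "immich_ml will bind to ${IMMICH_HOST}:${IMMICH_PORT}"
# then start the service as before:
# python -m immich_ml
```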
-
Hello, the ML app starts correctly but I am getting this error on my M4 when I try to run the face recognition task.
I blamed the model cache and started with a fresh one, no luck.
-
I'm also having problems running the machine-learning software on my M3 Mac. I'm still on macOS 15.6.1, and with Python 3.12 I'm getting the following exception (after changing the log level to verbose, which unfortunately didn't help me much). Can someone tell me what I'm doing wrong?
-
If we could change Immich so that it can use Docker Model Runner, it would be easier.
-
#10454 (reply in thread), same issue as others.

2026-01-05 11:52:51.749 python[18697:1273054] 2026-01-05 11:52:51.783360 [E:onnxruntime:, sequential_executor.cc:572 ExecuteKernel] Non-zero status code returned while running 16946850393505866820_CoreML_16946850393505866820_32 node. Name:'CoreMLExecutionProvider_16946850393505866820_CoreML_16946850393505866820_32_32' Status Message: Error executing model: Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
2026-01-05 11:52:51.903 python[18697:1273054] 2026-01-05 11:52:51.937857 [E:onnxruntime:, sequential_executor.cc:572 ExecuteKernel] Non-zero status code returned while running 16946850393505866820_CoreML_16946850393505866820_32 node. Name:'CoreMLExecutionProvider_16946850393505866820_CoreML_16946850393505866820_32_32' Status Message: Error executing model: Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
2026-01-05 11:52:52.069 python[18697:1273054] 2026-01-05 11:52:52.103008 [E:onnxruntime:, sequential_executor.cc:572 ExecuteKernel] Non-zero status code returned while running 16946850393505866820_CoreML_16946850393505866820_32 node. Name:'CoreMLExecutionProvider_16946850393505866820_CoreML_16946850393505866820_32_32' Status Message: Error executing model: Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).

Is this because OCR was introduced and now the old face recognition model doesn't work?
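One way to confirm such a failure is CoreML-specific is to comment out the CoreML provider in `immich_ml/models/constants.py` so onnxruntime falls back to CPU — the same edit a later reply in this thread automates with sed. A sketch of the edit, demonstrated on a stand-in file since the exact contents of `constants.py` vary by version:

```shell
# Demonstrate the provider-disabling edit on a stand-in file; in practice the
# target is immich_ml/models/constants.py in the cloned repo (assumed layout).
TMP=$(mktemp)
printf '%s\n' \
  'providers = [' \
  '    "CoreMLExecutionProvider",' \
  '    "CPUExecutionProvider",' \
  ']' > "$TMP"
# -i.bak is accepted by both BSD (macOS) and GNU sed
sed -i.bak 's/"CoreMLExecutionProvider",/# "CoreMLExecutionProvider",/' "$TMP"
RESULT=$(grep 'CoreML' "$TMP")
echo "$RESULT"
rm -f "$TMP" "$TMP.bak"
```

If the errors disappear with the provider commented out, the model (not the setup) is what CoreML can't handle. Note this edit is undone by `git pull`/checkout.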
-
Hey y'all. I'm not a developer, and I generally stay quiet about my crappy hobby code, but I wanted to post this here in hopes that it (A) helps some people on macOS and (B) inspires some community support around ML hardware acceleration on Apple Silicon. I (read: Claude) was able to throw this together today with quite a bit of head-scratching. It's a drop-in temporary replacement ('til they add official support) for immich-ML that runs on Apple's ML frameworks. It responds at the same API endpoints (and in the same format) as the official ML container, but uses MLX, Apple Vision, and CoreML to do the heavy lifting. I've only tested it on my M1 MacBook running Tahoe, but it seemed to work considerably faster than the official Immich-ML container running in OrbStack. This is pretty much fully AI code, is not based on the original Immich-ML server, and I professionally advise none of you to use it. But maybe it could be considered a starting point. Or not.
-
I just worked with the CPU. If anyone is interested, I mostly followed this: #10454 (reply in thread), then with a bit of smarts (hint: Claude), I kinda reached this:

#!/bin/zsh
LOCKFILE="/tmp/immich_ml.lock"
if [ -e "$LOCKFILE" ] && kill -0 "$(cat $LOCKFILE)" 2>/dev/null; then
    echo "immich_ml is already running"
    exit 1
fi
echo $$ > "$LOCKFILE"
trap "rm -f $LOCKFILE" EXIT
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
export MACHINE_LEARNING_WORKERS=1
cd /<path-to-immich-repo>/machine-learning
git pull
# Disable CoreML
sed -i '' 's/"CoreMLExecutionProvider",/# "CoreMLExecutionProvider",/' \
    immich_ml/models/constants.py
/opt/homebrew/bin/uv sync --extra cpu --python=3.12
source .venv/bin/activate
python -m immich_ml

Then update the launch agent at ~/Library/LaunchAgents/com.user.immichml.daily.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.user.immichml.daily</string>
    <key>ProgramArguments</key>
    <array>
        <string>/<immich-repo-full-path>/start_immich_ml.sh</string>
    </array>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>3</integer>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
    <key>RunAtLoad</key>
    <true/>
    <key>EnvironmentVariables</key>
    <dict>
        <key>OBJC_DISABLE_INITIALIZE_FORK_SAFETY</key>
        <string>YES</string>
        <key>PATH</key>
        <string>/opt/homebrew/bin:/Users/dev/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin</string>
    </dict>
    <key>StandardOutPath</key>
    <string>/tmp/immichml.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/immichml.err</string>
</dict>
</plist>

Then reload it:

launchctl unload ~/Library/LaunchAgents/com.user.immichml.daily.plist
launchctl load ~/Library/LaunchAgents/com.user.immichml.daily.plist
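To check that the agent actually loaded afterwards, a generic launchctl query (not from the thread; the label matches the plist above) works:

```shell
# List the agent if loaded; print a note when launchctl isn't available
# (e.g. when trying this snippet on a non-macOS machine).
LABEL="com.user.immichml.daily"
if command -v launchctl >/dev/null 2>&1; then
    launchctl list | grep "$LABEL" || echo "$LABEL is not loaded"
else
    echo "launchctl not available on this system"
fi
```

A loaded agent shows up with its PID (or `-` if not currently running) in the first column.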
-
Yes

…On Tue, Feb 24, 2026 at 17:51, Rui Marinho wrote:
If you're disabling CoreML, there's no advantage to running outside Docker. You're just adding complexity for the same CPU-only result, right?
-
Hey, I made a project that maximizes Apple Silicon usage for dockerized Immich, offloading ML, transcoding, and thumbnail generation from the CPU (and fixing the face bug in immich-ml-metal). I regenerated my entire library with it and it looks good. I'd love it if others could check it out and give feedback or ideas to improve it.
-
Hello,
I just wanted to ask whether it's planned to support ML hardware acceleration on Apple Silicon GPUs (Metal).
This would give good performance for the initial ML scan on Macs with "Max" and/or "Ultra" chips.
Thank you!