Running ollama on the Jetson Nano 4GB
구차니
2026. 4. 10. 22:19
With only a 32GB SD card, there isn't enough space to try much. -_ㅠ
I barely managed to run llama. Maybe the surprising part is that it runs at all..
Still, seeing llama run reasonably well on CPU alone, the Jetson Nano's CPU seems fairly capable.
jetson@nano-4gb-jp451:~ $ curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
[sudo] password for jetson:
Sorry, try again.
[sudo] password for jetson:
ERROR: This version requires zstd for extraction. Please install zstd and try again:
  - Debian/Ubuntu: sudo apt-get install zstd
  - RHEL/CentOS/Fedora: sudo dnf install zstd
  - Arch: sudo pacman -S zstd
jetson@nano-4gb-jp451:~$ sudo apt-get install zstd
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  zstd
0 upgraded, 1 newly installed, 0 to remove and 498 not upgraded.
Need to get 237 kB of archives.
After this operation, 965 kB of additional disk space will be used.
Get:1 http://ports.ubuntu.com/ubuntu-ports bionic-updates/universe arm64 zstd arm64 1.3.3+dfsg-2ubuntu1.2 [237 kB]
Fetched 237 kB in 2s (124 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package zstd.
(Reading database ... 186946 files and directories currently installed.)
Preparing to unpack .../zstd_1.3.3+dfsg-2ubuntu1.2_arm64.deb ...
Unpacking zstd (1.3.3+dfsg-2ubuntu1.2) ...
Setting up zstd (1.3.3+dfsg-2ubuntu1.2) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
jetson@nano-4gb-jp451:~$ curl -fsSL https://ollama.com/install.sh | sh
>>> Cleaning up old version at /usr/local/lib/ollama
>>> Installing ollama to /usr/local
>>> Downloading ollama-linux-arm64.tar.zst
######################################################################## 100.0%
WARNING: Unsupported JetPack version detected. GPU may not be supported
>>> Creating ollama user...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA JetPack ready.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
jetson@nano-4gb-jp451:~$ ollama run llama3.2
ollama: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by ollama)
jetson@nano-4gb-jp451:~$ ldd --version
ldd (Ubuntu GLIBC 2.27-3ubuntu1.3) 2.27
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
jetson@nano-4gb-jp451:~$ curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.12 sh
>>> Cleaning up old version at /usr/local/lib/ollama
>>> Installing ollama to /usr/local
>>> Downloading ollama-linux-arm64.tgz
######################################################################## 100.0%
WARNING: Unsupported JetPack version detected. GPU may not be supported
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> NVIDIA JetPack ready.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
jetson@nano-4gb-jp451:~$ ollama
Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  stop        Stop a running model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.

jetson@nano-4gb-jp451:~$ ollama -v
ollama version is 0.5.12
jetson@nano-4gb-jp451:~$ ollama run
Error: requires at least 1 arg(s), only received 0
jetson@nano-4gb-jp451:~$ ollama run llama3.2
pulling manifest
pulling dde5aa3fc5ff... 100% ▕████████████████████████████████████████████████████████▏ 2.0 GB
pulling 966de95ca8a6... 100% ▕████████████████████████████████████████████████████████▏ 1.4 KB
pulling fcc5a6bec9da... 100% ▕████████████████████████████████████████████████████████▏ 7.7 KB
pulling a70ff7e570d9... 100% ▕████████████████████████████████████████████████████████▏ 6.0 KB
pulling 56bb8bd477a5... 100% ▕████████████████████████████████████████████████████████▏  96 B
pulling 34bb5ab01051... 100% ▕████████████████████████████████████████████████████████▏ 561 B
verifying sha256 digest
writing manifest
success
>>> Send a message (/? for help)
>>> 너 누구야?
나는 Meta의 가상 인자입니다. tôi은 Humanoid AI입니다.

jetson@nano-4gb-jp451:~$ ollama run llama3.2 --verbose
>>> 안녕
안녕하세요! (Hello!) I'm happy to meet you. How can I help you today?

total duration:       12.953687613s
load duration:        82.681499ms
prompt eval count:    27 token(s)
prompt eval duration: 2.511s
prompt eval rate:     10.75 tokens/s
eval count:           21 token(s)
eval duration:        10.357s
eval rate:            2.03 tokens/s
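The rates that `--verbose` reports are just token counts divided by durations; a quick arithmetic check of the numbers from the session above:

```shell
# Recompute the prompt eval rate and eval rate from the --verbose output
# (27 tokens / 2.511 s and 21 tokens / 10.357 s, taken from the log above).
awk 'BEGIN { printf "prompt eval rate: %.2f tokens/s\n", 27 / 2.511 }'
awk 'BEGIN { printf "eval rate: %.2f tokens/s\n",       21 / 10.357 }'
```

Both match the log (10.75 and 2.03 tokens/s), so the ~2 tokens/s generation really is all four CPU cores have to give here.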
[링크 : https://github.com/ollama/ollama/issues/9506]
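The workaround above (pinning OLLAMA_VERSION=0.5.12) can be scripted instead of discovered by trial and error. A minimal sketch, assuming `ldd --version` prints the glibc version as the last field of its first line (as it does on Ubuntu 18.04 / JetPack 4.x):

```shell
# version_lt A B: true if version A sorts strictly before version B
version_lt() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]
}

# Newer ollama builds need GLIBC >= 2.28; Ubuntu 18.04 ships 2.27.
glibc_ver=$(ldd --version | head -n1 | awk '{print $NF}')
if version_lt "$glibc_ver" "2.28"; then
  echo "glibc $glibc_ver is too old for current ollama;"
  echo "pin an older release: curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.5.12 sh"
fi
```

The 2.28 cutoff is inferred from the `GLIBC_2.28 not found` error in the log; 0.5.12 is simply the last version the issue thread reports working on glibc 2.27.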
Docker (CPU only):
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
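Whether installed natively or via Docker, the service listens on port 11434, so you can skip the interactive prompt and hit the REST API directly. A sketch using the `/api/generate` endpoint, assuming the service is running and llama3.2 has already been pulled:

```shell
# One-shot (non-streaming) generation request against the local Ollama API.
payload='{"model": "llama3.2", "prompt": "hello", "stream": false}'
curl -s http://127.0.0.1:11434/api/generate -d "$payload" \
  || echo "ollama service not reachable on 127.0.0.1:11434"
```

With "stream": false the reply comes back as a single JSON object whose "response" field holds the generated text.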