$ curl -fsSL https://ollama.com/install.sh | sh
>>> Installing ollama to /usr/local
[sudo] password for minimonk:
>>> Downloading ollama-linux-amd64.tar.zst
######################################################################## 100.0%
>>> Creating ollama user...
>>> Adding ollama user to render group...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.
>>> The Ollama API is now available at 127.0.0.1:11434.
>>> Install complete. Run "ollama" from the command line.
WARNING: No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode.
$ ollama
Ollama 0.20.3

  Chat with a model       Start an interactive chat with a model
  Launch Claude Code      (not installed) Anthropic's coding tool with subagents

  Recommended
  gemma4                  Reasoning and code generation locally, ~16GB, (not downloaded)
  qwen3.5                 Reasoning, coding, and visual understanding locally, ~11GB, (not downloaded)
  kimi-k2.5:cloud         Multimodal reasoning with subagents
  qwen3.5:cloud           Reasoning, coding, and agentic tool use with vision
  glm-5:cloud             Reasoning and code generation
  minimax-m2.7:cloud      Fast, efficient coding and real-world productivity
  More ▸                  llama3.2

  ↑/↓ navigate • enter select • ← back
$ ollama --help
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start Ollama
  create      Create a model
  show        Show information for a model
  run         Run a model
  stop        Stop a running model
  pull        Pull a model from a registry
  push        Push a model to a registry
  signin      Sign in to ollama.com
  signout     Sign out from ollama.com
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  launch      Launch the Ollama menu or an integration
  help        Help about any command

Flags:
  -h, --help         help for ollama
      --nowordwrap   Don't wrap words to the next line automatically
      --verbose      Show timings for response
  -v, --version      Show version information

Use "ollama [command] --help" for more information about a command.
I want to fire off a one-shot question and check the speed.
Hmm.. is the eval rate what you actually perceive as the generation speed? 12 token/s.
It seems output needs to come at roughly this rate for the text to appear at a comfortable reading pace without getting boring.
$ ollama run llama3.2:latest "안녕" --verbose
안녕하세요! (Hello!) I'm here to help you. How can I assist you today?
1. **Analyze the Request:** The user asked "너에 대해서 소개해봐" (Tell me about yourself).
2. **Determine Identity & Core Facts (Self-Correction/Verification):**
   * Name: Gemma 4.
   * Developer: Google DeepMind.
   * Nature: Large Language Model (LLM).
   * Type: Open weights model.
   * Capabilities: Understands text/images, generates text.
   * Knowledge Cutoff: (Relevant for context, but usually stated implicitly)
3. **Structure the Introduction (Korean Context):** A good introduction should cover:
   * Greeting.
   * Name/Identity.
   * Developer/Origin.
   * Nature/Function (What I do).
   * Capabilities (What I can process).
4. **Draft the Response (Internal Monologue & Refinement):**
   * *Start warmly.* (안녕하세요!)
   * *State Identity.* (저는 Gemma 4입니다.)
   * *State Origin.* (Google DeepMind에서 개발했습니다.)
   * *State Nature.* (저는 대규모 언어 모델(LLM)입니다.)
   * *State Function/Role.* (사용자의 질문에 답하고, 정보를 제공하며, 다양한 텍스트 기반 작업을 돕는 역할을 합니다.)
   * *State Modality/Input.* (텍스트와 이미지를 이해할 수 있습니다.)
   * *Concluding remark/Offer help.* (어떤 것을 도와드릴까요?)
5. **Final Output Generation (Review against constraints):** The generated response must adhere to the core identity (Gemma 4, LLM, Google DeepMind, open weights). (The draft looks good and adheres to the required identity.)

...done thinking.
Hello! I'm **Gemma 4**.

I'm a **large language model (LLM)** developed by Google DeepMind, released as an **open weights** model.

**My main characteristics are:**

1. **Identity:** I've been trained on vast amounts of data, which gives me the ability to understand and generate human language, answer a wide range of questions, and provide information.
2. **Functions:** I can perform a variety of language tasks, such as text-based conversation, summarization, translation, writing, and code generation.
3. **Input capability:** I can understand and process both text and images. (However, my output is text only.)
4. **Goal:** I'm designed to help users obtain the information they need, carry out creative work, and solve complex problems.
    __bss_start = .;
    .bss : {
        bss = .;
        *(.bss)
    }
    . = ALIGN(4096);    /* align to page size */
    __bss_end = .;
    __bss_size = __bss_end - __bss_start;
    __end = .;
}

Those symbols aren't really "used" inside the linker script itself; they only take on meaning when they are referenced from source code (a C file or an assembly file). So how do we reference those symbols from source code? Let's look at the code below.
When building with gcc, to detect (or to announce) that the current build is C++ rather than C,
the compiler seems to predefine __cplusplus, as if -D__cplusplus had been passed on the command line.
But why only two underscores at the front, and not __cplusplus__.. -_-?
__STDC__ In normal operation, this macro expands to the constant 1, to signify that this compiler conforms to ISO Standard C. If GNU CPP is used with a compiler other than GCC, this is not necessarily true; however, the preprocessor always conforms to the standard unless the -traditional-cpp option is used.
This macro is not defined if the -traditional-cpp option is used.
On some hosts, the system compiler uses a different convention, where __STDC__ is normally 0, but is 1 if the user specifies strict conformance to the C Standard. CPP follows the host convention when processing system header files, but when processing user files __STDC__ is always 1. This has been reported to cause problems; for instance, some versions of Solaris provide X Windows headers that expect __STDC__ to be either undefined or 1. See Invocation.
__STDC_VERSION__ This macro expands to the C Standard’s version number, a long integer constant of the form yyyymmL where yyyy and mm are the year and month of the Standard version. This signifies which version of the C Standard the compiler conforms to. Like __STDC__, this is not necessarily accurate for the entire implementation, unless GNU CPP is being used with GCC.
The value 199409L signifies the 1989 C standard as amended in 1994, which is the current default; the value 199901L signifies the 1999 revision of the C standard; the value 201112L signifies the 2011 revision of the C standard; the value 201710L signifies the 2017 revision of the C standard (which is otherwise identical to the 2011 version apart from correction of defects). The value 202311L is used for the -std=c23 and -std=gnu23 modes. An unspecified value larger than 202311L is used for the experimental -std=c2y and -std=gnu2y modes.
This macro is not defined if the -traditional-cpp option is used, nor when compiling C++ or Objective-C.
__STDC_HOSTED__ This macro is defined, with value 1, if the compiler’s target is a hosted environment. A hosted environment has the complete facilities of the standard C library available.
__cplusplus This macro is defined when the C++ compiler is in use. You can use __cplusplus to test whether a header is compiled by a C compiler or a C++ compiler. This macro is similar to __STDC_VERSION__, in that it expands to a version number. Depending on the language standard selected, the value of the macro is 199711L for the 1998 C++ standard, 201103L for the 2011 C++ standard, 201402L for the 2014 C++ standard, 201703L for the 2017 C++ standard, 202002L for the 2020 C++ standard, 202302L for the 2023 C++ standard, or an unspecified value strictly larger than 202302L for the experimental languages enabled by -std=c++26 and -std=gnu++26.
__OBJC__ This macro is defined, with value 1, when the Objective-C compiler is in use. You can use __OBJC__ to test whether a header is compiled by a C compiler or an Objective-C compiler.
__ASSEMBLER__ This macro is defined with value 1 when preprocessing assembly language.
lv_demo_benchmark.h

/**
 * Run all benchmark scenes.
 *
 * On the summary end screen the values shall be interpreted according to the following:
 * - CPU usage:
 *   - If `LV_SYSMON_GET_IDLE` is not modified it's measured based on the time spent in
 *     `lv_timer_handler`.
 *   - If an (RT)OS is used `LV_SYSMON_GET_IDLE` can be changed to a custom function
 *     which returns the idle percentage of the idle task.
 *
 * - FPS: LVGL attempted to render this many times in a second. It's limited based on
 *   `LV_DEF_REFR_PERIOD`.
 *
 * - Render time: LVGL spent this much time with rendering only. It's not aware of task yielding,
 *   but simply the time difference between the start and end of the rendering is measured.
 *
 * - Flush time: It's the sum of
 *   - the time spent in the `flush_cb` and
 *   - the time spent with waiting for flush ready.
 */
void lv_demo_benchmark(void);
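For context, these metrics come out of a main loop like the following. This is only a structural sketch and not buildable as-is: display/input driver registration, the tick source, and the exact include path all vary per platform and LVGL version.

```c
#include "lvgl.h"
#include "demos/benchmark/lv_demo_benchmark.h"  /* path may differ per setup */

int main(void)
{
    lv_init();
    /* ... register a display driver and an input device here ... */

    lv_demo_benchmark();     /* runs all scenes, then shows the summary screen */

    for (;;) {
        lv_timer_handler();  /* CPU usage is derived from time spent here,
                              * unless LV_SYSMON_GET_IDLE is overridden */
        /* sleep ~5 ms between iterations */
    }
}
```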
/**
 * @file lv_conf.h
 * Configuration file for v9.3.0-dev
 */

/*
 * Copy this file as `lv_conf.h`
 * 1. simply next to `lvgl` folder
 * 2. or to any other place and
 *    - define `LV_CONF_INCLUDE_SIMPLE`;
 *    - add the path as an include path.
 */
/* clang-format off */
#if 1 /* Set this to "1" to enable content */

#ifndef LV_CONF_H
#define LV_CONF_H
#define LV_BUILD_TEST_PERF 1
#define LV_USE_TEST 1

/* If you need to include anything here, do it inside the `__ASSEMBLY__` guard */
#if 0 && defined(__ASSEMBLY__)
#include "my_include.h"
#endif
/*====================
   HAL SETTINGS
 *====================*/
/** Default Dots Per Inch. Used to initialize default sizes such as widget sizes and style paddings.
 * (Not so important, you can adjust it to modify default sizes and spaces.) */
#define LV_DPI_DEF 130 /**< [px/inch] */