Feels like I'm back on poop-scooping duty after six months.. sigh..
Coco (1 year old, sex unknown, conure)
I keep saying let's just call it Kkokko.. -_-
Ran it out of boredom(?) and, huh...
Probably because the depth camera and the RGB camera sit some distance apart, parallax causes errors,
and at certain angles the mapping comes out wrong like this.
(I'm holding a stick behind the laptop, but in the 3D composite it looks like it punches through, as if transparent.)
Is this a limitation of libfreenect2.. or does the Windows version do this too?

I couldn't understand the mapping code inside libfreenect2, so I looked for other material, but I still don't get it..
Code-wise there doesn't seem to be much difference, but compositing the depth map onto the color map appears to give the more natural result.
The following snippet shows us how to locate the color bytes from the ColorSpacePoint:

```csharp
// we need a starting point, let's pick 0 for now
int index = 0;
ushort depth = _depthData[index];
ColorSpacePoint point = _colorSpacePoints[index];

// round down to the nearest pixel
int colorX = (int)Math.Floor(point.X + 0.5);
int colorY = (int)Math.Floor(point.Y + 0.5);

// make sure the pixel is part of the image
if ((colorX >= 0) && (colorX < colorWidth) && (colorY >= 0) && (colorY < colorHeight))
{
    int colorImageIndex = ((colorWidth * colorY) + colorX) * bytesPerPixel;
    byte b = _colorFrameData[colorImageIndex];
    byte g = _colorFrameData[colorImageIndex + 1];
    byte r = _colorFrameData[colorImageIndex + 2];
    byte a = _colorFrameData[colorImageIndex + 3];
}
```
Anyway, since the depth sensor has a wider FOV and is therefore more distorted,
when mapping onto the depth map it looks as though lens distortion correction was applied,
but the code doesn't seem to contain anything for that.. so what is it actually doing?
```csharp
// clear the pixels before we color them
Array.Clear(_pixels, 0, _pixels.Length);

for (int depthIndex = 0; depthIndex < _depthData.Length; ++depthIndex)
{
    ColorSpacePoint point = _colorSpacePoints[depthIndex];

    int colorX = (int)Math.Floor(point.X + 0.5);
    int colorY = (int)Math.Floor(point.Y + 0.5);

    if ((colorX >= 0) && (colorX < colorWidth) && (colorY >= 0) && (colorY < colorHeight))
    {
        int colorImageIndex = ((colorWidth * colorY) + colorX) * bytesPerPixel;
        int depthPixel = depthIndex * bytesPerPixel;

        _pixels[depthPixel] = _colorData[colorImageIndex];
        _pixels[depthPixel + 1] = _colorData[colorImageIndex + 1];
        _pixels[depthPixel + 2] = _colorData[colorImageIndex + 2];
        _pixels[depthPixel + 3] = 255;
    }
}
```
[링크 : https://www.bryancook.net/2014/03/mapping-between-kinect-color-and-depth.html]
[링크 : https://stackoverflow.com/questions/29479746/kinect-v2-color-depth-mapping-using-c-sharp]
[링크 : https://kr.mathworks.com/matlabcentral/answers/268152-mapping-rgb-and-depth-kinect-v2]
[링크 : https://tommyhsm.tistory.com/124]
I got curious how the Protonect example in libfreenect2 maps rgb and depth when it runs, but there isn't much material on it.
```cpp
/// [registration setup]
libfreenect2::Registration* registration = new libfreenect2::Registration(dev->getIrCameraParams(), dev->getColorCameraParams());
libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4);

/// [loop start]
while(!protonect_shutdown && (framemax == (size_t)-1 || framecount < framemax))
{
  if (!listener.waitForNewFrame(frames, 10*1000)) // 10 seconds
  {
    std::cout << "timeout!" << std::endl;
    return -1;
  }
  libfreenect2::Frame *rgb = frames[libfreenect2::Frame::Color];
  libfreenect2::Frame *ir = frames[libfreenect2::Frame::Ir];
  libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];
  /// [loop start]

  if (enable_rgb && enable_depth)
  {
    /// [registration]
    registration->apply(rgb, depth, &undistorted, &registered);
    /// [registration]
  }

  framecount++;
  if (!viewer_enabled)
  {
    if (framecount % 100 == 0)
      std::cout << "The viewer is turned off. Received " << framecount << " frames. Ctrl-C to stop." << std::endl;
    listener.release(frames);
    continue;
  }

#ifdef EXAMPLES_WITH_OPENGL_SUPPORT
  if (enable_rgb)
  {
    viewer.addFrame("RGB", rgb);
  }
  if (enable_depth)
  {
    viewer.addFrame("ir", ir);
    viewer.addFrame("depth", depth);
  }
  if (enable_rgb && enable_depth)
  {
    viewer.addFrame("registered", &registered);
  }

  protonect_shutdown = protonect_shutdown || viewer.render();
#endif

  /// [loop end]
  listener.release(frames);
  /** libfreenect2::this_thread::sleep_for(libfreenect2::chrono::milliseconds(100)); */
}
/// [loop end]
```
```cpp
void Registration::apply(const Frame *rgb, const Frame *depth, Frame *undistorted, Frame *registered, const bool enable_filter, Frame *bigdepth, int *color_depth_map) const
{
  impl_->apply(rgb, depth, undistorted, registered, enable_filter, bigdepth, color_depth_map);
}

void RegistrationImpl::apply(const Frame *rgb, const Frame *depth, Frame *undistorted, Frame *registered, const bool enable_filter, Frame *bigdepth, int *color_depth_map) const
{
  // Check if all frames are valid and have the correct size
  if (!rgb || !depth || !undistorted || !registered ||
      rgb->width != 1920 || rgb->height != 1080 || rgb->bytes_per_pixel != 4 ||
      depth->width != 512 || depth->height != 424 || depth->bytes_per_pixel != 4 ||
      undistorted->width != 512 || undistorted->height != 424 || undistorted->bytes_per_pixel != 4 ||
      registered->width != 512 || registered->height != 424 || registered->bytes_per_pixel != 4)
    return;

  const float *depth_data = (float*)depth->data;
  const unsigned int *rgb_data = (unsigned int*)rgb->data;
  float *undistorted_data = (float*)undistorted->data;
  unsigned int *registered_data = (unsigned int*)registered->data;
  const int *map_dist = distort_map;
  const float *map_x = depth_to_color_map_x;
  const int *map_yi = depth_to_color_map_yi;

  const int size_depth = 512 * 424;
  const int size_color = 1920 * 1080;
  const float color_cx = color.cx + 0.5f; // 0.5f added for later rounding

  // size of filter map with a border of filter_height_half on top and bottom so that no check for borders is needed.
  // since the color image is wide angle no border to the sides is needed.
  const int size_filter_map = size_color + 1920 * filter_height_half * 2;
  // offset to the important data
  const int offset_filter_map = 1920 * filter_height_half;

  // map for storing the min z values used for each color pixel
  float *filter_map = NULL;
  // pointer to the beginning of the important data
  float *p_filter_map = NULL;

  // map for storing the color offset for each depth pixel
  int *depth_to_c_off = color_depth_map ? color_depth_map : new int[size_depth];
  int *map_c_off = depth_to_c_off;

  // initializing the depth_map with values outside of the Kinect2 range
  if(enable_filter){
    filter_map = bigdepth ? (float*)bigdepth->data : new float[size_filter_map];
    p_filter_map = filter_map + offset_filter_map;

    for(float *it = filter_map, *end = filter_map + size_filter_map; it != end; ++it){
      *it = std::numeric_limits<float>::infinity();
    }
  }

  /* Fix depth distortion, and compute pixel to use from 'rgb' based on depth measurement,
   * stored as x/y offset in the rgb data.
   */
  // iterating over all pixels from undistorted depth and registered color image
  // the four maps have the same structure as the images, so their pointers are increased each iteration as well
  for(int i = 0; i < size_depth; ++i, ++undistorted_data, ++map_dist, ++map_x, ++map_yi, ++map_c_off){
    // getting index of distorted depth pixel
    const int index = *map_dist;

    // check if distorted depth pixel is outside of the depth image
    if(index < 0){
      *map_c_off = -1;
      *undistorted_data = 0;
      continue;
    }

    // getting depth value for current pixel
    const float z = depth_data[index];
    *undistorted_data = z;

    // checking for invalid depth value
    if(z <= 0.0f){
      *map_c_off = -1;
      continue;
    }

    // calculating x offset for rgb image based on depth value
    const float rx = (*map_x + (color.shift_m / z)) * color.fx + color_cx;
    const int cx = rx; // same as round for positive numbers (0.5f was already added to color_cx)
    // getting y offset for depth image
    const int cy = *map_yi;
    // combining offsets
    const int c_off = cx + cy * 1920;

    // check if c_off is outside of rgb image
    // checking rx/cx is not needed because the color image is much wider than the depth image
    if(c_off < 0 || c_off >= size_color){
      *map_c_off = -1;
      continue;
    }

    // saving the offset for later
    *map_c_off = c_off;

    if(enable_filter){
      // setting a window around the filter map pixel corresponding to the color pixel with the current z value
      int yi = (cy - filter_height_half) * 1920 + cx - filter_width_half; // index of first pixel to set
      for(int r = -filter_height_half; r <= filter_height_half; ++r, yi += 1920) // index increased by a full row each iteration
      {
        float *it = p_filter_map + yi;
        for(int c = -filter_width_half; c <= filter_width_half; ++c, ++it)
        {
          // only set if the current z is smaller
          if(z < *it)
            *it = z;
        }
      }
    }
  }

  /* Construct 'registered' image. */
  // resetting the pointers to the beginning
  map_c_off = depth_to_c_off;
  undistorted_data = (float*)undistorted->data;

  /* Filter drops duplicate pixels due to aspect of two cameras. */
  if(enable_filter){
    // run through all registered color pixels and set them based on filter results
    for(int i = 0; i < size_depth; ++i, ++map_c_off, ++undistorted_data, ++registered_data){
      const int c_off = *map_c_off;

      // check if offset is out of image
      if(c_off < 0){
        *registered_data = 0;
        continue;
      }

      const float min_z = p_filter_map[c_off];
      const float z = *undistorted_data;

      // check for allowed depth noise
      *registered_data = (z - min_z) / z > filter_tolerance ? 0 : *(rgb_data + c_off);
    }

    if (!bigdepth)
      delete[] filter_map;
  }
  else
  {
    // run through all registered color pixels and set them based on c_off
    for(int i = 0; i < size_depth; ++i, ++map_c_off, ++registered_data){
      const int c_off = *map_c_off;

      // check if offset is out of image
      *registered_data = c_off < 0 ? 0 : *(rgb_data + c_off);
    }
  }

  if (!color_depth_map)
    delete[] depth_to_c_off;
}
```
My rhythm is all broken, what a mess ㅜㅠ
A long vacation really is nice.
Without having to rush around here and there, I feel relaxed and happy.
At a glance it looks like it simply transforms and crops to attach the depth information.. is that right?
[링크 : https://tommyhsm.tistory.com/124]

[링크 : https://www.researchgate.net/publication/340527659_Color_and_depth_mapping_of_Kinect_v2]
[링크 : https://m.blog.naver.com/sense_sciencefiction/221967976514]
I barely play it anymore, so deleting it for now!

min, max, and mean are the minimum, maximum, and average,
but r-tol(min/max) and t-tol(min/max) also appear, and their purpose is unclear, so I'm searching.

2.1.4. Tolerance

Resistance tolerance
The resistance tolerance for an NTC thermistor is specified for one temperature point, which is application specific; the standard value is usually 25 °C. It is also possible to specify it at other temperatures upon customer request.

Temperature tolerance
By means of Formula 3, the temperature tolerance can be calculated for a small temperature interval with the following formula:

ΔT = (1/α) · (ΔR/R)   (Formula 6)

For practical application, we recommend that the standardized R/T table be used.
It appears you can read Blender files with the Open Asset Import Library (assimp) and draw them with OpenGL.
[링크 : https://sourceforge.net/projects/assimp/]
```cpp
Assimp::Importer importer;
const aiScene *scene = importer.ReadFile(filename, aiProcessPreset_TargetRealtime_Fast);

aiMesh *mesh = scene->mMeshes[0]; // assuming you only want the first mesh

float *vertexArray;
float *normalArray;
float *uvArray;
int numVerts;

// next extract the data
numVerts = mesh->mNumFaces * 3;

vertexArray = new float[mesh->mNumFaces * 3 * 3];
normalArray = new float[mesh->mNumFaces * 3 * 3];
uvArray = new float[mesh->mNumFaces * 3 * 2];

for (unsigned int i = 0; i < mesh->mNumFaces; i++)
{
    const aiFace& face = mesh->mFaces[i];

    for (int j = 0; j < 3; j++)
    {
        aiVector3D uv = mesh->mTextureCoords[0][face.mIndices[j]];
        memcpy(uvArray, &uv, sizeof(float) * 2);
        uvArray += 2;

        aiVector3D normal = mesh->mNormals[face.mIndices[j]];
        memcpy(normalArray, &normal, sizeof(float) * 3);
        normalArray += 3;

        aiVector3D pos = mesh->mVertices[face.mIndices[j]];
        memcpy(vertexArray, &pos, sizeof(float) * 3);
        vertexArray += 3;
    }
}

// rewind the pointers back to the start of each array
uvArray -= mesh->mNumFaces * 3 * 2;
normalArray -= mesh->mNumFaces * 3 * 3;
vertexArray -= mesh->mNumFaces * 3 * 3;
```
[링크 : https://nickthecoder.wordpress.com/2013/01/20/mesh-loading-with-assimp/]
[링크 : https://stackoverflow.com/questions/35111681/make-a-model-in-blender-and-load-in-opengl]