Posts filed under '잡동사니' (miscellany): 13,306
- 2024.01.04 Heading overseas~
- 2024.01.03 SSD (Single Shot Detector) output post-processing
- 2024.01.02 MariaDB settings on a Raspberry Pi
- 2024.01.02 Attack on party leader Lee Jae-myung
- 2024.01.02 Installing and running Jupyter Notebook on Ubuntu
- 2024.01.02 Running a Jupyter Notebook project(?)
- 2024.01.02 i.MX8MP GoPoint launch paths
- 2024.01.02 TensorFlow Keras datasets
- 2024.01.02 TensorFlow Lite / MNIST training
- 2024.01.02 Vietnam SIM card payment
Not sure whether this applies to plain SSD or to the SSD + MobileNet v2 variant.
[link : https://stackoverflow.com/questions/67868644/post-process-of-tf2-ssd-detection-models]
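For reference, TF2 Object Detection API SSD models expose `detection_boxes`, `detection_scores`, `detection_classes`, and `num_detections` outputs; below is a minimal post-processing sketch over made-up arrays (the threshold and data are illustrative, not taken from the linked answer):

```python
import numpy as np

def filter_detections(boxes, scores, classes, num_detections, score_threshold=0.5):
    """Keep only detections above the score threshold.

    boxes:   (N, 4) array of [ymin, xmin, ymax, xmax] in normalized coords
    scores:  (N,) confidence per detection
    classes: (N,) class ids; num_detections: how many entries are valid.
    """
    n = int(num_detections)
    keep = scores[:n] >= score_threshold
    return boxes[:n][keep], scores[:n][keep], classes[:n][keep]

# Made-up model output for illustration
boxes = np.array([[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.9, 0.9], [0.0, 0.0, 0.1, 0.1]])
scores = np.array([0.92, 0.40, 0.88])
classes = np.array([1, 17, 1])
b, s, c = filter_detections(boxes, scores, classes, num_detections=3)
print(len(b))  # 2 detections survive the 0.5 threshold
```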
Line ends trimmed for convenience (they were too long).
Even at a glance.. these don't look tuned for the Raspberry Pi's memory size.
MariaDB [(none)]> show variables;
Variable_name  Value
alter_algorithm DEFAULT analyze_sample_percentage 100.000000 aria_block_size 8192 aria_checkpoint_interval 30 aria_checkpoint_log_activity 1048576 aria_encrypt_tables OFF aria_force_start_after_recovery_failures 0 
aria_group_commit none aria_group_commit_interval 0 aria_log_dir_path /var/lib/mysql/ aria_log_file_size 1073741824 aria_log_purge_type immediate aria_max_sort_file_size 9223372036853727232 aria_page_checksum ON aria_pagecache_age_threshold 300 aria_pagecache_buffer_size 134217728 aria_pagecache_division_limit 100 aria_pagecache_file_hash_size 512 aria_recover_options BACKUP,QUICK aria_repair_threads 1 aria_sort_buffer_size 268434432 aria_stats_method nulls_unequal aria_sync_log_dir NEWFILE aria_used_for_temp_tables ON auto_increment_increment 1 auto_increment_offset 1 autocommit ON automatic_sp_privileges ON back_log 80 basedir /usr big_tables OFF bind_address 127.0.0.1 binlog_annotate_row_events ON binlog_cache_size 32768 binlog_checksum CRC32 binlog_commit_wait_count 0 binlog_commit_wait_usec 100000 binlog_direct_non_transactional_updates OFF binlog_file_cache_size 16384 binlog_format MIXED binlog_optimize_thread_scheduling ON binlog_row_image FULL binlog_row_metadata NO_LOG binlog_stmt_cache_size 32768 bulk_insert_buffer_size 8388608 character_set_client utf8 character_set_connection utf8 character_set_database utf8mb4 character_set_filesystem binary character_set_results utf8 character_set_server utf8mb4 character_set_system utf8 character_sets_dir /usr/share/mysql/charsets/ check_constraint_checks ON collation_connection utf8_general_ci collation_database utf8mb4_general_ci collation_server utf8mb4_general_ci column_compression_threshold 100 column_compression_zlib_level 6 column_compression_zlib_strategy DEFAULT_STRATEGY column_compression_zlib_wrap OFF completion_type NO_CHAIN concurrent_insert AUTO connect_timeout 10 core_file OFF datadir /var/lib/mysql/ date_format %Y-%m-%d datetime_format %Y-%m-%d %H:%i:%s deadlock_search_depth_long 15 deadlock_search_depth_short 4 deadlock_timeout_long 50000000 deadlock_timeout_short 10000 debug_no_thread_alarm OFF default_master_connection default_password_lifetime 0 default_regex_flags default_storage_engine InnoDB 
default_tmp_storage_engine default_week_format 0 delay_key_write ON delayed_insert_limit 100 delayed_insert_timeout 300 delayed_queue_size 1000 disconnect_on_expired_password OFF div_precision_increment 4 encrypt_binlog OFF encrypt_tmp_disk_tables OFF encrypt_tmp_files OFF enforce_storage_engine eq_range_index_dive_limit 200 error_count 0 event_scheduler OFF expensive_subquery_limit 100 expire_logs_days 10 explicit_defaults_for_timestamp OFF external_user extra_max_connections 1 extra_port 0 flush OFF flush_time 0 foreign_key_checks ON ft_boolean_syntax + -><()~*:""& ft_max_word_len 84 ft_min_word_len 4 ft_query_expansion_limit 20 ft_stopword_file (built-in) general_log OFF general_log_file raspberrypi.log group_concat_max_len 1048576 gtid_binlog_pos gtid_binlog_state gtid_cleanup_batch_size 64 gtid_current_pos gtid_domain_id 0 gtid_ignore_duplicates OFF gtid_pos_auto_engines gtid_seq_no 0 gtid_slave_pos gtid_strict_mode OFF have_compress YES have_crypt YES have_dynamic_loading YES have_geometry YES have_openssl YES have_profiling YES have_query_cache YES have_rtree_keys YES have_ssl DISABLED have_symlink YES histogram_size 254 histogram_type DOUBLE_PREC_HB host_cache_size 279 hostname raspberrypi identity 0 idle_readonly_transaction_timeout 0 idle_transaction_timeout 0 idle_write_transaction_timeout 0 ignore_builtin_innodb OFF ignore_db_dirs in_predicate_conversion_threshold 1000 in_transaction 0 init_connect init_file init_slave innodb_adaptive_flushing ON innodb_adaptive_flushing_lwm 10.000000 innodb_adaptive_hash_index OFF innodb_adaptive_hash_index_parts 8 innodb_adaptive_max_sleep_delay 0 innodb_autoextend_increment 64 innodb_autoinc_lock_mode 1 innodb_background_scrub_data_check_interval 0 innodb_background_scrub_data_compressed OFF innodb_background_scrub_data_interval 0 innodb_background_scrub_data_uncompressed OFF innodb_buf_dump_status_frequency 0 innodb_buffer_pool_chunk_size 134217728 innodb_buffer_pool_dump_at_shutdown ON innodb_buffer_pool_dump_now 
OFF innodb_buffer_pool_dump_pct 25 innodb_buffer_pool_filename ib_buffer_pool innodb_buffer_pool_instances 1 innodb_buffer_pool_load_abort OFF innodb_buffer_pool_load_at_startup ON innodb_buffer_pool_load_now OFF innodb_buffer_pool_size 134217728 innodb_change_buffer_max_size 25 innodb_change_buffering none innodb_checksum_algorithm full_crc32 innodb_cmp_per_index_enabled OFF innodb_commit_concurrency 0 innodb_compression_algorithm zlib innodb_compression_default OFF innodb_compression_failure_threshold_pct 5 innodb_compression_level 6 innodb_compression_pad_pct_max 50 innodb_concurrency_tickets 0 innodb_data_file_path ibdata1:12M:autoextend innodb_data_home_dir innodb_deadlock_detect ON innodb_default_encryption_key_id 1 innodb_default_row_format dynamic innodb_defragment OFF innodb_defragment_fill_factor 0.900000 innodb_defragment_fill_factor_n_recs 20 innodb_defragment_frequency 40 innodb_defragment_n_pages 7 innodb_defragment_stats_accuracy 0 innodb_disable_sort_file_cache OFF innodb_doublewrite ON innodb_encrypt_log OFF innodb_encrypt_tables OFF innodb_encrypt_temporary_tables OFF innodb_encryption_rotate_key_age 1 innodb_encryption_rotation_iops 100 innodb_encryption_threads 0 innodb_fast_shutdown 1 innodb_fatal_semaphore_wait_threshold 600 innodb_file_format innodb_file_per_table ON innodb_fill_factor 100 innodb_flush_log_at_timeout 1 innodb_flush_log_at_trx_commit 1 innodb_flush_method fsync innodb_flush_neighbors 1 innodb_flush_sync ON innodb_flushing_avg_loops 30 innodb_force_load_corrupted OFF innodb_force_primary_key OFF innodb_force_recovery 0 innodb_ft_aux_table innodb_ft_cache_size 8000000 innodb_ft_enable_diag_print OFF innodb_ft_enable_stopword ON innodb_ft_max_token_size 84 innodb_ft_min_token_size 3 innodb_ft_num_word_optimize 2000 innodb_ft_result_cache_limit 2000000000 innodb_ft_server_stopword_table innodb_ft_sort_pll_degree 2 innodb_ft_total_cache_size 640000000 innodb_ft_user_stopword_table innodb_immediate_scrub_data_uncompressed OFF 
innodb_instant_alter_column_allowed add_drop_reorder innodb_io_capacity 200 innodb_io_capacity_max 2000 innodb_large_prefix innodb_lock_schedule_algorithm fcfs innodb_lock_wait_timeout 50 innodb_log_buffer_size 16777216 innodb_log_checksums ON innodb_log_compressed_pages ON innodb_log_file_size 100663296 innodb_log_files_in_group 1 innodb_log_group_home_dir ./ innodb_log_optimize_ddl OFF innodb_log_write_ahead_size 8192 innodb_lru_flush_size 32 innodb_lru_scan_depth 1536 innodb_max_dirty_pages_pct 90.000000 innodb_max_dirty_pages_pct_lwm 0.000000 innodb_max_purge_lag 0 innodb_max_purge_lag_delay 0 innodb_max_purge_lag_wait 4294967295 innodb_max_undo_log_size 10485760 innodb_monitor_disable innodb_monitor_enable innodb_monitor_reset innodb_monitor_reset_all innodb_old_blocks_pct 37 innodb_old_blocks_time 1000 innodb_online_alter_log_max_size 134217728 innodb_open_files 2000 innodb_optimize_fulltext_only OFF innodb_page_cleaners 1 innodb_page_size 16384 innodb_prefix_index_cluster_optimization OFF innodb_print_all_deadlocks OFF innodb_purge_batch_size 300 innodb_purge_rseg_truncate_frequency 128 innodb_purge_threads 4 innodb_random_read_ahead OFF innodb_read_ahead_threshold 56 innodb_read_io_threads 4 innodb_read_only OFF innodb_replication_delay 0 innodb_rollback_on_timeout OFF innodb_scrub_log OFF innodb_scrub_log_speed 256 innodb_sort_buffer_size 1048576 innodb_spin_wait_delay 4 innodb_stats_auto_recalc ON innodb_stats_include_delete_marked OFF innodb_stats_method nulls_equal innodb_stats_modified_counter 0 innodb_stats_on_metadata OFF innodb_stats_persistent ON innodb_stats_persistent_sample_pages 20 innodb_stats_traditional ON innodb_stats_transient_sample_pages 8 innodb_status_output OFF innodb_status_output_locks OFF innodb_strict_mode ON innodb_sync_array_size 1 innodb_sync_spin_loops 30 innodb_table_locks ON innodb_temp_data_file_path ibtmp1:12M:autoextend innodb_thread_concurrency 0 innodb_thread_sleep_delay 0 innodb_tmpdir innodb_undo_directory ./ 
innodb_undo_log_truncate OFF innodb_undo_logs 128 innodb_undo_tablespaces 0 innodb_use_atomic_writes ON innodb_use_native_aio ON innodb_version 10.5.21 innodb_write_io_threads 4 insert_id 0 interactive_timeout 28800 join_buffer_size 262144 join_buffer_space_limit 2097152 join_cache_level 2 keep_files_on_create OFF key_buffer_size 134217728 key_cache_age_threshold 300 key_cache_block_size 1024 key_cache_division_limit 100 key_cache_file_hash_size 512 key_cache_segments 0 large_files_support ON large_page_size 0 large_pages OFF last_gtid last_insert_id 0 lc_messages en_US lc_messages_dir /usr/share/mysql lc_time_names en_US license GPL local_infile ON lock_wait_timeout 86400 locked_in_memory OFF log_bin OFF log_bin_basename log_bin_compress OFF log_bin_compress_min_len 256 log_bin_index log_bin_trust_function_creators OFF log_disabled_statements sp log_error log_output FILE log_queries_not_using_indexes OFF log_slave_updates OFF log_slow_admin_statements ON log_slow_disabled_statements sp log_slow_filter admin,filesort,filesort_on_disk,filesort_priority_queue,full_join,full_scan,query_cache,query_cache_miss,tmp_table,tmp_table_on_disk log_slow_rate_limit 1 log_slow_slave_statements ON log_slow_verbosity log_tc_size 24576 log_warnings 2 long_query_time 10.000000 low_priority_updates OFF lower_case_file_system OFF lower_case_table_names 0 master_verify_checksum OFF max_allowed_packet 16777216 max_binlog_cache_size 4294963200 max_binlog_size 1073741824 max_binlog_stmt_cache_size 4294963200 max_connect_errors 100 max_connections 151 max_delayed_threads 20 max_digest_length 1024 max_error_count 64 max_heap_table_size 16777216 max_insert_delayed_threads 20 max_join_size 18446744073709551615 max_length_for_sort_data 1024 max_password_errors 4294967295 max_prepared_stmt_count 16382 max_recursive_iterations 4294967295 max_relay_log_size 1073741824 max_rowid_filter_size 131072 max_seeks_for_key 4294967295 max_session_mem_used 9223372036854775807 max_sort_length 1024 
max_sp_recursion_depth 0 max_statement_time 0.000000 max_tmp_tables 32 max_user_connections 0 max_write_lock_count 4294967295 metadata_locks_cache_size 1024 metadata_locks_hash_instances 8 min_examined_row_limit 0 mrr_buffer_size 262144 myisam_block_size 1024 myisam_data_pointer_size 6 myisam_max_sort_file_size 2146435072 myisam_mmap_size 4294967295 myisam_recover_options BACKUP,QUICK myisam_repair_threads 1 myisam_sort_buffer_size 134216704 myisam_stats_method NULLS_UNEQUAL myisam_use_mmap OFF mysql56_temporal_format ON net_buffer_length 16384 net_read_timeout 30 net_retry_count 10 net_write_timeout 60 old OFF old_alter_table DEFAULT old_mode old_passwords OFF open_files_limit 32186 optimizer_max_sel_arg_weight 32000 optimizer_prune_level 1 optimizer_search_depth 62 optimizer_selectivity_sampling_limit 100 optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on,not_null_range_scan=off optimizer_trace enabled=off optimizer_trace_max_mem_size 1048576 optimizer_use_condition_selectivity 4 performance_schema OFF performance_schema_accounts_size -1 performance_schema_digests_size -1 performance_schema_events_stages_history_long_size -1 performance_schema_events_stages_history_size -1 performance_schema_events_statements_history_long_size -1 
performance_schema_events_statements_history_size -1 performance_schema_events_transactions_history_long_size -1 performance_schema_events_transactions_history_size -1 performance_schema_events_waits_history_long_size -1 performance_schema_events_waits_history_size -1 performance_schema_hosts_size -1 performance_schema_max_cond_classes 90 performance_schema_max_cond_instances -1 performance_schema_max_digest_length 1024 performance_schema_max_file_classes 80 performance_schema_max_file_handles 32768 performance_schema_max_file_instances -1 performance_schema_max_index_stat -1 performance_schema_max_memory_classes 320 performance_schema_max_metadata_locks -1 performance_schema_max_mutex_classes 210 performance_schema_max_mutex_instances -1 performance_schema_max_prepared_statements_instances -1 performance_schema_max_program_instances -1 performance_schema_max_rwlock_classes 50 performance_schema_max_rwlock_instances -1 performance_schema_max_socket_classes 10 performance_schema_max_socket_instances -1 performance_schema_max_sql_text_length 1024 performance_schema_max_stage_classes 160 performance_schema_max_statement_classes 222 performance_schema_max_statement_stack 10 performance_schema_max_table_handles -1 performance_schema_max_table_instances -1 performance_schema_max_table_lock_stat -1 performance_schema_max_thread_classes 50 performance_schema_max_thread_instances -1 performance_schema_session_connect_attrs_size -1 performance_schema_setup_actors_size -1 performance_schema_setup_objects_size -1 performance_schema_users_size -1 pid_file /run/mysqld/mysqld.pid plugin_dir /usr/lib/mysql/plugin/ plugin_maturity gamma port 3306 preload_buffer_size 32768 profiling OFF profiling_history_size 15 progress_report_time 5 protocol_version 10 proxy_protocol_networks proxy_user pseudo_slave_mode OFF pseudo_thread_id 32 query_alloc_block_size 16384 query_cache_limit 1048576 query_cache_min_res_unit 4096 query_cache_size 1048576 query_cache_strip_comments OFF 
query_cache_type OFF query_cache_wlock_invalidate OFF query_prealloc_size 24576 rand_seed1 1024636563 rand_seed2 606536313 range_alloc_block_size 4096 read_binlog_speed_limit 0 read_buffer_size 131072 read_only OFF read_rnd_buffer_size 262144 relay_log relay_log_basename relay_log_index relay_log_info_file relay-log.info relay_log_purge ON relay_log_recovery OFF relay_log_space_limit 0 replicate_annotate_row_events ON replicate_do_db replicate_do_table replicate_events_marked_for_skip REPLICATE replicate_ignore_db replicate_ignore_table replicate_wild_do_table replicate_wild_ignore_table report_host report_password report_port 3306 report_user require_secure_transport OFF rowid_merge_buff_size 8388608 rpl_semi_sync_master_enabled OFF rpl_semi_sync_master_timeout 10000 rpl_semi_sync_master_trace_level 32 rpl_semi_sync_master_wait_no_slave ON rpl_semi_sync_master_wait_point AFTER_COMMIT rpl_semi_sync_slave_delay_master OFF rpl_semi_sync_slave_enabled OFF rpl_semi_sync_slave_kill_conn_timeout 5 rpl_semi_sync_slave_trace_level 32 secure_auth ON secure_file_priv secure_timestamp NO server_id 1 session_track_schema ON session_track_state_change OFF session_track_system_variables autocommit,character_set_client,character_set_connection,character_set_results,time_zone session_track_transaction_info OFF skip_external_locking ON skip_name_resolve OFF skip_networking OFF skip_parallel_replication OFF skip_replication OFF skip_show_database OFF slave_compressed_protocol OFF slave_ddl_exec_mode IDEMPOTENT slave_domain_parallel_threads 0 slave_exec_mode STRICT slave_load_tmpdir /tmp slave_max_allowed_packet 1073741824 slave_net_timeout 60 slave_parallel_max_queued 131072 slave_parallel_mode optimistic slave_parallel_threads 0 slave_parallel_workers 0 slave_run_triggers_for_rbr NO slave_skip_errors OFF slave_sql_verify_checksum ON slave_transaction_retries 10 slave_transaction_retry_errors 1158,1159,1160,1161,1205,1213,1429,2013,12701 slave_transaction_retry_interval 0 
slave_type_conversions slow_launch_time 2 slow_query_log OFF slow_query_log_file raspberrypi-slow.log socket /run/mysqld/mysqld.sock sort_buffer_size 2097152 sql_auto_is_null OFF sql_big_selects ON sql_buffer_result OFF sql_if_exists OFF sql_log_bin ON sql_log_off OFF sql_mode STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION sql_notes ON sql_quote_show_create ON sql_safe_updates OFF sql_select_limit 18446744073709551615 sql_slave_skip_counter 0 sql_warnings OFF ssl_ca ssl_capath ssl_cert ssl_cipher ssl_crl ssl_crlpath ssl_key standard_compliant_cte ON storage_engine InnoDB stored_program_cache 256 strict_password_validation ON sync_binlog 0 sync_frm ON sync_master_info 10000 sync_relay_log 10000 sync_relay_log_info 10000 system_time_zone KST system_versioning_alter_history ERROR system_versioning_asof DEFAULT table_definition_cache 400 table_open_cache 2000 table_open_cache_instances 8 tcp_keepalive_interval 0 tcp_keepalive_probes 0 tcp_keepalive_time 0 tcp_nodelay ON thread_cache_size 151 thread_handling one-thread-per-connection thread_pool_dedicated_listener OFF thread_pool_exact_stats OFF thread_pool_idle_timeout 60 thread_pool_max_threads 65536 thread_pool_oversubscribe 3 thread_pool_prio_kickup_timer 1000 thread_pool_priority auto thread_pool_size 4 thread_pool_stall_limit 500 thread_stack 299008 time_format %H:%i:%s time_zone SYSTEM timestamp 1704202334.478383 tls_version TLSv1.1,TLSv1.2,TLSv1.3 tmp_disk_table_size 4294967295 tmp_memory_table_size 16777216 tmp_table_size 16777216 tmpdir /tmp transaction_alloc_block_size 8192 transaction_prealloc_size 4096 tx_isolation REPEATABLE-READ tx_read_only OFF unique_checks ON updatable_views_with_limit YES use_stat_tables PREFERABLY_FOR_QUERIES userstat OFF version 10.5.21-MariaDB-0+deb11u1 version_comment Raspbian 11 version_compile_machine armv7l version_compile_os debian-linux-gnueabihf version_malloc_library system version_source_revision 
bed70468ea08c2820647f5e3ac006a9ff88144ac version_ssl_library OpenSSL 1.1.1w 11 Sep 2023 wait_timeout 28800 warning_count 0 wsrep_osu_method TOI wsrep_sr_store table wsrep_auto_increment_control ON wsrep_causal_reads OFF wsrep_certification_rules strict wsrep_certify_nonpk ON wsrep_cluster_address wsrep_cluster_name my_wsrep_cluster wsrep_convert_lock_to_trx OFF wsrep_data_home_dir /var/lib/mysql/ wsrep_dbug_option wsrep_debug NONE wsrep_desync OFF wsrep_dirty_reads OFF wsrep_drupal_282555_workaround OFF wsrep_forced_binlog_format NONE wsrep_gtid_domain_id 0 wsrep_gtid_mode OFF wsrep_gtid_seq_no 0 wsrep_ignore_apply_errors 7 wsrep_load_data_splitting OFF wsrep_log_conflicts OFF wsrep_max_ws_rows 0 wsrep_max_ws_size 2147483647 wsrep_mysql_replication_bundle 0 wsrep_node_address wsrep_node_incoming_address AUTO wsrep_node_name raspberrypi wsrep_notify_cmd wsrep_on OFF wsrep_patch_version wsrep_26.22 wsrep_provider none wsrep_provider_options wsrep_recover OFF wsrep_reject_queries NONE wsrep_replicate_myisam OFF wsrep_restart_slave OFF wsrep_retry_autocommit 1 wsrep_slave_fk_checks ON wsrep_slave_uk_checks OFF wsrep_slave_threads 1 wsrep_sst_auth wsrep_sst_donor wsrep_sst_donor_rejects_queries OFF wsrep_sst_method rsync wsrep_sst_receive_address AUTO wsrep_start_position 00000000-0000-0000-0000-000000000000:-1 wsrep_strict_ddl OFF wsrep_sync_wait 0 wsrep_trx_fragment_size 0 wsrep_trx_fragment_unit bytes 
667 rows in set (0.039 sec)
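To put numbers on that impression: the three big cache settings in the dump are each set to 128 MB, which adds up fast on a Pi — a quick bit of arithmetic (byte values copied from the output above):

```python
# Cache sizes as reported by `show variables` above, in bytes
caches = {
    "key_buffer_size": 134217728,             # MyISAM key cache
    "aria_pagecache_buffer_size": 134217728,  # Aria page cache
    "innodb_buffer_pool_size": 134217728,     # InnoDB buffer pool
}
total_mb = sum(caches.values()) / (1024 * 1024)
for name, val in caches.items():
    print(f"{name}: {val // (1024 * 1024)} MB")
print(f"total: {total_mb:.0f} MB")  # 384 MB of caches before any connection buffers
```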
No matter how I look at it, it only looks like a precise stab at the neck with intent to kill,
yet most articles mention it flatly, with little more nuance than "a 1 cm cut"..
[link : https://www.youtube.com/clip/UgkxzsZJQ6VySLJVHMDF-k5DzKiMsxukRGMJ]
Maybe it's obvious.. it's written in Python, so installing with pip is all it takes.

$ pip install notebook
$ jupyter notebook

[link : https://jupyter.org/install]
Launch Jupyter Notebook and it comes up in the browser as below; double-click the ipynb to open it and that's it.
Anyway, Run > Run All Cells executes the cells in order.
When I actually tried to run it, though, it's two years old, so the packages have changed and it doesn't seem to work.. meh
+
Is it a numpy version problem.. it has to be below 1.25.0 but 1.26.2 is installed, hence the error..
Anything from 1.17.3 to 1.24.x should work, so I'll pin it to something suitable and try again.

$ pip show numpy
$ pip uninstall numpy
$ pip install numpy==1.16.4

[link : https://reyrei.tistory.com/m/28]
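A quick sanity check for whether an installed numpy falls inside that window — a rough sketch assuming plain `major.minor.patch` version strings (`in_range` is a hypothetical helper, not part of pip or numpy):

```python
def in_range(ver: str, lo=(1, 17, 3), hi=(1, 25, 0)) -> bool:
    """True if lo <= ver < hi, comparing numeric (major, minor, patch) tuples.

    Assumes a plain numeric version string; pre-release suffixes (e.g. "rc1")
    would need extra handling.
    """
    parts = tuple(int(p) for p in ver.split(".")[:3])
    return lo <= parts < hi

print(in_range("1.26.2"))  # False — too new, matches the error above
print(in_range("1.24.4"))  # True — inside the 1.17.3 ~ 1.24.x window
```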
+
24.01.03
After sorting out numpy, now it magically gets stuck at tensorflow.keras instead.. -_-
I spotted the ipynb extension, so I'm checking it out.
[link : https://github.com/saunack/MobileNetv2-SSD]
Install Anaconda,
then install Jupyter Notebook from there, they say.
[link : https://mananacho.tistory.com/31]
[link : https://blog.naver.com/tamiel/221956194782]
The command-line route is somewhat involved, and since it's not like you're running without Jupyter anyway, I wonder how much point there is.

$ jupyter nbconvert --execute --to notebook lda2.ipynb
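If you do want to script it, the same nbconvert invocation can be wrapped from Python — a minimal sketch (`run_notebook_headless` is a hypothetical helper; the actual execution line is commented out so the sketch stands alone):

```python
import subprocess

def run_notebook_headless(path: str) -> list:
    """Build the nbconvert command shown above; returns the argv list."""
    cmd = ["jupyter", "nbconvert", "--execute", "--to", "notebook", path]
    # subprocess.run(cmd, check=True)  # uncomment to actually execute the notebook
    return cmd

print(run_notebook_headless("lda2.ipynb"))
```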
I got tired of digging through the docs, so I just ran it and looked at the process arguments..

root 3019 925 72 06:38 ? 00:00:25 /usr/bin/python3 /home/root/.nxp-demo-experience/scripts/machine_learning/MLDemoLauncher.py detect
root@imx8mpevk:~/.nxp-demo-experience/scripts/machine_learning# cat MLDemoLauncher.py #!/usr/bin/env python3 """ Copyright 2021-2023 NXP SPDX-License-Identifier: BSD-2-Clause This script launches the NNStreamer ML Demos using a UI to pick settings. """ import gi import os import sys import glob from gi.repository import Gtk, GLib, Gio gi.require_version("Gtk", "3.0") sys.path.append("/home/root/.nxp-demo-experience/scripts/") import utils class MLLaunch(Gtk.Window): """The GUI window for the ML demo launcher""" def __init__(self, demo): """Creates the UI window""" # Initialization self.demo = demo super().__init__(title=demo) self.set_default_size(450, 200) self.set_resizable(False) # Get platform self.platform = os.uname().nodename # OpenVX graph caching is not available on i.MX 8QuadMax platform. if self.platform != "imx8qmmek" : os.environ["VIV_VX_CACHE_BINARY_GRAPH_DIR"] = "/home/root/.cache/gopoint" os.environ["VIV_VX_ENABLE_CACHE_GRAPH_BINARY"] = "1" # Get widget properties devices = [] if self.demo != "brand" and self.demo != "selfie_nn": if self.platform != "imx93evk": devices.append("Example Video") for device in glob.glob("/dev/video*"): devices.append(device) backends_available = ["CPU"] if ( os.path.exists("/usr/lib/libvx_delegate.so") and self.demo != "pose" and self.demo != "selfie_nn" ): backends_available.insert(1, "GPU") if os.path.exists("/usr/lib/libneuralnetworks.so") and self.demo != "brand" and self.platform != "imx8qmmek": backends_available.insert(0, "NPU") if os.path.exists("/usr/lib/libethosu_delegate.so"): backends_available.insert(0, "NPU") backends_available.pop() displays_available = ["Weston"] colors_available = ["Red", "Green", "Blue", "Black", "White"] demo_modes_available = ["Background Substitution", "Segmentation Mask"] # Create widgets main_grid = Gtk.Grid.new() device_label = Gtk.Label.new("Source") self.device_combo = Gtk.ComboBoxText() backend_label = Gtk.Label.new("Backend") self.backend_combo = Gtk.ComboBoxText() 
self.display_combo = Gtk.ComboBoxText() self.launch_button = Gtk.Button.new_with_label("Run") self.status_bar = Gtk.Label.new() header = Gtk.HeaderBar() quit_button = Gtk.Button() quit_icon = Gio.ThemedIcon(name="process-stop-symbolic") quit_image = Gtk.Image.new_from_gicon(quit_icon, Gtk.IconSize.BUTTON) separator = Gtk.Separator.new(0) time_title_label = Gtk.Label.new("Video Refresh") self.time_label = Gtk.Label.new("--.-- ms") self.fps_label = Gtk.Label.new("-- FPS") inference_title_label = Gtk.Label.new("Inference Time") self.inference_label = Gtk.Label.new("--.-- ms") self.ips_label = Gtk.Label.new("-- IPS") if self.demo != "selfie_nn": self.width_entry = self.r_scale = Gtk.Scale.new_with_range( Gtk.Orientation.HORIZONTAL, 300, 1920, 2 ) self.height_entry = self.r_scale = Gtk.Scale.new_with_range( Gtk.Orientation.HORIZONTAL, 300, 1080, 2 ) self.width_label = Gtk.Label.new("Height") self.height_label = Gtk.Label.new("Width") self.color_label = Gtk.Label.new("Label Color") else: self.color_label = Gtk.Label.new("Text Color") self.demo_mode = Gtk.Label.new("Demo Mode") self.mode_combo = Gtk.ComboBoxText() self.color_combo = Gtk.ComboBoxText() # Organize widgets self.add(main_grid) self.set_titlebar(header) quit_button.add(quit_image) header.pack_end(quit_button) main_grid.set_row_spacing(10) main_grid.set_border_width(10) main_grid.attach(device_label, 0, 1, 2, 1) device_label.set_hexpand(True) main_grid.attach(backend_label, 0, 2, 2, 1) # main_grid.attach(display_label, 0, 3, 2, 1) if self.demo != "selfie_nn": main_grid.attach(self.width_label, 0, 4, 2, 1) main_grid.attach(self.height_label, 0, 5, 2, 1) main_grid.attach(self.color_label, 0, 6, 2, 1) else: main_grid.attach(self.demo_mode, 0, 4, 2, 1) main_grid.attach(self.color_label, 0, 5, 2, 1) main_grid.attach(self.device_combo, 2, 1, 2, 1) self.device_combo.set_hexpand(True) main_grid.attach(self.backend_combo, 2, 2, 2, 1) # main_grid.attach(self.display_combo, 2, 3, 2, 1) if self.demo != "selfie_nn": 
main_grid.attach(self.width_entry, 2, 4, 2, 1) main_grid.attach(self.height_entry, 2, 5, 2, 1) main_grid.attach(self.color_combo, 2, 6, 2, 1) else: main_grid.attach(self.mode_combo, 2, 4, 2, 1) main_grid.attach(self.color_combo, 2, 5, 2, 1) main_grid.attach(self.launch_button, 0, 7, 4, 1) main_grid.attach(self.status_bar, 0, 8, 4, 1) main_grid.attach(separator, 0, 9, 4, 1) main_grid.attach(time_title_label, 0, 10, 2, 1) main_grid.attach(self.time_label, 0, 11, 1, 1) main_grid.attach(self.fps_label, 1, 11, 1, 1) main_grid.attach(inference_title_label, 2, 10, 2, 1) main_grid.attach(self.inference_label, 2, 11, 1, 1) main_grid.attach(self.ips_label, 3, 11, 1, 1) # Configure widgets for device in devices: self.device_combo.append_text(device) for backend in backends_available: self.backend_combo.append_text(backend) for display in displays_available: self.display_combo.append_text(display) for color in colors_available: self.color_combo.append_text(color) if self.demo == "selfie_nn": for mode in demo_modes_available: self.mode_combo.append_text(mode) self.device_combo.set_active(0) self.backend_combo.set_active(0) self.display_combo.set_active(0) self.color_combo.set_active(0) if self.demo != "selfie_nn": self.width_entry.set_value(1920) self.height_entry.set_value(1080) self.width_entry.set_sensitive(False) self.height_entry.set_sensitive(False) else: self.mode_combo.set_active(0) self.device_combo.connect("changed", self.on_source_change) self.launch_button.connect("clicked", self.start) quit_button.connect("clicked", exit) if self.demo == "detect": header.set_title("Detection Demo") elif self.demo == "id": header.set_title("Classification Demo") elif self.demo == "pose": header.set_title("Pose Demo") elif self.demo == "brand": header.set_title("Brand Demo") elif self.demo == "selfie_nn": header.set_title("Selfie Segmenter Demo") else: header.set_title("NNStreamer Demo") header.set_subtitle("NNStreamer Examples") def start(self, button): """Starts the ML Demo with 
selected settings""" self.update_time = GLib.get_monotonic_time() self.launch_button.set_sensitive(False) if self.color_combo.get_active_text() == "Red": r = 1 g = 0 b = 0 elif self.color_combo.get_active_text() == "Blue": r = 0 g = 0 b = 1 elif self.color_combo.get_active_text() == "Green": r = 0 g = 1 b = 0 elif self.color_combo.get_active_text() == "Black": r = 0 g = 0 b = 0 elif self.color_combo.get_active_text() == "White": r = 1 g = 1 b = 1 else: r = 1 g = 0 b = 0 if self.demo == "detect": if self.platform == "imx93evk": model = utils.download_file("mobilenet_ssd_v2_coco_quant_postprocess_vela.tflite") else: model = utils.download_file("mobilenet_ssd_v2_coco_quant_postprocess.tflite") labels = utils.download_file("coco_labels.txt") if self.device_combo.get_active_text() == "Example Video": device = utils.download_file("detect_example.mov") else: device = self.device_combo.get_active_text() if model == -1 or model == -2 or model == -3: if self.platform == "imx93evk": error = "mobilenet_ssd_v2_coco_quant_postprocess_vela.tflite" else: error = "mobilenet_ssd_v2_coco_quant_postprocess.tflite" elif labels == -1 or labels == -2 or labels == -3: error = "coco_labels.txt" elif device == -1 or device == -2 or device == -3: error = "detect_example.mov" if self.demo == "id": if self.platform == "imx93evk": model = utils.download_file("mobilenet_v1_1.0_224_quant_vela.tflite") else: model = utils.download_file("mobilenet_v1_1.0_224_quant.tflite") labels = utils.download_file("1_1.0_224_labels.txt") if self.device_combo.get_active_text() == "Example Video": device = utils.download_file("id_example.mov") else: device = self.device_combo.get_active_text() if model == -1 or model == -2 or model == -3: if self.platform == "imx93evk": error = "mobilenet_v1_1.0_224_quant_vela.tflite" else: error = "mobilenet_v1_1.0_224_quant.tflite" elif labels == -1 or labels == -2 or labels == -3: error = "1_1.0_224_labels.txt" elif device == -1 or device == -2 or device == -3: error = 
"id_example.mov" if self.demo == "pose": model = utils.download_file("posenet_resnet50_uint8_float32_quant.tflite") labels = utils.download_file("key_point_labels.txt") if self.device_combo.get_active_text() == "Example Video": device = utils.download_file("pose_example.mov") else: device = self.device_combo.get_active_text() if model == -1 or model == -2 or model == -3: error = "posenet_resnet50_uint8_float32_quant.tflite" elif labels == -1 or labels == -2 or labels == -3: error = "key_point_labels.txt" elif device == -1 or device == -2 or device == -3: error = "pose_example.mov" if self.demo == "brand": model = utils.download_file("brand_model.tflite") labels = utils.download_file("brand_labels.txt") if self.device_combo.get_active_text() == "Example Video": device = utils.download_file("brand_example.mov") else: device = self.device_combo.get_active_text() if model == -1 or model == -2 or model == -3: error = "brand_model.tflite" elif labels == -1 or labels == -2 or labels == -3: error = "brand_labels.txt" elif device == -1 or device == -2 or device == -3: error = "brand_example.mov" if self.demo == "selfie_nn": if self.platform == "imx93evk": model = utils.download_file( "selfie_segmenter_landscape_int8_vela.tflite" ) else: model = utils.download_file("selfie_segmenter_int8.tflite") # Labels refer to background img if self.platform == "imx93evk": labels = utils.download_file("bg_image_landscape.jpg") else: labels = utils.download_file("bg_image.jpg") if self.device_combo.get_active_text() == "Example Video": device = utils.download_file("selfie_example.mov") else: device = self.device_combo.get_active_text() if model == -1 or model == -2 or model == -3: if self.platform == "imx93evk": error = "selfie_segmenter_landscape_int8_vela.tflite" else: error = "selfie_segmenter_int8.tflite" elif labels == -1 or labels == -2 or labels == -3: if self.platform == "imx93evk": error = "bg_image_landscape.jpg" else: error = "bg_image.jpg" elif device == -1 or device == -2 or 
device == -3: error = "selfie_example.mov" if self.mode_combo.get_active_text() == "Background Substitution": set_mode = 0 else: set_mode = 1 if model == -1 or labels == -1 or device == -1: """ dialog = Gtk.MessageDialog( transient_for=self, flags=0, message_type=Gtk.MessageType.ERROR, buttons=Gtk.ButtonsType.CANCEL, text="Cannot find files! The file that you requested" + " does not have any metadata that is related to it. " + "Please see /home/root/.nxp-demo-experience/downloads.txt" + " to see if the requested file exists! \n \n Cannot find:" + error) dialog.run() dialog.destroy() """ self.status_bar.set_text("Cannot find files!") self.launch_button.set_sensitive(True) return if model == -2 or labels == -2 or device == -2: """ dialog = Gtk.MessageDialog( transient_for=self, flags=0, message_type=Gtk.MessageType.ERROR, buttons=Gtk.ButtonsType.CANCEL, text="Cannot download files! The URL used to download the" + " file cannot be reached. If you are connected to the " + "internet, please check the /home/root/.nxp-demo-experience" + "/downloads.txt for the URL. For some regions, " + "these sites may be blocked. To install these manually," + " please go to the file listed above and provide the " + "path to the file in \"PATH\" \n \n Cannot download " + error) dialog.run() dialog.destroy() """ self.status_bar.set_text("Download failed!") self.launch_button.set_sensitive(True) return if model == -3 or labels == -3 or device == -4: """ dialog = Gtk.MessageDialog( transient_for=self, flags=0, message_type=Gtk.MessageType.ERROR, buttons=Gtk.ButtonsType.CANCEL, text="Invalid files! The files where not what we expected." + "If you are SURE that the files are correct, delete " + "the \"SHA\" value in /home/root/.nxp-demo-experience" + "/downloads.txt to bypass the SHA check. 
\n \n Bad SHA for " + error) dialog.run() dialog.destroy() """ self.status_bar.set_text("Downloaded bad file!") self.launch_button.set_sensitive(True) return if self.demo == "detect": import nndetection example = nndetection.ObjectDetection( self.platform, device, self.backend_combo.get_active_text(), model, labels, self.display_combo.get_active_text(), self.update_stats, self.width_entry.get_value(), self.height_entry.get_value(), r, g, b, ) example.run() if self.demo == "id": import nnclassification example = nnclassification.NNStreamerExample( self.platform, device, self.backend_combo.get_active_text(), model, labels, self.display_combo.get_active_text(), self.update_stats, self.width_entry.get_value(), self.height_entry.get_value(), r, g, b, ) example.run_example() if self.demo == "pose": import nnpose example = nnpose.NNStreamerExample( self.platform, device, self.backend_combo.get_active_text(), model, labels, self.display_combo.get_active_text(), self.update_stats, self.width_entry.get_value(), self.height_entry.get_value(), r, g, b, ) example.run_example() if self.demo == "brand": import nnbrand example = nnbrand.NNStreamerExample( self.platform, device, self.backend_combo.get_active_text(), model, labels, self.display_combo.get_active_text(), self.update_stats, self.width_entry.get_value(), self.height_entry.get_value(), r, g, b, ) example.run_example() if self.demo == "selfie_nn": import selfie_segmenter example = selfie_segmenter.SelfieSegmenter( self.platform, device, self.backend_combo.get_active_text(), model, labels, self.update_stats, set_mode, r, g, b, ) example.run() self.launch_button.set_sensitive(True) def update_stats(self, time): """Callback used the update stats in GUI""" interval_time = (GLib.get_monotonic_time() - self.update_time) / 1000000 if interval_time > 1: refresh_time = time.interval_time inference_time = time.tensor_filter.get_property("latency") if refresh_time != 0 and inference_time != 0: # Print pipeline information if 
self.demo == "selfie_nn" or self.demo == "id" or self.demo == "detect": self.time_label.set_text( "{:12.2f} ms".format(1.0 / time.current_framerate * 1000.0) ) self.fps_label.set_text( "{:12.2f} FPS".format(time.current_framerate) ) else: self.time_label.set_text("{:12.2f} ms".format(refresh_time / 1000)) self.fps_label.set_text( "{:12.2f} FPS".format(1 / (refresh_time / 1000000)) ) # Print inference information self.inference_label.set_text( "{:12.2f} ms".format(inference_time / 1000) ) self.ips_label.set_text( "{:12.2f} FPS".format(1 / (inference_time / 1000000)) ) self.update_time = GLib.get_monotonic_time() return True def on_source_change(self, widget): """Callback to lock sliders""" if self.demo != "selfie_nn": if self.device_combo.get_active_text() == "Example Video": self.width_entry.set_value(1920) self.height_entry.set_value(1080) self.width_entry.set_sensitive(False) self.height_entry.set_sensitive(False) else: self.width_entry.set_sensitive(True) self.height_entry.set_sensitive(True) if __name__ == "__main__": if ( len(sys.argv) != 2 and sys.argv[1] != "detect" and sys.argv[1] != "id" and sys.argv[1] != "pose" and sys.argv[1] != "selfie_nn" ): print("Demos available: detect, id, pose, selfie_nn") else: win = MLLaunch(sys.argv[1]) win.connect("destroy", Gtk.main_quit) win.show_all() Gtk.main() |
The script below is the one that actually runs; since it imports nndetection, I'm following that module next.
Come to think of it, it's LGPL, so it should be fine to just post it, right?
root@imx8mpevk:~/.nxp-demo-experience/scripts/machine_learning# find / -name nndetection.py /run/media/root-mmcblk2p2/home/root/.nxp-demo-experience/scripts/machine_learning/nndetection.py /home/root/.nxp-demo-experience/scripts/machine_learning/nndetection.py root@imx8mpevk:~/.nxp-demo-experience/scripts/machine_learning# cat /home/root/.nxp-demo-experience/scripts/machine_learning/nndetection.py #!/usr/bin/env python3 """ Copyright SSAFY Team 1 <jangjongha.sw@gmail.com> Copyright 2021-2023 NXP SPDX-License-Identifier: LGPL-2.1-only Original Source: https://github.com/nnstreamer/nnstreamer-example This demo shows how you can use the NNStreamer to detect objects. From the original source, this was modified to better work with the a GUI and to get better performance on the i.MX 8M Plus and i.MX93. """ import os import sys import gi import re import logging import numpy as np import cairo gi.require_version("Gst", "1.0") gi.require_foreign("cairo") from gi.repository import Gst, GObject, GLib DEBUG = False class ObjectDetection: """The class that manages the demo""" def __init__( self, platform, device, backend, model, labels, display="Weston", callback=None, width=1920, height=1080, r=1, g=0, b=0, ): """Creates an instance of the demo Arguments: device -- What camera or video file to use backend -- Whether to use NPU or CPU model -- the path to the model labels -- the path to the labels display -- Whether to use X11 or Weston callback -- Callback to pass stats to width -- Width of output height -- Height of output r -- Red value for labels g -- Green value for labels b -- Blue value for labels """ self.loop = None self.pipeline = None self.running = False self.video_caps = None self.first_frame = True self.BOX_SIZE = 4 self.LABEL_SIZE = 91 self.DETECTION_MAX = 20 self.MAX_OBJECT_DETECTION = 20 self.Y_SCALE = 10.0 self.X_SCALE = 10.0 self.H_SCALE = 5.0 self.W_SCALE = 5.0 self.VIDEO_WIDTH = width self.VIDEO_HEIGHT = height self.MODEL_WIDTH = 300 self.MODEL_HEIGHT = 
300 self.tflite_model = model self.label_path = labels self.device = device self.backend = backend self.display = display self.tflite_labels = [] self.detected_objects = [] self.callback = callback self.r = r self.b = b self.g = g self.platform = platform self.current_framerate = 1000 # Define PXP or GPU2D converter if self.platform == "imx93evk": self.nxp_converter = "imxvideoconvert_pxp " else: self.nxp_converter = "imxvideoconvert_g2d " if not self.tflite_init(): raise Exception Gst.init(None) def run(self): """Starts pipeline and run demo""" if self.backend == "CPU": if self.platform == "imx93evk": backend = "true:cpu custom=NumThreads:2" else: backend = "true:cpu custom=NumThreads:4" elif self.backend == "GPU": os.environ["USE_GPU_INFERENCE"] = "1" backend = ( "true:gpu custom=Delegate:External," "ExtDelegateLib:libvx_delegate.so" ) else: if self.platform == "imx93evk": backend = ( "true:npu custom=Delegate:External," "ExtDelegateLib:libethosu_delegate.so" ) else: os.environ["USE_GPU_INFERENCE"] = "0" backend = ( "true:npu custom=Delegate:External," "ExtDelegateLib:libvx_delegate.so" ) if self.display == "X11": display = "ximagesink name=img_tensor " elif self.display == "None": self.print_time = GLib.get_monotonic_time() display = "fakesink " else: display = "fpsdisplaysink name=img_tensor text-overlay=false video-sink=waylandsink sync=false" # main loop self.loop = GLib.MainLoop() self.old_time = GLib.get_monotonic_time() self.update_time = GLib.get_monotonic_time() self.reload_time = -1 self.interval_time = 999999 # Create decoder for video file if self.platform == "imx8qmmek": decoder = "h264parse ! v4l2h264dec " else: decoder = "vpudec " if "/dev/video" in self.device: gst_launch_cmdline = "v4l2src name=cam_src device=" + self.device gst_launch_cmdline += " ! " + self.nxp_converter + "! 
video/x-raw,width=" gst_launch_cmdline += str(int(self.VIDEO_WIDTH)) + ",height=" gst_launch_cmdline += str(int(self.VIDEO_HEIGHT)) gst_launch_cmdline += ",framerate=30/1,format=BGRx ! tee name=t" else: gst_launch_cmdline = "filesrc location=" + self.device gst_launch_cmdline += " ! qtdemux ! " + decoder + "! tee name=t" gst_launch_cmdline += " t. ! " + self.nxp_converter + "! video/x-raw," gst_launch_cmdline += "width={:d},".format(self.MODEL_WIDTH) gst_launch_cmdline += "height={:d},".format(self.MODEL_HEIGHT) gst_launch_cmdline += " ! queue max-size-buffers=2 leaky=2 ! " gst_launch_cmdline += "videoconvert ! video/x-raw,format=RGB !" gst_launch_cmdline += " tensor_converter ! tensor_filter" gst_launch_cmdline += " framework=tensorflow-lite model=" gst_launch_cmdline += self.tflite_model + " accelerator=" + backend gst_launch_cmdline += " silent=FALSE name=tensor_filter latency=1 ! " gst_launch_cmdline += "tensor_sink name=tensor_sink t. ! " gst_launch_cmdline += self.nxp_converter + "! " gst_launch_cmdline += "cairooverlay name=tensor_res ! " gst_launch_cmdline += "queue max-size-buffers=2 leaky=2 ! 
" gst_launch_cmdline += display self.pipeline = Gst.parse_launch(gst_launch_cmdline) # bus and message callback bus = self.pipeline.get_bus() bus.add_signal_watch() bus.connect("message", self.on_bus_message) self.tensor_filter = self.pipeline.get_by_name("tensor_filter") self.wayland_sink = self.pipeline.get_by_name("img_tensor") # tensor sink signal : new data callback tensor_sink = self.pipeline.get_by_name("tensor_sink") tensor_sink.connect("new-data", self.new_data_cb) tensor_res = self.pipeline.get_by_name("tensor_res") tensor_res.connect("draw", self.draw_overlay_cb) tensor_res.connect("caps-changed", self.prepare_overlay_cb) if self.callback is not None: GObject.timeout_add(500, self.callback, self) # start pipeline self.pipeline.set_state(Gst.State.PLAYING) self.running = True self.set_window_title("img_tensor", "NNStreamer Object Detection Example") # run main loop self.loop.run() # quit when received eos or error message self.running = False self.pipeline.set_state(Gst.State.NULL) bus.remove_signal_watch() def tflite_init(self): """ :return: True if successfully initialized """ if not os.path.exists(self.tflite_model): logging.error("cannot find tflite model [%s]", self.tflite_model) return False label_path = self.label_path try: with open(label_path, "r") as label_file: for line in label_file.readlines(): if line[0].isdigit(): while str(len(self.tflite_labels)) not in line: self.tflite_labels.append("Invalid") self.tflite_labels.append(line[line.find(" ") + 1 :]) else: self.tflite_labels.append(line) except FileNotFoundError: logging.error("cannot find tflite label [%s]", label_path) return False logging.info("finished to load labels, total [%d]", len(self.tflite_labels)) return True # @brief Callback for tensor sink signal. def new_data_cb(self, sink, buffer): """Callback for tensor sink signal. 
:param sink: tensor sink element :param buffer: buffer from element :return: None """ if self.running: new_time = GLib.get_monotonic_time() self.interval_time = new_time - self.old_time self.old_time = new_time if buffer.n_memory() != 4: return False # tensor type is float32. # LOCATIONS_IDX:CLASSES_IDX:SCORES_IDX:NUM_DETECTION_IDX # 4:20:1:1\,20:1:1:1\,20:1:1:1\,1:1:1:1 # [0] detection_boxes (default 4th tensor). BOX_SIZE : # #MaxDetection, ANY-TYPE # [1] detection_classes (default 2nd tensor). # #MaxDetection, ANY-TYPE # [2] detection_scores (default 3rd tensor) # #MaxDetection, ANY-TYPE # [3] num_detection (default 1st tensor). 1, ANY-TYPE # bytestrings that are based on float32 must be # decoded into float list. # boxes mem_boxes = buffer.peek_memory(0) ret, info_boxes = mem_boxes.map(Gst.MapFlags.READ) if ret: assert info_boxes.size == ( self.BOX_SIZE * self.DETECTION_MAX * 4 ), "Invalid info_box size" decoded_boxes = list( np.frombuffer(info_boxes.data, dtype=np.float32) ) # decode bytestrings to float list # detections mem_detections = buffer.peek_memory(1) ret, info_detections = mem_detections.map(Gst.MapFlags.READ) if ret: assert info_detections.size == ( self.DETECTION_MAX * 4 ), "Invalid info_detection size" decoded_detections = list( np.frombuffer(info_detections.data, dtype=np.float32) ) # decode bytestrings to float list # scores mem_scores = buffer.peek_memory(2) ret, info_scores = mem_scores.map(Gst.MapFlags.READ) if ret: assert info_scores.size == ( self.DETECTION_MAX * 4 ), "Invalid info_score size" decoded_scores = list( np.frombuffer(info_scores.data, dtype=np.float32) ) # decode bytestrings to float list # num detection mem_num = buffer.peek_memory(3) ret, info_num = mem_num.map(Gst.MapFlags.READ) if ret: assert info_num.size == 4, "Invalid info_num size" decoded_num = list( np.frombuffer(info_num.data, dtype=np.float32) ) # decode bytestrings to float list self.get_detected_objects( decoded_boxes, decoded_detections, decoded_scores, 
int(decoded_num[0]) ) mem_boxes.unmap(info_boxes) mem_detections.unmap(info_detections) mem_scores.unmap(info_scores) mem_num.unmap(info_num) if self.display == "None": if (GLib.get_monotonic_time() - self.print_time) > 1000000: inference = self.tensor_filter.get_property("latency") print( "Inference time: " + str(inference / 1000) + " ms (" + "{:5.2f}".format(1 / (inference / 1000000)) + " IPS)" ) self.print_time = GLib.get_monotonic_time() def get_detected_objects(self, boxes, detections, scores, num): """Pairs boxes with dectected objects""" threshold_score = 0.5 detected = list() for i in range(num): score = scores[i] if score < threshold_score: continue c = detections[i] box_offset = self.BOX_SIZE * i ymin = boxes[box_offset + 0] xmin = boxes[box_offset + 1] ymax = boxes[box_offset + 2] xmax = boxes[box_offset + 3] x = xmin * self.MODEL_WIDTH y = ymin * self.MODEL_HEIGHT width = (xmax - xmin) * self.MODEL_WIDTH height = (ymax - ymin) * self.MODEL_HEIGHT obj = { "class_id": int(c), "x": x, "y": y, "width": width, "height": height, "prob": score, } detected.append(obj) # update result self.detected_objects.clear() for d in detected: self.detected_objects.append(d) if DEBUG: print("==============================") print("LABEL : {}".format(self.tflite_labels[d["class_id"]])) print("x : {}".format(d["x"])) print("y : {}".format(d["y"])) print("width : {}".format(d["width"])) print("height : {}".format(d["height"])) print("Confidence Score: {}".format(d["prob"])) def prepare_overlay_cb(self, overlay, caps): """Store the information from the caps that we are interested in.""" self.video_caps = caps def draw_overlay_cb(self, overlay, context, timestamp, duration): """Callback to draw the overlay.""" if self.video_caps is None or not self.running: return scale_height = self.VIDEO_HEIGHT / 1080 scale_width = self.VIDEO_WIDTH / 1920 scale_text = max(scale_height, scale_width) # mutex_lock alternative required detected = self.detected_objects # mutex_unlock alternative 
needed drawed = 0 context.select_font_face( "Sans", cairo.FONT_SLANT_NORMAL, cairo.FONT_WEIGHT_BOLD ) context.set_font_size(int(50.0 * scale_text)) context.set_source_rgb(self.r, self.g, self.b) for obj in detected: label = self.tflite_labels[obj["class_id"]][:-1] x = obj["x"] * self.VIDEO_WIDTH // self.MODEL_WIDTH y = obj["y"] * self.VIDEO_HEIGHT // self.MODEL_HEIGHT width = obj["width"] * self.VIDEO_WIDTH // self.MODEL_WIDTH height = obj["height"] * self.VIDEO_HEIGHT // self.MODEL_HEIGHT # draw rectangle context.rectangle(x, y, width, height) context.set_line_width(3) context.stroke() # draw title context.move_to(x + 5, y + int(50.0 * scale_text)) context.show_text(label) drawed += 1 if drawed >= self.MAX_OBJECT_DETECTION: break inference = self.tensor_filter.get_property("latency") # Get current framerate and avg. framerate output_wayland = self.wayland_sink.get_property("last-message") if output_wayland: current_text = re.findall(r"current:\s[\d]+[.\d]*", output_wayland)[0] self.current_framerate = float(re.findall(r"[\d]+[.\d]*", current_text)[0]) context.set_font_size(int(25.0 * scale_text)) context.move_to( int(50 * scale_width), int(self.VIDEO_HEIGHT - (100 * scale_height)) ) context.show_text("i.MX NNStreamer Detection Demo") if inference == 0: context.move_to( int(50 * scale_width), int(self.VIDEO_HEIGHT - (75 * scale_height)) ) context.show_text("FPS: ") context.move_to( int(50 * scale_width), int(self.VIDEO_HEIGHT - (50 * scale_height)) ) context.show_text("IPS: ") elif ( GLib.get_monotonic_time() - self.reload_time ) < 100000 and self.refresh_time != -1: context.move_to( int(50 * scale_width), int(self.VIDEO_HEIGHT - (75 * scale_height)) ) context.show_text( "FPS: {:6.2f} ({:6.2f} ms)".format( self.current_framerate, 1.0 / self.current_framerate * 1000 ) ) context.move_to( int(50 * scale_width), int(self.VIDEO_HEIGHT - (50 * scale_height)) ) context.show_text( "IPS: {:6.2f} ({:6.2f} ms)".format( 1 / (inference / 1000000), inference / 1000 ) ) else: 
self.reload_time = GLib.get_monotonic_time() self.refresh_time = self.interval_time self.inference = self.tensor_filter.get_property("latency") context.move_to( int(50 * scale_width), int(self.VIDEO_HEIGHT - (75 * scale_height)) ) context.show_text( "FPS: {:6.2f} ({:6.2f} ms)".format( self.current_framerate, 1.0 / self.current_framerate * 1000 ) ) context.move_to( int(50 * scale_width), int(self.VIDEO_HEIGHT - (50 * scale_height)) ) context.show_text( "IPS: {:6.2f} ({:6.2f} ms)".format( 1 / (inference / 1000000), inference / 1000 ) ) if self.first_frame: context.move_to(int(400 * scale_width), int(600 * scale_height)) context.set_font_size(int(200.0 * min(scale_width, scale_height))) context.show_text("Loading...") self.first_frame = False context.fill() def on_bus_message(self, bus, message): """Callback for message. :param bus: pipeline bus :param message: message from pipeline :return: None """ if message.type == Gst.MessageType.EOS: logging.info("received eos message") self.loop.quit() elif message.type == Gst.MessageType.ERROR: error, debug = message.parse_error() logging.warning("[error] %s : %s", error.message, debug) self.loop.quit() elif message.type == Gst.MessageType.WARNING: error, debug = message.parse_warning() logging.warning("[warning] %s : %s", error.message, debug) elif message.type == Gst.MessageType.STREAM_START: logging.info("received start message") elif message.type == Gst.MessageType.QOS: data_format, processed, dropped = message.parse_qos_stats() format_str = Gst.Format.get_name(data_format) logging.debug( "[qos] format[%s] processed[%d] dropped[%d]", format_str, processed, dropped, ) def set_window_title(self, name, title): """Set window title for X11. 
:param name: GstXImageasink element name :param title: window title :return: None """ element = self.pipeline.get_by_name(name) if element is not None: pad = element.get_static_pad("sink") if pad is not None: tags = Gst.TagList.new_empty() tags.add_value(Gst.TagMergeMode.APPEND, "title", title) pad.send_event(Gst.Event.new_tag(tags)) if __name__ == "__main__": if ( len(sys.argv) != 7 and len(sys.argv) != 5 and len(sys.argv) != 9 and len(sys.argv) != 12 and len(sys.argv) != 6 ): print( "Usage: python3 nndetection.py <dev/video*/video file>" + " <NPU/CPU> <model file> <label file>" ) exit() # Get platform platform = os.uname().nodename if len(sys.argv) == 7: example = ObjectDetection( platform, sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4], sys.argv[5], sys.argv[6], ) if len(sys.argv) == 5: example = ObjectDetection( platform, sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4] ) if len(sys.argv) == 6: example = ObjectDetection( platform, sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4], sys.argv[5] ) if len(sys.argv) == 9: example = ObjectDetection( platform, sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4], sys.argv[5], sys.argv[6], int(sys.argv[7]), int(sys.argv[8]), ) if len(sys.argv) == 12: example = ObjectDetection( platform, sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4], sys.argv[5], sys.argv[6], int(sys.argv[7]), int(sys.argv[8]), int(sys.argv[9]), int(sys.argv[10]), int(sys.argv[11]), ) example.run() |
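The coordinate math in get_detected_objects above can be replayed standalone. A minimal NumPy sketch with made-up detections (the [ymin, xmin, ymax, xmax] box layout, the 0.5 score threshold, and the 300x300 model size come from the script; the sample values are hypothetical):

```python
import numpy as np

MODEL_WIDTH, MODEL_HEIGHT = 300, 300

# Hypothetical SSD outputs: normalized boxes as [ymin, xmin, ymax, xmax],
# plus class ids and scores, mirroring the tensors unpacked in new_data_cb.
boxes = np.array([[0.1, 0.2, 0.5, 0.6],
                  [0.0, 0.0, 0.9, 0.9]], dtype=np.float32)
classes = np.array([1.0, 17.0], dtype=np.float32)
scores = np.array([0.8, 0.3], dtype=np.float32)

def decode(boxes, classes, scores, threshold=0.5):
    detected = []
    for (ymin, xmin, ymax, xmax), c, s in zip(boxes, classes, scores):
        if s < threshold:  # drop low-confidence detections
            continue
        detected.append({
            "class_id": int(c),
            "x": xmin * MODEL_WIDTH,
            "y": ymin * MODEL_HEIGHT,
            "width": (xmax - xmin) * MODEL_WIDTH,
            "height": (ymax - ymin) * MODEL_HEIGHT,
            "prob": float(s),
        })
    return detected

objs = decode(boxes, classes, scores)
print(objs)
```

The second box is dropped by the threshold; the first becomes a pixel-space rectangle that draw_overlay_cb then rescales from model size to video size.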
self.pipeline = Gst.parse_launch(
    'v4l2src name=cam_src ! videoconvert ! videoscale ! '
    'video/x-raw,width=640,height=480,format=RGB ! tee name=t_raw '
    't_raw. ! queue leaky=2 max-size-buffers=2 ! videoscale ! video/x-raw,width=300,height=300 ! tensor_converter ! '
    'tensor_transform mode=arithmetic option=typecast:float32,add:-127.5,div:127.5 ! '
    'tensor_filter framework=tensorflow-lite model=' + self.tflite_model + ' ! '
    'tensor_decoder mode=bounding_boxes option1=mobilenet-ssd option2=' + self.tflite_label + ' option3=' + self.tflite_box_prior + ' option4=640:480 option5=300:300 !'
    'compositor name=mix sink_0::zorder=2 sink_1::zorder=1 ! videoconvert ! ximagesink '
    't_raw. ! queue leaky=2 max-size-buffers=10 ! mix. '
)
Printing gst_launch_cmdline shows the following GStreamer pipeline.
v4l2src name=cam_src device=/dev/video3 ! imxvideoconvert_g2d ! video/x-raw,width=1920,height=1080,framerate=30/1,format=BGRx ! tee name=t t. ! imxvideoconvert_g2d ! video/x-raw,width=300,height=300, ! queue max-size-buffers=2 leaky=2 ! videoconvert ! video/x-raw,format=RGB ! tensor_converter ! tensor_filter framework=tensorflow-lite model=/home/root/.cache/gopoint/mobilenet_ssd_v2_coco_quant_postprocess.tflite accelerator=true:npu custom=Delegate:External,ExtDelegateLib:libvx_delegate.so silent=FALSE name=tensor_filter latency=1 ! tensor_sink name=tensor_sink t. ! imxvideoconvert_g2d ! cairooverlay name=tensor_res ! queue max-size-buffers=2 leaky=2 ! fpsdisplaysink name=img_tensor text-overlay=false video-sink=waylandsink sync=false |
Hard to read in one line, so here it is split with line breaks:
v4l2src name=cam_src device=/dev/video3 !
imxvideoconvert_g2d !
video/x-raw,width=1920,height=1080,framerate=30/1,format=BGRx !
tee name=t
t. ! imxvideoconvert_g2d !
video/x-raw,width=300,height=300, !
queue max-size-buffers=2 leaky=2 !
videoconvert !
video/x-raw,format=RGB !
tensor_converter !
tensor_filter framework=tensorflow-lite model=/home/root/.cache/gopoint/mobilenet_ssd_v2_coco_quant_postprocess.tflite accelerator=true:npu custom=Delegate:External,ExtDelegateLib:libvx_delegate.so silent=FALSE name=tensor_filter latency=1 !
tensor_sink name=tensor_sink
t. ! imxvideoconvert_g2d !
cairooverlay name=tensor_res !
queue max-size-buffers=2 leaky=2 !
fpsdisplaysink name=img_tensor text-overlay=false video-sink=waylandsink sync=false
+
2024.01.03
# cd /home/root/.nxp-demo-experience/scripts/machine_learning
# python3 nndetection.py /dev/video3 NPU /home/root/.cache/gopoint/mobilenet_ssd_v2_coco_quant_postprocess.tflite /home/root/.cache/gopoint/coco_labels.txt
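For reference, nndetection.py's tflite_init pads skipped ids in the label file with "Invalid" so that a label's list index matches its class id. A standalone sketch of that exact loop (the sample lines are made up, in the `<id> <name>` style of coco_labels.txt):

```python
def load_labels(lines):
    """Mimic tflite_init: a line starting with a digit is placed at the
    index equal to its id, and skipped ids are filled with 'Invalid'."""
    labels = []
    for line in lines:
        if line[0].isdigit():
            # fill gaps until the current list length shows up as an id
            while str(len(labels)) not in line:
                labels.append("Invalid")
            labels.append(line[line.find(" ") + 1:])
        else:
            labels.append(line)
    return labels

# Hypothetical snippet: id 3 is missing, as happens in COCO label files
sample = ["0 person", "1 bicycle", "2 car", "4 motorcycle"]
labels = load_labels(sample)
print(labels)
```

This is why draw_overlay_cb can index self.tflite_labels directly with the detected class_id.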
It also runs under gst-launch, but the callbacks aren't handled there, so the overlay isn't drawn and it doesn't show the same output.
gst-launch-1.0 v4l2src name=cam_src device=/dev/video3 ! imxvideoconvert_g2d ! video/x-raw,width=1920,height=1080,framerate=30/1,format=BGRx ! tee name=t t. ! imxvideoconvert_g2d ! video/x-raw,width=300,height=300, ! queue max-size-buffers=2 leaky=2 ! videoconvert ! video/x-raw,format=RGB ! tensor_converter ! tensor_filter framework=tensorflow-lite model=/home/root/.cache/gopoint/mobilenet_ssd_v2_coco_quant_postprocess.tflite accelerator=true:npu custom=Delegate:External,ExtDelegateLib:libvx_delegate.so silent=FALSE name=tensor_filter latency=1 ! tensor_sink name=tensor_sink t. ! imxvideoconvert_g2d ! cairooverlay name=tensor_res ! queue max-size-buffers=2 leaky=2 ! fpsdisplaysink name=img_tensor text-overlay=false video-sink=waylandsink sync=false
Tab completion turns up a few datasets; apart from mnist I don't know them, so I'm looking them up.
>>> tf.keras.datasets.
tf.keras.datasets.boston_housing  tf.keras.datasets.cifar100       tf.keras.datasets.imdb   tf.keras.datasets.reuters
tf.keras.datasets.cifar10         tf.keras.datasets.fashion_mnist  tf.keras.datasets.mnist
imdb is the movie (review) database.
boston_housing is the Boston house-price data from the StatLib site.
reuters seems to be 11,228 newswires (Reuters news) labeled over 46 topics.
This is a dataset of 11,228 newswires from Reuters, labeled over 46 topics.
[링크 : https://www.tensorflow.org/api_docs/python/tf/keras/datasets/boston_housing/load_data]
[링크 : https://www.tensorflow.org/api_docs/python/tf/keras/datasets]
cifar10 has 10 classes, so the output should presumably come out as 10 values, just like MNIST.
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
[링크 : https://www.tensorflow.org/datasets/catalog/cifar10?hl=en]
This dataset is just like the CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs).
[링크 : https://www.tensorflow.org/datasets/catalog/cifar100?hl=en]
[링크 : https://www.tensorflow.org/datasets/catalog/fashion_mnist?hl=en]
[링크 : https://www.tensorflow.org/datasets/catalog/emnist?hl=en]
Interesting.. it just downloads the data by itself?
>>> mnist = tf.keras.datasets.mnist
>>> (train_images, train_labels), (test_images, test_labels) = mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 1s 0us/step
Training and saving to a tflite file
import tensorflow as tf
import numpy as np

mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

train_images = train_images.astype(np.float32) / 255.0
test_images = test_images.astype(np.float32) / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(28, 28)),
    tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(
    train_images,
    train_labels,
    epochs=5,
    validation_data=(test_images, test_labels)
)

# Convert to a plain (float) model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# This alone makes no difference
# converter = tf.lite.TFLiteConverter.from_keras_model(model)
# converter.optimizations = [tf.lite.Optimize.DEFAULT]
# tflite_model_quant = converter.convert()

# For full integer quantization, the code below must be run
def representative_data_gen():
    for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
        yield [input_value]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quant = converter.convert()

# Save to files
import pathlib
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)

# Save the unquantized/float model:
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)

# Save the quantized model:
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_model_quant)
[링크 : https://www.tensorflow.org/lite/performance/post_training_integer_quant?hl=ko]
Looking at the generated models in netron, why is there no difference between the quant one and the plain one?
+
With quantization, the tensors change to uint8.
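That uint8 conversion is TFLite's affine quantization: real ≈ scale × (q − zero_point). A NumPy sketch with an assumed scale and zero point (illustration only; the real per-tensor values are chosen by the converter from the representative dataset and stored in the .tflite file):

```python
import numpy as np

# Assumed quantization parameters for illustration only
scale, zero_point = 1.0 / 255.0, 0

def quantize(x):
    # float -> uint8, clamped to the valid range
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q):
    # uint8 -> float approximation: real = scale * (q - zero_point)
    return scale * (q.astype(np.float32) - zero_point)

x = np.array([0.0, 0.25, 1.0], dtype=np.float32)
q = quantize(x)
x_hat = dequantize(q)
print(q, x_hat)
```

Each value round-trips with at most about one `scale` of error, which is why the quantized model's accuracy barely drops.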
That said, I had a misconception about MNIST.
The output is [1, 10] because it is a database of handwritten digits 0-9, not the alphabet -_-!
So it's only natural that there are exactly 10 outputs.
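Since the Dense(10) layer above emits raw logits (hence from_logits=True in the loss), reading off the predicted digit is just a softmax plus argmax over the ten scores. A small NumPy sketch with made-up logits:

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical [1, 10] model output: one score per digit 0..9
logits = np.array([0.5, -1.2, 0.1, 3.0, 0.0, -0.5, 1.1, 0.2, -2.0, 0.4])
probs = softmax(logits)
digit = int(np.argmax(probs))
print(digit, probs[digit])
```

Here index 3 has the largest logit, so the predicted digit is 3.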
[링크 : https://en.wikipedia.org/wiki/MNIST_database]
+
There is a separate handwritten-letters dataset called EMNIST.
[링크 : https://www.nist.gov/itl/products-and-services/emnist-dataset]
[링크 : https://www.tensorflow.org/datasets/catalog/emnist?hl=ko]
Hoping we arrive on time ㅠㅠ
The flight leaves from Terminal 2, so I applied to pick it up there, but is on-site purchase really not possible...
They say Vinaphone is the bigger carrier, so I went ahead and ordered that one,
but compared to Mobifone it's a pure data SIM (no receiving calls or texts), which is a bit of a shame.
Still, thankfully(!) both are said to support hotspot, so it shouldn't matter for whatever I do.
A delivery fee gets added on top, which stings a little ㅠㅠ