Too lazy to type it all out (!)
[링크 : https://github.com/golbin/TensorFlow-Tutorials]
As of this writing, the source below does run after touching just two lines (running it on TF v2 through the TF v1 backward-compatibility layer).
# Build and run a basic linear regression model that learns the
# correlation between X and Y.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

x_data = [1, 2, 3]
y_data = [1, 2, 3]

W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.random_uniform([1], -1.0, 1.0))

# name: naming the tensors makes it easier to track their values later,
# e.g. in TensorBoard.
X = tf.placeholder(tf.float32, name="X")
Y = tf.placeholder(tf.float32, name="Y")
print(X)
print(Y)

# Hypothesis for the correlation between X and Y:
# y = W * x + b
# W and X are not matrices, so plain multiplication is used instead of tf.matmul.
hypothesis = W * X + b

# Cost (loss) function:
# mean((h - Y)^2): the distance between the prediction and the ground truth.
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Gradient-descent optimization using the optimizer built into TensorFlow.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
# Minimizing the cost is the final goal.
train_op = optimizer.minimize(cost)

# Create a session and initialize the variables.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Run the optimization 100 times.
    for step in range(100):
        # sess.run computes the train_op and cost graphs;
        # the actual values for the hypothesis are passed in via feed_dict.
        _, cost_val = sess.run([train_op, cost],
                               feed_dict={X: x_data, Y: y_data})
        print(step, cost_val, sess.run(W), sess.run(b))

    # Feed test values into the trained model and check the results.
    print("\n=== Test ===")
    print("X: 5, Y:", sess.run(hypothesis, feed_dict={X: 5}))
    print("X: 2.5, Y:", sess.run(hypothesis, feed_dict={X: 2.5}))
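The same gradient-descent fit can be sketched without TensorFlow at all, in plain NumPy. This is only a minimal illustration of what the graph above computes (the seed, variable names, and gradient expressions are my own, not from the original script): the gradients of mean((W*x + b - y)^2) are written out by hand and applied with the same learning rate (0.1) for the same 100 steps.

```python
import numpy as np

x_data = np.array([1.0, 2.0, 3.0])
y_data = np.array([1.0, 2.0, 3.0])

# Like tf.random_uniform([1], -1.0, 1.0); seed chosen arbitrarily.
rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0)
b = rng.uniform(-1.0, 1.0)

for step in range(100):
    hypothesis = W * x_data + b
    error = hypothesis - y_data
    cost = np.mean(error ** 2)
    # Hand-derived gradients of mean((W*x + b - y)^2) w.r.t. W and b.
    grad_W = np.mean(2 * error * x_data)
    grad_b = np.mean(2 * error)
    W -= 0.1 * grad_W
    b -= 0.1 * grad_b

print("W:", W, "b:", b)      # approaches W = 1, b = 0
print("X: 5 ->", W * 5 + b)  # close to 5, as in the TF run below
```

Since the data lie exactly on y = x, both versions should end up near W = 1, b = 0 after 100 steps.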
$ python lr.py
2024-01-10 11:39:49.775206: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-01-10 11:39:49.775245: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-01-10 11:39:49.776215: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-01-10 11:39:49.781682: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-01-10 11:39:50.440334: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.3
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
WARNING:tensorflow:From /home/falinux/.local/lib/python3.10/site-packages/tensorflow/python/compat/v2_compat.py:108: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Tensor("X:0", dtype=float32)
Tensor("Y:0", dtype=float32)
2024-01-10 11:39:51.327415: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled
0 6.4782066 [1.2373642] [-0.24653786]
1 0.089632 [1.1144395] [-0.29217595]
2 0.012737438 [1.1244997] [-0.27951655]
3 0.011264746 [1.1201066] [-0.2734131]
4 0.01071928 [1.1173724] [-0.2667731]
5 0.010209985 [1.1145341] [-0.26036742]
6 0.009725014 [1.1117826] [-0.2541076]
7 0.009263077 [1.1090952] [-0.2479991]
8 0.008823066 [1.1064726] [-0.24203737]
9 0.008403975 [1.1039131] [-0.23621896]
10 0.008004769 [1.1014152] [-0.2305404]
11 0.007624544 [1.0989771] [-0.22499838]
12 0.007262358 [1.0965978] [-0.21958955]
13 0.0069174054 [1.0942756] [-0.21431077]
14 0.0065888255 [1.0920093] [-0.20915887]
15 0.0062758424 [1.0897975] [-0.20413081]
16 0.0059777307 [1.0876389] [-0.19922365]
17 0.0056937817 [1.0855321] [-0.19443446]
18 0.0054233256 [1.083476] [-0.1897604]
19 0.0051657106 [1.0814693] [-0.1851987]
20 0.0049203373 [1.0795108] [-0.18074667]
21 0.004686633 [1.0775993] [-0.17640167]
22 0.0044640056 [1.0757339] [-0.17216106]
23 0.0042519583 [1.0739133] [-0.1680224]
24 0.004049988 [1.0721365] [-0.16398326]
25 0.0038576098 [1.0704024] [-0.16004121]
26 0.0036743751 [1.06871] [-0.15619393]
27 0.0034998383 [1.0670582] [-0.15243913]
28 0.003333594 [1.0654461] [-0.1487746]
29 0.003175243 [1.0638729] [-0.14519812]
30 0.0030244188 [1.0623374] [-0.14170769]
31 0.0028807523 [1.0608389] [-0.13830112]
32 0.0027439168 [1.0593764] [-0.13497646]
33 0.0026135833 [1.057949] [-0.13173172]
34 0.002489428 [1.056556] [-0.12856494]
35 0.0023711843 [1.0551964] [-0.12547435]
36 0.0022585478 [1.0538695] [-0.12245804]
37 0.0021512664 [1.0525745] [-0.11951423]
38 0.0020490757 [1.0513107] [-0.11664119]
39 0.0019517452 [1.0500772] [-0.11383722]
40 0.0018590376 [1.0488734] [-0.11110065]
41 0.0017707323 [1.0476985] [-0.10842989]
42 0.0016866213 [1.0465518] [-0.1058233]
43 0.0016065066 [1.0454327] [-0.10327938]
44 0.0015301956 [1.0443406] [-0.1007966]
45 0.0014575059 [1.0432746] [-0.09837352]
46 0.0013882784 [1.0422344] [-0.09600867]
47 0.0013223292 [1.0412191] [-0.0937007]
48 0.0012595187 [1.0402282] [-0.0914482]
49 0.0011996872 [1.0392612] [-0.08924985]
50 0.0011427039 [1.0383173] [-0.08710436]
51 0.0010884297 [1.0373962] [-0.08501042]
52 0.0010367227 [1.0364972] [-0.08296681]
53 0.0009874817 [1.0356199] [-0.08097235]
54 0.0009405748 [1.0347636] [-0.07902583]
55 0.00089589664 [1.0339279] [-0.07712609]
56 0.00085334125 [1.0331123] [-0.07527205]
57 0.0008128048 [1.0323163] [-0.07346255]
58 0.0007741994 [1.0315394] [-0.07169659]
59 0.00073742354 [1.0307813] [-0.06997304]
60 0.00070239493 [1.0300413] [-0.06829095]
61 0.00066903216 [1.0293192] [-0.0666493]
62 0.0006372516 [1.0286143] [-0.06504711]
63 0.0006069818 [1.0279264] [-0.0634834]
64 0.00057814806 [1.0272552] [-0.0619573]
65 0.00055068725 [1.0265999] [-0.06046791]
66 0.0005245278 [1.0259604] [-0.05901428]
67 0.0004996119 [1.0253364] [-0.0575956]
68 0.00047588357 [1.0247273] [-0.05621104]
69 0.0004532766 [1.0241328] [-0.05485978]
70 0.00043174453 [1.0235528] [-0.05354097]
71 0.00041123512 [1.0229865] [-0.05225388]
72 0.0003917031 [1.022434] [-0.05099772]
73 0.00037309653 [1.0218947] [-0.04977177]
74 0.00035537416 [1.0213684] [-0.04857529]
75 0.00033849102 [1.0208547] [-0.04740757]
76 0.00032241447 [1.0203533] [-0.04626793]
77 0.00030709928 [1.0198641] [-0.04515567]
78 0.00029251093 [1.0193865] [-0.04407016]
79 0.0002786171 [1.0189205] [-0.04301074]
80 0.00026538406 [1.0184656] [-0.04197682]
81 0.00025277727 [1.0180218] [-0.04096771]
82 0.0002407704 [1.0175885] [-0.0399829]
83 0.0002293337 [1.0171658] [-0.03902172]
84 0.00021844136 [1.0167531] [-0.03808369]
85 0.00020806213 [1.0163504] [-0.03716817]
86 0.00019818085 [1.0159572] [-0.0362747]
87 0.000188766 [1.0155737] [-0.03540265]
88 0.00017980166 [1.0151993] [-0.03455162]
89 0.00017126095 [1.0148339] [-0.03372103]
90 0.00016312544 [1.0144774] [-0.0329104]
91 0.0001553779 [1.0141293] [-0.03211929]
92 0.00014799698 [1.0137897] [-0.03134715]
93 0.00014096718 [1.0134581] [-0.03059359]
94 0.00013426914 [1.0131347] [-0.02985811]
95 0.00012789248 [1.0128189] [-0.02914038]
96 0.00012181744 [1.0125108] [-0.02843988]
97 0.00011603059 [1.01221] [-0.02775621]
98 0.00011052046 [1.0119165] [-0.02708898]
99 0.00010527024 [1.01163] [-0.02643778]

=== Test ===
X: 5, Y: [5.0317125]
X: 2.5, Y: [2.5026374]
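Since y_data equals x_data exactly, the true least-squares answer is W = 1, b = 0, which the test predictions above (5.03 for X = 5, 2.50 for X = 2.5) are approaching. As a sanity check, here is the closed-form solution computed with NumPy, independent of TensorFlow (my own verification snippet, not part of the original tutorial):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])

# Design matrix [x, 1] for the model y = W*x + b
A = np.stack([x, np.ones_like(x)], axis=1)
(W, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print("W:", W, "b:", b)  # W = 1.0, b = 0.0 (up to floating-point noise)
```

So the 100 gradient-descent steps land within about 0.01 of the exact solution; more steps (or a tuned learning rate) would close the remaining gap.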