"TensorFlow 단순회귀분석"의 두 판 사이의 차이

잔글 (Jmnote님이 TensorFlow 단순선형회귀분석 문서를 TensorFlow 단순회귀분석 문서로 이동했습니다)
 
(같은 사용자의 중간 판 4개는 보이지 않습니다)
7번째 줄: 7번째 줄:
import tensorflow as tf
import tensorflow as tf


learning_rate = 0.05
learning_rate = 0.02
x_data = [1,2,3]
x_data = [1,2,3]
y_data = [1,2,3]
y_data = [1,2,3]
25번째 줄: 25번째 줄:
     if step % 200 == 0:
     if step % 200 == 0:
         print(step, sess.run(W), sess.run(b))
         print(step, sess.run(W), sess.run(b))
# 0 [ 0.02381748] [ 0.36606845]
# 0 [ 0.69967157] [ 0.0295405]
# 200 [ 0.97388893] [ 0.05935668]
# 200 [ 0.97727495] [ 0.05165951]
# 400 [ 0.99767464] [ 0.00528606]
# 400 [ 0.99133259] [ 0.0197031]
# 600 [ 0.99979293] [ 0.00047073]
# 600 [ 0.99669421] [ 0.00751483]
# 800 [ 0.99998158] [ 4.19568278e-05]
# 800 [ 0.99873918] [ 0.00286615]
# 1000 [ 0.99999833] [ 3.76215985e-06]
# 1000 [ 0.99951905] [ 0.00109322]
# 1200 [ 0.99999964] [ 7.06425851e-07]
# 1200 [ 0.99981648] [ 0.00041713]
# 1400 [ 0.99999964] [ 6.58742181e-07]
# 1400 [ 0.99992996] [ 0.00015933]
# 1600 [ 0.99999964] [  6.58742181e-07]
# 1600 [ 0.99997312] [  6.09436174e-05]
# 1800 [ 0.99999964] [  6.58742181e-07]
# 1800 [ 0.99998975] [  2.34372128e-05]
# 2000 [ 0.99999964] [  6.58742181e-07]
# 2000 [ 0.99999601] [  8.96838174e-06]
</source>
</source>
:→ 수행결과는 약간 다를 수 있음 (W 초기값이 랜덤이기 때문)
:→ 수행결과는 약간 다를 수 있음 (W 초기값이 랜덤이기 때문)
43번째 줄: 43번째 줄:
import tensorflow as tf
import tensorflow as tf


learning_rate = 0.02
x_data = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
x_data = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
y_data = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]
y_data = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]
learning_rate = 0.02


W = tf.Variable(0.0)
W = tf.Variable(0.0)
b = tf.Variable(0.0)
b = tf.Variable(0.0)
y = W * x_data + b
y = W * x_data + b
cost = tf.reduce_mean(tf.square(y - y_data))
 
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
loss = tf.reduce_mean(tf.square(y - y_data))
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)


sess = tf.Session()
sess = tf.Session()
59번째 줄: 60번째 줄:
     sess.run(train)
     sess.run(train)
     if step % 10000 == 0:
     if step % 10000 == 0:
         print( "step=", step, "W=", sess.run(W), "b=", sess.run(b) )
         print(step, sess.run(W), sess.run(b))
       
# 0 4.12865 2.48312
# step= 0 W= 4.12865 b= 2.48312
# 10000 52.1455 -23.9476
# step= 10000 W= 52.1455 b= -23.9476
# 20000 58.7961 -34.9614
# step= 20000 W= 58.7961 b= -34.9614
# 30000 60.6001 -37.9489
# step= 30000 W= 60.6001 b= -37.9489
# 40000 61.0897 -38.7597
# step= 40000 W= 61.0897 b= -38.7597
# 50000 61.2225 -38.9796
# step= 50000 W= 61.2225 b= -38.9796
# 60000 61.2588 -39.0398
# step= 60000 W= 61.2588 b= -39.0398
# 70000 61.2619 -39.0449
# step= 70000 W= 61.2619 b= -39.0449
# 80000 61.2619 -39.0449
# step= 80000 W= 61.2619 b= -39.0449
</source>
</source>


==같이 보기==
==같이 보기==
* [[TensorFlow 회귀분석]]
* [[sklearn 단순선형회귀분석]]
* [[sklearn 단순선형회귀분석]]
* [[statsmodels 단순선형회귀분석]]
* [[statsmodels 단순선형회귀분석]]

2020년 4월 2일 (목) 23:16 기준 최신판

1 Overview

TensorFlow simple linear regression
  • TensorFlow is a 'general-purpose model training library' rather than a 'statistical analysis package', so the approach is quite different: instead of calling a one-shot fitting routine, you define a model and a loss, then minimize the loss iteratively with an optimizer.

2 Example 1

import tensorflow as tf  # TensorFlow 1.x graph/session API

learning_rate = 0.02
x_data = [1, 2, 3]
y_data = [1, 2, 3]

# Model parameters: W starts uniformly random in [-1, 1], b starts at 0
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

# Mean squared error, minimized with plain gradient descent
loss = tf.reduce_mean(tf.square(y - y_data))
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for step in range(2001):
    sess.run(train)
    if step % 200 == 0:
        print(step, sess.run(W), sess.run(b))
# 0 [ 0.69967157] [ 0.0295405]
# 200 [ 0.97727495] [ 0.05165951]
# 400 [ 0.99133259] [ 0.0197031]
# 600 [ 0.99669421] [ 0.00751483]
# 800 [ 0.99873918] [ 0.00286615]
# 1000 [ 0.99951905] [ 0.00109322]
# 1200 [ 0.99981648] [ 0.00041713]
# 1400 [ 0.99992996] [ 0.00015933]
# 1600 [ 0.99997312] [  6.09436174e-05]
# 1800 [ 0.99998975] [  2.34372128e-05]
# 2000 [ 0.99999601] [  8.96838174e-06]
→ Results may vary slightly from run to run (the initial value of W is random)
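
The code above targets the TensorFlow 1.x graph/session API; tf.Session and tf.train.GradientDescentOptimizer are not in the default namespace in TensorFlow 2.x. For comparison, here is a minimal sketch of the same regression in eager-mode TensorFlow 2.x, assuming TF 2.x is installed:

import tensorflow as tf  # assumes TensorFlow 2.x (eager execution)

learning_rate = 0.02
x_data = tf.constant([1.0, 2.0, 3.0])
y_data = tf.constant([1.0, 2.0, 3.0])

W = tf.Variable(tf.random.uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
optimizer = tf.keras.optimizers.SGD(learning_rate)

for step in range(2001):
    with tf.GradientTape() as tape:       # record ops for autodiff
        loss = tf.reduce_mean(tf.square(W * x_data + b - y_data))
    grads = tape.gradient(loss, [W, b])   # dloss/dW, dloss/db
    optimizer.apply_gradients(zip(grads, [W, b]))
    if step % 200 == 0:
        print(step, W.numpy(), b.numpy())

tf.GradientTape records the forward computation so tape.gradient can differentiate the loss with respect to W and b; SGD then applies the same update W -= learning_rate * grad as the 1.x optimizer.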

3 Example 2

import tensorflow as tf  # TensorFlow 1.x graph/session API

learning_rate = 0.02
# Heights (m) and corresponding weights (kg)
x_data = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
y_data = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]

# Both parameters start at 0 this time (no random initialization)
W = tf.Variable(0.0)
b = tf.Variable(0.0)
y = W * x_data + b

loss = tf.reduce_mean(tf.square(y - y_data))
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

for step in range(80001):
    sess.run(train)
    if step % 10000 == 0:
        print(step, sess.run(W), sess.run(b))
# 0 4.12865 2.48312
# 10000 52.1455 -23.9476
# 20000 58.7961 -34.9614
# 30000 60.6001 -37.9489
# 40000 61.0897 -38.7597
# 50000 61.2225 -38.9796
# 60000 61.2588 -39.0398
# 70000 61.2619 -39.0449
# 80000 61.2619 -39.0449
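
Gradient descent needs tens of thousands of steps here because x is far from zero-mean, which couples the W and b directions and makes the problem ill-conditioned. For a regression this small, the ordinary least squares solution is available in closed form and can be used to sanity-check the converged values; a minimal NumPy sketch (variable names are illustrative):

import numpy as np

# Same data as above: heights (m) and weights (kg)
x = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
              1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
y = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
              63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])

# Closed-form OLS: W = cov(x, y) / var(x), b = mean(y) - W * mean(x)
W = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b = y.mean() - W * x.mean()
print(W, b)  # ≈ 61.27, -39.06 -- close to what gradient descent converged to

Standardizing x_data before training would let gradient descent reach the same answer in far fewer steps.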

4 See also

  • TensorFlow 회귀분석
  • sklearn 단순선형회귀분석
  • statsmodels 단순선형회귀분석
