"TensorFlow 단순회귀분석"의 두 판 사이의 차이

7번째 줄: 7번째 줄:
import tensorflow as tf
import tensorflow as tf


learning_rate = 0.05
learning_rate = 0.02
iteration = 2001
 
x_data = [1,2,3]
x_data = [1,2,3]
y_data = [1,2,3]
y_data = [1,2,3]
21번째 줄: 23번째 줄:
sess.run( tf.global_variables_initializer() )
sess.run( tf.global_variables_initializer() )


for step in range(2001):
for step in range(iteration):
     sess.run(train)
     sess.run(train)
     if step % 200 == 0:
     if step % 200 == 0:
         print(step, sess.run(W), sess.run(b))
         print(step, sess.run(W), sess.run(b))
# 0 [ 0.02381748] [ 0.36606845]
# 0 [ 0.69967157] [ 0.0295405]
# 200 [ 0.97388893] [ 0.05935668]
# 200 [ 0.97727495] [ 0.05165951]
# 400 [ 0.99767464] [ 0.00528606]
# 400 [ 0.99133259] [ 0.0197031]
# 600 [ 0.99979293] [ 0.00047073]
# 600 [ 0.99669421] [ 0.00751483]
# 800 [ 0.99998158] [ 4.19568278e-05]
# 800 [ 0.99873918] [ 0.00286615]
# 1000 [ 0.99999833] [ 3.76215985e-06]
# 1000 [ 0.99951905] [ 0.00109322]
# 1200 [ 0.99999964] [ 7.06425851e-07]
# 1200 [ 0.99981648] [ 0.00041713]
# 1400 [ 0.99999964] [ 6.58742181e-07]
# 1400 [ 0.99992996] [ 0.00015933]
# 1600 [ 0.99999964] [  6.58742181e-07]
# 1600 [ 0.99997312] [  6.09436174e-05]
# 1800 [ 0.99999964] [  6.58742181e-07]
# 1800 [ 0.99998975] [  2.34372128e-05]
# 2000 [ 0.99999964] [  6.58742181e-07]
# 2000 [ 0.99999601] [  8.96838174e-06]
</source>
</source>
:→ 수행결과는 약간 다를 수 있음 (W 초기값이 랜덤이기 때문)
:→ 수행결과는 약간 다를 수 있음 (W 초기값이 랜덤이기 때문)

2017년 12월 21일 (목) 10:14 판

1 Overview

TensorFlow simple linear regression analysis
  • TensorFlow is a general-purpose model-training library rather than a statistical analysis package, so its approach differs considerably: instead of returning the coefficients in closed form, it fits them by iterative gradient descent (see the sketch below).
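
For contrast, a statistics package would hand back the least-squares coefficients directly. A minimal NumPy sketch of that closed-form fit, using the same toy data as Example 1 (NumPy and these variable names are illustrative additions, not part of the original article):

import numpy as np

x = np.array([1, 2, 3], dtype=float)
y = np.array([1, 2, 3], dtype=float)

# Ordinary least squares in closed form:
# W = cov(x, y) / var(x), b = mean(y) - W * mean(x)
W = np.cov(x, y, bias=True)[0, 1] / np.var(x)
b = y.mean() - W * x.mean()
print(W, b)  # 1.0 0.0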

2 Example 1

import tensorflow as tf  # TensorFlow 1.x API (tf.Session, tf.train)

learning_rate = 0.02
iteration = 2001

x_data = [1,2,3]
y_data = [1,2,3]

# Parameters: W starts at a random value in [-1, 1), b starts at 0
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

# Mean squared error between predictions and targets
loss = tf.reduce_mean(tf.square(y - y_data))
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# 2001 gradient steps, logging W and b every 200 steps
for step in range(iteration):
    sess.run(train)
    if step % 200 == 0:
        print(step, sess.run(W), sess.run(b))
# 0 [ 0.69967157] [ 0.0295405]
# 200 [ 0.97727495] [ 0.05165951]
# 400 [ 0.99133259] [ 0.0197031]
# 600 [ 0.99669421] [ 0.00751483]
# 800 [ 0.99873918] [ 0.00286615]
# 1000 [ 0.99951905] [ 0.00109322]
# 1200 [ 0.99981648] [ 0.00041713]
# 1400 [ 0.99992996] [ 0.00015933]
# 1600 [ 0.99997312] [  6.09436174e-05]
# 1800 [ 0.99998975] [  2.34372128e-05]
# 2000 [ 0.99999601] [  8.96838174e-06]
→ Results may vary slightly from run to run, because W is initialized randomly.
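
The code above targets the TensorFlow 1.x session API. For readers on TensorFlow 2.x, a hedged sketch of the same fit using tf.GradientTape and tf.optimizers.SGD (this variant is an addition, not from the original article):

import tensorflow as tf  # TensorFlow 2.x, eager execution

x_data = tf.constant([1.0, 2.0, 3.0])
y_data = tf.constant([1.0, 2.0, 3.0])

W = tf.Variable(tf.random.uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
opt = tf.optimizers.SGD(learning_rate=0.02)

for step in range(2001):
    # Record the forward pass so gradients can be taken
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(W * x_data + b - y_data))
    grads = tape.gradient(loss, [W, b])
    opt.apply_gradients(zip(grads, [W, b]))
    if step % 200 == 0:
        print(step, W.numpy(), b.numpy())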

3 Example 2

import tensorflow as tf

# x: heights in meters, y: weights in kg (a classic simple-regression data set)
x_data = [1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83]
y_data = [52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46]
learning_rate = 0.02

# Both parameters start at 0, so this run is deterministic (unlike Example 1)
W = tf.Variable(0.0)
b = tf.Variable(0.0)
y = W * x_data + b
cost = tf.reduce_mean(tf.square(y - y_data))
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Many more steps than Example 1: on this unscaled x, W and b converge slowly
for step in range(80001):
    sess.run(train)
    if step % 10000 == 0:
        print("step=", step, "W=", sess.run(W), "b=", sess.run(b))
# step= 0 W= 4.12865 b= 2.48312
# step= 10000 W= 52.1455 b= -23.9476
# step= 20000 W= 58.7961 b= -34.9614
# step= 30000 W= 60.6001 b= -37.9489
# step= 40000 W= 61.0897 b= -38.7597
# step= 50000 W= 61.2225 b= -38.9796
# step= 60000 W= 61.2588 b= -39.0398
# step= 70000 W= 61.2619 b= -39.0449
# step= 80000 W= 61.2619 b= -39.0449
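
Convergence in Example 2 is slow because x is far from zero mean on its own scale, which couples the W and b updates. A hedged sketch of the usual remedy, standardizing x before training and unscaling the coefficients afterwards (this variant, its learning rate, and its step count are illustrative additions, not from the original article):

import tensorflow as tf
import numpy as np

x_raw = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83], dtype=np.float32)
y_data = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46], dtype=np.float32)

# Standardize x so the loss surface is well-conditioned for gradient descent
mu, sigma = x_raw.mean(), x_raw.std()
x_data = (x_raw - mu) / sigma

W = tf.Variable(0.0)
b = tf.Variable(0.0)
cost = tf.reduce_mean(tf.square(W * x_data + b - y_data))
train = tf.train.GradientDescentOptimizer(0.1).minimize(cost)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for step in range(501):  # a few hundred steps suffice on the standardized scale
    sess.run(train)

# Map the fitted coefficients back to the original x scale:
# y = W_s * (x - mu) / sigma + b_s  =  (W_s / sigma) * x + (b_s - W_s * mu / sigma)
W_s, b_s = sess.run(W), sess.run(b)
print("W=", W_s / sigma, "b=", b_s - W_s * mu / sigma)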

4 See also
