μ ν νκ·λ₯Ό Pytorchλ‘ κ΅¬ννκ² μ΅λλ€.
μ΄ κΈμ 보μκΈ° μ μ μλ λ§ν¬λ₯Ό 보μλ κ²μ μΆμ²λ립λλ€.
https://coding-yoon.tistory.com/50?category=825914
μ΅λν μ ν νκ·μμ²λΌ 보기 μ½κ²λ μ½λ©νμμ΅λλ€.
# μ ν νκ·
import torch
import torch.nn as nn # μ ννκ·λ₯Ό λΆλ¬μ€κΈ° μν λΌμ΄λΈλ¬λ¦¬
import torch.optim as optim # κ²½μ¬νκ°λ²λ₯Ό λΆλ¬μ€κΈ° μν λΌμ΄λΈλ¬λ¦¬
import torch.nn.init as init # ν
μμ μ΄κΈ°κ°μ ꡬνκΈ° μν λΌμ΄λΈλ¬λ¦¬
data = 1000 # λͺ κ°μ λ°μ΄ν°λ₯Ό λ§λ€ κ²μΈκ°...
epoch = 500 # λͺ λ²μ κ±°μ³ λ릴 κ²μΈκ°... ( = λ°μ΄ν° μ 체λ₯Ό νμ΅μ ν λ² μ¬μ©νλ μ£ΌκΈ° )
W = 4 # weight ( = κΈ°μΈκΈ° )
b = 5 # bias ( = μ νΈ )
x = init.uniform_(torch.Tensor(data, 1), -10, 10) # Input data
y = W * x + b # Output data
model = nn.Linear(1, 1) # μ ννκ· λͺ¨λΈ ( Input νΉμ±μ μ, κ²°κ³Όλ‘ λμ€λ νΉμ±μ μ )
cost_function = nn.MSELoss() # MSE : μ κ³±μ νκ· ( = Mean of squares )
# optim : μ΅μ ν ν¨μ ( = optimizer)
# SGD : κ²½μ¬νκ°λ² ( = stochastic gradient descent )
# model.parameters() : model μ parameter(w, b) λ₯Ό μ λ¬
# lr : νμ΅λ₯ ( = learning rate )
optimizer = optim.SGD(model.parameters(), lr=0.01)
for i in range(epoch):
optimizer.zero_grad() # μ²μμ gradient κ°μ΄ μκΈ° λλ¬Έμ 0μΌλ‘ μ΄κΈ°ν
H_x = model(x) # hypothesis
loss = cost_function(H_x, y) # cost(=loss) ꡬνκΈ°
loss.backward() # W, bμ λν κΈ°μΈκΈ° κ³μ°
optimizer.step() # optimizer νΈμΆ, κ²½μ¬νκ°λ²μ μ¬μ©νμ¬ μ
ν
μ΄νΈ
# 10λ² μ© μννλ©΄ cost μΆλ ₯
if i % 10 == 0:
print("Cost(=loss) :", loss.item())
result = list(model.parameters())
print("W :", result[0].item())
print("b :", result[1].item())
# κ²°κ³Ό
Cost(=loss) : 814.8699340820312
Cost(=loss) : 12.195707321166992
Cost(=loss) : 8.143980026245117
Cost(=loss) : 5.438344955444336
Cost(=loss) : 3.6315879821777344
Cost(=loss) : 2.4250807762145996
Cost(=loss) : 1.6194065809249878
Cost(=loss) : 1.081398606300354
Cost(=loss) : 0.7221314907073975
Cost(=loss) : 0.4822206497192383
Cost(=loss) : 0.3220142424106598
Cost(=loss) : 0.21503257751464844
Cost(=loss) : 0.14359332621097565
Cost(=loss) : 0.09588787704706192
Cost(=loss) : 0.06403134018182755
Cost(=loss) : 0.042758237570524216
Cost(=loss) : 0.02855268307030201
Cost(=loss) : 0.01906675659120083
Cost(=loss) : 0.012732233852148056
Cost(=loss) : 0.008502312004566193
Cost(=loss) : 0.005677602719515562
Cost(=loss) : 0.003791437717154622
Cost(=loss) : 0.0025317480321973562
Cost(=loss) : 0.001690658857114613
Cost(=loss) : 0.001128957374021411
Cost(=loss) : 0.0007538488134741783
Cost(=loss) : 0.0005033717607147992
Cost(=loss) : 0.0003361686831340194
Cost(=loss) : 0.00022449734387919307
Cost(=loss) : 0.0001499166974099353
Cost(=loss) : 0.00010010885307565331
Cost(=loss) : 6.684719119220972e-05
Cost(=loss) : 4.463006553123705e-05
Cost(=loss) : 2.9802462449879386e-05
Cost(=loss) : 1.990168311749585e-05
Cost(=loss) : 1.3289816706674173e-05
Cost(=loss) : 8.872347279975656e-06
Cost(=loss) : 5.9264275478199124e-06
Cost(=loss) : 3.959625701099867e-06
Cost(=loss) : 2.6436666757945204e-06
Cost(=loss) : 1.766238597156189e-06
Cost(=loss) : 1.1812345519501832e-06
Cost(=loss) : 7.882306931605854e-07
Cost(=loss) : 5.261869091555127e-07
Cost(=loss) : 3.516942399528489e-07
Cost(=loss) : 2.3484382438709872e-07
Cost(=loss) : 1.5710433842741622e-07
Cost(=loss) : 1.0491751822883089e-07
Cost(=loss) : 6.964684473587113e-08
Cost(=loss) : 4.667232644806063e-08
W : 3.9999990463256836
b : 4.999824047088623
Process finished with exit code 0
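As a quick sanity check (my own addition, not part of the original post), you can compare the trained model's predictions against the true line y = 4x + 5 on inputs it never saw. Here the learned parameters are filled in by hand so the snippet runs standalone; in the post, `model` is the `nn.Linear(1, 1)` trained above.

```python
import torch
import torch.nn as nn

# Stand-in for the trained model: set the parameters to the values learned above
# ( W ≈ 4, b ≈ 5 ) so this snippet is self-contained.
model = nn.Linear(1, 1)
with torch.no_grad():
    model.weight.fill_(4.0)
    model.bias.fill_(5.0)

# Predict on fresh inputs, with gradient tracking disabled for inference.
x_new = torch.tensor([[-3.0], [0.0], [7.0]])
with torch.no_grad():
    y_pred = model(x_new)

y_true = 4 * x_new + 5
print(torch.allclose(y_pred, y_true))  # True: predictions lie on the true line
```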
μλ κ²°κ³Όλ₯Ό μμΈν 보μλ©΄ μ€κ°κΉμ§λ costμ κ°μ΄ κ΅μ₯ν μλ€κ° νλ°λΆμ costμ κ°μ΄ μ¦κ°νλ κ²μ νμΈν μ μμ΅λλ€.
λͺ¨λΈλ§μ ν λ μ μμ¬ν μ€ epochλ₯Ό 무쑰건 λ§μ΄ μ€λ€νμ¬, νμ΅μ΄ μλλ κ²μ΄ μλλλ€.
λ°μ΄ν° μ²λ¦¬, λͺ¨λΈλ§, epoch λ±μ λͺ¨λ μ μ νκ² μ μ©νμ¬μΌ ν©λλ€.