Batch Gradient Descent using algorithm (boston)
1. Loading the data
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston

# Load the Boston housing dataset and build a DataFrame with the target as "PRICE"
boston = load_boston()
boston_df = pd.DataFrame(boston.data, columns = boston["feature_names"])
boston_df["PRICE"] = boston.target
boston_df.head()
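Note that load_boston was removed from scikit-learn 1.2 and later, so the import above may fail on recent versions. A minimal sketch of an equivalent load via OpenML, assuming the public "boston" dataset (version 1) is still hosted there:

from sklearn.datasets import fetch_openml
import pandas as pd

# Fetch the Boston housing data from OpenML (assumed dataset name/version)
boston = fetch_openml(name="boston", version=1, as_frame=True)
boston_df = boston.data.copy()
boston_df["PRICE"] = boston.target.astype(float)
boston_df.head()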
2. Defining batch gradient descent as a function
def gradient(feature, target, learning_rate, iter_epochs, verbose):
    # Randomly initialize the bias and the two weights
    bias = np.random.rand(1,)
    w1 = np.random.rand(1,)
    w2 = np.random.rand(1,)

    feature_1 = feature[:, 0]
    feature_2 = feature[:, 1]

    print("####### Initial bias, w1, w2 #######\n")
    print("bias : {}\tw1 : {}\tw2 : {}\n\n\n".format(bias, w1, w2))

    for i in range(iter_epochs):
        N = len(target)
        # Prediction of the current linear model and its residuals
        predict = bias + w1 * feature_1 + w2 * feature_2
        diff = target - predict

        # Gradients of the MSE loss over the full batch, scaled by the learning rate
        bias_update = -(2/N) * learning_rate * (np.dot(np.ones((N,)), diff))
        w1_update = -(2/N) * learning_rate * (np.dot(feature_1.T, diff))
        w2_update = -(2/N) * learning_rate * (np.dot(feature_2.T, diff))

        err = np.mean(np.square(diff))

        # Gradient descent step
        bias = bias - bias_update
        w1 = w1 - w1_update
        w2 = w2 - w2_update

        if verbose:
            print("Epoch( {}/{})\tbias : {}\tw1 : {}\tw2 : {}\terror : {}"
                  .format(i+1, iter_epochs, bias, w1, w2, err))

    return bias, w1, w2
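The updates inside the loop implement full-batch gradient descent on the mean squared error. Writing the prediction as $\hat{y}_i = b + w_1 x_{1,i} + w_2 x_{2,i}$, the loss and the update rules the code applies each epoch are:

$$
L = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2
$$
$$
\frac{\partial L}{\partial b} = -\frac{2}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right), \qquad
\frac{\partial L}{\partial w_k} = -\frac{2}{N}\sum_{i=1}^{N} x_{k,i}\left(y_i - \hat{y}_i\right)
$$
$$
b \leftarrow b - \eta \frac{\partial L}{\partial b}, \qquad
w_k \leftarrow w_k - \eta \frac{\partial L}{\partial w_k}, \qquad k \in \{1, 2\}
$$

where $\eta$ is the learning rate. The np.dot calls compute these sums over all N samples at once, which is what makes this the batch (rather than stochastic or mini-batch) variant.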
3. Preprocessing the feature data
from sklearn.preprocessing import MinMaxScaler

# Scale the RM and LSTAT features to the [0, 1] range
mms = MinMaxScaler()
scaled_feature = mms.fit_transform(boston_df[["RM", "LSTAT"]])
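MinMaxScaler rescales each column as x' = (x - min) / (max - min). As a quick sanity check (a minimal sketch, not part of the original post), each scaled column should now span exactly [0, 1]:

print(scaled_feature.min(axis=0), scaled_feature.max(axis=0))  # expected: [0. 0.] [1. 1.]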
4. Finding the weights and intercept with batch gradient descent
bias, w1, w2 = gradient(scaled_feature, boston_df["PRICE"], learning_rate = 0.01,
                        iter_epochs = 1000, verbose = True)
5. Completing the linear equation with the derived weights and intercept, then predicting
# Apply the fitted linear model to the scaled features
z = bias + w1 * scaled_feature[:, 0] + w2 * scaled_feature[:, 1]
print(z)
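To gauge how well the hand-rolled model fits, the predictions can be compared against the actual prices with mean squared error (a minimal sketch; the exact value depends on the random initialization and number of epochs):

from sklearn.metrics import mean_squared_error

# MSE between the actual prices and the hand-computed predictions
print(mean_squared_error(boston_df["PRICE"], z))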
6. Adding the prediction results to the DataFrame
boston_df["PREDICT_using_hand"] = z
boston_df.head()
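As a cross-check (a sketch not in the original post), fitting scikit-learn's LinearRegression on the same two scaled features should give an intercept and coefficients close to the bias, w1, and w2 found above, once enough epochs have run:

from sklearn.linear_model import LinearRegression

# Closed-form linear regression on the same scaled features for comparison
lr = LinearRegression()
lr.fit(scaled_feature, boston_df["PRICE"])
print(lr.intercept_, lr.coef_)  # should be close to bias, w1, w2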
