We use logistic regression when we have to predict the value of a categorical (or discrete) outcome. I believe we use linear regression to predict the value of an outcome given some input values.
If so, how do the two methods differ?
Current answer
Just to add to the previous answers.
Linear regression
It is meant to solve the problem of predicting/estimating the output value for a given element X (say f(x)). The result of the prediction is a continuous function, where the values may be positive or negative. In this case you normally have an input dataset with lots of examples, together with the output value for each of them. The goal is to fit a model to this dataset so that you can predict the output for new, never-seen elements. The classical example is fitting a line to a set of points, but in general linear regression can be used to fit more complex models (using higher polynomial degrees):
Solving the problem
Linear regression can be solved in two different ways:

- Normal equation (a direct, closed-form method)
- Gradient descent (an iterative method)
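As a sketch, both approaches fit in a few lines of NumPy (the toy data below is made up purely for illustration):

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2 * x + 1 + rng.normal(0, 0.1, size=50)
X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept column

# 1) Normal equation: solve (X^T X) theta = X^T y directly
theta_ne = np.linalg.solve(X.T @ X, X.T @ y)

# 2) Gradient descent on the mean squared-error cost
theta_gd = np.zeros(2)
lr = 0.01
for _ in range(5000):
    grad = X.T @ (X @ theta_gd - y) / len(y)
    theta_gd -= lr * grad

print(theta_ne)  # both estimates should be close to [1, 2]
print(theta_gd)
```

Both methods converge to essentially the same parameters here; the normal equation is exact but costs a matrix solve, while gradient descent scales better to many features.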
Logistic regression
It is meant to solve classification problems where, given an element, you have to classify it into one of N categories. Typical examples are, for instance, classifying an email as spam or not, or, given a vehicle, finding which category it belongs to (car, truck, van, etc.). Basically, the output is a finite set of discrete values.
Solving the problem
Logistic regression has no closed-form (normal-equation) solution, so it is solved iteratively, typically with gradient descent. The formulation is very similar to linear regression; the only difference is the hypothesis function used. In linear regression the hypothesis has the form:
h(x) = theta_0 + theta_1*x_1 + theta_2*x_2 + ...
where [theta_0, theta_1, theta_2, ...] is the model we are trying to fit and [1, x_1, x_2, ...] is the input vector. In logistic regression the hypothesis function is different; it passes the same linear combination through the sigmoid function:
g(x) = 1 / (1 + e^-x)
This function has a nice property: it maps any value into the range [0, 1], which is appropriate for handling probabilities during classification. For example, in the case of binary classification, g(x) can be interpreted as the probability of belonging to the positive class. In this case the classes are normally separated by a decision boundary, a curve that decides the separation between the different classes. The following is an example of a dataset separated into two classes.
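A minimal sketch of this hypothesis in Python (the parameter values are hypothetical, chosen only to show the probability and the 0.5 decision threshold):

```python
import numpy as np

def sigmoid(z):
    # Maps any real value into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fitted parameters [theta_0, theta_1]
theta = np.array([-4.0, 1.0])
x = np.array([1.0, 6.0])        # input vector [1, x_1] with the bias term

p = sigmoid(theta @ x)          # probability of the positive class
label = int(p >= 0.5)           # decision boundary at p = 0.5
print(p, label)
```

Here theta @ x = 2, so p is above 0.5 and the example is assigned to the positive class.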
You can also use the code below to generate the linear regression curve:

```python
import matplotlib.pyplot as plt
import pandas as pd
import sklearn.model_selection
import statsmodels.api as sm

# details_df and q3_rechange_delay_df come from the author's own dataset
q_df = details_df
# q_df = pd.get_dummies(q_df)
q_df = pd.get_dummies(q_df, columns=["1", "2", "3", "4", "5", "6", "7", "8", "9"])
q_1_df = q_df["1"]
q_df = q_df.drop(["2", "3", "4", "5"], axis=1)

x = sm.add_constant(q_df)
train_x, test_x, train_y, test_y = sklearn.model_selection.train_test_split(
    x, q3_rechange_delay_df, test_size=0.2, random_state=123
)

lmod = sm.OLS(train_y, train_x).fit()
lmod.summary()

lmod.predict()[:10]
lmod.get_prediction().summary_frame()[:10]

sm.qqplot(lmod.resid, line="q")
plt.title("Q-Q plot of Standardized Residuals")
plt.show()
```
Other answers
They are quite similar in how they are solved, but as others have said, one (logistic regression) is for predicting a category "fit" (Y/N or 1/0), and the other (linear regression) is for predicting a value.
So if you want to predict whether you have cancer, Y/N (or a probability), use logistic regression. If you want to know how many years you will live, use linear regression!
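As a sketch with scikit-learn on made-up toy data (the numbers are purely illustrative, not a real medical model):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

age = np.array([[30], [40], [50], [60], [70], [80]])

# Logistic regression: predicts a class (0 = No, 1 = Yes) and a probability
has_condition = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(age, has_condition)
clf.predict([[65]])         # a discrete label, 0 or 1
clf.predict_proba([[65]])   # class probabilities summing to 1

# Linear regression: predicts a continuous value (years)
years_left = np.array([50.0, 42.0, 33.0, 25.0, 16.0, 8.0])
reg = LinearRegression().fit(age, years_left)
reg.predict([[65]])         # a continuous number of years
```

The same input (an age) goes into both models; only the type of output differs, a class versus a number.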
| Basis | Linear | Logistic |
|-------|--------|----------|
| Basic | The data is modelled using a straight line. | The probability of some event is represented as a linear function of a combination of predictor variables. |
| Linear relationship between dependent and independent variables | Required | Not required |
| The independent variables | May be correlated with each other (especially in multiple linear regression). | Should not be correlated with each other (no multicollinearity should exist). |
In linear regression the outcome is continuous, whereas in logistic regression the outcome has only a limited number of possible values (it is discrete).
Example: if x is the size of a plot in square feet, then predicting its price y falls under linear regression.
If instead you want to predict, based on the area, whether the plot will sell for more than 300,000 rupees, you would use logistic regression. The possible outputs are Yes, the plot will sell for more than 300,000 rupees, or No.
Simply put, in a linear regression model, if new test cases arrive that lie far from the threshold (say 0.5) used to predict y = 1 or y = 0, the fitted hypothesis shifts and becomes worse. That is why a linear regression model is not suitable for classification problems.
Another problem is that if the classes are y = 0 and y = 1, h(x) can be > 1 or < 0. That is why we use logistic regression, where 0 <= h(x) <= 1.
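A quick NumPy sketch of this failure mode, fitting a least-squares line directly to 0/1 labels (toy data, for illustration only):

```python
import numpy as np

# Toy binary labels: fit a straight line to them by least squares
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 1.0, 1.0, 1.0])

X = np.column_stack([np.ones_like(x), x])        # [1, x] design matrix
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

h_linear = X @ theta
print(h_linear)          # the fitted line escapes the [0, 1] range at x = 5

h_logistic = 1.0 / (1.0 + np.exp(-(X @ theta)))  # sigmoid keeps every value in (0, 1)
print(h_logistic)
```

The raw line produces a "probability" above 1 for the largest x, while squashing the same linear combination through the sigmoid keeps every output strictly inside (0, 1).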