We use logistic regression when we have to predict the value of a categorical (or discrete) outcome. I believe we use linear regression to predict the value of an outcome given some input values.
So, how do the two methodologies differ?
Current answer
Just to complement the previous answers.
Linear Regression
Linear regression is meant to solve the problem of predicting/estimating the output value for a given element X (say f(x)). The result of the prediction is a continuous function, where the values may be positive or negative. In this case you normally have an input dataset with lots of examples and the output value for each of them. The goal is to fit a model to this dataset so you can predict that output for new, never-seen elements. The classical example is fitting a line to a set of points, but in general linear regression can be used to fit more complex models (using higher polynomial degrees):
Solving the problem
Linear regression can be solved in two different ways, as sketched below:

Normal equation (a direct way to solve the problem)
Gradient descent (an iterative approach)
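Below is a minimal NumPy sketch, using synthetic data, of how both methods recover the same coefficients:

import numpy as np

# Synthetic data: y = 2 + 3*x plus noise
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.uniform(0, 10, 100)])  # design matrix [1, x_1]
y = 2.0 + 3.0 * X[:, 1] + rng.normal(0, 1, 100)

# 1) Normal equation: theta = (X^T X)^-1 X^T y (direct, closed-form solution)
theta_direct = np.linalg.solve(X.T @ X, X.T @ y)

# 2) Gradient descent: repeatedly step against the gradient of the squared error
theta_gd = np.zeros(2)
learning_rate = 0.01
for _ in range(5000):
    gradient = X.T @ (X @ theta_gd - y) / len(y)
    theta_gd -= learning_rate * gradient

print(theta_direct, theta_gd)  # both should be close to [2, 3]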
Logistic Regression
Logistic regression is meant to solve classification problems where, given an element, you have to classify it into one of N categories. Typical examples are, for instance, classifying a mail as spam or not, or, given a vehicle, finding which category it belongs to (car, truck, van, etc.). Basically, the output is a finite set of discrete values.
Solving the problem
Logistic regression has no closed-form solution, so it is solved iteratively, typically with gradient descent. Apart from that, the formulation is very similar to linear regression; the only difference is the use of a different hypothesis function. In linear regression, the hypothesis has the form:
h(x) = theta_0 + theta_1*x_1 + theta_2*x_2 ..
where theta is the model we are trying to fit and [1, x_1, x_2, ..] is the input vector. In logistic regression, the hypothesis function is different:
g(x) = 1 / (1 + e^-x)
This function has a nice property: it maps any value to the range [0, 1], which is appropriate for handling probabilities during classification. For example, in the case of a binary classification, g(X) could be interpreted as the probability of belonging to the positive class. In this case you normally have different classes separated by a decision boundary, which is basically a curve that decides the separation between the different classes. Following is an example of a dataset separated into two classes.
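As a concrete illustration of the hypothesis above, here is a minimal sketch with made-up parameter values:

import numpy as np

def sigmoid(z):
    # Maps any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    # Logistic hypothesis: g(theta^T x), read as P(y=1 | x)
    return sigmoid(np.dot(theta, x))

theta = np.array([-3.0, 1.0])  # made-up parameters, not fitted
x = np.array([1.0, 4.0])       # input vector [1, x_1]
print(h(theta, x))             # ~0.73, the probability of the positive class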
You can also use the code below to generate the linear regression curve (details_df and q3_rechange_delay_df are assumed to be defined earlier):

import matplotlib.pyplot as plt
import pandas as pd
import sklearn.model_selection
import statsmodels.api as sm

q_df = details_df
# q_df = pd.get_dummies(q_df)
q_df = pd.get_dummies(q_df, columns=["1", "2", "3", "4", "5", "6", "7", "8", "9"])
q_1_df = q_df["1"]
q_df = q_df.drop(["2", "3", "4", "5"], axis=1)

# Add an intercept term and split into train/test sets
x = sm.add_constant(q_df)
train_x, test_x, train_y, test_y = sklearn.model_selection.train_test_split(
    x, q3_rechange_delay_df, test_size=0.2, random_state=123
)

# Fit ordinary least squares and inspect the model
lmod = sm.OLS(train_y, train_x).fit()
lmod.summary()
lmod.predict()[:10]
lmod.get_prediction().summary_frame()[:10]

# Q-Q plot of the residuals
sm.qqplot(lmod.resid, line="q")
plt.title("Q-Q plot of Standardized Residuals")
plt.show()
Other answers
I very much agree with the comments above. Beyond that, there are a few more differences:
In linear regression, the residuals are assumed to be normally distributed. In logistic regression, the residuals need to be independent but not normally distributed.

Linear regression assumes that a constant change in the value of the explanatory variable leads to a constant change in the response variable. This assumption does not hold if the value of the response variable represents a probability (as it does in logistic regression).
Generalized linear models (GLMs) do not assume a linear relationship between the dependent and independent variables. The logit model, however, does assume a linear relationship between its link function (the log-odds) and the independent variables.
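As a minimal sketch of that linearity on the logit scale, one could fit a Logit model with statsmodels on synthetic data whose true log-odds are linear in x:

import numpy as np
import statsmodels.api as sm

# Synthetic data where logit(p) = 0.5 + 2*x, i.e. linear on the log-odds scale
rng = np.random.default_rng(1)
x = rng.normal(0, 1, 500)
p = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))
y = rng.binomial(1, p)

logit_model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(logit_model.params)  # estimates should be close to [0.5, 2.0]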
Regression means a continuous variable; linear means there is a linear relation between y and x. Example: you are trying to predict salary from the number of years of experience. Here salary is the dependent variable (y) and years of experience is the independent variable (x):

y = b0 + b1*x1

We are trying to find the optimal values of the constants b0 and b1 that give the best-fitting line for your observed data. It is the equation of a line, which gives a continuous value from x = 0 up to very large values. This line is called the linear regression model.
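A minimal sketch of this salary example with scikit-learn (the numbers are made up for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression

years = np.array([[1], [2], [3], [5], [8], [10]])  # x: years of experience
salary = np.array([45, 50, 60, 80, 110, 130])      # y: salary in $1000s (made up)

model = LinearRegression().fit(years, salary)
print(model.intercept_, model.coef_[0])  # b0 and b1 of the best-fitting line
print(model.predict([[6]]))              # a continuous prediction for 6 years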
Logistic regression is a classification technique. Don't be misled by the term "regression". Here we predict whether y = 0 or y = 1.
Here, we first need to find p(y=1) (the probability that y = 1) for a given x, using the formula below:

p = 1 / (1 + e^-(b0 + b1*x1))

The probability p is related to y through the log-odds, which is linear in x:

ln(p / (1 - p)) = b0 + b1*x1
Example: we can classify tumors with more than a 50% chance of being cancerous as 1, and tumors with less than a 50% chance as 0.
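A minimal sketch of that thresholding (the coefficients b0 and b1 below are made-up values, not fitted ones):

import numpy as np

def p_of_y1(x, b0=-10.0, b1=2.0):
    # P(y=1 | x) for a single feature x, e.g. tumor size
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

for size in [3.0, 5.0, 7.0]:
    p = p_of_y1(size)
    label = 1 if p > 0.5 else 0  # the 50% threshold from the text
    print(size, round(p, 3), label)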
Here the red points are predicted as 0, while the green points are predicted as 1.
In linear regression, the outcome (dependent variable) is continuous: it can take any one of an infinite number of possible values. In logistic regression, the outcome (dependent variable) has only a limited number of possible values.
For instance, if X contains the area of houses in square feet and Y contains the corresponding sale prices of those houses, you could use linear regression to predict the sale price as a function of house size. While the possible sale price may not actually be just any value, there are so many possible values that a linear regression model would be chosen.
If, instead, you wanted to predict whether a house would sell for more than $200K based on its size, you would use logistic regression. The possible outputs are either Yes, the house will sell for more than $200K, or No, it will not.
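A minimal sketch of both setups with scikit-learn, using made-up house data:

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

sqft = np.array([[0.8], [1.2], [1.5], [2.0], [2.6], [3.0]])  # size in 1000s of sq ft
price = np.array([120, 160, 190, 240, 310, 350])             # sale price in $1000s

# Continuous outcome -> linear regression
reg = LinearRegression().fit(sqft, price)
print(reg.predict([[1.8]]))  # a continuous price estimate (~223 here)

# Binary outcome (price > 200) -> logistic regression
above_200k = (price > 200).astype(int)
clf = LogisticRegression().fit(sqft, above_200k)
print(clf.predict([[1.8]]), clf.predict_proba([[1.8]])[0, 1])  # class and P(Yes)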
Simply put, if a linear regression model encounters test cases that lie far from the threshold (say 0.5) used to decide between y = 1 and y = 0, the fitted hypothesis shifts and becomes worse. A linear regression model is therefore not suitable for classification problems.

Another problem is that if the classes are y = 0 and y = 1, h(x) can be > 1 or < 0. So we use logistic regression, where 0 <= h(x) <= 1.
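A small sketch of this issue: a least-squares line fitted to 0/1 labels produces values outside [0, 1] for far-out inputs, while the sigmoid of the same line stays bounded:

import numpy as np

x = np.array([1, 2, 3, 8, 9, 10], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1], dtype=float)

# Least-squares line through the 0/1 labels
b1, b0 = np.polyfit(x, y, 1)
print(b0 + b1 * 15)  # > 1 for a test case far to the right
print(b0 + b1 * -5)  # < 0 for a test case far to the left

# The sigmoid of the same line stays inside (0, 1)
print(1 / (1 + np.exp(-(b0 + b1 * 15))))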