From Udacity's deep learning class, the softmax of y_i is simply the exponential divided by the sum of exponentials of the whole Y vector:

S(y_i) = e^(y_i) / Σ_j e^(y_j)

where S(y_i) is the softmax of y_i, e is the exponential, and j runs over the columns of the input vector Y.
I've tried the following:
import numpy as np

def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

scores = [3.0, 1.0, 0.2]
print(softmax(scores))
which returns:
[ 0.8360188 0.11314284 0.05083836]
But the suggested solution was:
def softmax(x):
    """Compute softmax values for each sets of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)
which produces the same output as the first implementation, even though the first implementation explicitly takes the difference of each column and the max and only then divides by the sum.
Can someone show mathematically why? Is one correct and the other one wrong?
Are the implementations similar in terms of code and time complexity? Which is more efficient?
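A quick way to see the difference empirically, as a minimal sketch of my own (assuming NumPy as imported above; naive_softmax and stable_softmax are just illustrative names for the two snippets in the question):

def naive_softmax(x):
    # the "suggested solution": exponentiate directly
    return np.exp(x) / np.sum(np.exp(x), axis=0)

def stable_softmax(x):
    # the first implementation: shift by the max before exponentiating
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

scores = [3.0, 1.0, 0.2]
print(naive_softmax(scores))   # [0.8360188  0.11314284 0.05083836]
print(stable_softmax(scores))  # same values

big = [1000.0, 1000.0, 1000.0]
print(naive_softmax(big))      # np.exp(1000.0) overflows to inf, so this prints [nan nan nan]
print(stable_softmax(big))     # [0.33333333 0.33333333 0.33333333]

Both agree on small inputs, so mathematically neither is wrong; the subtraction only matters for numerical stability on large scores.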
EDIT. As of version 1.2.0, scipy includes softmax as a special function:
https://scipy.github.io/devdocs/generated/scipy.special.softmax.html
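A minimal usage sketch (assuming scipy >= 1.2.0 is installed; the input values are just the scores from the question plus a made-up second row):

from scipy.special import softmax
import numpy as np

scores = np.array([3.0, 1.0, 0.2])
print(softmax(scores))            # [0.8360188  0.11314284 0.05083836]

batch = np.array([[3.0, 1.0, 0.2],
                  [1.0, 2.0, 3.0]])
print(softmax(batch, axis=1))     # softmax computed independently for each row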
I wrote a function applying the softmax over any axis:
def softmax(X, theta = 1.0, axis = None):
    """
    Compute the softmax of each element along an axis of X.

    Parameters
    ----------
    X: ND-Array. Probably should be floats.
    theta (optional): float parameter, used as a multiplier
        prior to exponentiation. Default = 1.0
    axis (optional): axis to compute values along. Default is the
        first non-singleton axis.

    Returns an array the same size as X. The result will sum to 1
    along the specified axis.
    """

    # make X at least 2d
    y = np.atleast_2d(X)

    # find axis
    if axis is None:
        axis = next(j[0] for j in enumerate(y.shape) if j[1] > 1)

    # multiply y against the theta parameter,
    y = y * float(theta)

    # subtract the max for numerical stability
    y = y - np.expand_dims(np.max(y, axis = axis), axis)

    # exponentiate y
    y = np.exp(y)

    # take the sum along the specified axis
    ax_sum = np.expand_dims(np.sum(y, axis = axis), axis)

    # finally: divide elementwise
    p = y / ax_sum

    # flatten if X was 1D
    if len(X.shape) == 1: p = p.flatten()

    return p
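A few usage examples of the function above, as an illustrative sketch (assuming numpy imported as np; the input values are made up):

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
print(softmax(X, axis=1))                  # each row sums to 1
print(softmax(X, axis=0))                  # each column sums to 1
print(softmax(np.array([3.0, 1.0, 0.2])))  # 1-D input returns a 1-D result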
As other users described, subtracting the max is good practice. I wrote a detailed post about it here.
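For completeness, a short sketch of the identity behind that trick: subtracting any constant c (in particular the max) cancels out, so the result is unchanged.

$$
S(y_i) = \frac{e^{y_i - c}}{\sum_j e^{y_j - c}}
       = \frac{e^{-c}\, e^{y_i}}{e^{-c} \sum_j e^{y_j}}
       = \frac{e^{y_i}}{\sum_j e^{y_j}}
$$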
I needed something compatible with the output of a dense layer from Tensorflow.
The solution from @desertnaut does not work in this case because I have batches of data. Therefore, I came up with another solution that should work in both cases:
def softmax(x, axis=-1):
    e_x = np.exp(x - np.max(x))  # same code
    return e_x / e_x.sum(axis=axis, keepdims=True)
Results:
logits = np.asarray([
[-0.0052024, -0.00770216, 0.01360943, -0.008921], # 1
[-0.0052024, -0.00770216, 0.01360943, -0.008921] # 2
])
print(softmax(logits))
#[[0.2492037 0.24858153 0.25393605 0.24827873]
# [0.2492037 0.24858153 0.25393605 0.24827873]]
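As a quick sanity check of my own, the same function also handles a single un-batched vector and reproduces the 1-D result from the question:

print(softmax(np.array([3.0, 1.0, 0.2])))
# [0.8360188  0.11314284 0.05083836]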
Reference: Tensorflow softmax