From the Udacity deep learning class, the softmax of y_i is simply the exponential divided by the sum of exponentials of the whole Y vector:

S(y_i) = e^{y_i} / Σ_j e^{y_j}

where S(y_i) is the softmax function of y_i, e is the exponential, and j indexes the columns of the input vector Y.
I've tried the following:
import numpy as np

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

scores = [3.0, 1.0, 0.2]
print(softmax(scores))
which returns:
[ 0.8360188 0.11314284 0.05083836]
But the suggested solution was:
def softmax(x):
    """Compute softmax values for each set of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)
It produces the same output as the first implementation, even though the first implementation explicitly takes the difference of each column and the max and only then divides by the sum.
Can someone show mathematically why? Is one correct and the other one wrong?
Are the implementations similar in terms of code and time complexity? Which is more efficient?
From a mathematical point of view, both sides are equal.
This is easy to show. Let m = max(x). Your function softmax then returns a vector whose i-th coordinate equals

softmax_i(x) = e^{x_i - m} / Σ_j e^{x_j - m} = (e^{x_i} · e^{-m}) / (e^{-m} · Σ_j e^{x_j}) = e^{x_i} / Σ_j e^{x_j}

Notice that this works for any m, because for all (even complex) numbers, e^m != 0.
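As a quick numerical sanity check of that identity (a minimal sketch; naive_softmax and shifted_softmax are just my names for the two versions in the question), both implementations agree to floating-point precision on well-behaved input:

import numpy as np

def naive_softmax(x):
    # direct definition: e^x / sum(e^x)
    return np.exp(x) / np.sum(np.exp(x), axis=0)

def shifted_softmax(x):
    # same value, computed after subtracting m = max(x)
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

x = np.array([3.0, 1.0, 0.2])
print(np.allclose(naive_softmax(x), shifted_softmax(x)))  # True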
From a computational complexity point of view, they are also equivalent: both run in O(n) time, where n is the size of the vector.
From a numerical stability point of view, the first solution is preferred, because e^x grows very fast and overflows even for fairly small values of x. Subtracting the maximum value gets rid of this overflow. To experience this in practice, try feeding x = np.array([1000, 5]) into both of your functions: one will return the correct probabilities, the other will overflow with nan.
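Here is a minimal sketch of that experiment, reusing the two functions from the check above:

x = np.array([1000.0, 5.0])
print(shifted_softmax(x))  # [1. 0.] -- a valid distribution (e^-995 just underflows to 0)
print(naive_softmax(x))    # [nan 0.] plus "RuntimeWarning: overflow encountered in exp"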
Also, your solution works only for vectors (the Udacity quiz wants you to calculate it for matrices as well). To fix that, you need to use sum(axis=0).
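As a sketch of what that fix looks like in full (assuming, as in the quiz, that each column of a matrix is a separate set of scores; the name softmax_cols is mine):

def softmax_cols(x):
    """Column-wise softmax for a 1-D vector or a 2-D matrix
    whose columns are independent sets of scores."""
    e_x = np.exp(x - np.max(x, axis=0))  # subtract the per-column max for stability
    return e_x / e_x.sum(axis=0)

scores = np.array([[1.0, 2.0],
                   [2.0, 4.0],
                   [3.0, 6.0]])
print(softmax_cols(scores))              # each column is a valid distribution
print(softmax_cols(scores).sum(axis=0))  # [1. 1.]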
I was curious to see the performance difference between these:
import numpy as np

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)

def softmaxv2(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

def softmaxv3(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / np.sum(e_x, axis=0)

def softmaxv4(x):
    """Compute softmax values for each set of scores in x."""
    return np.exp(x - np.max(x)) / np.sum(np.exp(x - np.max(x)), axis=0)
x = [10, 10, 18, 9, 15, 3, 1, 2, 1, 10, 10, 10, 8, 15]
Using:
print("----- softmax")
%timeit a=softmax(x)
print("----- softmaxv2")
%timeit a=softmaxv2(x)
print("----- softmaxv3")
%timeit a=softmaxv2(x)
print("----- softmaxv4")
%timeit a=softmaxv2(x)
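One caveat about these timings (my observation, not part of the original runs): x here is a plain Python list, so every call also pays for an implicit list-to-array conversion inside np.exp and np.max. Converting it once up front, e.g.

x = np.asarray(x)
%timeit a=softmaxv2(x)

isolates the cost of the softmax computation itself.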
Increasing the values inside x (+100, +200, +500, ...), I get consistently better results with the original numpy version (here is just one test):
----- softmax
The slowest run took 8.07 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 17.8 µs per loop
----- softmaxv2
The slowest run took 4.30 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23 µs per loop
----- softmaxv3
The slowest run took 4.06 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23 µs per loop
----- softmaxv4
10000 loops, best of 3: 23 µs per loop
...until the values inside x reach ~800, at which point I get:
----- softmax
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: RuntimeWarning: overflow encountered in exp
after removing the cwd from sys.path.
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: RuntimeWarning: invalid value encountered in true_divide
after removing the cwd from sys.path.
The slowest run took 18.41 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23.6 µs per loop
----- softmaxv2
The slowest run took 4.18 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 22.8 µs per loop
----- softmaxv3
The slowest run took 19.44 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 23.6 µs per loop
----- softmaxv4
The slowest run took 16.82 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 22.7 µs per loop
As some have said, your version is more numerically stable "for large numbers". For small numbers it could be the other way around.