I'm new to TensorFlow. I can't figure out the difference between tf.placeholder and tf.Variable. As far as I can tell, tf.placeholder is used for input data, and tf.Variable is used to store the state of the data. That's all I know.
Could someone explain their differences in more detail? In particular, when should I use tf.Variable and when should I use tf.placeholder?
Current answer
Think of a Variable in TensorFlow as a normal variable like the ones we use in programming languages: we initialize it, and we can modify it later as well. A placeholder, on the other hand, doesn't require an initial value; it simply allocates a block of memory for future use. Later, we can use feed_dict to feed data into the placeholder. By default, a placeholder has an unconstrained shape, which allows you to feed tensors of different shapes in a session. You can constrain the shape by passing the optional shape argument, as I have done below.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, (3, 4))  # shape is constrained to 3 rows, 4 columns
y = x + 2
sess = tf.Session()
print(sess.run(y))  # will cause an error: no value has been fed for x
s = np.random.rand(3, 4)
print(sess.run(y, feed_dict={x: s}))  # works: x is fed a (3, 4) array
When doing machine learning tasks, most of the time we don't know the number of rows, but (let's assume) we do know the number of features or columns. In that case, we can use None.
x = tf.placeholder(tf.float32, shape=(None,4))
Now, at run time, we can feed in any matrix with 4 columns and an arbitrary number of rows.
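For example, a minimal sketch (assuming numpy is imported as np and a tf.Session named sess is open, as in the snippet above): the same None-shaped placeholder accepts batches with different numbers of rows, as long as each has 4 columns.

x = tf.placeholder(tf.float32, shape=(None, 4))
y = x + 2
print(sess.run(y, feed_dict={x: np.random.rand(2, 4)}))    # a batch of 2 rows
print(sess.run(y, feed_dict={x: np.random.rand(100, 4)}))  # a batch of 100 rows, same placeholder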
Also, placeholders are used for input data (they are a kind of variable we use to feed our model), whereas Variables are parameters such as weights that we train over time.
Other answers
The difference is that with tf.Variable you have to provide an initial value when you declare it. With tf.placeholder you don't have to provide an initial value; you can specify it at run time with the feed_dict argument of Session.run.
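A minimal sketch of that difference (assuming TensorFlow 1.x imported as tf):

w = tf.Variable(3.0)            # initial value is mandatory at declaration time
p = tf.placeholder(tf.float32)  # no initial value; supplied at run time
out = w * p
sess = tf.Session()
sess.run(tf.global_variables_initializer())  # variables must be explicitly initialized
print(sess.run(out, feed_dict={p: 2.0}))     # the placeholder's value is given here -> 6.0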
In addition to the other answers, this is also explained well in the MNIST tutorial on the TensorFlow website:
We describe these interacting operations by manipulating symbolic variables. Let's create one:
x = tf.placeholder(tf.float32, [None, 784])
x isn't a specific value. It's a placeholder, a value that we'll input when we ask TensorFlow to run a computation. We want to be able to input any number of MNIST images, each flattened into a 784-dimensional vector. We represent this as a 2-D tensor of floating-point numbers, with a shape [None, 784]. (Here None means that a dimension can be of any length.)
We also need the weights and biases for our model. We could imagine treating these like additional inputs, but TensorFlow has an even better way to handle it: Variable. A Variable is a modifiable tensor that lives in TensorFlow's graph of interacting operations. It can be used and even modified by the computation. For machine learning applications, one generally has the model parameters be Variables.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
We create these Variables by giving tf.Variable the initial value of the Variable: in this case, we initialize both W and b as tensors full of zeros. Since we are going to learn W and b, it doesn't matter very much what they initially are.
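A hedged follow-up sketch (not part of the tutorial quote above, and assuming numpy as np plus the x, W and b defined in the quote): W and b have to be initialized in a session before they can be used, whereas x only needs a feed_dict.

y = tf.matmul(x, W) + b                         # logits for a batch of images
sess = tf.Session()
sess.run(tf.global_variables_initializer())     # gives W and b their initial (zero) values
dummy_batch = np.zeros((5, 784), dtype=np.float32)  # stand-in for 5 flattened MNIST images
print(sess.run(y, feed_dict={x: dummy_batch}).shape)  # (5, 10)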
TL;DR

Variables
- for parameters to learn
- values can be derived from training
- initial values are required (often random)

Placeholders
- allocate storage for data (such as image pixel data during a feed)
- initial values are not required (but can be set, see tf.placeholder_with_default; a short sketch follows this list)
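A minimal sketch of tf.placeholder_with_default (TensorFlow 1.x): the default is used when nothing is fed, and a fed value overrides it.

p = tf.placeholder_with_default(tf.constant(1.0), shape=())  # scalar with default 1.0
out = p * 10
sess = tf.Session()
print(sess.run(out))                      # 10.0, the default value is used
print(sess.run(out, feed_dict={p: 3.0}))  # 30.0, the fed value overrides the default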
In short, use tf.Variable for trainable variables such as the weights (W) and biases (B) of your model.
import math  # for math.sqrt below; IMAGE_PIXELS and hidden1_units are defined elsewhere in the tutorial

weights = tf.Variable(
    tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
                        stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))), name='weights')
biases = tf.Variable(tf.zeros([hidden1_units]), name='biases')
tf.placeholder is used to feed in the actual training examples.
images_placeholder = tf.placeholder(tf.float32, shape=(batch_size, IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
This is how you feed in the training examples during training:
for step in xrange(FLAGS.max_steps):  # xrange is Python 2; use range in Python 3
    # images_feed, labels_feed, train_op and loss are defined elsewhere in the tutorial
    feed_dict = {
        images_placeholder: images_feed,
        labels_placeholder: labels_feed,
    }
    _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)
Your tf.Variable objects will then be trained (modified) as a result of this training.
See https://www.tensorflow.org/versions/r0.7/tutorials/mnist/tf/index.html for more details. (The examples are taken from that page.)
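If it helps, here is a self-contained, hedged sketch (not taken from the tutorial) showing both pieces working together on a toy linear regression: the placeholders carry the training data in, and the Variables are what the optimizer actually updates.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None,))       # inputs, fed at each step
y_true = tf.placeholder(tf.float32, shape=(None,))  # targets, fed at each step
w = tf.Variable(0.0)                                 # trainable parameters
b = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(w * x + b - y_true))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

xs = np.random.rand(100).astype(np.float32)
ys = 3.0 * xs + 1.0                                  # data generated from y = 3x + 1
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for _ in range(500):
    sess.run(train_op, feed_dict={x: xs, y_true: ys})
print(sess.run([w, b]))                              # approaches [3.0, 1.0]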
Think of a computation graph. In such a graph, we need input nodes to pass our data into the graph; in TensorFlow, those nodes should be defined as placeholders.
Don't think of this as a general Python program. You could write a Python program that does everything the other answers explain using plain variables, but for a computation graph in TensorFlow, in order to feed your data into the graph, you need to define those nodes as placeholders.
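As a hedged illustration of that point (assuming TensorFlow 1.x imported as tf): in plain Python the computation happens immediately, while in a TensorFlow graph the placeholder is the declared input node and the data is only supplied when the graph is run.

# plain Python: data and computation happen together
def double_plus_one(v):
    return v * 2 + 1
print(double_plus_one(5))  # 11

# TensorFlow graph: the placeholder is the graph's input node
v = tf.placeholder(tf.float32)
node = v * 2 + 1
sess = tf.Session()
print(sess.run(node, feed_dict={v: 5.0}))  # 11.0, data supplied only when the graph runs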