What is the difference between an epoch and an iteration when training a multi-layer perceptron?


Current answer

You have training data, which you shuffle and pick mini-batches from. When you adjust your weights and biases using one mini-batch, you have completed one iteration.

Once you run out of mini-batches, you have completed an epoch. Then you shuffle your training data again, pick your mini-batches again, and iterate through all of them again. That would be your second epoch.
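As a concrete illustration of that cycle, here is a minimal sketch in plain NumPy; the linear model, learning rate, and data sizes are invented for this example:

```python
# Minimal shuffle -> mini-batch -> epoch cycle on a toy linear model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))            # 1,000 training examples
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=1000)

w = np.zeros(3)
batch_size, lr, n_epochs = 100, 0.1, 5

for epoch in range(n_epochs):             # one epoch = one pass over the data
    order = rng.permutation(len(X))       # reshuffle at the start of each epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(Xb)
        w -= lr * grad                    # one weight update = one iteration
    print(f"epoch {epoch + 1}: w = {np.round(w, 3)}")
```

Every pass through the inner loop body is one iteration; every pass through the outer loop is one epoch.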

Other answers

An epoch is 1 complete cycle in which the neural network has seen all the data. Say one has 100,000 images to train the model; however, memory may not be sufficient to process all the images at once, so we split the training data into smaller chunks called batches, e.g., a batch size of 100. We need to cover all the images using multiple batches, so we will need 1,000 iterations to cover all 100,000 images (100 batch size × 1,000 iterations). Once the neural network has looked at the entire data set, that is called 1 epoch. Multiple epochs may be needed to train the model (let us say 10 epochs).
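The same bookkeeping in a few lines of Python (the use of math.ceil is an assumption to cover a smaller final batch, which the numbers above don't need since 100,000 divides evenly by 100):

```python
import math

n_examples, batch_size, n_epochs = 100_000, 100, 10

iters_per_epoch = math.ceil(n_examples / batch_size)  # 1,000 iterations per epoch
total_updates = iters_per_epoch * n_epochs            # 10,000 weight updates in all

print(iters_per_epoch, total_updates)  # -> 1000 10000
```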

I think, in the context of neural network terminology:

Epoch: When your network goes over the entire training set (i.e., sees each training instance once), it completes one epoch.

In order to define iteration (a.k.a. steps), you first need to know the batch size:

Batch Size: You probably wouldn't want to process all the training instances in one forward pass, as that is inefficient and needs a huge amount of memory. So what is commonly done is splitting the training instances into subsets (i.e., batches), performing one pass over the selected subset (i.e., batch), and then optimizing the network through backpropagation. The number of training instances within a subset (i.e., batch) is called batch_size.

Iteration (a.k.a. training steps): You know that your network has to go over all the training instances in order to complete one epoch. But wait! When you split your training instances into batches, you can only process one batch (a subset of the training instances) in one forward pass, so what about the other batches? This is where the term iteration comes into play. Definition: the number of forward passes (i.e., the number of batches you have created) that your network has to do in order to complete one epoch (i.e., go over all the training instances) is called an iteration.

For example, if you have 10,000 training instances and you want to do batching with a size of 10, you have to perform 10,000 / 10 = 1,000 iterations to complete 1 epoch.
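A short sketch of that relationship, using a hypothetical make_batches helper to split the training instances; the number of batches it produces is the number of iterations in one epoch:

```python
def make_batches(instances, batch_size):
    # yield consecutive slices of batch_size instances each
    for start in range(0, len(instances), batch_size):
        yield instances[start:start + batch_size]

training_instances = list(range(10_000))  # stand-in for real training data
batches = list(make_batches(training_instances, batch_size=10))

print(len(batches))  # -> 1000, i.e. 1,000 iterations for 1 epoch
```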

Hope this answers your question!

Many neural network training algorithms involve presenting the entire dataset to the neural network multiple times. Often, a single presentation of the entire dataset is referred to as an "epoch". In contrast, some algorithms present data to the neural network one case at a time.

"Iteration" is a much more general term, but since you asked about it together with "epoch", I assume your source is referring to the presentation of a single case to the neural network.

Typically, you split your training set into small batches, let the network learn from them, and make the training go step by step through your layers, applying gradient descent all the way down. All these small steps can be called iterations.

An epoch corresponds to the entire training set going through the entire network once. It can be useful to limit this, e.g., to fight overfitting.

In neural network terminology:

one epoch = one forward pass and one backward pass over all the training examples

batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you will need.

number of iterations = number of passes, each pass using [batch size] examples. To be clear, one pass = one forward pass + one backward pass (we do not count the forward pass and the backward pass as two different passes).

Example: if you have 1,000 training examples and your batch size is 500, then it will take 2 iterations to complete 1 epoch.
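For those using PyTorch, the same counting falls out of DataLoader (a sketch, assuming torch is installed; the random tensors are placeholders for real data):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 1,000 random examples stand in for real training data
dataset = TensorDataset(torch.randn(1000, 8), torch.randn(1000, 1))
loader = DataLoader(dataset, batch_size=500, shuffle=True)

print(len(loader))            # -> 2 iterations per epoch
for epoch in range(3):        # 3 epochs = 3 full passes over the data
    for xb, yb in loader:     # each loop body is one iteration
        pass                  # forward pass + backward pass + update go here
```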

For reference: Tradeoff batch size vs. number of iterations to train a neural network


The term "batch" is ambiguous: some people use it to denote the entire training set, and some people use it to refer to the number of training examples in one forward/backward pass (as I did in this answer). To avoid that ambiguity and make clear that batch corresponds to the number of training examples in one forward/backward pass, one can use the term mini-batch.