When I execute the command sess = tf.Session() in a TensorFlow 2.0 environment, I get the following error message:

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'tensorflow' has no attribute 'Session'

System information:

OS platform and distribution: Windows 10
Python version: 3.7.1
TensorFlow version: 2.0.0-alpha0 (installed with pip)

Steps to reproduce:

Installation:

pip install --upgrade pip
pip install tensorflow==2.0.0-alpha0
pip install keras
pip install numpy==1.16.2

Execution:

Run: import tensorflow as tf
Run: sess = tf.Session()


Current answer

For TensorFlow 2.0 and later, try this:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # turn off eager mode so a 1.x-style graph/session can be used

a = tf.constant(5)
b = tf.constant(6)
c = tf.constant(7)
d = tf.multiply(a,b)
e = tf.add(c,d)
f = tf.subtract(a,c)

with tf.compat.v1.Session() as sess:
  outs = sess.run(f)
  print(outs)
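With eager execution disabled, the operations above only build a graph; nothing is computed until sess.run. Here sess.run(f) evaluates 5 - 7 and prints -2.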

Other answers

In TF 2.x, you can do it like this:

import tensorflow as tf
with tf.compat.v1.Session() as sess:
    hello = tf.constant('hello world')
    print(sess.run(hello))

>>> b'hello world'


TF2 runs eager execution by default, which removes the need for sessions. If you want to run a static graph, the more appropriate way in TF2 is to use tf.function() (see the sketch after the comparison below). Session is still accessible in TF2 via tf.compat.v1.Session(), but I discourage using it. Comparing the two hello-world programs may help illustrate the difference:

TF 1.x hello world:

import tensorflow as tf
msg = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(msg))

TF 2.x hello world:

import tensorflow as tf
msg = tf.constant('Hello, TensorFlow!')
tf.print(msg)
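As a rough sketch (not part of the original answer), the same constant can also be wrapped in tf.function, which traces the Python function into a static graph while keeping the TF2-style API:

import tensorflow as tf

@tf.function  # traces this Python function into a static TensorFlow graph
def make_message():
    return tf.constant('Hello, TensorFlow!')

tf.print(make_message())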

For more information, see Effective TensorFlow 2.

Use this:

sess = tf.compat.v1.Session()

If that gives an error, use the following instead:

tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
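A minimal sketch to verify the session works, assuming you only need the 1.x-style API inside TF2:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()   # build a 1.x-style graph instead of running eagerly
sess = tf.compat.v1.Session()
print(sess.run(tf.constant('hello')))    # prints b'hello'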

Running TF 1.x and TF 2.x environments side by side is not as easy as you might think; I ran into several errors and had to review how some variables were used while fixing neural-network code I found on the internet. Converting to TF 2.x is the better idea (it is easier to adapt).

TF 2.x:

while not done:
    next_obs, reward, done, info = env.step(action)
    env.render()
    # convert the observation frame returned by env.step into a PIL image
    img = tf.keras.preprocessing.image.array_to_img(
        next_obs,
        data_format=None,
        scale=True
    )
    img_array = tf.keras.preprocessing.image.img_to_array(img)
    predictions = model_self_1.predict(img_array)  ### Prediction

### Training: history_highscores = model_highscores.fit(batched_features, epochs=1, validation_data=(dataset.shuffle(10)))  # epochs=500  # , callbacks=[cp_callback, tb_callback]

TF 1.x:

with tf.compat.v1.Session() as sess:
    saver = tf.compat.v1.train.Saver()
    saver.restore(sess, tf.train.latest_checkpoint(savedir + '\\invader_001'))
    train_loss, _ = sess.run([loss, training_op], feed_dict={X: o_obs, y: y_batch, X_action: o_act})

    for layer in mainQ_outputs:
        model.add(layer)
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(6, activation=tf.nn.softmax))
    predictions = model.predict(obs)  ### Prediction

### Training: summ = sess.run(summaries, feed_dict={X: o_obs, y: y_batch, X_action: o_act})