as3:~/ngokevin-site# nano content/blog/20140114_test-chinese.mkd
as3:~/ngokevin-site# wok
Traceback (most recent call last):
  File "/usr/local/bin/wok", line 4, in <module>
    Engine()
  File "/usr/local/lib/python2.7/site-packages/wok/engine.py", line 104, in __init__
    self.load_pages()
  File "/usr/local/lib/python2.7/site-packages/wok/engine.py", line 238, in load_pages
    p = Page.from_file(os.path.join(root, f), self.options, self, renderer)
  File "/usr/local/lib/python2.7/site-packages/wok/page.py", line 111, in from_file
    page.meta['content'] = page.renderer.render(page.original)
  File "/usr/local/lib/python2.7/site-packages/wok/renderers.py", line 46, in render
    return markdown(plain, Markdown.plugins)
  File "/usr/local/lib/python2.7/site-packages/markdown/__init__.py", line 419, in markdown
    return md.convert(text)
  File "/usr/local/lib/python2.7/site-packages/markdown/__init__.py", line 281, in convert
    source = unicode(source)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe8 in position 1: ordinal not in range(128). -- Note: Markdown only accepts unicode input!

How can I fix this?

In some other Python-based static blog apps, posts in Chinese publish successfully, for example this app: http://github.com/vrypan/bucket3. On my site, http://bc3.brite.biz/, Chinese posts publish fine.


Current answer

I was searching to solve the following error message:

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 5454: ordinal not in range(128)

I eventually fixed it by specifying the encoding:

f = open('../glove/glove.6B.100d.txt', encoding="utf-8")

Hope it helps you too.
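This works because on Python 3, `open()` decodes with the codec you pass instead of the locale default. A minimal self-contained sketch, using a hypothetical throwaway file in place of the glove file above:

```python
import os
import tempfile

# Write a UTF-8 file containing non-ASCII text (hypothetical sample,
# standing in for the glove file above).
path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("café 中文\n")

# An explicit encoding makes the read deterministic on every platform;
# without it, Python 3 falls back to the locale's preferred encoding.
with open(path, encoding="utf-8") as f:
    text = f.read()

print(text.strip())  # café 中文
```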

Other answers

In some cases, when you check your default encoding (print sys.getdefaultencoding()), it returns ASCII. If you switch to UTF-8 it doesn't work, depending on the content of your variable. I found another way:

import sys
reload(sys)  # Python 2 only: re-exposes setdefaultencoding, which is deleted at startup
sys.setdefaultencoding('Cp1252')
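Note that the snippet above is Python-2-only: setdefaultencoding() exists only after reload(sys) on Python 2. On Python 3 the default is fixed, which you can verify:

```python
import sys

# Python 3 has no sys.setdefaultencoding(); the default text encoding
# is permanently 'utf-8', so the reload(sys) trick does not apply there.
print(sys.getdefaultencoding())  # utf-8
```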

This did the trick for me:

    file = open('docs/my_messy_doc.pdf', 'rb')
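Opening in binary mode sidesteps decoding entirely, which is the right choice for binary formats such as PDF. A quick sketch with a hypothetical throwaway file in place of the PDF above:

```python
import os
import tempfile

# 'rb' yields raw bytes and never decodes them, so a UnicodeDecodeError
# cannot occur. Hypothetical throwaway file with PDF-like header bytes.
path = os.path.join(tempfile.mkdtemp(), "blob.bin")
with open(path, "wb") as f:
    f.write(b"%PDF-1.4\x00\xe8\xff")

with open(path, "rb") as f:
    data = f.read()

print(type(data).__name__)  # bytes
```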

I had the same error with URLs containing non-ASCII characters (bytes with values > 128). My solution:

url = url.decode('utf8').encode('utf-8')

Note: utf-8 and utf8 are just aliases. Using either 'utf8' or 'utf-8' should work the same way.
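You can confirm the aliasing with the codecs module, which resolves every spelling to the same registered codec:

```python
import codecs

# All three spellings resolve to the same codec name.
names = {codecs.lookup(alias).name for alias in ("utf8", "utf-8", "UTF-8")}
print(names)  # {'utf-8'}
```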

In my case, on Python 2.7, this worked for me. I suppose the assignment changed "something" in the str internal representation; that is, it forced the right decoding of the byte sequence backing url and finally put the string into a utf-8 str, with all the magic in the right place. Unicode in Python is black magic for me. Hope it's useful.
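In Python 3 terms, where bytes and str are separate types, the round trip above looks like this (hypothetical sample bytes):

```python
# The Python 2 expression url.decode('utf8').encode('utf-8'), restated
# with Python 3's explicit types: bytes -> str is decode, str -> bytes
# is encode, and the round trip returns the original bytes unchanged.
raw = b"caf\xc3\xa9"         # UTF-8 bytes; 0xc3 is a value > 128
text = raw.decode("utf8")    # 'café', a proper (unicode) string
back = text.encode("utf-8")  # the same bytes again

print(text)         # café
print(back == raw)  # True
```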

I got the same error and this solved it, thanks! Python 2 and Python 3 differ in unicode handling, which makes pickled files quite incompatible to load across versions, so use Python pickle's encoding argument. The link below helped me solve a similar problem when I was trying to open pickled data from Python 3.7 while my file was originally saved in Python 2.x: https://blog.modest-destiny.com/posts/python-2-and-3-compatible-pickle-save-and-load/ I copied the load_pickle function into my script and called load_pickle(pickle_file) while loading my input_data like this:

input_data = load_pickle("my_dataset.pkl")

The load_pickle function is here:

import pickle

def load_pickle(pickle_file):
    try:
        with open(pickle_file, 'rb') as f:
            pickle_data = pickle.load(f)
    except UnicodeDecodeError as e:
        # Python-2 pickles store raw byte strings; latin1 maps every
        # byte value 1:1, so decoding with it never fails.
        with open(pickle_file, 'rb') as f:
            pickle_data = pickle.load(f, encoding='latin1')
    except Exception as e:
        print('Unable to load data ', pickle_file, ':', e)
        raise
    return pickle_data
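A self-contained round trip showing the fallback's key ingredient: passing encoding='latin1' only affects how Python-2 str objects are decoded, so it is a no-op for ordinary pickles and lossless when it does kick in. Hypothetical throwaway file:

```python
import os
import pickle
import tempfile

# Hypothetical throwaway pickle, written with protocol 2 (the highest
# protocol Python 2 understands).
path = os.path.join(tempfile.mkdtemp(), "demo.pkl")
with open(path, "wb") as f:
    pickle.dump({"weights": [0.1, 0.2]}, f, protocol=2)

# encoding='latin1' is ignored for Python-3 pickles and decodes
# Python-2 byte strings losslessly when loading legacy files.
with open(path, "rb") as f:
    data = pickle.load(f, encoding="latin1")

print(data)  # {'weights': [0.1, 0.2]}
```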

This is my solution, just add the encoding:

    with open(file, encoding='utf8') as f

Because reading the glove file takes a long time, I suggest converting the glove file to a numpy file; it will save you time whenever you read the embedding weights.

import numpy as np
from tqdm import tqdm


def load_glove(file):
    """Loads GloVe vectors in numpy array.
    Args:
        file (str): a path to a glove file.
    Return:
        dict: a dict of numpy arrays.
    """
    embeddings_index = {}
    with open(file, encoding='utf8') as f:
        for i, line in tqdm(enumerate(f)):
            values = line.split()
            word = ''.join(values[:-300])
            coefs = np.asarray(values[-300:], dtype='float32')
            embeddings_index[word] = coefs

    return embeddings_index

# EMBEDDING_PATH = '../embedding_weights/glove.840B.300d.txt'
EMBEDDING_PATH = 'glove.840B.300d.txt'
embeddings = load_glove(EMBEDDING_PATH)

np.save('glove_embeddings.npy', embeddings) 
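One caveat worth noting: np.save() stores a dict wrapped in a 0-d object array, so loading it back requires allow_pickle=True plus .item(). A small sketch with a hypothetical one-entry dict in place of the full GloVe vocabulary:

```python
import numpy as np

# Hypothetical one-entry embedding dict standing in for the full file.
embeddings = {"hello": np.zeros(3, dtype="float32")}
np.save("demo_embeddings.npy", embeddings)

# np.save wraps the dict in a 0-d object array, so loading it back
# needs allow_pickle=True and .item() to recover the dict itself.
loaded = np.load("demo_embeddings.npy", allow_pickle=True).item()
print(sorted(loaded))  # ['hello']
```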

Gist link: https://gist.github.com/BrambleXu/634a844cdd3cd04bb2e3ba3c83aef227