tokenizer.py source code

python

Project: kindlearadict · Author: runehol
import nltk


def tokenize(data):
    """Yield (word, sentence, blockname, textname) for each token in data."""
    sent_tokenize = nltk.tokenize.sent_tokenize

    # Separator pattern: whitespace, punctuation, digits, and assorted symbols.
    # gaps=True makes the tokenizer treat matches as the gaps between tokens
    # rather than as the tokens themselves.
    # Note: several non-ASCII characters in this class were mangled during
    # extraction of this page ("??", "?*", and what was presumably ×«»). Since
    # the project targets Arabic text, they are restored below as the Arabic
    # comma (،), question mark (؟), and semicolon (؛); this is an assumption,
    # not the verified original. The hyphen is also escaped so that ",-?" is
    # no longer an accidental character range.
    tokenizer = nltk.tokenize.RegexpTokenizer(u"[\s\.,\-?!'\"،؟\d·•—()×«»%\[\]|؛*]+", gaps=True)
    word_tokenize = tokenizer.tokenize

    for text, blockname, textname in data:
        sentences = sent_tokenize(text.strip())
        for sentence in sentences:
            words = word_tokenize(sentence)
            for word in words:
                # Drop single-character leftovers (stray letters, symbols).
                if len(word) > 1:
                    yield (word, sentence, blockname, textname)
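
For context, a minimal usage sketch is shown below. It assumes the NLTK punkt sentence model has been downloaded; the sample input tuples and their block/text names are invented for illustration.

import nltk

nltk.download("punkt")  # one-time: fetch the sentence-splitting model

# Hypothetical sample input: an iterable of (text, blockname, textname) tuples.
data = [
    ("This is a test. It has two sentences!", "block-1", "sample-text"),
]

for word, sentence, blockname, textname in tokenize(data):
    print(word, blockname, textname)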