word_tokenizers.py file source code

python

Project: paraphrase-id-tensorflow    Author: nelson-liu
import nltk


# Method of the tokenizer class defined in word_tokenizers.py
# (the snippet shows only this method, not its enclosing class).
def tokenize(self, sentence):
    """
    Given a string, tokenize it into words (with the conventional notion
    of word).

    Parameters
    ----------
    sentence: str
        The string to tokenize.

    Returns
    -------
    tokenized_sentence: List[str]
        The tokenized representation of the string, as a list of tokens.
    """
    return nltk.word_tokenize(sentence.lower())
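
Since the method is a thin lower-casing wrapper around nltk.word_tokenize, a quick standalone check of the underlying call shows what the returned token list looks like. This is a minimal sketch; the example sentence and the resource-download step are illustrative additions, not part of the project.

import nltk

# word_tokenize needs the Penn Treebank sentence/word tokenizer models.
# The resource is named "punkt" here; on recent NLTK releases it may be
# distributed as "punkt_tab" instead.
nltk.download("punkt", quiet=True)

sentence = "The Quick brown fox, doesn't jump!"
tokens = nltk.word_tokenize(sentence.lower())
print(tokens)
# ['the', 'quick', 'brown', 'fox', ',', 'does', "n't", 'jump', '!']

Note that lower-casing happens before tokenization, and the Treebank-style tokenizer splits contractions ("doesn't" becomes "does" + "n't") and separates punctuation into its own tokens.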