GluonNLP: Your Choice of Deep Learning for NLP
GluonNLP is a toolkit that enables easy text preprocessing, dataset loading, and neural model building to help you speed up your Natural Language Processing (NLP) research.
News
- GluonNLP is featured in:
- AWS re:Invent 2018 in Las Vegas, 2018-11-28! Check out the details.
- KDD 2018 London, 2018-08-21, Apache MXNet Gluon tutorial! Check out https://kdd18.mxnet.io.
Installation
Make sure you have Python 2.7 or Python 3.6 and a recent version of MXNet. You can install MXNet and GluonNLP using pip:

pip install --pre --upgrade mxnet
pip install gluonnlp
Docs 📖
GluonNLP documentation is available at our website.
Community
GluonNLP is a community that believes in sharing.
For questions, comments, and bug reports, GitHub issues are the best way to reach us.
We now have a new Slack channel here (register).
How to Contribute
GluonNLP community welcomes contributions from anyone!
There are lots of opportunities for you to become our contributors:
- Ask or answer questions on GitHub issues.
- Propose ideas, or review proposed design ideas on GitHub issues.
- Improve the documentation.
- Contribute bug reports on GitHub issues.
- Write new scripts to reproduce state-of-the-art results.
- Write new examples to explain key ideas in NLP methods and models.
- Contribute new public datasets (license permitting).
- Most importantly, if you have an idea of how to contribute, then do it!
For a list of open starter tasks, check good first issues.
Also see our contributing guide on simple how-tos, contribution guidelines and more.
Resources
Check out how to use GluonNLP for your own research or projects.
If you are new to Gluon, please check out our 60-minute crash course.
For getting started quickly, refer to notebook runnable examples at Examples.
For advanced examples, check out our Scripts.
For experienced users, check out our API Notes.
Quick Start Guide
Dataset Loading
Load the Wikitext-2 dataset, for example:
>>> import gluonnlp as nlp
>>> train = nlp.data.WikiText2(segment='train')
>>> train[0][0:5]
['=', 'Valkyria', 'Chronicles', 'III', '=']
Vocabulary Construction
Build vocabulary based on the above dataset, for example:
>>> vocab = nlp.Vocab(counter=nlp.data.Counter(train[0]))
>>> vocab
Vocab(size=33280, unk="<unk>", reserved="['<pad>', '<bos>', '<eos>']")
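The mapping that `nlp.Vocab` builds can be sketched in plain Python: reserved tokens (`<unk>`, `<pad>`, `<bos>`, `<eos>`) take the first indices, and corpus tokens follow in descending frequency. This is a simplified sketch of the idea, not the actual GluonNLP implementation:

```python
from collections import Counter

def build_vocab(tokens, reserved=('<unk>', '<pad>', '<bos>', '<eos>')):
    # Simplified sketch: reserved tokens first, then corpus tokens
    # ordered by frequency (not the real GluonNLP Vocab).
    counts = Counter(tokens)
    idx_to_token = list(reserved) + [t for t, _ in counts.most_common()
                                     if t not in reserved]
    token_to_idx = {t: i for i, t in enumerate(idx_to_token)}
    return idx_to_token, token_to_idx

corpus = ['=', 'Valkyria', 'Chronicles', 'III', '=']
idx_to_token, token_to_idx = build_vocab(corpus)
print(token_to_idx['='])  # '=' is the most frequent corpus token, so it
                          # gets the first index after the 4 reserved ones
```

In the real `nlp.Vocab`, unknown tokens map to the index of `<unk>`, which is why it sits at index 0.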
Neural Models Building
From the models package, apply a Standard RNN language model to the above dataset:
>>> model = nlp.model.language_model.StandardRNN('lstm', len(vocab),
... 200, 200, 2, 0.5, True)
>>> model
StandardRNN(
(embedding): HybridSequential(
(0): Embedding(33280 -> 200, float32)
(1): Dropout(p = 0.5, axes=())
)
(encoder): LSTM(200 -> 200, TNC, num_layers=2, dropout=0.5)
(decoder): HybridSequential(
(0): Dense(200 -> 33280, linear)
)
)
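The shapes printed above can be cross-checked with some quick arithmetic. The sketch below assumes a cuDNN-style LSTM layout with two bias vectors per gate set (an assumption about the MXNet backend, not something stated in the GluonNLP docs) and tied embedding/decoder weights, which the final `True` argument requests:

```python
# Rough parameter-count arithmetic for the StandardRNN above:
# vocab 33280, embedding/hidden size 200, 2 LSTM layers, tied weights.
vocab_size, emb, hid, layers = 33280, 200, 200, 2

embedding = vocab_size * emb  # Embedding(33280 -> 200)

# Per LSTM layer: 4 gates, each with input-to-hidden and hidden-to-hidden
# weight matrices plus two bias vectors (cuDNN convention -- an assumption).
per_layer = 4 * (emb * hid + hid * hid + 2 * hid)
lstm = layers * per_layer

# With tied weights the decoder reuses the embedding matrix,
# so only its bias adds parameters.
decoder_bias = vocab_size

total = embedding + lstm + decoder_bias
print(total)  # -> 7332480 under these assumptions
```

Most of the parameter budget sits in the vocabulary-sized embedding matrix, which is exactly why tying it with the decoder weights is a common choice.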
Word Embeddings Loading
For example, load a GloVe word embedding, one of the state-of-the-art English word embeddings:
>>> glove = nlp.embedding.create('glove', source='glove.6B.50d')
# Obtain vectors for 'baby' in the GloVe word embedding
>>> type(glove['baby'])
<class 'mxnet.ndarray.ndarray.NDArray'>
>>> glove['baby'].shape
(50,)
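Embedding vectors like the 50-dimensional GloVe vector above are usually compared with cosine similarity. A minimal pure-Python sketch on toy vectors (the vectors here are made up for illustration, not real GloVe values):

```python
import math

def cosine(u, v):
    # cosine similarity = dot(u, v) / (||u|| * ||v||)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional stand-ins for word vectors (not real GloVe values)
baby = [0.2, 0.8, 0.1]
infant = [0.25, 0.75, 0.05]
car = [0.9, 0.05, 0.4]

# Semantically close words should score higher than unrelated ones
print(cosine(baby, infant) > cosine(baby, car))  # -> True
```

With real GloVe vectors the same function applies unchanged; you would simply call it on the NDArrays converted to lists (or use `mxnet.nd.dot` directly).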
New to Deep Learning or NLP?
For background knowledge of deep learning or NLP, please refer to the open source book Dive into Deep Learning.