LASER embeddings
Out-of-the-box multilingual sentence embeddings.
laserembeddings is a pip-packaged, production-ready port of Facebook Research's LASER (Language-Agnostic SEntence Representations) to compute multilingual sentence embeddings.
- 🐛 The encoder was fixed to remove an innocuous warning message that would sometimes appear when using PyTorch 1.4
- 😕 The Japanese extra is now disabled on Windows (sorry) to prevent installation issues and computation failures in other languages
Context
LASER is a collection of scripts and models created by Facebook Research to compute multilingual sentence embeddings for zero-shot cross-lingual transfer.
What does it mean? LASER is able to transform sentences into language-independent vectors. Similar sentences get mapped to close vectors (in terms of cosine distance), regardless of the input language.
That is great, especially if you don't have training sets for the language(s) you want to process: you can build a classifier on top of LASER embeddings, train it on whatever language(s) you have in your training data, and let it classify texts in any language.
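To make the zero-shot idea concrete, here is a minimal sketch of a nearest-centroid classifier built on top of sentence embeddings. The vectors are random stand-ins (in practice they would come from `laser.embed_sentences`), so the example runs without the models downloaded; all names and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    # normalize to unit length so dot products are cosine similarities
    return v / np.linalg.norm(v)

# Hypothetical stand-ins for LASER embeddings (1024-dim) of English
# training sentences from two classes, e.g. positive/negative reviews.
pos_center = unit(rng.normal(size=1024))
neg_center = unit(rng.normal(size=1024))
train_pos = np.stack([unit(pos_center + 0.1 * rng.normal(size=1024)) for _ in range(5)])
train_neg = np.stack([unit(neg_center + 0.1 * rng.normal(size=1024)) for _ in range(5)])

# Nearest-centroid classifier: one (normalized) centroid per class.
centroids = np.stack([train_pos.mean(axis=0), train_neg.mean(axis=0)])
centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

# A test sentence in another language with the same meaning would land
# near the same region of the embedding space; simulate it the same way.
test_vec = unit(pos_center + 0.1 * rng.normal(size=1024))

# Cosine similarity against each centroid; highest wins.
pred = int(np.argmax(centroids @ test_vec))  # 0 = positive, 1 = negative
```

Because the embedding space is shared across languages, the classifier never needs to see training data in the test language.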
The aim of the package is to make LASER as easy to use and easy to deploy as possible: zero-config, production-ready, and just two lines to install.
Getting started
You'll need Python 3.6 or higher.
Installation
```shell
pip install laserembeddings
```
To install laserembeddings with extra dependencies:
```shell
# if you need Chinese support:
pip install laserembeddings[zh]

# if you need Japanese support (not available on Windows):
pip install laserembeddings[ja]

# or both:
pip install laserembeddings[zh,ja]
```
Downloading the pre-trained models
```shell
python -m laserembeddings download-models
```

This will download the models to the default `data` directory next to the source code of the package. Use `python -m laserembeddings download-models path/to/model/directory` to download the models to a specific location.
Usage
```python
from laserembeddings import Laser

laser = Laser()

# if all sentences are in the same language:
embeddings = laser.embed_sentences(
    ['let your neural network be polyglot',
     'use multilingual embeddings!'],
    lang='en')  # lang is only used for tokenization

# embeddings is an N*1024 (N = number of sentences) NumPy array
```
If the sentences are not in the same language, you can pass a list of language codes:
```python
embeddings = laser.embed_sentences(
    ['I love pasta.',
     "J'adore les pâtes.",
     'Ich liebe Pasta.'],
    lang=['en', 'fr', 'de'])
```
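Since the three sentences above mean the same thing, their embeddings should be close in cosine distance. A quick way to check is to row-normalize the returned array and take a single matrix product. The sketch below uses a random `(3, 1024)` array as a stand-in for the real output, so it runs without the models.

```python
import numpy as np

# Hypothetical stand-in for laser.embed_sentences(...): with the real
# models, `embeddings` is an (N, 1024) float array.
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(3, 1024)).astype(np.float32)

# Row-normalize, then one matmul gives all pairwise cosine similarities.
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
normalized = embeddings / norms
similarity = normalized @ normalized.T  # (3, 3) matrix, diagonal is 1.0

print(similarity.shape)  # (3, 3)
```

With actual LASER embeddings, the off-diagonal entries for mutual translations are typically much higher than for unrelated sentences.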
If you downloaded the models into a specific directory:
```python
from laserembeddings import Laser

path_to_bpe_codes = ...
path_to_bpe_vocab = ...
path_to_encoder = ...

laser = Laser(path_to_bpe_codes, path_to_bpe_vocab, path_to_encoder)

# you can also supply file objects instead of file paths
```
If you want to pull the models from S3:
```python
from io import BytesIO, StringIO
from laserembeddings import Laser
import boto3

s3 = boto3.resource('s3')
MODELS_BUCKET = ...

f_bpe_codes = StringIO(s3.Object(MODELS_BUCKET, 'path_to_bpe_codes.fcodes').get()['Body'].read().decode('utf-8'))
f_bpe_vocab = StringIO(s3.Object(MODELS_BUCKET, 'path_to_bpe_vocabulary.fvocab').get()['Body'].read().decode('utf-8'))
f_encoder = BytesIO(s3.Object(MODELS_BUCKET, 'path_to_encoder.pt').get()['Body'].read())

laser = Laser(f_bpe_codes, f_bpe_vocab, f_encoder)
```
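Note that the `BytesIO`/`StringIO` approach holds each model file entirely in memory. If that is a concern (the encoder is the largest file), one option is to stream the remote body to a temporary file and pass the path instead. The sketch below is hypothetical and uses an in-memory buffer in place of the real S3 body, so it runs standalone:

```python
import io
import shutil
import tempfile

# Hypothetical stand-in for s3.Object(...).get()['Body'] from the
# boto3 example above; any file-like object works the same way.
remote_body = io.BytesIO(b"fake encoder bytes")

# Stream to a temporary file in chunks (bounded memory usage).
with tempfile.NamedTemporaryFile(suffix=".pt", delete=False) as f:
    shutil.copyfileobj(remote_body, f)
    encoder_path = f.name

# The resulting path can then be handed to Laser, e.g.:
# laser = Laser(path_to_bpe_codes, path_to_bpe_vocab, encoder_path)
```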
What are the differences with the original implementation?
Some dependencies of the original project have been replaced with pure-python dependencies, to make this package easy to install and deploy.
Here's a summary of the differences:
| Part of the pipeline | LASER dependency (original project) | laserembeddings dependency (this package) | Reason |
|---|---|---|---|
| Normalization / tokenization | Moses | Sacremoses | Moses is implemented in Perl |
| BPE encoding | fastBPE | subword-nmt | fastBPE cannot be installed via pip and requires compiling C++ code |
| Japanese segmentation (optional) | MeCab / JapaneseTokenizer | mecab-python3 | mecab-python3 comes with wheels for major platforms (no compilation needed) |
Will I get the exact same embeddings?
For most languages, in most cases, yes. Some slight (and, for some languages, not so slight) differences may appear, owing to the replaced dependencies listed above.
An exhaustive comparison of the embeddings generated with LASER and laserembeddings is automatically generated and will be updated for each new release.
FAQ
How can I train the encoder?
You can't. LASER models are pre-trained and do not need to be fine-tuned. The embeddings are generic and perform well without fine-tuning. See https://github.com/facebookresearch/LASER/issues/3#issuecomment-404175463.
Credits
Thanks a lot to the creators of LASER for open-sourcing the code and releasing the pre-trained models. All the kudos should go to them!
A big thanks to the creators of Sacremoses and Subword Neural Machine Translation for their great packages.
Testing
The first thing you'll need is Poetry. Please refer to the installation guidelines.
Clone this repository and install the project:
```shell
poetry install
```
To run the tests:
```shell
poetry run pytest
```
Testing the similarity between the embeddings computed with LASER and laserembeddings
First, install the project with the extra dependencies (Chinese and Japanese support):
```shell
poetry install -E zh -E ja
```
Then, download the test data:
```shell
poetry run python -m laserembeddings download-test-data
```
Then, run the tests with the `SIMILARITY_TEST` environment variable set to `1`:

```shell
SIMILARITY_TEST=1 poetry run pytest tests/test_laser.py
```
Now, have a coffee ☕ — the comparison takes a while. The similarity report will be generated at `tests/report/comparison-with-LASER.md`.