Ratsgo ELMo Tutorial

1 Overview

A hands-on walkthrough of pretraining Korean ELMo embeddings with ratsgo's embedding repo, run in a CPU-only Docker container.

2 Environment and data preparation

2.1 Run Docker, git checkout

  • The Docker image is ratsgo/embedding-cpu:1.4,
  • and the source code is pinned to tag v1.0.1 of the git repo ratsgo/embedding for this test.
C:\Users\jmnote>docker run -it --rm --hostname=ratsgo ratsgo/embedding-cpu:1.4 bash
root@ratsgo:/notebooks/embedding# git pull
remote: Enumerating objects: 197, done.
remote: Counting objects: 100% (197/197), done.
remote: Compressing objects: 100% (44/44), done.
remote: Total 702 (delta 172), reused 174 (delta 153), pack-reused 505
Receiving objects: 100% (702/702), 173.51 KiB | 0 bytes/s, done.
Resolving deltas: 100% (495/495), completed with 22 local objects.
... (omitted)
 create mode 100644 models/xlnet/prepro_utils.py
 create mode 100644 models/xlnet/train_gpu.py
 create mode 100644 models/xlnet/xlnet.py
root@ratsgo:/notebooks/embedding# git checkout tags/v1.0.1
Note: checking out 'tags/v1.0.1'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at ead260a... [python] #14 improve tutorial page

2.2 Download tokenized data

root@ratsgo:/notebooks/embedding# bash preprocess.sh dump-tokenized
download tokenized data...
... (omitted)
2020-05-29 12:57:58 (649 KB/s) - ‘/notebooks/embedding/data/tokenized.zip’ saved [872719683]

Archive:  tokenized.zip
   creating: tokenized/
  inflating: tokenized/korquad_mecab.txt
  inflating: tokenized/wiki_ko_mecab.txt
  inflating: tokenized/corpus_mecab_jamo.txt
  inflating: tokenized/ratings_okt.txt
  inflating: tokenized/ratings_khaiii.txt
  inflating: tokenized/ratings_hannanum.txt
  inflating: tokenized/ratings_soynlp.txt
  inflating: tokenized/ratings_mecab.txt
  inflating: tokenized/ratings_komoran.txt

3 ELMo walkthrough

3.1 Prepare training data

root@ratsgo:/notebooks/embedding# mkdir -p data/sentence-embeddings/elmo/pretrain-ckpt/traindata
root@ratsgo:/notebooks/embedding# cat data/tokenized/wiki_ko_mecab.txt data/tokenized/ratings_mecab.txt data/tokenized/korquad_mecab.txt > data/tokenized/corpus_mecab.txt
root@ratsgo:/notebooks/embedding# ll data/tokenized/corpus_mecab.txt
-rw-r--r-- 1 root root 1153576828 Jun 30 04:39 data/tokenized/corpus_mecab.txt
root@ratsgo:/notebooks/embedding# split -l 100000 data/tokenized/corpus_mecab.txt data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_
root@ratsgo:/notebooks/embedding# ll data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_*
-rw-r--r-- 1 root root 502680069 Jun 30 04:40 data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_aa
-rw-r--r-- 1 root root 319143457 Jun 30 04:40 data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_ab
-rw-r--r-- 1 root root 265726906 Jun 30 04:40 data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_ac
-rw-r--r-- 1 root root  33162480 Jun 30 04:40 data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_ad
-rw-r--r-- 1 root root   9972751 Jun 30 04:40 data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_ae
-rw-r--r-- 1 root root  22891165 Jun 30 04:40 data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_af
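
(Sanity check: the six shards sum to 502,680,069 + 319,143,457 + 265,726,906 + 33,162,480 + 9,972,751 + 22,891,165 = 1,153,576,828 bytes, exactly the size of corpus_mecab.txt above, so the split lost nothing.)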

3.2 Build the vocabulary

root@ratsgo:/notebooks/embedding# python models/sent_utils.py --method construct_elmo_vocab --input_path data/tokenized/corpus_mecab.txt --output_path data/sentence-embeddings/elmo/pretrain-ckpt/elmo-vocab.txt
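
For intuition, here is a minimal sketch of such a vocabulary builder. It assumes bilm's plain-text vocab format (the special tokens <S>, </S>, <UNK> first, then tokens in descending frequency order); the details of ratsgo's actual construct_elmo_vocab may differ.

from collections import Counter

def build_elmo_vocab(input_path, output_path):
    # Count whitespace-separated tokens over the tokenized corpus.
    counter = Counter()
    with open(input_path, encoding='utf-8') as f:
        for line in f:
            counter.update(line.strip().split())
    # bilm expects the special tokens first, then tokens by frequency.
    with open(output_path, 'w', encoding='utf-8') as f:
        for special in ('</S>', '<S>', '<UNK>'):
            f.write(special + '\n')
        for token, _ in counter.most_common():
            f.write(token + '\n')

build_elmo_vocab('data/tokenized/corpus_mecab.txt',
                 'data/sentence-embeddings/elmo/pretrain-ckpt/elmo-vocab.txt')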

3.3 Inspect hyperparameters

root@ratsgo:/notebooks/embedding# cat models/train_elmo.py | sed -n '/options = {/,/}$/p'
    options = {
        'bidirectional': True,

        'char_cnn': {'activation': 'relu',
                     'embedding': {'dim': 16},
                     'filters': [[1, 32],
                                 [2, 32],
                                 [3, 64],
                                 [4, 128],
                                 [5, 256],
                                 [6, 512],
                                 [7, 1024]],
                     'max_characters_per_token': 30,
                     'n_characters': 261,
                     'n_highway': 2},

        'dropout': 0.1,

        'lstm': {
            'cell_clip': 3,
            'dim': 1024,
            'n_layers': 2,
            'proj_clip': 3,
            'projection_dim': 128,
            'use_skip_connections': True},

        'all_clip_norm_val': 10.0,

        'n_epochs': 10,
        'n_train_tokens': n_train_tokens,
        'batch_size': batch_size,
        'n_tokens_vocab': vocab.size,
        'unroll_steps': 20,
        'n_negative_samples_batch': 8192,
    }
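
Reading the char_cnn block: the character CNN concatenates every filter's output, so its width is the sum of the channel counts; the two highway layers keep that width, and CNN_proj then maps it down to projection_dim before the biLSTM. A quick check (a sketch, not repo code):

filters = [[1, 32], [2, 32], [3, 64], [4, 128], [5, 256], [6, 512], [7, 1024]]
cnn_dim = sum(n for _, n in filters)
print(cnn_dim)  # 2048: width of the highway layers, projected to 128 by CNN_proj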

3.4 ELMo pretraining (aborted)

root@ratsgo:/notebooks/embedding# python models/train_elmo.py --train_prefix 'data/sentence-embeddings/elmo/pretrain-ckpt/traindata/*' --vocab_file data/sentence-embeddings/elmo/pretrain-ckpt/elmo-vocab.txt --save_dir data/sentence-embeddings/elmo/pretrain-ckpt
Found 6 shards at data/sentence-embeddings/elmo/pretrain-ckpt/traindata/*
Loading data from: data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_aa
Loaded 100000 sentences.
Finished loading
Found 6 shards at data/sentence-embeddings/elmo/pretrain-ckpt/traindata/*
Loading data from: data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_ac
runtime/cgo: pthread_create failed: Resource temporarily unavailable
Aborted
  • The run dies with pthread_create failed: Resource temporarily unavailable, i.e. the container apparently hits its thread/process limit while loading the full corpus. Raise the container's resource limits, or train on a much smaller corpus as in the shortened walkthrough of section 4.

3.5 Save the ELMo model

  • Note that dumping weights requires a saved checkpoint; after the aborted run above there may be nothing to dump.

root@ratsgo:/notebooks/embedding# python models/sent_utils.py --method dump_elmo_weights --input_path data/sentence-embeddings/elmo/pretrain-ckpt --output_path data/sentence-embeddings/elmo/pretrain-ckpt/elmo.model

4 Shortened ELMo walkthrough

4.1 Prepare training data

root@ratsgo:/notebooks/embedding# mkdir -p data/sentence-embeddings/elmo/pretrain-ckpt/traindata
root@ratsgo:/notebooks/embedding# split -l 50000 data/tokenized/ratings_mecab.txt data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_
root@ratsgo:/notebooks/embedding# ll data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_*
-rw-r--r-- 1 root root 4828260 Jun 30 09:10 data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_aa
-rw-r--r-- 1 root root 4839287 Jun 30 09:10 data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_ab
-rw-r--r-- 1 root root 5036533 Jun 30 09:10 data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_ac
-rw-r--r-- 1 root root 4982140 Jun 30 09:10 data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_ad

4.2 Build the vocabulary

root@ratsgo:/notebooks/embedding# python models/sent_utils.py --method construct_elmo_vocab --input_path data/tokenized/ratings_mecab.txt --output_path data/sentence-embeddings/elmo/pretrain-ckpt/elmo-vocab.txt

4.3 Set ELMo hyperparameters

cat <<'EOF' > models/train_elmo.py
import argparse
from bilm.training import train, load_options_latest_checkpoint, load_vocab
from bilm.data import BidirectionalLMDataset

def main(args):
    vocab = load_vocab(args.vocab_file, 30)  # 30 = max_characters_per_token
    batch_size = 128  # batch size for each GPU
    n_gpus = args.n_gpus
    n_train_tokens = 8000  # tiny token budget so the demo finishes quickly on CPU
    options = {
        'bidirectional': True,
        'char_cnn': {'activation': 'relu',
                     'embedding': {'dim': 16},
                     # filter list truncated relative to the full config,
                     # which continues up to [7, 1024]
                     'filters': [[1, 32],
                                 [2, 32],
                                 [3, 64],
                                 [4, 128]],
                     'max_characters_per_token': 30,
                     'n_characters': 261,
                     'n_highway': 2},
        'dropout': 0.1,
        'lstm': {
            'cell_clip': 3,
            'dim': 128,            # full config: 1024
            'n_layers': 2,
            'proj_clip': 3,
            'projection_dim': 32,  # full config: 128
            'use_skip_connections': True},
        'all_clip_norm_val': 10.0,
        'n_epochs': 3,             # full config: 10
        'n_train_tokens': n_train_tokens,
        'batch_size': batch_size,
        'n_tokens_vocab': vocab.size,
        'unroll_steps': 20,
        'n_negative_samples_batch': 4096,  # full config: 8192
    }
    prefix = args.train_prefix
    data = BidirectionalLMDataset(prefix, vocab, test=False,
                                  shuffle_on_load=True)
    tf_save_dir = args.save_dir
    tf_log_dir = args.save_dir
    train(options, data, int(n_gpus), tf_save_dir, tf_log_dir)

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--save_dir', help='Location of checkpoint files')
    parser.add_argument('--vocab_file', help='Vocabulary file')
    parser.add_argument('--train_prefix', help='Prefix for train files')
    parser.add_argument('--n_gpus', help='Number of GPUs')

    args = parser.parse_args()
    main(args)
EOF
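
Compared with section 3.3, this config truncates the CNN filters at [4, 128] (width 256 instead of 2048), shrinks the LSTM to dim 128 / projection_dim 32, trains 3 epochs on an 8,000-token budget, and halves the negative samples, so it can finish on CPU. A small sketch (not repo code) predicting a few variable shapes to compare against the log in the next section:

filters = [[1, 32], [2, 32], [3, 64], [4, 128]]
cnn_dim = sum(n for _, n in filters)  # 256
proj = 32                             # projection_dim
print((cnn_dim, cnn_dim))      # highway W_carry / W_transform: (256, 256)
print((cnn_dim, proj))         # CNN_proj/W_proj: (256, 32)
print((proj + proj, 4 * 128))  # lstm_cell/kernel: (64, 512) = (input + recurrent projection, 4 * dim)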

4.4 ELMo pretraining

  • Pretraining ELMo is meant to be done on GPUs.
  • If --n_gpus is omitted or set to 0 the script errors out, so pass --n_gpus 1 even on a CPU-only machine.
root@ratsgo:/notebooks/embedding# python models/train_elmo.py --train_prefix 'data/sentence-embeddings/elmo/pretrain-ckpt/traindata/*' --vocab_file data/sentence-embeddings/elmo/pretrain-ckpt/elmo-vocab.txt --save_dir data/sentence-embeddings/elmo/pretrain-ckpt --n_gpus 1
Found 4 shards at data/sentence-embeddings/elmo/pretrain-ckpt/traindata/*
Loading data from: data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_ad
Loaded 49992 sentences.
Finished loading
Found 4 shards at data/sentence-embeddings/elmo/pretrain-ckpt/traindata/*
Loading data from: data/sentence-embeddings/elmo/pretrain-ckpt/traindata/data_aa
Loaded 50000 sentences.
Finished loading
WARNING:tensorflow:From /notebooks/embedding/models/bilm/training.py:217: calling squeeze (from tensorflow.python.ops.array_ops) with squeeze_dims is deprecated and will be removed in a future version.
Instructions for updating:
Use the `axis` argument instead
USING SKIP CONNECTIONS
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/nn_impl.py:1124: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.
WARNING:tensorflow:Variable += will be deprecated. Use variable.assign_add if you want assignment to the variable value or 'x = x + y' if you want a new python Tensor object.
[['global_step:0', TensorShape([])],
 ['lm/CNN/W_cnn_0:0',
  TensorShape([Dimension(1), Dimension(1), Dimension(16), Dimension(32)])],
 ['lm/CNN/W_cnn_1:0',
  TensorShape([Dimension(1), Dimension(2), Dimension(16), Dimension(32)])],
 ['lm/CNN/W_cnn_2:0',
  TensorShape([Dimension(1), Dimension(3), Dimension(16), Dimension(64)])],
 ['lm/CNN/W_cnn_3:0',
  TensorShape([Dimension(1), Dimension(4), Dimension(16), Dimension(128)])],
 ['lm/CNN/b_cnn_0:0', TensorShape([Dimension(32)])],
 ['lm/CNN/b_cnn_1:0', TensorShape([Dimension(32)])],
 ['lm/CNN/b_cnn_2:0', TensorShape([Dimension(64)])],
 ['lm/CNN/b_cnn_3:0', TensorShape([Dimension(128)])],
 ['lm/CNN_high_0/W_carry:0', TensorShape([Dimension(256), Dimension(256)])],
 ['lm/CNN_high_0/W_transform:0', TensorShape([Dimension(256), Dimension(256)])],
 ['lm/CNN_high_0/b_carry:0', TensorShape([Dimension(256)])],
 ['lm/CNN_high_0/b_transform:0', TensorShape([Dimension(256)])],
 ['lm/CNN_high_1/W_carry:0', TensorShape([Dimension(256), Dimension(256)])],
 ['lm/CNN_high_1/W_transform:0', TensorShape([Dimension(256), Dimension(256)])],
 ['lm/CNN_high_1/b_carry:0', TensorShape([Dimension(256)])],
 ['lm/CNN_high_1/b_transform:0', TensorShape([Dimension(256)])],
 ['lm/CNN_proj/W_proj:0', TensorShape([Dimension(256), Dimension(32)])],
 ['lm/CNN_proj/b_proj:0', TensorShape([Dimension(32)])],
 ['lm/RNN_0/rnn/multi_rnn_cell/cell_0/lstm_cell/bias:0',
  TensorShape([Dimension(512)])],
 ['lm/RNN_0/rnn/multi_rnn_cell/cell_0/lstm_cell/kernel:0',
  TensorShape([Dimension(64), Dimension(512)])],
 ['lm/RNN_0/rnn/multi_rnn_cell/cell_0/lstm_cell/projection/kernel:0',
  TensorShape([Dimension(128), Dimension(32)])],
 ['lm/RNN_0/rnn/multi_rnn_cell/cell_1/lstm_cell/bias:0',
  TensorShape([Dimension(512)])],
 ['lm/RNN_0/rnn/multi_rnn_cell/cell_1/lstm_cell/kernel:0',
  TensorShape([Dimension(64), Dimension(512)])],
 ['lm/RNN_0/rnn/multi_rnn_cell/cell_1/lstm_cell/projection/kernel:0',
  TensorShape([Dimension(128), Dimension(32)])],
 ['lm/RNN_1/rnn/multi_rnn_cell/cell_0/lstm_cell/bias:0',
  TensorShape([Dimension(512)])],
 ['lm/RNN_1/rnn/multi_rnn_cell/cell_0/lstm_cell/kernel:0',
  TensorShape([Dimension(64), Dimension(512)])],
 ['lm/RNN_1/rnn/multi_rnn_cell/cell_0/lstm_cell/projection/kernel:0',
  TensorShape([Dimension(128), Dimension(32)])],
 ['lm/RNN_1/rnn/multi_rnn_cell/cell_1/lstm_cell/bias:0',
  TensorShape([Dimension(512)])],
 ['lm/RNN_1/rnn/multi_rnn_cell/cell_1/lstm_cell/kernel:0',
  TensorShape([Dimension(64), Dimension(512)])],
 ['lm/RNN_1/rnn/multi_rnn_cell/cell_1/lstm_cell/projection/kernel:0',
  TensorShape([Dimension(128), Dimension(32)])],
 ['lm/char_embed:0', TensorShape([Dimension(261), Dimension(16)])],
 ['lm/softmax/W:0', TensorShape([Dimension(60960), Dimension(32)])],
 ['lm/softmax/b:0', TensorShape([Dimension(60960)])],
 ['train_perplexity:0', TensorShape([])]]
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/util/tf_should_use.py:189: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
2020-06-30 09:11:51.971777: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
WARNING:tensorflow:Issue encountered when serializing lstm_output_embeddings.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'list' object has no attribute 'name'
Training for 3 epochs and 9 batches
WARNING:tensorflow:Issue encountered when serializing lstm_output_embeddings.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'list' object has no attribute 'name'
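
The schedule matches the config: bilm runs int(n_train_tokens / (batch_size * unroll_steps * n_gpus)) = int(8000 / (128 * 20 * 1)) = 3 batches per epoch, times 3 epochs = 9 batches total, which is why the checkpoint restored in the next section is model.ckpt-9.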

5 Save the ELMo model

root@ratsgo:/notebooks/embedding# python models/sent_utils.py --method dump_elmo_weights --input_path data/sentence-embeddings/elmo/pretrain-ckpt --output_path data/sentence-embeddings/elmo/pretrain-ckpt/elmo.model
2020-06-30 09:13:41.673510: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
WARNING:tensorflow:From /notebooks/embedding/models/bilm/training.py:217: calling squeeze (from tensorflow.python.ops.array_ops) with squeeze_dims is deprecated and will be removed in a future version.
Instructions for updating:
Use the `axis` argument instead
USING SKIP CONNECTIONS
INFO:tensorflow:Restoring parameters from data/sentence-embeddings/elmo/pretrain-ckpt/model.ckpt-9
Saving variable lm/char_embed:0 with name char_embed
Saving variable lm/CNN/W_cnn_0:0 with name CNN/W_cnn_0
Saving variable lm/CNN/b_cnn_0:0 with name CNN/b_cnn_0
Saving variable lm/CNN/W_cnn_1:0 with name CNN/W_cnn_1
Saving variable lm/CNN/b_cnn_1:0 with name CNN/b_cnn_1
Saving variable lm/CNN/W_cnn_2:0 with name CNN/W_cnn_2
Saving variable lm/CNN/b_cnn_2:0 with name CNN/b_cnn_2
Saving variable lm/CNN/W_cnn_3:0 with name CNN/W_cnn_3
Saving variable lm/CNN/b_cnn_3:0 with name CNN/b_cnn_3
Saving variable lm/CNN_proj/W_proj:0 with name CNN_proj/W_proj
Saving variable lm/CNN_proj/b_proj:0 with name CNN_proj/b_proj
Saving variable lm/CNN_high_0/W_carry:0 with name CNN_high_0/W_carry
Saving variable lm/CNN_high_0/b_carry:0 with name CNN_high_0/b_carry
Saving variable lm/CNN_high_0/W_transform:0 with name CNN_high_0/W_transform
Saving variable lm/CNN_high_0/b_transform:0 with name CNN_high_0/b_transform
Saving variable lm/CNN_high_1/W_carry:0 with name CNN_high_1/W_carry
Saving variable lm/CNN_high_1/b_carry:0 with name CNN_high_1/b_carry
Saving variable lm/CNN_high_1/W_transform:0 with name CNN_high_1/W_transform
Saving variable lm/CNN_high_1/b_transform:0 with name CNN_high_1/b_transform
Saving variable lm/RNN_0/rnn/multi_rnn_cell/cell_0/lstm_cell/kernel:0 with name RNN_0/RNN/MultiRNNCell/Cell0/LSTMCell/W_0
Saving variable lm/RNN_0/rnn/multi_rnn_cell/cell_0/lstm_cell/bias:0 with name RNN_0/RNN/MultiRNNCell/Cell0/LSTMCell/B
Saving variable lm/RNN_0/rnn/multi_rnn_cell/cell_0/lstm_cell/projection/kernel:0 with name RNN_0/RNN/MultiRNNCell/Cell0/LSTMCell/W_P_0
Saving variable lm/RNN_0/rnn/multi_rnn_cell/cell_1/lstm_cell/kernel:0 with name RNN_0/RNN/MultiRNNCell/Cell1/LSTMCell/W_0
Saving variable lm/RNN_0/rnn/multi_rnn_cell/cell_1/lstm_cell/bias:0 with name RNN_0/RNN/MultiRNNCell/Cell1/LSTMCell/B
Saving variable lm/RNN_0/rnn/multi_rnn_cell/cell_1/lstm_cell/projection/kernel:0 with name RNN_0/RNN/MultiRNNCell/Cell1/LSTMCell/W_P_0
Saving variable lm/RNN_1/rnn/multi_rnn_cell/cell_0/lstm_cell/kernel:0 with name RNN_1/RNN/MultiRNNCell/Cell0/LSTMCell/W_0
Saving variable lm/RNN_1/rnn/multi_rnn_cell/cell_0/lstm_cell/bias:0 with name RNN_1/RNN/MultiRNNCell/Cell0/LSTMCell/B
Saving variable lm/RNN_1/rnn/multi_rnn_cell/cell_0/lstm_cell/projection/kernel:0 with name RNN_1/RNN/MultiRNNCell/Cell0/LSTMCell/W_P_0
Saving variable lm/RNN_1/rnn/multi_rnn_cell/cell_1/lstm_cell/kernel:0 with name RNN_1/RNN/MultiRNNCell/Cell1/LSTMCell/W_0
Saving variable lm/RNN_1/rnn/multi_rnn_cell/cell_1/lstm_cell/bias:0 with name RNN_1/RNN/MultiRNNCell/Cell1/LSTMCell/B
Saving variable lm/RNN_1/rnn/multi_rnn_cell/cell_1/lstm_cell/projection/kernel:0 with name RNN_1/RNN/MultiRNNCell/Cell1/LSTMCell/W_P_0
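
Downstream, the dumped weight file is consumed through the bilm API. A minimal sketch, assuming the bundled bilm package follows allenai/bilm-tf (Batcher, BidirectionalLanguageModel, weight_layers) and that an options JSON matching the training config is available; the options.json path below is hypothetical.

import tensorflow as tf
from bilm import Batcher, BidirectionalLanguageModel, weight_layers

ckpt_dir = 'data/sentence-embeddings/elmo/pretrain-ckpt'
options_file = ckpt_dir + '/options.json'  # hypothetical: JSON mirroring the training options
weight_file = ckpt_dir + '/elmo.model'
vocab_file = ckpt_dir + '/elmo-vocab.txt'

batcher = Batcher(vocab_file, 30)  # 30 = max_characters_per_token
character_ids = tf.placeholder('int32', shape=(None, None, 30))

# Build the biLM graph from the dumped weights, then collapse its layers
# into one weighted ELMo vector per token.
bilm = BidirectionalLanguageModel(options_file, weight_file)
ops = bilm(character_ids)
elmo = weight_layers('elmo', ops, l2_coef=0.0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    ids = batcher.batch_sentences([['이', '영화', '재밌', '네']])
    vecs = sess.run(elmo['weighted_op'], feed_dict={character_ids: ids})
    print(vecs.shape)  # (1, n_tokens, 2 * projection_dim) = (1, 4, 64) here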
