
ML-based Spacing Corrector

This model is an improved version of TrainKoSpacing, using FastText instead of Word2Vec.

Performances

| Model | Test Accuracy (%) | Encoding Time Cost |
|---|---|---|
| TrainKoSpacing | 96.6147 | 02m 23s |
| 자모분해 FastText | 98.9915 | 08h 20m 11s |
| 2 Stage FastText | 99.0888 | 03m 23s |

Data

Corpus

We mainly use the National Institute of Korean Language 모두의 말뭉치 corpus and National Information Society Agency AI-Hub data. However, due to licensing restrictions, we cannot redistribute these datasets. You should be able to obtain them through the links below: National Institute of Korean Language 모두의 말뭉치, National Information Society Agency AI-Hub.

Data format

A bzip2-compressed file consisting of one sentence per line.

~/KoSpacing/data$ bzcat train.txt.bz2 | head
엠마누엘 웅가로 / 의상서 실내 장식품으로… 디자인 세계 넓혀
프랑스의 세계적인 의상 디자이너 엠마누엘 웅가로가 실내 장식용 직물 디자이너로 나섰다.
웅가로는 침실과 식당, 욕실에서 사용하는 갖가지 직물제품을 디자인해 최근 파리의 갤러리 라파예트백화점에서 '색의 컬렉션'이라는 이름으로 전시회를 열었다.
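
For preprocessing in Python, the same file can be streamed with the standard library bz2 module. A minimal sketch; the path follows the training command below rather than the shell example above:

import bz2

# Stream the training corpus: one sentence per line, UTF-8 encoded.
with bz2.open("data/train.txt.bz2", mode="rt", encoding="utf-8") as corpus:
    for i, line in enumerate(corpus):
        print(line.rstrip("\n"))
        if i == 2:  # show only the first few sentences, like `bzcat ... | head`
            break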

Architecture

Model

kospacing_img

Word Embedding

자모분해

To represent Korean characters with similar shapes by similar vectors, the 자모분해 (jamo decomposition) FastText word embedding is used. e.g. 자연어처리 → ㅈ ㅏ – ㅇ ㅕ ㄴ ㅇ ㅓ – ㅊ ㅓ – ㄹ ㅣ –
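
The decomposition can be reproduced with plain Unicode arithmetic. A minimal sketch; the "–" filler for an empty final consonant (종성) is an assumption taken from the example above:

# Hangul syllables occupy U+AC00..U+D7A3 and decompose as
# code = 0xAC00 + (cho * 21 + jung) * 28 + jong.
CHO = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
JUNG = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
JONG = ["–"] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def decompose(text):
    jamos = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:              # Hangul syllable block
            cho, rest = divmod(code, 588)  # 588 = 21 * 28
            jung, jong = divmod(rest, 28)
            jamos += [CHO[cho], JUNG[jung], JONG[jong]]
        else:
            jamos.append(ch)               # pass non-Hangul characters through
    return " ".join(jamos)

print(decompose("자연어처리"))
# ㅈ ㅏ – ㅇ ㅕ ㄴ ㅇ ㅓ – ㅊ ㅓ – ㄹ ㅣ –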

2 stage FastText

Because 자모분해 takes extra time to compute, the 자모분해 FastText model is used only for out-of-vocabulary tokens (see the sketch after the figure below).

2-stage-FastText_img
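
A minimal sketch of the fallback logic, assuming both embeddings were trained with Gensim and saved under the paths listed in the Directory section, and reusing the decompose() helper sketched above; the exact jamo tokenization used in this repository may differ:

from gensim.models import FastText

# Stage 1 model: trained on raw text. Stage 2 model: trained on jamo-decomposed text.
word_model = FastText.load("model/fasttext")
jamo_model = FastText.load("jamo_model/fasttext")

def embed(token):
    if token in word_model.wv.vocab:        # in vocabulary: fast path (Gensim 3.x API)
        return word_model.wv[token]
    return jamo_model.wv[decompose(token)]  # OOV: fall back to the 자모분해 model

vec = embed("평창동계올림픽")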

Thresholding

The middle part of the output probability distribution is nearly uniform, so a fixed cut-off is hard to choose:

probability_distribution_of_output_vector

A log transform is applied and the second derivative of the resulting curve is used to select the threshold:

Thresholding_result
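
A minimal sketch of this threshold selection, assuming the model outputs one spacing probability per character position; the exact procedure in train.py may differ:

import numpy as np

def pick_threshold(probs):
    # Sort the probabilities, log-transform them, and place the threshold
    # where the curvature (second derivative) of the sorted curve peaks,
    # i.e. where the flat middle of the distribution ends.
    p = np.sort(np.asarray(probs, dtype=float))
    logp = np.log(p + 1e-8)                     # epsilon avoids log(0)
    curvature = np.gradient(np.gradient(logp))  # discrete second derivative
    return p[int(np.argmax(curvature))]

probs = np.random.rand(1000)                    # stand-in for model outputs
threshold = pick_threshold(probs)
spaced = probs >= threshold                     # insert a space where True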

How to Run

Installation

  • For training, a GPU is strongly recommended. Training on CPU is supported but can be extremely slow.
  • Only Python 3.7 and above is supported.

Requirements

  • Python (>= 3.7)

  • MXNet (>= 1.6.0)

  • tqdm (>= 4.19.5)

  • Pandas (>= 0.22.0)

  • Gensim (>= 3.8.1)

  • GluonNLP (>= 0.9.1)

  • soynlp (>= 0.0.493)

Dependencies

pip install -r requirements.txt

Training

python train.py --train --train-samp-ratio 1.0 --num-epoch 50 --train_data data/train.txt.bz2 --test_data data/test.txt.bz2 --outputs train_log_to --model_type kospacing --model-file fasttext

Evaluation

python train.py --model-params model/kospacing.params --model_type kospacing
sent > 중국은2018년평창동계올림픽의반환점에이르기까지아직노골드행진이다.
중국은2018년평창동계올림픽의반환점에이르기까지아직노골드행진이다.
spaced sent[0.12sec/sent]  > 중국은 2018년 평창동계올림픽의 반환점에 이르기까지 아직 노골드 행진이다.  

Directory

Directory guide for the embedding model files. Bold text indicates required files.

  • model

    • fasttext
    • fasttext_vis
    • fasttext.trainables.vectors_ngrams_lockf.npy
    • fasttext.wv.vectors_ngrams.npy
    • kospacing_wv.np
    • w2idx.dic
  • jamo_model

    • fasttext
    • fasttext_vis
    • fasttext.trainables.vectors_ngrams_lockf.npy
    • fasttext.wv.vectors_ngrams.npy
    • kospacing_wv.np
    • w2idx.dic

Reference

TrainKoSpacing: https://github.com/haven-jeon/TrainKoSpacing
딥 러닝을 이용한 자연어 처리 입문 (Introduction to Natural Language Processing with Deep Learning): https://wikidocs.net/book/2155