
[CMU] Progressive Generation of Long Text


Requirements

torch==1.2.0
transformers==2.5.1
fairseq==0.9.0

Fine-tuning the GPT2-Large baseline requires at least 4 GPUs on your device; 2 GPUs are enough for the other first-stage models.
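If you are unsure how many GPUs are visible to PyTorch, here is a quick generic check (not part of the repo) to run before launching training:

import torch

# GPT2-Large fine-tuning needs >= 4 visible GPUs; bart/gpt2 need >= 2.
required = 4  # set to 2 for the bart / gpt2 first-stage models
available = torch.cuda.device_count()
if available < required:
    raise RuntimeError(
        f"Need at least {required} GPUs, found {available}; "
        "check CUDA_VISIBLE_DEVICES.")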

Download Data

python download/download_cnn.py
python download/download_writing_prompts.py
Train

python train.py \
    --dataset [cnn/wp] \
    --prog_steps null-{...}-full \
    --first_model [bart/gpt2/gpt2-large]
--first_model specifies the type of the first-stage model.
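--prog_steps specifies the progressive stages: generation starts from an empty sequence (null), passes through intermediate stages, and ends with the full text (full). In the paper, intermediate training targets keep only the most important words of each text, with importance measured by TF-IDF. Below is a rough sketch of that idea; the stage_target helper and the toy corpus are illustrative assumptions, not the repo's actual preprocessing.

# Illustrative only: build an intermediate-stage target by keeping the
# top-importance word types of a text, in their original order.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "a quick movement of the enemy will jeopardize six gunboats",
]
vectorizer = TfidfVectorizer()
vectorizer.fit(corpus)
# Map each word to its IDF weight as a crude importance score.
importance = dict(zip(vectorizer.get_feature_names_out(), vectorizer.idf_))

def stage_target(text, keep_ratio):
    # Keep every occurrence of the top keep_ratio fraction of word types.
    words = text.split()
    k = max(1, int(len(words) * keep_ratio))
    top = set(sorted(set(words), key=lambda w: importance.get(w, 0.0),
                     reverse=True)[:k])
    return " ".join(w for w in words if w in top)

print(stage_target(corpus[0], 0.3))  # a sparse "sketch" of the sentence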

The training log will be stored in training_logs/{your setting}/:

training_logs/{setting}/log.txt: evaluation loss of each checkpoint.
training_logs/{setting}/ckpt_gens/step{}.txt: ~10 generation examples on the dev set at each checkpoint.
training_logs/{setting}/best_model.pt: the best checkpoint according to evaluation loss (see the inspection sketch below).
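The exact layout of best_model.pt depends on how train.py saves it; the generic way to inspect a PyTorch checkpoint (replace {setting} with your run's directory) is:

import torch

# Load on CPU so inspection does not require a GPU.
state = torch.load("training_logs/{setting}/best_model.pt", map_location="cpu")
print(type(state))  # typically a state_dict or a dict of training state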

Check scripts/train_all.sh for all commands for training.

Generate

python generate.py \
    --dataset [cnn/wp] \
    --prog_steps null-{...}-full \
    --first_model [bart/gpt2/gpt2-large]

Generated texts will be stored in generated_texts/{your setting}/:

generated_texts/{setting}/gen.txt: generation log.
generated_texts/{setting}/gen.pickle: all generated texts stored in a pickle file (see the loading sketch below).
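To work with the generated texts programmatically, load the pickle file. The object layout inside gen.pickle is an assumption here (a sequence of strings); adjust to whatever generate.py actually stores:

import pickle

# Replace {setting} with your actual run directory.
with open("generated_texts/{setting}/gen.pickle", "rb") as f:
    texts = pickle.load(f)

print(len(texts))      # number of generated samples (assuming a sequence)
print(texts[0][:200])  # preview of the first generation (assuming strings)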

Check scripts/gen_all.sh for all commands for generation.

Evaluate

python evaluate.py \
    --dataset [cnn/wp] \
    --prog_steps null-{...}-full \
    --first_model [bart/gpt2/gpt2-large]

Check scripts/eval_all.sh for all commands for evaluation.

Present Results

python present_eval_results.py \
    --dataset [cnn/wp] \
    --metric [ms_jaccard/frechet_bert_distance/tfidf_distance/forward_backward_bleu]
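As a rough intuition for one of the metrics: forward BLEU scores generations against the reference corpus (quality), while backward BLEU scores references against the generated corpus (diversity). A minimal sketch with NLTK, which may differ from the repo's evaluate.py implementation:

from nltk.translate.bleu_score import corpus_bleu

def forward_backward_bleu(references, generations):
    refs = [r.split() for r in references]
    gens = [g.split() for g in generations]
    # corpus_bleu expects, per hypothesis, a list of candidate references;
    # here every hypothesis is compared against the whole opposite corpus.
    forward = corpus_bleu([refs] * len(gens), gens)
    backward = corpus_bleu([gens] * len(refs), refs)
    harmonic = 2 * forward * backward / (forward + backward + 1e-8)
    return forward, backward, harmonic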

Github

[Paper]
