icefall-asr-librispeech-pruned-transducer-stateless7-streaming-2022-12-29/decoding_results/fast_beam_search/log-decode-epoch-30-avg-9-streaming-chunk-size-64-beam-20.0-max-contexts-8-max-states-64-use-averaged-model-2022-12-26-16-03-35
2022-12-26 16:03:35,556 INFO [decode.py:655] Decoding started
2022-12-26 16:03:35,557 INFO [decode.py:661] Device: cuda:0
2022-12-26 16:03:35,563 INFO [decode.py:671] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.2', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'efd83642a940dc7db08688cc0791985bed1fafcd', 'k2-git-date': 'Sun Nov 27 19:12:00 2022', 'lhotse-version': '1.12.0.dev+git.891bad1.clean', 'torch-version': '1.10.0+cu102', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'streaming_zipformer', 'icefall-git-sha1': '82b18bc-clean', 'icefall-git-date': 'Mon Dec 26 16:03:17 2022', 'icefall-path': '/star-zw/workspace/zipformer/icefall_streaming2', 'k2-path': '/star-zw/workspace/share/k2-last/k2/python/k2/__init__.py', 'lhotse-path': '/star-zw/env/k2_icefall/lib/python3.8/site-packages/lhotse-1.12.0.dev0+git.891bad1.clean-py3.8.egg/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-1216192652-5bcf7587b4-n6q9m', 'IP address': '10.177.74.211'}, 'epoch': 30, 'iter': 0, 'avg': 9, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp-full-dynamic-chunk'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 
'num_left_chunks': 4, 'decode_chunk_len': 64, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless7_streaming/exp-full-dynamic-chunk/fast_beam_search'), 'suffix': 'epoch-30-avg-9-streaming-chunk-size-64-beam-20.0-max-contexts-8-max-states-64-use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
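The config dump above corresponds, assuming the standard icefall recipe layout, to a decode invocation along these lines. This is a hypothetical reconstruction inferred from the logged parameters, not the original command; the script path and flag names follow the pruned_transducer_stateless7_streaming recipe.

```shell
# Hypothetical reconstruction of the decoding command behind this log.
# All values are taken from the config dump above; flag names are assumed
# from the recipe's decode.py argument parser.
./pruned_transducer_stateless7_streaming/decode.py \
  --epoch 30 \
  --avg 9 \
  --use-averaged-model 1 \
  --exp-dir pruned_transducer_stateless7_streaming/exp-full-dynamic-chunk \
  --bpe-model data/lang_bpe_500/bpe.model \
  --decoding-method fast_beam_search \
  --decode-chunk-len 64 \
  --beam 20.0 \
  --max-contexts 8 \
  --max-states 64 \
  --max-duration 600
```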
2022-12-26 16:03:35,563 INFO [decode.py:673] About to create model
2022-12-26 16:03:36,130 INFO [zipformer.py:378] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
2022-12-26 16:03:36,143 INFO [decode.py:744] Calculating the averaged model over epoch range from 21 (excluded) to 30
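The line above averages the checkpoints in the half-open range (21, 30], i.e. epochs 22 through 30 (hence avg=9). With --use-averaged-model, icefall recovers this window mean from the cumulative running averages stored at the two endpoint checkpoints instead of summing nine state dicts. A minimal numeric sketch of that formula, using made-up scalar "parameters" in place of tensors:

```python
def average_over_range(avg_start, count_start, avg_end, count_end):
    """Mean of the models seen strictly after `count_start` up to `count_end`,
    given cumulative running averages at both points. Since avg * count is the
    sum of all models seen so far, the difference of the two sums divided by
    the window size yields the window mean."""
    return (avg_end * count_end - avg_start * count_start) / (count_end - count_start)

# Toy check with scalar "models" 1..30 (one number per epoch).
models = list(range(1, 31))
avg_21 = sum(models[:21]) / 21   # cumulative average stored at epoch 21
avg_30 = sum(models) / 30        # cumulative average stored at epoch 30
window = average_over_range(avg_21, 21, avg_30, 30)
# Equals the plain mean of models 22..30.
assert window == sum(models[21:]) / 9
```

In the real script the same arithmetic is applied elementwise to every parameter tensor, which is why only two checkpoints need to be loaded.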
2022-12-26 16:03:48,140 INFO [decode.py:778] Number of model parameters: 70369391
2022-12-26 16:03:48,140 INFO [asr_datamodule.py:443] About to get test-clean cuts
2022-12-26 16:03:48,159 INFO [asr_datamodule.py:450] About to get test-other cuts
2022-12-26 16:03:50,925 INFO [decode.py:560] batch 0/?, cuts processed until now is 43
2022-12-26 16:03:58,813 INFO [zipformer.py:2453] attn_weights_entropy = tensor([1.8448, 1.7894, 1.9317, 1.7849, 1.4553, 3.7271, 1.8085, 2.3414],
device='cuda:0'), covar=tensor([0.3225, 0.2089, 0.1803, 0.2081, 0.1425, 0.0163, 0.1478, 0.0743],
device='cuda:0'), in_proj_covar=tensor([0.0131, 0.0117, 0.0123, 0.0121, 0.0104, 0.0094, 0.0089, 0.0088],
device='cuda:0'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0004, 0.0004],
device='cuda:0')
2022-12-26 16:04:09,638 INFO [zipformer.py:2453] attn_weights_entropy = tensor([2.6905, 2.1925, 1.8600, 2.4678, 1.9859, 2.2498, 2.1110, 2.5181],
device='cuda:0'), covar=tensor([0.2180, 0.3849, 0.2003, 0.2866, 0.4265, 0.1033, 0.3543, 0.1013],
device='cuda:0'), in_proj_covar=tensor([0.0293, 0.0293, 0.0247, 0.0345, 0.0274, 0.0228, 0.0290, 0.0214],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2022-12-26 16:04:25,948 INFO [decode.py:560] batch 20/?, cuts processed until now is 1430
2022-12-26 16:04:58,837 INFO [decode.py:560] batch 40/?, cuts processed until now is 2561
2022-12-26 16:04:59,894 INFO [decode.py:576] The transcripts are stored in pruned_transducer_stateless7_streaming/exp-full-dynamic-chunk/fast_beam_search/recogs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-streaming-chunk-size-64-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2022-12-26 16:04:59,967 INFO [utils.py:536] [test-clean-beam_20.0_max_contexts_8_max_states_64] %WER 3.02% [1590 / 52576, 194 ins, 109 del, 1287 sub ]
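The WER on the line above is the sum of insertions, deletions, and substitutions divided by the number of reference words. A small sketch checking the logged counts (194 ins + 109 del + 1287 sub = 1590 errors over 52576 reference words):

```python
def wer(ins, dels, subs, ref_words):
    """Word error rate as a percentage: (I + D + S) / N_ref * 100."""
    return 100.0 * (ins + dels + subs) / ref_words

# Counts logged for test-clean above.
errors = 194 + 109 + 1287          # 1590, matching the bracketed total
print(f"{wer(194, 109, 1287, 52576):.2f}%")  # -> 3.02%
```

The same arithmetic reproduces the 7.47% reported for test-other further down (404 + 336 + 3170 = 3910 errors over 52343 words).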
2022-12-26 16:05:00,146 INFO [decode.py:589] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp-full-dynamic-chunk/fast_beam_search/errs-test-clean-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-streaming-chunk-size-64-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2022-12-26 16:05:00,149 INFO [decode.py:605]
For test-clean, WER of different settings are: | |
beam_20.0_max_contexts_8_max_states_64 3.02 best for test-clean | |
2022-12-26 16:05:02,362 INFO [decode.py:560] batch 0/?, cuts processed until now is 52
2022-12-26 16:05:32,491 INFO [decode.py:560] batch 20/?, cuts processed until now is 1647
2022-12-26 16:05:58,110 INFO [zipformer.py:2453] attn_weights_entropy = tensor([2.4443, 2.4112, 2.1316, 1.2474, 2.0257, 2.0422, 1.9451, 2.2252],
device='cuda:0'), covar=tensor([0.0523, 0.0401, 0.1174, 0.1557, 0.0972, 0.1258, 0.1322, 0.0606],
device='cuda:0'), in_proj_covar=tensor([0.0169, 0.0183, 0.0204, 0.0186, 0.0206, 0.0200, 0.0213, 0.0199],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002],
device='cuda:0')
2022-12-26 16:06:06,445 INFO [decode.py:560] batch 40/?, cuts processed until now is 2870
2022-12-26 16:06:07,632 INFO [decode.py:576] The transcripts are stored in pruned_transducer_stateless7_streaming/exp-full-dynamic-chunk/fast_beam_search/recogs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-streaming-chunk-size-64-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2022-12-26 16:06:07,712 INFO [utils.py:536] [test-other-beam_20.0_max_contexts_8_max_states_64] %WER 7.47% [3910 / 52343, 404 ins, 336 del, 3170 sub ]
2022-12-26 16:06:07,970 INFO [decode.py:589] Wrote detailed error stats to pruned_transducer_stateless7_streaming/exp-full-dynamic-chunk/fast_beam_search/errs-test-other-beam_20.0_max_contexts_8_max_states_64-epoch-30-avg-9-streaming-chunk-size-64-beam-20.0-max-contexts-8-max-states-64-use-averaged-model.txt
2022-12-26 16:06:07,973 INFO [decode.py:605]
For test-other, WER of different settings are: | |
beam_20.0_max_contexts_8_max_states_64 7.47 best for test-other | |
2022-12-26 16:06:07,973 INFO [decode.py:809] Done!