Machine Translation

A knowledge graph for Machine Translation

Posted by Wanxin on Monday, February 15, 2021
  • Machine Translation (MT)
    • Pre-neural [1950s]: rule-based / dictionary-based
    • Statistical [1990s]
      • Bayes' rule, learn alignment
    • Decoding: search for the best translation
    • Neural Machine Translation [2014]
      • seq2seq
        • two RNNs involved: an encoder and a decoder (see the sketch after this list)
        • encoding
        • decoding
          • greedy decoding
          • exhaustive search decoding
          • beam search
            • track k hypotheses at each step, expand them to k^2 candidates, keep the top k (see the beam search sketch after this list)
      • Evaluation
        • BLEU (BiLingual Evaluation Understudy): modified n-gram precision against reference translations, with a brevity penalty (see the example after this list)
    • Attention & seq2seq [2019]
      • core idea: on each step of the decoder, use a direct connection (dot product) to the encoder to focus on a particular part of the source sentence
      • it’s a general DL technique
      • attention variants
      • concept: the context vector (= attention output) is the attention-weighted sum of the encoder hidden states (see the attention sketch after this list)
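
To make the seq2seq bullet above concrete, here is a minimal encoder-decoder sketch with two RNNs. It is only an illustration: the PyTorch GRU modules, the vocabulary sizes, and the names (SRC_VOCAB, TGT_VOCAB, EMB, HIDDEN) are assumptions for the example, not part of the original notes.

```python
# Minimal seq2seq sketch: one RNN (encoder) reads the source sentence,
# a second RNN (decoder) generates the target sentence from its summary.
# All sizes below are illustrative placeholders.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HIDDEN = 1000, 1200, 64, 128

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HIDDEN, batch_first=True)  # encoding RNN
        self.decoder = nn.GRU(EMB, HIDDEN, batch_first=True)  # decoding RNN
        self.out = nn.Linear(HIDDEN, TGT_VOCAB)               # next-token logits

    def forward(self, src, tgt_in):
        # Encoding: the final encoder hidden state summarizes the source.
        _, h = self.encoder(self.src_emb(src))
        # Decoding with teacher forcing: start from that summary and predict
        # the next target token at every position of the gold prefix.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), h)
        return self.out(dec_out)                              # (batch, tgt_len, TGT_VOCAB)

src = torch.randint(0, SRC_VOCAB, (2, 7))     # toy batch: 2 source sentences, length 7
tgt_in = torch.randint(0, TGT_VOCAB, (2, 5))  # shifted gold target prefix, length 5
print(Seq2Seq()(src, tgt_in).shape)           # torch.Size([2, 5, 1200])
```

At inference time the decoder's own predictions are fed back in, which is where the decoding strategies in the list (greedy, exhaustive, beam) come into play.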
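The beam search bullet ("k hypotheses, k^2 candidates, keep k") can be sketched independently of any particular model. The step_fn and the toy probabilities below are invented stand-ins for a real decoder's next-token distribution.

```python
import math

def beam_search(step_fn, bos, eos, k=3, max_len=10):
    """step_fn(prefix) -> [(token, prob), ...] for the next token.
    Keep k hypotheses; expand each with its top-k continuations (k^2
    candidates), then keep only the best k by log-probability."""
    beams = [([bos], 0.0)]                        # (token sequence, log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == eos:                    # hypothesis already ended
                finished.append((seq, score))
                continue
            for tok, p in sorted(step_fn(seq), key=lambda x: -x[1])[:k]:
                candidates.append((seq + [tok], score + math.log(p)))
        if not candidates:                        # every hypothesis has ended
            break
        beams = sorted(candidates, key=lambda x: -x[1])[:k]   # prune k^2 -> k
    return max(finished + beams, key=lambda x: x[1])

# Invented next-token distribution: prefer <eos> once the prefix is long.
def toy_step(prefix):
    if len(prefix) >= 3:
        return [("<eos>", 0.7), ("la", 0.2), ("maison", 0.1)]
    return [("la", 0.5), ("maison", 0.3), ("<eos>", 0.2)]

print(beam_search(toy_step, "<bos>", "<eos>", k=2, max_len=6))
```

Greedy decoding is the special case k = 1; exhaustive search would score every possible target sequence, which is exponential in the output length.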
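For the BLEU bullet, a quick way to compute a sentence-level score is NLTK's sentence_bleu; the example sentences are made up, and smoothing is used only so that short toy sentences do not score zero.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["the", "cat", "sat", "on", "the", "mat"]]  # list of tokenized references
candidate = ["the", "cat", "is", "on", "the", "mat"]

# BLEU: geometric mean of modified 1..4-gram precisions times a brevity penalty.
# Here the clipped unigram precision is 5/6 ("is" has no match in the reference).
score = sentence_bleu(references, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))
```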
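Finally, the attention bullets correspond to a few lines of math: dot the current decoder state with each encoder hidden state, softmax the scores into attention weights, and take the weighted sum as the context vector. A small NumPy sketch with made-up shapes:

```python
import numpy as np

def dot_product_attention(dec_state, enc_states):
    """dec_state: (h,); enc_states: (src_len, h) -> context (h,), weights (src_len,)."""
    scores = enc_states @ dec_state          # one dot product per source position
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    context = weights @ enc_states           # attention-weighted sum of encoder states
    return context, weights

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(5, 8))    # 5 source positions, hidden size 8 (illustrative)
dec_state = rng.normal(size=8)          # current decoder hidden state
context, weights = dot_product_attention(dec_state, enc_states)
print(weights.round(2), context.shape)  # weights sum to 1; context has shape (8,)
```

The "attention variants" bullet refers to other ways of computing the scores (e.g., multiplicative or additive attention instead of a plain dot product); only the scoring function changes, while the weighted-sum step stays the same.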