Akhil Perincherry


Hello! I am a PhD student at Oregon State University working with Dr. Stefan Lee, broadly on vision-language grounding and embodied AI. I am also an ML engineer at Ford Motor Company, working primarily on perception features for automated driving, mostly using camera images and LiDAR point clouds.

In my free time (404), I enjoy playing/watching soccer, kickboxing, hiking (esp. waterfall hikes) and practically any outdoor sport.


Attention Is All You Need - Paper Summary

Background

Architecture

An autoregressive (AR) encoder-decoder model is proposed. The encoder maps the input sequence of symbol representations to a sequence of continuous representations. Given these, the decoder generates the model’s output sequence one element at a time. Since the model is AR, it consumes the previously generated symbols as additional input when generating the next symbol.
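To make the generation loop concrete, here is a minimal Python sketch of the autoregressive encode-then-decode flow described above. The `encode` and `decode_step` functions are hypothetical placeholders, not the paper's actual model; they only stand in for the encoder and decoder so that the loop itself is runnable.

```python
# Minimal sketch of autoregressive encoder-decoder generation.
# encode() and decode_step() are hypothetical stand-ins for the real
# encoder/decoder; only the data flow of the loop matters here.

BOS, EOS = "<s>", "</s>"

def encode(input_tokens):
    """Map the input sequence to a sequence of continuous representations."""
    # Placeholder: a real encoder would return one vector per input token.
    return [[float(len(tok))] for tok in input_tokens]

def decode_step(memory, generated):
    """Predict the next output symbol given the encoder output ("memory")
    and all previously generated symbols (the autoregressive property)."""
    # Placeholder policy: emit dummy tokens, then stop once the output
    # is longer than the input.
    return EOS if len(generated) > len(memory) else f"y{len(generated)}"

def generate(input_tokens, max_len=50):
    memory = encode(input_tokens)        # run the encoder once
    output = [BOS]
    for _ in range(max_len):
        next_symbol = decode_step(memory, output)  # condition on past outputs
        output.append(next_symbol)
        if next_symbol == EOS:           # stop when the end symbol is produced
            break
    return output[1:]

print(generate(["an", "input", "sequence"]))
```

Note that the encoder runs once per input sequence, while the decoder is invoked repeatedly, each time seeing the growing output prefix.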

Encoder

Decoder

Attention

Multi-head attention

Layers

Positional embeddings

Self-attention benefits

References