Transformer architectures underpin modern natural language processing and large language models. We believe organisations that master AI, Cloud, and Data technologies gain a decisive advantage in building scalable, intelligent applications. This hands-on workshop equips learners with the skills to construct, fine-tune, and deploy Transformer-based deep learning models for real-world NLP tasks.

Over eight hours, participants will build a Transformer neural network in PyTorch, develop a named-entity recognition application using BERT, and deploy the solution with ONNX and NVIDIA TensorRT to an NVIDIA Triton Inference Server. By the end of the course, learners will be proficient in applying task-agnostic Transformer-based models to text classification, named-entity recognition, and question answering, and in deploying them to production-ready inference environments.
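As a taste of the first module, the sketch below shows the kind of Transformer encoder participants build in PyTorch. The class name, layer sizes, and classification head are illustrative choices, not the workshop's exact curriculum.

```python
import torch
import torch.nn as nn

class TinyTransformerClassifier(nn.Module):
    """Minimal Transformer encoder for text classification (hypothetical sizes)."""
    def __init__(self, vocab_size=1000, d_model=64, nhead=4,
                 num_layers=2, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, tokens):
        x = self.embed(tokens)            # (batch, seq, d_model)
        x = self.encoder(x)               # self-attention over the sequence
        return self.head(x.mean(dim=1))   # mean-pool, then classify

model = TinyTransformerClassifier()
logits = model(torch.randint(0, 1000, (2, 16)))  # batch of 2, seq length 16
print(logits.shape)  # torch.Size([2, 3])
```

A model like this can later be exported with `torch.onnx.export` for serving, which mirrors the ONNX-to-Triton deployment path covered in the final module.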