  • GOC685
  • Duration: 3 days
  • 30 ITK points
  • 0 scheduled dates
  • Czech Republic (on request)
  • Slovakia (on request)
  • Intermediate

A hands-on training for data professionals who want to build a solid foundation in Python programming and use it effectively for data processing within the Microsoft Fabric environment. You will spend most of the time working in Notebooks — mastering programming principles, learning to work with data structures, functions and objects, and understanding how to use Python for practical data processing. You will learn to work with common libraries such as Pandas, Polars, PySpark and DuckDB, and understand their role in the Microsoft Fabric ecosystem. The course will guide you through the fundamentals of algorithmic thinking, working with data sources, data transformations and storing data in Lakehouse. You will gain confidence in writing clean and maintainable code and understand how Python fits into the broader context of data engineering in Microsoft Fabric. The emphasis is on practical application — working with real data, interactive development in Notebooks, integration with Lakehouse, and working with SQL endpoints and the Spark environment.

After completing the course, you will be able to:
  • Understand the fundamental principles of programming and how Python works
  • Work with core language constructs – variables, conditions, loops, functions
  • Use common Python modules and install external libraries
  • Work with Pandas, Polars, PySpark and DuckDB libraries for data processing
  • Load, transform and store data in the Microsoft Fabric environment
  • Understand the principles of Lakehouse architecture and working with Delta Lake
  • Use Python as a Data Engineering tool in Fabric
  • Write clean, efficient and maintainable code following best practices

The course is intended for data professionals who want to start using Python in the Microsoft Fabric environment for data processing purposes. It is primarily aimed at data engineers who are new to Python and Apache Spark, but is also suitable for data analysts looking to expand their data working capabilities, or for Power BI developers transitioning into the Microsoft Fabric ecosystem. The course is also suitable for participants with no prior experience with Python.

  • Basic knowledge of the Microsoft Fabric environment, at least at the GOC680 level
  • Basic programming experience is recommended
  • Basic familiarity with working with data
  • Basic knowledge of SQL is an advantage, not a requirement
  • Experience with analytical tools (e.g. Microsoft Power BI) is an advantage
1. Introduction and the Microsoft Fabric environment
  • Programming principles and their role in data processing
  • Basic algorithm concepts – sequences, conditions, loops
  • Specifics of the Microsoft Fabric environment
  • Notebooks vs. classic development environments
2. Python language fundamentals
  • Python syntax
  • Variables and data types
  • Working with text, numbers, boolean values and dates
  • Code writing conventions and best practices
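The fundamentals in this module can be sketched in a few lines; the values below (course code, dates) are illustrative only:

```python
from datetime import date

# Variables and the core data types covered in this module
course_code = "GOC685"          # str  (text)
duration_days = 3               # int  (number)
price_includes_vat = False      # bool (boolean value)
start = date(2025, 1, 15)       # date (illustrative value, not a real term)

# Working with text and numbers via an f-string
summary = f"{course_code}: {duration_days}-day course starting {start.isoformat()}"
print(summary)
```

The snake_case names and type comments follow the PEP 8 conventions the module refers to.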
3. Data collections and program flow control
  • Conditions and branching
  • for and while loops
  • Error and exception handling
  • Lists, tuples, sets and dictionaries
  • Iteration and working with indices
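The topics above fit together naturally; a minimal sketch combining loops, branching, exception handling and dictionaries (the sample values are made up):

```python
# Clean a list of raw string values, skipping entries that
# cannot be converted to integers (error handling with try/except).
raw_values = ["12", "7", "oops", "25", "3"]

numbers = []
for value in raw_values:
    try:
        numbers.append(int(value))   # may raise ValueError
    except ValueError:
        continue                     # skip non-numeric entries

# Branching inside a loop, counting values into a dictionary
buckets = {"low": 0, "high": 0}
for n in numbers:
    if n < 10:
        buckets["low"] += 1
    else:
        buckets["high"] += 1

print(numbers)   # [12, 7, 25, 3]
print(buckets)   # {'low': 2, 'high': 2}
```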
4. Functions, modules and code structuring
  • Creating custom functions
  • Parameters and return values
  • Structuring code using functions
  • Using built-in and external modules
  • Installing and using external libraries
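A small sketch of a custom function with parameters, a return value and a built-in module (`math` here; the function itself is a hypothetical example, not course material):

```python
import math  # built-in module, no installation needed


def circle_area(radius: float) -> float:
    """Return the area of a circle; raise ValueError for negative input."""
    if radius < 0:
        raise ValueError("radius must be non-negative")
    return math.pi * radius ** 2


print(round(circle_area(2.0), 2))
```

External libraries such as Pandas would instead be installed with `pip install pandas` (or, in Fabric, via the environment's library management) before being imported the same way.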
5. Object-oriented programming and advanced concepts
  • Principles of object-oriented programming
  • Creating custom classes
  • Methods and constructors
  • Encapsulation and working with object state
  • Lambda functions
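The OOP concepts listed above can be illustrated with a minimal class (the `Sensor` example is hypothetical, chosen only to show a constructor, a method, encapsulated state and a lambda):

```python
class Sensor:
    """Minimal class: constructor, a method, and encapsulated state."""

    def __init__(self, name: str):
        self.name = name
        self._readings = []          # leading underscore: internal state

    def add_reading(self, value: float) -> None:
        self._readings.append(value)

    def average(self) -> float:
        return sum(self._readings) / len(self._readings)


s = Sensor("temp-01")
for v in (20.0, 22.0, 24.0):
    s.add_reading(v)
print(s.average())                   # 22.0

# Lambda: a short anonymous function, e.g. as a sort key
sensors = [Sensor("b"), Sensor("a")]
sensors.sort(key=lambda sensor: sensor.name)
print([x.name for x in sensors])     # ['a', 'b']
```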
6. Python for data processing in Fabric
  • Basic data transformations using Python
  • Loading and saving data (JSON, CSV, Parquet)
  • Overview of Pandas, Polars, PySpark and DuckDB libraries
  • Basics of working with PySpark
  • Data transformations using Apache Spark
  • Storing data in Lakehouse and working with Delta Lake
  • Using SparkSQL
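The load → transform → save pattern that runs through this module can be sketched without any external library; in the course itself the same steps are done with Pandas or PySpark against Lakehouse tables, but the shape of the pipeline is identical (the CSV data below is made up):

```python
import csv
import io
import json

# Load: parse CSV records into dictionaries (stand-in for reading a file)
raw_csv = "city,sales\nPrague,120\nBrno,80\nPrague,40\n"
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: aggregate sales per city
totals = {}
for row in rows:
    totals[row["city"]] = totals.get(row["city"], 0) + int(row["sales"])

# Save: serialize the result (stand-in for writing Parquet/Delta)
print(json.dumps(totals, sort_keys=True))
```

With PySpark the same aggregation would be a `groupBy("city").sum("sales")` on a DataFrame, with the result written to a Lakehouse Delta table.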
7. Python specifics in the Microsoft Fabric environment
  • Working with Notebooks in Microsoft Fabric
  • Magic commands
  • Environment and library management
  • Integration with Lakehouse
  • Using notebookutils and sempy tools
  • Working with SQL endpoints
  • Data optimisation using partitioning, VACUUM and OPTIMIZE

Custom Training

Didn’t find a suitable date or need training tailored to your team’s specific needs? We’ll be happy to prepare custom training for you.