AI Red Teaming relies on the creative human expertise of highly skilled safety and security professionals to simulate attacks. The process is resource and time intensive and can create a bottleneck for many organizations to accelerate AI [...]
  • WAIA
  • Duration 1 day
  • 10 ITK points
  • 0 terms
  • Praha (7 900 Kč)

    Brno (on request)

    Bratislava (on request)

AI Red Teaming relies on the creative human expertise of highly skilled safety and security professionals to simulate attacks. The process is resource and time intensive and can create a bottleneck for many organizations to accelerate AI adoption. With the AI Red Teaming Agent, organizations can now leverage Microsoft’s deep expertise to scale and accelerate their AI development with Trustworthy AI at the forefront.

»
  • Definition and types of AI agents
  • Real-world applications and use cases
  • Discussion: The role of AI agents in modern technology
  • Understanding AI red teaming and its importance
  • Overview of Microsoft's AI Red Teaming Agent
  • Key features: automated scans, attack strategies, and reporting
  • Supported risk categories and attack techniques
  • Installing necessary tools and dependencies
  • Configuring Azure AI Foundry and the AI Red Teaming Agent
  • Running scans on a sample AI model
  • Final Project: Build an AI Agent to perform Network Exploitation tasks

Prerequisites:

  • Basic understanding of AI and machine learning concepts
  • Azure MS account

AI engineers, ML practitioners, security researchers, and technical decision-makers who want to integrate Trustworthy AI and proactive testing into their development pipeline.

Current offer
Training location
Course language

The prices are without VAT.

No term dates found, contact our client servis. Prague: +420 234 064 900-3 | Brno: +420 542 422 111