Become A Microsoft Certified Azure Data Engineer. Learn to design data models, build data warehouses, data lakes, and lakehouse architectures, create data pipelines, and work with large datasets on the Azure platform using Azure Synapse Analytics, Azure Databricks, and Azure Data Factory.
In this Azure Data Engineering Bootcamp, you will learn how to implement and manage data engineering workloads on Microsoft Azure, using Azure services such as Azure Synapse Analytics, Azure Data Lake Storage Gen2, Azure Stream Analytics, Azure Databricks, and others. This bootcamp focuses on data engineering tasks such as orchestrating data transfer and transformation pipelines, working with data files in a data lake, creating and loading relational data warehouses, capturing and aggregating streams of real-time data, and tracking data assets and lineage.
Reviewed by industry leaders, the course uses a project-based learning model to provide hands-on experience with the latest Azure Data Engineering tools. Azure Data Engineers are in high demand, which often translates into strong earning potential. According to Glassdoor, the median salary for an Azure Data Engineer in the United States is $120,000 per year. This figure varies from region to region.
Over the 12 weekends of this Azure Data Engineer Bootcamp, we will coach you step by step through building your career as a Microsoft Azure Data Engineer. We also recommend scheduling 5 hours a week to complete any tasks from class. The bootcamp is held every weekend (Saturdays and Sundays) for the duration of the program; live classes can last up to 4 hours on Saturdays and 3 hours on Sundays. Training is typically scheduled in the Eastern Standard, West African, British Summer, and Greenwich Mean time zones.
1. Dual Certification from Microsoft (Microsoft Certified Azure Data Engineer Associate) and Quantum Analytics (Verified Accredited Certificate) at no extra cost.
2. Gain relevant work experience on your CV through the Quantum Analytics Virtual Growth Internship Program.
3. Land your first job as an Azure Data Engineer within 6 months or less of completing your bootcamp.
4. Get access to remote work opportunities as an Azure Data Engineer & earn in USD.
5. CV Review & LinkedIn Optimization
6. Job search and interview preparation
7. On-the-Job Support (1 Month)
8. International work reference & recommendation letter from Quantum Analytics UK, US & Nigeria.
And more!
Microsoft Azure provides a comprehensive platform for data engineering, but what is data engineering? In this module you will learn how to:
Identify common data engineering tasks
Describe common data engineering concepts
Identify Azure services for data engineering
Data lakes are a core element of data analytics architectures. Azure Data Lake Storage Gen2 provides a scalable, secure, cloud-based solution for data lake storage. In this module you will learn how to:
Describe the key features and benefits of Azure Data Lake Storage Gen2
Enable Azure Data Lake Storage Gen2 in an Azure Storage account
Compare Azure Data Lake Storage Gen2 and Azure Blob storage
Describe where Azure Data Lake Storage Gen2 fits in the stages of analytical processing
Describe how Azure Data Lake Storage Gen2 is used in common analytical workloads
Learn about the features and capabilities of Azure Synapse Analytics - a cloud-based platform for big data processing and analysis. In this module, you'll learn how to:
Identify the business problems that Azure Synapse Analytics addresses.
Describe core capabilities of Azure Synapse Analytics.
Determine when to use Azure Synapse Analytics.
Learn how serverless SQL pools in Azure Synapse Analytics let you query files in a data lake using standard SQL. After the completion of this module, you will be able to:
Identify capabilities and use cases for serverless SQL pools in Azure Synapse Analytics
Query CSV, JSON, and Parquet files using a serverless SQL pool
Create external database objects in a serverless SQL pool
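The core idea of this module is running SQL directly over files in the lake. As a hedged sketch (sample data and names are invented for illustration), here is the shape of such a query: a serverless SQL pool uses T-SQL's OPENROWSET over lake files, which we mimic below with the stdlib by loading a CSV into an in-memory SQLite table and querying it.

```python
# Conceptual sketch: querying a data file with SQL, as a serverless SQL pool does.
# In Synapse you'd use T-SQL OPENROWSET over files in the lake, roughly:
#   SELECT * FROM OPENROWSET(
#       BULK 'https://<account>.dfs.core.windows.net/data/sales/*.csv',
#       FORMAT = 'CSV', PARSER_VERSION = '2.0', HEADER_ROW = TRUE) AS rows
# Here we mimic the idea with stdlib sqlite3 and a small in-memory CSV.
import csv
import io
import sqlite3

csv_data = "product,amount\nwidget,10\ngadget,25\nwidget,5\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, amount INTEGER)")
reader = csv.DictReader(io.StringIO(csv_data))
conn.executemany("INSERT INTO sales VALUES (:product, :amount)", list(reader))

totals = dict(
    conn.execute("SELECT product, SUM(amount) FROM sales GROUP BY product")
)
print(totals)  # {'gadget': 25, 'widget': 15}
```

In the real service no table is loaded first: the pool reads CSV, JSON, or Parquet in place, which is what makes serverless exploration of a lake cheap.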
By using a serverless SQL pool in Azure Synapse Analytics, you can use the ubiquitous SQL language to transform data in files in a data lake. After completing this module, you'll be able to:
Use a CREATE EXTERNAL TABLE AS SELECT (CETAS) statement to transform data.
Encapsulate a CETAS statement in a stored procedure.
Include a data transformation stored procedure in a pipeline.
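The CETAS pattern above is "transform with a SELECT, persist the result as a table" in a single statement. As a rough illustration (sample tables invented), SQLite's plain CREATE TABLE ... AS SELECT captures the same shape; the Synapse version additionally writes the result to files in the lake.

```python
# Conceptual sketch of the CETAS pattern. In a Synapse serverless SQL pool the
# real statement also materializes the result as files, roughly:
#   CREATE EXTERNAL TABLE clean_sales
#   WITH (LOCATION = 'curated/sales/', DATA_SOURCE = ..., FILE_FORMAT = ...)
#   AS SELECT ... FROM OPENROWSET(...) AS raw;
# sqlite's CREATE TABLE ... AS SELECT shows the same transform-and-persist idea.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_sales (product TEXT, amount INTEGER)")
conn.executemany("INSERT INTO raw_sales VALUES (?, ?)",
                 [("widget", 10), (None, 99), ("gadget", 25)])

# Transform (drop invalid rows) and persist in one statement
conn.execute("""
    CREATE TABLE clean_sales AS
    SELECT product, amount FROM raw_sales WHERE product IS NOT NULL
""")
row_count = conn.execute("SELECT COUNT(*) FROM clean_sales").fetchone()[0]
print(row_count)  # 2
```

Wrapping such a statement in a stored procedure, as the module covers, is what lets a pipeline call the transformation as a single reusable step.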
Why choose between working with files in a data lake or a relational database schema? With lake databases in Azure Synapse Analytics, you can combine the benefits of both. After completing this module, you will be able to:
Understand lake database concepts and components
Describe database templates in Azure Synapse Analytics
Create a lake database
Apache Spark is a core technology for large-scale data analytics. Learn how to use Spark in Azure Synapse Analytics to analyze and visualize data in a data lake. After completing this module, you will be able to:
Identify core features and capabilities of Apache Spark.
Configure a Spark pool in Azure Synapse Analytics.
Run code to load, analyze, and visualize data in a Spark notebook.
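The load-group-aggregate shape of a typical Spark-notebook cell can be sketched as follows (data and column names are invented for illustration; the PySpark lines in the comment are the approximate real equivalent, run on a Spark pool rather than locally):

```python
# Conceptual sketch of a common Spark-notebook aggregation. In a Synapse Spark
# pool this would be roughly (PySpark):
#   df = spark.read.load('abfss://.../orders.parquet')
#   df.groupBy('category').sum('price').show()
# The same load -> group -> aggregate shape, with the stdlib:
from collections import defaultdict

orders = [
    {"category": "books", "price": 12.0},
    {"category": "games", "price": 30.0},
    {"category": "books", "price": 8.0},
]

totals = defaultdict(float)
for row in orders:                          # "load" the rows
    totals[row["category"]] += row["price"]  # group and aggregate

print(dict(totals))  # {'books': 20.0, 'games': 30.0}
```

The difference at scale is that Spark distributes both the rows and the aggregation across a cluster, which is the point of running this in a Spark pool.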
Data engineers commonly need to transform large volumes of data. Apache Spark pools in Azure Synapse Analytics provide a distributed processing platform that they can use to accomplish this goal. In this module, you will learn how to:
Use Apache Spark to modify and save dataframes.
Partition data files for improved performance and scalability.
Transform data with SQL.
</gr-replace>
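Partitioning data files, mentioned above, usually means the Hive-style folder layout that Spark's `df.write.partitionBy(...)` produces. A minimal sketch of that layout with the stdlib (sample rows and folder names invented):

```python
# Conceptual sketch of Hive-style partitioning: split output files into
# key=value folders so later queries can skip folders they don't need.
# Spark produces this layout with df.write.partitionBy('year').parquet(path).
import csv
import tempfile
from pathlib import Path

rows = [
    {"year": "2023", "product": "widget", "amount": "10"},
    {"year": "2024", "product": "gadget", "amount": "25"},
    {"year": "2024", "product": "widget", "amount": "5"},
]

out = Path(tempfile.mkdtemp())
for year in {r["year"] for r in rows}:
    part_dir = out / f"year={year}"        # partition folder, e.g. year=2024/
    part_dir.mkdir()
    with open(part_dir / "part-0000.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["product", "amount"])
        writer.writeheader()
        writer.writerows(
            {"product": r["product"], "amount": r["amount"]}
            for r in rows if r["year"] == year
        )

partitions = sorted(p.name for p in out.iterdir())
print(partitions)  # ['year=2023', 'year=2024']
```

A query filtered to `year = 2024` then only has to read one folder, which is where the performance and scalability benefit comes from.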
Delta Lake is an open-source relational storage layer for Spark that you can use to implement a data lakehouse architecture in Azure Synapse Analytics. In this module, you'll learn how to:
Describe core features and capabilities of Delta Lake.
Create and use Delta Lake tables in a Synapse Analytics Spark pool.
Create Spark catalog tables for Delta Lake data.
Use Delta Lake tables for streaming data.
Query Delta Lake tables from a Synapse Analytics SQL pool.
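An illustration of what makes the features above possible: Delta Lake keeps an append-only JSON transaction log (`_delta_log/`) recording which data files make up each table version, which is what enables ACID updates and time travel. This toy version of that bookkeeping is a sketch of the idea only, not the real log format:

```python
# Toy model of a Delta-style transaction log: each committed transaction is a
# JSON entry of "add"/"remove" file actions; replaying the log up to version N
# reconstructs the table as of that version ("time travel").
import json

log = []  # each entry is one committed transaction (one table version)

def commit(actions):
    log.append(json.dumps(actions))

commit({"add": ["part-0000.parquet"]})                    # version 0
commit({"add": ["part-0001.parquet"]})                    # version 1
commit({"add": ["part-0002.parquet"],
        "remove": ["part-0000.parquet"]})                 # version 2 (an update)

def files_at_version(version):
    """Replay the log to see which files the table contained at a version."""
    files = set()
    for entry in log[: version + 1]:
        actions = json.loads(entry)
        files |= set(actions.get("add", []))
        files -= set(actions.get("remove", []))
    return sorted(files)

print(files_at_version(1))  # ['part-0000.parquet', 'part-0001.parquet']
print(files_at_version(2))  # ['part-0001.parquet', 'part-0002.parquet']
```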
Relational data warehouses are a core element of most enterprise Business Intelligence (BI) solutions, and are used as the basis for data models, reports, and analysis. In this module, you'll learn how to:
Design a schema for a relational data warehouse.
Create fact, dimension, and staging tables.
Use SQL to load data into data warehouse tables.
Use SQL to query relational data warehouse tables.
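The schema design the module covers is typically a star schema: a central fact table of measures joined to dimension tables via surrogate keys. A minimal sketch (tables and values invented; a dedicated SQL pool's DDL adds distribution and indexing options, but the modeling idea is the same):

```python
# Conceptual star-schema sketch using stdlib sqlite3: fact_sales holds
# measures and foreign keys; dim_product and dim_date describe them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE fact_sales  (product_key INTEGER, date_key INTEGER, amount REAL);

    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO dim_date VALUES (20240101, 2024);
    INSERT INTO fact_sales VALUES (1, 20240101, 10.0), (2, 20240101, 25.0),
                                  (1, 20240101, 5.0);
""")

# A typical analytical query: aggregate the measures, slice by dimensions
result = conn.execute("""
    SELECT p.name, d.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    JOIN dim_date d    ON d.date_key = f.date_key
    GROUP BY p.name, d.year
    ORDER BY p.name
""").fetchall()
print(result)  # [('gadget', 2024, 25.0), ('widget', 2024, 15.0)]
```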
A core responsibility for a data engineer is to implement a data ingestion solution that loads new data into a relational data warehouse. In this module, you'll learn how to:
Load staging tables in a data warehouse
Load dimension tables in a data warehouse
Load time dimensions in a data warehouse
Load slowly changing dimensions in a data warehouse
Load fact tables in a data warehouse
Perform post-load optimizations in a data warehouse
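Of the loads listed above, slowly changing dimensions are the least obvious. A Type 2 load, sketched below with invented sample data, never overwrites a changed attribute: it closes out the current row and inserts a new version, so historical facts keep joining to the values that were true at the time.

```python
# Conceptual sketch of a Type 2 slowly changing dimension load (stdlib sqlite3).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_customer (
        customer_id TEXT, city TEXT,
        valid_from TEXT, valid_to TEXT,  -- NULL valid_to marks the current row
        is_current INTEGER
    )
""")
conn.execute("INSERT INTO dim_customer VALUES ('C1', 'Lagos', '2023-01-01', NULL, 1)")

def apply_change(customer_id, new_city, change_date):
    # Close out the current version of the row...
    conn.execute(
        "UPDATE dim_customer SET valid_to = ?, is_current = 0 "
        "WHERE customer_id = ? AND is_current = 1",
        (change_date, customer_id),
    )
    # ...and insert the new version as current.
    conn.execute(
        "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
        (customer_id, new_city, change_date),
    )

apply_change("C1", "Abuja", "2024-06-01")
history = conn.execute(
    "SELECT city, valid_from, valid_to FROM dim_customer ORDER BY valid_from"
).fetchall()
print(history)
# [('Lagos', '2023-01-01', '2024-06-01'), ('Abuja', '2024-06-01', None)]
```

In a Synapse dedicated SQL pool the same close-and-insert logic is usually expressed as set-based T-SQL (or a MERGE-style pattern) over a staging table.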
Pipelines are the lifeblood of a data analytics solution. Learn how to use Azure Synapse Analytics pipelines to build integrated data solutions that extract, transform, and load data across diverse systems. In this module, you will learn how to:
Describe core concepts for Azure Synapse Analytics pipelines.
Create a pipeline in Azure Synapse Studio.
Implement a data flow activity in a pipeline.
Initiate and monitor pipeline runs.
Apache Spark provides data engineers with a scalable, distributed data processing platform, which can be integrated into an Azure Synapse Analytics pipeline. In this module, you will learn how to:
Describe notebook and pipeline integration.
Use a Synapse notebook activity in a pipeline.
Use parameters with a notebook activity.
Learn how hybrid transactional/analytical processing (HTAP) can help you perform operational analytics with Azure Synapse Analytics. After completing this module, you'll be able to:
Describe hybrid transactional/analytical processing patterns.
Identify Azure Synapse Link services for HTAP.
Azure Synapse Link for SQL enables low-latency synchronization of operational data in a relational database to Azure Synapse Analytics. In this module, you'll learn how to:
Understand key concepts and capabilities of Azure Synapse Link for SQL.
Configure Azure Synapse Link for Azure SQL Database.
Configure Azure Synapse Link for Microsoft SQL Server.
Azure Stream Analytics enables you to process real-time data streams and integrate the data they contain into applications and analytical solutions. In this module, you'll learn how to:
Understand data streams.
Understand event processing.
Understand window functions.
Get started with Azure Stream Analytics.
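Of the concepts above, window functions are the key mechanic: they carve an unbounded stream into bounded groups that can be aggregated. A sketch of the simplest kind, a tumbling window (fixed, non-overlapping time buckets), using invented sample events; the comment shows the approximate real Stream Analytics query:

```python
# Conceptual sketch of a tumbling window. A Stream Analytics job would express
# this in its SQL-like query language, roughly:
#   SELECT System.Timestamp() AS WindowEnd, COUNT(*)
#   FROM input TIMESTAMP BY EventTime
#   GROUP BY TumblingWindow(second, 10)
from collections import Counter

# (event_time_in_seconds, sensor_reading) pairs arriving on a stream
events = [(1, 20), (4, 21), (11, 19), (13, 22), (25, 20)]

WINDOW = 10  # window length in seconds
counts = Counter(t // WINDOW for t, _ in events)  # bucket = window index

print(dict(counts))  # {0: 2, 1: 2, 2: 1}
```

Hopping and sliding windows, also covered in the module, differ only in that their buckets overlap.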
Azure Stream Analytics provides a real-time data processing engine that you can use to ingest streaming event data into Azure Synapse Analytics for further analysis and reporting. After completing this module, you'll be able to:
Describe common stream ingestion scenarios for Azure Synapse Analytics.
Configure inputs and outputs for an Azure Stream Analytics job.
Define a query to ingest real-time data into Azure Synapse Analytics.
Run a job to ingest real-time data, and consume that data in Azure Synapse Analytics.
By combining the stream processing capabilities of Azure Stream Analytics and the data visualization capabilities of Microsoft Power BI, you can create real-time data dashboards. In this module, you'll learn how to:
Configure a Stream Analytics output for Power BI.
Use a Stream Analytics query to write data to Power BI.
Create a real-time data visualization in Power BI.
In this module, you'll evaluate whether Microsoft Purview is the right choice for your data discovery and governance needs. By the end of this module, you'll be able to:
Evaluate whether Microsoft Purview is appropriate for your data discovery and governance needs.
Describe how the features of Microsoft Purview work to provide data discovery and governance.
Learn how to integrate Microsoft Purview with Azure Synapse Analytics to improve data discoverability and lineage tracking. After completing this module, you'll be able to:
Catalog Azure Synapse Analytics database assets in Microsoft Purview.
Configure Microsoft Purview integration in Azure Synapse Analytics.
Search the Microsoft Purview catalog from Synapse Studio.
Track data lineage in Azure Synapse Analytics pipelines activities.
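As an illustration of what lineage tracking answers: lineage is a graph recording which assets each asset was derived from, and Purview builds it automatically from pipeline runs. This toy upstream traversal (asset names invented) shows the question a lineage catalog answers, namely "what does this report ultimately depend on?":

```python
# Toy lineage graph: each asset maps to the assets it was derived from.
upstream = {
    "report_sales": ["dw_fact_sales"],
    "dw_fact_sales": ["lake_raw_sales", "dim_product"],
    "lake_raw_sales": [],
    "dim_product": [],
}

def lineage(asset):
    """All sources an asset ultimately depends on (transitive upstream set)."""
    sources = set()
    for parent in upstream.get(asset, []):
        sources.add(parent)
        sources |= lineage(parent)
    return sources

print(sorted(lineage("report_sales")))
# ['dim_product', 'dw_fact_sales', 'lake_raw_sales']
```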
Azure Databricks is a cloud service that provides a scalable platform for data analytics using Apache Spark. In this module, you'll learn how to:
Provision an Azure Databricks workspace.
Identify core workloads and personas for Azure Databricks.
Describe key concepts of an Azure Databricks solution.
Azure Databricks is built on Apache Spark and enables data engineers and analysts to run Spark jobs to transform, analyze, and visualize data at scale. In this module, you'll learn how to:
Describe key elements of the Apache Spark architecture.
Create and configure a Spark cluster.
Describe use cases for Spark.
Use Spark to process and analyze data stored in files.
Use Spark to visualize data.
Using pipelines in Azure Data Factory to run notebooks in Azure Databricks enables you to automate data engineering processes at cloud scale. In this module, you'll learn how to:
Describe how Azure Databricks notebooks can be run in a pipeline.
Create an Azure Data Factory linked service for Azure Databricks.
Use a Notebook activity in a pipeline.
Pass parameters to a notebook.
Microsoft Azure Bootcamp Reviews From Our Students
The Azure Cloud Engineering program at Quantum Analytics was exactly what I needed to advance my career. The real-world projects were challenging and highly relevant, giving me the confidence to apply my skills in my new role.
I came to Quantum Analytics with basic cloud knowledge, but I left with the expertise needed to engineer robust solutions on Azure. The course content was up-to-date and aligned with industry standards.
The Microsoft Azure Cloud Engineering course at Quantum Analytics was a game-changer for me. The in-depth tutorials and hands-on labs gave me the skills I needed to become a certified Azure engineer.
The Azure training at Quantum Analytics was comprehensive and engaging. I particularly appreciated the focus on best practices and security, which are crucial in the cloud engineering field.