Data Engineering with Databricks

This program aligns to the Databricks Certified Data Engineer Associate track and prepares teams to perform introductory data engineering tasks on the Databricks Data Intelligence Platform. It covers ETL development with Spark SQL or PySpark, data ingestion and transformation, workload orchestration, and foundational governance practices needed for production-ready pipelines.

Role-Based Certification Prep · Track: Databricks Certified Data Engineer Associate · Official Source: Databricks

Certification: Databricks Certified Data Engineer Associate
Delivery: Virtual, On-site, or Hybrid
Duration: 4 days
Product: Databricks Data Intelligence Platform
Role: Data Engineer

Lab-Based Delivery · Customizable for Teams · Officially Aligned: Databricks · High Demand

Best Fit

Data Engineer · Data Engineering · Certification Readiness · Tailored Team Delivery

Audience Profile

Who This Program Is For

Built for practitioners who need to use Databricks to complete introductory data engineering tasks, including ETL development, workload orchestration, and foundational governance.

Overview

Program Summary

A Databricks certification-aligned program that builds foundational data engineering skills across Delta Lake ingestion, Databricks Workflows, Delta Live Tables, and Unity Catalog governance.

Course Outline

Complete Module Sequence

Review the full module sequence for this program, including the primary topics covered in each module.

Module 1: Lakehouse ingestion and transformation foundations

Establish the foundational patterns for loading, transforming, and managing data in the Databricks lakehouse using Delta Lake and medallion-style engineering practices.

  • Data Ingestion with Delta Lake
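
For orientation, here is a minimal PySpark sketch of the bronze-to-silver pattern this module works through, assuming a Databricks notebook where the spark session is predefined; the landing path, checkpoint and schema locations, and table names are illustrative placeholders, not course materials.

    from pyspark.sql import functions as F

    landing_path = "/Volumes/main/raw/orders"  # hypothetical landing location

    # Bronze: ingest raw files incrementally with Auto Loader, keeping data as-is.
    bronze = (spark.readStream
              .format("cloudFiles")
              .option("cloudFiles.format", "json")
              .option("cloudFiles.schemaLocation", "/tmp/schemas/orders")
              .load(landing_path)
              .withColumn("_ingested_at", F.current_timestamp()))

    (bronze.writeStream
           .option("checkpointLocation", "/tmp/checkpoints/orders_bronze")
           .trigger(availableNow=True)  # process available files, then stop
           .toTable("main.bronze.orders"))

    # Silver: deduplicate and filter bronze data into an analysis-ready Delta table.
    (spark.read.table("main.bronze.orders")
          .dropDuplicates(["order_id"])
          .filter(F.col("order_id").isNotNull())
          .write.mode("overwrite")
          .saveAsTable("main.silver.orders"))
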
Module 2: Operationalize pipelines and scheduled workloads

Build reliable production workflows by orchestrating jobs, dependencies, and recurring data tasks with Databricks-native workflow tooling.

  • Deploy Workloads with Databricks Workflows
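
As a preview of the orchestration skills practiced here, this is a minimal sketch of creating a scheduled two-task job through the Databricks Jobs API 2.1; the workspace URL, access token, notebook paths, and cluster id are placeholders you would supply yourself.

    import requests

    host = "https://<workspace>.cloud.databricks.com"  # placeholder workspace URL
    headers = {"Authorization": "Bearer <personal-access-token>"}

    job_spec = {
        "name": "orders_daily_refresh",
        "tasks": [
            {
                "task_key": "ingest_bronze",
                "notebook_task": {"notebook_path": "/Pipelines/ingest_bronze"},
                "existing_cluster_id": "<cluster-id>",
            },
            {
                "task_key": "build_silver",
                "depends_on": [{"task_key": "ingest_bronze"}],  # runs after ingest
                "notebook_task": {"notebook_path": "/Pipelines/build_silver"},
                "existing_cluster_id": "<cluster-id>",
            },
        ],
        # Quartz cron: run every day at 06:00 in the given timezone.
        "schedule": {"quartz_cron_expression": "0 0 6 * * ?",
                     "timezone_id": "UTC"},
    }

    resp = requests.post(f"{host}/api/2.1/jobs/create", headers=headers, json=job_spec)
    resp.raise_for_status()
    print("Created job:", resp.json()["job_id"])
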
Module 3: Engineer declarative pipelines at scale

Use Delta Live Tables to define, manage, and monitor resilient data pipelines for transformation, quality, and lifecycle management.

  • Build Data Pipelines with Delta Live Tables
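
A minimal sketch of the declarative style this module teaches, where tables are defined as functions with quality expectations attached; the table names, landing path, and expectation rule are illustrative, and the code runs only inside a Delta Live Tables pipeline, which supplies the dlt module and spark session.

    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw orders ingested incrementally from cloud storage.")
    def orders_raw():
        return (spark.readStream
                .format("cloudFiles")
                .option("cloudFiles.format", "json")
                .load("/Volumes/main/raw/orders"))  # hypothetical landing path

    @dlt.table(comment="Cleaned orders that satisfy the declared quality rules.")
    @dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # quality gate
    def orders_clean():
        return (dlt.read_stream("orders_raw")
                .withColumn("processed_at", F.current_timestamp()))
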
Module 4: Govern data access and platform controls

Apply core governance capabilities with Unity Catalog for access management, lineage awareness, and trusted data operations.

  • Data Management and Governance with Unity Catalog
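
A minimal sketch of the access-control pattern applied in this module: create securables, grant least-privilege access along the catalog-schema-table path, then verify; the catalog, schema, table, and group names are illustrative assumptions.

    # Run from a notebook with sufficient Unity Catalog privileges.
    spark.sql("CREATE CATALOG IF NOT EXISTS main")
    spark.sql("CREATE SCHEMA IF NOT EXISTS main.silver")

    # Grant least-privilege access to a workspace group on one table.
    spark.sql("GRANT USE CATALOG ON CATALOG main TO `data-analysts`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA main.silver TO `data-analysts`")
    spark.sql("GRANT SELECT ON TABLE main.silver.orders TO `data-analysts`")

    # Review effective grants to confirm the access path.
    spark.sql("SHOW GRANTS ON TABLE main.silver.orders").show(truncate=False)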

Coverage Areas

Topic Coverage

  • Data Ingestion with Delta Lake
  • Deploy Workloads with Databricks Workflows
  • Build Data Pipelines with Delta Live Tables
  • Data Management and Governance with Unity Catalog

Customization

Adapt This Program for Your Team

We can adapt this program to fit your team structure, platform priorities, delivery goals, and the scenarios your people need to work through in practice.

  • Use your source-system and medallion architecture scenario in the workshop
  • Add streaming ingestion or CDC emphasis for platform teams
  • Extend with governance and operating-model decisions for Unity Catalog rollout