Data Streaming and Real Time Data Processing Training Course
Course Overview
This course offers a practical and structured entry point into building real-time data streaming systems. It explores core concepts, architectural patterns, and the industry-standard tools utilized to process continuous data at scale. Participants will gain the skills to design, implement, and optimize streaming pipelines using modern frameworks. The curriculum advances from foundational theory to hands-on application, empowering learners to confidently construct production-ready real-time solutions.
Training Format
• Instructor-led sessions with guided explanations
• Concept walkthroughs supported by real-world examples
• Hands-on demonstrations and coding exercises
• Progressive labs aligned with daily topics
• Interactive discussions and Q&A sessions
Course Objectives
• Grasp the concepts and system architecture of real-time data streaming
• Distinguish between batch and streaming data processing models
• Design scalable and fault-tolerant streaming pipelines
• Utilize distributed streaming tools and frameworks
• Apply event time processing, windowing, and stateful operations
• Build and optimize real-time data solutions tailored to business use cases
This course is available as onsite live training in Italy or online live training.
Course Outline: Day 1
• Introduction to data streaming concepts
• Fundamentals of batch vs. real-time processing
• Basics of event-driven architecture
• Common industry use cases
• Overview of the streaming ecosystem
Day 2
• Streaming architecture design patterns
• Fundamentals of distributed messaging systems
• Producers and consumers
• Topics, partitions, and data flow
• Data ingestion strategies
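The producer/consumer and partitioning bullets above can be illustrated with a minimal in-memory sketch. This is a toy model of a partitioned topic, not a real broker such as Kafka; class and method names are illustrative only:

```python
class Topic:
    """Toy topic: an append-only log split into partitions by key hash."""

    def __init__(self, name, num_partitions=3):
        self.name = name
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        # Records with the same key always land in the same partition,
        # which is what preserves per-key ordering in real systems.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append((key, value))
        return p

    def consume(self, partition, offset=0):
        # A consumer reads one partition sequentially from an offset.
        return self.partitions[partition][offset:]

topic = Topic("orders")
p = topic.produce("customer-42", {"amount": 19.99})
topic.produce("customer-42", {"amount": 5.00})
records = topic.consume(p)  # both records, in production order
```

Real brokers add replication, durable storage, and consumer offset tracking, but the core data flow (keyed records appended to partitions, read sequentially by consumers) is the same.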
Day 3
• Stream processing concepts and frameworks
• Event time vs. processing time
• Windowing techniques and use cases
• Stateful stream processing
• Basics of fault tolerance and checkpointing
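The event-time and windowing topics above can be sketched with a toy tumbling-window aggregator. Frameworks such as Flink or Spark Structured Streaming layer watermarks and state backends on top of this same idea; the function below is an illustrative simplification:

```python
from collections import defaultdict

WINDOW_SIZE = 60  # seconds

def tumbling_window_counts(events):
    """Count events per 60-second window, bucketed by event time.

    `events` is an iterable of (event_time_seconds, payload) pairs. They may
    arrive out of order, but because each event is assigned to a window by
    its event time (not its arrival time), the buckets stay correct.
    """
    counts = defaultdict(int)
    for event_time, _payload in events:
        window_start = (event_time // WINDOW_SIZE) * WINDOW_SIZE
        counts[window_start] += 1
    return dict(counts)

# The 30s event arrives after the 70s event, i.e. out of order.
stream = [(5, "a"), (70, "b"), (30, "c"), (119, "d")]
result = tumbling_window_counts(stream)
# Window [0, 60) holds two events; window [60, 120) holds two.
```

Processing-time windowing would instead bucket by arrival order, splitting the late 30s event into the wrong window, which is exactly the distinction the course draws between event time and processing time.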
Day 4
• Data transformation in streaming pipelines
• ETL and ELT in real-time systems
• Schema management and evolution
• Stream joins and enrichment
• Introduction to cloud-based streaming services
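The stream joins and enrichment bullet above can be sketched as a stream-table join: each event in the stream is enriched with reference data looked up by key. In production the reference table is often a compacted topic or an external lookup store; here it is a plain dict, and all names are illustrative:

```python
# Reference table used for enrichment (a toy stand-in for a lookup store).
users = {"u1": {"country": "IT"}, "u2": {"country": "DE"}}

def enrich(events, table):
    """Stream-table join: attach reference data to each event by key."""
    for event in events:
        ref = table.get(event["user_id"], {})
        yield {**event, **ref}

clicks = [
    {"user_id": "u1", "page": "/home"},
    {"user_id": "u2", "page": "/cart"},
    {"user_id": "u3", "page": "/help"},  # unknown key passes through unenriched
]
enriched = list(enrich(clicks, users))
```

A design choice worth noting: events with unknown keys pass through unchanged here, whereas an inner-join semantics would drop them; streaming frameworks typically let you pick between the two.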
Day 5
• Monitoring and observability in streaming systems
• Basics of security and access control
• Performance tuning and optimization
• End-to-end pipeline design review
• Real-world use cases, such as fraud detection and IoT processing
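The fraud detection use case mentioned above often reduces to a sliding-window velocity check: flag a card that transacts too many times within a short window. A minimal sketch, with the window size and threshold chosen purely for illustration:

```python
from collections import deque

WINDOW = 60     # seconds
THRESHOLD = 3   # max transactions per window before flagging

def fraud_flags(transactions):
    """Flag a card exceeding THRESHOLD transactions in a sliding window.

    `transactions` is an iterable of (timestamp, card_id) pairs, sorted by
    timestamp. Per-card state (recent timestamps) is the kind of keyed state
    a streaming framework would checkpoint for fault tolerance.
    """
    recent = {}   # card_id -> deque of timestamps inside the window
    flagged = []
    for ts, card in transactions:
        q = recent.setdefault(card, deque())
        q.append(ts)
        while q and q[0] <= ts - WINDOW:
            q.popleft()  # evict timestamps that fell out of the window
        if len(q) > THRESHOLD:
            flagged.append((ts, card))
    return flagged

txns = [(0, "c1"), (10, "c1"), (20, "c1"), (25, "c1"), (30, "c2"), (100, "c1")]
alerts = fraud_flags(txns)  # c1 is flagged at t=25; by t=100 its window is clear
```

The per-key state here is also a concrete example of why checkpointing matters: losing `recent` on a crash would silently reset every card's transaction count.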
Open Training Courses require 5+ participants.
Testimonials (1)
Hands on exercises. Class should have been 5 days, but the 3 days helped to clear up a lot of questions that I had from working with NiFi already
James - BHG Financial
Course - Apache NiFi for Administrators
Upcoming Courses
Related Courses
Administrator Training for Apache Hadoop
35 Hours
Audience:
This course is designed for IT professionals seeking solutions to store and process large datasets within a distributed system environment.
Goal:
To develop in-depth knowledge of Hadoop cluster administration.
Big Data Analytics with Google Colab and Apache Spark
14 Hours
This instructor-led live training in Italy (online or onsite) is intended for intermediate-level data scientists and engineers seeking to utilize Google Colab and Apache Spark for big data processing and analytics.
Upon completing this training, participants will be capable of:
- Establishing a big data environment using Google Colab and Spark.
- Efficiently processing and analyzing large-scale datasets with Apache Spark.
- Visualizing big data within a collaborative framework.
- Integrating Apache Spark with cloud-based tools.
Big Data Analytics in Health
21 Hours
Big data analytics is the process of examining vast volumes of diverse datasets to uncover correlations, hidden patterns, and other valuable insights.
The healthcare sector generates massive amounts of complex, heterogeneous medical and clinical data. Applying big data analytics to this information holds immense potential for deriving insights that improve healthcare delivery. However, the sheer scale of these datasets presents significant challenges for analysis and practical implementation in clinical environments.
In this instructor-led, live remote training, participants will learn how to perform big data analytics in healthcare by progressing through a series of hands-on live-lab exercises.
By the end of this training, participants will be able to:
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to handle medical data
- Study big data systems and algorithms within the context of health applications
Audience
- Developers
- Data Scientists
Format of the Course
- Part lecture, part discussion, exercises, and extensive hands-on practice.
Note
- To request customized training for this course, please contact us to make arrangements.
Hadoop For Administrators
21 Hours
Apache Hadoop stands as the leading framework for processing Big Data across server clusters. This intensive three-day course (with an optional fourth day) equips attendees with a deep understanding of the business value and practical use cases of Hadoop and its surrounding ecosystem. Participants will learn how to plan for cluster deployment and scalability, as well as master the installation, maintenance, monitoring, troubleshooting, and optimization of Hadoop systems. The curriculum includes hands-on practice with bulk data loading, exploration of various Hadoop distributions, and the management of ecosystem tools. The course concludes with an in-depth discussion on securing clusters using Kerberos.
“…The materials were exceptionally well-prepared and thoroughly covered. The lab sessions were very helpful and meticulously organized.”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Professionals serving as Hadoop administrators.
Format
A blend of lectures and hands-on laboratories, maintaining an approximate split of 60% theoretical instruction and 40% practical labs.
Hadoop for Developers (4 days)
28 Hours
Apache Hadoop stands out as the leading framework for processing Big Data across server clusters. This course serves as an introduction for developers to the key components of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, and HBase.
Advanced Hadoop for Developers
21 Hours
Apache Hadoop stands out as one of the most widely adopted frameworks for managing Big Data across server clusters. This course explores data management within HDFS, alongside advanced applications of Pig, Hive, and HBase. The sophisticated programming strategies covered here are particularly advantageous for seasoned Hadoop developers.
Audience: developers
Duration: three days
Format: lectures (50%) and hands-on labs (50%).
Hadoop Administration on MapR
28 Hours
Target Audience
This course aims to demystify big data and Hadoop technologies, demonstrating that they are accessible and straightforward to grasp.
Hadoop and Spark for Administrators
35 Hours
This instructor-led, live training in Italy (online or onsite) is designed for system administrators who wish to learn how to set up, deploy, and manage Hadoop clusters within their organization.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four key components of the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Utilize the Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Configure HDFS to serve as the storage engine for on-premise Spark deployments.
- Configure Spark to access alternative storage solutions, such as Amazon S3, and NoSQL database systems like Redis, Elasticsearch, Couchbase, Aerospike, and others.
- Perform administrative tasks, including provisioning, management, monitoring, and securing an Apache Hadoop cluster.
HBase for Developers
21 Hours
This course provides an introduction to HBase, a NoSQL database built on top of Hadoop. It is designed for developers who intend to build applications using HBase, as well as administrators responsible for managing HBase clusters.
Participants will be guided through HBase architecture, data modeling, and application development. The curriculum also covers the integration of MapReduce with HBase and addresses key administration topics, with a focus on performance optimization. The course is highly practical, featuring numerous lab exercises.
Duration: 3 days
Audience: Developers and Administrators
Apache NiFi for Administrators
21 Hours
Apache NiFi is an open-source platform designed for flow-based data integration and event processing. It facilitates the automated, real-time routing, transformation, and mediation of data between diverse systems, supported by a web-based interface that offers granular control.
This instructor-led live training, available both onsite and remotely, is tailored for intermediate-level administrators and engineers aiming to deploy, manage, secure, and optimize NiFi dataflows within production environments.
Upon completion of this training, participants will be equipped to:
- Install, configure, and maintain Apache NiFi clusters.
- Design and manage dataflows connecting various sources and sinks.
- Implement logic for flow automation, routing, and transformation.
- Optimize performance, monitor operations, and troubleshoot issues.
Format of the Course
- Interactive lectures featuring discussions on real-world architectures.
- Hands-on labs focused on building, deploying, and managing flows.
- Scenario-based exercises conducted in a live-lab environment.
Course Customization Options
- For information on arranging customized training for this course, please contact us.
Apache NiFi for Developers
7 Hours
In this instructor-led, live training in Italy, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
PySpark and Machine Learning
21 Hours
This course offers a hands-on introduction to creating scalable data processing and Machine Learning workflows with PySpark. Attendees will discover how Apache Spark functions within contemporary Big Data ecosystems and learn to process extensive datasets efficiently by leveraging distributed computing principles.
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led, live training in Italy, participants will learn how to combine Python and Spark to analyze big data while engaging in hands-on exercises.
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real-world cases.
- Use different tools and techniques for big data analysis using PySpark.
Python, Spark, and Hadoop for Big Data
21 Hours
This instructor-led, live training in Italy (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets.
By the end of this training, participants will be able to:
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
- Build collaborative filtering recommendation systems similar to Netflix, YouTube, Amazon, Spotify, and Google.
- Use Apache Mahout to scale machine learning algorithms.
Stratio: Rocket and Intelligence Modules with PySpark
14 Hours
Stratio serves as a unified, data-centric platform that seamlessly combines big data capabilities, artificial intelligence, and governance. Its Rocket and Intelligence modules empower organizations to rapidly explore, transform, and analyze data with advanced analytics tailored for enterprise needs.
This instructor-led live training, available both online and onsite, is designed for intermediate-level data professionals aiming to master the Rocket and Intelligence modules within Stratio using PySpark. The curriculum emphasizes looping structures, user-defined functions (UDFs), and sophisticated data logic.
Upon completion, participants will gain the ability to:
- Effectively navigate and utilize the Rocket and Intelligence modules within the Stratio platform.
- Implement PySpark for efficient data ingestion, transformation, and analytical processes.
- Control data workflows and execute feature engineering tasks using loops and conditional logic.
- Develop and manage user-defined functions (UDFs) to facilitate reusable data operations in PySpark.
Training Format
- Engaging interactive lectures and discussions.
- Extensive exercises and practical practice sessions.
- Hands-on implementation within a live laboratory environment.
Customization Options
- For tailored training requests, please contact us to make arrangements.