Stream processing Training Courses

Local, instructor-led live Stream Processing training courses demonstrate, through interactive discussion and hands-on practice, the fundamentals and advanced topics of Stream Processing. Stream Processing training is available as "onsite live training" or "remote live training". Onsite live training can be carried out locally at the customer's premises in Italy or at NobleProg corporate training centers in Italy. Remote live training is carried out by way of an interactive remote desktop. NobleProg, your local training provider.

Stream processing Course Outlines

Course Name
Duration
Overview
14 hours
Overview
Apache Samza is an open-source near-realtime, asynchronous computational framework for stream processing. It uses Apache Kafka for messaging, and Apache Hadoop YARN for fault tolerance, processor isolation, security, and resource management.
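
As a rough illustration of the framework described above (not part of the official course materials), the following is a minimal sketch of Samza's low-level StreamTask API; the class and topic names are hypothetical, and the job configuration that binds the task to Kafka streams (normally a separate properties file) is omitted:

```java
// Minimal Samza StreamTask sketch: forwards each incoming Kafka message unchanged
// to an output topic. Class and topic names are hypothetical.
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.system.OutgoingMessageEnvelope;
import org.apache.samza.system.SystemStream;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskCoordinator;

public class PassThroughTask implements StreamTask {
    // Output stream: the "kafka" system with a hypothetical "processed-events" topic.
    private static final SystemStream OUTPUT = new SystemStream("kafka", "processed-events");

    @Override
    public void process(IncomingMessageEnvelope envelope,
                        MessageCollector collector,
                        TaskCoordinator coordinator) {
        // Samza hands the task one message at a time; forward it to the output topic.
        collector.send(new OutgoingMessageEnvelope(OUTPUT, envelope.getMessage()));
    }
}
```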

This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.

By the end of this training, participants will be able to:

- Use Samza to simplify the code needed to produce and consume messages.
- Decouple the handling of messages from an application.
- Use Samza to implement near-realtime asynchronous computation.
- Use stream processing to provide a higher level of abstraction over messaging systems.

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
Overview
Tigon is an open-source, real-time, low-latency, high-throughput, native YARN stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users.

This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.

By the end of this training, participants will be able to:

- Create powerful, stream processing applications for handling large volumes of data
- Process stream sources such as Twitter and webserver logs
- Use Tigon for rapid joining, filtering, and aggregating of streams

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
7 hours
Overview
In this instructor-led, live training, participants will learn the core concepts behind MapR Stream Architecture as they develop a real-time streaming application.

By the end of this training, participants will be able to build producer and consumer applications for real-time stream data processing.

Audience

- Developers
- Administrators

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- To request a customized training for this course, please contact us to arrange.
7 hours
Overview
Kafka Streams is a client-side library for building applications and microservices whose data is passed to and from a Kafka messaging system. Traditionally, Apache Kafka has relied on Apache Spark or Apache Storm to process data between message producers and consumers. By calling the Kafka Streams API from within an application, data can be processed directly within Kafka, removing the need to send the data to a separate cluster for processing.
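
As a hedged, minimal sketch of this idea (not taken from the course), the snippet below builds a Kafka Streams topology inside an ordinary Java application; the topic names, application id, and broker address are placeholders:

```java
// Minimal Kafka Streams sketch: reads an input topic, upper-cases each value,
// and writes to an output topic, all inside the application process.
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Transform records topic-to-topic without a separate processing cluster.
        KStream<String, String> source = builder.stream("input-topic");
        source.mapValues(value -> value.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```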

In this instructor-led, live training, participants will learn how to integrate Kafka Streams into a set of sample Java applications that pass data to and from Apache Kafka for stream processing.

By the end of this training, participants will be able to:

- Understand Kafka Streams features and advantages over other stream processing frameworks
- Process stream data directly within a Kafka cluster
- Write a Java or Scala application or microservice that integrates with Kafka and Kafka Streams
- Write concise code that transforms input Kafka topics into output Kafka topics
- Build, package and deploy the application

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice

Notes

- To request a customized training for this course, please contact us to arrange
21 hours
Overview
In this instructor-led, live training in Italia (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.

By the end of this training, participants will be able to:

- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streams.
- Understand and select the most appropriate framework for the job.
- Process data continuously, concurrently, and in a record-by-record fashion.
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices.
14 hours
Overview
This instructor-led, live training (online or onsite) is aimed at engineers who wish to use Confluent (a distribution of Kafka) to build and manage a real-time data processing platform for their applications.

By the end of this training, participants will be able to:

- Install and configure Confluent Platform.
- Use Confluent's management tools and services to run Kafka more easily.
- Store and process incoming stream data.
- Optimize and manage Kafka clusters.
- Secure data streams.

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- This course is based on the open source version of Confluent: Confluent Open Source.
- To request a customized training for this course, please contact us to arrange.
7 hours
Overview
This instructor-led, live training in Italia (online or onsite) is aimed at data engineers, data scientists, and programmers who wish to use Apache Kafka features in data streaming with Python.

By the end of this training, participants will be able to use Apache Kafka to monitor and manage conditions in continuous data streams using Python programming.
28 hours
Overview
This instructor-led, live training in Italia introduces the principles and approaches behind distributed stream and batch data processing, and walks participants through the creation of a real-time, data streaming application in Apache Flink.
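
As a hedged, minimal sketch of the kind of streaming application built in this course (the socket source, port, and job name are assumptions for illustration only):

```java
// Minimal Flink DataStream sketch: reads text lines from a local socket
// and maintains a running word count.
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class SocketWordCount {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> lines = env.socketTextStream("localhost", 9999);

        DataStream<Tuple2<String, Integer>> counts = lines
            // Split each line into (word, 1) pairs.
            .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                @Override
                public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                    for (String word : line.toLowerCase().split("\\W+")) {
                        if (!word.isEmpty()) {
                            out.collect(Tuple2.of(word, 1));
                        }
                    }
                }
            })
            // Group by word and sum the counts continuously as data arrives.
            .keyBy(value -> value.f0)
            .sum(1);

        counts.print();
        env.execute("Streaming word count");
    }
}
```
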
21 hours
Overview
In this instructor-led, live training in Italia (onsite or remote), participants will learn how to deploy and manage Apache NiFi in a live lab environment.

By the end of this training, participants will be able to:

- Install and configure Apache NiFi.
- Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes.
- Automate dataflows.
- Enable streaming analytics.
- Apply various approaches for data ingestion.
- Transform big data into business insights.
7 hours
Overview
In this instructor-led, live training in Italia, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.

By the end of this training, participants will be able to:

- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors (a minimal processor sketch follows this list).
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
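
A hedged, minimal sketch of a custom NiFi processor follows (illustrative only, not course material; the class name, attribute, and relationship wiring are assumptions, and the usual documentation annotations and property descriptors are omitted for brevity):

```java
// Minimal custom NiFi processor sketch: stamps each incoming FlowFile with an
// attribute and routes it to a "success" relationship.
import java.util.Collections;
import java.util.Set;
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

public class StampProcessor extends AbstractProcessor {

    public static final Relationship REL_SUCCESS = new Relationship.Builder()
            .name("success")
            .description("FlowFiles that were stamped successfully")
            .build();

    @Override
    public Set<Relationship> getRelationships() {
        return Collections.singleton(REL_SUCCESS);
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return; // Nothing queued for this processor right now.
        }
        // FlowFiles are immutable, so keep the reference returned by putAttribute.
        flowFile = session.putAttribute(flowFile, "stamped.by", "StampProcessor");
        session.transfer(flowFile, REL_SUCCESS);
    }
}
```
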
28 hours
Overview
Apache Storm is a distributed, real-time computation engine used for enabling real-time business intelligence. It does so by allowing applications to reliably process unbounded streams of data (also known as stream processing). "Storm is for real-time processing what Hadoop is for batch processing!"

In this instructor-led, live training, participants will learn how to install and configure Apache Storm, then develop and deploy an Apache Storm application for processing big data in real time.

Some of the topics covered in this training include:

- Apache Storm in the context of Hadoop
- Working with unbounded data
- Continuous computation
- Real-time analytics
- Distributed RPC and ETL processing

Request this course now!

Audience

- Software and ETL developers
- Mainframe professionals
- Data scientists
- Big data analysts
- Hadoop professionals

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
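
As a rough, hedged illustration of the kind of Storm application covered above (not taken from the course materials), the sketch below wires Storm's example/testing word spout to a simple bolt; the topology and component names are hypothetical:

```java
// Minimal Apache Storm sketch: a test spout emits random words and a bolt
// appends "!!!" to each one. Runs in-process via LocalCluster for demonstration.
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.testing.TestWordSpout;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class ExclamationTopology {

    public static class ExclamationBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            // Take the incoming word and emit it with exclamation marks appended.
            collector.emit(new Values(tuple.getString(0) + "!!!"));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new TestWordSpout(), 1);
        builder.setBolt("exclaim", new ExclamationBolt(), 2).shuffleGrouping("words");

        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("exclamation-demo", new Config(), builder.createTopology());
        Thread.sleep(10_000);
        cluster.shutdown();
    }
}
```
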
21 hours
Overview
Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data-in-motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable.

This instructor-led, live training introduces Apache Apex's unified stream processing architecture and walks participants through the creation of a distributed application using Apex on Hadoop.

By the end of this training, participants will be able to:

- Understand data processing pipeline concepts such as connectors for sources and sinks, common data transformations, etc.
- Build, scale and optimize an Apex application
- Process real-time data streams reliably and with minimum latency
- Use Apex Core and the Apex Malhar library to enable rapid application development
- Use the Apex API to write and reuse existing Java code
- Integrate Apex into other applications as a processing engine
- Tune, test and scale Apex applications

Format of the course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
14 hours
Overview
Apache Beam is an open source, unified programming model for defining and executing parallel data processing pipelines. Its power lies in its ability to run both batch and streaming pipelines, with execution being carried out by one of Beam's supported distributed processing back-ends: Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow. Apache Beam is useful for ETL (Extract, Transform, and Load) tasks such as moving data between different storage media and data sources, transforming data into a more desirable format, and loading data onto a new system.
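
As a hedged, minimal sketch of the unified model described above (not course material; the file paths are placeholders), the pipeline below reads text files, drops empty lines, and writes the result, and the same definition can be executed by any supported runner chosen via command-line options:

```java
// Minimal Apache Beam (Java SDK) sketch: read text files, keep non-empty lines,
// write the result back out. The runner (DirectRunner, FlinkRunner, DataflowRunner, ...)
// is selected through the pipeline options, not in the pipeline code itself.
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Filter;

public class FilterLinesPipeline {
    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline pipeline = Pipeline.create(options);

        pipeline.apply("ReadLines", TextIO.read().from("/tmp/input/*.txt"))
                .apply("DropEmptyLines", Filter.by((String line) -> !line.trim().isEmpty()))
                .apply("WriteLines", TextIO.write().to("/tmp/output/filtered"));

        pipeline.run().waitUntilFinish();
    }
}
```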

In this instructor-led, live training (onsite or remote), participants will learn how to implement the Apache Beam SDKs in a Java or Python application that defines a data processing pipeline for decomposing a big data set into smaller chunks for independent, parallel processing.

By the end of this training, participants will be able to:

- Install and configure Apache Beam.
- Use a single programming model to carry out both batch and stream processing from within their Java or Python application.
- Execute pipelines across multiple environments.

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- This course will be available in Scala in the future. Please contact us to arrange.
14 hours
Overview
Apache Ignite is an in-memory computing platform that sits between the application and data layer to improve speed, scalability, and availability.

In this instructor-led, live training, participants will learn the principles behind persistent and pure in-memory storage as they step through the creation of a sample in-memory computing project.
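
As a hedged, minimal sketch of in-memory computing with Ignite (not part of the course; the cache name and data are hypothetical):

```java
// Minimal Apache Ignite sketch: start an in-memory node, create a key/value cache,
// and read a value back.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class IgniteCacheDemo {
    public static void main(String[] args) {
        // Start an Ignite node with default configuration; in a cluster, additional
        // nodes joining the same topology would share and rebalance this cache.
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cities = ignite.getOrCreateCache("cities");

            // Data lives in RAM close to the compute, which is where the speed-up comes from.
            cities.put(1, "Rome");
            cities.put(2, "Milan");

            System.out.println("City 1 = " + cities.get(1));
        }
    }
}
```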

By the end of this training, participants will be able to:

- Use Ignite for in-memory, on-disk persistence as well as a purely distributed in-memory database.
- Achieve persistence without syncing data back to a relational database.
- Use Ignite to carry out SQL and distributed joins.
- Improve performance by moving data closer to the CPU, using RAM as storage.
- Spread data sets across a cluster to achieve horizontal scalability.
- Integrate Ignite with RDBMS, NoSQL, Hadoop and machine learning processors.

Format of the course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
7 hours
Overview
This instructor-led, live training in Italia (online or onsite) is aimed at developers who wish to implement Apache Kafka stream processing without writing code.

By the end of this training, participants will be able to:

- Install and configure Confluent KSQL.
- Set up a stream processing pipeline using only SQL commands (no Java or Python coding).
- Carry out data filtering, transformations, aggregations, joins, windowing, and sessionization entirely in SQL.
- Design and deploy interactive, continuous queries for streaming ETL and real-time analytics.
7 hours
Overview
This instructor-led, live training in Italia (online or onsite) is aimed at data engineers, data scientists, and programmers who wish to use Spark Streaming features in processing and analyzing real-time data.

By the end of this training, participants will be able to use Spark Streaming to process live data streams for use in databases, filesystems, and live dashboards.

Upcoming Stream processing Courses

Weekend Stream processing courses, Evening Stream processing training, Stream processing training center, Stream processing instructor-led, Weekend Stream processing training, Evening Stream processing courses, Stream processing coaching, Stream processing instructor, Stream processing trainer, Stream processing training courses, Stream processing classes, Stream processing on-site, Stream processing private courses, Stream processing one-on-one training

Course Discounts

Course Discounts Newsletter

We respect the privacy of every email address. We will never share or sell any email address to third parties. Please enter your email address. You can always change your settings or unsubscribe completely.

Our Clients

NobleProg is growing fast!

We are looking for a good mixture of IT and soft skills in Italy!

As a NobleProg Trainer you will be responsible for:

  • delivering training and consultancy Worldwide
  • preparing training materials
  • creating new course outlines
  • delivering consultancy
  • quality management

At the moment we are focusing on the following areas:

  • Statistics, Forecasting, Big Data Analysis, Data Mining, Evolutionary Algorithms, Natural Language Processing, Machine Learning (recommender systems, neural networks, etc.)
  • SOA, BPM, BPMN
  • Hibernate/Spring, Scala, Spark, jBPM, Drools
  • R, Python
  • Mobile Development (iOS, Android)
  • LAMP, Drupal, Mediawiki, Symfony, MEAN, jQuery
  • You also need patience and the ability to explain concepts to non-technical people

To apply, please create your trainer-profile by going to the link below:

Apply now!
