Hadoop Training Courses

Local, instructor-led live Apache Hadoop training courses demonstrate, through interactive hands-on practice, the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. Hadoop training is available as "onsite live training" or "remote live training". Onsite live training can be carried out locally on customer premises in Italy or in NobleProg corporate training centers in Italy. Remote live training is carried out by way of an interactive remote desktop. NobleProg: your local training provider

Reviews

Hadoop Course Outlines

Course Name
Duration
Overview
21 hours
Overview
The course is dedicated to IT specialists who are looking for a solution to store and process large data sets in a distributed system environment

Course goal:

Gain knowledge of Hadoop cluster administration
35 hours
Overview
Audience:

The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment

Goal:

Gain deep knowledge of Hadoop cluster administration.
28 hours
Overview
Audience:

This course is intended to demystify big data/Hadoop technology and to show that it is not difficult to understand.
28 hours
Overview
Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. This course will introduce developers to the various components (HDFS, MapReduce, Pig, Hive and HBase) of the Hadoop ecosystem.
21 hours
Overview
Apache Hadoop is one of the most popular frameworks for processing Big Data on clusters of servers. This course delves into data management in HDFS, advanced Pig, Hive, and HBase. These advanced programming techniques will be beneficial to experienced Hadoop developers.

Audience: developers

Duration: three days

Format: lectures (50%) and hands-on labs (50%).
21 hours
Overview
This course introduces HBase – a NoSQL store on top of Hadoop. The course is intended for developers who will be using HBase to develop applications, and administrators who will manage HBase clusters.

The course walks developers through HBase architecture, data modelling, and application development on HBase. It also covers using MapReduce with HBase and administration topics related to performance optimization. The course is very hands-on, with lots of lab exercises.

Duration: 3 days

Audience: Developers & Administrators
21 hours
Overview
Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. In this three (optionally, four) day course, attendees will learn about the business benefits and use cases for Hadoop and its ecosystem, how to plan cluster deployment and growth, and how to install, maintain, monitor, troubleshoot and optimize Hadoop. They will also practice cluster bulk data loading, get familiar with various Hadoop distributions, and practice installing and managing Hadoop ecosystem tools. The course finishes off with a discussion of securing the cluster with Kerberos.

“…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising

Audience

Hadoop administrators

Format

Lectures and hands-on labs, approximate balance 60% lectures, 40% labs.
21 hours
Overview
Apache Hadoop is the most popular framework for processing Big Data. Hadoop provides rich and deep analytics capabilities, and it is making inroads into the traditional BI analytics world. This course will introduce analysts to the core components of the Hadoop ecosystem and its analytics capabilities.

Audience

Business Analysts

Duration

three days

Format

Lectures and hands-on labs.
21 hours
Overview
Hadoop is the most popular Big Data processing framework.
14 hours
Overview
Audience

- Developers

Format of the Course

- Lectures, hands-on practice, small tests along the way to gauge understanding
21 hours
Overview
This course is intended for developers, architects, data scientists or any profile that requires access to data either intensively or on a regular basis.

The major focus of the course is data manipulation and transformation.

Among the tools in the Hadoop ecosystem this course includes the use of Pig and Hive both of which are heavily used for data transformation and manipulation.

This training also addresses performance metrics and performance optimisation.

The course is entirely hands-on and is punctuated by presentations of the theoretical aspects.
14 hours
Overview
As more and more software and IT projects migrate from local processing and data management to distributed processing and big data storage, project managers are finding the need to upgrade their knowledge and skills to grasp the concepts and practices relevant to Big Data projects and opportunities. This course introduces project managers to the most popular Big Data processing framework: Hadoop.

In this instructor-led training, participants will learn the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. In learning these foundations, participants will also improve their ability to communicate with the developers and implementers of these systems, as well as with the data scientists and analysts involved in many IT projects.

Audience

- Project Managers wishing to implement Hadoop into their existing development or IT infrastructure
- Project Managers needing to communicate with cross-functional teams that include big data engineers, data scientists and business analysts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
Overview
Apache Samza is an open-source near-realtime, asynchronous computational framework for stream processing. It uses Apache Kafka for messaging, and Apache Hadoop YARN for fault tolerance, processor isolation, security, and resource management.

This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.

By the end of this training, participants will be able to:

- Use Samza to simplify the code needed to produce and consume messages.
- Decouple the handling of messages from an application.
- Use Samza to implement near-realtime asynchronous computation.
- Use stream processing to provide a higher level of abstraction over messaging systems.

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
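The "decouple message handling from the application" idea above can be sketched in a few lines of plain Python. This is purely illustrative: real Samza tasks are written in Java against the StreamTask API (whose `process()` method is invoked once per incoming message), and the class name and stream contents below are hypothetical stand-ins, not Samza code.

```python
from collections import defaultdict

# Conceptual sketch of Samza's processing model (illustration only):
# a task is a callback invoked once per incoming message, fully
# decoupled from whichever producer wrote the message to the stream.

class WordCountTask:
    """Counts words seen on the input stream, emitting running totals."""
    def __init__(self):
        self.counts = defaultdict(int)

    def process(self, message, collector):
        # Analogous to Samza's StreamTask.process(): handle one message,
        # send results downstream via the collector.
        for word in message.split():
            self.counts[word] += 1
            collector.append((word, self.counts[word]))

task = WordCountTask()
output = []                                   # stands in for an output stream
input_stream = ["big data", "big clusters"]   # stands in for a Kafka topic
for msg in input_stream:
    task.process(msg, output)
print(output)  # [('big', 1), ('data', 1), ('big', 2), ('clusters', 1)]
```

Because the task only sees one message at a time, the same code works whether messages arrive from a test list, a Kafka topic, or a replayed log.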
7 hours
Overview
Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu and Alibaba.

In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte-scale data as they step through the creation of an application with Alluxio.

By the end of this training, participants will be able to:

- Develop an application with Alluxio
- Connect big data systems and applications while preserving one namespace
- Efficiently extract value from big data in any storage format
- Improve workload performance
- Deploy and manage Alluxio standalone or clustered

Audience

- Data scientists
- Developers
- System administrators

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
Overview
Tigon is an open-source, real-time, low-latency, high-throughput, native YARN, stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users.

This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.

By the end of this training, participants will be able to:

- Create powerful, stream processing applications for handling large volumes of data
- Process stream sources such as Twitter and Webserver Logs
- Use Tigon for rapid joining, filtering, and aggregating of streams

Audience

- Developers

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
14 hours
Overview
Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion.

In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.

By the end of this training, participants will be able to:

- Create, curate, and interactively explore an enterprise data lake
- Access business intelligence data warehouses, transactional databases and other analytic stores
- Use a spreadsheet user-interface to design end-to-end data processing pipelines
- Access pre-built functions to explore complex data relationships
- Use drag-and-drop wizards to visualize data and create dashboards
- Use tables, charts, graphs, and maps to analyze query results

Audience

- Data analysts

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
21 hours
Overview
In this instructor-led, live training in Italia (onsite or remote), participants will learn how to deploy and manage Apache NiFi in a live lab environment.

By the end of this training, participants will be able to:

- Install and configure Apache NiFi.
- Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes.
- Automate dataflows.
- Enable streaming analytics.
- Apply various approaches for data ingestion.
- Transform Big Data into business insights.
7 hours
Overview
In this instructor-led, live training in Italia, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.

By the end of this training, participants will be able to:

- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
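The flow-based programming model behind NiFi can be illustrated with a hypothetical pipeline of Python generators. This is a conceptual sketch only: NiFi processors are Java classes wired together in its visual UI, and the processor names below are made up for illustration.

```python
# Conceptual sketch of flow-based programming (not NiFi's actual API):
# each "processor" consumes records from an upstream stage and yields
# records downstream, so stages stay independent and recomposable.

def ingest(lines):
    """Source processor: emit raw records, trimming whitespace."""
    for line in lines:
        yield line.strip()

def to_upper(records):
    """Transform processor: normalize records to upper case."""
    for rec in records:
        yield rec.upper()

def drop_empty(records):
    """Filter processor: discard empty records."""
    for rec in records:
        if rec:
            yield rec

# Wiring processors into a flow mirrors dragging connections in NiFi's UI.
flow = drop_empty(to_upper(ingest(["alpha", "  ", "beta"])))
result = list(flow)
print(result)  # ['ALPHA', 'BETA']
```

Swapping `ingest` for a stage that reads a socket or file changes the source without touching the downstream processors, which is the core appeal of the dataflow style.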
28 hours
Overview
Hadoop is a popular Big Data processing framework. Python is a high-level programming language famous for its clear syntax and code readability.

In this instructor-led, live training, participants will learn how to work with Hadoop, MapReduce, Pig, and Spark using Python as they step through multiple examples and use cases.

By the end of this training, participants will be able to:

- Understand the basic concepts behind Hadoop, MapReduce, Pig, and Spark
- Use Python with Hadoop Distributed File System (HDFS), MapReduce, Pig, and Spark
- Use Snakebite to programmatically access HDFS within Python
- Use mrjob to write MapReduce jobs in Python
- Write Spark programs with Python
- Extend the functionality of Pig using Python UDFs
- Manage MapReduce jobs and Pig scripts using Luigi

Audience

- Developers
- IT Professionals

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
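The map/shuffle/reduce pipeline that tools like mrjob run on a Hadoop cluster can be sketched in a single-process, stdlib-only Python word count. This is an illustration of the model, not mrjob code, and no Hadoop installation is assumed.

```python
from collections import defaultdict

# Minimal single-process sketch of the MapReduce word-count pipeline.

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the input line.
    for word in line.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Shuffle phase: group all values by key, as Hadoop does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    # Reduce phase: sum the counts for one word.
    return key, sum(values)

lines = ["Big data big clusters", "data everywhere"]
mapped = (pair for line in lines for pair in mapper(line))
counts = dict(reducer(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'big': 2, 'data': 2, 'clusters': 1, 'everywhere': 1}
```

On a real cluster the map and reduce functions run in parallel across nodes and the shuffle moves data over the network; the division of labor, however, is exactly the one shown here.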
14 hours
Overview
Sqoop is an open source software tool for transferring data between Hadoop and relational databases or mainframes. It can be used to import data from a relational database management system (RDBMS), such as MySQL or Oracle, or a mainframe into the Hadoop Distributed File System (HDFS). Thereafter, the data can be transformed in Hadoop MapReduce, and then re-exported back into an RDBMS.

In this instructor-led, live training, participants will learn how to use Sqoop to import data from a traditional relational database to Hadoop storage such as HDFS or Hive, and vice versa.

By the end of this training, participants will be able to:

- Install and configure Sqoop
- Import data from MySQL to HDFS and Hive
- Import data from HDFS and Hive to MySQL

Audience

- System administrators
- Data engineers

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice

Note

- To request a customized training for this course, please contact us to arrange.
21 hours
Overview
Big data analytics involves the process of examining large amounts of varied data sets in order to uncover correlations, hidden patterns, and other useful insights.

The health industry has massive amounts of complex heterogeneous medical and clinical data. Applying big data analytics on health data presents huge potential in deriving insights for improving delivery of healthcare. However, the enormity of these datasets poses great challenges in analyses and practical applications to a clinical environment.

In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises.

By the end of this training, participants will be able to:

- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to deal with medical data
- Study big data systems and algorithms in the context of health applications

Audience

- Developers
- Data Scientists

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice.

Note

- To request a customized training for this course, please contact us to arrange.
35 hours
Overview
This instructor-led, live training in Italia (online or onsite) is aimed at system administrators who wish to learn how to set up, deploy and manage Hadoop clusters within their organization.

By the end of this training, participants will be able to:

- Install and configure Apache Hadoop.
- Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as the storage engine for on-premise Spark deployments.
- Set up Spark to access alternative storage solutions such as Amazon S3 and NoSQL database systems such as Redis, Elasticsearch, Couchbase, Aerospike, etc.
- Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
7 hours
Overview
This course covers how to use the Hive SQL language (aka Hive HQL, SQL on Hive, HiveQL) for people who extract data from Hive.
21 hours
Overview
Cloudera Impala is an open source massively parallel processing (MPP) SQL query engine for Apache Hadoop clusters.

Impala enables users to issue low-latency SQL queries to data stored in the Hadoop Distributed File System and Apache HBase without requiring data movement or transformation.

Audience

This course is aimed at analysts and data scientists performing analysis on data stored in Hadoop via Business Intelligence or SQL tools.

After this course, delegates will be able to:

- Extract meaningful information from Hadoop clusters with Impala.
- Write specific programs to facilitate Business Intelligence in Impala SQL Dialect.
- Troubleshoot Impala.
21 hours
Overview
Apache Ambari is an open-source management platform for provisioning, managing, monitoring and securing Apache Hadoop clusters.

In this instructor-led, live training, participants will learn the management tools and practices provided by Ambari to successfully manage Hadoop clusters.

By the end of this training, participants will be able to:

- Set up a live Big Data cluster using Ambari
- Apply Ambari's advanced features and functionalities to various use cases
- Seamlessly add and remove nodes as needed
- Improve a Hadoop cluster's performance through tuning and tweaking

Audience

- DevOps
- System Administrators
- DBAs
- Hadoop testing professionals

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
21 hours
Overview
Hortonworks Data Platform (HDP) is an open-source Apache Hadoop support platform that provides a stable foundation for developing big data solutions on the Apache Hadoop ecosystem.

This instructor-led, live training (onsite or remote) introduces Hortonworks Data Platform (HDP) and walks participants through the deployment of the Spark + Hadoop solution.

By the end of this training, participants will be able to:

- Use Hortonworks to reliably run Hadoop at a large scale.
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
- Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project.
- Process different types of data, including structured, unstructured, in-motion, and at-rest.

Format of the course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course customization options

- To request a customized training for this course, please contact us to arrange.

Upcoming Hadoop Courses

Discounted Courses

Sign up for our newsletter to receive course discounts

We respect the privacy of your email address. We will never pass on or sell your address to third parties. Please enter your email address. You can change your preferences or unsubscribe completely at any time.

Our Clients

NobleProg is growing fast!

We are looking for a good mixture of IT and soft skills in Italy!

As a NobleProg Trainer you will be responsible for:

  • delivering training and consultancy worldwide
  • preparing training materials
  • creating new course outlines
  • delivering consultancy
  • quality management

At the moment we are focusing on the following areas:

  • Statistics, Forecasting, Big Data Analysis, Data Mining, Evolutionary Algorithms, Natural Language Processing, Machine Learning (recommender systems, neural networks, etc.)
  • SOA, BPM, BPMN
  • Hibernate/Spring, Scala, Spark, jBPM, Drools
  • R, Python
  • Mobile Development (iOS, Android)
  • LAMP, Drupal, Mediawiki, Symfony, MEAN, jQuery
  • You need to have patience and the ability to explain concepts to non-technical people

To apply, please create your trainer-profile by going to the link below:

Apply now!
