
HDP Features

Hortonworks Data Platform 2.3

Get access to an industry-leading Apache Hadoop distribution that includes common tools like MapReduce, HDFS, Pig, Hive, YARN, and Tez. We provide root access to the application itself, allowing users to interact directly with the core platform.

Secondary Name Node

With the release of HDP 2.1 and going forward, we've added a secondary name node—at no additional cost and without consuming additional resources in your cluster.

Included Tools

  • Hive
  • Pig
  • MapReduce
  • Tez
  • YARN
  • Flume
  • Sqoop
  • Oozie
  • Storm
  • Kafka
  • Falcon
  • Spark

Infrastructure Features

Purpose-built hardware

Cloud Big Data combines the flexibility of our managed cloud with custom infrastructure optimized for large amounts of storage and high I/O performance.

Gigabit network

Get anywhere from 1 Gbps to 10 Gbps of networking between nodes and master services, at no extra charge.

Single tenant option

Larger instances (10 TB) are built on dedicated servers with no multi-tenancy, so you can take advantage of the full resources of your host machine.

OnMetal dedicated bare-metal Cloud Servers

Single-tenant bare-metal OnMetal servers give you 3x the I/O of traditional Cloud Big Data and are optimized for rapid deployment, elastic scalability, and in-memory performance.

Support for Apache Spark

We support the in-memory cluster computing tool for customers who want to maximize streaming and interactive workloads. Our team is skilled in Apache Spark and other new tools inside the HDP 2.3 framework.

Support for open-source databases

Our Managed Database portfolio fully supports Hadoop, Spark, ObjectRocket for MongoDB, ObjectRocket for Redis, and MySQL.

Administrative Tools

Help Me Choose

Get guidance on how to best architect your big data environment with our easy-to-use Help Me Choose tool.

Connect your data

For increased flexibility, you can process data that lives natively on Cloud Files or within the ObjectRocket MongoDB service. This also enables you to provision Cloud Big Data resources without moving data when your processing environment is not persistent. Learn more about using Cloud Files with Cloud Big Data and how to connect to ObjectRocket.
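As a sketch, a job on the cluster can point at data in Cloud Files through Hadoop's OpenStack Swift filesystem support (the `swift://container.service/path` URI scheme from the hadoop-openstack module). The container name and service alias below are illustrative, and the command is built as a string so the example stands alone.

```shell
# List input stored in Cloud Files without first copying it into HDFS.
# "mycontainer" and the service alias "rack" are illustrative placeholders;
# the actual alias depends on your cluster's core-site.xml configuration.
CONTAINER="mycontainer"
INPUT_PATH="input/logs"

# The command you would run on a cluster node:
CMD="hadoop fs -ls swift://${CONTAINER}.rack/${INPUT_PATH}"
echo "${CMD}"
```

The same URI style works anywhere an HDFS path is accepted, such as a Hive external table location or a MapReduce input path.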

Simple provisioning

Use the control panel to specify the size of your data nodes and Cloud Big Data will take care of creating the master services, gateway services, and secondary name node for you. Learn more about how to provision Cloud Big Data.

Post init script

Link to a post-install script to install additional packages, change cluster configuration parameters, or download data into the cluster once it's ready for use.
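As a sketch, a post-install script might look like the following. The package names and configuration paths are illustrative placeholders, not Cloud Big Data defaults, and the environment-specific commands are commented out because they only make sense on a live cluster node.

```shell
#!/bin/bash
# Sketch of a post-install script for a new cluster. All package names
# and paths below are hypothetical examples.
set -euo pipefail

log() { echo "[post-init] $1"; }

log "installing additional packages"
# yum install -y python-pip          # example: add a package to each node

log "adjusting cluster configuration"
# echo "mapreduce.map.memory.mb=2048" >> /etc/hadoop/conf/custom.properties

log "downloading seed data"
# hadoop fs -mkdir -p /data/incoming # example: prepare an HDFS directory

log "done"
```

Host the script at a URL reachable from the cluster and supply that link at provision time; it runs once the cluster is ready for use.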

SCP file transfers

Use SCP to upload data directly into data nodes instead of going through the gateway node first.
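The upload path described above can be sketched as a single `scp` invocation. The hostname, user, and file paths are hypothetical placeholders for your own cluster, and the command is built as a string here so the example stands alone.

```shell
# Upload a local file straight to a data node over SCP, bypassing the
# gateway node. All values below are hypothetical placeholders.
NODE="datanode-01.example.com"
CLUSTER_USER="hdfs"
LOCAL_FILE="/tmp/events.csv"
REMOTE_DIR="/data/incoming"

# The command you would run from your workstation:
CMD="scp ${LOCAL_FILE} ${CLUSTER_USER}@${NODE}:${REMOTE_DIR}/"
echo "${CMD}"
```

Once the file is on the data node, a follow-up `hadoop fs -put` can move it into HDFS.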

Request a free data strategy session

Talk to our data experts about your business objectives, and we'll work with you on the solutions needed to achieve them.
