I was recently in a meeting with the CIO and a selection of senior Business Unit executives from one of Australia’s largest companies. We discussed the importance of cloud and big data for all businesses in the future and how these technologies and concepts could apply to their industry. Looking at the body language around the room, the reactions were typical of senior executives really exploring these concepts for the first time – awe, excitement, trepidation, confusion, curiosity and a desire to learn more.
By Paul Campaniello, Vice President of Global Marketing, ScaleBase
Data processing platforms do not exist in silos. That is why it is increasingly important for Rackspace to provide interoperability between datastores. This enables customers to choose the best technology for their data processing needs without being restricted to a single tool.
Hadoop Summit kicks off today. And while I could open this post with a lengthy rundown of the momentum and disruption happening in the big data space, I'll spare you that. This momentum is being felt across every sector and is addressing almost every new data workload coming into existence. The argument no longer needs to be made.
For those of you who don’t know, Trove is the newest integrated OpenStack project. We have been working on it for over two years at Rackspace, and it’s been a wild ride. We’ve had a ton of help from our friends at HP, who have been on this roller coaster with us for a long while as well. You’re sure to hear more about Trove at OpenStack Summit Atlanta next week, but today I’d like to take a walk down memory lane with Trove, and talk about how it went from a small project started within Rackspace to the treasure it is today.
The world of data platforms is forging ahead with increasing velocity. To stay relevant in today’s Big Data conversation, technologies must ship features and enhancements at a faster cadence than legacy technology. The only way this is possible is through the coordinated, worldwide effort of an open ecosystem of participants. Consider Apache Hadoop: this level of advancement would not be possible without a broad network of developers and engineers working together to rapidly innovate and solve new problems. Beyond fixing the issues users have with Hadoop, the community is changing the perception of how users can leverage it. Once a go-to tool for large batch processing jobs, Hadoop is evolving to address multiple workloads simultaneously, including streaming and interactive workloads, at the same scale as the original batch jobs.
Is more better? Not always, but when it comes to more industry leaders contributing to the CloudU Big Data Massive Open Online Course (MOOC), more is definitely better. As CloudU continues to extend its reach beyond its online presence, the program will sponsor the first ever Open BigCloud and Open Compute Project (OCP) Workshop at The University of Texas at San Antonio (UTSA). The event will take place Wednesday, May 7 and Thursday, May 8 on the UTSA campus.
A customer recently approached us with a new NoSQL data requirement for an instrumentation system we had helped them with before.
Data is everywhere. It’s in every organization, economy and sector. It touches every technology user. Storing, aggregating and analyzing data is now crucial for every business.
By Roland Schmidt, Senior Director of Business Development, Clustrix, Inc.
©2014 Rackspace, US Inc.