Chartboost makes a technology platform that helps mobile game developers find new users and monetize their games. To do that, the company runs one of the largest MongoDB databases in the world, writing to a database of one billion objects. At Rackspace::Solve San Francisco, Chartboost Co-Founder and CTO Sean Fannan talked about how Chartboost manages scale and growth.
I spend most of my days hacking on MongoDB environments, so it’s easy for me to get pumped about new features when I hear about them. And there are a few recent MongoDB updates and announcements that have me particularly excited.
So you managed to survive the first post and are still hungering for more? Don’t worry, I’ve got you covered. This time around, we’ll get into more of the peripheral, optional components that might be useful to you. The format will largely be the same as in the first post, so let’s get right to it.
So you wanna learn you some Hadoop, eh? Well, get ready to drink from the firehose, because the Hadoop ecosystem is crammed full of software, much of which duplicates efforts, and much of which is named so similarly that it’s very confusing for newcomers.
Have you heard of MapReduce? How about Data Lake or petabytes? If you’ve heard these big data terms before and weren’t sure exactly what they meant, it’s time to find out. CloudU today is launching its “Bigger is Better” Big Data Massive Open Online Course (MOOC).
I was recently in a meeting with the CIO and a selection of senior business unit executives from one of Australia’s largest companies. We discussed the importance of cloud and big data for all businesses in the future, and how these technologies and concepts could apply to their industry. Judging by the body language around the room, the reactions were typical of senior executives exploring these concepts for the first time: awe, excitement, trepidation, confusion, curiosity and a desire to learn more.
Data processing platforms do not exist in silos. That is why it is increasingly important for Rackspace to provide interoperability between datastores. This enables customers to choose the best technology for their data processing needs without being restricted to a single tool.
Hadoop Summit kicks off today. And while I could start this post with a long discourse on the momentum and disruption happening in the big data space, I’ll spare you. That momentum is being felt across every sector and is addressing almost every new data workload coming into existence. The argument no longer needs to be made.
The world of data platforms is moving forward with increasing velocity. To stay relevant in today’s Big Data conversation, technologies must deliver features and enhancements at a faster cadence than legacy technology. The only way that is possible is through a worldwide, open ecosystem of participants. Consider Apache Hadoop: this level of advancement would not be possible without a broad network of developers and engineers working together to rapidly solve new problems. Beyond simply fixing the issues users have with Hadoop, the community is changing the perception of how users can leverage it. Once a go-to tool for large batch processing jobs, Hadoop is evolving to address multiple workloads simultaneously, such as streaming and interactive workloads, at the same scale as the original batch jobs.
Is more better? Not always. But when it comes to industry leaders contributing to the CloudU Big Data Massive Open Online Course (MOOC), more is definitely better. As CloudU continues to extend its reach beyond its online presence, the program will sponsor the first-ever Open BigCloud and Open Compute Project (OCP) Workshop at The University of Texas at San Antonio (UTSA). The event will take place Wednesday, May 7 and Thursday, May 8 on the UTSA campus.