Pentaho and DataStax
We announced a strategic partnership with DataStax today: http://www.pentaho.com/press-room/releases/datastax-and-pentaho-jointly-deliver-complete-analytics-solution-for-apache-cassandra/
DataStax provides products and services for the popular NoSQL database Apache Cassandra. Our first round of Cassandra integration will ship in our next major release, and you can download it today (see below).
Our Cassandra integration includes open source data integration steps to read from and write to Cassandra. So you can integrate Cassandra into your data architecture using Pentaho Data Integration/Kettle and avoid creating a Big Silo – all with a nice drag-and-drop graphical UI. Since our tools are integrated, you can create desktop and web-based reports directly on top of Cassandra. You can also use our tools to extract and aggregate data into a datamart for interactive exploration and analysis. We are demoing these capabilities at the Strata conference in Santa Clara this week.
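For developers who want to drive these transformations outside the graphical UI, here is a minimal sketch of running a saved transformation from Java using Kettle's documented embedding entry points (KettleEnvironment, TransMeta, Trans). The file name cassandra_to_report.ktr is a hypothetical placeholder for a transformation built in Spoon that uses the Cassandra input step.

```java
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class RunCassandraTransformation {
    public static void main(String[] args) throws Exception {
        // Initialize the Kettle environment so core and plugin steps
        // (including the big data steps) are registered.
        KettleEnvironment.init();

        // Load a transformation designed in Spoon. The .ktr file here is
        // a hypothetical example that reads rows from Cassandra and feeds
        // them to a downstream step such as a text file output.
        TransMeta transMeta = new TransMeta("cassandra_to_report.ktr");
        Trans trans = new Trans(transMeta);

        trans.execute(null);        // no additional runtime arguments
        trans.waitUntilFinished();  // block until all steps complete

        if (trans.getErrors() > 0) {
            throw new RuntimeException("Transformation finished with errors");
        }
    }
}
```

The same pattern works for any of the data stores mentioned above, since the transformation file, not the calling code, determines which steps run.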
Links
- Product downloads, how-to videos and documents are available at http://www.pentaho.com/cassandra and http://www.datastax.com/pentaho
- Attend the webinar on March 15th to learn more about using Cassandra with Pentaho Kettle: http://www.pentaho.com/datastax-webinar
- Downloads, how-to documents, and videos are also available at http://community.pentaho.com/BigData
Pentaho’s Big Data Release
This week at Pentaho we announced a major Big Data release, including:
- Open sourcing of our big data code
- Moving Pentaho Data Integration to the Apache License
- Support for HBase, Cassandra, MongoDB, and Hadapt
- And numerous functionality and performance improvements
What does this mean for the Big Data market, for Pentaho, and for everyone else?
We believe you should use the best tool for each job: use Hadoop or a NoSQL database where those technologies suit your purposes, and use a high-performance columnar database for the use cases it is suited to. Your organization probably has applications that use traditional databases, and likely has a hosted application or two as well. Like it or not, if you have a single employee who keeps a spreadsheet on their laptop, you have a data architecture that includes flat files. So every data architecture is a hybrid environment to some extent. To meet the requirements of your business, your IT group probably has to move, merge, and transform data between these data stores. You may have an application or two with no external inputs or outputs and no integration points with other applications. There is a word for these applications – silos. Silos are bad. Big data is no different: a big data store that is not integrated with your data architecture is a Big Silo. Big Silos are just as bad as regular silos, only bigger.
So when you add a big data technology to your organization, you don't want it to be a silo. The big data capabilities of Pentaho Data Integration enable you to integrate your big data store with the rest of your data architecture. If you are using any of the big data technologies we support, you can move data into and out of these data stores using a graphical environment. Our data integration capabilities also extend to traditional databases, columnar databases, flat files, web services, hosted applications, and more. So you can easily integrate your big data application into the rest of your data architecture, and your big data store is not a silo.
For Pentaho, the big data arena is a strategic one. These are new technologies and architectures, so all the players in this space are starting from the same place. It is a great space for us because people using these technologies need tools and capabilities that are easy for us to deliver. Hadoop is especially cool because all of our tools and technologies are pure Java and embeddable, so we can execute our engines within the data nodes and scale linearly as your data grows.
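To illustrate the embedding idea (this is a generic sketch of the pattern, not Pentaho's actual MapReduce integration, which ships with the product), here is a pure-Java engine instantiated inside a Hadoop mapper so the per-row work runs on the data nodes. RowTransformEngine is a hypothetical stand-in for an embedded engine such as Kettle's.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class EngineMapper extends Mapper<LongWritable, Text, Text, Text> {

    // Hypothetical stand-in for an embeddable pure-Java engine; a real
    // integration would configure the engine with a transformation.
    static class RowTransformEngine {
        String process(String row) {
            return row.toUpperCase();
        }
    }

    private RowTransformEngine engine;

    @Override
    protected void setup(Context context) {
        // One engine instance per mapper, created on the data node itself,
        // so the transformation logic runs next to the data it processes.
        engine = new RowTransformEngine();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Each record is transformed locally; adding nodes adds mappers,
        // which is where the linear scaling comes from.
        context.write(new Text(key.toString()),
                      new Text(engine.process(value.toString())));
    }
}
```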
For everyone else, our tools continue to provide great bang for the buck for ETL, reporting, OLAP, predictive analytics, and more. Now we also lower the cost, time, and skill sets required to investigate big data solutions. For any one application you can divide the data architecture into two main segments: client data and server data. Client data includes things like flat files, mobile app data, cookie data, etc. Server data includes transactional/traditional databases and big data stores. I don't see the server side as all or nothing. It could be all RDBMS, all big data store, 50/50, or any mix of the two. It's like milk and coffee: you can have a glass of milk, a cup of coffee, or anything in between with different proportions of each. So you can consider an application that only uses a traditional database today to be an application that currently utilizes 0% of its potential big data component. Every data architecture exists on this continuum, and we have great tools to help you if you want to step into the big data world.
If you want to find out more:
- Visit http://community.pentaho.com/BigData which has downloads, how-tos, and other resources
- Connect with the community in ##pentaho on irc.freenode.net
- Join the Pentaho Big Data technical developer mailing list to be notified about future big data product updates and related events.
- Attend the techcast on Thursday, February 9th to learn more about Pentaho Kettle for Big Data, watch a live demo, and hear how you can get involved. Register now at http://www.pentaho.com/resources/events/20120209-pentaho-kettle-webinar/
- Get free hands-on training at the 2012 Strata Conference in Santa Clara, California. Sign up for our how-to training session (http://strataconf.com/strata2012) on February 28th during the ‘Tuesday Tutorials.’ Register with Pentaho’s 20 percent discount code str12sd20 at https://en.oreilly.com/strata2012/public/register