James Dixon’s Blog

James Dixon’s thoughts on commercial open source and open source business intelligence


Next Big Thing: The Data Explosion


These are the main points from The Data Explosion section of my Next Big Thing In Big Data presentation.

  • The data explosion started in the 1960s
  • Hard drive sizes double every 2 years
  • Data expands to fill the space available
  • Most data types have a natural maximum granularity
  • We are reaching the natural limit for color, images, sound and dates
  • Business data also has a natural limit
  • Organizations are jumping from transactional level data to the natural maximum in one implementation
  • As an organization implements a big data system, the volume of its stored data “pops”
  • Few companies have “popped” so far
  • The explosion of hype exceeds the data explosion
  • The server, memory, storage, and processor vendors do not show a data explosion happening

Summary

  • The underlying trend is an explosion itself
  • The explosion on the explosion is only minor so far
  • Popcorn Effect – it will continue to grow

Please comment or ask questions


Written by James

June 23, 2015 at 5:50 pm

5 Out Of 6 Developers Are Using Open Source


ZDNet reports on a Forrester survey that finds 5 out of 6 developers are using or deploying open source.

http://www.zdnet.com/five-out-of-six-developers-now-using-or-deploying-open-source-7000008499/?s_cid=rSINGLE

The survey also found that 7% of developers are using open source business intelligence tools such as Pentaho.

The United States Department of Labor states that, in 2010, there were 913,100 software developers in the USA alone.

http://www.bls.gov/ooh/Computer-and-Information-Technology/Software-developers.htm

7% of 913,100 works out to about 64,000 developers using open source business intelligence software in the USA alone. Nice.

Written by James

December 10, 2012 at 2:25 pm

I Never Owned Any Software To Begin With


My thoughts on the whole Emily White/stealing music topic:

http://www.npr.org/blogs/allsongs/2012/06/16/154863819/i-never-owned-any-music-to-begin-with

http://thetrichordist.wordpress.com/2012/06/18/letter-to-emily-white-at-npr-all-songs-considered/

When she says she only bought 15 albums, I think she is talking about physical CDs. I think she did buy some of her music online. But she clearly states that she ripped music from the radio station and swapped mix CDs with her friends, and she makes it sound like she thinks this is not stealing.

Don’t Blame iTunes

Many people who complain about artists’ income blame Apple and iTunes. Yes, iTunes propagated the old economic splits and percentages into the digital world. But Apple did not create those splits; they were agreed upon in contracts between the labels/producers and the artists. What iTunes did was provide an alternative digital distribution medium to Napster. Apple saved artists from the prospect of getting no revenue at all. People who attack and boycott iTunes thinking that they are helping artists are deluded.

It’s Not Just Music

This whole debate also extends to movies, books, news commentary, and software – anything that can be digitally copied. In each of these arenas the players and the economic distribution are different, but the consequence of not paying is the same. If we all behaved this way, ultimately there would be no books or movies either. So how does this relate to proprietary software, open source software, and free software?

Proprietary Software

Just like the companies that publish books, music, movies etc, proprietary software companies were the gatekeepers. They decided what software was created and made available. When the hardware and software become available at the consumer level, independent producers spring up. This happened with freeware for PCs. The internet enables the distribution of the software, and methods of collecting payment. The costs of creating books, music, and movies have dropped dramatically because of the hardware and software now available. But if no-one pays for the content created, the proprietary software companies will go out of business.

Non-Proprietary

Open source and free software are other ways of creating and distributing software, the difference being that they rely on software (source and binaries) being easy to copy. Don’t steal Microsoft’s BI software and use it without permission. Use our open source BI software – we want you to.

Free Software

Free software requires that the software, and all software that is built upon it, be ‘free’. In this case ‘free’ means you can freely modify it, distribute it, and build upon it, and you give others those same rights. You can still charge for the software, but it makes no sense to (given the rights you give to your ‘customer’).

The ideals of the Free Software Foundation (FSF) are based on the notion that when you think of something or invent something, it belongs to the world; you don’t own it. This is a wonderful idea; however, most of the world, including many industries, jobs, and professions, is based on the opposite principle – if you create it, you own it. To my mind I have fewer rights under the FSF view of the world: I don’t have the right to my own ideas.

Because of the freedoms that the Free Software Foundation believes in, it is against Digital Rights Management (DRM) software. DRM tries to protect the rights of artists, producers, and distributors of artistic content. In order to protect these rights, proprietary software is needed. If the DRM source code were open, it would be easy for hackers to decode the content and remove the copy protection. So the Free Software Foundation is taking up the fight against DRM, calling it ‘Digital Restrictions Management’ (http://www.fsf.org/campaigns/drm.html). They call it this because, they say, DRM takes away your right to steal other people’s inventions. If you support DRM-free software, you are choosing to fight against musicians, authors, and actors.

Open Source

The Open Source movement takes a pragmatic approach on this topic. When you have an idea, it is yours. You can choose to do whatever you want with your invention. If it is a software invention and you choose to put it into open source, that’s great. If you choose not to, that’s fine too, because it is yours. Open Source allows hybrid models – where a producer can decide to put some of their software into open source but not all of it (the open core or freemium model). This model enables a software producer to provide something of value to people who would not have paid for anything anyway (this includes geographies and economies where the producer would not sell anyway). These people are willing participants and contributors in other ways. The producer also gets to sell whatever software products it wants.

Doomsday?

For some creative areas, if no-one pays for any content anymore, the creators will disappear eventually, and there will be no more content. But what happens if no-one pays for software anymore?

Proprietary software companies die eventually, unless they switch to services models.

The majority of people contributing to open source/free software today are IT developers. There are two main types here: developers who create, extend, or fix software in the course of getting their own projects finished, and sponsored contributors. IT is where the majority of software developers are today, so IT/enterprise/business software is safe.

The software most at risk would be software created by smaller software companies, particularly software that has large up-front development costs. Games. The first, and maybe only, software segment to die would be the big-budget, realistic, immersive, loud video games. Who cares most about these games? The same demographic that is stealing all the music.

I say let Generation OMG copy and steal everything they want. All the really cool and fun careers will evaporate. Lots of the stuff they love (movies, music, games) will disappear. After they have spent a decade texting each other about how sucky everything is, they will grow up and have to re-create these industries. Hopefully with better economic structures than the current ones.

Pentaho and DataStax


We announced a strategic partnership with DataStax today: http://www.pentaho.com/press-room/releases/datastax-and-pentaho-jointly-deliver-complete-analytics-solution-for-apache-cassandra/

DataStax provides products and services for Apache Cassandra, the popular NoSQL database. We are releasing our first round of Cassandra integration in our next major release, and you can download it today (see below).

Our Cassandra integration includes open source data integration steps to read from, and write to, Cassandra. So you can integrate Cassandra into your data architecture using Pentaho Data Integration/Kettle and avoid creating a Big Silo – all with a nice drag/drop graphical UI. Since our tools are integrated, you can create desktop and web-based reports directly on top of Cassandra. You can also use our tools to extract and aggregate data into a datamart for interactive exploration and analysis. We are demoing these capabilities at the Strata conference in Santa Clara this week.
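
For context on what the read step does conceptually, here is a minimal sketch of pulling rows out of Cassandra with the DataStax Java driver. The contact point, keyspace, and ‘orders’ table are hypothetical placeholders, and in Pentaho Data Integration you would configure the equivalent step through the graphical UI rather than writing code.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class CassandraReadSketch {
        public static void main(String[] args) {
            // Hypothetical contact point and keyspace.
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            try {
                Session session = cluster.connect("sales");

                // Read from a hypothetical 'orders' table with a CQL query.
                ResultSet rows = session.execute("SELECT order_id, amount FROM orders");
                for (Row row : rows) {
                    System.out.println(row.getString("order_id") + " -> " + row.getDouble("amount"));
                }
            } finally {
                cluster.close();
            }
        }
    }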

Links

Written by James

February 28, 2012 at 4:07 pm

olap4j V1.0 has been released.


Back in the ’90s and early 2000s I was involved in the attempts by the proprietary BI vendors to create common standards. Anyone remember JOLAP? The vendors were doing this only because of increasing demand and frustration from their customers – none of them actually wanted these standards. Why? Because, in the short term, standards would only help the customers and the implementers, not the vendors. These efforts were hugely political, with many of the vendors taking the opportunity to score points against each other. The resulting ‘standards’ were useless, and none of the large vendors were willing, or able, to support them.

How refreshing, then, to have olap4j reach the 1.0 milestone – http://www.olap4j.org. Created by consumers and producers of open source BI software, olap4j shows the advantage of open collaboration by motivated parties. Already olap4j has a Mondrian driver and an XMLA driver for Microsoft SQL Server Analysis Services, SAP BW, and Jedox Palo. There are also several clients that use olap4j, some from Pentaho, as well as Saiku, Wabit, and ADANS.
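
For anyone who has not tried it yet, here is a minimal sketch of what an olap4j query looks like through the XMLA driver. The server URL, catalog, and the FoodMart-style [Sales] cube are placeholders for illustration, not a specific deployment.

    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.olap4j.CellSet;
    import org.olap4j.OlapConnection;
    import org.olap4j.OlapStatement;
    import org.olap4j.OlapWrapper;
    import org.olap4j.layout.RectangularCellSetFormatter;

    public class Olap4jSketch {
        public static void main(String[] args) throws Exception {
            // Register the generic XMLA driver (a Mondrian-specific driver also exists).
            Class.forName("org.olap4j.driver.xmla.XmlaOlap4jDriver");

            // Hypothetical XMLA endpoint and catalog -- substitute your own server.
            Connection connection = DriverManager.getConnection(
                    "jdbc:xmla:Server=http://example.com/xmla;Catalog=FoodMart");
            OlapConnection olapConnection =
                    ((OlapWrapper) connection).unwrap(OlapConnection.class);

            // Run an MDX query and print the resulting cell set as a simple grid.
            OlapStatement statement = olapConnection.createStatement();
            CellSet cellSet = statement.executeOlapQuery(
                    "SELECT {[Measures].[Unit Sales]} ON COLUMNS, "
                    + "{[Product].Children} ON ROWS "
                    + "FROM [Sales]");
            new RectangularCellSetFormatter(false)
                    .format(cellSet, new PrintWriter(System.out, true));

            olapConnection.close();
        }
    }

The same code runs unchanged against any of the drivers mentioned above, which is exactly the point of a common API.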

olap4j is very cool stuff. You can read more on Julian Hyde’s blog. Congratulations to everyone who has worked on olap4j.

Written by James

April 12, 2011 at 12:24 pm

More Hadoop in New York City


Yesterday was fun. First I met with a potential customer looking to try Hadoop for a big data project.

Then I had a lengthy and interesting chat with Dan Woods. Amongst other things, Dan runs http://www.citoresearch.com/ and also blogs for Forbes. We talked about Pentaho’s history and our experiences so far with the commercial open source model. We also talked about Hadoop and big data, and about the vision and roadmap of our Agile BI offering.

Next I met with Steve Lohr who is a technology reporter for the New York Times. We talked about many topics including the enterprise software markets and how open source is affecting them. We also talked about Hadoop, of course.

Next was a co-meet-up of the New York Predictive Analytics and No-SQL groups, where I presented decks about Weka and Hadoop, separately and together. There were lots of interesting questions and side discussions. By the time we finished all these topics a blizzard was going on outside. Cabs were nowhere to be seen, so Matt Gershoff of Conductrics was kind enough to lead me via the subway to the vicinity of my hotel.

Written by James

January 27, 2011 at 4:01 pm

Big Data in New York City


I’m having an interesting time in NYC this week. I had to retrieve my snowboarding jacket out of the attic for this trip. It’s snowing right now, which is better than the sleet forecast for later. So far I’ve met with a few Big Data customers and prospects and presented at the New York Hadoop User Group. Our hybrid database/Hadoop data lake architecture always gets a good reception and our ability to run our data integration engine within the Hadoop data nodes impresses people.

Being the first Business Intelligence vendor to bring reporting and ETL to the Hadoop space sets us apart from all the other vendors. We have so much recognition in this space that I’ve spoken to a few people in the last month who thought we were ‘THE’ visualization and data transformation provider for Hadoop, and didn’t realize we connect to other data sources as well.

This afternoon I’m meeting with reporters and columnists from a couple of different publications to chat about Big Data / Hadoop stuff. Tonight I’m presenting at the New York Predictive Analytics Meetup to talk about Hadoop from an analysis perspective.

 

 

Written by James

January 26, 2011 at 5:20 pm