This morning, O'Reilly Media published the results of its 2016 Data Science Salary Survey. The report covers a wide set of topics, such as salary differences by gender and country, as well as details on the types of skills that can give employees an edge when it comes to earnings. We took a closer look at the Business Intelligence answers, and what we found might surprise you...
There is no shortage of advice if you’re in the market for a “BI on Big Data” solution, or, as we sometimes like to call it, “Big BI”. Big BI refers to a Business Intelligence (BI) platform that conforms to end-user needs (typically through seamless integration with Tableau or Excel) and can scale to any data size while delivering top query performance.
A resource you might find particularly interesting when sizing up your BI tools is the latest version of the “G2 Crowd Grid for Business Intelligence Software”. The Winter 2016 version was just published. We reviewed the grid and its report and found some interesting insights worth sharing.
Google the word “CDO” today and your search results will mostly return articles about the “Chief Digital Officer”. However, if you came to this blog, you’re probably looking for guidance on the other title this acronym refers to: “The Chief Data Officer”...
We started AtScale because we believe that everyone should be able to use all data for all their decisions. We believe that people should have unencumbered and secure access to information, and be able to work with data of all shapes, at lightning speed, in the tools they already know, like Tableau and Microsoft Excel.
Rumor has it that with the rise of Apache Spark, Spark will replace Hadoop.
Well, let’s take a look. Apache Spark is an open-source processing engine that supports interactive queries, while Hadoop provides easily scalable, cost-effective data storage. The truth is, Spark does not replace Hadoop; in fact, Hadoop and Spark complement one another.
Now you may wonder: how will Spark and Hadoop affect your big data strategy?
With Hadoop Summit San Jose just around the corner, I thought it might be helpful to preview what to watch for at the conference. In some ways, not much has changed in the past few months - streaming data is a hot topic, more and more people are adopting adjacent technologies (like Spark), and “in memory” is “in vogue” in the world of big data. However, a quick tour around the Hadoop Summit website reveals a few more trends that deserve some additional attention.
In its early days, Hadoop was chosen because it was much cheaper for storing large amounts of data than an Enterprise Data Warehouse. However, Hadoop required users to have a strong technical background to query or do much of anything with it. Therefore, the assumption was that Hadoop was only good for data storage.
Today, Hadoop is still the best option for inexpensive data storage. And the reality is that as more technologies have developed around it, Hadoop has become more and more user friendly too. In fact, the latest Big Data Maturity surveys indicate that in addition to traditional data warehousing, a significant number of companies are using Hadoop for BI.
Companies like Yellow Pages are seeing sub-second BI query response times on Hadoop and have been able to drive increased Hadoop adoption across their organizations. If driving Hadoop adoption has been a concern for you in the past, maybe you should reevaluate Hadoop for BI now.
You might not have noticed the Photoshop work on the featured photo for this blog. This isn’t a shot of Steph Curry, the legendary point guard for the Golden State Warriors.
We’ve superimposed the face of Josh Klahr, our VP of Product, onto Curry’s body. Why?
Since the 1980s, the world has been using OLAP technology to provide a business interface for analyzing data stored in traditional ERP and CRM systems. As the demand for insights increased, MOLAP (multidimensional OLAP) and ROLAP (relational OLAP) became key technologies.
With all of the different OLAP options out there, you may wonder which one can actually help you achieve your big data strategy. Which approach is most suitable for your Hadoop environment?