While it may be tempting to focus our self-service BI efforts solely on security and access control mechanisms, it is important to also place emphasis on economics to achieve success. When an enterprise develops a self-service BI environment, its IT team inevitably adopts the role of a service provider: data and services become available to internal business users at a price. What are these hidden costs?
A version of this article originally appeared on the Cloudera VISION blog.
One of my favorite parts of my role is that I get to spend time with customers and prospects, learning what’s important to them as they move to a modern data architecture. Lately, a consistent set of six themes has emerged during these discussions. The themes span industries, use cases and geographies, and I’ve come to think of them as the key principles underlying an enterprise data architecture.
Whether you’re responsible for data, systems, analysis, strategy or results, you can use these principles to help you navigate the fast-paced modern world of data and decisions. Think of them as the foundation for a data architecture that will allow your business to run at an optimized level today and into the future.
Keeping resolutions is hard. Research shows that most of us fail to follow through on our New Year's resolutions by the second week of February! We are hopeful that 2018 will be different, though! In this contributed piece, Donald Farmer takes us through his best practices for making and keeping resolutions. Donald is a highly respected figure in the data analytics world and has built outstanding product franchises at Qliktech and Microsoft. He is currently Principal at TreeHive Strategy, an IT advisory firm.
In the world of Business Intelligence and Big Data, exciting innovations continue to emerge as new and improved options for processing large data sets appear on the market. You may be familiar with AtScale’s BI-on-Hadoop Benchmarks, where we focus on evaluating the top SQL-on-Hadoop engines and their fitness to support traditional BI-style queries. As we continue to work with customers who are navigating their journey to BI on Big Data, we are increasingly getting questions about the emerging cloud-based data processing engines.
In this blog post, we will take a deeper look at BigQuery from Google, and how it stacks up in the BI-on-Big Data ecosystem.
CONTINUING OUR TRACK RECORD OF RAPID DELIVERY & INNOVATION
Today we announced the general availability of AtScale 5.0, and I couldn’t be more excited about the host of great new features included in this release. As we’ve continued to gain traction in a number of industries, ranging from healthcare to retail to financial services to telco to online, we continue to learn from our customers and feed these learnings directly back into our product features. With the release of 5.0, AtScale customers now have an even richer set of capabilities that they can use to derive business insights and value from their Big Data investments. I’ve included some of the highlights of the release in the sections below.
I’ve asked it before and I’ll ask it again. Wouldn’t it be great if you could easily analyze ALL your data from a single Excel file? We all know this isn’t feasible, especially when dealing with big data and complex business analytics needs.
In working at the intersection of Big Data and traditional Business Intelligence, the AtScale team has encountered a number of complex business analytics use cases that are difficult, if not near-impossible, to solve using typical table-based data models and SQL. Today, I’m going to share why and how complex analysis, such as multi-level metrics, is no longer as ‘difficult’ or ‘near-impossible’ as it once was.
Wouldn’t it be great if you could load all of your data from a single file into an Excel pivot table for easy analysis?
Unfortunately, this approach isn’t usually viable when dealing with complex business analytics and big data. Take, for example, a typical use case found in the world of healthcare insurance. A large insurance provider has tens of millions of members and processes hundreds of millions of claims a year. As flexible as Excel is, we all know it won’t handle this volume or velocity of data.
As a result, more and more enterprises store large data sets in big data platforms like Hadoop. And while Hadoop provides a low-cost and performant approach to store and process this information, there is still the challenge of supporting the many types of analytics required on claims and member data sets. But why? Why and how, with all of the advances in technology, can a simple calculation cause so much complexity?
Yes, there are actually ways to 'Do Big Data Analytics Right'.
Leaders and innovators in the Big Data space have learned the hard way, and now those of you looking to dip your toe, or jump head first, into the BI on Big Data waters can capitalize on their early experiences. Let go of the fear or ego or whatever may be holding you back and take the chance to learn from those who took the early adopter risk.
As 2016 draws to a close, and the AtScale Blog continues to grow, it is easy for a few fantastic posts to have been overlooked during the year. With this in mind, we present to you … Five BI on Big Data Blogs You May Have Missed This Year, But Shouldn't … that offer unique insights to expand your thinking and your ability to drive success in your BI and Big Data journey.
Is your Big Data ‘mature’? You may be puzzled by this question, since many in the industry have been saying ‘Big Data is Dead’ for years. But Big Data is far from dead; instead, the technologies and solutions that make up the Big Data space are maturing at an ever-increasing rate. From traditional players like Teradata, to open-source Hadoop, to new cloud players like Google BigQuery, the Big Data space is doing more to help companies manage and gain insights from their exploding and morphing data than at any other point in history. So what?