If you, like me, are a Tableau fan, you have probably used Tableau for many years, attended numerous Tableau Conferences, and cheered enthusiastically when the engineers at Tableau demonstrated the latest and greatest enhancements to the software. You may also be accustomed to creating your own calculations based on the row-level data you are connected to. You enjoy the freedom that Tableau offers.
Cloud computing is that magnificent technology that serves, day in and day out, as a great umbrella for all our digital activities, from ordering our groceries online to keeping track of asset logistics across the globe.
If you are one of our avid blog followers, you will remember Josh’s 6 principles of modern data architecture. If you need a refresher, here is the first one: “Treat data as a shared asset.” You might be asking, what does this have to do with cloud computing? Read on to find out...
Data Lake Intelligence with AtScale
In my recent Data Lake 2.0 article I described how the worlds of big data and cloud are coming together to reshape the concept of the data lake. The data lake is an important element of any modern data architecture, and the data lake footprint will continue to expand. However, the data lake investment is only one part of delivering a modern data architecture. At Yahoo!, in addition to building a Hadoop-based data lake, we also needed to solve the problem of connecting traditional business intelligence workloads to this Hadoop data. Although the term "Data Lake" didn't exist back then, we were solving the problem of how to deliver an interactive BI experience on top of a scale-out data lake. It turns out we were pioneers in delivering Data Lake Intelligence.
Our experiences and learnings from those initial efforts led to the architecture that sits at the core of the AtScale Intelligence Platform. Because AtScale has been built from the ground up to deliver business-friendly insights from the vast amounts of information in data lakes, AtScale has experienced tremendous success and adoption in enterprises ranging from financial services to retail to digital media. With the release of AtScale 6.5, we've continued to build on and expand AtScale's ability to uniquely deliver on the promise of Data Lake Intelligence. If this sounds like something you might be interested in knowing more about… keep reading!
Poor February. The short month is dismissed for its brevity (let's not talk about the weather), but a lot transpired over the past 28 days, especially in big data and analytics. ICYMI, here's a recap of the top stories:
In the previous post we demonstrated how to model percentile estimates and use them in Tableau without moving large amounts of data. You may ask, "How accurate are the results, and how much load is placed on the cluster?" In this post we discuss the accuracy and scaling properties of the AtScale percentile estimation algorithm.
In the previous post, we discussed typical use cases for percentiles and the advantages of percentile estimates. In this post, we illustrate how to model percentile estimates with AtScale and use them from Tableau.
A new and powerful method of computing percentile estimates on Big Data is now available to you! By combining the well-known t-Digest algorithm with AtScale's semantic layer and smart aggregation features, AtScale addresses gaps in both the Business Intelligence and Big Data landscapes. Most BI tools have features to compute and display various percentiles (e.g., medians, interquartile ranges), but they move data for processing, which dramatically limits the size of the analysis. The Hadoop-based SQL engines (Hive, Impala, Spark) can compute approximate percentiles on large datasets; however, these expensive calculations are not aggregated and reused to answer similar queries. AtScale combines robust percentile estimates with its semantic layer and aggregate tables to provide fast, accurate, and reusable results.
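To make the reuse point concrete, here is a minimal sketch of why t-digests make percentile estimates aggregable: unlike a raw percentile calculation, two t-digests can be merged, so partial digests built per partition can be combined to answer new queries without rescanning the raw rows. This illustration uses the open-source Python tdigest package and simulated data; it is not AtScale's actual implementation.

```python
# A minimal sketch of mergeable percentile estimates with t-digests.
# Uses the open-source `tdigest` package (pip install tdigest).
# Illustrative only; not AtScale's implementation.
import random

from tdigest import TDigest

# Simulate two partitions of a large fact table (e.g., two days of sales).
day1 = [random.gauss(100, 15) for _ in range(100_000)]
day2 = [random.gauss(110, 20) for _ in range(100_000)]

# Build one digest per partition; these compact summaries could be
# precomputed and stored, much like rows in an aggregate table.
digest1 = TDigest()
digest1.batch_update(day1)

digest2 = TDigest()
digest2.batch_update(day2)

# Merging the stored digests answers a two-day query without
# touching the raw rows again.
combined = digest1 + digest2

print("Day 1 median:   ", digest1.percentile(50))
print("Two-day median: ", combined.percentile(50))
print("Two-day p95:    ", combined.percentile(95))
```

The merge step is the key property: because digests compose, a percentile over any combination of partitions can be served from precomputed summaries, which is what makes estimates reusable across similar queries rather than recomputed from scratch each time.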
In this three-part blog series we discuss the benefits of percentile estimates and how to compute them in a Big Data environment. Subscribe today to learn the best practices of percentile estimation on Big Data and more. Let's dive right in!
Did you know that 9 out of 10 companies can analyze less than half of the data they collect? In fact, 45% of these companies analyze less than one quarter of this data. Why is that? The answer is simple: we are generating more data than ever, 44 zettabytes by the year 2020, to be precise. What does this mean for you? A great data lake filled with extremely valuable potential to make better decisions for your company. Remember what Sherlock Holmes once said: "I can't make bricks without clay!" We need data to answer complex questions.
Presidents' Day is the perfect opportunity to explore, honor, and remember the legacy of Washington, Lincoln, and other presidents. It is also a great day for anyone looking for a good sale at their favorite retail or online store. What does this mean? Hundreds of millions of sales transactions will generate an enormous amount of financial and inventory data on this one day.
Data, Data, Data! All the facts, numbers, and everything in between that, when collected, can be analyzed and used to make decisions. Can't live with it! Can't live without it! This year, over $14 billion will be spent globally on flowers, chocolate, and jewelry on Valentine's Day alone. According to CIO Research, big data analytics investments are expected to be close to $187 billion. We are definitely investing more in data than in expensive chocolates! The big question is: are your data investments nurturing your relationship with data the same way a bouquet of flowers and a box of chocolates can nurture your relationships?