The 2018 Dataworks Summit is just around the corner. As you prepare for your trip to San Jose, it’s time to think about how to make the most of your time at the summit, which runs from June 18 to June 21. Sessions, keynotes, and workshops are spread across eight different tracks. Check out the full agenda. Everyone has different goals for the summit, so as you go through the agenda to select the best sessions for you and your organization, here are our recommendations.
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run” - Roy Charles Amara
Mr. Amara, an American researcher and futurist, probably didn’t anticipate how much wisdom he had encapsulated in just a few words. Buying technology is hard. Enterprise IT buyers are often left with the difficult task of determining whether the new piece of technology they just heard about is pure hype or real hope. Where are they to find consistent guidance?
Customers often approach me with questions about AtScale’s ability to integrate into their operational stack. Today, I want to highlight a component of AtScale’s Development Extensions called Webhooks.
A webhook (also called a web callback or HTTP push API) is a way for an application to provide other applications with real-time information. A webhook delivers data to other applications as events occur, so you receive the data immediately. With a REST API, by contrast, you would need to poll very frequently to get anywhere close to real time. This makes webhooks much more efficient for both provider and consumer.
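To make the push model concrete, here is a minimal sketch of a webhook consumer using Python’s standard library. The provider POSTs a JSON payload to your endpoint the moment an event occurs; no polling loop is involved. The payload fields and the event name used below are hypothetical examples, not part of any specific product’s API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_event(body: bytes) -> dict:
    """Decode a pushed JSON payload. Field names are hypothetical."""
    return json.loads(body)

class WebhookHandler(BaseHTTPRequestHandler):
    """Receives events pushed by the provider as they happen."""

    def do_POST(self):
        # Read exactly the number of bytes the provider sent.
        length = int(self.headers.get("Content-Length", 0))
        event = parse_event(self.rfile.read(length))
        # React immediately to the pushed event.
        print("received event:", event.get("type"))
        # Acknowledge receipt so the provider does not retry.
        self.send_response(200)
        self.end_headers()

# To run the receiver (blocks forever):
# HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```

The consumer does no work until an event arrives, which is exactly the efficiency gain over a polling client that must issue requests on a timer whether or not anything has changed.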
March is gone and Spring has arrived, at least for many of us. A lot happened in March, and we certainly don't want you to miss out on what’s big on big data. Without further ado, here is what you might have missed in March.
The joy of working as a Customer Success Solution Architect is that I have the opportunity to work with many different customers and each challenges us with a different Big Data use case.
I've worked with enterprises that offload their Netezza database into the cloud. I've seen companies analyze social media data in real time. I've helped teams streamline operational processes and increase efficiency in production lines. Big Data provides enterprises a competitive advantage and reduces operational costs across these varied scenarios. However, setting up a big data environment is not for the faint-hearted - or is it?
If you are like me, a Tableau fan, you’ve probably used Tableau for many years, attended numerous Tableau Conferences, and cheered with great enthusiasm when the engineers at Tableau demonstrated the latest and greatest enhancements to the software. You may also be very accustomed to creating your own calculations based on the row level data you are connected to. You enjoy the freedoms that Tableau offers.
Cloud computing is that magnificent technology that serves, day in and day out, as a great umbrella for all the digital activities we have going on, from ordering our groceries online to tracking asset logistics across the globe.
If you are one of our avid blog followers, you will remember Josh’s 6 principles of modern data architecture. If you need a refresher, here is the first one: “Treat data as a shared asset.” You might be asking, what does this have to do with cloud computing? Read on to find out...
How valuable is an insight if you don’t know what’s driving it?
The investment in big data made in recent years by companies has been significant. Many are now looking to capitalize on the insights to be discovered in their expansive data lakes. Developing an analytic solution is a difficult and laborious process. More than a few projects have been abandoned long before any conclusive benefit is realized. Some efforts end due to constraints on time and money, others as a result of bad design or poor end user adoption. That last point is significant. You’re going to spend considerable resources to empower your decision makers. If you build it, and they don’t come, then what?
Congratulations! Your data is controlled, aggregated, and turbocharged in your AtScale virtual cube. You have Tableau to create remarkable visualizations. Your data is happy! But are your cube designers and business users happy too? For instance, did you know that centralizing calculations in your AtScale Virtual Cube eliminates TDE proliferation, 3rd-party ETL processes, and version control headaches? For an enhanced AtScale experience, here are 5 Best Practices you should implement to get the most out of AtScale on Tableau.
Data Lake Intelligence with AtScale
In my recent Data Lake 2.0 article I described how the worlds of big data and cloud are coming together to reshape the concept of the data lake. The data lake is an important element of any modern data architecture, and the data lake footprint will continue to expand. However, the data lake investment is only one part of delivering a modern data architecture. At Yahoo!, in addition to building a Hadoop-based data lake, we also needed to solve the problem of connecting traditional business intelligence workloads to this Hadoop data. Although the term “Data Lake” didn’t exist back then, we were solving the problem of: “How can you deliver an interactive BI experience on top of a scale-out Data Lake?” It turns out we were pioneers in delivering Data Lake Intelligence.
Our experiences and learnings from those initial efforts led to the architecture that sits at the core of the AtScale Intelligence Platform. Because AtScale has been built from the ground up to deliver business-friendly insights from the vast amounts of information in data lakes, AtScale has seen tremendous success and adoption in enterprises ranging from financial services to retail to digital media. With the release of AtScale 6.5, we’ve continued to build on and expand AtScale’s ability to uniquely deliver on the promise of Data Lake Intelligence. If this sounds like something you might be interested in knowing more about… keep reading!