It took me a while this week to judge the news value of the G2 Crowd analyst report about marketing automation. I thought about it, took another look and concluded that, yes, marketing automation has to be one of the most boring topics to write about. It’s fascinating in its own way, right? No. This stuff is boring as hell if you don’t get the right angle. Worse, it gets confusing if you don’t organize it right. Thoughts shoot around and land in a pile of nonsensical crap.
It’s the bigger picture that makes this story interesting. At its most basic level, the G2 report is more about how data is changing the way the old-school big technology companies are valued and judged. G2 Crowd uses crowdsourced data that it analyzes and ranks. It’s interactive, as opposed to the static “magic quadrants” that Gartner Research uses to define the leaders in different sectors of the enterprise technology market.
The G2 report collects data from people who use marketing automation software and services. It’s immediately available on the site and updated as more data gets processed into the results.
Most analyst reports are updated on an annual basis or even every two years by one or a few people who spend months doing interviews. This has its drawbacks, because a lot can happen in the market between reports, and smaller vendors are often excluded. It’s essentially a manual, interview-driven process. Surveys may be done, but syncing results into a real-time quadrant is not the norm.
The G2 Crowd grid is based on 700 user reviews by people who have been vetted to ensure they are actual users of the products. Reviews are posted in real time. The data is aggregated and analyzed quarterly by the G2 team. The grids are designed to aggregate peer reviews, web traffic, Twitter, LinkedIn and other social media data into an algorithm that determines how companies are perceived. As I noted in May, the peer reviews are weighted more heavily than the other data.
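G2 doesn’t publish its exact formula, but the basic idea — a weighted average in which peer reviews count more than web and social signals — can be sketched in a few lines. The signal names and weights below are assumptions for illustration only, not G2’s actual methodology.

```python
# Illustrative sketch of a weighted "perception" score in the spirit of
# G2 Crowd's grid. Signal names and weights are hypothetical -- G2's
# real algorithm is not public.

def perception_score(signals, weights):
    """Weighted average of normalized signals (each in the range 0..1)."""
    total_weight = sum(weights.values())
    return sum(signals[name] * w for name, w in weights.items()) / total_weight

# Peer reviews weighted more heavily than web/social signals,
# as the article notes.
WEIGHTS = {"peer_reviews": 0.6, "web_traffic": 0.2, "social_presence": 0.2}

vendor = {"peer_reviews": 0.9, "web_traffic": 0.5, "social_presence": 0.4}
print(round(perception_score(vendor, WEIGHTS), 2))  # 0.72
```

Because the inputs are crowdsourced and arrive continuously, a score like this can be recomputed whenever new reviews land, which is what makes the grid interactive rather than a once-a-year snapshot.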
It’s this data-driven approach that will continue to filter into the buying and selling process. Evidence of this is growing as companies from various market sectors start offering customers the ability to make quick decisions without the guesswork.
For example, Brinqa uses a NoSQL graph database to do risk analysis for customers. The company offers context to machine data so customers can build models that help them make buying decisions that don’t rely on guessing.
CEO Amad Fida said in an interview this spring that the ability to do data analytics helps customers think more about the infrastructure that they run. They are doing more analysis to determine where they keep their data and the data center operations required to process it.
Wise.io, for example, acts as a framework for ingesting data from Hadoop, MongoDB and various file sources. As I wrote this spring:
The Wise.io engine creates multi-dimensional views of the data it ingests. For example, machine learning can analyze every pixel in a picture and correlate its relationship to all the other pixels in the photo. The Wise.io engine can, as well, process the billions of other signals that any data point relates to. Scale that multi-dimensional view across billions of signals and the benefits can come in many forms. A streaming provider might know if the customer using the smartphone is sitting or standing. The Wise.io framework serves as a central brain that takes a holistic look at the data. It has its most useful applications in tasks that require a high level of cognition and intelligence to get the work done.
As customers use more data, they will seek ways to make their processes more efficient. Data will increasingly determine the purchasing decisions they make.
But you have to wonder what the need will be for any service that is not data-driven. What will be the value of reports that are essentially compiled by hand? It’s hard to imagine they will remain as relevant as data-driven services that make buying decisions easier and faster for customers.