Big data — its volume, variety, and velocity — has spawned a glut of technologies for extracting insight from structured, semi-structured, and unstructured information. These technologies all have value when applied to the right problem. Big data has forced organizations to rethink how they architect their data and analytics in response to changes in the data itself, the addition of modern data sources, and the emergence of large-scale storage options such as data lakes.
In turn, the trends in big data have begun to force major changes on the IT infrastructure side of the enterprise. Applications today are deployed very differently from the way they were when many current business intelligence (BI) technologies were created. To handle growing user and data volumes, they demand continuous integration and scale-out modern data architectures that can run in the cloud, in the corporate data center, or across both.
Analysts at 451 Research talk about modern data architecture as a "converged data platform," which includes distributed data grid/cache, NoSQL databases, relational operational databases, and analytic databases combined with Apache Hadoop and stream processing.
In their present state, BI and visual analytic tools can’t operate effectively or evolve in this new ecosystem. For the most part, they were designed to analyze and extract insight from one type of data: the relational database. They also weren’t built to scale, which means performance suffers as the population of business users and data scientists increases and data sources grow. And their legacy architecture roots make these tools difficult to embed.
Zoomdata was purpose-built for big data. If you’re using any of the modern big data sources — Hadoop, search, streaming, NoSQL — you want to use Zoomdata because it connects to them using native APIs and our Zoomdata Smart Connectors.
While traditional BI and analytics tools offer a broad set of connectors primarily for SQL data sources, Zoomdata offers the widest set of connectors for modern data stores such as Hadoop, Spark, NoSQL databases, streaming, search engines, and data stored in the cloud, as well as for traditional SQL relational databases and modern data warehouses.
Because all data sources are not the same, Zoomdata doesn’t treat them with dumb, lowest-common-denominator connectors. Instead, Zoomdata Smart Connectors leverage the unique capabilities of each data source to optimize performance and take advantage of its query expressiveness.
For example, Zoomdata Smart Connectors will leverage data partitions in Impala and other SQL sources, faceted search queries when querying an unstructured data search engine such as Elasticsearch or Solr, native APIs for NoSQL databases, and data streams when accessing data sets being updated in real time via a streaming engine such as Apache Kafka or Apache Storm.
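To make the search-engine case concrete, here is a minimal sketch (not Zoomdata’s actual connector code) of the kind of faceted query a smart connector might push down to Elasticsearch. Rather than pulling raw documents back and grouping them client-side, the connector asks the engine for a terms aggregation and reads back pre-grouped bucket counts. The field name and bucket size are illustrative stand-ins for whatever the visualization’s group-by settings would supply.

```python
def build_faceted_query(field, size=10):
    """Build an Elasticsearch terms-aggregation (facet) request body.

    The field name and bucket size are hypothetical examples; a real
    connector would derive them from the chart's group-by configuration.
    """
    return {
        "size": 0,  # no raw documents needed -- only the aggregated buckets
        "aggs": {
            "facet": {
                "terms": {"field": field, "size": size}
            }
        },
    }

# Request the top five product facets, letting the engine do the grouping.
query = build_faceted_query("product.keyword", size=5)
```

In practice a connector would POST this body to the index’s `_search` endpoint; because the aggregation runs inside the engine, only the small set of bucket names and counts crosses the wire instead of the full result set.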