The Hybrid Cloud And Distributed Database That Walks The IoT Talk

Internet of Things (IoT) devices and sensors are continuously producing data. This data needs to be managed and stored at large scale and accessible with velocity so it can be analyzed, cross-correlated and used to make critical data-driven decisions, often in real-time.

Learn More About How DataStax Supports IoT Applications

High-Speed Data Ingestion with Kafka Support

Built from the ground up to consume streams of time-series and sensor-based data at high velocity, from anywhere in the world, with zero downtime. Built-in integration with Apache Kafka™.


Advanced Analytics

Search and analyze information and cross-correlate streaming data to deliver mind-blowing experiences that drive business value, engagement, and growth.


Infinite Scalability

Easily add scale and capacity while maintaining 100% uptime no matter what you've built or where it lives: at the edge, on-premises, or across multiple cloud platforms.

IoT Data at Serious Scale? No Problem.

With the rise of 5G, more IoT devices will come online, producing and streaming data. An estimated 70% of wide-area IoT devices will use cellular technology, and IDC predicts that 90 ZB of data will be created by IoT devices by 2025 and that global IoT spending will surpass $1 trillion by 2022.

Future-Proof Time Series Management

Whether it's sensor or financial streaming data, DataStax is purpose-built to handle time-stamped measurements and events at large scale, driving real-time business analytics from any streaming source.

  • DataStax is specifically designed to collect and analyze time series data at scale and at high velocity.
  • The masterless architecture of Apache Cassandra™ and DataStax lets you collect and stream data from anywhere.
  • Integrated analytics lets you easily unlock the value of your streaming data, forecast trends, detect patterns and anomalies, and create actionable insights.
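A common pattern behind time series collection at this scale is time-bucketing: each sensor's readings for a given day share one partition, so no partition grows without bound. A minimal Python sketch of that partition-key scheme (the table layout, names, and day-granularity bucket are illustrative assumptions, not DataStax specifics):

```python
from datetime import datetime, timezone

# Illustrative CQL layout for day-bucketed sensor readings (names are hypothetical):
# CREATE TABLE readings (
#     sensor_id text, day text, ts timestamp, value double,
#     PRIMARY KEY ((sensor_id, day), ts)
# ) WITH CLUSTERING ORDER BY (ts DESC);

def partition_key(sensor_id: str, ts: datetime) -> tuple:
    """Bucket a reading by sensor and calendar day so partitions stay bounded."""
    return (sensor_id, ts.date().isoformat())

ts = datetime(2021, 6, 1, 12, 30, tzinfo=timezone.utc)
print(partition_key("sensor-42", ts))  # ('sensor-42', '2021-06-01')
```

Bucketing by day is just one choice; a busier sensor might bucket by hour, a quieter one by month, trading partition size against the number of partitions read per query.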

Fast and Efficient Logging Support

Log data represents the definitive record of what's happening in every enterprise and is an untapped resource when it comes to troubleshooting and supporting broader business objectives.

  • Apache Cassandra’s™ log-based and append-only storage engine delivers the fastest rate of log ingestion of any database on the market
  • Bundled analytics makes it easy to analyze and make decisions from collected log data
  • Enterprise search functionality supplies powerful ways to search and correlate logging information
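The append-only design mentioned above means writes are sequential appends rather than in-place updates, which is what makes log ingestion fast. A toy Python sketch of the idea (a deliberately simplified stand-in, not Cassandra's actual storage engine):

```python
class AppendOnlyStore:
    """Toy append-only store: every write is appended, never updated in place."""

    def __init__(self):
        self.log = []       # stand-in for the commit log (sequential appends only)
        self.memtable = {}  # stand-in for the in-memory table (latest value per key)

    def write(self, key, value):
        self.log.append((key, value))  # sequential append: no random I/O on the write path
        self.memtable[key] = value     # reads see the most recent value

    def read(self, key):
        return self.memtable.get(key)

store = AppendOnlyStore()
store.write("host1", "ERROR disk full")
store.write("host1", "INFO disk cleared")
print(store.read("host1"))  # latest write wins
print(len(store.log))       # but both writes remain in the log
```

In the real engine, the in-memory table is periodically flushed to immutable on-disk files, preserving the same property: the write path never rewrites existing data.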

Nobody Writes Data Like Cassandra

Putting DataStax in front of applications that have to write data in the blink of an eye is something many successful enterprises do today to gain an advantage over their competition. 

  • Apache Cassandra™ is the undisputed leader in the database market when it comes to fast ingestion of data
  • Write data anywhere in the world and have it automatically synchronized with copies in other locations
  • In-memory capabilities make frequently referenced data instantly available for reads and writes 

The Best at Managing Streaming Financial Data

DataStax is used by top financial enterprises to collect avalanches of streaming data related to investment activity and other financial transactions.

  • Captures financial tick and similar data in the fastest and most storage-efficient manner possible 
  • Advanced security ensures the protection of sensitive financial information
  • An enterprise connector to Apache Kafka™ allows DataStax to be used as a financial repository for Kafka applications 

Successful Logistics Data Management

More than 80% of enterprises today depend upon the management of logistics data to run their business. You’re probably one of them. 

  • Easily track the flow of any business process anywhere in the world with the gold standard in database data distribution 
  • Use DataStax Enterprise search functionality for the collection and analysis of geospatial information, which is vital for logistics management
  • Instantly deliver logistics updates to internal staff or external customers with DataStax advanced performance capabilities   
Video

How to Manage Internet of Things (IoT) Data

Managing IoT data at scale is not a trivial task, and with rapidly evolving 5G infrastructure even more sensors are going to flood data centers with additional data. Are you ready to handle today's IoT data at tomorrow's scale? Are you dealing with disparate data sources? Learn more about how a distributed data architecture can support exponential growth of IoT data, provide the right tools for data analytics and related data sets, and ensure that your customers are relying on real-time data, delivered instantly wherever they are. Learn more at datastax.com/internet-of-things.

Learn More
Blog

Why a Hybrid Cloud Database is Your Key to AI and IoT

Not so very long ago, artificial intelligence (AI) was the stuff of science fiction. AI, at least in the minds of the general public, was relegated to fantastically fantastical sci-fi characters such as:

  • Star Trek's talking computer (A computer that talks? That's crazy. Maybe in a couple hundred years…)
  • The class M-3 model B-9 General Utility Non-Theorizing Environmental Control Robot from Lost in Space (Danger, Will Robinson!)
  • C-3PO from Star Wars ("What have you done? I'm backwards, you filthy furball!")
  • The Terminator from The Terminator ("I'll be back…")

Similarly, the notion of an internet of things (IoT) is also rooted in sci-fi fantasy. From Dick Tracy's "smart" watch to The Jetsons' autonomous and interconnected household devices, the IoT concept was fantasized long before the term was even coined by Kevin Ashton in 1999.

The Hybrid Cloud Future Is Now

Though AI and IoT aren't likely to be abandoned by sci-fi yarn-spinners anytime soon, they no longer exist only within the flickering frames of a movie or between the covers of a fiction book. AI and IoT have already become integral components of our everyday lives. From mission-critical applications such as completely autonomous airliner autopilots to the slightly more whimsical, such as personal assistants like Alexa and Cortana, AI and IoT impact our lives on a daily basis.

Worldwide, investments in IoT technology are expected to reach $1.2 trillion by 2022. The consumer industry is expected to lead the way in IoT spending, followed by the insurance and healthcare industry verticals. Similarly, we're just at the beginning of massive growth in the implementation of AI technology. Gartner recently reported that the number of companies implementing some form of AI tripled in just the past year. Worldwide spending on AI is predicted to more than quadruple from 2017 to 2021, and the same report forecasts that three-fourths of all business enterprises will deploy AI technology by 2021.
AI and IoT: Two Legs of a Tripod

Most enterprises—likely including your company—will be making massive investments in AI and IoT in the coming years. But though tremendous growth is forecast for both AI and IoT separately, the truth is that the two are inherently interlinked. AI and IoT enhance and strengthen each other; they supercharge each other for use cases involving massive amounts of high-velocity data. InformationWeek has labeled IoT and AI "Tech's New BFFs," and describes the teaming of the two technologies as "… a crucial combination that will shape corporate data strategies for years to come."

Put simply, AI cannot reach its full potential to duplicate human-like decision-making without the massive amounts of data that can be gathered through IoT's edge devices. Correspondingly, the full potential of IoT is made possible through AI's decision-making capabilities. Together, AI and IoT form two legs of the tripod upon which the future of information technology rests. But a tripod can't stand on just two legs.

The Third Leg: A Hybrid Cloud Database

What is the common fuel that powers both AI and IoT? Data. Without access to massive amounts of data, the teaming of AI and IoT is of little value. But the simple availability of a huge quantity of data isn't enough. Of equal importance is the speed, reliability, and security with which that data can be accessed. And that brings us to the third leg of the tripod that supports the future (and present) of IT: a hybrid cloud database.

Why a hybrid cloud database? Why not simply a database accessed and managed through a single private or public cloud? Because the single-cloud model cannot unfailingly and consistently provide the speed, reliability, and security that are all essential to AI and IoT. A hybrid cloud database ensures worldwide access to highly responsive data across many geographies.
It provides speed, reliability, and security—and it also offers reduced costs and enhanced operational efficiencies as bonuses. A hybrid cloud database provides the active-everywhere capability that is essential to AI and IoT. In fact, hybrid cloud databases offer a wealth of advantages that make them key to the future success of all your company's applications.

There's No Fiction to This Science…

It's amazing how quickly science fiction can turn into reality. It wasn't so long ago that concepts such as talking computers and self-driving vehicles seemed to belong to the distant future. Now they're part of everyday life. AI and IoT have enabled a wealth of here-and-now technologies that, seemingly only yesterday, would have sounded far off. InfoWorld recently listed a few of them:

  • Medical devices that can automatically defibrillate a malfunctioning heart and simultaneously call 911 for help
  • Automated agricultural combines that can detect and avoid hitting a loose animal (and notify the farmer to herd the stray back home)
  • Instantaneous credit card fraud detection
  • On-demand recommendations for consumers (Netflix recommendations, for example)

These examples just scratch the surface of what's available and, perhaps more importantly, hint at the astounding technological advancements to come. All are made possible with AI, IoT, and—the third leg of the tripod supporting the future of information technology—the hybrid cloud database.

DataStax and Microsoft Azure: The Hybrid Cloud Database Built for Global Enterprises (white paper)
READ NOW

Learn More
Blog

How Fog Computing Powers AI, IoT, and 5G

It wasn't enough to have "the cloud," but now we have fog? What's next—hail? The term may sound a little odd and conjure up images of distant lighthouses in faraway lands, but "fog computing"—a term coined by Cisco in 2012—is actually a "thing," and it's becoming more of a thing every day as the Internet of Things (IoT) grows. 451 Research predicts that the fog computing market could reach more than $18 billion worldwide by 2022, and Statista predicts the number of connected devices worldwide will grow to over 75 billion by 2025. That's a huge market, so clearly we're talking about something important. But first, let's make sure we're clear on what "fog computing" is.

What is fog computing?

Also referred to as "fogging," fog computing essentially means extending computing to the edge of an enterprise's network rather than hosting and working from a centralized cloud. This facilitates the local processing of data in smart devices and smooths the operation of compute, storage, and networking services between end devices and cloud data centers. Fog computing supports IoT, 5G, artificial intelligence (AI), and other applications that demand ultra-low latency and high network bandwidth, operate under resource constraints, and need added security.

Are fog computing and edge computing the same thing?

No. Fog computing always uses edge computing, but not the other way around. Fog is a system-level architecture, providing tools for distributing, orchestrating, managing, and securing resources and services across networks and between devices that reside at the edge. Edge computing architectures place servers, applications, or small clouds at the edge. Fog computing has a hierarchical, layered architecture in which nodes form a network, while edge computing relies on separate nodes that do not form a network.
Fog computing has extensive peer-to-peer interconnect capability between nodes, whereas edge computing runs its nodes in silos, requiring data to travel back through the cloud for peer-to-peer traffic. Finally, fog computing is inclusive of the cloud, while edge computing excludes it.

Why is fog computing needed?

The major cloud platforms rarely experience downtime, but it still happens every now and again; and when it does, it spells big problems and big losses. In May 2018, for example, AWS was knocked offline for 30 minutes. That might not seem like much time, but say you're a large enterprise with 50,000 employees: if none of them can access the data they need to do their jobs for 30 minutes, that's 25,000 lost hours—or 3,125 eight-hour work days. Similarly, in November 2018, Microsoft Azure was down—not just once, but twice—due to multi-factor authentication issues affecting a variety of customers' applications.

Latency, security and privacy, unreliability—all of these are genuine concerns as the number of connected devices increases and puts overwhelming stress on the big cloud providers' networks. Real-time processing, a priority for IoT, also becomes very difficult. These difficulties have led the Internet of Things to require a platform other than the cloud to function optimally—a new infrastructure that can handle IoT transactions without unnecessary risk. Fog computing has emerged as that solution.

How does fog computing work?

Depending on your background, fog computing is probably not as complicated as it may sound. Developers either port or write IoT applications for fog nodes at the network edge. The fog nodes closest to the network edge ingest the data from IoT devices. Then—and this is crucial—the fog IoT application directs different types of data to the optimal place for analysis.
In most cases, time-sensitive data is analyzed on the fog node closest to the things generating it. Data that can wait seconds or minutes is sent to a centralized aggregation node for analysis, after which an action is performed. Data that is less time-sensitive is sent to the cloud for historical analysis and storage—for example, each fog node might send periodic summaries of its data to the cloud for historical or big data analysis.

Figure 1. The WM-FOG software stack: an application layer (user applications) on top of a workflow layer (workflow instances), a system layer (system components), and an entity layer (the system entities: client devices, fog nodes, and the cloud). Reference: http://www.cs.wm.edu

How does the fog interact with the cloud?

The fog and the cloud do a dance that allows each to perform optimally for the IoT use case. The main difference between fog computing and cloud computing is that the cloud is a centralized system, while the fog is a distributed, decentralized infrastructure.

Fog nodes:

  • Receive feeds from IoT devices, using any protocol, in real time
  • Run IoT-enabled applications for real-time control and analytics, with millisecond response times
  • Provide transient storage, often 1–2 hours
  • Send periodic data summaries to the cloud

The cloud platform:

  • Receives and aggregates data summaries from many fog nodes
  • Performs analysis on the IoT data and data from other sources to gain business insight
  • Can send new application rules to the fog nodes based on these insights

When do I need to consider fog computing?

As already stated, fog computing has become an essential aspect of IoT, but it's not necessarily a requirement for all enterprises. Here's a short list to help determine whether you need it for your business.
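The routing rule described here, with the most time-sensitive data handled at the nearest fog node and less urgent data passed to an aggregation node or the cloud, can be sketched as a simple dispatcher. The thresholds and tier names below are illustrative assumptions, not part of any fog standard:

```python
def route(max_wait_ms: float) -> str:
    """Pick a processing tier by how long the data can wait (thresholds illustrative)."""
    if max_wait_ms < 1_000:        # sub-second: act at the nearest fog node
        return "fog-node"
    if max_wait_ms < 60_000:       # seconds to a minute: centralized aggregation node
        return "aggregation-node"
    return "cloud"                 # anything slower: historical / big data analysis

print(route(50))       # fog-node
print(route(5_000))    # aggregation-node
print(route(600_000))  # cloud
```

In a real deployment this decision is made per data type by the fog application itself, but the shape is the same: latency tolerance determines where analysis happens.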
You probably need fog computing if you have:

  • Data collected at the extreme edge: vehicles, ships, factory floors, roadways, railways, etc.
  • Thousands or millions of things across a large geographic area generating data
  • A requirement to analyze and act on this data in less than a second

The benefits of fog computing

Extending the cloud closer to the things that generate and act on data benefits the business in the following ways:

  • Greater business agility: With the right tools, developers can quickly develop fog applications and deploy them where needed. Machine manufacturers can offer Machine-as-a-Service to their customers, with fog applications programming each machine to operate the way the customer needs.
  • Better security: Protect your fog nodes using the same policies, controls, and procedures you use in other parts of your IT environment, along with the same physical security and cybersecurity solutions.
  • Deeper insights, with privacy control: Analyze sensitive data locally instead of sending it to the cloud for analysis. Your IT team can monitor and control the devices that collect, analyze, and store data.
  • Lower operating expenses: Conserve network bandwidth by processing selected data locally instead of sending it all to the cloud.

Reference: Advancing Consumer-Centric Fog Computing Architectures

How does DataStax help?

DataStax Enterprise (DSE) Advanced Replication takes a straightforward approach to solving these challenges. It allows many edge clusters—each acting independently and sized, configured, and deployed appropriately for its location—to send data to a central hub cluster. Data flowing from an edge cluster can be prioritized to ensure the most important data is sent before the less important data. Inconsistent connections are fully accommodated at the edge with a "store and forward" approach that saves mutations until connectivity is restored.
Any type of workload is supported at both the edge and the hub, allowing for advanced search and analytics use cases at remote and central locations. With DSE Advanced Replication, clusters at the edge of the enterprise can replicate data to a central hub to enable both regional and global views of enterprise data. Designed with full awareness of operational challenges such as limited or intermittent connectivity, DSE Advanced Replication was built from the ground up to tolerate real-world failure modes. Read more about DataStax Advanced Replication and how it provides complete data autonomy.

The Power of an Active Everywhere Database (white paper)
READ NOW
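The store-and-forward behavior described above, buffering prioritized mutations at the edge until connectivity returns, can be illustrated with a toy priority buffer. The class and priority scheme below are illustrative, not the DSE Advanced Replication API:

```python
import heapq

class StoreAndForward:
    """Toy edge buffer: holds mutations while offline, flushes highest priority first."""

    def __init__(self):
        self._buffer = []
        self._seq = 0  # tie-breaker so equal-priority mutations flush in arrival order

    def record(self, mutation, priority=1):
        # Lower number = higher priority; buffered until the hub is reachable.
        heapq.heappush(self._buffer, (priority, self._seq, mutation))
        self._seq += 1

    def flush(self):
        """Connectivity restored: drain buffered mutations toward the hub cluster."""
        sent = []
        while self._buffer:
            _, _, mutation = heapq.heappop(self._buffer)
            sent.append(mutation)
        return sent

edge = StoreAndForward()
edge.record("routine telemetry", priority=5)
edge.record("safety alert", priority=0)
print(edge.flush())  # ['safety alert', 'routine telemetry']
```

The key property mirrored here is that an outage loses nothing: mutations accumulate locally and the most important data is sent first once the link to the hub comes back.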

Learn More

DataStax Solutions for Streaming and IoT

Built for Streaming and Time Series Data