The recently discovered security vulnerabilities Meltdown and Spectre threaten to expose sensitive information and have led to fixes in operating systems and CPUs that may affect the performance of DataStax Enterprise.
In our tests, the overall performance impact on DSE of Linux distributions patched for Meltdown and Spectre was usually below 5% and did not exceed 10%. To determine the impact for your production workload, you will need to run tests in your performance test environment with a workload that matches your production workload.
Two recent and severe security vulnerabilities, Meltdown and Spectre, allow exposure of sensitive information. These fundamental flaws are widespread and affect nearly every computer manufactured in the last 20 years. Security researchers are calling them catastrophic, and these vulnerabilities have captured the attention of the whole industry. Exploits that leverage these vulnerabilities are able to read any kind of data, like passwords, private keys, or anything else that resides in memory. Mitigation requires at least installing security updates for the operating system, probably CPU microcode updates, and possibly BIOS updates.
These security flaws arise from features built into chips that help them run faster, and while software patches are available, these patches have an impact on system performance. The measured impact will vary with both the workload and the patches present in a given kernel version. This post is intended to help DataStax customers understand the issues and evaluate the possible performance implications for DataStax Enterprise (DSE).
The implications of the Meltdown and Spectre vulnerabilities are very severe, both in terms of the security breaches they expose users to and in terms of the potential performance impact of mitigating these defects. Vendors continue to provide patches for these vulnerabilities, as initial fixes are improved upon and the patches themselves sometimes require further remediation.
Please note that results mentioned in this blog post apply only to the tests and workloads we have run against our performance test environments. To get the actual numbers for your concrete use cases and workloads, you must run your own tests in your performance test environment.
The security patches are still being improved, and bugs in those patches are still being fixed. In general, and particularly for the Meltdown and Spectre vulnerabilities, DataStax recommends:
- Continuously install future security patches and bug fixes.
- Use the latest kernel versions on both DSE servers and clients, and generally throughout your organization, including network components.
- Ask your operating system vendor for Linux® updates and install those.
- Ask your hardware vendor and operating system vendor for CPU microcode updates and install those.
- Ask your hardware vendor for relevant BIOS updates.
- Expect cloud providers to force restarts due to hypervisor updates.
- Test all updates and changes before putting those changes into production.
- Run continuous performance tests and evaluate the results.
- Since PTI is not available for 32-bit systems, consider upgrading to 64-bit.
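As a starting point for the recommendations above, recent kernels report their mitigation status via sysfs; a minimal sketch (the paths are the standard kernel interface, but availability depends on your kernel version - older unpatched kernels lack the reporting feature entirely):

```shell
#!/bin/sh
# Print Meltdown/Spectre mitigation status. The sysfs interface exists only
# on kernels that ship the reporting feature (4.15+ and patched stable
# branches); on older kernels, fall back to checking the boot log.
if [ -d /sys/devices/system/cpu/vulnerabilities ]; then
    # One line per vulnerability, e.g. "meltdown:Mitigation: PTI"
    grep -r . /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null
else
    echo "sysfs vulnerability reporting not available on this kernel"
fi
# PTI also announces itself at boot; may print nothing if dmesg is restricted.
dmesg 2>/dev/null | grep -i 'page table isolation' || true
```

Running this on each node before and after an update is a quick way to confirm the patches actually took effect.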
The website https://meltdownattack.com/ contains more information about both vulnerabilities, their impact, and links to vendors.
General performance impact
Many variables go into whether there will be a performance impact on a user’s system, and how large said impact may be.
The Linux kernel version, CPU microarchitecture, and CPU microcode version have the biggest impact with respect to the patches mitigating the Meltdown and Spectre security vulnerabilities. Note that different kernel versions have different performance characteristics even with PTI (page table isolation, the Linux kernel feature that mitigates the Meltdown security vulnerability) enabled.
The approaches implemented in the numerous Linux kernel versions since the beginning of 2018 have provided various performance profiles. Most benchmarks imply that the performance regressions, if noteworthy, are less severe with newer Linux kernel versions and CPU microcode updates.
Please note that in virtualized environments, and therefore in the cloud, both the hypervisor, which is controlled and maintained by the cloud provider, and the guest operating system are potentially impacted by patches for these vulnerabilities, as the patches apply in both contexts.
The impact of these various kernel versions for the hypervisor and running instances has changed in recent weeks. Initially, users reported a measurable and noticeable performance impact on at least one of the big cloud providers. Since then, more recent benchmarks and patches show a still measurable but far less severe performance impact.
As with all performance tests, multiple test runs are needed to get comparable results. Individual tests need to run for an extended time to bring the system into a “steady state”. To eliminate unwanted effects on tail latencies (for example, impacts caused by random data, unbalanced distribution, or GC kicking in), multiple test runs are needed and the tail latencies should be carefully examined.
DataStax ran a series of tests for different workloads with Linux PTI enabled and PTI disabled on bare metal. Due to the nature of PTI, we measured a slight increase in system CPU utilization in all tests with PTI. With a hypervisor, an increase in steal CPU utilization (CPU time “stolen” by the hypervisor for other VMs) might also become visible.
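For A/B runs like these, PTI can be toggled with a kernel boot parameter; a sketch assuming a GRUB-based distribution (`pti=off`/`nopti` are real x86 kernel parameters, but the GRUB file location and update command vary by distribution):

```shell
#!/bin/sh
# Show whether PTI was disabled on the current boot: look for 'nopti' or
# 'pti=off' in the kernel command line.
cat /proc/cmdline

# To disable PTI for a test run (requires reboot), e.g. on Debian/Ubuntu:
#   add 'pti=off' to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
#   then: update-grub && reboot
# Verify afterwards (on kernels with the sysfs reporting feature):
cat /sys/devices/system/cpu/vulnerabilities/meltdown 2>/dev/null || \
    echo "sysfs vulnerability reporting not available"
```

Remember to keep everything else (microcode, DSE version, data set) fixed between the PTI-on and PTI-off runs so the delta is attributable to PTI alone.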
Concrete impact is both environment and workload specific, ranging from 0% to 10% for request latency, and cannot be determined without testing on that environment and workload. Thus far in our testing, the observed range of impact matches the outcome of other independent tests for comparable systems and workloads.
In general, when a cluster is not already overloaded or operating well beyond the bounds of advisable provisioning, the measurable impact of PTI should not exceed the range mentioned above. While performing request latency tests, ensure that you do not overload any component (servers, disks, network, clients, etc) - there must always be enough operational headroom.
The performance impact can materialize on the server side, on the client side, and maybe in a network or storage component.
Even for “database only” workloads, i.e. only writing and reading data, DataStax recommends installing all available security patches despite the potential performance impacts.
Servers used for the tests: bare metal (no hypervisor), 2x Intel® Xeon® CPU E5-2650L v3 @ 1.80GHz, 128GB RAM, Samsung® 850 SSDs, Linux 4.4.0-109-generic
For the DataStax DSE driver tests, the clients ran on the same hardware setup as the servers. All other components remained unchanged between the test runs with PTI on and PTI off. DataStax Enterprise 5.1.5 or 5.1.6 was used during the tests.
Tests and impact of PTI to the DataStax distribution of Apache Cassandra®
Both performance (constant throughput) and load (maximum throughput) tests were run. The sequence for both is:
- initial data loading for 1 hour
- pause to let compactions settle
- 90/10 workload (90% writes/10% reads) for one hour
- pause to let compactions settle
- 50/50 workload (50% writes/50% reads) for one hour
All requests used consistency level LOCAL_QUORUM against a 5 node cluster, 1 data center, replication factor 3, unthrottled compaction.
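A sequence like the one above can be sketched with cassandra-stress, the benchmarking tool that ships with DSE and Apache Cassandra; the node address and exact durations below are illustrative assumptions, not the precise parameters of our runs (shown in dry-run form, printing each command instead of executing it):

```shell
#!/bin/sh
# Sketch of the workload sequence using cassandra-stress.
NODE=10.0.0.1   # hypothetical contact point for the 5-node cluster

run() { echo "+ $*"; }   # dry run: print the command; remove to execute for real

# 1) initial data loading for 1 hour at LOCAL_QUORUM
run cassandra-stress write duration=60m cl=LOCAL_QUORUM -node "$NODE"
# (pause to let compactions settle; watch 'nodetool compactionstats')

# 2) 90/10 workload (90% writes / 10% reads) for one hour
run cassandra-stress mixed 'ratio(write=9,read=1)' duration=60m cl=LOCAL_QUORUM -node "$NODE"
# (pause again to let compactions settle)

# 3) 50/50 workload for one hour
run cassandra-stress mixed 'ratio(write=5,read=5)' duration=60m cl=LOCAL_QUORUM -node "$NODE"
```

Run the identical script with PTI on and PTI off, and compare the latency percentiles that cassandra-stress reports.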
For the performance (constant throughput) tests, we see a regression in median latency between 1% and 3% for data loading, write & read for 90/10 and 50/50.
Data priming (y axis is latency in ms, x axis is time):
90/10 workload (y axis is latency in ms, x axis is time):
50/50 workload (y axis is latency in ms, x axis is time):
For the load (maximum throughput) tests, we see a regression of the achievable number of operations/second between 0.5% and 3.5% for data loading, write & read for 90/10 and 50/50.
Tests and impact of PTI to DSE Search
Insertion: Insert 500 million rows on a node with a search index enabled. Measure ops/s and latency.
Reindexing: Trigger reindex on a table containing 100-500 million rows. Measure reindex time.
QPS: Query 3-node cluster with 100 million rows using various query types. Measure ops/s and duration.
Tests with real-time indexes for writes and queries see a marginal regression in request latency of 1% - 2%. The same tests with near-real-time indexes see a regression in request latency of 3% - 5%. Reindexing time sees a regression of 0% to 9%. Query performance differences appear to be negligible.
Tests and impact of PTI to DSE Analytics
The test scenario was a 5-node cluster with RF=3, loading 1B rows spread across 1M DSE/Cassandra partitions. We used spark-stress to compare two things: data loading and data extraction. The tests were run against a slightly different hardware setup: 16 CPUs, 60GB RAM, 1500GB SSD.
Performance comparisons evaluated client-side latency, throughput, and operation runtime. These comparisons showed a 6% delta for request latency. Server metrics such as CPU utilization, read/write latencies, and GC activity all looked similar between these experiments.
Tests and impact of PTI to DSEFS
DSEFS performance was tested on a DSE 5.1.5 node. fs-stress was used to write and then read 1,000 640MB files. Both the write and read step took ~30 mins (for a total test time of 1 hr). Several trials were performed.
All the trials showed an increase in total CPU usage with PTI on (from just above 4% with PTI off to just below 4.5% with PTI on in our tests). We haven't seen a noteworthy regression in latency.
Tests and impact of PTI to DSE Graph
Two types of tests against DSE Graph were performed. First, OLAP: we run a single query and measure how long it takes, repeating with different queries. Second, OLTP: we run a query at different request rates and measure how long it takes, repeating with 1-hop, 2-hop, 3-hop, and id lookup queries.
OLAP: 2-10% increase in latency
DSE GraphFrames: up to 3% increase in latency
OLTP: < 10% increase in latency
Tests and impact of PTI to DSE Drivers
Basic throughput and latency tests were conducted with patched and unpatched client machines, while server machines were left unpatched to isolate the driver contribution. Tests were executed using the latest DSE drivers at the time of this writing: DSE Java driver version 1.4.2, DSE Node.js driver version 1.4.0, and DSE Python driver version 2.3.0.
Enabling PTI does not impact the drivers significantly. In workloads where the CPU is not already saturated, we observe the expected marginal increase in system CPU utilization. Latency and throughput are essentially unchanged for scalable drivers. For drivers that are more often CPU-bound (e.g. Node.js or Python, the latter partly due to the GIL, the “global interpreter lock”), users may observe somewhat diminished throughput (single-digit percentages). This effect can be mitigated to some degree by tuning the write coalescing thresholds to a higher value, which will be done by default in upcoming versions of the drivers.
Meltdown and Spectre
A fundamental security design principle in today’s operating systems is to isolate processes from each other and to prevent any process from reading the kernel’s memory.
Meltdown (CVE-2017-5754) allows a program to break the isolation of applications and the operating system - the barrier “melts”.
Spectre comes in two variants (CVE-2017-5753 and CVE-2017-5715) and breaks the isolation between applications. Spectre is much more complex to use as an exploit.
Unlike other security vulnerabilities, it is at best very hard to distinguish real exploits using Meltdown or Spectre from regular benign software, so even the best and most up-to-date antivirus software effectively has no chance to detect these exploits. These exploits do not leave any traces.
All kinds of systems (servers, PCs, set-top boxes, laptops, NAS, SAN, routers, network hardware, etc) are likely affected by at least one of these three vulnerabilities; any system should be suspected of being vulnerable unless mitigation strategies are specifically pursued (kernel patches, microcode updates, etc).
An attacker needs to be able to execute code directly on affected machines. Proof of concepts for remote execution via browsers exist and browser vendors have already started to publish security updates. It should be assumed that exploits for Meltdown and both Spectre variants already exist.
The only fix for Meltdown and both Spectre variants is a combination of operating system updates and CPU microcode/BIOS updates.
All fixes, current and upcoming, influence the performance of operations that “need” the operating system - i.e. the Linux kernel. Therefore operations like disk I/O, network I/O and context switches are affected.
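A crude way to see this effect on your own hardware is to compare syscall-heavy and syscall-light variants of the same data copy with dd, with PTI on versus off; the byte counts below are arbitrary examples:

```shell
#!/bin/sh
# With bs=1, dd issues one read+write syscall pair per byte, so copying the
# data costs vastly more kernel entries/exits (which PTI makes more
# expensive) than with a large block size.
time dd if=/dev/zero of=/dev/null bs=1 count=100000 2>/dev/null   # ~200k syscalls
time dd if=/dev/zero of=/dev/null bs=1M count=100 2>/dev/null     # ~200 syscalls
```

On a PTI-enabled kernel, the small-block run slows down noticeably more than the large-block run, because its runtime is dominated by syscall overhead rather than data movement.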
Parts of the vulnerabilities have been known to many vendors for months, while others were only recently unveiled; even operating system vendors had little time to build and publish fixes for some of the variants. Unlike the usual Linux development procedure, the fixes for Linux made it into even quite old and stable branches within a few days despite the invasive nature of the changes. This probably speaks for itself regarding the severity of these vulnerabilities.
Security fixes for the Meltdown vulnerability are already available in Linux; however, there is a wide range of fixes. The initial patch, called “KAISER”, is already outdated. Recent Linux versions released since January 2018 contain a feature called KPTI (“kernel page table isolation”), or just PTI (“page table isolation”), that is based on KAISER - KPTI and PTI are synonyms.
Different kernel branches (e.g. 4.4.x, 4.9.x, 4.14.x, etc.) have different PTI implementations that can leverage PCID to varying extents. PCID (“process context identifiers”) is a processor feature in Intel CPUs, introduced in mid-2013 with the Haswell microarchitecture. PCID allows the system to omit “translation lookaside buffer” (TLB) flushes - i.e. it prevents a lot of CPU cache flushes. The performance implications of PTI depend heavily on the kernel version and the CPU microarchitecture and microcode version. PCID support for PTI was introduced with 4.14.11 and 4.15.
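Whether your CPUs expose PCID (and INVPCID, which newer PTI implementations can use for cheaper, more targeted TLB invalidation) can be checked from the CPU flags in /proc/cpuinfo:

```shell
#!/bin/sh
# Check the CPU feature flags relevant to cheap PTI on this machine.
if grep -qw pcid /proc/cpuinfo; then echo "pcid: yes"; else echo "pcid: no"; fi
if grep -qw invpcid /proc/cpuinfo; then echo "invpcid: yes"; else echo "invpcid: no"; fi
```

If both flags are absent (pre-Haswell hardware, or a hypervisor that hides them from guests), expect the PTI overhead to sit toward the higher end of the ranges discussed in this post.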
The first kernel version with PTI was published on Jan 2nd, 2018 - followed by a series of various bug-fixes. PTI was initially released for 64-bit systems. PTI support for 32-bit systems was not available for Linux as of this posting.
Systems with Intel CPUs and CPUs from other vendors (AMD® excluded) are vulnerable to Meltdown.
Spectre v1 (CVE-2017-5753)
There is currently no patch available in Linux, and it is unlikely that a patch will even make it into Linux 4.15.
Spectre v2 (CVE-2017-5715)
There are two approaches to mitigating Spectre v2. The first option: Intel updates its CPUs via microcode updates. The performance impact of this depends on the CPU (microarchitecture) itself, and Linux uses the new CPU features introduced by the microcode update. Existing native code binaries don’t need to be updated.
Google® has proposed a new technique called “retpoline” (“return trampoline”), which requires new native code binaries of all executables and (shared) libraries, compiled with updated compilers.
Depending on how the fix for Spectre v2 finally shakes out, effectively every piece of compiled software might need to be updated or upgraded. Regardless of the mitigation strategy the industry ends up settling on, a Linux kernel update will be necessary.
Many moving parts influence the performance of a system:
- Actual workload and system state.
- Linux kernel version and enabled features.
- CPU microarchitecture and microcode. Microcode updates might come with operating system updates.
- Number of CPUs (number of processor chips / sockets).
- Hypervisor and its version and enabled features (users usually have no control over this in a cloud environment).
- Network components.
The overall performance impact of Linux PTI (“page table isolation”) on DSE in our tests measuring request latency was usually below 5% and did not exceed 10%. To determine the impact of PTI for your production workload, you will need to run tests in your performance test environment with a workload that matches your production workload.
All this testing would have been impossible for a single person to accomplish, and a lot of colleagues helped out with infrastructure, performance testing individual components, reviewing everything, and reiterating. Many thanks to everybody who was involved and helped put this post together!