Last year, all of the Intel SSDs we reviewed used third-party controllers supplied by either Marvell or LSI (SandForce). That’s what makes the Intel SSD DC S3700 we’re reviewing today so interesting: rather than continuing with third-party controllers (used as recently as the Intel 335 Series released in late October 2012), Intel is using the DC S3700 to introduce an all-new proprietary first-party controller – the first time the company has done so since the X25-M in 2008.
We should point out first that the 200GB and 800GB DC S3700 drives we’ll be looking at today are intended for the enterprise market. Although they use MLC flash, it is what Intel calls “HET MLC” or “High Endurance Technology” MLC. Most consumer drives use MLC flash rated in the 3,000-5,000 program/erase cycle range. That gives a drive like the 240GB Kingston HyperX 3K a total bytes written (TBW) rating of 153.6TB, for instance. To reach that many bytes written within 5 years, you’d have to write 86GB of random data to it every single day – and that five-year span already runs two years past the drive’s original warranty.
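For the curious, that figure comes straight from back-of-envelope arithmetic – assuming the 153.6TB rating is counted in binary terabytes of 1,024GB each (with decimal terabytes it works out closer to 84GB per day):

$$\frac{153.6 \times 1{,}024\ \text{GB}}{5 \times 365\ \text{days}} \approx 86\ \text{GB per day}$$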
The DC S3700, on the other hand, is rated for “10 full drive writes per day over the 5-year life of the drive”, which means the 200GB version would last the same five years with 2,000GB written to it every single day. That’s quite a difference, but for most desktop users standard MLC is sufficient.

However, just because these are expensive enterprise drives doesn’t mean you shouldn’t be paying attention to them. I fully expect Intel to use this controller, with some firmware tweaks and standard MLC flash, in a future line of performance consumer drives. This is exactly what they did with the SSD 710 (enterprise) and SSD 320 (consumer) before moving on to third-party controllers.
Whenever this does happen, performance likely won’t be the same, but we can at least get an idea of what to expect from Intel in the coming years.
Consistency: The True SSD Bottleneck
It would appear that while shipping drives based on Marvell and LSI controllers, Intel spent the last few years designing its own, with a focus on what it considers the most important aspect of SSD performance – consistency. It’s not something that comes up often in quick benchmarks of brand-new SSDs, but to us it has been very important to take steady state performance into consideration.
Just about any consumer drive can pull impressive numbers when tested in a fresh state, easily meeting or exceeding its listed specifications. However, due to the way NAND flash works, a drive will only get slower over time as it is used and filled with data. Over the years, controllers have added technology to stave off this performance degradation as much as possible, and having extra flash that isn’t visible to the end user helps mitigate it quite a bit. Performance is further kept consistent by employing techniques like idle garbage collection, TRIM, and data compression. It is inevitable, however, that a drive will keep getting slower until it reaches its “steady state” – the point at which the controller can keep performance in check without it dropping further.

We have always looked at steady state performance separately in our SSD reviews, which is why our top SSD recommendations are usually SandForce-based drives such as the Intel 520 or Kingston HyperX 3K. Drives like the Samsung 830 and Crucial m4 are highly popular and can boast some impressive numbers when new, but from what we have seen they don’t handle steady state nearly as well as the SandForce drives.
You will see a very good example of this later in the review, because from now on we’ll be looking at steady state performance and consistency even more closely. We will still test basic desktop performance in a fresh state, but we will also be including new charts that look at the performance degradation curve over time.
Before getting to that, we’ll take a moment to address performance consistency. This has been an important metric to consider on the enterprise side for a long time, and I think consumers need to consider it as well.
To show performance consistency, we are going to use the same testing method Intel employs, using IOMeter. We first fill each drive with sequential 128KB data, then run 4K random writes at a queue depth of 32 for about half an hour. By recording IOPS every second, we can plot what happens when a drive’s spare area fills up and how it handles that scenario.
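If you want to reproduce the charts yourself, the plotting side is straightforward. Below is a minimal Python sketch – it assumes you have already exported the per-second IOPS samples from your benchmarking tool into a plain two-column CSV of (elapsed seconds, IOPS); the file name and layout are placeholders rather than IOMeter’s native result format.

```python
# consistency_plot.py - minimal sketch for visualizing per-second IOPS samples.
# Assumes a two-column CSV of (elapsed_seconds, iops) exported from your
# benchmarking tool; IOMeter's own result files would need converting first.
import csv
import matplotlib.pyplot as plt

def load_iops(path):
    """Read (seconds, iops) pairs from a simple CSV log."""
    seconds, iops = [], []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue  # skip blank lines
            seconds.append(float(row[0]))
            iops.append(float(row[1]))
    return seconds, iops

# "s3700_4k_qd32.csv" is a hypothetical log file name
seconds, iops = load_iops("s3700_4k_qd32.csv")

plt.plot(seconds, iops, linewidth=0.5)
plt.xlabel("Elapsed time (seconds)")
plt.ylabel("4KB random write IOPS (QD32)")
plt.title("Steady state consistency")
plt.ylim(bottom=0)  # keep zero on the axis so performance drops stand out
plt.savefig("consistency.png", dpi=150)
```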
What you are looking for in these graphs is the flattest line possible, particularly after it hits the performance wall once the spare area is full.
- Intel DC S3700 800GB
- Intel DC S3700 200GB
- Intel 520 240GB
- OCZ Vertex 4 256GB
- Samsung 830 256GB
- Crucial m4 512GB
- Intel 335 240GB
- Kingston HyperX 3K 240GB

As you can see, how each drive handles this test varies dramatically.
The DC S3700 drives perform similarly to each other – the 200GB version ‘hits the wall’ sooner, but levels out after a few minutes. This happens much later on the 800GB drive, and it appears its performance would continue to increase, likely leveling out if we were to continue the test.
We have included three SandForce-based drives in this test, and what’s interesting is that one of them (the Intel 335) handles it quite differently from the other two. The Intel 520 and Kingston HyperX 3K drop to a steady state after about 3-5 minutes, but maintain pretty consistent performance after that at just over 60,000 IOPS. The Intel 335 appears to do about the same at first, but after 15 minutes it loses a fair amount of its consistency, and another 10 minutes later consistency drops even further, to the point where it sits in the 5,000 IOPS range (while occasionally still able to hit 60,000 IOPS).
We know that the OCZ Vertex 4 can pull impressive numbers in a fresh state. Unfortunately, after filling the drive with sequential data, that state doesn’t last long in this test. After about 5 minutes, though, it finds its way and performs at a very consistent (albeit somewhat slow) 33,000 IOPS.
As for the Samsung 830, the darling of the hardware community last year, it is already at a full steady state before we even begin this test. It isn’t able to find a truly consistent level, and its result here is unimpressive to say the least.
The same can be said for the Crucial m4, another favourite that is highly touted on forums. It does well for a few minutes, then performance falls off a cliff completely. It can still hit the same 60,000 IOPS, but most of the time it is stuck in the <100 IOPS range.

It’s no coincidence that the drives that over-provision (i.e. give the end user access to only a portion of the flash installed on the drive) perform the best in this test. In fact, if you have a drive that exposes all of its installed memory, I would recommend over-provisioning it manually to avoid this issue. If you have a 256GB Crucial m4 or Samsung 830, for instance, it would be a great idea to format it with a 240GB partition and leave the rest unformatted to maintain consistent performance. It’s up to you to decide whether the extra capacity is as important as consistent performance, however.
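To put a rough number on that suggestion – and assuming the controller can use the untouched, unpartitioned space as extra spare area – shrinking a 256GB drive to a 240GB partition sets aside roughly

$$\frac{256 - 240}{256} \approx 6\%$$

of the advertised capacity as additional over-provisioning, on top of whatever the manufacturer already reserves.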
We’ll be looking more closely at steady state performance later in the review as well. For now, let’s rip these open and see what a $2000 SSD looks like inside.
