
Joe Chang

Hardware update Aug 2013

Intel Xeon E3 12xx v3 - Haswell 22nm
Intel Xeon E3 12xx v3 processors based on Haswell 22nm came out in Q2-2013. Dell does not offer this processor in the PowerEdge T110, holding to the E3-12xx v2 (Ivy Bridge 22nm) and below. The HP ProLiant ML310e Gen8 v2 does offer the Intel E3-12xx v3 processor.

Is there a difference in performance between Sandy Bridge (32nm), Ivy Bridge (22nm) and Haswell (22nm)?
Ideally, as far as SQL Server is concerned, we would like to see TPC-E and TPC-H benchmarks, but very few of these are published, and almost never for single-socket systems. The other benchmark is SPEC CPU integer, but we must be very careful to account for the compiler. If possible, use the same compiler version, but there are usually compiler advances between processor generations. For SQL Server purposes, discard the libquantum result and look only at the others. It is possible to find Sandy Bridge and Ivy Bridge Xeon E3-1220 3.1GHz (original and v2) results on a matching compiler, which seem to show about a 5% improvement. The only v3 result is on the next compiler version (from Intel C++ 12.1.0.225 to 13.1.1.163), showing about a 10% gain, so we do not know how much can be attributed to the processor architecture versus the compiler.
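Below is a minimal sketch of that comparison method; the subtest names are real SPEC CPU2006 components, but the ratios are hypothetical placeholders, not published scores. The point is only the procedure: drop libquantum, then take the geometric mean of the remaining subtests.

# Sketch: compare two SPECint-style results while excluding libquantum.
# Subtest ratios below are made-up placeholders for illustration only.
from math import prod

def geomean_excluding(ratios, exclude=("462.libquantum",)):
    kept = [r for name, r in ratios.items() if name not in exclude]
    return prod(kept) ** (1.0 / len(kept))

e3_1220_v1 = {"400.perlbench": 40.0, "401.bzip2": 30.0, "462.libquantum": 250.0, "473.astar": 25.0}
e3_1220_v2 = {"400.perlbench": 42.0, "401.bzip2": 31.5, "462.libquantum": 290.0, "473.astar": 26.2}

gain = geomean_excluding(e3_1220_v2) / geomean_excluding(e3_1220_v1) - 1
print(f"gain excluding libquantum: {gain:.1%}")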

In any case, it would be nice if Dell would ditch the external graphics and use the Intel integrated graphics in v3. I know this is a server, but I use it as a desktop because it has ECC memory.

Intel Xeon E5 26xx and 46xx v2 - Ivy Bridge 22nm - in Sep 2013
Intel Xeon E5 26xx and 46xx v2 processors based on Ivy Bridge 22nm, with up to 12 cores, supporting 2- and 4-socket systems respectively, should come out soon (September), superseding the original Xeon E5 (Sandy Bridge 32nm). The 2600 series will have 12-core 2.7GHz, 10-core 3GHz and 8-core 3.3GHz models at 130W. The general pattern seems to be that E5 processors follow E3 and desktop by 12-18 months?

Intel Xeon E7 v2? - Ivy Bridge 22nm - in Q1 2014
There will be an E7 Ivy Bridge with up to 15 cores in Q1 2014 for 8-socket systems, replacing Westmere-EX. I am not sure if it will be glue-less. http://download.intel.com/newsroom/kits/idf/2013_spring/pdfs/IDF-Beijing-2013-Server-FactSheet.pdf The current strategy appears to be that there will be an E7 processor every other generation?

Storage Systems

EMC VNX2 in Sep 2013?
VNX2 was mentioned as early as Q3 2012. I thought it would come out at EMC World 2013 (May). Getting 1M IOPS out of an array of SSDs is not an issue, as 8(NAND)-channel SATA SSDs can do 90K IOPS each. Similarly, revving the hardware from 1-socket Westmere-EP to 2-socket Sandy Bridge-EP poses no problems. Perhaps, however, changing the software stack to support 1M IOPS was an issue?
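To put the 1M IOPS figure in perspective, a quick back-of-envelope calculation (drive-level only, ignoring controller, RAID and software-stack overhead):

# If one 8-channel SATA SSD delivers roughly 90K IOPS, how many drives are
# needed before the media itself is the limit for a 1M IOPS target?
target_iops = 1_000_000
iops_per_ssd = 90_000
ssds_needed = -(-target_iops // iops_per_ssd)  # ceiling division
print(f"~{ssds_needed} SSDs to reach {target_iops:,} IOPS at the drive level")
# -> about 12 SSDs, which is why the hard part is the software stack, not the media.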
EMC Clariion used Windows XP as the underlying OS. One might presume VNX would be Windows 7 or Server? Or would EMC have been inclined to unify the VMAX and VNX OSs?
In any case, the old IO stack intended for HDD arrays would probably be replaced with one designed for SSD, with much deeper queues, along the lines of NVMe. It would not be unexpected if several iterations were required to work out the bugs in a complex SAN storage system?
see http://sqlblog.com/blogs/joe_chang/archive/2013/02/25/emc-vnx2-and-vnx-future.aspx

IBM FlashSystem 720 and 820: 5/10TB SLC, 10/20TB eMLC (raw capacity is 50% greater, with or without RAID), with 4x8Gbps FC or 4x40Gbps QDR InfiniBand interfaces.
http://www-03.ibm.com/systems/storage/flash/720-820/index.html

HP MSA 2040 with four 16Gbps or 8Gbps FC ports.

I still prefer SAS in direct-attach storage, or if it must be a SAN, then InfiniBand.
FC, even at 16Gbps, is just inconvenient in not properly supporting multi-lane operation.
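A rough nominal-bandwidth comparison behind that preference (line rates only; 8b/10b and 64b/66b encoding overheads are ignored, and a SAS connection is assumed to be the usual x4 wide port while an FC port is a single lane):

# Nominal aggregate bandwidth per connection, ignoring encoding overhead.
def aggregate_gbps(lanes, gbps_per_lane):
    return lanes * gbps_per_lane

print("SAS 6Gbps x4  :", aggregate_gbps(4, 6), "Gbps per connection")
print("SAS 12Gbps x4 :", aggregate_gbps(4, 12), "Gbps per connection")
print("FC 16Gbps     :", aggregate_gbps(1, 16), "Gbps per port")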

Storage Components

Crossbar made a news splash with the announcement of Resistive RAM (RRAM or ReRAM) non-volatile memory, with working samples from a production fab partner. Products should be forthcoming. Since this is very different from NAND, it would require a distinct PCI-E or SATA RRAM controller, analogous to the flash controllers for NAND.

see Crossbar-RRAM-Technology-Whitepaper-080413

Current thinking is that NAND flash technology may be near its effective scaling limits (increasing bit density). Any further increase leads to higher error rates and lower endurance. My view is that for server products, 25nm or even the previous generation is a good balance between cost and endurance/reliability; the 20nm technology should be the province of consumer products. Some companies are pursuing Phase-change Memory (PCM). Crossbar is claiming better performance, endurance and power characteristics for RRAM over NAND.

Seagate lists 1200GB and 900GB 10K 2.5in HDDs, along with an enterprise version of the 7200 RPM 4TB 3.5in form factor HDD. HP lists these as options on their ProLiant servers; Dell does too.
I would think that a 2TB 2.5in 7.2K disk should be possible?

Dell HDD and SSD pricing:
7.2K 3.5in SATA: 1TB $269, 2TB $459, 4TB $749
7.2K 3.5in SAS: 1TB $369, 2TB $579, 3TB $749, 4TB $939
10K 2.5in SAS: 300GB $299, 600GB $519, 900GB $729, 1200GB $839
6Gbps MLC SAS SSD: 800GB $3499, 1.6TB $6599
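Working those list prices into rough $/TB (decimal TB, list prices as quoted above; this is only to compare the product lines, not exact street cost):

# Price per TB from the Dell list prices quoted above.
pricing = {
    "7.2K 3.5in SATA":   {1: 269, 2: 459, 4: 749},
    "7.2K 3.5in SAS":    {1: 369, 2: 579, 3: 749, 4: 939},
    "10K 2.5in SAS":     {0.3: 299, 0.6: 519, 0.9: 729, 1.2: 839},
    "6Gbps MLC SAS SSD": {0.8: 3499, 1.6: 6599},
}
for line, items in pricing.items():
    for tb, usd in items.items():
        print(f"{line:18s} {tb:>4}TB  ${usd:>5}  -> ${usd / tb:,.0f}/TB")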

Samsung described the idea of using a small portion of an MLC NAND as SLC to improve write performance in certain situations. So apparently NAND designed as MLC can also be operated as both SLC and MLC, perhaps on a per-page or per-block basis. I am thinking this feature is worth exposing?
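As a rough illustration of the trade-off (hypothetical 512GB drive and assumed fractions), running some MLC blocks in SLC mode costs half the capacity of those blocks:

# SLC stores 1 bit/cell vs 2 bits/cell for MLC, so blocks switched to SLC mode
# hold half as much. Numbers are made up, just to show the shape of the trade-off.
def usable_capacity_gb(mlc_capacity_gb, slc_fraction):
    return mlc_capacity_gb * (1 - slc_fraction) + mlc_capacity_gb * slc_fraction / 2

for f in (0.0, 0.05, 0.10, 0.25):
    print(f"{f:>4.0%} of blocks in SLC mode -> {usable_capacity_gb(512, f):.0f} GB usable of 512 GB MLC")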

The Samsung 2013 Global SSD Summit was in July. The video is on YouTube; I cannot find a PDF. PCI-E interface in a 2.5in form factor, i.e. NVMe. Tom's Hardware seems to have the best coverage.
http://www.tomshardware.com/reviews/samsung-global-ssd-summit-2013,3570.html

Supermicro is advertising 12Gbps SAS in their products, presumably the next generation of servers will have it.

There is a company with an SSD product attaching via the memory interface. There is such a huge disparity in characteristics between DRAM and NAND that I would have serious concerns. The Intel Xeon E5 2600/4600 processors have 40 PCI-E gen 3 lanes, capable of supporting 32GB/s of IO bandwidth, so I don't see the need to put NAND on the memory channel.
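For reference, a quick check on that bandwidth figure (PCI-E gen 3 is 8 GT/s per lane with 128b/130b encoding; practical throughput is lower due to packet headers and flow control, which is where a working figure of around 32GB/s comes from):

# Raw PCI-E gen 3 bandwidth for the 40 lanes on Xeon E5-2600/4600.
lanes = 40
gts = 8e9                       # transfers per second per lane
payload_bits = gts * 128 / 130  # after 128b/130b encoding
per_lane_GBps = payload_bits / 8 / 1e9
print(f"raw per lane : {per_lane_GBps:.2f} GB/s")
print(f"raw x{lanes}      : {per_lane_GBps * lanes:.1f} GB/s per direction")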

Published Thursday, August 08, 2013 12:15 AM by jchang
