SQLblog.com - The SQL Server blog spot on the web

Jimmy May

  • Spooked by Columnstore? See My Halloween Weekend Preso at #SQLSatOregon

    This Halloween weekend I continue my efforts to make the world of SQL DWs a better place—one table at a time—via evangelizing Columnstore at #SQLSaturday337 in Portlandia.

    Columnstore Indexes in SQL Server 2014: Flipping the DW /faster Bit

    Unless you’ve been in a crypt since last All Hallows Eve, you likely know this is one of my staples. 

    Register, see the schedule, or see the event home page on the SQL Saturday site.  I’ll look forward to seeing you here:

    Mittleman Community Center
    6651 SW Capitol Highway
    Portland, OR 97219

    Kudos to Arnie Rowland (@ArnieRowland), Paul Turley (@Paul_Turley), Theresa Iserman (@TheresaIserman), Vern Rabe, Rob Boek (@robboek), & the rest of the Oregon SQL Users Group (@osqld) Leadership Team for their superb organizational efforts.


    SQL Saturday Oregon All-Stars: Speaker Tom Roush (@GEEQL)
    flanked by Leadership Team members Theresa Iserman (also a speaker) & MVP Arnie Rowland.

  • Two Presentations This Weekend at #SQLSaturday349 in Salt Lake City

    I have the privilege of being selected to fill two slots this weekend at #SQLSaturday349 in Salt Lake City.  The venue is simultaneously hosting the Big Mountain Data event.

    My talks are:

    To the Cloud, Infinity, & Beyond: Top 10 Lessons Learned at MSIT

    &

    Columnstore Indexes in SQL Server 2014: Flipping the DW /faster Bit

    The latter is one of my staples.  The former is a new presentation, a preview of my forthcoming delivery for PASS Summit14.

    Register, see the schedule, or see the event home page on the SQL Saturday site.  I’ll look forward to seeing you here:

    Spencer Fox Eccles Business Building
    1768 Campus Center Drive
    Salt Lake City, UT 84112

    Kudos to Pat Wright (blog | @sqlasylum) & crew for their crazy efforts (pardon the pun) in coordinating the event.

    TJay Belt (@tjaybelt), Andrea Alred (@RoyalSQL), & Ben Miller (@DBADuck), keep your peepers peeled—I’m on the way!

  • AdventureWorks 2014 Sample Databases Are Now Available

     

    Where in the World is AdventureWorks?

    Recently, SQL Community feedback from Twitter prompted me to look in vain for SQL Server 2014 versions of the AdventureWorks sample databases we’ve all grown to know & love.

    I searched Codeplex, then used the bing & even the google in an effort to locate them, yet all I could find were samples on different sites highlighting specific technologies, an incomplete collection inconsistent with the experience we users had learned to expect.  I began pinging internally & learned that an update to AdventureWorks wasn’t even on the road map.

    Fortunately, SQL Marketing manager Luis Daniel Soto Maldonado (t) lent a sympathetic ear & got the update ball rolling; his direct report Darmodi Komo recently announced the release of the shiny new sample databases for OLTP, DW, Tabular, and Multidimensional models to supplement the extant In-Memory OLTP sample DB. 

    What Success Looks Like

    In my correspondence with the team, here’s how I defined success:

    1. Sample AdventureWorks DBs hosted on Codeplex showcasing SQL Server 2014’s latest-&-greatest features, including: 

    • In-Memory OLTP (aka Hekaton)
    • Clustered Columnstore
    • Online Operations
    • Resource Governor IO

    2. Where it makes sense to do so, consolidate the DBs (e.g., showcasing Columnstore likely involves a separate DW DB)

    3. Documentation to support experimenting with these features
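
    As a flavor of item 1, the clustered columnstore feature can be showcased with just a couple of statements.  A minimal sketch (table & index names here are mine, purely illustrative, not from the shipped samples):

    ```sql
    -- Hypothetical DW fact table (illustrative schema)
    CREATE TABLE dbo.FactSales
    (
        SaleDate    date  NOT NULL,
        ProductKey  int   NOT NULL,
        StoreKey    int   NOT NULL,
        Quantity    int   NOT NULL,
        SalesAmount money NOT NULL
    );

    -- New in SQL Server 2014: a clustered columnstore index is the table's
    -- primary storage & is fully updatable (unlike 2012's nonclustered flavor)
    CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;
    ```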

    As Microsoft Senior SDE Bonnie Feinberg (b) stated, “I think it would be great to see an AdventureWorks for SQL 2014.  It would be super helpful for third-party book authors and trainers.  It also provides a common way to share examples in blog posts and forum discussions, for example.” 

    Exactly.  We’ve established a rich & robust tradition of sample databases on Codeplex.  This is what our community & our customers expect.  The prompt response achieves what we all aim to do, i.e., manifests the Service Design Engineering mantra of “delighting the customer”.  Kudos to Luis’s team in SQL Server Marketing & Kevin Liu’s team in SQL Server Engineering for doing so.

    Download AdventureWorks 2014

    Download your copies of SQL Server 2014 AdventureWorks sample databases here.

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    Preamble

    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations

    Why Columnstore?

    As stated previously, if we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore.

    Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & columnstore is going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer

    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries

    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.
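
    The partition-switch refresh pattern can be sketched in a couple of statements (table & partition function names are illustrative, not DevCon’s actual schema): load the day’s rows into an empty staging table with the same structure, then switch it in as a metadata-only operation:

    ```sql
    -- Staging table matches the fact table's structure & carries a CHECK
    -- constraint matching the target partition's boundary values.
    -- The switch itself is a metadata operation--near-instantaneous,
    -- well-suited to a brief maintenance window.
    ALTER TABLE dbo.FactEvents_Staging
        SWITCH TO dbo.FactEvents PARTITION $PARTITION.pfEventDate('20141031');
    ```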

    SSRS, SSAS, & MDX

    Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries

    Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing.

    DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).
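
    In SQL Server 2012 the fit for this scenario is a nonclustered columnstore index covering the ad hoc candidate columns.  A sketch, with purely illustrative names:

    ```sql
    -- Cover the columns users may aggregate ad hoc.  Note the 2012 caveat:
    -- a nonclustered columnstore index renders the table read-only until the
    -- index is dropped or disabled--hence the pairing with partition
    -- switching for the nightly refresh.
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Fact
    ON dbo.Fact (EventDate, SiteKey, SensorKey, MetricValue
                 /* ...& the other ad hoc candidate columns */);
    ```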

    The Details

    Classic vs. columnstore before-&-after metrics are impressive.

         

    Scenario        Conventional Structures             Columnstore     Δ
    -------------   ---------------------------------   -------------   -----
    SSRS via SSAS   10 - 12 seconds                     1 second        >10x
    Ad Hoc          5 - 7 minutes (300 - 420 seconds)   1 - 2 seconds   >100x

    Here are two charts characterizing this data graphically.  The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. 

    [chart: Report Duration (seconds), Conventional Structures vs. Columnstore Indexes, linear scale]

    As is so often the case when we chart such significant deltas, the linear scale doesn’t expose some of the dramatically improved values corresponding to the columnstore metrics.  To make it fair, here’s the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren’t visible. 

    [chart: the same data, logarithmic scale]

    The Wins

    1. Performance: Even prior to columnstore implementation, at 10 - 12 seconds canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators.  As we’ve commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.
    2. Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.
    3. PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.

    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback:

    “What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways.”

    Summary

    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  Documented here, in the second of a series of reports on columnstore implementations, are results from DevCon Security, a live customer production app: performance increased by factors of 10x to 100x for all report queries, canned as well as ad hoc, with ad hoc response times falling from 5 - 7 minutes to 1 - 2 seconds.  As a result of columnstore performance, the customer retired their SSAS infrastructure.

    I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    Preamble

    This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in this deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.

    Why Columnstore?

    If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore.

    Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & columnstore is going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    App: MSIT SONAR Aggregations

    At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR.  By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows.  New data is refreshed each night by an automated table partitioning mechanism—a best practices scenario for columnstore.
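
    A SONAR-style report query might look roughly like this (schema & names are illustrative, not the actual MSIT tables): a scan-&-aggregate pattern over many rows, exactly what columnstore segment elimination & batch-mode processing accelerate:

    ```sql
    SELECT   ServerName,
             CounterName,
             AVG(SampleValue) AS AvgValue,
             MAX(SampleValue) AS MaxValue
    FROM     dbo.FactPerfSample          -- partitioned DW table w/ columnstore
    WHERE    SampleDate >= '20140101'
      AND    SampleDate <  '20140201'
    GROUP BY ServerName, CounterName;
    ```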

    The Win

    Compared to classic indexing—which yielded the expected query plan selection, including partition elimination—SQL Server 2012 nonclustered columnstore increased query performance significantly.  Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements.  Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.
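
    For readers who’d like to reproduce this kind of comparison, metrics like these can be captured with session statistics.  A sketch (table name illustrative):

    ```sql
    SET STATISTICS IO ON;    -- reports logical reads per table
    SET STATISTICS TIME ON;  -- reports CPU & elapsed time in ms

    -- run the report query under test, before & after adding the columnstore index
    SELECT CounterName, AVG(SampleValue) AS AvgValue
    FROM dbo.FactPerfSample
    GROUP BY CounterName;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    ```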

    Details

    The table provides the raw data & summarizes the performance deltas.

                                     Logical Reads   CPU       Durn
                                     (8K pages)      (ms)      (ms)
    ------------------------------   -------------   -------   -------
    Columnstore                      160,323         20,360    9,786
    Conventional Table & Indexes     9,053,423       549,608   193,903
    Δ                                x56             x27       x20

    The charts provide additional perspective on this data.  "Conventional vs. Columnstore Metrics" documents the raw data.  Note on this linear display the magnitude of the conventional index performance vs. columnstore. 

    [chart: Conventional vs. Columnstore Metrics]

    The “Metrics (Δ)” chart expresses these values as a ratio.

    [chart: Metrics (Δ)]

    Summary

    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  Documented here, in the first of a series of reports on columnstore implementations, are results from an initial implementation at MSIT: logical reads were reduced by over a factor of 50, & both CPU & duration improved by factors of 20 or more.  Subsequent features in this series document performance enhancements that are even more significant. 

  • MSDN Whitepaper: More Cowbell—Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator

    Hot off the presses is this new MSDN white paper:

    Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator

    One of the gems introduced in SQL Server 2014 is the Cardinality Estimator (CE)—new! improved! & now with more cowbell.  I'm thrilled to be a Technical Reviewer for a superb MSDN white paper authored by my friend, buddy, & pal Joe Sack (b|t). It's exciting & humbling to see my name among such an array of Contributors & Reviewers—including several former colleagues from Azure CAT (formerly SQL CAT b|t).

    What’s a CE?

    As described on the Cardinality Estimation (SQL Server) page:

    Cardinality estimates are a prediction of the number of rows in the query result. The query optimizer uses these estimates to choose a plan for executing the query. The quality of the query plan has a direct impact on improving query performance.

    Why a New CE?

    The pre-existing CE is more than a decade old.  Both OLTP & DW workloads have changed—& databases are bigger by far than they used to be.  Often, cardinality changes spawned disparate plans (in one prototype, over 78 different plans were generated by the former CE).  Plainly & simply—the CE needed more cowbell.

    What’s New?

    During SQL14 TAP, SQL Engineer Kate Smith provided a heads up.  Highlights included:

    Relaxing Independence Assumption:  The old CE assumed that column values were independent.  Yet columns such as City and State, or Manufacturer, Make, and Model are tightly correlated.  Algorithms in the new CE better account for this.

    Join Changes:  Improvements to equijoins, non-equijoins, & join estimates related to primary keys.

    Ascending Key Modifications:  Newly inserted data may fall outside the range of the histogram.  The new CE assumes not only that the data actually exists, but also that it is present at the average frequency of values in the table.  (The same heuristics apply to missing values in sampled statistics.)

    In other words, more cowbell.
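
    As the paper details, the new CE is tied to database compatibility level & can be overridden per query via trace flags (table & column names below are illustrative):

    ```sql
    -- Compatibility level 120 enables the new CE for the database
    ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120;

    -- Force the legacy (pre-2014) CE for a single statement...
    SELECT City, StateProvince
    FROM dbo.Address
    WHERE StateProvince = N'Oregon'
    OPTION (QUERYTRACEON 9481);

    -- ...or force the new CE while the database remains at a lower compat level:
    -- OPTION (QUERYTRACEON 2312)
    ```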

    Joe provides numerous examples & walk-throughs detailing the behavior of the new CE.

    Inside Baseball 

    Here’s some behind-the-scenes info.  "Cardinality Estimator" didn't appear in the original title which referred merely to performance tuning.  Who wouldn’t want to read a perf paper from Joe?  Yet the working title belied the true nature of the paper.  The published title provides the precision the topic deserves. 

    I won’t reprise the penultimate comma soliloquy that I shared with the editors, but you can learn more here, or pick up a copy of Fowler’s Modern English Usage.

    White Paper Metadata

    Title: Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator 

    URL: http://msdn.microsoft.com/en-us/library/dn673537.aspx

    Summary: SQL Server 2014 introduces the first major redesign of the SQL Server Query Optimizer cardinality estimation process since version 7.0.  The goal for the redesign was to improve accuracy, consistency and supportability of key areas within the cardinality estimation process, ultimately affecting average query execution plan quality and associated workload performance.  This paper provides an overview of the primary changes made to the cardinality estimator functionality by the Microsoft query processor team, covering how to enable and disable the new cardinality estimator behavior, and showing how to troubleshoot plan-quality regressions if and when they occur.

    Authors: Joseph Sack (SQLskills.com b|t)

    Contributors: Yi Fang (Microsoft), Vassilis Papadimos (Microsoft)

    Technical Reviewers: Barbara Kess (Microsoft), Jack Li (Microsoft), Jimmy May (Microsoft b|t), Sanjay Mishra (Microsoft), Shep Sheppard (Microsoft), Mike Weiner (Microsoft), Paul White (SQL Kiwi Limited b|t)

    I'm confident you'll find the paper as edifying as I did. Enjoy!

  • Columnstore Preso to the Oregon SQL Server User Group

    In the latest-&-greatest effort in my mission to deliver my Columnstore presentation to every geekly denizen of the SQL community on the Northleft Coast of the United States & beyond, I’ll be delivering this week to the Oregon SQL Server User Group.

    Though Columnstore indexes were introduced in SQL Server 2012, they're still largely unknown.  In 2012, some adoption blockers remained; yet Columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & Columnstore is going to profoundly change the way we interact with our data.

    Here’re the logistical details:

    Title:  Columnstore Indexes in SQL Server 2014: Flipping the DW Faster Bit
    Group:  Oregon SQL Server User Group
    URL:  http://osql-d.org
    Twitter:  @OSQLd

    Location:  1515 SW 5th Avenue, Suite 900, Portland, OR  97201  (downtown Portland in the ninth floor conference room of OHSU IT Group)
    Time:  Wednesday evening April 9, 2014, 6:00p

    Thanks to MVP Arnie Rowland (b|t) for the invitation as well as Vern Rabe & Paul Turley (b|t).  As I understand it, there’ll be several MVPs & MCMs there.  No pressure—and with luck, there’ll be no Stump-the-Chump.

    I’m bringing a copy of the following for one lucky attendee: Professional SQL Server 2012 Internals and Troubleshooting by Christian Bolton (b|t), Rob Farley (b|t), Glenn Berry (b|t), Justin Langford (b|t), Gavin Payne (b|t), Amit Banerjee (b|t), with contributions & reviews from other big-time geeks such as Robert Davis (b|t) & Mike Anderson (b).

     


  • SQL Server 2014 Columnstore Indexes: The Big Deck

    The History

    Though Columnstore indexes were introduced in SQL Server 2012, they're still largely unknown.  In 2012, some adoption blockers remained; yet Columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & Columnstore is going to profoundly change the way we interact with our data.

    I’ve been working with Columnstore Indexes since Denali alpha bits were available.  As SQL CAT Customer Lab PM, I hosted over a half-dozen customers in my lab proving out our builds, finding & entering bugs, & working directly with the product group & our customers to fix them. 

    The Why

    Why Columnstore?  If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server has long been able to do a superlative job of providing an answer.  But if we’re asking a question which by design needs to hit lots of rows—reporting, aggregations, grouping, scans, DW workloads, etc.—SQL Server has never had a good mechanism—until Columnstore.  Columnstore was a competitive necessity—our Sybase & Oracle customers needed a solution to satisfy what was heretofore a significant feature & performance deficit in SQL Server.  Our leadership & product team stepped up & provided a superb response.

    The Presentation

    I’ve delivered my Columnstore presentation over 20 times to audiences internal & external, small & large, remote & in-person, including the 2013 PASS Summit, two major Microsoft conferences (TechReady 17 & TechReady 18), & several PASS user groups (BI Virtual chapter, IndyPASS, Olympia, PNWSQL, Salt Lake City, Utah County, Denver, & Northern Colorado).

    The deck has evolved significantly & includes a broad overview, architecture, best practices, & an amalgam of exciting success stories.  The purpose is to educate you & convince you that Columnstore is a compelling feature, to encourage you to experiment, & to help you determine whether Columnstore could justify upgrading to SQL Server 2014.

    The Table of Contents

    Here’s my deck’s ToC:

    • Overview
    • Architecture
    • SQL Server 2012 vs. new! improved! 2014
    • Building Columnstore Indexes
    • DDL
    • Resource Governor
    • Data Loading
    • Table Partitioning
    • Scenarios & Successes
      • Motricity
      • MSIT Sonar
      • DevCon Security
      • Windows Watson
      • MSIT Problem Management
    • Room for Improvement
    • Learnings & Best Practices
    • More Info

    The Demos

    I’ve included several demos, all of which are exceedingly simple & include step-by-step walkthroughs.

    • Conventional Indexes vs. Columnstore Perf
    • DDL
    • Resource Governor
    • Table Partitioning

    Let me know if you have any questions.  In the meantime, enjoy!

  • Disk Partition Alignment: It Still Matters--DPA for Windows Server 2012, SQL Server 2012, and SQL Server 2014

    Introduction

    I continue to receive dozens of inquiries each year about this issue.  The “fix” in contemporary versions of Windows Server combined with the absence of formal guidance since the white paper’s publication has led some to believe that disk partition alignment is no longer a best practice.  This is incorrect. 

    Partition alignment remains a best practice for all versions of Windows Server as well as SQL Server, including SQL Server 2012 & SQL Server 2014.  No exceptions.  Period.  If, for whatever reason, misaligned volumes are created, they will fail to deliver their expected performance.  SQL Server installed on such volumes will suffer concomitant performance degradation. 

    Disk Partition Alignment White Paper

    Prior to joining the SQL Server Customer Advisory Team (SQL CAT), I was asked to document my presentation on Disk Partition Alignment as a formal white paper which was published in May 2009. 

    Disk Partition Alignment Best Practices for SQL Server
    http://technet.microsoft.com/en-us/library/dd758814.aspx

    Denny Lee (blog|@dennylee) co-authored the paper with me.  Some of the industry’s best & brightest storage gurus contributed or reviewed it.

    Root Cause

    A design decision extant in all versions of Windows prior to Windows Server 2008 baked misalignment into the OS; thus it was a ubiquitous issue, in most cases causing an enormous performance hit, especially for random read I/O, often generating superfluous I/Os of 20%, 30%, up to 50%.  Many customers were unaware of partition alignment.  Even experienced disk administrators were often unfamiliar with it.  Explanations were often initially met with disbelief.  Indeed, when it first came to my attention, I was wide-eyed, struck plumb dumb by what I was being told.  I trusted the source, yet I performed my own experiments to verify.  It turned out to be a well-known issue amongst my colleagues at SQL CAT. 

    Fortunately, the root cause was remediated in Windows Server 2008.  All volumes created by versions of Windows since then should by default be aligned; but partitions must be validated—see below.

    Best Practices for Contemporary Versions of Windows Server & SQL Server

    • Because of the significant performance hit & the challenges associated with remediation, it’s a best practice to validate disk partition alignment on new volumes on which SQL Server will be installed, especially those from which high performance is expected, especially random read I/O.  This applies to MBR or GPT basic & dynamic disks.
    • There are vendor-specific recommendations.  When consulting with your hardware partners, be certain you do so with personnel whose knowledge on this topic is authoritative.
    • Though partition alignment is done natively by Windows Server 2008 onward, many storage admins nonetheless explicitly configure alignment during volume creation.  Whether you accept the defaults or manually align new partitions, always validate alignment per your vendor recommendations or the correlations defined by the white paper or this blog.
    • Misalignment of existing partitions created on Windows Server 2003 is not remedied simply by attaching them to newer versions of Windows Server.  It is necessary to copy the data to newly created, aligned partitions.
    • The performance impact of misalignment is not as apparent on SSD relative to spinning media.  Yet partition alignment is required for optimal performance.
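
    The white paper’s correlations reduce to simple modular arithmetic: the partition starting offset must divide evenly by the stripe unit size, & the stripe unit size by the file allocation unit size.  Here’s a minimal sketch of that check (the sample values are illustrative; on Windows, `wmic partition get StartingOffset, Name` reports the offsets):

    ```python
    def is_aligned(starting_offset: int, stripe_unit: int, alloc_unit: int) -> bool:
        """Both of the white paper's correlations; all values in bytes."""
        return starting_offset % stripe_unit == 0 and stripe_unit % alloc_unit == 0

    # Windows Server 2008+ defaults to a 1,048,576-byte (1 MB) offset, which
    # aligns with a 64 KB stripe unit & 64 KB file allocation unit...
    print(is_aligned(1_048_576, 65_536, 65_536))  # True
    # ...while the legacy 32,256-byte (63-sector) offset does not.
    print(is_aligned(32_256, 65_536, 65_536))     # False
    ```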

    Summary

    Many factors contribute to optimal disk I/O performance.  Partition alignment, properly correlated with stripe unit size and file allocation unit size, remains a best practice & provides an essential & fundamental foundation for optimal performance.  See the white paper or my blog posts for specifics.

    Contributors

    Thomas Kejser (blog), former SQL CAT PM
    Mike Ruthruff, former SQL CAT PM
    Mike Anderson (blog), Principal Engineer, Microsoft
    Sam Tudorov, Director, Simecom Inc.

    References

    Disk Partition Alignment Best Practices for SQL Server white paper
    My Blog Posts
    Windows Disk Alignment, a superb post on Mike Anderson’s SQL Velocity blog
    EMC Symmetrix with Microsoft Windows Server 2003 & 2008 Best Practices Planning

  • Columnstore Presos to Denver and Northern Colorado SQL User Groups

    I’m presenting soon to two Colorado user groups.

    Topic:  Columnstore Indexes in SQL Server 2012 & 2014: Flipping the DW /faster bit

    Monday 13 January 2014 @5:30p
    Northern Colorado SQL Server Users Group
    URL: http://nocodp.org :: http://nocodp.sqlpass.org
    Twitter: #nocodp
    Location: UNC Loveland Center at Centerra, 2915 Rocky Mountain Avenue, Loveland, CO 80538 (Breckenridge Conference Room 2nd floor)
    Erik Disparti (@ErikDis|blog)

    Thursday 16 January 2014 @5:30p
    Denver SQL Server Users Group
    URL:  http://www.denversql.org/ :: http://denver.sqlpass.org
    Twitter: @denversql
    Location: Denver Microsoft Offices, 7595 West Technology Way, Suite 400, Denver, CO 80237 (directions)
    Mike Fal (@Mike_Fal|blog)

    I’m giving away one copy at each preso of SQL matriarch Kalen Delaney et al’s latest-&-greatest hot-off-the-presses opus:
    Microsoft SQL Server 2012 Internals
    Kalen Delaney (blog|twitter), Bob Beauchemin (blog|twitter), Connor Cunningham, Jonathan Kehayias (blog|twitter), Paul Randal (blog|twitter), & Ben Nevarez (blog|twitter)


    I was originally asked to speak last summer.  Linking my nascent passion for skiing with the timing resulted in a low latency response of something like, ‘Hey, let’s do this in six months, eh?’.  I followed up with Mike at the 2013 PASS Summit & along with Erik we carved the dates in stone.  I’m Denver-bound as I type this & am extremely eager to combine my passion for skiing along with my eagerness to evangelize one of the many new! &/or improved! features that SQL Server has to offer.  I’ll provide an update on both my experience on the slopes & with the groups. 

    In the meantime, as I write this, I’m looking out over the slopes of Vail being greeted by the winter’s dawn. 

    It’s going to be a great day—& a great week!  I hope you can join us!

  • Here’s-s-s Jimmy!

    I’m Back

    Yes, I’m back. 


    Though not wielding an axe like Jack during the notorious scene in The Shining, I am indeed back in the saddle again.  My last post was four years ago.  At that time my blog was consistently in the top 7% on MSDN.  Collaborating with peers inside & outside of Microsoft, posting relevant content, & month-after-month watching the stats rise was a lot of fun.  Unbelievably, this blog is still in the top 12%, reflecting, I hope, robust & durable content.  We’ll see whether the stats inflect with my new content.

    Where I’ve Been

    A lot has happened since my last post.  I’ll never forget the wintry day Mike Ruthruff called to ask whether I’d consider joining the SQL Server Product Team as a member of the SQL Server Customer Advisory Team.  Consider?  Joining SQL CAT had been a goal & a dream as a customer, & since joining Microsoft doing so had long been at the top of my professional development plan.  Would I move to Redmond?  Mark Souza (twitter) is the only person for whom I’d do so.  My lovely bride & I dutifully packed up & moved from the comfort of the Midwest to the beauty of the Northleft Coast—& what a ride it’s been!

    SQL CAT Customer Lab: Sr. Program Manager

    As the Customer Lab PM, my motto was Change the World or Go Home.  And change the world we did.  We did things in the Lab that had never before been done & are likely never to be repeated, including in 2½ years over 100—count ‘em, 100+—engagements, dozens of them with customers who’d parachuted in from all over the world bringing with them some of the biggest, fastest, best, & most interesting apps on the planet where we proved them out with our latest bits & on the best hardware available.  We validated dozens of apps on Denali, especially AlwaysOn & Columnstore; hundreds of bugs were documented & fixed for SQL Server 2012 RTM because of our work.  Dittos for Azure.


    MSIT: Principal Architect

    Last year I got a call from another friend, a former colleague from MSIT, Chris Lundquist, who asked me to onboard as an Architect to help with Enterprise BI.  More about that exciting transition in another post...

    What I’m Doing Now

    My current role offers myriad challenges.  I still get to work with the SQL Community & SQL Product Group, & last year I accepted the role of MSIT Service Design Engineering (SDE) Community Co-Lead.

    What’s Next

    Here’s what’s in the lineup for the next few months.

    /faster

    One of the original goals of this blog was flipping the SQL Server /faster bit.  Performance remains a passion & it will continue to be so.  I’ll also focus on Columnstore indexes, perhaps SQL Server’s newest, most powerful, yet underutilized feature.

    Roles

    I’ll speak from time-to-time about my specific roles at SQL CAT & MSIT as well as learnings I’m eager to share.

    Service Design Engineering

    SDE is the formalized guidance for something I’ve long evangelized to colleagues, customers, & the community, something I call “Engineering Discipline”.  In MSIT, though my title is Architect, I work under the umbrella of Service Design Engineering.  I accepted a leadership role in the MSIT SDE Community.  I’m working alongside Casie Owen, an inspiring colleague & engineer, & more recently Melissa Lowe aka “Mel” (who’s equally inspiring!), & together with other peers we’re hoping to change the world from the inside out.  I’ll share our successes & insights.

    Projects

    I’m working on various self-service Business Intelligence (SSBI) projects.  In addition, I have the privilege of being a lead in what I’m labeling The Reliability Project, adapting a Failure Mode & Effects Analysis (FMEA) initiative to MSIT & across Microsoft, including a potential collaboration with engineers at Microsoft Trustworthy Computing (TwC) & Microsoft’s Design Excellence team.

    User & MVP Community

    I remain as passionate about the User & MVP Community as ever.  As a founder of two user groups back in Indy, I’m well-acquainted with the challenges faced by groups around the country.  As a member of the SQL MVP v-Team, it’s exciting to participate in this vital & vibrant facet of the community.  I’ll share a bit about how I approach my nominations & nominees.  Since my last post I’ve spoken dozens of times around the country, including several sessions at PASS, & I’ll continue to do so.  I’m provided the opportunity to review white papers, tech notes, & blogs, & will cross-reference them here.

    Education:  SQLMCM Program, Certification, & Mentorship

    The SQLMCM program is cancelled, & it’s tragic.  I won’t dive more deeply into that now.  Yet I’ve found certification—& especially the MCM—to be an invaluable asset on the job & for my career.  Earning my MCM in 2008 has been tremendously gratifying from a personal, career, & relationship perspective.  Being asked to assist others & watching them grow is one of the most exciting things about the Community & my work.  I’ll share my trials & tribulations as I sharpen the saw in preparation for upgrading my skill set, as well as the goings-on related to what used to be called #SQLMCM #Northleft.

    Effectiveness & Professional Development

    “Efficiency is doing things right; effectiveness is doing the right thing.”
    —Peter Drucker

    Having spent much of my geekly career overworked & underpaid (raise your hand if you know what I mean), I’ve invested a great deal of time mitigating the overworked aspect by enhancing not only my efficiency, but more importantly my effectiveness.

    Kevin Kline (twitter|blog|blog) has often asked me to share my professional development insights.  Masterminds such as Principal PM J.D. Meier (MSDN blog|Sources of Insight blog|Getting Results wiki), Alik Levin (blog), & Rob Boucher have been primary contributors.  Co-conspirators include Robert Davis (twitter|blog), Thomas Kejser (blog), Joe Sack (twitter|blog), Mike Ruthruff (prolific twitter alias), my wingman Shahry Hashemi (twitter|blog), & many, many others.  I’ll share lessons learned from these & other brilliant resources.

    Life

    During my geekly career I’ve spent lots of nights, weekends, & holidays in the trenches.  Many of you, like me, are passionate about SQL Server.  Indeed, our enthusiasm is one of our great strengths.  Yet there’s much more to life than flipping the /faster bit!  My longtime search for balance has begun to pay off.  I’ve renewed another passion—for health & fitness.  Almost thirty years ago(!) I was one of the primary performers—one of the stars—in Richard Simmons’s original Sweatin’ to the Oldies (yeah, that’s me in the mauve tank top).  I’ve gained & lost 75 pounds (~35kg) three times in the past 20 years.  Thanks to Dr. Mark Dedomenico’s incredible 20/20 Lifestyles program at Pro Sports Club, for the first time I’ve kept it off for well over a year.  I’ve been bitten by the ski bug & I’m a mountain biking phreak. 

    There are clear tie-ins to professional development.  For example, puckerworthy moments on top of a mountain looking into the abyss down which I’m about to launch myself on a pair of boards attached to my feet offer great lessons in facing one’s fears & how to overcome perceived limitations.  Details to follow…

    Miscellaneous

    And if that’s not enough, I promise plenty of off-topic material.

    Thanks for the encouragement, support, & patience these past few years.  Stay tuned!

    Jimmy May, @aspiringgeek

  • Disk I/O: Microsoft SQL Server on SAN Best Practices from SQL CAT's Mike Ruthruff (& Prem Mehra)

    While at the PASS Community Summit in November 2008, I had the pleasure of attending a handful of excellent presentations.  One of the best was delivered by Mike Ruthruff (& not just because he shilled for my presentation on disk partition alignment later that day—though I suspect he contributed to my session being SRO).

    Mike is a member of the SQL Server Customer Advisory Team (SQL CAT).  He authored the deck with contributions from SQL CAT patriarch Prem Mehra.

    Most of you probably know Mike because he is the primary author of the landmark white paper we all know-&-love & have read over-&-over again because we know how unbelievably valuable it is:

    SQL Server Predeployment I/O Best Practices
    http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/pdpliobp.mspx

    Mike provided a copy of his latest-&-greatest deck for publication here:

    Microsoft SQL Server on SAN Best Practices

    The deck includes the following:

    • Characteristics of SQL Server I/O operations
    • Best practices
      • SQL Server Design Practices
      • Storage Configuration
      • Common Pitfalls
    • Monitoring performance of SQL Server on SAN
    • Emerging Storage Technologies
    • Additional Material In Appendix Section (not covered during session)
      • How to validate a configuration using I/O load generation tools
      • General SQL Server I/O characteristics
      • How to diagnose I/O bottlenecks 
      • Sample Configurations
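    As a taste of the appendix material on diagnosing I/O bottlenecks, here’s a hedged sketch of the kind of DMV query commonly used for the job—sys.dm_io_virtual_file_stats exposes cumulative I/O counts & stalls per database file (this query is my illustration, not taken from Mike’s deck):

    ```sql
    -- Per-file I/O latency from the virtual file stats DMV.
    -- Counters are cumulative since instance startup;
    -- sample twice & diff the results for current rates.
    SELECT
        DB_NAME(vfs.database_id)  AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.num_of_writes,
        -- Average latency in ms; NULLIF guards against divide-by-zero.
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads,  0) AS avg_read_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
    ORDER BY avg_read_ms DESC;
    ```

    Files at the top of that list with sustained double-digit (or worse) average read latencies are the usual suspects when a SAN configuration needs a second look.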

    I think you'll enjoy this presentation—one of the best, perhaps the best of its kind ever assembled.  ¡Yo!  Only first-rate decks on this blog.  Besides which, SQL CAT does nothing but the best.  Get ready to be wowed by 50 slides of geekly goodness. 

    Administrivia

    Jimmy May, MCM
    SQL CAT Sr. Program Manager
    SQL Server Customer Advisory Team, Business Platform Division
    317.590.8650
    http://blogs.msdn.com/jimmymay 
    Microsoft: Change the world or go home.
