Tuesday, November 12, 2013

Software Design/Modification Should be considered a "Set Based" process

When Matt Cherwin, the skilled DBA at my company, first joined, one of the first and best pieces of advice he gave me was to think of SQL as a "Set Based" process.  Something in our discussion had indicated that I was thinking about specific rows in a table, and his advice came in response to that observation.

Immediately, SQL became much "easier" to think about, manage, and generally work with.
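To make the contrast concrete, here is a minimal sketch of the two mindsets in Python, using an in-memory SQLite database and a hypothetical orders table (both of my own invention, purely for illustration). The row-based version reasons about one row at a time; the set-based version describes the change once and lets the engine apply it to every qualifying row.

    import sqlite3

    # Hypothetical schema and data, purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL)")
    conn.executemany(
        "INSERT INTO orders (status, total) VALUES (?, ?)",
        [("open", 100.0), ("open", 250.0), ("closed", 75.0)],
    )

    # Row-based thinking: fetch the ids, then inspect and update one row at a time.
    open_ids = conn.execute("SELECT id FROM orders WHERE status = 'open'").fetchall()
    for (order_id,) in open_ids:
        conn.execute("UPDATE orders SET total = total * 1.1 WHERE id = ?", (order_id,))

    # Set-based thinking: express the intent once; the whole set changes at once.
    conn.execute("UPDATE orders SET total = total * 1.1 WHERE status = 'open'")

(The two approaches are shown back to back only for comparison; in practice you would pick one.)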

I have found that this advice also applies to software development.  It is much more powerful to think about the entire system as a set, and to think about changes as operations on that whole set.  This approach leans heavily on code refactoring, where essentially every change made to the code is done as a formal "refactor".

In other words, I'm going to refactor the code to...

  • Rename the method x.Foo() to x.Bar().
  • Add method X to Y in order to facilitate Z.
  • Add 5 new properties to class Foo.
  • Allow Bars to be deleted.
  • Create 3 overloaded versions of method X.Foo() that allow for...
The point is that any one of these changes might modify 1 line, or 10,000.  It should not matter to me as the one implementing the change.  I need to understand the implications of the code change, but I should not need to actually eyeball each individual edit made by "the system".

I should be able to simply indicate what I want done - and a refactoring tool should actually make the changes to the code.  The more changes that can be made "as a set", the fewer bugs there will be in the final product.

Define "Set Operation": Any operation that can be completed as a single, atomic change in the IDE.

Thursday, November 7, 2013

Find a cheap way to try!

I've spent a lot of time thinking about failure, and how the fear of failure so frequently prevents people from trying things.  Weeks or months can be spent debating the various ways an idea might fail, when a prototype that would answer the question conclusively could be built in days.

I recently watched a TED talk by Regina Dugan of DARPA, with a message about the amazing things we can accomplish when we stop fearing failure.  One thing she kept mentioning (and it wasn't until this morning that I realized how important it was) is how little was known before each test, and how much additional knowledge was gained by trying, even when the attempt resulted in failure.

For example, when Chuck Yeager made the first Mach 1 flight, Mach 0.8 was apparently the best wind tunnel data available at that point.  Leaving aside for the moment the sheer courage it took to climb into that cockpit: because he tried, they learned more about supersonic flight that day than in all of the previous testing and theoretical work done up to that point ... combined.

In example after example, she kept coming back to how much they learned the first time they tried to fly at Mach 20, or from the first prototype mechanical hummingbird they built (even though it only flew for seconds).  Trying, even if it results in what appears to be complete and total failure, almost always yields so much additional information about the problem being attempted that failure can simply be categorized as an irrelevant side effect of the learning process.  Failing 10 times is fine with me, if those 10 failures lead to a success that lasts forever.