Tuesday, October 27, 2020

An Open Response to Linus Lee's thoughts

Re: When is No-Code useful?

and Conservation of Complexity

Hi Linus,

I very much enjoyed reading your articles on Conservation of Complexity and When is No-Code useful?, and I agree completely with many of the specific conclusions and observations you make. But... I'm glad it's still in the "notes" section of your site, because I'd love a chance to convince you that, when it comes to no-code, you might be missing the forest for the trees.


"I haven't seen any no-code company or product that allows source control (and I've seen many no-code companies, but you're welcome to prove me wrong.)"

I would love the opportunity to try, and in this post I will present what my development team and I do, in the hope of offering a new perspective in response to your challenge.

Related Article

This response uses language and a number of terms related to the general notion of Derivative Code, and more specifically to the article Why "Source Code" is a terrible place to put software. If these ideas are not familiar to you, it may be helpful to read that article first.

So, with those ideas as background, I will also try to provide some alternative perspectives on a few of the specific points in your article about when no-code might be useful:

1. Transitionary, ephemeral software
  

We agree that for things like brainstorming, prototyping, developing UX, etc., no-code is often great, but these solutions also typically come with problems such as:

1) You typically can't manage it with common/essential tools like source control, CI, defect management, etc.

2) It's often not sufficiently scalable.

3) It's usually a black box internally, for which you typically have only selective knobs and levers to adjust.

4) You're "stuck" as soon as the no-code product doesn't do exactly what is needed. 

These are all true of most no-code providers, but much less so of low-code tools. Even if you use no-code tools simply to sketch out or brainstorm an idea first, that work (depending somewhat on precisely which tool(s) you're using) almost inevitably produces a well-defined, machine-readable description of those rules, i.e. a Single Source of Truth (SSoT). Low-code tools can then turn that "specification" document into production-ready code (derivative code) in the language/tech stack of your choice.
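To make that concrete, here is a minimal, hypothetical sketch of what such a machine-readable SSoT might look like. None of this comes from any particular no-code product; the entity and field names are mine, and a dict stands in for what would really live in a database:

```python
# A hypothetical Single Source of Truth (SSoT): a small, machine-readable
# model of the rules, with no commitment yet to any production language or
# tech stack. In practice this would live in a database; a plain dict keeps
# the sketch self-contained.

CUSTOMER_SPEC = {
    "entity": "Customer",
    "fields": [
        {"name": "id",    "type": "int",    "required": True},
        {"name": "email", "type": "string", "required": True,
         "rule": "must contain '@'"},
        {"name": "age",   "type": "int",    "required": False,
         "rule": "0 <= age <= 150"},
    ],
}
```

Note that nothing in this document says Python, Java, SQL, or REST; it records only the rules themselves.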

With an SSoT providing a common foundation, we can now pick the production stack and the specific tools we want to use: LAMP, JAM, MEAN, WIMP, native Java/Swift/TypeScript, Windows, Web, Mac, iOS, Android, etc.

Importantly though, these decisions can all come after the no-code model has already been defined, possibly even years after, and after hundreds or even thousands of changes have been made to it over that time. All of this is possible because the decisions are not "buried" in hand-written "source code", possibly in some long-forgotten language like Fortran or COBOL.

After close to two decades of research in this area, what I have found in practice is that most of the requirements of virtually any technology can be well defined in a database, without knowing (and ultimately completely decoupled from) the specific languages or technical contexts that you're going to need in the production environment.

So we can apply low-code tools to no-code models, such that developers can start on day one with much or most of the scaffolding/framework code for the project already present, regardless of the tech stack involved. As a result, right out of the gate, developers can begin addressing the needs of end users on the actual, final production version of the code, most of which is just as flexible and responsive as the no-code model used to describe the rules in the first place.
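As a sketch of that low-code step, again hypothetical and reusing CUSTOMER_SPEC from above, a small generator can turn the specification into scaffolding for one particular stack. A real tool would emit a full framework; this only shows the direction of the dependency, spec to code:

```python
# A toy "low-code" generator: derive Python scaffolding (derivative code)
# from the machine-readable CUSTOMER_SPEC defined in the earlier sketch.

PY_TYPES = {"int": "int", "string": "str"}

def generate_python_class(spec: dict) -> str:
    args = []
    for f in spec["fields"]:
        py_type = PY_TYPES[f["type"]]
        if f["required"]:
            args.append(f"{f['name']}: {py_type}")
        else:
            args.append(f"{f['name']}: {py_type} | None = None")
    lines = [f"class {spec['entity']}:",
             f"    def __init__(self, {', '.join(args)}):"]
    lines += [f"        self.{f['name']} = {f['name']}" for f in spec["fields"]]
    return "\n".join(lines)

print(generate_python_class(CUSTOMER_SPEC))
```

Swap the string templates and you have a Java or TypeScript generator reading the exact same spec; that is the sense in which the stack decision can wait.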

2. High-churn code  

Here again, if the code lives in a high-churn environment, I think we both agree that no-code might be a good option. And if the goal of no-code is to be the final or finished solution, then I completely agree with this assessment.

But change resilience over time is not the focus of no-code tools.

By contrast, low-code tools are remarkably resilient to change, and I'd even go so far as to say that they future-proof your code in a way that is virtually impossible to replicate in a "traditional" development environment.

By putting most or all of the decisions about the needed solution into a no-code model first (rather than into hand-written code in a specific language), those decisions can be leveraged abstractly even years into the future, possibly against languages completely unknown today, with different operating environments, technical contexts, etc., all because the decisions were never baked into hand-coded Python or Java as the very first step, back in the day.

3. Avoiding the same mistakes  

 

"After all, the world is complex. And when we build software against the complexity of the world, that complexity needs to go somewhere. Software is complex, but only as much as the world it attempts to make sense of."

This is precisely the crux of it, in my opinion. The world is complex, and that complexity needs to go somewhere. But most of it simply should not end up in "source code" as its first destination. That's helpful to an audience of two: the compiler for the language in question, and the developers who work in that specific language. It's a really expensive and brittle place to record those decisions, with a narrow audience of people who can definitively answer the question: what does this "system" actually do?

Instead, that complexity should be captured in a specification database (i.e. a no-code model) which can be abstractly queried and reported against, now and in the future. The human-readable, English "specification" then simply becomes a report against that database. Any time we update the specification database, we simply re-run that report, making it the first artifact to "follow along" as changes occur.
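As a toy illustration of that "report" idea (the function name and wording are mine, not from any shipping tool), the English spec is derived from the same hypothetical CUSTOMER_SPEC used earlier, so it can never silently drift out of date:

```python
# The human-readable specification as just another report against the
# specification database: re-run it and it reflects the current rules.

def english_spec_report(spec: dict) -> str:
    out = [f"The system stores {spec['entity']} records with these fields:"]
    for f in spec["fields"]:
        need = "required" if f["required"] else "optional"
        line = f"  - {f['name']} ({f['type']}, {need})"
        if "rule" in f:
            line += f"; rule: {f['rule']}"
        out.append(line)
    return "\n".join(out)

print(english_spec_report(CUSTOMER_SPEC))
```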

Then, when we re-run, say, the "Python report", it updates the foundational Python libraries and code to reflect the new rules and changes as well. With most of the "plumbing" largely maintaining itself over time, the code that we do still write by hand ends up being extraordinarily efficient, because we're not constantly re-inventing the wheel and calling that "source code".
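In that workflow, a rule change touches the spec in one place, and everything derived from it is regenerated. Continuing the hypothetical sketch:

```python
# Change the rules ONCE (in the spec), then re-run every "report": the
# English specification and the generated code both pick up the change.

CUSTOMER_SPEC["fields"].append(
    {"name": "loyalty_tier", "type": "string", "required": False}
)

print(english_spec_report(CUSTOMER_SPEC))    # updated English specification
print(generate_python_class(CUSTOMER_SPEC))  # updated Python scaffolding
```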

Does the distinction I'm trying to draw make any sense?

The grain of abstractions

"...but we as a technical industry have learned how to build and evolve software systems against changing requirements and constraints that span years and decades."

We agree that changes are inevitable. But however good our "source code" is, it still inevitably mixes multiple things and treats them as one. Specifically, traditional "source code" mixes the description of WHAT needs to happen into the same place as the description of HOW to do that work in a very specific language, context, or environment.

Instead, systems can be dramatically more resilient to change if the definition of WHAT needs to happen is consciously isolated and defined separately from the description of HOW to actually do that in a particular language. The kinds of information that fit comfortably in the "specification database" I've mentioned are rules that encode what needs to happen. Tools can then convert those details into a specific language, with as much specificity as is available between the Single Source of Truth and the tool that produces the final output code.
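To put that WHAT/HOW split in code terms, here is a deliberately tiny, hypothetical example. The rule below is pure WHAT; the emitter is one possible HOW, and a Java or TypeScript emitter could consume the exact same rule:

```python
# WHAT: a declarative rule, the kind of record that lives in the
# specification database. (The rule format here is hypothetical.)
EMAIL_RULE = {"field": "email", "check": "contains", "value": "@"}

# HOW: one language-specific emitter. Emitters for other languages or for
# SQL constraints would read the same rule and produce their own output.
def emit_python_check(rule: dict) -> str:
    if rule["check"] == "contains":
        return (
            f"def validate_{rule['field']}({rule['field']}: str) -> bool:\n"
            f"    return {rule['value']!r} in {rule['field']}"
        )
    raise ValueError(f"no Python emitter for check {rule['check']!r}")

print(emit_python_check(EMAIL_RULE))
# -> def validate_email(email: str) -> bool:
#        return '@' in email
```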

