If you haven’t personally experienced the problems of typical architectures such as Layered and Onion / Clean, this video (from 00:00 - 25:00) provides some examples and explanations.

Here, I share some of the pain points I’ve encountered and how VSA (a.k.a. Package by Feature, or the Request-EndPoint-Response (REPR) pattern) solves them.

Problems

Excessive Overhead

Creating a simple endpoint is non-trivial. Even though the amount of code / logic might sensibly fit in a single file, conforming to Layered / Onion architectures typically means creating and/or modifying at least N * 2 files (where N is the number of layers; the factor of 2 is because each layer typically has an interface file and an implementation file).
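For example, adding a hypothetical GetUser endpoint to a 3-layer codebase might mean creating or touching all of these files (the names are purely illustrative):

 Api
 |-> GetUserController
 Application
 |-> IGetUserService
 |-> GetUserService
 Infrastructure
 |-> IUserRepository
 |-> UserRepository

Five files for one endpoint, before a single line of business logic is written.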

Complex Shared Code

A function might start off looking like this:

void Foo()
{
    // do something
}

Then, as more callers require some variant of that functionality, the function starts accepting parameters to determine which variant of the logic to execute, possibly leading to really convoluted code:

void Foo(bool param1, bool param2, bool param3)
{
    // do something

    if (param1)
    {
        // do something

        if (param2 && !param3)
        {
            // do something
        }

        // do something
    }

    if (param3)
    {
        // do something
    }
}

Existing business requirements generally change over time and new ones get added, so it is important that code is easily modifiable. However, modifying code like this is non-trivial because you have to know all the other use cases to be sure you don’t break them (or at least, you have to trust that the engineers who worked on this code before you wrote sufficiently good tests).
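Call sites compound the problem. A reader encountering an invocation like the following (a made-up call to the Foo above) cannot tell which branches will execute without opening the definition:

Foo(param1: true, param2: false, param3: true);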

Overbearing Cognitive Load

The above problems make it hard to understand how a complex endpoint works: apart from having to navigate through many files, it is hard to reason about and trace through the functions the endpoint calls. This in turn makes it hard for code reviewers to catch errors.

Summary

Unfortunately, the codebase typically ends up looking like a Big Ball of Mud.

Wrapping up, the problems stated above aren’t problems with Layered / Onion architectures per se. However, using these architectures typically results in these problems because there’s a greater emphasis (and in my opinion, an over-emphasis) on the principles of DRY (Don’t Repeat Yourself) and SRP (Single Responsibility Principle).

Quoting from the book “A Philosophy Of Software Design” section 4.6:

Unfortunately, the value of deep classes is not widely appreciated today. The conventional wisdom in programming is that classes should be small, not deep… developers are encouraged to minimize the amount of functionality in each new class: if you want more functionality, introduce more classes… result in classes that are individually simple, but it increases the complexity of the overall system. Small classes don’t contribute much functionality, so there have to be a lot of them, each with its own interface. These interfaces accumulate to create tremendous complexity at the system level.

How VSA solves these problems

VSA is not really a new idea, but an application of the KISS (Keep It Simple, Silly) principle. I’m not explaining how VSA works here because there are already many good articles online (example); instead, I’ll only highlight some key points about how VSA solves the above problems.

  1. Quoting from a practitioner:

    You can build onion in each slice or for lower levels (domain persistence, accessors) as it fits… start simple and don’t over engineer it.

    It’s crucial to note that the success of this approach still depends on software engineers having a good understanding of software engineering principles and applying them appropriately; VSA is not an excuse for poor code organization.

  2. Avoid code sharing by default.

    This change in mentality affects the quality of shared methods: when code sharing is heavily encouraged, there’s a greater tendency to modify code to make it reusable even when the use cases aren’t truly similar. When code sharing is avoided by default, the tendency is to share code only when the callers are really using it for the same purpose, so you don’t end up with shared code that looks like the example above (see the first sketch at the end of this section).

    So, for example, you might end up having a folder structure looking like this:

     Users
     |-> Create
         |-> Endpoint
         |-> AwsClient
     |-> Update
         |-> Endpoint
         |-> AwsClient
     |-> AwsClient
    

    Having 3 AwsClient files is intentional: instead of having a mega AwsClient file that contains all the methods for operations related to Amazon Web Services, the Users/Create/AwsClient file only contains the operations used by Users/Create/Endpoint, and similarly for Users/Update/AwsClient. There might be code shared by both, which can live in Users/AwsClient; a possible implementation is to have Users/Create/AwsClient and Users/Update/AwsClient inherit from Users/AwsClient (see the second sketch at the end of this section). As more features are added to the codebase, functionality in Users/AwsClient might be reused elsewhere, at which point the relevant code can be “promoted” into a higher-level AwsClient file.

    So, keep code local as far as possible, and only “promote” it to higher levels when there are real instances of code reuse.

    Then, you should find it easier to write good abstractions. Quoting from the book “A Philosophy Of Software Design” section 4.8:

    The most important issue in designing classes and other modules is to make them deep, so that they have simple interfaces for the common use cases, yet still provide significant functionality.
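    To tie this back to the Foo example, here is a minimal sketch of what avoiding code sharing by default might look like (the names and bodies are illustrative): each feature keeps its own small variant instead of sharing one flag-driven function.

    // Users/Create: only the logic the Create feature needs (the old param1 path)
    void CreateUserFoo()
    {
        // do something
    }

    // Users/Update: only the logic the Update feature needs (the old param3 path)
    void UpdateUserFoo()
    {
        // do something
    }

    Each variant is now trivially modifiable because it has exactly one caller.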
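    To make the AwsClient layout above concrete, here is a minimal sketch, assuming the clients wrap plain HTTP calls (the class names, method names, and URLs are illustrative, not a real AWS SDK):

    using System.Net.Http;
    using System.Threading.Tasks;

    // Users/AwsClient: plumbing genuinely shared by both features
    public abstract class AwsClient
    {
        protected readonly HttpClient Http;
        protected AwsClient(HttpClient http) => Http = http;
    }

    // Users/Create/AwsClient: only the operation Users/Create/Endpoint needs
    public sealed class CreateUserAwsClient : AwsClient
    {
        public CreateUserAwsClient(HttpClient http) : base(http) { }

        public Task ProvisionUserAsync(string userId) =>
            Http.PostAsync($"/users/{userId}", content: null);
    }

    // Users/Update/AwsClient: only the operation Users/Update/Endpoint needs
    public sealed class UpdateUserAwsClient : AwsClient
    {
        public UpdateUserAwsClient(HttpClient http) : base(http) { }

        public Task SyncUserAsync(string userId) =>
            Http.PutAsync($"/users/{userId}", content: null);
    }

    If a third feature later needs ProvisionUserAsync, that is the point at which it gets “promoted” into Users/AwsClient (or a higher-level client), and not before.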

Further Reading

  1. Reddit post
  2. Blog post
  3. YouTube video