Swallowing exceptions? Choke on it, pendejo!!

Exception handling is the process of responding to the occurrence, during computation, of exceptions – anomalous or exceptional conditions requiring special processing – often changing the normal flow of program execution (see: Exception Handling).

Exception handling has enormous advantages for developers, yet we choose to abuse it. That is called error hiding.

There are a couple of reasons why devs swallow exceptions:

  • They prolly don’t know what to do with it.
  • They think they are cool and great.
  • They haven’t taken a CS 101 class.
  • They are too lazy.

Please add on to the list.

Sometimes comments are left in the catch clause to explain why the exception is swallowed. According to many, that is OK.

I just can’t stand it, and I can’t see a reason why developers purposely swallow exceptions rather than handling them.
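
To make it concrete, here is a minimal sketch (the ConfigLoader class is made up for illustration): the first method swallows the exception and quietly returns an empty string, the second one logs it and rethrows, so the failure at least stays visible.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ConfigLoader {

    private static final Logger LOGGER = Logger.getLogger(ConfigLoader.class.getName());

    // Swallowed: the failure disappears and the caller silently gets an empty string.
    public String loadBadly(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            // TODO: handle later -- the classic swallowed exception
            return "";
        }
    }

    // Handled: record what went wrong and let the caller decide how to react.
    public String load(Path path) {
        try {
            return Files.readString(path);
        } catch (IOException e) {
            LOGGER.log(Level.SEVERE, "Could not read config file: " + path, e);
            throw new IllegalStateException("Configuration is unreadable: " + path, e);
        }
    }
}
```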

Your software sucks

Your software sucks if it shows the following signs or symptoms:

Rigidity:

Rigidity is the resistance of software to change. If things are connected in such a complicated way that it is hard to make a change, that causes rigidity.

For example, a small change or addition in a monolith can ripple through several layers of the application. One smell of this is when one day of work ends up being a week of work.

Rigidity also causes fragility.

Fragility:

Your code is easy to break. This is a very common symptom. For example, say you use setters and getters in your class definitions and you consume your objects through them. Imagine you have an integer property and you decide to make it a decimal. If you need to visit several places in your code base to make the changes required for the code to compile again, that is a typical sign of bad design.
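
A hypothetical sketch of that int-to-decimal change; the Product, InvoicePrinter and DiscountCalculator classes are invented just to show how the change ripples through every consumer of the getter:

```java
import java.math.BigDecimal;

// The price used to be an int (cents); it is now a BigDecimal.
class Product {
    private BigDecimal price;                                // was: private int price;

    public BigDecimal getPrice() { return price; }           // was: public int getPrice()
    public void setPrice(BigDecimal price) { this.price = price; }
}

class InvoicePrinter {
    // Every consumer that touched the getter directly has to change too.
    String line(Product p) {
        return "Total: " + p.getPrice().toPlainString();      // was: String.valueOf(p.getPrice())
    }
}

class DiscountCalculator {
    // ...and so does this one, and every other call site in the code base.
    BigDecimal discounted(Product p) {
        return p.getPrice().multiply(new BigDecimal("0.9"));  // was: (int) (p.getPrice() * 0.9)
    }
}
```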

How about MVC? MVC promises loose coupling, right? Do you use your models directly in your views? What happens when you make a change to your model? Do you need to go through a lot of views to make them work again?
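
One common way to loosen that coupling (just a sketch with made-up Customer classes, not something any framework hands you for free) is to give the view a small view model instead of the domain model:

```java
import java.time.LocalDate;

// Domain model: owned by the business logic, free to change.
class Customer {
    String firstName;
    String lastName;
    LocalDate birthDate;
}

// View model: the only thing the view binds to.
class CustomerViewModel {
    final String displayName;

    CustomerViewModel(Customer c) {
        // The mapping is the single place that changes when the domain model changes.
        this.displayName = c.firstName + " " + c.lastName;
    }
}
```

Now a change to Customer touches the mapping, not every view that renders a customer.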

Coupling causes fragility, and you can easily spot or recognize fragile code.

Immobility:

If your code is hard to reuse in the same project or in different ones, that is immobility. You are not using interfaces enough, your classes are so tailored to one scenario that you can’t reuse them, or they have unnecessary dependencies.

Developers tend to write a lot of generic classes or methods for reuse; however, that can add complexity of its own, and generics are also harder to maintain.
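
Here is a rough sketch, with made-up ReportStore and ReportWriter classes, of what “using interfaces enough” can look like: the writer depends on an abstraction, so it can move to another project or another kind of storage without dragging dependencies along.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// The abstraction the reusable code depends on.
interface ReportStore {
    void save(String name, String content);
}

// One concrete implementation; another project could plug in a database-backed one.
class FileReportStore implements ReportStore {
    @Override
    public void save(String name, String content) {
        try {
            Files.writeString(Path.of(name + ".txt"), content);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

class ReportWriter {
    private final ReportStore store;   // no hard-wired dependency on the file system

    ReportWriter(ReportStore store) {
        this.store = store;
    }

    void write(String name, String content) {
        store.save(name, content);
    }
}
```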

Viscosity:

If your software is easy to hack but hard to fix, that is a sign of bad design. When your software requires a change, there is usually more than one way of implementing it. Sometimes developers preserve the design goals and principles, but sometimes they hack their way through, especially when maintaining the design goals is challenging. It is usually easy to do the wrong thing but hard to do the right thing.

Complexity:

Everything that is too complicated is destined to fail. We love complicated things and problems. We enjoy them. However, software should be as simple as possible. In my previous post about core software design principles I mentioned two types of complexity: accidental and inherent. Inherent complexity is unavoidable; it comes from the problem domain. Accidental complexity, where we make things complicated ourselves, is the one we should refrain from.

If something is complicated, it is almost always bad. Look at the technologies people no longer use. In the Java world there was EJB (Enterprise JavaBeans). Almost 60 percent of the projects that implemented EJBs didn’t work, and 30 percent of the remaining ones were so bad that they required enormous amounts of time for deployment and configuration. It was all just too complicated.

Duplication:

If you have duplicate code and bad structure in your solution or projects, that is a typical symptom of bad design. The DRY principle tells you not to repeat yourself. Copy-paste is bad: it duplicates not only the code but also the effort. In my opinion there is nothing wrong with a two-line method; it is much better than duplicated code, because if you have a bug in your code, even a small one, you duplicate it everywhere you pasted it.
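
A tiny sketch of that trade-off, with an invented PriceFormatter: the method is only two lines, but the rounding rule now lives in one place, so a bug fix there fixes every caller.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

class PriceFormatter {
    // The two-line method: trivially small, but the rounding rule is defined once.
    static String format(BigDecimal amount) {
        BigDecimal rounded = amount.setScale(2, RoundingMode.HALF_UP);
        return rounded.toPlainString() + " USD";
    }
}

class Invoice {
    String total(BigDecimal amount) {
        return PriceFormatter.format(amount);   // instead of copy-pasting the rounding here
    }
}

class Receipt {
    String total(BigDecimal amount) {
        return PriceFormatter.format(amount);   // ...and here; one fix now covers both callers
    }
}
```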

Opacity:

If your software is hard to understand, that is a symptom of bad design. This is related to complexity as well. Your code should be clear and easy to understand, not only by you but also by other developers.

 

These are the usual signs, smells, or symptoms of bad software design. The SOLID principles help you design better software, regardless of the technology you use. There are also other software design principles you can refer to while developing software.

 

Pile of shit

Certainly, refactoring should take place during every phase of a software project. Personally, I wouldn’t accept any excuses around it. You can refactor your code and derive reusable components and useful patterns from it, e.g. command-query, data access patterns and so on. Recently I chatted with the project managers of a very large project. We wanted to integrate our crash reporting system into their project. They were a bit skeptical at first. Once they confessed that they use exception handling for flow control, I asked them why they don’t refactor; the response was not acceptable.
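
For reference, this is roughly what exception handling for flow control looks like, sketched with a made-up CustomerLookup class; the refactored version just returns the “missing” case as a value instead of throwing and catching:

```java
import java.util.Map;
import java.util.Optional;

class CustomerLookup {

    // Flow control through exceptions: "not found" is not exceptional, yet it is thrown and caught.
    Optional<String> findBadly(Map<Integer, String> customers, int id) {
        try {
            String name = customers.get(id);
            if (name == null) {
                throw new IllegalStateException("not found");
            }
            return Optional.of(name);
        } catch (IllegalStateException e) {
            return Optional.empty();   // the exception exists only to reach this branch
        }
    }

    // Refactored: the expected "missing" case is just a return value.
    Optional<String> find(Map<Integer, String> customers, int id) {
        return Optional.ofNullable(customers.get(id));
    }
}
```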


Yet another project I recently witnessed has 3,000 lines of code in a single method. Probably only the person who wrote it can understand it. The Compose Method pattern can be used on such methods while refactoring.
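
A minimal sketch of the Compose Method pattern, using an invented OrderProcessor: the public method reads like a table of contents, and each step gets its own small, intention-revealing private method.

```java
import java.util.List;

class OrderProcessor {

    // Composed method: the steps are named, short, and at the same level of abstraction.
    void process(List<String> items, String address) {
        validate(items, address);
        reserveStock(items);
        ship(items, address);
    }

    private void validate(List<String> items, String address) {
        if (items.isEmpty() || address.isBlank()) {
            throw new IllegalArgumentException("Nothing to ship or no address given");
        }
    }

    private void reserveStock(List<String> items) {
        items.forEach(item -> System.out.println("Reserving " + item));
    }

    private void ship(List<String> items, String address) {
        System.out.println("Shipping " + items.size() + " items to " + address);
    }
}
```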

In so many ways, these projects resemble a pile of shit. Yet they are destined to be rewritten. My curiosity is: will it be any different next time? I am working on a post called “Software for a change”, which will be published soon. Please follow me for that one.

You don’t have to be a very experienced developer to recognize the problems above; use your intuition. If something doesn’t feel right, you are probably doing it wrong. If it feels too complicated, you are probably doing it wrong!

Boundaries in NoSQL

Defining and developing a model is usually the first thing we do when developing software. In this process we define classes and interfaces, their relationships, and so on.

Boundaries and aggregates form a pattern in domain-driven design. A collection of related objects can be viewed as a single unit, called an aggregate. The aggregate has its own integrity, and there are boundaries between classes and relationships.

We need to persist aggregates to databases. When working with relational databases, we usually use an ORM (object-relational mapper) for this purpose.

[Figure: a shopping cart aggregate split across relational tables]

Above, we have an aggregate for an online store: a shopping cart. Using an ORM and a relational data store, the aggregate is persisted to its related tables. Customer (1001), Ann, has many line items and payment details. Likewise, the aggregate can be populated back into the object model in a similar way. So ORMs provide great benefits when working with aggregates by simplifying the programmer’s work.
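
As a sketch (the class names are mine, not from any particular code base), the cart aggregate could look like this in code, with the shopping cart as the aggregate root and all changes going through it:

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

class LineItem {
    final String product;
    final int quantity;
    final BigDecimal unitPrice;

    LineItem(String product, int quantity, BigDecimal unitPrice) {
        this.product = product;
        this.quantity = quantity;
        this.unitPrice = unitPrice;
    }
}

class PaymentDetails {
    final String cardNumberMasked;

    PaymentDetails(String cardNumberMasked) {
        this.cardNumberMasked = cardNumberMasked;
    }
}

class ShoppingCart {                         // aggregate root
    final int customerId;                    // e.g. 1001 ("Ann")
    private final List<LineItem> lineItems = new ArrayList<>();
    private PaymentDetails paymentDetails;

    ShoppingCart(int customerId) {
        this.customerId = customerId;
    }

    // All changes go through the root, so the aggregate stays consistent within its boundary.
    void addItem(LineItem item) { lineItems.add(item); }
    void pay(PaymentDetails details) { this.paymentDetails = details; }
}
```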

The same pattern applies to NoSQL databases. Regardless of the persistence technology, if we use aggregates in our software, we persist them to the data store, and likewise we can query an aggregate by its line items, payment details, and so on.

How do aggregates and boundaries work with NoSQL stores?

Key-value stores, document stores, columnar stores, graph stores: as you can imagine, they have different means of storing aggregates. Key-value and document stores are similar in the sense that both can store and retrieve an aggregate by key, but documents can also be queried by their properties. Key-value stores don’t provide functionality to query by properties.
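
A hypothetical sketch of that difference; the KeyValueStore and DocumentStore interfaces below are invented for illustration, not the API of any real product:

```java
import java.util.List;

interface KeyValueStore {
    void put(String key, String json);
    String get(String key);                       // lookup by key only
}

interface DocumentStore {
    void insert(String collection, String json);
    String findByKey(String collection, String key);
    List<String> findByProperty(String collection, String property, Object value); // the extra power
}

class CartPersistence {
    void demo(KeyValueStore kv, DocumentStore docs, String cartJson) {
        // Key-value: the whole aggregate is opaque; you can only ask for it back by its key.
        kv.put("cart:1001", cartJson);
        String byKey = kv.get("cart:1001");

        // Document: the same aggregate, but you can also query inside it.
        docs.insert("carts", cartJson);
        List<String> laptopCarts = docs.findByProperty("carts", "lineItems.product", "laptop");
    }
}
```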

Columnar databases use a key space and column families: instead of rows, we work with families of columns. With graph databases we can easily store and query the relationships between objects.

Most NoSQL databases provide an easy way to persist in-memory aggregates to the data store.

While aggregate-oriented databases are great for transactional systems, we can’t say the same for analytics. Generating analytics and reports from such aggregates is difficult and inefficient because of how the data resides inside the aggregates.
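
A toy sketch of why, with invented Cart and Item types: to answer a simple question like “revenue per product” you have to open every cart aggregate and regroup its contents, which is essentially the work a map-reduce job ends up doing.

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class RevenueReport {

    record Item(String product, int quantity, BigDecimal unitPrice) {}
    record Cart(int customerId, List<Item> items) {}

    // The data is grouped by customer (the aggregate), not by product (the report),
    // so every aggregate has to be pulled apart and its items regrouped.
    Map<String, BigDecimal> revenuePerProduct(List<Cart> allCarts) {
        Map<String, BigDecimal> totals = new HashMap<>();
        for (Cart cart : allCarts) {
            for (Item item : cart.items()) {
                BigDecimal line = item.unitPrice().multiply(BigDecimal.valueOf(item.quantity()));
                totals.merge(item.product(), line, BigDecimal::add);
            }
        }
        return totals;
    }
}
```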

Even though NoSQL databases claim to be a great fit for big-data analytics systems, I wouldn’t agree with that. One has to write map-reduce jobs against these particular databases, and there are serious limitations on them.