Continuous Feedback

Fixing performance problems can be tricky. I joined a new team last spring, and my first assignment was to investigate and fix some performance problems they were having.

Most of the reported performance problems were related to a complicated function with many moving parts.

I started by creating test data that I could use to measure the current performance, giving me a baseline to compare my changes against.

I then used a combination of manual code inspection and a profiler to pinpoint the performance bottlenecks in the code.

After a few days of work, we located and fixed several issues that negatively affected performance, most of them related to database access through Hibernate. The same data was read from the database multiple times and had to be cached, Hibernate produced N+1 queries due to misconfigured entities, and one slow query was fixed by adding a missing table index.
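To make the N+1 fix concrete, here is a sketch of the Hibernate side, using hypothetical entity names (Order, OrderLine); the actual mapping and query in our code base differed, but the pattern is the same:

```java
// Hypothetical entities for illustration; not the actual production code.
// An association loaded per parent like this triggers N+1 queries:
// one query for the orders, plus one per order for its lines.
@OneToMany(mappedBy = "order")
private List<OrderLine> lines;

// Fetching the lines explicitly in a single JPQL "join fetch" query
// collapses the N+1 pattern into one round trip to the database:
List<Order> orders = entityManager.createQuery(
        "select distinct o from Order o join fetch o.lines", Order.class)
    .getResultList();
```

The missing table index, by contrast, was a plain SQL fix: a CREATE INDEX on the column used in the slow query's WHERE clause.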

Manually finding these issues is doable but very work-intensive. There must be a better way. Our applications can give us lots of information we can utilize by enabling OpenTelemetry, something I wrote about in my previous article, Observability for Agile Development: Why It’s a Game Changer.

Digma is a tool that analyzes OpenTelemetry data to give us continuous feedback from our applications. I got involved as a beta-tester around the same time as I was working with the performance issues I described above, so I could directly see the benefit of having a tool to help me identify problem areas.

With Digma, we get continuous feedback on the performance of the application. Digma instantly shows us where the bottlenecks are, so we don’t have to search for them ourselves. We get direct feedback on how our latest fix performs compared with the previous version, and it even found bottlenecks that we did not know about. It takes the guesswork out of the picture and saves us lots of time finding and fixing performance issues.

The tool is free to use locally for developers. All they ask is that we register so that they can get information about how many users they have. It consists of a plugin for your IDE and a backend running in containers. No data is exposed externally, so there should be no issue with security or privacy. The best thing is that the installation is super easy. The plugin will enable OpenTelemetry automatically for many common frameworks and application servers.

There is also an option to do a central installation of Digma to get even more benefits, but there is a licensing fee. In this article, I will share my experience working with a local installation and my thoughts on why you should consider doing a central installation. Let’s get into it!

Local environment

Digma comes bundled with Jaeger, so we can get a visualization of collected tracing data directly without leaving the IDE. We can navigate between code and traces. We can follow the flow of a request and see things like parameters and queries.

Tracing data is automatically collected when you run or debug your code in the IDE or run tests. It gives us continuous feedback on how our code is performing and whether the latest change has introduced any new issues.

A new feature shows you which tests have touched a particular part of your code. This feature makes it possible to run the tests affected by the changes we are making, saving us time and providing faster feedback on the issue we are working on.

It will automatically detect many of the issues I mentioned in my example above. This feature alone saves lots of time: we no longer have to locate where the problem lies and can focus on fixing it instead. I estimate that we could have solved the performance problems in the example above in 1/10 of the time by using this tool. It is a huge deal and the main reason I’m writing this article. We should work smarter, not harder.

Make sure you try it with your application. Digma is free, easy to set up, and will boost your productivity.

Doing a central installation will allow us to take full advantage of Digma.

CI/Testing environment

Connecting our CI environment to Digma makes it possible to monitor our releases over time; OpenTelemetry has to be enabled in our product to make this work. Digma can then tell us whether performance has improved or degraded between releases, and we can run performance suites to detect scaling issues.

We now have the option to compare the results from the code running in the CI environment with the code we are currently working on. We get feedback on whether our latest change fixes a performance problem even before submitting the code to the repository. There is no need to wait for the build and deploy process to complete.

It can do a usage analysis to find unreached code. It can also identify code reached by many flows, so we know that changing it will have a wide impact.

Production environment

By connecting our production environment to Digma, we can get real-world data about our code and the issues currently in the wild. It allows us to be proactive instead of reactive.

This step requires that your customer allows OpenTelemetry data to be collected and that there is a way for developers to connect their IDE to the central Digma instance. We have to be sensitive to their concerns regarding privacy and security.

We will now have a way to compare code running in production with code currently being developed. We can also get information about problems as they happen and even where they occur.

How impressive would it look to identify and fix a problem and have it deployed into production before our customer notices it? With Digma, this is possible!

Please visit https://digma.ai to learn more about this tool.

Observability for Agile Development: Why It’s a Game Changer

Feedback is at the core of agile development. We strive to improve and shorten the feedback loops wherever we can. We do code reviews, ask for customer feedback, and run automatic test suites to locate regressions in functionality and performance. However, there is an often overlooked feedback loop where we can gain important insights. One that will allow us to be proactive instead of reactive.

Have you ever felt worried when releasing a new version of your product? How many issues will our customers find this time? Would it not be better to locate some of these issues sooner before deploying the product into production?

We can get that feedback loop by combining OpenTelemetry and Micrometer with various open-source tools. We can also gain insights into how our code is performing by using a new tool. It analyzes the collected data and produces insights that point us to the code we should look at. We can get these insights before submitting our changes to the code base. More about this tool in a bit. First, let’s look at OpenTelemetry. What is it, and how can we use the information it collects?

OpenTelemetry

OpenTelemetry is an Observability framework and toolkit designed to create and manage telemetry data such as traces, metrics, and logs. It has been standardized and works with many languages and frameworks.

It allows us to get a view into how our application is performing much in the same way a racing team uses telemetry to analyze and optimize the performance of a race car during practice and a race. By using this information, we can be proactive instead of reactive to possible issues with our applications.

How to activate this depends on your framework or language. If you are using Quarkus, you use an extension. If you are using Java without a framework, you use an agent. There is already a lot of documentation on enabling OpenTelemetry for different languages and frameworks, and I won’t repeat it here.
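For example, with plain Java the agent is attached with a single JVM flag; the jar path and service name below are placeholders:

```
# Attach the OpenTelemetry Java agent at startup; paths and names are examples
java -javaagent:./opentelemetry-javaagent.jar \
     -Dotel.service.name=my-service \
     -jar my-application.jar
```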

Collecting data by itself won’t help us much. Luckily, several open-source tools will help us visualize the collected telemetry data.

Visualize telemetry

Jaeger allows us to monitor and troubleshoot workflows in complex distributed systems.

Prometheus helps us go from metrics to insights. It collects and stores its metrics as time series data. Metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.
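As a small illustration, a single time series in Prometheus's text exposition format is just a metric name, a set of labels, and a sample value (the metric and label names here are made up):

```
# HELP http_requests_total Total number of HTTP requests handled
# TYPE http_requests_total counter
http_requests_total{method="GET", status="200"} 1027
```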

Grafana enables us to query, visualize, alert on, and explore our metrics, logs, and traces wherever they are stored.

OTel Collector is a vendor-agnostic way to receive, process, and export telemetry data. It is a central hub that connects the different parts of your observability environment.

Using telemetry

Now, we can collect and visualize information about the runtime performance of our application. We can plot graphs showing the number of requests per second and their timing. We can follow requests between our services and within them.

Enabling OpenTelemetry and visualizing the data in Jaeger and Grafana has given us more insight into how our application is performing. We are getting earlier feedback on potential problems and can be proactive instead of reactive. The problem is that we get so much data that it’s hard to locate problem areas.

Is there something we can do about that?

As it turns out, there is. Stay tuned for the next part, where we will build on OpenTelemetry to get continuous feedback from our applications.

5 Smart Ways to Use java.util.Objects

The java.util.Objects class provides a set of utility methods for working with Java objects. In this post, we will delve deep into the world of java.util.Objects and examine five smart ways to use it. From comparison and null-checking to hashing, there is much to learn, so buckle up and let’s dive in!

Introduction

The java.util.Objects class was added in Java 7 to provide utility methods for working with objects. Its purpose is to simplify common tasks such as null-checking and equality testing and to reduce the amount of boilerplate code that developers need to write.

Prior to Java 7, developers had to write their own null-checking code using if statements, which could be error-prone and time-consuming. The java.util.Objects class provides a set of static methods that handle null values in a consistent and reliable way, making it easier for developers to write robust and maintainable code.

Checking for Null objects with requireNonNull

When working with objects in Java, it is important to ensure that they are not null to avoid unexpected NullPointerExceptions. To address this, Objects contains a method, requireNonNull(), which helps check for null objects before they are used. This gives the programmer control over when to check for null objects instead of having the code blow up when the object is used.

public int doSomething(String parameter) {
   // do lots of stuff here
   return parameter.length();
}

If the method doSomething is called with a null value, it will throw a NullPointerException: Cannot invoke “String.length()” because “parameter” is null. In this case, we can clearly see the problem in the code, but if the method is more complicated it might not be so easy. A good programming practice is to check all parameters for validity before using them. Objects gives us not one but three methods to do just that.

public int doSomething(String parameter) {
   Objects.requireNonNull(parameter);
   // do lots of stuff here
   return parameter.length();
}

This will also throw a NullPointerException. The difference, in this case, is that it will be thrown by the requireNonNull call at the start of the method. If we need to provide more information in the exception we can use one of the other two variants of this method.

Objects.requireNonNull(parameter, "Additional information");
Objects.requireNonNull(parameter, () -> "Additional information");

Both of these will use the provided message in the generated exception. The difference is that the second variant will only generate the message if the given object is null.
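A minimal, runnable sketch of the difference; with the Supplier variant, the message is only built when the check actually fails:

```java
import java.util.Objects;

public class RequireNonNullDemo {

    static String expensiveMessage() {
        // imagine costly string building here; with the Supplier variant
        // this only runs when the checked value is actually null
        return "parameter must not be null";
    }

    public static void main(String[] args) {
        // Non-null value: the Supplier is never invoked, so no message is built
        String ok = Objects.requireNonNull("hello", RequireNonNullDemo::expensiveMessage);
        System.out.println(ok); // prints "hello"

        // Null value: the provided message ends up in the exception
        try {
            Objects.requireNonNull(null, "parameter must not be null");
        } catch (NullPointerException e) {
            System.out.println(e.getMessage()); // prints "parameter must not be null"
        }
    }
}
```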

Providing Default Values with requireNonNullElse

The requireNonNullElse method is a convenient way to provide default values for null objects. It was introduced in Java 9 and helps reduce NullPointerExceptions by providing a value to use when a null is received, avoiding the need for the developer to explicitly check for null before using the object.

By providing default values for null objects, it makes our code more concise, readable, and robust. As such, it is a valuable addition to our programming arsenal and should be used whenever appropriate.

public int doSomething(String parameter) {
   String localParam = Objects.requireNonNullElse(parameter, "");
   // do lots of stuff here
   return localParam.length();
}

In addition to the basic usage described above, the method works with any object type. Note that the default value itself must be non-null; otherwise, a NullPointerException is thrown.

To optimize performance, the alternative variant of this method, requireNonNullElseGet, can be utilized when obtaining the default value involves a resource-intensive operation. This method takes a Supplier instead of an object as the second parameter and we can use it to call a method to get the default value. The Supplier will only be called if the provided parameter is null.

public int doSomething(String parameter) {
   String localParam = Objects.requireNonNullElseGet(parameter, this::getDefaultValue);
   // do lots of stuff here
   return localParam.length();
}

private String getDefaultValue() {
   return "";
}

Comparing Objects with Objects.equals()

The Objects.equals method was added to the Java language to provide a null-safe comparison of objects and to address the limitations of the instance method Object.equals(), which cannot be invoked on a null reference. It makes it easier and more convenient to compare objects and is a useful addition to the language.

Before Java 7 and the addition of Objects.equals we had to construct something like the following example to make a null-safe comparison between two objects. It contains lots of details and boilerplate code.

private boolean isEqual(Object s1, Object s2) {
   if (s1 == null && s2 == null) {
      return true;
   }

   if (s1 == null || s2 == null) {
      return false;
   }

   return s1.equals(s2);
}

By using Objects.equals we can instead implement this method like the example below. The Objects.equals method will take care of the null checks and will only call the equals method on our object if both s1 and s2 are non-null. Not only will this remove the noise of checking for null, but it will also protect the equals method in our object from being called with a parameter that is null.

private boolean isEqual(Object s1, Object s2) {
   return Objects.equals(s1, s2);
}

Retrieving HashCodes with Objects.hashCode()

In this example, the hashCode() method is implemented with the help of java.util.Objects to calculate the hash code based on the id and name fields. Objects.hashCode() is the null-safe variant for a single object, while Objects.hash() combines several fields; by passing the fields to Objects.hash(), null values are handled internally and a consistent hash code is produced.

import java.util.Objects;

public class MyClass {
    private int id;
    private String name;

    // Constructor, getters, and other methods

    @Override
    public int hashCode() {
        return Objects.hash(id, name);
    }

    // we also have to implement an equals method, but it has been left out for brevity
 
}

It’s important to ensure that the fields used in hashCode() are the same fields considered in the equals() method to maintain the general contract between these two methods.

By implementing hashCode() in this manner, you can generate a hash code that takes into account the relevant fields in your class, simplifying the process and ensuring consistency.
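Since the equals method was left out above, here is a complete, runnable sketch of the pair, using Objects.equals for the null-safe field comparison and Objects.hash for the hash code (same field names as the example):

```java
import java.util.Objects;

public class MyClass {
    private final int id;
    private final String name;

    public MyClass(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MyClass)) return false;
        MyClass other = (MyClass) o;
        // compare exactly the fields used in hashCode()
        return id == other.id && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, name);
    }

    public static void main(String[] args) {
        MyClass a = new MyClass(1, "alpha");
        MyClass b = new MyClass(1, "alpha");
        // equal objects must have equal hash codes
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // prints "true"
    }
}
```

Note that a null name is handled safely by both methods, since Objects.equals and Objects.hash do the null checks for us.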

Creating Custom Comparison Strategies with Objects.compare()

You can use the Objects.compare() method if you want to compare two instances of a class that doesn’t implement Comparable, or if you want to use a custom comparison strategy. It returns 0 if both arguments are the same instance (including when both are null); otherwise, it returns the result of the passed comparator. You might still get a NullPointerException when only one argument is null, depending on the implementation of the comparator.

Integer a = 42;
Integer b = 7;
Objects.compare(a, b, Integer::compareTo);  // will return 1

By supplying a custom comparator to this method, we can create a custom comparison strategy.

Integer a = 42;
Integer b = 7;
Objects.compare(a, b, this::myCustomComparator);

private int myCustomComparator(Integer val1, Integer val2) {
   int result = 0;
   // implementation of custom comparison
   return result;
}

Conclusion: Embracing the Flexibility of java.util.Objects

In conclusion, java.util.Objects provides a wide range of useful methods that can simplify the coding process. By taking advantage of these methods, developers can write cleaner, more efficient code and reduce the likelihood of runtime errors.

Whether you’re a seasoned Java developer or just getting started with the language, understanding how to use java.util.Objects effectively is an important skill to have. By incorporating these methods into your codebase, you can improve code quality and productivity while reducing the risk of errors and bugs in your application.

Maintainable code: what is it and why should Java back-end developers care?

Most of us would rather work on a new project instead of on an old messy legacy system. Most developers I talk to tell me that they like to work with code that is easy to maintain. Somehow we end up with all these legacy systems that no one dares touch for fear of breaking something. Why is it that we continue to create projects that are hard to maintain when no one wants to work on them?

We are quite good at identifying whether a codebase is maintainable by how hard it is to make changes to it. At the same time, we are not so good at preventing it from becoming unmaintainable in the first place. Some common reasons code becomes hard to maintain are:

Over-engineering: The solution tries to solve a problem that doesn’t exist. The code is more complex than it needs to be.

Inconsistent coding practices: The team doesn’t have a common way of writing code.

Lack of refactoring: Existing code is only changed if there is a problem with it.

The lack of a good definition of what maintainable code is can make it easy to miss that the codebase is becoming less maintainable.

Instead, we can define some desired characteristics of maintainable code and use them to check whether the code we create is becoming more or less maintainable. We will also look into what benefits writing maintainable code can have for you and your career.

Characteristics of maintainable code

The goal of maintainable code is that it should enable developers to quickly understand and update it without introducing errors or unintended behaviors. A software system is rarely “finished” and is always evolving to meet new requirements. Code is like a garden: it constantly has to be cleared of weeds, or it will become completely overgrown. To achieve this, a codebase needs to have the following characteristics:

Modularity. It should consist of components that are small, independent, and reusable.

Readability. Written with a clear and consistent domain language, where all variables, functions, and modules have meaningful names.

Testability. It should have automated test suites that test the behavior of the system so changes to it do not cause unintended consequences.

Scalability. It should be able to handle increasing amounts of data and users without becoming slow or difficult to change. Do this with moderation. There is no need to be able to scale up to infinity unless there is a specific business need for that.

Documentability. It should have up-to-date documentation describing why it is implemented the way it is to help new developers understand how it works. Well-written tests can also be used as a form of documentation to show the intended behavior of the system.

Flexibility. Designed in a way that it’s easy to change and adapt to meet changes in the requirements.

Reusability. Designed in a way so that modules can be used to build and integrate with other systems.

What’s in it for me?

By writing maintainable code that is easier to change and debug, you will save time and increase productivity in the long run. You and your team will have better collaboration and teamwork because the code is easier for other developers to understand and work on. Demonstrating the ability to write maintainable code can be a valuable skill in the job market, increasing the likelihood of being hired for high-quality projects and advancing in your career. It will also enhance your reputation and credibility, leading to better opportunities and greater recognition in the software industry.

Overall, writing maintainable code is a best practice that can lead to better software and a more successful career for developers. In the coming posts I’ll talk about what we can do to achieve these characteristics so don’t forget to register to be notified when new articles are posted.