Using Spark for Java Microservices
There are many options for writing Java Microservices. Here, I will start to explore one of the most minimal approaches possible: the Spark Framework.
Motivation
For a long time now, Java Microservices have been synonymous with Spring Boot for me. Boot combines an almost ridiculously easy setup - at least by Java standards - with the unparalleled feature-richness of the Spring Framework.
However, Spring Boot has its downsides. As the dependency tree of your application grows, it is crucial to constantly check whether unwanted auto-configuration is being bootstrapped. Spring is the king of hiding complexity behind abstractions, to the point where it becomes almost impossible to debug issues in auto-configuration classes without in-depth knowledge of the framework. This is especially a problem when using Spring Cloud with its numerous, occasionally under-documented features and configuration properties.
Sometimes, it is a relief to tear away everything that makes Java services so cumbersome and heavyweight and start with something really nice and easy. The Spark Framework offers a bare-bones approach to Java Microservices with an API design that resembles Node.js web frameworks like Express.
You can follow along with the examples by grabbing the source code from my GitHub repository.
Basic setup
Let’s start with the pom.xml. In the first iteration, we do not want to bother with making the project executable as a jar file, nor do we want a proper logging setup, so the POM stays very small.
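A minimal sketch of what this pom.xml can look like; the groupId is derived from the package name used below, and the version numbers are assumptions rather than the ones from the original project:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- groupId is an assumption based on the package name used below -->
    <groupId>me.aerben</groupId>
    <artifactId>spark-sample-service</artifactId>
    <version>1.0</version>

    <properties>
        <!-- compile for Java 8, which Spark's lambda-based API relies on -->
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <!-- the Spark web framework itself; the version is an assumption -->
        <dependency>
            <groupId>com.sparkjava</groupId>
            <artifactId>spark-core</artifactId>
            <version>2.5</version>
        </dependency>
    </dependencies>
</project>
```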
This is already enough to pull together a basic Spark project with Java 8 support. Now we just need the actual main application class in the package me.aerben (as always with Maven, the source file goes into src/main/java/{package}) and we’re good to go.
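A sketch of what this class can look like; the class name and the route’s response text are assumptions, and the switch to port 8080 is the one described just below:

```java
package me.aerben;

import static spark.Spark.get;
import static spark.Spark.port;

// Class name and response text are assumptions; the port change to 8080
// is the one discussed in the text below.
public class Application {

    public static void main(String[] args) {
        // Spark listens on 4567 by default
        port(8080);

        // a single route answering GET /
        get("/", (request, response) -> "Hello World");
    }
}
```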
Being a loyal Tomcat user for ages, I just had to change the listen port from its default 4567 to 8080, but that is of course entirely up to you. We can now start the application from an IDE like Eclipse or IntelliJ IDEA and then query it to receive the expected result:
```
$ curl localhost:8080
```
Building a standalone package
Now we’ve got something that we can run in our IDE. But when we try to build the app and run it on the command line, it won’t work:

```
$ mvn package
$ java -jar target/spark-sample-service-1.0.jar
no main manifest attribute, in "spark-sample-service-1.0.jar"
```
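The message refers to the Main-Class entry in the jar’s META-INF/MANIFEST.MF, which a plain mvn package run does not set. Roughly, this is the entry we need to end up with (the class name is the assumption from above):

```
Main-Class: me.aerben.Application
```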
We have not configured the jar build yet. To fix this, we have to apply some tweaks to the project’s pom.xml. First, we add the maven-jar-plugin to the project’s build configuration so that an executable jar is built.
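A sketch of what that plugin configuration can look like inside the build section of the pom.xml; the main class name is the same assumption as above:

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <configuration>
                <archive>
                    <manifest>
                        <!-- adds the Main-Class manifest entry that java -jar was missing;
                             the class name is an assumption -->
                        <mainClass>me.aerben.Application</mainClass>
                    </manifest>
                </archive>
            </configuration>
        </plugin>
    </plugins>
</build>
```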
When we build the project with mvn package now, we obtain a jar with the project’s name in the target folder. However, it still won’t run:
```
$ java -jar target/spark-sample-service-1.0.jar
```
The dependencies are still absent from the built jar file. We have to include another plugin to build a jar that bundles everything necessary to start the application.
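A sketch of such a configuration using the maven-assembly-plugin; the finalName is derived from the jar name mentioned below, the commented-out appendAssemblyId is picked up again in the final tweaks, and the main class name remains an assumption:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <!-- results in target/spark-sample-service-jar-with-dependencies.jar -->
        <finalName>spark-sample-service</finalName>
        <!-- uncomment to drop the assembly identifier from the file name -->
        <!-- <appendAssemblyId>false</appendAssemblyId> -->
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <archive>
            <manifest>
                <!-- class name is an assumption, same as in the maven-jar-plugin -->
                <mainClass>me.aerben.Application</mainClass>
            </manifest>
        </archive>
    </configuration>
    <executions>
        <execution>
            <!-- build the bundled jar as part of mvn package -->
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```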
The assembly plugin will generate a jar file in the target folder with all dependencies bundled. The name of the file is spark-sample-service-jar-with-dependencies.jar in our case. We can now run the app as a normal Java application:
```
$ java -jar target/spark-sample-service-jar-with-dependencies.jar
```

and query it again from a second terminal:

```
$ curl localhost:8080
```
Final tweaks
Everything is working fine - but what can we do about the ugly SLF4J errors that we see on startup? It turns out that Spark adds SLF4J as a logging facade to the classpath, but thankfully lets us choose the implementation we want to use. The slf4j-simple implementation will just dump the logs to standard output and requires no additional configuration.
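Adding the implementation as a dependency is all it takes; the version is an assumption and should roughly match the slf4j-api version already on the classpath:

```xml
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <!-- version is an assumption; align it with the slf4j-api version in use -->
    <version>1.7.21</version>
</dependency>
```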
Another thing we might want to change is the file name of the generated jar with dependencies. If we want to get rid of the assembly identifier in the file name, we can add the configuration property appendAssemblyId, which you can see commented out in the above configuration of the maven-assembly-plugin. Note that when you use the same finalName in the maven-assembly-plugin as in the maven-jar-plugin, it will result in a warning that the original jar file is being overwritten.
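For example, with these two properties set in the maven-assembly-plugin configuration (values assumed for illustration), the bundled jar would simply be named spark-sample-service.jar:

```xml
<configuration>
    <finalName>spark-sample-service</finalName>
    <!-- drop the "-jar-with-dependencies" suffix from the file name -->
    <appendAssemblyId>false</appendAssemblyId>
    <!-- ... -->
</configuration>
```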
Great, so now we have a running Spark app that can be started directly from the command line. In a later post, we will further explore what to do with it.