Andre's Blog 2016



20161230 Human Readable Object Export



Now and then you need to extract the data of an object to transfer or store it.

You could use Java's serialization functionality, but then for deserialization the class must have the same structure as it had at serialization time.

I was looking for a way to do this where

  1. there is only minimal effort to implement the export
  2. I am not restricted when loading the data into new structures
  3. the data is represented in a human-readable way

My first idea was to solve this problem by implementing an export to XML.

The XML would be human-readable, and I could read the XML and write the data into any new structure, but this would cost me more effort than I wanted to spend.

My next idea was to use JSON, because compared to my experience with reading XML in Java, I favour reading JSON in Groovy much more; it's simply easier.

I gave it a try and started googling for how to write data into a JSON structure using Groovy.

What I found was that groovy.json.JsonBuilder creates a JSON builder where you just have to pass objects to the call() method to export them to JSON.

This is the JsonBuilder API.

I have implemented a demonstrator example to check that it works for me. This example uses a pre-version of the JPA model class I want to use for logging into the H2 database; it contains these types of members

This class is complex enough to show how well the JsonBuilder performs for my problem and how much effort I have to spend on workarounds.

Let me first show you how the JsonBuilder is used, then I will show you my results, and at the end let's have a look at the classes and scripts I have implemented for this example.

This is the code where I create my object and feed it to the JsonBuilder :

CILog log = new CILog()

log.setSummary("This is a summary of this log entry")
log.setFulltext("This is the fulltext explaining what happened")
log.setNodeId("The nodeId tells us which node has logged this information")
log.setTimestamp( Calendar.getInstance())

log.addEntry(new CIEntryData(1234, "content 1234"))
log.addEntry(new CIEntryData(2234, "content 2234"))
log.addEntry(new CIEntryData(3234, "content 3234"))
log.addEntry(new CIEntryData(4234, "content 4234"))

def json = new groovy.json.JsonBuilder()
def result = json log

println json.toPrettyString()

and this is the output :

    {
        "summary": "This is a summary of this log entry",
        "nodeId": "The nodeId tells us which node has logged this information",
        "entryList": [
            {
                "index": 1234,
                "content": "content 1234"
            },
            {
                "index": 2234,
                "content": "content 2234"
            },
            {
                "index": 3234,
                "content": "content 3234"
            },
            {
                "index": 4234,
                "content": "content 4234"
            }
        ],
        "logEntryType": "Exception",
        "fulltext": "This is the fulltext explaining what happened",
        "logId": 1234567,
        "timestamp": "2016-12-30T09:37:13+0000"
    }

You see, you only need one line of code to feed all the data of an object, and of all other objects connected to it via a list, to a JsonBuilder; in this case json log, which is a short form of json.call(log).

For me this exceeds all expectations.

The only requirement seems to be that you need to provide getters for all members you want to see in the JSON output, but IDEs like IntelliJ IDEA provide functions to generate getters and setters.

This could be interesting with JPA when you need to export your database, which is one reason why I have chosen an entity class.

How to read the data from a JSON configuration is documented in my blog post about using the JsonSlurper.

This is the class the JsonBuilder exports :

    package com.wartbar.db.model;

    import javax.persistence.Temporal;
    import javax.persistence.TemporalType;
    import javax.persistence.Column;
    import javax.persistence.Enumerated;
    import javax.persistence.EnumType;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;

    import java.util.ArrayList;

    import com.wartbar.util.CIEnumerations.CILogEntryType;

    @Entity
    @Table(name = "LOG")
    public class CILog {

        public CILog() {
            entryList = new ArrayList<>();
        }

        public void setLogId(long logId) {
            this.logId = logId;
        }

        public long getLogId() {
            return logId;
        }

        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        @Column(name = "LOGID")
        private long logId;

        public java.util.Calendar getTimestamp() {
            return timestamp;
        }

        public void setTimestamp(java.util.Calendar timestamp) {
            this.timestamp = timestamp;
        }

        @Temporal(TemporalType.TIMESTAMP)
        java.util.Calendar timestamp;

        public String getSummary() {
            return summary;
        }

        public void setSummary(String summary) {
            this.summary = summary;
        }

        @Column(name = "SUMMARY")
        private String summary;

        public String getFulltext() {
            return fulltext;
        }

        public void setFulltext(String fulltext) {
            this.fulltext = fulltext;
        }

        @Column(name = "FULLTEXT")
        private String fulltext;

        public String getNodeId() {
            return nodeId;
        }

        public void setNodeId(String nodeId) {
            this.nodeId = nodeId;
        }

        @Column(name = "NODEID")
        private String nodeId;

        public CILogEntryType getLogEntryType() {
            return logEntryType;
        }

        public void setLogEntryType(CILogEntryType logEntryType) {
            this.logEntryType = logEntryType;
        }

        @Enumerated(EnumType.STRING)
        private CILogEntryType logEntryType;

        public void addEntry(CIEntryData entryData) {
            entryList.add(entryData);
        }

        public ArrayList<CIEntryData> getEntryList() {
            return entryList;
        }

        public void setEntryList(ArrayList<CIEntryData> entryList) {
            this.entryList = entryList;
        }

        private ArrayList<CIEntryData> entryList;
    }

This is the 'enum' used in the example :

    package com.wartbar.util;

    public class CIEnumerations {

        public enum CILogEntryType {
            Info (1), Problem (2), Exception (3);

            private final int value;
            private CILogEntryType(int v) { value = v; }
            public int getVal() { return value; }
        }
    }

This is the class used in the 'ArrayList' :

    package com.wartbar.db.model;

    public class CIEntryData {

        public Long getIndex() {
            return index;
        }

        public void setIndex(Long index) {
            this.index = index;
        }

        private Long index;

        public String getContent() {
            return content;
        }

        public void setContent(String content) {
            this.content = content;
        }

        private String content;

        public CIEntryData(Long index, String content) {
            this.index = index;
            this.content = content;
        }
    }

This is the 'buildSrc/build.gradle' to provide the 'javax.persistence' dependencies :

    apply plugin: 'groovy'
    apply plugin: 'java'

    repositories {
        mavenCentral()
    }

    dependencies {
        compile (
            // assumed coordinates for a JPA provider supplying javax.persistence
            'org.eclipse.persistence:eclipselink:2.6.4'
        )
    }


20161228 Gradle Buildsrc Needs To Have Buildscript For Dependencies


If your Groovy or Java class in buildSrc/src/... depends on external JAR files, you need to have a build.gradle script reflecting these dependencies placed in buildSrc, or you get error messages like these :

    amos$ gradle build
    /Users/amos/gitrepo/blog/ideaprojects/JsonWriter/buildSrc/src/main/java/com/wartbar/db/model/ error: package javax.persistence does not exist
    import javax.persistence.Temporal;

20161218 H2 For Logging


Some requirements of CISystem specify which information has to be logged by the nodes. While implementing the concept of actions I had the idea of simply logging every action, so I could do a post-mortem analysis of everything happening on a node and would not even have to implement this again every time, since it would be a part of the action idea.

The only problem would be : log files. They are long, it is hard to search for patterns in them, and they are no fun.

What I want is a way to handle these log files

I have decided to use the H2 Database Engine to store the log entries, because

  1. it is implemented in Java, you just need to have the jar file, so this database is portable
  2. it uses a file on your hard disk, and you can access this database file via your browser by just running the jar
  3. it can be run as an in-memory database, with no need to persist the data to disk if you don't want to

I already have an example running in Eclipse based on JPA using EclipseLink.

Now I am working on the Gradle build script to build the database model and the application itself.

Building the application works, but generating the database model using Gradle is not working yet; there seems to be a compile-time conflict with the resources folder where META-INF/persistence.xml is stored.

I am working on this and will post my example code and build script when it is finished.

20161218 Me Updated


I have updated the page explaining who I am and what I am doing here.

20161216 History Outsourced



The last automation step for this year is done, I have finished my script to create the history of this blog automatically.

To save you bandwidth

  1. the history is now in its own HTML file
  2. and I start a new HTML file for every new year.

Instead of the History link, there are now these 3 links :

Full Index

If you are interested in my current build script, please download the buildSrc and build.gradle here.

20161206 A New Design

I have finished my new design for the blog.

20161115 Intellij IDEA Indention Tabs Spaces

[Intellij IDEA]

To configure the indentation of your files in IntelliJ IDEA you have to click File / Other Settings / Default Settings, and then you find it.

20161111 Intellij IDEA Keymap

[Intellij IDEA]

Something has changed my keymap in IntelliJ IDEA; cmd cursor up and cmd cursor down no longer bring me to the document start and document end.

I have found out how to fix this :

  1. click IntelliJ IDEA in the menu
  2. click Preferences...
  3. click Keymap
  4. double-click Move Caret to Text Start/End
  5. click add keyboard shortcut
  6. press the keys you want to use for the function
  7. if this key combination is already in use, decide whether to keep or remove it
  8. if you want to remove combinations you don't need, double-click Move Caret to Text Start/End and remove them

20161106 Refactored Gradle Build



Adding new functionality to my Gradle build has increased its code size and made it hard to read.

For this reason I have started to refactor it.

I have decided to

  1. separate the code into different Groovy classes
  2. move the code to the buildSrc
  3. have custom Gradle tasks for those classes where I don't need to pass parameters

Please find my changes here in my gradle blog.

20161102 Consumer Producer Pattern Or Problem


Yesterday, while putting my post online, I felt a little bit funny about the Consumer Producer Pattern (CPP) term I have introduced. I must have overlooked this while googling, but there is no Consumer Producer Pattern (CPP).

Everybody else calls it the Producer Consumer Problem, which makes more sense, since you have to produce something before you can consume it, and the interesting point about it is solving the synchronization problem, not merely using a list. I don't like this term, though; it looks strange when you abbreviate it to PCP.

There are not really many people writing about this pattern, so I want to share the link where I found it :

20161101 Rework Of The Topics Page


While adding more and more tags to my posts I noticed that the topics page must be reworked :

  1. the tags are not sorted
  2. the links did not work for topics containing spaces
  3. the tags should be presented in a table

So instead of publishing my last posts ASAP, I have spent another few hours fixing this.

20161030 Goodbye Netbook Hello Mac


You may have seen this coming, my Atom netbook is too slow for using Gradle.

Its graphics resolution was too small for IntelliJ IDEA from the beginning.

But it's really too slow ...

Bye Bye dear netbook.

Hello, Mac!

I am now using my MacBook Air, which I bought a year ago to find my way into writing iOS apps ... I have not started that yet, but it's perfect for what I am doing now :

The one point where I am still not sure whether it is a blessing or a curse is the keyboard.

I really like typing on this device, I never had a notebook I liked typing on so much, but it took some weeks to learn its functions by heart. It starts with the fact that lots of the brackets are not printed on the keys.

Since I don't want to start a blog about how to use a Mac, here are some of the things I had to learn ...

Mac Keyboard without number block

For the closing brackets, use the next key to the right; for the opposite directions, use the corresponding cursor key.

Character                  Keys typed instead
[                          alt 5
~                          alt n
@                          alt l
{                          alt (
\                          shift alt 7
|                          alt 7

Action                     Keys typed
beginning of line          fn cursor left or cmd cursor left
beginning of document      cmd cursor up
page up                    fn cursor up
delete from front          fn backspace
select till end of line    fn shift cursor right


To see hidden files or directories in the finder, type : cmd shift .

Desktop / MissionControl

To select an application : cmd tab

To see all applications : F3 or ctrl cursor up

To show the desktop : cmd F3 or fn F11

open TextEdit from shell

Sometimes, when you just need a simple editor to copy & paste something, type in the shell : open -e


To get home : cmd shift h


Make a screenshot and save it in a file on desktop : cmd shift 3

Select a part and make a screenshot : cmd shift 4

20161030 Tagging Is Cool


In this blog post I have introduced my new topic tagging method for my blog posts.

I just realized how cool it is to be able to tag any post to link it to any number of related topics.

The reason for my "technical blogs" like the Atom Editor blog was that I wanted to have some kind of bracket around posts related to a specific topic.

While writing my post about how to use the MacBook Air keyboard I had to decide whether I need a separate technical blog to connect future posts on Mac topics.

My stomach decided : This is lame, no more technical blogs for micro topics, please!

Then I realized I don't need to do this anymore; I just tag the posts and the topic brackets will be generated automatically from the tags.

20161028 Communication In CISystem Between Threads




My last blog post was 3 weeks ago.

It seems I have to post more often about what I am thinking, not only about what I have already implemented.

I guess it would be easier to follow me if I told you my ideas before I show you my code, and I could do this more often.

For me as a software developer it still feels funny to present an analysis or an idea which is not coded yet and maybe never becomes code, but I try to work on this.

This weekend I have visited Eike, which was really great! We had lots of fun and some really good discussions.

I also had time on the train rides to write my blog and implement an elaborate example of what has haunted me for the last weeks :

How do threads communicate with each other?

So what did I do the last 3 weeks :

I had to bring some requirements and needs together :

  1. I wanted smaller code, which is easier to understand; I wanted to follow the Single Responsibility Principle (SRP)
  2. I had to find out how my threads will communicate with each other (CPP, OP)

Let's discuss this.

Single Responsibility Principle (SRP) And Separation Of Concerns (SoC)

My understanding of the Single Responsibility Principle is :

Every function or class should only do exactly one thing.

What does that mean?

For me it means that if a function does only one thing but is still complex, a class should be implemented which does the same as the function did, but as a refactored version of it: a set of smaller functions, where every function does only one thing.

If this new class has more than 10 functions, you should think about moving those functions which have more in common with each other than with the others into new classes.
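As a small illustration of this refactoring idea (hypothetical names, not CISystem code), a complex function becomes a class of small single-purpose functions:

```java
// Hypothetical example: a complex formatting function refactored into a
// class of small functions, each doing exactly one thing.
public class LogLineFormatter {

    // The one public entry point; it only composes the small steps.
    public String format(String nodeId, String summary) {
        return prefix(nodeId) + " " + clean(summary);
    }

    // One thing: build the node prefix.
    private String prefix(String nodeId) {
        return "[" + nodeId + "]";
    }

    // One thing: normalize whitespace in the summary.
    private String clean(String summary) {
        return summary.trim().replaceAll("\\s+", " ");
    }

    public static void main(String[] args) {
        System.out.println(new LogLineFormatter().format("node-1", "  disk   full "));
    }
}
```

Each private function is trivial on its own; the complexity only shows in how they are composed.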

What does this mean for the CISystem architecture?

One decision I made was to have a separate class for every request and every response, and to have the statemachine handle and execute them.

Then I remembered the requirement that every communication and action shall be logged. I don't want to mix concerns; I don't want to implement a logger part in every action to be handled. So I decided to have logger classes which take a CIData object and log it. This way I also implement the separation of concerns, because I have separated the code for actions from the code for logging by putting them into different classes.

The last question about logging was : how do I tell my software to write the CIData objects into the log in a human-readable way? The answer is, the objects must be added to a to-be-logged list, and some thread must check whether there is something to log and do it.

Consumer Producer Problem

I have threads which collect information about new nodes in the network.

I have threads which create socket connections to these new nodes.

There could be a thread logging performed actions.

So some threads produce data in lists and some consume this data.

If these threads synchronize on the list, this solves the Consumer Producer Problem.

This looks like a solution for how the threads can communicate with each other while they execute their run loop.
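A minimal sketch of this list-based communication (hypothetical names, not CISystem code): producer and consumer synchronize on the same shared list, but the consumer still has to check for new entries itself:

```java
import java.util.LinkedList;
import java.util.Queue;

// Hypothetical sketch: threads communicating via a shared list they
// synchronize on. The consumer has to poll, which is the drawback
// discussed in the next section.
public class SharedListDemo {

    private static final Queue<String> toBeLogged = new LinkedList<>();

    public static void produce(String entry) {
        synchronized (toBeLogged) {   // producers and consumers lock the same list
            toBeLogged.add(entry);
        }
    }

    public static String consume() {
        synchronized (toBeLogged) {
            return toBeLogged.poll(); // null when there is nothing to log yet
        }
    }

    public static void main(String[] args) {
        produce("action performed");
        System.out.println(consume());
    }
}
```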

Observer Pattern (OP)

The only thing I did not like about communicating only via lists was : when a thread has put something into the to-be-logged list, the client has to poll for this information, because it only synchronizes on the list; at least this is my personal understanding of it.

I can introduce a sleep(1000), which would work, but which smells like a workaround, because I do not understand the situation yet.

And polling in general, to be honest, has become a little bit outdated as a technique. When using Git you would use a Git trigger and not have a Jenkins script polling for a change in your repo, yes?

Let's use the Observer Pattern, let's inform the logger that there is data.

Java provides an implementation in the Observable class; you just need to extend your own class from it. Then you implement the update function in those classes which implement the Observer interface.

But the notify function of the Observable class only informs the Observers about the change, while the Observers do ... what? Actively wait?

No, there is something missing.

I googled several times "java thread observer pattern", but did not find anything helpful.

I explained it to Eike, I said "What I need to find out is how to code an Observer thread waiting for a notification of an Observable thread". One minute later Eike had found this link :

There they explain the thread synchronization via the wait() and notifyAll() functions of the Object class.

The search phrase was "java thread wait example" and again it was clear : The computer does what you say, not what you want...

OK, so the Java class Object implements a mechanism to let threads wait on an object, and the information produced by the server is : clients, wake up.
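The mechanism from that link can be sketched in a few lines (hypothetical names; a minimal illustration, not CISystem code): one thread blocks in wait() on a shared lock object until another thread signals it with notifyAll():

```java
// Minimal wait()/notifyAll() sketch: the waiting thread loops on a condition
// so it behaves correctly no matter which thread gets the lock first.
public class WaitNotifyExample {

    private static final Object lock = new Object();
    private static String message = null;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (message == null) {
                    try {
                        lock.wait();      // releases the lock and blocks
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                System.out.println("received: " + message);
            }
        });
        waiter.start();

        synchronized (lock) {
            message = "wake up";
            lock.notifyAll();             // wakes all threads waiting on lock
        }
        waiter.join();
    }
}
```

The while loop around wait() matters: it also covers the case where the notification arrives before the waiter has even started waiting.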

Combination of OP and CPP with wait and notify

After learning how to prevent active waiting by using wait() and notify(), the question was : how do I pass the data from the server thread to the client threads? Writing it to a global list or singleton automatically means new problems :

  1. you cannot write unit tests when using singletons
  2. you must come up with an idea for how to detect that a piece of information has been consumed, so it can be removed from the list

I have decided to use Java's Observer Pattern implementation for this: I use the notifyObservers(arg) function of the Observable class to pass the information directly into the update(arg) function of the clients. There the new information is added to a list member of the client. This happens in a synchronized block, so it will not be interrupted.
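A compact, self-contained sketch of this combination (hypothetical class names, not the actual CISystem code; java.util.Observable was the standard choice in 2016, though it is deprecated since Java 9):

```java
import java.util.LinkedList;
import java.util.Observable;
import java.util.Observer;
import java.util.Queue;

// Sketch: the producer pushes data directly into the consumer's update(...)
// via notifyObservers(arg); the consumer queues it in a synchronized block
// and wakes its worker thread with notifyAll().
public class ObserverQueueDemo {

    static class LogProducer extends Observable {
        public void produce(String data) {
            setChanged();              // mark this Observable as changed
            notifyObservers(data);     // calls update(this, data) on all observers
        }
    }

    static class LogConsumer implements Observer {
        private final Queue<String> queue = new LinkedList<>();

        @Override
        public void update(Observable source, Object arg) {
            synchronized (queue) {     // adding happens in a synchronized block
                queue.add((String) arg);
                queue.notifyAll();     // wake up a thread blocked in take()
            }
        }

        public String take() throws InterruptedException {
            synchronized (queue) {
                while (queue.isEmpty()) {
                    queue.wait();      // no active waiting
                }
                return queue.remove();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LogProducer producer = new LogProducer();
        LogConsumer consumer = new LogConsumer();
        producer.addObserver(consumer);

        producer.produce("entry 1");   // runs synchronously here, so take() returns at once
        System.out.println(consumer.take());
    }
}
```

The queue preserves the order of production, and nothing is lost even if the producer is faster than the consumer.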

So in the end I am doing what I started with: I am using the Consumer Producer Pattern, because I want to decouple the threads, and so I have no influence on the situation that the server can create new data before all clients have consumed the old data.

Exactly this is what I like about the Consumer Producer Pattern: it decouples the threads and at the same time guarantees that the information will be processed in the same order it was created and that no information is lost.

I have implemented this in a standalone example; it is contained in the next [blog post](#example-for-combining-op-and-cpp-with-wait-and-notify).

20161028 Example For Combining OP And CPP With Wait And Notify




Please find this example in my Java Blog.

20161008 Reflection On My Mind


Today I woke up with an idea of how to solve a repeating-myself problem with code reflection.

I have not worked with reflection yet, but I have known for some time what it can do. Since CISystem is my private project for learning new things, I decided to give it a try.

In CISystem I have a code structure I call my statemachine. It is a very simple statemachine: it only knows about the values of the two types MessageType and DataType to decide which function to call, but this structure is growing with the number of values of these types.

Before this structure my code was a little bit unorganized; the code I show you here is a snapshot of my approach to refactoring it.

I know this refactoring is not done yet, as I now see a nice structure but lots of repeated code statements.

You will see what I mean, it is trivial.

There is this function deciding to process a response or request :

public CIData process(CIData data) throws Exception {

  CIData response = null;

  switch (data.messageType) {

    case Response:
      response = processResponse(data);
      break;

    case Request:
      response = processRequest(data);
      break;

    default:
      throw new Exception("CIStateMachine does not process this type of message");
  }

  return response;
}

and it is calling one of these two, which are growing with every new DataType :

private CIData processRequest(CIData data) throws Exception {

  CIData response;

  switch (data.dataType) {

    case ConfigNetworkProposal:
      response = processConfigNetworkProposalRequest(data);
      break;

    case ConfigNetworkData:
      response = processConfigNetworkDataRequest(data);
      break;

    case Dice:
      response = processDiceRequest(data);
      break;

    case Message:
      response = processMessageRequest(data);
      break;

    default:
      throw new Exception("processRequest does not support this DataType");
  }

  return response;
}

private CIData processResponse(CIData data) throws Exception {

  CIData response;

  switch (data.dataType) {

    case ConfigNetworkProposal:
      response = processConfigNetworkProposalResponse(data);
      break;

    case ConfigNetworkData:
      response = processConfigNetworkDataResponse(data);
      break;

    case Dice:
      response = processDiceResponse(data);
      break;

    case Message:
      response = processMessageResponse(data);
      break;

    default:
      throw new Exception("processResponse does not process this DataType");
  }

  return response;
}

I guess you have already noticed that there is a pattern in how I use MessageType and DataType to define the function names, and that this is hard to read when it grows, as it is dull and boring.

Don't misunderstand me, the code is not bad, but imagine another 30 cases which all look the same, and you would be the one who has to maintain this ... wouldn't you want something final here, something which does not grow linearly with the number of actions to be implemented?

My idea was :

Uncle Bob says in his Clean Code book that a class shall only do one thing. It will take some time to make CISystem meet this requirement, but there are some obvious violations which can be repaired.

I had already planned to have one class for every function called in my statemachine; they just need to implement the same interface, so I only need to create the corresponding object and call a common process function defined by the interface.

For every class I only need to add the corresponding name in the DataType enumeration.

The CIData class, where MessageType and DataType are defined, is this :



import java.io.Serializable;

public class CIData implements Serializable {

    public enum DataType {
        Dice, ConfigNetworkProposal, ConfigNetworkData, Message
    }

    public enum MessageType {
        Request, Response
    }

    public static int getDataTypeLength() {
        return DataType.values().length;
    }

    public static int getMessageTypeLength() {
        return MessageType.values().length;
    }

    public static String getDataTypeName(int position) {
        return DataType.values()[position].name();
    }

    public static String getMessageTypeName(int position) {
        return MessageType.values()[position].name();
    }

    public void setResponseMessage(String message) {
        messageType = MessageType.Response;
        this.message = message;
    }

    public Integer diceValue = 0;
    public DataType dataType = null;
    public MessageType messageType = null;
    public String message = null;
    public CIConfigNetwork configNetwork = null;
}

I have implemented it for one case, for the request of a message.

The classes doing the real work, which replace the process-functions, implement the interface CIAction :

package com.wartbar.action;


public interface CIAction {

    public CIData process(CIData data);
}


and this is my example implementation of a CIAction class :

package com.wartbar.action;


public class RequestMessage implements CIAction {

    public CIData process(CIData data) {
        CIData response = new CIData();
        response.messageType = CIData.MessageType.Response;
        response.dataType = CIData.DataType.Message;
        response.message = "Hello there, Requester!";
        return response;
    }
}

I have decided to add a cache for the CIAction objects, as they are stateless and don't need to be created more than once.

Yes, static functions would have done the job as well, but let's just guess this will not be the last change here...

Now the statemachine looks much different :

package com.wartbar.state;

import com.wartbar.action.CIAction;
import java.lang.reflect.Constructor;
import java.util.HashMap;

public class CIStateMachine {

    private static final String actionPackage = "com.wartbar.action.";
    private HashMap<String, CIAction> map = new HashMap<>();

    private String getActionName(CIData data) {
        return data.messageType.name() + data.dataType.name();
    }

    public CIData process(CIData data) throws Exception {
        return map.get(getActionName(data)).process(data);
    }

    private CIAction createActionObject(String fullName) {
        CIAction action = null;

        try {
            Class<?> classType = Class.forName(fullName);
            Constructor<?> ctor = classType.getConstructor();
            action = (CIAction)ctor.newInstance();
        } catch (Exception e) {
            System.out.println("Exception in createActionObject : " + e.getMessage());
        }

        return action;
    }

    private void initMap() {
        for (int m = 0; m < CIData.getMessageTypeLength(); m++) {
            for (int d = 0; d < CIData.getDataTypeLength(); d++) {
                String name = CIData.getMessageTypeName(m) + CIData.getDataTypeName(d);
                String fullName = actionPackage + name;
                CIAction action = createActionObject(fullName);
                map.put(name, action);
            }
        }
    }

    public CIStateMachine() {
        initMap();
    }
}

This is my code for running it, just to see it works :

    public static void testReflection() {
        CIStateMachine stateMachine = new CIStateMachine();

        CIData data = new CIData();
        data.messageType = CIData.MessageType.Request;
        data.dataType = CIData.DataType.Message;
        data.message = "This is a request";

        try {
            CIData response = stateMachine.process(data);
            System.out.println(data.message);
            System.out.println(response.message);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
and this is the output, as expected with exceptions for the unimplemented classes :

Exception in createActionObject : com.wartbar.action.RequestDice
Exception in createActionObject : com.wartbar.action.RequestConfigNetworkProposal
Exception in createActionObject : com.wartbar.action.RequestConfigNetworkData
Exception in createActionObject : com.wartbar.action.ResponseDice
Exception in createActionObject : com.wartbar.action.ResponseConfigNetworkProposal
Exception in createActionObject : com.wartbar.action.ResponseConfigNetworkData
This is a request
Hello there, Requester!

and this is the part which does the reflection :

CIAction action = null;

try {
  Class<?> classType = Class.forName(fullName);
  Constructor<?> ctor = classType.getConstructor();
  action = (CIAction)ctor.newInstance();
} catch (Exception e) {
  System.out.println("Exception in createActionObject : " + e.getMessage());
}

I have added a small explanation of how it works here in my Java blog.

These are the sources which have helped me this week :

Stack Overflow: Creating an instance using the class name and calling constructor

20161003 Wax On Wax Off Or How To Polish This Site


I have been thinking for some time about how to improve access to my blog posts on this site.

My first idea was to implement a toggle mechanism in JavaScript, so that the posts are loaded only as a list of links you would have to click to open a post, but I did not like the idea that you would only see a list of links when you use my RSS feed.

My next idea was that it would not really help you to have a list of links to my posts in chronological bottom-up order, because that feature is already available with the history.

The idea I have implemented is a kind of index page : Topics

It is reachable via the menu page via Main - Topics Of The Blog.

This enables me to give you pages with my blog posts in chronological top down order and you can jump from any post to its topic page.

I did the following :

  1. introduced a kind of tagging for the posts : [[TOPIC]]
  2. tagged all posts
  3. implemented a Gradle task to scan my posts and create the topics menu and the topic pages.
  4. implemented a Gradle task to compile the tags into links to the topic pages.
  5. had to fix some old post titles, which will create new RSS feed data, I am sorry for that!

Additionally I have started to use Gradle as build tool for executing Pandoc to compile my Markdown files to HTML.

Joerg Mueller's documentation of how to execute shell commands was a big help here!

I have used a FileTree to collect all Markdown files; this way it works automatically for new Markdown files, too.

Now I only need to execute 'gradle compileMD' to

I hope you enjoy it.

Please find my complete Gradle build script in my Gradle/Groovy blog.

These are the sources which have helped me this week :

Joerg Mueller: Executing shell commands in Groovy
Gradle Userguide: Working With Files

20160925 Spock Unit Tests For CISystem


The sourcecode of CISystem is growing and I need to start writing unit tests.

Why unit tests? My domain is called "" and I really believe you need unit tests to

  1. test your software
  2. specify your software
  3. document your software

and they reduce the need for documentation; not completely, but it helps.

When I have a refactoring project, I first write the unit tests and then start with the refactoring. Doing it this way I know when my refactoring breaks the code.

So, yes, I could have started with the tests before I started with the implementation. Test-Driven Development really saves time when you know what you want to do.

For CISystem I first wanted to find out how to do it before starting with the tests. I wanted to find out which classes I need; threading and sockets are new concepts for me.

Before this project I was coding mainly in C++ and I had my own idea of writing unit tests.

Now coding in Java and Groovy I want to use something new, something that is really state of the art.

I have decided to write my unit tests with Spock, using Gradle to run them.

You run the tests with

gradle test

These are my first two tests. They are not much more than a "Hello, world!", but I am already fascinated by how easy it is to write such tests :

import spock.lang.*

class NetworkInitializationTest extends spock.lang.Specification {

    def "setup node sets label 1"() {
        given: "a network object with IP address and port set"

        CINetwork network = new CINetwork();

        and: "setupNode()"

        network.setupNode();

        expect: "label is IPAddress:port"

        network.getLabel() == "";
    }

    def "setup node sets label 2"() {
        given: "a network object"

        CINetwork network = new CINetwork();

        expect: "label is IPAddress:port when setting IP address and port and then calling setupNode()"

        network.getLabel() == A + ":" + B;

        where: "variations of IP address and port are used"

        A << ["", "", ""]
        B << ["200", "300", "1000"]
    }
}


and the report result is this :


and this is my gradle build script :

apply plugin: 'groovy'
apply plugin: 'java'

repositories {
    mavenCentral()
}

task fatJar(type: Jar) {

    manifest {
        attributes 'Main-Class': 'com.wartbar.networkinitialization.NetworkInitialization'
    }
    baseName = + '-all'
    from { configurations.compile.collect { it.isDirectory() ? it : zipTree(it) } }
    with jar
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.4.6'
}




  1. I have set up Spock this week for the first time for a project, I had no experience with it before.
  2. From setup to first running test it took about 2 hours.
  3. For CISystem, the second project this week, it took only some minutes.

It is fun using Spock!

These are the sources which have helped me this week :

Spock Primer - Its goals are to teach you enough Spock to write real-world Spock specifications, and to whet your appetite for more.
Petri Kainulainen - Writing Unit Tests With Spock Framework: Creating a Gradle Project

20160907 Benefits Of Java Enumerations Compared To C++


The last days I was using java.lang.Enum a lot and I really must say : I AM SO HAPPY ABOUT IT!

I was coding C++ for years and it happens quite often that you need the string representation of an enumeration value or you have to find the enumeration value corresponding to a string. In C++ you have to maintain both: the enumeration, which you can use in switch-case statements, and the functions converting one into the other.

You can end up in a worst-case-scenario if you decide to do everything with strings, because then your compiler is not able to find your typos.

In Java you declare your enumeration, e.g.

enum Colors {
    red, green, blue  // the value names here are just examples
}
and when you need the string representation you just

String red =;

and when you need the enumeration value you just

String red = "red";
Colors value = Colors.valueOf(red);

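Putting both directions together, a complete round trip looks like this (the enum and its values are example names, not from my project) :

```java
// example enumeration; the value names are made up
enum Colors { red, green, blue }

public class EnumRoundTrip {
    public static void main(String[] args) {
        // enum -> String: name() returns the declared identifier
        String red =;

        // String -> enum: valueOf() finds the constant again;
        // a typo would throw early instead of silently misbehaving
        Colors value = Colors.valueOf(red);

        System.out.println(red + " -> " + value); // prints "red -> red"
    }
}
```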

20160907 How To Pass Only New Information


The CISystem Requirement Specification says, a node shall update all connected nodes continuously about its state and about the states of all connected nodes.

To reduce traffic and time to pass all this information around again and again a protocol to communicate which information has changed is needed.

My idea is to bundle passed information with a version number. This version number is evaluated from the receiving node to decide which information shall be requested and then the server sends only the new information.

In this post I will just use version from now on; it means the version number.

Maybe an Integer number is sufficient to be used as version, it would be increased every time the information changes. I have to figure out if I need one or two Integer objects to represent the version, something like "x.y" where x is incremented when y has reached the maximum value which can be represented.
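To make the idea concrete, this is a sketch of what such a two-part version could look like. It is only an illustration of the "x.y" idea, not CISystem code; the class and method names are made up.

```java
public class Version {
    // hypothetical two-part version "x.y": minor is incremented on
    // every change, major takes over when minor would overflow
    private int major = 0;
    private int minor = 0;

    public void increment() {
        if (minor == Integer.MAX_VALUE) {
            major++;
            minor = 0;
        } else {
            minor++;
        }
    }

    public boolean isNewerThan(Version other) {
        if (major != other.major) {
            return major > other.major;
        }
        return minor > other.minor;
    }

    public String toString() {
        return major + "." + minor;
    }
}
```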

Currently I know that information will be

  1. which nodes are online
  2. the joblists, the states of the jobs and everything related to the jobs

Most of this information has only a few states, like

  1. available
  2. started
  3. stopped
  4. finished

so a simple Integer should be enough as version.

I am unsure if it makes sense to communicate that a node goes down, because every node is connected with every other node, so every node notices via an exception when a node drops the connection.

It may be useful to have this information to be able to see which node was working on a job while it went offline.

Only the owner or producer of the information may increase the version, otherwise it could be possible that different nodes share the same information using different versions.

Sure, when a node goes down it is very unlikely this node can update its states. In this scenario I see the current master node responsible for a joblist as the owner of the joblist information. As mentioned in the CISystem Requirement Specification there will be a priority list which defines which node takes over the responsibility for the joblists of a node going offline.

20160902 Upgrade To Gradle 3


The last days the number of classes of my CISystem application was growing. When it took 37 seconds to compile my application using Gradle 2.13 on my N150 netbook, I decided to test how I could improve the speed: using the Gradle daemon or using a newer version of Gradle.

With Gradle 2.13 and Gradle daemon, building took 38 seconds, 1 second longer, so I tried a new Gradle Version.

The current version is Gradle 3.0 and now building my project takes 17 seconds without Gradle daemon and 12 seconds when the daemon is loaded. Gradle loads the daemon automatically now.

The effect depends on your machine and the complexity of your Java application: on my Mac it always takes 2.5 seconds, because my project is still small and loading and running Gradle itself takes that long.

20160827 How To Decide Who is First When Everything Happens At The Same Time


I am working on my implementation of the network and communication layers of CISystem. From now on I need 2 weeks for getting something reasonable done, the connections are too big to split them and I only have 4 - 8 hours per week for it. This means from now on I post only every 2 weeks, my next post should be before 11th of September.

This week I had to think a lot about how my nodes will be connected with each other.

One of the requirements is, there is no central server node, every node shall be able to take over at any time.

My idea is to really have each node connected with every other node, but I want an efficient structure of connections.

My questions, answers and decisions are :

Which socket connections will be created automatically by the nodes?

At start a node is reading a configuration of other nodes.

There will be

  1. A thread opening connections to the configured nodes.
  2. A server socket accepting the connections of the other nodes.

Both will happen more or less at the same time and it would be hard to decide which connection to create or to allow and which not.

It means, each pair of nodes will be connected in both ways.

This ASCII art visualizes the direction in which the connections are set up; the to-node is always a server socket.

 A ---> B

 A <--- B

Which socket connections does the network really need?

Simple, only one connection between two nodes is needed. I have figured out, I can use the same implementation of communication and protocol for both client and server as both need to share the same information in both directions : they are both client and server at the same time.

This means, when a pair of nodes is connected in both directions, one connection is not needed.

Since setting up the connections happens at the same time it would be quite complicated to suppress one of them.

I have decided to drop one connection after both are set up.

The question is only : which one?

How does a pair of nodes decide which connection to drop?

Have you ever played Shadowrun? Or Cthulhu? Or any other role play or board game?

How do you decide who starts?

Sure, roll a die.

Both nodes will generate a random number and the node with the bigger number will be the lead node and drop one connection.

Every node will continuously check that it is connected with every other node, but one connection will be sufficient, so it is not necessary to inform the other node which connection will be dropped.
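The dice roll can be sketched like this. This is only an illustration of the tie-break, not CISystem code; the class and method names are mine.

```java
import java.util.Random;

public class ConnectionTieBreak {
    // each node draws a random number and sends it to the other node
    public static long rollDice() {
        return new Random().nextLong();
    }

    // true -> this node is the lead node and drops one of the
    // two redundant connections
    public static boolean isLeadNode(long myRoll, long otherRoll) {
        return myRoll > otherRoll;
    }
}
```

If both rolls happen to be equal, neither node would lead, so a real implementation would have to roll again; with 64-bit numbers this is very unlikely, but possible.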

20160826 Five Things You Should Try


It's summertime and while waiting for a less hot evening I have decided to share some of the things I like with you ...

Everybody knows Harry Potter and Donkey Kong, so I try to come up with some maybe not-so-known or older stuff, because you would be bored if I only suggest things you already know. Maybe this could be an inspiration for you to go to your cellar or attic to look for some old boxes ...

Book Series

  1. The Harry Dresden novels written by Jim Butcher, about a mage working in Chicago.
  2. The Rivers Of London novels written by Ben Aaronovitch, about a policeman/mage in London.
  3. The Altered Carbon novels written by Richard Morgan, about what could happen when we would be able to digitally store our minds.
  4. The Joe Pitt novels by Charlie Huston, to me the best vampire series in two decades, starting with "Already Dead".
  5. The Voodoo series by Nick Stone, great!

Classic Rock Albums

  1. Kiss - Rock And Roll Over
  2. Saxon - Power And The Glory
  3. Black Sabbath - Headless Cross
  4. King Diamond - The Eye Of The Witch
  5. Death - Spiritual Healing

Stephen King Books

  1. Salem's Lot
  2. The Dead Zone
  3. Stark - The Dark Half
  4. From A Buick 8
  5. Colorado Kid

Things To Do In London

  1. Walk the south Thames path from the London Eye to Tower Bridge and on the north side from the Tower back to Big Ben.
  2. Walk the canals from King's Cross to Camden Lock and have coffee on top of the Starbucks building.
  3. Watch a play in Shakespeare's Globe Theatre, e.g. A Midsummer Night's Dream, as a groundling; it is really cheap and the best place to watch a play!
  4. Attend "The Ripper Haunts" walk from London Walks
  5. Look what is new in the turbine hall at Tate Modern Gallery.

Walks from Munich East

  1. Walk via Rosenheimer Platz to Muellersches Volksbad and from there along the river Isar into the English Garden to the Chinese Tower.
  2. Walk via Wiener Platz into the English Garden to the Chinese Tower.
  3. When Auer Dult is happening, walk to Mariahilfplatz.
  4. In December do Christmas market hopping, go to Weissenburger Platz, to Isar Tor, Marienplatz, Residenz and then to the medieval market at Wittelsbacher Platz.
  5. Walk to Rosenheimer Platz, via Muellersches Volksbad to Isar Tor and then Kaufinger Strasse to main station.

Classic PC Games

  1. Monkey Island 1+2+3
  2. Maniac Mansion + Day Of The Tentacle
  3. Indiana Jones 3+4
  4. Space Quest 3+4
  5. Warcraft 2

Classic C64 Games

  1. Giana Sisters
  2. Ghosts'n Goblins
  3. Bubble Bobble
  4. Marble Madness
  5. Defender Of The Crown

Classic Movies

  1. Bill & Ted's Excellent Adventure
  2. The Man in the Iron Mask with Richard Chamberlain (1977)
  3. De Zevensprong (Das Geheimnis des siebten Weges)
  4. Dogma with Ben Affleck and Matt Damon
  5. Jack the Ripper with Michael Caine

Cook Sausage Dishes

  1. Broad sausage in pan, red cabbage, potatoes
  2. Mettenden cut in small pieces in pan, broad beans, potatoes
  3. Nuernberger in pan, Sauerkraut, potatoes
  4. Boiled Wiener, potato salad
  5. Grilled Berner, tomato-cucumber salad mixed with grated sheep cheese

Learn Something New

  1. Learn a new programming language
  2. Learn a new framework
  3. Learn a new tool
  4. Try a new operating system
  5. Read a new blog

20160820 A Spike For Implementing The Threading Of CISystem



The next step in the implementation of CISystem will be the data structure storing the information about all known nodes and how to fill it.

My idea is that there will be a thread running a server socket and this thread starts a new thread for every incoming client connection. These client threads will be informed about how the network of nodes looks like and store this information in a map. This means several threads write into the map at the same time.

Another thread will check the map for new nodes to where a new connection has to be created. This thread constantly reads the map and creates new threads which open client connections to the server sockets of the other nodes. This shall happen only once per second or less often.

Finally I expect I have to react to the end of threads, for example when a socket connection is closed because a node goes offline, a scenario which is mentioned in the Requirement Specification Of CISystem.

Together this means I have to implement a lot of classes writing and reading a map at the same time. It will be a multi threaded application where nearly all threads access the same data structure.

FYI: Except for the simple unsynchronized threads in my example for Sending Objects Via Sockets I did not implement a multi threaded application before.

To prepare myself for this task I did the following :

  1. I have searched Google for hints and examples
  2. I have found out that java.util.Collections provides synchronized maps
  3. I have learned at java2s how Thread.join() works
  4. I have asked a colleague about the connection between the synchronized statement and a synchronized map

After speaking to my colleague I have a better understanding of the java documentation of synchronizedMap :

public static <K,V> Map<K,V> synchronizedMap(Map<K,V> m)

Returns a synchronized (thread-safe) map backed by the specified map.
In order to guarantee serial access, it is critical that all access
to the backing map is accomplished through the returned map.

It is imperative that the user manually synchronize on the returned map
when iterating over any of its collection views:

  Map m = Collections.synchronizedMap(new HashMap());
  Set s = m.keySet();  // Needn't be in synchronized block
  synchronized (m) {  // Synchronizing on m, not s!
      Iterator i = s.iterator(); // Must be in synchronized block
      while (i.hasNext())
  }

Failure to follow this advice may result in non-deterministic behavior.

It means calling any single function of the synchronized map is thread safe, but when you need to iterate over the complete map without being interrupted, you need to do this in a synchronized block. You could use any other object for this synchronization, but it makes sense to synchronize the access to a data structure using the reference to this data structure.

Summarized I have learned to :

  1. create a synchronized map with Collections.synchronizedMap(...)
  2. synchronize code blocks using the same monitor variable (can be any) using the synchronized(monitor variable reference) statement
  3. let one thread wait for other threads to terminate by calling Thread.join() on the other threads
  4. let a thread sleep for at least a second is done with Thread.sleep(1000) and should not be done in a synchronized code block!
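The first three points can be combined into a small sketch. The class and key names are mine and not from CISystem; it only illustrates the pattern.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SynchronizedMapSpike {

    // points 1 and 3: several writer threads fill one synchronized
    // map, the caller waits for all of them with join()
    static Map<String, String> collectNodes(int writerCount) throws InterruptedException {
        Map<String, String> nodes = Collections.synchronizedMap(new HashMap<>());
        Thread[] writers = new Thread[writerCount];
        for (int i = 0; i < writers.length; i++) {
            final int id = i;
            writers[i] = new Thread(() -> nodes.put("node" + id, "online"));
            writers[i].start();
        }
        for (Thread t : writers) {
            t.join(); // wait until every writer has terminated
        }
        return nodes;
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, String> nodes = collectNodes(5);
        // point 2: iterate in a synchronized block, using the map
        // itself as the monitor variable
        synchronized (nodes) {
            for (Map.Entry<String, String> e : nodes.entrySet()) {
                System.out.println(e.getKey() + " is " + e.getValue());
            }
        }
    }
}
```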

Now that I have learned a lot about multi threading and the corresponding functionality, I have decided to start with a spike.

A spike is a test and its intention is to show that

  1. I have understood the problem
  2. I have understood all tools and how to use them
  3. I am able to solve a reasonably complex partial problem with my tools

When my spike works I should be able to solve the real tasks for CISystem the same way.

Please find my description of the spike and its sourcecode in my Java blog :

Example For Multi Threaded Data Structure Access

20160820 Update Of The Atom Editor Blog


After working nearly two months with the Atom editor I have updated my HowTo and I have written a small review.

20160809 Why I Have Spent My Time On Learning How To Implement my RSS Feed


Hi there!

I have spent some time to figure out how to implement my RSS feed.

You could ask the question :


Didn't you want to write about maintainable software, network programming, maybe quality and testing?

Why did you work on that RSS thing?!?

Yes. Good question.

RSS is an XML-based format to provide information about available news. Usually web pages and blogs have RSS feeds which say "there is something new" when there is a new article or blog post available, and you can receive this information with an RSS client.

Some time ago I have learned how useful the Jenkins RSS feed can be. It tells you which jobs have run and which have failed.

You could say :

Ok, but I can look it up anytime I want, so where is the benefit?


But then you have to spend your time polling this information by going to your Jenkins website and looking for your jobs. I think it's much better to be notified about failed Jenkins jobs when they break than to look at your Jenkins every five minutes.

But this does not answer why I wanted to learn how to implement RSS feeds, it just tells you why it makes sense that Jenkins has RSS feeds.

Yes, sure, I want to inform you in this way that there is a new blog post from me available on my page, so you don't have to go to my page every day to find out what's new.

The reason why I wanted to find out how it works is, I have a lot of tests running at work where I need to know their states. I always look for automation options to reduce my workload. Often the state of a Jenkins job is not enough information, maybe you want to know which tests have failed or which device under test has problems and maybe you don't want to encode all information in your Jenkins job.

My idea is : I am already able to get all information automatically from my test results, so when I am able to encode this in RSS feeds, I can register my RSS client to poll it for me and just notify me about what is going on or what is going wrong.

Some years ago I have implemented a tool in Java which produces a web page with this content. Now I would just generate the RSS feed and the information about what goes on and the information would reach my colleagues automatically.

Being able to implement RSS feeds and to understand how to use them is one step in the direction of maintainable log files, because it simply does not help to evaluate them automatically when nobody has the time to look into the results, especially when they are only interested in the problems. Instead of buying or implementing tools which take care of the log-file-jungle, I think it is better to just fill the gaps with small scripts like the one I have implemented for my blog and let my automation do the work.

My experience with RSS over the next months will tell me if this idea is working or if RSS is the wrong format to transport this kind of information. For me it is an indication that it is used for Jenkins, so it could really be a low-hanging fruit to use it.

20160809 Generating RSS Feeds With Gradle And Groovy For This Blog


This blog has an RSS feed from now on, you can register this URL in your client :

This page was a big help for me to get started in finding out what to do : Eigenen RSS-Feed erstellen (German for "Create your own RSS feed")

And this has helped me to find out what was missing :

W3C Feed Validation Service

My RSS feed looks like this :

<?xml version="1.0" encoding="ISO-8859-1" ?>
<rss version="2.0" xmlns:atom="">
<channel>
<atom:link href="" rel="self" type="application/rss+xml" />
<title> RSS-Feed</title>
<description>Home of Andre's Blog</description>
<copyright>2016 by</copyright>
<item>
<title>Generating RSS Feeds With Gradle And Groovy</title>
<description>A new blog post with the subject 'Generating RSS Feeds With Gradle And Groovy'</description>
<pubDate>Tue, 9 Aug 2016 00:00:00 +0200</pubDate>
</item>
<item>
<title>Why I Write Shell Scripts With Gradle And Groovy</title>
<description>A new blog post with the subject 'Why I Write Shell Scripts With Gradle And Groovy'</description>
<pubDate>Thu, 4 Aug 2016 00:00:00 +0200</pubDate>
</item>
<item>
<title>A Blog About Using Gradle</title>
<description>A new blog post with the subject 'A Blog About Using Gradle'</description>
<pubDate>Sun, 31 Jul 2016 00:00:00 +0200</pubDate>
</item>
<item>
<title>Technical Realization Of This Blog</title>
<description>A new blog post with the subject 'Technical Realization Of This Blog'</description>
<pubDate>Tue, 16 Feb 2016 00:00:00 +0100</pubDate>
</item>
</channel>
</rss>

I am not 100% sure if every client takes the URL of a blog post from <link> or <guid>, but as long as <guid> is unique this should be ok.
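By the way, one detail the W3C validator is picky about is the pubDate format, RFC 822 dates like the ones above. Just as an illustration in Java (my feed itself is generated with Groovy), such a date can be produced with the built-in RFC 1123 formatter :

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class PubDate {
    // formats a date the way RSS 2.0 expects it,
    // e.g. "Tue, 9 Aug 2016 00:00:00 +0200"
    public static String format(ZonedDateTime date) {
        return DateTimeFormatter.RFC_1123_DATE_TIME.format(date);
    }

    public static void main(String[] args) {
        ZonedDateTime post = ZonedDateTime.of(2016, 8, 9, 0, 0, 0, 0, ZoneOffset.ofHours(2));
        System.out.println(format(post)); // prints "Tue, 9 Aug 2016 00:00:00 +0200"
    }
}
```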

I generate this RSS feed with a Gradle task calling some Groovy code.

If you are interested in the code, please find it here in my Gradle blog :

Andre's Technical Blog About Using Gradle

20160804 Why I Write Shell Scripts With Gradle And Groovy


This week I am already writing my blog post on Thursday, because I have too much to do at the weekend. I was thinking about adapting my rule from "release on Sundays" to "release till Sunday evening".

Usually when it comes to writing a script to call some programs, everybody uses something like a Bash or Batch script. When a script has to evaluate parameters, I always have the feeling of this-should-look-much-better. When it comes to parsing files I usually end up in frustration, because I will never see any beauty in the code I have to write and I have to write it anyway to solve my problem.

Several years ago I was using Perl for some months, but it was not really satisfying, because the more different functions you use, the more return variables with specific names populate your script. Perl was better than Batch, but still cryptic.

Then I had started looking at Python, but at the same time I had to work on a new project with Java and decided to write my tools from then on in Java. It was simply easier to go from C++ to Java than to learn a completely new language like Python, and Java came with all the classes I needed.

Now I had a language I liked, where executing software was feasible and parsing files was easy, but where should I put the binaries? Add them to Git? Have them in Nexus?

At the end of the project I heard about Groovy for the first time and was immediately sad I did not hear about it earlier. Groovy was exactly what I was looking for, improved scriptable Java :)

A few months later I had a crash course in Gradle and there I found the last missing part of my puzzle, a tool combined with a script language,

  1. which provides command line parameters through some kind of singleton object (== project)
  2. which lists available main functions (== task) without the need to implement this help
  3. which provides short documentation for the functions (== description) without the need to implement this help
  4. which comes with the full support of Java libraries
  5. which comes with even more support of Groovy helper functionality

To show you what I mean and to give you some answers about what I meant with my post from last week, I give you a small example, let's implement a grep for scanning files in Gradle :

task grep(group: "shell tools") {

    description = "greps all lines of -Pfile= containing the value provided with -Ppattern"

    doLast {
        String[] lines = file(project.file) as String[]

        for (line in lines) {
            if (line.contains(project.pattern)) {
                println line
            }
        }
    }
}

This script shows several aspects I have mentioned before in my blog post from last week :

The basics about Gradle are all easy

  1. The task declaration is descriptive and simple
  2. The task description is assigned to the task variable description
  3. If you only use standard classes, you do not need to import much

It is not only for writing build systems, you can write any kind of script with it.

The grep example has nothing to do with building software.

Gradle scripts are written in Groovy, so if you know Groovy or Java, you already can write Gradle scripts

Just as example for writing Java instead of Groovy :

  println line

is the same as

  System.out.println(line);
it is just something they call syntactic sugar in Groovy.

To make Gradle find your script without extra help, you have to name it build.gradle.

My script is in blog/grepExample/build.gradle, my blog is in blog/

Every section of my blog starts with ### 20, the markdown notation for <h3> and the first digits of 2016 and all following years of this blog.

To find out which tasks my script provides I write

amos@Mini:~/gitrepo/blog$ cd grepExample/
amos@Mini:~/gitrepo/blog/grepExample$ gradle tasks

All tasks runnable from root project

Build Setup tasks
init - Initializes a new Gradle build. [incubating]
wrapper - Generates Gradle wrapper files. [incubating]

Help tasks
buildEnvironment - Displays all buildscript dependencies declared in root project 'grepExample'.
components - Displays the components produced by root project 'grepExample'. [incubating]
dependencies - Displays all dependencies declared in root project 'grepExample'.
dependencyInsight - Displays the insight into a specific dependency in root project 'grepExample'.
help - Displays a help message.
model - Displays the configuration model of root project 'grepExample'. [incubating]
projects - Displays the sub-projects of root project 'grepExample'.
properties - Displays the properties of root project 'grepExample'.
tasks - Displays the tasks runnable from root project 'grepExample'.

shell tools tasks
grep - greps all lines of -Pfile= containing the value provided with -Ppattern

To see all tasks and more detail, run gradle tasks --all

To see more detail about a task, run gradle help --task <task>


Total time: 16.517 secs

Don't get worried about the Total time: 16.517 secs; remember, I run this on a Samsung N150 with an old Atom processor!

gradle tasks tells us how to use our grep task :

shell tools tasks
grep - greps all lines of -Pfile= containing the value provided with -Ppattern

I can have more tasks in different groups in my script and Gradle will list the tasks in their groups.

Now let's test the grep task with the pattern ## 20 on my file :

amos@Mini:~/gitrepo/blog/grepExample$ gradle grep -Pfile="../" -Ppattern="## 20"
### 20160897 Why I Write Shell Scripts With Gradle And Groovy
### 20160731 A Blog About Using Gradle
### 20160723 Sometimes A Sprint Fails
### 20160723 How To Install CentOS In VMWare
### 20160723 Jenkins Entries Are Now In Jenkins Blog
### 20160717 Connecting the nodes in CISystem
### 20160717 A Blog About Jenkins
### 20160717 Now Working With Git On My Synology
### 20160710 Reading Json Configurations In CISystem
### 20160703 CISystem
### 20160703 CISystem Requirement Specification
### 20160626 Sending Objects Via Sockets In Java
### 20160626 A Blog About Coding In Java
### 20160626 A Blog About The Editor Atom
### 20160619 I will post new blog entries every Sunday from now on
### 20160619 Restructuring This Blog
### 20160603 A Blog About Setting Up Xubuntu
### 20160424 A Blog About Using Git
### 20160224 How To Give Code-Blocks A Style in HTML
### 20160221 Elephant Carpaccio Or How Many Things Can I Get Done On Sunday
### 20160216 How To Debug Gradle Scripts
### 20160216 How To Upload Third-Party-Artifacts to Nexus Using Gradle
### 20160216 How To Download Artifacts Nexus Using Gradle
### 20160216 How To Enable Artifact Upload In Nexus Via Web UI
### 20160216 How To Start Nexus For Testing
### 20160216 How To Install Nexus
### 20160216 Technical Realization Of This Blog


Total time: 13.065 secs

Works nice, right?

I plan to use parts of this script to generate an rss.xml file for my website, soon!

20160731 A Blog About Using Gradle


I have been using Gradle for about a year now. I like Gradle for being a solution for a bunch of problems, or better, for providing solutions for a lot of common problems.

In this blog I try to explain for total newbies what Gradle is, what I like about Gradle and where I am working on to improve my skills using Gradle.

Maybe as a professional Gradle script writer you would describe it a little bit differently, and maybe the way I use Gradle is not much more than a clever frontend for my Groovy scripts, but I have found out that it is much better to try to inspire somebody for a new language or tool by showing that you only need to know a few things to achieve big things with it.

This is what I like about Gradle :

  1. The basics about Gradle are all easy.
  2. It is not only for writing build systems, you can write any kind of script with it.
  3. Gradle scripts are written in Groovy, so if you know Groovy or Java, you already can write Gradle scripts.
  4. In Gradle you can define tasks, something like main functions, which can be started from the shell.
  5. A task can have a small documentation and it can be assigned to a group of tasks.
  6. You can list all grouped tasks with their documentation, so it is easy to find out what a script can do for you.
  7. A task can depend on other tasks but you can as well exclude tasks from execution.
  8. Gradle comes with lots of helper functionality on top of Groovy, e.g. Copy tasks, Zip support and Maven support for accessing Nexus.
  9. If something is missing, e.g. you need to parse a log file or generate a config file you just do it writing Groovy code.

Sharing my knowledge about tools and languages is not the main aspect of my blog. What I try to do is to show you on which topics I currently work, what I have achieved, where I am struggling and how I try to solve my problems.

Though I have been using Gradle for a lot of things for some time now, I did not find the time to understand how configurations in Gradle work. This is the first time I am writing a script for building a Java application from scratch, and I want to understand how it works in detail and not only copy solutions from the internet.

So what I want to show you in this Gradle blog is, with the help of some examples, how easy it is to use Gradle to solve common problems, and as well to document my steps in understanding how to write build scripts.

Please find my Gradle blog here :

Andre's Technical Blog About Using Gradle

20160723 Sometimes A Sprint Fails


This week my plan was to combine the code examples for CISystem to one working example :

  1. read the network configuration from a json config file
  2. determine the local IP address
  3. start some nodes on the same machine with their server sockets on different ports
  4. tell every node only how to reach 2 of the other nodes and let them find out the rest by communicating over the network with the other nodes

If you work in a SCRUM team, this may be no news to you : Sometimes a sprint fails.

This does not mean you did not give everything to reach the goal, and it does not mean you did not achieve a lot of it; it is just not finished.

One reason for failed sprints can be you did not see the whole effort and have planned too much for it.

This happened to me this week. My plan was to adapt existing code to fit together, which was no problem. To make it more demanding I wanted to have independent Java programs communicate via sockets to exchange the network configuration of the other nodes. I think I would have managed this, I have already found out how to set up synchronized lists and maps to handle the information at a node which is asking and responding on different threads with other nodes at the same time.

What happened to me was, I totally underestimated the fact, that I have no experience with building software with Gradle with dependencies to other jar files. I have my Gradle script for building my own .jar files and it was easy to find out how to implement the dependency to the groovy-all-2.4.7.jar file, which I have only referenced in a classpath in the shell when calling javac and java till now.

It compiles really nice with :

apply plugin: 'java'

repositories {
    mavenCentral()
}

jar {
    manifest {
        attributes 'Main-Class': 'com.wartbar.cisystem.CISystem'
    }
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.4.7'
}

What I did not achieve yet is to make it run.

java -cp .:c:\tools\groovy-2.4.7\embeddable\groovy-all-2.4.7.jar -jar build\libs\NetworkInitialization.jar

Exception in thread "main" java.lang.NoClassDefFoundError: groovy/lang/GroovyClassLoader
        at com.wartbar.cisystem.CISystem.readConfigNetwork(
        at com.wartbar.cisystem.CISystem.main(
Caused by: java.lang.ClassNotFoundException: groovy.lang.GroovyClassLoader
        at java.net.URLClassLoader.findClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        ... 3 more

It means the class groovy.lang.GroovyClassLoader was available at compile time, but not at run time.

I was already googling for some time on this problem, but till now I only have these ideas :

  1. I can run my code through a Gradle JavaExec task, I guess that would work, but I don't know how this would help me.
  2. I can compile my code as before with -cp, I already know that works, but I want to do it with Gradle.
  3. I can ask some colleagues and friends and go on googling.

This week was no failure for me, it has just shown me that I had planned too much and did not split my tasks as I should have (read my blog post about Elephant Carpaccio Or How Many Things Can I Get Done On Sunday for more details).

Some ways to split my task could be :

  1. Simulate the Groovy/Json part by using hard coded data and just make everything work without adding new functionality.
  2. Make the network communication between the different nodes work.
  3. Make execution based on a Gradle compiled jar file work.

The second task can be split again into :

The third task has an external dependency, it depends on when I find out how to do it. I should better start soon with contacting some friends :)

I just want to say : do not let your sprint depend entirely on external dependencies, always have some other story to work on while waiting for the missing solution to show up.

My short summary how to ensure you have something to present to your stakeholders at the end of a sprint :

  1. split your stories and split your tasks
  2. reduce external dependencies

Personally I think, the best is to kick external dependencies out of the scope and use some fake instead. You can integrate the real code when it works, but your sprint should not depend on code where you don't know how to write it or where to get it.

Yes, 2 points are enough here, 3 points don't make it better, and if you manage to implement these 2 points in your SCRUM, you have achieved a lot!

If you think this is too simple, that you cannot just split stories and tasks and work around external dependencies, maybe this article from Eike could be what you were looking for :

Kikentai Management : List hell: 3 reasons list are evil

20160723 How To Install CentOS In VMWare


Last week Mat has written a very nice step-by-step guide for how to install a minimal CentOS :

Linuxpinguin : CentOS 7 minimal install

As target he uses VMWare on a Mac.

Since I have spent some time on listing how to install which software on a Xubuntu I have really enjoyed reading his article about CentOS.

20160723 Jenkins Entries Are Now In Jenkins Blog


I have moved my early blog entries about Jenkins to my Jenkins blog. As mentioned some weeks before, I don't want you to jump between different pages for the same topic.

I have inserted the entries in the Jenkins blog where they make sense, where you would expect them when reading a book about how to use Jenkins. The dates in the section names tell you when they were written.

You still find all of them in the order they were written in my blog History and now all entries in my main blog have a date.

20160717 Connecting the nodes in CISystem


My plan is to have both options for connecting the nodes in CISystem :

  1. configure pairs of IP address + port number
  2. just configure the port numbers and scan a range of IP addresses

Yes, scanning ports is not nice, but the idea of CISystem is to create a network where the nodes can be replaced on-the-fly and that means you need to scan for nodes anyway.

A node shall only need to know one other node to find the network, all other information will be communicated via the network to all other nodes.

Due to DHCP it would be quite an effort to fix the network configuration every time the router changes the addresses.

This all means, a node should find its own local IP address and tell it to the other nodes of the network.

I have implemented an example to show how this works in Java :

Find Your Local IP Address

This is the complete example code including Gradle build script : zipped example of Find Your Local IP Address
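For reference, here is a sketch of one way to determine the local address : walk the network interfaces and take the first non-loopback IPv4 address (this is my sketch and not necessarily identical to the linked example):

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

public class LocalIp {

    // Returns the first IPv4 address that is neither loopback nor
    // link-local; falls back to 127.0.0.1 if nothing else is found.
    public static String findLocalIp() {
        try {
            for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                for (InetAddress addr : Collections.list(nif.getInetAddresses())) {
                    if (!addr.isLoopbackAddress() && !addr.isLinkLocalAddress()
                            && addr.getAddress().length == 4) {
                        return addr.getHostAddress();
                    }
                }
            }
        } catch (SocketException e) {
            // no interfaces available, fall through to the loopback fallback
        }
        return "127.0.0.1";
    }

    public static void main(String[] args) {
        System.out.println(findLocalIp());
    }
}
```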

20160717 A Blog About Jenkins


The last weeks, when I was talking to friends about Jenkins and the new Jenkins 2 release, there were questions like

  1. What is Jenkins?
  2. What does Jenkins do?
  3. How do you tell Jenkins to execute Software for you?

Maybe my summaries about how to configure job execution with Jenkins will be interesting for you.

If you have any specific questions, please send me a mail.

If you have an idea how to make it better than described here, please, send me a mail.

Today I was only able to write about the simple basics, but I promise I will tell you more about which plug-ins to use and how to use them, soon.

Please find my Jenkins blog here :

Andre's Technical Blog About Using Jenkins

20160717 Now Working With Git On My Synology


The number of files of this blog is growing and I have decided to put them under git control.

To have a backup of this git repository I have set up one on my Synology station.

I have documented the commands here :

Andre's Technical Blog About Using Git

20160710 Reading Json Configurations In CISystem


To connect the nodes in CISystem I need to tell at least every node where to find one other node. When every node knows the address of one other node and there is a way from every node to every other node we have our network. The nodes can populate the network with information while connecting, at the end every node will know all other nodes.

Maybe it makes sense to start a node with information about more than one node. For example if the other node is not up and running you need a second or third connection to the network.

You also need to specify a port. If you choose one port for all of your nodes maybe on some PCs this port is already in use. Maybe there is a reason why you want to run more than one node on one PC. I want to be able to test my network on only one PC with several nodes, so the port must be configurable.

The next question is, what should this configuration look like? Command line parameters, XML, an .ini file? Since I am keen on making new experiences and had not worked with JSON yet, I have decided to try using a JSON configuration for it.
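Such a configuration, with a port for the node itself and two fallback nodes to contact, might look like this. The field names here are my own invention, not necessarily what CISystem ended up using:

```json
{
  "ownPort": 9001,
  "knownNodes": [
    { "ip": "192.168.1.10", "port": 9002 },
    { "ip": "192.168.1.11", "port": 9003 }
  ]
}
```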

The only problem is, Java does not come with a JSON parser, you need to choose some third party jar like GSON or implement your own parser...

Or you use Groovy's JsonSlurper. It's really nice, elements are returned as maps or lists of maps and it is really easy to get the information out of the Json file.

The only problem is how to execute Groovy code from Java and retrieve the information. Well, that is another thing I wanted to get my hands on. You only need the GroovyClassLoader and the GroovyObject class, both come with Groovy. Import them, have groovy-all-2.4.6.jar (or later) in your classpath, and then you can parse the code of a Groovy script and execute it via GroovyObject.invokeMethod(). GroovyObject is the base class of all Groovy classes/scripts, so this is easy.

What I did was

  1. define my Json configuration
  2. implement a Java class representing my Json configuration
  3. implement a Groovy script to read my Json configuration and store it in my Java object
  4. implement a Java function to call my Groovy script
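A rough sketch of step 4, calling the Groovy script from Java, looks like this. It assumes groovy-all is on the classpath; the class name, script path and method name here are just placeholders, not the actual names from my project:

```java
import groovy.lang.GroovyClassLoader;
import groovy.lang.GroovyObject;

import java.io.File;

public class GroovyBridge {

    // Parses a Groovy script from disk and invokes one of its methods.
    // Every method defined in the script becomes a method of the parsed
    // script class, so invokeMethod() finds it by name.
    public static Object callScript(String scriptPath, String method, Object arg) throws Exception {
        try (GroovyClassLoader loader = new GroovyClassLoader()) {
            Class<?> scriptClass = loader.parseClass(new File(scriptPath));
            GroovyObject script = (GroovyObject) scriptClass.getDeclaredConstructor().newInstance();
            return script.invokeMethod(method, arg);
        }
    }
}
```

On the Groovy side a script containing e.g. def readConfig(String path) { new groovy.json.JsonSlurper().parse(new File(path)) } would then be invoked as callScript("ReadConfig.groovy", "readConfig", jsonPath).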

This has the benefit that when Java supports JSON in the future, I can kick out my Groovy call/code and replace it with Java code, and I don't need third-party jars in the meantime.

I explain my implementation in detail in my Java/Groovy coding blog :

Calling Groovy From Java To Read Json

This is the complete example code including my build.gradle script : zipped example of Calling Groovy From Java To Read Json

20160703 CISystem


This week I start writing about my development of CISystem, a continuous integration execution system based on a peer-to-peer network of nodes providing resources required for executing jobs.

I am now working for some years with Jenkins and I really love it. It is rock solid, easy to understand, very flexible and there are a lot of plug-ins which usually fix all your problems when Jenkins cannot do this alone.

But ... there is one point I simply miss in Jenkins and that is the support of job-lists. Yes, you can configure dependencies between jobs and I guess, you can configure your jobs using the gradle plug-in and yes Jenkins 2.0 comes with a lot of new features where maybe even the gradle plug-in is not needed anymore.

But ... for me it feels like I have to make the decision that Jenkins is my all-in-one-swiss-army-knife-for-everything and simply hope that Jenkins really can do how I want to work with what I call job-lists. I would need to make this decision, because it takes time to learn how it works and if it does not work I still don't have it.

Now, working with Jenkins 1.x, when I need to use resources which are only available on specific nodes, I often come to the point where it looks like you need exactly one job instance per resource to be able to release a resource ASAP. The problem is, when you issue 300 job instances and then decide to stop the execution, how do you stop all the running jobs and all the jobs waiting in the queue? Maybe there is a plug-in for stopping jobs or removing them from the queue, or you can solve it with a Groovy script, but I still have the feeling that there is something missing in Jenkins, something related to job-lists.

Jenkins provides a plug-in API, so you can write your own plug-ins to control parts of Jenkins, but what do you do when the concept of how Jenkins works is not what you want? You would end up with workarounds, because it is not a missing feature you are working on, it is a concept you are working against.

I want to have an execution system which not only works for a list of jobs; I want a system which handles lists of jobs, but where a list is not just a super-job which needs to be finished before the next list can be started.

I always wanted to implement multi-threaded software, I always wanted to write software communicating via the network.

So I have decided to implement my own execution network.

I call it CISystem, which stands for something like Continuously Integrating System.

Yes, I know, CI is old school, now it is more hip to do continuous delivery, but CISystem does not mean Continuous Integration as a way to test and deploy your software. It means the nodes are continuously integrating themselves into the network and when they are taken away from the network, they can reinsert themselves later back into the network and execute jobs of an already running job-list. I say they reinsert themselves, because I want the nodes in the network to inform each other about how the network looks like, for example which nodes are online and which resources are available at each node. This information must be continuously updated, some of it changes with every new job being executed, e.g. a job uses a resource, so it is not available for other jobs.

Regarding the make-or-buy decision, does it make sense to reinvent the wheel once again? Yes, for me, because I can gather experience in implementing a multi-threaded distributed software communicating over the network.

Does this mean I don't like Jenkins anymore or will not continue blogging about using it? No. As mentioned, I really like Jenkins, it is really good at what it does. I have already planned to connect my CISystem with Jenkins, for example I don't plan to implement something like a cron-job feature, I will use Jenkins for that.

20160703 CISystem Requirement Specification


Please find my requirement specification for CISystem here :

Andre's Requirement Specification Of CISystem

20160626 Sending Objects Via Sockets In Java


For my software project I have to implement a communication between different PCs.

I have decided :

  1. I want to communicate via sockets.
  2. I want to communicate using objects.

I like socket communication, because synchronization via files depends on the implementation of the filesystem used for the shared drive, and I guess it is faster using sockets.

I like objects, because they help me pass packets of information, which means I don't need to implement my own string-based language to do so. Communicating only via strings would increase the complexity of the program and decrease its performance. It would also mean reinventing what objects already are : containers for information.

FYI: If you suffer from maintaining software which has its own string-based language to pass information instead of using objects you may be interested in these blog articles of my friend Eike :

Kikentai Management : Dead horses: 5 reasons to ride
Kikentai Management : Not invented here

When I started implementing my socket connection everything was fine. I had started with sending strings to check the communication itself and then I switched over to sending objects.

To cut a long story short : It did not work. Only my first object was passed, the others were not sent. After playing around for some time I even had the case that I received the same object again and again.

I have found out that I was dealing with these problems :

  1. You have to flush your ObjectOutputStream after sending an object.
  2. Objects which have been sent will not be sent again, even if their content changes.

I have not spent time yet on finding out when a stream needs to be flushed, but I will as soon as I notice a performance problem here. Till then I just accept that I have to flush the stream.

For handling the objects you have 2 options :

  1. You can create a new object every time you send an object.
  2. You can tell Java to write the object unshared to the stream, which means the object is written in full every time, even when sending it again would otherwise add no new content.

As long as I always update all members of my object before sending it, I feel it's a good idea to reuse the same object.
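Both findings can be demonstrated without real sockets by serializing into a byte buffer. This is a sketch; the ByteArrayOutputStream stands in for socket.getOutputStream():

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ObjectReuseDemo {

    // A small serializable message that is reused and mutated between sends.
    static class Message implements Serializable {
        int value;
        Message(int value) { this.value = value; }
    }

    // Sends the same instance twice with writeUnshared(); with plain
    // writeObject() the second write would only emit a back-reference
    // and the receiver would see the old content again.
    public static int[] sendReused() throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        Message msg = new Message(1);
        out.writeUnshared(msg);
        out.flush(); // without the flush the object can get stuck in the buffer
        msg.value = 2;
        out.writeUnshared(msg); // forces a full re-serialization of the changed object
        out.flush();

        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()));
        Message first = (Message) in.readObject();
        Message second = (Message) in.readObject();
        return new int[]{first.value, second.value};
    }

    public static void main(String[] args) throws Exception {
        int[] values = sendReused();
        System.out.println(values[0] + " " + values[1]); // prints "1 2"
    }
}
```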

Here I show you some code-snippets of how I did it :

Sending Objects Via Sockets

This is the complete example code including my build.gradle script : zipped example of Sending Objects Via Sockets

20160626 A Blog About Coding In Java


There are always tricky problems where you had to spend more time than you had expected.

Here I share some of my experiences with you :

Andre's Technical Blog About Coding In Java/Groovy

20160626 A Blog About The Editor Atom


At the beginning of 2015 I saw Atom for the first time. A colleague had started using it and though he did not use any other editor at work from that time on, he was struggling with some keyboard settings, and this for days. I think that was the point where it became interesting for me : Why is he using a tool which does not work for him? My colleague uses a Mac Book at home, don't these guys reject everything which is not user-friendly?

The next time I met somebody using Atom was some months ago. When asked why he was using Atom he told me : It has the same keyboard shortcuts on Mac OS as on Windows.

Aha! It must have something to do with usability why the Mac guys use Atom. Maybe that is the reason why I started using Atom myself when I switched from the Mac Book to my net book with Xubuntu for writing my blog.

Writing a blog I learned very quickly : you need an editor you really like using, where you know the shortcuts, which is fast and which is available on all of your machines.

Great! Andre? Why do you use Atom? Why not VI or VIM?

Yes, perfectly right, but I don't feel comfortable with using VI.

These are my reasons why I have chosen Atom as my new default Editor :

  1. I want to use the same editor on every OS and Atom is available on Linux, Mac OS and Windows.
  2. It comes with good Markdown support out-of-the-box.
  3. It comes with two dark themes I really enjoy.
  4. It is free.
  5. I like the feature of having a project folder file menu on the left side of the screen, even with this small net book.
  6. It has a plug-in interface, so new features do not depend only on new releases.
  7. It comes with a lot of plug-ins and they seem to be hand-selected.
  8. It is fast, it does not feel like working with an IDE.
  9. I like the way the preferences are organized.
  10. I have read that Microsoft uses the Atom shell as the base for their free Visual Studio Code editor and since I will not have the time to look into that, I was even more interested in starting to use Atom.

And yes, Atom has bugs, there are things I don't understand, I would do some things different, but at the end I am very happy with my choice.

OK, so, why a technical blog about it, why isn't this post enough?

It took some time to find out how to solve my problems with using Atom. This topic does not really fit into the Xubuntu blog and it is growing.

Please read my technical blog about this topic here :

Andre's Technical Blog About The Editor Atom

20160619 I will post new blog entries every Sunday from now on


I want to add continuity to my blog and will post new blog entries every Sunday from now on.

Maybe I post entries for the technical blogs earlier, but will then reference these changes in the Sunday post, so you know what has happened last week.

20160619 Restructuring This Blog


I have decided to restructure my blog.

You read my entries bottom-up. That is my intention, because I want you to find my latest entry at the top of my blog page.

What I don't want is to splinter the content of a topic over different blog entries, you would have to read them bottom-up and if I write a week about a different topic, this would only confuse you.

So I started with changing the order in my technical blog about using git. For my technical blog about setting up Xubuntu I had decided to write it top-down direct from the start.

Second I have moved the introductions for the technical blogs to this page, they are blog entries now.

Third I will move the other topics over time to dedicated technical blogs. You can identify their migration by their "yearmonthday" prefix in this page.

Last I have moved the navigation to the end of this page. I now call it History. Instead you see the links to my technical blogs at the top of this page. This makes it easier for you to find my latest entry and you don't have to skip the growing list of links to my weekly blog entries.

20160603 A Blog About Setting Up Xubuntu


I have decided to write a blog about how I set up my new Linux installation. On the one hand because I want to have a document where I can look up the details when I do it next time, on the other hand because I want to share my decisions and experiences about this installation with you.

I think everybody does it differently. You first have to decide which distribution to use. This decision depends on how much you want to reuse, e.g. you could compile all binaries of your Linux on your own machine, or choose a distribution by criteria like LTS (Long Term Support) where you get stable, secure binaries over the next years.

The next decision is which desktop to choose. If your PC is up-to-date you can choose any desktop you want. If your machine is old, your choice will most likely depend on how much RAM you have available and how fast your CPU is.

In the last days I have finished the hardware update of my net book, a Samsung N150, which I am currently using for writing this blog. It now has 2GB RAM and a 240GB SSD. Now the bottleneck is the Atom CPU, but that's OK since I spend more time on implementation than on compilation. Yes, true, I am a nerd and no, it is not like I really need this machine, I have better notebooks, but it is fun changing hardware and installing new software.

Second, I really like its keyboard, I have started implementing a software project in Java with it and it is sufficient for this, and the net book is neat to handle and not heavy at all, even compared to the Mac Book Air I usually use, so why not.

I use Xubuntu, because the hardware is simply too old and the RAM is too small to run KDE (Kubuntu), Gnome (which was the default for many years) or the current Ubuntu desktop, which is called Unity.

So, I thought when I now setup Xubuntu from scratch, let's write down what is necessary to be able to implement software and feel comfortable with doing it.

You can see from my blog that it is important for me to reuse things which are already there. If somebody had a good idea (please, don't repeat the stupid things, 100 wrongs don't make a right) and if it works, then I usually give it a try.

For this reason I use Xubuntu as OS, because it is meant to work for the common cases, so that you don't need to be a Linux professional to make it work for you. When I was attending university I learned a lot about setting up Linux in general, but I also made two experiences :

  1. it takes time to learn it
  2. the way it is done will change soon, there will always be new software to configure

To focus on my software project, I don't want to spend my time on Linux configuration, that's why I have chosen Xubuntu.

This blog only explains how to set up the tools I need for my software project, which do not come with the Xubuntu installation or which cannot be installed using Xubuntu's graphical application installer. The graphical application manager changes from time to time and I don't want to look at it at the moment. Further, I often need different tool versions in parallel, so a single installation is not helpful for me. In the end I am spending a lot of time using this machine, so I spend time on tweaking it and I also want to write about these aspects.

If you are only interested in how to install the Xubuntu image, here is a rough walk through :

  1. download the Xubuntu image
  2. download the Universal USB Installer
  3. use the Universal USB installer to write the Xubuntu image on a USB stick
  4. start your computer booting from the stick and then choose to install Xubuntu

Please read my technical blog about this topic here :

Andre's Technical Blog About How To Setup Xubuntu

20160424 A Blog About Using Git


I am using git now for about 2 years.

I am no git professional and there is a lot I still have to learn.

Now and then I have a problem with git I did not have before.

I use this blog to write down what I have understood, what works for me and what I have recently learned.

Please read my technical blog about this topic here :

Andre's Technical Blog About Using Git

@20160224: Technical blogs may look like the 80s, right?

20160224 How To Start Jenkins With Java From Shell


This entry has been moved here

20160224 How To Block Jenkins Jobs With Lockable Resources


This entry has been moved here

20160224 How To Give Code-Blocks A Style in HTML


Today I have spent some time on figuring out how to get my code examples into boxes. Giving the code tags a style changes the output per line, but not per block.

The solution was to give the pre-tag a style :

pre {
    padding: 15px;
}

<pre>
        the code example
</pre>

@20160221 "Wartbar, die Bar wo man nur wartet aber nie etwas bekommt!"

This week I mentioned to a colleague, that I now have a blog on my domain "". She laughed and said "Wart-Bar, the bar where you only wait but never get anything.". It's a joke which only works in German, "warten" means to wait...

20160221 How To Modify the Hello-World-JPI And See The Modification In Jenkins


This entry has been moved here

20160221 How To Manually Test The Hello-World-JPI With Gradle


This entry has been moved here

20160221 Compile A Hello-World-JPI


This entry has been moved here

20160221 How To Create The Skeleton Of A Jenkins Plugin With Gradle


This entry has been moved here

20160221 Elephant Carpaccio Or How Many Things Can I Get Done On Sunday


OK, ...

My aim : I want to implement a jenkins plugin (JPI).

My problem : I don't even know how to set up the structure of a JPI.

Available time : 2 hours

Solution : Elephant carpaccio

Short summary : It's an exercise in which you split a problem into the smallest pieces you can imagine, but every piece must be testable and deliverable.

Outcome : Instead of going through frustration by aiming at getting my plugin implemented in 2 hours I instead decided to

... and that worked, in 2 hours.

20160216 The first version is online.


20160216 How To Debug Gradle Scripts


To debug a Gradle script, you have to connect a debugger to the running Gradle process.

You have to set the environment variable GRADLE_OPTS (like here in bash) :

export GRADLE_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005"

and then start your task with the option --no-daemon.

Then you can connect to your Gradle process using the remote configuration of IntelliJ IDEA.

You create a remote configuration by clicking

Run / Edit Configurations... / + / Remote

20160216 How To Upload Third-Party-Artifacts to Nexus Using Gradle


Add this to your build script for uploading artifacts you did not build with Gradle.

It will create a task uploadArchives which will upload your artifact to the releases repository in your local nexus.

FYI: You will most likely only upload every tool version once, so better adapt this script to take artifact, version and so on as project properties, so you can pass them as command-line arguments!

uploadArchives {
    repositories {
        mavenDeployer {
            repository(url: "") {
                authentication(userName: "username", password: "password")
            }
            pom.version = 'version'
            pom.artifactId = 'artifact'
        }
    }
}

artifacts {
    archives file: file('')
}

20160216 How To Download Artifacts From Nexus Using Gradle


Put this script into a build.gradle and call gradle tasks, then you get the task copyNexus which will copy the configured tools to $buildDir/tools.

Calling the task unzipNexus will unzip the tools to ${buildDir}/unpacked.

FYI : Gradle will use the user anonymous, give it the roles it needs to be able to access the configured repositories!

apply plugin: "java"
apply plugin: 'maven'

repositories {
    maven {
        url ""
    }
}

configurations {
    tools
}

dependencies {
    tools "group:artifact:version:specifier@zip"
}

task copyNexus(type: Copy) {
    from configurations.tools
    into "$buildDir/tools"
}

task unzipNexus(type: Copy) {

    dependsOn copyNexus

    def zipFile = file("$buildDir/tools/")
    def outputDir = file("${buildDir}/unpacked")

    from zipTree(zipFile)
    into outputDir
}

20160216 How To Enable Artifact Upload In Nexus Via Web UI


After you have Nexus up and running it may be convenient to upload something via the web interface.

You have to :

FYI: This will only enable UI artifact upload for Release repositories, Snapshot repositories do not support it.

20160216 How To Start Nexus For Testing


Currently I run Nexus only for testing it, so I don't need it as a service; I start it as a standalone application with :

./nexus console

and then the UI is available at


20160216 How To Install Nexus


Here you find everything.

The default credentials are

admin / admin123

20160216 Technical Realization Of This Blog


I write this blog in markdown and convert it with Pandoc.

New topics will be added at the top.
