JGit Flow Maven plugin integration with Bamboo

JGit Flow is a useful Maven plugin for applying the git-flow branching model to Maven projects. Since it's a pure Java implementation, it's easy to integrate with most CI servers.

However, if you are using Atlassian Bamboo, a few issues require workarounds.

Git repository URL

Bamboo uses a fake Git repository after checkout. The repository's URL is something like file:///nothing, so JGit Flow cannot perform actual Git operations on this repository. You can:

1) Set the repository URL in the plugin configuration

<configuration>  
    <defaultOriginUrl>[repository url]</defaultOriginUrl>
    <alwaysUpdateOrigin>true</alwaysUpdateOrigin>
</configuration>  
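For context, this <configuration> block goes inside the JGit Flow plugin declaration in your pom.xml. A minimal sketch, assuming the coordinates published on Maven Central (verify the version for your project):

<plugin>
    <groupId>external.atlassian.jgitflow</groupId>
    <artifactId>jgitflow-maven-plugin</artifactId>
    <version>1.0-m5.1</version>
    <configuration>
        <defaultOriginUrl>[repository url]</defaultOriginUrl>
        <alwaysUpdateOrigin>true</alwaysUpdateOrigin>
    </configuration>
</plugin>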

2) Use a Git command to update the repository URL

${bamboo.capability.system.git.executable} remote set-url origin ${bamboo.repository.git.repositoryUrl}

Git repository authentication

You can use -Dusername and -Dpassword with the JGit Flow plugin to set the repository's username and password. To execute a Bamboo shell script that runs Git commands, a .netrc file with authentication details needs to be created. This can be done via the agent start script or by using echo in an inline script (see the sketch below).

machine bitbucket.org  
login <username>  
password <password>  
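A minimal inline-script sketch for creating the file (the Bamboo variable names here are placeholders; substitute the ones defined in your plan):

echo "machine bitbucket.org
login ${bamboo_repo_username}
password ${bamboo_repo_password}" > ~/.netrc
chmod 600 ~/.netrc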

Clean old release branches

After finishing a release with the release-finish goal, the remote release branch is deleted by default, but the branch may still exist locally. These old release branches should be removed, otherwise the next release-start goal will fail.

${bamboo.capability.system.git.executable} fetch --prune --verbose

${bamboo.capability.system.git.executable} branch -vv | awk '/: gone]/{print $1}' | xargs ${bamboo.capability.system.git.executable} branch -d 2> /dev/null

echo 'stale branches deleted'  

Elasticsearch - Delete documents by type

If you want to delete documents in Elasticsearch by type using the Java API, below are some options:

  • For Elasticsearch 1.x, use the deprecated prepareDeleteByQuery method of Client; 2.x has removed this method.
  • For Elasticsearch 2.x, use the delete-by-query plugin.

Alternatively, use the scan/scroll API as below: scan through all documents of the given type and delete them in a single bulk request.

// initial scan request: match all documents of the given type
SearchResponse scrollResponse = this.client.prepareSearch(INDEX_NAME)
        .setTypes(type)
        .setSearchType(SearchType.SCAN)
        .setScroll(new TimeValue(60000))
        .setQuery(QueryBuilders.matchAllQuery())
        .setSize(100)
        .get();
// collect all deletes into one bulk request; refresh the index when it executes
final BulkRequestBuilder bulkRequestBuilder = this.client.prepareBulk().setRefresh(true);
while (true) {
    if (scrollResponse.getHits().getHits().length == 0) {
        break;
    }

    // queue a delete for every hit in the current batch
    scrollResponse.getHits().forEach(hit -> bulkRequestBuilder.add(
        this.client.prepareDelete(INDEX_NAME, type, hit.getId()))
    );
    // fetch the next batch using the scroll id
    scrollResponse = this.client.prepareSearchScroll(scrollResponse.getScrollId())
            .setScroll(new TimeValue(60000))
            .get();
}
if (bulkRequestBuilder.numberOfActions() > 0) {
    bulkRequestBuilder.get();
}

Property ordering of Groovy JsonSlurper parsing

Groovy's JsonSlurper is a useful tool for parsing JSON strings. For a JSON object, the parsing result is a Map. In certain cases, we want the iteration order of the Map's properties to be the same as the encounter order in the original JSON string.

By default, JsonSlurper uses a TreeMap, so the properties are actually sorted. Given the following program, the result will be obj => {a=0, x=2, z=1}.

import groovy.json.JsonSlurper;

import java.util.Map;

public class Test {
    public static void main(String[] args) {
        String jsonString = "{\"obj\": {\"a\": 0, \"z\": 1, \"x\": 2}}";
        JsonSlurper jsonSlurper = new JsonSlurper();
        Map map = (Map) jsonSlurper.parseText(jsonString);
        map.forEach((k, v) -> System.out.println(String.format("%s => %s", k, v)));
    }
}

To keep the original property ordering, you can add -Djdk.map.althashing.threshold=512 as a JVM argument; the output will then be obj => {a=0, z=1, x=2}.
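For example, assuming the class above is compiled with the Groovy jar on the classpath (the jar name here is a placeholder):

java -cp .:groovy-all.jar -Djdk.map.althashing.threshold=512 Test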

The reason lies in the source code of groovy.json.internal.LazyMap, which is used by JsonSlurper (see the source on GitHub). If the jdk.map.althashing.threshold system property is set, LazyMap uses a LinkedHashMap implementation instead of a TreeMap, which keeps the property ordering.

private static final String JDK_MAP_ALTHASHING_SYSPROP = System.getProperty("jdk.map.althashing.threshold");

private void buildIfNeeded() {  
   if (map == null) {
        /** added to avoid hash collision attack. */
        if (Sys.is1_7OrLater() && JDK_MAP_ALTHASHING_SYSPROP != null) {
            map = new LinkedHashMap<String, Object>(size, 0.01f);
        } else {
            map = new TreeMap<String, Object>();
        }

        for (int index = 0; index < size; index++) {
            map.put(keys[index], values[index]);
        }
        this.keys = null;
        this.values = null;
    }
}

Please note that this solution should be treated as a hack, since it depends on Groovy's internal implementation details. This behavior may change in a future version of Groovy.

Note for Java 8

The jdk.map.althashing.threshold system property was removed in Java SE 8, but this hack still works in Java 8 because the implementation only checks for the existence of the system property and does not actually use its value.

Install Ghost on CentOS 7

I just moved my personal blog from Jekyll on Heroku to Ghost on Digital Ocean. Although Digital Ocean provides a 1-click application image for Ghost, I decided to install Ghost myself so that I could have more control over the instance and the application.

Node version

After creating a droplet with CentOS 7, the first thing to install is Node.js. The recommended Node version for Ghost is >0.10.40, so I used nvm to install this Node version.

When installing nvm, it's a good idea to set NVM_DIR to a shared path. By default, nvm is installed into the current user's home directory, which could be the root user's home directory. This can cause file permission issues when Ghost is started as another user.

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | NVM_DIR="/var/nvm" bash  

Then use nvm install 0.10.43 to install Node. Now we can use which node to find the actual path of Node binary.
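For example (the path below is what I got with NVM_DIR=/var/nvm; yours may differ):

nvm install 0.10.43
which node
# /var/nvm/v0.10.43/bin/node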

Service script

To make sure Ghost is started after the system restarts, we need to add a service script. Create the file /etc/init.d/ghost with the following content. The script is based on what I found in this article.

The most important part of this script is the command that starts Ghost: /var/nvm/v0.10.43/bin/node index.js >> /var/log/ghost/ghost.log &. Here I used the Node binary installed by nvm to start Ghost, and I run it as the user ghost.

Use chkconfig --add ghost to have the script run at startup, service ghost start to start Ghost, and service ghost stop to stop Ghost.
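That is:

chkconfig --add ghost
service ghost start
service ghost stop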

#!/bin/sh
#
# ghost - this script starts the ghost blogging package
#
# chkconfig:   - 95 20
# description: ghost is a blogging platform built using javascript \
#              and running on nodejs
#

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

exec="/var/nvm/v0.10.43/bin/node index.js >> /var/log/ghost/ghost.log &"  
prog="ghost"

[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog

lockfile=/var/lock/subsys/$prog

start() {  
    #[ -x $exec ] || exit 5
    echo -n $"Starting $prog: "
    # if not running, start it up here, usually something like "daemon $exec"
    export NODE_ENV=production
    cd /var/data/ghost/
    daemon --user=ghost $exec
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {  
    echo -n $"Stopping $prog: "
    # stop it here, often "killproc $prog"
    pid=`ps -u $prog -fw | grep $prog | grep -v " grep " | awk '{print $2}'`
    kill -9 $pid > /dev/null 2>&1 && echo_success || echo_failure
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {  
    stop
    start
}

my_status() {
    local base pid lock_file=

    base=${1##*/}

    # get pid
    pid=`ps -u $prog -fw | grep $prog | grep -v " grep " | awk '{print $2}'`

    if [ -z "${lock_file}" ]; then
        lock_file=${base}
    fi
    # See if we have no PID and /var/lock/subsys/${lock_file} exists
    if [[ -z "$pid" && -f /var/lock/subsys/${lock_file} ]]; then
        echo $"${base} dead but subsys locked"
        return 2
    fi

    if [ -z "$pid" ]; then
        echo $"${base} is stopped"
        return 3
    fi

    if [ -n "$pid" ]; then
        echo $"${base} (pid $pid) is running..."
        return 0
    fi
}

rh_status() {  
    # run checks to determine if the service is running or use generic status
    my_status $prog
}

rh_status_q() {  
    rh_status >/dev/null 2>&1
}

case "$1" in  
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart)
        $1
        ;;
    status)
        rh_status
        ;;
    *)
        echo $"Usage: $0 {start|stop|restart|status}"
        exit 2
esac  
exit $?  

Nginx

Install Nginx by following this guide.

Add the Ghost Nginx config as /etc/nginx/conf.d/ghost.conf. Also make sure the default server config is removed from /etc/nginx/nginx.conf.

server {  
    listen 0.0.0.0:80;
    server_name midgetontoes.com;
    access_log /var/log/nginx/midgetontoes.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://127.0.0.1:2368;
        proxy_redirect off;
    }
}
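After adding the config, validate it and reload Nginx (assuming Nginx runs under systemd on CentOS 7):

nginx -t
systemctl reload nginx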

New Book - A Practical Guide for Java 8 Lambdas and Streams

This book is not the first book about Java 8 lambda expressions and streams, and it's definitely not the last. Java 8 is a platform upgrade that the community had been looking forward to for a long time, and lambda expressions and streams quickly gained popularity among Java developers. There are already a lot of books and online tutorials about them. This book tries to explain lambda expressions and streams from a different perspective.

  • For lambda expressions, this book gives detailed explanations based on JSR 335.
  • For streams, this book covers the fundamental concepts of the Java core library.
  • This book provides how-to examples for lambda expressions and streams.
  • This book also covers the important utility class Optional.

Lambda expressions and streams are easy to understand and use. This book tries to provide some insights about how to use them efficiently.

Buy this book

New Book - Build Mobile Apps with Ionic and Firebase

With the prevalence of mobile apps, more and more developers want to learn how to build them. Developers can target the iOS or Android platforms, but learning Objective-C/Swift or Java is not an easy task. The web development languages, HTML, JavaScript and CSS, are easier to understand and learn. Apache Cordova makes it possible to build mobile apps with these languages, creating a new type of mobile app: hybrid mobile apps. Hybrid mobile apps actually run in an internal browser inside a wrapper created by Apache Cordova. With hybrid mobile apps, we can have a single code base for different platforms, and developers can use their existing web development skills.

The Ionic framework builds on top of Apache Cordova and provides out-of-the-box components which make developing hybrid mobile apps much easier. Ionic uses Angular as its JavaScript framework and has a nice default UI style with a look & feel similar to native apps. Firebase is a realtime database which can be accessed from web apps using JavaScript. With Ionic and Firebase, you only need to develop front-end code; you don't need to manage any back-end code or servers.

This book is an introductory, sample-driven guide to building hybrid mobile apps using Ionic and Firebase. In this book, we build a Hacker News client app which can show top stories on Hacker News, view the comments of a story, add stories to favorites, and more. This book covers various topics in mobile app development:

  • Local development environment setup
  • Ionic quickstart
  • Work with Firebase
  • State transition
  • Common UI components: lists, cards, modals, popups
  • Forms & inputs
  • User authentication
  • Publish apps

The source code of the sample app is available on GitHub. View screenshots of the sample app here.

Buy this book

NodeJS API proxy with CORS support

Our application's back end is Java-based and exposes a REST API; the front end is AngularJS-based. During front-end development, we use Grunt connect to start a development server for CoffeeScript/LESS and static files. To let AngularJS access the API, which runs on a different port, we need a proxy with CORS support. So I created a simple proxy server using connect and node-http-proxy.
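Both packages are available on npm:

npm install connect http-proxy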

The proxy code is very simple. In the code below, the API server runs on port 8080 and the proxy server listens on port 8000. The proxy server sets Access-Control-* headers to enable CORS and also adds a basic authentication header.

var connect = require('connect'),  
  httpProxy = require('http-proxy');

var app = connect();

var proxy = httpProxy.createProxyServer({  
  target: 'http://127.0.0.1:8080'
});

// add a basic authentication header to every proxied request
proxy.on('proxyReq', function(proxyReq, req, res, options) {
  proxyReq.setHeader('Authorization', 'Basic YWRtaW46cGFzc3dvcmQ=');
});

proxy.on('error', function(e) {  
  console.log(e);
});

// set CORS headers and short-circuit preflight OPTIONS requests
app.use(function(req, res, next) {
  if (req.headers['origin']) {
    res.setHeader('Access-Control-Allow-Origin', req.headers['origin']);
    res.setHeader('Access-Control-Allow-Methods', 'POST, PUT, GET, OPTIONS, DELETE');
    res.setHeader('Access-Control-Max-Age', '3600');
    res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, Authorization, Content-Type');
  }
  if (req.method !== 'OPTIONS') {
    next();
  }
  else {
    res.end();
  }
});

// forward everything else to the API server
app.use(function(req, res) {
  proxy.web(req, res);
});

app.listen(8000);  
console.log('Proxy server started.');

AngularJS needs to have cross-domain requests enabled.

app.config(function($httpProvider) {
  $httpProvider.defaults.useXDomain = true;
});

Then you should be able to access the API.
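For example, a request from an Angular controller or service (the endpoint path here is hypothetical):

$http.get('http://localhost:8000/api/users').then(function(response) {
  console.log(response.data);
});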

Tips for using ProGuard with Spring framework

ProGuard is a free Java class file shrinker, optimizer, obfuscator, and preverifier. You may want to use ProGuard to obfuscate your Java binary code before you release it to customers, especially for Android apps, on-premise enterprise apps, or libraries. The whole obfuscation process is very painful, and you need to run a lot of tests to make sure your code still works properly after obfuscation.

Here are some tips for using ProGuard, especially when the Spring framework is involved.

Use the Maven plugin

If you use Maven to manage your project, then you should use the Maven plugin for ProGuard. It's easy to set up and use.

<plugin>  
    <groupId>com.github.wvengen</groupId>
    <artifactId>proguard-maven-plugin</artifactId>
    <version>2.0.10</version>
    <executions>
        <execution>
            <id>proguard</id>
            <phase>package</phase>
            <goals>
                <goal>proguard</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <obfuscate>true</obfuscate>
        <injar>${shaded-jar.name}</injar>
        <outjar>${shaded-jar.name}</outjar>
        <libs>
            <lib>${java.bootstrap.classes}</lib>
            <lib>${java.cryptographic.extension.classes}</lib>
            <lib>${java.secure.socket.extension.classes}</lib>
        </libs>
        <injarNotExistsSkip>true</injarNotExistsSkip>
        <options>
        </options>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>net.sf.proguard</groupId>
            <artifactId>proguard-base</artifactId>
            <version>5.2.1</version>
            <scope>runtime</scope>
        </dependency>
    </dependencies>
</plugin>  

In <options> of <configuration>, provide a list of <option> elements to configure ProGuard. The sections below cover the options that matter for Spring.

Multi-modules project

If your Maven project has multiple modules, use the Maven shade plugin to create a shaded jar containing all your modules, then run ProGuard against this single jar. This makes sure ProGuard has the correct mappings for all of your application's classes.

<plugin>  
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4</version>
    <configuration>
        <outputFile>${shaded-jar.name}</outputFile>
        <artifactSet>
            <includes>
                <include>com.myapp:*</include>
            </includes>
        </artifactSet>
        <transformers>
            <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                <resource>META-INF/spring.factories</resource>
            </transformer>
            <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                <resource>META-INF/spring.handlers</resource>
            </transformer>
            <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                <resource>META-INF/spring.schemas</resource>
            </transformer>
            <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                <resource>META-INF/spring.provides</resource>
            </transformer>
        </transformers>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>  

If you use Spring, make sure these transformers are added so Spring's various metadata files are merged correctly.

Disable optimization class/marking/final

By default, ProGuard marks classes as final when possible, even when they are not declared final. This causes problems for Spring, because Spring doesn't allow classes annotated with @Configuration to be final. Use the following <option> to disable this optimization.

<option>-optimizations !class/marking/final</option>  

Adapt Spring resources

If you use configuration files like spring.factories to customise Spring, make sure these configuration files are adapted by ProGuard, otherwise the class names inside them will be wrong. META-INF/spring.* in the following option matches Spring's configuration files.

<option>-adaptresourcefilecontents **.properties,META-INF/MANIFEST.MF,META-INF/spring.*</option>  

Keep annotations

Spring uses annotations extensively, so annotations should be kept at runtime to make sure Spring still works properly. *Annotation* in the option below is what keeps annotations.

<option>-keepattributes Exceptions,InnerClasses,Signature,Deprecated,SourceFile,LineNumberTable,*Annotation*,EnclosingMethod</option>  

Keep application launch class

If you use Spring Boot, the Application class should be kept so the app can be launched. The option below keeps any class with a main method.

<option>-keepclasseswithmembers public class * { public static void main(java.lang.String[]);}</option>  

Keep your REST resource classes

If your app exposes a REST API, make sure the resource classes are kept. Most likely you rely on Jackson or other libraries to convert your resource objects to JSON or XML. These libraries use reflection to discover the properties of your resource classes, so the classes must be kept to make sure the JSON/XML representations are correct.

For example, given a resource class User,

public class User {  
    private String firstName;
    private String lastName;

    public String getFirstName() {
        return this.firstName;
    }

    public String getLastName() {
        return this.lastName;
    }
}

After ProGuard processes this class file, the methods getFirstName and getLastName may be renamed to something like a or b. Jackson then cannot use reflection to find the JavaBean properties in the class, and the output will be just an empty JSON object. Keeping the model classes prevents this:

<option>-keep public class com.myapp.**.model.** { *; }</option>  

Process bean classes

You can also follow the examples on the ProGuard website and process bean classes by keeping their setter and getter methods.

<option>  
-keep class com.myapp.**.model.** {
    void set*(***);
    boolean is*();
    *** get*();
}
</option>  

Add name to Spring beans

If the Spring annotations @Service, @Component and @Configuration are used to declare beans, make sure a name is assigned to each bean, e.g. @Component("userHelper") or @Service("userService"), as in the sketch below. When no name is assigned, Spring uses the class name as the bean name, but ProGuard renames classes to something like a, b, or c, which causes name conflicts across packages. For example, package com.myapp.a.a may have a class a, and package com.myapp.a.b may also have a class a; these two classes get the same bean name a but have different types. So beans should be explicitly named to avoid such conflicts.
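For example (the class and bean names here are illustrative):

@Service("userService")
public class UserService {
    // ...
}

@Component("userHelper")
public class UserHelper {
    // ...
}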

Keep members with Spring annotations

If you use Spring annotations like @Value to inject values into your classes, like below:

@Value("${myval}")
private String myVal;  

ProGuard is smart enough to infer that the value of myVal is always null, since the field is never assigned in the code, so it replaces all occurrences of myVal with null in the binary code, and a lot of NullPointerExceptions will be thrown at runtime. To avoid this, keep members carrying Spring's injection annotations:

<option>-keepclassmembers class * {  
    @org.springframework.beans.factory.annotation.Autowired *;
    @org.springframework.beans.factory.annotation.Value *;
}
</option>  

AngularJS - Features Toggle with Grunt Build

Background

Spring Boot back-end with AngularJS front-end.

Scenario

Our product has two versions: a lite version and a standard version. Some features are only available in the standard version, so some UI components need to be hidden in the lite version. This is controlled by the build process: passing different flags to the build produces different versions, and the front-end code uses the same flag to show or hide components.

Solution

Install grunt-ng-constant and load the task in the Grunt config, as shown below.

npm install grunt-ng-constant --save-dev  
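Then load the task in the Gruntfile:

grunt.loadNpmTasks('grunt-ng-constant');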

Then add the ngconstant config to the Gruntfile. In the config below, I define two environments, development and production. All environment-related configuration goes into the ENV object. distType is the flag I use to distinguish release versions: in the development build it is set to standard, while in the production build it is set to grunt.option('distType'), so it's controlled by a command line argument when Grunt is invoked.

ngconstant: {  
  options: {
    space: '  ',
    wrap: 'define(["angular"], function(angular){ \n return {%= __ngModule %} \n\n });',
    name: 'config'
  },
  development: {
    options: {
      dest: '<%= appConfig.build %>/scripts/config.js'
    },
    constants: {
      ENV: {
        name: 'development',
        apiEndpoint: 'http://localhost:8080/',
        distType: 'standard'
      }
    }
  },
  production: {
    options: {
      dest: '<%= appConfig.build %>/scripts/config.js'
    },
    constants: {
      ENV: {
        name: 'production',
        apiEndpoint: '/',
        distType: grunt.option('distType')
      }
    }
  }
}

Add ngconstant:production to the list of production build tasks. To build a lite version, run grunt build --distType=lite; to build a standard version, run grunt build --distType=standard.

grunt-ng-constant generates the config.js file in the specified directory. Include this file using a <script> tag or load it with RequireJS, as in the CoffeeScript snippet below.

define(['angular', 'config'], (angular) ->
  app = angular.module('myApp', ['config'])
  app.run (ENV) ->
    # Use ENV.distType to decide which components to show
)

Build Apache Camel Custom Component

If you create a custom Apache Camel component, you can build it with Maven to generate the necessary metadata, so the component can be auto-discovered by Camel.

Create a custom component by following the guide. Then add the file META-INF/services/org/apache/camel/component/FOO, where FOO is the component's URI scheme, to the src/main/resources folder with content like below:

class=com.example.CustomComponent  
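Camel then resolves the URI scheme FOO to this class at runtime, so routes can use the component directly (the scheme and endpoint names here are illustrative):

// inside a RouteBuilder's configure() method
from("FOO:something").to("log:out");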

Then add the following code to Maven's pom.xml. The camel-package-maven-plugin generates the component.properties file.

<build>  
    <plugins>
        <plugin>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-package-maven-plugin</artifactId>
            <version>${camel.version}</version>
            <executions>
                <execution>
                    <goals>
                        <goal>prepare-components</goal>
                    </goals>
                    <phase>generate-resources</phase>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>  

Then add a Maven dependency on org.apache.camel:apt. This apt module processes Camel annotations and generates the component JSON schema and HTML documentation. See the Camel 2.15 release notes.

<dependency>  
    <groupId>org.apache.camel</groupId>
    <artifactId>apt</artifactId>
    <version>${camel.version}</version>
    <scope>provided</scope>
</dependency>  
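For the apt processor to have something to pick up, the endpoint class should carry Camel's URI annotations. A minimal sketch (names are illustrative; check the Camel javadoc of your version for the exact attributes):

import org.apache.camel.impl.DefaultEndpoint;
import org.apache.camel.spi.UriEndpoint;
import org.apache.camel.spi.UriParam;
import org.apache.camel.spi.UriPath;

// the apt processor reads these annotations to generate the JSON schema
@UriEndpoint(scheme = "FOO", title = "Custom", syntax = "FOO:name")
public class CustomEndpoint extends DefaultEndpoint {
    @UriPath private String name;       // the part after the scheme
    @UriParam private boolean verbose;  // an example query parameter
    // createProducer()/createConsumer() omitted for brevity
}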

After this, you should be able to list your component and its JSON schema from JMX.