
Saturday night’s alright for coding

by Donald Raab at December 11, 2018 04:44 PM

Programming fun with Java and Eclipse Collections on a Saturday night.

Combining upper and lowercase letters of the English alphabet

Do you ever just sit down and write code?

Sometimes I just like to sit down in front of my IDE and write some Java code. I don’t always know what I am going to write, and if I find I am not creating anything useful I will just abandon the work. Other times, I find myself creating something that could become a new code kata or potentially a new useful API.

Saturday night I was bored watching TV, and I started writing the code pictured in the image above. I wanted to see if I could use the primitive APIs in Eclipse Collections to create an alphabet with both uppercase and lowercase letters quickly and fluently.

public void alphabet()
{
    var alphabet = IntInterval.fromTo('A', 'Z')
            .zipInt(IntInterval.fromTo('a', 'z'))
            .collect(pair -> Strings.toCodePoints(pair.getOne(), pair.getTwo()))
            .makeString(",");
    Assert.assertEquals(
            "Aa,Bb,Cc,Dd,Ee,Ff,Gg,Hh,Ii,Jj,Kk,Ll,Mm,Nn," +
            "Oo,Pp,Qq,Rr,Ss,Tt,Uu,Vv,Ww,Xx,Yy,Zz",
            alphabet);
}

This is not how I originally wrote the code. I started out coding with JDK 8 using intermediate types, and then switched to JDK 11 so I could use var. The original version of the code looked like this.

public void alphabet()
{
    IntInterval uppercase = IntInterval.fromTo('A', 'Z');
    IntInterval lowercase = IntInterval.fromTo('a', 'z');
    ImmutableList<IntIntPair> pairs = uppercase.zipInt(lowercase);
    ImmutableList<CodePointAdapter> strings =
            pairs.collect(pair ->
                    Strings.toCodePoints(pair.getOne(), pair.getTwo()));
    String alphabet = strings.makeString(",");
    Assert.assertEquals(
            "Aa,Bb,Cc,Dd,Ee,Ff,Gg,Hh,Ii,Jj,Kk,Ll,Mm,Nn," +
            "Oo,Pp,Qq,Rr,Ss,Tt,Uu,Vv,Ww,Xx,Yy,Zz",
            alphabet);
}

Replacing all of the types with var made the code look like this.

The code is less dense, but I also have less information about the intermediate types

IntelliJ helps make things really clear and simple when using a fluent coding style in Java by showing the intermediate types on the right. See the text underlined in green below. Using JDK 11 doesn’t really make too much of a difference here, as var is only three characters shorter than String.

No need to type on the left, as the IDE displays intermediate types on the right

Fluent Primitives

Having a fluent primitive API available makes coding with primitive types much more fun and productive. In the examples above I was able to leverage IntInterval to create both the uppercase and lowercase alphabets. Using the zipInt method, I combined the two IntInterval instances into an ImmutableList of IntIntPair. I then transformed the IntIntPair instances into CodePointAdapter instances using the collect method. CodePointAdapter implements the CharSequence interface, which is also implemented by the String class in Java. Finally, makeString(",") creates a comma-separated String combining all of the CodePointAdapter instances.

There was a limited amount of boxing required here, when I had to create the IntIntPair and CodePointAdapter instances. But at no point did I have to box any of the int values into their Integer wrapper counterparts to continue the fluent calls.
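To see the contrast in plain JDK terms (an aside using java.util.stream, not Eclipse Collections), a boxed pipeline wraps every int in an Integer object, while a primitive pipeline keeps the values unboxed until an object is genuinely needed:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class BoxingDemo {
    // Boxed: each code point becomes an Integer object in the list
    static List<Integer> boxedLetters() {
        return IntStream.rangeClosed('A', 'Z').boxed().collect(Collectors.toList());
    }

    // Primitive: the ints are summed without any boxing at all
    static int sumOfLetters() {
        return IntStream.rangeClosed('A', 'Z').sum();
    }
}
```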

Method Reference Preference

I prefer method references over lambdas in Java. Method references are often more succinct than lambdas, and are usually self-documenting. I had to use a lambda to transform the IntIntPair to a CodePointAdapter using Strings.toCodePoints.

I wonder whether it would be worthwhile to add a new overloaded toCodePoints method to the Strings class which takes an IntIntPair as a parameter. I also wonder whether it would be worthwhile to add a version of the method which takes an IntIterable as well. Right now, toCodePoints takes an int array.
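Here is a rough sketch of what such overloads might look like, using a plain String as a stand-in for CodePointAdapter; the names and signatures are assumptions for illustration, not actual Eclipse Collections API:

```java
public class StringsSketch {
    // Hypothetical overload taking two code points (stand-in for an IntIntPair overload)
    static String toCodePoints(int first, int second) {
        return new String(new int[]{first, second}, 0, 2);
    }

    // Hypothetical overload taking an arbitrary array of code points
    // (stand-in for an IntIterable overload)
    static String toCodePoints(int... codePoints) {
        return new String(codePoints, 0, codePoints.length);
    }
}
```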

Here’s what the code would look like if I used a method reference instead of a lambda. I extracted the code contained in the lambda into a method called toCodePoints; I find I do this quite frequently these days. The result is shown in the final Eclipse Collections version at the end of this post.

Maybe next time I am writing code on a Saturday night, I will see if it makes sense to add one or both of the toCodePoints utility methods to the Strings class in Eclipse Collections.

Sunday night’s alright for blogging and coding

I was thinking I would be finished with the blog Sunday evening, but then decided I wanted to try some additional coding using Java and Streams. I was curious to see if I could come up with a similarly fluent solution to creating an uppercase/lowercase alphabet using primitive Java Streams. I was unsuccessful at keeping everything in one fluent call, but this was the best solution I could come up with using only built-in Java types and APIs.

Not quite as fluent, as I had to turn the IntStreams into int arrays
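As an aside, if you are willing to rely on the fixed distance between the uppercase and lowercase ASCII letters instead of zipping two streams, the whole computation can stay in a single fluent JDK pipeline (a sketch, not the approach taken above):

```java
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class AlphabetOneLiner {
    static String alphabet() {
        return IntStream.rangeClosed('A', 'Z')
                // 'a' - 'A' is the constant distance between the cases in ASCII
                .mapToObj(c -> new String(new int[]{c, c + ('a' - 'A')}, 0, 2))
                .collect(Collectors.joining(","));
    }
}
```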

Final thoughts

In the future, I might turn these code examples into a new code kata for Eclipse Collections and Java Streams. Until then, here are the final versions of the code in raw text if you want to experiment with the APIs on your own.

Eclipse Collections Version

public void alphabet()
{
    var alphabet = IntInterval.fromTo('A', 'Z')
            .zipInt(IntInterval.fromTo('a', 'z'))
            .collect(this::toCodePoints)
            .makeString(",");
    Assert.assertEquals(
            "Aa,Bb,Cc,Dd,Ee,Ff,Gg,Hh,Ii,Jj,Kk,Ll,Mm,Nn," +
            "Oo,Pp,Qq,Rr,Ss,Tt,Uu,Vv,Ww,Xx,Yy,Zz",
            alphabet);
}

private CodePointAdapter toCodePoints(IntIntPair pair)
{
    return Strings.toCodePoints(pair.getOne(), pair.getTwo());
}

Java Streams Version

public void alphabetStream()
{
    var upper = IntStream.rangeClosed('A', 'Z').toArray();
    var lower = IntStream.rangeClosed('a', 'z').toArray();
    var alphabet = IntStream.range(0, 26)
            .mapToObj(i ->
                    new String(new int[]{upper[i], lower[i]}, 0, 2))
            .collect(Collectors.joining(","));
    Assert.assertEquals(
            "Aa,Bb,Cc,Dd,Ee,Ff,Gg,Hh,Ii,Jj,Kk,Ll,Mm,Nn," +
            "Oo,Pp,Qq,Rr,Ss,Tt,Uu,Vv,Ww,Xx,Yy,Zz",
            alphabet);
}

Happy Coding!

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


Jenkins 1: IDE Setup and an Empty Plugin

by Christian Pontesegger at December 11, 2018 03:09 PM

I recently started to write my first Jenkins plugin and I thought I’d share the experience with you. For that reason I started a new series of tutorials.

Today we are building a simple Jenkins plugin using Eclipse. The basic steps of this tutorial are extracted from the Jenkins Plugin Tutorial.

Jenkins Tutorials

For a list of all Jenkins-related tutorials see the Jenkins Tutorials Overview.

Source code for this tutorial is available on GitHub as a single zip archive, as a Team Project Set, or you can browse the files online.


As you are interested in writing a plugin for Jenkins, I expect that you have a rough idea of what Jenkins is used for and how to administer it.

While our build environment allows running a test instance of Jenkins with our plugin enabled, I also like to have a 'real' Jenkins instance available to test my plugins. Therefore I use Docker to quickly get started with a working Jenkins installation.

Once you have installed Docker (extended tutorial for Debian), you can download the latest Jenkins container using
docker pull jenkins/jenkins:lts

Now run your container using
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts

After the startup process your server is reachable via http://localhost:8080/.

To manage containers, use the following commands:
docker container ls
docker container stop <container name>

Step 1: Maven configuration

We will need Maven installed and ready for creating the project, building it, and testing it, so make sure you have set it up correctly.

Maven needs some configuration to learn about Jenkins plugins, so you need to adapt the configuration file slightly. On Linux change ~/.m2/settings.xml, on Windows modify/create %USERPROFILE%\.m2\settings.xml and set the following content:

<settings>
  <pluginGroups>
    <pluginGroup>org.jenkins-ci.tools</pluginGroup>
  </pluginGroups>

  <profiles>
    <!-- Give access to Jenkins plugins -->
    <profile>
      <id>jenkins</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <repositories>
        <repository>
          <id>repo.jenkins-ci.org</id>
          <url>https://repo.jenkins-ci.org/public/</url>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>repo.jenkins-ci.org</id>
          <url>https://repo.jenkins-ci.org/public/</url>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
</settings>

Hint: On Windows I had to remove the settings file <maven install folder>\conf\settings.xml as it was used in favor of my profile settings.

Step 2: Create the plugin skeleton

To create the initial project open a shell and execute:
mvn archetype:generate -Dfilter=io.jenkins.archetypes:empty-plugin
You will be asked some questions about how your plugin should be configured:
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO] >>> maven-archetype-plugin:3.0.1:generate (default-cli) > generate-sources @ standalone-pom >>>
[INFO] <<< maven-archetype-plugin:3.0.1:generate (default-cli) < generate-sources @ standalone-pom <<<
[INFO] --- maven-archetype-plugin:3.0.1:generate (default-cli) @ standalone-pom ---
[INFO] Generating project in Interactive mode
[INFO] No archetype defined. Using maven-archetype-quickstart (org.apache.maven.archetypes:maven-archetype-quickstart:1.0)
Choose archetype:
1: remote -> io.jenkins.archetypes:empty-plugin (Skeleton of a Jenkins plugin with a POM and an empty source tree.)
Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): : 1
Choose io.jenkins.archetypes:empty-plugin version:
1: 1.0
2: 1.1
3: 1.2
4: 1.3
5: 1.4
Choose a number: 5:
[INFO] Using property: groupId = unused
Define value for property 'artifactId': builder.hello
Define value for property 'version' 1.0-SNAPSHOT: :
[INFO] Using property: package = unused
Confirm properties configuration:
groupId: unused
artifactId: builder.hello
version: 1.0-SNAPSHOT
package: unused
Y: :
[INFO] ----------------------------------------------------------------------------
[INFO] Using following parameters for creating project from Archetype: empty-plugin:1.4
[INFO] ----------------------------------------------------------------------------
[INFO] Parameter: groupId, Value: unused
[INFO] Parameter: artifactId, Value: builder.hello
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Parameter: package, Value: unused
[INFO] Parameter: packageInPathFormat, Value: unused
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Parameter: package, Value: unused
[INFO] Parameter: groupId, Value: unused
[INFO] Parameter: artifactId, Value: builder.hello
[INFO] Project created from Archetype in dir: ~/Eclipse/
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 36.202 s
[INFO] Finished at: 2018-11-27T20:34:15+01:00
[INFO] Final Memory: 16M/169M
[INFO] ------------------------------------------------------------------------

We just created the basic skeleton files and could start working right away. But as we want to do it the Eclipse way, we need to convert the project to a proper Eclipse project. Therefore change into the created project directory and execute:
mvn -DdownloadSources=true -DdownloadJavadocs=true -DoutputDirectory=target/eclipse-classes eclipse:eclipse

The first run might take some time as maven has to fetch tons of dependencies. So sit back and enjoy the show...

Once this step is done we can import our project using the Eclipse import wizard via File / Import..., selecting General/Existing Projects into Workspace. On the following page select the project folder that was created by Maven.

Step 3: Update configuration files

The created pom.xml file for our plugin provides a good starting point for development. Typically you might want to update it a little before you actually start coding. Fields like name, description and license should be pretty clear. More interesting are the java.level and jenkins.version properties: upgrading the Java level to 8 should be pretty safe these days, and Jenkins 2.7.3 is really outdated. To check which versions are available you may browse the Jenkins artifactory server: open the jenkins-war node and search for a version you would like to use.

I further adapt the .project file and remove the groovyNature, as this would trigger install requests for Groovy support in Eclipse. As we are going to write Java code, we do not need Groovy.

Step 4: Build & Deploy the Plugin

To build your plugin simply run
mvn package
This will build and test your package. Further, it creates an installable *.hpi package in the project's target folder.

Step 5: Test the plugin in a test instance

To see your plugin in action you might want to execute it in a test instance of Jenkins. Maven will help us to set this up:
mvn hpi:run -Djetty.port=8090
After the boot phase, open up your browser and point to http://localhost:8090/jenkins to access your test instance.

Debugging is also quite simple: just add environment settings to set up your remote debugger. On Windows this would be:
set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=8000,suspend=n
On Linux use
export MAVEN_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=8000,suspend=n"
Then you should be able to set up a Remote Java Application debug configuration in Eclipse.

Writing a simple builder will be our next step, stay tuned for the next tutorial.


HTTP response validation with the Vert.x Web Client

by tsegismont at December 10, 2018 12:00 AM

By default, a Vert.x Web Client request ends with an error only if something wrong happens at the network level. In other words, a 404 Not Found response, or a response with the wrong content type, are not considered as failures.

Hence, you would usually perform sanity checks manually after the response is received:

client
  .get(8080, "", "/some-uri")
  .send(ar -> {
    if (ar.succeeded()) {
      HttpResponse<Buffer> response = ar.result();
      if (response.statusCode() == 200 && response.getHeader("content-type").equals("application/json")) {
        // Decode the body as a json object
        JsonObject body = response.bodyAsJsonObject();
      } else {
        System.out.println("Something went wrong " + response.statusCode());
      }
    } else {
      System.out.println("Something went wrong " + ar.cause().getMessage());
    }
  });

Starting with Vert.x 3.6, you can trade flexibility for clarity and conciseness using response predicates.

Response predicates

Response predicates can fail a request when the response does not match a criterion.

The Web Client module comes with a set of ready-to-use predicates:

client
  .get(8080, "", "/some-uri")
  .expect(ResponsePredicate.SC_SUCCESS)
  .expect(ResponsePredicate.JSON)
  .send(ar -> {
    if (ar.succeeded()) {
      HttpResponse<Buffer> response = ar.result();
      // Safely decode the body as a json object
      JsonObject body = response.bodyAsJsonObject();
    } else {
      System.out.println("Something went wrong " + ar.cause().getMessage());
    }
  });

The web is full of HTTP/JSON endpoints, so there is no doubt the ResponsePredicate.SC_SUCCESS and ResponsePredicate.JSON can be handy.

Nevertheless, you might also need to check that the status code is within a specific range:

client
  .get(8080, "", "/some-uri")
  .expect(ResponsePredicate.status(200, 202))
  .send(ar -> {
    // ....
  });

Or that the content is of a specific type:

client
  .get(8080, "", "/some-uri")
  .expect(ResponsePredicate.contentType("some/content-type"))
  .send(ar -> {
    // ....
  });

Please refer to the ResponsePredicate documentation for a full list of predefined predicates.

Custom predicates

Of course, predicates were not designed for status code and content type checking only, so feel free to create your own validation code:

// Check CORS header allowing to do POST
Function<HttpResponse<Void>, ResponsePredicateResult> methodsPredicate = resp -> {
  String methods = resp.getHeader("Access-Control-Allow-Methods");
  if (methods != null) {
    if (methods.contains("POST")) {
      return ResponsePredicateResult.success();
    }
  }
  return ResponsePredicateResult.failure("Does not work");
};

// Send pre-flight CORS request
client
  .request(HttpMethod.OPTIONS, 8080, "", "/some-uri")
  .putHeader("Origin", "")
  .putHeader("Access-Control-Request-Method", "POST")
  .expect(methodsPredicate)
  .send(ar -> {
    if (ar.succeeded()) {
      // Process the POST request now
    } else {
      System.out.println("Something went wrong " + ar.cause().getMessage());
    }
  });

Note that response predicates are evaluated before the response body is received. Therefore you can’t inspect the response body in a predicate test function, only status code, status message and response headers.

Dealing with failures

By default, response predicates (including the predefined ones) use a generic error converter which discards the response body and conveys a simple message. You can customize the exception class by changing the error converter:

ResponsePredicate predicate = ResponsePredicate.create(ResponsePredicate.SC_SUCCESS, result -> {
  return new MyCustomException(result.message());
});

Beware that creating exceptions in Java comes with the performance cost of capturing the call stack. The generic error converter generates exceptions that do not capture it.
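A common way to get such a cheap exception is to override fillInStackTrace, sketched here under the assumption that MyCustomException is a plain Exception subclass:

```java
// Sketch: an exception whose construction skips the costly stack capture.
// MyCustomException is the name from the example above; the body is assumed.
public class MyCustomException extends Exception {
    public MyCustomException(String message) {
        super(message);
    }

    // Returning `this` without walking the stack makes construction much cheaper
    @Override
    public synchronized Throwable fillInStackTrace() {
        return this;
    }
}
```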

Reading details in error responses

Many web APIs provide details in error responses. For example, the Marvel API uses this JSON object format:

{
  "code": "InvalidCredentials",
  "message": "The passed API key is invalid."
}

To avoid losing this information, it is possible to wait for the response body to be fully received before the error converter is called:

ErrorConverter converter = ErrorConverter.createFullBody(result -> {

  // Invoked after the response body is fully received
  HttpResponse<Buffer> response = result.response();

  if (response.getHeader("content-type").equals("application/json")) {
    // Error body is JSON data
    JsonObject body = response.bodyAsJsonObject();
    return new MyCustomException(body.getString("code"), body.getString("message"));
  }

  // Fallback to default message
  return new MyCustomException(result.message());
});

ResponsePredicate predicate = ResponsePredicate.create(ResponsePredicate.SC_SUCCESS, converter);

That’s it! Feel free to comment here or ask questions on our community channels.


The RSS reader tutorial. Step 3.

by Sammers21 at December 06, 2018 12:00 AM

Quick recap

Now that Vert.x 3.6.0 has been released, it’s the perfect time to conclude our Vert.x Cassandra Client tutorial!

In the previous step we have successfully implemented the second endpoint of the RSS reader app.

The RSS reader example involves implementing 3 endpoints. This article is dedicated to implementing the last one: the GET /articles/by_rss_link?link={rss_link} endpoint.

Before completing this step, make sure you are in the step_3 git branch:

git checkout step_3

Implementing the 3rd endpoint

The 3rd endpoint serves a list of articles related to a specific RSS channel. In a request, we specify the RSS channel by providing a link. On the application side, after receiving a request, we execute the following query:

SELECT title, article_link, description, pubDate FROM articles_by_rss_link WHERE rss_link = RSS_LINK_FROM_REQUEST ;


To obtain articles by RSS link we first need to prepare the related statement. Change AppVerticle#prepareSelectArticlesByRssLink in this way:

private Future prepareSelectArticlesByRssLink() {
    return Util.prepareQueryAndSetReference(client,
            "SELECT title, article_link, description, pubDate FROM articles_by_rss_link WHERE rss_link = ? ;",
            selectArticlesByRssLink);
}

And now, we can implement the AppVerticle#getArticles method. Basically, it will use the selectArticlesByRssLink statement for finding articles by the given link. Here’s the implementation:

private void getArticles(RoutingContext ctx) {
    String link = ctx.request().getParam("link");
    if (link == null) {
        // the link query parameter is required
        ctx.response().setStatusCode(400).end("link is a required parameter");
    } else {
        client.executeWithFullFetch(selectArticlesByRssLink.bind(link), handler -> {
            if (handler.succeeded()) {
                List rows = handler.result();

                JsonObject responseJson = new JsonObject();
                JsonArray articles = new JsonArray();

                rows.forEach(eachRow -> articles.add(
                        new JsonObject()
                                .put("title", eachRow.getString(0))
                                .put("link", eachRow.getString(1))
                                .put("description", eachRow.getString(2))
                                .put("pub_date", eachRow.getTimestamp(3).getTime())
                ));

                responseJson.put("articles", articles);
                ctx.response().end(responseJson.toString());
            } else {
                log.error("failed to get articles for " + link, handler.cause());
                ctx.response().setStatusCode(500).end("Unable to retrieve the info from C*");
            }
        });
    }
}

During the series we have shown how the RSS reader app can be implemented with the Vert.x Cassandra client.

Thanks for reading this. I hope you enjoyed reading this series. See you soon on our Gitter channel!


Slides from JavaFX-Days Zürich on e(fx)clipse APIs

by Tom Schindl at December 05, 2018 08:57 AM

If you could not attend my talk at the JavaFX-Days Zürich yesterday, or you did and want to recap what I presented: here are the slides.

I enjoyed the conference and I hope you did as well. See you next year!


Using Eclipse Hono's Command&Control with Eclipse Ditto

December 05, 2018 05:00 AM

With version 0.8.0, Eclipse Ditto can now interact with Eclipse Hono using the Command & Control feature. It is possible to send a Thing (or Feature) message at the Ditto Message API, which is then forwarded to Hono as a command message. Hono routes the message to the device, which in turn can send a response to the command, including a status telling whether the command was successfully processed or not. This response is routed back via Hono to the Ditto Message API.

In this example we connect the Ditto sandbox and the Hono sandbox to send a message (3) to a simulated device via the Ditto Messages API. The device receives the command from the Hono HTTP Adapter and responds with a message (4) that is routed back to the caller at the Ditto Message API (5). For the sake of simplicity we use simple curl commands both for the Ditto and Hono HTTP APIs.

The following steps are covered in this example:

  1. Set up a connection between the Eclipse Ditto and Hono sandboxes
  2. Signal availability of the device
  3. Send a Ditto message
  4. Device receives the command and sends a command response
  5. Caller receives the response at the Ditto Message API

Command and Control

Prerequisites: A Ditto digital twin and a Hono device

The creation of a Hono device and Ditto digital twin has already been covered in the blog post Connecting Eclipse Ditto to Eclipse Hono. For brevity we will just list the required commands to create a twin/device here. For a detailed explanation of the steps please refer to the previous post.

Create Hono device

# setup a tenant
$ curl -X POST -i -H 'Content-Type: application/json' -d '{"tenant-id": "org.eclipse.ditto"}'
# create a device
$ curl -X POST -i -H 'Content-Type: application/json' -d '{"device-id": "org.eclipse.ditto:teapot"}'
# add device credentials
$ PWD_HASH=$(echo -n 'teapot' | openssl dgst -binary -sha512 | base64 -w 0)
$ curl -X POST -i -H 'Content-Type: application/json' -d '{
  "device-id": "org.eclipse.ditto:teapot",
  "type": "hashed-password",
  "auth-id": "teapot",
  "secrets": [{
      "hash-function" : "sha-512",
      "pwd-hash": "'$PWD_HASH'"
  }]
}'

Create Ditto digital twin

# create thing in Ditto
$ curl -X PUT -i -u demo5:demo -H 'Content-Type: application/json' -d '{
    "features": {
      "water": {
        "properties": {
          "temperature": 20
        }
      }
    }
  }'

Set up a connection for Command & Control

In order to forward Ditto messages to the device as Hono commands, we first need to set up and configure a connection between Eclipse Ditto and Eclipse Hono that is prepared for Command & Control messages. According to the Hono documentation, the connection must contain a target with the address control/<tenant-id>/<device-id> and a source with the address control/<tenant-id>/<reply-identifier>. The reply-identifier can be chosen arbitrarily, but must be set as the reply-to header of a command exactly as defined in the connection:

curl -X POST -i -u devops:devopsPw1! \
     -H 'Content-Type: application/json' \
     -d '{
           "targetActorSelection": "/system/sharding/connection",
           "headers": {
             "aggregate": false
           },
           "piggybackCommand": {
             "type": "connectivity.commands:createConnection",
             "connection": {
               "id": "command-and-control-connection",
               "connectionType": "amqp-10",
               "connectionStatus": "open",
               "uri": "amqp://",
               "failoverEnabled": true,
               "sources": [{
                   "addresses": [
                     "control/org.eclipse.ditto/replies"
                   ],
                   "authorizationContext": ["<authorization-subject>"],
                   "headerMapping": {
                     "correlation-id": "{{ header:correlation-id }}",
                     "status": "{{ header:status }}"
                   }
               }],
               "targets": [{
                   "address": "control/org.eclipse.ditto/{{ thing:id }}",
                   "authorizationContext": ["<authorization-subject>"],
                   "headerMapping": {
                     "correlation-id": "{{ header:correlation-id }}",
                     "subject": "{{ topic:subject }}",
                     "reply-to": "control/org.eclipse.ditto/replies"
                   }
               }]
             }
           }
         }' \

As described in the Hono API description, a command message has three mandatory properties: correlation-id, subject and reply-to; these are defined in the target header mapping of the connection. The source header mapping defines a mapping for correlation-id and status to internal headers; they are required to properly map the Hono command response to a Ditto message response.

Signal availability of device

As we are using the Hono HTTP Adapter to connect our device, send telemetry and receive commands, the designated way to signal readiness to receive a command is therefore to specify the hono-ttd parameter on an arbitrary event (for a detailed description please consult the Hono HTTP Adapter guide).

curl -X POST -i -u teapot@org.eclipse.ditto:teapot -H 'hono-ttd: 60' -H 'Content-Type: application/json' \
     -d '{
           "topic": "org.eclipse.ditto/teapot/things/twin/commands/modify",
           "path": "/features/water/properties/temperature",
           "value": 23
         }' \

The request is now open to receive a command for 60 seconds before it is terminated.

Send a Ditto message

Now we can use the Ditto Messages API to send a message to the device waiting for a command:

curl -i -X POST '' \
     -u demo5:demo \
     -H 'x-correlation-id: command-and-control' \
     -d '{"targetTemperature":85}'

Device receives the command

The message is forwarded to Hono as configured in the connection and finally terminates the pending request we opened before with a status code of 200 OK:

HTTP/1.1 200 OK
hono-command: brew
hono-cmd-req-id: 013command-and-controlreplies
Content-Type: application/octet-stream
Content-Length: 17
Connection: Keep-Alive

Hono adds two headers besides the standard HTTP headers: hono-command and hono-cmd-req-id. hono-command contains the subject of the message, and hono-cmd-req-id identifies the message and is used to correlate the request with the response we are now going to send.

Device sends a command response

We use the header value of hono-cmd-req-id to construct the response address, and another curl command completes the roundtrip with a response from the simulated device:

curl -i -X POST -u teapot@org.eclipse.ditto:teapot \
     -H 'Content-Type: application/json' \
     -H 'hono-cmd-status: 200' \
     -d '{
           "topic": "org.eclipse.ditto/teapot/things/live/messages/brew",
           "headers": {
             "content-type": "application/json",
           "correlation-id": "command-and-control"
           },
           "path": "/inbox/messages/brew",
           "value": { "eta": 56},
           "status": 200
         }' \

Message response is received at Ditto Message API

And finally we receive the command response at the Ditto Message API where we sent the original message:

HTTP/1.1 200 OK
correlation-id: command-and-control
message-id: command-and-control
status: 200
Content-Type: application/json
Content-Length: 10


Alternative: Receive command and send response via MQTT

Alternatively we can also receive the command by subscribing to the MQTT topic control/+/+/req/# at the Hono MQTT Adapter:

$ mosquitto_sub -d -h -p 8883 -v -u 'teapot@org.eclipse.ditto' -P teapot -t 'control/+/+/req/#'

And also publish the command response on the MQTT topic control///res/013command-and-controlreplies/200:

mosquitto_pub -d -h -p 8883 -u 'teapot@org.eclipse.ditto' -P teapot \
              -t control///res/013command-and-controlreplies/200 \
              -m '{
                    "topic": "org.eclipse.ditto/teapot/things/live/messages/brew",
                    "headers": {
                      "content-type": "application/json",
                      "correlation-id": "command-and-control"
                    },
                    "path": "/inbox/messages/brew",
                    "value": {
                      "eta": 58
                    },
                    "status": 200
                  }'

If you have any wishes or improvements, are missing something, or just want to get in touch with us, you can use one of our feedback channels.


The Eclipse Ditto team


Announcing e(fx)clipse DriftFX – Integrating Native Rendering Pipelines into JavaFX

by Tom Schindl at December 04, 2018 07:17 PM

I’ve had the honor of introducing DriftFX at today’s JavaFX-Days conference in Zürich.

What is DriftFX

DriftFX is a JavaFX extension allowing you to embed native rendering engines (e.g. OpenGL) into the JavaFX scenegraph.

To embed external rendering engines, DriftFX exposes a new Node type, DriftFXSurface, which you can put at any place in the SceneGraph and treat like any other JavaFX Node: you can apply transformations, effects, and so on, just as you would with any other Node. In general you can think of it as an ImageView on steroids.

A more real-world example of what one can make happen with DriftFX can be seen in this video provided by Netallied, a partner company.

What does DriftFX do?

DriftFX allows you to directly embed content rendered by native pipelines into JavaFX WITHOUT going through main memory. The rendered artifacts stay on the GPU the whole time!

What does DriftFX not do?

DriftFX is not a rendering engine itself, and it does not provide any abstraction layer for writing native rendering engines. Its only purpose is to bring your rendered content directly into JavaFX.

What platforms does DriftFX support?

Our beta implementation currently supports all 3 major desktop systems supported by OpenJFX: Windows, OS X and Linux.

We currently target JavaFX 8 because this is what our customers are running their applications on. We plan to provide support for OpenJFX 11 and future releases of OpenJFX in the weeks/months to come.

Is DriftFX open source?

Yes. We’ve been lucky that the sponsors of the work agreed to make DriftFX open source. Currently there’s an open pull request at GitHub for the e(fx)clipse project.

Note that it will take some time until the PR is merged. The reason is that we are going to run this through the Eclipse IP-Process to make sure you can safely embed it into your application.

Does DriftFX use internal “APIs”?

Yes. It integrates into the JavaFX rendering pipeline, so it needs to access internals. We are aware that those can change at any time. We are open to contributing DriftFX to OpenJFX in the future, but that heavily depends on the OpenJFX community and the stewards of OpenJFX.

Is DriftFX production ready?

Probably not 100% yet; we are just about to integrate it into our customers’ applications and fix problems as they arise. So we are somewhere in between beta and production readiness.

The reason to open-source it now is that we expect the OpenJFX community to take a look and help us improve it. We know that there are many very clever people in the OpenJFX community who can hopefully help us push DriftFX forward.

How can you/your company help?

First of all, take a look at it, and in case it looks interesting, get in touch with us to help fund the ongoing work on this matter.


First of all I’d like to thank EclipseSource Munich and Netallied for their ongoing support of DriftFX. Special thanks goes out to my co-worker (and partner in crime) Christoph and all the other people who made DriftFX happen!

by Tom Schindl at December 04, 2018 07:17 PM

Building web-based modeling tools – EclipseSource OS Week 2018

by Jonas Helming and Maximilian Koegel at December 04, 2018 03:44 PM

At EclipseSource, we continuously drive innovation in a variety of open source projects to build tools upon. Our goal is to...

The post Building web-based modeling tools – EclipseSource OS Week 2018 appeared first on EclipseSource.

by Jonas Helming and Maximilian Koegel at December 04, 2018 03:44 PM

Xtext 2.16 released!

by Karsten Thoms at December 04, 2018 03:35 PM

The Xtext team happily announces the availability of release 2.16.0. This release is part of the Eclipse 2018-12 simultaneous release, the second release on the new quarterly release cadence. Besides stability and performance, compatibility with the new Eclipse release was again at the top of the list of work items.

Accelerated Release Cycle

The new release cadence is keeping the Xtext team busy. Since we decided not only to participate in Eclipse’s quarterly simultaneous release (SimRel), but also to deliver new Xtext releases at the same rate, we have to improve our internal processes. This means constantly refining the release procedure and automating recurring tasks, both for the release process itself and for maintenance.

While for release 2.15 we entered the SimRel quite late, close to its release date, this time we managed to join the release train already with M1 and also delivered for M3. This helps our adopters within the SimRel to integrate with the upcoming release early and provide valuable integration feedback before the release.

Preparing for Java 11

Java 11 is out now, and we planned to fully support this LTS version already with 2.16. Xtext depends on many components that themselves have to be ready and available to run on Java 11: JDT, Gradle, Tycho, Maven, ASM, … It turned out that we could not wait for or integrate all of them in 2.16, so we decided to re-target full support to 2.17.

With Xtext 2.16, most preparation steps are already done and many components have been upgraded: Xtext integrates with the latest JDT, Gradle 4.10, Tycho 1.3 and ASM 7. The remaining tasks are tracked in issue #1182.

Xtext 2.16 already runs on a Java 11 VM, as long as the source level of the used libraries and the required execution environment target a lower language level.

R.I.P. Ancient Eclipse Versions!

Xtext 2.15 already declared Eclipse Oxygen.3a as the minimal required platform version. With 2.16, Xtext now defines lower-bound version constraints at the Oxygen.3a level for all bundle dependencies on the Eclipse Platform.

As a consequence, Xtext 2.16 can’t be installed on older platform versions anymore! We decided to cut off the older platform versions in order to leverage newer platform APIs. And, to be honest, the team does not have the capacity to support integration problems arising from mixing a brand-new Xtext release with Eclipse versions from the stone age.

Quickfix Testing API

The org.eclipse.xtext.ui.testing bundle has been extended with a new base class for testing quick fixes, AbstractQuickfixTest. This class provides a convenient way to test both the availability of specific quick fixes on a code snippet and their effect on the code when applied. This is done using the provided method:

public void testQuickfixesOn(CharSequence model, String issueCode, Quickfix... quickfixes)

The usage of this API is best demonstrated by the following example from Xtext’s Domainmodel example language:

Xtext quickfix testing API screenshot


As you can see, the approach is to provide a DSL snippet containing issues (like the lowercase entity name in fix_invalid_entity_name()). For this snippet the testQuickfixesOn() method is invoked with the INVALID_TYPE_NAME issue code and the expected quick fixes, each given by its name, its description and the resulting code (with the capitalized name) after the fix is applied.

Take the example’s quickfix tests as a blueprint and you should be able to easily test your own quick fixes now. Besides the Domainmodel example, we have also extended the Statemachine and Home Automation example projects.

Latest Language Server Protocol Support

Xtext 2.16 is built against LSP4J 0.6, which reflects the latest state of the Language Server Protocol. With this release, Xtext now supports hierarchical document symbols, which were added in LSP 3.10.

Xtext language servers can now use different strategies for the exit command. The default behavior for this command was to terminate the VM by calling System.exit(). While this is the desired behavior when the language server runs in a separate process from the client, it is harmful when client and server share the same VM. Therefore a new interface ILanguageServerShutdownAndExitHandler has been introduced, whose default implementation preserves the previous behavior. The ILanguageServerShutdownAndExitHandler.NullImpl can be bound for languages used in embedded scenarios, where the VM has to be kept alive.

Parallel XtextBuilder

Xtext’s incremental project builder XtextBuilder has so far used the workspace root as its scheduling rule, as most commonly used builders in the Eclipse platform do. This meant that while building Xtext projects, the builder held an exclusive lock on the whole workspace.

Starting with Eclipse 4.8 Photon, the project build supports builders with a more “relaxed” scheduling strategy. This potentially allows projects in the workspace to be built in parallel, provided they do not depend on each other and all configured builders compute a more fine-grained locking strategy.

The XtextBuilder has been extended to support a variety of scheduling strategies. This is an experimental feature for now, which needs activation through a preference that is not exposed in the UI. By setting the org.eclipse.xtext.builder/schedulingrule preference, the XtextBuilder can be configured to use one of these locking strategies:

  • WORKSPACE: Workspace root scheduling. This is the default value.
  • ALL_XTEXT_PROJECTS: Lock all projects with Xtext nature configured. Allows projects of other natures to be built in parallel.
  • PROJECT: Locks the currently built project.
  • NULL: No locking.
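Since the preference is not exposed in the UI, one way to try it out is a plugin customization file passed to Eclipse at startup. This is a sketch: the preference key is taken from the release notes above, while the file name and launch flag are the standard Eclipse preference-customization mechanism, so treat the exact setup as an assumption to verify against your installation:

```ini
# plugin_customization.ini
# Pass to Eclipse via: eclipse -pluginCustomization /path/to/plugin_customization.ini
# Let independent Xtext projects build in parallel (experimental)
org.eclipse.xtext.builder/schedulingrule=ALL_XTEXT_PROJECTS
```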

Growing The Team

We are happy that more people are helping the team with active contributions. Those who show their dedication to the project and deliver numerous valuable patches and features deserve inclusion in the development team.

As a result, the Xtext team welcomes two new committers who have earned their nomination: Tamas Miklossy and Akos Kitta. Tamas mainly contributes to the Xtend language, the testing APIs, the example projects and maintenance work. Akos mainly works on the LSP integration. A warm welcome to them! With their help the project can keep up its high development speed.

Are you the next Xtext super hero?

After The Release Is Before The Release

With the Xtext 2.16 release available, we immediately started working on 2.17. As mentioned above, full Java 11 support is at the top of our list.

Furthermore, you can expect Maven BOMs soon, which ease the configuration of compatible dependencies. This addresses trouble some users have had with incompatible dependencies in their Maven and Gradle builds.

As we continue to participate in Eclipse’s simultaneous release, the 2.17 release dates are already known:

  • 2.17.0.M1: Jan 14th 2019
  • 2.17.0.M2: Feb 4th 2019
  • 2.17.0.M3: Feb 25th 2019
  • 2.17.0 Final Release: Mar 3rd 2019

Save the date!

Final Notes

Xtext 2.16 is available now on the release update site, through the Eclipse Marketplace and on Maven Central. Besides the features highlighted in this article, more information on the release can be found in the Xtext and Xtend release notes.

Questions or feedback on Eclipse Xtext? The Xtext team is eager to hear from you! We are working hard to improve Xtext continuously and to deliver it to you even faster. Is there something you are missing? We will happily guide you through adding your own contribution, or get in contact with us for support in adding the features you need.

by Karsten Thoms at December 04, 2018 03:35 PM

Jenkins 6: Advanced Configuration Area

by Christian Pontesegger at December 03, 2018 07:44 PM

Our build is advancing. Today we want to move optional fields into an advanced section and provide reasonable defaults for these entries.

Jenkins Tutorials

For a list of all jenkins related tutorials see Jenkins Tutorials Overview.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Advanced UI Section

To simplify the job setup we now move all parameters except the build message to an advanced section.
The only thing necessary in the config.jelly file is to create an <f:advanced> section and move all affected input elements into it:

 <f:entry title="Custom build message" field="buildMessage">
	<f:textbox default="${descriptor.getDefaultBuildMessage()}" />
 </f:entry>

 <f:advanced>
	<f:entry title="Fail this build" field="failBuild">
		<f:checkbox />
	</f:entry>

	<f:entry title="Build Delay" field="buildDelay">
		<f:select />
	</f:entry>
 </f:advanced>

Afterwards the UI looks like this:

Step 2: Java Refactoring

Basically we do not need to change anything in the Java code to make this work. However, we want to prepare a little for pipeline builds, so we remove the non-required parameters from the constructor and create separate setters for them. To make Jenkins aware of these setters, use the @DataBoundSetter annotation:

public class HelloBuilder extends Builder implements SimpleBuildStep {

	private boolean fFailBuild = false;
	private String fBuildMessage;
	private String fBuildDelay = "none";

	@DataBoundConstructor
	public HelloBuilder(String buildMessage) {
		fBuildMessage = buildMessage;
	}

	@DataBoundSetter
	public void setFailBuild(boolean failBuild) {
		fFailBuild = failBuild;
	}

	@DataBoundSetter
	public void setBuildDelay(String buildDelay) {
		fBuildDelay = buildDelay;
	}
}

Whenever a parameter is not required, remove it from the constructor and use a setter for it.

by Christian Pontesegger at December 03, 2018 07:44 PM

Combining EMF Models with Xtext DSLs

by Tamas Miklossy at December 03, 2018 03:00 PM

This blog post demonstrates how to combine EMF models with Xtext DSLs. It is based on Martin Fowler's secret compartment state machine, whose implementation is available via the Xtext Example Wizard.

The meta-model of the statemachine language describes that a state machine consists of commands, events, states and transitions.

Combining EMF models with xtext DSLs – Statemachine class diagram


All these elements can be defined either in one DSL file, or can be split up into several *.statemachine files.

Xtext DSL


However, if we describe the commands and events in EMF models and the states and transitions in an Xtext DSL, validation reports problems on the fowler.statemachine file showing that the referenced commands and events cannot be resolved.

Xtext EMF not ok


We would like to be able to reference EMF models from the Xtext DSL. Before adapting the production code base, we capture our requirement in a StatemachineParsingTest test case.

Statemachine parsing test not ok


Since we now have a failing test, we are allowed to extend the implementation. In order to reference EMF models from an Xtext DSL, a new Xtext language has to be defined for the EMF resources. Thus, we define the org.eclipse.xtext.example.fowlerdsl.emfstatemachine Xtext language for *.emfstatemachine resources using the following steps:

  1. For the new language, we define the runtime dependency injection container by adding the EmfStatemachineRuntimeModule class extending the AbstractGenericResourceRuntimeModule class. The AbstractGenericResourceRuntimeModule contains the default bindings for EMF resources, while the EmfStatemachineRuntimeModule defines the language-specific configuration such as the language name and the file extension.
  2. For the Eclipse-based UI services, we define the UI dependency injection container by adding the EmfStatemachineUiModule class extending the EmfUiModule class.
  3. We implement the FowlerDslActivatorEx class inheriting from the generated FowlerDslActivator class and register it in the MANIFEST.MF file to ensure that the EmfStatemachineRuntimeModule and EmfStatemachineUiModule classes are used while creating the injector of the newly defined Xtext language.
  4. In order to register the Xtext UI language services in Xtext's registry, we implement the EmfStatemachineExecutableExtensionFactory, which extends the AbstractGuiceAwareExecutableExtensionFactory. The latter delivers the bundle and the injector of the newly created Xtext language via the FowlerDslActivatorEx class. Additionally, we register the language in the plugin.xml via the org.eclipse.xtext.extension_resourceServiceProvider extension point with the uriExtension emfstatemachine and an instance of the EmfResourceUIServiceProvider created via the EmfStatemachineExecutableExtensionFactory class.
  5. Optionally, we can register the EmfStatemachineReflectiveTreeEditorOpener in the EmfStatemachineUiModule to open the EMF Reflective Tree Editor when the user follows a reference from the Xtext editor to an EMF element. To be able to use the EMF Reflective Tree Editor, we have to add the org.eclipse.emf.ecore.editor plugin to the Require-Bundle section in the MANIFEST.MF file.

After implementing these steps, the successful execution of the StatemachineParsingTest test case confirms that EMF model references from the Xtext DSL can be resolved.

Statemachine parsing test is ok


When starting the Eclipse runtime again, the previously reported validation errors are gone as well. Note that not only the validation, but also the content assist and the quickfix provider are aware of the existing EMF model elements.

Xtext EMF ist Ok


To protect the implemented EMF-Xtext integration against regressions, it is strongly recommended to extend the test base with Indexing, Linking, Scoping, ContentAssist and Quickfix test cases.

An additional feature could be Xtext serialization support for EMF models: when the user edits an EMF element and saves the *.emfstatemachine file, we want the corresponding *.statemachine Xtext DSL file to be generated automatically.

To achieve that, we implement the EmfStatemachineSerializer class and bind it in the EmfStatemachineRuntimeModule class. Furthermore, the EmfStatemachineSerializationHandler class registered in the plugin.xml triggers the manual conversion from EMF to Xtext via the Serialize EMF State-Machine with Xtext context menu of the Package/Project Explorer.

If you would like to learn more about the Xtext-EMF integration, I recommend looking into Karsten and Holger's presentation on How to build Code Generators for Non-Xtext Models with Xtend.

by Tamas Miklossy at December 03, 2018 03:00 PM

Jenkins 5: Combo Boxes

by Christian Pontesegger at December 03, 2018 02:59 PM

Combo boxes are the next UI element we will add to our builder.

Jenkins Tutorials

For a list of all jenkins related tutorials see Jenkins Tutorials Overview.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: UI Definition

In the config.jelly file we simply define that we want to use a combo box:

 <f:entry title="Build Delay" field="buildDelay">
	<f:select />
 </f:entry>

The definition does not contain the entries to select. These will be populated by the Descriptor class.

Step 2: Item Definition

Jenkins will look for a method called doFill<field>Items in our Descriptor class to populate the combo. We start with a simple first approach to understand the scheme:

	public ListBoxModel doFillBuildDelayItems() {
		ListBoxModel model = new ListBoxModel();

		model.add(new Option("None", "none"));
		model.add(new Option("Short", "short"));
		model.add(new Option("Long", "long"));

		return model;
	}

ListBoxModel is basically an ArrayList of Option instances. The first string represents the text visible to the user, the second one the value that will actually be stored in our variable (see next step).

If we populated the combo this way, the first item would always be selected by default, even if we re-open a job that was configured differently. The Option constructor allows for a third parameter defining the selected state. We then just need to know the value that got stored with the job definition, so we inject the desired query parameter into our method parameters:
	public ListBoxModel doFillBuildDelayItems(@QueryParameter String buildDelay) {
		ListBoxModel model = new ListBoxModel();

		model.add(new Option("None", "none", "none".equals(buildDelay)));
		model.add(new Option("Short", "short", "short".equals(buildDelay)));
		model.add(new Option("Long", "long", "long".equals(buildDelay)));

		return model;
	}
Now buildDelay contains the value that got stored by the user when the build step was originally configured. By comparing its string representation we can preselect the right option in the combo. Typically, combo options are populated from an enum. To reduce the risk of typos we could write a small helper to create our Options:
 public static Option createOption(Enum<?> enumOption, String jobOption) {
	return new Option(enumOption.toString(), enumOption.name(), enumOption.name().equals(jobOption));
 }

Step 3: Glueing it all together

Finally we need to extend our constructor with the new parameter. Then we can use it in our build step:

public class HelloBuilder extends Builder implements SimpleBuildStep {

	private String fBuildDelay;

	@DataBoundConstructor
	public HelloBuilder(boolean failBuild, String buildMessage, String buildDelay) {
		// ... store the other parameters as before
		fBuildDelay = buildDelay;
	}

	@Override
	public void perform(Run<?, ?> run, FilePath workspace, Launcher launcher, TaskListener listener)
			throws InterruptedException, IOException {
		listener.getLogger().println("This is the Hello plugin!");

		switch (getBuildDelay()) {
		case "long":
			Thread.sleep(10 * 1000);
			break;

		case "short":
			Thread.sleep(3 * 1000);
			break;

		case "none":
			// fall through
		default:
			// nothing to do
		}

		if (isFailBuild())
			throw new AbortException("Build error forced by plugin settings");
	}

	public String getBuildDelay() {
		return fBuildDelay;
	}
}
by Christian Pontesegger at December 03, 2018 02:59 PM

Jenkins 4: Unit Tests

by Christian Pontesegger at December 03, 2018 02:18 PM

Now that our builder plugin is working we should start writing some unit tests for it.

Jenkins Tutorials

For a list of all jenkins related tutorials see Jenkins Tutorials Overview.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Writing a simple test case

Jenkins tests can be written as JUnit tests. The Jenkins test instance needed for executing the tests can be created using a JUnit Rule.

Create a new JUnit Test Case com.codeandme.jenkins.builder.HelloBuilderTest in the src/test/java folder:
public class HelloBuilderTest {

	@Rule
	public JenkinsRule fJenkinsInstance = new JenkinsRule();

	@Test
	public void successfulBuild() throws Exception {
		HelloBuilder builder = new HelloBuilder(false, "JUnit test run");

		FreeStyleProject job = fJenkinsInstance.createFreeStyleProject();
		job.getBuildersList().add(builder);
		FreeStyleBuild build = fJenkinsInstance.buildAndAssertSuccess(job);

		fJenkinsInstance.assertLogContains("JUnit test run", build);
	}
}

The JenkinsRule creates a Jenkins test instance for our unit test. We use this instance to create and run our test job, and it also provides a set of assertion methods which we use to check the build result and the log output of the job execution.

You can run these tests as JUnit tests right from Eclipse or you can execute them via maven by running
mvn test

Step 2: A test expecting an execution fail

We use the same approach as before; to check for a failed build we need to run the build job slightly differently:

	@Test
	public void failedBuild() throws Exception {
		HelloBuilder builder = new HelloBuilder(true, "JUnit test fail");

		FreeStyleProject job = fJenkinsInstance.createFreeStyleProject();
		job.getBuildersList().add(builder);
		QueueTaskFuture<FreeStyleBuild> buildResult = job.scheduleBuild2(0);

		fJenkinsInstance.assertBuildStatus(Result.FAILURE, buildResult);
		fJenkinsInstance.assertLogContains("JUnit test fail", buildResult.get());
	}

by Christian Pontesegger at December 03, 2018 02:18 PM

Eclipse Che vs. Eclipse Theia

by Jonas Helming and Maximilian Koegel at December 03, 2018 10:34 AM

In this article, we compare Eclipse Che with Eclipse Theia and explain their relationship – their differences and their overlap. In...

The post Eclipse Che vs. Eclipse Theia appeared first on EclipseSource.

by Jonas Helming and Maximilian Koegel at December 03, 2018 10:34 AM

Eclipse Vert.x 3.6.0 released !

by vietj at December 03, 2018 12:00 AM

We are pleased to announce the Eclipse Vert.x 3.6.0 release.

As always, the community contributions have been key in achieving this milestone. To all of you who participated: thank you, you are awesome!

Without further ado, let’s take a look at some of the most exciting new features and enhancements.

Vert.x Cassandra client

In this release we introduce the Vert.x Cassandra client, an extension for interacting with Apache Cassandra.

The client supports:

  • prepared queries
  • batching
  • query streaming
  • bulk fetching
  • low level fetching

To give you an idea of what using the API looks like, here is an example:

cassandraClient.queryStream("SELECT my_string_col FROM my_keyspace.my_table where my_key = 'my_value'", queryStream -> {
  if (queryStream.succeeded()) {
    CassandraRowStream stream = queryStream.result();

    // resume stream when queue is ready to accept buffers again
    response.drainHandler(v -> stream.resume());

    stream.handler(row -> {
      String value = row.getString("my_string_col");
      response.write(value);

      // pause row stream when the response buffer queue is full
      if (response.writeQueueFull()) {
        stream.pause();
      }
    });

    // end request when we reached end of the stream
    stream.endHandler(end -> response.end());

  } else {
    // respond with an internal server error if we are not able to execute the given query
    response
      .setStatusCode(500)
      .end("Unable to execute the query");
  }
});

Vert.x for Kotlin

Vert.x for Kotlin has been updated to the very recent Kotlin 1.3 (and coroutines 1.0).

Vert.x 3.5 introduced a powerful way to write synchronous non-blocking code with Kotlin coroutines:

val result = awaitResult<ResultSet> { client.queryWithParams("SELECT TITLE FROM MOVIE WHERE ID=?", json { array(id) }, it) };

In this release, the awaitResult idiom is provided as extension methods, so now you can directly write:

val result = client.queryWithParamsAwait("SELECT TITLE FROM MOVIE WHERE ID=?", json { array(id) })

Note the Await suffix: all Vert.x asynchronous methods now provide an awaitified extension.

Web API gateways

The new Vert.x Web API Service module allows you to create Vert.x Web API Contract gateways.

Web API Service Architecture

You can annotate your service interface with @WebApiServiceGen to handle Vert.x Web API Service requests generated from an OpenAPI 3 contract:

interface TransactionService {

  void getTransactionsList(String from, String to, OperationRequest context, Handler<AsyncResult<OperationResponse>> resultHandler);

  void putTransaction(JsonObject body, OperationRequest context, Handler<AsyncResult<OperationResponse>> resultHandler);
}

The OpenAPI3RouterFactory web router becomes an API gateway sending requests directly to your services.

These services are powered by the Vert.x event bus and benefit from features like load balancing and clustering.

Check the complete documentation for more details (a tutorial post is coming soon!)

Web Client

Our beloved WebClient is now capable of handling client sessions. The WebClientSession is a client extension that is very helpful when you need to manage cookies on the client side.

// The session is created per user
WebClientSession session = WebClientSession.create(client);
// from now on cookies are handled by the session

Cherry on the cake, the web client is now capable of performing server side response checks using response predicates:

client
  .get(8080, "", "/some-uri")
  .expect(ResponsePredicate.SC_SUCCESS)
  .send(result -> { ... });

The server-side response must meet the expectations defined before sending the request in order for the response to be successful, relieving the user code from performing these checks manually. Of course, many out-of-the-box expectations are provided and you can always create your own to implement custom checks.

Use templating everywhere

Template engines can now be used outside the realm of Vert.x Web. One great use case is to use them to generate email content:

TemplateEngine template = ...

template.render(new JsonObject(), "my-template.txt", res -> {
   // Send the result with the Vert.x Mail client
});

OpenID Connect Discovery

OAuth2 has been greatly enhanced to support more of OpenID Connect; the most noticeable addition is support for OpenID Connect Discovery 1.0.

What this means for the end user is that configuration is now a trivial task, as it is “discovered” from the server:

OpenIDConnectAuth.discover(
  vertx,
  new OAuth2ClientOptions()
    .setClientID("clientId")
    .setSite(""),
  res -> {
    if (res.succeeded()) {
      // the setup call succeeded.
      // at this moment your auth is ready to use and
      // google signature keys are loaded so tokens can be decoded and verified.
    } else {
      // the setup failed.
    }
  });
If you know your clientId and your provider server URL (of course), all the remaining endpoints, key signature algorithms and JSON Web Keys are “discovered” for you and configured on the auth provider.

Password Hashing strategy

Vert.x auth components now support user-defined password hashing strategies. If you’re not happy with the provided implementations (SHA-512 or PBKDF2), you can now provide your own strategy so it can be used with the JDBC or Mongo auth providers.

The hash verification algorithm has been improved to run in constant time regardless of the result, which protects Vert.x applications from hash timing attacks.
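The idea behind a time-constant check can be sketched with plain JDK classes (an illustration only, not the Vert.x implementation): MessageDigest.isEqual compares two digests without short-circuiting on the first mismatching byte, so the comparison time does not reveal how much of a hash an attacker has guessed correctly.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ConstantTimeCheck {

    // Constant-time comparison: examines every byte instead of
    // returning at the first mismatch.
    public static boolean hashesMatch(byte[] expected, byte[] actual) {
        return MessageDigest.isEqual(expected, actual);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] stored = MessageDigest.getInstance("SHA-512")
                .digest("correct horse".getBytes(StandardCharsets.UTF_8));
        byte[] attempt = MessageDigest.getInstance("SHA-512")
                .digest("correct horse".getBytes(StandardCharsets.UTF_8));
        System.out.println(hashesMatch(stored, attempt)); // true
    }
}
```

A naive `Arrays.equals`-style loop that returns early leaks timing information; delegating to MessageDigest.isEqual avoids writing that comparison by hand.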

Externalized configuration of Vert.x options

Whether you run your Vert.x app with the CLI or as an executable JAR with the Launcher, you can now provide Vert.x options as a command line parameter:

java -jar my-app.jar -options /path/to/my/file.json

Or with the CLI:

vertx run my-verticle.js -options /path/to/my/file.json

This is particularly useful for complex clustered event-bus setups (encryption, public host vs cluster host…).
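Such an options file is simply the JSON form of VertxOptions. As a minimal sketch (the keys below mirror common VertxOptions properties, but treat the exact names as an assumption to check against the VertxOptions documentation for your version):

```json
{
  "workerPoolSize": 40,
  "blockedThreadCheckInterval": 5000
}
```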

And more…

Many more important improvements made it into this release.


The 3.6.0 release notes can be found on the wiki, as well as the list of deprecations and breaking changes

Docker images are available on Docker Hub.

The Vert.x distribution can be downloaded from the website and is also available from SDKMAN! and Homebrew.

The event bus client using the SockJS bridge is available as well.

The release artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

That’s it! Happy coding and see you soon on our user or dev channels.

by vietj at December 03, 2018 12:00 AM

Read the newest Jakarta EE Newsletter!

November 30, 2018 02:15 PM

This month we bring you a fully loaded Jakarta EE newsletter with a mix of technical content, community news, and a tutorial!

November 30, 2018 02:15 PM

Eclipse Foundation Specification Process, Part III: Creation

November 30, 2018 02:15 PM

'Creation' is an integral part of the Eclipse Foundation Specification Process. Read Wayne Beaton's newest blog on the topic.

November 30, 2018 02:15 PM

EMF Forms 1.18.0 Feature: Label Layouting

by Jonas Helming and Maximilian Koegel at November 29, 2018 10:34 AM

EMF Forms makes it easy to create forms that are capable of editing your data based on an EMF model. To...

The post EMF Forms 1.18.0 Feature: Label Layouting appeared first on EclipseSource.

by Jonas Helming and Maximilian Koegel at November 29, 2018 10:34 AM

Hello world!

by okamoto3289 at November 29, 2018 09:56 AM

Welcome to WordPress. This is your first post. Edit or delete it, then start blogging!

by okamoto3289 at November 29, 2018 09:56 AM

Eclipse Foundation Specification Process, Part III: Creation

by waynebeaton at November 28, 2018 04:22 PM

The Eclipse Foundation Specification Process (EFSP) includes an image that provides an overview of what goes into creating a new Specification Project. By creating, we mean the process of taking a Specification from an initial idea or concept through to the point where the necessary resources and permissions are in place to do real development. It’s the same basic process that we follow when creating a regular Eclipse Project, with an extra Specification-specific approval added in.


The first couple of steps of the project creation process usually happen via private email exchanges between the people involved with the project and the Eclipse Management Organization (EMO) (this is one of the few things that we do in private at the Eclipse Foundation). With this communication, we take the idea behind a Specification Project and create a Proposal that describes and defines it. The Proposal includes important information like a statement of Scope, a description, and the list of the initial Specification Team (Project Leads, Committers, and Mentors). Other useful information is contained in the Proposal, including background information, descriptions of content that already exists, and initial plans.

In the steady state (i.e. after the Specification Project has been created), all roles are assigned via demonstrations of merit and votes by the existing Committers. Since there are no existing Committers before the project is created, the creation process itself serves as the demonstration of merit and the community vote.

The EMO works with the proposers to whip the Proposal into shape. When the EMO decides that the content is complete, it is delivered to the Executive Director of the Eclipse Foundation (EMO(ED)) for approval to post for community review.

Once we have EMO(ED) approval and the Proposal is posted for the community to review, it is said—according to the Eclipse Development Process (EDP)—to have entered the Proposal Phase. We often refer to this as the community review period. During this period, other developers may step up and volunteer to become Committers, community members may ask questions, the Scope and other aspects of the Proposal may be tweaked, and more. Crucially, during this period, Member Companies of the Eclipse Foundation have an opportunity to express concerns and even (possibly) reject the Project Proposal (this has never actually happened). A Specification Project must stay in the Proposal Phase for a minimum of two weeks to give the various stakeholders enough time to review and respond.

The EMO uses the Proposal Phase to ensure that we can reasonably assert ownership of the Specification Project’s name as a trademark on behalf of the community (or at least that the project name does not infringe on somebody else’s trademark). If a Specification Project’s name is already used by the Specification Team (or an organization that’s involved with the Proposal), then EMO will work with them to transfer ownership of the trademark and associated Internet domains to the Eclipse Foundation. In parallel, the EMO will also work with the Eclipse Architecture Council to identify a Mentor, and seek approval from the Project Management Committee (PMC) of the target Top-level Project. It’s not shown on the image, but we’d use this time to inform the corresponding Specification Committee that the new Specification Project is coming.

At the Eclipse Foundation, we organize all Projects and Specification Projects hierarchically. Top-level Projects, which sit at the top of the hierarchy, are generally not used for actual development work. It is in the Projects (also referred to as Subprojects) under the Top-level Projects where the real work happens. All Top-level Projects have a PMC who—as part of the Project Leadership Chain for all projects that fall under the Top-level Project—are responsible for ensuring that the Specification Teams work as good open source projects (as defined by the EDP).

When all of these separate threads are resolved, we move to the Creation Review. During this period, the Proposal is locked down and no further changes are allowed. The EMO uses this time to ensure that the process was followed. The Creation Review lasts for a minimum of one week. We schedule all reviews—Creation Reviews included—based on the date that they conclude; reviews are scheduled to conclude on the first and third Wednesday of every month (we may schedule additional reviews in exceptional cases).

To successfully complete a Creation Review for a Specification Project, the EFSP requires Super-majority approval of the Specification Committee (a Super-majority is defined as two-thirds of the Specification Committee members). The EFSP is silent regarding the specifics of how such a vote is executed, but it is generally accepted that votes run for a minimum of a week (a Specification Committee may decide that more or less time is required). The factors that the Specification Committee must consider when casting their votes are also not specified. In general, though, the Specification Committee will be looking at whether or not a new Specification Project Proposal is viable and a good fit with the corresponding Working Group.

Upon successful conclusion of the Creation Review, the Proposal is sent to the Eclipse Webmaster Team for provisioning. The provisioning process starts with Committer Paperwork: once we have complete paperwork for at least one Committer, the Webmaster will create resources—websites, Git repositories, issue trackers, and so on—for the Specification Team.

With provisioned Committers and resources, the Specification Project is said to be in the Incubation Phase (or “in incubation”); it will stay in this phase until the Specification Team engages in a Graduation Review (which is generally combined with the project’s first or second Release and corresponding Release Review). The Specification Team’s first action with their new Specification Project is to submit their Initial Contribution (when a Specification comes to the Eclipse Foundation with existing intellectual property) to the Eclipse IP Team for review. Once the Initial Contribution is approved, that content can be added to the project’s Git repository and the Specification Team can start to weave their magic.

For more background, please see Part I and Part II of this series.

by waynebeaton at November 28, 2018 04:22 PM