IoT developer survey: my 2 cents one year later …

by ppatierno at April 28, 2017 06:59 AM

As I did last year, I have decided to write a blog post with my point of view on the IoT developer survey run by the Eclipse Foundation (IoT Working Group) together with IEEE, Agile IoT and the IoT Council.

From my point of view, the final report always gives interesting insights into where the IoT business is going, and Ian Skerrett (Vice President of Marketing at the Eclipse Foundation) has already analyzed the results, available here, in a great blog post.

I just want to add 2 more cents on that …

Industry adoption …

It’s clear that industries are adopting IoT, with big growth in industrial automation, smart cities, energy management, building automation, transportation, healthcare and so on. IoT is becoming “real” even if, as we will see in the next paragraphs, it seems that we are still at a prototyping stage. A lot of companies are investing in it, but few of them have real solutions running in the field. Finally, from my point of view, it would be great to add more information about countries, because I think there are big differences in how and where each country is investing in IoT.

The concerns …

Security is always the big concern but, as Ian said, interoperability and connectivity are on a downward trend; I agree with him that the available middleware solutions and IoT connectivity platforms are solving these problems. The great news is that all of them support different open and standard protocols (MQTT, AMQP, but also HTTP), which is the way to go for interoperability; at the same time we are able to connect a lot of different devices speaking different protocols, so the connectivity problem is addressed as well.

Coming back to security, the survey shows that many more software developers are involved in building IoT solutions, which shows in the fact that the security measures they mostly use, SSL/TLS and data encryption, live at the software level. From my point of view, some security concerns should be addressed at the hardware level (using crypto chips, TPMs and so on), but this is an area where software developers lack knowledge. That is no surprise: we know that IoT needs a lot of different knowledge from different people, but the survey shows that in some cases the “right” people are not the ones developing IoT solutions. Too many web and mobile developers are working on them, and too few embedded developers with real hardware knowledge.

Languages: finally a distinction!

Last year, in my 2 cents, I asked for a distinction regarding which side of an IoT solution the most used programming languages are applied to. I’m happy to see that the Eclipse Foundation took this suggestion, so this year’s survey asked about the languages used on constrained devices, on gateways and in the cloud.

iot_survey

The results don’t surprise me: C is the most used language on “real” constrained devices, and all the other languages, from Java to Python, are mostly used on gateways; JavaScript fits in the cloud, mainly with Node.js. In any case, Node.js is not a language, so my feeling is that offering only JavaScript as a possible answer would have been enough, especially because, other than using a server-side framework like Node.js, the other option is using JavaScript on “function as a service” platforms (e.g. AWS Lambda, Azure Functions and so on), which are mostly based on Node.js. Of course, the most used language in the cloud is Java.

What about the OS?

Linux is the most used OS for both constrained devices and IoT gateways but … here a strange thing comes to my mind. On “real” constrained devices based on MCUs (e.g. Cortex-M) you can run only a few specific Linux distros (e.g. uClinux), not a full Linux distro, so it’s strange that Linux wins on constrained devices; yet when the survey shows which distros are used, uClinux has a very low percentage. My guess is that a lot of software developers don’t know what a constrained device is 🙂

On constrained devices I would expect developers to use “no OS” (programming on bare metal) or a really tiny RTOS, not something close to Linux.

On gateways I totally agree with Linux, although Windows has grown since last year.

Regarding the most used distros, Raspbian’s victory shows that we are still at a prototyping stage. I can’t believe that developers are using Raspbian, and thus the related Raspberry Pi hardware, in production! If it’s true … I’m scared! If you know which planes, trains or building automation systems are using something like that, please tell me … I have to avoid them 🙂

Regarding the protocols …

From my point of view, the presence of TCP/IP in the connectivity protocol results is misleading. TCP/IP is a protocol suite used on top of Ethernet and Wi-Fi, which appear in the same results, so we can’t compare them.

Regarding communication protocols, existing know-how is still leading; this is the reason why HTTP 1.1 is still on top and HTTP 2.0 is growing. MQTT is there, followed by CoAP, which surprises me considering the need for an HTTP proxy to expose local traffic outside a local device network. AMQP is finding its own way, and I think that in the medium/long term it will become a big player there.

Cloud services

In this area we should have a distinction, because the question is pretty general, but we know that you can use Amazon AWS or Microsoft Azure for IoT in two ways:

  • as IaaS, hosting your own solution or an open source one for IoT (i.e. just using the provided virtual machines to run an IoT software stack)
  • as PaaS, using the managed IoT platforms (e.g. AWS IoT, Azure IoT Hub, …)

Having Amazon AWS on top doesn’t surprise me, but it would be useful to have more details on how it is used by IoT developers.

Conclusion

The IoT business is growing, and its adoption as well, but looking at these survey results, most companies are still at a prototyping stage and few of them have a real IoT solution in the field.

It means that there is a lot of room for everyone to be invited to the party! 😀

 



by ppatierno at April 28, 2017 06:59 AM

Orion Moved to GitHub

by Mike Rennie at April 27, 2017 04:24 PM

You may have heard a couple of weeks back that Orion moved to GitHub. If not, then let this be the notice that Orion moved to GitHub!

We are very excited about the move. The webmasters (Derek Toolan in particular) did an amazing job in making the transition seamless and simple.

Why?

You are probably wondering to yourself: “why did you move, you had a great home at eclipse.org?”. The answer boils down to simpler contributing. With GitHub, we felt it would be far easier for committers, the community and everyone in between to contribute to our project. No more Gerrit, confusing Gerrit configurations, or multiple remotes – just fork the project, write awesome code and open a pull request. Simple.

OK, so where’s the code?

All of the Orion source code can now be found in the following locations:

  • orion.client – the Orion client code
  • orion.server – the Orion Java server code
  • orion.server.node – this will eventually be the home of the Node.js-based Orion server. It’s currently empty while we sort out what code we want to separate out
  • orion.electron – this will eventually be the place we host our Electron-based app from. Currently it is empty while we sort out the builds, etc. for the app

What else do I need to know?

There are a few pretty important things that need to be addressed – especially if you are currently a contributor / committer to Orion.

  1. Make sure your Eclipse account is linked to your GitHub ID. This is super-mega-ultra important, especially if you are a committer. The webmasters have provided a great wiki page that talks more about this.
  2. Make sure to update your repos / remotes. The old Git repositories have been set to read-only (as have the gerrits) – so make sure you update (or just re-clone) the repositories to avoid accidentally working against the old stuff.
  3. We are still using Bugzilla (so no changes in how to file / search / triage bugs). For the time being we will keep using it until we figure out a good flow for tracking issues across multiple repositories in GitHub.
  4. All contributions should be made as pull requests. The Gerrit instances for each old repository are set to read-only so they cannot be used, and if you really want to, you could still attach a patch to the bug you want to fix (but seriously, please use a pull request).

Thanks again to everyone that helped make this possible.

Happy coding!


by Mike Rennie at April 27, 2017 04:24 PM

Unveiling the Eclipse IoT Open Testbeds

by Benjamin Cabé at April 27, 2017 02:44 PM

Today we are announcing the Eclipse IoT Open Testbeds, a new initiative for driving adoption of open source and open standards in the industry.

For more than five years, over 30 open source projects have been calling Eclipse IoT home. Yet, that doesn’t necessarily make it easy for people to understand how to put all the pieces together, from integration with sensors and hardware, to networking and connectivity, to cloud computing and enterprise integration.

More often than not, I am asked about where to find blueprints or reference architectures for IoT, and how one is expected to leverage open source software such as what Eclipse IoT has to offer. These are very legitimate questions as building any IoT solution requires much more than just open source software components.

I believe the Eclipse IoT Open Testbeds are a unique approach to answering these kinds of questions, especially since this is the first time leading IoT companies are effectively developing such testbeds in open source.

Open Source FTW!

Creating testbeds that demonstrate how a particular set of technologies can be used is certainly not a new idea, I’ll give you that. What is unique about the approach we are taking, though, is that we are making the testbeds available in open source.

This means that you can really learn firsthand how all the pieces of an IoT solution are being put together to solve a real business case, as well as experiment with the actual code and dive into the architecture.

Over time I certainly expect people will start forking the testbeds’ code to create their own extensions and, even better, will contribute them back to the community.


Open Testbed for Asset Tracking

The first testbed we have been working on is around Asset Tracking Management.

In a nutshell, we are showing how to track valuable assets (think expensive/valuable parcels such as artwork) in real-time in order to optimize their transport, and in particular minimize the costs due to spoilage, damage or delays.

The testbed features Eclipse open source projects such as Eclipse Kura, Eclipse Kapua, Eclipse Paho and Eclipse Che, but of course it also leverages other technologies and commercial offerings – like any solution should, right?

Head over to the Asset Tracking testbed webpage to learn how OpenShift, Zulu Embedded, Samsung ARTIK and more have been integrated to demonstrate a full end-to-end IoT solution, all the way from data collection to complex event processing to exposing information to third parties through open APIs.

What’s next?

The Asset Tracking Open Testbed is our first take at demonstrating how companies are building real IoT Solutions today.

We are already planning to create other testbeds around, for example, Smart Manufacturing, and we therefore invite anyone interested in existing or future testbeds to join us at https://iot.eclipse.org/testbeds.

Join us at Red Hat Summit and IoT World 2017!

If you are attending Red Hat Summit (May 2-4, Boston) or IoT World 2017 (May 16-18, Santa Clara), please make sure to stop by our Asset Tracking Testbed demo, see it run live, and better understand the contribution each partner has been making to the testbed.


by Benjamin Cabé at April 27, 2017 02:44 PM

New Eclipse IoT Open Testbeds

April 27, 2017 01:05 PM

Announcing the creation of the Eclipse IoT Open Testbeds, an initiative to drive adoption of IoT open source and open standards.

April 27, 2017 01:05 PM

New Release of Eclipse Kura 3.0 Drives Simplification of IoT Edge Computing

April 27, 2017 12:30 PM

Eclipse Kura 3.0 will be available for download in early May.

April 27, 2017 12:30 PM

Program Ready for EclipseCon France 2017

April 27, 2017 09:10 AM

See the program, and register by May 12 for the best price.

April 27, 2017 09:10 AM

Building a real-time web app with Angular/Ngrx and Vert.x

by benorama at April 26, 2017 12:00 AM

Nowadays, there are multiple tech stacks to build a real-time web app. What are the best choices to build real-time Angular client apps, connected to a JVM-based backend? This article describes an Angular + Vert.x real-time architecture with a Proof of Concept demo app.

This is a re-publication of the following Medium post.

Intro

Welcome to the real-time web! It’s time to move on from traditional synchronous HTTP request/response architectures to reactive apps with connected clients (ouch… that’s a lot of buzzwords in just one sentence)!

Real-time app

Image source: https://www.voxxed.com

To build this kind of app, MeteorJS is the new cool kid on the block (v1.0 released in October 2014): a full-stack JavaScript platform to build connected-client reactive applications. It allows JS developers to build and deploy amazing modern web and mobile apps (iOS/Android) in no time, using unified backend+frontend code within a single app repo. That’s a pretty ambitious approach, but it requires a very opinionated and highly coupled JS tech stack, and it’s still a pretty niche framework.

Moreover, we are a Java shop on the backend. At AgoraPulse, we rely heavily on:

  • Angular and Ionic for the JS frontend (with a shared business/data architecture based on Ngrx),
  • Groovy and Grails ecosystem for the JVM backend.

So my question is:

What are the best choices to build real-time Angular client apps, connected to a JVM-based backend these days?

Our requirements are pretty basic. We don’t need Meteor’s full end-to-end application model. We just want to be able to:

  1. build a reactive app with an event bus on the JVM, and
  2. extend the event bus down to the browser to be able to publish/subscribe to real-time events from an Angular app.

Server side (JVM)

Reactive apps are a hot topic nowadays, and there are many great libs/platforms for building this type of event-driven architecture on the JVM.

Client side

ReactJS and Angular are the two most popular frameworks right now for building modern JS apps. Most platforms use SockJS to handle real-time connections:

  • Vertx-web provides a SockJS server implementation with an event bus bridge and a vertx-eventbus.js client library (very easy to use; see the sketch just after this list),
  • Spring provides WebSocket SockJS support through Spring Messaging and WebSocket libs (see an example here)
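
To make the bridge idea concrete, here is a minimal sketch of a bridge verticle using Vert.x’s Java API. It is an illustration only: the PoC described below uses a Groovy verticle instead, and the addresses and port are assumptions matching the demo.

import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.sockjs.BridgeOptions;
import io.vertx.ext.web.handler.sockjs.PermittedOptions;
import io.vertx.ext.web.handler.sockjs.SockJSHandler;

public class BridgeVerticle extends AbstractVerticle {

    @Override
    public void start() {
        Router router = Router.router(vertx);
        // Explicitly allow the demo addresses to cross the bridge, in both directions
        BridgeOptions options = new BridgeOptions()
            .addInboundPermitted(new PermittedOptions().setAddress("counter::total"))
            .addInboundPermitted(new PermittedOptions().setAddress("counter::actions"))
            .addOutboundPermitted(new PermittedOptions().setAddress("counter::actions"));
        // Expose the event bus to SockJS clients (vertx-eventbus.js) under /eventbus
        router.route("/eventbus/*").handler(SockJSHandler.create(vertx).bridge(options));
        vertx.createHttpServer().requestHandler(router::accept).listen(8080);
    }
}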

Final choice: Vert.x + Angular

In the end, I’ve chosen to experiment with Vert.x for its excellent Groovy support, distributed event bus, scalability and ease of use.

I enjoyed it very much. Let me show you the result of my experimentation, which is the root of our real-time features coming very soon in AgoraPulse v6.0!

Why Vert.x?

Like other reactive platforms, Vert.x is event-driven and non-blocking. It scales very well (even more than Node.js).

Unlike other reactive platforms, Vert.x is polyglot: you can use Vert.x with multiple languages including Java, JavaScript, Groovy, Ruby, Ceylon, Scala and Kotlin.

Unlike Node.js, Vert.x is a general-purpose toolkit and unopinionated. It’s a versatile platform suitable for many things: from simple network utilities to sophisticated modern web applications, HTTP/REST microservices or a full-blown back-end message-bus application.

Like other reactive platforms, it looks scary in the beginning when you read the documentation… ;) But once you start playing with it, it remains fun and simple to use, especially with Groovy! Vert.x really allows you to build substantial systems without getting tangled in complexity.

In my case, I was mainly interested in the distributed event bus it provides (a core feature of Vert.x).

To validate our approach, we built prototypes with the following goals:

  • share and synchronize a common (Ngrx-based) state between multiple connected clients, and
  • distribute real-time (Ngrx-based) actions across multiple connected clients, which impact local states/reducers.

Note: @ngrx/store is an RxJS-powered state management library inspired by Redux for Angular apps. It’s currently the most popular way to structure complex business logic in Angular apps.

Redux

Source: https://www.smashingmagazine.com/2016/06/an-introduction-to-redux/

PROOF OF CONCEPT

Here is the repo of our initial proof of concept:

http://github.com/benorama/ngrx-realtime-app

The repo is divided into two separate projects:

  • Vert.x server app, based on Vert.x (version 3.3), managed by Gradle, with a main verticle developed in Groovy.
  • Angular client app, based on Angular (version 4.0.1), managed by Angular CLI, with state, reducers and actions logic based on @ngrx/store (version 2.2.1).

For the demo, we are using the counter example code (actions and reducers) from @ngrx/store.

The counter client business logic is based on:

  • CounterState interface, counter state model,
  • counterReducer reducer, counter state management based on dispatched actions, and
  • Increment, decrement and reset counter actions.

State is maintained server-side with a simple singleton CounterService.

class CounterService {
    static INCREMENT = '[Counter] Increment'
    static DECREMENT = '[Counter] Decrement'
    static RESET = '[Counter] Reset'
    int total = 0
    void handleEvent(event) {
        switch(event.type) {
            case INCREMENT:
                total++
                break
            case DECREMENT:
                total--
                break
            case RESET:
                total = 0
                break
        }
    }
}

Client state initialization through Request/Response

The initial state is fetched with a simple request/response (or send/reply) on the event bus. Once the client is connected, it sends a request to the event bus at the address counter::total. The server replies directly with the value of the CounterService total, and the client locally dispatches a reset action with the total value from the reply.
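
For illustration, the server side of this exchange can be a single consumer registered in the verticle. A minimal sketch using Vert.x’s Java API (the demo’s verticle is actually written in Groovy; counterService stands for the singleton shown above):

// Reply to the client's initial request with the current counter total
vertx.eventBus().consumer("counter::total", message -> {
    message.reply(counterService.getTotal());
});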

Vertx Request Response

Source: https://www.slideshare.net/RedHatDevelopers/vertx-microservices-were-never-so-easy-clement-escoffier

Here is an extract of the corresponding code (from AppEventBusService):

initializeCounter() {
    this.eventBusService.send('counter::total', body, (error, message) => {
        // Handle reply
        if (message && message.body) {
            let localAction = new CounterActions.ResetAction();
            localAction.payload = message.body; // Total value
            this.store.dispatch(localAction);
        }
    });
}

Actions distribution through Publish/Subscribe

Action distribution/sync uses the publish/subscribe pattern.

Counter actions are published from the client to the event bus at the address counter::actions.

Any client that has subscribed to the counter::actions address will receive the actions and re-dispatch them locally to update its app state/reducers.
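
The server can subscribe in the same way to keep its own state in sync. A hedged sketch in Vert.x’s Java flavor (io.vertx.core.json.JsonObject), assuming published actions arrive as JsonObjects carrying the type values defined in CounterService above:

// Apply every published counter action to the server-side state
vertx.eventBus().<JsonObject>consumer("counter::actions", message -> {
    String type = message.body().getString("type");
    switch (type) {
        case "[Counter] Increment": counterService.setTotal(counterService.getTotal() + 1); break;
        case "[Counter] Decrement": counterService.setTotal(counterService.getTotal() - 1); break;
        case "[Counter] Reset":     counterService.setTotal(0); break;
    }
});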

Vertx Publish Subscribe

Source: https://www.slideshare.net/RedHatDevelopers/vertx-microservices-were-never-so-easy-clement-escoffier

Here is an extract of the corresponding code (from AppEventBusService):

publishAction(action: RemoteAction) {
    if (action.publishedByUser) {
        console.error("This action has already been published");
        return;
    }
    action.publishedByUser = this.currentUser;
    this.eventBusService.publish(action.eventBusAddress, action);
}
subscribeToActions(eventBusAddress: string) {
    this.eventBusService.registerHandler(eventBusAddress, (error, message) => {
        // Handle message from subscription
        if (message.body.publishedByUser === this.currentUser) {
            // Ignore action sent by current manager
            return;
        }
        let localAction = message.body;
        this.store.dispatch(localAction);
    });
}

The event bus publishing logic is achieved through a simple Ngrx effect. Any action that extends the RemoteAction class will be published to the event bus.

@Injectable()
export class AppEventBusEffects {

    constructor(private actions$: Actions, private appEventBusService: AppEventBusService) {}
    // Listen to all actions and publish remote actions to account event bus
    @Effect({dispatch: false}) remoteAction$ = this.actions$
        .filter(action => action instanceof RemoteAction && action.publishedByUser == undefined)
        .do((action: RemoteAction) => {
            this.appEventBusService.publishAction(action);
        });

    @Effect({dispatch: false}) login$ = this.actions$
        .ofType(UserActionTypes.LOGIN)
        .do(() => {
            this.appEventBusService.connect();
        });
}

You can see all of this in action by launching the server and the client app locally and opening the app in two separate browser windows.

Demo app screen

Bonus: the demo app also includes user status (offline/online), based on the event bus connection status.

The counter state is shared and synchronized between connected clients and each local action is distributed in real-time to other clients.

Mission accomplished!

TypeScript version of the Vert.x EventBus Client
The app uses our own TypeScript version of the official JS Vert.x EventBus client. It can be found here; any feedback or improvement suggestions are welcome!


by benorama at April 26, 2017 12:00 AM

JBoss Tools 4.4.4.AM3 for Eclipse Neon.3

by jeffmaury at April 25, 2017 02:09 PM

Happy to announce the 4.4.4.AM3 (Developer Milestone 3) build for Eclipse Neon.3.

Downloads available at JBoss Tools 4.4.4 AM3.

What is New?

Full info is at this page. Some highlights are below.

OpenShift 3

Pipeline builds support

Pipeline-based builds are now supported by the OpenShift tooling. When creating an application from a template, if one of the builds is pipeline-based, you can view the details of the pipeline:

pipeline wizard

When your application is deployed, you can see the details of the build configuration for the pipeline-based builds:

pipeline details

More to come as we improve the pipeline support in the OpenShift tooling.

Enjoy!

Jeff Maury


by jeffmaury at April 25, 2017 02:09 PM

Results, results, results — IoT Developer Survey 2017

by Roxanne on IoT at April 25, 2017 09:50 AM

In February and March 2017, we conducted the third annual IoT Developer Survey, and 713 of you took the time to complete it! Thank you for contributing to this initiative. It may only be a small sample, but it gives the IoT community, companies, and individuals a glimpse into what is going on in the vast and continually changing world we call the Internet of Things.

If you thought of Carmen Sandiego when reading this question — you are awesome!

Here are some quick survey highlights for you to devour:

  • Download the images here
  • Read our full analysis
  • View the slides

Form your own opinion and tell us how you interpret the survey results! Tweet @roxannejoncas or @EclipseIoT, or leave a comment below!


by Roxanne on IoT at April 25, 2017 09:50 AM

Papyrus Architecture Framework

by tevirselrahc at April 24, 2017 11:00 AM

For the upcoming Oxygen release, I am getting a new, improved architecture framework that is aligned with ISO 42010.

Now, I’m not (yet) an expert in this, but my minions are! And they have created a nice YouTube video explaining what it does and what it provides to Toolsmiths.

If you are a toolsmith for Me, hope to become one, or are just curious, you must go see it (and the other Me videos on YouTube)!



by tevirselrahc at April 24, 2017 11:00 AM

Host your own eclipse signing server

by Christian Pontesegger (noreply@blogger.com) at April 24, 2017 10:03 AM

We already handled signing plugins with Tycho some time ago. When working in a larger company, you might want to keep your certificates and passphrases hidden from your developers. For such a scenario, a signing server can come in handy.

The Eclipse CBI project provides such a server, which just needs to be configured in the right way. Mikael Barbero posted a short howto on the mailing list, which should contain all you need. For a working setup example, follow this tutorial.

To have a test vehicle for signing we will reuse the tycho 4 tutorial source files.

Step 1: Get the service

Download the latest service snapshot file and store it in a directory called signingService. Next download the test server; we will use it to create a temporary certificate and keystore.

Finally we need a template configuration file. Download it and store it as signingService/jar-signing-service.properties.

Step 2: A short test drive

Open a console and change into the signingService folder. There execute:
java -cp jar-signing-service-1.0.0-20170331.204711-10.jar:jar-signing-service-1.0.0-20170331.204711-10-tests.jar org.eclipse.cbi.webservice.signing.jar.TestServer
You should get some output giving you the local address of the signing service as well as the certificate store used:
Starting test signing server at http://localhost:3138/jarsigner
Dummy certificates, temporary files and logs are stored in folder: /tmp/TestServer-2590700922068591564
Jarsigner executable is: /opt/oracle-jdk-bin-1.8.0.121/bin/jarsigner
We are not yet ready to sign code, but at least we can test whether the server is running correctly. If you try to connect with a browser, you should get a message that the HTTP method GET is not supported by this URL.

Step 3: Preparing the tycho project

We need some changes to our Tycho project so it can make use of the signing server. Get the sources of the tycho 4 tutorial (checking them out from git is fully sufficient) and add the following code to com.codeandme.tycho.releng/pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <pluginRepositories>
        <pluginRepository>
            <id>cbi</id>
            <url>https://repo.eclipse.org/content/repositories/cbi-releases/</url>
        </pluginRepository>
    </pluginRepositories>

    <build>
        <plugins>
            <!-- enable jar signing -->
            <plugin>
                <groupId>org.eclipse.cbi.maven.plugins</groupId>
                <artifactId>eclipse-jarsigner-plugin</artifactId>
                <version>${eclipse.jarsigner.version}</version>
                <executions>
                    <execution>
                        <id>sign</id>
                        <goals>
                            <goal>sign</goal>
                        </goals>
                        <phase>verify</phase>
                    </execution>
                </executions>
                <configuration>
                    <signerUrl>http://localhost:3138/jarsigner</signerUrl>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
The code above shows pure additions to the pom.xml; no sections were removed or replaced.

You may already try to build your project with Maven. As I had problems connecting to https://timestamp.geotrust.com/tsa, my local build failed, even though Maven reported SUCCESS.

Step 4: Configuring a production instance

So let’s get to the production setup. Setting up a keystore with your own certificates will not be handled by this tutorial, so I will reuse the keystore created by the test instance. Copy the keystore.jks file from the temp folder to the signingService folder. Then create a text file keystore.pass:
echo keystorePassword >keystore.pass

Now we need to adapt the jar-signing-service.properties file to our needs:
### Example configuration file

server.service.pathspec=/jarsigner
server.service.pathspec.versioned=false

jarsigner.bin=/opt/oracle-jdk-bin-1.8.0.121/bin/jarsigner

jarsigner.keystore=/somewhere/signingService/keystore.jks
jarsigner.keystore.password=/somewhere/signingService/keystore.pass
jarsigner.keystore.alias=acme.org

jarsigner.tsa=http://timestamp.entrust.net/TSS/JavaHttpTS

  • By setting the versioned flag to false in line 4, we simplify the service web address (details can be found in the sample properties file).
  • Set the jarsigner executable path in line 6 according to your local environment.
  • Lines 8-10 contain details about the keystore and certificate to use. You will need to adapt them, but the above settings should result in a working build.
  • The change in line 12 was necessary at the time of writing this tutorial because of connection problems to https://timestamp.geotrust.com/tsa.
Run your service using
java -jar jar-signing-service-1.0.0-20170331.204711-10.jar
Remember that your production instance now runs on port 8080, so adapt your pom.xml accordingly (i.e. change the signerUrl to http://localhost:8080/jarsigner).

by Christian Pontesegger (noreply@blogger.com) at April 24, 2017 10:03 AM

Eclipse IoT @ Red Hat Summit

by Roxanne on IoT at April 21, 2017 03:31 PM

In less than two weeks, we will be at the Red Hat Summit in Boston, MA.

We’re really excited! We will be involved in many aspects of the conference including the Red Hat IoT Partner Showcase, where we will be demoing something very cool! Stay tuned for the details.

Benjamin Cabé will also be speaking on May 2 @ 10 am during the Lightning Talks.

Plan to join this year’s IoT CodeStarter, starting on May 2 @ 6 pm, to experience and use open source projects such as Eclipse Kura, Eclipse Kura Wires and Eclipse Kapua.

Stop by our demo station at the Red Hat IoT Partner Showcase to say hello!


by Roxanne on IoT at April 21, 2017 03:31 PM

JSON Forms – Day 6 – Custom Renderers

by Maximilian Koegel and Jonas Helming at April 21, 2017 09:28 AM

JSON Forms is a framework to efficiently build form-based web UIs. These UIs allow end users to enter, modify, and view data and are usually embedded within a business application. JSON Forms eliminates the need to write HTML templates and JavaScript for data binding by hand. It supports the creation of customizable forms by leveraging the capabilities of JSON and JSON Schema and by providing a simple and declarative way of describing forms. Forms are then rendered with a UI framework, currently one that is based on AngularJS. If you would like to know more about JSON Forms, the JSON Forms homepage is a good starting point.

In this blog series, we wish to introduce the framework based on a real-world example application, a task tracker called “Make It Happen”. On day 1 we described the overall requirements; from day 2 to day 5, we created a fully working form for the entity “Task”. If you would like to follow this blog series, please follow us on Twitter, where we will announce every new blog post regarding JSON Forms.

So far, on the previous days, we have created a fully functional form rendered in AngularJS by simply using two schemata: a JSON Schema to define the underlying data and a UI schema to specify the UI. JSON Forms provides a rendering component which translates these two schemas into an AngularJS form, including data binding, validation, rule-based visibility, and so on. While this is very efficient, you may wonder what to do if the form rendered by JSON Forms does not exactly look like what you expected. Let us have a look at the form we have so far:

While the generic structure and the layout look pretty good, there are two controls that could use some aesthetic improvement.

First, the checkbox for “done” is very small; we would rather have something like this:

Second, the control for “rating” is just a plain number field; a rating would be better expressed by a control like this:

Both improvements can be addressed by customizing existing renderers or adding new custom renderers to JSON Forms. This use case is not special at all; it is anticipated and fully supported.

In JSON Forms, a renderer is only responsible for displaying one particular UI element, like a control or a horizontal layout. JSON Forms ships with default renderers for all UI schema elements. The default renderers are meant as a starting point, and therefore it is very likely that you will add new renderers and extend existing ones.

The good news is that you still do not have to implement the complete form manually. Rather, you just need to add some code for the customized part. That means you can iteratively extend the framework with custom renderers while the complete form remains fully functional. Let us have a quick look at the architecture of the JSON Forms rendering component. In fact, there is not only one renderer; there is at least one renderer per concept of the UI schema. Renderers are responsible for translating the information of the UI schema and the data schema into a running HTML UI.

All those renderers are registered at a renderer factory (see the following diagram). For every renderer, there is a “tester” which decides whether a certain renderer should be responsible for rendering a certain UI element. This can depend on the type of the UI schema element (e.g. all controls), on the type of the referenced data property (e.g. a renderer for all String properties), or even on the name of the data property (e.g. only the attribute “rating”).

This architecture allows you to register renderers in an extremely flexible way. If there are no custom renderers, the default renderers will be used. Please note that JSON Forms supports renderers written in JS5, JS6, and TypeScript. In the following, we will use TypeScript.

So let’s customize the styling of the default renderer for the “done” attribute via CSS and add a custom renderer for the rating attribute. We start with a customization of the CSS of the sample application. By adding the following style, you can change the size of the checkbox of the done attribute alone:

#properties_done {
  width:35px;
  height:35px;
}

Second, we need a tester for the custom rating renderer, which should only be applied to the rating property:

.run(['RendererService', 'JSONFormsTesters', function(RendererService, Testers) {
    RendererService.register('rating-control', Testers.and(
        Testers.schemaPropertyName('rating')
    ), 10);
}])

As you can see, the tester references a renderer, so the next step is to implement it:

.directive('ratingControl', function() {
    return {
        restrict: 'E',
        controller: ['BaseController', '$scope', function(BaseController, $scope) {
            var vm = this;
            BaseController.call(vm, $scope);
            vm.max = function() {
                if (vm.resolvedSchema.maximum !== undefined) {
                    return vm.resolvedSchema.maximum;
                } else {
                    return 5;
                }
            };
        }],
        controllerAs: 'vm',
        templateUrl: './renderer/rating.control.html'
    };
})
<jsonforms-control>
  <uib-rating
    id="{{vm.id}}"
    readonly="vm.uiSchema.readOnly"
    ng-model="vm.resolvedData[vm.fragment]"
    max="vm.max()"></uib-rating>
</jsonforms-control>

 

Please see this tutorial for more details about implementing your own custom renderer in either JS5, JS6, or TypeScript. After adding our CSS customization and the custom renderer to our project, we can see the result embedded in our form:

Please note that the data schema and the UI schema do not have to be adapted at all. JSON Forms facilitates a strict separation between the definition of a form and its rendering. That enables you not only to adapt the look and feel of your UI, but also to render the same UI schema in different ways.

If you are interested in implementing your own renderer, or if you miss any feature in JSON Forms, please contact us. If you are interested in trying out JSON Forms, please refer to the Getting Started tutorial. This tutorial explains how to set up JSON Forms in your project and how you can try out the first steps on your own. If you would like to follow this blog series, please follow us on Twitter. We will announce every new blog post on JSON Forms there.

 

List of all available days to date:




by Maximilian Koegel and Jonas Helming at April 21, 2017 09:28 AM

Technical Debt: How Do You Unfork a Fork?

by Tracy M at April 20, 2017 01:13 PM

filled_cirle_point_style_graph

Everyone knows how to fork that interesting open source project; it’s simple and handy to do. What’s not so easy is merging back a fork that has over the years taken on a life of its own and, for many reasons, has diverged drastically from the original repo.

This is a case study of an ongoing project we are doing with SWT XYGraph, a visualisation project that is now part of Eclipse Nebula. It is the story of a fork of SWT XYGraph maintained by Diamond Light Source, the UK’s national synchrotron. But mostly it is a story about the efforts to merge the fork, reduce technical debt, and work towards the goal of sharing software components for Science, a key goal of the Eclipse Science Working Group.

Know Your History

One of the first things in this project was to understand the history – spanning 8 years – of the fork. We knew the Diamond fork was made before SWT XYGraph became part of Nebula and came under the Eclipse Foundation umbrella. The fork was created in order to quickly add a number of new features that required some fundamental architectural changes to the code base.

However, on looking through the history, we found there were more than just two forks involved. The original project had been developed as part of Control System Studio (CSS) from Oak Ridge National Laboratory. CSS had in turn been forked by Diamond and customised for the local facility. Even though SWT XYGraph had been contributed to the Eclipse Nebula project, the original repo and many, many forks were still out there: more than enough forks for a dinner party. I can’t explain it any further in words, so I will dump our illegible working diagram of it all here:

forks

Patches were pulled across and merged between forks when it was straightforward to do so. But with so many forks, this was a case where git history really mattered. Anywhere the history was preserved, it was straightforward to track the origins of a specific feature – much harder in the cases where the history was lost. Git history is important, and always worth some effort to preserve.

Choose Your Approach Carefully

Deciding if it is worthwhile to merge a big fork takes some consideration. The biggest question to ask is: are the architectural changes fundamentally resolvable? (Not like Chromium’s fork of WebKit – Blink.) If that is a yes, then it’s a case of trading off the long-term benefits against the short-term pain. In this case, Diamond knew it was something they wanted to do; it was more a matter of timing and picking the correct approach.

Together, there seemed to be two main ways to tackle removing the fork, which was part of a mature product in constant use at the scientific facility.

Option 1: Create a branch and work in parallel to get the branch working with upstream version, then merge the branch.

Option 2: Avoid a branch, but work to incrementally make the fork and upstream SWT XYGraph plug-ins identical, then make the switch over to the upstream version.

Option 1 had been tried before without success; there were too many moving parts, it created too much overhead and, ironically, another fork to maintain. So it was clear that this time Option 2 would be the way forward.

Tools are Your Friend

The incremental merging of the two needed to be done in a deliberate, reproducible manner to make it easier to trace back any issues that came up. Here are the tools that were useful in doing this.

1. Git Diff

The first step was to get an idea of the scale of the divergence, both quantitatively and qualitatively.

For quantity, a rough and ready measure was obtained by using git diff:

$ git diff --shortstat <diamond> <nebula>
399 files changed, 15648 insertions(+), 15368 deletions(-)

$ git diff <diamond> <nebula> | wc -l
37874

2. Eclipse IDE’s JDT formatter

Next, we needed to remove diffs that were just down to formatting. For this we used the Eclipse IDE and its quick & easy formatting: select the “src” folder, choose Source menu -> Format, and all code is formatted to the Eclipse standard in one go.

format_src_folder

3. Merge Tools

Then it was time to dive into the differences and group them into features, separating quick fixes from changes that broke APIs. For this we used the free and open source Meld on Linux.

3. EGit Goodness

Let’s say we found a line of code that differed in the fork. To work out where the feature had come from, we could use ‘git blame‘, but much nicer is the EGit support in the Eclipse IDE. Show Annotations was regularly used to work out where a feature had come from and which fork it had originally been created on, and then to see if we could find any extra information such as Bugzilla or JIRA tickets describing the feature. We were always grateful for code with good and helpful commit messages.

egit_annotations.png

5. Bug Tracking Tools

In this case we were using two different bug trackers: Bugzilla on the Eclipse Nebula side and JIRA on the Diamond side. As part of the merge, we were contributing lots and lots of distinct features to Nebula, so we had a parent issue, Bug 513865, to which we linked all the underlying fixes and features, aiming to keep each one distinct and standalone. At the time of writing that meant 21 dependent bugs.

6. Gerrits & Pull Requests

Gerrit reviews were created for each bug for Eclipse Nebula. Pull requests were created for each change going to Diamond’s DAWN (over 50 to date). Each was reviewed before being committed. In many cases we took the opportunity to tidy code up or enhance it with things like standalone examples that could be used to demonstrate the feature.

7. GitHub Built-in Graphs

It was also good to use the built-in GitHub graphs (on any repository, click on the ‘Graphs’ tab), first to see other forks out in the wild (Members tab):

members

Then the ‘Network’ tab to keep track of the relationships with those forks compared to the main Diamond fork:

networkgraph

Much nicer than our hand-drawn effort from earlier, though in this case not all the code being dealt with was on GitHub.

Win/Win

The work is ongoing and we are getting to the tricky parts – the key reasons the forks were created in the first place – making fundamental changes to the architecture. This will require some conversations to understand the best way forward. Already, with the work that has been done, there have been mutual benefits: Diamond gets new features and bug fixes developed in the open source, and Eclipse Nebula gets new features and bug fixes developed at Diamond Light Source. The New & Noteworthy for Eclipse Nebula shows off screenshots of all the new features resulting from this merge.

Nebula_N&N_1.3_-_improved_mouse_cursors

Going forward, this paves the way for Diamond not only to get rid of the duplicate maintenance of >30,000 lines of Java code (according to cloc), but also to contribute some significant features they have developed that integrate with SWT XYGraph. Doing so within the Eclipse Science Working Group makes a great environment in which to collaborate in open source and make advancements that benefit all involved.



by Tracy M at April 20, 2017 01:13 PM

Eclipse Newsletter - Mastering Eclipse CDT

April 20, 2017 11:10 AM

Learn all about Eclipse CDT, a fully functional C & C++ IDE for the Eclipse platform in this month's newsletter.

April 20, 2017 11:10 AM

Access OSGi Services via web interface

by Dirk Fauth at April 20, 2017 05:58 AM

In this blog post I want to share a simple approach to making OSGi services available via a web interface. It includes the following:

  • Embedding a Jetty web server in an OSGi application
  • Registering a Servlet via OSGi DS using the HTTP Whiteboard specification

I will only cover this simple scenario here and will not cover accessing OSGi services via a REST interface. If you are interested in that, you might want to look at the OSGi – JAX-RS Connector, which also looks very nice. Maybe I will look at it in another blog post. For now I will focus on embedding a Jetty server and deploying some resources.

I will skip the introduction to OSGi DS and extend the examples from my Getting Started with OSGi Declarative Services blog. It is easier to follow this post if you have done the other tutorial first, but it is not required if you adapt the contents here to your environment.

As a first step, create a new project org.fipro.inverter.http. In this project we will add the resources created in this tutorial. If you use PDE, create a new Plug-in Project; with Bndtools, create a new Bnd OSGi Project using the Component Development template.

PDE – Target Platform

In PDE it is best practice to create a Target Definition so the work is based on a specific set of bundles and we don’t need to install bundles into our IDE. Follow these steps to create a Target Definition for this tutorial:

  • Create a new target definition
    • Right click on project org.fipro.inverter.http → New → Other… → Plug-in Development → Target Definition
    • Set the filename to org.fipro.inverter.http.target
    • Initialize the target definition with: Nothing: Start with an empty target definition
  • Add a new Software Site in the opened Target Definition Editor by clicking Add… in the Locations section
    • Select Software Site
    • Software Site http://download.eclipse.org/releases/oxygen
    • Disable Group by Category
    • Select the following entries
      • Equinox Core SDK
      • Equinox Compendium SDK
      • Jetty Http Server Feature
    • Click Finish
  • Optional: Add a new Software Site to include JUnit to the Target Definition (only needed in case you followed all previous tutorials on OSGi DS or want to integrate JUnit tests for your services)
    • Software Site http://download.eclipse.org/tools/orbit/R-builds/R20170307180635/repository
    • Select JUnit Testing Framework
    • Click Finish
  • Save your work and activate the target platform by clicking Set as Target Platform in the upper right corner of the Target Definition Editor

Bndtools – Repository

Using Bndtools is different, as you already know if you followed my previous blog posts. To make it possible to follow this blog post using Bndtools as well, I will describe the necessary steps here.

We will use Apache Felix in combination with Bndtools instead of Equinox. This way we don’t need to modify the predefined repository and can start without further actions. The needed Apache Felix bundles are already available.

PDE – Prepare project dependencies

We will prepare the project dependencies in advance so it is easier to copy and paste the code samples into the project. Within the Eclipse IDE, the Quick Fixes would of course also support adding the dependencies afterwards.

  • Open the MANIFEST.MF file of the org.fipro.inverter.http project and switch to the Dependencies tab
  • Add the following dependencies on the Imported Packages side:
    • javax.servlet (3.1.0)
    • javax.servlet.http (3.1.0)
    • org.fipro.inverter (1.0.0)
    • org.osgi.service.component.annotations (1.3.0)
  • Mark org.osgi.service.component.annotations as Optional via Properties…
  • Add the upper version boundaries to the Import-Package statements.

Bndtools – Prepare project dependencies

  • Open the bnd.bnd file of the org.fipro.inverter.http project and switch to the Build tab
  • Add the following bundles to the Build Path
    • org.apache.felix.http.jetty
    • org.apache.felix.http.servlet-api
    • org.fipro.inverter.api

Create a Servlet implementation

  • Create a new package org.fipro.inverter.http
  • Create a new class InverterServlet
@Component(
    service=Servlet.class,
    property= "osgi.http.whiteboard.servlet.pattern=/invert",
    scope=ServiceScope.PROTOTYPE)
public class InverterServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Reference
    private StringInverter inverter;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        String input = req.getParameter("value");
        if (input == null) {
            throw new IllegalArgumentException("input can not be null");
        }
        String output = inverter.invert(input);

        resp.setContentType("text/html");
        resp.getWriter().write(
            "<html><body>Result is " + output + "</body></html>");
    }
}

Let’s look at the implementation:

  1. It is a typical servlet implementation that extends javax.servlet.http.HttpServlet
  2. It is also an OSGi Declarative Service that is registered as a service of type javax.servlet.Servlet
  3. The service has PROTOTYPE scope
  4. A special property osgi.http.whiteboard.servlet.pattern is set. This configures the URL pattern (request mapping) under which the servlet is available.
  5. It references the StringInverter OSGi service from the previous tutorial via field reference. And yes, since Eclipse Oxygen this is also supported in Equinox (I wrote about this here).

PDE – Launch the example

Before explaining the details further, launch the example to see if our servlet is available via a standard web browser. For this we create a launch configuration, so we can start directly from the IDE.

  • Select the menu entry Run -> Run Configurations…
  • In the tree view, right click on the OSGi Framework node and select New from the context menu
  • Specify a name, e.g. OSGi Inverter Http
  • Deselect All
  • Select the following bundles
    (note that we are using Eclipse Oxygen, in previous Eclipse versions org.apache.felix.scr and org.eclipse.osgi.util are not required)

    • Application bundles
      • org.fipro.inverter.api
      • org.fipro.inverter.http
      • org.fipro.inverter.provider
    • Console bundles
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.runtime
      • org.apache.felix.gogo.shell
      • org.eclipse.equinox.console
    • OSGi framework and DS bundles
      • org.apache.felix.scr
      • org.eclipse.equinox.ds
      • org.eclipse.osgi
      • org.eclipse.osgi.services
      • org.eclipse.osgi.util
    • Equinox Http Service and Http Whiteboard
      • org.eclipse.equinox.http.jetty
      • org.eclipse.equinox.http.servlet
    • Jetty
      • javax.servlet
      • org.eclipse.jetty.continuation
      • org.eclipse.jetty.http
      • org.eclipse.jetty.io
      • org.eclipse.jetty.security
      • org.eclipse.jetty.server
      • org.eclipse.jetty.servlet
      • org.eclipse.jetty.util
  • Ensure that Default Auto-Start is set to true
  • Switch to the Arguments tab
    • Add -Dorg.osgi.service.http.port=8080 to the VM arguments
  • Click Run

Note:
If you include the above bundles in an Eclipse RCP application, ensure that you auto-start the org.eclipse.equinox.http.jetty bundle to automatically start the Jetty server. This can be done on the Configuration tab of the Product Configuration Editor.

If you now open a browser and go to the URL http://localhost:8080/invert?value=Eclipse, you should get a response with the inverted output.
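
If you prefer checking from code instead of a browser, the following hypothetical smoke test performs the same HTTP GET (the class name and the exact output are illustrative only):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class InverterSmokeTest {

    public static void main(String[] args) throws Exception {
        // Assumes the launch configuration above is running on port 8080
        URL url = new URL("http://localhost:8080/invert?value=Eclipse");
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
            // Should print something like: <html><body>Result is ...</body></html>
            System.out.println(reader.readLine());
        }
    }
}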

Bndtools – Launch the example

  • Open the launch.bndrun file in the org.fipro.inverter.http project
  • On the Run tab add the following bundles to the Run Requirements
    • org.fipro.inverter.http
    • org.fipro.inverter.provider
    • org.apache.felix.http.jetty
  • Click Resolve to ensure all required bundles are added to the Run Bundles via auto-resolve
  • Add -Dorg.osgi.service.http.port=8080 to the JVM Arguments
  • Click Run OSGi

Http Service & Http Whiteboard

Now why does this simply work? We only implemented a servlet and provided it as an OSGi DS, and it is “magically” available via a web interface. The answer lies in the OSGi Http Service Specification and the Http Whiteboard Specification. The OSGi Compendium Specification R6 contains the Http Service Specification Version 1.2 (Chapter 102 – Page 45) and the Http Whiteboard Specification Version 1.0 (Chapter 140 – Page 1067).

The purpose of the Http Service is to provide access to services on the internet or other networks, for example by using a standard web browser. This can be done by registering servlets or resources with the Http Service. Without going too much into detail, the implementation is similar to an embedded web server, which is the reason why the default implementations in Equinox and Felix are based on Jetty.

To register servlets and resources with the Http Service, you need to know the Http Service API very well, and you need to retrieve the Http Service and operate on it directly. As this is not very convenient, the Http Whiteboard Specification was introduced. It allows registering servlets and resources via the whiteboard pattern, without the need to know the Http Service API in detail. I always think about the whiteboard pattern as a “don’t call us, we will call you” pattern. That means you don’t register servlets with the Http Service directly; you provide them as services to the service registry, and the Http Whiteboard implementation picks them up and registers them with the Http Service.
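
For contrast, here is a hypothetical sketch of the direct registration that the whiteboard pattern saves you from, using the plain Http Service API (the anonymous servlet is just a placeholder):

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.http.HttpService;
import org.osgi.service.http.NamespaceException;

@Component
public class DirectServletRegistration {

    @Reference
    private HttpService httpService;

    @Activate
    void activate() throws ServletException, NamespaceException {
        // You need to know and call the Http Service API yourself ...
        httpService.registerServlet("/hello", new HttpServlet() {
            private static final long serialVersionUID = 1L;

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                resp.getWriter().write("hello");
            }
        }, null, null);
    }

    @Deactivate
    void deactivate() {
        // ... and you need to remember to clean up again
        httpService.unregister("/hello");
    }
}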

Via Http Whiteboard it is possible to register:

  • Servlets
  • Servlet Filters
  • Resources
  • Servlet Listeners

I will show some examples so you are able to play around with the Http Whiteboard service.

Register Servlets

An example of how to register a servlet via the Http Whiteboard is shown above. The main points are:

  • The servlet needs to be registered as an OSGi service of type javax.servlet.Servlet.
  • The component property osgi.http.whiteboard.servlet.pattern needs to be set to specify the request mappings.
  • The service scope should be PROTOTYPE.

For registering servlets, the following component properties are supported (see OSGi Compendium Specification Release 6 – Table 140.4):

  • osgi.http.whiteboard.servlet.asyncSupported – Declares whether the servlet supports the asynchronous operation mode. Allowed values are true and false independent of case. Defaults to false.
  • osgi.http.whiteboard.servlet.errorPage – Register the servlet as an error page for the error code and/or exception specified; the value may be a fully qualified exception type name or a three-digit HTTP status code in the range 400-599. The special values 4xx and 5xx can be used to match value ranges. Any value not being a three-digit number is assumed to be a fully qualified exception class name.
  • osgi.http.whiteboard.servlet.name – The name of the servlet. This name is used as the value of the javax.servlet.ServletConfig.getServletName() method and defaults to the fully qualified class name of the service object.
  • osgi.http.whiteboard.servlet.pattern – Registration pattern(s) for the servlet.
  • servlet.init.* – Properties starting with this prefix are provided as init parameters to the javax.servlet.Servlet.init(ServletConfig) method. The servlet.init. prefix is removed from the parameter name (see the example below).
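
To make the servlet.init.* entry concrete, here is a hypothetical example (GreetingServlet and the greeting parameter are invented for illustration):

import java.io.IOException;

import javax.servlet.Servlet;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ServiceScope;

@Component(
    service=Servlet.class,
    property= {
        "osgi.http.whiteboard.servlet.pattern=/greet",
        "servlet.init.greeting=Hello"
    },
    scope=ServiceScope.PROTOTYPE)
public class GreetingServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    private String greeting;

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config);
        // The servlet.init. prefix is removed, so the parameter name is "greeting"
        greeting = config.getInitParameter("greeting");
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/html");
        resp.getWriter().write("<html><body>" + greeting + "</body></html>");
    }
}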

The Http Whiteboard service needs to call javax.servlet.Servlet.init(ServletConfig) to initialize the servlet before it starts to serve requests, and, when it is no longer needed, javax.servlet.Servlet.destroy() to shut down the servlet. If more than one Http Whiteboard implementation is available in a runtime, the init() and destroy() calls would be executed multiple times, which violates the Servlet specification. It is therefore recommended to use the PROTOTYPE scope for servlets to ensure that every Http Whiteboard implementation gets its own service instance.

Note:
In a controlled runtime, like an RCP application that is delivered with one Http Whiteboard implementation and that does not support installing bundles at runtime, the usage of the PROTOTYPE scope is not required. Actually such a runtime ensures that the servlet is only instantiated and initialized once. But if possible it is recommended that the PROTOTYPE scope is used.

To register a servlet as an error page, the service property osgi.http.whiteboard.servlet.errorPage needs to be set. The value can either be a three-digit HTTP error code, the special codes 4xx or 5xx to specify a range of error codes, or a fully qualified exception class name. The service property osgi.http.whiteboard.servlet.pattern is not required for servlets that provide error pages.

The following snippet shows an error page servlet that deals with IllegalArgumentExceptions and the HTTP error code 500. It can be tested by calling the inverter servlet without a query parameter.

@Component(
    service=Servlet.class,
    property= {
        "osgi.http.whiteboard.servlet.errorPage=java.lang.IllegalArgumentException",
        "osgi.http.whiteboard.servlet.errorPage=500"
    },
    scope=ServiceScope.PROTOTYPE)
public class ErrorServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        resp.setContentType("text/html");
        resp.getWriter().write(
            "<html><body>You need to provide an input!</body></html>");
    }
}

Register Filters

Via servlet filters it is possible to intercept servlet invocations. They are used to modify the ServletRequest and ServletResponse to perform common tasks before and after the servlet invocation.

The example below shows a servlet filter that adds a simple header and footer on each request to the servlet with the /invert pattern:

@Component(
    property = "osgi.http.whiteboard.filter.pattern=/invert",
    scope=ServiceScope.PROTOTYPE)
public class SimpleServletFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig)
            throws ServletException { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        response.setContentType("text/html");
        response.getWriter().write("<b>Inverter Servlet</b><p>");
        chain.doFilter(request, response);
        response.getWriter().write("</p><i>Powered by fipro</i>");
    }

    @Override
    public void destroy() { }

}

To register a servlet filter the following criteria must match:

  • It needs to be registered as OSGi service of type javax.servlet.Filter.
  • One of the given component properties needs to be set:
    • osgi.http.whiteboard.filter.pattern
    • osgi.http.whiteboard.filter.regex
    • osgi.http.whiteboard.filter.servlet
  • The service scope should be PROTOTYPE.

For registering servlet filters the following service properties are supported (see OSGi Compendium Specification Release 6 – Table 140.5):

  • osgi.http.whiteboard.filter.asyncSupported – Declares whether the servlet filter supports asynchronous operation mode. Allowed values are true and false independent of case. Defaults to false.
  • osgi.http.whiteboard.filter.dispatcher – Select the dispatcher configuration for which the servlet filter should be called. Allowed string values are REQUEST, ASYNC, ERROR, INCLUDE, and FORWARD. The default for a filter is REQUEST.
  • osgi.http.whiteboard.filter.name – The name of a servlet filter. This name is used as the value of the FilterConfig.getFilterName() method and defaults to the fully qualified class name of the service object.
  • osgi.http.whiteboard.filter.pattern – Apply this servlet filter to the specified URL path patterns. The format of the patterns is specified in the servlet specification.
  • osgi.http.whiteboard.filter.regex – Apply this servlet filter to the specified URL paths. The paths are specified as regular expressions following the syntax defined in the java.util.regex.Pattern class.
  • osgi.http.whiteboard.filter.servlet – Apply this servlet filter to the referenced servlet(s) by name.
  • filter.init.* – Properties starting with this prefix are passed as init parameters to the Filter.init() method. The filter.init. prefix is removed from the parameter name.

Register Resources

It is also possible to register a service that informs the Http Whiteboard service about static resources like HTML files, images, CSS or JavaScript files. For this a simple service can be registered that only needs to have the following two mandatory service properties set:

  • osgi.http.whiteboard.resource.pattern – The pattern(s) to be used to serve resources, as defined by the Java Servlet 3.1 Specification in section 12.2, Specification of Mappings. This property marks the service as a resource service.
  • osgi.http.whiteboard.resource.prefix – The prefix used to map a requested resource to the bundle’s entries. If the request’s path info is not null, it is appended to this prefix. The resulting string is passed to the getResource(String) method of the associated Servlet Context Helper.

The service does not need to implement any specific interface or provide any functionality. All required information is given via the component properties.

To create a resource service follow these steps:

  • Create a folder resources in the project org.fipro.inverter.http
  • Add an image to that folder, e.g. eclipse_logo.png
  • PDE – Add the resources folder to the build.properties
  • Bndtools – Add the following line to the bnd.bnd file on the Source tab
    -includeresource: resources=resources
  • Create the resource service:
@Component(
    service = ResourceService.class,
    property = {
        "osgi.http.whiteboard.resource.pattern=/files/*",
        "osgi.http.whiteboard.resource.prefix=/resources"})
public class ResourceService { }

After starting the application, the static resources located in the resources folder are available via the /files path in the URL, e.g. http://localhost:8080/files/eclipse_logo.png

Note:
While writing this blog post I came across a very nasty issue. Because I initially registered the servlet filter for the /* pattern, the simple header and footer were always added. This also set the content type, which of course didn’t match the content type of the image, so the static content was never shown correctly. If you want to use servlet filters to add common headers and footers, take care that the filter pattern does not apply the servlet filter to static resources.

Register Servlet Listeners

It is also possible to register different servlet listeners as whiteboard services. The following listeners are supported according to the servlet specification:

  • ServletContextListener – Receive notifications when Servlet Contexts are initialized and destroyed.
  • ServletContextAttributeListener – Receive notifications for Servlet Context attribute changes.
  • ServletRequestListener – Receive notifications for servlet requests coming in and being destroyed.
  • ServletRequestAttributeListener – Receive notifications when servlet Request attributes change.
  • HttpSessionListener – Receive notifications when Http Sessions are created or destroyed.
  • HttpSessionAttributeListener – Receive notifications when Http Session attributes change.
  • HttpSessionIdListener – Receive notifications when Http Session ID changes.

Only one component property needs to be set so that the Http Whiteboard implementation handles the listener.

  • osgi.http.whiteboard.listener – When set to true this listener service is handled by the Http Whiteboard implementation. When not set or set to false the service is ignored. Any other value is invalid.

The following example shows a simple ServletRequestListener that prints out the client address on the console for each request (borrowed from the OSGi Compendium Specification):

@Component(property = "osgi.http.whiteboard.listener=true")
public class SimpleServletRequestListener
    implements ServletRequestListener {

    public void requestInitialized(ServletRequestEvent sre) {
        System.out.println("Request initialized for client: "
            + sre.getServletRequest().getRemoteAddr());
    }

    public void requestDestroyed(ServletRequestEvent sre) {
        System.out.println("Request destroyed for client: "
            + sre.getServletRequest().getRemoteAddr());
    }

}

Servlet Context and Common Whiteboard Properties

The ServletContext is specified in the servlet specification and provided to the servlets at runtime by the container. By default there is one ServletContext, and without additional information the servlets are registered to that default ServletContext via the Http Whiteboard implementation. This could lead to scenarios where different bundles provide servlets for the same request mapping. In that case the service.ranking will be inspected to decide which servlet should be delivered. If the servlets belong to different applications, it is possible to specify different contexts. This can be done by registering a custom ServletContextHelper as whiteboard service and associating the servlets with the corresponding context. The ServletContextHelper can be used to customize the behavior of the ServletContext (e.g. handle security, provide resources, …) and to support multiple web applications via different context paths.

A custom ServletContextHelper needs to be registered as a service of type ServletContextHelper and needs to have the following two service properties set:

  • osgi.http.whiteboard.context.name
  • osgi.http.whiteboard.context.path

  • osgi.http.whiteboard.context.name – Name of the Servlet Context Helper. This name can be referred to by Whiteboard services via the osgi.http.whiteboard.context.select property. The syntax of the name is the same as the syntax for a Bundle Symbolic Name. The default Servlet Context Helper is named default. To override the default, register a custom ServletContextHelper service with the name default. If multiple Servlet Context Helper services are registered with the same name, the one with the highest Service Ranking is used. In case of a tie, the service with the lowest service ID wins. In other words, the normal OSGi service ranking applies.
  • osgi.http.whiteboard.context.path – Additional prefix to the context path for servlets. This property is mandatory. Valid characters are specified in IETF RFC 3986, section 3.3. The context path of the default Servlet Context Helper is /. A custom default Servlet Context Helper may use an alternative path.
  • context.init.* – Properties starting with this prefix are provided as init parameters through the ServletContext.getInitParameter() and ServletContext.getInitParameterNames() methods. The context.init. prefix is removed from the parameter name.

The following example will register a ServletContextHelper for the context path /eclipse and will retrieve resources from http://www.eclipse.org. It is registered with BUNDLE service scope to ensure that every bundle gets its own instance, which is for example important to resolve resources from the correct bundle.

Note:
Create it in a new package org.fipro.inverter.http.eclipse within the org.fipro.inverter.http project, as we will need to create some additional resources to show how this example actually works.

@Component(
    service = ServletContextHelper.class,
    scope = ServiceScope.BUNDLE,
    property = {
        "osgi.http.whiteboard.context.name=eclipse",
        "osgi.http.whiteboard.context.path=/eclipse" })
public class EclipseServletContextHelper extends ServletContextHelper {

    @Override
    public URL getResource(String name) {
        // remove the path from the name
        name = name.replace("/eclipse", "");
        try {
            return new URL("http://www.eclipse.org/" + name);
        } catch (MalformedURLException e) {
            return null;
        }
    }
}

Note:
With PDE remember to add org.osgi.service.http.context to the Imported Packages. With Bndtools remember to add the new package to the Private Packages in the bnd.bnd file on the Contents tab.

To associate servlets, servlet filters, resources and listeners with a ServletContextHelper, they share common service properties (see OSGi Compendium Specification Release 6 – Table 140.3) in addition to the service specific properties:

  • osgi.http.whiteboard.context.select – An LDAP-style filter to select the associated ServletContextHelper service to use. Any service property of the Servlet Context Helper can be filtered on. If this property is missing the default Servlet Context Helper is used. For example, to select a Servlet Context Helper with name myCTX provide the following value: (osgi.http.whiteboard.context.name=myCTX). To select all Servlet Context Helpers provide the following value: (osgi.http.whiteboard.context.name=*)
  • osgi.http.whiteboard.target – The value of this service property is an LDAP-style filter expression to select the Http Whiteboard implementation(s) to handle this Whiteboard service. The LDAP filter is used to match HttpServiceRuntime services. Each Http Whiteboard implementation exposes exactly one HttpServiceRuntime service. This property is used to associate the Whiteboard service with the Http Whiteboard implementation that registered the HttpServiceRuntime service. If this property is not specified, all Http Whiteboard implementations can handle the service.

The following example will register a servlet only for the introduced /eclipse context:

@Component(
    service=Servlet.class,
    property= {
        "osgi.http.whiteboard.servlet.pattern=/image",
        "osgi.http.whiteboard.context.select=(osgi.http.whiteboard.context.name=eclipse)"
    },
    scope=ServiceScope.PROTOTYPE)
public class ImageServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        resp.setContentType("text/html");
        resp.getWriter().write("Show an image from www.eclipse.org");
        resp.getWriter().write(
            "<p><img src='img/nattable/images/FeatureScreenShot.png'/></p>");
    }

}

And to make this work in combination with the introduced ServletContextHelper, we additionally need to register the resources for the /img/* pattern, which are also assigned only to the /eclipse context:

@Component(
    service = EclipseImageResourceService.class,
    property = {
        "osgi.http.whiteboard.resource.pattern=/img/*",
        "osgi.http.whiteboard.resource.prefix=/eclipse",
        "osgi.http.whiteboard.context.select=(osgi.http.whiteboard.context.name=eclipse)"})
public class EclipseImageResourceService { }

If you start the application and browse to http://localhost:8080/eclipse/image you will see an output from the servlet together with an image that is loaded from http://www.eclipse.org.

Note:
The component properties and predefined values are available via org.osgi.service.http.whiteboard.HttpWhiteboardConstants. So you don’t need to remember them all, and you can also retrieve some additional information about the properties via the corresponding Javadoc.
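
For example, the servlet registration from above could use the constants instead of plain strings (a sketch; the pattern value is only illustrative):

@Component(
    service=Servlet.class,
    // constant concatenation is a compile-time constant, so it is valid in the annotation
    property= HttpWhiteboardConstants.HTTP_WHITEBOARD_SERVLET_PATTERN + "=/invert",
    scope=ServiceScope.PROTOTYPE)
public class InverterServlet extends HttpServlet {
    // servlet implementation as before
}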

The sources for this tutorial are hosted on GitHub in the already existing projects:

by Dirk Fauth at April 20, 2017 05:58 AM

IoT Developer Trends - 2017 Edition

April 19, 2017 01:00 PM

The third annual IoT Developer Survey results are now available! What has changed in IoT this year?

April 19, 2017 01:00 PM

IoT Developer Trends 2017 Edition

by Ian Skerrett at April 19, 2017 12:00 PM

For the last 3 years we have been tracking the trends of the IoT developer community through the IoT Developer Survey [2015] [2016]. Today, we released the third edition of the IoT Developer Survey 2017. As in previous years, the report provides some interesting insights into what IoT developers are thinking and using to build IoT solutions. Below are some of the key trends we identified in the results.

The survey is the result of a collaboration between the Eclipse IoT Working Group, IEEE, Agile-IoT EU and the IoT Council. Each partner promoted the survey to their respective communities. A total of 713 individuals participated in the survey. The complete report is available for everyone and we also make available the detailed data [xls, odf].

As with any survey of this type, I always caution people to see these results as one data point that should be compared to other industry reports. All of these surveys have inherent biases so identifying trends that span surveys is important.

Key Trends from 2017 Survey

1. Expanding Industry Adoption of IoT

The 2017 survey participants appear to be involved in a more diverse set of industries. IoT Platform and Home Automation industries continue to lead, but industries such as Industrial Automation, Smart Cities and Energy Management experienced significant growth from 2016 to 2017.

[Chart: industries]

2. Security is the key concern but…

Security continues to be the main concern of IoT developers, with 46.7% of respondents indicating it as a concern. Interoperability (24.4%) and Connectivity (21.4%) are the next most popular concerns mentioned. Interoperability appears to be on a downward trend compared to 2015 (30.7%) and 2016 (29.4%), potentially indicating that the work on standards and IoT middleware is lessening this concern.

[Chart: concerns2017]

This year we asked what security-related technologies were being used for IoT solutions. The top two security technologies selected were existing software technologies, i.e. Communication Security (TLS, DTLS) (48.3%) and Data Encryption (43.2%). Hardware-oriented security solutions were less popular, e.g. Trusted Platform Modules (10%) and Hardware Security Modules (10.6%). Even Over the Air Update was only being used by 18.5% of the respondents. Security may be a key concern, but it certainly seems like the adoption of security technology is lagging.

[Chart: security]

3. Top IoT Programming Language Depends…

Java and C are the primary IoT programming languages, along with significant usage of C++, Python and JavaScript. New this year, the survey asked about language usage by IoT category: Constrained Devices, IoT Gateway and IoT Cloud Platform. Broken down by these categories, it is apparent that language usage depends on the target destination for the developed software:

  • On constrained devices, C (56.4%) and C++ (38.3%) are the dominant languages being used. Java (21.2%) and Python (20.8%) have some usage, but JavaScript (10.3%) is minimal.
  • On IoT Gateways, the language of choice is more diverse: Java (40.8%), C (30.4%), Python (29.9%) and C++ (28.1%) are all being used. JavaScript and Node.js have some use.
  • On IoT Cloud Platforms, Java (46.3%) emerges as the dominant language. JavaScript (33.6%), Node.js (26.3%) and Python (26.2%) have some usage. Not surprisingly, C (7.3%) and C++ (11.6%) usage drops off significantly.

Overall, it is clear IoT solution development requires a diverse set of language programming skills. The specific language of choice really depends on the target destination.

4. Linux is key OS; Raspbian and Ubuntu top IoT Linux distros

Linux continues to be the main operating system for IoT. This year we asked respondents to identify the OS by category: Constrained Device and IoT Gateway. On Constrained Devices, Linux (44.1%) is the most popular OS, but the second most popular is No OS / Bare Metal (27.6%). On IoT Gateways, Linux (66.9%) becomes even more popular and Windows (20.5%) becomes the second choice.

The survey also asked which Linux distro is being used. Raspbian (45.5%) and Ubuntu (44%) are the two top distros for IoT.

[Chart: linuxdistros]

If Linux is the dominant operating system for IoT, how are the alternative IoT operating systems doing? In 2017, Windows definitely experienced a big jump from previous years. It also seems like FreeRTOS and Contiki are experiencing growth in their usage.

5. Amazon, MS and Google Top IoT Cloud Platforms

Amazon (42.7%) continues to be the leading IoT Cloud Platform followed by MS Azure (26.7%) and Google Cloud Platform (20.4%). A significant change this year has been the drop of Private / On-premise cloud usage, from 34.9% in 2016 to 18.4% in 2017. This might be an indication that IoT Cloud Platforms are now more mature and developers are ready to embrace them.

[Chart: cloud]

6. Bluetooth, LPWAN protocols and 6LowPAN trending up; Thread sees little adoption

For the last 3 years we have asked what connectivity protocols developers use for IoT solutions. The main response has been TCP/IP and Wi-Fi. However, there are a number of connectivity standards and technologies being developed for IoT, so it has been interesting to track their adoption within the IoT developer community. Based on the 2017 data, it would appear Bluetooth/Bluetooth Smart (48.2%), LPWAN technologies (e.g. LoRa, Sigfox, LTE-M) (22.4%) and 6LoWPAN (21.4%) are being adopted by the IoT developer community. However, Thread (6.4%) appears to still have limited success with developer adoption.

[Chart: connectivity2017]

Summary

Overall, the survey results show some common patterns for IoT developers. The report also looks at common IoT hardware architectures, IDE usage, perceptions of IoT consortiums, adoption of IoT standards, open source participation in IoT and lots more. I hope the report provides a useful source of information for the wider IoT industry.

Next week we will be doing a webinar to go through the details of the results. Please join us on April 26 at 10:30am ET / 16:30 CET.

[Image: 2017 IoT Survey - webinar 2]

Thank you to everyone who participated in the survey, the individual input is what makes these surveys useful. Also, thank you to our co-sponsors Eclipse IoT Working Group, IEEE, Agile IoT and the IoT Council. It is great to be able to collaborate with other successful IoT communities.

We will plan to do another survey next year. Feel free to leave any comments or thoughts on how we can improve it.



by Ian Skerrett at April 19, 2017 12:00 PM

Why DSLs?

by Sebastian Zarnekow (noreply@blogger.com) at April 17, 2017 09:04 PM

A lot has been written about domain specific languages, their purpose and their application. According to the ever changing wisdom of Wikipedia, a DSL “is a computer language specialized to a particular application domain. This is in contrast to a general-purpose language (GPL), which is broadly applicable across domains.” In other words, a DSL is supposed to help implement software systems, or parts of them, in a more efficient way. But this raises the question: why should engineers learn new syntaxes, new APIs and new tools rather than use their primary language and just get things done?

Here is my take on this. To answer that question, let’s move the discussion away from programming languages towards a more general understanding of language. And instead of talking in the abstract, I’ll use a very concrete example. In fact, one of the most discussed domains ever - and one that literally everyone has an opinion about: the weather.

We all know this situation: when watching the news, the forecaster will tell us something about sunshine duration, wind speed and direction, or temperature. Not being a trained meteorologist, I can still find my way through most of the facts, though the probability of precipitation always gives me a slight headache. If we look at the vocabulary that is used in an average weather forecast, we can clearly call it a domain specific language, though it only scratches the surface of meteorology. But what happens when two meteorologists talk to each other about the weather? My take: they will use a very efficient vocabulary to discuss it unambiguously.

Now let’s move this gedankenexperiment forward. There are approximately 40 non-compound words in the Finnish language that describe snow. Now what happens when a Finnish forecaster and a German news anchor talk about snowy weather conditions and the anchorman takes English notes on that? I bet it is safe to assume that there will be a big loss of precision when it comes to the mutual agreement on the exact form of snowy weather. And even more so when this German guy later on tries to explain to another Finn what the weather was like. The bottom line of this: common vocabulary and language is crucial to successful communication.

Back to programming. Let’s assume that the English language is a general purpose programming language, the German guy is a software developer and the Finnish forecaster is a domain expert for snowy weather. This may all sound a little far-fetched, but in fact it is exactly how most software projects are run: a domain expert explains the requirements to a developer. The dev will start implementing the requirements. Other developers will be onboarded on the project. They try to wrap their heads around the state of the codebase and will surely read the subtleties of the implementation differently, no matter how fluent they are in English. Follow-up meetings will be scheduled to clarify questions with the domain experts. And the entire communication is prone to loss of precision. In the end all involved parties talk about similar yet slightly different things. Misunderstandings potentially go unnoticed and cause a lot of frustration on all sides.

This is where domain specific languages come into play! Instead of a tedious, multi-step translation from one specialized vocabulary to a general purpose language and vice versa, the logic is directly implemented using the domain specific terms and notation. The knowledge is captured with fewer manual transformation steps; the system is easier to write, understand and review. This may even work to the extent that the domain experts do write the code themselves. Or they pair up with the software engineers and form a team.

As usual, there is no such thing as a free lunch. As long as you are not omnilingual, you should probably not waste your time learning Finnish by heart, especially when you are working with Spanish people next week, and the French team the week thereafter. But without any doubt, fluent Finnish will pay off as long as you are working with the Finns.

A development process based on domain specific languages, and thus based on a level of abstraction close to the problem domain, can relieve all involved people. There are fewer chances for misunderstandings and inaccurate translations. Speaking the same language and using the same vocabulary naturally feels like pulling together. And that’s what makes successful projects.

by Sebastian Zarnekow (noreply@blogger.com) at April 17, 2017 09:04 PM

Dynamic Routing in Serverless Microservice with Vert.x Event Bus

by bytekast at April 14, 2017 12:00 AM

This is a re-publication of the following blog post

SERVERLESS FRAMEWORK

The Serverless Framework has become the de facto toolkit for building and deploying Serverless functions or applications. Its community has done a great job advancing the tools around Serverless architecture.

However, in the Serverless community there is debate among developers on whether a single AWS Lambda function should only be responsible for a single API endpoint. My answer, based on my real-world production experience, is NO.

Imagine if you are building a set of APIs with 10 endpoints and you need to deploy the APIs to DEV, STAGE and PROD environments. Now you are looking at 30 different functions to version, deploy and manage - not to mention the Copy & Paste code and configuration that will result from this type of set-up. NO THANKS!!!

I believe a more pragmatic approach is 1 Lambda Function == 1 Microservice.

For example, if you were building a User Microservice with basic CRUD functionality, you should implement CREATE, READ, UPDATE and DELETE in a single Lambda function. In the code, you should resolve the desired action by inspecting the request or the context.

VERT.X TO THE RESCUE

There are many benefits to using Vert.x in any application. With Vert.x, you get a rock-solid and lightweight toolkit for building reactive, highly performant, event-driven and non-blocking applications. The toolkit even provides asynchronous APIs for accessing traditional blocking drivers such as JDBC.

However, for this example, we will mainly focus on the Event Bus. The event bus allows different parts of your application to communicate with each other via event messages. It supports publish/subscribe, point-to-point, and request-response messaging.

For the User Microservice example above, we could treat the combination of the HTTP METHOD and RESOURCE PATH as a unique event channel, and register the subscribers/handlers to respond appropriately.

Let’s dive right in.

GOAL:

Create a reactive, message-driven, asynchronous User Microservice with GET, POST, DELETE, PUT CRUD operations in a single AWS Lambda Function using the Serverless Framework

Serverless stack definition:

SOLUTION:

Use Vert.x‘s Event Bus to handle dynamic routing to event handlers based on HTTP method and resource path from the API input.

Lambda Handler:
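
The following is a minimal sketch of such a handler, assuming the standard API Gateway proxy input fields (httpMethod, resource) and the Vert.x 3.x event bus API; the class name and handler bodies are illustrative, not the original code:

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import com.amazonaws.services.lambda.runtime.Context;

import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class UserServiceLambda {

    // created once and reused for the life of the container/JVM
    private static final Vertx vertx = Vertx.vertx();

    static {
        // register the User Service handlers, one per "METHOD:PATH" channel
        vertx.eventBus().consumer("GET:/users", message ->
            message.reply(new JsonObject().put("action", "read users").encode()));
        vertx.eventBus().consumer("POST:/users", message ->
            message.reply(new JsonObject().put("action", "create user").encode()));
        // ... PUT and DELETE handlers are registered the same way
    }

    // main handler method, called when the Lambda function is invoked
    public String handleRequest(Map<String, Object> input, Context context) throws Exception {
        // resolve the (dynamic) event bus address from the API input
        String address = input.get("httpMethod") + ":" + input.get("resource");

        // bridge the asynchronous reply back to the synchronous Lambda contract
        CompletableFuture<String> result = new CompletableFuture<>();
        vertx.eventBus().<String>send(address, new JsonObject(input).encode(), reply -> {
            if (reply.succeeded()) {
                result.complete(reply.result().body());
            } else {
                result.completeExceptionally(reply.cause());
            }
        });
        return result.get(10, TimeUnit.SECONDS);
    }
}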

CODE REVIEW

The static initializer creates the Vert.x instance. AWS Lambda will hold on to this instance for the life of the container/JVM, so it is reused in subsequent requests.

During this initialization the User Service handlers are registered on the event bus.

The handleRequest method is the main handler method that is called when the Lambda function is invoked.

It sends the Lambda function input to the (dynamic) address where handlers are waiting to respond.

The consumer registrations define the specific handlers and bind them to the appropriate channels (HTTP method + resource path).

SUMMARY

As you can see, Vert.x‘s Event Bus makes it very easy to dynamically support multiple routes in a single Serverless function. This reduces the number of functions you have to manage, deploy and maintain in AWS. In addition, you gain access to asynchronous, non-blocking APIs that come standard with Vert.x.

Serverless + Vert.x = BLISS


by bytekast at April 14, 2017 12:00 AM