Beware diversity-led marketing (& three cheers for these companies doing diversity right)

by Tracy M at November 17, 2017 11:10 AM


There’s an increased level of discussion about improving diversity in tech communities. Or maybe it was always there and I just started paying attention. Either way, it’s a really good thing. But in the mix there is definitely a certain amount of diversity-led marketing, e.g. “We need more women in tech! Women, sign up for this tech course here…”. Some examples are more obvious than others.

What about tech conferences? Take the case of a tech conference getting some promotion off the back of free diversity tickets. Is that just diversity-led marketing and a bad thing? After all, it’s great to get underrepresented folks into conferences, right? The answer: if it is done in isolation, then it is probably just self-serving marketing and PR. Underrepresented folks aren’t merely props for your agenda.

Diversity is complicated. It’s easy to get it wrong and end up like the conference organizers who violated their own code of conduct and had a speaker cancel their talk. Or fall into the Google case of trying to inspire teen girls to code while simultaneously, systematically limiting the careers of women in your company. Again, diversity is complicated, so we all want to focus our energies on those doing it the right way.

Open source has a worse record than most when it comes to diversity.  At this year’s Eclipsecon Europe, as with every tech conference I attend, I did my own evaluation of what’s being done well and not so well. This year, I noticed a few companies who are doing something right as evidenced by the women and underrepresented minorities that are in leadership positions.

  1. IBM – IBM had a noticeably improved presence at this year’s conference. I learnt that they actively encouraged speaker proposals. If a talk is accepted, there are good company policies in place to ensure speakers can travel and attend the conference. As a result, we had four awesome women speakers from IBM, not just any speakers, but experts in their respective fields: Eclipse JDT leads Noopur Gupta and Sarika Sinha, Eclipse SWT committer Lakshmi Shanmugam and Eclipse MicroProfile committer Emily Jiang.
  2. OBEO – OBEO specialize in graphical modelling and are well respected in the community. Melanie Bats is one of the rockstars in the community, doing terrific and imaginative tech talks, and she also recently took over as the Eclipse Planning Council lead. OBEO recently promoted Melanie to CTO, which is written about beautifully here: Zero-to-CTO.
  3. BREDEX – BREDEX specialize in testing and are well represented at EclipseCon by the indefatigable Alex Schladebeck. Alex can be found leading the highly enjoyable Kahoot quiz at EclipseCon as well as heading up the Project Quality day. Doing great things in the testing world, it was great to learn that Alex has been promoted to  ‘Head of Software Quality and Test Consulting’ at BREDEX.

These three companies set a great example for the rest of us, not to mention make us better at our work as a community. Which brings me to the picture at the top of this blog post. I like to get set up really early before I do a tech talk, especially one in a huge room with a massive screen. So while I was getting set up, Jonah Graham of Kichwa Coders and Sarika Sinha of IBM got into a discussion about debugger protocols and threading issues. To discuss the finer points, my laptop was commandeered and out came the code. It was one of those serendipitous moments and I didn’t want my pre-talk nerves to stop them. So I took a seat and took pictures while taking deep breaths. I think my talk went well anyway. That one conversation really informed our thinking about our work on the future of debuggers. And it reminded me in a powerful way how things are always better the more different types of people you get involved. Little moments like these make it all worthwhile, and worth doing right, in the best way possible.

by Tracy M at November 17, 2017 11:10 AM

Preposition Preference

by Donald Raab at November 15, 2017 09:53 PM

What’s up? A preposition.

Photo taken at Grounds for Sculpture in Hamilton, NJ

A friend of mine at Rutgers University would always respond to the question “What’s up?” with the same witty answer: “A preposition.” I fell into this trap far too many times.

Have you ever thought about how much we use prepositions in our Java APIs?

We use several different prepositions in APIs in Eclipse Collections. Each one conveys a different meaning. Some prepositions that appear in Eclipse Collections multiple times are “with, of, by, as, to, from, into”. When we use a preposition in an API, it should help convey meaning clearly. If it doesn’t, then we would have been better off without it.

Two prepositions enter. One preposition leaves.

At JavaOne this year, I described a battle we once had between two prepositions for naming our collection factory methods in the Eclipse Collections API. The battle was between of and with.

MutableList<String> list = Lists.mutable.of("1", "2", "3");
MutableList<String> list = Lists.mutable.with("1", "2", "3");

After an intense debate, we wound up with both options for our collection factory classes. We thought this was one place where we could provide multiple options to allow developers to use their own preference. This however was not the end of the story. Sometimes there is more than just a single battle to be won.

The prepositions of and with both work well for naming factory methods that create collections. I personally prefer with, mostly because this is what was used in Smalltalk. In Smalltalk, I would regularly write the following:

set := Set with: '1' with: '2' with: '3'.

The following is the equivalent using Java with Eclipse Collections.

MutableSet<String> set = Sets.mutable.with("1", "2", "3");

If you prefer, you can also create a collection using the of factory method.

MutableSet<String> set = Sets.mutable.of("1", "2", "3");

There are also forms that take an Iterable as a parameter. These are called ofAll and withAll.

In java.util.Collection, there are methods for adding and removing elements to and from collections. They are named add, addAll, remove and removeAll. These four methods return boolean. This makes them unsuitable for writing code fluently.
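To see why a boolean return value blocks fluency, here is a minimal sketch using only java.util types (the class and variable names are mine, not part of any library):

```java
import java.util.HashSet;
import java.util.Set;

public class FluencyDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        // add returns a boolean indicating whether the set changed...
        boolean changed = set.add("1");
        // ...so calls cannot be chained; each element needs its own statement.
        set.add("2");
        set.add("3");
        // set.add("1").add("2"); // would not compile: boolean has no add method
        System.out.println(changed + " " + set.size());
    }
}
```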

We have our own Mutable interfaces in Eclipse Collections, so we knew we could fix the fluency problem by using one of the two prepositions. We decided to go with with, because with has a natural opposite: without.

Set<String> set =
    Sets.mutable.with("1", "2", "3")
        .with("4")
        .without("2");
Assert.assertEquals(Sets.mutable.with("1", "3", "4"), set);

This naming pattern also worked well when adding and removing elements via an Iterable.

Set<String> set =
    Sets.mutable.with("1", "2", "3")
        .withAll(Lists.mutable.with("4"))
        .withoutAll(Lists.mutable.with("1", "3"));
Assert.assertEquals(Sets.mutable.with("2", "4"), set);

As you can see, we have with, withAll, without and withoutAll as instance methods directly on our mutable collections. Instead of returning boolean like add or remove, these methods return this, which is the collection that the method is operating on. These methods have good symmetry with the existing methods on Collection that return boolean, and also with each other.

We extended this pattern to our immutable collections as well.

ImmutableSet<String> set =
    Sets.immutable.with("1", "2", "3")
        .newWithAll(Lists.mutable.with("4"))
        .newWithoutAll(Lists.mutable.with("1", "3"));
Assert.assertEquals(Sets.mutable.with("2", "4"), set);

In the mutable case, the withAll and withoutAll methods mutate the existing collection. In the newWithAll and newWithoutAll cases, a new collection is returned each time, thus preserving the immutability of the original collection.
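The distinction can be sketched with plain java.util collections as well (an analogy of mine, not code from Eclipse Collections): removeAll plays the role of withoutAll on the mutable side, while copy-then-remove mimics what newWithoutAll does on the immutable side.

```java
import java.util.HashSet;
import java.util.Set;

public class MutableVsImmutableDemo {
    public static void main(String[] args) {
        // Mutable case: removeAll changes the receiving set in place,
        // just as withoutAll mutates the existing collection.
        Set<String> mutable = new HashSet<>(Set.of("1", "2", "3"));
        mutable.removeAll(Set.of("1", "3"));
        System.out.println(mutable.size()); // the original instance shrank to 1

        // Immutable-style case: leave the original untouched and build a new
        // set, the way newWithoutAll returns a fresh collection each time.
        Set<String> original = Set.of("1", "2", "3");
        Set<String> without = new HashSet<>(original);
        without.removeAll(Set.of("1", "3"));
        System.out.println(original.size()); // still 3
        System.out.println(without.size());  // 1
    }
}
```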

Attack Of the Clones

The preposition of lost the battle of the instance-based collection factory methods in Eclipse Collections, because there is no good natural opposite for of like there is for with. That said, of is sometimes an important part of other method names in the Eclipse Collections API.

// Bag API - occurrencesOf
MutableBag<String> bag = Bags.mutable.with("1", "2", "3");
Assert.assertEquals(1, bag.occurrencesOf("2"));
// List API - indexOf
MutableList<String> list = Lists.mutable.with("1", "2", "3");
Assert.assertEquals(1, list.indexOf("2"));
// RichIterable API - sumOfInt, sumOfLong, sumOfFloat, sumOfDouble
long sum = list.sumOfInt(Integer::parseInt);
Assert.assertEquals(6L, sum);
// RichIterable API - selectInstancesOf
MutableList<String> filtered = list.selectInstancesOf(String.class);
Assert.assertEquals(list, filtered);

Revenge of the With

With methods in the RichIterable interface

With became more prevalent in the Eclipse Collections APIs when it was used to augment existing APIs like select, reject, collect, etc. The With methods in the RichIterable API were originally added as optimizations: they allowed us to make anonymous inner classes static by providing more opportunities to make them completely stateless. As a completely independent and accidental benefit, the With methods provide more opportunities to use Method References with Eclipse Collections APIs. This is a good thing, because I have a Method Reference Preference. Here are some examples of these methods with Method References, using the domain from the Eclipse Collections Pet Kata.

boolean any =
this.people.anySatisfyWith(Person::hasPet, PetType.CAT);

boolean all =
this.people.allSatisfyWith(Person::hasPet, PetType.CAT);

boolean none =
this.people.noneSatisfyWith(Person::hasPet, PetType.CAT);

Person found =
this.people.detectWith(Person::hasPet, PetType.CAT);

int count =
this.people.countWith(Person::hasPet, PetType.CAT);
Assert.assertEquals(2, count);

MutableList<Person> selected =
this.people.selectWith(Person::hasPet, PetType.CAT);
MutableList<Person> rejected =
this.people.rejectWith(Person::hasPet, PetType.CAT);
PartitionMutableList<Person> partition =
this.people.partitionWith(Person::hasPet, PetType.CAT);
Assert.assertEquals(selected, partition.getSelected());
Assert.assertEquals(rejected, partition.getRejected());

Good API design is hard, because naming is hard. It is a great feeling when you discover and use a name that communicates intent clearly to other developers. The best way to do that is to run your names by the other developers you work with to get a consensus before settling on a name. On the very rare occasions where a consensus is not possible (e.g. two equally good alternatives), either just pick a winner or take on the cost of providing both. My preference is almost always to just pick a winner and move on. Providing both of and with factory methods will hopefully remain a rare exception.

I hope you found this blog helpful. If you liked the blog, claps are appreciated.

by Donald Raab at November 15, 2017 09:53 PM

Help Pick the New Name for Java EE

by Mike Milinkovich at November 15, 2017 04:29 PM

This blog post is based on the text of Eclipse EE4J’s very first GitHub Issue. Please join the conversation over there!

We need a new brand name for the set of specifications that will be created by the new community process. This brand name will also become a certification mark in the industry for compatible, independent implementations. The open source projects that fall under the Eclipse EE4J top level project will be one such implementation. In short, we need a new name to replace “Java EE”. Much like the OpenJDK project implements the Java SE Platform specification, the EE4J projects will provide implementations of a set of specifications that we today call Java EE: we need a brand name for this set of specifications.

With this in mind, we are initiating a community process to select the brand name. This process will be managed by the EE4J Project Management Committee (“PMC”) with assistance from the Eclipse Management Organization (“EMO”). The name that is selected by this process must pass legal and other trademark searches to ensure that the names are available for use. As a result, it is possible that the favoured selection will not be the ultimate choice. The final decision will be made by the EMO Executive Director (“EMO(ED)”) in consultation with the PMC.

The process is described in greater detail below.


Names can be nominated by anyone in the community via this GitHub Issue record.

Nominations will be open from November 15 until November 30, 2017.

Naming Guidelines

All suggested names must conform to the following:

Any suggested names which fail to meet the above criteria will be rejected.

Name Selection Process

The process will be executed as follows:

  1. Members of the community will be invited to enter their nominations into the specified channel;
  2. At the end of the nomination period, the names suggested by the community will be reviewed by the PMC to identify those which meet the criteria specified by the naming guidelines (depending on the response, the PMC may decide to further reduce the list to a manageable size);
  3. The PMC will then initiate a community vote using the CIVS system (which will produce an overall ranking of the choices); and
  4. The results of the vote will be delivered to the EMO(ED) who will engage in the required legal and other trademark searches to ensure that the names are available for use, and consult with the PMC to make the final decision.

Since we have no idea what sort of community response to expect, it is difficult to time box anything other than the initial nomination process. But this will be an open and transparent process, and we invite the community to engage in all aspects of it. There is a great deal of legal, marketing, and community thought that goes into selecting an industry brand, so it’s important that we get this right. This may take a little time.

Filed under: Foundation

by Mike Milinkovich at November 15, 2017 04:29 PM

Open IoT Challenge 4.0 | Extended Deadline Nov 20

November 15, 2017 10:30 AM

One last week to submit your open IoT solution idea. You can then work on it until March 15. Submit now!

November 15, 2017 10:30 AM

Say Goodbye to the Eclipse Foundation Nova Theme

November 15, 2017 12:00 AM

The Eclipse Foundation is planning on removing a few deprecated components from our websites in an effort to reduce our code base.

We are using Bug 526827 - Fall clean up - to track the following tasks:

I believe this “clean up” exercise is our first step towards the execution of my long term plan for migrating our code base to follow recommendations from the PHP Framework Interop Group. I will be writing about this subject in a future blog post but for now, you can take a look at Bug 496514 - PHP sites maintained by the EF should follow recommendations from the Framework Interop Group.

Bug 468336 - Remove support for the Nova theme

During the summer of 2014, my team was responsible for implementing the redesign of our website via Bug 432342 - Website Redesign 2014. We created a new look and feel called Solstice, and since then we’ve been busy migrating all of our web properties.

Solstice look and feel

Today, I am happy to announce that it’s finally time to retire the Eclipse Nova theme. We are planning on removing the Nova theme on December 5th, 2017.

The Solstice look and feel will become the default theme for all web pages that are currently being served with Nova.

This change will affect the following websites:

Please use Bug 468336 - Remove support for the Nova theme - to discuss any concerns regarding this change.

November 15, 2017 12:00 AM

Debug Protocol: Coming soon to an IDE near you

by Tracy M at November 14, 2017 11:43 AM

Our absolute favourite type of talk to do is one where we’ve been working on something so cutting-edge that we don’t have all the answers yet. This was the case with our work on the debug protocol and our talk at EclipseCon Europe. We got to take where we’d gotten to and present it to a room full of smart people who were generous enough to show up, ask questions and give us their insightful feedback.

The talk gives an overview of the debug protocol then demonstrates it working in:

  • VS Code
  • Eclipse IDE
  • Eclipse Che

We demo 7 different types of debug functionality as supported by the protocol:

  1. Launch
  2. Source lookup
  3. Call stacks
  4. Breakpoints
  5. Variables
  6. Run control (step, continue, etc)
  7. Termination

OK, so we are amazing at building debuggers, but we are completely hopeless at repeating audience questions so they can be heard in talk recordings. The questions we were asked were really good, so here is my attempt to record them, repost the answers, and update them with the latest state of affairs.

Where can I find the code?

Here’s where you can find the git repo: But even better, we are actively working on pull requests so that the code will become part of the existing projects LSP4J, LSP4E and Eclipse Che. You can check out the first PR for LSP4J here:

Does the debug protocol support drop to frame? What about evaluate expressions? Hovers? Step into?

Definitely yes to evaluate expressions, hovers and step into. We don’t think it supports drop to frame; at least not obviously, though it may be there under different terminology.

Is the protocol extensible?

Yes, so you could add functionality like drop-to-frame. But the extensibility is not formalized so that is something we would like to help address.

Is the protocol blocking?

No, it is asynchronous.

Does it use JSON-RPC just like the Language Server Protocol?

No, it pre-dates JSON-RPC 2.0 and uses its own custom protocol, though it is similar to JSON-RPC. While it would be nice to have the same protocol in both places, it is probably not worth the effort to change this given the number of debug adapter implementations already out in the wild.

Also thanks to all those who left written feedback on the talk, especially the person who had this to say:

A really good initiative. Don’t over do it though by pushing making the protocol complex just to support features of one language, seems like it is a case for C/C++, i.e. registers support. It could be a separate extension.

Great point, thanks for the input!


by Tracy M at November 14, 2017 11:43 AM

Eclipse Vert.x meets GraphQL

by jotschi at November 14, 2017 12:00 AM

I recently added GraphQL support to Gentics Mesh and I thought it would be a good idea to boil down the essence of my implementation into an example so that I could share it in a simpler form. The example I’m about to show will not cover all aspects that I have added to the Gentics Mesh API (e.g. paging, search and error handling), but it will give you a basic overview of the parts that I put together. GraphQL does not require a graph database, even if the name might suggest it.

Using a graph database in combination with GraphQL does nevertheless provide some advantages, which I will highlight later on.

What is GraphQL? What is it good for?

GraphQL, as the name suggests, is a new query language which can be used to load exactly the data you ask for.

The query is defined in a way that its fields correlate to the JSON data being retrieved. In our Star Wars demo domain model, the following query loads the name of human 1001, which is Darth Vader.

  {
    vader: human(id: 1001) {
      name
    }
  }

Would result in:

  {
    "data": {
      "vader": {
        "name": "Darth Vader"
      }
    }
  }

The Demo App

The demo application I built makes use of the graphql-java library. The data is stored in a graph database; I use OrientDB in combination with the OGM Ferma to provide a data access layer. GraphQL does not necessarily require a graph database, but in this example I will make use of one and highlight the benefits of using one for my use case.

You can find the sources here:


The StarWarsData class creates a graph which contains the Star Wars movies, characters, planets and their relations. The model is fairly simple. There is a single StarWarsRoot vertex which acts as a start element for various aggregation vertices: movies are stored in MovieRoot, planets in PlanetsRoot, and characters in HumansRoot and DroidsRoot.

The model classes serve as wrappers for the specific graph vertices. The Ferma OGM is used to provide these wrappers. Each class contains methods which can be used to traverse the graph to locate the needed vertices. The found vertices are in turn wrapped again and can be used to locate other graph elements.


The next thing we need is the GraphQL schema. The schema describes each element which can be retrieved. It also describes the properties and relationships for these elements.

The graphql-java library provides an API to create the object types and schema information.

private GraphQLObjectType createMovieType() {
  return newObject().name("Movie")
    .description("One of the films in the Star Wars universe.")

    // .title
    .field(newFieldDefinition().name("title")
        .type(GraphQLString)
        .description("Title of the episode.")
        .dataFetcher((env) -> {
          Movie movie = env.getSource();
          return movie.getName();
        }))

    // .description
    .field(newFieldDefinition().name("description")
        .type(GraphQLString)
        .description("Description of the episode."))
    .build();
}

A type can be referenced via a GraphQLTypeReference once it has been created and added to the schema. This is especially important if you need to add fields which reference other types. Data fetchers are used to access the context, traverse the graph and retrieve properties from graph elements.

Another great source to learn more about the schema options is the GarfieldSchema example.

Finally, all the created types must be referenced by a central object type, QueryType. The query type object is basically the root object for the query. It defines what query options are initially possible. In our case it is possible to load the hero of the saga, specific movies, humans or droids.


The GraphQLVerticle is used to accept the GraphQL request and process it.

The verticle also contains a StaticHandler to provide the Graphiql Browser web interface. This interface will allow you to quickly discover and experiment with GraphQL.

The query handler accepts the query JSON data.

An OrientDB transaction is being opened and the query is executed:

demoData.getGraph().asyncTx((tx) -> {
    // Invoke the query and handle the resulting JSON
    GraphQL graphQL = newGraphQL(schema).build();
    ExecutionInput input = new ExecutionInput(query, null, queryJson, demoData.getRoot(), extractVariables(queryJson));
    tx.complete(graphQL.execute(input));
}, (AsyncResult<ExecutionResult> rh) -> {
    // Process the result (or errors) and send the HTTP response
});

The execute method initially needs a context variable. This context is passed along with the query. In our case the context is the root element of the graph, demoData.getRoot(). This context element also serves as the initial source for our data fetchers.

.dataFetcher((env) -> {
    StarWarsRoot root = env.getSource();
    return root.getHero();
})
The data fetcher for the hero type, on the other hand, will be able to access the hero element, since the fetcher above returned that element. Using this mechanism it is possible to traverse the graph. It is important to note that each invocation of the domain model methods will directly access the graph database. This way it is possible to influence the graph database query down to the lowest level. When a property is omitted from the GraphQL query, it will not be loaded from the graph. Thus there is no need to write an additional data access layer. All operations are directly mapped to the graph database.

The StarWarsRoot Ferma class getHero() method in turn defines a TinkerPop Gremlin traversal which is used to load the Vertex which represents the hero of the Star Wars saga.

Apache TinkerPop
Apache TinkerPop is an open source, vendor-agnostic, graph framework / API which is supported by many graph database vendors. One part of TinkerPop is the Gremlin traversal language which is great to query graph databases.

public Droid getHero() {
    // Follow the HAS_HERO edge and return the first Vertex which could be found.
    // Wrap the Vertex explicitly in the Droid Ferma class.
    return traverse((g) -> g.out(HAS_HERO)).nextOrDefaultExplicit(Droid.class, null);
}
Once the query has been executed the result handler is being invoked. It contains some code to process the result data and potential errors. It is important to note that a GraphQL query will always be answered with a 2xx HTTP status code. If an element which is being referenced in the query can’t be loaded an error will be added to the response JSON object.


Testing is fairly straightforward, although there are multiple approaches. One approach is to use unit testing directly on the GraphQL types. Another option is to run queries against the endpoint.

The GraphQLTest class I wrote will run multiple queries against the endpoint. A parameterized JUnit test is used to iterate over the queries.

A typical query does not only contain the query data. The assertions on the response JSON are directly included in the query using plain comments.

I built an AssertJ assertion which checks the comments of a query and verifies that each assertion matches the response.


Run the example

You can run the example by executing the GraphQLServer class and access the Graphiql browser on http://localhost:3000/browser/

Where to go from here?

The example is read-only. GraphQL also supports data mutation which can be used to actually modify and store data. I have not yet explored that part of GraphQL but I assume it might not be that hard to add mutation support to the example.

Additionally, the example does not cover how to actually make use of such an API. I recently updated my Vert.x example, which shows how to use Vert.x template handlers to build a small server that renders some pages using data loaded via GraphQL.

Thanks for reading. If you have any further questions or feedback don’t hesitate to send me a tweet to @Jotschi or @genticsmesh.

by jotschi at November 14, 2017 12:00 AM

Eclipse Oxygen.1a IDE Improvements: Java 9, JUnit 5, General, Gradle and PHP

by howlger at November 11, 2017 01:44 PM

Oxygen.1a (4.7.1a) was released two weeks after Oxygen.1 (4.7.1) on October 11, 2017. Oxygen.1 includes bug fixes and minor improvements. Since Oxygen.1a, the Eclipse IDE runs out of the box with Java 9 and supports development for Java 9 as well as testing with JUnit 5. Many thanks to all of you who have contributed in any way.

As usual I made a short video that shows some improvements in action:

Eclipse Oxygen.1a IDE Improvements: Java 9, JUnit 5, General, Gradle and PHP

Java 9

(see also Eclipse Project 4.7.1a New and Noteworthy – Java 9 and Eclipse Wiki – Java 9 Examples)

JUnit 5

(see also Eclipse Project 4.7.1a New and Noteworthy – JUnit 5)


General

(see also the closed bugs in Eclipse 4.7.1/4.7.1a)


Gradle

(see also Eclipse Buildship 2.1.0 New and Noteworthy)


PHP

(see also the closed bugs in PDT 5.1)

More details about Java 9, JUnit 5 and other Eclipse topics can be seen in the recently uploaded 177 EclipseCon Europe 2017 videos with a total duration of more than 3 days and 19 hours.

Next releases (calendar / ICAL):

Upgrade to Eclipse Oxygen.1a or download Eclipse Oxygen.1a and happy coding!

by howlger at November 11, 2017 01:44 PM

Replay EclipseCon Europe 2017

by Anonymous at November 10, 2017 01:04 PM

If you missed a talk at this year's conference, there's a good chance that you'll be able to replay it. You'll find most of the talks from this year are online.

by Anonymous at November 10, 2017 01:04 PM

Debug Protocol vs Language Server Protocol

by Tracy M at November 08, 2017 01:09 PM

It is safe to say that the Language Server Protocol (LSP) is the future of developer tools. When it comes to the equivalent for debugging, the debug protocol is ‘LSP for debuggers’. That is a useful tagline, but here are three key differences to be aware of:

  1. State – the big mindshift with LSP is that servers are stateless. The clients store all the state, e.g. the files themselves and settings (what language, classpath, etc.). Any state kept on the server side (e.g. the index) is usually purely for performance reasons. This means that, for instance, servers can be restarted and carry on seamlessly without having to know anything about what happened in the system beforehand. The debug protocol, on the other hand, is not stateless: servers need to know all sorts of state and sequences of what has happened, so the design space for this protocol is different.
  2. JSON-RPC 2.0 spec (and cancellable requests) – LSP defines JSON-RPC messages for requests, responses and notifications. The debug protocol was created before the JSON-RPC 2.0 spec was finalized; it uses a similar structure, but the two are not cross compatible. For example, the JSON field name for the method to call is command in the debug protocol and method in JSON-RPC 2.0. The type of message (event, request, or response) is explicit in the debug protocol’s type field, but implicit in JSON-RPC 2.0 (based on which combination of the method, id, result and error fields is present). However, a library like org.eclipse.lsp4j.jsonrpc can hide such differences, provide equivalent higher-level abstractions (like Java interfaces) and leave the library to handle the details. LSP also has a nice feature, cancellable requests, defined as an extension to JSON-RPC 2.0, which is not available in the debug protocol.
  3. Ubiquity – although LSP was defined originally for use in VS Code, did you know that the two flagship languages for VS Code are not based on LSP? TypeScript and JavaScript language tooling is not done using LSP. On the other hand, the debug protocol does underpin all the debugger implementations in VS Code, including the flagship node debuggers for V8 and Chrome.
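To make the field-name difference concrete, here is a sketch of the two request shapes side by side; the field names follow the two specs, while the surrounding Java and the chosen threadId are just illustration:

```java
import java.util.Map;

public class ProtocolShapes {
    public static void main(String[] args) {
        // A "next" (step over) request in the debug protocol: the message
        // kind is explicit in "type" and the call name lives in "command".
        Map<String, Object> debugProtocolRequest = Map.of(
                "seq", 1,
                "type", "request",
                "command", "next",
                "arguments", Map.of("threadId", 1));

        // The same call shaped as a JSON-RPC 2.0 request: the call name lives
        // in "method" and the kind is implied by which fields are present.
        Map<String, Object> jsonRpcRequest = Map.of(
                "jsonrpc", "2.0",
                "id", 1,
                "method", "next",
                "params", Map.of("threadId", 1));

        // Similar structure, different field names: not cross compatible.
        System.out.println(debugProtocolRequest.get("command"));
        System.out.println(jsonRpcRequest.get("method"));
    }
}
```

A library such as org.eclipse.lsp4j.jsonrpc can paper over exactly this kind of naming difference.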

All that being said, it’s worth repeating the common benefits of both protocols which are:

  • good separation of client/server
  • building the tooling once and reusing it across IDEs and editors, e.g. VS Code, Eclipse, Atom, etc.
  • better unit/isolated testing
  • write in language most suited to the job
  • more effective way for tool developers to keep pace with underlying tools

by Tracy M at November 08, 2017 01:09 PM

Accelerating the Adoption of Java Microservices with Eclipse MicroProfile

by Monica Beckwith at November 08, 2017 06:30 AM

InfoQ caught up with Emily Jiang from IBM to hear about the new Eclipse MicroProfile project and the new release, with extended fault tolerance and other new features.

By Monica Beckwith

by Monica Beckwith at November 08, 2017 06:30 AM

How to add Eclipse launcher in Gnome dock

by Lorenzo Bettini at November 07, 2017 01:00 PM

In this post I’ll show how to add an Eclipse launcher as a favorite (pinned) application in the Gnome dock (I’m using Ubuntu Artful). This post is inspired by

First of all, you need to create a .desktop file, where you need to specify the full path of your Eclipse installation:

[Desktop Entry]
Type=Application
Name=Eclipse Java
Comment=Eclipse is an IDE
Exec=/home/bettini/eclipse/java-latest-released/eclipse/eclipse
Icon=/home/bettini/eclipse/java-latest-released/eclipse/icon.xpm
Terminal=false

This is relative to my installation of Eclipse, which is in the folder /home/bettini/eclipse/java-latest-released/eclipse; note the executable “eclipse” and the “icon.xpm”. The name “Eclipse Java” is what will appear as the launcher name, both in Gnome applications and later in the dock.

Make this file executable.

Copy this file in your home folder in .local/share/applications.

Now in Gnome Activities search for such a launcher and it should appear:

Select it and make sure that Eclipse effectively runs.

Unfortunately, in the dock, there’s no contextual menu for you to add it as a favorite and pin it to the dock:

But you can still add it to the dock favorites (and thus pin it there) by using the corresponding contextual menu that is available when the launcher appears in the Activities:

And there you go: the Eclipse launcher is now on your dock and it’s there to stay 🙂



by Lorenzo Bettini at November 07, 2017 01:00 PM

Macro keyboard recording on Eclipse / LiClipse 4.3.1

by Fabio Zadrozny at November 07, 2017 12:21 PM

I'm currently working on a new engine for record and playback of keyboard macros in Eclipse. It's currently available as a project in the Eclipse incubation (see for more details and how to install it to Eclipse) and I hope it'll be a part of Eclipse 4.8 -- the current version should be stable already, but it needs some hardening, so, feedback is appreciated ;)

Also, I decided to integrate it into LiClipse too (so, the new 4.3.1 release of LiClipse adds support to record and playback of macros).

The idea is that this will fix a long-standing (and popular) feature request in Eclipse -- and a pet peeve of mine too, as it's not uncommon for me to copy a file to Notepad++ to do a record/playback and then copy it back to Eclipse... this really should be built in (as a note, rectangular editing in Eclipse, multiple cursors in LiClipse and regexps in the find dialog do solve a number of use-cases, but I find that record/playback is invaluable in some cases).

Now, why not just use an existing plugin? Well, I've used one at times but unfortunately I didn't find it as stable as I wanted -- it's really hard to come up with a stable recording of what's happening in Eclipse -- so I think an effective solution must really be built into Eclipse.

Also, in order to be feasible, there are some limitations on what it should accomplish: it's not a general Eclipse IDE record/playback, but rather record/playback in the scope of an editor, so it should properly record and play back the keystrokes in the editor and the actions resulting from such commands. (The implementation is done in a way which could support more general recording in Eclipse, but that would require changing many other parts of the IDE to be aware of the macro record/playback, so I'm not sure it'd be feasible.)

This means that some features which don't work through commands will have to be adapted. For instance, recording in the find dialog currently doesn't work because it doesn't go through commands and there's no way to listen for a find action being triggered from there... right now, the workaround is using the Incremental Find (Ctrl+J) and Find Next (Ctrl+K) commands when in record mode (my idea is to take a look at it after the macro record/playback engine is integrated into core Eclipse, as I don't think it's a blocker given there's a proper workaround through actual commands). The focus now is making it solid for record and playback of commands related to the text editor.

So, that's it, hope you enjoy macro recording on LiClipse 4.3.1 -- and hopefully on Eclipse itself later on ;)

by Fabio Zadrozny at November 07, 2017 12:21 PM

JBoss Tools and Red Hat Developer Studio for Eclipse Oxygen.1a

by jeffmaury at November 06, 2017 11:31 AM

JBoss Tools 4.5.1 and Red Hat JBoss Developer Studio 11.1 for Eclipse Oxygen.1a are here waiting for you. Check it out!



JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from our JBoss Products page and run it like this:

java -jar jboss-devstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio require a bit more:

This release requires at least Eclipse 4.7 (Oxygen) but we recommend using the latest Eclipse 4.7.1a Oxygen JEE Bundle since then you get most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat JBoss Developer Studio".

For JBoss Tools, you can also use our update site directly.

What is new?

Our main focus for this release was on adoption of Java 9, improvements for container-based development, and bug fixing. Eclipse Oxygen itself has a lot of cool new stuff, but let me highlight just a few updates in both Eclipse Oxygen and the JBoss Tools plugins that I think are worth mentioning.

OpenShift 3

CDK 3.2 Server Adapter

A new server adapter has been added to support the next generation of CDK 3.2. While the server adapter itself has limited functionality, it is able to start and stop the CDK virtual machine via its minishift binary. Simply hit Ctrl+3 (Cmd+3 on OSX) and type CDK; that will bring up a command to set up and/or launch the CDK server adapter. You should see the old CDK 2 server adapter along with the new CDK 3 one (labeled Red Hat Container Development Kit 3.2+).

cdk3.2 server adapter

All you have to do is set the credentials for your Red Hat account, the location of the CDK’s minishift binary file, the type of virtualization hypervisor and an optional CDK profile name.

cdk3.2 server adapter1

Once you’re finished, a new CDK Server adapter will then be created and visible in the Servers view.

cdk3.2 server adapter2

Once the server is started, Docker and OpenShift connections should appear in their respective views, allowing the user to quickly create a new OpenShift application and begin developing their AwesomeApp in a highly replicable environment.

cdk3.2 server adapter3
cdk3.2 server adapter4

New command to tune resource limits

A new command has been added to tune resource limits (CPU, memory) on an OpenShift deployment. It’s available for a Service, a DeploymentConfig, a ReplicationController or a Pod.

To activate it, go to the OpenShift explorer, select the OpenShift resource, right-click and select Edit resource limits. The following dialog will show up:

edit resource limits

After you change the resource limits for this deployment, it will be updated and new pods will be spawned (except for a ReplicationController).

edit resource limits1
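For reference, what the dialog edits underneath is the standard Kubernetes/OpenShift resources stanza on the container spec of the deployment; the names and values below are illustrative, not taken from the screenshots:

```yaml
spec:
  containers:
    - name: awesomeapp          # illustrative container name
      resources:
        requests:               # minimum guaranteed resources
          cpu: "100m"
          memory: "256Mi"
        limits:                 # hard caps enforced for the container
          cpu: "500m"
          memory: "512Mi"
```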

Discover Docker registry URL for OpenShift connections

When an OpenShift connection is created, the Docker registry URL is empty. When the CDK is started through the CDK server adapter, an OpenShift connection is created or updated if a matching one is found. But if you have several OpenShift connections, the remaining ones will be left with an empty URL.

You can find the matching Docker registry URL when editing the OpenShift connection through the Discover button:

edit connection discover

Click on the Discover button and the Docker registry URL will be filled if a matching started CDK server adapter is found:

 login

It is possible to log in from JBoss Tools to A single account will be maintained per workspace. Once you initially log in to, all needed account information (tokens, …) will be stored securely.

There are two ways to log in to

  • through the UI

  • via a third party service that will invoke the proper extension point

UI-based login to

In the toolbar, you should see a new icon. Click on it and it will launch the login.

If this is the first time you log in to, or if your account tokens are no longer valid, you should see a browser launched with the following content:

osio browser

Enter your RHDP login and the browser will then auto-close; an extract of the token (for security reasons) will be displayed:

osio token dialog

This dialog will also be shown if an account was configured in the workspace and the account information is valid.

Via extension point

The integration can be invoked by a third party service through the extension point. This extension point will perform the same actions as the UI but will return an access token for to the third party service. A detailed explanation of how to use this extension point is described here: Wiki page

You can display the account information using the Eclipse JBoss Tools → preference node. If your workspace does not contain an account yet, you should see the following:

osio preferences

If you have a configured account, you should see this:

osio preferences1

Server tools

EAP 7.1 Server Adapter

A server adapter has been added to work with EAP 7.1 and WildFly 11; it's based on WildFly 11. This new server adapter includes support for incremental management deployment, like its upstream WildFly 11 counterpart.

Fuse Tooling

Global Beans: improve support for Bean references

It is now possible to set Bean references from the user interface when creating a new Bean:

Create Factory Bean Reference

Editing Bean references is also now available on the properties view when editing an existing Bean:

Edit Factory Bean Reference

Additional validation has been added to help users avoid mixing Beans defined with class names and Beans defined referencing other beans.

Apache Karaf 4.x Server Adapter

We are happy to announce the addition of new Apache Karaf server adapters. You can now download and install Apache Karaf 4.0 and 4.1 from within your development environment.

Apache Karaf 4x Server Adapters

Switch Apache Camel Version

You can now change the Apache Camel version used in your project. To do that you invoke the context menu of the project in the project explorer and navigate into the Configure menu. There you will find the menu entry called Change Camel Version which will guide you through this process.

Switch Camel Version

Improved Validation

The validation in the editor has been improved to find containers which lack mandatory child elements (for instance, a Choice without a child element).

Improved validation

Java Development Tools (JDT)

Support for Java™ 9

Java™ 9 is here, and JDT fully supports it:

  • The Eclipse compiler for Java (ECJ) implements all the new Java 9 language enhancements

  • Significant features, such as the compiler, search and many editor features, have been updated to support Java Modules.

It is not mandatory to run Eclipse with a Java 9 runtime to get the Java 9 support. However, a Java 9 runtime is required on a project’s build path to compile a modular project against the system modules.

  • When a Java Runtime 9 is added to a project’s build path, the system modules are listed under the System library in the package explorer

java9 package explorer
  • An existing non-modular Java project can be quickly converted to a module by creating a for that project. This feature is available once the project has been moved to compliance 9

java9 create module
  • With Java 9 support, a library or a container can now be added to the module path as opposed to the classpath

java9 module path
  • Once a module has been added to a project’s module path, its encapsulation properties can further be modified by clicking on the Is Modular option and editing the Module properties. The following example shows how a module can be made to export its packages in the context of the current Java project

java9 module properties
  • Java search now includes a new search scope - Module

java9 module search
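To make the conversion mentioned above concrete, here is what a minimal might look like for a freshly modularized project; the module name and the commented-out package are illustrative, not from the release notes:

```java
// Minimal module descriptor for a converted project (illustrative name).
module com.example.myapp {
    // Read a platform module beyond the implicit java.base:
    requires java.logging;
    // An exports directive would make a package visible to other modules, e.g.:
    // exports com.example.myapp.api;
}
```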

Support for JUnit 5

JUnit 5 support is now available in Eclipse.

  • Create a new JUnit Jupiter test via the New JUnit Test Case wizard:

new junit jupiter test
  • Add JUnit 5 library to the build path

    • New JUnit Test Case wizard offers to add it while creating a new JUnit Jupiter test

add junit 5 lib
  • Quick Fix (Ctrl+1) proposal on @Test, @TestFactory, @ParameterizedTest and @RepeatedTest annotations

add junit 5 lib quick fix
  • Add JUnit library in Java Build Path dialog

add junit 5 lib java build path
  • Create a JUnit Jupiter test method with the new test_jupiter template

junit jupiter test method template
  • Create a @TestFactory method with the new test_factory template

junit jupiter test factory template
  • JUnit Jupiter’s Assertions, Assumptions, DynamicContainer and DynamicTest classes are now added to Eclipse Favorites by default

content assist favorites

This allows you to quickly import the static methods from these classes in your code via Content Assist (Ctrl + Space) and Quick Fix (Ctrl + 1).

  • View all the failures from grouped assertions in the same Result Comparison dialog opened from JUnit view

grouped assertions result comparison
  • View the number of disabled tests and tests with assumption failures on hover in JUnit view

skipped tests
  • Use Go to File action or just double-click to navigate to the test from JUnit view even when the test is displayed with a custom name

display name
  • (Re-)Run a single @Nested test class by using the Run action in JUnit view or Outline view. You can even right-click on a nested test class name in the editor and use the Run As action

run nested class
  • The Test Method Selection dialog in JUnit launch configuration now shows the method parameter types also

test method selection dialog
  • You can provide tags to be included in or excluded from a test run in the Configure Tags dialog of JUnit launch configuration

junit tags
  • If you are using an Eclipse workspace where you were running your JUnit 5 tests via @RunWith(JUnitPlatform.class) in an Eclipse without JUnit 5 support, you will have JUnit 4 as the test runner in those launch configurations. Before executing these tests in an Eclipse with JUnit 5 support, you should either change their test runner to JUnit 5 or delete them, so that new launch configurations are created with the JUnit 5 test runner when running the tests

test runner update

We do not support running tests in a setup where an old Eclipse build (not having JUnit 5 support) is using a new Eclipse build (having JUnit 5 support) as target. Also, developers who have the JDT JUnit runtime bundles (org.eclipse.jdt.junit.runtime, org.eclipse.jdt.junit4.runtime) checked out and pull the latest changes will run into the above issue. You are expected to use a new Eclipse build for the development.

And more…​

You can find more noteworthy updates on this page.

What is next?

Having JBoss Tools 4.5.1 and Developer Studio 11.1 out we are already working on the next maintenance release for Eclipse Oxygen.


Jeff Maury

by jeffmaury at November 06, 2017 11:31 AM

⦏Breaking News Eclipse Sirius⦎ SiriusCon 2017!

by Cédric Brun at November 06, 2017 12:00 AM

SiriusCon 2017 has never been so close! The third edition of this conference will take place this Thursday, the 9th of November, in beautiful Paris! A whole day of exclusive content: presentations with a specific focus on Eclipse Sirius itself, others about companion technologies, and feedback from users who used these technologies to build their custom tools! This year these tools cover activities ranging from risk analysis in industry supply chains to the design and management of REST APIs for an enterprise information system. One can learn a lot from the experiences of others, and I’m often amazed at the tools which are getting built.

We’ll start the day with a keynote by Mike Milinkovich, the executive director of the Eclipse Foundation. It’s an honor to host such an important actor in the Open-Source movement; he has a unique perspective on what has happened in the software industry over the last decade and how open source enabled worldwide collaborations. This keynote is especially relevant for such a conference, as Sirius is part of a rich ecosystem of technologies, products and companies and is an integral part of the Eclipse world.

The event also features talks with a particular focus: document generation from models, simulation and animation, diagram layout, using Xtext and Sirius, or learning how to smooth the user experience of your graphical modeler. Last but not least, there are tutorials during the afternoon to either discover the technology or sharpen your skills! Find out the complete program.

Content is one thing, but we are also organizing the event so that you will enjoy a nice time in a friendly atmosphere. This is the occasion to get in touch with the team, chat with other users, directly provide your feedback or get some direct help with the clinic. We’ll also prepare a showcase area to demonstrate tools and technologies, both for beginners and advanced users. Do yourself a favor and do not hesitate to reach out to people here.

You get exclusive content at SiriusCon, insights you can’t find anywhere else, sharp details about the technology, its use, deployment and direction, and you also get to enjoy lively chats in a delightful place, right next to the Montparnasse train station (at the Novotel Paris Vaugirard). The event is free but the room is limited, so make sure to register soon!

⦏Breaking News Eclipse Sirius⦎ SiriusCon 2017! was originally published by Cédric Brun at CTO CEO @ Obeo on November 06, 2017.

by Cédric Brun at November 06, 2017 12:00 AM

UnifiedMap: How it works?

by Nikhil Nanivadekar at November 05, 2017 03:14 PM

UnifiedMap: How it works?

Eclipse Collections comes with its own implementations of List, Set and Map. It also has additional data structures like Multimap, Bag and an entire primitive collections hierarchy. In this blog we will take a look under the hood of UnifiedMap.


UnifiedMap is the Map implementation of Eclipse Collections and is implemented very differently from a JDK HashMap.

java.util.HashMap is backed by a table of Entry objects. The Entry implementation in HashMap has hashcode, key, value and next as members; HashMap essentially caches the hashcode of keys.

UnifiedMap schematic

A UnifiedMap is backed by an array where keys and values occupy alternate slots (protected transient Object[] table). The flattened array structure stores only the keys and values, thereby creating a leaner Map implementation with a reduced memory footprint. An added benefit of having keys and values in consecutive slots is improved cache locality.

Collisions in the main array are handled by putting a special object called CHAINED_KEY in the key slot. The value slot of a CHAINED_KEY (the slot right after it) contains another Object[] array, called a CHAINED_OBJECT, with keys and values in alternate slots just like the main array.

Look-ups in UnifiedMap use a standard hashcode index algorithm to find the location of a key in the array. If the key in that slot is not a CHAINED_KEY, the next slot contains the value. If it is a CHAINED_KEY, the CHAINED_OBJECT is scanned linearly to find the required key, and the slot after the key in the CHAINED_OBJECT contains the value.

Since UnifiedMap does not cache the hashcode, it must be computed on every look-up. So the performance of UnifiedMap depends directly on the hashcode implementation of the key.
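To make the flattened layout concrete, here is a deliberately simplified sketch of the idea; this is not Eclipse Collections source (no resizing, and capacity must be a power of two), just an illustration of keys and values in alternate slots with a CHAINED_KEY sentinel for collisions:

```java
import java.util.Arrays;

// Simplified illustration of UnifiedMap's flattened storage (NOT the real implementation):
// keys sit at even indices, values at odd ones; colliding entries move into a nested
// Object[] chain marked by the CHAINED_KEY sentinel.
public class FlatMapSketch {
    private static final Object CHAINED_KEY = new Object();
    private final Object[] table;

    // capacity must be a power of two so the hashcode can be masked into a slot
    public FlatMapSketch(int capacity) {
        this.table = new Object[capacity * 2];
    }

    private int index(Object key) {
        return (key.hashCode() & (table.length / 2 - 1)) * 2; // always an even slot
    }

    public void put(Object key, Object value) {
        int i = index(key);
        Object cur = table[i];
        if (cur == null) {                       // empty slot: store the pair directly
            table[i] = key;
            table[i + 1] = value;
        } else if (cur == CHAINED_KEY) {         // already chained: scan/extend the chain
            Object[] chain = (Object[]) table[i + 1];
            for (int j = 0; j < chain.length; j += 2) {
                if (chain[j] == null) { chain[j] = key; chain[j + 1] = value; return; }
                if (chain[j].equals(key)) { chain[j + 1] = value; return; }
            }
            Object[] grown = Arrays.copyOf(chain, chain.length + 2);
            grown[chain.length] = key;
            grown[chain.length + 1] = value;
            table[i + 1] = grown;
        } else if (cur.equals(key)) {            // same key: overwrite the value
            table[i + 1] = value;
        } else {                                 // first collision: start a chain
            table[i + 1] = new Object[] { cur, table[i + 1], key, value };
            table[i] = CHAINED_KEY;
        }
    }

    public Object get(Object key) {
        int i = index(key);
        Object cur = table[i];
        if (cur == null) {
            return null;
        }
        if (cur == CHAINED_KEY) {                // linear scan of the chain
            Object[] chain = (Object[]) table[i + 1];
            for (int j = 0; j < chain.length; j += 2) {
                if (chain[j] != null && chain[j].equals(key)) {
                    return chain[j + 1];
                }
            }
            return null;
        }
        return cur.equals(key) ? table[i + 1] : null;
    }

    public static void main(String[] args) {
        FlatMapSketch m = new FlatMapSketch(4);
        m.put("one", 1);
        m.put("two", 2);
        System.out.println(m.get("one") + " " + m.get("two"));
    }
}
```

The real UnifiedMap also handles null keys, resizing and iteration, but the slot arithmetic above is the essence of why it needs roughly half the objects of an Entry-based table.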

Below are a few memory and performance comparisons between JDK 1.8 HashMap and Eclipse Collections 9.0.0 UnifiedMap.

Memory Footprint (the lower the number, the better)

Memory Comparison HashMap<Integer, Integer> vs UnifiedMap<Integer, Integer>
Memory Comparison HashMap<String, String> vs UnifiedMap<String, String>

Performance Tests (the higher the number, the better)

Source code for memory tests and performance tests is available on GitHub.

Summary: Eclipse Collections UnifiedMap has a ~50% smaller memory footprint than JDK HashMap, but that comes with a minor performance implication.

Eclipse Collections is open for contributions. If you like it, star it.

UnifiedMap: How it works? was originally published in Oracle Developers on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Nikhil Nanivadekar at November 05, 2017 03:14 PM

Eclipse DemoCamp Munich: Registration opens today

by Maximilian Koegel and Jonas Helming at November 03, 2017 03:20 PM

Eclipse Democamp Munich December 4th 2017 – Registration opens today!

We are pleased to invite you to participate in the Eclipse DemoCamp Munich 2017. The DemoCamp Munich is one of the biggest DemoCamps worldwide and therefore an excellent opportunity to showcase all the cool, new and interesting technology being built by the Eclipse community. This event is open to Eclipse enthusiasts who want to show demos of what they are doing with Eclipse.

Registration will open today, November 6th at 2.30 pm

Please click here for detailed information and the registration.

Seating is limited, so please register soon if you plan to attend.

We look forward to welcoming you to the Eclipse DemoCamp 2017!


The last DemoCamps have always been sold out. Due to legal reasons, we have a fixed limit for the room and cannot overbook. However, every year some places unfortunately remain empty. Please unregister if you find you are unable to attend so we can invite participants from the waiting list. If you are not attending and have not unregistered, we kindly ask for a donation of 10 Euros to “Friends of Eclipse”. Thank you in advance for your understanding!

If the event is sold out, you will be placed on the waiting list. You will be informed if a place becomes available, and you will need to confirm your attendance after we contact you.

If you are interested in giving a talk, please send your presentation proposal to for consideration. There are always more proposals than slots in the agenda, so we will make a selection from the submissions.

The post Eclipse DemoCamp Munich: Registration opens today appeared first on EclipseSource.

by Maximilian Koegel and Jonas Helming at November 03, 2017 03:20 PM

The language server protocol will revolutionize how you code

by Tracy M at November 03, 2017 12:26 PM


The next generation of Spring Boot tooling has been completely refactored to be based on the language server protocol (LSP). The tooling covers things like code assist for Spring Boot property files, support for Cloud Foundry manifest files, as well as Spring Boot support for Java files with typical features such as validation, content assist and quick fixes.

Beyond IDEs

This new generation of Spring tools, Spring Tools 4, will be fully unveiled at the SpringOne Platform conference later this year. Traditionally, Spring tooling was written in Java as Eclipse plugins and could not be reused in other developer environments. The language server protocol enables Spring tooling to go beyond Eclipse and provide similarly great user experiences for Atom and Visual Studio Code, and likely any platform which implements a language server client. The list of clients supporting LSP is growing fast, with even Vi and Emacs featuring it on their roadmaps. In fact, the notable exception is IntelliJ. The market leader already has a well-established language extension framework and the resources to keep up with language changes. It will be intriguing to see when and if they jump on the bandwagon and acknowledge the users banging the door down.
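For context, LSP is JSON-RPC exchanged between the editor (the client) and the language server. A completion request, for instance, looks roughly like this (simplified; the file URI and id are made up for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "textDocument/completion",
  "params": {
    "textDocument": { "uri": "file:///workspace/src/main/java/" },
    "position": { "line": 10, "character": 8 }
  }
}
```

The server answers with a list of completion items; because both sides only speak this protocol, the same server can serve Eclipse, VS Code or Atom.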

Every Programming Language that Matters

On the server side, the number of protocol implementations is growing fast: there were 27 implementations back in June; my current count stands at 46. LSP easily makes sense for newer languages such as TypeScript and Go, not to mention any domain-specific language. When it comes to languages with great IDE support, such as Java, LSP is a big step back in functionality. However, this could change quickly as the pace of Java releases quickens. Even for C/C++, LSP presents a chance to start afresh with a new approach.

LSP: The Good, Bad & Ugly

At EclipseCon Europe, Martin Lippert spoke about Pivotal’s journey to get Spring tooling to this stage and the good, bad and ugly of language server protocols. Here are some of the highlights of his insights.

Good: Write once, run everywhere

It is a no-brainer for toolsmiths to switch to something which enables them to create tooling just once and make it readily available across different tools.

  • building lightweight tooling (separation of client/server)
  • the freedom of starting fresh
  • building the tooling once and reusing it across various environments
  • isolated testing
  • writing tools in the language most suited to the job

Bad: More features, better integration?

LSP covers many language features but is not yet feature-complete. Syntax highlighting is not included, though it is readily handled by TextMate grammars. Other features, such as advanced refactoring, should fall within the remit of LSP, so hopefully this is only a matter of time.

Additionally, there is no communication between language servers. If the Spring language tooling provides completions for Java in a property file, will it be able to reuse an existing Java language server, or does it have to launch its own and redo all the indexing and parsing work? As things scale, it will be interesting to see how interactions between language servers can best be handled.


Ugly: More protocol please

The protocol itself isn't quite enough of a protocol yet and still leaves room for interpretation, which means the promise of client-agnostic language servers doesn't fully exist yet. Currently each client behaves differently: for example, each client has its own way to start a server and define an extension. For now this means feature flags per client to work around issues -- for example, magic indentation on multi-line code completion (the VS Code client indents using its own rules, while Atom defers to the language server). More standardization on client implementation specifics, and things like a client registry, would make all the difference here.

Language Server Protocol is the future of language tooling

What Martin’s talk and the Spring tooling underline is that there is no longer much debate about whether LSP is the future of language tooling. Now the conversation has shifted to what's missing and how we improve it. Developer communities and toolsmiths are restructuring to work together, be more effective and, as a result, revolutionize the landscape of our language tooling options.

by Tracy M at November 03, 2017 12:26 PM

Enter the Open IoT Challenge 4.0: some ideas for your submission

by Benjamin Cabé at November 03, 2017 09:34 AM

There are only a couple of weeks left (the deadline is Nov. 13) to enter the fourth edition of our Open IoT Challenge!

The Open IoT Challenge encourages IoT enthusiasts and developers to build innovative solutions for the Internet of Things using open standards and open source technology.

Open IoT Challenge 4.0

You probably remember last year’s edition and its winner, Sébastien Lambour, who built an amazing solution: InTheMoodForLife. The project aims at analyzing sleep patterns to anticipate mood disorder episodes for people suffering from bipolar disorder.

Sébastien won $3,000, and we also provided him with an additional $1,000 to fund his participation in Eclipse IoT events where he presented his project. For example, Sébastien attended the Eclipse IoT Day in London, a couple months ago, and gave a brilliant talk where he shared his experience.

As a reminder, there is already a nice incentive for you to enter the challenge, even if you feel like you can’t compete for the first place (and you shouldn’t feel like that, by the way!). In fact, if you are among the 10 best submissions, our jury will award you a $150 gift certificate that will help you buy the hardware you need to build your project!

I thought it would be useful to share some of the ideas I would like to see come to life, and some of the technologies that I think would be interesting to use.

Deep learning

With Deeplearning4j moving to Eclipse, now is the perfect time to think of having deep learning play a role in your project. Whether you plan on doing image recognition, predictive maintenance, or natural language processing, I am curious to see if Deeplearning4j, or other open source projects in the machine learning / deep learning area, can help you implement your solution. There are some IoT gateways out there that are fairly capable, and even some that include a GPU, so please be crazy and push the boundaries of edge computing! 🙂

Industrial IoT

Many – too many, probably – IoT solutions are very consumer oriented (connected toothbrush anyone?), so I certainly hope to see lots of projects that are more industry-focused. Our production performance management testbed can probably give you some pointers regarding open source projects that are applicable, and even provide you some code to get started.

Blockchain and distributed ledgers

Beyond the buzzword and the hype, blockchain is still an interesting topic, which you may want to put to use in your Open IoT Challenge project.

Security is one key aspect of IoT. Blockchain might be a way for you to secure communications or data exchanges, but it is also interesting to think about using distributed ledgers as a way to enable a sharing economy.

What if you could “share” some of your IoT devices’ processing or networking power and get compensated, in a secure way, using a blockchain? And how about true “pay-per-use” scenarios, where e.g construction companies share a pool of high-value power tools for which actual usage is tracked, logged, and billed accordingly, through a blockchain?

Of particular interest, in my opinion, is IOTA, an open source distributed ledger which, by design, has no transaction fees (it has no miners, as the 2,779,530,283,277,761 (!) tokens that form the ‘tangle’ were all generated in the so-called genesis transaction).

This means that you can leverage the IOTA tangle to implement secured micro-transactions in your IoT solution, e.g to expose (and monetize) sensor data to the world. I would be particularly curious to see how IOTA performs on constrained devices, and how well it scales.

Low-power, long-life

MangOH Red

One of our sponsors this year, Sierra Wireless, will be providing the 10 best proposals with a MangOH Red IoT board. This board is perfectly suited for low-power IoT applications and would be the ideal partner in crime if running for years out of a small battery is important to you.

Maintenance of the equipment is often an afterthought in IoT. I am interested in seeing how your proposal highlights how you plan to make your solution easy to operate, including things like over-the-air updates, energy consumption optimization, etc.

If you are not sure if your idea would make for a cool project, feel free to ping me, I will be happy to give you some feedback 🙂

I am looking forward to reviewing your proposals, and seeing all the cool projects you will be building over the next few months!

Submit by Nov. 13 for the IoT Challenge

And last but not least, big thanks to Bitreactive, CONTACT Software, Eurotech, Intel, Red Hat, and Sierra Wireless for sponsoring the Open IoT Challenge this year.

The post Enter the Open IoT Challenge 4.0: some ideas for your submission appeared first on Benjamin Cabé.

by Benjamin Cabé at November 03, 2017 09:34 AM

Save the date: Eclipse DemoCamp Munich on 4th of December 2017

by Maximilian Koegel and Jonas Helming at November 02, 2017 07:31 PM

We are pleased to invite you to participate in the Eclipse DemoCamp Munich 2017. As in past years, we continue our tradition of hosting a “Christmas DemoCamp”. Please save the date and mark your calendars: December 4th, 2017, for the next Eclipse DemoCamp in Munich!

We offer 80 seats, but we usually receive around 200 registrations. To give everybody the same chance, registration for the event will start exactly on November 6th, 2017, at 2 pm.

You will find more details on the event and the registration here.

If you are interested in giving a talk, please send your presentation proposal to for consideration. There are always more proposals than slots in the agenda, so we will make a selection from submitted proposals.

We are looking forward to your registration and seeing you in December!

The post Save the date: Eclipse DemoCamp Munich on 4th of December 2017 appeared first on EclipseSource.

by Maximilian Koegel and Jonas Helming at November 02, 2017 07:31 PM