Remote Services over (Unreliable) Networks

by Scott Lewis (noreply@blogger.com) at April 29, 2016 05:43 PM

In a previous post, I described how ECF Remote Services provided a way to create, implement, test, deploy and upgrade transport-independent remote services.

Note that 'transport-independent' does not mean 'transparent'.

For example, with network services it is always relatively likely that a remote service (whether an OSGi Remote Service or any other kind of service) will fail at runtime. The truth of this is encapsulated in the first fallacy of distributed computing: the network is reliable.

A 'transparent' remote services distribution system would attempt to hide this fact from the service designer and implementer. The ECF Remote Services approach, by contrast, allows one to choose one or more distribution providers that meet the reliability and availability requirements of a given remote service, and to change that selection later if requirements change.

Also, the dynamics of OSGi services (inherited by Remote Services) allow the distribution system to map a network failure to a dynamic service departure. For example, using Declarative Services, responding at the application level to a network failure can be as simple as implementing a method:

void unbindStudentService(StudentService service) {
    // Make the service unavailable to the application
    this.studentService = null;
    // Could also respond by binding to a backup service,
    // removing the service from the UI, etc.
}
This is only possible if:
  1. the distribution system detects the failure, and
  2. the distribution system maps the detected failure to an unregistration of the service proxy.
If those two things happen, then the proxy unregistration will result in DS calling the unbind method above.
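To make the dynamics concrete, here is a minimal sketch in plain Java (no OSGi dependency; class and method names are illustrative, not from any real API) of a component whose unbind callback falls back to a previously bound backup service, the way a DS component might when the distribution system unregisters a failed proxy:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: a component that tracks bound service instances and
// transparently falls back to a backup when the current one is unbound.
public class StudentServiceTracker {

    /** Stand-in for the remote service interface; name is illustrative. */
    public interface StudentService {
        String id();
    }

    private final Deque<StudentService> bound = new ArrayDeque<>();

    // Would be called by DS when a (possibly remote) service proxy appears.
    public synchronized void bindStudentService(StudentService service) {
        bound.push(service);
    }

    // Would be called by DS when the proxy is unregistered, e.g. after the
    // distribution system detected a network failure.
    public synchronized void unbindStudentService(StudentService service) {
        bound.remove(service);
    }

    // The application always uses whichever service is currently available.
    public synchronized StudentService current() {
        return bound.peek();
    }
}
```

The application code only ever asks for `current()`; the network failure itself is visible to it solely as a service departure.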
But these two requirements on the distribution system may not be satisfied by all distribution providers. For example, for a typical REST/HTTP-based service, there may be no way for the distribution system to detect a network failure, and so no unbinding of the service can occur.

The use of transport-independent remote services, along with the ability to choose, use, or create custom distribution providers as appropriate, allows microservice developers to deal easily with the realities of distributed computing, as well as with changing service requirements.


EMF Forms 1.8.0 Feature: New Group Rendering Options

by Maximilian Koegel and Jonas Helming at April 29, 2016 11:51 AM

With Mars.2, we released EMF Forms 1.8.0. EMF Forms makes it really simple to create forms that edit your data based on an EMF model. To get started with EMF Forms please refer to our tutorial. In this post, we wish to outline an important enhancement for rendering group elements in forms, which allows you to create form-based UIs even more efficiently.

The core of EMF Forms is the so-called view model, which essentially describes how a custom form-based UI should look. “Group” is one of the most frequently used layout elements in EMF Forms. A group can contain any view element, such as controls or other containers, and therefore enables you to structure a form. The group element is very flexible: it does not directly imply a certain way of being rendered. The standard EMF Forms renderer will render a group as an SWT Group:

[Screenshot: a group rendered as an SWT Group]

However, it is pretty common to provide custom renderers. This allows you to change the way groups are visualized in your custom product. The benefit is that you do not have to adapt your view models in any way; you just need to provide another renderer. As an example, a group can also be rendered as a Nebula PGroup, making it collapsible:

[Screenshot: a group rendered as a collapsible Nebula PGroup]

From various customer projects, we have learned that making a group collapsible is a fairly common need. To free adopters from having to implement a custom renderer for this, we have added it as an option in the view model itself, so you can now configure collapsibility for every group individually. If you open a group within your view model, you can change the “Group Type” to “Collapsible”. Additionally, you can then set the initial collapsed state. The EMF Forms default renderer will then render it like this, using an SWT ExpandBar:

[Screenshot: a collapsible group rendered with an SWT ExpandBar]

Another common issue with groups was that they are independent elements, which are also rendered independently. While this makes sense from a conceptual point of view, it sometimes produced unexpected results when it comes to layout. The following screenshot shows two groups below each other. As you can see, the layout is calculated independently for both groups; therefore, the controls are not aligned.

[Screenshot: two groups with independently calculated, misaligned layouts]

While this behavior is fine in some use cases, other users expect the alignment. In that case, the renderer of the group has to embed itself into a parent GridLayout, which is calculated and rendered for both groups together. This is now also supported by the EMF Forms default renderer. If you configure the “Group Type” to “Embedded”, the renderer will not create independent grids for every group, but rather embed them, producing a more homogeneous layout:

[Screenshot: two embedded groups sharing one aligned layout]

Of course, there are many more possible adaptations available for groups and other elements. To keep the view model language simple, we try to add only options that are commonly used across projects. However, by enhancing the existing renderers, all types of customization are possible. If you are missing a feature or a way to adapt something, please provide feedback by submitting bugs or feature requests, or contact us if you are interested in enhancements or support.





Configuring your Orion project

by Mike Rennie at April 28, 2016 04:00 PM

Everyone loves to customize stuff.

In Orion 11.0, we provided support for .tern-project files so you can do just that. A .tern-project file, for those unfamiliar with them, is a configuration file that lives at the root of your project, is entirely JSON, and tells Tern how and what to run with.

Let's look at an example file (the one used in the Orion client):

{
  "plugins": {
    "node": {},
    "requirejs": {},
    "express": {}
  },
  "ecmaVersion": 5,
  "libs": [
    "ecma5",
    "browser",
    "chai"
  ]
}

See? It’s not so bad. Now let's talk about what all the parts mean, and why you would want to make one.

The first thing typically asked when talking about these files and configuring your project is: “What if I don’t have a .tern-project file?”. The short answer is: we start Tern with a default configuration that contains every plugin we pre-package in Orion.

The longer answer is:

  1. you get all of the required core services plugins from Orion (like HTML support, parsing plugins, etc)
  2. you get all of the pre-packaged plugins (mongodb, redis, mysql, express, amqp, postgres, node, requirejs and angular)
  3. you get a default ECMA parse level of 6
  4. you get a default set of definitions (ecma5, ecma6, browser and chai)

Basically, everything will work right out of the box; the downside is that you get a lot more stuff loaded into Tern than you might need (or want).

ecmaVersion

This is the most common element that people create a .tern-project file for, and it's an easy one. If you are coding against ECMA 6, the defaults are fine. If you are not, it's best to change this to 5.

libs

This entry describes the type libraries you want Tern to use. These libraries provide type information that is used for content assist and inferencing. At the moment, only the definitions that come pre-packaged in Orion can be entered in this list, which include ecma5, ecma6, browser and chai.

I know what you are thinking. What if I have a definition not in the list I would like to use? We are working on support for custom definition files directly in the project, and providing a UI to install new ones.

plugins

This entry describes all of the plugins that you want Tern to run with. As mentioned, by default you get everything. Leave it out? You get everything. Empty plugins entry? You get everything.

Regardless of what you put in the plugins entry, you will always get the core services plugins that Orion needs to function – so no, you cannot break the tools.

While everything will work fine right out of the box with all plugins running, you can really improve the performance and memory usage of the tools if you tell us what support you actually need. For example, working on a node.js project? Only include the node plugin. Working on an AMD browser project? Only include the requirejs plugin. You get the idea – only load what you actually need.

At the moment, the list of plugins that can be enabled is amqp, angular, express, mongodb, mysql, node, postgres, redis, requirejs. And yes, we are working on ways to provide your own.

loadEagerly

This entry is a list of files that are so important that you just have to have Tern load them when it starts up. All joking aside, this entry is best for pointing Tern at the ‘main’ of your project. For example, say you were working on a web page. You could add index.html to the loadEagerly entry to have Tern automatically load that file and all the dependent scripts right away, so everything is primed and ready to go immediately (as opposed to Tern filling in its type information as you open and close files).
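Putting the entries discussed so far together, a trimmed-down .tern-project for a plain node.js project might look like this (the file path in loadEagerly is purely illustrative):

```json
{
  "ecmaVersion": 5,
  "libs": [
    "ecma5"
  ],
  "plugins": {
    "node": {}
  },
  "loadEagerly": [
    "lib/main.js"
  ]
}
```

Compared to the defaults, this loads only the node plugin and the ecma5 definitions, which keeps Tern's memory footprint down.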

dependencyBudget

Don’t change this entry. Set too low, it will cause dependency resolution to time out (resulting in incomplete type information). Set too high, it will make the IDE wait longer before answering whatever you asked it (content assist, open declaration, etc.).

So that's it. Those are all of the supported entries that can appear in the .tern-project file. A pretty short list.

You can’t break the tools by setting a bad entry – we have sanity checking that will revert to a “something is wrong, load the defaults” state. We also have linting that will alert you if you have broken something in the file.

[Screenshot: Tern project file linting]

Broke the file and then navigated to another file (or maybe brought in a change from git that broke the file)? We will alert you that it's bad in the banner, with a handy link to jump to the file and fix it!

[Screenshot: Tern project file banner message]

Remember, you can ignore all the warnings (if you really want to) and Tern will still start with the defaults as a safety measure.

Feeling a bit lazy and don’t want to type out a brand new file for your project? Just open up the new (empty) .tern-project and hit Ctrl+Space, which will auto-magically insert a shiny new template for you.

Happy configuring!



Why it’s time to kill the Eclipse release names: Neon, Oxygen, etc.

by Tracy M at April 28, 2016 01:26 PM

[Image: Eclipse release name logos – Neon, Mars, Luna]

I thought I’d heard all the arguments for why developers choose IntelliJ IDEA over Eclipse IDE, but this was a new one. I was at a meet-up with lots of Java developers, and inevitably the conversation turned to the topic of preferred Java IDE. One developer raised the point: ‘I never understand the different versions of Eclipse, you know, Luna, Mars, Neon – what does what, which is the Java one I should download? With IntelliJ it’s just IntelliJ Community or Ultimate, I know what to get.’ I had to stop myself from launching into a let-me-explain-it’s-simple-and-alphabetic explanation and instead just listened and looked around to see that others were nodding along in agreement with the speaker.

Not long after that I was reading this article: Kill extra brand names to make your open source project more powerful by Chris Grams. In the article, Grams talks about the ‘mental brand tax’ incurred when projects have additional brand names users are expected to understand. This was the name for what the developers were expressing. As Grams explains, “…having a bunch of different brand names can be exclusionary to those who do not have the time to figure out what they all are and what they all do.” This sounded like those developers, who are busy solving their own problems and keeping pace with the fast developments in software.

From my corporate days, engineering often had a working project name. For example, there were the projects named after US state names: ‘New Jersey’, ‘California’, etc. However, when it came to release, these internal names were always scrubbed out by the product marketing department and never referred to from the user perspective. In those cases it was easy to see how the project names could cause real confusion out in the world.

In Eclipse, the names are part of the development flow. It’s a nice way for the community to get together to choose them, and it is a common language for us as Eclipse developers to use. Often we don’t differentiate between developer-users and developer-extenders. We expect all users to know the names are alphabetic and follow a similar theme. But if you think about it, isn’t that just another level of abstraction put onto Eclipse versioning? Should these names really be going out to Eclipse users? Should we expect our users to know that Neon is the same as Eclipse 4.6, which is the same as the version that was released in 2016? Ditto for all previous versions? (And that is before we get into the different flavours of the IDE, e.g. Java, C/C++, etc.)

So what could we use instead? I don’t have all the answers, but I want to kick off the conversation with a proposal. As Grams summarizes, “Sometimes the most powerful naming strategy is an un-naming strategy”. What if we did that? The Eclipse simultaneous release happens reliably once a year. How about we use the year it comes out to version it? So this year, Neon would be Eclipse IDE 2016, Oxygen becomes Eclipse IDE 2017, and so on. The added benefit to users is that it becomes immediately obvious how old previous versions are. So instead of ‘Are you going to fix my bug in Luna?’ someone might ask ‘Are you going to fix my bug in Eclipse 2014?’ It would be more straightforward for them to see they are already 2-3 years behind the latest release.

As we, as a community, move towards thinking of and treating Eclipse more as a product, this is a change that could be well worth the effort. As Grams notes: “Just because you have a weak confederation of unsupported brands today doesn’t mean you can’t change. Try killing some brand names, replacing them with descriptors, and channel that power back into your core brand.”




Modeling at EclipseCon France

by Maximilian Koegel and Jonas Helming at April 28, 2016 12:10 PM

EclipseCon France is only a couple of weeks away. I’m looking forward to this great conference with a carefully selected program in a beautiful city. And I’m definitely looking forward to presenting three topics!

On Tuesday, the conference starts with the tutorials, and we begin with our tutorial on AngularJS – What every Java developer should know about AngularJS. This tutorial specifically addresses Java/Eclipse developers with no previous experience developing web frontends and provides a good hands-on introduction to AngularJS. So be sure to sign up and bring your own laptop!

If you would like to spice up your day with some more web development, we can also recommend the Che tutorial – Extending Eclipse Che to build custom cloud IDEs – about how to extend the new web-based IDE at Eclipse.

Getting back to AngularJS, we will also present an ignite talk on JSONForms. This component ships with EMF Forms and allows you to render an EMF-Forms-based form in an AngularJS-based application. With JSONForms, you can leverage the mature tooling of EMF Forms for modeling forms while developing state-of-the-art single-page web applications based on HTML5, CSS, JavaScript/TypeScript and JSON/JSON Schema.

If you are new to EMF Forms, you could also drop by my talk, “EMF, myself and UI”, on building UIs for EMF-based data. While preparing the presentation, I was amazed by all the features that have been added since I presented it at EclipseCon France last year. Here are just three of the important ones:

1. We have added AngularJS-based rendering as mentioned earlier.

[Screenshot: AngularJS-based rendering]

2. We have made it really simple to use the Treeviewer and Tableviewer components standalone.

[Screenshot: standalone Treeviewer and Tableviewer components]

3. And finally, we built a brand-new Ecore Editor with improved usability based on EMF-Forms.

[Screenshot: the new Ecore Editor]


In the talk we will also give you a sneak preview of unpublished features in the pipeline, so don’t miss it!

[Screenshot: preview of upcoming features]

Apart from the technical content, any EclipseCon is a good opportunity to meet the people behind the technology, to get in contact, and maybe to solve one or two technical problems on the spot.

We are looking forward to meeting you at EclipseCon France. Do not forget to register quickly as there is a discount if you register by May 10th. See you soon in Toulouse!





Samsung and Codenvy release Artik IDE for IoT

by Alex Blewitt at April 27, 2016 06:30 PM

Today at the Samsung Developers Conference, Codenvy announced the first public release of the Samsung Artik IDE which allows building applications for the Samsung Artik IoT devices.


Presentation: Let's Treat Eclipse Neon More Like a Product

by Wayne Beaton at April 26, 2016 03:00 PM

Wayne Beaton overviews the current state of Eclipse, discussing how to improve the user experience, support channels, and how to tap into the funding available to work on Eclipse IDE improvements.


Boost Productivity with MyEclipse—Project Setup

by Srivatsan Sundararajan at April 26, 2016 02:36 PM

MyEclipse supports a large number of features, ranging from Java EE support of specifications like JPA and JAX-RS, to modern web capabilities like Angular JS and CSS 3. Given the breadth of the IDE, it can be easy to miss key timesaving features that have been added to MyEclipse over the years. There are too […]

The post Boost Productivity with MyEclipse—Project Setup appeared first on Genuitec.



Using an e4view in combination with Guice injection

by Stefan Winkler (stefan@winklerweb.net) at April 26, 2016 10:21 AM

I am currently working for a customer on an existing Eclipse RCP application (based on Luna) which consists of 99% Eclipse 3.x API. The customer wants to migrate to e4 gradually, but there is no budget to migrate the existing code all at once. Instead, the plan is to start using e4 for new features and migrate the other code step by step.

So, when I was given the task of creating a new view, I wanted to use the "new" (in Luna, anyway) e4view element for the org.eclipse.ui.views extension point. The good thing about this is that you can easily write JUnit tests for the new class, because it is a POJO and does not have many dependencies.

My problem is that part of the customer's RCP uses Xtext and several components or "services" (not actual services in an OSGi sense) are available via Guice.

So I was confronted with the requirement to get a dependency available via Guice injected in an E4-style view implementation:

public class MyViewPart
{
    @Inject // <- should be injected via Guice
    ISomeCustomComponent component;

    @PostConstruct // <- should be called and injected via E4 DI
    public void createView(Composite parent)
    {
        // ...
    }
}


The usual way to get classes contributed via extension point injected by Guice is to use an implementation of AbstractGuiceAwareExecutableExtensionFactory like this:

<plugin>
   <extension
         point="org.eclipse.ui.views">
      <e4view
            class="my.app.MyExecutableExtensionFactory:my.app.MyViewPart"
            id="my.app.view"
            name="my view"
            restorable="true">
      </e4view>
   </extension>
</plugin>

The colon in the class attribute is usually interpreted by the framework in such a way that the class identified before the colon is instantiated as an IExecutableExtensionFactory, and the actual object, identified by the parameter given after the colon, is created by that factory.

But I did not expect this to work, because I thought it would bypass the e4 class creation mechanism. Actually, it seems to be the other way round: the e4view class element seems to ignore the extension factory and creates my.app.MyViewPart itself in order to inject it with e4 DI. MyExecutableExtensionFactory never gets called.

As I said, I didn't expect both DI frameworks to coexist without conflict, so I thought the solution to my problem would be to put the objects I need injected into the e4 context. After googling a bit, I found multiple approaches, and I didn't know which one was the "correct" or "nice" one.

Among the approaches I have found, there were:

  1. providing context functions which delegate to the guice injector
  2. retrieving the objects from Guice and configure them as injector bindings
  3. retrieving the objects from Guice, obtain a context and put them in the context

(The first two approaches are mentioned in the "Configure Bindings" section of https://wiki.eclipse.org/Eclipse4/RCP/Dependency_Injection)

I ended up trying all three, but could only get the third alternative to work.

This is what I tried:

Context Functions

I tried to register the context functions as services in the Bundle Activator with this utility method:

private void registerGuiceDelegatingInjection(final BundleContext context, final Class<?> clazz)
{
    IContextFunction func = new ContextFunction()
    {
        @Override
        public Object compute(final IEclipseContext eclipseContext, final String contextKey)
        {
            return guiceInjector.getInstance(clazz);
        }
    };

    ServiceRegistration<IContextFunction> registration =
            context.registerService(IContextFunction.class, func,
                    new Hashtable<>(Collections.singletonMap(
                            IContextFunction.SERVICE_CONTEXT_KEY, clazz.getName())));
}

and called registerGuiceDelegatingInjection() in the BundleActivator's start() method for each class I needed to be retrieved via Guice.

For some reason, however, this did not work. The service itself was registered as expected (I checked via the OSGi console), but the context function was never called. Instead, I got injection errors saying that the objects could not be found during injection.

Injector Bindings

I quickly found out that this solution does not work for me, because you can only specify an interface-class to implementation-class mapping of the form

InjectorFactory.getDefault().addBinding(IMyComponent.class).implementedBy(MyComponent.class)

You obviously cannot configure instances or factories this way, so this is not an option, because I need to delegate to Guice and get Guice-injected instances of the target classes...

Putting the objects in the context

Finally, the solution that worked for me was to get the IEclipseContext and put the required instances there myself during the bundle activator's start() method.

private void registerGuiceDelegatingInjection(final BundleContext context, final Class<?> clazz)
{
  IServiceLocator s = PlatformUI.getWorkbench();
  IEclipseContext ctx = (IEclipseContext) s.getService(IEclipseContext.class);
  ctx.set(clazz.getName(), guiceInjector.getInstance(clazz));
}

This works, at least for now. I am not sure how it will work out in the future if more bundles directly put instances into the context; maybe in the long term, named instances would be needed. Also, this works for me because the injected objects are singletons, so it does no harm to put single instances into the context.

I would have liked the context function approach better, but I could not get it to work so far.

Maybe one of you, the readers, can see my mistake. If so, please feel free to comment or to add an answer to my initial StackOverflow question.



IoT Standards and Remote Services

by Scott Lewis (noreply@blogger.com) at April 25, 2016 07:46 PM

In a recent posting, Ian Skerrett asks: Can open source solve the too-many-standards problem?

Even though it's clear that IoT developers want interoperability (the putative reason for many communications standards), there are other forces that lead to multiple competing standards efforts (e.g. technical NIH, the desire for emerging-market dominance, different layering, etc.).

I agree with Ian that open source provides one way out of this problem, as open implementations can provide interoperability much more quickly than formal standardization efforts. One doesn't have to look very far to see that this has been the case for previous communication and software standards efforts.

One complication for the IoT developer, however, is that they frequently need to make a choice... so that they can build their applications. This choice has risks: if the communications protocol one chooses does not end up being widely adopted, does not provide interoperability with the necessary systems, or is a poor fit for the application-level or non-functional needs of one's app (e.g. performance/bandwidth requirements, round trips, etc.), then it could mean a very costly re-architecture and/or re-implementation of one's app or service.

One way to hedge this risk is provided by ECF's implementation of Remote Services. OSGi Remote Services is a simple specification for exposing an arbitrary service for remote access. The spec says nothing about how the communication is done (protocol, messaging pattern, serialization), but rather identifies a pluggable distribution provider role that must be present for a remote service to be exported. Each service can be exported with a distinct distribution provider, and the decision about which provider is to be used is made at service registration time.

One effect of this is that the remote service can be declared, implemented, tested, deployed, used, and versioned without ever binding to a distribution system. In fact, it's possible to use one distribution provider to develop and test a remote service, and to deploy with a completely different distribution provider simply by changing the values of some service properties. With ECF's implementation, it's easy to either use an existing distribution provider or create your own (open source or not), using your favorite communications framework.
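As a sketch of what "changing the values of some service properties" means in practice, the helper below builds the export properties defined by the OSGi Remote Services specification. The property names (`service.exported.interfaces`, `service.exported.configs`) are from the spec; the provider config id `"ecf.generic.server"` and the registration call in the comment are illustrative assumptions:

```java
import java.util.Hashtable;

// Sketch: select a distribution provider purely via service properties
// at registration time. Swapping providers means changing one value.
public class ExportProperties {

    public static Hashtable<String, Object> forProvider(String configType) {
        Hashtable<String, Object> props = new Hashtable<>();
        // Export every interface under which the service is registered.
        props.put("service.exported.interfaces", "*");
        // The distribution provider to use; only this value changes
        // when deploying with a different provider.
        props.put("service.exported.configs", configType);
        return props;
    }

    // In an OSGi bundle one would then register the service with, e.g.:
    // bundleContext.registerService(StudentService.class, impl,
    //         forProvider("ecf.generic.server"));
}
```

The service implementation itself is untouched; only the properties passed at registration time differ between development and deployment.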

ECF Remote Services allows IoT developers maximum flexibility to meet their application's technical needs, now and in the future, without having to commit permanently to a single communication framework, transport, or standard.


A new interpreter for EASE (5): Support for script keywords

by Christian Pontesegger (noreply@blogger.com) at April 25, 2016 06:01 AM

EASE scripts registered in the preferences support a very cool feature: keyword support in script headers. While this does not sound extremely awesome, it allows you to bind scripts to the UI and will allow for more fancy stuff in the near future. Today we will add support for keyword detection in registered BeanShell scripts.

Read all tutorials from this series.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online. 

Step 1: Provide a code parser

Code parser is a big word. Currently, all we need to detect in the given script code are comments. As a corresponding base class already exists, all we need to do is provide a derived class indicating the comment tokens:
package org.eclipse.ease.lang.beanshell;

import org.eclipse.ease.AbstractCodeParser;

public class BeanShellCodeParser extends AbstractCodeParser {

    @Override
    protected boolean hasBlockComment() {
        return true;
    }

    @Override
    protected String getBlockCommentEndToken() {
        return "*/";
    }

    @Override
    protected String getBlockCommentStartToken() {
        return "/*";
    }

    @Override
    protected String getLineCommentToken() {
        return "//";
    }
}

Step 2: Register the code parser

Similar to registering the code factory, we also need to register the code parser. Open the plugin.xml, select the scriptType extension for BeanShell, and register the code parser from above there. Now EASE is able to parse script headers for keywords and interpret them accordingly.
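For illustration, a registered BeanShell script whose header the parser can now process might start like this (the keyword names and values below are illustrative examples of the `keyword : value` header style, not an authoritative list):

```
// name        : Sample Script
// toolbar     : Project Explorer
// description : Demonstrates keyword detection in a script header.

print("Hello from BeanShell");
```

The parser extracts the leading comment block, and EASE then interprets any `keyword : value` pairs it finds there.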



GEF4 Common Collections and Properties - Guava goes FX

by Alexander Nyßen (noreply@blogger.com) at April 25, 2016 03:20 AM

As a preparation for our upcoming Neon release, I intensively worked on adopting JavaFX collections and properties to the entire GEF4 code base (see #484774).

We had already used them before, but only in those parts of the framework that directly depend on the JavaFX toolkit. In all other places we had used our own extensions to Java Beans properties, because we did not want to introduce JavaFX dependencies there. However, as JavaFX is part of JavaSE 1.7 and 1.8, and its collections and properties are completely independent of the UI toolkit, we no longer considered the additional dependencies a real show stopper. Instead, we highly valued the chance to provide only a single notification mechanism to our adopters. The very rich possibilities offered by JavaFX's binding mechanism tipped the scales.

As JavaFX only provides observable variants of Set, Map, and List, I first had to create observable variants and related (collection) properties for the Google Guava collections we use (Multiset and SetMultimap). I had to dig deep into the implementation details of JavaFX collections and properties to achieve this, learned quite a lot, and found one or two oddities that I think are worth sharing. I also implemented a couple of replacement classes to fix problems in JavaFX collections and properties, which are published as part of GEF4 Common.

But before going into details, let me shortly explain what JavaFX observable collection and properties are about.

Properties, Observable Collections, and (Observable) Collection Properties


In general, a JavaFX property may be regarded as a specialization of a Java Beans property. What it adds is support for lazy computation of its value, as well as for the notification of invalidation listeners (a lazily computed value may need to be re-computed) and change listeners, which may, in contrast to Java Beans, be registered directly at the property, not at the enclosing bean.

Using JavaFX properties implies a certain API style (similar to Java Beans), as the bean is expected to provide a getter and setter to access the property value, as well as an accessor for the property itself. We may consider javafx.scene.Node as an example, which, amongst various others, provides a boolean pickOnBounds property that (is lazily created and) controls whether picking is computed by intersecting with the rectangular bounds of the node or not:

  public abstract class Node implements EventTarget, Stylable {
    ...
    private BooleanProperty pickOnBounds;
    
    public final void setPickOnBounds(boolean value) {
      pickOnBoundsProperty().set(value);
    }
    
    public final boolean isPickOnBounds() {
      return pickOnBounds == null ? false : pickOnBounds.get();
    }
    
    public final BooleanProperty pickOnBoundsProperty() {
      if (pickOnBounds == null) {
        pickOnBounds = new SimpleBooleanProperty(this, "pickOnBounds");
      }
      return pickOnBounds;
    }
  }

In addition to the notification support already mentioned, values of properties may be bound to values of others, or even to values computed by more complex expressions, via so-called bindings. This is a quite powerful mechanism that significantly reduces the need for custom listener implementations. If the pickOnBounds value of one node should for instance be kept equal to the pickOnBounds value of another, the following binding is all that is required:

  node1.pickOnBoundsProperty().bind(node2.pickOnBoundsProperty());

Binding it to a more complex boolean expression is pretty easy as well:

  node1.pickOnBoundsProperty().bind(node2.pickOnBoundsProperty().or(node3.visibleProperty()));

One can even define one's own bindings that compute the property value (lazily) based on the values of arbitrary other properties:

  node1.pickOnBoundsProperty().bind(new BooleanBinding() {
    {
      // specify dependencies to other properties, whose changes
      // will trigger the re-computation of our value
      super.bind(node2.pickOnBoundsProperty());
      super.bind(node3.layoutBoundsProperty());
    }
    
    @Override
    protected boolean computeValue() {
      // some arbitrary expression based on the values of our dependencies
      return node2.pickOnBoundsProperty().get() &&
             node3.layoutBoundsProperty().get().isEmpty();
    }
  });

JavaFX provides property implementations for all Java primitives (BooleanProperty, DoubleProperty, FloatProperty, IntegerProperty, LongProperty), a StringProperty, as well as a generic ObjectProperty, which can be used to wrap arbitrary object values. It is important to point out that an ObjectProperty will of course only notify invalidation and change listeners in case the property value is changed, i.e. it is altered to refer to a different object identity, not when changes are applied to the contained property value. Accordingly, an ObjectProperty that wraps a collection only notifies about changes in case a different collection is set as the property value, not when the currently observed collection is changed by adding elements to or removing elements from it:

  ObjectProperty<List<Integer>> observableListObjectProperty = new SimpleObjectProperty<>();
  observableListObjectProperty.addListener(new ChangeListener<List<Integer>>() {
    @Override
    public void changed(ObservableValue<? extends List<Integer>> observable,
                        List<Integer> oldValue, List<Integer> newValue) {
      System.out.println("Change from " + oldValue + " to " + newValue);
    }
  });
  
  // change listener will be notified about identity change from 'null' to '[]'
  observableListObjectProperty.set(new ArrayList<Integer>());
  // change listener will not be notified
  observableListObjectProperty.get().addAll(Arrays.asList(1, 2, 3));

This is where JavaFX observable collections come into play. As Java does not provide notification support in its standard collections, JavaFX delivers dedicated observable variants: ObservableList, ObservableMap, and ObservableSet. They all support invalidation listener notification (as properties do) and in addition define their own respective change listeners (ListChangeListener, MapChangeListener, and SetChangeListener).

ObservableList also extends List by adding setAll(E... elements) and setAll(Collection<? extends E> c), which combine a clear() with an addAll(Collection<? extends E> c) into a single atomic replace operation, as well as remove(int from, int to), which supports removal within an index interval. This makes it possible to 'reduce noise', which is quite important for a graphical framework like GEF, where complex computations might be triggered by changes.
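The difference can be sketched by counting change notifications; the following is a minimal, self-contained sketch (class and variable names are illustrative, only the standard javafx.collections API is assumed):

```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicInteger;

import javafx.collections.FXCollections;
import javafx.collections.ListChangeListener;
import javafx.collections.ObservableList;

public class SetAllDemo {
  public static void main(String[] args) {
    ObservableList<Integer> list = FXCollections.observableArrayList(1, 2, 3);
    final AtomicInteger notifications = new AtomicInteger();
    list.addListener(new ListChangeListener<Integer>() {
      @Override
      public void onChanged(Change<? extends Integer> change) {
        // count each change notification we receive
        notifications.incrementAndGet();
      }
    });

    // clear() followed by addAll() fires two separate change notifications
    list.clear();
    list.addAll(Arrays.asList(4, 5, 6));
    System.out.println(notifications.get()); // 2

    // setAll() performs the same replacement atomically: one notification
    notifications.set(0);
    list.setAll(Arrays.asList(7, 8, 9));
    System.out.println(notifications.get()); // 1
  }
}
```

A listener that re-computes expensive state on every notification thus runs half as often when the replacement is expressed via setAll().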

List changes are iterable, i.e. a single change may comprise several sub-changes, so that even a complex operation like setAll(Collection<? extends E> c) results in a single change notification:

  ObservableList<Integer> observableList = FXCollections.observableArrayList();
  observableList.addListener(new ListChangeListener<Integer>() {

    @Override
    public void onChanged(Change<? extends Integer> change) {
      // iterate through the sub-changes
      while (change.next()) {
        int from = change.getFrom();
        int to = change.getTo();
        if (change.wasReplaced()) {
          // replacement (simultaneous removal and addition in a continuous range)
          System.out.println("Replaced " + change.getRemoved()
              + " with " + change.getAddedSubList() + ".");
        } else if (change.wasAdded()) {
          // addition (added sublist within from-to range)
          System.out.println("Added " + change.getAddedSubList()
              + " within [" + from + ", " + to + ").");
        } else if (change.wasRemoved()) {
          // removal (change provides removed sublist and from index)
          System.out.println("Removed " + change.getRemoved() + " at " + from + ".");
        } else if (change.wasPermutated()) {
          // permutation (change provides mapping of old indexes to new indexes)
          System.out.print("Permutated within [" + from + ", " + to + "):");
          for (int i = from; i < to; i++) {
            System.out.print((i == from ? " " : ", ")
                + i + " -> " + change.getPermutation(i)
                + (i == to - 1 ? ".\n" : ""));
          }
        }
      }
    }
  });
  
  // one comprised change: 'Added [3, 1, 2] within [0, 3).'
  observableList.setAll(Arrays.asList(3, 1, 2));
  
  // one comprised change: 'Permutated within [0, 3): 0 -> 2, 1 -> 0, 2 -> 1.'
  Collections.sort(observableList);
  
  // one comprised change: 'Replaced [1, 2, 3] with [4, 5, 6].'     
  observableList.setAll(Arrays.asList(4, 5, 6));
  
  // two comprised changes: 'Removed [4] at 0.', 'Removed [6] at 1.'
  observableList.removeAll(Arrays.asList(4, 6));

Similar to properties, observable collections may even be used to establish bindings, via so-called content bindings:

  // ensure that elements of list are synchronized with that of observableList
  List<Integer> list = new ArrayList<>();
  Bindings.bindContent(list, observableList);
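A short usage sketch (illustrative names) of what such a content binding does: once established, every change to the observable list is replayed on the bound plain list.

```java
import java.util.ArrayList;
import java.util.List;

import javafx.beans.binding.Bindings;
import javafx.collections.FXCollections;
import javafx.collections.ObservableList;

public class ContentBindingDemo {
  public static void main(String[] args) {
    ObservableList<Integer> observableList = FXCollections.observableArrayList();
    List<Integer> list = new ArrayList<>();
    // keep the contents of 'list' synchronized with 'observableList'
    Bindings.bindContent(list, observableList);

    // additions to the observable list are propagated to the bound plain list
    observableList.addAll(1, 2, 3);
    System.out.println(list); // [1, 2, 3]

    // removals are propagated as well
    observableList.remove(Integer.valueOf(2));
    System.out.println(list); // [1, 3]
  }
}
```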

As such, observable collections are quite usable even without being wrapped into a property. As long as the identity of an observable collection is not to be changed, it may be exposed directly. And that's exactly how JavaFX uses them in its own API. As an example, consider javafx.scene.Parent, which exposes its children via an ObservableList:

  public abstract class Parent extends Node {
    ...
    protected ObservableList<Node> getChildren() {
      return children;
    }

    @ReturnsUnmodifiableCollection
    public ObservableList<Node> getChildrenUnmodifiable() {
      return unmodifiableChildren;
    }
  }

Wrapping it into a property, however, is required if a collection's identity is to be changed (in a way transparent to listeners) or if properties are to be bound to it. In principle, an observable collection could be wrapped directly into an ObjectProperty, but this has the disadvantage that two listeners are required if collection changes are to be properly tracked.

Consider an ObservableList wrapped into a SimpleObjectProperty as an example. While changes to the list can be observed by registering a ListChangeListener, a ChangeListener is additionally required to keep track of changes to the property's value itself (and to transfer the list change listener from an old property value to a new one):

  ObjectProperty<ObservableList<Integer>> observableListObjectProperty =
      new SimpleObjectProperty<>(FXCollections.<Integer> observableArrayList());

  final ListChangeListener<Integer> listChangeListener = new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> c) {
      // react to list changes
    }
  };

  // register list change listener at (current) property value
  observableListObjectProperty.get().addListener(listChangeListener);

  // register change listener to transfer list change listener
  observableListObjectProperty.addListener(new ChangeListener<ObservableList<Integer>>() {
    @Override
    public void changed(ObservableValue<? extends ObservableList<Integer>> observable,
                        ObservableList<Integer> oldValue,
                        ObservableList<Integer> newValue) {
      // transfer list change listener from old value to new one
      if (oldValue != null && oldValue != newValue) {
        oldValue.removeListener(listChangeListener);
      }
      if (newValue != null && oldValue != newValue) {
        newValue.addListener(listChangeListener);
      }
    }
  });

As this is quite cumbersome, JavaFX offers respective collection properties that can be used as an alternative: ListProperty, SetProperty, and MapProperty. They support invalidation and change listeners as well as the respective collection-specific listeners, and will even synthesize a collection change when the observed property value is changed:

  ListProperty<Integer> listProperty = new SimpleListProperty<>(
    FXCollections.<Integer> observableArrayList());
  
  final ListChangeListener<Integer> listChangeListener = new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> change) {
      // handle list changes
    }
  };
  listProperty.addListener(listChangeListener);
  
  // forwarded list change: 'Added [1, 2, 3] within [0, 3).'
  listProperty.addAll(Arrays.asList(1, 2, 3));
  
  // synthesized list change: 'Replaced [1, 2, 3] with [4, 5, 6].'
  listProperty.set(FXCollections.observableArrayList(4, 5, 6));

In addition, collection properties define their own (read-only) properties for emptiness, equality, size, etc., so that advanced bindings can also be created:

  // bind boolean property to 'isEmpty'
  BooleanProperty someBooleanProperty = new SimpleBooleanProperty();
  someBooleanProperty.bind(listProperty.emptyProperty());
  
  // bind integer property to 'size'
  IntegerProperty someIntegerProperty = new SimpleIntegerProperty();
  someIntegerProperty.bind(listProperty.sizeProperty());

While collection properties are thus not strictly required to get notified about collection changes (this is already possible using observable collections alone), they add quite some comfort when dealing with situations where collections may be replaced, or where bindings have to rely on certain properties (emptiness, size, etc.) of a collection.


GEF4 Common Collections


As already mentioned, GEF4 MVC uses some of Google Guava's collection classes, while JavaFX only offers observable variants of Set, Map, and List. In order to facilitate a uniform style for property change notifications across our complete code base, observable variants had to be created up front. That actually involved more design decisions than I had expected. To my own surprise, I also ended up with a replacement class for ObservableList and a utility class to augment the original API, because both seemed quite necessary in a couple of places.

Obtaining atomic changes

As this might not be directly obvious from what was said before, let me point out that only ObservableList is indeed capable of notifying about changes atomically in the way laid out before. ObservableSet and ObservableMap notify their listeners for each elementary change individually. That is, an ObservableMap notifies change listeners independently for each affected key, while an ObservableSet does likewise for each affected element. Calling clear() on an ObservableMap or ObservableSet can thus lead to numerous change notifications.

I have no idea why the observable collections API was designed in such an inhomogeneous way (it is discussed at JDK-8092534 without much more insight being provided), but I think that an observable collection should rather behave like ObservableList, i.e. fire only a single change notification per method call. If all required operations can be performed atomically via dedicated methods, a client can fully control which notifications are produced. As already laid out, ObservableList follows this to some extent with the additionally provided setAll() methods, which combine clear() and addAll() into a single atomic operation that would otherwise yield two notifications. However, an atomic move() operation is still lacking for ObservableList, so that movement of elements currently cannot be performed atomically.
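To make the missing move() concrete, here is a minimal sketch (illustrative names): relocating an element has to be expressed as a removal followed by an addition, which listeners observe as two separate changes instead of one atomic move.

```java
import java.util.concurrent.atomic.AtomicInteger;

import javafx.collections.FXCollections;
import javafx.collections.ListChangeListener;
import javafx.collections.ObservableList;

public class MoveDemo {
  public static void main(String[] args) {
    ObservableList<String> list = FXCollections.observableArrayList("a", "b", "c");
    final AtomicInteger notifications = new AtomicInteger();
    list.addListener(new ListChangeListener<String>() {
      @Override
      public void onChanged(Change<? extends String> change) {
        // count each change notification we receive
        notifications.incrementAndGet();
      }
    });

    // 'move' the first element to the end: remove + add, which
    // listeners observe as two distinct change notifications
    String element = list.remove(0);
    list.add(element);
    System.out.println(notifications.get()); // 2
  }
}
```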

When creating ObservableSetMultimap and ObservableMultiset, I tried to follow the contract of ObservableList for the reasons mentioned above. Both notify their listeners through a single atomic change per method call, which provides details about elementary sub-changes (related to a single element or key), very similar to ListChangeListener.Change. In accordance with the setAll() of ObservableList, I added a replaceAll() operation to both, offering an atomic operation via which the contents of the collections can be replaced. Change notifications are iterable, as for ObservableList:

  ObservableSetMultimap<Integer, String> observableSetMultimap = CollectionUtils.<Integer, String> observableHashMultimap();
  observableSetMultimap.addListener(new SetMultimapChangeListener<Integer, String>() {
    @Override
    public void onChanged(SetMultimapChangeListener.Change<? extends Integer,
                                                           ? extends String> change) {
      while (change.next()) {
        if (change.wasAdded()) {
          // values added for key
          System.out.println("Added " + change.getValuesAdded() 
                           + " for key " + change.getKey() + ".");
        } else if (change.wasRemoved()) {
          // values removed for key
          System.out.println("Removed " + change.getValuesRemoved() + " for key " 
                           + change.getKey() + ".");
        }
      }
    }
  });
  
  // one comprised change: 'Added [1] for key 1.'
  observableSetMultimap.put(1, "1");
  
  // one comprised change: 'Added [2] for key 2.'
  observableSetMultimap.put(2, "2");
  
  // two comprised changes: 'Removed [1] for key 1.', 'Removed [2] for key 2.'
  observableSetMultimap.clear();

I also thought about providing replacement classes for ObservableMap and ObservableSet that eliminate the inhomogeneity of the JavaFX collections API, but as this would have required extending their respective listener interfaces, I abstained.

Retrieving the "previous" contents of an observable collection

While in principle I like the API of ListChangeListener.Change, what really bothers me is that there is no convenience method to retrieve the old state of an ObservableList before it was changed. It has to be recomputed from the comprised addition, removal, and permutation sub-changes (the latter being propagated when sorting the list).

When creating ObservableMultiset and ObservableSetMultimap, I added a getPreviousContent() method to both, so clients can easily access the contents the collection contained before the change was applied. I also added a utility method (within CollectionUtils) that can be used to retrieve the previous contents of an ObservableList:

  ObservableList<Integer> observableList = FXCollections.observableArrayList();
  observableList.addAll(4, 3, 1, 5, 2);
  observableList.addListener(new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> change) {
      System.out.println("Previous contents: " + CollectionUtils.getPreviousContents(change));
    }
  });
  
  // Previous contents: [4, 3, 1, 5, 2]
  observableList.set(3, 7);
  // Previous contents: [4, 3, 1, 7, 2]
  observableList.clear();

Obtaining immutable changes from an observable collection

While ObservableList fires atomic changes, its change objects are not immutable (see JDK-8092504). Thus, when a listener manipulates the observed list in response to a change notification, the change object it is currently processing will itself be changed. This is a likely pitfall, as client code may not even be aware that it is called from within a change notification context. Consider the following snippet:


  ObservableList<Integer> observableList = FXCollections.observableArrayList();
  observableList.addListener(new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> change) {
      while (change.next()) {
        System.out.println(change);
      }
      // manipulate the list from within the change notification
      if (!change.getList().contains(2)) {
        ((List<Integer>) change.getList()).add(2);
      }
    }
  });
  // change list by adding element '1'
  observableList.add(1);

It will yield the following invalid change notifications:


  { [1] added at 0 }
  { [1, 2] added at 0 }

Again, I cannot understand why this was designed that way, but the documentation clearly states that the list may not be manipulated from within a change notification context, and the implementation enforces this consistently by finalizing the change object only after all listeners have been notified. In a framework like GEF, having non-immutable change objects is a real show stopper. Accordingly, I designed ObservableMultiset and ObservableSetMultimap to use immutable change objects only. I also created a replacement class for com.sun.javafx.collections.ObservableListWrapper (the class that is instantiated when calling FXCollections.observableArrayList()) that produces immutable change objects. It can be created using a respective utility method:

  ObservableList<Integer> observableList = CollectionUtils.observableArrayList();

When using the replacement class in the above quoted scenario, it will yield the following output:

  Added[1] at 0.
  Added[2] at 1.

As ObservableSet and ObservableMap are not capable of comprising several changes into a single atomic one, the immutability problem does not arise there, so the alternative ObservableList implementation is all that is needed.

GEF4 Common (Collection) Properties


In addition to the ObservableMultiset and ObservableSetMultimap collections, I also added respective properties (and related bindings) to wrap these values to GEF4 Common: SimpleMultisetProperty, ReadOnlyMultisetProperty, SimpleSetMultimapProperty, and ReadOnlySetMultimapProperty.

It should no longer be surprising that I not only created these, but also ended up with replacement classes for JavaFX's own collection properties (SimpleSetPropertyEx, ReadOnlySetWrapperEx, SimpleMapPropertyEx, ReadOnlyMapWrapperEx, SimpleListPropertyEx, ReadOnlyListWrapperEx), because a comparable, consistent behavior could otherwise not be guaranteed.

Stopping the noise

As I have elaborated quite intensively already, observable collections notify about element changes, whereas properties notify about (identity) changes of their contained (observable) value. Collection properties in addition forward all collection notifications, so that a replacement of the property's value is transparent to collection listeners.

However, this is not the full story. The observable collection properties offered by JavaFX fire change notifications even if the observed value did not change (JDK-8089169). That is, every collection change will not only lead to the notification of collection-specific change listeners, but also to the notification of all property change listeners:

  SimpleListProperty<Integer> listProperty = new SimpleListProperty<>(
    FXCollections.<Integer> observableArrayList());
  
  listProperty.addListener(new ChangeListener<ObservableList<Integer>>() {
    @Override
    public void changed(ObservableValue<? extends ObservableList<Integer>> observable,
                        ObservableList<Integer> oldValue,
                        ObservableList<Integer> newValue) {
      System.out.println("Observable (collection) value changed.");
    }
  });
  
  listProperty.addListener(new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> c) {
      System.out.println("Collection changed.");
    }
  });
  
  // will (incorrectly) notify (property) change listeners in addition to
  // list change listeners
  listProperty.add(5);
  
  // will (correctly) notify list change listeners and (property) change
  // listeners
  listProperty.set(FXCollections.<Integer> observableArrayList());

As this leads to a lot of unwanted noise, I have ensured that MultisetProperty and SetMultimapProperty, which are provided as part of GEF4 Common, do not babble likewise. I ensured that the replacement classes we provide for the JavaFX collection properties behave accordingly, too.

Guarding notifications consistently

JavaFX observable collections capture all exceptions that occur during a listener notification. That is, the listener notification is guarded as follows:

  try {
    listener.onChanged(change);
  } catch (Exception e) {
    Thread.currentThread().getUncaughtExceptionHandler()
        .uncaughtException(Thread.currentThread(), e);
  }

Interestingly, JavaFX properties don't behave like that. Thus, all observable collection properties are inconsistent in the sense that collection change listener invocations will be guarded, while (property) change listener notifications won't. Again, I have ensured that the collection properties provided by GEF4 Common show consistent behavior by guarding all their listener notifications, and I have done likewise for the replacement classes we provide.
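The guarded behavior of the observable collections can be illustrated with the following sketch (illustrative names; it assumes replacing the current thread's uncaught exception handler is acceptable): a throwing collection listener does not abort the mutating call, and the exception ends up at the thread's uncaught exception handler instead.

```java
import java.util.concurrent.atomic.AtomicReference;

import javafx.collections.FXCollections;
import javafx.collections.ListChangeListener;
import javafx.collections.ObservableList;

public class GuardedNotificationDemo {
  public static void main(String[] args) {
    final AtomicReference<Throwable> caught = new AtomicReference<>();
    Thread.currentThread().setUncaughtExceptionHandler(
        new Thread.UncaughtExceptionHandler() {
          @Override
          public void uncaughtException(Thread thread, Throwable e) {
            // record the exception that was routed to us
            caught.set(e);
          }
        });

    ObservableList<Integer> list = FXCollections.observableArrayList();
    list.addListener(new ListChangeListener<Integer>() {
      @Override
      public void onChanged(Change<? extends Integer> change) {
        throw new IllegalStateException("listener failure");
      }
    });

    // the add() call completes despite the throwing listener ...
    list.add(1);
    System.out.println(list); // [1]
    // ... and the exception was routed to the uncaught exception handler
    System.out.println(caught.get().getMessage()); // listener failure
  }
}
```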

Fixing further issues and ensuring JavaSE-1.7 compatibility

Having the replacement classes at hand, I also fixed some further inconsistencies that bothered us quite severely. This includes the recently introduced regression that JavaFX MapProperty removes all attached change listeners when just one is to be removed (JDK-8136465), as well as the fact that read-only properties can currently not be properly bound (JDK-8089557).

As GEF4 still aims at providing support for JavaSE-1.7, I tried to ensure that all collection properties we provide, including the replacement classes, can also be used in a JavaSE-1.7 environment. This could be achieved, except that the static methods JavaFX offers to create and remove bindings cannot be applied (the binding mechanism was completely revised between JavaSE-1.7 and JavaSE-1.8).

While we make intensive use of GEF4 Common Collections and Properties in our own GEF4 framework, GEF4 Common can be used independently. If you need observable variants of Google Guava's Multiset or SetMultimap, or if you want to get rid of the aforementioned inconsistencies, it may be worth taking a look.

by Alexander Nyßen (noreply@blogger.com) at April 25, 2016 03:20 AM

IAdaptable - GEF4's Interpretation of a Classic

by Alexander Nyßen (noreply@blogger.com) at April 24, 2016 04:07 PM

Adaptable Objects as a Core Pattern of Eclipse Core Runtime

The adaptable objects pattern is probably the most important one used by the Eclipse core runtime. Formalized by the org.eclipse.core.runtime.IAdaptable interface, an adaptable object can easily be queried by clients (in a type-safe manner) for additional functionality that is not included within its general contract.
public interface IAdaptable {
  /**
   * Returns an object which is an instance of the given class
   * associated with this object. Returns <code>null</code> if
   * no such object can be found.
   *
   * @param adapter the adapter class to look up
   * @return an object castable to the given class, 
   *    or <code>null</code> if this object does not
   *    have an adapter for the given class
   */
   public Object getAdapter(Class adapter);
}
From another viewpoint, if an adaptable object properly delegates its getAdapter(Class) implementation to an IAdapterManager (most commonly Platform.getAdapterManager()) or provides a respective proprietary mechanism on its own, it can easily be extended with new functionality (even at runtime), without any need for local changes, and adapter creation can flexibly be handled through a set of IAdapterFactory implementations.
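As a self-contained sketch of the pattern (all names are illustrative; a real Eclipse adaptable would delegate to Platform.getAdapterManager() rather than a local map):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for org.eclipse.core.runtime.IAdaptable
interface Adaptable {
  Object getAdapter(Class<?> adapter);
}

// An adaptable whose adapters live in a local registry; querying for an
// unregistered type yields null, just as the IAdaptable contract specifies.
class Document implements Adaptable {
  private final Map<Class<?>, Object> adapters = new HashMap<>();

  <T> void registerAdapter(Class<T> type, T adapter) {
    adapters.put(type, adapter);
  }

  @Override
  public Object getAdapter(Class<?> adapter) {
    return adapters.get(adapter);
  }
}

public class AdaptableDemo {
  public static void main(String[] args) {
    Document document = new Document();
    document.registerAdapter(Runnable.class, new Runnable() {
      @Override
      public void run() {
        System.out.println("run!");
      }
    });

    // query the adaptable for additional functionality in a type-safe manner
    Runnable runnable = (Runnable) document.getAdapter(Runnable.class);
    runnable.run(); // prints "run!"
    System.out.println(document.getAdapter(String.class)); // null
  }
}
```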

Why org.eclipse.core.runtime.IAdaptable is not perfectly suited for GEF4 

As it has proven its usefulness in quite a number of places, I considered the adaptable objects pattern to be quite a good candidate to deal with the configurability and flexibility demands of a graphical editing framework as well. I thus wanted to give it a major role within the next generation API of our model-view-controller framework (GEF4 MVC).

GEF4 MVC is the intended replacement of GEF (MVC) 3.x as the core framework, from which to build up graphical editors and views. As it has a high need for flexibility and configurability it seemed to be the ideal playground for adaptable objects. However, the way the Eclipse core runtime interprets the adaptable objects pattern does not make it a perfect match to fulfill our requirements, because:
  • Only a single adapter of a specific type can be registered at an adaptable. Registering two different Provider implementations at a controller (e.g. one to specify the geometry used to display feedback and one to depict where to layout interaction handles) is for instance not possible.
  • Querying (and registering) adapters for parameterized types is not possible in a type-safe manner. The class-based signature of getAdapter(Class) does for instance not allow to differentiate between a Provider<IGeometry> and a Provider<IFXAnchor>.
  • IAdaptable only provides an API for retrieving adapters, not for registering them, so (re-) configuration of adapters at runtime is not easily possible. 
  • Direct support for 'binding' an adapter to an adaptable object, i.e. to establish a reference from the adapter to the adaptable object, is not offered (unless the adapter explicitly provides a proprietary mechanism to establish such a back-reference).

Adaptable Objects as Interpreted by GEF4 Common

I thus created my own interpretation of the adaptable objects pattern, formalized by org.eclipse.gef4.common.adapt.IAdaptable. It is provided by the GEF4 Common component and can thus easily be used standalone, even in applications that have no use for graphical editors or views (GEF4 Common only requires Google Guice and Google Guava to run).

AdapterKey to combine Type with Role

Instead of a simple Class-based type key, adapters may now be registered by means of an AdapterKey, which combines a (Class- or TypeToken-based) type key (to retrieve the adapter in a type-safe manner) with a String-based role.

The combination of a type key with a role makes it possible to retrieve several adapters of the same type under different roles. Two different Provider implementations can for instance now easily be retrieved (to provide independent geometric information for selection feedback and selection handles) through:

  getAdapter(AdapterKey.get(new TypeToken<Provider<IGeometry>>(){}, "selectionFeedbackGeometryProvider"))
  getAdapter(AdapterKey.get(new TypeToken<Provider<IGeometry>>(){}, "selectionHandlesGeometryProvider"))

TypeToken instead of Class

The second significant difference is that a com.google.common.reflect.TypeToken (provided by Google Guava) is used as a more general concept instead of a Class, which enables parameterized adapters to be registered and retrieved in a type-safe manner as well. A geometry provider can for instance now easily be retrieved through getAdapter(new TypeToken<Provider<IGeometry>>(){}), while an anchor provider can alternatively be retrieved through getAdapter(new TypeToken<Provider<IFXAnchor>>(){}). For convenience, retrieving adapters by means of Class-based type keys is also supported (such keys are internally converted into a TypeToken).

IAdaptable as a local adapter registry

In contrast to the Eclipse core runtime interpretation, an org.eclipse.gef4.common.adapt.IAdaptable has the obligation to provide means not only to retrieve adapters (getAdapter()) but also to register or unregister them (setAdapter(), unsetAdapter()). This way, the 'configuration' of an adaptable can easily be changed at runtime, even without providing an adapter manager or factory.

Of course this comes at the cost that an org.eclipse.gef4.common.adapt.IAdaptable is itself responsible for maintaining the set of registered adapters. This (and the fact that the interface contains a lot of convenience functions) is balanced by the fact that a base implementation (org.eclipse.gef4.common.adapt.AdaptableSupport) can easily be used as a delegate to realize the IAdaptable interface.

IAdaptable.Bound for back-references

If adapters need to be 'aware' of the adaptable they are registered at, they may implement the IAdaptable.Bound interface, which is used to establish a back-reference from the adapter to the adaptable. It is part of the IAdaptable contract that an adapter implementing IAdaptable.Bound will be provided with a back-reference during registration (if an adaptable uses org.eclipse.gef4.common.adapt.AdaptableSupport to internally realize the interface, this contract is of course guaranteed).

IAdaptables and Dependency Injection

While the possibility to re-configure the registered adapters at runtime is quite helpful, proper support for creating an initial adapter configuration during instantiation of an adaptable is also important. To properly support this, I integrated the GEF4 Common adaptable objects mechanism with Google Guice.

That is, the adapters that are to be registered at an adaptable can be configured in a Guice module, using a specific AdapterMap binding (which is based on Guice's multi-bindings). Registering an adapter of type VisualBoundsGeometryProvider at an FXGeometricShapePart adaptable can for instance be performed using the following Guice module configuration:
  protected void configure() {
    // enable adapter map injection support
    install(new AdapterInjectionSupport());
    // obtain map-binder to bind adapters for FXGeometricShapePart instances
    MapBinder<AdapterKey<?>, Object> adapterMapBinder =
        AdapterMaps.getAdapterMapBinder(binder(), FXGeometricShapePart.class);
    // bind geometry provider for selection handles as adapter on FXGeometricShapePart
    adapterMapBinder.addBinding(AdapterKey.role("selectionHandlesGeometryProvider"))
        .to(VisualBoundsGeometryProvider.class);
    ...
  }
It will not only inject a VisualBoundsGeometryProvider instance as an adapter into all direct instances of FXGeometricShapePart but also into all instances of its sub-types, which may be seen as a sort of 'polymorphic multi-binding'.

Two prerequisites have to be fulfilled in order to make use of adapter injection:
  1. Support for adapter injection has to be enabled in your Guice module by installing an org.eclipse.gef4.common.inject.AdapterInjectionSupport module, as outlined in the snippet above.
  2. The adaptable (here: FXGeometricShapePart.class) or any of its super-classes has to provide a method that is eligible for adapter injection:

  @InjectAdapters
  public <T> void setAdapter(TypeToken<T> adapterType, T adapter, String role) {
    // TODO: implement (probably by delegating to an AdaptableSupport)
  }

GEF4 MVC makes use of this mechanism quite intensively for the configuration of adapters (indeed, within the MVC framework, more or less everything is an adapter). However, similar to the support for adaptable objects itself, the related injection mechanism is easily usable in a standalone scenario. Feel free to do so!


Eclipse Neon: disable the theming

April 23, 2016 10:00 PM

At Devoxx France 2016, Mikaël Barbero gave a great talk about the Eclipse IDE. The talk was well attended (the room was almost full). This is proof that there is still a lot of interest in the Eclipse IDE.

    2016-04-24_room

    The talk was a great presentation of all the improvements made to the IDE (already implemented with Mars, or coming with Neon or Oxygen). It was a big new-and-noteworthy, organized by the main categories that matter to users rather than by Eclipse projects. I really appreciated this approach.

    If you understand French, I recommend watching the video of the talk. Otherwise, I am sure you will learn something just by looking at the slides.

    Something I have learned: with Neon you can deactivate the theming completely (in the Appearance section of the preferences). In that case the CSS styling engine is deactivated and your Eclipse IDE will have a really raw look. To disable the theming, just uncheck the checkbox highlighted in Figure 2.

    Eclipse preferences > General > Appearance (Eclipse Neon)

    2016-04-24_preferences_appearance

    After a restart your Eclipse will look like this screenshot (Figure 3):

    Eclipse IDE with disabled theming

    2016-04-24_eclipse_neon_disabled_theming

    I hope performance will be better, in particular when the Eclipse IDE is used in remote virtualized environments like Citrix clients. If you want to test it now, download a milestone release of Neon.


    April 23, 2016 10:00 PM

    StringBuffer and StringBuilder performance with JMH

    April 23, 2016 09:00 PM

    Last week, Doug Schaefer wished on Twitter that other Eclipse projects were getting the same kind of contribution love as Platform UI. Lars Vogel attributed that to the effort in cleaning up the codebase and the focus on new contributions and contributors.

    I thought I’d spend some time helping out CDT in assisting with this effort, and over the past week or so have been sending a few patches that way. Fortunately Sergey Prigogin has been an excellent reviewer, turning around my patches in a matter of hours in some cases, and that in turn has meant that I’ve been able to make further and faster progress than on some of the other projects I’ve tried contributing improvements to.

    Most recently I’ve been looking into optimising some of the StringBuffer code and thought I’d go into a little bit of detail about the performance aspects of these changes.

    The TL;DR of this post is:

    • StringBuilder is better than StringBuffer
    • StringBuilder.append(a).append(b) is better than StringBuilder.append(a+b)
    • StringBuilder.append(a).append(b) is better than StringBuilder.append(a); StringBuilder.append(b);
    • StringBuilder.append() and + are only equivalent provided that they are not nested and you don’t need to pre-size the builder
    • Pre-sizing the StringBuilder is like pre-sizing an ArrayList; if you know the approximate size you can reduce the garbage by specifying a capacity up-front
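The last bullet can be sketched as follows (a minimal illustration of my own, not code from the post; the helper names are invented):

```java
public class PresizeExample {
    // Default constructor starts with a capacity of 16 characters; appending
    // more than that forces one or more backing-array copies as the buffer grows.
    public static String buildDefault(String from, String to, String subject) {
        StringBuilder builder = new StringBuilder();
        return builder.append("From").append(from)
                      .append("To").append(to)
                      .append("Subject").append(subject)
                      .toString();
    }

    // If the approximate final length is known, pre-sizing avoids the
    // intermediate copies (and the discarded char[] garbage).
    public static String buildPresized(String from, String to, String subject) {
        StringBuilder builder = new StringBuilder(64); // capacity chosen up-front
        return builder.append("From").append(from)
                      .append("To").append(to)
                      .append("Subject").append(subject)
                      .toString();
    }

    public static void main(String[] args) {
        String a = buildDefault("Alex", "Readers", "Benchmarking with JMH");
        String b = buildPresized("Alex", "Readers", "Benchmarking with JMH");
        System.out.println(a.equals(b)); // same result either way; less garbage when pre-sized
    }
}
```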

    Most of this may be common knowledge but I hope that I can back this up with data using JMH.

    Introduction to JMH

    The Java Microbenchmark Harness or JMH is the tool to use for performance testing microbenchmarks. In the same way that JUnit is the de facto standard for testing, JMH is the de facto standard for performance measurement. There’s a great thread that goes into the details behind some of JMH’s evolution and the choices that were made; the fact that it has since edged out other benchmarking tools like Caliper seems to be a good indicator of its staying power.

    JMH projects can be bootstrapped from mvn and then compiled (with annotation post-processing) to generate a benchmarks.jar file, which contains the code under test as well as a copy of the JMH code in an uber JAR. It also helpfully sets up a command line interface that you can use to test your code, and is the simplest way to generate a project.

    You can create a stub JMH project using the steps on the JMH homepage:

    $ mvn archetype:generate \
      -DinteractiveMode=false \
      -DarchetypeGroupId=org.openjdk.jmh \
      -DarchetypeArtifactId=jmh-java-benchmark-archetype \
      -DgroupId=org.sample \
      -DartifactId=test \
      -Dversion=1.0
    

    From the command line, the sample project can be run by executing:

    $ mvn clean package
    $ java -jar target/benchmarks.jar
    

    There’s a lot of flags that can be passed on the command line; passing -h will show the full list of flags that can be passed.

    Using JMH in Eclipse

    If you’re trying to run JMH in Eclipse, you will need to ensure that annotation processing is enabled. That’s because JMH not only uses annotations to mark the benchmarks, but also uses an annotation processing tool to transform the benchmarked code into executable units. If you don’t have annotation processing enabled and try to run it, you’ll see a cryptic message like Unable to read /META-INF/BenchmarkList.

    If you’ve created a Maven project (and presumably, therefore, have m2e installed) the easiest way is to install JBoss' m2e-apt connector, which allows you to configure the project for JDT’s support for APT. This can be installed from Eclipse → Preferences → Discovery and choosing the m2e-apt connector. After a restart this can be used to enable the JDT support automatically by going to Window → Preferences → Maven → Annotation Processing and then choosing the “Automatically configure JDT APT” option.

    If you’re not using Maven then you can add the jmh-generator-annprocess JAR (along with its dependencies) to the project’s Java Compiler → Annotation Processing → Factory Path, and ensure that the annotation processing is switched on.

    Tests can then be run by creating a launch configuration to run the main class org.openjdk.jmh.Main or by using the JMH APIs.

    StringBuilder vs StringBuffer benchmark

    So having got the basis for benchmarking set up, it’s time to look at the performance of StringBuilder vs StringBuffer. It’s a good idea to see what the performance of the empty buffers is like before we start adding content to them:

    public class StringBenchmark {
      @Benchmark
      public String testEmptyBuffer() {
        StringBuffer buffer = new StringBuffer();
        return buffer.toString();
      }
    
      @Benchmark
      public String testEmptyBuilder() {
        StringBuilder builder = new StringBuilder();
        return builder.toString();
      }
    
      @Benchmark
      public String testEmptyLiteral() {
        return "";
      }
    }
    

    Two things are worth calling out: the first is that the resulting expression you’re using always has to be returned to the caller, otherwise the JIT will optimise the code away. The second is that it’s worth testing the empty case first of all so that it sets a baseline for measurement.

    We can run it from the command line by doing:

    $ mvn clean package
    $ java -jar target/benchmarks.jar Empty \
       -wi 5 -tu ns -f 1 -bm avgt
    ...
    Benchmark                         Mode  Cnt  Score   Error  Units
    StringBenchmark.testEmptyBuffer   avgt   20  8.306 ± 0.497  ns/op
    StringBenchmark.testEmptyBuilder  avgt   20  8.253 ± 0.416  ns/op
    StringBenchmark.testEmptyLiteral  avgt   20  3.510 ± 0.139  ns/op
    

    The flags used here are -wi (warmup iterations), -tu (time unit; nanoseconds), -f (number of forked JVMs) and -bm (benchmark mode; in this case, average time).

    Somewhat unsurprisingly the values are relatively similar, with the return literal being the fastest.

    What if we’re concatenating two strings? We can write a method to test that as well:

    @Benchmark
    public String testHelloWorldBuilder() {
      StringBuilder builder = new StringBuilder();
      builder.append("Hello");
      builder.append("World");
      return builder.toString();
    }
    
    @Benchmark
    public String testHelloWorldBuffer() {
      StringBuffer buffer = new StringBuffer();
      buffer.append("Hello");
      buffer.append("World");
      return buffer.toString();
    }
    

    When run, it looks like:

    $ mvn clean package
    $ java -jar target/benchmarks.jar Hello \
       -wi 5 -tu ns -f 1 -bm avgt
    ...
    Benchmark                              Mode  Cnt   Score   Error  Units
    StringBenchmark.testHelloWorldBuffer   avgt   20  25.747 ± 1.188  ns/op
    StringBenchmark.testHelloWorldBuilder  avgt   20  25.411 ± 1.015  ns/op
    

    Not much difference there, although the Buffer is marginally slower than the Builder. That shouldn’t be too surprising; they are both subclasses of AbstractStringBuilder, which has all the logic.

    Job done?

    Are we all done yet? Well, no, because there are other things at play.

    Firstly, JMH is a benchmarking tool to find the highest possible value of performance under load. What happens in Java is that by default HotSpot uses a tiered compilation model; it starts off interpreted, then once a method has been executed a number of times it gets compiled. In fact, there are different levels of compilation that kick in after a different amount of calls. You can see these if you look at the various *Threshold* flags generated by -XX:+PrintFlagsFinal from an OpenJDK installation.

    When a method has been called thousands of times, it will be compiled using the Tier 3 (client) or Tier 4 (server) compiler. This generally involves optimisations such as in-lining methods, dead code elimination and the like. This gives the best possible code performance for the application.

    But what if the method is called infrequently, or puts memory pressure on the garbage collector instead? It won’t be JIT compiled and so will take longer. We can see the effect of running in interpreted mode by running the generated benchmark code with -jvmArgs -Xint to force the forked JVM used to run the benchmarks to only use the interpreter:

    $ mvn clean package
    $ java -jar target/benchmarks.jar Empty Hello \
       -wi 5 -tu ns -f 1 -bm avgt -jvmArgs -Xint
    ...
    Benchmark                              Mode  Cnt     Score    Error  Units
    StringBenchmark.testEmptyBuffer        avgt   20  1102.609 ± 66.596  ns/op
    StringBenchmark.testEmptyBuilder       avgt   20   769.682 ± 27.962  ns/op
    StringBenchmark.testEmptyLiteral       avgt   20   184.061 ± 13.587  ns/op
    StringBenchmark.testHelloWorldBuffer   avgt   20  2299.749 ± 70.087  ns/op
    StringBenchmark.testHelloWorldBuilder  avgt   20  2381.348 ± 38.726  ns/op
    

    A better option is to use the JMH-specific annotation @CompilerControl(Mode.EXCLUDE), which prevents benchmark methods from being JIT compiled while allowing the other Java classes to be JIT compiled as usual. This is akin to having other classes call the StringBuffer (so that it is sufficiently well exercised) while emulating code that isn’t called all that frequently. It can be added at the class level or at the method level.

    $ grep -B2 class StringBenchmark.java
    @State(Scope.Benchmark)
    @CompilerControl(Mode.EXCLUDE)
    public class StringBenchmark {
    $ mvn clean package
    $ java -jar target/benchmarks.jar Empty Hello \
       -wi 5 -tu ns -f 1 -bm avgt
    ...
    Benchmark                              Mode  Cnt    Score   Error  Units
    StringBenchmark.testEmptyBuffer        avgt   20  144.745 ± 4.561  ns/op
    StringBenchmark.testEmptyBuilder       avgt   20  122.477 ± 3.273  ns/op
    StringBenchmark.testEmptyLiteral       avgt   20   91.139 ± 1.685  ns/op
    StringBenchmark.testHelloWorldBuffer   avgt   20  236.223 ± 7.679  ns/op
    StringBenchmark.testHelloWorldBuilder  avgt   20  222.462 ± 5.733  ns/op
    

    Either way, calling the code before JIT compilation has kicked in magnifies the difference between the two data structures to around 10%. So for methods that are called fewer than a thousand times – such as during start-up or when invoked from a user interface – the difference will be there.

    Different calling patterns

    What about different calling patterns? One example I came across was using an implicit String concatenation inside a StringBuilder or StringBuffer. This might be the case when generating a buffer to represent an e-mail, for example.

    To test this, and to prevent Strings being concatenated by the javac compiler, we need to use non-final instance variables. However, to do that with the benchmark requires that the class be annotated with @State(Scope.Benchmark). (As with public static void main(String args[]) it’s best to just learn that this is necessary when you’re getting started, and then understand what it means later.)

    @State(Scope.Benchmark)
    public class StringBenchmark {
      private String from = "Alex";
      private String to = "Readers";
      private String subject = "Benchmarking with JMH";
      ...
      @Benchmark
      public String testEmailBuilderSimple() {
        StringBuilder builder = new StringBuilder();
        builder.append("From");
        builder.append(from);
        builder.append("To");
        builder.append(to);
        builder.append("Subject");
        builder.append(subject);
        return builder.toString();
      }
    
      @Benchmark
      public String testEmailBufferSimple() {
        StringBuffer buffer = new StringBuffer();
        buffer.append("From");
        buffer.append(from);
        buffer.append("To");
        buffer.append(to);
        buffer.append("Subject");
        buffer.append(subject);
        return buffer.toString();
      }
    } 
    

    You can selectively run the benchmarks by putting one or more regular expressions on the command line:

    $ mvn clean package
    $ java -jar target/benchmarks.jar Simple \
       -wi 5 -tu ns -f 1 -bm avgt
    ...
    Benchmark                               Mode  Cnt   Score   Error  Units
    StringBenchmark.testEmailBufferSimple   avgt   20  88.149 ± 1.014  ns/op
    StringBenchmark.testEmailBuilderSimple  avgt   20  88.277 ± 1.201  ns/op
    

    These obviously take a lot longer to run. But what about other forms of the code? What if a developer has used + to concatenate the fields together in the append calls?

    public String testEmailBuilderConcat() {
      StringBuilder builder = new StringBuilder();
      builder.append("From" + from);
      builder.append("To" + to);
      builder.append("Subject" + subject);
      return builder.toString();
    }
    
    @Benchmark
    public String testEmailBufferConcat() {
      StringBuffer buffer = new StringBuffer();
      buffer.append("From" + from);
      buffer.append("To" + to);
      buffer.append("Subject" + subject);
      return buffer.toString();
    }
    

    Running this again shows why this is a bad idea:

    $ mvn clean package
    $ java -jar target/benchmarks.jar Simple Concat \
       -wi 5 -tu ns -f 1 -bm avgt
    ...
    Benchmark                               Mode  Cnt    Score   Error  Units
    StringBenchmark.testEmailBufferConcat   avgt   20  105.424 ± 3.704  ns/op
    StringBenchmark.testEmailBufferSimple   avgt   20   91.427 ± 2.971  ns/op
    StringBenchmark.testEmailBuilderConcat  avgt   20  100.295 ± 1.985  ns/op
    StringBenchmark.testEmailBuilderSimple  avgt   20   90.884 ± 1.663  ns/op
    

    Even though these calls do the same thing, the cost of having an embedded implicit String concatenation is enough to add a 10% penalty on the time taken for the methods to return.

    This shouldn’t be too surprising; the cost of doing the in-line concatenation means that it’s generating a new StringBuilder, appending the two String expressions, converting it to a new String with toString() and finally inserting that resulting String into the outer StringBuilder/StringBuffer.
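This desugaring can be written out by hand. The sketch below is my own rendering of what javac (pre-Java 9) emits for the Concat form; the class and method names are invented for illustration:

```java
public class ConcatDesugar {
    // builder.append("From" + from) as written in source ...
    public static String concatForm(String from) {
        StringBuilder builder = new StringBuilder();
        builder.append("From" + from);
        return builder.toString();
    }

    // ... is compiled (pre-Java 9) to roughly this: an inner StringBuilder is
    // created, filled, converted to a String with toString(), and only then
    // appended to the outer builder -- an extra allocation and copy per call.
    public static String desugaredForm(String from) {
        StringBuilder builder = new StringBuilder();
        builder.append(new StringBuilder().append("From").append(from).toString());
        return builder.toString();
    }

    public static void main(String[] args) {
        // Both forms produce the same String; only the garbage differs.
        System.out.println(concatForm("Alex").equals(desugaredForm("Alex"))); // true
    }
}
```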

    This should probably be a warning in the future.

    Chaining methods

    Finally, what about chaining the methods instead of referring to a local variable? That can’t make any difference; after all, this is equivalent to the one before, right?

    @Benchmark
    public String testEmailBuilderChain() {
      return new StringBuilder()
       .append("From")
       .append(from)
       .append("To")
       .append(to)
       .append("Subject")
       .append(subject)
       .toString();
    }
    
    @Benchmark
    public String testEmailBufferChain() {
      return new StringBuffer()
       .append("From")
       .append(from)
       .append("To")
       .append(to)
       .append("Subject")
       .append(subject)
       .toString();
    }
    

    What’s interesting is that you do see a significant difference:

    $ java -jar target/benchmarks.jar Simple Concat Chain \
       -wi 5 -tu ns -f 1 -bm avgt
    ... 
    Benchmark                               Mode  Cnt    Score   Error  Units
    StringBenchmark.testEmailBufferChain    avgt   20   38.950 ± 1.120  ns/op
    StringBenchmark.testEmailBufferConcat   avgt   20  103.151 ± 4.197  ns/op
    StringBenchmark.testEmailBufferSimple   avgt   20   89.685 ± 2.041  ns/op
    StringBenchmark.testEmailBuilderChain   avgt   20   38.113 ± 1.012  ns/op
    StringBenchmark.testEmailBuilderConcat  avgt   20  102.193 ± 2.829  ns/op
    StringBenchmark.testEmailBuilderSimple  avgt   20   89.117 ± 2.658  ns/op
    

    In this case, the chaining together of arguments has resulted in a 50% speed up of the method call after JIT. One possible reason this may occur is that the length of the method’s bytecode has been significantly reduced:

    $ javap -c StringBenchmark.class | egrep "public|areturn"
      public java.lang.String testEmailBuilder();
          60: areturn
      public java.lang.String testEmailBuffer();
          60: areturn
      public java.lang.String testEmailBuilderConcat();
          84: areturn
      public java.lang.String testEmailBufferConcat();
          84: areturn
      public java.lang.String testEmailBuilderChain();
          46: areturn
      public java.lang.String testEmailBufferChain();
          46: areturn
    

    Simply chaining the .append() methods together results in a smaller method, and thus a faster call site when compiled to native code. The other advantage (though not demonstrated here) is that the size of the bytecode affects the caller’s ability to in-line the method; smaller than 35 bytes (-XX:MaxInlineSize) means the method can be trivially inlined, and smaller than 325 bytes means it can be in-lined if it’s called enough times (-XX:FreqInlineSize).

    Finally, what about ordinary String concatenation? Well, as long as you don’t mix and match it, you’re fine – it works out as being identical to the testEmailBuilderChain method.

    @Benchmark
    public String testEmailLiteralConcat() {
      return "From" + from + "To" + to + "Subject" + subject;
    }
    

    Running it shows:

    $ java -jar target/benchmarks.jar EmailLiteral \
       -wi 5 -tu ns -f 1 -bm avgt
    ...
    Benchmark                         Mode  Cnt   Score   Error  Units
    StringBenchmark.testEmailLiteral  avgt   20  38.033 ± 0.588  ns/op
    

    And for comparative purposes, running the lot with @CompilerControl(Mode.EXCLUDE) (simulating an infrequently used method) gives:

    $ java -jar target/benchmarks.jar Email \
       -wi 5 -tu ns -f 1 -bm avgt
    ...
    Benchmark                               Mode  Cnt    Score    Error  Units
    StringBenchmark.testEmailBufferChain    avgt   20  416.745 ±  9.087  ns/op
    StringBenchmark.testEmailBufferConcat   avgt   20  764.726 ±  9.535  ns/op
    StringBenchmark.testEmailBufferSimple   avgt   20  462.361 ± 15.091  ns/op
    StringBenchmark.testEmailBuilderChain   avgt   20  384.936 ±  9.173  ns/op
    StringBenchmark.testEmailBuilderConcat  avgt   20  752.375 ± 19.544  ns/op
    StringBenchmark.testEmailBuilderSimple  avgt   20  414.372 ±  6.940  ns/op
    StringBenchmark.testEmailLiteral        avgt   20  417.772 ±  9.515  ns/op
    

    What a lot of rubbish

    The other aspect that affects performance is how much garbage is created during the program’s execution. Allocating new data in Java is very, very fast these days, regardless of whether it’s interpreted or JIT compiled code. This is especially true of the new -XX:+UseG1GC collector, which is available in Java 8 and will become the default in Java 9. (Hopefully it will also become a part of the standard Eclipse packages in the future.) That being said, there are certainly cycles that get wasted, both by the CPU and the GC, when using concatenation.

    StringBuffer and StringBuilder are implemented like an ArrayList (except dealing with an array of characters instead of an array of Object instances). When you add new content, if there’s capacity, the content is added at the end; if not, a new array is created with double-plus-two size, the contents are copied to the new array, and the old array is thrown away. As a result an individual append is O(1) while capacity remains, but costs an O(n) copy whenever the capacity is exceeded.

    By default both classes start with a capacity of 16 characters (and thus implicit String concatenation also uses that number); but an explicit constructor can be used to specify a different starting capacity.
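The 16-element default and the double-plus-two growth rule can be observed directly with capacity() (a small check of my own, not from the original post):

```java
import java.util.Arrays;

public class CapacityGrowth {
    // Record the capacity before and after each append that overflows it.
    public static int[] capacities() {
        StringBuilder builder = new StringBuilder();
        int c0 = builder.capacity();           // 16 by default
        builder.append("12345678901234567");   // 17 chars > 16: grow to 16*2+2
        int c1 = builder.capacity();           // 34
        builder.append("123456789012345678");  // 18 more chars, 35 > 34: grow to 34*2+2
        int c2 = builder.capacity();           // 70
        return new int[] { c0, c1, c2 };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(capacities())); // [16, 34, 70]
    }
}
```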

    JMH also comes with a garbage profiler that can provide (in my experience, fairly accurate) estimates of how much garbage is collected per operation. It does this by hooking into some of the serviceability APIs in the OpenJDK runtime (so other JVMs may find this doesn’t work) and then provides a normalised estimate for how much garbage is attributable per operation. Since garbage is a JVM wide construct, any other threads executing in the background will cause the numbers to be inaccurate.

    By modifying the creation of the StringBuffer with a JMH parameter, it’s possible to provide different values at run-time for experimentation:

    public class StringBenchmark {
      @Param({"16"})
      private int size;
      ...
      public void testEmail... {
        StringBuilder builder = new StringBuilder(size);
      }
    }
    

    It’s possible to specify multiple parameters; JMH will then iterate over each and give the results separately. Using @Param({"16","48"}) would run first with 16 and then 48 afterwards.

    $ java -jar target/benchmarks.jar EmailBu \
       -wi 5 -tu ns -f 1 -bm avgt -prof gc
    ...
    Benchmark                                               (size)  Mode  Cnt     Score     Error   Units
    StringBenchmark.testEmailBufferChain                        16  avgt   20    37.593 ±   0.595   ns/op
    StringBenchmark.testEmailBufferChain:·gc.alloc.rate.norm    16  avgt   20   136.000 ±   0.001    B/op
    StringBenchmark.testEmailBufferConcat                       16  avgt   20   155.290 ±   2.206   ns/op
    StringBenchmark.testEmailBufferConcat:·gc.alloc.rate.norm   16  avgt   20   576.000 ±   0.001    B/op
    StringBenchmark.testEmailBufferSimple                       16  avgt   20   136.341 ±   3.960   ns/op
    StringBenchmark.testEmailBufferSimple:·gc.alloc.rate.norm   16  avgt   20   432.000 ±   0.001    B/op
    StringBenchmark.testEmailBuilderChain                       16  avgt   20    37.630 ±   0.847   ns/op
    StringBenchmark.testEmailBuilderChain:·gc.alloc.rate.norm   16  avgt   20   136.000 ±   0.001    B/op
    StringBenchmark.testEmailBuilderConcat                      16  avgt   20   153.879 ±   2.699   ns/op
    StringBenchmark.testEmailBuilderConcat:·gc.alloc.rate.norm  16  avgt   20   576.000 ±   0.001    B/op
    StringBenchmark.testEmailBuilderSimple                      16  avgt   20   136.587 ±   3.146   ns/op
    StringBenchmark.testEmailBuilderSimple:·gc.alloc.rate.norm  16  avgt   20   432.000 ±   0.001    B/op
    

    Running this shows that the normalised allocation rate for the various methods (gc.alloc.rate.norm) varies between 136 bytes and 576 for both classes. This shouldn’t be a surprise; the implementation of the storage structure is the same between both classes. It’s more noteworthy to observe that there is a variation between using the chained implementation and the simple allocation (136 vs 432).

    The 136 bytes is the smallest value we can expect to see; the resulting String in our test method works out at 45 characters, or 90 bytes. Considering a String instance has a 24 byte header and a character array has a 16 byte header, 90 + 24 + 16 = 130. However, the character array is aligned on an 8 byte boundary, so its 106 bytes are rounded up to 112, giving 136 in total. In other words, the code for the *Chain methods has been JIT optimised to produce a single String with the exact data in place.

    The *Simple methods have additional data generated by the growth of the internal character backing array. 136 of the 432 bytes are the returned String value, so they can be taken out of the equation, leaving 296 bytes to account for. This turns out to be the character arrays: a StringBuilder starts off with a capacity of 16 chars, then grows to 34 chars and then 70 chars, following the 2n+2 rule. Since each char[] has an overhead of 16 bytes (12 for the header, 4 for the length) and chars are stored as 16 bit entities, this results in 48, 88 and 160 bytes after alignment. Perhaps unsurprisingly, the discarded char[] arrays sum to exactly those 296 bytes. So the growth of both *Simple methods is equivalent here.
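The byte accounting above can be checked mechanically. This is my own helper (the 16 byte char[] header, 2 bytes per char, and 8 byte alignment are as described in the text):

```java
public class GarbageMath {
    // Round an object size up to the JVM's 8 byte alignment boundary.
    public static long align(long bytes) {
        return (bytes + 7) / 8 * 8;
    }

    // Aligned size of a char[n]: 16 byte header (12 header + 4 length)
    // plus 2 bytes per char.
    public static long charArrayBytes(int chars) {
        return align(16 + 2L * chars);
    }

    public static void main(String[] args) {
        // The three discarded backing arrays of a default StringBuilder:
        // 16, 34 and 70 chars.
        System.out.println(charArrayBytes(16)); // 48
        System.out.println(charArrayBytes(34)); // 88
        System.out.println(charArrayBytes(70)); // 160
        long discarded = charArrayBytes(16) + charArrayBytes(34) + charArrayBytes(70);
        System.out.println(discarded);          // 296, matching the *Simple overhead
    }
}
```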

    The larger values in the *Concat methods show additional garbage growth caused due to the temporary internal StringBuilder elements.

    To test a different starting size of the buffer, passing the -p size=48 JMH argument will allow us to test the effect of initialising the buffers with 48 characters:

    $ java -jar target/benchmarks.jar EmailBu \
       -wi 5 -tu ns -f 1 -bm avgt -prof gc -p size=48
    ...
    Benchmark                                               (size)  Mode  Cnt     Score     Error   Units
    StringBenchmark.testEmailBufferChain                        48  avgt   20    38.961 ±   1.732   ns/op
    StringBenchmark.testEmailBufferChain:·gc.alloc.rate.norm    48  avgt   20   136.000 ±   0.001    B/op
    StringBenchmark.testEmailBufferConcat                       48  avgt   20   106.726 ±   4.118   ns/op
    StringBenchmark.testEmailBufferConcat:·gc.alloc.rate.norm   48  avgt   20   392.000 ±   0.001    B/op
    StringBenchmark.testEmailBufferSimple                       48  avgt   20    93.455 ±   2.702   ns/op
    StringBenchmark.testEmailBufferSimple:·gc.alloc.rate.norm   48  avgt   20   248.000 ±   0.001    B/op
    StringBenchmark.testEmailBuilderChain                       48  avgt   20    39.056 ±   1.723   ns/op
    StringBenchmark.testEmailBuilderChain:·gc.alloc.rate.norm   48  avgt   20   136.000 ±   0.001    B/op
    StringBenchmark.testEmailBuilderConcat                      48  avgt   20   103.264 ±   2.404   ns/op
    StringBenchmark.testEmailBuilderConcat:·gc.alloc.rate.norm  48  avgt   20   392.000 ±   0.001    B/op
    StringBenchmark.testEmailBuilderSimple                      48  avgt   20    88.175 ±   2.442   ns/op
    StringBenchmark.testEmailBuilderSimple:·gc.alloc.rate.norm  48  avgt   20   248.000 ±   0.001    B/op
    

    By tweaking the initialised StringBuffer/StringBuilder instances to 48 characters, we can reduce the amount of garbage generated as part of the concatenation process. The Java implicit String concatenation is outside our control, and its garbage is a result of the underlying character array resizing itself.

    Here, the *Simple methods have dropped from 432 to 248 bytes, which represents the 136 byte String result and a copy of the 112 byte array (a 48 character array with its 16 byte header). Presumably in this case the JIT has managed to avoid the creation of the StringBuilder instance in the *Simple methods, but the array copy has leaked through. Other than these two values, there is no additional garbage created.

    Conclusion

    Running benchmarks is a good way of finding out what the cost of a particular operation is, and JMH makes it easy to generate such benchmarks. Ensuring that the benchmarks are correct is a little harder, as is accounting for the effects of other processes. Of course, different machines will give different results, and you’re encouraged to replicate this on your own setup.

    Although the fully JIT compiled methods for both StringBuffer and StringBuilder perform very similarly, there is an underlying trend for the StringBuilder to be at least as fast as its older cousin StringBuffer. In any case, implicit String concatenation (with +) creates a StringBuilder under the covers, so the StringBuilder is likely to hit the hot compilation threshold before StringBuffer does.

    The most efficient way of concatenating strings is to have a single expression which uses either implicit String concatenation (a + b + c + d) or a chained series of calls (e.g. .append().append().append()) without any intermediate reference to a local variable. If you’ve got a lot of constants then using + also has the advantage of constant folding of the String literals ahead of time.
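The constant-folding point is easy to verify: a concatenation of compile-time constants is folded by javac into a single interned literal, while a run-time concatenation produces a fresh instance. A small check of my own:

```java
public class ConstantFolding {
    // "Hello" + "World" is a compile-time constant expression, so javac folds
    // it into the single interned literal "HelloWorld".
    public static boolean foldedIsInterned() {
        String folded = "Hello" + "World";
        return folded == "HelloWorld"; // same interned object
    }

    // With a non-constant operand the concatenation happens at run-time and
    // yields a fresh String instance, not the interned literal.
    public static boolean runtimeIsInterned(String hello) {
        String runtime = hello + "World";
        return runtime == "HelloWorld"; // different object
    }

    public static void main(String[] args) {
        System.out.println(foldedIsInterned());         // true
        System.out.println(runtimeIsInterned("Hello")); // false
    }
}
```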

    Mixing + and .append() is a bad idea though, because there will be extra pressure on memory as intermediate String instances are created and then immediately thrown away.

    Finally, although using + is easy, it doesn’t let you pre-size the StringBuilder array, which starts off at 16 characters by default. If the StringBuilder is used to create large Strings then pre-sizing it to avoid multiple resizes is a relatively simple optimisation as far as reducing garbage is concerned. In addition, the array copy operation grows more expensive as the size of the data set increases.

    The sample code for this blog post is available at https://gist.github.com/alblue/aa4453a5b1614ee1084570f32b8b5b95 if you’d like to replicate the findings.


    April 23, 2016 09:00 PM

    Documentation: do not forget the edit link

    April 21, 2016 10:00 PM

    In my opinion if you publish documentation online, you should always tell your readers how they can change/update it.

    I picked up several examples of documentation hosted on GitHub. On each page there is an "Edit on GitHub" link. This eases the creation of pull requests.

    Example 1: Microsoft Azure documentation (see this page for example):

    2016-04-22_azure

    Example 2: the Eclipse Xtend documentation:

    2016-04-22_xtend

    Example 3: Eclipse Scout documentation (see the scout "hello world" page):

    2016-04-22_scout

    Example 4: My blog (see my last article for example):

    2016-04-22_blog

    And many other projects are following this best practice…

    This isn’t a GitHub-only thing. The same pattern can be used if the documentation is hosted on a wiki engine. As an example, the GEF4 documentation does it exactly like that:

    2016-04-22_gef4

    Asciidoctorj macro

    If you are using asciidoctor for your documentation and if the sources are hosted on GitHub, you might be interested in the small extension I wrote: asciidoctorj-gh-edit. Short usage example:

    :repository: jmini/jmini.github.io
    :branch: develop
    
    Do you want to improve this documentation? Please gh:edit[].

    A second example where more parameters are defined as arguments of the macro:

    See gh:view[repository='asciidoctor/asciidoctor.org', branch='master', path="news/debuter-avec-asciidoctor.adoc", link-text='this article in french'] on GitHub.

    JBake blog

    If you have a blog powered by JBake (like this one), you can add the code for the link creation in your templates. This way JBake generates the desired link on each page. I implemented it with some Groovy code included in the template. More details can be found on the JBake-user mailing list: Create a link "see blog post source on GitHub".


    April 21, 2016 10:00 PM

    Can open source solve the too many IoT standards problem?

    by Ian Skerrett at April 21, 2016 08:02 PM

    An important issue in the IoT industry is the plethora of IoT standards that exist today and the new standards being created. The current situation for IoT standards has been described as a ‘Trainwreck for IoT Vendors‘, the implication being that IoT solutions are going to be pretty dumb. In the recent IoT Developer Survey, interoperability was one of the top concerns, so it has definitely been on the minds of IoT developers.

    If we look at why there are so many IoT standards, there are many answers. The simple fact is that not all standards are created equal, and some solve very different problems. Many of the different standards exist at different layers of the stack, just like the Internet. There is also a lot of innovation required in the IoT industry to enable many of the use cases, for example LPWAN. We also have to remember that IoT is not new: lots of industries have massive investments in equipment that uses an existing standard that isn’t going away anytime soon.

    Those are some of the valid reasons for the different IoT standards. Of course, there are also many not-so-great reasons; as the saying goes, ‘standards are like toothbrushes, everyone wants to use one but not someone else’s’. XKCD captures the spirit of the worst possible situation for a new IoT standard.

    [XKCD comic: "Standards"]

    It is time we accept that there will be a plethora of IoT standards. Some will become more widely adopted than others, and some will fail, but we will never get to the ‘one IoT standard to rule them all’. It is just not realistic.

    However, we still need to solve the issue of interoperability. There are many use cases where Things/Devices that communicate using different standards will need to interact in a consistent manner. It seems to me the only way this interoperability challenge will be solved is in software, and ideally open source software. We need an open software platform that enables the standards bazaar rather than trying to build a cathedral. Kai Kreuzer, Eclipse SmartHome project leader, has published a great video on this topic for Home Automation.

    Interoperability is not going to be solved in standards groups. It is going to be solved with running software. Open source IoT platforms are going to be the bedrock for the IoT industry. There is just no other way it will work.




    My Eclipse Newsletter!

    by tevirselrahc at April 21, 2016 03:12 PM

    I am so proud and happy: I have an Eclipse Newsletter dedicated to me!

    Well, there is other interesting stuff in the newsletter, but I am the star!

    My minions and admirers are so nice to me that they pooled together to say nice things about me! Thanks to Sébastien, Saadia, Francis, and Ronan for their efforts in writing these articles! And thanks to others (Susan, Roxanne, and Charles) for helping them in their efforts to support me – I greatly appreciate it!

    Go and read from them about:

    And hopefully you’ll also glance at the other Eclipse news, especially the PolarSys Rover, where I’ll have a role.

    Now that you have read all the good stuff, you just have to try me out! Get me from the Papyrus Downloads!

    Trust me! You won’t regret it! And that’s a money-back offer! (wait… I’m free…)


    Filed under: News Tagged: 2.0, eclipse, Industry consortium, IoT, migration, Newsletter, PIC


    Eclipse Newsletter - Discover Model-Based Engineering

    April 21, 2016 02:03 PM

    Eclipse Papyrus is an open source model-based engineering tool. Get a crash course on everything Papyrus.


    Presentation: Modeling Avengers: Open Source Technology Mix for Saving the World

    by Cedric Brun, Benoit Combemale at April 21, 2016 12:25 AM

    Cedric Brun and Benoit Combemale discuss the Smart Farming System Tooling, an environment to model, analyze and simulate a farming operation, biomass growth and water consumption based on user input and open data.
