New Papyrus-based tool!

by tevirselrahc at June 19, 2018 01:30 PM

Good news! The Papyrus Industry Consortium’s steering committee has approved the creation of a “Papyrus Light” addition to the product line!

My insiders have been telling me that work is ongoing on the requirements for this new tool.

Would you like to have a voice? Well, you can do so through the Papyrus IC public Tuleap repo’s product management forum! (You may remember my previous post about Tuleap).


by tevirselrahc at June 19, 2018 01:30 PM

Web-based vs. desktop-based Tools

by Jonas Helming and Maximilian Koegel at June 19, 2018 12:25 PM

It is clear that there is ongoing excitement surrounding web-based IDEs and tools, e.g. Eclipse Che, Eclipse Theia, Visual Studio Code, Atom or Eclipse Orion. If you attended recent presentations or read current announcements, you may get the feeling that desktop IDEs have already been deprecated. But is this really true? If you ask developers about the tools they use in their daily work, you will rarely find someone already using web-based development tools in production.

At EclipseSource we develop IDE-based solutions, development tools, tools for engineers and modeling tools on a daily basis in various customer projects. We are dealing with this particular design decision regularly:

Do we go for a desktop tool, a web-based, or cloud-based solution?

Therefore, we want to share our experience on this topic. This is the first of three articles. It describes the most important drivers behind any design decision: the requirements. In the second article, we will describe challenges, technical patterns, solutions, and frameworks for matching the requirements while remaining as flexible as possible. In the third article, we will provide an example scenario to substantiate those best practices.

So first things first: As for so many design decisions, the most important thing is to know the requirements. Software engineers love to talk about implementation, and we also like to use new, fancy, or just our favorite technology. But in the end, we need to solve a given problem as efficiently as possible. Therefore, we should think about the problem definition first, even if that leads to a design decision that doesn’t bet on “what’s trendy right now”.

For the impatient reader, here is the possibly unsatisfying conclusion: Whether to go for a desktop or a web-based solution is a complex decision. If you want to make an optimal choice, you will need to consider your custom requirements in several dimensions. For some projects, it will be a rather simple and straightforward choice, e.g. if you are required to integrate with a given existing tool chain. However, for most projects you will need to consider the overall picture and even try to predict the future as accurately as possible.

In our experience, it is worth the effort. In the end, you will hopefully develop a good strategy. This strategy does not have to be limited to strictly choosing one option. It is also a perfectly valid strategy to choose one primary option, but also to decide on being compatible with another option. This keeps a future migration path open. Further, it is possible to mix both worlds on a per-use-case basis. We will detail these particular strategies in a follow-up post.

So let’s look at the most common non-functional requirements, which play a role in the design choice between desktop and web. To be more precise, the following areas are typically treated as advantages of a web-based solution.

  • Installability (also referred to as deployment or “set-up effort”): How easily and quickly can you set up the required tooling and the runtime for executing a system? Usually this mainly refers to the developer tooling and its runtime(s), since the set-up needs to be repeated for every developer. For simplicity, let us assume this also includes the effort to keep the system up-to-date.
  • Portability: How difficult is it to port a tool to another platform or hardware? In the case of IDEs, this requirement is sometimes also referred to as “accessibility”. The typical use case is to access all development resources from any platform, e.g. also on your mobile device.
  • Performance and responsiveness: How responsive is the tool, and how responsive does it feel? How long do crucial operations, e.g. a full rebuild, take to run?
  • Usability: Let us use this wonderful definition from Wikipedia: “In software engineering, usability is the degree to which a software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use”
  • Cost: The cost of implementing an IDE, tooling, extensions, or the necessary development runtime. For most projects this is probably one of the most important criteria to consider.

Besides those non-functional requirements, tools also need to fulfill functional requirements. As those are specific to a certain tool, we will only consider the cost requirement since typical projects are aimed at fulfilling their functional requirements at the lowest possible cost.

As a first requirement, we will look at installability, because it is the most obvious distinction between a desktop-based and a cloud-based solution. For this requirement, we will also introduce some example scenarios and dimensions that recur for other requirements later on, so the next section is the most comprehensive.

Installability (a.k.a. Deployment, “set-up effort” and updatability)

Installability is probably the most prominent advertised advantage of web-based solutions. The vision is that you simply log into a system via your browser and can directly start to code without installing anything: neither the IDE and tools, nor any runtime environment. Further, you do not need to install any updates anymore, as they are applied centrally.

So let us look at this obvious advantage in more detail. The first interesting dimension is how much time you can save by improving installability. This depends on the number of developers you need to on-board with the tooling and the number of people who use those tools only occasionally. Further, it matters how long a developer uses the tool after installation: the shorter the usage, the more significant the set-up time.

Let’s look at three example scenarios.

First, let’s consider a tutorial/training scenario, where participants complete exercises. A tutorial or training usually takes a couple of hours or days, so the set-up time is a crucial factor here. Further, trainings/tutorials are typically conducted for a larger number of developers. Anybody who has ever had to prepare a setup guide for a tutorial will agree that a browser-based solution immediately pays off here. Even the simplest and best-prepared desktop-based setup will take some time to install. So this scenario is a win for a web-based solution if, and only if, you can rely on a robust high-speed internet connection. Consequently, you can observe that a lot of online tutorials already embed web-based code editors and tooling.

The second scenario is an open source project. Many open source projects have a mix of developer roles. Some developers (committers) contribute to the project regularly. A second group (“adopters”) typically uses the technology, but occasionally wants to contribute small enhancements and fixes. While for regular committers set-up time is minor compared to the time they spend working on the project, occasional contributors are often discouraged from contributing by a complex set-up. Therefore, in this scenario, there is a mismatch between the requirements of two user roles that you need to balance. So is it worth switching existing committers to a cloud-based solution to ease the life of contributors?

Ed Merks described this issue very well in his blog post in 2014. His conclusion was the creation of a tool called Oomph, which automates the set-up of a desktop IDE for projects. For the source code, Oomph even goes partly a little further than most cloud-based solutions, as it allows you to check out the sources automatically. Please see here for a tutorial regarding Oomph. While Oomph greatly improves the set-up process, it does not solve the issue completely. It will still take some time to materialize all project resources (i.e. download time). So for a very small contribution, it might still be too much of a burden. Further, it does not fully automate the creation of an appropriate runtime: if a project requires a lot of external tools (e.g. databases, application servers, etc.), those have to be installed separately. In turn, it does not affect the regular committers, as they just continue using their existing and well-proven solutions. In scenarios with such different roles, it is always a good idea to let all developers use the same set-up and tooling. Otherwise, there might be slight differences in their output and, further, the committers will usually only maintain their own solution well.

The third scenario is not differentiated by the type of project, but by the use case: code review. In a scenario where a developer works on implementation within a project every day, she might not care much about installability; other requirements might be more important. In turn, if you only review a code change and do not implement something regularly, installing and updating all the required tooling plays a significant role. As a consequence, most reviews are probably already done in web interfaces (e.g. Gerrit or pull requests). Also, the use case is focused on reading and understanding code rather than on changing or creating code. Therefore, compared to good installability, other requirements are less important for a code review.

Like the three scenarios described here, you can categorize any project and its use cases based on the importance of good installability. The result will be very specific to a custom project set-up. These considerations are naturally already reflected in today’s toolchains, where the parts of the tool chain focused on reading or browsing code are often web-based.

There are some more considerations related to installability. One is updatability. While an update to the tooling is hopefully not the same as installing it from scratch, most of the considerations for installability apply to the update case as well. This especially includes how often updates to the tool are applied.

Another obvious dimension is the complexity of the project set-up. The more difficult it is, the bigger the advantage of simplifying it via a cloud solution. For this, we of course need to differentiate between the IDE tooling itself and the necessary runtime environment. The environment is often much more complicated to set up, e.g. if you need to set up several services, databases, and so on. If only the runtime set-up is very complicated, a cloud-based IDE might not be the only valuable solution. There are several ways to ease those setups even without a cloud-based IDE, e.g. with Docker containers or workspace servers like Eclipse Che.

Installability is one of the major advantages of a web-based solution. However, it will only provide a significant advantage if the use case fits. Therefore, it is worth spending some time on defining the core users and use cases of your system and on determining the importance of installability for them. In a nutshell, installability pays off most in environments where a lot of people need to be, and can be, on-boarded very fast, and where they do not work continuously on a project. Unfortunately, when onboarding developers it is usually much more time-consuming to transfer the required knowledge than to set up the IDE.

Portability

Portability is the second very obvious advantage of a cloud- and browser-based solution over a desktop-based tool. The ultimate goal is that you can access the tooling and runtime with any device which has a browser. As a consequence, you can ideally fulfill your development use case at any location, even from a mobile device. In some discussions, this is currently referred to as “accessibility”. While that strictly speaking means something different, we consider portability to be the ability to access the tool from anywhere, on any device.

A lot of the considerations described for installability also apply when thinking about the advantage of portability. Different project roles and different tooling use cases benefit to different degrees. Doing a code review or browsing a diagram on a tablet makes more sense than writing a lot of code there. So, again, the detailed roles and use cases need to be evaluated. We will not repeat that in detail, but focus on the dimensions that are specific to portability.

One additional scenario connected to portability is the ability to share a certain runtime set-up or project state. That means developers always have exactly the same environment. This obviously simplifies life, as the typical phrase “I cannot reproduce this on my machine” would no longer occur. However, this only fully applies to the tooling. For the runtime, it matters whether the runtime platform of the system is unified, too. If the system under development runs natively on different operating systems, you still need to test different runtime environments. As a consequence, cloud-based tooling currently seems to get adopted first in the area of cloud-based development.

And as mentioned for installability, there are other ways to achieve a uniform setup, although none are as unified as a cloud solution.

A disadvantage of a pure cloud-based solution is that it often relies on a constant internet connection. While this issue becomes less and less relevant, it must at least be considered. Some cloud solutions already provide a good work-around, e.g. the offline mode of Google Mail.

A final word about the dream of being able to contribute to a project from anywhere on any device: While this sounds appealing for certain use cases, do we really want to be called by a client and subsequently feel obligated to fix a bug on our smartphone while sitting in a ski lift?

Performance

Performance is a very interesting requirement to consider. In contrast to installability and portability, there is no clear winner between desktop-based and cloud-based tooling. You will find valid arguments for both to be “more performant”, e.g. this article by Tom Radcliffe claims that desktop IDEs are the clear winner.

The major reason for this tie is again that we have to consider the specific use case when talking about performance. While writing and browsing code, we want fast navigation, key bindings, and coding features such as auto-completion. While web IDEs have caught up a lot in recent years, a desktop tool is typically still more performant for those “local” use cases (as also claimed by Tom Radcliffe in the article referenced above).

Things change when looking at other use cases, e.g. compiling. A powerful cloud instance can certainly compile a project faster than a typical laptop. Further, it is comparatively cheaper and more scalable to add resources centrally. However, when going for a cloud-based solution, scalability must be taken into account: any advantage is lost if developers have to wait 15 minutes to get something compiled because other build jobs are running on the central instance.

So for performance, it is important to consider which development use cases are crucial and which solution they benefit from performance-wise. A follow-up decision for a cloud-based solution would be to strip down the hardware of participating developers to save costs (the Chromebook scenario). While this sounds like a rational thing to do, not everybody will like the idea of giving away his or her powerful device.

Usability

Usability also doesn’t have a clear winner in the comparison between desktop-based and web-based IDEs. While advocates of both platforms would claim a clear advantage, this is really a matter of personal taste. Web technologies have become incredibly powerful when it comes to styling, look, and feel. Therefore, you can achieve almost any visualization you would like.

Further, there is much more innovation going on in the web area in comparison to, e.g. SWT.

On the desktop, depending on the UI toolkit you use, there might be more restrictions. JavaFX and Swing are powerful when it comes to styling, but Swing is more or less deprecated. SWT has limitations when it comes to styling. However, it probably provides the most existing framework support when it comes to tooling (see also “Cost vs. Features”).

Besides styling, native desktop applications still have some usability advantages, e.g. support for native features such as application menus, tray icons, and key bindings. However, it is to be expected that these advantages will shrink over time, as browsers keep evolving very fast. In any case, usability is not equal to the ability to support a dark theme, as some browser advocates may try to make us believe.

It is worth mentioning that there are platforms combining the advantages of both worlds. Platforms such as Visual Studio Code, Atom, or Eclipse Theia embed web technologies (HTML and TypeScript) into a desktop application (using Electron).

Cost vs. Features

At the end of the day, for many projects a very important constraint is cost, meaning the effort required to satisfy your list of functional requirements. A comparison of the required effort on either platform is driven by several parameters.

First, the effort would be influenced by the general efficiency of a certain platform. It would go beyond the scope of this article to compare, for example, the efficiency of Java development with TypeScript or JavaScript development, so let us assume the general development efficiency to be equal for now.

Second, cloud-based solutions usually add some complexity to the architecture due to the required encapsulation of client and server. Of course, you want to have encapsulation of components in a desktop-based tool, too. However, you typically do not have to deal with the fact that components are deployed in a distributed environment.

Third, a very central aspect for implementing tooling is the availability of existing frameworks, tools and platforms to reuse. Ecosystems like Eclipse have grown over 17 years now and already provide a tool, framework or plugin for almost everything. Many requirements can be covered with very little effort, just by extending existing tools or by using the right framework.

While there are frameworks for cloud tools, too (such as Orion, Eclipse Che, or Theia), they are arguably not yet as powerful as Eclipse, IntelliJ, or NetBeans. This might of course change over time.

One notable trend is to reuse existing “desktop” components in web tooling. As an example, Eclipse JDT provides an API that can be used via the language server protocol (LSP). This allows you to reuse features such as auto-completion and syntax highlighting from another UI, typically a web-based IDE. While LSP does not yet cover the full feature set of the Eclipse desktop IDE, it is a great way of reusing existing efforts.

Finally, powerful frameworks usually also carry the burden of complexity. The advantage of reusing existing functionality typically (or hopefully) justifies the effort of learning the framework. In turn, if you use only a tiny bit of a platform, you might be better off using something slimmer that covers your use cases just as well. As an example, if you need a plain code editor without any context, e.g. for a tutorial use case, a platform such as Eclipse might be overkill.

So it is useful to evaluate the desired feature set and how well it is supported by existing frameworks on the respective technology stack (web vs. desktop). It is also worth investigating whether existing features on one stack can be adapted for reuse on the other (e.g. using LSP). This applies in both directions: not only can you call existing frameworks in the background from a cloud-based solution, it is also possible to embed web-based UI components into desktop applications (see also the conclusion).

Finally, what makes the dimension of cost vs. features especially difficult is that you typically cannot know exactly what kind of features you need to provide with a tool in the mid-term future.

And more…

There are obviously many other parameters to consider when comparing the costs of cloud- and desktop-based tools. It would go beyond the scope of this article to spend a section on all of them, but let us at least mention some more important topics.

One meta criterion in the decision for a platform, framework, or technology stack, is the long term availability as well as how actively it is maintained. Again, there is no clear winner, when comparing desktop and web stacks. While you can argue that there is more development going on in the web area, it is in turn extremely volatile. Platforms such as Eclipse have been successfully maintained for 17 years now. There are existing workflows for long-term support (LTS) and suppliers like us providing this as a service. Plus, the platform is very stable in terms of APIs.

In turn, web frameworks provide major version updates with improvements almost every year. While this brings in innovation, it also often requires refactoring of an adopting project. As an example, there is currently a lot of variety emerging when it comes to web-based IDEs (e.g. Visual Studio Code, Theia, Eclipse Che, Orion, etc.). We will cover strategies to deal with this risk in a follow-up article.

Another meta criterion is the availability of skilled resources. At the moment, you can probably find more Angular developers on the market than SWT experts. However, this may change very quickly – once Angular is not “hip” anymore.

Another frequently discussed topic for cloud-based solutions is of course security, as well as tracing. While it is certainly worth considering, it is probably not the key decision factor for most professional environments, but rather a topic requiring special attention.

Conclusion

In this article, we have tried to cover the most important considerations when deciding between a cloud- or web-based solution and a desktop tool. There are surely many more things to consider.

However, what all dimensions have in common is that it is most important to think about the users, their use cases, and the frequency of those use cases. Based on these criteria, you can evaluate the benefits of supporting them on the web or on the desktop. This is especially true if you already have an existing tool and are considering a migration. In this case, there must be clear advantages justifying the cost.

While this is already complex, it is even worthwhile to make this decision on a per-use-case basis. This is already happening naturally in our tool landscape; e.g. code reviews are very often conducted using web interfaces. Identifying the use cases in your tool that would benefit most from being available online reduces the effort and risk of migrating everything at once. So it is often a good idea to pick the low-hanging fruit first.

Ultimately, it will almost never be possible to make a perfect decision. This is especially true, as important criteria, use cases, and technologies change over time and no one can perfectly predict the future. Therefore, the most important thing is to keep some flexibility. That means, even if you decide for a desktop solution, or vice versa, it should be as easy as possible to switch to the other option later on.

Even mixing both technology stacks on a per-use-case basis often makes sense. While this sounds ambitious, there are some simple patterns to follow to make it work. We will highlight those strategies in a follow-up article. This strategy also allows an iterative migration, which is often the only viable way to tackle the complexity and effort of migrating existing tool projects. Some frameworks even proactively support this strategy by providing implementations based on web and desktop technology at the same time, e.g. EMF Forms and JSON Forms.

Let us close this article with a general, non-statistical overview of what most projects currently do. This is of course biased, as the input is derived from our customer projects or projects we know about. However, looking at those:

Some projects directly aim at a pure web-based solution, typically if they benefit a lot from its advantages, if they implement something from scratch, and if they have a fairly self-contained feature set (e.g. training).

Few projects do not consider web-based tooling at all, mostly if they have a defined set of continuous users and a lot of existing investments in their desktop tools.

Most projects plan to maintain their desktop solutions in the near future, but will migrate certain use cases to web technology. Therefore, those projects implement certain design patterns allowing this partial migration. We will highlight those patterns in a follow-up article. Follow us on Twitter to get notified about our blog posts. Stay tuned!

Finally, if you are dealing with this design decision in your project and want support, if you want an evaluation of a web-based version of your tools, or if you want to make your current tools ready for the upcoming challenges and chances, please do not hesitate to contact us.


by Jonas Helming and Maximilian Koegel at June 19, 2018 12:25 PM

Visualizing Eclipse Collections

by Donald Raab at June 16, 2018 09:36 PM

A visual overview of the APIs, Interfaces, Factories, Static Utility and Adapters in Eclipse Collections using mind maps.

A picture is worth a thousand words

I’m not sure how many words a mind map is worth, but they are useful for information chunking. Eclipse Collections is a very feature-rich library. The mind maps help organize and group concepts, and they convey a sense of the symmetry in Eclipse Collections.

Symmetric Sympathy

A High-level view of the Eclipse Collections Library

RichIterable API

The RichIterable API is the set of common APIs shared between all of the container classes in Eclipse Collections. Some methods have overloaded forms which take additional parameters. In the picture below I have grouped the unique set of methods by the kind of functionality they provide.

RichIterable API

API by Example

Below are links to several blogs covering various APIs available on RichIterable.

  1. Filtering (Partitioning)
  2. Transforming (Collect / FlatCollect)
  3. Short-circuiting
  4. Counting
  5. Filter / Map / Reduce
  6. Eclipse Collections API compared to Stream API
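
As a quick, hedged illustration of several of these API groups (the data and class name below are made up for the example):

import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.factory.Lists;

public class RichIterableDemo {
    public static void main(String[] args) {
        MutableList<String> fruits = Lists.mutable.with("apple", "banana", "cherry");
        // Filtering: keep the elements satisfying a predicate
        MutableList<String> withA = fruits.select(each -> each.contains("a"));
        // Transforming: map each element to a new value
        MutableList<Integer> lengths = fruits.collect(String::length);
        // Short-circuiting: stops iterating at the first match
        boolean anyLong = fruits.anySatisfy(each -> each.length() > 5);
        // Counting: how many elements satisfy the predicate
        int startsWithB = fruits.count(each -> each.startsWith("b"));
        System.out.println(withA + " " + lengths + " " + anyLong + " " + startsWithB);
    }
}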

RichIterable Interface Hierarchy

RichIterable is the base type for most container types in Eclipse Collections. Even object-valued Map types extend RichIterable in Eclipse Collections. A Map of type K (key) and V (value) will be an extension of RichIterable of V (value). This provides a rich set of behaviors to Map types for their values. You can still iterate over keys, or over keys and values together; there are separate methods for this purpose.

RichIterable Interface Hierarchy — Green Star=Mutable, Red Circle=Immutable
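
To make the Map-as-RichIterable point concrete, here is a small, hedged sketch (the data is illustrative): a Map<K, V> can be iterated directly over its values, while keys and key/value pairs have their own methods.

import org.eclipse.collections.api.map.MutableMap;
import org.eclipse.collections.impl.factory.Maps;

MutableMap<String, Integer> ages = Maps.mutable.with("Alice", 34, "Bob", 28);
// As a RichIterable<Integer>, the map iterates over its values
boolean anyOver30 = ages.anySatisfy(age -> age > 30);
// Keys and key/value pairs are reachable through separate methods
ages.keysView().each(System.out::println);
ages.forEachKeyValue((name, age) -> System.out.println(name + " is " + age));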

PrimitiveIterable Interface Hierarchy

Eclipse Collections provides container support for all eight Java primitives. There is a base interface with common behavior named PrimitiveIterable.

PrimitiveIterable Interface Hierarchy

The following diagram shows the IntIterable branch from the diagram above. There are seven other similar branches.

IntIterable Interface Hierarchy — Green Star=Mutable, Red Circle=Immutable

The interface hierarchy for each primitive type is pretty much the same as IntIterable.
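
A short, hedged sketch of the primitive API (illustrative data), which avoids boxing entirely:

import org.eclipse.collections.api.list.primitive.MutableIntList;
import org.eclipse.collections.impl.factory.primitive.IntLists;

MutableIntList ints = IntLists.mutable.with(1, 2, 3, 4);
long sum = ints.sum();                               // primitive aggregation, no boxing
MutableIntList evens = ints.select(i -> i % 2 == 0); // filtering stays primitive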

Factories

If you want to create a collection in Eclipse Collections, you have a few options available. One option is to use a constructor or static factory method on the concrete mutable type that you want to create. This requires you to know the name of the concrete mutable type (e.g. FastList, UnifiedSet or UnifiedMap). However, this option does not exist for immutable types. The most convenient, consistent, and symmetric option, if you are going to create both mutable and immutable containers, is to use one of the factory classes provided. A factory class follows the pattern of using the type name plus an “s” to make it plural. So if you want a mutable or immutable List, you would use the Lists class, and then specify whether you want the mutable or immutable factory for that class.

Factory Classes available in Eclipse Collections for Object Containers

There are separate factory classes for primitive containers. Prefix the container type with the primitive type to find the right primitive factory class, as in the sketch below.
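
For example (a hedged sketch; the same naming pattern applies to the other primitive types):

import org.eclipse.collections.api.list.primitive.MutableIntList;
import org.eclipse.collections.api.set.primitive.ImmutableLongSet;
import org.eclipse.collections.impl.factory.primitive.IntLists;
import org.eclipse.collections.impl.factory.primitive.LongSets;

MutableIntList intList = IntLists.mutable.empty();     // "Int" + "Lists"
ImmutableLongSet longSet = LongSets.immutable.empty(); // "Long" + "Sets"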

Mutable Factory Examples

MutableList<T> list = Lists.mutable.empty();
MutableSet<T> set = Sets.mutable.empty();
MutableSortedSet<T> sortedSet = SortedSets.mutable.empty();
MutableMap<K, V> map = Maps.mutable.empty();
MutableSortedMap<K, V> sortedMap = SortedMaps.mutable.empty();
MutableStack<T> stack = Stacks.mutable.empty();
MutableBag<T> bag = Bags.mutable.empty();
MutableSortedBag<T> sortedBag = SortedBags.mutable.empty();
MutableBiMap<K, V> biMap = BiMaps.mutable.empty();

Immutable Factory Examples

ImmutableList<T> list = Lists.immutable.empty();
ImmutableSet<T> set = Sets.immutable.empty();
ImmutableSortedSet<T> sortedSet = SortedSets.immutable.empty();
ImmutableMap<K, V> map = Maps.immutable.empty();
ImmutableSortedMap<K, V> sortedMap = SortedMaps.immutable.empty();
ImmutableStack<T> stack = Stacks.immutable.empty();
ImmutableBag<T> bag = Bags.immutable.empty();
ImmutableSortedBag<T> sortedBag = SortedBags.immutable.empty();
ImmutableBiMap<K, V> biMap = BiMaps.immutable.empty();

Static Utility Classes

In the beginning of Eclipse Collections development, everything was accomplished through static utility classes; we added our own interface types later on. Over time, Eclipse Collections has accumulated quite a few static utility classes that serve various purposes. Static utility classes are useful when you want to use Eclipse Collections APIs with types that extend the JDK collection interfaces like Iterable, Collection, List, RandomAccess and Map.

A collection of useful static utility classes

Static Utility Examples

Assert.assertTrue(
    Iterate.anySatisfy(
        Collections.singleton("1"),
        "1"::equals));
Assert.assertTrue(
    ListIterate.anySatisfy(
        Collections.singletonList("1"),
        Predicates.equal("1")));
Assert.assertTrue(
    MapIterate.anySatisfy(
        Collections.singletonMap(1, "1"),
        Predicates.notEqual("2")));

String[] strings = {"1", "2", "3"};
Assert.assertTrue(
    ArrayIterate.anySatisfy(strings, "1"::equals));
Assert.assertTrue(
    ArrayIterate.contains(strings, "1"));

Adapters

There are adapters that provide the Eclipse Collections APIs to JDK types.

Adapters for JDK types

Creating an adapter

MutableList<String> list =
    Lists.adapt(new ArrayList<>());
MutableSet<String> set =
    Sets.adapt(new HashSet<>());
MutableMap<String, String> map =
    Maps.adapt(new HashMap<>());
MutableList<String> array =
    ArrayAdapter.adapt("1", "2", "3");
CharAdapter chars =
    Strings.asChars("Hello Chars!");
CodePointAdapter codePoints =
    Strings.asCodePoints("Hello CodePoints!");
LazyIterable<String> lazy =
    LazyIterate.adapt(new CopyOnWriteArrayList<>());

Additional Types

There are more types in Eclipse Collections, like Multimap. These will be covered in a separate blog. Multimap, along with ParallelIterable, is one of the types today that does not extend RichIterable directly.

Links

  1. Eclipse Collections Reference Guide
  2. Eclipse Collections Katas
  3. API Design of Eclipse Collections
  4. Refactoring to Eclipse Collections
  5. UnifiedMap, UnifiedSet and Bag Explained

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at June 16, 2018 09:36 PM

Pro Tip: Implementing JUnit Test Cases in Xtend

by Tamas Miklossy (miklossy@itemis.de) at June 15, 2018 02:18 PM

 

What makes a clean test? Three things. Readability, readability, and readability. Readability is perhaps even more important in unit tests than it is in production code. What makes tests readable? The same thing that makes all code readable: clarity, simplicity, and density of expression.
[Robert C. Martin: Clean Code - A Handbook of Agile Software Craftsmanship (page 124)]

 

Recently, the Eclipse GEF DOT Editor has been extended with Rename Refactoring functionality. Following the Behaviour-Driven Development approach, its acceptance criteria were specified first:

Feature: Rename Refactoring

  Scenario Outline:
    Given is the <dslFile>
    When renaming the <targetElement> to <newName>
    Then the dsl file has the content <newContent>.

    Examples:
      |  dslFile  |   targetElement   | newName | newContent |
      |-----------|-------------------|---------|------------|
      |  graph {  |                   |         |  graph {   |
      |    1      |     firstNode     |    2    |    2       |
      |  }        |                   |         |  }         |
      |           |                   |         |            |
      | digraph { |                   |         | digraph {  |
      |   1       |     firstNode     |    3    |   3        |
      |   1->2    |                   |         |   3->2     |
      | }         |                   |         | }          |
      |           |                   |         |            |
      | digraph { |                   |         | digraph {  |
      |   1       |    source node    |    3    |   3        |
      |   1->2    | of the first edge |         |   3->2     |
      | }         |                   |         | }          |
      |           |                   |         |            |


Thereafter, the test specification has been implemented in JUnit test cases:

class DotRenameRefactoringTests extends AbstractEditorTest {

	// ...

	@Test def testRenameRefactoring01() {
		'''
			graph {
				1
			}
		'''.
		testRenameRefactoring([firstNode], "2", '''
			graph {
				2
			}
		''')
	}

	@Test def testRenameRefactoring02() {
		'''
			digraph {
				1
				1->2
			}
		'''.
		testRenameRefactoring([firstNode], "3", '''
			digraph {
				3
				3->2
			}
		''')
	}

	@Test def testRenameRefactoring03() {
		'''
			digraph {
				1
				1->2
			}
		'''.
		testRenameRefactoring([sourceNodeOfFirstEdge], "3", '''
			digraph {
				3
				3->2
			}
		''')
	}

	// ...

	private def testRenameRefactoring(CharSequence it, (DotAst)=>NodeId element,
		String newName, CharSequence newContent) {
		// given
		dslFile.
		// when
		rename(target(element), newName).
		// then
		dslFileHasContent(newContent)
	}

	// ...

}


Thanks to the Xtend programming language, the entire DotRenameRefactoringTests test suite is readable and clean, and it scales very well.

How did I do this? I did not simply write this program from beginning to end in its current form. To write clean code, you must first write dirty code and then clean it.

[Robert C. Martin: Clean Code - A Handbook of Agile Software Craftsmanship (page 200)]

Would you like to learn more about Clean Code, Behaviour-Driven and Test-Driven Development? Take a look at the (German) blog posts of my former colleague Christian Fischer, a very passionate software craftsman and agile coach.


by Tamas Miklossy (miklossy@itemis.de) at June 15, 2018 02:18 PM

Announcing Ditto Milestone 0.3.0-M2

June 15, 2018 04:00 AM

Today we, the Eclipse Ditto team, are happy to announce our next milestone 0.3.0-M2.

The main changes are

  • improvement of Ditto’s cluster performance with many managed Things
  • improved cluster bootstrapping based on DNS, with the potential to easily plug in other mechanisms (e.g. for Kubernetes)

Have a look at the Milestone 0.3.0-M2 release notes for a detailed description of what changed.

Artifacts

The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

The Docker images have been pushed to Docker Hub:



The Eclipse Ditto team


June 15, 2018 04:00 AM

Hello Planet Eclipse!

by Jonas Helming and Maximilian Koegel at June 14, 2018 09:23 AM

This is a test blog to check the aggregation on Planet Eclipse.


by Jonas Helming and Maximilian Koegel at June 14, 2018 09:23 AM

ECF Photon supports OSGi Async Remote Services

by Scott Lewis (noreply@blogger.com) at June 12, 2018 05:29 PM

In a previous post, I indicated that ECF Photon/3.14.0 will support the recently-approved OSGi R7 specification. What does this support provide for developers?

Support for the osgi.async remote service intent

The OSGi R7 Remote Services specification has been enhanced with remote service intents.  Remote Service Intents allow service authors to specify requirements on the underlying distribution system in a standardized way.   Standardization of service behavior guarantees the same runtime behavior across distribution providers and implementations.

The osgi.async intent allows the service interface to use return types such as Java 8's CompletableFuture or OSGi's Promise. With a supporting distribution provider, the proxy will automatically implement the asynchronous/non-blocking behavior for the service consumer.

For example, consider a service interface:
public interface Hello {
    CompletableFuture<String> hello(String greetingMessage);
}
When an implementation of this service is registered and exported as a remote service with the osgi.async intent:
@Component(property = { "service.exported.interfaces=*", "service.intents=osgi.async" })
public class HelloImpl implements Hello {
    public CompletableFuture<String> hello(String greetingMessage) {
        CompletableFuture<String> future = new CompletableFuture<String>();
        future.complete("Hi. This is a response to the greeting: " + greetingMessage);
        return future;
    }
}
Then, when a Hello service consumer (in the same or another process) discovers and imports the remote service, it is injected by DS:
@Component(immediate = true)
public class HelloConsumer {

    @Reference
    private Hello helloService;

    @Activate
    void activate() throws Exception {
        // Call the helloService.hello remote service without blocking
        helloService.hello("hi there").whenComplete((result, exception) -> {
            if (exception != null)
                exception.printStackTrace();
            else
                System.out.println("hello service responds: " + result);
        });
    }
}
The injected helloService instance (a proxy constructed by the distribution provider) will automatically implement the asynchronous remote call. Since the proxy is constructed by the distribution provider, there is no need for the consumer to implement anything other than calling the hello method and handling the response via the Java 8 whenComplete method. Java 8's CompletionStage and Future, as well as OSGi's Promise, are also supported return types. (Only the return type is used to identify asynchronous remote methods; any method name can be used.) For example, the following signature is also supported as an async remote service:
public interface Hello {
    org.osgi.util.promise.Promise<String> hello(String greetingMessage);
}

Further, OSGi R7 Remote Services supports a timeout property:
@Component(property = { "service.exported.interfaces=*", "service.intents=osgi.async", "osgi.basic.timeout=20000" })
public class HelloImpl implements Hello {
    public CompletableFuture<String> hello(String greetingMessage) {
        CompletableFuture<String> future = new CompletableFuture<String>();
        future.complete("Hi. This is a response to the greeting: " + greetingMessage);
        return future;
    }
}
With ECF's RSA implementation and distribution providers, this timeout will be honored by the underlying distribution system. That is, if the remote implementation does not return within 20000ms, the returned CompletableFuture will complete exceptionally with a TimeoutException.

Async remote services make it very easy for service developers to define, implement, and consume loosely-coupled and dynamic asynchronous remote services. They also make asynchronous remote service contracts transport-independent, allowing distribution providers to be swapped or custom providers to be created and used without changes to the service contract.

For the documented example code, see here.

by Scott Lewis (noreply@blogger.com) at June 12, 2018 05:29 PM

Eclipse Vert.x 3.5.2

by vietj at June 08, 2018 12:00 AM

We have just released Vert.x 3.5.2, a bug fix release of Vert.x 3.5.x.

Since the release of Vert.x 3.5.1, quite a few bugs have been reported. We would like to thank you all for reporting these issues.

Vert.x 3.5.2 release notes:

The event bus client using the SockJS bridge is available from NPM, Bower, and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding!


by vietj at June 08, 2018 12:00 AM

Siemens partnering with Obeo on Model Based Systems Engineering solution - a major recognition for OSS Modeling Techs

by Cédric Brun (cedric.brun@obeo.fr) at June 08, 2018 12:00 AM

You might have already heard the news, earlier this week during Siemens PLM Connection Americas 2018, Joe Bohman announced that Siemens PLM was partnering with Obeo.

Here is the complete press release for more detail, but in short: we are working with Siemens on standard modeling languages and tools (Capella, SysML) as well as on tools supporting custom process methodologies, in order to contribute to the true integration of MBSE (Model Based Systems Engineering) within the entire product lifecycle.

This is significant in several ways.

First, it’s another strong data point demonstrating that MBSE is a key enabler in a strategy aimed at multi-domain engineering.

Second, it’s a public endorsement from one of the top high-tech multinational companies that the open source technologies built through the Eclipse Foundation and the PolarSys Working Group, in this case Acceleo, Sirius and Capella, are innovation enablers. Our contribution to those is fundamental, and as such this clearly strengthens these projects, but also our vision and strategy!

Even more importantly, adopters of those technologies will benefit from new integration points and new means to leverage their models during the entire product lifecycle. And that’s what modeling is all about: using the model, iterating over it, refining it, as a living artifact, one that is shared and not something gathering dust in a corner.

These are pretty exciting prospects; no doubt this will be a central subject during EclipseCon France next week. Note that we’ll hold a Capella workshop during the Unconference and that there is still time to register!

See you next week!

Siemens partnering with Obeo on Model Based Systems Engineering solution - a major recognition for OSS Modeling Techs was originally published by Cédric Brun at CEO @ Obeo on June 08, 2018.


by Cédric Brun (cedric.brun@obeo.fr) at June 08, 2018 12:00 AM

Edit an OpenAPI specification in Eclipse IDE

June 07, 2018 10:00 PM

I am working a lot on the OpenAPI Generator project these days. This means that I need to edit OpenAPI Specification files a lot. A specification file is a *.yaml file that describes a REST API.

In Eclipse IDE I have installed the KaiZen OpenAPI Editor plugin. This is an Xtext editor that provides everything you need to be efficient with your OpenAPI specification: outline, code completion, jumping to references, renaming support, and more.

KaiZen OpenAPI Editor for Eclipse IDE

It can be installed from the Eclipse Marketplace.

If you use the Eclipse Installer (also called Oomph), you can add this xml snippet to your installation.setup file:

Listing 1. Oomph snippet to install the KaiZen OpenAPI Editor
<?xml version="1.0" encoding="UTF-8"?>
<setup.p2:P2Task
    xmi:version="2.0"
    xmlns:xmi="http://www.omg.org/XMI"
    xmlns:setup.p2="http://www.eclipse.org/oomph/setup/p2/1.0">
  <requirement
      name="com.reprezen.swagedit.feature.feature.group"/>
  <repository
      url="http://products.reprezen.com/swagedit/latest/"/>
</setup.p2:P2Task>

It is free and open-source (EPL). Enjoy.


June 07, 2018 10:00 PM

Visualizing npm Package Dependencies with Sprotty

by Miro Spönemann at June 07, 2018 07:24 AM

Sprotty is an open-source diagramming framework that is based on web technologies. I’m excited to announce that it will soon be moved to the Eclipse Foundation. This step will enable existing visualizations built on the Eclipse Platform to be migrated to cloud IDEs such as Eclipse Theia. But Sprotty is not limited to IDE integrations; it can be embedded in any web page simply by consuming its npm package.

In this post I present an application that I implemented with Sprotty: visualizing the dependencies of npm packages as a graph. Of course there are already several solutions for this, but I was not satisfied with their graph layout quality and their filtering capabilities. These are areas where Sprotty can excel.

Standalone Web Page

The application is available at npm-dependencies.com. Its source code is on GitHub.

Dependency graph of the sprotty package

The web page offers a search box for npm packages, with package name proposals provided through search requests to the npm registry. After selecting a package, its metadata is resolved through that same registry and the direct dependencies are shown in the diagram. Further dependencies are loaded by clicking on one of the yet unresolved packages (shown in light blue).

If you want to see the whole dependency graph at once, click the “Resolve All” button. For projects with many transitive dependencies, this can take quite some time because the application needs to load the metadata of every package in the dependency graph from the npm registry. The resulting graph can be intimidatingly large, as seen below for lerna.

The full dependency graph of lerna

This is where filtering becomes indispensable. Let’s say we’re only interested in the package meow and how lerna depends on it. Enter meow in the filter box and you’ll see this result:

Dependency paths from lerna to meow

The filtered graph shows the packages that contain the filter string plus all packages that have these as direct or indirect dependencies. Thus we obtain a compact visualization of all dependency paths from lerna to meow.

Hint: If the filter text starts with a space, only packages that have the following text as prefix are selected. If it ends with a space, packages must have the text as suffix. Thus, if the text starts and ends with a space, only exact matches are accepted.

How It Works

The basic configuration of the diagram is quite simple and follows the concepts described in the documentation of Sprotty. Some additional code is necessary to resolve package metadata from the npm registry and to analyze the graph to apply the selected filter. A subclass of LocalModelSource serves as the main API to interact with the graph.

Automatic layout is provided by elkjs, a JavaScript version of the Eclipse Layout Kernel. Here it is configured such that dependency edges point upwards, using the Layered algorithm. The algorithm tries to minimize the number of edge crossings, though only heuristically, because that goal cannot be solved efficiently (it is an NP-hard problem).

Integration in Theia

The depgraph-navigator package can be used in a standalone scenario as described above, but it also works as an extension for the Theia IDE. Once installed in the Theia frontend, you can use this extension by right-clicking the package.json file of an npm package you are working on and selecting Open With → Dependency Graph.

The dependency graph view embedded in Theia

If you have already installed the dependencies of your project via npm install or yarn, all package metadata are available locally, so they are read from the file system instead of querying the npm registry. The registry is used only as a fallback in case a package is not installed in the local project. This means that resolving further packages is much faster compared to the standalone web page. You can get a full graph view of all dependencies by typing ctrl + shift + A (cmd + shift + A on Mac). Again, if the number of dependencies is too large, you probably want to filter the graph; simply start typing a package name to set up the same kind of filter described above for the standalone application (press esc to remove the filter).

Try It!

If you haven’t already done it while reading, try the dependency graph application. You are welcome to get in touch with me if you have any questions about Sprotty and how it can help you to build web-based diagrams and visualizations.

By the way, don’t miss the talk on Sprotty at EclipseCon France that I will give together with Jan next week!


by Miro Spönemann at June 07, 2018 07:24 AM

Download the conference app

by Anonymous at June 05, 2018 11:56 AM

Explore the program by speaker, tracks (categories) or days. Read the session descriptions and speaker bios and choose your favourites. Download the Android or iOS versions. Thank you @EclipseSource!


by Anonymous at June 05, 2018 11:56 AM

Meet the research community

by Anonymous at June 05, 2018 10:39 AM

Tap into the research community at EclipseCon France!


by Anonymous at June 05, 2018 10:39 AM

Eclipse DemoCamp Photon in Eindhoven on July 4: Platform, Sirius, Xtext, and more!

by Niko Stotz at June 04, 2018 01:52 PM

tl;dr: Altran organizes the first Eclipse DemoCamp in Eindhoven to celebrate the Photon Release Train on July 4, 17:00 hrs. Register today! We have Mélanie Bats of Obeo talking about Sirius 6, our own Marc Hamilton summarizing lessons learned from 10 years’ worth of MDE projects, and itemis’ Karsten Thoms and Holger Schill reporting on the latest features of Eclipse Platform 4.8 and Xtext 2.14, respectively.

After hosting the Sirius Day in April, we’re already looking at the next Eclipse event at Altran Netherlands: We’ll host the first Eclipse DemoCamp in Eindhoven to celebrate the Photon Release Train on July 4, 17:00 hrs.

We’ll start off at 17:00 hrs with a small dinner, so we can all enjoy the talks without starving. Afterwards, we have a very exciting list of speakers:

  • Mélanie Bats, CTO of Obeo, will tell us about What’s new in Sirius 6.

    Major Changes in Sirius 6:

    • Sirius now supports an optional integration with ELK for improved diagram layouts: specifiers can configure which ELK algorithm and parameters should be used for each of their diagrams, directly inside the VSM (ticket #509070). This is still considered experimental in 6.0.

    • A new generic edge creation tool is now available on all Sirius diagrams. With it, end users no longer have to select a specific edge creation tool in the palette, but only to choose the source and target elements (ticket #528002).

    • Improved compatibility with Xtext with an important bug fix (ticket #513407). This is a first step towards a better integration with Xtext, more fixes and improvements will come during the year.

    • It is now possible for specifiers to configure the background color of each diagram. Like everything else in Sirius, the color can be dynamic and reflect the current state of the model. (ticket #525533).

    • When developing a new modeler, it is now possible to reload the modeler’s definition (.odesign) from an Eclipse runtime if the definition has changed in the host that launched the runtime. This is similar to “hot code replace” in Java, but for VSMs, and avoids stopping/restarting a new runtime on each VSM change (ticket #522407).

    • In the VSM editor, when editing an interpreted expression which uses custom Java services, it is now possible to navigate directly to a service’s source code using F3 (ticket #471900).

    A more visual overview can be found in the Obeo blog.

  • Altran’s own Marc Hamilton shares Altran’s experience developing MDE applications with Eclipse technology.

    Altran Netherlands has been developing Eclipse-based model-driven applications for its customers for several years.
    In this talk, we share our experience with different modeling technologies like Acceleo, OCL, QVTo, EGF, Sirius, Xtext, and others.

  • What’s new in Xtext 2.14 will be presented by Xtext committer of itemis, Holger Schill.

    Major Changes in Xtext 2.14:

    • Java 9 and 10 Support
    • JUnit 5 Support
    • New Grammar Annotations
    • Create Action Quickfix
    • Code Mining Support
    • New Project and File Wizard
    • Improved Language Server Support
    • Performance Improvements

    Please check the Release Notes for details.

  • Yet another overview by Karsten Thoms, of itemis with his talk Approaching Light Speed – News from the Eclipse Photon Platform.

    The Eclipse Photon simultaneous release comes this year with a plethora of new features and improvements that will keep the Eclipse IDE the #1 IDE in flexibility, scalability, and performance!

    This session will give a guided tour through the new features and changes in Eclipse Photon. Due to the vast amount of noteworthy material, the focus of this talk is on the Eclipse Platform project, covering JDT only briefly. You will see usability improvements, useful new APIs for platform developers, and neat features for users. Besides the visible changes, the platform project team has paid special attention to stability, performance, and resource consumption. In this talk, I will give some insights into how the team has worked on that.

    Come and see the incredible achievements the platform team and its growing number of contributors made to bring you the best Eclipse IDE ever!

More talks are in discussion. Please propose your talk to us; we’d be especially happy to include more local speakers in the lineup.

We’ll have a break and some get-together afterwards, so there is plenty of opportunity to get in touch with the speakers and your fellow Eclipse enthusiasts in the region.

The DemoCamp will take place at the Altran office in Eindhoven. Please refer to the Eclipse wiki for all details and register now to secure your spot at the first Eclipse DemoCamp in Eindhoven!


by Niko Stotz at June 04, 2018 01:52 PM

Bag — The Counter

by Nikhil Nanivadekar at June 04, 2018 01:15 PM

https://www.eclipse.org/collections/

I have often encountered the need to count objects. This need comes in two flavors: the first is to count the number of objects which satisfy a certain criterion, and the second is to find the number of times a particular object is encountered. In this blog we are going to see how to solve the second problem: find the number of times a particular object is encountered.

Bag (or Multiset):

A Bag is the data structure to use when you would otherwise count objects by putting them in a Map<K, Integer>. A Bag is similar to your shopping or grocery bag, wherein you can have one or more occurrences of a particular item in no particular order. So, a Bag is an order-independent data structure like a Set; however, it allows duplicates.

Let us consider a list of items in which you want to count the number of each fruit. You can simply group the items and count; the JDK has Collectors which do that for you. The code looks like this:

Using a Map to count
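
Since the code appears as an image in the original post, here is a hedged reconstruction of the JDK approach (the item data and variable names are illustrative):

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import org.junit.Assert;

List<String> items = Arrays.asList("Apple", "Apple", "Apple", "Banana", "Banana", "Orange");
Map<String, Integer> counts = items.stream()
        .collect(Collectors.toMap(Function.identity(), item -> 1, Integer::sum));
Assert.assertEquals(Integer.valueOf(3), counts.get("Apple"));
Assert.assertEquals(Integer.valueOf(2), counts.get("Banana"));
Assert.assertEquals(Integer.valueOf(1), counts.get("Orange"));
Assert.assertNull(counts.get("Grapes")); // absent keys yield null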

“Apple”, “Banana” and “Orange” have a valid count. However, for “Grapes”, which is not part of the items, the assertion has to check for null, thereby making this implementation not null safe.

Now let us solve the same problem by using an Eclipse Collections Bag in this case. Eclipse Collections has the toBag() API available which returns a Bag.

Using a Bag to count
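
Again, a hedged reconstruction of the code behind the image (illustrative data):

import org.eclipse.collections.api.bag.MutableBag;
import org.eclipse.collections.impl.factory.Lists;
import org.junit.Assert;

MutableBag<String> counts = Lists.mutable
        .with("Apple", "Apple", "Apple", "Banana", "Banana", "Orange")
        .toBag();
Assert.assertEquals(3, counts.occurrencesOf("Apple"));
Assert.assertEquals(2, counts.occurrencesOf("Banana"));
Assert.assertEquals(1, counts.occurrencesOf("Orange"));
Assert.assertEquals(0, counts.occurrencesOf("Grapes")); // null safe: returns 0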

Bag has the occurrencesOf() API, which returns the count. The occurrencesOf() API is null safe, as can be seen from the assertion for “Grapes”.

In addition to the rich API available on RichIterable, the Eclipse Collections Bag also has more specific and intuitive APIs like occurrencesOf(), addOccurrences(), topOccurrences(), and bottomOccurrences(), to name a few.
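
For instance (a hedged sketch with illustrative data):

import org.eclipse.collections.api.bag.MutableBag;
import org.eclipse.collections.impl.factory.Bags;

MutableBag<String> bag = Bags.mutable.with("Apple", "Apple", "Banana");
bag.addOccurrences("Orange", 2); // add the same item several times at once
bag.topOccurrences(1);           // the item(s) with the highest count
bag.bottomOccurrences(1);        // the item(s) with the lowest count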

The Eclipse Collections Bag implementation is called HashBag. A HashBag is backed by an ObjectIntMap<K> from Eclipse Collections itself. The ObjectIntMap is an open-addressing map which has objects as keys but primitive ints as values. This implementation makes the Bag leaner.

Below are a few memory and performance comparisons between JDK 1.8 HashMap and Eclipse Collections 9.2.0 Bag.

Memory Footprint (lower is better)

This shows the total memory footprint including the constituents of the data structures.

Memory Comparison HashMap<Integer, Integer> vs Eclipse Collections HashBag<Integer>
Memory Comparison HashMap<String, Integer> vs Eclipse Collections HashBag<String>

Performance Tests (higher is better)

All measurements are reported in operations/s.

Source code for memory tests and performance tests is available on GitHub.

Note: Map<K, Integer> is considered for the memory and performance tests instead of Map<K, Long> so that the comparisons are fair, since the Eclipse Collections Bag is backed by an ObjectIntMap<K>. I have verified that the memory footprint of Map<K, Integer> and Map<K, Long> was the same for these tests.

Summary:

  1. Eclipse Collections HashBag has ~40% smaller memory footprint compared to JDK HashMap.
  2. JDK HashMap performs better than Eclipse Collections HashBag for add() and look-up operations for sizes less than 40,000 elements.
  3. JDK HashMap and Eclipse Collections HashBag have comparable performance for sizes greater than 40,000 elements.
  4. Eclipse Collections HashBag performs better than JDK HashMap when adding the same element 10 times.
  5. JDK HashMap performs slightly better than Eclipse Collections HashBag for look-up operations.
  6. Eclipse Collections HashBag has API which is helpful for Bag (count) specific operations.
  7. Eclipse Collections HashBag is null safe for cases where a particular object does not exist in the Bag.

Show your support: star us on GitHub.

Eclipse Collections Resources:
Eclipse Collections comes with its own implementations of List, Set and Map. It also has additional data structures like Multimap, Bag and an entire primitive collections hierarchy. Each of our collections has a rich API for commonly required iteration patterns.

  1. Website
  2. Source code on GitHub
  3. Contribution Guide
  4. Reference Guide

Bag — The Counter was originally published in Oracle Developers on Medium.


by Nikhil Nanivadekar at June 04, 2018 01:15 PM

Eclipse Newsletter - A First Look at Jakarta EE

June 04, 2018 10:40 AM

This month, read eight great pieces to get informed about various parts of Jakarta EE.

June 04, 2018 10:40 AM

Eclipse Vert.x goes Native

by jotschi at June 04, 2018 12:00 AM

In this blog post I would like to give you a preview of native image generation for Vert.x applications using GraalVM.

With GraalVM it is possible to generate native executables. These executables can be run directly, without the need for an installed JVM.

Benefits

  • The start-up time is much faster. It is no longer required to wait for the start-up of the JVM. The application is usually up and running in a matter of milliseconds.

  • Reduced memory footprint. I measured 40 MB of memory usage (RSS) for the Vert.x Web application which I’m going to showcase.

  • Smaller containers. No JVM means no overhead: all the needed parts are already contained within the executable. This can be very beneficial when building deployable container images.

Demo Project

For the demo application I chose a very basic hello-world Vert.x Web server.

package de.jotschi.examples;

import java.io.File;

import io.vertx.core.Vertx;
import io.vertx.core.logging.Logger;
import io.vertx.core.logging.LoggerFactory;
import io.vertx.core.logging.SLF4JLogDelegateFactory;
import io.vertx.ext.web.Router;

public class Runner {

    public static void main(String[] args) {
        // Use logback for logging
        File logbackFile = new File("config", "logback.xml");
        System.setProperty("logback.configurationFile", logbackFile.getAbsolutePath());
        System.setProperty(LoggerFactory.LOGGER_DELEGATE_FACTORY_CLASS_NAME, SLF4JLogDelegateFactory.class.getName());
        Logger log = LoggerFactory.getLogger(Runner.class);

        // Setup the http server
        log.info("Starting server for: http://localhost:8080/hello");
        Vertx vertx = Vertx.vertx();
        Router router = Router.router(vertx);

        router.route("/hello").handler(rc -> {
            log.info("Got hello request");
            rc.response().end("World");
        });

        vertx.createHttpServer()
            .requestHandler(router::accept)
            .listen(8080);

    }

}

GraalVM

GraalVM runs a static analysis on the generated application in order to find the reachable code. This process, which is run within the Substrate VM, leads to the generation of the native image.

Limitations

Due to the nature of the static analysis, Substrate VM also has some limitations.

Dynamic class loading and unloading, for example, is not supported, because it would in essence alter the available code at runtime.

Reflection is only partially supported and requires some manual steps, which we will cover later on.
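
As a small illustration of my own (not from the original post), this kind of code defeats a closed-world static analysis, because the class to load is only known at runtime:

public class DynamicLoading {
    public static void main(String[] args) throws Exception {
        // The class name is only known at runtime, so a static analysis
        // cannot prove which classes are reachable here.
        String className = System.getProperty("handler.class", "java.util.ArrayList");
        Object handler = Class.forName(className).getDeclaredConstructor().newInstance();
        System.out.println("Loaded: " + handler.getClass().getName());
    }
}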

Patches / Workarounds

Work in progress

Next we need to apply some patches / workarounds. Keep in mind that native image generation is a fairly new topic, and these workarounds will hopefully no longer be required once the Substrate VM and Netty have better support for each other.

I did not manage to get native epoll, kqueue and SSL integration to work with native images. These parts are heavily optimized within Netty and use JNI to directly access the OS features. Substrate VM supports JNI and could in theory integrate these native libraries.

I created a reproducer and an issue so hopefully these problems can be addressed soon.

Vert.x Transport

First I needed to patch the io.vertx.core.net.impl.transport.Transport class in order to prevent the loading of EPoll and KQueue native support. Otherwise Substrate VM will try to load these classes and fail.

public class Transport {
…
  /**
   * The native transport, it may be {@code null} or failed.
   */
  public static Transport nativeTransport() {
    // Patched: I removed the native transport discovery.
    // The imports would be picked up by Substrate VM
    // and cause further issues.
    return null;
  }
…
}

Netty SSL

Native SSL support is another problematic area. I created a patched dummy io.netty.handler.ssl.ReferenceCountedOpenSslEngine class in order to prevent Substrate VM from digging deeper into the SSL code of Netty.

Next we need to set up the reflection configuration within reflectconfigs/netty.json.

Netty uses reflection to instantiate the socket channels. This is done in the ReflectiveChannelFactory. We need to tell Substrate VM how classes of type NioServerSocketChannel and NioSocketChannel can be instantiated; the "<init>" entries below register their no-arg constructors.

[
  {
    "name" : "io.netty.channel.socket.nio.NioSocketChannel",
    "methods" : [
      { "name" : "", "parameterTypes" : [] }
    ]
  },
  {
    "name" : "io.netty.channel.socket.nio.NioServerSocketChannel",
    "methods" : [
      { "name" : "", "parameterTypes" : [] }
    ]
  }
]

If you want to learn more about the state of Netty and GraalVM, I can recommend this GraalVM blog post by Codrut Stancu.

Building

Finally we can build our Maven project to generate a shaded jar.

mvn clean package

Next we need the GraalVM package. You can download it from the GraalVM website.

We use the shaded jar as the input source for the native-image command which will generate the executable.

$GRAALVMDIR/bin/native-image \
 --verbose \
 --no-server \
 -Dio.netty.noUnsafe=true  \
 -H:ReflectionConfigurationFiles=./reflectconfigs/netty.json \
 -H:+ReportUnsupportedElementsAtRuntime \
 -Dfile.encoding=UTF-8 \
 -jar target/vertx-graalvm-native-image-test-0.0.1-SNAPSHOT.jar

Result

Finally we end up with a 27 MB vertx-graalvm-native-image-test-0.0.1-SNAPSHOT executable which we can run.

$ ldd vertx-graalvm-native-image-test-0.0.1-SNAPSHOT 
  linux-vdso.so.1 (0x00007ffc65be8000)
  libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f8e892f0000)
  libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f8e890d3000)
  libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f8e88eb9000)
  librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f8e88cb1000)
  libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f8e88a79000)
  libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8e886da000)
  /lib64/ld-linux-x86-64.so.2 (0x00007f8e8afb7000)

Memory

The maximum resident set size (RSS) of the JVM and native variants can be compared with /usr/bin/time:

/usr/bin/time -f "\nmaxRSS\t%MkB" java -jar target/vertx-graalvm-native-image-test-0.0.1-SNAPSHOT.jar
/usr/bin/time -f "\nmaxRSS\t%MkB" ./vertx-graalvm-native-image-test-0.0.1-SNAPSHOT
  • Native Image: 40 MB
  • Java 10: 125 MB

The full project can be found on GitHub.

If you want to read more on the topic, I can also recommend this article by Renato Athaydes, in which he demonstrates how to create a very small, lightweight, low-memory application using GraalVM.

Thanks for reading. If you have any further questions or feedback, don’t hesitate to send me a tweet at @Jotschi.


by jotschi at June 04, 2018 12:00 AM

Leaving the Eclipse Foundation

by Roxanne on IoT at May 31, 2018 04:27 PM

Yes. The rumours are true. I will star in the next Marvel film! Alright, alright. That’s a lie, but it is true that I am leaving my Marketing Specialist position at the Eclipse Foundation after nearly 6 years.

The beginning

To be honest, I don’t remember applying for the job. I got an email from someone named Ian Skerrett saying I had an interview. I was like… “What’s the Eclipse Foundation?”. I even had to Google what a “committer” was. To top it off, I got to the headquarters in Ottawa for my interview and the door said to go around the back, but I went the wrong way. I came to a dead end: face-to-face with a fence. I looked at the clock, 2 minutes left. I looked around. Screw it. I threw my high heels and resume over the fence, climbed then jumped over it barefoot in a dress. I got the job!

It has been quite the journey. I have met many great people along the way. I’ve connected with so many smart and talented professionals, and I’ve made awesome friends — I even met my now partner Stefan Oehme at EclipseCon Europe. I have created and edited the Eclipse Newsletter since 2013; it now has 250,000 subscribers! I helped launch the first EclipseCon France in 2013. In 2016, I moved from Canada to Germany and have been working from home ever since. Finally, I gave my first talk and coded my first full website in 2017.

All that to say that working at the Eclipse Foundation has shaped who I am today. Roxanne Joncas would be a completely different human being without the influence of all of you.

Say Goodbye in Person

I will be at EclipseCon France on June 13–14 in Toulouse. This will be my 14th and last EclipseCon. Wow. I can’t believe my first EclipseCon was in Boston in 2013. Get your ticket for Toulouse if you want to get one last high five from me!

My first EclipseCon

Thank you

I wanted to take the time to write how some of you have influenced me and how I will remember you. Here goes:

  • Sven Efftinge: First community member I remember working with and meeting in person.
  • Mélanie Bats: Voice your opinion.
  • Alex Morel: Always be yourself.
  • Mickael Istria: Passion is important.
  • Kai Hudalla: Ask questions.
  • Cédric Brun: Slides/presentations can be fun.
  • Jonas Helming: Always answer your emails.
  • Alexandra Schladebeck: Test. Test. Test. + Voice lozenges.
  • Sebastian Zarnekow: Being quiet 99% of the time makes your words more powerful. Silent, but deadly.
  • Alexander Schmitt: I can speak in public.
  • Stéphane Bégaudeau: Strangers to friends in 5 seconds.
  • Tracy Miranda: Work on what you love.
  • Martin Lippert: Always show up.
  • Shawn Pearce: Being a ‘’Bored’’ Member can be fun.
  • Emily Jiang: A smile goes a long way!
  • Linda Snyder and Carole Garner: Plan. Plan. Plan. Make it seamless for attendees and it’ll be fine, even if it isn’t.
  • Goulwen Le Fur: A hoodie can be worth more than you think.
  • Eike Stepper: Badge ribbons are important.
  • Loredana Chituc: Open your heart.
  • Chris Aniszczyk: Spelling is important.
  • Lorenzo Bettini: Italy has a thing for paperwork.
  • Ivar Grimstad: When in London, eat Indian Food.

*Note: I didn’t include everyone, because the list would be too long. If your name isn’t up there, it does not mean I don’t like you or won’t remember you!

What next?

My last day at the Foundation will be mid-August (exact date TBD). Following my departure, I am planning a sweet 4-month break to let all my creativity out and learn German at an expert level. I have no idea what 2019 will have in store for me, but I am excited to find out! Ideally, I become as cool as Kate McKinnon in Ghostbusters.

Not really goodbye

Obviously, we live on the digital frontier, so it’s not really a farewell. You can:

Farewell, my friends. See you out there in the wild!

Psst: You can apply for my position here.


by Roxanne on IoT at May 31, 2018 04:27 PM

JBoss Tools 4.6.0.AM2 for Eclipse Photon.0.M7

by jeffmaury at May 30, 2018 05:33 PM

Happy to announce 4.6.0.AM2 (Developer Milestone 2) build for Eclipse Photon.0.M7.

Downloads available at JBoss Tools 4.6.0 AM2.

What is New?

Full info is at this page. Some highlights are below.

General

Eclipse Photon

JBoss Tools is now targeting Eclipse Photon M7.

OpenShift

Enhanced Spring Boot support for server adapter

The Spring Boot runtime was already supported by the OpenShift server adapter. However, it had one major limitation: files and resources were synchronized between the local workstation and the remote pod(s) only for the main project. If your Spring Boot application had dependencies that were present in the local workspace, any change to a file or resource of one of these dependencies was not handled. This is no longer the case.

Fuse Tooling

Camel Rest DSL from WSDL wizard

There is a new "Camel Rest DSL from WSDL" wizard. This wizard wraps the wsdl2rest tool now included with the Fuse 7 distribution, which takes a WSDL file for a SOAP-based (JAX-WS) web service and generates a combination of CXF-generated code and a Camel REST DSL route to make it accessible using REST operations.
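
For readers unfamiliar with the Camel REST DSL, here is a hedged sketch of what such a route can look like in Camel's Java DSL (the wizard generates its own code and configuration; the route below, including the RestDslSketch name and the paths, is purely illustrative):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.rest.RestBindingMode;

public class RestDslSketch extends RouteBuilder {
    @Override
    public void configure() {
        // Global REST configuration: the component and the binding mode.
        restConfiguration()
            .component("servlet")
            .bindingMode(RestBindingMode.json);

        // One REST element with a single GET operation,
        // delegating to a plain Camel route.
        rest("/customers")
            .get("/{id}").to("direct:getCustomer");

        from("direct:getCustomer")
            .transform().simple("customer ${header.id}");
    }
}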

To start, you need an existing Fuse Integration project in your workspace and access to the WSDL for the SOAP service. Then use File→New→Other… and select Red Hat Fuse→Camel Rest DSL from WSDL wizard.

On the first page of the wizard, select your WSDL and the Fuse Integration project in which to generate the Java code and Camel configuration.

SOAP to REST Wizard page 1

On the second page, you can customize the Java folder path for your generated classes, the folder for the generated Camel file, plus any customization for the SOAP service address and destination REST service address.

SOAP to REST Wizard page 2

Click Finish and the new Camel configuration and associated Java code are generated in your project. The wizard determines whether your project is Blueprint, Spring, or Spring Boot based, and it creates the corresponding artifacts without requiring any additional input. When the wizard is finished, you can open your new Camel file in the Fuse Tooling Route Editor to view what it created.

Fuse Tooling editor Rest Tab

That brings us to another new functionality, the REST tab in the Fuse Tooling Route Editor.

Camel Editor REST tab

The Fuse Tooling Route Editor provides a new REST tab. For this release, the contents of this tab are read-only and include the following information:

  • Details for the REST Configuration element including the component (jetty, netty, servlet, etc.), the context path, the port, binding mode (JSON, XML, etc.), and host. There is only one REST Configuration element.

  • A list of REST elements that collect REST operations. A configuration can have more than one REST element. Each REST element has an associated property page that displays additional details such as the path and the data it consumes or produces.

Fuse Tooling Rest Elements Properties View

  • A list of REST operations for the selected REST element. Each of the operations has an associated property page that provides details such as the URI and output type.

Fuse Tooling Rest Operations Properties View

For this release, the REST tab is read-only. If you want to edit the REST DSL, use the Route Editor Source tab. When you make changes and save them in the Source tab, the REST tab refreshes to show your updates.

Enjoy!

Jeff Maury


by jeffmaury at May 30, 2018 05:33 PM

Papyrus Coding Day 2018

by tevirselrahc at May 30, 2018 02:50 PM

In previous posts (here and here), I mentioned the increased focus on Papyrus Toolsmiths.

In this context, the Papyrus development team is putting together a “Papyrus coding day” just before EclipseCon France.
During this free coding day, they will provide you with:

  • Hands-on sessions to get insight into Papyrus SDK capabilities
  • Discussions with the Papyrus development team

Registration is mandatory, as the number of attendees is limited.

And rejoice: attendance is free (and includes coffee and snacks)!

There are, however, prerequisites:

  • Knowledge of Java (at least intermediate level)
  • EMF and UML experience is a plus!

So whether you are already invested in Papyrus, just curious, a toolsmith or a hacker, this may be of interest to you!

You can contact me if this is of interest and I will put you in touch with the organizers!

(The information you provide will only be used to put you in touch with the organizers and then deleted.)

by tevirselrahc at May 30, 2018 02:50 PM