Integration Tooling for Eclipse Photon

by pleacu at August 14, 2018 02:55 PM

Try our leaner, complete integration tooling, compatible with Eclipse Photon and Red Hat Developer Studio 12.


JBoss Tools Integration Stack 4.6.0.Final / Red Hat Developer Studio Integration Stack 12.0.0.GA

All of the Integration Stack components have been verified to work with the same dependencies as JBoss Tools 4.6 and Red Hat Developer Studio 12.

What’s new for this release?

This is the initial release in support of Eclipse Photon. It syncs up with Developer Studio 12.0.0, JBoss Tools 4.6.0 and Eclipse 4.8.0 (Photon). It is also a maintenance release for Teiid Designer and BRMS tooling.

Released Tooling Highlights

Business Process and Rules Development

BPMN2 Modeler Known Issues

See the BPMN2 1.5.0.Final Known Issues Section of the Integration Stack 12.0.0.GA release notes.

Drools/jBPM6 Known Issues

See the Drools 7.8.0.Final Known Issues Section of the Integration Stack 12.0.0.GA release notes.

Data Virtualization Highlights

Teiid Designer

See the Teiid Designer 11.2.0.Final Resolved Issues Section of the Integration Stack 12.0.0.GA release notes.

What’s an Integration Stack?

Red Hat Developer Studio Integration Stack is a set of Eclipse-based development tools. It further enhances the IDE functionality provided by Developer Studio, with plug-ins specifically for use when developing for other Red Hat products. It’s where DataVirt Tooling and BRMS tooling are aggregated. The following frameworks are supported:

Red Hat Business Process and Rules Development

Business Process and Rules Development plug-ins provide design, debug and testing tooling for developing business processes for Red Hat BRMS and Red Hat BPM Suite.

  • BPEL Designer - Orchestrating your business processes.

  • BPMN2 Modeler - A graphical modeling tool which allows creation and editing of Business Process Modeling Notation diagrams using Graphiti.

  • Drools - A Business Logic integration Platform which unifies Rules, Workflow and Event Processing, including KIE.

  • jBPM - A flexible Business Process Management (BPM) suite.

Red Hat Data Virtualization Development

Red Hat Data Virtualization Development plug-ins provide a graphical interface to manage various aspects of Red Hat Data Virtualization instances, including the ability to design virtual databases and interact with associated governance repositories.

  • Teiid Designer - A visual tool that enables rapid, model-driven definition, integration, management and testing of data services, without programming, using the Teiid runtime framework.

The JBoss Tools website features tab

Don’t miss the Features tab for up-to-date information on your favorite Integration Stack components.

Installation

The easiest way to install the Integration Stack components is through the stand-alone installer or through our JBoss Tools Download Site.

For a complete set of Integration Stack installation instructions, see the Integration Stack Installation Guide.

Let us know how it goes!

Paul Leacu.


by pleacu at August 14, 2018 02:55 PM

Announcing Ditto Milestone 0.8.0-M1

August 14, 2018 04:00 AM

Even during the summer break the Ditto team worked hard in order to provide the next milestone release. Here it is: Milestone 0.8.0-M1.

Have a look at the Milestone 0.8.0-M1 release notes for what changed in detail and why there was a version bump from 0.3.0-M2 to 0.8.0-M1.

The main changes and new features are:

  • Security enhancement: some of Ditto’s headers can no longer be set from the outside
  • Reporting of application metrics to Prometheus
  • Automatic cluster formation when running in Kubernetes
  • Improved memory consumption of Ditto’s things-service
  • Stabilized connectivity to AMQP 1.0 and 0.9.1

Artifacts

The new Java artifacts have been published to the Eclipse Maven repository as well as to Maven Central.

The Docker images have been pushed to Docker Hub.


The Eclipse Ditto team


August 14, 2018 04:00 AM

EMF Forms 1.17.0 Feature: Table Detail Panes

by Jonas Helming and Maximilian Koegel at August 10, 2018 10:06 AM

EMF Forms makes it easy to create forms that edit your data based on an EMF model. To get started with EMF Forms, please refer to our tutorial. If you are an adopter of EMF Forms, please note that we have recently published 1.17.1, an update to 1.17.0. The update fixes three bugs which occurred when using EMF Forms in Photon. Please see here for details, and visit our download page to get the new release.

In this post, we would like to outline a new feature in the 1.17.0 release: The improved table detail panes.

While EMF Forms is well known for supporting form-based UIs with classic input fields, such as text controls or checkboxes, it also supports showing lists of elements in tables and list views, as well as hierarchies in trees.

As an example, you can very easily create a tree like this:

[Screenshot: a tree view rendered with EMF Forms]

Or a table like this:

[Screenshot: a table rendered with EMF Forms]

With 1.17.0, we have updated the documentation; please see this tutorial for tables and this tutorial for tree views.

Any control showing several elements can allow inline editing (like the EMF Forms table does), show a detail pane (like the tree does), or both. As an example, if the elements shown in a table have many attributes, you could show some of them in the table and all of them in a detail pane. To do so, just enable the detail pane on the TableControl in the view model:

[Screenshot: enabling the detail pane on the TableControl in the view model]

The result, after removing most columns from the table, would then look like this:

[Screenshot: a table with a detail pane]

Alternatively, you can set the DetailEditing property to “WithDialog”. As a result, the renderer opens a separate window showing the details when an element is double-clicked. With 1.17.0, both options are supported by all table renderers, including the table renderer based on Nebula Grid.

You might wonder where the layout of the detail pane comes from. The detail itself is rendered with EMF Forms: the framework retrieves the view model for the selected element, so if you already have a view model for the type “User”, it will be used in the detail pane as well. For this to work, you need to register the view model with EMF Forms, by default via an extension point.

Another minor improvement in 1.17.0 is that you can also try out those detail panes with a separate view model in the preview provided by the EMF Forms tooling. To do so, you can add those additional view models to the preview using the “Manage Additional Views” button in the toolbar of the preview.

[Screenshot: the “Manage Additional Views” button in the preview toolbar]

Any view model added here will be picked up by the preview when a detail pane is to be rendered.

As with all EMF Forms features, the detail panes and the respective tooling are of course adaptable to even more custom requirements. If there are any features you miss or ways you wish to adapt them, please provide feedback by submitting bugs or feature requests, or contact us if you are interested in enhancements or support.


by Jonas Helming and Maximilian Koegel at August 10, 2018 10:06 AM

We are hiring 2 Eclipse developers

by Andrey Loskutov (noreply@blogger.com) at August 09, 2018 12:47 PM

We are hiring again!

We have 2 open positions for Eclipse developers in our main office in Böblingen, Germany (no, it is not a remote job).

The job focus is Java/Eclipse development in the context of the very complex Eclipse-based IDE used as the front end for the semiconductor tester.

We speak English and Java here. If you are interested, just drop me a mail.


by Andrey Loskutov (noreply@blogger.com) at August 09, 2018 12:47 PM

Modeling Symposium @ EclipseCon Europe 2018

by Jonas Helming and Maximilian Koegel at August 08, 2018 11:45 AM

We are happy to announce that Ed, Philip and Jonas are organizing the Modeling Symposium for EclipseCon Europe 2018 in Ludwigsburg. The symposium aims to provide a forum for community members to present a brief overview of their work. We offer 10-minute lightning slots (including set-up and questions) to facilitate a broad range of speakers. The primary goal is to introduce interesting new technological features, mainly from modeling projects which are otherwise not represented at the conference.

If you are interested in giving a talk, please send a short description (a few sentences) to munich@eclipsesource.com. Depending on the number of submissions, we might have to make a selection.

Deadline for submission: Wednesday, September 5th, 2018

Acceptance/decline notification: Monday, September 10th, 2018

Please adhere to the following guidelines:

  • Please provide sufficient context. Talks should start with a concise overview of what the presenter plans to demonstrate, or what a certain framework offers. Even more important, explain how and why this is relevant.
  • Do not bore us! Get to the point quickly. You do not have to use all your allocation. An interesting 3-minute talk will have a bigger impact than a boring 10-minute talk. We encourage you to plan for a 5-minute talk, leaving room for 5 minutes of discussion.
  • Keep it short and sweet; focus on the most important aspects. A conference offers the major advantage of getting in contact with people who are interested in your work, so consider the talk more as a teaser to prompt follow-up conversations than a forum to demonstrate or discuss technical details in depth.
  • A demo is worth a thousand slides. We prefer to see how your stuff works rather than be told about it with illustrative slides. Please restrict the slides to summarizing your introduction or conclusion.

Looking forward to your submissions!


by Jonas Helming and Maximilian Koegel at August 08, 2018 11:45 AM

Supporting OpenJFX 11 from JDK11 onwards in e(fx)clipse

by Tom Schindl at August 04, 2018 09:42 PM

Starting with JDK 11, OpenJFX is no longer part of any downloadable JDK distribution. As JavaFX is designed to run on the module path (and is tested only there), you have two options to run JavaFX inside OSGi:

  • You create your own JDK distribution using jlink.
  • You launch the VM with the JavaFX modules added (e.g. via --module-path and --add-modules).

While the second solution is doable for RCP applications, it is not a nice one, and for integrating into external frameworks (like the Eclipse IDE) it is not possible at all. So we need a different solution to satisfy both use cases.

The solution to this problem is that e(fx)clipse installs a classloader hook using the Equinox AdapterHook framework (you can do crazy stuff with that), spins up a new Java module layer on the fly containing all the JavaFX modules, and uses the classloader of that module layer to load the JavaFX classes.
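
To illustrate the mechanism, here is a minimal sketch of the plain JDK module-layer API that such a hook builds on. The directory path and the javafx.controls root module are assumptions for the example; the actual e(fx)clipse hook is considerably more involved.

import java.lang.module.Configuration;
import java.lang.module.ModuleFinder;
import java.nio.file.Path;
import java.util.Set;

public class JavaFxModuleLayer
{
    public static ClassLoader javaFxLoader(Path javafxMods)
    {
        // Find the JavaFX modules in the given directory
        ModuleFinder finder = ModuleFinder.of(javafxMods);
        ModuleLayer boot = ModuleLayer.boot();
        // Resolve a configuration with javafx.controls as the root module
        Configuration cf = boot.configuration()
                .resolve(finder, ModuleFinder.of(), Set.of("javafx.controls"));
        // Spin up a new module layer on the fly and hand out its classloader
        ModuleLayer layer = boot.defineModulesWithOneLoader(
                cf, ClassLoader.getSystemClassLoader());
        return layer.findLoader("javafx.controls");
    }

    public static void main(String[] args) throws Exception
    {
        ClassLoader loader = javaFxLoader(Path.of(args[0]));
        // JavaFX classes are now loadable without --module-path at launch
        System.out.println(loader.loadClass("javafx.application.Platform"));
    }
}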

With this strategy, you can supply the JavaFX modules (including the native bits) required for your application as part of your p2 repository.


by Tom Schindl at August 04, 2018 09:42 PM

New improvements to the Eclipse Packaging website

August 02, 2018 02:30 PM

In my previous blog post, we announced a new look and feel for the Eclipse Foundation website. The plan was to roll out our new design to eclipse.org first and then gradually migrate our other web properties.

Since then, we have migrated our Hugo theme, Eclipsepedia, Eclipse Community Forums and a few other Drupal sites, such as the Eclipse User Profile and the Eclipse Foundation Blog, to the Quicksilver look and feel!

This week, I am happy to announce an update to the Eclipse Packaging website. For those who don’t know, the Eclipse Packaging website is used to publish download links for the Eclipse Installer and Eclipse Packages.

I am very proud of the work done here since the original site desperately needed some TLC. I’m hoping the new look and feel will improve the way the Eclipse IDE is downloaded by the community!

[Screenshot: the new home page]

New features include:

  • A website redesign based on the Quicksilver look and feel.
  • More accessible links to the Eclipse Installer, Eclipse Packages and Eclipse Developer Builds via a new submenu beneath our breadcrumbs.
  • A new Eclipse Installer download page with instructions.
  • Improved breadcrumb links which allow users to easily find every Eclipse release on the Eclipse Packaging site.
  • A More Downloads sidebar that links to Eclipse Packages instead of the release train landing page.
  • Links to the Eclipse Installer in the sidebar.

Finally, this migration is also a win for the Eclipse Foundation staff. These changes to the Eclipse Packages site allow us to streamline the Eclipse release process and no longer require us to manually submit Gerrit patches to publish a release.


August 02, 2018 02:30 PM

We Are Open

August 02, 2018 01:00 PM

The We Are Open campaign provides a peek into the Eclipse community's openness, innovation, and collaboration.

August 02, 2018 01:00 PM

We Are Open

by Thabang Mashologu at August 01, 2018 06:31 PM

Back in April, our Executive Director Mike Milinkovich blogged about a new logo and redesigned website for the Eclipse Foundation. Our new branding is meant to reflect the Foundation’s role beyond the Eclipse IDE. We are proud of our heritage and successfully launched the Eclipse Photon release recently to a global base of over 4 million active users. But clearly the Eclipse Foundation and its 350+ open source projects represent more than the Eclipse IDE. 

The fact is, we are a leading platform and environment for global developers and organizations to collaborate on open technologies that solve complex problems and enable value creation. 

From enterprise Java to IoT and autonomous vehicles, we are increasingly becoming the open source foundation of choice for digital companies looking for a vendor-neutral governance model to help them accelerate market adoption of technologies and standards, increase the pace of innovation, and reduce development costs. In fact, we are supported by over 275 organizations who see the strategic, operational and financial value of open source software development at the Eclipse Foundation.

For thousands of developers around the world, we offer great opportunities to contribute to game-changing technologies, demonstrate expertise, and participate in our vibrant Eclipse community, among many other benefits. At the time of writing, we have over 1,550 committers and counting who power Eclipse projects spanning many technology domains.

The Foundation marketing team has the fun job of sharing the stories and successes of our community with the world. To that end, we developed the We Are Open video campaign to provide a quick peek into how the Eclipse community represents openness, innovation, and collaboration. We hope you like it; please share it and subscribe to our various channels!


by Thabang Mashologu at August 01, 2018 06:31 PM

Accepted Sessions Announced

by Anonymous at July 31, 2018 08:14 PM

It was a lot of work for the program committee, but they got it done! And thank you again to all the community members who sent in a talk proposal.

Visit this page to see the list of accepted tutorials and talks. We expect to have the schedule done by mid-August.


by Anonymous at July 31, 2018 08:14 PM

Eclipse Foundation Announces Jakarta EE Committee Election Results

July 31, 2018 02:10 PM

The results are in for Participant and Committer Member elections for representatives to the Jakarta EE Working Group!

July 31, 2018 02:10 PM

Eclipse Newsletter | Embedded Development

July 26, 2018 01:30 PM

This month's newsletter features five articles that focus on Embedded Development. Read it now.

July 26, 2018 01:30 PM

Eclipse Newsletter on Papyrus UML Light

by tevirselrahc at July 26, 2018 01:23 PM

Back in June, I reported that a new variant of Papyrus was being funded for development by the Papyrus Industry Consortium.

Well, there’s no turning back now that there’s an official article in this month’s Eclipse Newsletter!


by tevirselrahc at July 26, 2018 01:23 PM

We scaled IoT – Eclipse Hono in the lab

by Jens Reimann at July 25, 2018 12:03 PM

Working for Red Hat is awesome. Not only can you work on amazing things, you also get the tools you need in order to do just that. We wanted to test Eclipse Hono (yes, again) and see how far we could scale it, and of course which limits and issues we would encounter on the way. So we took the current development version of Hono (0.7) from Eclipse IoT, backed by EnMasse 0.21, and ran it on an OpenShift 3.9 cluster.

Note: This blog post presents an intermediate result of the whole test, which is still ongoing. Want to know more? We put in a talk for EclipseCon Europe about this scale test. With a bit of luck, we can show you more in person at the end of October in Ludwigsburg.

The lab

From the full test cluster, we received an allocation of 16 nodes with a bit of storage (mostly HDDs), Intel Xeon E5-2620 CPUs (2×6 cores, 24 threads each) and a mix of 64 GB/128 GB RAM. 12 nodes were assigned to the IoT cluster, running Eclipse Hono, EnMasse and OpenShift. The remaining 4 nodes made up the simulation cluster for generating the IoT workload. For the simulation cluster, we also deployed OpenShift, simply to re-use the same features (scaling, deploying, building) as on the IoT cluster. Both clusters are single-master setups. For the IoT cluster, we went with GlusterFS as the storage provider, as we wanted dynamic provisioning for the broker deployments. Everything is connected by a 1 GBit Ethernet link. In the IoT cluster, we allocated 3 nodes for infrastructure-only purposes (like the Docker registry and the OpenShift router), which left 8 general-purpose compute nodes that Hono could make use of.

[Diagram: node distribution]

The test

The focus of this test was on telemetry data using HTTP as a transport. For this we simulated devices, each sending one message per second. In the context of IoT, you have a bigger number of senders (devices), but they send less payload, less frequently, than e.g. a cloud-side enterprise system might. It is also unlikely that an IoT device would send once every second over HTTP, but “per second” is easier to process. And, at least in theory, you could trade 1,000 devices sending once per second for 10,000 devices sending once every 10 seconds.

The simulator cluster consisted of three main components: an InfluxDB to store some metrics, a “consumer” deployment and an “HTTP simulator” deployment. The consumer consumed directly from the EnMasse Qpid Dispatch Router instance via AMQP 1.0, as fast as possible. The HTTP simulator tries to simulate 2,000 devices with a message rate of 1 message per second per device; if the HTTP adapter stalls, it waits for requests to complete. For the HTTP client, we used the Vert.x Web Client, as it turned out to be the most performant Java HTTP client (aside from having a nice API). So scaling up by a single pod means that we increase the IoT workload by 2,000 devices (meaning 2,000 additional messages per second).

[Diagram: testing architecture]

To the max

As a first exercise we tried out a few configurations to see how far we could get. In the end, we were able to saturate the Ethernet ports of our (initially) two ingress nodes, and so decided to re-allocate one node from Eclipse Hono to the OpenShift infrastructure, giving us 3 ingress nodes and 8 compute nodes. This reduced the capacity available for Hono and let us run into a limit of processing messages. However, it seemed better to run into a limit with Hono than into a limit of network throughput: adding an additional ingress node would be a simple task, and if we could improve Hono during the test, we would actually see more throughput, as we had some reserves in network throughput with that third node.

The final setup processed something around 80,000 devices with 1 message/second. There was a bit of room above that, but our DNS round-robin “load balancer” was not optimal, so we kept that reserve for further testing.

Note: this number may be quite different on other machines and in other environments. We simply used it as a baseline for further testing.

Scaling up

The first automated scenario we ran was a simple scale-up test. For that, we scaled down all producers and consumers and then slowly started to scale up the producers. After adding a new pod, the test waited until the message flow had settled. If the failure rate was too high, it scaled up an additional protocol adapter; otherwise, it scaled up another producer and continued.
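
To make the control flow concrete, here is a minimal, hypothetical Java sketch of that loop. All names (ScalableDeployment, Metrics and so on) are illustrative stand-ins, not the actual API of the test harness in redhat-iot/hono-scale-test.

// Illustrative stand-ins for the simulator and Hono deployments (hypothetical API)
interface ScalableDeployment { void scaleUp(int pods); }
interface Metrics { void waitUntilSettled(); double failureRate(); }

public class ScaleUpScenario
{
    // acceptable failure rate: 2% of the messages over the last 3 minutes
    static final double MAX_FAILURE_RATE = 0.02;

    static void run(ScalableDeployment producers,
                    ScalableDeployment adapters,
                    Metrics metrics,
                    int steps)
    {
        for (int i = 0; i < steps; i++)
        {
            producers.scaleUp(1);       // +1 pod = +2,000 devices = +2,000 msgs/s
            metrics.waitUntilSettled(); // let the moving average stabilize
            while (metrics.failureRate() > MAX_FAILURE_RATE)
            {
                adapters.scaleUp(1);    // compensate with one more protocol adapter
                metrics.waitUntilSettled();
            }
        }
    }
}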

As an acceptable failure rate, this test used 2% of the messages over the last 3 minutes. A “failure” here is actually a rejection of the message at the current point in time; devices may retry later to submit their data. For telemetry data, it may be fine to drop some information (with QoS 0) every now and then, or to use QoS 1 instead, be aware that the current request was rejected, and retry at a later time. In any case, if Hono responds with a 503 failure, the adapter cannot handle any more requests at the moment, leading to an increased failure rate in the simulator.

Initial results

So let’s have a quick look at the results of this test:

[Chart: Eclipse Hono scale testing results, number of pods]

This chart shows the scale-up of the simulator pods and the accompanying scale-up of the Eclipse Hono protocol adapter pods. You can also see the number of messages each instance of the protocol adapters processes. It looks like, once we push a few messages into the system, this evens out at around 5,000 msgs/s. That means each additional Hono HTTP adapter instance can serve 5,000 more messages/s: 5,000 devices sending one message per second, or 50,000 devices sending one message every 10 seconds. Each time we fire up a new instance, the whole system can handle 5,000 msgs/s more.

In the second chart we can see the failure rate:

[Chart: Eclipse Hono scale testing results, failure rate]

Now, the rule for the test was that the failure rate has to stay below 2% in order for the test to continue scaling up. What the test didn’t do well was to wait a bit longer and see if the failure rate declined even more; the failure rate is a moving average over 3 minutes. For that reason, this behavior has been changed in succeeding tests: the scenario now waits a bit longer before recording the final result of the current step.

So what you can see is that the failure rate stays below that “magic” 2% line; that was the requirement. Except, of course, for the last entry, where the test ended because there were no more resources the scenario could scale up to compensate.

Yes it scales

Does Eclipse Hono scale? With charts and numbers, there is always room for interpretation. 😉 But to me, it definitely looks that way. When we increase the IoT workload, we can compensate by scaling up protocol adapters in a linear way, settling around 5,000 msgs/s per protocol adapter instance and keeping that figure until the end of the test, when we ran out of computing resources.

Want more?

More background? You can have a look at the source code around this test on GitHub at redhat-iot/hono-simulator and redhat-iot/hono-scale-test. But please remember that this setup might be very specific to our infrastructure and test.

More details? Come to our talk at EclipseCon Europe, if we get accepted, and learn more about how we did the test: what improvements we tried out, which issues we ran into and how we set up our infrastructure. And maybe have a chat with us in person about the gory details of IoT testing.

More throughput? Come and join the Eclipse Hono community and bring in your ideas about performance improvements.

The post We scaled IoT – Eclipse Hono in the lab appeared first on ctron's blog.


by Jens Reimann at July 25, 2018 12:03 PM

Eclipse IoT Day Singapore Announced

July 24, 2018 11:00 AM

The very first Eclipse IoT Day Singapore will take place Sept. 18 in co-location with IoT World Asia 2018.

July 24, 2018 11:00 AM

EC by Example: Collectors2

by Donald Raab at July 23, 2018 02:26 AM

Learn how to transition to Eclipse Collections types using Collectors2 with any Java Stream.

Visualizing Collectors2

Anatomy of a Collector

One of the many great additions to Java 8 was the interface named Collector. A Collector can be used with the collect method on the Stream interface. The collect method will allow you to reduce a Stream to any type you want. Java 8 included a set of stock Collector implementations which are part of the Collectors utility class. Eclipse Collections includes another set of Collector implementations that return Eclipse Collections types. The name of the utility class in Eclipse Collections is Collectors2.
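
For orientation, here is a minimal sketch (assuming the eclipse-collections artifact is on the classpath) of what that transition looks like: the JDK’s Collectors.toList() reduces to a java.util.List, while Collectors2.toList() reduces to an Eclipse Collections MutableList.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.collector.Collectors2;

public class Collectors2Gateway
{
    public static void main(String[] args)
    {
        List<String> jdkList = Arrays.asList("a", "b", "c").stream()
                .collect(Collectors.toList()); // JDK type
        MutableList<String> ecList = jdkList.stream()
                .collect(Collectors2.toList()); // Eclipse Collections type
        System.out.println(ecList.makeString()); // fluent EC API: a, b, c
    }
}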

So what is a Collector? Let’s take a look at the interface to find out. There are five public instance methods on a Collector.

  • supplier → Supplier<A>
  • accumulator → BiConsumer<A, T>
  • combiner → BinaryOperator<A>
  • finisher → Function<A, R>
  • characteristics → Set<Characteristics> → Enum(CONCURRENT, UNORDERED, IDENTITY_FINISH)

There are also two static of methods on Collector which can be used to easily create your own Collector implementations.

So let’s see how we can create a Collector to better understand what these individual components are used for.

@Test
public void collector()
{
    Collector<String, Set<String>, Set<String>> toCOWASet =
            Collector.of(
                    HashSet::new, // supplier
                    Set::add, // accumulator
                    (set1, set2) -> { // combiner
                        set1.addAll(set2);
                        return set1;
                    },
                    CopyOnWriteArraySet::new); // finisher
    List<String> strings = Arrays.asList("a", "b", "c");
    Set<String> set =
            strings.stream().collect(toCOWASet);
    Assert.assertEquals(new HashSet<>(strings), set);
}

Here I use the static of method which takes five parameters; I leave the final varargs parameter for characteristics empty. The supplier creates a new HashSet. The accumulator specifies what operation to apply to the object created by the supplier: the items in the Stream will be passed to the add method of the Set. The combiner specifies how collections should be merged in the case where a parallelStream is used; I cannot use a method reference for the combiner here because one of the collections must be returned, and the addAll method on Collection returns a boolean. Finally, the finisher converts the final result to a CopyOnWriteArraySet.

Building a reusable Collector

The Collector example above would not be very useful if it needed to be inlined directly in code as it is rather verbose. It would be much more useful if it could handle any type instead of just String. This can be done easily by moving the construction of this Collector to a static method and giving it a name like toCopyOnWriteArraySet.

public static <T> Collector<T, ?, Set<T>> toCopyOnWriteArraySet()
{
    return Collector.<T, Set<T>, Set<T>>of(
            HashSet::new, // supplier
            Set::add, // accumulator
            (set1, set2) -> { // combiner
                set1.addAll(set2);
                return set1;
            },
            CopyOnWriteArraySet::new, // finisher
            Collector.Characteristics.UNORDERED); // characteristics
}

@Test
public void reusableCollector()
{
    List<String> strings = Arrays.asList("a", "b", "c");
    Set<String> set1 =
            strings.stream().collect(toCopyOnWriteArraySet());
    Verify.assertInstanceOf(CopyOnWriteArraySet.class, set1);
    Assert.assertEquals(new HashSet<>(strings), set1);

    List<Integer> integers = Arrays.asList(1, 2, 3);
    Set<Integer> set2 =
            integers.stream().collect(toCopyOnWriteArraySet());
    Verify.assertInstanceOf(CopyOnWriteArraySet.class, set2);
    Assert.assertEquals(new HashSet<>(integers), set2);
}

Now I’ve created a reusable Collector that can be used with a Stream of any type. I’ve additionally specified a Collector.Characteristics in the reusable implementation. These characteristics can be used by the Stream collect method to optimize the reduction implementation. Since I am accumulating to a Set which is unordered in this case, it makes sense to use the UNORDERED characteristic.

Filtering with Collectors2

In order to filter with Collectors2, you will need three things:

  • A select, reject, or partition Collector
  • A Predicate
  • A target collection Supplier

Here are examples using select, reject, and partition with Collectors2.

@Test
public void filtering()
{
    List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
    Predicate<Integer> evens = i -> i % 2 == 0;

    MutableList<Integer> selectedList = list.stream().collect(
            Collectors2.select(evens, Lists.mutable::empty));
    MutableSet<Integer> selectedSet = list.stream().collect(
            Collectors2.select(evens, Sets.mutable::empty));

    MutableList<Integer> rejectedList = list.stream().collect(
            Collectors2.reject(evens, Lists.mutable::empty));
    MutableSet<Integer> rejectedSet = list.stream().collect(
            Collectors2.reject(evens, Sets.mutable::empty));

    PartitionList<Integer> partitionList = list.stream().collect(
            Collectors2.partition(evens, PartitionFastList::new));
    PartitionSet<Integer> partitionSet = list.stream().collect(
            Collectors2.partition(evens, PartitionUnifiedSet::new));

    Assert.assertEquals(selectedList, partitionList.getSelected());
    Assert.assertEquals(rejectedList, partitionList.getRejected());

    Assert.assertEquals(selectedSet, partitionSet.getSelected());
    Assert.assertEquals(rejectedSet, partitionSet.getRejected());
}

Transforming with Collectors2

There are several methods which provide different transformations using Collectors2. The most basic transformation is available through the collect method. In order to use collect, you will need two things:

  • A Function
  • A target collection Supplier

The other transforming Collectors I will demonstrate below are makeString, zip, zipWithIndex, chunk, and flatCollect.

@Test
public void transforming()
{
    List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
    MutableList<String> strings = list.stream().collect(
            Collectors2.collect(Object::toString,
                    Lists.mutable::empty));

    String string = list.stream().collect(Collectors2.makeString());

    Assert.assertEquals(string, strings.makeString());

    MutableList<Pair<Integer, String>> zipped =
            list.stream().collect(Collectors2.zip(strings));

    Assert.assertEquals(Tuples.pair(1, "1"), zipped.getFirst());
    Assert.assertEquals(Tuples.pair(5, "5"), zipped.getLast());

    MutableList<ObjectIntPair<Integer>> zippedWithIndex =
            list.stream().collect(Collectors2.zipWithIndex());

    Assert.assertEquals(
            PrimitiveTuples.pair(Integer.valueOf(1), 0),
            zippedWithIndex.getFirst());
    Assert.assertEquals(
            PrimitiveTuples.pair(Integer.valueOf(5), 4),
            zippedWithIndex.getLast());

    MutableList<MutableList<Integer>> chunked =
            list.stream().collect(Collectors2.chunk(2));

    Assert.assertEquals(
            Lists.mutable.with(1, 2), chunked.getFirst());
    Assert.assertEquals(
            Lists.mutable.with(5), chunked.getLast());

    MutableList<Integer> flattened = chunked.stream().collect(
            Collectors2.flatCollect(e -> e, Lists.mutable::empty));

    Assert.assertEquals(list, flattened);
}

Converting with Collectors2

There are two sets of converting Collector implementations available in Collectors2. One set converts to MutableCollection types. The other converts to ImmutableCollection types.

Collectors converting to Mutable Collections

@Test
public void convertingToMutable()
{
    List<Integer> source = Arrays.asList(2, 1, 4, 3, 5);
    MutableBag<Integer> bag = source.stream().collect(
            Collectors2.toBag());
    MutableSortedBag<Integer> sortedBag = source.stream().collect(
            Collectors2.toSortedBag());
    Assert.assertEquals(
            Bags.mutable.with(1, 2, 3, 4, 5), bag);
    Assert.assertEquals(
            SortedBags.mutable.with(1, 2, 3, 4, 5), sortedBag);

    MutableSet<Integer> set = source.stream().collect(
            Collectors2.toSet());
    MutableSortedSet<Integer> sortedSet = source.stream().collect(
            Collectors2.toSortedSet());
    Assert.assertEquals(
            Sets.mutable.with(1, 2, 3, 4, 5), set);
    Assert.assertEquals(
            SortedSets.mutable.with(1, 2, 3, 4, 5), sortedSet);

    MutableList<Integer> list = source.stream().collect(
            Collectors2.toList());
    MutableList<Integer> sortedList = source.stream().collect(
            Collectors2.toSortedList());
    Assert.assertEquals(
            Lists.mutable.with(2, 1, 4, 3, 5), list);
    Assert.assertEquals(
            Lists.mutable.with(1, 2, 3, 4, 5), sortedList);

    MutableMap<String, Integer> map =
            source.stream().limit(4).collect(
                    Collectors2.toMap(Object::toString, e -> e));
    Assert.assertEquals(
            Maps.mutable.with("2", 2, "1", 1, "4", 4, "3", 3),
            map);

    MutableBiMap<String, Integer> biMap =
            source.stream().limit(4).collect(
                    Collectors2.toBiMap(Object::toString, e -> e));
    Assert.assertEquals(
            BiMaps.mutable.with("2", 2, "1", 1, "4", 4, "3", 3),
            biMap);
}

Collectors converting to Immutable Collections

@Test
public void convertingToImmutable()
{
    List<Integer> source = Arrays.asList(2, 1, 4, 3, 5);
    ImmutableBag<Integer> bag = source.stream().collect(
            Collectors2.toImmutableBag());
    ImmutableSortedBag<Integer> sortedBag = source.stream().collect(
            Collectors2.toImmutableSortedBag());
    Assert.assertEquals(
            Bags.immutable.with(1, 2, 3, 4, 5), bag);
    Assert.assertEquals(
            SortedBags.immutable.with(1, 2, 3, 4, 5), sortedBag);

    ImmutableSet<Integer> set = source.stream().collect(
            Collectors2.toImmutableSet());
    ImmutableSortedSet<Integer> sortedSet = source.stream().collect(
            Collectors2.toImmutableSortedSet());
    Assert.assertEquals(
            Sets.immutable.with(1, 2, 3, 4, 5), set);
    Assert.assertEquals(
            SortedSets.immutable.with(1, 2, 3, 4, 5), sortedSet);

    ImmutableList<Integer> list = source.stream().collect(
            Collectors2.toImmutableList());
    ImmutableList<Integer> sortedList = source.stream().collect(
            Collectors2.toImmutableSortedList());
    Assert.assertEquals(
            Lists.immutable.with(2, 1, 4, 3, 5), list);
    Assert.assertEquals(
            Lists.immutable.with(1, 2, 3, 4, 5), sortedList);

    ImmutableMap<String, Integer> map =
            source.stream().limit(4).collect(
                    Collectors2.toImmutableMap(
                            Object::toString, e -> e));
    Assert.assertEquals(
            Maps.immutable.with("2", 2, "1", 1, "4", 4, "3", 3),
            map);

    ImmutableBiMap<String, Integer> biMap =
            source.stream().limit(4).collect(
                    Collectors2.toImmutableBiMap(
                            Object::toString, e -> e));
    Assert.assertEquals(
            BiMaps.immutable.with("2", 2, "1", 1, "4", 4, "3", 3),
            biMap);
}

The Collector implementations that convert to ImmutableCollection types use the finisher to convert from a mutable container to an immutable container. Here is the example of the Collector implementation for toImmutableList().

public static <T> Collector<T, ?, ImmutableList<T>> toImmutableList()
{
    return Collector.<T, MutableList<T>, ImmutableList<T>>of(
            Lists.mutable::empty, // supplier
            MutableList::add, // accumulator
            MutableList::withAll, // combiner
            MutableList::toImmutable, // finisher
            EMPTY_CHARACTERISTICS); // characteristics
}

The finisher here is the MutableList::toImmutable method reference. This is the final step that converts the MutableCollection with the results into an ImmutableCollection.

Eclipse Collections API vs. Collectors2

My preference is always to use the Eclipse Collections API directly if I can. If I need to operate on a JDK Collection type or if I am only given a Stream, then I will use Collectors2. As you can see in the examples above, Collectors2 is a natural gateway to the Eclipse Collections types and their functional, fluent, friendly and fun APIs.
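
As a small illustration of that preference (using the same classes as the examples above), here is a sketch of filtering even numbers directly with the Eclipse Collections API versus going through a Stream and Collectors2:

import java.util.stream.Stream;
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.collector.Collectors2;
import org.eclipse.collections.impl.factory.Lists;

public class DirectApiVsCollectors2
{
    public static void main(String[] args)
    {
        // Preferred: the Eclipse Collections API directly, no Stream needed
        MutableList<Integer> direct = Lists.mutable.with(1, 2, 3, 4, 5)
                .select(i -> i % 2 == 0);

        // The gateway: Collectors2, when all you are given is a JDK Stream
        MutableList<Integer> viaStream = Stream.of(1, 2, 3, 4, 5)
                .collect(Collectors2.select(
                        i -> i % 2 == 0, Lists.mutable::empty));

        System.out.println(direct.equals(viaStream)); // true
    }
}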

Check out this presentation to learn more about the origins, design and evolution of the Eclipse Collections API.

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at July 23, 2018 02:26 AM

New Working Group and Charter at the Eclipse Foundation: OpenMobility

July 20, 2018 05:00 PM

OpenMobility will drive the evolution and broad adoption of mobility modelling and simulation technologies.

July 20, 2018 05:00 PM

RHAMT Eclipse Plugin 4.1.0.Final has been released!

by josteele at July 18, 2018 12:06 PM

Happy to announce version 4.1.0.Final of the Red Hat Application Migration Toolkit (RHAMT) is now available.

Getting Started

Downloads available through JBoss Central and from the update site.

RHAMT in a Nutshell

RHAMT is an application migration and assessment tool. The migrations supported include application platform upgrades, migrations to a cloud-native deployment environment, and migrations from several commercial products to the Red Hat JBoss Enterprise Application Platform.

What is New?

Eclipse Photon

The tooling now targets Eclipse Photon.


Ignoring Patterns

Specify locations of files to exclude from analysis (using regular expressions).


External Report

The generated report has been moved out of Eclipse and into the browser.


Improved Ruleset Schema

The XML ruleset schema has been relaxed, providing more flexible rule structures.


Custom Severities

Custom severities are now included in the Issue Explorer.


Stability

A good amount of time has been spent on ensuring the tooling functions consistently across Windows, OSX, and Linux.

You can find more detailed information here.

Our goal is to make the RHAMT tooling easy to use. We look forward to your feedback and comments!

Have fun!
John Steele
github/johnsteele


by josteele at July 18, 2018 12:06 PM