
Commercial-Grade Collaboration at the Eclipse Foundation

July 16, 2019 03:10 PM

We've just released a Business of Open Source eBook that is essential reading for leaders in the age of digital disruption who are considering how to maximize their returns from open source participation.

July 16, 2019 03:10 PM

React App Development in N4JS (Chess Game Part 1)

by n4js dev (noreply@blogger.com) at July 16, 2019 09:47 AM

React is a popular JavaScript library created by Facebook and widely used for developing web user interfaces. In this post we introduce a tutorial on how to develop a chess game based on React, JSX and N4JS. The full tutorial is available (and playable) at eclipse.org/n4js and the sources can be found at github.com/Eclipse/n4js-tutorials.


Chess game implemented in N4JS with React

The chess game app implements the following requirements:
  • When the chess application is started, a chess board of 8x8 squares shall be shown, containing 16 white pieces and 16 black pieces in their initial positions.
  • A player in turn shall be able to use the mouse to pick one of the pieces that she/he wants to move. A picked piece shall be clearly recognisable. Moreover, to aid players, especially beginners, whenever a piece is picked, all possible valid destination squares shall be visually highlighted as well.
  • In addition to the game board, there shall be a game information area that shows which player is in turn. Moreover, the game information area shall show a complete history of the game, recording each move made by the players. As a bonus, jumping back to a previous state of the history shall be possible.

In the tutorial you will learn how to use npm, webpack and React to develop a web application with N4JS and the N4JS IDE. Most of the tutorial will elaborate on specific parts of the implementation and explain, for example, the graphical representation of the chess board and chess pieces, and how to use React to model the UI. It will also explain the game logic, i.e. how possible moves for the different piece types are computed, how the turn history is maintained, and how the end of the game (i.e. a win situation) is detected. In the end, the tutorial will make suggestions on how to improve the chess game, e.g. by adding support for the en passant move.
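
To give a flavour of what the tutorial builds, a single board square might be modelled as a small React component along these lines (an illustrative TypeScript/JSX sketch with made-up names, not the tutorial's actual code):

import * as React from "react";

interface SquareProps {
    piece?: string;          // e.g. "N" for a knight, or undefined for an empty square
    highlighted: boolean;    // true when the square is a valid destination for the picked piece
    onPick: () => void;      // called when the player clicks the square
}

// One board square; the full board would render an 8x8 grid of these.
function Square(props: SquareProps): JSX.Element {
    const style = { background: props.highlighted ? "yellow" : "white" };
    return (
        <button style={style} onClick={props.onPick}>
            {props.piece}
        </button>
    );
}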

Have fun implementing this game!

by Minh Quang Tran

by n4js dev (noreply@blogger.com) at July 16, 2019 09:47 AM

Update for Jakarta EE community: July 2019

by Tanja Obradovic at July 15, 2019 04:19 PM

Two months ago, we launched a monthly email update for the Jakarta EE community which seeks to highlight news from various committee meetings related to this platform. There are a few ways to get richer insight into the work that has been invested in Jakarta EE so far, so if you’d like to learn more about Jakarta EE-related plans and get involved in shaping the future of Cloud Native Java, read on. 

Without further ado, let’s have a look at what happened in June: 

JakartaOne LiveStream: All eyes on Cloud Native Java

Are you interested in the current state and future of Jakarta EE? Would you like to explore other related technologies that should be part of your toolkit for developing Cloud Native Java applications? Then JakartaOne Livestream is for you! No matter if you’re a developer or a technical business leader, this virtual conference promises to satisfy your thirst for knowledge with a balance of technical talks, user experiences, use cases and more.  

You should join the JakartaOne Livestream speaker lineup if you want to 

  • Show the world how you and/or your organization are using Jakarta EE technologies to develop cutting-edge solutions. 

  • Demonstrate how Jakarta EE and Java EE features can be used today to develop cloud native solutions. 

This one-day virtual conference, which takes place September 10, 2019, is currently accepting submissions from speakers so if you have an idea for a talk that will educate and inspire the Jakarta community, now’s the time to submit your pitch!  The deadline for submissions is today, July 15, 2019. 

Note: All the JakartaOne Livestream sessions and keynotes are chosen by an independent program committee made up of volunteers from the Jakarta EE and Cloud Native Java community: Reza Rahman, who is also the program chair, Adam Bien, Arun Gupta, Ivar Grimstad, Josh Juneau, and Tanja Obradovic.

*As this inaugural event is a one-day event only, the number of accepted sessions is limited. Submit your talk now!  

Even though all the talks will be recorded and made available later on the Jakarta EE website, make sure to attend the virtual conference so you can interact directly with the speakers. We do hope you will attend “live”, as it will lead to more questions and more interactive sessions.


Jakarta EE 8 release and progress

Are you keeping track of Eclipse EE4J projects on GitHub? Have you noticed that the Jakarta EE Platform Specifications are now available on GitHub? If not, please take a look! Also, please check out the creation and progress of the specification projects, which track the conversion of the "Eclipse Project for ..." projects into specification projects, setting them up for specification work as defined by the Eclipse Foundation Specification Process and the Specification Document Names.

Noticeable progress has been made on Jakarta EE 8 TCK jobs, Jakarta Specification Project Names, and Jakarta Specification Scope Statements so head over to GitHub to discover all the improvements and all the bits and pieces that have already been resolved.  

Work on the TCK process is in progress, with Scott Stark, Vice President of Architecture at Red Hat, leading the effort. The TCK process document v 1.0 is expected to be completed in the very near future. The document will shed light on aspects such as the materials a TCK must possess in order to be considered suitable for delivering portability, the process for challenging tests and resolving those challenges, and more.

Jakarta EE 8 is expected to be released on September 10, 2019, just in time for JakartaOne Livestream.  

Javax package namespace discussions

The specification committee has put out two approaches regarding restrictions on javax package namespace use for the community to consider, namely Big Bang and Incremental. 

Based on the input we got from the community and discussions within the Working Group, the specification committee has not yet reached consensus on the approach to be taken, and will not do so until work on binary compatibility is further explored. With that in mind, the Working Group members will invest time in the technical approach for binary compatibility and then propose and decide on the option that is best for customers, vendors, and developers.

Please refer to David Blevins’ presentation from the Jakarta EE Update call on June 12th, 2019.

If you want to dive deeper into this topic, David Blevins has written a helpful analysis of the javax package namespace matter, in which he answers questions like "If we rename javax.servlet, what else has to be renamed?" 

 JCP Copyright Licensing request: Your assistance in this matter is greatly appreciated

As part of Java EE’s transfer to the Eclipse Foundation under the Jakarta EE name, it is essential to ensure that the Foundation has the necessary rights so that the specifications can be evolved under the new Jakarta EE Specification Process. For this, we need your help!

We are currently requesting copyright licenses from all past contributors to Java EE specifications under the JCP; we are reaching out to all companies and individuals who made contributions to Java EE in the past, asking them to help out by executing the agreements and returning them to the Eclipse Foundation. As the advancement of the specifications and the technology is at stake, we greatly appreciate your prompt response. Oracle, Red Hat, IBM, and many others in the community have already signed an agreement to license their contributions to Java EE specifications to the Eclipse Foundation. We are also counting on the JCP community to be supportive of this request.

For more information about this topic, read Tanja Obradovic’s blog. If you have questions regarding the request for copyright licenses from all past contributors, please contact mariateresa.delgado@eclipse-foundation.org.

 Election results for Jakarta EE working group committees

The nomination period for elections to the Jakarta EE committees is now closed. 

Almost all positions have been filled, with the exception of the Committer representative on the Marketing Committee, due to lack of nominees.   

The representatives for 2019-20 on the committees, starting July 1, 2019, are: 

Participant Representative:

STEERING COMMITTEE - Martijn Verburg (London Java Community)

SPECIFICATIONS COMMITTEE - Alex Theedom (London Java Community)

MARKETING COMMITTEE - Theresa Nguyen (Microsoft)

Committer Representative:

STEERING COMMITTEE - Ivar Grimstad

SPECIFICATIONS COMMITTEE - Werner Keil

MARKETING COMMITTEE - Vacant

 Jakarta EE Community Update: June video call

The most recent Jakarta EE Community Update meeting took place in June; the conversation included topics such as Jakarta EE 8 progress and plans, headway with specification name changes and specification scope definitions, a TCK process update, copyright license agreements, a PMC/Projects update, and more.

The materials used in the Jakarta EE community update meeting are available here, and the recorded Zoom video conversation can be found here.

Please make sure to join us for the July 17th call.

 EclipseCon Europe 2019: Call for Papers open until July 15

You can still submit your proposals to be part of EclipseCon Europe 2019’s speaker lineup. The Call for Papers (CFP) is closing soon so if you have an idea for a talk that will educate and inspire the Eclipse community, now’s the time to submit your talk! The final submission deadline is July 15. 

The conference takes place in Ludwigsburg, Germany on October 21 - 24, 2019. 


Jakarta EE presence at events and conferences: June overview


Eclipse DemoCamp Florence 2019

Tomitribe: presence at JNation in Portugal 

 

Thank you for your interest in Jakarta EE. Help steer Jakarta EE toward its exciting future by subscribing to the jakarta.ee-wg@eclipse.org mailing list and by joining the Jakarta EE Working Group. 

To learn more about the collaborative efforts to build tomorrow’s enterprise Java platform for the cloud, check out the Jakarta Blogs and participate in the monthly Jakarta Tech Talks. Don’t forget to subscribe to the Eclipse newsletter!  


 

 


by Tanja Obradovic at July 15, 2019 04:19 PM

Commercial-Grade Collaboration at the Eclipse Foundation

by Thabang Mashologu at July 15, 2019 03:04 PM

When it comes to the digital economy, business runs on open source. That’s because open source is the best way to deliver large-scale business innovation and value at the pace customers expect. We’ve just released a Business of Open Source eBook that is essential reading for leaders in the age of digital disruption who are considering how to maximize their returns from open source participation.

 

For the last 15 years, companies ranging from startups to industry leaders the likes of Bosch, Broadcom, Fujitsu, Google, IBM, Microsoft, Oracle, Red Hat, SAP, and more have collaborated under the Eclipse governance model to advance open source projects and create value for their stakeholders. In this latest publication, we explore the role of open source as a pillar for transformation initiatives and the unique position of the Eclipse Foundation as the home of community-driven, code-first, and commercial-ready open source technologies.

 

Featuring interviews from leading open source industry experts, this eBook sheds light on how hundreds of organizations have leveraged the Foundation’s clear, vendor-neutral rules for intellectual property sharing and decision-making, business-friendly licensing, and ecosystem development and marketing services to accelerate market adoption, mitigate business risk, and harness open source for business growth while giving back to the developer community.

 

Deborah Bryant, Senior Director, Open Source Program Office at Red Hat, puts it this way in the eBook: “The Eclipse Foundation has a rich history of being an industry disrupter... It distinguishes itself in its long history and deep roots with large industry players. The Foundation has really been driven by engineers for engineers, but also as an honest broker of discussions with the business of these big companies that are doing very large-scale projects.”

 

Are you leveraging all that open source has to offer? Do you understand the value of participating in open source to develop customer-centric products and services faster? Do you recognize the scalability of open source, the ability to innovate on business models, and the ability to collaborate with a global developer community? Congratulations, you’re what we like to call an entr<open>eur. To get the most out of your stake in open source, it's time to consider joining a commercially-friendly open source foundation like ours.

 

To learn more about the business value of open collaboration at the Eclipse Foundation, visit entropeneur.org. In addition to the eBook, you’ll find video success stories from the Eclipse community, an infographic summarizing the role and benefits of participating in an open source foundation, and an informative slide deck that you can use to make the case for joining the Eclipse Foundation. Many thanks to Deborah Bryant, Todd Moore, and Tyler Jewell for contributing their expertise and insights to the eBook.

 

Let us know what you think and be sure to join the entrepreneurial open source conversation on Twitter @EclipseFdn and share your open source success story using #entropeneur.


by Thabang Mashologu at July 15, 2019 03:04 PM

Update for Jakarta EE community: July 2019

July 15, 2019 03:00 PM

There are a few ways to get richer insight into the work that has been invested in Jakarta EE so far, so if you'd like to learn more about Jakarta EE-related plans and get involved in shaping the future of Cloud Native Java, read on.

July 15, 2019 03:00 PM

Industrial-Scale Collaboration for the Business Win

by Mike Milinkovich at July 15, 2019 02:40 PM

Marc Andreessen once famously said, “Software is eating the world.” He was right: software gobbled up industry sectors as varied as financial services, automotive, mining, healthcare, and entertainment. Companies of all sizes have leveraged software to improve their business processes and adapt products to a digital economy. And then a funny thing happened: open source ate software.

From startups to the world’s large corporations, commercial software is built on and with open source. In fact, open source now comprises 80 to 90 percent of the code in a typical software application. Today, most companies ship commercial products based on open source. If software is the engine of industrial-scale digital transformation, open source is the rocket fuel.

The fact is, no single company can compete with the rate and scale of disruptive innovation delivered by diverse open source communities. Not only has open source proven to be the most viable way of delivering complex platform software, but open source tenets like transparency, community focus, inclusion, and collaboration have been adopted by organizations for building customer-centric strategies and cultures. According to research from Harvard Business School, firms contributing to open source see as much as a 100 percent productivity boost.

Nowadays, organizations collaborate at open source foundations to gain a competitive edge. Industry leaders leverage participation in open source foundations to accelerate the market adoption of technologies, improve time to market, and achieve superior interoperability. At the Eclipse Foundation over the last 15 years, industry leaders like Bosch, Broadcom, Fujitsu, Google, IBM, Microsoft, Oracle, Red Hat, SAP, and hundreds more have collaborated under the Eclipse governance model to drive shared innovation and create value within a sustainable ecosystem.

Today, we are thrilled to release the Business of Open Source eBook focused on how successful entrepreneurs are leveraging all that open source has to offer to drive digital disruption within business-friendly open source foundations like the Eclipse Foundation. We call this class of innovators entr<open>eurs.

Entr<open>eurs understand the value of open source participation to develop products faster, mitigate risk, and recruit talent to gain a competitive edge. They fundamentally recognize the role of vendor-neutral, community-driven, and commercially-friendly open source foundations like ours to foster industry-scale collaboration, anti-trust compliance, IP cleanliness, and ecosystem development and sustainability.

As Todd Moore, IBM’s Vice President of Open Technology, explains in the eBook, “being a disruptor generally means that you have to move very quickly. You don’t develop all of the technologies that you’re employing. You’ve got enough mastery over them to quickly be able to assemble them. You’re using automation and deployment strategies that allow you to rapidly cycle through the code. What you start with and what you end up with at the end of the string can radically change.”

Download the Business of Open Source eBook today to learn how to innovate with confidence by giving your mission-critical projects a proper home at the Eclipse Foundation. Thank you to Deborah Bryant, Todd Moore, and Tyler Jewell for contributing their insights and expertise to the eBook. Let us know what you think and be sure to join the entrepreneurial open source conversation on Twitter @EclipseFdn and share your open source success story using #entropeneur.

To learn more about the business value of open collaboration at the Eclipse Foundation, visit entropeneur.org to explore our other commercial open source resources, including video success stories featuring Eclipse community members. We’ve also developed an infographic summarizing the benefits and advantages of participating in an open source foundation, and a slide deck that you can use to make the case for joining the Eclipse Foundation.


by Mike Milinkovich at July 15, 2019 02:40 PM

EMF Forms 1.21.0 Feature: Multi Edit for Tables and Trees

by Jonas Helming and Maximilian Koegel at July 15, 2019 10:29 AM

EMF Forms makes it easy to create forms that are able to edit your data based on an EMF model. To...

The post EMF Forms 1.21.0 Feature: Multi Edit for Tables and Trees appeared first on EclipseSource.


by Jonas Helming and Maximilian Koegel at July 15, 2019 10:29 AM

New Features of Eclipse Collections 10.0 — Part 1

by Donald Raab at July 15, 2019 12:05 AM

New Features of Eclipse Collections 10.0 — Part 1

Examples of ten new features in the latest major release of the Eclipse Collections library

Ten new features in Eclipse Collections 10.0

Summary

In this blog I will cover ten of the twenty-six new features mentioned in the Eclipse Collections 10.0 Release Summary Blog.

1. MultiReaderList/Bag/Set Interfaces

We have had multi-reader collection implementations for a long time. We have not had specialized interfaces for them. Now we do.

MultiReaderList/Set/Bag Interfaces
@Test
public void multiReaderList()
{
    MultiReaderList<String> list =
            Lists.multiReader.with("1", "2", "3");

    list.withWriteLockAndDelegate(backingList -> {
        Iterator<String> iterator = backingList.iterator();
        iterator.next();
        iterator.remove();
    });
    Assert.assertEquals(Lists.mutable.with("2", "3"), list);
}

2. Stream for Primitive Lists

You can ask for a Stream from a regular List, but prior to Eclipse Collections 10.0, you could not easily get a primitive stream from a primitive List. Now you can.

@Test
public void primitiveListToPrimitiveStream()
{
    IntStream intStream1 =
            IntLists.mutable.with(1, 2, 3, 4, 5)
                    .primitiveStream();
    IntStream intStream2 =
            IntLists.immutable.with(1, 2, 3, 4, 5)
                    .primitiveStream();

    LongStream longStream1 =
            LongLists.mutable.with(1L, 2L, 3L, 4L, 5L)
                    .primitiveStream();
    LongStream longStream2 =
            LongLists.immutable.with(1L, 2L, 3L, 4L, 5L)
                    .primitiveStream();

    DoubleStream doubleStream1 =
            DoubleLists.mutable.with(1.0, 2.0, 3.0, 4.0, 5.0)
                    .primitiveStream();
    DoubleStream doubleStream2 =
            DoubleLists.immutable.with(1.0, 2.0, 3.0, 4.0, 5.0)
                    .primitiveStream();
}

3. toMap supports passing a target Map

The method toMap has been overloaded to allow a target map to be passed in as a parameter.

@Test
public void toMapWithTarget()
{
    MutableList<Integer> list =
            Lists.mutable.with(1, 2, 3, 4, 5);

    Map<String, Integer> map =
            list.toMap(String::valueOf,
                    each -> each,
                    new LinkedHashMap<>());

    Map<String, Integer> expected = new LinkedHashMap<>();
    expected.put("1", 1);
    expected.put("2", 2);
    expected.put("3", 3);
    expected.put("4", 4);
    expected.put("5", 5);

    Assert.assertEquals(expected, map);
}

4. MutableMapIterable removeAllKeys

With Eclipse Collections 10.0, you can use removeAllKeys to remove from a Map all keys that are contained in the specified Set parameter.

@Test
public void removeAllKeys()
{
    MutableMap<Integer, String> map =
            Maps.mutable.with(1, "1", 2, "2", 3, "3");

    map.removeAllKeys(Sets.mutable.with(1, 3));

    Assert.assertEquals(Maps.mutable.with(2, "2"), map);
}

5. RichIterable toBiMap

With Eclipse Collections 10.0, you can now convert any RichIterable to a BiMap.

@Test
public void toBiMap()
{
    MutableBiMap<String, Integer> expected =
            BiMaps.mutable.with("1", 1, "2", 2, "3", 3);

    MutableBiMap<String, Integer> biMap =
            Lists.mutable.with(1, 2, 3).toBiMap(String::valueOf, i -> i);

    Assert.assertEquals(expected, biMap);
}

6. MultiMap collectKeyMultiValues

We can now transform a Multimap by applying functions to both keys and values using collectKeyMultiValues.

@Test
public void collectKeyMultiValues()
{
    MutableListMultimap<String, String> multimap =
            Multimaps.mutable.list.with(
                    "nj", "Monmouth",
                    "nj", "Bergen",
                    "nj", "Union");

    MutableBagMultimap<String, String> transformed =
            multimap.collectKeyMultiValues(
                    String::toUpperCase,
                    String::toUpperCase);

    Assert.assertEquals(Multimaps.mutable.bag.with(
            "NJ", "MONMOUTH",
            "NJ", "BERGEN",
            "NJ", "UNION"), transformed);
}

7. fromStream on Collection Factories

We can now construct a Collection from a Stream using fromStream on each of the Collection factories for List, Set, Bag, and Stack.

@Test
public void fromStreamOnCollectionFactories()
{
    MutableList<Integer> list =
            Lists.mutable.fromStream(Stream.of(1, 2, 3, 4, 5));
    Assert.assertEquals(
            Lists.mutable.with(1, 2, 3, 4, 5), list);

    MutableSet<Integer> set =
            Sets.mutable.fromStream(Stream.of(1, 2, 3, 4, 5));
    Assert.assertEquals(
            Sets.mutable.with(1, 2, 3, 4, 5), set);

    MutableBag<Integer> bag =
            Bags.mutable.fromStream(Stream.of(1, 2, 3, 4, 5));
    Assert.assertEquals(
            Bags.mutable.with(1, 2, 3, 4, 5), bag);

    MutableStack<Integer> stack =
            Stacks.mutable.fromStream(Stream.of(1, 2, 3, 4, 5));
    Assert.assertEquals(
            Stacks.mutable.with(1, 2, 3, 4, 5), stack);
}

8. LazyIterate cartesianProduct

Sometimes it’s useful to calculate the cartesianProduct of more than just Sets. LazyIterate.cartesianProduct will take any Iterable.

@Test
public void cartesianProduct()
{
    MutableList<Integer> numbers = Lists.mutable.with(1, 2, 3);
    MutableList<String> letters = Lists.mutable.with("A", "B", "C");

    MutableList<Pair<String, Integer>> pairs =
            LazyIterate.cartesianProduct(letters, numbers).toList();

    MutableList<Pair<String, Integer>> expected =
            Lists.mutable.with(
                    Tuples.pair("A", 1),
                    Tuples.pair("A", 2),
                    Tuples.pair("A", 3),
                    Tuples.pair("B", 1),
                    Tuples.pair("B", 2),
                    Tuples.pair("B", 3),
                    Tuples.pair("C", 1),
                    Tuples.pair("C", 2),
                    Tuples.pair("C", 3));

    Assert.assertEquals(expected, pairs);
}

9. Primitive Maps updateValues

If you want to update all of the values in a primitive map, you can now do so using updateValues.

@Test
public void updateValues()
{
    MutableIntIntMap map = IntIntMaps.mutable.empty()
            .withKeyValue(1, 5)
            .withKeyValue(2, 3)
            .withKeyValue(3, 10);

    map.updateValues((k, v) -> v + 1);

    MutableIntIntMap expected = IntIntMaps.mutable.empty()
            .withKeyValue(1, 6)
            .withKeyValue(2, 4)
            .withKeyValue(3, 11);

    Assert.assertEquals(expected, map);
}

10. MutableMultimap getIfAbsentPutAll

The method getIfAbsentPutAll on a MutableMultimap is equivalent to getIfAbsentPut on a MutableMap. The difference is that with a Multimap you can put in multiple values.

@Test
public void getIfAbsentPutAll()
{
    MutableListMultimap<Integer, Integer> multimap =
            Multimaps.mutable.list.with(2, 1);

    ImmutableList<Integer> defaultValue =
            Lists.immutable.with(1, 2, 3);
    MutableList<Integer> oneValue =
            multimap.getIfAbsentPutAll(1, defaultValue);
    MutableList<Integer> twoValue =
            multimap.getIfAbsentPutAll(2, defaultValue);

    Assert.assertEquals(defaultValue, oneValue);
    Assert.assertEquals(Lists.mutable.with(1), twoValue);
}

Stay Tuned!

There are still sixteen more features to cover in Eclipse Collections 10.0. I will be writing two more blogs covering all of them.

I hope you enjoy all of the new features in Eclipse Collections 10.0!

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at July 15, 2019 12:05 AM

Eclipse Collections 10.0 Released

by Donald Raab at July 14, 2019 12:54 AM

The features you want, with the collections you need.

Thank you to the contributors

Eclipse Collections 9.2 was released in May 2018. The 9.x releases were extremely feature rich and had many contributions from the community. The 10.0 release is even more so. There were 18 contributors in the 10.0 release. This is outstanding! Thank you so much to all of the contributors who donated their valuable time to making Eclipse Collections more feature rich and even higher quality. Your efforts are truly appreciated.

Too many features for one blog

There are so many features included in Eclipse Collections 10.0 that it is going to take me a bit longer to write good examples leveraging all of them. So I have decided to break this release blog into a few parts. This part will purely be a summary.

The Feature Summary

  1. Specialized Interfaces for MultiReaderList/Bag/Set
  2. Implement Stream for Primitive Lists
  3. Implement toMap with target Map
  4. Implement MutableMapIterable.removeAllKeys
  5. Implement RichIterable.toBiMap
  6. Implement Multimap.collectKeyMultiValues
  7. Implement fromStream(Stream) on collection factories
  8. Implement LazyIterate.cartesianProduct
  9. Add updateValues to primitive maps
  10. Implement MutableMultimap.getIfAbsentPutAll
  11. Implement Bag.collectWithOccurrences
  12. Add reduce and reduceIfEmpty for primitive iterables
  13. Add <type1><type2>To<type1>Function for primitives
  14. Add ofInitialCapacity to primitive maps
  15. Implement countByEach on RichIterable
  16. Implement UnifiedSetWithHashingStrategy.addOrReplace
  17. Implement UnmodifiableMutableOrderedMap
  18. Implement withAllKeyValues on mutable primitive maps.
  19. Add ability to create PrimitivePrimitive/PrimitiveObject/ObjectPrimitiveMap from Iterable
  20. Implement ofInitialCapacity and withInitialCapacity in HashingStrategySets
  21. Implement getAny on RichIterable
  22. Revamp and standardize resize/rehash for all primitive hash structures
  23. Implement factory methods to convert Iterable<BoxedPrimitive> to PrimitiveStack/Bag/List/Set
  24. Implement ImmutableSortedBagMultimapFactory in Multimaps
  25. Implement a Map factory method that takes a Map parameter.
  26. Wildcard type in MutableMultimap.putAllPairs & add methods

Check out the latest JavaDoc for the new features.

Other Improvements

  1. Improved Test Coverage
  2. Many build improvements
  3. Remove duplicate code
  4. Removed some deprecated classes
  5. Improved generics
  6. Some new benchmark tests
  7. And much more!

Thank you

From all the contributors and committers… thank you for using Eclipse Collections. We hope you enjoy all of the new features and improvements in the 10.0 release.

I’ll be publishing detailed examples for the new features in the 10.0 release in a few blogs. Stay tuned!

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at July 14, 2019 12:54 AM

Papyrus SysML 1.6 available from the Eclipse Marketplace.

by tevirselrahc at July 12, 2019 02:03 PM

I should have mentioned yesterday that Papyrus SysML 1.6 is available from the Eclipse Marketplace at https://marketplace.eclipse.org/content/papyrus-sysml-16


by tevirselrahc at July 12, 2019 02:03 PM

Papyrus 4.4 is available

by tevirselrahc at July 12, 2019 07:15 AM

I’m a bit late with this posting…better late than never!

A new version of Papyrus, 4.4, is available:

SysML 1.6 (a forum topic will be sent when the Marketplace entry is available)

  • SysML 1.6 profile done
  • The SysML requirement diagram shall be implemented
  • The SysML Parametric diagram shall be implemented
  • The SysML BDD shall be implemented
  • The SysML IBD shall be implemented
  • The SysML requirement table shall be implemented
  • The SysML Graphical element type shall be implemented
  • The SysML AF shall be implemented
  • The SysML allocation Matrix shall be implemented
  • The element types of SysML 1.6 shall be implemented
  • Make SysML 1.6 open source
  • The SysML model explorer customization shall be implemented
  • Add written OCL constraints
  • Implement E3 of SysML 1.6
  • Update SysML 1.6 diagram of profile
  • Add Icon for conjugated Interface block
  • Add compartment of Conjugated Interfaceblock inside BDD
  • The SysML JUnit tests shall be implemented
  • Papyrus shall support the migration from SysML 1.4 to 1.6

Papyrus toolsmith

Validation of plugins:

  • You may have created a profile to customize Papyrus, but forgotten the extension point, the build.xml, or some dependencies. We have done work to validate not only the profile but also the plugin that contains the profile.
  • The same work has also been done for plugins that contain elementTypes models.

Improved developer experience when using the plugin org.eclipse.papyrus.infra.core.sasheditor

  • Decreased the usage of internal Eclipse code.
  • Papyrus originally developed its own kind of editor component, the sash editor. To make it more stable, we have asked Eclipse to open the corresponding API in order to improve the integration with Eclipse.
  • Dedicated APIs have been created from use cases in order to help developers access this graphical composite: add a new editor inside Papyrus, get the active editor, and so on.
  • These use cases will be published inside a plugin developer doc:
  • It will be like a Javadoc that lists the use cases and references the APIs that implement them.

Model2Doc

  • Papyrus will provide a documentation generator targeting LibreOffice files (odt).
  • This generator will allow the user to describe how to traverse the UML model to create the document.
  • It will allow the user to define a document template to use for the generation.
  • It will support image and table insertion.

Go try it and send me your comments!

HAVE FUN!


by tevirselrahc at July 12, 2019 07:15 AM

Announcing Eclipse Ditto Release 0.9.0

July 10, 2019 04:00 AM

Today the Eclipse Ditto team proudly presents its second release 0.9.0.

The topics of this release in a nutshell were:

  • Memory improvements for huge numbers (multiple millions) of digital twins held in memory
  • Adding metrics and logging around the connectivity feature so that connections to foreign systems/brokers can be operated via APIs
  • Enhancing Ditto’s connectivity feature with the ability to connect to Apache Kafka
  • Performance improvements of Ditto’s search functionality
  • Stabilization of cluster bootstrapping
  • Refactoring of how the services configurations are determined
  • Addition of a Helm template in order to simplify Kubernetes based deployments
  • Contributions from Microsoft in order to ease operating Eclipse Ditto on Microsoft Azure

Please have a look at the 0.9.0 release notes for more detailed information on the release.

Artifacts

The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

The Docker images have been pushed to Docker Hub:





The Eclipse Ditto team


July 10, 2019 04:00 AM

JBoss Tools and Red Hat CodeReady Studio for Eclipse 2019-06

by jeffmaury at July 09, 2019 02:20 PM

JBoss Tools 4.12.0 and Red Hat CodeReady Studio 12.12 for Eclipse 2019-06 are here waiting for you. Check it out!

crstudio12

Installation

Red Hat CodeReady Studio comes with everything pre-bundled in its installer. Simply download it from our Red Hat CodeReady product page and run it like this:

java -jar codereadystudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) CodeReady Studio require a bit more:

This release requires at least Eclipse 4.12 (2019-06) but we recommend using the latest Eclipse 4.12 2019-06 JEE Bundle since then you get most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat CodeReady Studio".

For JBoss Tools, you can also use our update site directly.

http://download.jboss.org/jbosstools/photon/stable/updates/

What is new?

Our main focus for this release was improvements for container based development and bug fixing. Eclipse 2019-06 itself has a lot of new cool stuff but let me highlight just a few updates in both Eclipse 2019-06 and JBoss Tools plugins that I think are worth mentioning.

OpenShift

OpenShift Container Platform 4 support

With the new OpenShift Container Platform (OCP) 4 now available (see this article), and even though this is a major shift compared to OCP 3, JBoss Tools is compatible with this major release in a transparent way. Just define your connection to your OCP 4 based cluster as you did before for an OCP 3 cluster, and use the tooling!

Server tools

Wildfly 17 Server Adapter

A server adapter has been added to work with Wildfly 17. It adds support for Java EE 8.

Hibernate Tools

New Runtime Provider

The new Hibernate 5.4 runtime provider has been added. It incorporates Hibernate Core version 5.4.3.Final and Hibernate Tools version 5.4.3.Final.

Runtime Provider Updates

The Hibernate 5.3 runtime provider now incorporates Hibernate Core version 5.3.10.Final and Hibernate Tools version 5.3.10.Final.

Maven

Maven support updated to M2E 1.12

The Maven support is based on Eclipse M2E 1.12.

Platform

Views, Dialogs and Toolbar

Import project by passing it as command-line argument

You can import a project into Eclipse by passing its path as a parameter to the launcher. The command would look like eclipse /path/to/project on Linux and Windows, or open Eclipse.app -a /path/to/project on macOS.

pass directory to launcher
Launch Run and Debug configurations from Quick Access

From the Quick Access proposals (accessible with Ctrl+3 shortcut) you can now directly launch any of the Run or Debug configurations available in your workspace.

run debug quickaccess
For performance reasons, the extra Quick Access entries are only visible if the org.eclipse.debug.ui bundle was already activated by some previous action in the workbench such as editing a launch configuration, or expanding the Run As… menus.

Themes and Styling

Improved View Menu Icon

The icon used for the view menu has been improved. It is now crisp on high resolution displays and also looks much better in the dark theme.

Compare the old version at the top and the new version at the bottom:

view menu
High resolution images drawn on Mac

On Mac, images and text are now drawn in high resolution during GC operations. You can see crisp images on high resolution displays in the editor rulers, forms, etc in Eclipse.

Compare the old version at the top and the new version at the bottom:

hidpi mac old behavior
hidpi mac new behavior
Table/Tree background lines shown in dark theme on Mac

In dark theme on Mac, the Table and Trees in Eclipse now show the alternating dark lines in the background when setLinesVisible(true) is set. Earlier they had a gray background even if line visibility was true.

Example of a Tree and Table in Eclipse with alternating dark lines in the background:

dark theme alternating lines

Equinox

When the Equinox OSGi Framework is launched the installed bundles are activated according to their configured start-level. The bundles with lower start-levels are activated first. Bundles within the same start-level are activated sequentially from a single thread.

A new configuration option equinox.start.level.thread.count has been added that enables the framework to start bundles within the same start-level in parallel. The default value is 1 which keeps the previous behavior of activating bundles from a single thread. Setting the value to 0 enables parallel activation using a thread count equal to Runtime.getRuntime().availableProcessors(). Setting the value to a number greater than 1 will use the specified number as the thread count for parallel bundle activation.

The default is 1 because of the risk of possible deadlocks when activating bundles in parallel. Extensive testing must be done on the set of bundles installed in the framework before considering enabling this option in a product.
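
For illustration, the option could be set in the config.ini of an Equinox-based installation roughly like this (the value 4 is only an example; the default of 1 keeps the previous single-threaded behavior):

# config.ini: activate bundles within the same start-level using four threads
equinox.start.level.thread.count=4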

Java Development Tools (JDT)

Java 12 Support

Change project compliance and JRE to 12

A quick fix Change project compliance and JRE to 12 is provided to change the current project to be compatible with Java 12.

quickfix change compliance 12
Enable preview features

Preview features in Java 12 can be enabled using Preferences > Java > Compiler > Enable preview features option. The problem severity of these preview features can be configured using the Preview features with severity level option.

enable preview
Set Enable preview features

A quick fix Configure problem severity is provided to update the problem severity of preview features in Java 12.

quickfix configure severity 12
Add default case to switch statement

A quick fix Add 'default' case is provided to add a default case to an enhanced switch statement in Java 12.

quickfix default switch statement
Add missing case statements to switch statement

A quick fix Add missing case statements is provided for an enhanced switch statement in Java 12.

quickfix missing case switch statement
Add default case to switch expression

A quick fix Add 'default' case is provided to add a default case to a switch expression.

quickfix default switch expression
Add missing case statements to switch expression

A quick fix Add missing case statements is provided for switch expressions.

quickfix missing case switch expression
Format whitespaces in 'switch'

As Java 12 introduced some new features into the switch construct, the formatter profile has some new settings for it. The settings allow you to control spaces around the arrow operator (separately for case and default) and around commas in a multi-value case.

The settings can be found in the Profile Editor (Preferences > Java > Code Style > Formatter > Edit…) under the White space > Control statements > 'switch' subsection.

formatter switch
Split Switch Case Labels

As Java 12 introduced the ability to group multiple switch case labels into a single case expression, a quick assist is provided that allows these grouped labels to be split into separate case statements.

split switch case labels

Java Editor

Show method parameter names on code as code minings

In the Java > Editor > Code Mining preferences, you can now enable the Show parameter names option. This will show the parameter names as code minings in method or constructor calls, for cases where the resolution may not be obvious for a human reader.

For example, the code mining will be shown if the argument name in the method call is not an exact match of the parameter name or if the argument name doesn’t contain the parameter name as a substring.

parameter name codeminings
Show number of implementations of methods as code minings

In the Java > Editor > Code Mining preferences, selecting Show implementations with the Show References (including implementations) for → Methods option now shows implementations of methods.

method implementation codeminings

Clicking on method implementations brings up the Search view that shows all implementations of the method in sub-types.

method implementation codeminings click
Open single implementation/reference in editor from code mining

When the Java > Editor > Code Mining preferences are enabled and a single implementation or reference is shown, moving the cursor over the annotation and using Ctrl+Click will open the editor and display the single implementation or reference.

ctrlclickimpl
Additional quick fixes for service provider constructors

Appropriate quick fixes are offered when a service defined in a module-info.java file has a service provider implementation whose no-arg constructor is not visible, or is non-existent.

service provider create constructor
service provider change constructor visibility
Template to create Switch Labeled Statement and Switch Expressions

The Java Editor now offers new templates for the creation of switch labeled statements and switch expressions. On a switch statement, three new templates: switch labeled statement, switch case expression and switch labeled expression are available as shown below. These new templates are available on Java projects having compliance level of Java 12 or above.

switch labeled statement
switch case expression
switch labeled expression

If switch is being used as an expression, then only switch case expression and switch labeled expression templates are available as shown below:

switch expression templates

Java Views and Dialogs

Enable comment generation in modules and packages

An option is now available to enable/disable the comment generation while creating module-info.java or package-info.java.

module info comment generation check box
package info comment generation checkbox
Improved 'create getter and setter' quick assist

The quick assist for creating getter and setter methods from fields no longer forces you to create both.

getter setter dialog new
Quick fix to open all required closed projects

A quick fix to open all required closed projects is now available in the Problems view.

quickfix open missing projects problem view
quickfix open missing projects
New UI for configuring Module Dependencies

The Java Build Path configuration now has a new tab Module Dependencies, which will gradually replace the options previously hidden behind the Is Modular node on other tabs of this dialog. The new tab provides an intuitive way for configuring all those module-related options for which Java 9 had introduced new command line options like --limit-modules etc.

module dependencies cropped

The dialog focuses on how to build one Java Project, here "org.greetings".

Below this focus module, the left hand pane shows all modules that participate in the build, where decorations A and S mark automatic modules and system modules, respectively. The extent of system modules (from JRE) can be modified with the Add System Module… and Remove buttons (corresponds to --add-modules and --limit-modules).

When a module is selected in the left hand pane, the right hand pane allows you to configure the following properties for this module:

Read Module:

Select additional modules that should be accessible from the selected module (corresponds to --add-reads)

Expose Package:

Select additional packages to be exposed ("exports" or "opens") from the selected module (corresponds to --add-exports or --add-opens)

Patch with:

Add more packages and classes to the selected module (corresponds to --patch-module)

Java Compiler

Experimental Java index retired

Eclipse 4.7 introduced a new experimental Java index which was disabled by default.

Due to lack of resources to properly support all Java 9+ language changes, this index is not available anymore starting with Eclipse 4.12.

The preference to enable it in Preferences > Java is removed and the old index will always be used.

Preferences > Java > Rebuild Index button can be used to delete the existing index files and free disk space.

Debug

'Run to Line' on Ctrl+Alt+Click in annotation ruler

A new shortcut, Ctrl+Alt+Click, has been added to the annotation ruler that will invoke the 'Run to Line' command and take the program execution to the line of invocation.

run to line
Content assist in Debug Shell

Content assist (Ctrl+Space) support is now available in the Debug Shell.

content assist debug shell
Clear Java Stack Trace Console usage hint on first edit

The Java Stack Trace Console shows a usage hint when opened the first time. This message is now automatically removed when the user starts typing or pasting a stack trace.

jstc initial clear
Lambda variable names shown in Variables view

The Lambda variable names are now shown in the Variables view while debugging projects in the workspace.

lambda variables view

JDT Developers

Support for new Javadoc tags

The following Javadoc tags are now supported by the compiler and auto-complete.

Tags introduced in JDK 8:

@apiNote

@implSpec

@implNote

Tags introduced in JDK 9:

@index

@hidden

@provides

@uses

Tags introduced in JDK 10:

@summary

And more…

You can find more noteworthy updates on this page.

What is next?

Having JBoss Tools 4.12.0 and Red Hat CodeReady Studio 12.12 out we are already working on the next release for Eclipse 2019-09.

Enjoy!

Jeff Maury


by jeffmaury at July 09, 2019 02:20 PM

Building UIs with EASE

by Christian Pontesegger (noreply@blogger.com) at July 08, 2019 06:54 PM

You probably used EASE before to automate daily tasks in your IDE or to augment toolbars and menus with custom functionality. But so far scripts could not be used to build UIs. This changed with the recent contribution of the UI Builder module.

What it is all about
The UI Builder module allows you to create views and dialogs with pure script code in your IDE. It is great for custom views that developers do not want to put into their products, for rapid prototyping and even for mocking.

The aim of EASE is to hide layout complexity as much as possible and provide a simple, yet flexible way to implement typical UI tasks.

Example 1: Input Form
We will start by creating a simple input form for address data.

loadModule("/System/UI Builder");
createView("Create Contact");

setColumnCount(2);
createLabel("First Name:");
var txtFirstName = createText();
createLabel("Last Name:");
var txtLastName = createText();
This snippet will create a dynamic view as shown below:
The renderer used will apply a GridLayout. By setting the columnCount to 2 we may simply add our elements without providing any additional layout information - a simple way to create basic layouts.

If needed, EASE provides more control by accepting detailed layout information when creating components:

createView("Create Contact");
createLabel("First Name:", "1/1 >x");
var txtFirstName = createText("2-4/1 o!");
createLabel("Last Name:", "1/2 >x");
var txtLastName = createText("2-4/2 o!");
Creates the same view as above, but now with detailed layout information.
As an example "1/2 >x" means: first column, second row, horizontal align right, vertical align middle. A full documentation on the syntax is provided in the module documentation (Hover over the UI Builder module in the Modules Explorer view).

Now let's create a combo viewer to select a country for the address:
cmbCountry = createComboViewer(["Austria", "Germany", "India", "USA"])
Simple, isn't it?

So far we did not need to react to any of our UI elements. The next step is to create a button, which needs some kind of callback action:
createButton("Save 1", 'print("you hit the save button")')
createButton("Save 2", saveAddress)

function saveAddress() {
    print("This is the save method");
}
The first version of the button adds the callback code as a string argument. When the button gets pressed, the callback code will be executed. You may call any script code that the engine is capable of interpreting.

The second version looks a bit cleaner, as it defines a function saveAddress() which is called on a button click. Note that we provide a function reference to createButton().

View the full example of this script on our script repository. In addition to some more layouting it also contains a working implementation of the save action to store addresses as JSON data files.

Interacting with SWT controls

The saveAddress() method needs to read data from the input fields of our form. This could be done using
var firstName = txtFirstName.getText();
Unfortunately SWT Controls can only be queried in the UI thread, while the script engine is executed in its own thread. To move code execution to the UI thread, the UI module provides a function executeUI(). By default this functionality is disabled as a bad script executed in the UI thread might stall your Eclipse IDE. To enable it you need to set a checkbox in Preferences/Scripting. The full call then looks like this:
loadModule("/System/UI")
var firstName = executeUI('txtFirstName.getText();');

Example 2: A viewer for our phone numbers

Now that we are able to create some sample data, we also need a viewer for our phone numbers. Assuming we can load all our addresses into an array, the only thing we need is a table viewer to visualize our entries. The following 2 lines will do the job:
var addresses = readAddresses();
var tableViewer = createTableViewer(addresses)
Where readAddresses() collects our *.address files and stores them into an array.

So the viewer works; however, we still need to define how our columns shall be rendered.
createViewerColumn(tableViewer, "Name", createLabelProvider("getProviderElement().firstName + ' ' + getProviderElement().lastName"))
createViewerColumn(tableViewer, "Phone", createLabelProvider("getProviderElement().phone"))
Whenever a callback needs a viewer element, getProviderElement() holds the actual element.
We are done! 3 lines of code for a TableViewer does not sound too bad, right? Again a full example is available on our script repository. It automatically loads *.address files from your workspace and displays them in the view.

Example 3: A workspace viewer

We had a TableViewer before, now let's try a TreeViewer. As a tree needs structure, we need to provide a callback to calculate child elements from a given parent:
var viewer = createTreeViewer(getWorkspace().getProjects(), getChildren);

function getChildren() {
    if (getProviderElement() instanceof org.eclipse.core.resources.IContainer)
        return getProviderElement().members();

    return null;
}
So simple! The full example looks like this:
Example 4: Math function viewer

The last example demonstrates how to add a custom Control to a view.
For plotting we use the Charting module that is shipped with EASE. The source code should be pretty much self-explanatory.

Some Tips & Tricks

  • Layouting is dynamic.
    Unlike the Java GridLayout you do not need to fill all cells of your layout. The EASE renderer takes care to automatically fill empty cells with placeholders
  • Elements can be replaced.
    If you use coordinates when creating controls, you may easily replace a given control with another one. This simplifies the process of layouting (e.g. if you experiment with alignments) and even allows a view to dynamically change its components depending on some external data/events
  • Full control.
    While some methods from SWT do not have a corresponding script function, all SWT calls may still be used, as the create* methods expose the underlying SWT instances.
  • Layout help.
    To simplify layouting use the showGrid() function. It displays cell borders that help you to see row/column borders.



by Christian Pontesegger (noreply@blogger.com) at July 08, 2019 06:54 PM

Eclipse Milo 0.3, updated examples

by Jens Reimann at July 06, 2019 08:22 PM

A while back I wrote a blog post about OPC UA using Milo and added a bunch of examples to get you started. Time passed, and now Milo 0.3.x is released with a changed API, so those examples no longer work. Not too much has changed, but the experience of running into compile errors isn’t a good one. I finally found some time to update the examples.

This blog post will focus on the changes compared to the old blog post. As the old blog post is still valid, I thought it might make sense to keep it and introduce the changes of Milo here. The examples repository, however, has been updated to show the new APIs on the master branch.

Making contact

This is the first situation where you run into the changed API, getting the endpoints. Although the new code is not much different, the old will no longer work:

List<EndpointDescription> endpoints =
  DiscoveryClient.getEndpoints("opc.tcp://localhost:4840")
    .get();

When you compare that to the old code, you will notice that a list is now being used instead of an array, and that the class name changed. Not too bad.

Also, the way you create a new client instance with Milo 0.3.x is a bit different now:

OpcUaClientConfigBuilder cfg = new OpcUaClientConfigBuilder();
cfg.setEndpoint(endpoints.get(0)); // please do better, and not only pick the first entry

OpcUaClient client = OpcUaClient.create(cfg.build());
client.connect().get();

Using the static create method instead of the constructor allows for a bit more processing before the class instance is actually created. This new method may also throw an exception now. Handling this in an async way isn’t too hard when you are using Java 9+:

public static CompletableFuture<OpcUaClient> createClient(String uri) {
  return DiscoveryClient
    .getEndpoints(uri) // look up endpoints from remote
    .thenCompose(endpoints -> {
      try {
        return CompletableFuture.completedFuture(
            OpcUaClient.create(buildConfiguration(endpoints)) // "buildConfiguration" should pick an endpoint
        );
      } catch (final UaException e) {
        return CompletableFuture.failedFuture(e);
      }
    });
}

That’s it? That’s it!

Well, pretty much. However, we have only been looking at the client side of Milo. Implementing your own server requires using the server side API, and that changed much more. But to be fair, the changes improve the situation a lot and make things much easier to use.

Milo examples repository

As mentioned, the examples in the repository ctron/milo-ece2017 have been updated as well. They also contain the updated server side, which changed a lot more than the client side.

When you compare the two branches master and milo-0.2.x, you can see the changes I made for updating to the new version.

I hope this helps a bit in getting started with Milo 0.3.x. And please be sure to also read the original post, which gives a more detailed introduction.

The post Eclipse Milo 0.3, updated examples appeared first on ctron's blog.


by Jens Reimann at July 06, 2019 08:22 PM

Early-Bird Talks: A Preview of What's to Come

July 05, 2019 01:25 PM

The community really came through for the early-bird deadline this year. The program committee reviewed a record number of talks (144) to come up with a top-six list!

July 05, 2019 01:25 PM

Short-Circuit Evaluation in N4JS

by n4js dev (noreply@blogger.com) at July 04, 2019 01:01 PM

Short-circuit evaluation is a popular feature of many programming languages and also part of N4JS. In this post, we show how the control-flow analysis of the N4JS-IDE deals with short-circuit evaluation, since it can have a substantial effect on the data flow and execution of a program.



Short circuit evaluation is a means to improve runtime performance when evaluating boolean expressions. This improvement is a result of skipping code execution. The example above shows an if-statement whose condition consists of two boolean expressions that combine the values of 1, 2 and 3, and its control flow graph. Note that the number literals are placeholders for more meaningful subexpressions.

First the logical and, then the logical or gets evaluated: (1 && 2) || 3. In case the expression 1 && 2 evaluates to true, the evaluation of the subclause 3 will be skipped and the evaluation of the entire condition results in true. This skipping of nested boolean expressions is called short circuit evaluation.

However, instead of skipping expression 3, expression 2 might be skipped. In case condition 1 does not hold, the control flow will continue with condition 3 right away. This control flow completely takes place within the if-condition, whereas the former short circuit targets the then block.

The reasoning behind short circuit evaluation is that the skipped code does not affect the result of the whole boolean expression. If the left hand side of a logical or expression evaluates to true, the whole or expression does as well. Only if the left hand side is false will the right hand side be evaluated. Conversely, the right hand side of a logical and expression is skipped in case the left hand side evaluates to false.


Side Effects


Risks of short circuit evaluation might arise in case a subexpression has side effects: these side effects will not occur if the subexpression is skipped. However, a program that relies on side effects of expressions inside an if-condition can be called fragile (or adventurous). In any case, it is recommended to write side-effect-free conditions.


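Such a condition might look like this (a minimal sketch; the surrounding function and the names are illustrative):

function printNonZero(i: number) {
    if (i || i++) {        // i++ is evaluated only if i is 0 (falsy)
        console.log(i);    // printed only if i was non-zero to begin with
    }
}
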
Have a look at the example above. In case variable i has a value of zero, the right hand side expression i++ is executed; otherwise, it is skipped. The side effect here is the post-increment of the value of i. If the value of i is other than zero, this value will be printed out. Otherwise, the value will be incremented but not printed. The control flow graph shows this behavior with the edge starting at i and targeting the symbol console.


Loops


Loop conditions also benefit from short circuit evaluation. This is important to know when reasoning about all possible control flow paths through the loop: each short circuit introduces another path. Combined, these paths make the data flow in loops difficult to understand in case of side effects in the subconditions.
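
For instance, the following loop condition (a minimal sketch with illustrative values) already contains one short circuit and hence one additional path through the loop:

let values = [3, 1, 4, 0, 5];
let i = 0;
while (i < values.length && values[i] !== 0) {   // the second subcondition may be skipped
    i++;
}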


Creative use of short circuit evaluation


Short circuit evaluation can be (mis)used to mimic if-statements with plain expressions, without resorting to the language feature of conditional expressions (i.e. condition() ? then() : else()). This comes in handy in places where only expressions are allowed, e.g. when passing arguments to method calls or when computing the update part of for-loops.





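Both versions might be written along these lines (a minimal sketch; the helper functions simply print their own names, and the expression in the second version is one possible way to obtain the described behavior):

function condition() { console.log("condition"); return Math.random() < 0.5; }
function then() { console.log("then"); }
function end() { console.log("end"); }

// version 1: a plain if-statement
if (condition()) {
    then();
}
end();

// version 2: a single expression statement mimicking the if-statement
(condition() && then() || true) && end();
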
The example above shows the two versions: the first uses an if-statement and the second uses an expression statement. These two statements call the functions condition, then and end. Depending on the return value of condition, the function then is executed or not. Consequently, the printouts are either "condition then end" or "condition end", depending on the control flow.

The corresponding control flow graphs reveal that the expression statement behaves similarly to the if-statement. Note that the control flow edge that skips the nodes end and end() is never traversed, since the logical or expression always evaluates to true.

The interested reader would find more details about the N4JS flow graphs and their implementation in the N4JS Design Document, Chapter: Flow Graphs.


by Marcus Mews


by n4js dev (noreply@blogger.com) at July 04, 2019 01:01 PM

Null/Undefined Analysis in N4JS

by n4js dev (noreply@blogger.com) at July 04, 2019 01:00 PM

The N4JS IDE integrates validations and analyses that are quite common for IDEs of statically typed languages. However, these analyses are seldom available for JavaScript-based languages like N4JS or TypeScript. In this post we present the null/undefined analysis for N4JS source code.




TypeError: Cannot read property of undefined
- Developer's staff of life


The runtime error above occurs pretty often for JavaScript programmers: a quick search on Google returned about 1.2 million results for the term TypeError: Cannot read property of undefined. When constraining the search to site:stackoverflow.com, the query still yields about 126 thousand results. These numbers are comparable to the somewhat similar error NullPointerException, which has about 3 million hits on Google and about 525 thousand when constrained to stackoverflow.com. Some of these results are caused by rather simple mistakes that a null/undefined analysis could detect. As a result, the developer could restructure the code and remove these potential errors even before running the first test, and hence save time.


Null/Undefined Analysis


The N4JS IDE provides static analyses that indicate problems when it detects a property access on a variable which can be null or undefined. The analysis considers all assignments that occur either through a simple assignment expression or via destructuring. Loops, conditional expressions (e.g. i = c ? 1 : 0;) and declaration initializers are respected as well.

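A minimal sketch of the kind of code that triggers this warning (the function, names and values are illustrative):

function firstExample(cond: boolean) {
    let v;                    // v is undefined here
    if (cond) {
        v = "some value";
    }
    console.log(v.length);    // warning: v may be undefined
}
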
The example above shows a first case where a potential TypeError is detected. Since at least one of the definitions of v that are reachable backwards from v.length assigns null or undefined to v, a warning is issued stating that v may be undefined.

To make sure that the analysis produces fast results, it is implemented with some limitations. One is that the analysis is done separately for each body of a function, method, etc. (i.e. it is an intra-procedural analysis). Hence it lacks knowledge of values that cross the borders of these bodies, such as the return value of a nested function call. In addition, property variables (such as v.length) are not analyzed, since this would require the analysis to be context sensitive with respect to the receiver object (here v). However, these limitations are common for static analyses of statically typed languages and still allow many problems regarding local variables and parameters to be detected.

Usually, the analysis makes optimistic assumptions. For instance, a local variable may receive the value of a method call or of another non-local variable. In this situation the analysis assumes that this value is neither null nor undefined. The same is true for function parameters. Only when there are distinct indications in the source code that the value of a local variable may be null or undefined will the analysis issue a warning.
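
A minimal sketch of this optimistic behavior (the helper function and names are illustrative):

function provide(): string {
    return "value";
}

function optimistic(param: string) {
    console.log(param.length);   // no warning: parameters are assumed to be set
    let v = provide();           // return values are not tracked across bodies
    console.log(v.length);       // no warning: the value is optimistically assumed to be set
}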


Guards

 

Sometimes the programmer knows that a variable may be null or undefined and hence checks the variable explicitly, for instance using if (v) {...}. In accordance with the execution semantics, such a check disables the warning in the then-branch.



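A minimal sketch of such a guarded access (the variable w and its values are illustrative):

function guarded(cond: boolean): number {
    let w;                     // w is undefined here
    if (cond) {
        w = "hello";
    }
    if (w) {                   // explicit guard
        if (w.length < 1) {    // no warning inside the guard
            return 0;
        }
        return w.length;       // no warning here either
    }
    return -1;                 // accessing w.length here would be flagged
}
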
As shown in the example above, no warning is issued at the expression w.length < 1 or at the statement return w.length;. Of course, the else-branch of such a check would consequently always indicate a warning when a property of variable w is accessed. Checks in conditional expressions and binary logical expressions (e.g. v && v.length) are supported as well. A reader might think: "In case w is null, the expression w.length would fail." True, but in this example the analysis detects that the value of w can only be undefined. In case null might have been assigned to w before, e.g. in an if-branch, the analysis will issue a warning that w may be null at the two w.length expressions.


Data Flow

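Consider the following sketch (the variables v and w are illustrative and mirror the description below):

let w;                   // w is undefined here
let v = w;               // the possibly undefined value flows from w to v
console.log(v.length);   // warning: v may be undefined; the source of the value is w
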
There are situations where the value of a variable is null or undefined due to a previous assignment from another variable which may have been null or undefined before, as shown in the example above. The null/undefined dereference problem then occurs later, when a property is accessed. Since the analysis respects data flow, it can follow the subsequent assignments. Hence a warning is shown at the property access, indicating the null or undefined problem. Moreover, the warning also indicates the source of the null or undefined value, which is the variable w in the example above.

The interested reader would find more details about the N4JS flow graphs and their implementation in the N4JS Design Document, Chapter: Flow Graphs.



by Marcus Mews

by n4js dev (noreply@blogger.com) at July 04, 2019 01:00 PM

The end of the Papyrus IC

by tevirselrahc at July 02, 2019 03:47 PM

With the end of the Eclipse PolarSys adventure comes that of the Papyrus IC.

In the end, we could not maintain the momentum to move forward.

We failed to grow our community, and in doing so, we failed our community.

But this is not the end of Papyrus! Not by a long shot!

Papyrus is more vibrant than ever.

New variants are still being built, e.g., Papyrus UMLLight; new releases are still provided with continued improvements; and a new major release is planned for the project.

As well, many companies, research groups, schools, and individuals are still teaching with it, working with it, improving it.

A personal shout out to the EclipseSource employees and Queen's University's faculty and students for their dedication to the Papyrus products, and to Francis, our glorious leader, for his perseverance!

For the time being, this blog will remain a beacon of light for Papyrus. But there will be a time when it will have to close. Other endeavours await the author.

If anyone wants to help or take it over, please let me know.


by tevirselrahc at July 02, 2019 03:47 PM

Current State of C/C++ Language Servers

by Doug Schaefer at June 28, 2019 07:30 PM

A Bit of History

When I joined the Eclipse CDT project back in 2002 (yeah, it’s been a long time), I was working on modeling tools for “real time”, or more accurately, embedded reactive systems. Communicating state machines. I wrote code generators that generated C and C++ from ROOM models and then eventually UML-RT. ROOM was way better by the way and easier to generate for because it was more semantically complete and well defined. That objective is key later in this story.

We had the vision to integrate our modeling tools more closely with Integrated Development Environments. We started looking at Visual Studio but Eclipse was the young up and comer. That and IBM bought us, Rational by that point, and had already bought OTI who built Eclipse so it was a natural fit. And we were all in Ottawa. And by chance, Ottawa-based QNX had already written a C/C++ IDE based on Eclipse and were open sourcing it and it was perfect for our customers as well. It’s amazing how that all happened and led to my life as CDT Doug.

Our first order of business was to help the CDT become an industry class C/C++ IDE and become a foundation for integrating our modeling tools. Since we wanted to be able to generate model elements from code, it required we have accurate C and C++ parsers and indexers. No one figured we could do it, but we were able to put together a somewhat decent system written in Java in the org.eclipse.cdt.core plug-in.

Scaling is Hard

However, as the community started to try it out on real projects, especially ones of a significant size, we started to run into pretty massive performance problems with the indexer. We were essentially doing full builds of the user's projects and storing the results in a string table. On large projects, builds take a long time. But users expect that and put up with it because they really need the binaries it produces. They don't have the same patience for their IDEs building indexes they don't really see, and we paid a pretty high price for that.

As a solution, I wondered if we could store the symbol information that we were gathering in a way that we could load it up from disk as we were parsing other files and plug the symbol info into the AST the same way we do symbols normally. This would allow us to parse header files once and reuse the results, similar to how precompiled headers work. The price you pay is in accuracy since some systems parse header files multiple times with different macro settings. But my guess was that it wouldn’t be that bad.

It was hard to convince my team at IBM Rational to take this road. Accuracy was king for our modeling tools. But when I moved to join QNX, I decided to forgo that requirement and give this "fast indexer" strategy a go. And the rest is history. Performance on large projects was an order of magnitude faster. Incremental indexing of files as they were saved isn't even noticeable. It was a huge success and my proudest contribution to the CDT. And it got even better when other community members lent us their expertise to make the accuracy better and better, so you barely notice that at all either.

C++ Rises from the “Dead”

Move the clock forward a decade and we started running into a problem. The C++ standards community has new life and is adding a tonne of new features at a three-year cadence. The CDT community has long since lost most of the experts that built the original parsers. Lucky for us, a new crop of contributors has come along and is doing heroic work to keep up. But it's getting harder and harder. One thing we benefit from is how slow embedded developers, the majority of CDT users, are to adopt the new standards. It gives us time, but not forever. We need to find a better way.

Then along came the Language Server Protocol and a small handful of language servers that do C/C++. I’ve investigated four of them. Three of them are based on llvm and clang. One of them is in tree with llvm and clang in clang-tools-extra, i.e., clangd. The other two are projects that use libclang with parts of the tree, i.e., cquery and ccls. Those two projects are what I call “one person projects” and with cquery at least, that person found something else to do last November. Beware of the one person project.

clangd

I’ve spent a lot of time with clangd when experimenting with Visual Studio Code. For what it does, clangd is very accurate and really fast. It uses compile_commands.json files to find out what source files are built and what compiler and command lines they use. I’ve had to fork the tree to add in support for discovering compilers it doesn’t know about, but that was pretty easy to put together. It gives great content assist and you get the benefit of clang’s awesome compilation error diagnostics as you type. It shows a lot of promise.

However, clangd for the longest time lacked an indexer. When you search for references, it only finds them in files you have opened previously. The thought, as I understand it, is that you use another process to build the index, and that is usually done at build time. However, not all users have such an environment set up, so having an index created by the IDE is a mandatory feature. Now, clangd did eventually get an indexer, but it does what the old CDT indexer did and completely parses the source tree. That predictably takes forever on large projects, and I don't think users have the appetite to take a huge step backwards like that.

IntelliSense

While waiting for the right solution to arrive for clangd, I thought I'd give the Microsoft C/C++ Tools for VS Code a try. My initial experience was quite surprising. It actually worked well with a GNU tools cross-compiler project I used for testing. You have to teach it how to parse your code using a magic JSON file, which fits right in with the rest of VS Code. It's able to pick out the default include path when you point it at your compiler. It has MI support for debugging, though no built-in support for remote debugging, but that was hackable. It seemed like a reasonable alternative, at least for VS Code.

However, when I tried it with one of our production projects it quickly fell apart. It does a great job trying to figure out include paths, similar to the heuristics we use in CDT. That includes things like treating all the folders in your workspace as potential include path entries. But it tended to make mistakes. It even has support for compile_commands.json files, so I could tell it the command lines that were used. It did better, but it still tried to do too much and gave incorrect results.

That and it doesn’t have an index yet either. One is coming soon, but if it can’t figure out how to parse my files correctly, it’s not going to be a great experience. Still a lot of work to do there.

Where do we go from here?

As it stands today, at least from a CDT perspective, there really isn't a language server solution that comes near what we have in CDT. Yes, some things are better. Both of these language servers are using real parsers to parse the code (or at least clangd is; Microsoft's, of course, is closed source, so I can only assume). They give really good content assist and error diagnostics, and open declaration works. But without a usable indexer, you don't get accurate symbol references. And I haven't even mentioned refactoring, which CDT has and which is not even suggested in the Language Server Protocol.

So if all you're doing is typing in code, the new language servers are great. But if you need to do some code mining to understand the code before you change it, you're out of luck. The good news is that we are continuing to see investment in them, so who knows. But then, maybe the CDT parsers catch up with the language standards before these other language servers grow great indexers, making the whole thing moot. I wouldn't bet against that right now.


by Doug Schaefer at June 28, 2019 07:30 PM
