Eclipse Neon.1 - New and Noteworthy

September 28, 2016 03:54 PM

The Neon.1 release is landing today! Read what's new and noteworthy.


Eclipse Neon.1 - 10 Improvements Video

September 28, 2016 02:43 PM

Thank you, Holger Voormann, for putting together this great Neon.1 video!


Eclipse Neon.1: 4:00-minute demo of 10 improvements

by howlger at September 28, 2016 02:21 PM

Eclipse Neon.1 has arrived. Watch a 4:00-minute demo of 10 Eclipse Neon.1 improvements, including the IDE/platform, Java (WindowBuilder and Code Recommenders), Web/JavaScript and Docker Tooling:

Eclipse Neon.1: 4:00-minute demo of 10 improvements

And in case you missed it, here is the 5:30-minute video showing 22 Neon.0 improvements.




WTP 3.8.1 Released!

September 28, 2016 10:00 AM

The Web Tools Platform's 3.8.1 Release is now available! Installation and update can be performed using the Neon Update Site at http://download.eclipse.org/releases/neon/. Release 3.8.1 fixes issues that occur in prior releases or have been reported since 3.8's release. WTP 3.8.1 is featured in the Neon.1 Eclipse IDE for Java EE Developers, with selected portions also included in other packages. Adopters can download the build itself directly.




Vert.x featuring Continuous Delivery with Jenkins and Ansible

by ricardohmon at September 28, 2016 12:00 AM

This blog entry describes an approach to adopting Continuous Delivery for Vert.x applications using Jenkins and Ansible, taking advantage of the Jenkins Job DSL and Ansible plugins.


Preamble

This post was written in the context of the project titled “DevOps tooling for Vert.x applications”, one of the Vert.x projects taking place during the 2016 edition of Google Summer of Code, a program that aims to bring students together with open source organizations in order to help them gain exposure to software development practices and real-world challenges.

Introduction

System configuration management (e.g., Ansible) has seen a lot of hype in recent years, and there is a strong reason for that. Configuration management makes it easy to configure a new environment following a fixed recipe, or to vary it slightly with the help of parameters. This not only makes it possible to set up environments more frequently, it also reduces the chance of errors compared to doing it manually.
Beyond that, combining it with Continuous Integration tools (e.g., Jenkins) makes it possible to deploy as soon as a new version of the codebase is available, which represents the main building block of a Continuous Delivery pipeline, one of the objectives of embracing a DevOps culture.

Given that Vert.x is a framework that consists of a few libraries which can be shipped within a single fat jar, adopting a DevOps culture while developing a Vert.x-based application is straightforward.
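
For readers who have not seen Vert.x code yet, here is a minimal sketch of such an application (the class name and port are made up for illustration, using the Vert.x 3 API); packaged together with the Vert.x libraries into a single fat jar, it is exactly the kind of artifact the pipeline below builds and deploys:

import io.vertx.core.AbstractVerticle;

public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // a trivial HTTP server; all required libraries ship inside the fat jar
        vertx.createHttpServer()
            .requestHandler(req -> req.response().end("Hello from Vert.x"))
            .listen(8080);
    }
}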

Overview

As seen in the diagram below, this post describes a method to define a Jenkins build job which will react to changes in a code repository. After successfully building the project, the job will execute an Ansible playbook to deploy the new application version to the hosts specified in the Ansible configuration.

Overview of the continuous delivery process

Creating a Jenkins build job using Job DSL

Jenkins provides a convenient way to define build jobs using a DSL. Besides avoiding the hassle of configuring build jobs manually, it supports all features of the regular interface through its API. It is possible to use Ansible together with Jenkins with the help of the Ansible plugin, whose instructions are also included in the Job DSL API. As an alternative to the Job DSL plugin, Ansible can be used inside the definition of a Jenkins Pipeline, one of the tool’s most recent features.

Below is a sample job definition which can be used after creating a freestyle job (seed job) and adding a new build step with the DSL script. In the script, there are a few things to notice:

  • A name for the job created by the seed job is given.
  • Specific versions of JDK, Maven, and Ansible (available in the environment) are used.
  • Git is selected as the SCM platform and the target repository is defined. Also, the build job is triggered according to a specific interval.
  • The Maven package goal is invoked, which is instructed to package the application into a fat jar.
  • Lastly, Ansible is used to call a playbook available in the filesystem. The app will be deployed to the defined target hosts, using the credentials (configured in Jenkins) to log into them. Additionally, enabling the colorizedOutput option will result in friendlier formatting of the results in the console output. The contents of this playbook will be addressed in the next section.
job('vertx-microservices-workshop-job') {
    jdk('JDK8')
    scm {
        git('git://github.com/ricardohmon/vertx-microservices-workshop.git')
    }
    triggers {
        scm('*/15 * * * *')
    }
    steps {
        def mvnInst = 'M3.3.9'
        maven {
            goals('package')
            mavenInstallation(mvnInst)
        }
        ansiblePlaybook('/ansible/playbook.yml') {
            inventoryPath('/ansible/hosts')
            ansibleName('Ansible2.0')
            credentialsId('vagrant-key')
            colorizedOutput(true)
        }
    }
}

Deploying Vert.x app using Ansible

An Ansible playbook is quite a convenient way to deploy a Vert.x application to a number of hosts while still taking host-specific considerations into account. Below is a sample playbook that deploys the respective application to each of the hosts described in an inventory file. The playbook comprises the following tasks and considerations:

1) A task that targets only hosts with a database.

  • The target hosts are specified with the name of the host (or host group) defined in the inventory file.

2) Actual application deployment task. Here, several considerations apply:

  • The application may require that only one host be updated at a time.
    This can be achieved with the serial option, while the order of deployment to the hosts can be enforced in the hosts option.
    Host processing order
    Even though we could have declared all hosts together, Ansible does not provide an explicit way to specify their processing order.
  • Java is a system requirement for our Vert.x applications.
    Besides installing it (keep reading), we need to declare the JAVA_HOME environment variable.
  • A deployment may just represent an update to an already running application (Continuous Deployment), hence it is convenient to stop the previous application inside the pre_tasks and take post-deployment actions in the post_tasks. Vert.x ships with convenient start/stop/list commands that are very helpful here. We can use the list command and extract (using a regex) the id of the running application from its output to stop it before deploying a new version.
    Hint
    If our solution includes a load balancer or proxy, we could deal with them at this step, as described in Ansible’s best practices for rolling updates.
  • Call to a role that performs the actual application deployment. The Jenkins Ansible plugin provides, among others, a WORKSPACE environment variable, which can be very helpful in the following tasks, as shown later.
# 1) Special task for the service with a db
- hosts: audit-service
  remote_user: vagrant
  become: yes
  roles:
    - db-setup

# 2) Common tasks for all hosts
- hosts: quote-generator:portfolio-service:compulsive-traders:audit-service:trader-dashboard
  remote_user: vagrant
  become: yes
  serial: 1
  environment:
    JAVA_HOME: /usr/lib/jvm/jre-1.8.0-openjdk/

  pre_tasks:
  - name: Check if the app jar exists in the target already
    stat: path=/usr/share/vertx_app/app-fatjar.jar
    register: st
  - name: List running Vert.x applications
    command: java -jar /usr/share/vertx_app/app-fatjar.jar list
    register: running_app_list
    when: st.stat.exists == True
  - name: Stop app if it is already running (avoid multiple running instances)
    command: java -jar /usr/share/vertx_app/app-fatjar.jar stop {{ item | regex_replace('^(?P<id>.{8}-.{4}-.{4}-.{4}-.{12})\t.*', '\\g<id>') }}
    with_items: "{{ running_app_list.stdout_lines|default([]) }}"
    when: st.stat.exists == True and (item | regex_replace('.*\t(.*)$', '\\1') | match('.*/app-fatjar.jar$'))

  # Main role
  roles:
    - { role: vertx-app-deployment, jenkins_job_workspace: "" }

  post_tasks:
  - name: List again running Vert.x applications
    command: java -jar /usr/share/vertx_app/app-fatjar.jar list

Once we have taken care of the actions shown above, the remaining tasks (included in the main deployment role) reduce to the following:

1) Prepare the target machine with the proper environment to run our application. This includes:

  • Set up Java (pretty convenient to do through a package manager).
  • Copy the Vert.x application package to the appropriate folder (quite simple when using a fat jar). The actual name and location of the jar package in the Jenkins environment can be defined using host-specific variables.
  • If necessary, copy the required config files.
- name: Install Java 1.8 and some basic dependencies
  yum: name={{ item }} state=present
  with_items:
   - java-1.8.0-openjdk
- name: Ensure app dir exists
  file: path=/usr/share/vertx_app/ recurse=yes state=directory mode=0744
- name: Copy the Vert.x application jar package
  copy: src={{ app_jar }} dest=/usr/share/vertx_app/app-fatjar.jar mode=0755
- name: Ensure config dir exists
  file: path=/etc/vertx_app/ recurse=yes state=directory mode=0744
- name: Copy the application config file if needed
  copy: src={{ app_config }} dest=/etc/vertx_app/config.json mode=0755
  when: app_config is defined

2) Run the application as a service in the hosting machine.

  • Make sure to ignore the hang-up signal with the help of the nohup command. Otherwise, Ansible will be stuck at this step.
- name: Run Vert.x application as a service, ignore the SIGHUP signal
  shell: nohup java {{ vertx_opts }} -jar /usr/share/vertx_app/app-fatjar.jar start {{ launch_params }}
  register: svc_run_out
- name: Print run output
  debug: var=svc_run_out.stdout_lines

Launching the Vert.x app
This example uses the start command to launch the application as a service. This method may be more comfortable than creating an init.d script or calling Vert.x from the command line, which would have required installing the Vert.x libraries in a separate Ansible task.

This describes all the configuration needed to be able to build from a repository using Jenkins and deploy the results to our hosts with Ansible.

Sample sources and demo

The sample configurations presented above are part of a complete demo based on the Vert.x microservices workshop, which exemplifies a basic Continuous Delivery scenario. This setup is available in a repository and contains, in addition, a pre-configured Jenkins-based demo ready to host the build job described in the previous sections. The demo scenario requires Vagrant and VirtualBox to be launched.

Launch instructions

  • Clone or download this repository, and launch the demo using vagrant up

    git clone https://github.com/ricardohmon/vertx-ansible.git
    cd demo
    vagrant up
    This command will launch a virtual machine hosting Jenkins with the required plugins installed (see the tool name assumptions below) and also launch five additional VMs that will host the microservices deployed by Jenkins.

  • Create a Jenkins freestyle build job using the DSL job script (seed job) found in deployment-jobs/microservices_workshop_dsl.groovy and build it.

    Tool configuration assumption
    The DSL job assumes the following tools (with these names) have been configured in Jenkins: Java 8 (JDK8), Maven (M3.3.9), Ansible (Ansible2.0)

  • After building the seed job, a new job (vertx-microservices-workshop-job) will be created, which will be in charge of pulling recent changes to the project, building it, and deploying it.

Demo

Watch the previous demo in action in the following screencast:

Conclusion

A Continuous Delivery approach is a must in modern software development lifecycles (including for Vert.x-based applications) and a step further towards adopting a DevOps culture. There are a number of tools that enable it, and one example is the combination of Jenkins and Ansible described in this post.
While Jenkins offers the possibility to integrate recent changes in a codebase and build runnable artifacts, Ansible can help to deploy them to hosting environments. The two tools can be coupled easily with the help of the Job DSL plugin, a feature of Jenkins that allows describing a build job using a domain-specific language and makes it easy to integrate additional steps and tools into a CD pipeline.

Further enhancements can be made to this basic pipeline, such as integrating the recent Pipeline plugin, a feature that allows a better orchestration of CD stages; including notification and alerting services; and, ultimately, a zero-downtime deployment approach, which could be achieved with the help of a proxy; plus the tons of options available through Jenkins plugins.

Thanks for reading!



Replacing Bugzilla with Tuleap

by waynebeaton at September 27, 2016 07:45 PM

Bugzilla has served the Eclipse Community well for many years, easily scaling to serve the needs of over 500 open source projects, servicing a community of thousands of software developers generating half a million issue reports over close to two decades (I’m taking some liberties with that last one: it’s been about 17 years). When I say “easily”, I mean that our world-class IT team has made it look easy.

While generally robust, Bugzilla isn’t sexy, and it’s missing some valuable features. I’m also a little concerned that the team responsible for maintaining Bugzilla doesn’t seem to have the resources necessary to move Bugzilla forward (though, I’ll admit that I have only limited insight).

I’ve been investigating Tuleap, which is billed as a “software development and project management tool”, as a potential successor to Bugzilla. Any effort to make Tuleap a first-class service for Eclipse projects will include the deprecation and eventual retirement of Bugzilla. Much like our migration from CVS to Git, I expect that project teams will start slowly adopting the new technology. Much like that other migration, the pace will likely pick up quickly as project teams see the benefits being reaped by the earlier adopters. This is how it will have to be: with project teams migrating themselves completely off Bugzilla and into the new system.

The criteria for picking a Bugzilla successor are pretty straightforward. Any new issue tracking software we adopt must:

  1. be able to import our existing issue (bug) data;
  2. be open source;
  3. have a strong community behind it; and
  4. provide equivalent functionality to what Bugzilla provides today.

Tuleap can import existing Bugzilla data directly, so that’s a huge win. We’ve discussed potential for synchronising data across systems, but that’s a big and expensive challenge that I’d really like to avoid. Once a team decides to move to Tuleap, the data will be moved and they’ll be off. This is an important consideration for project teams that might be considering participating in the ongoing proof of concept: we do have a means of moving your existing data from Bugzilla to Tuleap, but currently don’t have a means of making the return trip should we decide to move in a different direction.

Requiring that the replacement be open source is, I think, obvious. All of our public-facing systems are open source today and this will continue into the future.  Tuleap is 100% open source.

Having a strong community is important and ties into the open source requirement. Replacing Bugzilla is going to be a massive undertaking and we need to have confidence that the replacement that we choose has some staying power. Tuleap appears to have some large enterprise organizations adopting the technology, which is encouraging. We’ll expect to contribute to the open source project, but we need to be part of a community that plans to contribute and continue to evolve the product.

I think that it would be a mistake to look for a feature-to-feature replacement for Bugzilla. If we’re going to make this change, then we need to do some dreaming. Having said that, with my experimentation so far, I believe that Tuleap provides most of the fundamental functionality that we require.

A small number of our projects have started to independently evaluate the potential to use Tuleap. I’m certainly not an expert, but I’ve engaged with both the Tuleap project itself and several of the Eclipse projects testing Tuleap, via Tuleap, as a user, so I’m starting to get a pretty good feel for it. So far, my experience has shown that it’s a pretty awesome bit of software and the projects that are currently using it seem to be moving along nicely.

Tuleap is more than an issue tracker. What I think sells a lot of project teams on Tuleap is the ability to engage in a full Scrum or Kanban process, but Tuleap provides all sorts of other services, including a wiki, document management, Git hosting, Gerrit integration, and more. It is relatively easy for a project team to create additional Git repositories from directly within the Tuleap interface. There is a notion of a project landing page that likely overlaps (at least a little) with the PMI; it may be worth investigating integration options (I’m not particularly keen on project teams being required to manage multiple landing pages on different services).

Tuleap provides fine-grained permissions which would make it relatively easy to, for example, give a project lead the ability to grant access to non-committer triagers (I believe that our various policies permit this sort of thing).

It is possible to add custom fields to a project team’s bucket to capture information that may be specific to the project. My understanding, however, is that the user experience for this sort of administrative task is not as refined as the user-facing interface. Customisation can be templated, which may make it possible to share some set of common customisations. But customisation introduces a troubling problem: since the source and target may have different customisations, moving issues from one team’s bucket to another team’s bucket is not currently supported. Moving issues must be easy (especially considering the number of bugs that are incorrectly opened against JDT). Tuleap also does not currently provide an easy means of marking duplicates. According to the project team, these important features just haven’t been implemented yet. I’ve opened an artifact to track the requirement to easily move issues.

Unfortunately, I believe that the abilities to move an issue from one project team to another, and to quickly and easily mark duplicates are critical to our adoption and so are show stoppers for the moment.

I have been trying to think a bit outside of the box. One of the bigger drivers for the need to be able to move issues is that the JDT team is the target for any and every issue that comes up in any variant of an Eclipse IDE that includes Java development tools. Maybe we can use adoption of a new issue tracking system as an opportunity to help our users find the right place to report issues without having to navigate our project structure. That’s going to require a lot of thought and energy from the community as a whole.

I think that Tuleap has a lot of potential, but we’re not quite ready for widespread adoption by Eclipse projects. I’ve managed to identify two show stoppers, but they’re point-in-time problems that will be addressed. I believe that we’re setting up a BoF at EclipseCon Europe 2016; if you’re using Tuleap now, or are interested in being a part of the decision making process for widespread adoption, please attend that session (and, of course, please do feel free to connect with me one-on-one at the conference). Please also interact with the projects currently using Tuleap and let me know if your project is interested in participating in the experiment.

Finally, please join the discussion on Bug 497333.

EclipseCon Europe 2016




Eclipse Newsletter | IoT is the New Black

September 27, 2016 02:44 PM

The Eclipse Newsletter is all about Eclipse IoT this month. Start reading now!


About the notation of the names Xtext, Xtend, Xbase, Xcore & Xpand

by Karsten Thoms (thoms@itemis.de) at September 27, 2016 11:26 AM

xText? XPand? XBASE? We sometimes see different ways these names are written in online publications and even scientific works. Here's a quick reminder of how to write them correctly and why this is important.

Names are defined with a particular notation and must be used as they are defined. This is especially true for character casing. However, there seems to be some confusion about the right notation of the “X-frameworks” at Eclipse. We see these names written with different styles of upper and lower case, like XText, xText or XTEXT.

Unfortunately this goes on for other names as well. Please note that this is simply wrong. You won’t find any place in the official documentation or websites where these names are written with different case notations. So please use the right notation: a capital “X” followed by lower-case characters.

Here's a list of the correct notation of Eclipse's X-frameworks:

  • Xtext
  • Xtend
  • Xbase
  • Xcore
  • Xpand

We encounter wrong notations in articles, blog posts, forum questions and documents, but even in contracts, bachelor's or master's theses, and CVs. Imagine how it looks if you read the CV or profile of a person who cannot even write the names of the technologies they use correctly!

For any project it is important to use its name consistently. It is already a mess how often you find wrongly written names on the internet, and this can only be explained by people copying and pasting them without thinking about the right notation.

We hope that this post helps a bit to disambiguate the names and helps you avoid a common mistake. Please use the right notation, and remind anyone using a wrong one to check their spelling and why it's important to get these names right. One way would be to politely send them to this post. Thanks! :)



Configuring OSGi Declarative Services

by Dirk Fauth at September 26, 2016 06:40 AM

In my blog post about Getting Started with OSGi Declarative Services I provided an introduction to OSGi declarative services: how to create them, how they behave at runtime, how to reference other services, and so on. But I left out an important topic there: configuring OSGi components. Well, to be precise, I mentioned it, and one sort of configuration was also used in the examples, but it was not explained in detail. As there are multiple aspects with regard to component configuration, I wanted to write a blog post dedicated to that topic, and here it is.

After reading this blog post you should have a deeper understanding of how OSGi components can be configured.

Basics

A component can be configured via Component Properties. Properties are key-value pairs that can be accessed via Map<String, Object>. With DS 1.3, Component Property Types were introduced for type-safe access to Component Properties.

Component Properties can be defined in different ways:

  • inline
  • via Java properties file
  • via OSGi Configuration Admin
  • via argument of the ComponentFactory.newInstance method
    (only for factory components; as I didn’t cover them in the previous blog post, I won’t cover this topic here either)

Component Properties that are defined inline or via properties file can be overridden by using the OSGi Configuration Admin or the ComponentFactory.newInstance argument. Basically the property propagation is executed sequentially. Therefore it is even possible to override inline properties with properties from a properties file, if the properties file is specified after the inline properties.
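
As a hypothetical sketch of this layering (the property and properties type elements are the real DS ones, the values are made up), a component could combine both sources; assuming the properties file is processed after the inline entries, as described above, the file entry wins:

@Component(
    property = { "message=inline default" },     // applied first
    properties = "OSGI-INF/config.properties"    // processed afterwards, may override "message"
)
public class OverrideOrderComponent { }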

The SCR (Service Component Runtime) always adds the following Component Properties that can’t be overridden:

  • component.name – The component name.
  • component.id – A unique value (Long) that is larger than all previously assigned values. These values are not persistent across restarts.

In a life cycle method (activate/modified/deactivate) you can get the Component Properties via method parameter. The properties that are retrieved in event methods for referencing other services (bind/updated/unbind) are called Service Properties. The SCR performs a property propagation in that case, which means that all non-private Component Properties are propagated as Service Properties. To mark a property as private, the property name needs to be prefixed with a full stop (‘.’).
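
As a small illustrative sketch (the property names are made up), a component could mix private and public properties like this; a referencing component would only see the second entry among its Service Properties:

@Component(
    property = {
        ".internal.secret=component-only",   // private: not propagated as a Service Property
        "public.info=propagated"             // public: visible as a Service Property
    }
)
public class PropagationComponent { }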

First I will explain how to specify Component Properties in different ways. I will use a simple example that inspects the properties in a life cycle method. After that I will show some examples on the usage of properties of service references.

Let’s start to create a new project for the configurable components:

  • Create a new Plug-in Project via File -> New -> Plug-in Project. (Plug-in Perspective needs to be active)
    • Set the Plug-in name to org.fipro.ds.configurable
    • Press Next
    • Ensure that no Activator is generated, no UI contributions will be added and that no Rich Client Application is created
    • Press Finish
  • Open the MANIFEST.MF file and switch to the Dependencies tab
  • Add the following dependency on the Imported Packages side:
    • org.osgi.service.component.annotations (1.2.0)
  • Mark org.osgi.service.component.annotations as Optional via Properties… to ensure there are no runtime dependencies. We only need this dependency at build time.
  • Create the package org.fipro.ds.configurable

Inline Component Properties

You can add Component Properties to a declarative service component via the @Component annotation property type element. The value of that annotation type element is an array of Strings, which need to be given as key-value pairs in the format
<name>(:<type>)?=<value>
where the type information is optional and defaults to String.

The following types are supported:

  • String (default)
  • Boolean
  • Byte
  • Short
  • Integer
  • Long
  • Float
  • Double
  • Character

There are typically two use cases for specifying Component Properties inline:

  • Define default values for Component Properties
  • Specify some sort of meta-data that is examined by referencing components

Of course the same applies for Component Properties that are applied via Properties file, as they have an equal ranking.

  • Create a new class StaticConfiguredComponent like shown below.
    It is a simple Immediate Component with the Component Properties message and iteration, where message is a String and iteration is an Integer value. In the @Activate method the Component Properties will be inspected and the message will be printed out to the console as often as specified in iteration.
    Remember that it is an Immediate Component, as it doesn’t implement an interface and it doesn’t specify the service type element.
package org.fipro.ds.configurable;

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

@Component(
    property = {
        "message=Welcome to the inline configured service",
        "iteration:Integer=3"
    }
)
public class StaticConfiguredComponent {

    @Activate
    void activate(Map<String, Object> properties) {
        String msg = (String) properties.get("message");
        Integer iter = (Integer) properties.get("iteration");

        for (int i = 1; i <= iter; i++) {
            System.out.println(i + ": " + msg);
        }
        System.out.println();
    }
}

Now execute the example as a new OSGi Framework run configuration (please have a look at Getting Started with OSGi Declarative Services – 6. Run to see how to setup such a configuration). If you used the same property values as specified in the above example, you should see the welcome message printed out 3 times to the console.

Inspecting the inline specified properties at activation time is surely not a typical use case. But it should give an idea of how to specify Component Properties statically inline via @Component.

Component Properties from resource files

Another way to specify Component Properties statically is to use a Java Properties File that is located inside the bundle. It can be specified via the @Component annotation properties type element, where the value needs to be an entry path relative to the root of the bundle.

  • Create a simple properties file named config.properties inside the OSGI-INF folder of the org.fipro.ds.configurable bundle.
message=Welcome to the file configured service
iteration=4
  • Create a new class FileConfiguredComponent like shown below.
    It is a simple Immediate Component like the one before, getting the Component Properties message and iteration from the properties file.
package org.fipro.ds.configurable;

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

@Component(
    properties="OSGI-INF/config.properties"
)
public class FileConfiguredComponent {

    @Activate
    void activate(Map<String, String> properties) {
        String msg = (String) properties.get("message");
        String iter = (String) properties.get("iteration");

        if (msg != null && iter != null) {
            Integer count = Integer.valueOf(iter);
            for (int i = 1; i <= count; i++) {
                System.out.println(i + ": " + msg);
            }
            System.out.println();
        }
    }
}
  • Add the OSGI-INF/config.properties file to the build.properties to include it in the resulting bundle jar file. This is of course only necessary in case you haven’t added the whole directory to the build.properties.

On executing the example you should now see the console outputs for both components.

I’ve noticed two things when playing around with the Java Properties File approach:

  • Compared with inline properties it is not possible to specify a type. You only get Strings, which leads to manual conversions (at least before DS 1.3 – see below).
  • The properties file needs to be located in the same bundle as the component. It cannot be added via a fragment.

With these two facts in mind, there are not many use cases for this approach. IMHO this approach was intended to support client-specific properties that are, for example, placed inside the bundle during the build process.

Bndtools vs. PDE

  • Create the config.properties file in the project root
  • Add the -includeresource instruction to the bnd.bnd file
    This is necessary to include the config.properties file to the resulting bundle jar file. The instruction should look similar to the following snippet to specify the destination and the source.

    -includeresource: OSGI-INF/config.properties=config.properties

    Note:
    The destination is on the left side of the assignment and the source is on the right.
    If only the source is specified (that means no assignment), the file is added to the bundle root without the folder where it is included in the sources.

Component Properties via OSGi Configuration Admin

Now let’s have a look at the dynamic configuration by using the OSGi Configuration Admin. For this we create a new component, although it would not be necessary, as we could also use one of the examples before (remember that we could override the statically defined Component Properties dynamically via the Configuration Admin). But I wanted to start with creating a new component, to have a class that can be directly compared with the previous ones.

To specify properties via Configuration Admin it is not required to use any additional type element. You only need to know the configuration PID of the component to be able to provide a configuration object for it. The configuration PID (Persistent IDentity) is used as a key for objects that need a configuration dictionary. With regards to the Component Configuration this means, we need the configuration PID to be able to provide the configuration object for the component.

The PID can be specified via the configurationPid type element of the @Component annotation. If not specified explicitly, it defaults to the component name, which in turn is the fully qualified class name unless explicitly set to another value.

Via the configurationPolicy type element it is possible to configure the relationship between component and component configuration, e.g. whether there needs to be a configuration object provided via Configuration Admin to satisfy the component. The following values are available:

  • ConfigurationPolicy.OPTIONAL
    Use the corresponding configuration object if present, but allow the component to be satisfied even if the corresponding configuration object is not present. This is the default value.
  • ConfigurationPolicy.REQUIRE
    There must be a corresponding configuration object for the component configuration to become satisfied. This means that there needs to be a configuration object that is set via Configuration Admin before the component is satisfied and therefore can be activated. With this policy it is for example possible to control the startup order or component activation based on configurations.
  • ConfigurationPolicy.IGNORE
    Always allow the component configuration to be satisfied and do not use the corresponding configuration object even if it is present. This basically means that the Component Properties can not be changed dynamically using the Configuration Admin.

If a configuration change happens at runtime, the SCR needs to take actions based on the configuration policy. Configuration changes can be creating, modifying or deleting configuration objects. Corresponding actions can be for example that a Component Configuration becomes unsatisfied and therefore Component Instances are deactivated, or to call the modified life cycle method, so the component is able to react on a change.

To be able to react to a configuration change at runtime, a method handling the modified life cycle event can be implemented. Using the DS annotations this is done via the @Modified annotation, where the method parameters can be the same as for the other life cycle methods (see the Getting Started Tutorial for further information).

Note:
If you do not specify a modified life cycle method, the Component Configuration is deactivated and afterwards activated again with the new configuration object. This is true for the configuration policy require as well as for the configuration policy optional.

Now create a component similar to the previous ones, which should only be satisfied if a configuration object is provided via the Configuration Admin. It should also be prepared to react to configuration changes at runtime. Specify an alternative configuration PID so it is not necessary to use the fully qualified class name of the component.

  • Create a new class AdminConfiguredComponent like shown below.
    It is an Immediate Component that prints out a message for a specified number of iterations.

    • Specify the configuration PID AdminConfiguredComponent so it is not necessary to use the fully qualified class name of the component when trying to configure it.
    • Set the configuration policy REQUIRE, so the component will only be activated once a configuration object is set by the Configuration Admin.
    • Add life cycle methods for modified and deactivate to be able to play around with different scenarios.
package org.fipro.ds.configurable;

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Modified;

@Component(
    configurationPid = "AdminConfiguredComponent",
    configurationPolicy = ConfigurationPolicy.REQUIRE
)
public class AdminConfiguredComponent {

    @Activate
    void activate(Map<String, Object> properties) {
        System.out.println();
        System.out.println("AdminConfiguredComponent activated");
        printMessage(properties);
    }

    @Modified
    void modified(Map<String, Object> properties) {
        System.out.println();
        System.out.println("AdminConfiguredComponent modified");
        printMessage(properties);
    }

    @Deactivate
    void deactivate() {
        System.out.println("AdminConfiguredComponent deactivated");
        System.out.println();
    }

    private void printMessage(Map<String, Object> properties) {
        String msg = (String) properties.get("message");
        Integer iter = (Integer) properties.get("iteration");

        if (msg != null && iter != null) {
            for (int i = 1; i <= iter; i++) {
                System.out.println(i + ": " + msg);
            }
        }
    }
}

If we now execute our example, we will see nothing new. The reason is of course that there is no configuration object yet provided by the Configuration Admin.

Before we are able to do this we need to prepare our environment. That means we need to install the Configuration Admin service into the Eclipse IDE or the Target Platform in use, as it is not part of the default installation.

To install the Configuration Admin to the Eclipse IDE you need to perform the following steps:

  • Select Help -> Install New Software… from the main menu
  • Select the Neon – http://download.eclipse.org/releases/neon repository
    (assuming you are following the tutorial with Eclipse Neon, otherwise use the matching update site)
  • Disable Group items by category
  • Filter for Equinox
  • Select the Equinox Compendium SDK
  • Click Next
  • Click Next
  • Accept the license agreement and Finish
  • Restart the Eclipse IDE to safely apply the changes

Now we can create a Gogo Shell command that will be used to change a configuration object at runtime.

  • Open MANIFEST.MF of org.fipro.ds.configurable
    • Add org.osgi.service.cm to the Imported Packages
  • Create a new package org.fipro.ds.configurable.command
  • Create a new class ConfigureCommand in that package that looks similar to the following snippet.
    It is a Delayed Component that will be registered as a service for the ConfigureCommand class. It has a reference to the ConfigurationAdmin service, which is used to create/get the Configuration object for the PID AdminConfiguredComponent and updates the configuration with the given values.
package org.fipro.ds.configurable.command;

import java.io.IOException;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    property = {
        "osgi.command.scope=fipro",
        "osgi.command.function=configure"
    },
    service=ConfigureCommand.class
)
public class ConfigureCommand {

    ConfigurationAdmin cm;

    @Reference
    void setConfigurationAdmin(ConfigurationAdmin cm) {
        this.cm = cm;
    }

    public void configure(String msg, int count) throws IOException {
        Configuration config =
            cm.getConfiguration("AdminConfiguredComponent");
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("message", msg);
        props.put("iteration", count);
        config.update(props);
    }
}

Note:
The ConfigurationAdmin reference is a static reference. Therefore it doesn’t need an unbind method. If you follow the example with Eclipse Neon you will probably see an error mentioning the missing unbind method. Either implement the unbind method for now or disable the error via Preferences. This is fixed with Eclipse Oxygen M2.

Note:
The two Component Properties osgi.command.scope and osgi.command.function are specified inline. These are necessary so the Apache Gogo Shell recognizes the component as a service that can be triggered by entering the corresponding values as a command to the console. This shows the usage of Component Properties as additional meta-data that is examined by other components. Also note that we need to set the service type element, as only services can be referenced by other components.

To execute the example you need to include the org.eclipse.equinox.cm bundle in the Run configuration.

On executing the example you should notice that the AdminConfiguredComponent is not activated on startup, although it is an Immediate Component. Now execute the following command on the console: configure foo 2

As a result you should get an output like this:

AdminConfiguredComponent activated
1: foo
2: foo

If you execute the command a second time with different parameters (e.g. configure bar 3), the output should change to this:

AdminConfiguredComponent modified
1: bar
2: bar
3: bar

The component gets activated after we create a configuration object via the Configuration Admin. The reason for this is ConfigurationPolicy.REQUIRE, which means that there needs to be a configuration object for the component configuration in order for it to be satisfied. Subsequent executions change the configuration object, so the modified method is called then. Now you can play around with the implementation to get a better feeling. For example, remove the modified method and see how the component life cycle handling changes on configuration changes.

Note:
To start from a clean state again you need to check the option Clear the configuration area before launching in the Settings tab of the Run configuration.

Using the modified life cycle event makes it possible to react to configuration changes inside the component itself. To react to configuration changes inside components that reference the service, the updated event method can be used.

  • Create a simple component that references the AdminConfiguredComponent to test this:
package org.fipro.ds.configurable;

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Modified;
import org.osgi.service.component.annotations.Reference;

@Component
public class AdminReferencingComponent {

    AdminConfiguredComponent component;

    @Activate
    void activate() {
        System.out.println("AdminReferencingComponent activated");
    }

    @Modified
    void modified() {
        System.out.println("AdminReferencingComponent modified");
    }

    @Deactivate
    void deactivate() {
        System.out.println("AdminReferencingComponent deactivated");
    }

    @Reference
    void setAdminConfiguredComponent(
        AdminConfiguredComponent comp, Map<String, Object> properties) {
        System.out.println("AdminReferencingComponent: set service");
        printMessage(properties);
    }

    void updatedAdminConfiguredComponent(
        AdminConfiguredComponent comp, Map<String, Object> properties) {
        System.out.println("AdminReferencingComponent: update service");
        printMessage(properties);
    }

    void unsetAdminConfiguredComponent(
        AdminConfiguredComponent comp) {
        System.out.println("AdminReferencingComponent: unset service");
    }

    private void printMessage(Map<String, Object> properties) {
        String msg = (String) properties.get("message");
        Integer iter = (Integer) properties.get("iteration");
        System.out.println("[" + msg + "|" + iter + "]");
    }
}
  • Configure the AdminConfiguredComponent to be a service component by adding the attribute service=AdminConfiguredComponent.class to the @Component annotation. Otherwise it cannot be referenced.
@Component(
    configurationPid = "AdminConfiguredComponent",
    configurationPolicy = ConfigurationPolicy.REQUIRE,
    service=AdminConfiguredComponent.class
)
public class AdminConfiguredComponent {

Now execute the example and call the configure command two times. The result should look similar to this:

osgi> configure blubb 2
AdminConfiguredComponent activated
1: blubb
2: blubb
AdminReferencingComponent: set service
[blubb|2]
AdminReferencingComponent activated
osgi> configure dingens 3
AdminConfiguredComponent modified
1: dingens
2: dingens
3: dingens
AdminReferencingComponent: update service
[dingens|3]

Calling the configure command the first time triggers the activation of the AdminConfiguredComponent, which then can be bound to the AdminReferencingComponent, which is satisfied and therefore can be activated afterwards. The second execution of the configure command triggers the modified life cycle event of the AdminConfiguredComponent and the updated event method of the AdminReferencingComponent.

If you ask yourself why the AdminConfiguredComponent is still immediately activated although we made it a service now, the answer is that it is referenced by an Immediate Component. Therefore the target services need to be bound, which means the referenced services need to be activated too.

This example is also helpful in getting a better understanding of the component life cycle. For example, if you remove the modified life cycle method from the AdminConfiguredComponent and call the configure command subsequently, both components get deactivated and activated, which results in new instances. Modifying the @Reference attributes will also lead to different results. Change the cardinality, the policy and the policyOption to see the different behavior. Making the service reference OPTIONAL|DYNAMIC|GREEDY results in only re-activating the AdminConfiguredComponent while keeping the AdminReferencingComponent in the active state. Changing it to OPTIONAL|STATIC|GREEDY will lead to re-activation of both components, while setting it to OPTIONAL|STATIC|RELUCTANT means any changes will be ignored; actually nothing happens, as the AdminReferencingComponent never gets satisfied, and therefore the AdminConfiguredComponent never gets activated.

The correlation between cardinality, reference policy and reference policy option is explained in detail in the OSGi Compendium Specification (table 112.1 in chapter 112.3.7 Reference Policy Option in Specification Version 6).
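
For reference, here is a hedged sketch of how the OPTIONAL|DYNAMIC|GREEDY variant mentioned above would look on the reference in the AdminReferencingComponent (the enums come from org.osgi.service.component.annotations; a DYNAMIC reference also requires an unbind method):

@Reference(
    cardinality = ReferenceCardinality.OPTIONAL,
    policy = ReferencePolicy.DYNAMIC,
    policyOption = ReferencePolicyOption.GREEDY
)
void setAdminConfiguredComponent(AdminConfiguredComponent comp) {
    this.component = comp;
}

// needed because the reference is DYNAMIC
void unsetAdminConfiguredComponent(AdminConfiguredComponent comp) {
    this.component = null;
}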

Location Binding

Some words about location binding. The example above created a configuration object using the single-parameter version of ConfigurationAdmin#getConfiguration(String). The parameter specifies the PID for which a configuration object is requested or should be created. This means that the configuration is bound to the location of the calling bundle and then cannot be consumed by other bundles. So this method is used to ensure that only the components inside the same bundle are affected.

A so-called bound configuration object is sufficient for the example above, as all created components are located in the same bundle. But there are also other cases where for example a configuration service in another bundle should be used to configure the components in all bundles of the application. This can be done by creating an unbound configuration object using the two argument version of ConfigurationAdmin#getConfiguration(String, String). The first parameter is the PID and the second parameter specifies the bundle location string.
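
As a hedged sketch (reusing the cm reference from the ConfigureCommand above; Configuration#getBundleLocation() is part of the org.osgi.service.cm API), creating an unbound configuration object and inspecting its binding could look like this:

Configuration config =
    cm.getConfiguration("AdminConfiguredComponent", null);
// returns the location the configuration object is currently bound to,
// or null if it has not been bound to a bundle yet
String boundTo = config.getBundleLocation();
System.out.println("bound to: " + boundTo);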

Note:
The location parameter only becomes important if a configuration object will be created. If a configuration for the given PID already exists in the ConfigurationAdmin service, the location parameter will be ignored and the existing object will be returned.

You can use different values for the location argument:

  • Exact bundle location identifier
    In this case you explicitly specify the location identifier of the bundle to which the configuration object should be bound. The location identifier is set when a bundle is installed, and typically it is a file URL that points to the bundle jar. It is impossible to hard code that and have it work across multiple installations. But you could retrieve it via a snippet similar to this:

    Bundle adminBundle =
        FrameworkUtil.getBundle(AdminConfiguredComponent.class);
    String location = adminBundle.getLocation();

    But doing this introduces a dependency to the bundle that should be configured, which is typically not a good practice.

  • null
    The location value for the binding will be set when a service with the corresponding PID is registered the first time. Note that this could lead to issues if you have multiple services with the same PID in different bundles. In that case only the services in the first bundle that requests a configuration object would be able to get it because of the binding.
  • Multi-locations
    By using a multi-location binding, the configurations are dispatched to any target that has visibility to the configuration. A multi-location is specified with a leading question mark. It is possible to use only the question mark or to add a multi-location name behind the question mark, e.g.

    Configuration config =
        cm.getConfiguration("AdminConfiguredComponent", "?");
    Configuration config =
        cm.getConfiguration("AdminConfiguredComponent", "?org.fipro");

    Note:
    The multi-location name only has importance in case security is turned on and a ConfigurationPermission is specified. Otherwise it doesn’t have an effect. That means it cannot be used to restrict the targets based on the bundle symbolic name without security turned on.

Note:
The Equinox DS implementation has some bugs with regard to location binding. Basically the location binding is ignored. I had a discussion on Stack Overflow (thanks again to Neil Bartlett) and created the ticket Bug 493637 to address that issue. I also created Bug 501898 to report that multi-location binding doesn’t work.

To get familiar with the location binding basics create two additional bundles:

  • Create the bundle org.fipro.ds.configurator
    • Open the MANIFEST.MF file and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • org.osgi.service.cm
      • org.osgi.service.component.annotations (1.2.0)
      • Mark org.osgi.service.component.annotations as Optional
    • Create the package org.fipro.ds.configurator
    • Create the class ConfCommand
      • Copy the ConfigureCommand implementation
      • Change the property value for osgi.command.function to conf
      • Change the method name from configure to conf to match the osgi.command.function property
  • Create the bundle org.fipro.ds.other
    • Open the MANIFEST.MF file and switch to the Dependencies tab
    • Add the following dependency on the Imported Packages side:
      • org.osgi.service.component.annotations (1.2.0)
      • Mark org.osgi.service.component.annotations as Optional
    • Create the package org.fipro.ds.other
    • Create the class OtherConfiguredComponent
      • Copy the AdminConfiguredComponent implementation
      • Change the console outputs to show the new class name
      • Ensure that it is an Immediate Component (i.e. remove the service property or add the immediate property)
      • Ensure that configurationPid and configurationPolicy are the same as in AdminConfiguredComponent

Use three different scenarios:

  1. Use the single parameter getConfiguration(String)
    Calling the conf command on the console will result in nothing. As the configuration object is bound to the bundle of the command, the other bundles don’t see it and the contained components don’t get activated.
  2. Use the double parameter getConfiguration(String, String) where location == null
    Only the component(s) of one bundle will receive the configuration object, as it will be bound to the bundle that first registers a service for the corresponding PID.
  3. Use the double parameter getConfiguration(String, String) where location == “?”
    The components of both bundles will receive the configuration object, as it is dispatched to all bundles that have visibility to the configuration. And as we didn’t configure any permissions, all our bundles receive it.

Note:
Because of the location binding issues in Equinox DS (see above), the examples don’t work with it. For testing I replaced Equinox DS with Apache Felix SCR in the Run Configuration, which worked well. To make this work just download SCR (Declarative Services) from the Apache Felix download page and put it in the dropins folder of your Eclipse installation. After restarting the IDE you are able to select org.apache.felix.scr as a bundle in the Run Configuration. Remember to remove org.eclipse.equinox.ds to ensure that only one SCR implementation is running.

Bndtools vs. PDE

For the org.fipro.ds.configurable bundle you need to add the package org.fipro.ds.configurable.command to the Private Packages in the bnd.bnd file. Otherwise it will not be part of the resulting bundle.

While we needed to add the Import-Package statement for org.osgi.service.cm manually in PDE, that import is automatically calculated by Bndtools. So at that point there is no action necessary. Only the launch configuration needs to be updated manually to include the Configuration Admin bundle.

  • Open the launch.bndrun file
  • On the Run tab click on Resolve
  • Verify the values shown in the opened dialog in the Required Resources section
  • Click Finish

If you change a component class while the example is running, you will notice that the OSGi framework automatically restarts and the values set before via Configuration Admin are gone. This is because the Bndtools OSGi Framework launch configuration has two options enabled by default on the OSGi tab:

  • Framework: Update bundles during runtime.
  • Framework: Clean storage area before launch.

To test the behavior of components in case of persisted configuration values, you need to disable these settings.

DS 1.3

One of the new features added to the DS 1.3 specification is Component Property Types. They can be used as an alternative to the component property Map<String, Object> parameter for retrieving the Configuration Properties in a life cycle method. A Component Property Type is specified as a custom annotation type that contains property names, property types and default values. The following snippet shows the definition of such an annotation for the above examples:

package org.fipro.ds.configurable;

public @interface MessageConfig {
    String message() default "";
    int iteration() default 0;
}

Most of the examples found on the web show the definition of the annotation inside the component class. But of course it is also possible to create a public annotation in a separate file so it is reusable in multiple components.

The following snippet shows one of the examples above, modified to use a Component Property Type instead of the property Map<String, Object>.

package org.fipro.ds.configurable;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

@Component(
    property = {
        "message=Welcome to the inline configured service",
        "iteration:Integer=3"
    }
)
public class StaticConfiguredComponent {

    @Activate
    void activate(MessageConfig config) {
        String msg = config.message();
        int iter = config.iteration();

        for (int i = 1; i <= iter; i++) {
            System.out.println(i + ": " + msg);
        }
    }
}

Note:
If properties are needed that are not specified in the Component Property Type, you can have both as method arguments. Since DS 1.3 there are different method signatures supported, including the combination of Component Property Type and the component property Map<String, Object>.
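
For example, a sketch of such a mixed signature, reusing the MessageConfig type from above (the extra property name is made up):

@Activate
void activate(MessageConfig config, Map<String, Object> properties) {
    // type-safe access for the properties declared in the annotation type
    String msg = config.message();
    System.out.println(msg);

    // raw map access for anything not covered by MessageConfig
    Object custom = properties.get("some.other.property");
    System.out.println(custom);
}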

Although the Component Property Type is defined as an annotation type, it is not used as an annotation. The reasons for choosing annotation types are:

  • Limitations on annotation type definitions match component property types (no-argument methods and limited return types supported)
  • Support of default values

As Component Property Types are intended to be type safe, an automatic conversion happens. This is also true for Component Properties that are specified via Java Properties files.

To set configuration values via the ConfigurationAdmin service you still need to operate on a Dictionary, which means you need to know the property names. But of course when accessing the values via the Component Property Type you are type safe.
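As a hedged sketch, such an update via the ConfigurationAdmin service could look like the following helper component (the class name and its update method are hypothetical; the Dictionary keys have to match the method names of MessageConfig):

package org.fipro.ds.configurable;

import java.io.IOException;
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service=MessageConfigUpdater.class)
public class MessageConfigUpdater {

    @Reference
    ConfigurationAdmin admin;

    public void update(String message, int iteration) throws IOException {
        // the default PID of the component is its component name
        Configuration config = admin.getConfiguration(
            "org.fipro.ds.configurable.StaticConfiguredComponent", null);
        Dictionary<String, Object> props = new Hashtable<>();
        // keys correspond to MessageConfig.message() and MessageConfig.iteration()
        props.put("message", message);
        props.put("iteration", iteration);
        config.update(props);
    }
}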

Another new feature in DS 1.3 is that you can specify multiple configuration PIDs for a component. This way it is for example possible to specify configuration objects for multiple components that share a common PID, while at the same time having a specific configuration object for a single component. To specify multiple configuration PIDs and still keep the default (that is the component name), the placeholder “$” can be used. By adding the following property to the StaticConfiguredComponent and the FileConfiguredComponent created before, the execution of the configure command will update all three components at once.

@Component(
    configurationPid = {"$", "AdminConfiguredComponent"},
    ...
)

Note that we don’t update the configurationPid value of AdminConfiguredComponent. The reason for this is that we use the configuration policy REQUIRE, which means that the component only gets satisfied if there are configuration objects available for BOTH configuration PIDs. And our example does not create a configuration object for the default PID of the AdminConfiguredComponent.

The order of the configuration PIDs matters with regard to property propagation. A configuration object for a PID later in the list overrides values that were applied by a configuration object for an earlier PID. This is similar to the propagation of inline properties or property files: the processing is sequential, so later processed instructions override previous ones.

Service Properties

As initially explained there is a slight difference between Component Properties and Service Properties. Component Properties are all properties specified for a component, which can be accessed in life cycle methods via method parameter. Service Properties can be retrieved via Event Methods (bind/updated/unbind) or, since DS 1.3, via field strategy. They contain all public Component Properties, which means all except those whose property names start with a full stop. Additionally some service properties are added that are intended to give additional information about the service. These properties are prefixed with service., set by the framework and specified in the OSGi Core Specification (service.id, service.scope and service.bundleid).

To play around with Service Properties we set up another playground. For this create the following bundles to simulate a data provider service:

  • API bundle
    • Create the bundle org.fipro.ds.data.api
    • Add the following service interface
      package org.fipro.ds.data;
      
      public interface DataService {
      
          /**
           * @param id
           * The id of the requested data value.
           * @return The data value for the given id.
           */
          String getData(int id);
      }
    • Modify the MANIFEST.MF to export the package
  • Online data service provider bundle
    • Create the bundle org.fipro.ds.data.online
    • Add the necessary package import statements to the MANIFEST.MF
    • Create the following simple service implementation that specifies the property fipro.connectivity=online for further use
      package org.fipro.ds.data.online;
      
      import org.fipro.ds.data.DataService;
      import org.osgi.service.component.annotations.Component;
      
      @Component(property="fipro.connectivity=online")
      public class OnlineDataService implements DataService {
      
          @Override
          public String getData(int id) {
              return "ONLINE data for id " + id;
          }
      }
  • Offline data service provider bundle
    • Create the bundle org.fipro.ds.data.offline
    • Add the necessary package import statements to the MANIFEST.MF
    • Create the following simple service implementation that specifies the property fipro.connectivity=offline for further use
      package org.fipro.ds.data.offline;
      
      import org.fipro.ds.data.DataService;
      import org.osgi.service.component.annotations.Component;
      
      @Component(property="fipro.connectivity=offline")
      public class OfflineDataService implements DataService {
      
          @Override
          public String getData(int id) {
              return "OFFLINE data for id " + id;
          }
      }

Note:
Following Java best practices you would of course specify the property name and the possible values as constants in the API bundle to prevent typing errors.
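Such a constants class in the API bundle could look like the following sketch (the class and constant names are assumptions for illustration):

package org.fipro.ds.data;

public final class DataServiceConstants {

    // property name for the connectivity type of a DataService
    public static final String CONNECTIVITY = "fipro.connectivity";

    // supported property values
    public static final String CONNECTIVITY_ONLINE = "online";
    public static final String CONNECTIVITY_OFFLINE = "offline";

    private DataServiceConstants() {
        // no instances
    }
}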

To be able to interact with the data provider services, we create an additional console command that references the services and shows the retrieved data on the console on execution. Add it to the bundle org.fipro.ds.configurator or create a new bundle if you skipped the location binding example.

package org.fipro.ds.configurator;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.fipro.ds.data.DataService;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

@Component(
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=retrieve"},
    service=DataRetriever.class
)
public class DataRetriever {

    private List<DataService> dataServices = new ArrayList<>();

    @Reference(
        cardinality=ReferenceCardinality.MULTIPLE,
        policy=ReferencePolicy.DYNAMIC
    )
    void addDataService(
            DataService service, Map<String, Object> properties) {
        this.dataServices.add(service);
        System.out.println(
            "Added " + service.getClass().getName());
    }

    void removeDataService(DataService service) {
        this.dataServices.remove(service);
        System.out.println(
            "Removed " + service.getClass().getName());
    }

    public void retrieve(int id) {
        for (DataService service : this.dataServices) {
            System.out.println(service.getData(id));
        }
    }
}

Add the new bundles to an existing Run Configuration and execute it. By calling the retrieve command on the console you should get an output similar to this:

osgi> retrieve 3
OFFLINE data for id 3
ONLINE data for id 3

Nothing special so far. Now let’s modify the example to verify the Service Properties.

  • Modify DataRetriever#addDataService() to print the given properties to the console
    @Reference(
        cardinality=ReferenceCardinality.MULTIPLE,
        policy=ReferencePolicy.DYNAMIC
    )
    void addDataService(
            DataService service, Map<String, Object> properties) {
        this.dataServices.add(service);
    
        System.out.println("Added " + service.getClass().getName());
        properties.forEach((k, v) -> {
            System.out.println(k+"="+v);
        });
        System.out.println();
    }
  • Start the example and execute the retrieve command. The result should now look like this:
    osgi> retrieve 3
    org.fipro.ds.data.offline.OfflineDataService
    fipro.connectivity=offline
    component.id=3
    component.name=org.fipro.ds.data.offline.OfflineDataService
    service.id=51
    objectClass=[Ljava.lang.String;@1403f0fa
    service.scope=bundle
    service.bundleid=5
    
    org.fipro.ds.data.online.OnlineDataService
    fipro.connectivity=online
    component.id=4
    component.name=org.fipro.ds.data.online.OnlineDataService
    service.id=52
    objectClass=[Ljava.lang.String;@c63166
    service.scope=bundle
    service.bundleid=6
    
    OFFLINE data for id 3
    ONLINE data for id 3

    The Service Properties contain the fipro.connectivity property specified by us, as well as several properties that are set by the SCR.

    Note:
    The DataRetriever is not an Immediate Component and therefore only gets activated when the retrieve command is executed the first time. The target services are bound at activation time, so the bind method is called at that time and not before. (A sketch for forcing activation at startup follows after this list.)

  • Modify the OfflineDataService
    • Add an Activate life cycle method
    • Add a property with a property name that starts with a full stop
    package org.fipro.ds.data.offline;
    
    import java.util.Map;
    
    import org.fipro.ds.data.DataService;
    import org.osgi.service.component.annotations.Activate;
    import org.osgi.service.component.annotations.Component;
    
    @Component(
        property= {
            "fipro.connectivity=offline",
            ".private=private configuration"
        }
    )
    public class OfflineDataService implements DataService {
    
        @Activate
        void activate(Map<String, Object> properties) {
            System.out.println("OfflineDataService activated");
            properties.forEach((k, v) -> {
                System.out.println(k+"="+v);
            });
            System.out.println();
        }
    
        @Override
        public String getData(int id) {
            return "OFFLINE data for id " + id;
        }
    }

    Execute the retrieve command again and verify the console output. You will notice that the output from the Activate life cycle method contains the .private property but no properties with a service prefix. The output from the bind event method on the other hand does not contain the .private property, as the leading full stop marks it as a private property.

    osgi> retrieve 3
    OfflineDataService activated
    objectClass=[Ljava.lang.String;@c60d42
    component.name=org.fipro.ds.data.offline.OfflineDataService
    component.id=3
    .private=private configuration
    fipro.connectivity=offline
    
    org.fipro.ds.data.offline.OfflineDataService
    fipro.connectivity=offline
    component.id=3
    component.name=org.fipro.ds.data.offline.OfflineDataService
    service.id=51
    objectClass=[Ljava.lang.String;@2b5d77a6
    service.scope=bundle
    service.bundleid=5
    
    ...
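As mentioned in the note above, the DataRetriever only gets activated on first use. Purely as a sketch (and not required for the tutorial), you could mark the component as immediate to have the references bound at startup; only the annotation changes:

@Component(
    immediate=true,
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=retrieve"},
    service=DataRetriever.class
)
public class DataRetriever {
    // class body unchanged
}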

Service Ranking

In case multiple services of the same type are available, the service ranking is taken into account to determine which service gets bound. In case of multiple bindings the service ranking affects the order in which the services are bound. The ranking order is defined as follows:

  • Sorted on descending ranking order (highest first)
  • If the ranking numbers are equal, sorted on ascending service.id property (oldest first)

As service ids are never reused and handed out in order of their registration time, the ordering is always complete.

The property service.ranking can be used to specify the ranking order and in case of OSGi components it can be specified as a Component Property via @Component where the value needs to be of type Integer. The default ranking value is zero if the property is not specified explicitly.

Modify the two DataService implementations to specify the initial service.ranking property.

@Component(
    property = {
        "fipro.connectivity=online",
        "service.ranking:Integer=7"
    }
)
public class OnlineDataService implements DataService {
...
@Component(
    property = {
        "fipro.connectivity=offline",
        "service.ranking:Integer=5",
        ".private=private configuration
    }
)
public class OfflineDataService implements DataService {
...

If you start the application and execute the retrieve command now, you will notice that the OnlineDataService is called first. Change the service.ranking of the OnlineDataService to 3 and restart the application. Now executing the retrieve command will first call the OfflineDataService.

To make this more obvious and show that the service ranking can also be changed dynamically, create a new DataGetter command in the org.fipro.ds.configurator bundle:

package org.fipro.ds.configurator;

import java.util.Map;

import org.fipro.ds.data.DataService;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferencePolicy;
import org.osgi.service.component.annotations.ReferencePolicyOption;

@Component(
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=get"
    },
    service=DataGetter.class
)
public class DataGetter {

    private DataService dataService;

    @Reference(
        policy=ReferencePolicy.DYNAMIC,
        policyOption=ReferencePolicyOption.GREEDY
    )
    void setDataService(DataService service,
            Map<String, Object> properties) {
        this.dataService = service;
    }

    void unsetDataService(DataService service) {
        if (service == this.dataService) {
            this.dataService = null;
        }
    }

    public void get(int id) {
        System.out.println(this.dataService.getData(id));
    }
}

This command has a MANDATORY reference to a DataService. The policy option is set to GREEDY which is necessary to bind to a higher ranked service if available. The policy is set to DYNAMIC to avoid re-activation of the DataGetter component if a service changes. If you change the policy to STATIC, the binding to the higher ranked service is done by re-activating the component.
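For comparison, a sketch of the static variant of the same reference; here the SCR re-activates the component to bind a higher ranked service (STATIC is actually the default policy, it is only spelled out for clarity):

@Reference(
    policy=ReferencePolicy.STATIC,
    policyOption=ReferencePolicyOption.GREEDY
)
void setDataService(DataService service,
        Map<String, Object> properties) {
    this.dataService = service;
}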

Note:
For dynamic references the unbind event method is mandatory. This is necessary because the component is not re-activated when the bound services change, which means there will be no new Component Instance. Therefore the Component Instance state needs to be cleaned up in the unbind method. In our case we check whether the service to be unbound is the one currently bound. If so, we set the reference to null; otherwise another service has already been bound.

Finally create a toggle command, which dynamically toggles the service.ranking property of OnlineDataService.

package org.fipro.ds.configurator;

import java.io.IOException;
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=ranking"
    },
    service=ToggleRankingCommand.class
)
public class ToggleRankingCommand {

    ConfigurationAdmin admin;

    @Reference
    void setConfigurationAdmin(ConfigurationAdmin admin) {
        this.admin = admin;
    }

    public void ranking() throws IOException {
        Configuration configOnline =
            this.admin.getConfiguration(
                "org.fipro.ds.data.online.OnlineDataService",
                null);

        // operate on the existing properties or create new ones
        Dictionary<String, Object> propsOnline =
            (configOnline.getProperties() != null)
                ? configOnline.getProperties()
                : new Hashtable<>();

        // read the current ranking, defaulting to the initial value 7
        int onlineRanking = 7;
        Object rank = propsOnline.get("service.ranking");
        if (rank != null) {
            onlineRanking = (Integer) rank;
        }

        // toggle between 3 and 7
        onlineRanking = (onlineRanking == 7) ? 3 : 7;

        propsOnline.put("service.ranking", onlineRanking);
        configOnline.update(propsOnline);
    }
}

Starting the example application the first time and executing the get command will return the ONLINE data. After executing the ranking command, the get command will return the OFFLINE data (or vice versa dependent on the initial state).

Note:
Equinox DS will log an error or warning to the console every second time, probably an issue with processing the service reference update in Equinox DS. The example still works, and if you replace Equinox DS with Felix SCR the message does not come up. So it looks like yet another Equinox DS issue.

Reference Properties

Reference Properties are special Component Properties that are associated with specific component references. They are used to configure component references more specifically. With DS 1.2 the target property is the only supported Reference Property. The reference property name needs to follow the pattern <reference_name>.<reference_property> so it can be accessed dynamically. The target property can be specified via the @Reference annotation on the bind event method via the target annotation type element. The value needs to be an LDAP filter expression and is used to select target services for the reference. The following example specifies a target property for the DataService reference of the DataRetriever command to only select target services which specify the Service Property fipro.connectivity with value online.

@Reference(
    cardinality=ReferenceCardinality.MULTIPLE,
    policy=ReferencePolicy.DYNAMIC,
    target="(fipro.connectivity=online)"
)

If you change that in the example and execute the retrieve command in the console again, you will notice that only the OnlineDataService will be selected by the DataRetriever.

Specifying the target property directly on the reference is a static way of defining the filter. The registering of custom commands to the Apache Gogo Shell seems to work that way, as you can register any service to become a console command when the necessary properties are specified.

In a dynamic environment it needs to be possible to change the target property at runtime as well. This way it is for example possible to react on changes to the environment, like whether there is an active internet connection or not. To change the target property dynamically you can use the ConfigurationAdmin service. For this the reference property name needs to be known. Following the pattern
    <reference_name>.<reference_property>
this means for our example, where
    reference_name = DataService (derived from the bind event method addDataService)
    reference_property = target
the reference property name is
    DataService.target

To test this we implement a new command component in org.fipro.ds.configurator that allows us to toggle the connectivity state filter on the DataService reference target property.

package org.fipro.ds.configurator;

import java.io.IOException;
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    property= {
        "osgi.command.scope:String=fipro",
        "osgi.command.function:String=toggle"
    },
    service=ToggleConnectivityCommand.class
)
public class ToggleConnectivityCommand {

    ConfigurationAdmin admin;

    @Reference
    void setConfigurationAdmin(ConfigurationAdmin admin) {
        this.admin = admin;
    }

    public void toggle() throws IOException {
        Configuration config =
            this.admin.getConfiguration(
                "org.fipro.ds.configurator.DataRetriever");

        Dictionary<String, Object> props = null;
        Object target = null;
        if (config != null
                && config.getProperties() != null) {
            props = config.getProperties();
            target = props.get("DataService.target");
        } else {
            props = new Hashtable<String, Object>();
        }

        boolean isOnline = (target == null
            || target.toString().contains("online"));

        // toggle the state
        StringBuilder filter =
            new StringBuilder("(fipro.connectivity=");
        filter.append(isOnline ? "offline" : "online").append(")");

        props.put("DataService.target", filter.toString());
        config.update(props);
    }
}

Some things to notice here:

  1. We use the default PID org.fipro.ds.configurator.DataRetriever to get a configuration object.
  2. We check if there is already an existing configuration. If there is an existing configuration we operate on the existing Dictionary. Otherwise we create a new one.
  3. We try to get the current state from the Dictionary.
  4. We create an LDAP filter String based on the retrieved information (or default if the configuration is created) and set it as reference target property.
  5. We update the configuration with the new values.

From my observation the reference policy and the reference policy option don’t matter in this case. On changing the reference target property dynamically, the component gets re-activated to ensure a consistent state.

DS 1.3

With DS 1.3 the Minimum Cardinality Reference Property was introduced. Via this reference property it is possible to modify the minimum cardinality value at runtime. While the @Reference cardinality attribute only allows specifying the optionality (this means 0 or 1), you can specify any positive number for MULTIPLE or AT_LEAST_ONE references. It can be used for example to specify that at least 2 services of a special type need to be available in order to satisfy the Component Configuration.

The name of the minimum cardinality property is the name of the reference appended with .cardinality.minimum. In our example this would be
DataService.cardinality.minimum

Note:
In the component description the minimum cardinality can only be specified via the cardinality attribute of the reference element, i.e. only the optionality of 0 or 1. To specify a higher minimum cardinality, the minimum cardinality reference property needs to be applied via the Configuration Admin.

Create a command component in org.fipro.ds.configurator to modify the minimum cardinality property dynamically. It should look like the following example:

package org.fipro.ds.configurator;

import java.io.IOException;
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(
    property = {
        "osgi.command.scope=fipro",
        "osgi.command.function=cardinality"
    },
    service=ToggleMinimumCardinalityCommand.class
)
public class ToggleMinimumCardinalityCommand {

    @Reference
    ConfigurationAdmin admin;

    public void cardinality(int count) throws IOException {
        Configuration config =
            this.admin.getConfiguration(
                "org.fipro.ds.configurator.DataRetriever");

        Dictionary<String, Object> props = null;
        if (config != null
                && config.getProperties() != null) {
            props = config.getProperties();
        } else {
            props = new Hashtable<String, Object>();
        }

        props.put("DataService.cardinality.minimum", count);
        config.update(props);
    }
}

Launch the example and execute retrieve 3. You should get a valid response like before from a single service (online or offline, depending on the target property that is set). Now if you execute cardinality 2 and afterwards retrieve 3, you should get a CommandNotFoundException. Checking the components on the console via scr:list will show that org.fipro.ds.configurator.DataRetriever now has an unsatisfied reference. Calling cardinality 1 afterwards will resolve that again.

Now you can play around and create additional services to test whether this also works for values > 1; a sketch of such an additional service follows below.
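A possible additional service for such experiments could look like the following sketch (the bundle org.fipro.ds.data.cache and the connectivity value cached are assumptions for illustration only). Note that if you toggled the reference target property before, the new service also needs to match that filter to count towards the minimum cardinality:

package org.fipro.ds.data.cache;

import org.fipro.ds.data.DataService;
import org.osgi.service.component.annotations.Component;

@Component(property="fipro.connectivity=cached")
public class CachedDataService implements DataService {

    @Override
    public String getData(int id) {
        return "CACHED data for id " + id;
    }
}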

While I was writing this blog post, finding and reporting some issues in Equinox DS, the following ticket was created: Bug 501950. If everything works out, Equinox DS will be replaced with Felix SCR. This would solve several issues and finally bring DS 1.3 to Eclipse. So I cross my fingers that this ticket will be fixed for Oxygen (which on the other hand means some work for the DS Annotations, @pnehrer).

That’s it for this blog post. It again got much longer than I intended. But while writing it I again learned a lot that wasn’t clear to me before. I hope you could also take something out of it and use declarative services even more in your projects.

Of course you can find the sources of this tutorial in my GitHub account:


by Dirk Fauth at September 26, 2016 06:40 AM

New Eclipse feature: See return value during debugging

by leoufimtsev at September 23, 2016 07:16 PM

Selection_121.png

With a recent patch, Eclipse can now show you the return value of a method during a debug session.

For years, when I was debugging and I needed to see the return value of a method, I would change code like:

return function();


To:

String retVal = function();
return retVal;

And then step through the code and inspect the value of “retVal”.

Recently [September 2016] a patch was merged to support this feature. Now when you return from a method, the Variables view shows, in the calling method, the return value of the previously finished call:

Selection_120.png

As a side note, the reason this was not implemented sooner is that the Java virtual machine debugger did not provide this information until Java 1.6.

If your version of Eclipse doesn’t yet have that feature, try downloading a recent integration or nightly build.

Happy debugging.




by leoufimtsev at September 23, 2016 07:16 PM

JBoss Tools and Red Hat Developer Studio Maintenance Release for Eclipse Neon

by jeffmaury at September 21, 2016 04:04 PM

JBoss Tools 4.4.1 and Red Hat JBoss Developer Studio 10.1 for Eclipse Neon are here waiting for you. Check it out!

devstudio10

Installation

JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from our JBoss Products page and run it like this:

java -jar jboss-devstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio require a bit more:

This release requires at least Eclipse 4.6 (Neon) but we recommend using the latest Eclipse 4.6 Neon JEE Bundle since then you get most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat JBoss Developer Studio".

For JBoss Tools, you can also use our update site directly.

http://download.jboss.org/jbosstools/neon/stable/updates/

What is new?

Our main focus for this release was improvements for container based development and bug fixing.

Improved OpenShift 3 and Docker Tools

We continue to work on providing a better experience for container based development in JBoss Tools and Developer Studio. Let’s go through a few interesting updates here.

Support for Container Labels

Users can now specify labels when running a container. The labels are saved in the launch configuration and can also be edited before relaunching the container.

Container Labels

Automatically detect known Docker daemon connections

When the Docker Explorer view is opened, the list of existing connections (saved from a previous session) is reloaded. In addition to this behaviour, the view will also attempt to find new connections using default settings such as the 'unix:///var/run/docker.sock' Unix socket or the 'DOCKER_HOST', 'DOCKER_CERT_PATH' and 'DOCKER_TLS_VERIFY' environment variables. This means that by default, in a new workspace, if a Docker daemon is reachable using one of those methods, the user does not have to use the "New Connection" wizard to get a connection.

Extension point for Docker daemon connection settings

An extension point has been added to the Docker core plugin to allow for custom connection settings provisioning.

Support for Docker Compose

Support for Docker Compose has finally landed!

Users can select a docker-compose.yml file and start Docker Compose from the context menu, using the Run > Docker Compose launcher shortcut.

The Docker Compose process displays its logs (with support for text coloring based on ANSI escape codes) and provides a stop button to stop the underlying process.

Docker Compose

Also, as with the support for building and running containers, a launch configuration is created after the first call to Docker Compose on the selected docker-compose.yml file.

Docker Image Hierarchy View Improvements

The new Docker Image Hierarchy view not only shows the relationships between images (which is particularly interesting when an image is built using a Dockerfile), it also includes containers based on those images in the tree view, while providing all relevant commands (in the context menu) for containers and images.

Docker Image Hierarchy View

Server templates can now be displayed / edited

Server templates are now displayed in the property view under the Templates tab:

property view template

You can access/edit the content of the template with the Edit command.

Events can now be displayed

Events generated as part of the application lifecycle are now displayed in the property view under the Events tab (available at the project level):

property view event

You can refresh the content of the event with the Refresh command or open the event in the OpenShift web console with the Show In → Web Console command.

Volume claims can now be displayed

Volume claims are now displayed in the property view under the Storage tab (available at the project level):

property view storage1

You can create a new volume claim using a resource file like the following:

{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "claim1"
    },
    "spec": {
        "accessModes": [ "ReadWriteOnce" ],
        "resources": {
            "requests": {
                "storage": "1Gi"
            }
        }
    }
}

If you deploy such a resource file with the New → Resource command at the project level, the Storage tab will be updated:

property view storage2

You can access/edit the content of the volume claim with the Edit command or open the volume claim in the OpenShift web console with the Show In → Web Console command.

Server Tools

QuickFixes now available in runtime detection

Runtime detection has been a feature of JBoss Tools for a long while; however, it would sometimes create runtime and server adapters with configuration errors without alerting the user. Now the user gets an opportunity to execute quickfixes before completing the creation of their runtimes and servers.

JBIDE 15189 rt detect 1

To see this in action, we can first open up the runtime-detection preference page. We can see that our runtime-detection will automatically search three paths for valid runtimes of any type.

JBIDE 15189 rt detect 2

Once we click search, the runtime-detection’s search dialog appears, with results it has found. In this case, it has located an EAP 6.4 and an EAP 7.0 installation. However, we can see that both have errors. If we click on the error column for the discovered EAP 7.0, the error is expanded, and we see that we’re missing a valid / compatible JRE. To fix the issue, we should click on this item.

JBIDE 15189 rt detect 3

When we click on the problem for EAP 7, the new JRE dialog appears, allowing us to add a compatible JRE. The dialog helpfully informs us of what the restrictions are for this specific runtime. In this case, we’re asked to define a JRE with a minimum version of Java-8.

JBIDE 15189 rt detect 4

If we continue along with the process by locating and adding a Java 8 JRE, as shown above, and finish the dialog, we’ll see that all the errors will disappear for both runtimes. In this example, the EAP 6.4 required a JRE of Java 7 or higher. The addition of the Java 8 JRE fixed this issue as well.

JBIDE 15189 rt detect 5

Hopefully, this will help users preemptively discover and fix errors before being hit with surprising errors when trying to use the created server adapters.

Support for WildFly 10.1

The WildFly 10.0 Server adapter has been renamed to WildFly 10.x. It has been tested and verified to work for WildFly 10.1 installations.

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

New Hibernate 5.2 Runtime Provider

With final releases available in the Hibernate 5.2 stream, the time was right to make available a corresponding Hibernate 5.2 runtime provider. This runtime provider incorporates Hibernate Core version 5.2.2.Final and Hibernate Tools version 5.2.0.Beta1.

hibernate 5 2
Figure 1. Hibernate 5.2 is available
Other Runtime Provider Updates

The Hibernate 4.3 runtime provider now incorporates Hibernate Core version 4.3.11.Final and Hibernate Tools version 4.3.5.Final.

The Hibernate 5.0 runtime provider now incorporates Hibernate Core version 5.0.10.Final and Hibernate Tools version 5.0.2.Final.

The Hibernate 5.1 runtime provider now incorporates Hibernate Core version 5.1.1.Final and Hibernate Tools version 5.1.0.CR1.

Forge Tools

Added Install addon from the catalog command

From Forge 3.3.0.Final onwards it is now possible to query and install addons listed in the Forge addons page.

addon install from catalog

Forge Runtime updated to 3.3.1.Final

The included Forge runtime is now 3.3.1.Final. Read the official announcement here.

startup

Freemarker

Freemarker 2.3.25

The FreeMarker library included in the FreeMarker IDE was updated to the latest available version, 2.3.25.

flth / fltx file extensions added

The new flth and fltx extensions have been added and associated with Freemarker IDE. flth stands for HTML content whereas fltx stands for XML content.

Overhaul of the plugin template parser

The parser that FreeMarker IDE uses to extract IDE-centric information (needed for syntax highlighting, related tag highlighting, auto-completion, outline view, etc.) was overhauled. Several bugs were fixed, and support for newer template language features was added. Also, the syntax highlighting is now more detailed inside expressions.

Fixed the issue where the (by default) yellow highlighting of the related FTL tags shifts away from under the tag as you type.

Showing whitespace, block selection mode

The standard "Show whitespace characters" and "Toggle block selection mode" icons are now available when editing a template.

Improved automatic finishing of FreeMarker constructs

When you type <#, <@, ${, #{ or <#--, the FreeMarker editor now automatically closes them.

When a FreeMarker exception is printed to the console, the error position in it is a link that navigates to the error. This worked long ago but had been broken for quite a while.

Fixed auto-indentation

When hitting Enter, the new line sometimes didn’t inherit the indentation of the previous line. This has been fixed.

Updated the "database" used for auto completion

Auto completion now knows all directives and "built-ins" up to FreeMarker 2.3.25.

What is next?

Having JBoss Tools 4.4.1 and Developer Studio 10.1 out we are already working on the next maintenance release for Eclipse Neon.1.

Enjoy!

Jeff Maury


by jeffmaury at September 21, 2016 04:04 PM

Native browser for GTK on Linux

by Christian Pontesegger (noreply@blogger.com) at September 21, 2016 09:14 AM

Support for the internal browser often does not work out of the box on Linux. You can check the status by opening your Preferences/General/Web Browser settings. If the radio button Use internal web browser is enabled (not necessarily activated), internal browser support is working; otherwise it is not.

Most annoyingly, without internal browser support help hovers in your text editors use a fallback mode that renders neither links nor images.

To solve this issue you may first check the SWT FAQ. For me, working on Gentoo Linux, the following command fixed the problem:
emerge net-libs/webkit-gtk:2
It is important not to install only the latest version of webkit-gtk, which will not be recognized by Eclipse. After installation restart Eclipse and your browser should work. Verified on Eclipse Neon.

by Christian Pontesegger (noreply@blogger.com) at September 21, 2016 09:14 AM

Creating My First Web App with Angular 2 in Eclipse

by dimitry at September 20, 2016 02:00 PM

Angular 2 is a framework for building desktop and mobile web applications. After hearing rave reviews about Angular 2, I decided to check it out and take my first steps into modern web development. In this article, I’ll show you how to create a simple master-details application using Angular 2, TypeScript, Angular CLI and Eclipse […]

The post Creating My First Web App with Angular 2 in Eclipse appeared first on Genuitec.


by dimitry at September 20, 2016 02:00 PM

Eclipse 4.7 M2 is out with a focus on usability

by Lars Vogel at September 19, 2016 08:23 AM

Eclipse 4.7 M2 is out with a focus on usability.

From simplified filter functionality in the Problems, Bookmark and Task views, improved color usage for popups, simplified editor assignments for file extensions, enhancements to quick access, a configurable compare direction in the compare editor, etc., you will find lots of nice goodies which will increase your love for the Eclipse IDE.

Also, the background jobs API has been improved so that jobs still run fast even if you do a lot of status updates in your job implementation.

Check out the Eclipse 4.7 M2 New and Noteworthy for the details.


by Lars Vogel at September 19, 2016 08:23 AM

Eclipse basics for Java development

by leoufimtsev at September 19, 2016 03:10 AM

Just a basic intro to Eclipse, aimed at people who are new to Java. Covers creating a new project, debugging, common shortcuts/navigation, and git.

Workspace

A workspace contains your settings, e.g. your keyboard shortcut preferences and the list of your open projects. You can have multiple workspaces.

Selection_097.png

You can switch between workspaces via File -> Switch Workspace.

Projects

A project is essentially an application, or a library used by an application. Projects can be opened or closed. Content of closed projects doesn’t appear in searches.

Hello world Project

To run some basic java code:

  • File -> New -> Java project
  • Give the project some name ->  finish.
    Selection_085.png
  • Right click on src -> New -> Class
    Selection_086.png
  • Give your Class some name, check “Public static void main(String [] args)”
    Selection_087.png
  • Add a “Hello World” print line (the complete class is shown after this list):
    System.out.println("Hello world");
    Selection_088.png
  • Right click on “SomeName.java” -> run as -> Java Application
    Selection_089.png
  • Output printed in Console:
    Selection_090.png
  • Next time you can run the file via run button:Selection_092.png
  • Or via “Ctrl+F11”
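For reference, the finished class could look like this minimal sketch (assuming the class was named SomeName in the wizard):

public class SomeName {

    public static void main(String[] args) {
        // prints the greeting to the Console view
        System.out.println("Hello world");
    }
}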

Debugging

Set a breakpoint by double clicking on the line numbers in the margin, then click on the bug icon or right click and select “Debug As” -> “Java Application”.
Selection_098.png

For more info on debugging, head over to Vogella:
http://www.vogella.com/tutorials/EclipseDebugging/article.html

Switching perspectives

Eclipse has the notion of Perspectives. One is for Java development, one for debugging (others could be C++ development, task planning, etc.). It’s basically a customisation of features and layout.

When you finish debugging, you can switch back to the java perspective:
Selection_099.png

Common keyboard shortcuts

  • Ctrl+/    – comment code “//”
    Selection_093.png
  • Ctrl+shift+/    – comment code ‘/* … */’
    Selection_094.png
  • Ctrl+F11   – Run last run configuration
  • Ctrl+Shift+L  Keyboard reminder cue sheet. (Type to search)
    Selection_095.png
  • Ctrl+Shift+L, then Ctrl+Shift+L again, opens the keyboard preferences.
  • Ctrl+O – Java quick method Outline:
    Selection_096.png
    Note: Regex and camel-case search work. E.g. “*Key” will find “getBackgroundColorKey()”, and so will “gBFCK”.
  • Ctrl+Shift+r – search for resource (navigate between your classes).
  • Ctrl+Shift+f – automatically format the selected code. (Or all code if no block selected).

For more on shortcuts, head over to Vogella:
http://www.vogella.com/tutorials/EclipseShortcuts/article.html

Source code navigation

Right click on a method/variable to bring up a context menu, from there select:

Open Declaration (F3)

This is one of the most used functions. It’s a universal “jump to where the method/variable/class/constant is defined”.


Open Call hierarchy

See where a variable or method is called.

Tip: For variables, you can narrow down the Field Access so that it only shows where a field is read/written.

Selection_105.png

Quick outline (Ctrl+O)

The quick outline is a quick way to find a method in your class. It has regex and camel-case search support. E.g. “*size” will find any method with ‘size’ in it, and “cSI” will find ‘computeSizeInPixels’.
Tip: Press Ctrl+O again and you will also be shown methods that are inherited from parent classes.

Selection_102.png

Navigate to super/implementation (Ctrl+click)

Sometimes you may want to see which sub-classes override a method. You can hold Ctrl, hover over the method, click, and then select “Open Implementation”.

Selection_106.png

You will be presented with a list of sub-implementations.

Selection_108.png

You can similarly navigate to parent classes.

Code completion

Code completion predicts variable names, method names and more.

Start typing something and press “Ctrl+Space”:

Selection_101.png

It can also complete by camel case, e.g. if you type “mOF” and press Ctrl+Space, it will expand to “myOtherFunction()”.

Templates

Typing “System.out.println();” is tedious. Instead you can type “syso” and then press “Ctrl+Space”; Eclipse fills in the template code.
Selection_084.png
You can find more on templates in Eclipse Preferences.

Git integration

99% of my git workflow happens inside Eclipse.

You will want to open three useful views:

Window -> Show View -> Other…

  • Team -> History
  • git -> Git Repositories
  • git -> Git Staging

You can manage git repositories in the “Git Repositories” view:

Selection_109.png

You can add changed files in the “Git Staging View” via drag and drop, and fill in the commit message. You can view your changes by double clicking on the files:

Selection_110.png

In the “History” view, you can create new branches, cherry-pick commits, check out older versions, compare current files to previous versions, etc.

Selection_112.png

Selection_113.png

More on Eclipse

If you want to know more about the Eclipse interface, feel free to head over to Vogella’s in-depth Eclipse tutorial:
http://www.vogella.com/tutorials/Eclipse/article.html

Also feel free to leave comments with questions.



by leoufimtsev at September 19, 2016 03:10 AM

Pushing the Eclipse IDE Forward

by Doug Schaefer at September 17, 2016 06:40 PM

It’s been a crazy week if you follow the ide-dev mailing list at Eclipse. We’ve had many posts over the years discussing our competitive relationship with IntelliJ and the depression that sets in when we try to figure out how to make Eclipse better so people don’t hate on it so much, and then how nothing changes.

This time, though, it was sparked by what seemed to be an innocent post by Mickael Istria about yet another claim that IntelliJ has better content assist (which, from what I’ve seen, it actually does). It grew into a huge conversation with many Eclipse contributors chiming in with their thoughts about where we are with the Eclipse IDE and what needs to be done to make things better. A great summary of the last few days has been captured in a German-language Jaxenter article.

The difference this time is that it’s actually sparked action. Mickael, Pascal Rapicault, and others have switched some of their focus to the low-hanging user experience issues and are providing fixes for them. The community has been activated and I love seeing it.

Someone asked why the Architecture Council at Eclipse doesn’t step in and help guide some of this effort and after discussing it at our monthly call, we’ve decided to do just that. Dani Megert and I will revive the UI Guidelines effort and update the current set and extend it to more general user experience guidance. We’ll use the UI Best Practices group mailing list to hold public discussions to help with that. Everyone is welcome to participate. And I’m sure the ide-dev list will continue to be busy as contributors discuss implementation details.

Eclipse became the number one Java IDE with little marketing. Back in the 2000s developers were hungry for a good Java IDE, and since Eclipse was free, easy to set up (yes, unzipping the IDE wasn’t that bad an experience), worked well, and had great static analysis and refactoring, they fell in love with it.

Other IDEs have caught up and in certain areas passed Eclipse and, yes, IntelliJ has become more popular. It’s not because of marketing. Developers decide what they like to use by downloading it and trying it out. As long as we keep our web presence in shape that developers can find the IDE, especially the Java one, and then keep working to make it functionally the best IDE we can, we’ll be able to continue to serve the needs of developers for a long time.

Our best marketing comes from our users. That’s the same with all technology these days. I’d rather hear from someone who’s tried Docker Swarm than believe what the Docker people are telling me (for example). That’s how we got Eclipse to number one, and where we need to focus to keep the ball rolling. And as a contributor community, we’re working hard to get them something good to talk about.


by Doug Schaefer at September 17, 2016 06:40 PM

Me as text?

by tevirselrahc at September 16, 2016 07:05 AM

Over the last few days, a large group of my minions and admirers met in Sweden at EMD2017 to talk about me…in all my incarnations.

One of the most polarizing discussions was about whether I should stay graphical or whether I also needed to be textual. For those who do not know, I am a UML-based modeling tool and therefore graphical by nature.

However, some of my minions think that I would be more usable if I also allowed them to create/edit models using text (just like this posting, but in a model instead of a blog post).

During the meeting, there was a lot of discussion about whether it was a good idea or not, whether it was useful or not, whether I was even able to support this!

The main point made by the pro-text minions was that many things are simply easier to do by writing text rather than drawing images, but that both could be supported. Other minions were saying that it was simply impossible.

Now, this is all a bit strange to me. After all, when I look at my picture, I am an image, but then I can express myself in text (again, like in this posting).

Regardless, any new capability given me makes me happy!

And I wonder how I would look as text…

papyrus-logo-asciiart

I think I like myself better as an image, but it’s good to have a choice. In the end, I trust my minions.


by tevirselrahc at September 16, 2016 07:05 AM

Install "Plug-in Spy" in your Eclipse Neon IDE

September 15, 2016 10:00 PM

There is a lot of documentation about the Eclipse "Plug-in Spy" feature (Plug-in Spy for UI parts or Eclipse 3.5 - Plug-in Spy and menus). In my opinion one piece of information is missing: what you need to install to use the Spy feature in your Eclipse Neon IDE. Here is my small how-to.

Select "Install new Software…​" in the "Help" Menu. In the dialog, switch to the "The Eclipse Project Updates" update site (or enter its location http://download.eclipse.org/eclipse/updates/4.6). Filter with "PDE" and select the "Eclipse PDE Plug-in Developer Resources". Validate your choices with "Next" and "Finish", Eclipse will install the feature and ask for a Restart.

2016 09 16 install dialog
Figure 1. Install new Software in Eclipse

If you prefer the Oomph way, you can paste the snippet contained in Listing 1 in your installation.setup file (Open it with the Menu: Navigate ▸ Open Setup ▸ Installation).

<?xml version="1.0" encoding="UTF-8"?>
<setup.p2:P2Task
    xmi:version="2.0"
    xmlns:xmi="http://www.omg.org/XMI"
    xmlns:setup.p2="http://www.eclipse.org/oomph/setup/p2/1.0">
  <requirement
      name="org.eclipse.pde.source.feature.group"/>
  <repository
      url="http://download.eclipse.org/eclipse/updates/4.6"/>
</setup.p2:P2Task>

Your Oomph editor should look like Figure 2. Save the file and select "Perform Setup Task…" (in the Help menu). Oomph will update your installation and ask for a restart.

2016 09 16 installation oomph editor
Figure 2. Oomph setup Editor: installation.setup File

In both cases, after the restart you can press Alt+Shift+F1 and use the Plug-in Spy as in Figure 3.

2016 09 16 plugin spy
Figure 3. Plug-in Spy in Eclipse Neon

September 15, 2016 10:00 PM

Basic Oomph Tutorial published

by Maximilian Koegel and Jonas Helming at September 15, 2016 08:37 PM

Oomph is a great tool to automate Eclipse installations, workspace setups, and more. Projects can configure profiles (called “setup models”) and thereby allow contributors to get an IDE to work on the project with a single click. This includes all necessary plugins to be installed, preferences, a git clone, the project import, and even more.

If you want to configure your own custom setup model for your project, we have recently compiled a basic tutorial on how to get started: Basic Oomph tutorial.




by Maximilian Koegel and Jonas Helming at September 15, 2016 08:37 PM

Oomph 04: P2 install tasks

by Christian Pontesegger (noreply@blogger.com) at September 15, 2016 02:15 PM

From this tutorial onwards we are going to extend our project setup step by step. Today we are looking at how to install additional plug-ins and features in our setup.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.  

For a list of all Oomph related tutorials see my Oomph Tutorials Overview.

Step 1: Add the Repository

Open your Oomph setup file and create a new task of type P2 Director. To install components we need to provide a p2 site location and the components from that location to install. So create a new Repository child for our task. When it is selected, the Properties view will ask for a URL. Point it to the p2 location you want to install from. Leave Type set to Combined; if you do not know about repository types you definitely do not need to change this setting.
When you are working with a p2 site provided by Eclipse, Oomph can help to identify update site locations.

Step 2: Add features

Once you have added the repository you can start to add features to be installed from that site. The manual way requires you to create a new child node of type Requirement. Go to its Properties and set Name to the feature id you want to install. You may add version ranges or make installs optional (which means no error is thrown when the feature cannot be installed).
The tricky part is to find out the name of the feature you want to install. I like to use the target platform editor from Mickael Barbero with its nice code completion features. An even simpler way is to use the Repository Explorer from Oomph:

Right click on your Repository node and select Explore. The Repository Explorer view comes up and displays features the same way as you might know it from the eclipse p2 installer. Now you can drag and drop entries to your P2 Director task.



by Christian Pontesegger (noreply@blogger.com) at September 15, 2016 02:15 PM