More OSGi events in Spain this month.....

by Mike Francis (noreply@blogger.com) at September 12, 2014 10:09 PM

So it looks like September is turning into OSGi month in Spain.... *** Update - Live Stream for tonight available from 19.00hrs CEST on YouTube *** Hot on the heels of the Peter Kriens presentation about enRoute to the Madrid JUG on Sept 10, there is an event from Barcelona JUG this Friday and Saturday (Sept 12 and 13) by Jean-Baptiste Onofré. This is then followed by another event from


Looking at High Availability with Eclipse Orion

by paulweb515 (noreply@blogger.com) at September 12, 2014 08:40 PM

I've had the opportunity to do some work on Eclipse Orion lately, and I'm really enjoying it.  There are opportunities to learn about technology on both the server (Java and/or Node.js) and the client side (HTML, CSS, and JS).  It's a lot to learn :-)

One of the things I've been looking at is High Availability in Orion 7.0.  If we use horizontal scaling, we can upgrade one server while leaving another server to continue to provide service (we're working with some kind of proxy in front of the servers, like NGINX, to direct traffic).

Anthony has been working on consolidating Orion metastore/metadata, on distributing upgrades on user access or background processes, and on a policy to cover how we add/upgrade new metadata properties so that older servers could still function against the new metadata.

I've been focused on allowing multiple servers to access the same user content and search indices at the same time.

By default, there's only one small change in Orion 7.0.  The search indices which used to reside in osgi.instance.area/.plugins/org.eclipse.orion.server.core.search will now reside in osgi.instance.area/.plugins/org.eclipse.orion.server.core.search/<version> (where the current version is v16).

You can use properties (see web-ide.conf) to activate the new configuration, where the user content and the search indices are in a common location but each server still has its own local osgi.instance.area.

For example, if I had 2 servers that had their local OSGi instance areas at /opt/orion/workspace (not shared between servers) and a common NFS mount at /home/data, I could set up the orion configuration on each server to serve up the same content using:

  • orion.file.content.location=/home/data/orion/content
  • orion.search.index.location=/home/data/orion/search
  • orion.file.content.locking=true
The search indices and user content should not point to the same directory.  The file content locking uses file system (per-process) locks to protect read-writes of user metadata and ensures that only one process runs the search indexing jobs at a time.
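The per-process locking idea can be sketched with plain java.nio file locks. This is purely illustrative and not Orion's actual implementation; the lock file name and class are made up:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MetadataLockSketch {
    // Try to acquire an exclusive, OS-level lock on a lock file before touching
    // shared metadata. Returns null if another process currently holds the lock.
    static FileLock tryLock(FileChannel channel) throws IOException {
        return channel.tryLock(); // non-blocking; null means another process owns it
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical lock file guarding shared metadata on the common mount
        Path lockFile = Paths.get(System.getProperty("java.io.tmpdir"), "orion-metadata.lock");
        try (FileChannel channel = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            FileLock lock = tryLock(channel);
            if (lock != null) {
                System.out.println("lock acquired, safe to write metadata");
                lock.release();
            } else {
                System.out.println("another process holds the lock");
            }
        }
    }
}
```

Because the lock is held per process, two Orion servers sharing an NFS mount would not step on each other's metadata writes (assuming the file system supports such locks).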

We've done some preliminary testing, but we always welcome more testing and feedback.  If you get a chance to use this, please let me know :-)



EMFStore 1.4.0 Released

by Maximilian Koegel at September 12, 2014 01:20 PM

We just released EMFStore 1.4.0. You can find the new release on our download page. A list of implemented features and bug fixes is available here.

The most notable new feature is an improved Admin UI to configure Users, Groups and their access privileges on projects on the EMFStore server. We believe the Admin UI is now more usable and more responsive.

Furthermore, we have implemented a security feature which allows you to configure the SSL cipher suites accepted by the server for SSL connections. This can be used to exclude ciphers which are no longer considered secure.
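EMFStore's actual configuration key isn't shown here, but the underlying JSSE mechanism can be sketched as follows. The filter patterns are illustrative assumptions, not EMFStore code:

```java
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class CipherFilterSketch {
    // Drop cipher suites that are generally considered weak.
    // The substring patterns here are illustrative, not an authoritative list.
    static String[] filterWeak(String[] supported) {
        return Arrays.stream(supported)
                .filter(c -> !c.contains("_RC4_") && !c.contains("_NULL_")
                        && !c.contains("_EXPORT_") && !c.contains("_anon_"))
                .toArray(String[]::new);
    }

    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        String[] supported = ctx.getSupportedSSLParameters().getCipherSuites();
        // A server would pass these parameters to its SSLServerSocket / SSLEngine
        SSLParameters params = new SSLParameters(filterWeak(supported));
        System.out.println("accepting " + params.getCipherSuites().length + " cipher suites");
    }
}
```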

We also added more API for ModelElementIDs, which are used in EMFStore to uniquely identify EObjects stored within EMFStore. This is useful if you need to assign custom IDs to your EObjects.

To improve out-of-the-box Ecore support we added the Ecore feature to the SDK by default. This allows you to store the contents of Ecore files (e.g. EPackages and EClasses) in EMFStore without having to install this feature in addition to the SDK.

Finally we fixed a number of minor bugs.

Enjoy!







Little Known Things about the Eclipse Infocenter: Basic and Bot Mode (1/5)

by howlger at September 12, 2014 12:28 PM

The Eclipse help system can also be launched as a stand-alone web application, the so-called Infocenter. The Eclipse Infocenter is widely used by several companies and organizations, with and without adaptations. Although many of you will be familiar with help.eclipse.org or one of these websites, hardly anyone will know all of the five well-hidden features of the Infocenter that might be useful to you. They will be presented individually here. And here comes number one: the basic and the bot mode.

When you visit the Infocenter with a very, very, very old web browser such as Internet Explorer 5.5 or Netscape Navigator 4.8, the website looks different than it does in any web browser released in the last twelve years. This so-called basic mode (in contrast to the advanced mode for current browsers) exists mainly because these old browsers do not support JavaScript callbacks. For instance, if you click on a book in the table of contents, the table of contents HTML frame is replaced by a new HTML page in which the whole subtree of the clicked book is expanded. This is realized simply by HTML links. In the normal advanced mode the chapters of the next level are asynchronously loaded via JavaScript without leaving the page.

Eclipse Luna Help in Basic Mode

The basic mode can also be useful with a current browser. For example, if you need the whole table of contents of a book then it can be easily extracted (and copied) from /basic/tocView.jsp. This saves you manually expanding every individual chapter.

In the Infocenter, the browser detection to switch between basic and advanced mode is done via the User-Agent HTTP header field. The User-Agent field is also used to present the website without HTML frames to web crawlers in a third mode, the bot mode. For the Google Search Bot the Luna Help with its 17,875 topics looks like this:

Eclipse Luna Help in Bot Mode
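If you want to see the bot mode from a script rather than a browser, you can send a crawler-like User-Agent yourself. A minimal sketch — the exact User-Agent patterns the Infocenter matches are not documented here, and the Googlebot string below is just an example:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class BotModeSketch {
    // Prepare a request that mimics the Google crawler; the Infocenter decides
    // between advanced, basic and bot mode by inspecting this header.
    static HttpURLConnection asGooglebot(URL url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("User-Agent",
                "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)");
        return conn; // caller would call getInputStream() to fetch the bot-mode page
    }

    public static void main(String[] args) throws Exception {
        URL url = new URL("https://help.eclipse.org/index.jsp");
        HttpURLConnection conn = asGooglebot(url);
        System.out.println(conn.getRequestProperty("User-Agent"));
    }
}
```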

The next little known things about the Eclipse Infocenter on my list are Deep Linking, Search Links, Language Switching and Debugging Information. Stay tuned!





Open IoT hangout #10

by Eclipse Foundation at September 11, 2014 03:31 PM

Join Benjamin, Ian and their guest +Sandro Kock to learn more about MQTTlens and discuss the latest news in the IoT space


Live Streaming of Peter Kriens presenting enRoute Today @ Madrid JUG

by Mike Francis (noreply@blogger.com) at September 11, 2014 08:14 AM

Peter Kriens is presenting at Madrid JUG this evening (Sept 10) about OSGi enRoute. The presentation will be in English. Our friends at the Madrid JUG have kindly arranged to stream the session. You can also follow the event and ask questions on twitter using the hashtag #mjugOSGi. It's taking place at 19.00hrs CEST/ 18.00hrs BST/ 13.00 EDT / 10.00hrs PDT /  02.00hrs on Sept 11 (JST) (apologies


Mozilla pushes - August 2014

by Kim Moir (noreply@blogger.com) at September 10, 2014 02:09 PM

Here's August 2014's monthly analysis of the pushes to our Mozilla development trees.  You can load the data as an HTML page or as a json file.



Trends
It was another record breaking month.  No surprise here!

Highlights
  • 13090 pushes
    • new record
  • 422 pushes/day (average)
    • new record
  • Highest number of pushes/day: 690 pushes on August 20.  This was also the first day on which we ran over 100,000 test jobs.
    • new record
  • 23.12 pushes/hour (average)

General Remarks
Both Try and Gaia-Try each account for about 36% of the pushes.  The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 21% of all the pushes.


Records
August 2014 was the month with the most pushes (13,090 pushes)
August 2014 had the highest pushes/day average with 422 pushes/day
July 2014 had the highest pushes/hour average with 23.51 pushes/hour
August 20, 2014 had the highest number of pushes in one day with 690 pushes








Call for Submissions: Modeling Symposium ECE 2014

by Jonas Helming at September 10, 2014 09:54 AM

I am happy to announce that Philip, Ed and I are organizing the Modeling Symposium for EclipseCon Europe 2014. It is scheduled for the first day of the conference, i.e., Tuesday, October 28th. The symposium aims to provide a forum for community members to present a brief overview of their work. We offer 10 minute lightning slots (including questions) to facilitate a broad range of speakers. The primary goal is to introduce interesting new technology and features. We are mainly targeting projects that are not otherwise represented in the conference program.

If you are interested in giving a talk, please send a short description (a few sentences) to jhelming@eclipsesource.com. Depending on the number of submissions, we might have to select among them. The submission deadline is September 20th.

Please adhere to the following guidelines:

Please provide sufficient context. Talks should start with a concise overview of what the presenter plans to demonstrate, or what a certain framework offers. Even more important, explain how and why the topic is relevant.

Don’t bore us! Get to the point quickly. You don’t have to use all your allocated time. An interesting 3 minute talk will have a bigger impact than a boring 10 minute talk. We encourage you to plan for a 5 minute talk, leaving room for 5 minutes of discussion.

Keep it short and sweet, and focus on the most important aspects. The conference offers the advantage of getting in contact with people who are interested in your work. So consider the talk more as a teaser to prompt follow-up conversations than a forum to demonstrate or discuss technical details in depth.

A demo is worth a thousand slides. We prefer to see how your stuff works rather than be told about how it works with illustrative slides.

Please restrict the slides to summarize your introduction or conclusion.

Looking forward to your submissions!







Challenged

by Peter Kriens (noreply@blogger.com) at September 10, 2014 09:49 AM

In my last whiny post I stated that we tend to solve the symptoms and not the root problem. Jaco Joubert challenged me to define what the real problem is; fair enough. The root problem is that we write fragile software that stumbles when there are unexpected changes in the environment. The simplest solution to this problem is to lash down the environment. If the environment does not move,


IoT Gateway: Reducing the distance between embedded and enterprise technologies

by Eclipse Foundation at September 09, 2014 03:48 PM

Slides of the presentation: http://slideshare.net/Eurotechchannel/kuram2miotgateway Internet of Things adoption is constrained by disparate implementations and proprietary solutions. The emergence of a service gateway model operating on the edge of a deployment as an aggregator and controller has opened up the possibility of enabling enterprise-level technologies in the world of the Internet of Things. This session discusses and demonstrates Eclipse Kura, a Java- and OSGi-based application framework for service gateways. Kura abstracts and isolates the developer from the complexity of the hardware subsystems and ensures application portability across architectures. Kura built-in components are configurable services covering the most-common building blocks for Internet of Things applications such as I/O access, network configuration, data services, and remote management.


AnyEdit fixes line delimiters now

by Andrey Loskutov at September 07, 2014 10:22 PM

My AnyEdit plugin just got a new feature (see issue 92): AnyEdit v.2.4.11 can now fix line delimiters in text files (either automatically on save or explicitly while performing the "Tabs<->Spaces" action).

As you may know, Windows uses "CRLF", Linux "LF" and Mac "CR" special characters to indicate a "new line" in text files. Unfortunately, one can easily mess up files by editing them on different platforms and doing copy/paste. This is history now, as you can set up AnyEdit to automatically "fix" line delimiters on save.
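The kind of normalization involved can be sketched in a few lines of plain Java (illustrative only, not AnyEdit's actual code):

```java
import java.util.regex.Matcher;

public class LineDelimiterSketch {
    // Normalize all three delimiter styles (CRLF, lone CR, LF) to a single target.
    // This mirrors the kind of fix AnyEdit applies, not its implementation.
    static String fixDelimiters(String text, String target) {
        // match CRLF first so a Windows line ending is not counted as CR + LF
        return text.replaceAll("\r\n|\r|\n", Matcher.quoteReplacement(target));
    }

    public static void main(String[] args) {
        String mixed = "windows\r\nlinux\nmac\r";
        System.out.println(fixDelimiters(mixed, "\n").equals("windows\nlinux\nmac\n")); // true
    }
}
```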

Note: this feature is not enabled by default, because if projects are not properly set up, things will be no better than before.

Fix delimiters "on save" preferences:

Fix delimiters "on convert" preferences:

 

Before you start updating AnyEdit and changing its preferences, please make sure your project knows which line separator should always be used.

To make sure your project is properly set up, create an org.eclipse.core.runtime.prefs file in the .settings directory and add the line:

line.separator=\n
to it (for Linux), or just:

echo "line.separator=\n" > .settings/org.eclipse.core.runtime.prefs
if the file doesn't exist yet.

Additionally (fully unrelated to the new feature), I would recommend you set up the default encoding for your project, as this is the next big problem you will face with multi-platform projects.

Create an org.eclipse.core.resources.prefs file in the .settings directory and add the line:

encoding/<project>=UTF-8
to it (for UTF-8 encoding, which I would suggest as the default), or just:
echo "encoding/<project>=UTF-8" > .settings/org.eclipse.core.resources.prefs 

if the file doesn't exist yet.

Please note that you do not need to substitute the real project name: the line should be taken literally as it is written above.



Committer and Contributor Hangout -- Project Management Infrastructure (PMI)

by Eclipse Foundation at September 05, 2014 02:52 PM

We'll be talking about and showing the Project Management Infrastructure (PMI). This is a crucial component that helps projects manage and consolidate several key administrative activities into one place. Part of the talk will be showing some of the functionality and offering tips/insights that can really help your project from an administrative perspective. Richard Burcher from the Eclipse Foundation will be your host :) Links: Project Management Infrastructure (PMI): https://wiki.eclipse.org/Project_Management_Infrastructure Eclipse Project Tools: https://www.eclipse.org/projects/tools/


Sphinx – how to access your models

by Andreas Graf at September 03, 2014 02:16 PM

When you work with Sphinx as a user in an Eclipse runtime, e.g. with the Sphinx model explorer, Sphinx does a lot of work in the background to update models, provide a single shared model to all editors etc. But what do you do when you want to access Sphinx models from your own code?

EcorePlatformUtil

EcorePlatformUtil is one of the important classes with a lot of methods that help you access your models. Two important methods are

  • getResource(…)
  • loadResource(…)

They come in a variety of parameter variations. The important thing is, getResource(…) will not load your resource if it is not yet loaded. That is a little bit different from the standard ResourceSet.getResource(…) with its loadOnDemand parameter.

On the other hand, loadResource(…) will only load your resource if it is not loaded yet. If it is, there will be no runtime overhead. Let’s have a look at the code:

 

public static Resource loadResource(IFile file, Map<?, ?> options) {
	TransactionalEditingDomain editingDomain = WorkspaceEditingDomainUtil.getEditingDomain(file);
	if (editingDomain != null) {
		return loadResource(editingDomain, file, options);
	}
	return null;
}

Sphinx uses its internal registries to find the TransactionalEditingDomain that the file belongs to and then calls loadResource(…):

public static Resource loadResource(final TransactionalEditingDomain editingDomain, final IFile file, final Map<?, ?> options) {
		if (editingDomain != null && file != null) {
			try {
				return TransactionUtil.runExclusive(editingDomain, new RunnableWithResult.Impl<Resource>() {
					@Override
					public void run() {
						URI uri = createURI(file.getFullPath());
						setResult(EcoreResourceUtil.loadResource(editingDomain.getResourceSet(), uri, options));
					}
				});
			} catch (InterruptedException ex) {
				PlatformLogUtil.logAsError(Activator.getPlugin(), ex);
			}
		}
		return null;
	}

So we have to look at org.eclipse.sphinx.emf.util.EcoreResourceUtil to see what happens next. There is just a little fragment

public static Resource loadResource(ResourceSet resourceSet, URI uri, Map<?, ?> options) {
		Assert.isNotNull(uri);
		return loadResource(resourceSet, uri, options, true);
	}

that leads us to

private static Resource loadResource(ResourceSet resourceSet, URI uri, Map<?, ?> options, boolean loadOnDemand) {
		Assert.isNotNull(uri);

		// Create new ResourceSet if none has been provided
		if (resourceSet == null) {
			resourceSet = new ScopingResourceSetImpl();
		}

		// Try to convert given URI to platform:/resource URI if not yet so
		/*
		 * !! Important Note !! This is necessary in order to avoid that resources which are located inside the
		 * workspace get loaded multiple times just because they are referenced by URIs with different schemes. If given
		 * resource set were an instance of ResourceSetImpl this extra conversion wouldn't be necessary.
		 * org.eclipse.emf.ecore.resource.ResourceSet.getResource(URI, boolean) normalizes and compares given URI and to
		 * normalized copies of URIs of already present resources and thereby avoids multiple loading of same resources
		 * on its own. This is however not true when ExtendedResourceSetImpl or a subclass of it is used. Herein, URI
		 * normalization and comparison has been removed from
		 * org.eclipse.sphinx.emf.resource.ExtendedResourceSetImpl.getResource(URI, boolean) in order to increase
		 * runtime performance.
		 */
		if (!uri.isPlatform()) {
			uri = convertToPlatformResourceURI(uri);
		}

		// Just get model resource if it is already loaded
		Resource resource = resourceSet.getResource(uri.trimFragment().trimQuery(), false);

		// Load it using specified options if not done so yet and a demand load has been requested
		if ((resource == null || !resource.isLoaded()) && loadOnDemand) {
			if (exists(uri)) {
				if (resource == null) {
					String contentType = getContentTypeId(uri);
					resource = resourceSet.createResource(uri, contentType);
				}
				if (resource != null) {
					try {
						// Capture errors and warnings encountered during resource creation
						/*
						 * !! Important note !! This is necessary because the resource's errors and warnings are
						 * automatically cleared when the loading begins. Therefore, if we don't retrieve them at this
						 * point all previously encountered errors and warnings would be lost (see
						 * org.eclipse.emf.ecore.resource.impl.ResourceImpl.load(InputStream, Map<?, ?>) for details)
						 */
						List<Resource.Diagnostic> creationErrors = new ArrayList<Resource.Diagnostic>(resource.getErrors());
						List<Resource.Diagnostic> creationWarnings = new ArrayList<Resource.Diagnostic>(resource.getWarnings());

						// Load resource
						resource.load(options);

						// Make sure that no empty resources are kept in resource set
						if (resource.getContents().isEmpty()) {
							unloadResource(resource, true);
						}

						// Restore creation time errors and warnings
						resource.getErrors().addAll(creationErrors);
						resource.getWarnings().addAll(creationWarnings);
					} catch (Exception ex) {
						// Make sure that no empty resources are kept in resource set
						if (resource.getContents().isEmpty()) {
							// Capture errors and warnings encountered during resource load attempt
							/*
							 * !! Important note !! This is necessary because the resource's errors and warnings are
							 * automatically cleared when it gets unloaded. Therefore, if we didn't retrieve them at
							 * this point all errors and warnings encountered during loading would be lost (see
							 * org.eclipse.emf.ecore.resource.impl.ResourceImpl.doUnload() for details)
							 */
							List<Resource.Diagnostic> loadErrors = new ArrayList<Resource.Diagnostic>(resource.getErrors());
							List<Resource.Diagnostic> loadWarnings = new ArrayList<Resource.Diagnostic>(resource.getWarnings());

							// Make sure that resource gets unloaded and removed from resource set again
							try {
								unloadResource(resource, true);
							} catch (Exception e) {
								// Log unload problem in Error Log but don't let it go along as runtime exception. It is
								// most likely just a consequence of the load problems encountered before and therefore
								// should not prevent those from being restored as errors and warnings on resource.
								PlatformLogUtil.logAsError(Activator.getPlugin(), e);
							}

							// Restore load time errors and warnings on resource
							/*
							 * !! Important Note !! The main intention behind restoring recorded errors and warnings on
							 * the already unloaded resource is to enable these errors/warnings to be converted to
							 * problem markers by the resource problem handler later on (see
							 * org.eclipse.sphinx.emf.internal.resource.ResourceProblemHandler#resourceSetChanged(
							 * ResourceSetChangeEvent)) for details).
							 */
							resource.getErrors().addAll(loadErrors);
							resource.getWarnings().addAll(loadWarnings);
						}

						// Record exception as error on resource
						Throwable cause = ex.getCause();
						Exception exception = cause instanceof Exception ? (Exception) cause : ex;
						resource.getErrors().add(
								new XMIException(NLS.bind(Messages.error_problemOccurredWhenLoadingResource, uri.toString()), exception, uri
										.toString(), 1, 1));

						// Re-throw exception
						throw new WrappedException(ex);
					}
				}
			}
		}
		return resource;
	}

  • First, the standard EMF ResourceSet.getResource() is used to see if the resource is already there. Note that loadOnDemand is false.
  • Otherwise the resource is actually created and loaded. If it does not have any content, it is immediately removed.
  • Information about loading errors/warnings is stored on the resource.

EcorePlatformUtil.getResource(IFile)

This method will not load the resource as can be seen from the code:

public static Resource getResource(final IFile file) {
		final TransactionalEditingDomain editingDomain = WorkspaceEditingDomainUtil.getCurrentEditingDomain(file);
		if (editingDomain != null) {
			try {
				return TransactionUtil.runExclusive(editingDomain, new RunnableWithResult.Impl<Resource>() {
					@Override
					public void run() {
						URI uri = createURI(file.getFullPath());
						setResult(editingDomain.getResourceSet().getResource(uri, false));
					}
				});
			} catch (InterruptedException ex) {
				PlatformLogUtil.logAsError(Activator.getPlugin(), ex);
			}
		}
		return null;
	}
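The contract of the two methods — getResource(…) never loads, loadResource(…) loads at most once — can be illustrated with a toy cache. This is purely illustrative and not Sphinx code; the string-based "resources" stand in for EMF Resources:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ResourceCacheSketch {
    private final Map<String, String> loaded = new HashMap<>();
    private final Function<String, String> loader;
    int loadCount = 0; // how many real loads actually happened

    ResourceCacheSketch(Function<String, String> loader) {
        this.loader = loader;
    }

    // Like getResource(...): return the resource only if already loaded; never load.
    String getResource(String uri) {
        return loaded.get(uri);
    }

    // Like loadResource(...): load only if not yet loaded; otherwise a cheap cache hit.
    String loadResource(String uri) {
        return loaded.computeIfAbsent(uri, u -> {
            loadCount++;
            return loader.apply(u);
        });
    }

    public static void main(String[] args) {
        ResourceCacheSketch cache = new ResourceCacheSketch(uri -> "contents of " + uri);
        System.out.println(cache.getResource("platform:/resource/p/model.xmi")); // null, no load triggered
        cache.loadResource("platform:/resource/p/model.xmi");
        cache.loadResource("platform:/resource/p/model.xmi");
        System.out.println(cache.loadCount); // 1, the second call did not load again
    }
}
```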

ModelLoadManager

In addition, the ModelLoadManager provides more integration with the Workspace framework (Jobs, ProgressMonitor, etc.):

ModelLoadManager.loadFiles(Collection, IMetaModelDescriptor, boolean, IProgressMonitor) supports an “async” parameter. If that is true, the model loading will be executed within an org.eclipse.core.runtime.jobs.Job. The method in turn calls

ModelLoadManager.runDetectAndLoadModelFiles(Collection, IMetaModelDescriptor, IProgressMonitor), which sets up the progress monitor and performance statistics, itself calling detectFilesToLoad and runLoadModelFiles. detectFilesToLoad will be discussed in a dedicated posting.

runLoadModelFiles sets up the progress monitor and performance statistics and calls loadModelFilesInEditingDomain, finally delegating down to the EcorePlatformUtil.loadResource(editingDomain, file, loadOptions) that we discussed above.



Sphinx is blacklisting your proxies

by Andreas Graf at September 03, 2014 11:48 AM

EMF proxy resolving is one area where Sphinx adds functionality. Sphinx was designed to support, amongst others, AUTOSAR models, and in AUTOSAR models references have some special traits:

  • References are based on fully qualified names
  • Any number of .arxml files can be combined to form a model
  • Models can be merged based on their names. So any number of resources can contain a package with a fully qualified name of “/AUTOSAR/p1/p2/p3/…pn”

Blacklisting

Obviously, in scenarios like this it would be highly inefficient to try to resolve a proxy each time it is encountered in code. So Sphinx can “blacklist” the proxies.

The information about the proxy blacklist is used in org.eclipse.sphinx.emf.resource.ExtendedResourceSetImpl. At the end of the getEObject() method we see that the proxy is added to the blacklist if it could not be resolved:

if (proxyHelper != null) {
	// Remember proxy as known unresolved proxy
	proxyHelper.getBlackList().addProxyURI(uri);
}

And of course, at the beginning of the method, null is returned immediately if the proxy has been blacklisted:

if (proxyHelper != null) {
	// If proxy URI references a known unresolved proxy then don't try to resolve it again
	if (proxyHelper.getBlackList().existsProxyURI(uri)) {
		return null;
	}

Getting de-listed

Now Sphinx has to find out when it would make sense to no longer blacklist a proxy but to try to resolve it again. So it registers an org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.ModelIndexUpdater via the extension point described in this previous post.

So the ModelIndexUpdater will react to changes:

@Override
	public void resourceSetChanged(ResourceSetChangeEvent event) {
		ProxyHelper proxyHelper = ProxyHelperAdapterFactory.INSTANCE.adapt(event.getEditingDomain().getResourceSet());
		List<?> notifications = event.getNotifications();
		for (Object object : notifications) {
			if (object instanceof Notification) {
				Notification notification = (Notification) object;
				Object notifier = notification.getNotifier();
				if (notifier instanceof Resource) {
					Resource resource = (Resource) notifier;
					if (notification.getFeatureID(Resource.class) == Resource.RESOURCE__IS_LOADED) {
						if (resource.isLoaded()) {
							proxyHelper.getBlackList().updateIndexOnResourceLoaded(resource);
						} else {
							// FIXME when called on post commit, resource content is empty
							proxyHelper.getBlackList().updateIndexOnResourceUnloaded(resource);
						}
					}
				} else if (notifier instanceof EObject) {
					// Check if new model objects that are potential targets for black-listed proxy URIs have been added
					EStructuralFeature feature = (EStructuralFeature) notification.getFeature();
					if (feature instanceof EReference) {
						EReference reference = (EReference) feature;
						if (reference.isContainment()) {
							if (notification.getEventType() == Notification.SET || notification.getEventType() == Notification.ADD
									|| notification.getEventType() == Notification.ADD_MANY) {
								// Get black-listed proxy URI pointing at changed model object as well as all
								// black-listed proxy URIs pointing at model objects that are directly and indirectly
								// contained by the former removed
								proxyHelper.getBlackList().updateIndexOnResourceLoaded(((EObject) notifier).eResource());
							}
						}
					}

The class that actually implements the blacklist is org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.ModelIndex, which mostly delegates to org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.MapModelIndex.

When a resource has changed or is being loaded, the MapModelIndex tries to see if the blacklisted proxies can be resolved against the resource:

/**
	 * Try to resolve proxy objects against this resource.
	 */
	public void updateIndexOnResourceLoaded(Resource resource) {
		if (resource != null && !resource.getContents().isEmpty()) {
			for (URI proxyURI : new ArrayList<URI>(proxyURIs)) {
				// FIXME Potential EMF bug: NumberFormatExeption raised in XMIResourceImpl#getEObject(String) upon
				// unexpected URI fragment format.
				try {
					// If proxy URI is not fragment-based, i.e. includes segments pointing at the target resource, we
					// have to make sure that it matches URI of loaded resource
					if (proxyURI.segmentCount() == 0 || resource.getURI().equals(proxyURI.trimFragment().trimQuery())) {
						// See if loaded resource contains an object matching proxy URI fragment
						if (resource.getEObject(proxyURI.fragment()) != null) {
							removeProxyURI(proxyURI);
						}
					}
				} catch (Exception ex) {
					// Ignore exception
				}
			}
		}
	}

When the resource is actually removed from the model, all proxies from that resource are removed from the index:

public void updateIndexOnResourceUnloaded(Resource resource) {
		if (resource != null) {
			TreeIterator<EObject> iterator = resource.getAllContents();
			while (iterator.hasNext()) {
				EObject currentObject = iterator.next();
				if (currentObject.eIsProxy() && existsProxyURI(((InternalEObject) currentObject).eProxyURI())) {
					removeProxyURI(((InternalEObject) currentObject).eProxyURI());
				}
			}
		}
	}

So Sphinx avoids repeatedly re-resolving proxies that are known to fail.
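Stripped of all the EMF machinery, the blacklist pattern looks roughly like this. This is a toy sketch, not Sphinx's actual classes; the method names merely echo the ones discussed above:

```java
import java.util.HashSet;
import java.util.Set;

public class ProxyBlackListSketch {
    private final Set<String> unresolvable = new HashSet<>();

    // Resolution failed once: remember the proxy URI so we don't retry on every access.
    void addProxyURI(String uri) {
        unresolvable.add(uri);
    }

    // Checked at the start of resolution: a blacklisted URI short-circuits to null.
    boolean existsProxyURI(String uri) {
        return unresolvable.contains(uri);
    }

    // A resource was loaded: URIs that now resolve against it leave the blacklist.
    void updateIndexOnResourceLoaded(Set<String> nowResolvable) {
        unresolvable.removeAll(nowResolvable);
    }

    public static void main(String[] args) {
        ProxyBlackListSketch blackList = new ProxyBlackListSketch();
        String uri = "platform:/resource/p/a.arxml#/AUTOSAR/p1/p2";
        blackList.addProxyURI(uri);
        System.out.println(blackList.existsProxyURI(uri)); // true, resolution is skipped
        blackList.updateIndexOnResourceLoaded(Set.of(uri));
        System.out.println(blackList.existsProxyURI(uri)); // false, resolution is retried
    }
}
```

The design choice is the same as in Sphinx: pay the cost of a failed resolution once, and only invalidate the cached failure when a change (a loaded resource, a new containment) could plausibly make it succeed.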



New Completion Tweaks and Subwords Improvements for the Eclipse IDE

by Marcel Bruch at September 03, 2014 11:15 AM

Codetrails continuously strives to improve your performance by building tools that let you develop code faster with Eclipse. If you know Eclipse Code Recommenders then you probably know that (among other things) we spent quite some time on improving Eclipse’s content assist.

In case you wonder why, the answer is pretty simple: We improve content assist because it is the fifth-most frequently used command in the IDE, right after Delete, Save, (Jump to) Next Word and Paste.



Eclipse Fedorapackager 0.5.0 release

by Alexander Kurtakov (noreply@blogger.com) at September 03, 2014 10:54 AM

After a bit more than 2 years it was high time to do a new release to adapt to current Fedora and Eclipse releases. Most of the changes in this release were towards cleaning and simplifying the codebase, hoping for faster releases, but there are some important feature changes like:
  • Fedorapackager requires Java 8 now.
  • Allow users to clone multiple projects at once.
  • Fix changelog parsing for Bodhi.
  • Make use of /etc/rpkg/fedpkg.conf.
  • Modified Fedora Packager Koji properties page.
  • Reduce logging noise.
  • Enable rpmlint and rpm natures by default.
  • Tagging is no longer used with Git.
  • Shorter menu labels.
  • Fixed JsonNull error when pushing a Bodhi update.
  • Eval NVRs when updating Bodhi.
  • Eval the name to not fail on SCL-ized packages.
  • Authenticate with Bodhi when pushing a review fix.
  • Bodhi dialog cleanup.
  • Make perspective switch remember previous choice.
  • Let the console name have the package (srpm) name in it.
  • Remove expectation of the existence of an ssh key.
Updated packages are available in Rawhide and Fedora 21 updates-testing.


Sphinx is listening – editingDomainFactoryListeners

by Andreas Graf at September 03, 2014 09:33 AM

One of the mechanisms that Sphinx uses to get notified about changes in your Sphinx-based models is registering listeners when the editing domain for your model is created. This mechanism can also be used for your own listeners.

Sphinx defines the extension point org.eclipse.sphinx.emf.editingDomainFactoryListeners and in org.eclipse.sphinx.emf it registers a number of its own listeners:

Listeners registered by Sphinx


The registry entries for that extension point are processed in EditingDomainFactoryListenerRegistry.readContributedEditingDomainFactoryListeners(). It creates an EditingDomainFactoryListenerRegistry.ListenerDescriptor which stores the class name and registers it internally for the meta-models that are specified in the extension. The ListenerDescriptor also contains code to actually load the specified class and instantiate it (base type is ITransactionalEditingDomainFactoryListener).

The method EditingDomainFactoryListenerRegistry.getListeners(IMetaModelDescriptor) can be used to get all the ITransactionalEditingDomainFactoryListener that are registered for a given IMetaModelDescriptor.

These are in turn invoked from ExtendedWorkspaceEditingDomainFactory.firePostCreateEditingDomain(Collection, TransactionalEditingDomain). ExtendedWorkspaceEditingDomainFactory is responsible for creating the editing domain, and through this feature we have a nice mechanism to register custom listeners each time an editing domain is created by Sphinx (e.g., after models have been loaded).
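As a sketch of how a custom listener could be contributed, here is a hypothetical plugin.xml fragment for the org.eclipse.sphinx.emf.editingDomainFactoryListeners extension point. The element and attribute names, the class name, and the meta-model identifier are illustrative assumptions; consult the extension point schema in org.eclipse.sphinx.emf for the exact syntax. The referenced class would implement ITransactionalEditingDomainFactoryListener, as described above.

```xml
<!-- Hypothetical contribution; element/attribute names, class name and
     meta-model id are illustrative assumptions. Check the extension
     point schema in org.eclipse.sphinx.emf for the exact syntax. -->
<extension point="org.eclipse.sphinx.emf.editingDomainFactoryListeners">
   <listener class="com.example.MyEditingDomainFactoryListener">
      <!-- Restrict the listener to one (assumed) meta-model descriptor -->
      <applicableFor metaModelDescriptorIdPattern="com.example.mymetamodel"/>
   </listener>
</extension>
```

With such a contribution in place, EditingDomainFactoryListenerRegistry.getListeners(IMetaModelDescriptor) would return the listener for the matching meta-model, and its post-create callback would fire each time Sphinx creates an editing domain.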


by Andreas Graf at September 03, 2014 09:33 AM

Join the new Virtual IoT Meetup Group

by Ian Skerrett at September 02, 2014 06:33 PM

In the past year, IoT events and IoT meetup groups have been created all over the world. A lot of the Meetup groups, including the one I attend in Ottawa, have great content and networking opportunities. Similarly, there is a wealth of IoT events that you can attend throughout the year.

However, not everyone can attend a meetup or an event, so we thought it might be useful to start a virtual IoT meetup group: a place where people can learn about new IoT technologies and hopefully meet other IoT enthusiasts. We already have a number of presenters lined up to talk about the different Eclipse IoT technologies, but over time I hope this group will be more than just Eclipse IoT projects. I do hope it can be a group for IoT technologists and enthusiasts. Please join and let us know what you would like to learn.

FWIW, hat tip to Simon Maples at ZeroTurnaround, who runs the great vJUG, the inspiration for this group.



by Ian Skerrett at September 02, 2014 06:33 PM

Vaadin 7.3 – Valo, OSGi and e4

by Florian Pirchner at September 02, 2014 11:23 AM

I got the chance to see a preview of Vaadin 7.3 some days ago, and I am really impressed by the new features it brings.

Until now, I have worked with the Vaadin Reindeer theme and tried to customize it. But since I am a Java developer, I do not have particularly deep knowledge of CSS3 and had a hard time with it. That’s why I am really looking forward to Vaadin 7.3 and plan to upgrade my customer projects in the next few days. The new Valo theme is exactly what I have been trying to do myself: a responsive and highly customizable theme. There are many different styles, and most of them meet my objectives without my having to change anything in the CSS.

And the best thing about Vaadin 7.3 is that it comes with a high-end Sass compiler. Over the last few days I have been reading a lot about Sass, and it is a perfect match for Java developers. You write styles in this very intuitive language, and Vaadin 7.3 compiles them into proper CSS3. Really crazy… For me, Sass is something like a DSL for CSS3. Thus, I do not have to schedule my CSS training anymore; I just have to use Sass :D
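To give a feel for why Sass reads like a DSL for CSS3, here is a minimal, hypothetical snippet using just variables and nesting (the selector and color value are made up for illustration), together with the plain CSS3 it compiles to:

```scss
/* Hypothetical example: one variable, one nested rule */
$primary-color: #00b4f0;

.login-panel {
  border: 1px solid $primary-color;

  .caption {
    color: $primary-color;
  }
}
```

A Sass compiler expands this into ordinary CSS3, resolving the variable and flattening the nesting:

```css
.login-panel { border: 1px solid #00b4f0; }
.login-panel .caption { color: #00b4f0; }
```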

OSGi and Vaaclipse

Over the next few days, I will run a first Vaadin 7.3 OSGi application. And I am already sure: it is a perfect match.

Running a Vaadin 7.3 OSGi application is the basis for migrating the Vaaclipse project to Valo, too. The Vaaclipse project is a rendering layer that renders the e4 workbench with Vaadin. See http://semanticsoft.github.io/vaaclipse/.

For details about Vaadin 7.3 just click here.

I also added two screenshots about the new theme:

Metro theme (screenshot: Valo-1)

Dark theme (screenshot: Valo-2)

Going to keep you informed…

Best, Florian



by Florian Pirchner at September 02, 2014 11:23 AM

A Developer’s Guide to the Eclipse Calling Home Policy - Part I

by Marcel Bruch at September 01, 2014 06:04 PM

It’s been a while since the Eclipse Foundation decided to shut down the Usage Data Collector. The main reason for stopping the service was that, although thousands of users shared data, neither plug-in providers nor researchers took significant advantage of the data collected at that time. Since its shutdown, however, new demand for collecting usage data has emerged. But compared to the data collected by the UDC, today’s demands are different and vary quite a lot from project to project.


by Marcel Bruch at September 01, 2014 06:04 PM