Today I propose to study a High Availability configuration with Petals ESB. This will be an opportunity to compare Petals' distributed approach with a cluster-based approach. By the way, I assume you have already read my previous article about application-level routing in Petals. Throughout this article, I will regularly use the acronym HA (High Availability).

Context

We will start from a classic configuration:

  • An application that provides a functionality on one side.
  • An application that, on the other side, requests this functionality.
  • And Petals in the middle to handle the mediation.

Our provider application can be, for example, an EJB, a legacy application, a web service or any IT resource. The other application is any kind of client.

What is the point of having Petals in the middle?

In the case of an EJB, consider an EJB 2.x that you would like to expose as a web service (either you write a client in Java, or you go through the Petals EJB connector, in which case there is no code to write at all). In the case of a legacy application, you can use a dedicated connector, so that it can be reused with other applications of the same type. In the case of a web service, imagine an XML-RPC service while the client only understands Document/Literal. And for an IT resource, the point is generally to "service-ize" this resource, that is, to wrap access to and manipulation of this resource behind a service (in the SOA sense of the term). You can then reason about the functionality provided by the service (a functionality that happens to be implemented with the resource in question).

Schematically, we thus have…

Client application - Petals ESB - Provider application

In terms of deployment, we have one component and one provide service-unit to import the provider application into Petals (make it reachable). And we have one component and one consume service-unit to listen for the client application's requests. The question now is how to make the whole thing highly available.

Solution 1: node specialization

This solution is not the most intuitive one, but it is a good reminder of Petals' capabilities. It is the most suitable when you want both to:

  • Replicate nodes, that is, the service providers and consumers deployed on them.
  • Manage load distribution, since some services are greedier than others and you may not want them all running on the same machine.

Before deployment, we will therefore create deployment groups. Each group defines a set of service-units (service providers and consumers) that will be deployed on the same node. These nodes will then be replicated. Let's take an example with two groups: the providers and the consumers. Two groups, with replication, means we need at least 4 nodes.

- Node 1: all the providers (provide SUs, with auto-generated end-points).
- Node 2: all the providers (provide SUs, with auto-generated end-points). Replication of node 1. We deploy the same SUs, relying on the end-points generated at deployment time.
- Node 3: all the consumers (consume SUs, invocation by interface name and service name only).
- Node 4: all the consumers (consume SUs, invocation by interface name and service name only). Replication of node 3. We assume the consumers can listen to external events concurrently. I will come back to this point in the last part of this article.

Schematically, this gives us…

4 nodes, with specialization and replication

There is only one entry point into the bus: one of the 2 consumers. It will call a provider, which may be on node 1 or on node 2 (that is application-level routing). This provider relays the request to the external resource and then passes the response back. If a provider node goes down, a second one remains; Petals will automatically send requests to the node that is still alive. If a consumer node goes down, either the upstream load balancer or the component itself will make sure the 2nd consumer takes over.

Solution 2: node replication

The second solution is the simplest. It consists in replicating all the Petals nodes. Here, there is no node specialization. With our previous example, we would thus deploy the providers and the consumers on every node. To improve performance and avoid useless transport, each consumer should be forced to call the provider located on its own node. Good news: this happens to be Petals' default configuration.

As a reminder, this is configured in the router, on a per-node basis. It makes it possible to weight Petals' application-level routing choices. As a result, we can obtain the same behavior as before with only 2 nodes.

2 replicated nodes

If one node goes down, the second one takes over.

And if you want 3 nodes, you just have to change the topology (topology.xml) and possibly update the front-end load balancer. If you want to go to n nodes, that works too.

N replicated nodes
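As a purely illustrative sketch of the idea (the actual schema of topology.xml is defined by Petals, so the element names, container names and addresses below are my assumptions, not taken from the Petals documentation), a topology listing the replicated nodes could look roughly like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative only: check the Petals documentation
     for the actual topology.xml schema. -->
<topology>
  <domain name="sample-domain">
    <sub-domain name="sample-sub-domain">
      <!-- One entry per replicated node -->
      <container name="node-1">
        <host>192.168.1.101</host>
      </container>
      <container name="node-2">
        <host>192.168.1.102</host>
      </container>
      <container name="node-3">
        <host>192.168.1.103</host>
      </container>
    </sub-domain>
  </domain>
</topology>
```

Going from n to n+1 nodes then amounts to adding one container entry and restarting the synchronization.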

Then, when choosing the number of nodes, you also have to take node synchronization into account (its periodicity, whether there will be frequent deployments and undeployments) and perhaps set up specific monitoring of the master node. But this does not change the underlying principle.

There is also a variant of this configuration: rather than having 1 topology of n nodes, you may prefer n topologies of 1 node. In this example at least, it would be rigorously identical. You can also mix the 2 approaches and replicate topologies. It is quite flexible. It then depends on how many processes (processing chains) have to coexist.

Sine qua non conditions for HA

As we have seen, there are many ways to organize and distribute Petals nodes to ensure its high availability. However, HA cannot concern Petals alone. It must also cover the external resources accessed by the bus. These resources sit at the entry and exit of the bus, that is, on the side of the provider applications and on the side of the consumer applications.

Clearly, the application that provides a functionality (and that the client application reaches through the bus) must itself be highly available. This can be achieved with another load balancer, with shared disks, or with any other means that guarantees the availability of the resource or application.

HA of the provider application

A second element concerns high availability on the side of the service consumers in Petals. The consumers are the service-units that propagate an external event into the bus (such as a request, or a file appearing in a directory) as a message addressed to a service provider.

HA of the consumers in Petals

I mentioned this point in the middle of this article:

We assume the consumers can listen to external events concurrently.

Regarding this last point, here are 2 examples to help you understand:

  • A consumer associated with the SOAP connector waits for SOAP requests. When such a consumer is replicated, HA is guaranteed by placing a load balancer upstream. This load balancer is in charge of sending each request to one of the replicated consumers.
  • A consumer associated with the File (Transfer) connector watches a directory on a disk, waiting for new files to appear. When such a consumer is replicated, HA is guaranteed by the component, which supports concurrent access to the watched directory.
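For the SOAP case, the upstream load balancer can be as simple as an Apache httpd front-end with mod_proxy_balancer. A minimal sketch, assuming two replicated SOAP consumers reachable on port 8084 of two hosts (the host names, port and path are assumptions for the example):

```apache
# Balance incoming SOAP requests over two replicated consumers.
# Requires mod_proxy, mod_proxy_http and mod_proxy_balancer.
<Proxy balancer://petals-consumers>
    BalancerMember http://node1.example.org:8084
    BalancerMember http://node2.example.org:8084
</Proxy>

ProxyPass        /services balancer://petals-consumers
ProxyPassReverse /services balancer://petals-consumers
```

If one consumer node goes down, the balancer stops routing requests to it until it comes back.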

The HA of the consumers guarantees the continuity of the system, even if one of the consumers goes down.

In the end, it is the combination of 3 things that ensures the high availability of the infrastructure:

  • The HA of the bus (message transport).
  • The HA of the resources / applications accessed by the bus.
  • The concurrent yet unique detection of external events to propagate into the bus (for the consumers).

To conclude…

Let me add a clarification that seems obvious to me, but you never know. The high availability of the infrastructure does not guarantee the correct processing of messages or events. Processes must be built to handle error cases. Petals provides the basic mechanisms (through MEPs, message re-emission policies and timeouts), but these must also be exploited and taken into account when designing the services and their consumers to be deployed in the bus. This is obviously a whole other topic.

This article presented the high availability of an infrastructure using Petals ESB. In a future article, I will quickly present a more concrete case to illustrate the configuration of a software load balancer (Apache + mod proxy balancer).


Hi,

I have just added a new way to create BPEL processes in the BPEL Designer.
This feature makes it possible to generate a BPEL skeleton from a WSDL port type. Roughly, it means we can generate BPEL processes that implement a given interface. In SOA, such a top-down approach is quite useful: it lets development be driven by service contracts.

In an RCP application I’m working on, I used to have this feature, but implemented through different means (and associated with an older fork of the BPEL Designer). It is now entirely based on the EMF metamodels (those of XML schemas, WSDL and BPEL). Since I want to get rid of this fork, I decided to contribute all my enhancements to the official BPEL Designer at Eclipse.org. And this feature is part of them.

Extending the wizard was also an opportunity to update the user interface of the creation wizard. Obviously, the former options have been kept, even if they were reorganized. Here are some screenshots.

First, here is the new approach. You decide to create a BPEL from a WSDL contract…

Select the creation approach

You define the location of the WSDL and select the contract…

Specify the WSDL options

You can even import the WSDL (and its imports) in the project if you want.

Import the WSDL in the project

And eventually, you decide where to create the BPEL process.

The target location

 
Here is a video of this wizard in action.

Demonstration – WSDL to BPEL from Vincent Zurczak on Vimeo.

 
And here is how the former options are displayed now.

Generate a BPEL from templates

The next page is about the template and its options.

Fill-in the template options

These options depend on the selected template. A warning was also added to the “empty” template (this follows a discussion we had on the BPEL-dev mailing-list).

The empty template

And you have already seen the page to select the target location.

 
This contribution comes with SWTBot tests and a new plugin with utilities for WSDL and XML schemas. It was committed on a branch (thanks Git!), so that it can be commented and discussed. Hopefully, this branch will then be merged into the master branch.


I recently had to work with the WSDL metamodel from Eclipse WTP.
I thought it might be useful to compare it with other WSDL parsing solutions. Note that I will focus on Java solutions here.

A quick introduction to WSDL

WSDL stands for Web Services Description Language.
It is a W3C specification to describe the interface / contract of a service. A WSDL contract is defined in a *.wsdl file. This file describes the service operations, their input and output parameters, the declaration of services, where they can be reached, and the communication protocol to use to interact with them. A *.wsdl file may import other WSDL documents and also XML schemas (which contain the type definitions of the operation parameters).
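To fix ideas, here is the overall shape of a (simplified) WSDL 1.1 document; all the names are of course illustrative:

```xml
<definitions name="HelloService"
             targetNamespace="http://example.org/hello"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:tns="http://example.org/hello"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <!-- Types: usually an inlined or imported XML schema -->
  <types>
    <xsd:schema targetNamespace="http://example.org/hello">
      <xsd:element name="sayHello" type="xsd:string"/>
      <xsd:element name="sayHelloResponse" type="xsd:string"/>
    </xsd:schema>
  </types>

  <!-- Messages: the operation parameters -->
  <message name="sayHelloRequest">
    <part name="parameters" element="tns:sayHello"/>
  </message>
  <message name="sayHelloResponse">
    <part name="parameters" element="tns:sayHelloResponse"/>
  </message>

  <!-- Port type: the abstract interface / contract -->
  <portType name="HelloPortType">
    <operation name="sayHello">
      <input message="tns:sayHelloRequest"/>
      <output message="tns:sayHelloResponse"/>
    </operation>
  </portType>

  <!-- Binding: the concrete protocol (here, SOAP over HTTP) -->
  <binding name="HelloBinding" type="tns:HelloPortType">
    <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="sayHello">
      <input><soap:body use="literal"/></input>
      <output><soap:body use="literal"/></output>
    </operation>
  </binding>

  <!-- Service: where the implementation can be reached -->
  <service name="HelloService">
    <port name="HelloPort" binding="tns:HelloBinding">
      <soap:address location="http://example.org/hello"/>
    </port>
  </service>
</definitions>
```

The port type is the part the parsing libraries below are most often used to introspect.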

There are two versions of WSDL you may have to work with: WSDL 1.1 and WSDL 2.0.
Although WSDL 2.0 is conceptually better (in particular because of MEP – Message Exchange Patterns), WSDL 1.1 remains the most used. Very few tools and libraries support WSDL 2.0 (Axis 2 does, CXF does not, JAX-WS did not plan to support it). And it must be said that the two versions differ in their approach, and some concepts do not match at all.

And within WSDL 1.1, there are further distinctions, such as the binding style (document/literal, wrapped, RPC…).
Information about this can be found on the internet.

Parsing solutions

For WSDL 1.1, the most famous solution is WSDL4j.
This library provides an easy (and light!) solution to parse WSDL definitions. Its only limitation, in my opinion, is that it is not easy to introspect XML schemas with it.

For WSDL 2.0, there is Apache Woden.
This library makes it possible to parse WSDL 2.0 definitions and to convert WSDL 1.1 definitions to WSDL 2.0 (again, both versions are very different).

Then, there is a more recent project called EasyWSDL.
Versions 1.x and 2.x aimed at providing a single API to parse and edit both WSDL 1.1 and WSDL 2.0 definitions. Unfortunately, this API makes heavy use of Java generics. On the other hand, it is easy to navigate XML schemas with it. Version 3.x is still in incubation but took a very different turn. To simplify the API, and because it was ambiguous, it was decided to separate the APIs for the various versions of the specification. For the moment, EasyWSDL 3.x only supports WSDL 1.1. The API is much simpler, and it is still easy to navigate XML schemas. Besides, it can execute XPath queries on a macro-document (the main document and all its imports seen as a single virtual document). However, it lacks some helpers to work with imports (writing XPath queries is not very user-friendly).

Finally, there is the EMF metamodel for WSDL.
The metamodel is provided by the WTP project at Eclipse. It can only parse WSDL 1.1. In fact, I found out that it relies on and extends WSDL4j. One of its main improvements is that it can introspect XML schemas. To use it, you have to put the EMF libraries and some WTP libraries in your dependencies:

  • org.eclipse.xsd: the metamodel for XML schemas.
  • org.eclipse.wst.wsdl: the metamodel for WSDL.
  • javax.wsdl: a wrapper for WSDL4j.

Reading a WSDL with the WSDL metamodel

/**
 * Finds all the port types declared in the WSDL or in its imported documents.
 * @param emfUri an EMF URI pointing to a WSDL definition
 * @return a non-null list of port types
 */
List<PortType> findAllPortTypes( URI emfUri ) {

	// Register the basic elements of the specification
	ResourceSet resourceSet = new ResourceSetImpl();
	resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put( "xml", new XMLResourceFactoryImpl());
	resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put( "xsd", new XSDResourceFactoryImpl());
	resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put( "wsdl", new WSDLResourceFactoryImpl());
	resourceSet.getPackageRegistry().put( XSDPackage.eNS_URI, XSDPackage.eINSTANCE );
	resourceSet.getPackageRegistry().put( WSDLPackage.eNS_URI, WSDLPackage.eINSTANCE );

	// Register the required extensions
	// None here

	// Load the main document
	Resource resource = resourceSet.getResource( emfUri, true );
	Definition mainDefinition = (Definition) resource.getContents().iterator().next();

	// Add the initial definition
	Set<Definition> definitions = new HashSet<Definition> ();
	definitions.add( mainDefinition );

	// Process the imports recursively
	processImports( mainDefinition.getImports(), definitions );

	// Get all the port types
	List<PortType> portTypes = new ArrayList<PortType> ();
	for( Definition def : definitions ) {
		for( Object o : def.getPortTypes().values())
			portTypes.add((PortType) o );
	}

	return portTypes;
}

/**
 * Finds the definitions from imports and processes them recursively.
 * @param imports a map of imports (see {@link Definition#getImports()})
 * @param definitions a list of definitions, found from import declarations
 */
private static void processImports( Map<?,?> imports, Collection<Definition> definitions ) {

	for( Object o : imports.values()) {

		// Case "java.util.List"
		if( o instanceof List<?> ) {
			for( Object oo : ((List<?>) o)) {
				Definition d = ((Import) oo).getEDefinition();
				if( d != null && ! definitions.contains( d )) {
					definitions.add( d );
					processImports( d.getImports(), definitions );
				}
			}
		}

		// Case "org.eclipse.wst.wsdl.Definition"
		else if( o instanceof Definition ) {
			Definition d = (Definition) o;
			if( ! definitions.contains( d )) {
				definitions.add( d );
				processImports( d.getImports(), definitions );
			}
		}
	}
}

XML schemas can also be accessed with this solution.
This is possible thanks to the metamodel for XML schemas.
As an example, here is how to parse an XML schema with this metamodel. Note that this metamodel is not just an EMF library: it is also bridged with a DOM model. In fact, this is also true for the WSDL metamodel (which is generally not the case with EMF metamodels).

/**
 * Loads an XML schema.
 * <p>
 * The returned object already supports inclusions, which means there is no need
 * to get the imports and parse them.
 * </p>
 *
 * @param emfUri an EMF URI
 * @return an instance of {@link XSDSchema}
 */
public static XSDSchema loadXmlSchema( URI emfUri ) {

	// Register the basic elements
	ResourceSet resourceSet = new ResourceSetImpl();
	resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put( "xml", new XMLResourceFactoryImpl());
	resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put( "xsd", new XSDResourceFactoryImpl());
	resourceSet.getPackageRegistry().put( XSDPackage.eNS_URI, XSDPackage.eINSTANCE );

	// Load the resource
	Resource resource = resourceSet.getResource( emfUri, true );
	return (XSDSchema) resource.getContents().iterator().next();
}

Creating a WSDL with the WSDL metamodel

Like most standards, WSDL supports extensions.
In this sample, I use an extension for BPEL.

Definition createWsdlArtifact( String newWsdlUrl ) {

	// Initial data we got from another WSDL definition
	PortType portType = this.portTypePage.getPortType();
	Definition businessDefinition = (Definition) portType.eContainer();
		
	// The new definition to create
	Definition artifactsDefinition = WSDLFactory.eINSTANCE.createDefinition();
	artifactsDefinition.setTargetNamespace( businessDefinition.getTargetNamespace() + "Artifacts" );

	// Hack for the role: we need to manually define the namespace prefix for the TNS of the business WSDL
	artifactsDefinition.getNamespaces().put( "tns", businessDefinition.getTargetNamespace());

	// WSDL import
	Import wsdlImport = WSDLFactory.eINSTANCE.createImport();
	wsdlImport.setLocationURI( newWsdlUrl );
	wsdlImport.setNamespaceURI( businessDefinition.getTargetNamespace());
	artifactsDefinition.addImport( wsdlImport );

	// Partner Link Type
	PartnerLinkType plType = PartnerlinktypeFactory.eINSTANCE.createPartnerLinkType();
	plType.setName( portType.getQName().getLocalPart() + "PLT" );

	Role plRole = PartnerlinktypeFactory.eINSTANCE.createRole();
	plRole.setName( portType.getQName().getLocalPart() + "Role" );
	plRole.setPortType( portType );
	plType.getRole().add( plRole );
	plType.setEnclosingDefinition( artifactsDefinition );
		
	// This is an extension, here is how to add it
	artifactsDefinition.getEExtensibilityElements().add( plType );
	return artifactsDefinition;
}

And here is how to write it.

void writeDefinition( Definition def, File targetFile ) throws IOException {
	
	// Register the basic elements of the specification
	ResourceSet resourceSet = new ResourceSetImpl();
	resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put( "xml", new XMLResourceFactoryImpl());
	resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put( "xsd", new XSDResourceFactoryImpl());
	resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap().put( "wsdl", new WSDLResourceFactoryImpl());
	resourceSet.getPackageRegistry().put( XSDPackage.eNS_URI, XSDPackage.eINSTANCE );
	resourceSet.getPackageRegistry().put( WSDLPackage.eNS_URI, WSDLPackage.eINSTANCE );

	// Register the required extensions
	resourceSet.getPackageRegistry().put( PartnerlinktypePackage.eNS_URI, PartnerlinktypePackage.eINSTANCE );
	
	// Create a resource...
	URI emfUri = URI.createFileURI( targetFile.getAbsolutePath());
	Resource resource = resourceSet.createResource( emfUri );
	resource.getContents().add( def );
		
	// ... and save it
	// See {@link org.eclipse.emf.ecore.xmi.XMLResource} for examples of save options
	Map<Object,Object> saveOptions = new HashMap<Object,Object> ();
	resource.save( saveOptions );
}

Conclusion

I have been working with WSDL for several years. And to be honest, I had never thought about using the WSDL metamodel outside Eclipse. I knew it was there, I knew how to work with EMF. But for me, it was more a tool than a real library. And somehow, this is true. But the fact is that it is not complicated to use. And it can also work in standalone applications, not only in Eclipse.

So, this may be a solution to consider, depending on your needs.


Some weeks ago, I was looking for an OS X GitHub client. Not a client like the official GitHub app, which is one of the best OS X apps I have ever seen and which mainly deals with raw git stuff, but one that would let me access my repositories, organizations, issues and gists. My searches returned nothing really exciting… Since I was looking for something new to develop, I started to create a simple application focused on my needs.
I shared my idea and, after a few nights of coding, I sent the first prototype to some Twitter geeks to get their feedback. The feedback was really exciting (thanks @k33g_org and @aheritier, you encouraged me a lot!), and most of the beta testers told me I should submit the application to the Mac App Store and make it a paid application.

So, here we are! QuickHub has finally been validated by Apple, and I have chosen to try to sell it at the lowest possible price ($0.99).

The first version of QuickHub sits in the OS X status bar and gives you direct access to:
- Your Repositories
- Your Organizations/repositories
- Your Issues
- Your Gists

It also notifies you through Growl when something changes in the previously mentioned items. For now, clicking a menu item opens it in your browser. It is quite simple, but the initial idea was really to provide something that quickly gives you access to your GitHub stuff.

While Apple was reviewing QuickHub, I started to add some features to it. The 1.1 version will (hopefully) allow you to do more things, especially create gists directly from QuickHub, preview some artifacts and enjoy a better user experience thanks to an improved interface… I hope to publish it in the next few days.

For now, any feedback is appreciated. You can comment here, on my personal Twitter account @chamerling, or on the official QuickHub one, @quickhubapp. Share it with your friends/co-workers/followers and let’s see what happens. You can go directly to the Mac App Store or have a look at the QuickHub Web site.

Christophe



Last week was the first annual review meeting of the Play project, which I have been working on for a year. I am involved at several levels in this project, from architecture to software integration and quality. On my side, the goal is to provide an efficient software infrastructure for event actors, i.e. to build an Event Driven Architecture based on the Petals Distributed Service Bus. There are other points to be developed, especially all the platform governance stuff and Service Level Agreements for events, what we call Event Level Agreement.

We showed several things to the European Commission reviewers and we were also able to show an early prototype (this one was originally planned to be shown at mid-project, i.e. in 6 months…). I made a video capture of the demo, which really needs to be explained…

  • There is the idea of a marketplace for events: the event marketplace. From there, users are able to subscribe to things they are interested in. For now it is just a simple Web application which subscribes, on behalf of the user, to events through topics. This subscription is sent to the Play platform using Web standards; here we use OASIS Web Services Notification to create this communication link.
  • Events are collected from several sources by event adapters. In the video, we can see the user setting his FB status, which is collected by the Play system and transformed into a notification published to the platform. There are also some Twitter and Pachube adapters, which collect data in real time and publish it to the platform. Here again, we use Web standards for adapter communication.
  • Now, what happens in the video? The user logs in to the event marketplace and subscribes to FB events. The user then publishes something to his FB wall (note that it is not mandatory to have the same user in the event marketplace and in FB: a user can subscribe to FB events and receive events from all the FB statuses collected from the FB application users). After the event propagation delay, the event marketplace displays the event to the user.
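For reference, an OASIS WS-Notification Subscribe request roughly looks like the sketch below. The topic name and endpoint addresses are made up for the example, and this only follows the shape of the WSN base notification messages, not the actual Play messages:

```xml
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:wsnt="http://docs.oasis-open.org/wsn/b-2"
               xmlns:wsa="http://www.w3.org/2005/08/addressing"
               xmlns:tns="http://play.example.org/topics">
  <soap:Body>
    <wsnt:Subscribe>
      <!-- Where the notifications should be delivered -->
      <wsnt:ConsumerReference>
        <wsa:Address>http://marketplace.example.org/notification-consumer</wsa:Address>
      </wsnt:ConsumerReference>
      <!-- The topic the user is interested in -->
      <wsnt:Filter>
        <wsnt:TopicExpression Dialect="http://docs.oasis-open.org/wsn/t-1/TopicExpression/Simple">
          tns:FacebookStatus
        </wsnt:TopicExpression>
      </wsnt:Filter>
    </wsnt:Subscribe>
  </soap:Body>
</soap:Envelope>
```

Using such a standard message means any WSN-compliant producer can plug into the platform.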

So, do we need such machinery to do things like that? No, not if you just want yet another simple mashup portal. But several components are under active development: storage and processing. Yes, we store all events in the Play platform. This storage will be huge, but it is one of the project goals: providing efficient and elastic storage in a P2P way. The need for this storage comes with the other important aspect of the project: Complex Event Processing. We will soon be able to create complex rules on top of events, and to generate notifications based on both past and real-time notifications, because we have efficient storage and real-time processing inside the platform. I am not an expert in this domain, so I cannot give more details about that point, but the capabilities are huge! For example, we could express something like "Hey Play, can you send me an SMS when my favorite punk rock band is playing nearby and I am not on a business trip and X and !Y or Z". All of this intelligence comes from the processing of the various sources I have been pushing into the platform for months: Twitter, FB, last.fm and other data providers.

Now let’s take some time to work on my OW2Con talk. The session name is pretty cool: Open Cloud Summit Session.



For those who did not follow GMF Tooling development over the last year, let’s say that you missed a complex history. Since most of the contributors had changed, GMF Tooling had trouble setting up a new efficient leadership, had trouble providing builds, did not make it into the Indigo release train, and could not provide a release compliant with Indigo… That was a sad part of GMF Tooling’s history! But it is now over. Here are the recent accomplishments that bring GMF Tooling back to active life:

GMF Tooling 2.4.0 “for Indigo” released

A lot of people were waiting for it, and it has finally happened: GMF Tooling (finally) has a release that works with the Eclipse Indigo release! It is mainly made of bug fixes and compatibility improvements. Here is the p2 repository: http://download.eclipse.org/modeling/gmp/gmf-tooling/updates/releases/

A new lead: Michael “Borlander” Golubev

After lots of mails, the GMF team, helped by the Modeling PMC, was able to nominate a new lead to oversee the GMF Tooling contributors and development: Michael Golubev, often known as “borlander” on bugs and forums, who works for Montages as a full-time developer on GMF Tooling. He has also been the lead of UML2 Tools.

A simple build process thanks to Tycho, with builds hosted on hudson.eclipse.org

GMF Tooling now has a Tycho build, which is far easier to maintain and run than the legacy one. Contributors can now run tests very easily with a “mvn clean install” to ensure their work did not break anything, which makes contributing much easier. Moreover, the build is hosted on hudson.eclipse.org, so it is easy and transparent to get an idea of how healthy the code is. Also, moving to continuous integration on Eclipse servers makes it possible to produce builds that are equivalent to the ones that will be released (including signing and all the necessary Eclipse stuff). So building a release is no more difficult than building a snapshot.

GMF Tooling is back in the Modeling discovery service

The Modeling Discovery wizard appears when you download the Eclipse Modeling package, to suggest some projects to install and use. GMF Tooling just got back into it as I am writing this post!

A guarantee that GMF Tooling will make it into the Juno release train

We also made efforts to ensure the future of GMF Tooling will be less chaotic than it was for Indigo. We have already done most of what is necessary to get GMF Tooling into the Juno release train. So, no stress this year! More details here.

Improved documentation

See this effort in my previous post. This is an ongoing work in progress, so feel free to contribute directly by making the wiki easier to navigate.

So… what’s next?

A lot of things, plus whatever the community will think about contributing. The project plan for GMF Tooling 3.0 (yes, 3.0!) is not finished yet, but here are some key objectives:

  • Easier and more intuitive Tooling – with a high-level graphical editor to define your own graphical editor (a “meta-”editor)
  • Improve integration/collaboration with other Modeling projects (EEF, XText…)
  • Move to Git
  • Enroll more contributors in GMF Tooling development
  • Simplify generator code
  • Extensibility of generator and tooling to make it easy to add support for new things in GMF Tooling from 3rd-party bundles.

I think GMF Tooling has just achieved a major step, and I bet this is the beginning of a new, leaner era for Graphical Modeling!


Back on SoftShake

In: Eclipse

28 Oct 2011

SoftShake 2011 took place several weeks ago now! It was a very nice conference, quite well organized, with different tracks making it easy to always find something interesting to learn.

Some sessions I liked

Before speaking of the sessions I liked, I have to say that there were a lot of sessions in just 2 days, and I was busy talking with people during some of them. So I missed most of the sessions, maybe some very good ones. This is therefore not a “best of all sessions”, but more a “best of the 9 sessions I saw”.

Kudos to Hamlet D’Arcy, who presented how compile-time annotations, such as the ones proposed by the Lombok project, are processed by compilers to modify the internal AST model of your code and add behavior to it automatically. It was interesting, simple to understand, and easy to see how it applies to a developer’s everyday life.

I also liked the presentation “Data Grids vs Databases” by Galder Zamarreño from the Infinispan team. I am not very comfortable with all this data-driven stuff, but after this presentation, I now know when to use a Data Grid or a Database. However, I am still not sure how all these things compare to object databases, such as Objectivity.

And I also found the presentation from Sébastion Douche interesting. It told the story of the product and team he manages: how his team leverages Git, how they industrialize builds, and everything they do to make their working life easier. It is always good to hear that kind of tips.

Our sessions

Here are the slides of my “Modeling with Eclipse” presentation, with materials available on GitHub:

I enjoyed presenting it a lot, although my laptop took a long time between two operations, so the demonstrations ran longer than expected. I counted 26 attendees. I hope I gave them enough examples to understand how powerful the Modeling ecosystem at Eclipse is, and how productive the available tools are.

And here are the slides from my colleague Jean-Christophe Reigner (PetalsLink Chief Services Officer), which he used to explain what an ESB is, why and when it is useful, and to show some concrete ESB use cases:

I liked his presentation; it was quite clear, and I learned more about ESBs from a general IT architect’s point of view. I think the people in the room felt the same.

As a speaker

It was also really cool to be there as a speaker, since the organizing team did everything necessary to keep speakers happy: providing help and chocolate during the conference, and inviting us to a very good restaurant where we could discover, for those who did not already know it, Swiss gastronomy.

Swiss Gastronomy Part I: "La fondue moit-moit"

Swiss gastronomy Part II: "Les meringues double-creme"

So, I hope I’ll make it again for SoftShake 2012!


Almost true… In fact, Petals DSB uses and extends Petals ESB in several ways. When I started thinking about extending the Enterprise Service Bus, it was just to avoid all the JBI stuff at the management level, i.e. to use a real, simple and efficient API. So I added many management features exposed as Web services: bind external services, expose internal services (oh yes, bind + expose = proxy), get endpoints, activate things, etc.
Another goal was to avoid using closed protocols for inter-node communication: the ESB uses at least three ports for inter-node communication (JMX, NIO and SOAP). So why not use a single open protocol like SOAP, on a single port? That is what I did: I changed some implementations so that all services use the same port and the same protocol.

All this has been developed with a focus on extensibility and easier development. This is mainly because the ESB can be hard for newcomers to extend; there are so many things inside… Today the DSB is not only a Distributed Service Bus, it is also a framework, so that developers can easily extend it without needing to know how it works internally. Some examples? Want to expose a kernel service? Add the JAX-WS @WebService annotation. Want to subscribe to Web service notifications? Annotate your Java method with @Notify. Want to be notified about new endpoints? Add the @RegistryListener annotation. That’s all: you do not have to look for the Web service server to expose your service, nor read all the WSN documentation to receive notifications… Simple and efficient.
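The annotation-driven style described above can be illustrated with a self-contained sketch. Note that the `@Notify` annotation below is a local stand-in defined for the example, not the actual DSB API; the point is only to show how a framework can scan methods via reflection and wire annotated handlers automatically:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// A stand-in for the DSB's @Notify annotation (hypothetical, for illustration).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Notify {
    String topic();
}

// A user class: annotating a method is all that is needed to receive events.
class EndpointMonitor {
    String lastEvent;

    @Notify(topic = "new-endpoint")
    public void onNewEndpoint(String message) {
        this.lastEvent = message;
    }
}

// A minimal dispatcher, standing in for the framework plumbing:
// it scans an object for @Notify methods and invokes the matching ones.
class NotificationDispatcher {
    static void dispatch(Object subscriber, String topic, String message) {
        try {
            for (Method m : subscriber.getClass().getMethods()) {
                Notify n = m.getAnnotation(Notify.class);
                if (n != null && n.topic().equals(topic)) {
                    m.invoke(subscriber, message);
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

With such a dispatcher, publishing an event on the “new-endpoint” topic invokes EndpointMonitor.onNewEndpoint without the subscriber ever touching the notification machinery, which is exactly the comfort the DSB annotations aim at.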

There are other things you can do, mostly all is detailed in the DSB page.


Filed in: PEtALS Tagged: dsb, Enterprise service bus, esb, Java, Petals ESB, petalslink, Service-oriented architecture, soap, Web service

This year again, I submitted a talk proposal to the OW2 annual conference and it has just been approved by the OW2 management office.

While last year I spoke about some conceptual things around the Distributed Service Bus and the Cloud, this year I will go one step further with some live demonstrations not only dealing with service bus stuff, but also with some BPM tools and the Cloud stack we actively develop in the research team at PetalsLink. Here is my talk proposal:

All services are moving to the Cloud, and so are business processes. In this talk, we will show how to create collaborative business processes using an open source SaaS BPMN Editor. But designing business processes is not enough; why not run them in the Cloud? We will see that we can rely on a completely Cloud-aware SOA software infrastructure combining several open source solutions, such as a Service Bus and an IaaS framework. The resulting ‘Cloud Service Bus’ allows the integration of in-house services in order to benefit from Cloud-based features such as elasticity, load balancing, service clustering and migration. This Cloud Service Bus will serve as the runtime basis of the business processes, producing a Petals Cloud Stack solution. All in the Cloud, all open source!

I really hope to have time to work on some cool things to add more foggy-cloudy stuff and have things running on a real cloud infrastructure. I have many ideas in my mind these days and it is really really really cool.

Oh and I will also talk a bit about what we are currently building in the Play FP7 project. We have to show things in two weeks at the European Commission and these things are really interesting to share with OW2 attendees and staff.

BTW, I think that there is some free beer social event this year at OW2Con, see you there of course!


Filed in: Cloud, Conference Tagged: Business process, Cloud computing, dsb, esb, Open source, ow2, ow2con, petalslink, Service-oriented architecture, servicebus, Software as a service

Listing files on a Mediafire account

In: Uncategorized

9 Oct 2011

Here is the story.
I have a free account on Mediafire, where I host some files to share. This site provides a nice user experience, both for those who download and for those who share files with it (the file management is really good).

The thing is, I would like to be able to list my files from an external application. It is just a way to retrieve my links so I can share them; people then go to Mediafire to download the files. Mediafire provides an API, but it does not support this feature.

After a quite long search, I finally found a way to list my files.
It was quite hard, because authentication is required and because the files are not listed in the HTML source but are added entirely to the page with JavaScript. So, here is the solution (implemented in Java). I used HtmlUnit for this, but it should also work with Cobra.

import java.io.IOException;
import java.net.MalformedURLException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import com.gargoylesoftware.htmlunit.BrowserVersion;
import com.gargoylesoftware.htmlunit.FailingHttpStatusCodeException;
import com.gargoylesoftware.htmlunit.IncorrectnessListener;
import com.gargoylesoftware.htmlunit.JavaScriptPage;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlImageInput;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import com.gargoylesoftware.htmlunit.html.HtmlPasswordInput;
import com.gargoylesoftware.htmlunit.html.HtmlTextInput;

/**
 * Lists the file links associated with a given Mediafire account.
 * @author Vincent Zurczak
 */
public class MediafireTest {

	/**
	 * @param args
	 */
	public static void main(String[] args) {

		// Create a web client
		final WebClient webClient = new WebClient( BrowserVersion.FIREFOX_3_6 );
		webClient.setThrowExceptionOnScriptError( false );
		webClient.setCssEnabled( false );
		webClient.setIncorrectnessListener( new IncorrectnessListener() {
			@Override
			public void notify(String arg0, Object arg1) {
				// Nothing
			}
		});

	    try {
			final HtmlPage page = webClient.getPage( "http://www.mediafire.com/" );
			System.getProperties().put( "org.apache.commons.logging.simplelog.defaultlog", "fatal" );

			// Log in
			final HtmlForm form = page.getFormByName( "form_login1" );
			HtmlTextInput emailField = form.getInputByName( "login_email" );
			emailField.setValueAttribute( "your email" );

			HtmlPasswordInput pwdField = form.getInputByName( "login_pass" );
			pwdField.setValueAttribute( "your password" );

			final HtmlImageInput button = form.getInputByName( "submit_login" );
			HtmlPage page2 = (HtmlPage) button.click();

			// Then, get the JS script to extract the links
			// We parse the script rather than the modified HTML (parsing the latter is something I did not manage to do)
			JavaScriptPage jsPage = webClient.getPage( "http://www.mediafire.com/js/myfiles.php/?45144" );
			// 45144: seems to be the same ID for all (anonymous and registered account)

			Pattern pattern = Pattern.compile( "es\\[\\d+\\]=Array\\(([^;]*)\\);" );
			Matcher m = pattern.matcher( jsPage.getContent());
			while( m.find()) {

				String[] parts = m.group( 1 ).split( "," );
				String link = removeSurroundingQuotes( parts[ 3 ]);
				String filename = removeSurroundingQuotes( parts[ 5 ]);
				System.out.println( filename + ": http://www.mediafire.com/download.php?" + link );
			}

		} catch( FailingHttpStatusCodeException e ) {
			e.printStackTrace();

		} catch( MalformedURLException e ) {
			e.printStackTrace();

		} catch( IOException e ) {
			e.printStackTrace();

		} finally {
			webClient.closeAllWindows();
		}
	}

	/**
	 * Removes the surrounding quotes of a string.
	 * <p>
	 * If the string is null, if its length is less than 2,
	 * if the first and last characters differ, or if the first
	 * character is neither a single quote nor a double quote,
	 * then the original string is returned.
	 * Otherwise, the string without its first and last characters is returned.
	 * </p>
	 *
	 * @param s a string
	 * @return the string without its surrounding quotes, or the original string
	 */
	private static String removeSurroundingQuotes( String s ) {

		String result;
		if( s == null )
			result = s;
		else {
			int length = s.length();
			if( length < 2 )
				result = s;
			else if( s.charAt( 0 ) != s.charAt( length - 1 ))
				result = s;
			else if( s.charAt( 0 ) != '\'' && s.charAt( 0 ) != '"' )
				result = s;
			else
				result = s.substring( 1, s.length() - 1 );
		}

		return result;
	}
}
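The core of the extraction above is the regular expression matching the `es[i]=Array(...)` assignments in the generated script. Here is a standalone demonstration on a fabricated snippet; the field values are made up, only the pattern and the field positions (3 for the download key, 5 for the file name) come from the code above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        // A fabricated excerpt mimicking the shape of Mediafire's myfiles.php script
        String js = "es[0]=Array(1,2,3,'abc123',4,'report.pdf');"
                  + "es[1]=Array(1,2,3,'xyz789',4,'photo.png');";

        // Same pattern as in the article: capture everything between the parentheses
        Pattern pattern = Pattern.compile("es\\[\\d+\\]=Array\\(([^;]*)\\);");
        Matcher m = pattern.matcher(js);
        while (m.find()) {
            String[] parts = m.group(1).split(",");
            // Strip the surrounding single quotes, like removeSurroundingQuotes() does
            String link = parts[3].replaceAll("^'|'$", "");
            String filename = parts[5].replaceAll("^'|'$", "");
            System.out.println(filename + " -> " + link);
        }
    }
}
```

Running this prints one line per array entry (e.g. `report.pdf -> abc123`), which is exactly the file name / download key pairing the program above turns into download URLs.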

