I recently committed to the Eclipse BPEL Designer some patches that were contributed through our Bugzilla. That fact itself is not important. However, I used the Git apply command and kept the author's name in the commit. I had never used this feature before, but it is extremely powerful, and very convenient for tracking activities and proposing new committers on a project.

I am leaving some notes in this article (as an example, and for my own future reference).
Notice that the apply command did not work with EGit; I had to use the command line. I guess it will be fixed in a future version.

  • First, we need a patch, given as a diff file, e.g. in a bug entry.
  • As a committer, you should apply and check this patch. So, you first need to copy the patch into your (cloned) Git repository.
  • Get some stats about the patch:
    git apply --stat myPatchFile
  • Make sure the patch can be applied:
    git apply --check myPatchFile
  • Apply the patch:
    git apply myPatchFile
  • Delete the patch file.

Then, you can check that the patch works and commit it, before pushing the modification.
One important thing is to keep the author's name in the commit. The author and the committer are two different roles, and both can be kept in the commit and in the history. Here is an example on the command line:

git commit -a --author="authorUsername <author.mail.address@sth.com>"

After the push, both the author and committer names will appear in the repository.
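Before pushing, you can also check it locally; for example, the following git log format string (just an illustration) displays both names side by side:

git log --format="%h | author: %an | committer: %cn"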

The author and committer names appear in the history

I don’t know if everyone in Eclipse projects uses this feature, but from the community perspective, this is very interesting (IMO).


I have 2 applications that should work together.
These applications are end-user applications. The link between them is in fact the system's clipboard (this is a Windows 7 use case, but it might be encountered on other platforms). Basically, the idea is that the user selects an image in the first application and copies it to the clipboard. The second application should detect this new image and process it.

By browsing the web, I found out that there is no perfect solution. Sun did not provide a complete solution for this in Java because there were portability issues. One alternative was to use the java.awt.datatransfer.ClipboardOwner interface. An implementing class acquires the ownership of the clipboard by putting something into it. When another application copies something into the clipboard, the previous owner is notified that it has lost the ownership. The implementation would look like:

import java.awt.datatransfer.Clipboard;
import java.awt.datatransfer.ClipboardOwner;
import java.awt.datatransfer.Transferable;

public class ClipboardWatcher implements ClipboardOwner {

	/**
	 * @see java.awt.datatransfer.ClipboardOwner
	 * #lostOwnership(java.awt.datatransfer.Clipboard, java.awt.datatransfer.Transferable)
	 */
	@Override
	public void lostOwnership( Clipboard clipboard, Transferable contents ) {
		// Get the clipboard's content and process it if required
		// ...

		// Acquire the ownership again by putting the same content back
		// (the same content because otherwise, copy/paste would not work anymore for other applications)
		// ...
	}
}

This is not very convenient.
I also discovered it did not work very well when several applications are running.
So, I implemented a solution with a thread which polls the clipboard regularly. Here is a complete sample with an example of image processing.

import java.awt.Image;
import java.awt.Toolkit;
import java.awt.datatransfer.Clipboard;
import java.awt.datatransfer.ClipboardOwner;
import java.awt.datatransfer.DataFlavor;
import java.awt.datatransfer.StringSelection;
import java.awt.datatransfer.Transferable;
import java.awt.datatransfer.UnsupportedFlavorException;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

import javax.imageio.ImageIO;

/**
 * @author Vincent Zurczak
 */
public class ClipboardWatcher implements ClipboardOwner {
	
	static final File PREVIEW_DIRECTORY = new File( System.getProperty( "java.io.tmpdir" ), "Image-Store" );
	private final List<PreviewFileListener> listeners;
	private final SimpleDateFormat sdf;
	private final AtomicBoolean run;
	
	
	/**
	 * Constructor.
	 */
	public ClipboardWatcher() {
		 this.listeners = new ArrayList<PreviewFileListener> ();
		 this.sdf = new SimpleDateFormat( "yy-MM-dd---HH-mm-ss" );
		 this.run = new AtomicBoolean( true );
		 
		 if( ! PREVIEW_DIRECTORY.exists()
				 && ! PREVIEW_DIRECTORY.mkdir()) {
			 // TODO: process this case in a better way
			 System.out.println( "The preview directory could not be created." );
		 }
	}
	
	
	/**
	 * @param e
	 * @return
	 * @see java.util.List#add(java.lang.Object)
	 */
	public boolean addPreviewFileListener( PreviewFileListener e ) {
		synchronized( this.listeners ) {
			return this.listeners.add( e );
		}
	}


	/**
	 * @param o
	 * @return
	 * @see java.util.List#remove(java.lang.Object)
	 */
	public boolean removePreviewFileListener( PreviewFileListener o ) {
		synchronized( this.listeners ) {
			return this.listeners.remove( o );
		}
	}
	
	
	/**
	 * @see java.awt.datatransfer.ClipboardOwner
	 * #lostOwnership(java.awt.datatransfer.Clipboard, java.awt.datatransfer.Transferable)
	 */
	@Override
	public void lostOwnership( Clipboard clipboard, Transferable contents ) {
		// Acquiring the ownership again does not work very well for a periodic check, so nothing is done here
	}
	
	
	/**
	 * Starts polling the clipboard.
	 */
	public void start() {
		
		// Define the polling delay
		final int pollDelay = 5000;
		
		// Create the runnable
		Runnable runnable = new Runnable() {
			@Override
			public void run() {
				
				Transferable contents = null;
				while( ClipboardWatcher.this.run.get()) {
					
					// Get the clipboard's content
					Clipboard clipboard = Toolkit.getDefaultToolkit().getSystemClipboard();
					try {
						contents = clipboard.getContents( null );
						if ( contents == null 
								|| ! contents.isDataFlavorSupported( DataFlavor.imageFlavor ))
							continue;
						
						Image img = (Image) clipboard.getData( DataFlavor.imageFlavor );
						saveClipboardImage( img );
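						// Replace the clipboard's content with a marker string, so that the
						// same image is not processed again on the next poll (this also makes
						// this watcher the clipboard's owner)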
						clipboard.setContents( new StringSelection( "The clipboard watcher was here!" ), ClipboardWatcher.this );
						
					} catch( UnsupportedFlavorException ex ) {
						ex.printStackTrace();

					} catch( IOException ex ) {
						ex.printStackTrace();
						
					} catch( Exception e1 ) {
						// We get here if we could not get the clipboard's content
						continue;
					
					} finally {						
					
						try {
							Thread.sleep( pollDelay );
							
						} catch( InterruptedException e ) {
							e.printStackTrace();
						}
					}
				}
			}
		};
		
		// Run it in a thread
		Thread thread = new Thread( runnable );
		thread.start();
	}
	
	
	/**
	 * Stops listening.
	 */
	public void stop() {
		this.run.set( false );
	}
	

	/**
	 * Saves the image residing on the clipboard.
	 */
	public void saveClipboardImage( Image img ) {

		if( img != null ) {	    	
			if( img instanceof BufferedImage ) {
				try {
					File file = new File( PREVIEW_DIRECTORY, this.sdf.format( new Date()) + ".jpg" );
					ImageIO.write((BufferedImage) img, "jpg", file );
					for( PreviewFileListener listener : this.listeners )
						listener.newCreatedFile( file );

				} catch ( IOException e ) {
					e.printStackTrace();
				}

			} else {
				System.out.println( "Unsupported image: " + img.getClass());
			}	
		}
	}
	
	
	/**
	 * An interface to listen for image detection.
	 */
	public interface PreviewFileListener {

		/**
		 * Notifies this instance that an image file was created from the clipboard.
		 * @param imageFile the image file that was saved on the disk
		 */
		void newCreatedFile( File imageFile );
	}
}
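For completeness, here is a minimal usage sketch (the listener body is just an illustration):

ClipboardWatcher watcher = new ClipboardWatcher();
watcher.addPreviewFileListener( new ClipboardWatcher.PreviewFileListener() {
	@Override
	public void newCreatedFile( File imageFile ) {
		System.out.println( "New image saved: " + imageFile );
	}
});

watcher.start();
// ... and when the application shuts down...
watcher.stop();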

There is another alternative for Windows platforms.
However, it was only implemented for an OSGi context. Notice that it should not be too difficult to adapt it for standalone Java applications. I read and understood most of the code, but I was a little bit lazy on this one. Check this blog entry for more details. The code is hosted on GitHub.


One more time, one more tiny (and maybe useful in some cases…) application with Play!.

WTF, my server is dead again!?

This app is a simple heartbeat manager looking at remote HTTP services and notifying you by email when something becomes unreachable. It uses the background Job feature of the Play framework and just does HTTP GETs on the specified list. That's all, that's simple, but it can be useful sometimes…
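To give an idea, here is a minimal sketch of such a job with Play 1.x; the class name, the target list and the notification logic are my own assumptions, not the actual heartbeat-manager code:

import java.util.Arrays;
import java.util.List;

import play.Logger;
import play.jobs.Every;
import play.jobs.Job;
import play.libs.WS;

// Runs every 5 minutes and checks that each target answers an HTTP GET
@Every("5mn")
public class HeartbeatJob extends Job {

	// Hypothetical target list: the real application reads it from its configuration
	private static final List<String> TARGETS = Arrays.asList(
			"http://example.com/service-1",
			"http://example.com/service-2" );

	@Override
	public void doJob() {
		for( String target : TARGETS ) {
			try {
				int status = WS.url( target ).get().getStatus();
				if( status >= 400 )
					notifyFailure( target, "HTTP " + status );

			} catch( Exception e ) {
				notifyFailure( target, e.getMessage());
			}
		}
	}

	private void notifyFailure( String target, String reason ) {
		// This is where the e-mail notification would be sent (e.g. with play.libs.Mail)
		Logger.warn( "Heartbeat failed for %s: %s", target, reason );
	}
}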

The code is located at https://github.com/chamerling/heartbeat-manager


Filed under: Java Tagged: GitHub, heartbeat, HTTP, Java, petalslink, playframework

I started to develop QuickHub some weeks ago by focusing on features, without taking into account security issues, such as handling critical information like login and password credentials… As mentioned in the GitHub developer pages, I started to use Basic Auth for all the requests QuickHub makes to retrieve data from GitHub. I started to think about moving to OAuth when a user told me that he had bought QuickHub but did not use it because of Basic Auth. He was afraid that I could steal his credentials and so have a look at his repositories and more. I totally understand this argument, so I started to think that it limits QuickHub's adoption by developers and that I should do something.

So, I discussed with GitHub guys through their support channel (here I want to say that I am really impressed by how fast and professional the GitHub support team is!). Of course, they also told me that they would never use QuickHub if OAuth was not provided. After discussions with them, I started to implement a workaround to provide OAuth support in QuickHub, by using the OAuth Web flow and adding some stuff to QuickHub.

Let's see how we can do that in a generic way, so that you can use it in your app if needed, since every serious Internet service also provides OAuth support… Note that I will not give many details about OAuth itself, but I will use some of its terms, so have a look at the OAuth documentation for more details.

Register a Web application

The first step is to register your application with the developer portal. Here you need to provide some information, such as the app name and its callback URL. Even if we are developing a desktop app and not a Web one, we need to be able to provide this HTTP-based callback URL. We will see in the next section what is inside this application. By registering our application, the service will generate and give us some keys to be used in the next steps, mostly to recognize us when calling the service.

Create a Web application

We need a Web application to handle callback calls from the service we want to use OAuth with. This application will only be used at authorization time, and it just has to be able to receive the service call, get the code sent by the service, then call the service again with the code and our application keys (there is a secret one to be used here). As a result, the service will send you back your OAuth token, to be used in all the future service calls. This token is used to authenticate your service calls, no more password stuff inside! Here is a quick sequence diagram showing all the exchanges between actors…

On the QuickHub side, I chose to create this Web application by using the Play Framework I already mentioned several times in this blog. I used the Heroku paas to provide the Web application publicly.

As shown in this Gist, the callback method is really simple (as usual with Play!): just get the code and call GitHub with it and your application keys. When all is done, just display a result page with a specific URL; here it starts with 'quickhubapp://oauth?', and we will see what that means in the next section. A sketch of such a callback follows.
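Since the Gist is not embedded here, here is a hedged sketch of what such a callback action can look like with Play 1.x; the controller name, the key constants and the token parsing are illustrative assumptions, not QuickHub's actual code:

import play.libs.WS;
import play.mvc.Controller;

public class OAuthCallback extends Controller {

	// Keys obtained when registering the application (placeholders)
	private static final String CLIENT_ID = "...";
	private static final String CLIENT_SECRET = "...";

	public static void callback( String code ) {
		// Exchange the temporary code against an OAuth token
		WS.HttpResponse resp = WS.url( "https://github.com/login/oauth/access_token" )
				.setParameter( "client_id", CLIENT_ID )
				.setParameter( "client_secret", CLIENT_SECRET )
				.setParameter( "code", code )
				.post();

		// The body looks like "access_token=...&token_type=..."
		String token = resp.getString().replaceAll( ".*access_token=([^&]*).*", "$1" );

		// Render a page whose button links to 'quickhubapp://oauth?...' with the token
		render( token );
	}
}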

Add custom URL handlers to your desktop app

Once the desktop application is authorized, our Web application redirects the user to a Web page which embeds a button targeting a URL starting with 'quickhubapp://oauth?'. Here you already understand that clicking such a link must trigger something special. The Cocoa framework allows developers to register specific URL handlers for their applications. So we need to register QuickHub handlers, so that OS X knows that every link with the quickhubapp scheme must be routed to QuickHub. This is as easy as shown in this Gist.

This code snippet just tells QuickHub to invoke the getUrl method when it receives a quickhubapp event. It is then up to the getUrl method to handle things. In QuickHub, I just extract the OAuth token in order to persist it locally for future use.

Done!

So now we are able to use OAuth Web flows for desktop applications. In a perfect world, every service provider would provide a real desktop flow, to be used without the need to create this additional Web application. In the real world, it is not the case, but as you can see, it can be worked around quite easily.

I will provide more technical details about the second section, 'Create a Web application', with real samples and code in the coming weeks.


Filed under: QuickHub, WebService Tagged: Application programming interface, GitHub, Hypertext Transfer Protocol, OAuth, petalslink, quickhub, Uniform Resource Locator

Hi dear Petals community!

Here is a big catch-up, for all who couldn't follow the news on www.petalslink.com ;)
These last three months have been quite busy, with the releases of the following:

Petals Studio 1.2: New and fresh, with lots of improvements and features, and compatibility with the latest component updates! All info and links here, or get straight to the download.

Petals ESB 4 Preview: a milestone of Petals ESB 4 has been released. Get a peek at the next major edition of Petals ESB!

Petals Components and Dev Kit: BC-SOAP and SE-POJO have been updated, as well as our Component Development Kit.

Petals BPM: discover our latest asset in SOA infrastructure and management! Petals BPM brings business process design through a light and slick GUI!

Stay tuned! Our new service persistence component, SE-ASE, and the release of Petals ESB 4 are coming soon!

Your Petals team


This is a blog post to sum up some thoughts I shared on the Nebula-dev mailing list with Wim Jongman, and later on Twitter with Zoltán Ujhelyi and Dave Carver, about naming build types at Eclipse.org and scheduling them.
It is a topic open to debate. My goal is to find out practices and names that are relevant and useful for the community (contributors and, mainly, consumers), and also to give food for thought about how to deal with builds.

Background

Historically, Eclipse has 4 to 5 classical qualifiers for binary artifacts:

  • Release
  • Maintenance
  • Stable
  • Integration
  • Nightly

This wording is specific to Eclipse; only Eclipse people understand the meaning of it. Even worse, some of these qualifiers are not used accurately. Although this is more or less official wording, I am not sure it is used by most projects.

This guy has driven a revolution in the way we build and deliver software

Now Eclipse.org provides continuous integration, to automate and manage build executions, and most builds happen whenever a change happens in your VCS. Continuous integration has deeply changed the way binaries are produced and made available to consumers. It is now much easier to get builds; lots of projects have a lag of less than 10 minutes between a commit and a build ready to be released.

That was the starting point of my thoughts, with the Nebula build job: why call "nightly" a job that runs any time a commit happens?

Requalifying binaries to make consumption clearer

As a producer of builds, here is my opinion on these qualifiers:

  • Release: The heartbeat of the product for consumers
  • Maintenance: is a release, but with a third version digit different from 0
  • Stable: is nothing but an integration build that was put on a temporary update site for the release train. And to be honest, I don't use it; I directly point the release train builder to the latest good continuous integration builds
  • Integration: The heartbeat of the product for developers, built on each commit
  • Nightly: what is in it? What does nightly mean at the same time for a developer in Ottawa and a developer in Beijing? Who cares that it is built nightly and not at 2pm CEST? For most projects, the nightly-built artifacts are not at all different from the ones that would have been built with the last commit. What is the added value of artifacts built during a nightly scheduled build over the latest artifact built from the last commit? Why build something every night if there was no commit for 3 months?

So I am in favor of removing Maintenance, Stable and Nightly. That gives:

  • Release
  • Integration

Wow, 2 kinds of builds, that reminds me of a tool… Oh yes! Maven! Maven has 2 types of binaries: releases and SNAPSHOTs. That's great, we finally arrive at the same conclusion as Maven, except that what Maven calls "snapshot" is called "integration" in the Eclipse terminology.

But now, let's consider the pure wording: why have 2 names for the same thing? What should we call a binary that is work in progress? An "integration" or a "snapshot"?

Let's be pragmatic: this report tells us there are about 9,000,000 Java developers in the world. This one tells us 53% of this population use Maven. Let's say that about 60% of Maven users understand the meaning of SNAPSHOT. That means 9,000,000 * 53% * 60% = 2,862,000 people know what SNAPSHOT means. Wayne confirmed to me that the size of the Eclipse community is 132,284 people, who might know the meaning of "integration" in an Eclipse update site name. That is the number of people who have an account on eclipse.org sites – wiki, forum, bugzilla, marketplace. Even if we assume that 100% of them understand the differences between the different qualifiers – and I am pretty sure that fewer than the 600 committers actually do – that makes "snapshot" a 21 times more popular and understood word than "integration" (2,862,000 / 132,284 ≈ 21.6).

So, Eclipse projects and update sites would be easier to understand and consume for the whole Java community by adopting the Maven wording, and using it on the websites and in update site names.
Following the Maven naming and the release/snapshot dogmas would make consumption easier, but also avoid duplication of built artifacts, and make things go more "continuously". Your project is on rails, going ahead with snapshots, and sometimes making stops at a release. That is also a step towards the present of software delivery (see how GitHub works): continuous improvement, continuous delivery.

Requalifying build jobs to make production clearer

So now let's speak about build management, no longer about delivery.

If you need several jobs to keep both quality and fast-enough feedback, then set up several jobs!

Dave Carver reminded me about some basics of Continuous Integration: keep short feedback loops, one build for each thing. That's true. If you have a "short" build for acceptance and a "long" build for QA, you need separated jobs. Developers need acceptance feedback, they also need QA feedback, but they surely cannot wait for a long build to get short-term feedback. Otherwise, they'll drink too much coffee while waiting, and it is bad for their health.


But do not create several builds until you have real needs (metrics can also help you see whether you do). If you have a 40-minute full build that rarely fails, with slow commit activity, and nobody synchronously waiting for this build to go ahead, then multiplying builds and separating reports could be expensive for a low return on investment. That is the case for GMF-Tooling: we have a 37-minute build with compile + javadoc + tests + coverage + signing, and we are currently quite happy with it; no need to spend more effort on it now. Let's see how we feel when there is a Sonar instance at Eclipse.org and we have enabled static analysis… Maybe it will then be time to split the job.

Avoid scheduling build jobs, it makes you less agile

Before scheduling a job that could happen on each commit, just think about how long the feedback loop between a commit and the feedback you'll get will be: it is the time between the commit and the clock event starting your job. It can be sooooo long! Why not start it on commit and get results as soon as possible? Also, why schedule and run builds when nothing may have changed between two schedule triggers?
I can only see one use case where scheduling job executions is relevant: when you have lots of build stuff in the pipe of your build server, and you have limited resources. Then, in that specific but common case, you want to set up priorities: you don't want a long-running QA job to slow down your dev team, which is waiting for feedback from a faster acceptance build. For this reason, I would use scheduling, but I would do so because I have no better idea of how to ensure the team will get the acceptance feedback when necessary; it is a matter of priority. Maybe investing in more hardware would be useful. Then you could stop scheduling, and get all your builds giving feedback as soon as they can. Nobody would wait for a schedule event to be able to move the project ahead.
As I said to Zoltán on Twitter: "The best is to have all build reports as soon as possible, as often as possible" (and of course only when necessary, don't build something that has already been built!). Scheduling often goes against those objectives, but it sometimes helps to avoid bottlenecks in the build queue, and then saves projects time.

Name it to know what it provides!

Remember Into the Wild or Doctor Zhivago... "By its right name"...

Ok, at Eclipse, there are "Nightly" jobs. I don't like this word. Once again, "nightly" is meaningless in a worldwide community. And the most important questions are "What does this build provide?" and "Why is it nightly, can't I get one on each commit?".
If this build is run on each commit, then don't call it "Nightly", because it feeds the consumer with false information. You can think about having both "acceptance" and "QA" jobs; then put that in their names rather than scheduling info, as that is far more relevant information.

Conclusion

Continuous integration has changed the way we produce and deliver software; we must benefit from it and adopt the good practices that come with it. Continuous integration is the first step towards what seems to be the present or near future of software delivery: continuous improvement and continuous delivery. We must not miss that step if we want to stay efficient.


Last week at OW2Con, the Talend CTO talked about their data and service integration solution. This sentence impressed me (almost this one, I am not sure it was so short):

We have more than 500 connectors!

Wow! Great, let's have a look at that! What is a connector? In the Petals ESB context, connectors are the bindings represented in the upper part of this well-known image:

Petals ESB

So, we have almost 8 connectors. Looks like we are poor… So let's look in detail at what a Talend connector is: http://www.talendforge.org/components/

Can you see that? A Talend component is almost an operation in a service, or let's say that it is an input/output from a data source. So such a service is not as generic as a Petals service, it is a specialized service, i.e. we could also have tons of "components" for Petals ESB; it just means that we would have to provide configuration artifacts for all the services that we find: Salesforce, Amazon EC2, S3, all the databases in the world, etc. Oh no, wait! Don't you see the Talend component at the bottom of the figure just above? Yes, we have a Talend connector, so we have 500+ components in Petals ESB :)

No, really, the Talend guys do incredible work and their recent press releases are really impressive. Congrats!

 


Filed under: PEtALS Tagged: esb, integration, ow2, petalslink, SOA, Talend

I was in Paris last week for the OW2 annual conference, and I gave a talk called "Petals BPM and the Cloud" during the Open Cloud Summit session (wow, what a name!). This talk was about showing that we have things running and ready to be published in the Cloud. As I said during my talk, the difficulty is not providing the SaaS layer; pushing a Web app to the Cloud is not so hard (and not so interesting). The interesting part is building the PaaS layer. In the current case, the PaaS will provide "Integration as a Service", or how we can use the Petals Service Bus to provide ways to integrate, orchestrate, manage and monitor business services.

My son is an open source fan

So let's go back to my talk, where I planned to show things working… Unfortunately, I was not able to show anything due to some low-resolution problems, and this was really a shame; next time I will prepare a video in case something like that happens. I am going to record these videos this week to show that we have interesting things under development: we can create business processes with Petals BPM and deploy them on the service bus in order to execute and monitor the process itself in a distributed way.

While waiting for these videos, here are the slides of my talk. They are sort of "zen" slides, so the talk I gave was really important to understand everything… So come and see me next time, or just send me comments.

For the other parts of the conference, as usual, there were really interesting presentations and discussions around OW2, open source and the Cloud. One fun thing I learnt was that OW2-JOnAS is used in the MS Azure Cloud solution as support for J2EE apps (may I also mention that Microsoft was a big sponsor of OW2Con? Yes, really, they gave money for an open source conference, that's fun). Well, there were so many interesting things that I cannot list them all here. But open source is really something companies should have a look at if they have not done it already; they will be surprised to see how active and professional the community behind it is.


Filed under: Cloud, Conference, PEtALS, SOA Tagged: Annual Conference, bpm, Cloud computing, Microsoft, Open source, ow, ow2, ow2con, petalslink, SOA, Software as a service, Web application

… or less! Heroku is defined as a "Cloud application platform". I just want to redefine it as an "Awesome Cloud application platform". This awesome platform provides a way to host and scale your application in the Cloud really easily, with 3 or 4 commands…

Since I am currently working on my talk at #OW2Con 2011 (coming later this week), dealing with BPM, Services and the Cloud, I wanted to host some Web services in several places. I never had time to test Heroku, but I took this precious time today. After looking at some examples, I created a Maven project template (no, I did not have time to create an archetype; maybe there is one somewhere) which uses Jetty and Apache CXF to expose JAXWS-annotated classes as Web services. So now, using Heroku to freely expose your services is as easy as:

  1. Sign up to Heroku
  2. Download the Heroku client for your platform
  3. Clone/fork the repository at https://github.com/chamerling/heroku-cxf-jaxws
  4. Add your own services (see the sketch below)
  5. Log in to Heroku: 'heroku auth:login'
  6. Create the app on Heroku: 'heroku create -s cedar'
  7. Push your services to Heroku: 'git push heroku master'. There is a git hook somewhere which automatically compiles and starts your application after you push it.
  8. Open your CXF services summary page: 'heroku open'
The default application name is a random one; you can rename it with 'heroku rename yournewname', but in my case I had an issue with the generated Web service endpoint name. So I suggest restarting your app after renaming (have a look at the 'heroku ps' command).
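For step 4, a service is just a JAXWS-annotated class; here is a minimal, hypothetical example (the class and method names are mine, not part of the template):

import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService
public class HelloService {

	// Exposed by CXF as a Web service operation
	@WebMethod
	public String sayHello( String name ) {
		return "Hello " + name + "!";
	}
}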
That’s all, that’s quick!

Filed under: Cloud, Java Tagged: Apache CXF, cloud, cxf, Git, Heroku, jaxws, jetty, petalslink, Programming, Web service

I am a big fan of Twitter, and last week's Devoxx live tweet wall (http://wall.devoxx.com/) just made me want to create an open source live tweet wall. So last Saturday night, I took some time to code it using the now well-known Play! framework, some Web sockets and Twitter4J, which supports the Twitter Stream API. This streaming API is really nice and provides a way to be notified almost instantly, according to what you decided to listen to. After less than one hour (most of the time was spent reading the Twitter4J documentation), the result is really fun. In the video below, I configured the wall to catch all the tweets containing the 'Apple' and 'Microsoft' terms (I am not an MS supporter, but OMG, there are lots of tweets about it… probably only jokes and bugs…), and tweets are displayed live on the wall (no, I am not moving anything; the page is populated through a Web socket).
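The heart of it is the Twitter4J streaming API; here is a minimal standalone sketch (the class name and the println are illustrative; the wall pushes the statuses to the page through a Web socket instead):

import twitter4j.FilterQuery;
import twitter4j.Status;
import twitter4j.StatusAdapter;
import twitter4j.TwitterStream;
import twitter4j.TwitterStreamFactory;

public class WallStream {

	public static void main( String[] args ) {
		// The OAuth credentials are expected in twitter4j.properties
		TwitterStream stream = new TwitterStreamFactory().getInstance();
		stream.addListener( new StatusAdapter() {
			@Override
			public void onStatus( Status status ) {
				System.out.println( "@" + status.getUser().getScreenName() + ": " + status.getText());
			}
		});

		// Be notified, almost in real time, of every tweet containing one of these terms
		stream.filter( new FilterQuery().track( new String[] { "Apple", "Microsoft" }));
	}
}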

The resulting prototype is quite simple but works, and since I am not a front-end developer at all, it uses Twitter Bootstrap as CSS… The code is available on GitHub at https://github.com/chamerling/play-twall and is not perfect at all, but it just works: the Twitter configuration is available in the Play! application.conf file, and the Twitter connection is created in the Bootstrap…

Looks like I should start creating a sort of "Coding a nice thing in one hour" collection…


Filed under: Java Tagged: Activity stream, GitHub, petalslink, Social network, twitter, twitter4j, wall, WebSocket
