Jan 30, 2016

How to export highlights and comments from PocketBook InkPad 840

Getting notes and highlights onto the computer is in fact a very valid use-case for an e-book reader such as the one I purchased, the InkPad 840. Sadly, there is no built-in functionality for this use-case, and the recommended PC software, Adobe Digital Editions, does not support such a feature either.

However, browsing the filesystem of the device I found that the highlights, notes and snapshots can be easily extracted.
If you use Windows, connect your PocketBook to the PC. In the file explorer, on the top menu bar (hit Alt to show it), click Tools / Folder settings. You have to enable 'Show hidden files' and disable the option that hides protected system files. Now you can browse the system files.
The highlights and comments for each book can be found in HTML format under the following path:

\system\config\Active Contents 

By the way, this is the InkPad 840:

Oct 21, 2014

JAX-WS client cookie manager - Customized cookie management

Customized cookie management with JAX-WS client  (JAX-WS RI - Metro specific)

For the tl;dr option, please GOTO the bottom of this page.

Once upon a time I had to connect to a stateful SOAP webservice. It was stateful because it cached some user-specific information in the session context and sent back a session cookie. Bad design or not, this is what I had to work with. The results of the calls depended on which user of the client application made them, so the client application had to maintain a session with the remote service for each user. This means the client stub was not exchangeable amongst users.
The inhabitants of the internet already know the answer to how to maintain the session:
     ((BindingProvider) port).getRequestContext()
         .put(BindingProvider.SESSION_MAINTAIN_PROPERTY, true);
It is a very convenient approach, simple as sliced bread.

Plot twist
With the above approach, a session can be maintained per stub. But as I mentioned, we need a session per user of the client application, so we need a stub per session. At around 160 open sessions we started getting OutOfMemoryError exceptions. We created a heap dump, and it turned out that one client stub takes up ~2.6 MB of memory. Yes, the SOAP service interface was quite fat. So we quickly came to the conclusion that having one stub per user is a no-go. We either need a pool of stubs with an LRU (Least Recently Used) or LFU (Least Frequently Used) cleanup algorithm to keep the number of stubs manageable, or to plug our own session manager into JAX-WS.
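The stub-pool option can be sketched with nothing but the JDK: a LinkedHashMap in access order gives you LRU eviction for free. This is an illustrative sketch, not our production code; the key and value types are placeholders for the user id and the heavyweight client stub.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU pool sketch: a LinkedHashMap in access order drops the least
// recently used entry once the pool exceeds its capacity.
public class StubPool<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public StubPool(int maxEntries) {
        // accessOrder = true: iteration order is least-recently-accessed first
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // called by put(); returning true evicts the eldest entry
        return size() > maxEntries;
    }
}
```

A pool created as `new StubPool<String, Object>(100)` keyed by user id would cap memory at roughly 100 stubs, evicting the least recently used one on overflow.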

One stub to rule them all
To use only one stub in a concurrent environment, calls to the client proxy must be thread safe. Thread safety of client proxies is not guaranteed by the JAX-WS specification[1]. But browsing the source code of the reference implementation[2] (which is also used by Glassfish) reveals that client proxies are in fact thread safe. The internal data structures created when issuing a request ensure that no request-specific information is stored in the stub itself; it is passed around in those data structures instead.

Finding a slot to stick my cookie jar
I could not find any feasible solution on the internet for how to set the cookie jar for the WS stub. I wanted to tell it: use this object to store and retrieve cookies, so I can provide my own policy. This is straightforward when you use an Apache HttpClient and define your own CookieStore[3]. There was no way I couldn't tell the same to a mature and robust SOAP framework. After digging through the source of JAX-WS RI for several hours, I found a solution (for version 2.1). Then I had to face the cold truth: the Glassfish version we use (3.1.3) ships JAX-WS version 2.2.5, where the whole cookie management mechanism was fortunately rewritten, but is still proprietary. So after another couple of hours of code shoveling, the solution was clear.


Yes, those blessed guys who made JAX-WS RI hid a java.net.CookieStore in their proprietary HttpConfigFeature[4] class. This class extends WebServiceFeature (introduced in JAX-WS 2.1). I may sound cynical here, but I am actually glad they have this feature. It is pretty obvious from the code that they designed a mechanism dedicated to plugging in any CookieStore. I am just a little sad that it is mentioned literally nowhere in the documentation.[5]
I hereby present to you the power of rtfc!
Using HttpConfigFeature
Below you find how to set your own CookieStore when you use JAX-WS RI (Metro) version 2.2.5 (and maybe above?). As I mentioned earlier, when a user logged in to the client application, he got his own session for the webservice too. We used CDI's session context to hold the cookies for every user; it was quite convenient.

import javax.xml.namespace.QName;
import javax.xml.ws.Service;
import javax.xml.ws.WebServiceFeature;
import com.sun.xml.ws.developer.HttpConfigFeature;

protected hu.pal.ws.MyService createStub() {
    Service service = getService();
    MyCookieManager cookieJar = new MyCookieManager();
    WebServiceFeature httpConfigFeature = new HttpConfigFeature(cookieJar);
    return getServicePort(service, hu.pal.ws.MyService.class,
            "getMyServicePort", httpConfigFeature);
}

protected <T> T getServicePort(Service service, Class<T> serviceInterface,
        String method, WebServiceFeature... features) {
    Class<?> serviceStubClazz = service.getClass();
    // get endpoint and target namespace from
    // wsimport-generated service class annotations
    String webEndpoint = getWebEndpoint(serviceStubClazz, method);
    String targetNamespace = getTargetNamespace(serviceStubClazz);
    QName qName = new QName(targetNamespace, webEndpoint);
    T stub = service.getPort(qName, serviceInterface, features);
    return stub;
}
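For completeness, here is a hedged sketch of what a MyCookieManager like the one above could look like: a java.net.CookieStore that simply delegates to the JDK's default in-memory store. In our application the instance lived in CDI's session context so each user had an isolated cookie jar; that wiring is omitted here.

```java
import java.net.CookieManager;
import java.net.CookieStore;
import java.net.HttpCookie;
import java.net.URI;
import java.util.List;

// Sketch of a pluggable cookie jar: a CookieStore delegating to the
// JDK's default in-memory implementation.
public class MyCookieManager implements CookieStore {
    private final CookieStore delegate = new CookieManager().getCookieStore();

    @Override
    public void add(URI uri, HttpCookie cookie) {
        // hook point: log, filter or persist cookies before storing them
        delegate.add(uri, cookie);
    }

    @Override
    public List<HttpCookie> get(URI uri) { return delegate.get(uri); }

    @Override
    public List<HttpCookie> getCookies() { return delegate.getCookies(); }

    @Override
    public List<URI> getURIs() { return delegate.getURIs(); }

    @Override
    public boolean remove(URI uri, HttpCookie cookie) {
        return delegate.remove(uri, cookie);
    }

    @Override
    public boolean removeAll() { return delegate.removeAll(); }
}
```

Any custom policy (per-user isolation, expiry rules, persistence) goes into the overridden methods; the rest of the interface can stay pure delegation.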

[1] - JAX-WS 2.1 Specification (JSR224) - https://jcp.org/aboutJava/communityprocess/mrel/jsr224/index2.html
[2] - JAX-WS RI 2.2.5 Download page - https://jax-ws.java.net/2.2.5/
[3] - Apache HttpClient CookieStore - https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/CookieStore.html
[4] - Metro HttpConfigFeature on GitHub - https://github.com/gf-metro/jaxws/blob/master/jaxws-ri/rt/src/main/java/com/sun/xml/ws/developer/HttpConfigFeature.java
[5] - JAX-WS RI documentation - https://jax-ws.java.net/2.2.8/docs/

Nov 12, 2013

Building Hadoop 2.2.0

I am learning the new YARN and MapReduce brought by the stable Hadoop 2.2.0 release, and I thought the best way to find out how it works is by looking at the sources.

Prerequisites (copied from hadoop-common repository)

* Unix System
* JDK 1.6+
* Maven 3.0 or later
* Findbugs 1.3.9 (if running findbugs)
* ProtocolBuffer 2.5.0
* CMake 2.6 or newer (if compiling native code)
* Internet connection for first build (to fetch all Maven and Hadoop dependencies)


Linux: I am using a rather old 32bit Debian 6.0.6. 
debian@debian:~$ uname -a
Linux debian 2.6.32-5-686 #1 SMP Sun Sep 23 09:49:36 UTC 2012 i686 GNU/Linux

Java:  I have the newest (at the time this article is written) Java 1.7 installed
debian@debian:~$ java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) Server VM (build 24.45-b08, mixed mode)

Build and install the protocolbuffer-compiler 2.5.0

The newest version, and the one required by Hadoop, is 2.5.0. At the time of writing this is only available in the Debian experimental repository, and I could not get it installed via apt-get. If your Linux distribution provides 2.5.0 from its software repository, use that one.

First you are going to need g++. My virtual machine was really bare in terms of installed software, so I had to install g++ first:

  $ aptitude install g++

  $ tar -xvzf protobuf-2.5.0.tar.gz
  $ cd protobuf-2.5.0
  $ ./configure --disable-shared #[1]
  $ make install

The above commands compiled, built and hopefully installed protoc into /usr/local/bin/protoc.

Install Maven 3.0+

Choose a 3.0+ version from the link below. I used 3.1.1, the newest one available at the time this article was written: http://maven.apache.org/download.cgi

You need the binary tar.gz.

Put Maven to its place:
  $ tar -xvzf apache-maven-3.1.1-bin.tar.gz
  $ mkdir -p /usr/local/maven/
  $ mv apache-maven-3.1.1 /usr/local/maven
  $ ln -s /usr/local/maven/apache-maven-3.1.1 /usr/local/maven/current

Put a symlink into /usr/sbin
  $ ln -s /usr/local/maven/current/bin/mvn /usr/sbin/mvn

In fact, this is the same way you install the Oracle JDK/JRE. The other way is to append the application's .../bin folder to the $PATH variable at the end of /etc/bash.bashrc.

Install Git

This is available from repository:
  $ aptitude install git

Clone hadoop-common

Go to your Eclipse workspace, or create one if you don't have any. I put it into my home:
  $ mkdir -p ~/Development/workspace_eclipse_java

Clone the git repository:
  $ git clone https://github.com/apache/hadoop-common.git hadoop-common

Install hadoop Maven plugin

Hadoop has its own Maven plugin to do stuff:
  $ cd hadoop-maven-plugins
  $ mvn install

First build everything

I found the project setup and build well documented; everything is written down in BUILDING.txt [2].

First you need to build the whole hadoop-common to let Maven cache the dependency jars in your local repository. That way, Eclipse will be able to resolve all your inter-project dependencies.

  $ cd hadoop-common
  $ mvn install -DskipTests -nsu # -nsu (--no-snapshot-updates): don't re-check remote snapshot dependencies

Generate Eclipse projects

I am only interested in YARN and MapReduce components, so I will:
  $ cd hadoop-yarn-project
  $ mvn eclipse:eclipse -DskipTests

Set M2_REPO variable in Eclipse

If not yet set, you have to create a variable in Eclipse pointing to your local Maven repository, as every dependency in the generated .classpath file starts with M2_REPO/...

  [Window] => [Preferences]
  Java -> Build Path -> Classpath Variables

Add a new one named M2_REPO pointing to your local Maven repository, which by default is at /home/<username>/.m2/repository

Import projects into Eclipse

  [File] => [Import]
  General -> Existing Projects into workspace
Set your root directory to the hadoop component you want to import. In my case it's hadoop-yarn-project.

I highly recommend creating a working set for every hadoop component, since they all consist of several Eclipse projects.


[1] http://www.coderanch.com/how-to/java/ProtocIndependentBinary

[2] https://github.com/apache/hadoop-common/blob/trunk/BUILDING.txt

Aug 16, 2013

Demystifying the JSF lifecycle

(Mojarra 2.1.25)

17-08-2013: I made the source available, see below

This post has two parts. The first part is about how the JSF lifecycle works; the second part shows events and action invocations on diagrams.

What is a view state?

JSF devs are more or less familiar with the JSF lifecycle. I will just quickly show you how it looks, to refresh your memory.

JSF lifecycle

It took me quite a while to clearly understand what is going on under the hood. Namely, what restoring the view means, and why JSF has to set values in the application twice: at the Apply Request Values phase and then at the Update Model Values phase.

When JSF handles a GET request, it first looks at the xhtml the request points to. It parses that facelet, traverses every template definition, ui:include and component that can be reached, and builds up an in-memory representation of components called the view tree. That's right, this happens on every request, which is why a huge view tree makes the application slower. One solution would be to take the view tree from a pool on every request, as Rudi Simic suggests in this article: Stateless JSF – high performance, zero per request memory overhead - http://www.industrieit.com/blog/2011/11/stateless-jsf-high-performance-zero-per-request-memory-overhead/ . It is a great article that teaches a lot about how JSF works.

Saving the view state - When you access a page that contains at least one <h:form>, the view state is saved in the session before the request ends. This in fact is some representation of the current view tree. Using an older, 2.1.2 Mojarra reveals this behavior: the JSESSIONID cookie is only created when you encounter a page that loads either a form or uses a @SessionScoped backing bean. In Mojarra 2.1.25, the JSESSIONID cookie is created regardless. In fact, you see the id of the view state at the end of the forms in a hidden input field.

Restoring the view state - On a postback from a JSF form, the view state is restored. All view tree data that has been persisted for this view or facelet, attached to this view state id, is read from the session and used to restore the view tree.

<input type="hidden" name="javax.faces.ViewState" id="javax.faces.ViewState" value="-4005688715831258364:-4133587133831516823" autocomplete="off">

The reason for restoring the previous view tree on POST requests is to make sure every component is in place when applying the request values, updating the model values and calling any component event listeners. At first it's not obvious why this is necessary, but the view tree can change dynamically from request to request, and JSF has to make sure a form postback arrives at the same view tree as the one it saw when the response left the server.
Suppose that you dynamically add some input fields to a form in the Invoke Application or Render Response phase. After a postback you could add them again, but only after the Apply Request Values and Update Model Values phases. JSF still has to invoke the setters of the value attributes of those input fields and call any component event listeners, therefore it restores those dynamically added components as early as possible. The only thing you have to know for now is that the values of <h:input> tags are stored in the view state in the session on the server. So form data is in fact stored in two places: bound to your backing beans, and in the UIComponent representing an input field.

Applying request values - When the view state has been restored into the components, JSF sets the values of the components to the ones that came from the request as POST params. This is the responsibility of the decode() method of the UIComponent's Renderer. For example, if you create your own paginator component that goes to the next page when the request value pagerClientId:page arrives, this is the place where you inform your component about the new value.

Updating model values - This late phase is where the site developer (most of the time) first interacts with JSF. This is where the setters of the backing beans are called.

Invoking application - You should be familiar with this phase. As seen in the diagram above, this is where actions and actionListeners are invoked, regardless of whether it is an AJAX request or a full page request.

Rendering the response - JSF calls the encode() method on the ViewRoot, which encodes the in-memory view tree into an HTML representation to send back to the client. Every component is responsible for calling the encode() method on its children, practically traversing the tree depth-first. Additionally, the view state is saved in the session if there is a form element in the tree.
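As a toy illustration of that traversal (this models the idea only; it is not the real javax.faces UIComponent API): each node writes its own markup and recursively asks its children to encode themselves, which is exactly a depth-first walk.

```java
import java.util.ArrayList;
import java.util.List;

// Toy component tree: encode() emits this node's markup, then recurses
// into the children before writing the closing tag.
class ToyComponent {
    final String name;
    final List<ToyComponent> children = new ArrayList<>();

    ToyComponent(String name) { this.name = name; }

    ToyComponent add(ToyComponent child) {
        children.add(child);
        return this;
    }

    void encode(StringBuilder out) {
        out.append('<').append(name).append('>');
        for (ToyComponent child : children) {
            child.encode(out); // depth-first: children encode before we close
        }
        out.append("</").append(name).append('>');
    }
}
```

Encoding a viewRoot containing a form with one input yields the nested markup `<viewRoot><form><input></input></form></viewRoot>`, mirroring how the real tree becomes HTML.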

What happens when?

The next thing that takes much practice is knowing when actions, listeners, actionListeners, lifecycle phase listeners, component phase listeners, setters, getters and valueChangeListeners are invoked. I created a very simple AJAX form with various bindings to my application: a backing bean, a converter, a validator, component phase listeners and a lifecycle phase listener. The goal is to create a clear view of what gets called and when.
  • red events are from the lifecycle phase listener and show JSF lifecycle before and after events
  • blue events come from the declared component lifecycle phase listeners
  • green events are the ones that are calls to backing beans or custom converters, validators
Of course this is a very simple form and I haven't tested several things, for example when component phase listeners are invoked on the command buttons, or whether the PreRenderView listener is invoked when the component is not rendered (probably not).

I put arrows coming from the xhtml to the events to give you some pivot about which event comes from which declaration.

Sorry about the somewhat bad quality. Click on the links below the images to see them full size.

Sources on github: https://github.com/pkonyves/jsf-lifecycle-explained
Download as ZIP link: https://github.com/pkonyves/jsf-lifecycle-explained/archive/master.zip
Building instructions are in README.md

First page load

When you use <f:viewParam> parameters, the FULL lifecycle is triggered, from APPLY_REQUEST_VALUES through the INVOKE_APPLICATION phase. That is because viewParams are implemented as UIInput elements and therefore require these phases.

Partial ajax request executing only the 'number' input field

render: @all, execute: number - http://i.imgur.com/YurYAYj.png?1

Some remarks:
Value change listeners are invoked before setters, and action listeners are called before actions. This means that when the value change listener gets called, the new value is not yet set in the backing bean; of course, javax.faces.event.ValueChangeEvent makes it easy to retrieve the old and new values. And not surprisingly, value change listeners are only invoked when the value changed; setters, on the other hand, are invoked every time.

All of the pictures were taken in one session, and by the object IDs you can see that the components were recreated on every request. I haven't tested with MyFaces; maybe that is smarter.

Ajax request executing the whole form

render: @all, execute:@form - http://i.imgur.com/7TfTDBn.png?1

Apr 18, 2013

Exceptions and Transactions in EJB, dealing with EJBException and EJBTransactionRolledbackException

When an exception is thrown from an EJB, several things work very differently from what you are used to in a Java SE environment, or basically outside of the EJB scope.

Things you have to consider:
  • Some exceptions thrown from the EJB scope trigger transaction rollback, other ones don't
  • Some exceptions coming from EJB get wrapped in an EJBTransactionRolledbackException, others don't
  • Whether or not the exception came from "nested" EJB method calls

Basic setup

You call an EJB method that has its transaction attribute set to Required or RequiresNew. Two things distinguish this scenario from the others: the callee's transaction differs from the caller's transactional state, and there is a transaction on the callee side.

Unchecked (java.lang.RuntimeException) exceptions

When a java.lang.RuntimeException is thrown from the myClientEjbMethod() method, the container will 1) roll back the transaction, 2) discard the EJB bean instance, 3) throw a javax.ejb.EJBException to the client. Every database change made in your EJB call will be rolled back.

When you know what kinds of exceptions to expect in your web layer, you have to unroll the EJBException to dig out the real reason, and show it to the user if it's meaningful to her. These kinds of exceptions would be specialized runtime exceptions you defined (say, BusinessValidationException extends RuntimeException).

Digging out the real reason is a simple method:

Throwable unrollException(Throwable exception,
    Class<? extends Throwable> expected) {

  while (exception != null && exception != exception.getCause()) {
    if (expected.isInstance(exception)) {
      return exception;
    }
    exception = exception.getCause();
  }
  return null;
}
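The same unrolling idea as a self-contained demo; a plain RuntimeException stands in for javax.ejb.EJBException here, so it runs without the EE APIs on the classpath.

```java
// Demo of unrolling a wrapped exception to find the expected cause.
// The outer RuntimeException plays the role of javax.ejb.EJBException.
public class Unroll {
    public static class BusinessValidationException extends RuntimeException {
        public BusinessValidationException(String msg) { super(msg); }
    }

    public static Throwable unrollException(Throwable exception,
            Class<? extends Throwable> expected) {
        // walk the cause chain until the expected type turns up
        while (exception != null && exception != exception.getCause()) {
            if (expected.isInstance(exception)) {
                return exception;
            }
            exception = exception.getCause();
        }
        return null;
    }
}
```

In the web layer you would call it on the caught EJBException with your business exception class, and show the unrolled message to the user when it is found.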

Checked (java.lang.Exception) exceptions

When a java.lang.Exception is thrown from the EJB method, you obviously have to declare it in the method's signature with the throws keyword. What happens in this case is as follows: 1) the transaction gets committed, 2) the exception is rethrown to the client. ...Yes, that's it. No rollback occurs, and the client sees the exact same exception that was thrown. In EJB terms a checked exception is an ApplicationException. The name indicates that this exception signals a problem the application developer is aware of. I can think of two types I would define for my application:
  • BusinessValidationException extends java.lang.Exception
  • BusinessLogicException extends java.lang.Exception
Because you, as the application developer, throw the ApplicationException on purpose, you have to decide whether the transaction has to be rolled back, or the flow can be continued, perhaps throwing the exception at the end of the method to indicate that some problem happened during processing but it otherwise executed successfully.

To force the rollback manually for an ApplicationException, before throwing it you have to get a reference to the EJBContext and call EJBContext.setRollbackOnly().

You also have to be conscious about whether you want to simply rethrow checked exceptions like FileNotFoundException or ParseException from various utilities. They don't cause a rollback when simply rethrown from the EJB method, because they have to be present in the method signature and are therefore ApplicationExceptions. The EJB specification recommends that in this case you wrap these exceptions in a javax.ejb.EJBException. These are likely errors outside the developer's scope after which you cannot continue the transaction, so you rethrow them packaged in an EJBException runtime exception, because runtime exceptions do cause a rollback.

For example:

Date getDateFromStringEjbMethod(String yearString) {
  // note: "yyyy" (year), not "YYYY" (week year)
  SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy");
  try {
    return dateFormat.parse(yearString);
  } catch (ParseException e) {
    throw new EJBException(e);
  }
}

Design your business logic exceptions

I feel it is a burden to call EJBContext.setRollbackOnly() every time I throw my little BusinessValidationException. I might also forget to call it, resulting in a commit and an inconsistent DB state.
You have four sensible options to define exception types that signal an error in the business flow:
  1. Inherit your BusinessValidationException from java.lang.RuntimeException:
    The transaction is rolled back automatically, but you have to unroll the exception on the client side, because it will be wrapped in an EJBException
  2. Inherit your BusinessValidationException from java.lang.RuntimeException (same as previous), and annotate the class with @ApplicationException(rollback=true)
    What you gain is that you get the same exception on the client side, not a wrapper, and the transaction is still rolled back.
  3. Inherit from java.lang.Exception and annotate it with @ApplicationException(rollback=true)
    This is the same as 2), except you can warn the client to be aware of a business kind of exception to handle in a user-friendly way.
  4. 2) or 3) with the rollback=false attribute, for problems where the transaction can be continued/committed but an indication should be sent to the user that there was a problem
When I throw business workflow error or validation kinds of exceptions from my EJB methods, I always try to define a meaningful, not overly technical message and show it to the user, instead of letting the exception bubble up and having the servlet container show the standard error page.

Exceptions from nested EJB calls

When you call myOtherEjbMethod() from myEjbMethod(), things change a little. The container's behavior for myOtherEjbMethod() will be the same for ApplicationExceptions and RuntimeExceptions as described previously, i.e. if an ApplicationException is thrown, you have to decide whether myOtherEjbMethod() will be rolled back or not; if a runtime exception occurs, the transaction gets rolled back regardless (provided that RuntimeException is not annotated with @ApplicationException).

But what will you see in the caller myEjbMethod()? In nested calls, myEjbMethod() is the client, and the container may wrap the actual exception in a javax.ejb.EJBException or a javax.ejb.EJBTransactionRolledbackException. The rules are (I think) intuitive once you have learned the behavior of the simple case in the previous section.

When myOtherEjbMethod() uses the client's (myEjbMethod()'s) transaction, the container will wrap a RuntimeException thrown from the nested method in a javax.ejb.EJBTransactionRolledbackException. Why? Because this way you know that continuing the transaction in myEjbMethod() is "fruitless" (as the EJB 3.1 specification pens it).

When myOtherEjbMethod() uses a new transaction (with the RequiresNew transaction attribute), in case of a RuntimeException the container will wrap it in an EJBException. You know that it caused the new transaction to roll back, but you can continue the outer transaction. This is actually what RequiresNew is good for.

Sum it up

EJB distinguishes between application exceptions and system exceptions. An application exception is something that you define, you throw, and you are aware of. By default an application exception does not cause a rollback, unless you define it that way (and I think that's recommended). Every checked exception that is mentioned in the method signature, and also any checked or unchecked exception annotated with @ApplicationException, is an application exception.

System exceptions happen in cases you don't control, and they are unchecked exceptions. They always cause a rollback. It is good practice to wrap unavoidable checked exceptions thrown in your method, e.g. ParseException, into an EJBException.

Mar 22, 2013

How to design clean business tier with EJB and JPA


In this post, I will walk you through the mentality and ideas behind designing good EJB APIs. It is kind of long, but I want you to understand the ideas behind every choice.

Java EE was designed based on the observation that most business applications are built on a 3-tier architecture:
  • Data Access Layer (JPA, JDO, JDBC, proprietary NoSQL APIs)
  • Business Logic Layer (EJB, CDI)
  • Presentation Layer (Servlet, JSF, JSP, JAX-RS, JAX-WS)
Although the Java EE stack APIs are fairly well designed, they are also very complex and have steep learning curves. Not only is each technology hard to learn, one also has to understand how they cooperate.

I like the phrase separation of concerns. Good software architecture is born only when these words are constantly in the minds of the architects and developers. The 3 tiers are exactly based on separating the concerns of the program components. A good EJB design depends on whether you understand the concerns your application has to address.

Design your software architecture through use-cases

The advantages of using JPA, EJB and JSF only become clear (and do not become overwhelming) when you first sit down at your desk and think through the use-cases of your software. These are basically the users' interactions with your software. You must have a very clear definition of at least a subset of correlating use-cases for your software.

Say you have to create software for a tennis club. The users are all going to be players, and twice every year there is a championship. The championship is among teams. A player can be a member of more than one team. The teams are only permanent for one championship; they are re-formed for every new championship. A player assigns herself to a team (or more). Of course there are many matches in a championship. It's the role of the administrator to create a championship, declare teams, create matches, and record match results once they are played. The role of the users is to see the matches and championship results, and to assign themselves to a team.

This is only a very simplified specification, but turning it into software takes much consideration. You have to identify the exact use-cases, such as:
  • administrator creates championship
  • administrator modifies championship attributes
  • administrator creates a match (when? who will be the opponents? what are the minimum attributes for a match?)
  • administrator modifies match attributes
  • administrator assigns team players to matches as opponents
  • replace players across teams
  • ...
Why do you need to define them so explicitly? Because you have to understand and partition the problem domain. If you don't know the problem well enough, you will not be able to separate what belongs to the business logic layer from what belongs to the presentation layer. And this is the key to creating a good EJB API.
Know your problem domain and use-cases well!

Atomic operations and consistency (EJB)

The brain of your software sits in your EJBs. One of the most important facilities of EJBs for us is:
  They are transactional by default (regarding RDBMS or messaging services). When you reach your database from an EJB, most probably via JPA, you already have a transaction for the database session. As a consequence, every EJB operation is atomic towards the underlying database: either every DB change in the EJB takes place, or none.
You must map each use-case you identified to exactly one EJB method call from the web GUI! This is crucial to keep your data consistent.
Trivial Example:
The last use-case was: take a player out of team A and put her into team B instead. Broken pseudo-code in the web layer may look like this:

@EJB
ChampionshipBean championshipBean;

public void transferPlayer(Player player, Team toTeam) {
  Team currentTeam = championshipBean.getCurrentTeam(player); // 1.
  championshipBean.deletePlayerFromTeam(player, currentTeam); // 2.
  championshipBean.assignPlayerToTeam(player, toTeam);        // 3.
}

One of many ways for this code to go wrong: after you have deleted the player from her team (2.), toTeam is deleted by another administrator messing with the software concurrently, so (3.) will throw an exception saying "no such team as toTeam". The result is that the player is deleted from her currentTeam but not assigned to another. When the user sees the error "no such team as toTeam", he expects the player to still be in her team as before the change, but she is gone. This is erroneous behavior.

If you put transferPlayer(Player player, Team toTeam) into an EJB method, after the same scenario the DB will be in a consistent state, because either every change takes place or none: it is an atomic operation.

Different call scenarios for the same use-case

Continuing with the previous example, suppose you want to expose a REST interface for your application, or just create a different view for the player transfer option in your web GUI. You need the same behavior initiated from a totally different context. This is a convenient situation to see whether you created a good EJB design or a crappy one. In the latter case you will see that you exposed too much business logic in your presentation layer: you connected JPA entities via setters; made changes to seconds-old detached entities that came from @SessionScoped JSF managed beans; made several EJB calls that change your database for the same use-case.

When you encounter any of the above patterns, remember separation of concerns from the first section. The use-case should be implemented once, in the EJB; then it can be called from as many places as you wish.

A clean EJB API

JPA Entity parameters?

I struggled a lot with whether to pass entities as EJB method parameters, or to pass only entity attributes.

public class ChampionshipBean {
  public void assignPlayerToTeam(Player player, Team team) { .. }
  // vs.
  public void assignPlayerToTeam(Integer playerId, Integer teamId) { .. }

  public void changeMatch(Match match) { .. }
  // vs.
  public Match setMatchPoints(Integer matchId, List<Integer> points) { .. }
}

Then I recalled what we had done in PHP: we passed only database record primary keys to the forms in hidden input fields, to identify those records after a form POST. It's the same when we use JSF. The only difference is that the JSF framework hides this behavior, and we can keep whole entities in memory in @SessionScoped managed beans across requests. However, we often do not even need to keep whole entities or lists of entities in memory in the web layer, because we only want the primary key, or we transform them into new POJOs that are much easier to use in a dataTable. We still need the entity's primary key to identify the original entity for our operations, though.

My conclusion was that:
In most cases it is enough to pass back to the EJB only the primary key of an entity, plus the parameters that have to be changed.
  • often we don't need to keep whole entities in the web layer (presentation layer)
  • if we don't have the original entity in memory in the web layer, a method call like assignPlayerToTeam(Player player, Team team) requires us to first call getPlayerForId(Integer playerId) and then pass the returned Player to the former method. This is unnecessary and annoying
  • if we pass whole entities to EJB methods, they arrive detached, so every time they have to be re-attached with entityManager.merge(player). This does not seem like much trouble, but when you have complex business logic and make use of nested EJB calls, you will not want to merge your entities all the time just because you are not sure whether a parameter was attached or detached
  • when you want to use your EJBs remotely because you deploy on a DAS cluster, you want to eliminate every unnecessary traffic overhead
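To make the comparison concrete, here is a minimal sketch of the primary-key style, runnable outside a container. The names (Player, Team, ChampionshipBean) follow the examples above, but the maps are my own stand-ins for the persistence context; in a real bean the lookups would be entityManager.find() calls returning attached entities.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical entities; a real version would be JPA-annotated.
class Player {
    Integer id; Team team;
    Player(Integer id) { this.id = id; }
}
class Team {
    Integer id; List<Player> players = new ArrayList<>();
    Team(Integer id) { this.id = id; }
}

class ChampionshipBean {
    // Stand-ins for the persistence context; a real EJB would call
    // entityManager.find(Player.class, playerId) etc. instead.
    final Map<Integer, Player> players = new HashMap<>();
    final Map<Integer, Team> teams = new HashMap<>();

    // The caller supplies only primary keys; the bean loads the
    // (attached) entities itself, so no merge calls are needed and
    // no other attribute change can be smuggled in.
    public void assignPlayerToTeam(Integer playerId, Integer teamId) {
        Player player = players.get(playerId);
        Team team = teams.get(teamId);
        team.players.add(player);
        player.team = team;
    }
}
```

The remote-call argument also favors this style: two Integers travel over the wire instead of two serialized object graphs.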

Unambiguous side effects of EJB methods

The previous reasoning was mostly technical, but there is a semantic reason as well. When you pass an entity to an EJB method, you cannot be sure what exactly is going to be persisted into the database. Let's get back to this example:

public class ChampionshipBean {
  public void assignPlayerToTeam(Player player, Team team) {
    List<Player> players = team.getPlayers();
    players.add(player);
  }
}

This seems reasonable, right? Well, it's totally wrong! The developer on the presentation layer will not know what side effects are going to take place beyond the two entities being put into a relationship. He might think he can also change the name of the player with player.setName("Bob"); before calling the method above, because the entity will be merged anyway, so the new name is sure to be persisted.
  But you, who implemented assignPlayerToTeam(Player player, Team team), might have been thoughtful and made sure this kind of side effect cannot happen: the method changes nothing besides setting the relationship between the two entities. To clear up this ambiguity, use only primary keys as parameters: assignPlayerToTeam(Integer playerId, Integer teamId). If you still want to pass whole entities as arguments, state clearly in the Javadoc how the method behaves, and be consistent with similar methods!

Changing simple Entity properties as opposed to entity relationships

What does public void changeMatch(Match match) do? You can create a method like this, but a better name would be changeMatchProperties(Match match). It suggests that the method allows changing simple properties such as match#startTime or match#place, but does not allow creating or deleting relationships between entities: match.setPlayerOne(player); will simply have no effect at all.

You have to create separate EJB methods for updating an entity's properties and for updating relationships between entities, mainly because a relationship between JPA entities must be set on both sides, and then both sides have to be merged. A defensive solution for changeMatchProperties(Match match):

void changeMatchProperties(Match match) {
  // note: find() takes the entity class first, then the primary key
  Match attachedMatch = entityManager.find(Match.class, match.getId());
  attachedMatch.setStartTime(match.getStartTime());
  // copy the other simple properties, but never touch relationships
}

Some rules of thumb

  • the name must make clear what exactly the method does
  • follow consistent naming conventions in your code to differentiate CRUD operations, entity relationship handling and more complex functions
  • if entities are passed as parameters, allow only the changes the method name suggests, with no unexpected side effects. Prohibit relationship-targeted changes in methods named changeEntityProperties
  • create methods for entity relationship handling that do not allow changing any other entity attributes, only linking entities
  • prefer passing primary keys over whole entities whenever possible
  • never ever try to pass information back from an EJB through references in the arguments! If you ever call your methods remotely, you will be surprised

Don't create entity facades for CRUD operations as EJB

See http://weblogs.java.net/blog/felipegaucho/archive/2009/04/a_generic_crud.html, point 5. This is a good pattern for a data access layer to use inside your EJBs! However, consider that a decent project can have many tens of entities. Do you really want to create 10-20 EJBs just for CRUD operations, which will be only a small part of your complex use-cases? When you modify match attributes or set points for matches, you must enforce constraints in your EJB, as I described in Unambiguous side effects of EJB methods. Simple create, read, update and delete operations on single entities will be rare, because you usually don't just delete a team: you have to deal with the matches you already created for it. So think before you start building these kinds of facades for yourself, and see whether you really need all of them or only a subset. Don't create unnecessary methods in your EJB API; they cause confusion.

Accordingly, don't create an EJB for each of your entities! Create EJBs for groups of your problem domain: managing matches, managing users, managing championships.

Designate one EJB, called e.g. RepositoryBean, for read-only operations. It is very unlikely that a webpage only needs to present one entity; you usually have to request several entities to present the data on a page. Consider which is easier: injecting a single EJB into the presentation layer, or a bunch of them?

Doesn't "grouping the problems into different EJBs" contradict "designating one super EJB"? It can... When you need queries that are very specific to one problem set, e.g. findAllMatchesOfOpponentTeams(Integer teamAPk, Integer teamBPk);, you may put them into the ChampionshipManagedBean, because they are logically cohesive. But for requests that fetch a single entity or a list of entities, you will be glad not to have to inject three or four EJBs into your presentation layer just to render one page. On the other hand, when updating your database, it's easier to find the right EJB method within a single, problem-specific class.
When you have to decide which method goes into which EJB, think about the cohesion of your methods. Ask yourself: "Am I going to use these methods together, one after another? Do these methods address a single problem group, so I'd better find them in one place?"
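As an illustration, a read-only RepositoryBean can expose one generic lookup instead of a CRUD facade per entity. This is only a sketch under my own assumptions: in a container it would be a @Stateless bean delegating to an injected EntityManager, while here a map keyed by class and id stands in for the persistence context.

```java
import java.util.HashMap;
import java.util.Map;

class RepositoryBean {
    private final Map<String, Object> store = new HashMap<>();

    // Test seam standing in for persisted data.
    public <T> void put(Class<T> type, Object id, T entity) {
        store.put(type.getName() + "#" + id, entity);
    }

    // Mirrors EntityManager.find(Class, primaryKey): a single generic
    // entry point serves every entity type for read-only access.
    public <T> T findById(Class<T> type, Object id) {
        return type.cast(store.get(type.getName() + "#" + id));
    }
}
```

The presentation layer then injects this one bean for all its read-only needs, while write operations stay in the problem-specific beans.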

Public and private EJB API

The preceding examples were very simple CRUD operations, but what should you do when it comes to more complex use-cases? You can create reusable EJB methods that are called from other EJB methods. It is also good practice to put EJBs intended for public and private use into different packages. (Unfortunately, EJBs cannot be declared package-private, and neither can EJB methods.) That way you can have reusable EJB methods that are intended to be called from other EJB methods, but not from client code (the presentation layer). Within those internal methods you are, for example, allowed to pass whole entities to nested EJB calls instead of only primary keys, and you don't have to worry about merging the entities into the entity manager. But it has to be well documented and clear which EJBs are meant to be used by the client.
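A sketch of what that split could look like. The package names and bean names are hypothetical, and since EJBs cannot be package-private the boundary is only a convention; the beans below are plain classes so the sketch runs outside a container.

```java
// Hypothetical convention:
//   com.example.ejb.api      -- beans the presentation layer may inject
//   com.example.ejb.internal -- beans meant to be called only by other EJBs

// "Public" bean: accepts primary keys only, as argued earlier.
class ChampionshipFacadeBean {
    ChampionshipInternalBean internal = new ChampionshipInternalBean(); // would be @EJB-injected

    public void assignPlayerToTeam(Integer playerId, Integer teamId) {
        // would load the attached Player and Team by id here, then delegate
        internal.link("player-" + playerId, "team-" + teamId);
    }
}

// "Private" bean: may accept whole (attached) entities, because its callers
// are other EJB methods running in the same transaction -- no merge needed.
class ChampionshipInternalBean {
    String lastLink; // recorded only so the sketch is observable
    void link(String player, String team) { lastLink = player + "->" + team; }
}
```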

Please share your thoughts! :)

Dec 16, 2012

How not to use @PostConstruct in JSF


  I always thought that the right way of filling or initializing CDI (or the former javax.faces.* ManagedBean) backing beans with data from the persistence layer is through @PostConstruct annotated methods. I recently came to the conclusion that this is really the worst idea.
  Lifecycle phase event listeners may be known to many, but most people tend to use them as a last resort, or to handle special cases. I think this needs to change. I hope that after reading this post, phase listeners will be among the first tools you reach for to solve a problem, not the last.
  Below I am going to explain what I wanted to solve, and how my controller logic evolved while I, on the other hand, was falling into a dark cave of confusion about JSF's aspects and approaches.
  I tried to word my problem in a generic way, since many people have already met it or will meet it.


What did I want to solve with @PostConstruct?

  I wanted to use @PostConstruct as a default action in my managed bean... When our application needs to maintain a state for the UI --the state changes in reaction to user interaction--, one runs into the question: when should the data provided to the facelet (e.g. the list of cars) be initialized? I don't consider myself a newbie and I always seek the best solution. I would not have written this long post if finding the answer to this simple question had not been an instructive experience for me :)


Basic architecture

  First of all, let me quickly introduce the architecture we use in our JSF applications and a very simplified JSF lifecycle applied to it.
  We have the MVC pattern on the presentation tier. The View is the .xhtml facelet, which renders and updates the data stored in a @SessionScoped managed bean. The latter is called the Model.
On the other side of the Model, we have the @RequestScoped managed bean that is responsible for handling client-initiated requests and updates the Model as well as the persistent data through the business tier. Therefore we can call it the Controller.


Simplified JSF lifecycle in the Basic architecture

User changes a form value, clicks on command button:
  1. JSF sets the properties in the Model through their setter methods.
  2. JSF calls the defined valueChangeListeners, actionListeners and action methods, in this order, on the Controller bean. (I listed some action method types without attempting to be complete.)
  3. Suppose the user action was a bulk removal of items from a list. Our Controller retrieves the data from the Model and removes some items from it, then removes the exact same items from the persistent store through an EJB call, so the store and the UI representation stay consistent.
  4. JSF reads the updated Model and presents the most recent data.

User visits the page on the first time:
  1. JSF calls getters on the Model and renders the facelet, presenting the data in the Model. (I did not mention calling any lifecycle phase listeners, because my premise is that they do not get much attention from the average JSF programmer.)


The problem of model initialization and update

The example application is a list of people you know, where you can filter the people shown in the list according to whether you like them or not.
  Suppose that you have to present a list of people and filter the list to show only the people you hate, like, or both. Where would you put the code that assembles the list? Don't forget it has to work in 3 different scenarios:
  1. The user visits the page for the first time from any link (i.e. you cannot rely on a commandButton's action property to init the model)
  2. The user changes the filter value and immediately sees the updated list
  3. The user has changed the filter and visits the page again: the filter remains
A little sidenote here:
  I use ICEfaces ice and ace components. The biggest benefit of this framework is that it uses AJAX by default. So when I change the selectOneMenu, the framework automatically triggers an AJAX request, sets the new value, updates and renders the involved components (e.g. everything within the enclosing form tags), and the result is immediately shown on the screen. This matters for point 2, and it means I don't have to deal with <f:ajax /> tags, because I'm lazy.

1st approach (the naive)

  1. The user visits the page the first time:
      The default value of PeopleModel.liked will be NULL. That is fine for us; the PeopleRepositoryBean.findPeopleILike(Boolean) method will return everyone.
    Except for one little problem: the PeopleModel.people list will not be initialized, because we do not reference the PeopleController bean from the facelet --> its @PostConstruct will not be called --> the list of people remains empty (null).
  Why not put PeopleController.init() into a PeopleModel.@PostConstruct method? Because we really want to avoid mixing our controller logic into the model. Also, that method would only be called once, when the session-scoped bean is initialized.
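Since the original snippet is not preserved, here is a hedged reconstruction of the naive version. The class and method names follow the post; the JSF annotations are shown as comments so the sketch compiles outside a container, and PeopleRepository is a stand-in for PeopleRepositoryBean.

```java
import java.util.ArrayList;
import java.util.List;

// @ManagedBean @SessionScoped -- the Model
class PeopleModel {
    Boolean liked;        // bound to the selectOneMenu; null means "everyone"
    List<String> people;  // the list rendered by the facelet
}

// Stand-in for PeopleRepositoryBean.findPeopleILike(Boolean).
class PeopleRepository {
    List<String> findPeopleILike(Boolean liked) {
        List<String> all = List.of("Alice:liked", "Bob:hated");
        if (liked == null) return new ArrayList<>(all);
        List<String> filtered = new ArrayList<>();
        for (String p : all)
            if (p.endsWith(liked ? "liked" : "hated")) filtered.add(p);
        return filtered;
    }
}

// @ManagedBean @RequestScoped -- the Controller
class PeopleController {
    PeopleModel model = new PeopleModel();           // would be injected
    PeopleRepository repo = new PeopleRepository();  // would be an @EJB

    // @PostConstruct -- the catch: this only runs if the facelet
    // references peopleController somewhere!
    void init() {
        model.people = repo.findPeopleILike(model.liked);
    }
}
```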

We go forward with the solution:
  Reference the PeopleModel via the PeopleController in the facelet, so the PeopleController will be initialized and its @PostConstruct method will be called.

2nd approach (almost solved it):

  So we came to the conclusion that in order to initialize our model, we have to reference it through the controller in the facelet.
  1. The user visits the page the first time
    The list is initialized and shown on the gui, yeah!
  2. User changes the filter value and immediately sees the updated list
    Yeaaa, not so great :(

    • JSF sets the PeopleModel.liked field to whatever is chosen from the selectOneMenu list AFTER it has called PeopleController.init().
      Why? Because we reference PeopleModel.liked in the selectOneMenu via the PeopleController, so this bean is initialized first and has its @PostConstruct method called first. The problem? The list is initialized according to the old value of the liked field.
    • We could reference the PeopleModel directly within the selectOneMenu. That would probably solve the problem, but would you rely on a mechanism where the chain of events is determined by how you statically reference the backing beans? I would not.
  3. The user has changed the filter and visits the page again: the filter remains
    Works great, the PeopleModel.liked is persisted within the session.

3rd approach (a quicky, working code)

  In order to have the filter change take immediate effect on the UI, a convenient way is to put a valueChangeListener on the selectOneMenu component. We know from the specification that it is called after the value is set, so it seems to be the right choice for updating our list.
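The original snippet is lost, so this is a hedged reconstruction of what the facelet could have looked like; the component attributes and the listener name are my assumptions:

```xhtml
<!-- 3rd approach (reconstruction): update the list in a valueChangeListener -->
<ice:selectOneMenu value="#{peopleController.model.liked}"
                   valueChangeListener="#{peopleController.likedChanged}">
    <f:selectItem itemLabel="everyone" itemValue="#{null}" />
    <f:selectItem itemLabel="liked" itemValue="#{true}" />
    <f:selectItem itemLabel="hated" itemValue="#{false}" />
</ice:selectOneMenu>
```

On the Java side, a likedChanged(ValueChangeEvent event) method would rebuild the people list, reading event.getNewValue() because the listener fires before the model setter runs.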

  This code meets the three constraints we defined earlier. But don't be so happy: on a filter change event, the list is initialized twice, once with the previous value and a second time with the current value of the filter. In this example it is not a big problem, but when you have a complicated UI, you will want to avoid it.


4th The final (how it should have been done in the first place)

  And finally we realized the existence of JSF lifecycle phase event listeners, and how all the misery and suffering could have been avoided in this pure and elegant way.

  Notice the <f:event /> component in the facelet and the new PeopleController.updatePeopleList() method.

  What <f:event /> does is subscribe the given method to the preRenderView event. This event is triggered just before step 4 in the lifecycle chart above. So PeopleController.updatePeopleList() will be called every time right before the rendering phase, after all backing bean properties have been set.
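Since the facelet snippet itself is not preserved, here is a hedged reconstruction of the final approach; the ids and names follow the text:

```xhtml
<ice:form id="form">
    <!-- 4th approach (reconstruction): runs before Render Response,
         after all backing bean properties have been set -->
    <f:event type="preRenderView" listener="#{peopleController.updatePeopleList}" />
    <ice:selectOneMenu value="#{peopleController.model.liked}">
        <!-- filter options as before -->
    </ice:selectOneMenu>
    <!-- dataTable rendering peopleController.model.people -->
</ice:form>
```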

More on JSF lifecycle phase event listeners

It must be noted that JSF distinguishes between application-scoped and component-scoped lifecycle events and listeners. This example uses a component-scoped lifecycle event listener, which gets called by the framework when request processing reaches the component in which the <f:event /> is specified (in this case the <ice:form id="form">), in the lifecycle phase that the <f:event /> specifies.

If you want to create an application-scoped lifecycle event listener, you must write a class that implements the javax.faces.event.PhaseListener interface and register it in faces-config.xml, such as:
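The registration could look like this (a sketch; the listener class name is a hypothetical example):

```xml
<!-- faces-config.xml -->
<lifecycle>
    <phase-listener>com.example.LoggingPhaseListener</phase-listener>
</lifecycle>
```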

You can also use the @ListenerFor and @ListenersFor annotations if you create custom UIComponents or Renderers, but they are not intended to be used in ManagedBeans.

You can also register phase listeners programmatically instead of declaratively, with the
Application.subscribeToEvent() or UIComponent.subscribeToEvent() methods.

(More on lifecycle phase event listeners in JSF 2.0 specification sections 2.5.10 and 3.4.3)


  One would assume that a JSF ManagedBean's lifecycle callback methods (@PostConstruct, @PreDestroy) are integral parts of the JSF lifecycle. The real reason they exist is that you cannot rely on your ManagedBean's constructor when you want to use injected resources while setting up the class (which you would normally do in a constructor): resource injection happens after the class is instantiated, so the resources cannot be accessed in the constructor. You should avoid using these callbacks to control your application's request flow.
  As I showed you in the last version of my code, you are expected and encouraged to use JSF lifecycle phase listeners for tasks such as having a default action in your backing bean.