Yelei's Tech Notes
Some interesting technical stuff here ...

Monday, August 04, 2014
Having questions about which NoSQL database to use? Check this out:
http://jaxenter.com/evaluating-nosql-performance-which-database-is-right-for-your-data.1-49428.html
Wednesday, June 18, 2014
Connect Informatica to ActiveMQ as a durable subscriber
I was asked to give advice on how to use Informatica to subscribe to a durable topic hosted on ActiveMQ.
It's quite easy to make a plain Java subscriber durable. The tricky part with an Informatica subscriber is that everything has to be configured inside Informatica, while the details of that configuration depend on the JMS provider.
To connect to an ActiveMQ durable topic, the subscriber must provide both a subscription name and a client ID. The subscription name can be configured in the Informatica JMS adapter (as the data source), but there is no option to set the client ID. So the first thing that came to my mind was to set the client ID on the connection factory; however, adding it as a property to the connection factory didn't work.
An alternative is to modify the connection URL of the Informatica connection. In the end, appending "?jms.clientID=xxinf" to the URL proved to work. Many more attributes can be appended to the URL; details are described here: http://activemq.apache.org/connection-configuration-uri.html .
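For comparison, here is what the same durable subscription looks like in a plain Java client against ActiveMQ. This is a minimal sketch: the broker URL, topic name, and subscription name are made-up examples; only the jms.clientID URL option comes from the setup above.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberSketch {
    public static void main(String[] args) throws Exception {
        // The client ID is set through the broker URL, the same trick used
        // for the Informatica connection URL above.
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.clientID=xxinf");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("SAMPLE.TOPIC");

        // The client ID and the subscription name together identify the
        // durable subscription on the broker.
        MessageConsumer consumer = session.createDurableSubscriber(topic, "xxinf-sub");
        Message message = consumer.receive(5000); // wait up to 5 seconds
        System.out.println("Received: " + message);
        connection.close();
    }
}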
Wednesday, March 12, 2014
Implement pagination in JSF 2.x with PrimeFaces
PrimeFaces provides LazyDataModel, which you can bind to a DataTable to get very powerful pagination in the UI, including customizable paginator buttons, sorting, multiple filter criteria, and so on.
Using it is not difficult, but the documentation offers little guidance. The best starting point is the PrimeFaces showcase at http://www.primefaces.org/showcase/ui/datatableLazy.jsf .
The steps to implement LazyDataModel support with JSF 2.x are briefly as follows:
- Implement a subclass of LazyDataModel (let's call it ChildLazyDataModel) with one @Override method: List load(int first, int pageSize, String sortField, SortOrder sortOrder, Map filters). This method is invoked whenever pagination happens. Inside it, two pieces of logic are needed:
  - Set the total result count on the data model.
  - Return the results of the query for the requested page.
- Build the JSF backing bean so that it holds an instance of ChildLazyDataModel. Let's call it PrimeBackingBean. The getter for the lazy data model will return a new instance.
- Bind the ChildLazyDataModel property of PrimeBackingBean to the PrimeFaces dataTable in the JSF page.
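To make the steps concrete, here is a minimal sketch of such a subclass (anticipating the constructor-based approach described below). Customer and CustomerQueryBean are hypothetical names, and note that the exact signature of load varies by PrimeFaces version (the filters map is Map<String, String> in 3.x/4.x and Map<String, Object> in 5.x).

import java.util.List;
import java.util.Map;
import org.primefaces.model.LazyDataModel;
import org.primefaces.model.SortOrder;

public class ChildLazyDataModel extends LazyDataModel<Customer> {

    private final CustomerQueryBean queryBean; // hypothetical JPA execution bean

    public ChildLazyDataModel(CustomerQueryBean queryBean) {
        this.queryBean = queryBean;
    }

    @Override
    public List<Customer> load(int first, int pageSize, String sortField,
            SortOrder sortOrder, Map<String, String> filters) {
        // 1. Set the total result count so the paginator can render page links.
        setRowCount(queryBean.countCustomers(filters));
        // 2. Return only the rows of the requested page.
        return queryBean.findCustomers(first, pageSize, sortField, sortOrder, filters);
    }
}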
You can find examples with EJB or DAO on the internet, but I didn't find much on how to implement this with a JSF 2 and JPA setup. The tricky part: people normally inject the JPA execution logic (querying and updating entities) into JSF managed beans, such as PrimeBackingBean in our case. That kind of injection won't work for ChildLazyDataModel, because ChildLazyDataModel is not a container-managed bean. I haven't tried making it container-managed; instead, I used an easier alternative (sketched after this list):
- Inject the JPA execution bean into PrimeBackingBean.
- Add a constructor to ChildLazyDataModel that takes the JPA execution bean as a parameter. That bean instance can then be used for all the required queries.
- In PrimeBackingBean's getter for the lazy data model, instantiate the model by passing the injected JPA execution bean to the new constructor.
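And a minimal sketch of the backing bean side, using the same hypothetical names; the EJB is injected here and handed to the model through its constructor. (The model is cached in a field rather than recreated on every getter call, a design choice that keeps paging state stable within a view.)

import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;
import org.primefaces.model.LazyDataModel;

@ManagedBean
@ViewScoped
public class PrimeBackingBean {

    @EJB
    private CustomerQueryBean queryBean; // container-injected JPA execution bean

    private LazyDataModel<Customer> lazyModel;

    // Bound in the JSF page, e.g. value="#{primeBackingBean.lazyModel}".
    public LazyDataModel<Customer> getLazyModel() {
        if (lazyModel == null) {
            // Pass the injected bean to the model via the new constructor.
            lazyModel = new ChildLazyDataModel(queryBean);
        }
        return lazyModel;
    }
}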
Friday, March 07, 2014
Injecting EJB into JAX-RS service
Java EE 6 introduced a dependency injection mechanism, which makes it extremely easy to instantiate beans and pass references to other Java components.
Injecting an EJB into web services, servlets, or other EJBs is easily done with the @javax.ejb.EJB annotation.
However, it doesn't work the same way for JAX-RS services: the server reports a NullPointerException if @EJB is used to inject an EJB instance, because JAX-RS service classes are not managed beans.
An easy fix is to make the JAX-RS service a stateless session bean by annotating it with @javax.ejb.Stateless.
A better alternative is to make the JAX-RS service a managed bean. Detailed steps are as follows:
1. Add an empty beans.xml to the WEB-INF directory. This indicates that the web application is JCDI-enabled.
2. Add a no-argument constructor to the JAX-RS service class. This makes the service class a JCDI bean. If some properties need to be initialized (from HTTP headers, for example), that can be done inside a @javax.annotation.PostConstruct method.
3. Annotate the JAX-RS service class with @javax.enterprise.context.RequestScoped. This gives the service HTTP request scope.
4. Inject the EJB instance with the @javax.inject.Inject annotation.
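Putting the four steps together, a request-scoped JAX-RS service might look like the following minimal sketch. OrderResource, OrderService, and listOrdersAsJson are made-up names, an empty WEB-INF/beans.xml is assumed to be in place, and the two classes would live in separate files.

import javax.annotation.PostConstruct;
import javax.ejb.Stateless;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

// OrderService.java: an ordinary stateless session bean.
@Stateless
public class OrderService {
    public String listOrdersAsJson() {
        return "[]"; // placeholder payload
    }
}

// OrderResource.java: the JAX-RS service as a request-scoped JCDI bean.
// The implicit no-argument constructor satisfies step 2.
@Path("/orders")
@RequestScoped
public class OrderResource {

    @Inject // step 4: JCDI injection instead of @EJB
    private OrderService orderService;

    @PostConstruct
    void init() {
        // Per-request setup (e.g. values taken from HTTP headers) goes here.
    }

    @GET
    @Produces("application/json")
    public String list() {
        return orderService.listOrdersAsJson();
    }
}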
Monday, March 03, 2014
Handle NTLM authentication with HttpClient v4.0.1 (WebSphere Application Server v8.5)
I tried to use Apache HttpClient v4.3.3 in one of my JAX-RS service implementations, which is deployed to WebSphere Application Server v8.5.
However, the code always failed because the INSTANCE field is not present in the BasicLineFormatter class. It turned out that one of the WAS plugin jars (com.ibm.ws.prereq.jaxrs.jar) includes an old version of HttpClient; according to its version properties file, the version is 4.0.1.
This was a problem because HttpClient v4.0.1 doesn't support NTLM authentication, which my service requires. I tried to change the class-loading order of the web application using options such as "parent last" class loading and adding the library to the JRE path, but none of them worked.
Eventually I had to go with the v4.0.1 version shipped with WAS v8.5. The information provided at http://hc.apache.org/httpcomponents-client-4.3.x/ntlm.html is mostly correct, except that it applies to v4.3.3. Here are the instructions for using Samba JCIFS with v4.0.1:
1. Create the NTLMEngine implementation following the instructions at http://hc.apache.org/httpcomponents-client-4.3.x/ntlm.html .
2. Implement AuthSchemeFactory instead of the AuthSchemeProvider used in v4.3.3.
3. Use the following code as an example to register the AuthSchemeFactory and create the NTLM credentials:

public class JCIFSNTLMSchemeFactory implements AuthSchemeFactory {
    @Override
    public AuthScheme newInstance(HttpParams params) {
        return new NTLMScheme(new JCIFSEngine());
    }
}

DefaultHttpClient httpclient = new DefaultHttpClient();
httpclient.getAuthSchemes().register("ntlm", new JCIFSNTLMSchemeFactory());
CredentialsProvider credsProvider = new BasicCredentialsProvider();
NTCredentials ntcred = new NTCredentials(username, password,
        InetAddress.getLocalHost().getHostName(), domain);
credsProvider.setCredentials(new AuthScope(hostname, port,
        AuthScope.ANY_REALM, "NTLM"), ntcred);
httpclient.setCredentialsProvider(credsProvider);

In this way, NTLM support is done on WAS v8.5.
SharePoint sites normally use SSL for communication. The site's certificate needs to be imported into the WAS truststore; otherwise, an "SSL HANDSHAKE FAILURE" will be reported.
Wednesday, December 04, 2013
Pydev defect with Django 1.6?
In the PyDev 3.0 Django menu, you can start a shell with the current project path included.
However, when this functionality is used with a Django 1.6 project, the following error is thrown:
from django.core import management;import tutorial.settings as settings;management.setup_environ(settings)
Traceback (most recent call last):
  File "<console>", line 1, in <module>
AttributeError: 'module' object has no attribute 'setup_environ'

It turned out that setup_environ was removed from django.core.management in v1.6, while PyDev still calls it.
To fix the issue, you just need to run the following statement first:
import os
os.environ['DJANGO_SETTINGS_MODULE'] = '(project).settings'

Please replace (project) with your Django project name.
Wednesday, November 20, 2013
ssh tunnel, a small but useful trick to bypass firewalls
I used ssh tunnels a lot back in college, and they have proved just as useful in my work.
It's not a trick that most developers know, which is why I'm giving a brief explanation here.
What usually happens during enterprise application development is that some services exposed over the Internet are protected by firewalls, which are configured to accept communications only from certain servers via pre-defined ports.
Here's a real-world scenario I have experienced in multiple projects:
1. The development team works on desktops/laptops with a bunch of development tools. They also have ssh access to the (Linux-based) development servers.
2. Some message queue or SMTP services are exposed only to the development servers.
3. Developers want to know what happens in the queue (to check whether the queue manager is working fine, and to do peek, enqueue, and dequeue operations), but they cannot access the internet-based queue services from their desktops directly. It's also discouraged (sometimes forbidden) to use GUI applications such as VNC to access the development server, and it's more cumbersome for everyone to perform development activities through shell scripts on the development server.
For developers to access the service from their own desktops, only an ssh tunnel needs to be configured. Local port forwarding does the work: all communication sent to a pre-defined local port is forwarded to a given target through the ssh tunnel.
In our scenario, suppose we use local port 9000 as the forwarded port, our dev server is named "devserver", and the remote queue service runs on "qservice" at port 1515. We need to forward communication from localhost:9000 to qservice:1515 via "devserver".
In practice, this can be done easily with putty in the following 2 steps:
1. Start putty and select the ssh session used to access devserver. (Don't log in yet.)
2. In the putty left-side menu, select Connection --> SSH --> Tunnels, then "Add new forwarded port" with the following values: Source port: 9000, Destination: qservice:1515, type: Local. After this, you can log in to the dev server.
What these steps do is start a local process that opens an ssh tunnel to devserver and listens on localhost port 9000; whenever data is received, it is forwarded to qservice from devserver via the ssh tunnel. As a result, developers can connect to local port 9000 to use the queue service.
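By the way, on a desktop with a command-line ssh client, the whole putty configuration above collapses into a single command (with "user" standing in for your account on devserver):

ssh -L 9000:qservice:1515 user@devserver

Once logged in, anything sent to localhost:9000 on the desktop reaches qservice:1515 through the tunnel.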
Thursday, November 07, 2013
GAE application behaves abnormally after deployment
I've been playing with the Flickr API in a GAE webapp.
To maximize performance, I didn't use any framework to build it; basic APIs such as HTTP connections handle the communication.
The small app has been auto-tested thousands of times on the local GAE runtime, and it works completely fine. No GAE datastore is used for this app (hence no datastore indexes need to be generated), so I expected it to work fine after a successful deployment.
However, about 45% of the HTTP requests failed with the same error message, complaining about the invalid format of the data stream returned from the Flickr REST service. From the debug trace, it turned out that almost half of the requests, which were supposed to get data in JSON format (JSON is requested via a query parameter), were routed to the Flickr API community guidelines page, so irrelevant HTML data was returned, even though every HTTP GET went to the same Flickr URL.
I did some googling but could not find out why this happened. I suspected Flickr partially blocked or redirected requests from Google (GAE) because Flickr is a Yahoo company, but I'm not sure about that. The next day I tested the same app again, and it worked completely fine, just as on the local GAE runtime. It seems there's always a delay before a newly deployed GAE app behaves properly.
This is a Java app; I'll check whether a Python app behaves similarly.
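For context, the kind of plain-Java call involved looks roughly like the following minimal sketch. The method name and API key are placeholders; format=json with nojsoncallback=1 is Flickr's documented way to request raw JSON from its REST endpoint.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FlickrJsonFetch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://api.flickr.com/services/rest/"
                + "?method=flickr.photos.getRecent&api_key=YOUR_KEY"
                + "&format=json&nojsoncallback=1");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line);
        }
        in.close();

        // A body starting with '{' is the expected JSON; an HTML page means the
        // request was misrouted, as described above.
        System.out.println(body);
    }
}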