Archive for the ‘Java’ Category

Linked Data and the SOA Software Development Process

Thursday, November 17th, 2011

We have quite a rigorous SOA software development process; however, the full value of the collected information is not being realized because the artifacts are stored in disconnected information silos. So far, attempts to introduce tools that could improve the situation (e.g. zAgile Teamwork and Semantic MediaWiki) have been unsuccessful, possibly because the value of a Linked Data approach is not yet fully appreciated.

To provide an example Linked Data view of the SOA services and their associated artifacts, I created a prototype consisting of Sesame running on a Tomcat server, with Pubby providing the Linked Data view via the Sesame SPARQL endpoint. TopBraid was connected directly to the Sesame native store (configured via the Sesame Workbench) to create a subset of services sufficient to demonstrate the value of publishing information as Linked Data. In particular, the prototype showed how easy it became to navigate from the requirements for a SOA service through to the details of its implementation.
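As a hedged illustration of the kind of navigation the prototype made possible, the sketch below queries the Sesame repository from Java using the Sesame 2.x HTTP client API. The server URL, the repository ID and the soa: vocabulary are assumptions made for the example, not the prototype's actual names.

import org.openrdf.query.BindingSet;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.http.HTTPRepository;

// A minimal sketch of querying the prototype's Sesame repository from Java.
// The server URL, the repository ID ("soa") and the soa: vocabulary are assumptions.
public class ServiceImplementationQuery {

    public static void main(String[] args) throws Exception {
        Repository repo = new HTTPRepository("http://localhost:8080/openrdf-sesame", "soa");
        repo.initialize();
        RepositoryConnection con = repo.getConnection();
        try {
            // Navigate from a requirement to the WSDL and Java class that realise it.
            String query =
                "PREFIX soa: <http://example.com/soa#> \n" +
                "SELECT ?service ?wsdl ?javaClass WHERE { \n" +
                "  ?requirement soa:realisedBy ?service . \n" +
                "  ?service soa:describedBy ?wsdl . \n" +
                "  ?service soa:implementedBy ?javaClass . \n" +
                "}";
            TupleQueryResult result =
                con.prepareTupleQuery(QueryLanguage.SPARQL, query).evaluate();
            while (result.hasNext()) {
                BindingSet row = result.next();
                System.out.println(row.getValue("service") + " -> " + row.getValue("javaClass"));
            }
            result.close();
        } finally {
            con.close();
        }
    }
}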

The prototype also highlighted that auto-generating the RDF graph (the data providing the Linked Data view) from the actual source artifacts would be preferable to manual entry, especially if this could be transparently integrated with the current software development process. This has become the focus of the next step: automated knowledge extraction from the source artifacts.

Artifacts

Key artifact types of our process include:

  • the UML analysis model defining the business entities
  • the Service Contracts defining the services and service operations
  • the WSDLs and XML Schemas that provide an XML view of the business entities
  • the JAX-WS implementations (Java source code held in Subversion)
  • the Oracle WebLogic Application Server configuration files listing external dependencies
  • log files and defects

A Graph of Concepts and Instances

There is a rich graph of relationships linking the things described in the artifacts listed above. For example, the business entities defined in the UML analysis model are the subject of the services and service operations defined in the Service Contracts. The services and service operations are mapped to the WSDLs, which use the XML Schemas that provide an XML view of the business entities. The JAX-WS implementations are linked to the WSDLs and XML Schemas and deployed to the Oracle WebLogic Application Server, where the configuration files list the external dependencies. The log files and defects link back to specific parts of the code base (Subversion revisions) within the context of specific service operations. The people associated with the different artifacts can often be determined from artifact metadata.

RDF, OWL and Linked Data are a natural fit for modelling and viewing this graph, since there is a mix of concepts plus a lot of instances, many of which already have an HTTP representation. The graph also contains a number of transitive relationships (for example, a WSDL may import an XML Schema which in turn imports another XML Schema, and so on), promoting the use of owl:TransitiveProperty to help obtain a full picture of all the dependencies a component may have.
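A minimal sketch of how such a transitive import relationship could be asserted with the Sesame API is shown below. The soa: namespace and the example WSDL and schema URIs are assumptions, and whether the transitivity is actually exploited depends on the reasoner or query mechanism (e.g. a SPARQL 1.1 property path) layered on top.

import org.openrdf.model.URI;
import org.openrdf.model.ValueFactory;
import org.openrdf.model.vocabulary.OWL;
import org.openrdf.model.vocabulary.RDF;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.sail.SailRepository;
import org.openrdf.sail.memory.MemoryStore;

// A minimal sketch using an in-memory store; the soa: namespace and the
// example WSDL/schema URIs are assumptions, not the prototype's actual names.
public class TransitiveImports {

    public static void main(String[] args) throws Exception {
        Repository repo = new SailRepository(new MemoryStore());
        repo.initialize();
        RepositoryConnection con = repo.getConnection();
        try {
            ValueFactory vf = con.getValueFactory();
            URI imports = vf.createURI("http://example.com/soa#imports");

            // Declare soa:imports as transitive so an OWL-aware reasoner (or a
            // SPARQL 1.1 property path such as soa:imports+) can return the full chain.
            con.add(imports, RDF.TYPE, OWL.TRANSITIVEPROPERTY);

            // The WSDL imports Schema A, which in turn imports Schema B.
            con.add(vf.createURI("http://example.com/soa/CustomerService.wsdl"),
                    imports, vf.createURI("http://example.com/soa/Customer.xsd"));
            con.add(vf.createURI("http://example.com/soa/Customer.xsd"),
                    imports, vf.createURI("http://example.com/soa/Address.xsd"));
        } finally {
            con.close();
        }
    }
}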

Knowledge Extraction

Another advantage of the RDF, OWL and Linked Data approach is the use of unique URIs for identifying concepts and instances. This allows information contained in one artifact, e.g. a WSDL, to be extracted as RDF triples which can later be combined with the RDF triples extracted from the JAX-WS annotations of the Java source code. The combined RDF triples tell us more about the WSDL and its Java implementation than could be derived from either artifact alone.
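As a hedged sketch of the JAX-WS side of that idea, the snippet below uses reflection to read the JAX-WS annotations from a service implementation class and print them as N-Triples. The soa: namespace and property names are assumptions chosen for illustration, not an agreed vocabulary.

import java.lang.reflect.Method;
import javax.jws.WebMethod;
import javax.jws.WebService;

// A minimal sketch: extract RDF triples (printed as N-Triples) from the JAX-WS
// annotations of a service implementation class. The soa: namespace and the
// property names used below are assumptions, not an agreed vocabulary.
public class JaxWsTripleExtractor {

    private static final String SOA = "http://example.com/soa#";

    public static void printTriples(Class<?> implClass) {
        WebService ws = implClass.getAnnotation(WebService.class);
        if (ws == null) {
            return; // not a JAX-WS service implementation
        }
        String serviceName = ws.serviceName().isEmpty() ? implClass.getSimpleName() : ws.serviceName();
        String service = "<" + SOA + "service/" + serviceName + ">";

        System.out.println(service + " <" + SOA + "implementedBy> \"" + implClass.getName() + "\" .");
        if (!ws.wsdlLocation().isEmpty()) {
            System.out.println(service + " <" + SOA + "describedBy> <" + ws.wsdlLocation() + "> .");
        }
        for (Method method : implClass.getMethods()) {
            WebMethod wm = method.getAnnotation(WebMethod.class);
            if (wm != null && !wm.exclude()) {
                String operation = wm.operationName().isEmpty() ? method.getName() : wm.operationName();
                System.out.println(service + " <" + SOA + "hasOperation> \"" + operation + "\" .");
            }
        }
    }
}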

We have made some progress with knowledge extraction, but this is definitely still a work in progress. Sites such as ConverterToRdf, RDFizers and the Virtuoso Sponger provide tools and information on generating RDF from different artifact types. Part of the current experimentation is around finding tools that can be transparently layered over the top of the current software development process. Finding the best way to extract the full set of desired RDF triples from Microsoft Word documents is also proving problematic, since some natural language processing is required.

A number of candidate tools for these extraction tasks are currently being evaluated.

The Benefits of Linked Data

The prototype showed the benefits of Linked Data for navigating from the requirements for a SOA service through to details of its implementation. Looking at all the information that could be extracted leads on to a broader view of the benefits Linked Data would bring to the SOA software development process.

One specific use being planned is the creation of a Service Registry application providing the following functionality:

  • Linking the services to the implementations running in a given environment, e.g. dev, test and production. This includes linking the specific versions of the requirement, design or implementation artifacts and detailing the runtime dependencies of each service implementation.
  • Listing the consumers of each service and providing summary statistics on performance, e.g. daily usage figures derived from audit logs.
  • Providing a list of who to contact when a service is not available. This includes notifying consumers of a service outage and also contacting providers if a service is affected by an external component being offline, e.g. a database or an external web service (a query sketch for this scenario follows the list).
  • Searching for services by different criteria, e.g. business entity.
  • Tracking the evolution of services and being able to assist with refactoring, e.g. answering questions such as “Are there older versions of the XML Schemas that can be deprecated?”
  • Simplifying the running of a specific SoapUI test case for a service operation in a given environment.
  • Providing the equivalent of a class lookup that includes all project classes plus all required infrastructure classes and returns information such as the jar file the class is contained in, along with JIRA and Subversion information.
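As a hedged sketch of the notification scenario above, the query below shows the kind of SPARQL the registry might run to find the consumers and contacts affected when an external component is offline. The soa: vocabulary and the CustomerDatabase URI are assumptions carried over from the earlier sketches, not an agreed model; the query would be run against the registry's SPARQL endpoint in the same way as the first example.

// A minimal sketch of the registry query behind "who to contact when a service is
// not available". The soa: vocabulary and the CustomerDatabase URI are assumptions.
public class OutageNotificationQuery {

    public static final String WHO_TO_NOTIFY =
        "PREFIX soa: <http://example.com/soa#> \n" +
        "SELECT DISTINCT ?service ?consumer ?contactEmail WHERE { \n" +
        // With soa:dependsOn declared as an owl:TransitiveProperty (and a reasoning
        // store), indirect dependencies on the offline component are matched as well.
        "  ?service soa:dependsOn <http://example.com/soa/resource/CustomerDatabase> . \n" +
        "  ?consumer soa:consumes ?service . \n" +
        "  ?consumer soa:contact ?contact . \n" +
        "  ?contact soa:email ?contactEmail . \n" +
        "}";
}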

Configuring Persistence for Lift Web Applications

Sunday, May 29th, 2011

Generating a basic Lift web application using Maven (see Using Maven and Eclipse to generate Scala Lift Web Applications) creates a project that by default uses an H2 database. The generated application provides three obvious options for configuring persistence. The starting point is Boot.scala (located at src/main/scala/bootstrap/liftweb/Boot.scala).

In the examples below we look at:

  • how H2 is specified as the default database in Boot.scala
  • using a properties file to specify a different database (e.g. MySQL)
  • using JNDI to specify a third database (e.g. PostgreSQL).

The Default Database

The code fragment below from the generated Boot.scala file determines that:

  • if there is no JNDI entry for the database, and
  • there are no JDBC entries specified in the properties file,
  • then connect to an H2 database using the JDBC driver class “org.h2.Driver” and the JDBC URL “jdbc:h2:lift_proto.db;AUTO_SERVER=TRUE”.
class Boot {
  def boot {
    if (!DB.jndiJdbcConnAvailable_?) {
      val vendor =
        new StandardDBVendor(Props.get("db.driver") openOr "org.h2.Driver",
          Props.get("db.url") openOr "jdbc:h2:lift_proto.db;AUTO_SERVER=TRUE",
          Props.get("db.user"), Props.get("db.password"))

      LiftRules.unloadHooks.append(vendor.closeAllConnections_! _)

      DB.defineConnectionManager(DefaultConnectionIdentifier, vendor)
    }
    ...
  }
}

Since the property file is not automatically generated with JDBC properties the H2 database becomes the default.

Updating the Maven POM File

When configuring for a database other than H2, the Maven POM file needs to be updated so that a jar file containing the appropriate JDBC driver is available at runtime. The following adds the JDBC drivers for both MySQL and PostgreSQL.

<project ... >
   ...
  <dependencies>
    <dependency>
      <groupId>com.h2database</groupId>
      <artifactId>h2</artifactId>
      <version>1.2.138</version>
      <scope>runtime</scope>
    </dependency>
    <!-- Added for MySQL datasource -->
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.15</version>
      <scope>runtime</scope>
    </dependency>
    <!-- Added for PostgreSQL datasource -->
    <dependency>
      <groupId>postgresql</groupId>
      <artifactId>postgresql</artifactId>
      <version>9.0-801.jdbc4</version>
      <scope>runtime</scope>
    </dependency>
     ...
  </dependencies>
   ...
</project>

Configuring JDBC with a Properties File

Adding the property file “default.props” to the project at

  • src/main/resources/props/default.props

allows a different database to be configured by setting JDBC properties with names matching those expected in Boot.scala.

The example default.props below configures a MySQL database.

# Properties in this file will be read when running in dev mode
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://localhost:3306/liftbasic
db.user=mysql_username
db.password=mysql_password

Using JNDI to Configure a Datasource

A JNDI configured database can be enabled by adding one additional line specifying the JNDI name of the datasource, for example:

DefaultConnectionIdentifier.jndiName = "jdbc/liftbasic"

The original generated code then becomes:

def boot {
  ...
  DefaultConnectionIdentifier.jndiName = "jdbc/liftbasic"

  if (!DB.jndiJdbcConnAvailable_?) {
    val vendor =
      new StandardDBVendor(Props.get("db.driver") openOr "org.h2.Driver",
        Props.get("db.url") openOr "jdbc:h2:lift_proto.db;AUTO_SERVER=TRUE",
        Props.get("db.user"), Props.get("db.password"))

    LiftRules.unloadHooks.append(vendor.closeAllConnections_! _)

    DB.defineConnectionManager(DefaultConnectionIdentifier, vendor)
  }
  ...
}

A more configurable option is to replace the line just added with the following, which checks whether the JNDI name has been set via the “jndi.name” property in the “default.props” property file. If not, “jdbc/liftbasic” is used.

DefaultConnectionIdentifier.jndiName = Props.get("jndi.name") openOr "jdbc/liftbasic"

A resource-ref element is also added to the web.xml file (i.e. src/main/webapp/WEB-INF/web.xml) to allow the container to manage the connection to the database.


<web-app>
    ...
   <resource-ref>
      <description>Database Connection</description>
      <res-ref-name>jdbc/liftbasic</res-ref-name>
      <res-type>javax.sql.DataSource</res-type>
      <res-auth>Container</res-auth>
   </resource-ref>
    ...
</web-app>

Container Specific JNDI Settings

Each server platform has its own way of configuring the JNDI settings that map the JNDI name read by Lift to a specific database. Below are examples for Tomcat, Jetty and the CloudBees platform.

Tomcat

Tomcat manages JNDI settings via the Context element. The following adds a Context element to the Tomcat server.xml file (i.e. TOMCAT_HOME/conf/server.xml), mapping the JNDI name to a PostgreSQL database.

<Server>
  <Service>
    <Engine>
      <Host>
        ...
        <Context path="/liftbasic" docBase="liftbasic" reloadable="true" crossContext="true">
          <Resource name="jdbc/liftbasic"
                    auth="Container"
                    description="DB Connection"
                    type="javax.sql.DataSource"
                    driverClassName="org.postgresql.Driver"
                    url="jdbc:postgresql://127.0.0.1:5432/postgres"
                    username="postgres_username"
                    password="postgres_password"
                    maxActive="4"
                    maxIdle="2"
                    maxWait="-1"/>
        </Context>
        ...
      </Host>
    </Engine>
  </Service>
</Server>

Jetty

There are a number of options for configuring JNDI for Jetty. One option is to add a jetty-env.xml file to the WEB-INF directory to configure JNDI resources specifically for that webapp. The example jetty-env.xml file below configures a MySQL JNDI datasource for Jetty.

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd">
<Configure class="org.mortbay.jetty.webapp.WebAppContext">

  <New id="liftbasic" class="org.mortbay.jetty.plus.naming.Resource">
    <Arg></Arg>
    <Arg>jdbc/liftbasic</Arg>
    <Arg>
      <New class="com.mysql.jdbc.jdbc2.optional.MysqlConnectionPoolDataSource">
        <Set name="Url">jdbc:mysql://localhost:3306/liftbasic</Set>
        <Set name="User">mysql_username</Set>
        <Set name="Password">mysql_password</Set>
      </New>
    </Arg>
  </New>
</Configure>

CloudBees

CloudBees-specific configuration is placed in the cloudbees-web.xml file in the WEB-INF directory of the deployed WAR file (i.e. src/main/webapp/WEB-INF/cloudbees-web.xml). The example below maps a CloudBees managed MySQL datasource to the JNDI name read by Lift.

<?xml version="1.0"?>
<cloudbees-web-app xmlns="http://www.cloudbees.com/xml/webapp/1">
    <appid>lift</appid>
    <context-param>
        <param-name>application.environment</param-name>
        <param-value>prod</param-value>
    </context-param>
    <resource name="jdbc/liftbasic" auth="Container" type="javax.sql.DataSource">
        <param name="username" value="cloudbees_mysql_username" />
        <param name="password" value="cloudbees_mysql_password" />
        <param name="url" value="jdbc:cloudbees://liftbasic" />
    </resource>
</cloudbees-web-app>

Using Maven and Eclipse to generate Scala Lift Web Applications

Sunday, April 24th, 2011

The Lift web framework is built on top of Scala which in turn runs on the Java Platform. Lift applications are built as WAR files and deployed to servlet containers such as Tomcat and Jetty.

The Maven archetypes for Lift and the Scala IDE for Eclipse make it easy for a Java developer to start building and deploying Lift applications using familiar tools. Since Lift views (templates) are valid XHTML or HTML5, the Eclipse IDE for Java EE Developers (Helios SR2) and its XML and HTML editors can be used to design Lift web interfaces.

Installing the Scala IDE and the M2Eclipse Maven plugin into the Eclipse IDE for Java EE Developers allows new Lift applications to be created by selecting a suitable Lift archetype when creating a new Maven project.

For example the Maven command below uses the Lift basic archetype to generate a Lift project with a default H2 database, logging and user management.

mvn archetype:generate \
  -DarchetypeGroupId=net.liftweb \
  -DarchetypeArtifactId=lift-archetype-basic_2.8.1 \
  -DarchetypeVersion=2.2 \
  -DarchetypeRepository=http://scala-tools.org/repo-releases \
  -DremoteRepositories=http://scala-tools.org/repo-releases \
  -DgroupId=com._3kbo.scala \
  -DartifactId=liftbasic \
  -Dversion=1.0

The same Lift project can be generated from within Eclipse using the Maven new project wizard and selecting the Lift basic archetype (as shown below).

[Screenshot: the Maven new project wizard with the Lift basic archetype selected]

Apply the Scala nature to the project by opening the project in the Scala perspective, selecting the project (e.g. liftbasic) in the Package Explorer and then selecting Configure | Add Scala Nature.

The application created from the Maven command line can be run in Jetty with the command:

mvn jetty:run

The web application can then be accessed at http://localhost:8080/.

Similarly, the Eclipse Maven project can be run in Jetty from within Eclipse by setting up and running a Run Configuration that executes the same command:

mvn jetty:run

Maven Commands

Friday, February 20th, 2009

A list of the Maven commands and references I commonly use.

Creating Maven Projects

  • mvn archetype:create -DgroupId=com._3kbo -DartifactId=application (create a project for building a jar file )
  • mvn archetype:create -DgroupId=com._3kbo -DartifactId=webapplication -DarchetypeArtifactId=maven-archetype-webapp (create a web app project for building a war file)
  • mvn archetype:generate (Select the number of the desired archetype.)

Building, Testing, Packaging and Installing to the Repository

  • mvn clean (cleans the project, i.e. deletes the target directory)
  • mvn compile (compiles the source code)
  • mvn test (compiles the source code if required, then runs the unit tests)
  • mvn package (compiles, tests, then packages the jar or war file)
  • mvn clean install (cleans the project, then compiles, tests, packages and installs the jar or war file)
  • mvn install (compiles, tests, packages and installs the jar or war file)
  • mvn -o install (compiles, tests, packages and installs the jar or war file in offline mode)
  • mvn -Dmaven.test.skip package (packages without running the tests)
  • mvn -Dmaven.test.skip clean install (cleans and installs without running the tests)
  • mvn -o test (tests offline)

Unit Tests

  • mvn test -Dtest=AspectTest (run a specific test)
  • mvn test -Dtest=*ModelTest (run a subset of tests using a regular expression, e.g. run all ModelTest classes)

Excluding log4j.properties from jar file

Move log4j.properties from src/main/resources to src/test/resources.
That way it is picked up as a test resource for testing but excluded from the packaged jar file.

Manually Install a file to the Repository

mvn install:install-file -DgroupId=com._3kbo -DartifactId=model -Dversion=1.0.0 -Dpackaging=jar -Dfile=new_file.jar

Set Java Build Version

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.5</source>
        <target>1.5</target>
      </configuration>
    </plugin>
  </plugins>
</build>

Documentation

The mvn site command builds a useful dependency tree at target/site/dependencies.html.

  • mvn site (generate documentation in the target/site directory)
  • mvn javadoc:javadoc (generate javadoc in target/site/apidocs directory )
  • mvn help:active-profiles (lists the profiles currently active)
  • mvn help:effective-pom (displays the effective POM for the current build)
  • mvn help:effective-settings (displays the calculated settings for the project, given any profile enhancement and the inheritance of the global settings into the user-level settings.)
  • mvn help:describe (describes the attributes of a plugin)

For large projects mvn site can run out of memory. Use MAVEN_OPTS in the mvn or mvn.bat script to allocate additional memory, e.g.:

MAVEN_OPTS="-Xmx512m -Xms256m -XX:MaxPermSize=128m"

Setting up MySQL on Mac OS X for Jena SDB

Friday, October 17th, 2008

A while ago I installed MySQL 5.0.37 on my MacBook Pro using the default MySQL settings.

Recently I installed Jena SDB 1.1, following the instructions on the wiki.

As part of the install I created a mysql database, specifying utf8, e.g.

mysql> create database `sdb-index` character set utf8;

and set up a store description (named sdb-index.ttl) based on the SDB example, changing it to use MySQL and the “layout2/index” layout.

The create command worked fine

SDBROOT > bin/sdbconfig --sdb=sdb-index.ttl --create

but when I ran the test suite

SDBROOT > bin/sdbtest --sdb=sdb-index.ttl testing/manifest-sdb.ttl

I got an error in the Unicode-5 test.

Checking out the SDB notes for MySQL, it seemed likely that the problem was related to the MySQL default character set.

To see what was currently set I ran the “show variables” command below

mysql> show variables like 'character%';
+--------------------------+------------------------------------------------------------+
| Variable_name            | Value                                                      |
+--------------------------+------------------------------------------------------------+
| character_set_client     | latin1                                                     |
| character_set_connection | latin1                                                     |
| character_set_database   | latin1                                                     |
| character_set_filesystem | binary                                                     |
| character_set_results    | latin1                                                     |
| character_set_server     | latin1                                                     |
| character_set_system     | utf8                                                       |
| character_sets_dir       | /usr/local/mysql-5.0.37-osx10.4-i686/share/mysql/charsets/ |
+--------------------------+------------------------------------------------------------+

The SDB notes for MySQL recommend setting default-character-set=utf8. The MySQL documentation seemed to favour setting character-set-server=utf8 and collation-server=utf8_general_ci.

To make the changes I needed to create a config file, read by MySQL on startup, containing the new settings.

To do this I copied the example config file for small installations to /etc/my.cnf:

cp /usr/local/mysql-5.0.37-osx10.4-i686/support-files/my-small.cnf /etc/my.cnf

In the [mysqld] section of my.cnf I added the lines:
# utf8
init-connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_general_ci

After restarting MySQL, the “show variables” command showed the following utf8 updates:

mysql> show variables like 'character%';
+--------------------------+------------------------------------------------------------+
| Variable_name            | Value                                                      |
+--------------------------+------------------------------------------------------------+
| character_set_client     | latin1                                                     |
| character_set_connection | latin1                                                     |
| character_set_database   | utf8                                                       |
| character_set_filesystem | binary                                                     |
| character_set_results    | latin1                                                     |
| character_set_server     | utf8                                                       |
| character_set_system     | utf8                                                       |
| character_sets_dir       | /usr/local/mysql-5.0.37-osx10.4-i686/share/mysql/charsets/ |
+--------------------------+------------------------------------------------------------+

When I tried again, the SDB test suite ran without errors.

SDBROOT > bin/sdbtest --sdb=sdb-index.ttl testing/manifest-sdb.ttl