Posts Tagged ‘OWL’

Linked Data and the SOA Software Development Process

Thursday, November 17th, 2011

We have quite a rigorous SOA software development process; however, the full value of the collected information is not being realized because the artifacts are stored in disconnected information silos. So far, attempts to introduce tools that could improve the situation (e.g. zAgile Teamwork and Semantic MediaWiki) have been unsuccessful, possibly because the value of a Linked Data approach is not yet fully appreciated.

To provide an example Linked Data view of the SOA services and their associated artifacts, I created a prototype consisting of Sesame running on a Tomcat server, with Pubby providing the Linked Data view via the Sesame SPARQL endpoint. TopBraid was connected directly to the Sesame native store (configured via the Sesame Workbench) to create a subset of services sufficient to demonstrate the value of publishing information as Linked Data. In particular, the prototype showed how easy it became to navigate from the requirements for a SOA service through to the details of its implementation.

The prototype also highlighted that auto-generation of the RDF graph (the data providing the Linked Data view) from the actual source artifacts would be preferable to manual entry, especially if this could be transparently integrated with the current software development process. This has become the focus of the next step: automated knowledge extraction from the source artifacts.

Artifacts

Key artifact types of our process include:

A Graph of Concepts and Instances

There is a rich graph of relationships linking the things described in the artifacts listed above. For example, the business entities defined in the UML analysis model are the subject of the services and service operations defined in the Service Contracts. The services and service operations are mapped to the WSDLs, which utilize the XML Schemas that provide an XML view of the business entities. The JAX-WS implementations are linked to the WSDLs and XML Schemas and deployed to the Oracle WebLogic Application Server, where the configuration files list the external dependencies. The log files and defects link back to specific parts of the code base (Subversion revisions) within the context of specific service operations. The people associated with the different artifacts can often be determined from artifact metadata.

RDF, OWL and Linked Data are a natural fit for modelling and viewing this graph, since there is a mix of concepts plus a lot of instances, many of which already have an HTTP representation. The graph also contains a number of transitive relationships (for example, a WSDL may import an XML Schema which in turn imports another XML Schema, etc.), promoting the use of owl:TransitiveProperty to help obtain a full picture of all the dependencies a component may have.
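The transitive-dependency idea behind owl:TransitiveProperty can be sketched without any RDF tooling. The sketch below computes the transitive closure of an import graph; the file names and import structure are illustrative, not taken from the actual project artifacts.

```python
# Sketch of the transitive-dependency idea behind owl:TransitiveProperty.
# The import graph and file names below are illustrative only.

def transitive_imports(graph, start):
    """Return every schema reachable from `start` via one or more imports."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for target in graph.get(node, []):
            if target not in seen:
                seen.add(target)
                stack.append(target)
    return seen

# A WSDL imports an XML Schema, which in turn imports further schemas.
imports = {
    "CustomerService.wsdl": ["Customer.xsd"],
    "Customer.xsd": ["Address.xsd", "CommonTypes.xsd"],
    "Address.xsd": ["CommonTypes.xsd"],
}

# The full dependency picture: direct and indirect imports alike.
print(sorted(transitive_imports(imports, "CustomerService.wsdl")))
```

An OWL reasoner applied to a property declared as owl:TransitiveProperty effectively performs this same closure over the triple store.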

Knowledge Extraction

Another advantage of the RDF, OWL and Linked Data approach is the use of unique URIs for identifying concepts and instances. This allows information contained in one artifact, e.g. a WSDL, to be extracted as RDF triples which can later be combined with the RDF triples extracted from the JAX-WS annotations of the Java source code. The combined RDF triples tell us more about the WSDL and its Java implementation than could be derived from either artifact alone.
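Because both extractors emit the same subject URI, combining their output is a plain union of triple sets. The sketch below illustrates this; the URIs, predicates and class names are assumptions for illustration, not the project's actual vocabulary.

```python
# Sketch: triples extracted from two artifacts combine because they share
# the same subject URI. All URIs and predicates here are illustrative.

WSDL_URI = "http://example.org/services/CustomerService"

triples_from_wsdl = {
    (WSDL_URI, "rdf:type", "soa:Service"),
    (WSDL_URI, "soa:hasOperation", "http://example.org/ops/getCustomer"),
}

triples_from_java = {
    (WSDL_URI, "soa:implementedBy", "com.example.CustomerServiceImpl"),
    (WSDL_URI, "soa:annotatedWith", "javax.jws.WebService"),
}

# A plain set union yields a single combined description of the service.
combined = triples_from_wsdl | triples_from_java

about_service = {(p, o) for (s, p, o) in combined if s == WSDL_URI}
print(len(about_service))  # all four statements now describe one resource
```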

We have made some progress with knowledge extraction, but this is still very much a work in progress. Sites such as ConverterToRdf, RDFizers and the Virtuoso Sponger provide tools and information on generating RDF from different artifact types. Part of the current experimentation is around finding tools that can be transparently layered over the top of the current software development process. Finding the best way to extract the full set of desired RDF triples from Microsoft Word documents is also proving problematic, since some natural language processing is required.

Tools currently being evaluated include:

The Benefits of Linked Data

The prototype showed the benefits of Linked Data for navigating from the requirements for a SOA service through to details of its implementation. Looking at all the information that could be extracted leads on to a broader view of the benefits Linked Data would bring to the SOA software development process.

One specific use being planned is the creation of a Service Registry application providing the following functionality:

  • Linking the services to the implementations running in a given environment, e.g. dev, test and production. This includes linking the specific versions of the requirement, design or implementation artifacts and detailing the runtime dependencies of each service implementation.
  • Listing the consumers of each service and providing summary statistics on performance, e.g. daily usage figures derived from audit logs.
  • Providing a list of who to contact when a service is not available. This includes notifying consumers of a service outage and also contacting providers if a service is being affected by an external component being offline, e.g. a database or an external web service.
  • Searching the services by different criteria, e.g. business entity.
  • Tracking the evolution of services and being able to assist with refactoring, e.g. answering questions such as “Are there older versions of the XML Schemas that can be deprecated?”
  • Simplifying the running of a specific SoapUI test case for a service operation in a given environment.
  • Providing the equivalent of a class lookup that includes all project classes plus all required infrastructure classes and returns information such as the JAR file the class is contained in, along with JIRA and Subversion information.
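The daily usage figures mentioned in the second point can be derived from audit logs with the standard library alone. The sketch below assumes a simple one-line-per-call log format; the actual project log format is likely different.

```python
# Sketch: deriving daily usage figures per service operation from audit
# log lines. The log format below is an assumption for illustration.
from collections import Counter
from datetime import datetime

log_lines = [
    "2011-11-15T09:12:01 CustomerService.getCustomer OK 120ms",
    "2011-11-15T09:13:44 CustomerService.getCustomer OK 98ms",
    "2011-11-16T10:01:02 CustomerService.updateCustomer OK 201ms",
]

daily_usage = Counter()
for line in log_lines:
    timestamp, operation, _status, _elapsed = line.split()
    day = datetime.fromisoformat(timestamp).date().isoformat()
    daily_usage[(day, operation)] += 1

print(daily_usage[("2011-11-15", "CustomerService.getCustomer")])  # 2
```

Counts keyed on (day, operation) like this could themselves be published as RDF triples attached to each service operation's URI.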

ISO-15926

Sunday, June 29th, 2008

The ISO-15926 standard is titled “Industrial automation systems and integration—Integration of life-cycle data for process plants including oil and gas production facilities”. One of its main requirements was that the scope of the data model cover the entire lifecycle of a facility (e.g. an oil refinery) and its components (e.g. pipes, pumps and their parts).

The data model that has evolved is an RDF/OWL ontology. Its development and evolution have set some important precedents that other engineering and construction projects, such as the development of the Common Inspection and Test Plans, can learn from. These include:

  • The use of OWL to model concepts and the potential reuse of concepts already identified by ISO-15926 and modeled in OWL.
  • The construction of OWL ontologies through community participation.
  • Public sharing of web based ontologies in order to speed up the adoption of standardized concepts.
  • The development of a Semantic Web Ontology browser.
  • Provisioning for individual companies to provide their own customizations.

Wikipedia provides overviews of both ISO-15926 and ISO-15926 WIP (Work In Progress).

15926.ORG is a wiki-based site providing a Knowledge Base dedicated to the practical implementation of, and information about, ISO 15926. It includes an ISO 15926 General Introduction.

Constructing an Ontology – Common Inspection and Test Plans

Sunday, June 15th, 2008

ABE Services has developed a web-based application, the Compliance Data Management Service (CDMS), for checking work performed on site as part of projects undertaken by the building, construction and related industries.

The diagram below gives an overview of how CDMS works.

The three main parts of CDMS are:

  • Designing projects by identifying the tasks to be performed and allocating those tasks to the people who will perform them.
  • Using a mobile phone on site to check that completed tasks comply with industry standards and best practice.
  • Monitoring and managing the project via the progress and status of the completed tasks, the tasks outstanding and the non-compliant tasks.

Inspection and Test Plans (ITPs) are central to the three main parts of CDMS.

An Inspection and Test Plan (ITP) identifies the inspection, testing (verification) and acceptance requirements of a particular type of task, e.g. brickwork. Conceptually, an ITP can be thought of as a refined checklist identifying the usual steps to be followed when undertaking a particular type of task. A specific task may also have additional requirements particular to it.

An Inspection and Test Plan is composed of one or more:

  • Verification Point(s), which identify the parts of the task to verify. Each Verification Point is in turn composed of one or more:
  • Criterion (plural Criteria), each of which defines the specific requirements by which a verification point can be deemed compliant. This could include referencing specific industry standards with which the task is required to comply.
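The composition described above (an ITP containing Verification Points, each containing Criteria) could be modelled as a simple data structure. The sketch below is one possible shape; the brickwork content and the AS 3700 standard reference are illustrative only, not part of the published ITPs.

```python
# Sketch of the ITP structure: an Inspection and Test Plan is composed of
# Verification Points, each composed of Criteria. Example content is
# illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Criterion:
    description: str
    standard: Optional[str] = None  # e.g. a referenced industry standard

@dataclass
class VerificationPoint:
    name: str
    criteria: List[Criterion] = field(default_factory=list)

@dataclass
class InspectionTestPlan:
    task_type: str
    verification_points: List[VerificationPoint] = field(default_factory=list)

itp = InspectionTestPlan(
    task_type="Brickwork",
    verification_points=[
        VerificationPoint(
            name="Mortar joints",
            criteria=[Criterion("Joint thickness within tolerance", "AS 3700")],
        )
    ],
)

print(itp.verification_points[0].criteria[0].standard)  # AS 3700
```

In the OWL ontology these "composed of one or more" relationships would naturally become object properties with cardinality restrictions.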

A set of commonly recurring Inspection and Test Plans (ITPs) has been identified and published on the ABE Services web site. To help promote a higher standard of work in the building and construction industries, ABE Services decided to make these common ITPs freely and publicly available. Currently, though, their details are hidden within the CDMS application. To place them in the public domain, an OWL ontology, named the Common Inspection and Test Plans ontology, is being constructed based on this initial set of commonly recurring ITPs.

The definition of ontology in this context is along the lines of “an ontology is a specification of a conceptualization”. In this case the specification is of what constitutes a generic Inspection and Test Plan, and of the more specialized ITPs listed in the set of common Inspection and Test Plans. Other related definitions of ontology include the more philosophical “the study of the nature of being” and the more detailed Wikipedia definitions of Ontology and Ontology (Information Science).

The Common Inspection and Test Plans ontology is being implemented using the Web Ontology Language (OWL). OWL is an ontology language that can formally describe the meaning of terminology used in Semantic Web documents (see Why OWL?). In turn, the Semantic Web is about two things. It is about common formats for the integration and combination of data drawn from diverse sources, whereas the original Web mainly concentrated on the interchange of documents. It is also about a language for recording how the data relates to real-world objects. This allows a person, or a machine, to start off in one database and then move through an unending set of databases that are connected not by wires but by being about the same thing.

In the world of building and construction projects, and in related activities such as real estate, the buying and selling of houses, house and building maintenance, and potentially the selection of contractors to perform building work, the Common Inspection and Test Plans ontology will form one of these databases, connected by the concept of how to perform a specific type of building, construction or related task.

In the first instance, the Common Inspection and Test Plans ontology has been published in a very simple draft form, listing only the Inspection and Test Plans and not the more detailed Verification Points and Criteria.

This draft version is available at the URI: http://www.abeservices.com.au/schema/2008/05/InspectionTestPlans.rdf.

A good way to view it is to install the Tabulator RDF browser plugin for Firefox, as outlined in Description of a Project.

It is intended that the Common Inspection and Test Plans ontology will evolve as a result of community participation. As well as gathering general feedback, it is hoped to make available a web-based vocabulary editor, which would allow for greater collaboration (potential options include Neologism and OntoWiki).

The next steps are to:

  • add Verification Points and Criteria to the currently identified Inspection and Test Plans
  • identify additional concepts related to Inspection and Test Plans, e.g. Hold Points
  • add some worked examples

In a future article I’ll also outline how to query the Common Inspection and Test Plans ontology for specific information using the RDF query language SPARQL. A SPARQL endpoint is currently available at http://abeserver.isa.net.au:2020/, providing a SPARQL Query form.
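A SPARQL query can be issued against such an endpoint as a plain HTTP GET request with the query passed as a URL-encoded parameter. The sketch below, using only the Python standard library, constructs such a request without sending it; the `/sparql` path is an assumption, and the endpoint may no longer be live.

```python
# Sketch: building (not sending) an HTTP GET request for a SPARQL endpoint
# using only the standard library. The endpoint path is an assumption.
from urllib.parse import urlencode

endpoint = "http://abeserver.isa.net.au:2020/sparql"  # assumed path

# An illustrative query listing resources by their rdfs:label.
query = """\
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?plan ?label
WHERE {
  ?plan rdfs:label ?label .
}
"""

# SPARQL over HTTP: the query travels as a URL-encoded 'query' parameter.
request_url = endpoint + "?" + urlencode({"query": query})
print(request_url.startswith(endpoint))  # True
```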