Jenkins on the Cloud – CI Part 1

The good folks at Bitnami have made a large number of applications available as VMs for all the major cloud platforms.  One of my favorite CI tools is Jenkins, and, as you might have guessed, Bitnami offers it as multiple cloud VMs and local images.   If your sysadmins don’t have time to set up a Jenkins server for you, you can easily deploy Jenkins to AWS, Azure, DigitalOcean... you name it.

In this post I am going to show you how easy it is to set this up on Azure; it’s just as easy on the other providers.  One of the advantages of Azure is that your server gets a DNS-resolvable hostname without the need for a static IP.

Here are the steps.

1. Sign up for Azure
2. Sign up for a Bitnami account.
3. Open the following URL.
4. Follow the simple steps to get the .publishsettings file.  Then, select the machine size and region.
5. Create the Jenkins Server.  Don’t forget to name your server something meaningful.
6. The server is being created.
7. The server is created.

Once the server is created, log in to your Jenkins server and start configuring “Items”.  This was a breeze! Thank you, Bitnami.  In my next post, I will show you how to configure Jenkins with GitHub and Salesforce Chatter as part of a CI process for Salesforce’s Apex on Force.com.


Continuous Integration in Waterfall?

It has been a while since I posted on this blog.  Now that I have my ducks in a row, walked the dog, and fed the cat, it’s time to blog again.

I can’t stress enough the benefits of Continuous Integration (CI) on IT projects.   In a nutshell, CI is a software development practice based on the premise that developers check their code into source control multiple times a day.  Each check-in triggers an automated build that compiles all the code and runs the unit tests.  Thus, a software development team performs many integrations in a day.

As Agile methodologies continue to overtake traditional Waterfall (yes, Waterfall is not dead yet), CI has become a key activity in driving software quality.  Quality is driven by the frequent integrations and the early identification of defects during the automated builds.

Working as an adviser to many project teams, I have noticed another key activity that is often missed: the demo.  Whether you use Scrum, Kanban, or any other methodology, be sure that your team demos what they build iteratively.  Recently, I was on a project where the development team did not demo their application until the last week of the development cycle, and the customer had a large number of changes they wanted.  Unfortunately for this customer, the Waterfall approach the System Integrator was using did not account for this simple activity.  Instead, the changes were logged as “Change Requests” and left to another process to define their priority.

How do we fix this symptom?  If you are in a Waterfall vacuum, try to schedule a few demos with the stakeholders, whether a product owner or a system user.  Get feedback early in the process and incorporate the changes that are within the scope of the SOW.  If the changes or feedback fall outside of the SOW, then notify your customer and log the request on a risk register.

Also, incorporate CI into Waterfall.  Have your team stand up a build server and have them check in their code with unit tests.   Build often, and reward the build breaker with a special hat or task.
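As a rough illustration of what “check in their code with unit tests” means in practice, here is a minimal sketch of a JUnit test the build server could run on every check-in; the PriceCalculator class and its behavior are made up for this example.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Illustrative code under test; in a real project this would live in its own source file.
class PriceCalculator {
    double discountedPrice(double price) {
        // give 10% off on orders over 100
        return price > 100 ? price * 0.9 : price;
    }
}

// A unit test the CI server runs on every check-in; a red build means
// someone broke the build and earns the special hat.
public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscountOverOneHundred() {
        assertEquals(135.0, new PriceCalculator().discountedPrice(150.0), 0.001);
    }

    @Test
    public void leavesSmallOrdersUnchanged() {
        assertEquals(80.0, new PriceCalculator().discountedPrice(80.0), 0.001);
    }
}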

 

SaaS the market mover

For the last decade, developing and implementing enterprise software has been a nightmare.  The procurement life cycle was long and, most of the time, inefficient, as customers faced challenges both internal and external to the organization.  Software vendors were for the most part caught in this inefficient life cycle, since they had to support multiple versions of their product and provide special support for those customers who could not upgrade.  This approach hinders the innovation of any product.  For every new version, the product team had to analyze which hardware architectures and operating systems they needed to support, and which new features had to be leveraged and added to the product.

I survived many nightmares when I worked for a software vendor that had a billing platform.  The billing system was supported on at least four different operating systems and over four different databases.  In addition, you needed a supported C compiler and a supported JVM version.    I remember that patching was a nightmare, especially if you had customizations to the platform.  But billing platforms were not the only ones – I also encountered this paradigm when I worked for the two biggest software companies in the market, with a broad range of product offerings.   The support, upgrade, and maintenance of the products were painful, and the overall customer experience was awful.   I remember a CIO telling us in a meeting that ‘the upgrade was needed to correct the mistakes of the past’.  It was not her fault that the System Integrator butchered the implementation, but this was the way business was done!

In the meantime, several software companies saw the opportunity to capitalize on this mayhem.  They were pioneers in finding a better way to build and maintain software products.  They built new products on the premise that the customer experience comes first, and that the product has to scale without the customer making huge capital expenditures on data centers, hardware, and human capital.  I am not talking about taking existing products, putting them in a data center on virtual machines, and calling that SaaS.  I am talking about building the software product from the ground up to be distributed and scalable, where the platform is “shared” by many tenants with their own shards of metadata and data.

A true SaaS offering provides a seamless experience.  Upgrades are pushed to sandbox instances where customers can try new features and test their existing code base before going live to production.  More importantly, the testing is more rigorous because the customers participate in it: each customer runs their regression tests on the new version of the platform – collaborative testing.  This collaborative approach fosters a true partnership between vendor and customer.  Success is everyone’s priority because it’s defined together.

More to come….

 

TileStream On Heroku

In my spare time, I decided to build a geospatial application and to host my own tile server in lieu of using Google Maps. Yes, I decided to create my own map layers with Mapbox’s TileMill and host them on my own server rather than use the Mapbox or CloudMade SaaS hosting models.

There are many advantages to using TileMill – the most important for me was the ability to interface with PostGIS, shapefiles, and SQLite. Also, TileMill uses CartoCSS, which is a real joy to use when creating maps.  I am not going to discuss how to use TileMill here; you can reference the online tutorials to get a feel for the tool.  Here’s an overview of the steps required to push a TileStream server to Heroku.

TileStream is a high-performance tile server, developed by Mapbox, that serves MBTiles.

Originally, I rolled out TileStream on AWS.   AWS is the ideal scenario for a developer/power user, since it gives you the most administrative and configuration control.   In the long run, I migrated to Heroku purely to minimize costs.

First, create a standard Git project and add the following directories and files. The project structure should look like this:
mbtiles/ -> directory that contains your MBTiles
README.md -> standard Git project descriptor
package.json -> defines the project dependencies
Procfile -> Heroku application file that defines the process types in the app

Here’s a reference project: https://github.com/jsantisi/flageo. If you decide to clone the project, make sure you get the SSH URL.
Creating a new Git project
A. Create a directory for your project
B. Create the mbtiles directory inside your project directory
C. Copy the MBTiles from TileMill into the mbtiles directory (Hint – you have to export your MBTiles from Tilemill)
D. Create the rest of the files at the root level of your project.
E. Use git to initialize the project (Assumes that you have a git account and you are comfortable with Git Commands)
:>git init
:>git add .
:>git commit -m 'your comment'
:>git push
:>git push heroku master

In order to push your application directly to Heroku, you must configure Heroku with Git and SSH. Heroku downloads the app’s dependencies as specified in package.json. The application is a Node.js app (TileStream is a Node.js server). Here is the package.json:
{
  "name": "flageo",
  "version": "0.0.1",
  "author": "jsantisi",
  "dependencies": {
    "tilestream": "1.1.0"
  },
  "engines": {
    "node": "0.10.26",
    "npm": "1.4.3"
  }
}

Heroku is smart and figures out that this is a Node.js application that needs tilestream, node, and npm in order to run. Your console output should look like this:

:>git push heroku master
Fetching repository, done.
Counting objects: 5, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 411 bytes | 0 bytes/s, done.
Total 3 (delta 2), reused 0 (delta 0)

-----> Node.js app detected
-----> Requested node range: 0.10.26
-----> Resolved node version: 0.10.26
-----> Downloading and installing node
-----> Restoring node_modules directory from cache
-----> Pruning cached dependencies not specified in package.json
npm WARN package.json flageo@0.0.1 No repository field.
-----> Writing a custom .npmrc to circumvent npm bugs
-----> Exporting config vars to environment
-----> Installing dependencies
npm WARN package.json flageo@0.0.1 No repository field.
npm WARN package.json tilestream@1.1.0 'repositories' (plural) Not supported. Please pick one as the 'repository' field
-----> Caching node_modules directory for future builds
-----> Cleaning up node-gyp and npm artifacts
-----> Building runtime environment
-----> Discovering process types
       Procfile declares types -> web

-----> Compressing... done, 28.2MB
-----> Launching... done, v16
       http://jtilestream.herokuapp.com/ deployed to Heroku

To git@heroku.com:jtilestream.git
7cc236a..a17c1dc master -> master

Procfile
We have to address the Procfile: without the correct settings, the server will not start or be accessible to the outside world.
web: tilestream --host jtilestream.herokuapp.com --host '*' --uiPort=$PORT --tilePort=$PORT --tiles=./tiles
web: defines a web process
host: the name of your application (from the Heroku dashboard) or the host that will be allowed to connect to TileStream; you can specify all hosts with '*'.
uiPort: $PORT is the port Heroku assigns to your application. Heroku provides several environment variables that you must use in order for your application to be able to open a connection.
tilePort: the same $PORT assigned to your application by Heroku.

One advantage of leveraging a PaaS model like Heroku is that the $PORT environment variable is managed for you.  Heroku abstracts the underlying AWS offering from developers and administrators and provides you with ports for your application, along with Git and other application-centric services.  This approach makes the hosting model more appealing.   I don’t want to spend my limited playing time configuring PaaS and SaaS application services.

 

Apache Camel

During the past 18 months, I have been working with a customer who did not have a budget for purchasing commercial middleware licenses.  At first, I was very skeptical about what my team would be able to build with an open source stack.  My previous integration experience was with the major software vendors such as Oracle, IBM, TIBCO, and Vitria.

I like challenges, so we chose Apache Camel to build mediation services for several systems with different messaging formats.  It turns out that Apache Camel is the Swiss Army knife of Open Source Middleware (OSM).

I don’t want to start a purist war over commercial versus open source software; however, I am going to point out a few of the benefits of using Apache Camel, given the business decisions that our team faced.

  • Team Skill Set – Apache Camel is written in Java and it’s easy for Java Developers to climb the learning curve due to Camel’s Domain Specific Language (DSL).
  • Portability – One of the requirements our team had was the ability to ship the services to partners.  Camel is highly portable, since you can run it as a web application in Tomcat or Jetty, or you can embed the mediation module inside your own web application.  At first, we ran our Camel services in GlassFish; then we migrated to Tomcat.
  • Prototyping – We found that the level of effort to design and prototype services was minimal.  Granted, our team was composed of proficient Java developers.
  • Enterprise Integration Patterns (EIP) – Apache Camel was built to provide the EIP as components. http://camel.apache.org/eip.html

Also, there are some obvious drawbacks, such as no WYSIWYG editor, limited support, and too much flexibility.  What I mean by too much flexibility is that Camel provides you with low-level APIs to process and route messages; in the wrong hands, your code can become very complex.  For example, I had one of my senior developers implement the low-level API in order to execute XSLT with Saxon.  He wrote approximately 100 lines of code to do this, when all he needed was the Camel DSL:

from("activemq:My.Queue").
  to("xslt:com/santisij/mytransform.xsl").
  to("activemq:Another.Queue");
Apache Camel is a lightweight middleware component; it does not come with all the bells and whistles.  For example, if you want a JSR-94 business rule engine, you will need to use Drools and integrate the rule engine with your Camel routes. You can also set up clustering and high availability, but there’s no wizard or dedicated administrative server to configure them.
Camel does provide a lengthy list of components that you can leverage to build your mediation modules.  In Camel, the components are the piping; you still have to develop the logic in Java.  Also, there are no JCA-compliant adapters.  For some enterprises this can be a deal breaker, especially if you are trying to service-enable an IBM mainframe.  IBM definitely does not make a Camel component.
When do you want to use Camel?  There are several scenarios that Camel is ideal for.  The first one that comes to mind is when a web application needs to handle messaging (JMS/MQ), content-based routing, and message transformation.  Camel is lightweight and can be embedded in the application, which helps reduce the code footprint compared to non-Camel applications.  Another scenario is file polling and processing.  Camel provides a file poller component that is robust and multi-threaded, and you can implement it with one line of code:
from("file://inbox?preMoveNamePrefix=inprogress/")
Remember the days when you had to write the Java code for this?   How much code did you write? Even with Spring, this is not as trivial as it is with Camel. (You can also use Camel with your Spring applications.)
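To make the snippets above concrete, here is a minimal sketch of a RouteBuilder that wires the file poller to the XSLT and queue endpoints shown earlier; the endpoint URIs are carried over from those snippets and may need adjusting for your Camel version and ActiveMQ configuration.

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

// A minimal, illustrative route: poll an inbox directory, transform each file
// with XSLT, and drop the result onto a JMS queue.
public class InboxRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("file://inbox?preMoveNamePrefix=inprogress/")   // poll files, moving them to inprogress/ while they are handled
            .to("xslt:com/santisij/mytransform.xsl")          // apply the stylesheet from the classpath
            .to("activemq:Another.Queue");                    // hand the result to the queue (assumes the ActiveMQ component is configured)
    }

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new InboxRoute());
        context.start();            // routes run on background threads
        Thread.sleep(60000);        // keep the JVM alive for a minute in this standalone demo
        context.stop();
    }
}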
In the near future, I will update this blog with meaningful Apache Camel scenarios and code.  If you have a favorite topic that you want me to address, leave me a comment.

Date Driven Programs – Part 1

If you work in the information technology services industry, sooner or later you will run into a project that’s driven purely by dates and milestones.  The challenge with such projects is that the dates and milestones were pre-defined during the initial planning phase of the project, or during the RFP when the customer asked for a work plan with milestones.  For the scope of this blog, project and program will be used interchangeably.

What are the dates and milestones missing?  Simply, expert or quasi-expert estimates from the vendors and domain experts.  The team that answered the RFP or performed the initial project planning as part of a Program Management Office (PMO) provided a guesstimate (a guess of an estimate).  Granted, all projects and programs need a project plan with an initial baseline so that the project teams can kick off their respective phases.

A living “schedule”: The big mishap here is that the schedule does not become a living document, due to contractual constraints or the political clouds that surround all organizations.  Often, delivery dates and milestones are unchangeable.  Our PMP governing body is completely ineffective when the schedule is not managed.  Schedule management is the key to a successful project/program.

Scope Creep: The customer wants additional functionality that’s outside the statement of work or project charter. With a weak PMO, this will cause the project to fail because the project teams will be increasing their scope.   Scope management is the key to successful program delivery.   Without scope management, the project teams will be aiming at a moving target.

Schedule Slippage: When the project starts to slip to the right, the System Integrator or the vendor will make the case to add more resources, with justifiable metrics and other pertinent decision-making data.  From the customer’s perspective, this looks like a win-win situation: “you add more consultants and we get the functionality that we need”.  It is not.  This is one of the biggest pitfalls in program management, and in non-Agile programs this combination is lethal to the success of the program. Here are the misconceptions with such an approach.

  • Mythical Man-Month – “Adding more resources to a project that’s already late will delay the project even further.” Why?  Because valuable resources that are delivering functionality will have to come off the assembly line in order to train the new resources. (Fred Brooks – http://www.amazon.com/Mythical-Man-Month-Software-Engineering-Anniversary/dp/0201835959)
  • Delivery Team Coverage – Since the schedule is not being managed, the delivery team will have to work overtime (OT) in order to make up for lost time that they never had to begin with.  OT only works in the short term, before the team hits the law of diminishing marginal returns: the production index or velocity of a project team peaks at a given point, and productivity then decreases rather than increases due to fatigue and other factors. Work-life balance for the delivery team will be non-existent. (I have lived through several of these projects.)
  • Got Middle Management (MM)? –  If you are a delivery manager and you can’t manage the Work Breakdown Structure (WBS) or the schedule, then you are set up to fail.  The challenge is that the WBS and project schedule are neither realistic nor portray the actual tasking that is taking place.   This leads to more status reporting and more status meetings.  The latter drives most teams away from interfacing with the “status keepers” and alienates the delivery team from MM.  An “us” versus “them” culture takes hold, and the delivery team starts to become dysfunctional.
  • Got Status? – Any time a project slips, the hierarchy of reporting becomes greater and more time-consuming.  Everyone who never showed interest in your effort will now zero in on your team and add additional organizational pressures.  Valuable resources will have to allocate additional time for status meetings that were non-existent before the slip.

The next post will have recommendations on how to deal with and manage such challenges.

Endeca 2.3 – A record is missing its Spec

I need to confess: I have been working with Endeca for the past 5 months.  Initially, I was skeptical about yet another search engine.  In the past, we had a custom Apache Lucene implementation, which exposed me to the core functionality of search engines:

  • Ingest
  • Index
  • Display Index or Indexes.

I am not going to dive into the technical capabilities that Endeca provides in this post.  All data, including big data, is bound to be processed in the following manner: Extract, Transform, and Load (ETL).  Endeca provides CloverETL as the de facto tool for ETL.  I am briefly going to describe a scenario that I ran into during my ETL process.

ETL Process

During the Load stage of the ETL process I ran into the following error:

[Screenshot: CloverETL error message]

Endeca requires a unique record identifier, referred to as the ‘Spec Attribute’.  Let me briefly define how Endeca sees a record.

A record in an Endeca index is a collection of key/value pairs uniquely identified by the ‘Spec Attribute’.   You can create a sequence that generates the Spec Attribute value if your data does not have a unique key.  In my scenario, I was passing the Yahoo! GeoPlanet WOEID (Where On Earth ID) as the Spec Attribute.
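Purely as a conceptual illustration (the values below are made up, and a real record is built through the CloverETL graph rather than in Java), a record can be thought of as a set of attribute name/value pairs, one of which serves as the spec:

import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual sketch only: an Endeca record viewed as attribute name/value pairs,
// where "WoeId" plays the role of the spec (unique record identifier) attribute.
public class EndecaRecordSketch {
    public static void main(String[] args) {
        Map<String, String> record = new LinkedHashMap<String, String>();
        record.put("WoeId", "12345678");        // spec attribute: must be unique across records
        record.put("Name", "Example Place");    // made-up descriptive attributes
        record.put("PlaceType", "Town");
        System.out.println(record);
    }
}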

The first time I ran the graph, I saw the error.   My next step was to edit the “Bulk Add/Replace” component, where I had defined the Spec Attribute as “Woeid”.  I then checked the “Reformat” activity, and it could not be! I had fat-fingered the value.  I corrected it to “WoeId” and voila!

[Screenshot: Clover Endeca component]

JEE Web Service with Annotations: SOAP Based

The evolution of web services technology, especially JAX-WS, has eased the implementation of JEE components as services. The focus of this post is to show how you can implement EJBs as services. Many would argue for leaving services as POJOs, but there are drawbacks to exposing your services as POJOs. If you are running in a JEE application server, it’s to your benefit to leverage the container as much as possible, especially for transaction handling and security.

For those of you who have stayed away from EJB, don’t worry. Developing an EJB 3.0 service is the same as developing a POJO-based service, except for the following annotations (JSR-181):

  1. @Stateless – tells the container that the class is a Stateless Session Bean
  2. @WebService – tells the container that the class is also a WebService.

Let’s look at a simple “Greet Me Service” in order to illustrate. The GreetMeService implements a Service Endpoint Interface.  In order to create the service, first define the Interface.

The interface defines the methods that must be implemented by the class. From a Web Service perspective, it defines the web service operations. Let’s look at the Service Endpoint Interface.

import java.rmi.Remote;
import java.rmi.RemoteException;

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebParam.Mode;
import javax.jws.WebResult;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
import javax.jws.soap.SOAPBinding.Style;

@WebService(name = "GreetMeEJBBeanService")
@SOAPBinding(style = Style.DOCUMENT)
public interface GreetMeEJBBeanService extends Remote {

    @WebMethod(operationName = "greetMe")
    @WebResult(targetNamespace = "http://com.santisij.greetme/types", name = "greeting")
    public String greetMe(
            @WebParam(targetNamespace = "http://com.santisij.greetme/input", name = "name", mode = Mode.IN)
            String name) throws RemoteException; // the RemoteException must be thrown because the interface extends Remote

}

I am assuming that you are familiar with the different types of web service bindings: Document or RPC, and whether they are Literal or Encoded.  The JAX-WS specification defines the binding style as Document by default.   It does not hurt to define the binding as DOCUMENT explicitly, but you can just as easily leave out the @SOAPBinding annotation.

The key difference between Document and RPC is that the Document style carries full XML documents in the SOAP body, while the RPC style indicates that the underlying SOAP message contains the operation parameters in the request message and the return values in the response message.

Now that we have defined our interface, let’s implement the methods/operations:

import javax.ejb.Remote;
import javax.ejb.Stateless;

import javax.jws.WebService;

@Stateless(name = "GreetMeEJB", mappedName = "GreetMe-GreetMeEJB")
@Remote(GreetMeEJBBeanService.class)
@WebService(portName = "GreetMeEJBBeanServicePort",
            endpointInterface = "com.santisij.greetme.GreetMeEJBBeanService")
public class GreetMeEJBBean implements GreetMeEJBBeanService {

    public String greetMe(String name) {
        StringBuilder sb = new StringBuilder();
        sb.append("This is the Greeting Service.... ");
        sb.append("Hello ");
        sb.append(name);
        return sb.toString();
    }
}

The advantage of using the JEE spec is that this EJB is both a web service and an EJB.  If you need to invoke this service from the same JVM, you don’t have to go through the SOAP stack in order to get your response.
Here’s a simple client that looks up the EJB from the JNDI tree and invokes the GreetMeService.
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class GreetMeEJBClient {

    public static void main(String[] args) {
        String name = "Juan";
        String result = null;
        try {
            final Context context = getInitialContext();
            // JNDI name format: <mappedName>#<fully qualified remote interface>
            GreetMeEJBBeanService greetMeEJB = (GreetMeEJBBeanService) context.lookup(
                    "GreetMe-GreetMeEJB#com.santisij.greetme.GreetMeEJBBeanService");
            System.out.println("I got the Context: executing method");
            result = greetMeEJB.greetMe(name);
            System.out.println("result from RMI: " + result);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    private static Context getInitialContext() throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        // WebLogic Server 10.x connection details
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");
        return new InitialContext(env);
    }
}

I used JDeveloper to build and deploy the service to WebLogic, but you could just as easily build and deploy the service to GlassFish Server with Eclipse.  In addition, make sure that you generate and view the WSDL file. If your service location has the value “REPLACE …”, you will have to replace it with the URL of your service.

WebLogic – http://myhost:7001/GreetMeEJBBean/GreetMeEJBBeanService
Take a look at the WSDL: http://localhost:7001/GreetMeEJBBean/GreetMeEJBBeanService?wsdl

I use soapUI, a great tool for testing your services; the WSDL URL above is the one you want to point it to.  The following links provide great reference material:
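If you would rather exercise the SOAP endpoint from code instead of soapUI, a dynamic JAX-WS client can be pointed at the published WSDL. This is only a minimal sketch; the service QName below (namespace and local name) is an assumption and must be copied from your generated WSDL.

import java.net.URL;

import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class GreetMeSoapClient {

    public static void main(String[] args) throws Exception {
        // WSDL published by the container (see the URL above); adjust host and port to your deployment
        URL wsdlUrl = new URL("http://localhost:7001/GreetMeEJBBean/GreetMeEJBBeanService?wsdl");

        // Placeholder values: take the real namespace and service name from the
        // <definitions> and <service> elements of your generated WSDL.
        QName serviceName = new QName("http://com.santisij.greetme/", "GreetMeEJBBeanServiceService");
        Service service = Service.create(wsdlUrl, serviceName);

        // Bind the service endpoint interface to the port and invoke the operation over SOAP
        GreetMeEJBBeanService port = service.getPort(GreetMeEJBBeanService.class);
        System.out.println(port.greetMe("Juan"));
    }
}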

http://java.sun.com/webservices/docs/1.6/tutorial/doc/

There are several books that dive into the JAX-WS and EJB 3.x specifications (Just search for your favorite publisher)

RUL-00026

The following description is from the Oracle online documentation:

RUL-00026: exception in invoked Java method {0} {1,choice,0#|0<at line {1,number,integer}} {2,choice,0#|0< column {2,number,integer}} {1,choice,0#|0< } {3,choice,0#|0<in {4}}

Cause: A Java method invoked from RL threw an exception.

Action: Investigate the root cause of the exception thrown by the Java method. It is available as the cause of the exception thrown with this message.

Level: 1

Type: ERROR

Impact: Programmatic

This error is raised when Oracle RL, executing in the business rules engine, invokes your Java method and that method throws an exception.  If you have not had the chance, create a debug wrapper for your custom function (I will post a short tutorial shortly) and then test the function.
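As a rough sketch of what such a debug wrapper could look like (the function names here are purely illustrative, and this assumes your custom function is a static Java method that the rules engine invokes), log the inputs and rethrow so the root cause still surfaces behind RUL-00026:

import java.util.logging.Logger;

// Purely illustrative: a logging wrapper around a custom function invoked from Oracle RL.
public final class DebugFunctions {

    private static final Logger LOG = Logger.getLogger(DebugFunctions.class.getName());

    // Stand-in for your real custom function; replace the body with your own logic.
    static String scoreCustomer(String customerXml) {
        if (customerXml == null || customerXml.isEmpty()) {
            throw new IllegalArgumentException("customer XML is required");
        }
        return "score=42";
    }

    // Debug wrapper: log the input, log any failure, and rethrow so the engine
    // still reports the real root cause behind RUL-00026.
    public static String scoreCustomerDebug(String customerXml) {
        LOG.info("scoreCustomer input: " + customerXml);
        try {
            String result = scoreCustomer(customerXml);
            LOG.info("scoreCustomer result: " + result);
            return result;
        } catch (RuntimeException e) {
            LOG.severe("scoreCustomer failed: " + e);
            throw e;
        }
    }

    public static void main(String[] args) {
        // Exercise the wrapper outside the rules engine first.
        System.out.println(scoreCustomerDebug("<customer><id>1</id></customer>"));
    }
}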

Also, validate the XML input in order to make sure that all required elements are being submitted.