How to set up WordPress on OpenShift in 10 minutes

What this is about

A lot of customers would like to give the brave new container world (based on Docker technology) a try with a real-life workload. The WordPress content management system (yes, it has become more than a simple blogging tool) is an application that many customers know and use, and one that I have been asked about numerous times. From a technical point of view the WordPress use case is rather simple: all we need is a PHP runtime and a database such as MySQL. That makes it a perfect candidate for piloting container aspects on OpenShift Container Platform.

Preparation

Install Container Development Kit

I highly recommend installing the freely available Red Hat Container Development Kit (CDK for short). It gives you a ready-to-use installation of OpenShift Container Platform based on a Vagrant image, so you are up to speed in no time:

Please follow the installation instructions here: https://developers.redhat.com/products/cdk/get-started/

Setup resources on OpenShift

Spin up your CDK environment and ssh into the system:

vagrant up
vagrant ssh

Create a new project and import the template for an ephemeral MySQL database (it is not included in the CDK V2.3 distribution by default). If you prefer another database, or one with persistent storage, you can find additional templates here.

oc new-project wordpress
oc create -f https://raw.githubusercontent.com/openshift/openshift-ansible/master/roles/openshift_examples/files/examples/v1.3/db-templates/mysql-ephemeral-template.json

Now we create one pod for our MySQL database and build our WordPress application from its source code. OpenShift automatically detects that the source code is PHP and therefore chooses the PHP builder image to create a Docker image from it.

oc new-app mysql-ephemeral
oc new-app https://github.com/wordpress/wordpress
oc expose service wordpress
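To verify that everything came up as expected you can also query the resources from the CLI; a quick sanity check could look like this (pod names will differ in your environment):

# Watch the builder and application pods come up
oc get pods -w

# Show an overview of services, deployments and routes in the project
oc status

# Display the route that exposes the WordPress service
oc get route wordpress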

Now let’s log in to the OpenShift management console and see what has happened:

We now have one pod running our WordPress application (web server, PHP, source code) and one pod running our ready-to-use ephemeral (i.e. non-persistent) MySQL database.

Install WordPress

Before that, we need to note down the connection settings for our MySQL database: first the cluster IP of the mysql service, then the database name, username and password. Have a look at the following screenshots:
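If you prefer the command line to screenshots, the same information can be retrieved directly. This is a sketch assuming the default names from the mysql-ephemeral template, i.e. a service and deployment configuration called mysql:

# Cluster IP of the MySQL service
oc get svc mysql

# Database name, user and password are stored as environment variables
# (on newer clients this is "oc set env dc/mysql --list")
oc env dc/mysql --list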

Now it is time to set up and configure WordPress. Simply click on the route that has been created for your WordPress pod (in my case the hostname is “http://wordpress-wordpress.rhel-cdk.10.1.2.2.xip.io/wp-admin/setup-config.php”).

Congratulations on installing WordPress on OpenShift!

What’s next

So far we have created all the resources manually, in a fashion that is not yet reusable. A next step could therefore be to create a template from our resources, import it into the openshift namespace and make it available to our users as a service catalog item. Our users could then provision a fully installed WordPress with the click of a button.
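A minimal sketch of how this could look with the oc client (oc export is available in OpenShift 3.x clients; the exported template still needs to be parameterized before it is really reusable):

# Export the existing resources of the project as a template
oc export all --as-template=wordpress -o json > wordpress-template.json

# After parameterizing the template, import it into the shared openshift namespace
oc create -f wordpress-template.json -n openshift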

My personal look at the German eID system (“Neuer Personalausweis”)

Business Problem

Many business processes in Germany involve paper (or rather: tons of paper!) and, of course, many manual steps: think of opening a bank account or registering a car at your local “Zulassungsstelle” (vehicle registration office). In my opinion, one of the main reasons is that the identity of a user cannot be properly verified online. You could argue that offerings like video identification or Deutsche Post PostIdent came up to address this problem. However, they only solve part of it, since the signature still has to be provided manually.

In Germany, the so-called nPA (neuer Personalausweis, the new German ID card) can solve this problem by providing a qualified signature, so you are able to digitally sign contracts online. Therein lies the potential to completely transform tons of paper-based processes, and huge amounts of time and money could be saved as well!

Source: http://www.personalausweisportal.de/

Use cases of the eID system

The nPA has two main functions, “identification with the Online-Ausweisfunktion” and “electronic signature”, which allow many exciting use cases to be implemented. These range from simple verifications (such as age checks or address validation) to login mechanisms for websites (in this context the nPA can be considered a single-sign-on system). Moreover, the nPA makes it possible to apply a qualified digital signature to documents, which is legally equivalent to a handwritten signature under German law.

Since its launch in 2010, a number of federal institutions and enterprises have made their services ready for the nPA:

  • ElsterOnline (German tax)
  • Rentenkonto online (German pension fund)
  • Punkteauskunft aus dem Verkehrszentralregister (VZR)
  • UrkundenService
  • Allianz Kundenportal

A complete list of applications can be found here: http://www.personalausweisportal.de/DE/Buergerinnen-und-Buerger/Anwendungen/Anwendungen_node.html. However, from my perception, adoption still leaves a lot of room for improvement.

Architectural overview

There is extensive documentation available describing the technical architecture behind the eID system (personally I recommend the information from the BSI found here: https://www.bsi.bund.de/DE/Publikationen/TechnischeRichtlinien/tr03130/tr-03130.html). That is why I do not want to go into the nitty-gritty details.

However, to give you a rough understanding, have a look at the following illustration, which looks similar to what is found in token-based authentication systems (think of SAML and/or OpenID Connect concepts). There is a service provider (“WebServer”) that wants to protect a service; an authority that is able to validate the identity (“eID-Server”); and a login component (“AusweisApp”) that allows the end user to enter login information such as a PIN. Last but not least, the user must have a card reader connected to the local system, which talks to the login component (“AusweisApp”).

Source: http://www.bsi.bund.de

It is important to understand that the login component (“AusweisApp”) is implemented as a standalone application, which must run on the user’s computer (and of course be installed beforehand). For 2017 it is planned to release mobile versions of the app (see the Google Play Store), so that a mobile device can be used as a card reader. In my opinion this will help reduce the overall complexity from an end user’s perspective.

When looking at the system from a service provider’s point of view (e.g. an online shop that wants to enable users to log in with their nPA), there is a lot to consider. Since there is neither a public instance of the “eID-Server” nor source code available, you have two options: create your own implementation based on the BSI specification or buy the service from a provider. Additionally, you have to think about how to integrate the token into your application: since there is no reference implementation of the “eID-Server” specification, there is little to no documentation available. Overall, the process feels rather complex and opaque to me.

A detailed description of the application process can be found here: “Become Service Provider”.

Conclusion

The opportunity behind the German eID system is really huge and could speed up lots of processes and make all of our lives easier. But in my opinion there are a lot of things hindering the adoption and success of the system:

  1. There is no public eID-Server instance that can be used by public and private institutions. This makes adoption unnecessarily complicated because every service provider has to find a solution on its own.
  2. There is little documentation available for service providers. Instead there are only tons of specifications, which leave a lot of the work to the service provider.
  3. Many services require that you map your eID to the identity in their system (at least once). This makes the process very inconvenient for the end user.
  4. Currently an external card reader is needed: it has to be bought by the end user, and it does not work on the go. Fortunately, this caveat is already being addressed with the mobile app version.

My final thoughts: adoption cannot be forced by laws. Instead, I think the eID system should be developed in a more transparent and community-based manner. Moreover, integration by service providers should be as easy as adding a social login to a personal website.

 

New interview on mobile published at JAXCenter

“Man sollte auf Standards setzen und nicht für jede Applikation ein Silo aufbauen” (“You should rely on standards and not build a silo for every application”)

Looking forward to kicking off a discussion with you! 🙂

Cheers,
Sebastian

Faster and more efficient processes by combining BPM and Mobile

A. Synopsis

What this is about

A lot has happened in the area of mobile since Apple kicked off the revolution by announcing the first iPhone. However, the overall mobile market still has to be considered young and, above all, unstandardized. This confronts many organizations with huge challenges concerning the efficient development of mobile applications and their secure integration into backend IT systems.

But there is no way around mobile in the coming years! The smart combination of mobile techniques (MBaaS, microservices, etc.) and business process management approaches will drive process efficiency and speed to a whole new level.

The use case or “What if the process was at the fingertips of your customer?”

This showcase addresses a scenario that almost all enterprises in the insurance industry are facing: nowadays, customers expect to be able to contact their insurer 24/7 on an ad-hoc basis (e.g. to open a claim or simply to ask a question about their policy). Additionally, they want to see on demand what the status of a certain request is. From an enterprise point of view, insurers are looking for new ways of reacting to this new speed of communication and transparency. They are also thinking about new concepts to efficiently integrate agencies and remote workers into their existing processes. The key consequence of these requirements is to enhance the existing input & output management infrastructure with a newly established mobile channel.

In this showcase we used Red Hat Mobile Application Platform (https://www.redhat.com/en/technologies/mobile/application-platform) as a key building block to efficiently and securely connect the outside world with existing enterprise systems.

ARC_OVERVIEW - Use case

Through the platform approach we do not need to reinvent the wheel for each mobile app on the horizon. Instead we put a centralized platform in place for developing and running mobile applications in a standardized manner.

The use of Red Hat Mobile Application Platform (RHMAP) comes with the following benefits:

  • Agile approach to developing, integrating, and deploying enterprise mobile applications—whether native, hybrid, or on the web
  • Out-of-the-box automated build processes (including build farm)
  • A service catalog for reusable connectors to backends
  • Easy scale-out through cloud native architecture
  • Collaborative development across multiple teams and projects with a wide variety of leading tool kits and frameworks

Architectural overview

From a technical point of view the showcase is comprised of three main building blocks:

  • CLIENT LAYER: Hybrid mobile applications running on the end user devices
  • CLOUD LAYER: Node.js based backend running in the cloud on RHMAP
  • BACKEND LAYER: Set of business process applications running on JBoss BPM Suite as the underlying BPM engine

ARC_OVERVIEW - Component model

Client layer

Since we have two different user groups (external end customers and employees) we’ve decided to develop two separate applications:

  • Customer App: This app is meant to be used by our end customers (on a broad range of different mobile devices) and has therefore been implemented with hybrid app development principles in mind. We chose Apache Cordova (https://cordova.apache.org/) as our core development framework, which enables us to build our app for all common mobile operating systems from a single code base (“develop once, run everywhere”). For the UI and application framework we decided on a combination of Ionic (http://ionicframework.com/) and AngularJS (https://angularjs.org/). Both projects have a vibrant, active community and have been successfully adopted by many projects.

  • Employee App: This app targets remote workers (such as insurance agents) who work on our processes remotely. We decided on the same hybrid app approach in order to share code and speed up development. However, for an end user group where we might influence the choice of device (such as the Apple iPhone), a native app would also have been an option (RHMAP provides an SDK for all popular mobile operating systems, so we could still reuse the existing backend functionality in our cloud layer).

The source code of both applications is hosted on RHMAP, which allows us to use the built-in build farm (giving us push-button builds for iOS, Android, and others) as well as to configure and preview the applications.

Client application in RHMAP

Cloud layer

The cloud part of an application built with RHMAP consists of a so-called “Cloud Code App”, which provides the core functionality for our clients, and a set of reusable MBaaS services that enable connectivity to 3rd party (backend) systems. The following illustration shows an overview of all components created for our showcase:

Application overview in Red Hat Mobile Application Platform

Cloud code apps

For our showcase we’ve implemented a single Node.js based app called Cloud App, which accepts all incoming requests from our client layer. RHMAP provides a feature-rich development framework (including custom Node.js convenience modules) that makes the creation of cloud code apps easy and efficient. By using Node.js as our programming language we get all the benefits of its evented, asynchronous model, which works extremely well with our use case of a data-intensive real-time (DIRT) application.

MBaaS services (Mobile backend as a service)

An MBaaS (Mobile Backend-as-a-Service) is the primary point of contact for end user applications, both mobile and web. The MBaaS hosts Node.js applications, either as REST API servers or as Express.js based web apps. Its primary purpose is to allow users (developers) of RHMAP to deploy the Node.js server side for their mobile apps. The MBaaS also provides functionality such as caching, persistence, data synchronization and a range of other mobile-centric features. Multiple MBaaS instances may be used for customer segregation and/or lifecycle management (environments).

For this showcase we’ve developed a new MBaaS connector called fh-connector-jbpm-cloud, which is meant to be reused across multiple applications hosted on RHMAP. For use in our project we instantiated it and configured its environment variables to point to our specific JBoss BPM Suite instance in the backend layer.

RHMAP MBaaS BPM connector

Function-wise, the MBaaS connector currently provides the following functionality:

  • Process management
    • Start process
    • Get process instance
  • Task management
    • Load tasks
    • Load task content
    • Claim task
    • Complete task
    • Release task
    • Start task

Push notifications

We make use of the RHMAP built-in mobile push API, which provides a generic way to interface with multiple push networks (Google Cloud Messaging, Apple Push Notification Service and Microsoft Push Notification Service) via REST or Node.js. This makes it very convenient to send out push notifications from 3rd party applications (such as JBoss BPM Suite, as demonstrated in our showcase).

RHMAP Push Configuration

More information on the push API can be found here http://docs.feedhenry.com/v3/product_features/push_notifications.html
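To illustrate how a 3rd party system like JBoss BPM Suite can trigger a notification over REST, here is a rough curl sketch against the push sender endpoint (RHMAP’s push service is based on AeroGear UnifiedPush); the base URL, push application ID and master secret are placeholders that have to be taken from your own push configuration:

# Hypothetical example - replace host, push application ID and master secret with your own values
curl -X POST "https://<rhmap-push-host>/rest/sender" \
  -u "<pushApplicationID>:<masterSecret>" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{"message": {"alert": "Your request status has changed"}}'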

Backend layer

This layer comprises a large set of different backend systems that typically run inside an organization’s datacenter, such as application servers, databases, messaging systems or ESB-like services. For the sake of this showcase we’ve chosen JBoss BPM Suite (https://www.redhat.com/en/technologies/jboss-middleware/bpm) as the only system here. BPM Suite provides a full-blown authoring and runtime environment for business process applications focused on open standards (such as BPMN 2.0). The included BPM engine also exposes a rich REST API, which is used extensively by our MBaaS connector fh-connector-jbpm-cloud to start new process instances, control the process flow, and so on.
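As a rough illustration of what the connector does under the hood, the following curl sketch starts a new process instance via the BPM Suite 6 remote REST API; host, credentials, deployment ID and process definition ID are placeholders for your own environment:

# Hypothetical example - adjust host, credentials, deployment and process IDs
curl -X POST \
  -u '<bpms-user>:<bpms-password>' \
  "http://<bpms-host>:8080/business-central/rest/runtime/<deploymentId>/process/<processDefinitionId>/start"

The connector wraps calls like this (plus the task management operations listed above) behind a reusable Node.js interface, so the cloud app never has to deal with the REST details directly.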

Request processing application

The core business process for our scenario is implemented as a simple BPMN 2.0 workflow, which is deployed in the form of our Java based Request Processing Application.

Business Process Diagram

After being instantiated, the process first sends a push notification to the requesting customer by simply calling the RHMAP push API. Then a human task called Process request creates a new work item in the work basket of our employees. Through our Employee App we enable remote employees to work on the request directly.

In addition, the work items can be claimed via a traditional web-based application named Business Central, which is provided as part of JBoss BPM Suite.

Edit task via Business Central

Based on the decision, the process completes with a corresponding push notification informing the customer.

More information on how to develop process applications can be found in the JBoss BPM Suite Development Guide.

B. Walkthrough

1. Customer creates new request

Customer App - Login
Customer App - Dashboard
Customer App - Create new request
Customer App - Create new request
Customer App - Create new request
Customer App - Dashboard showing push
Customer App - Show process status
Customer App - Process instance details

2. a) Employee works on request

Request processing application - Work on task instance
Request processing application - View process model

3. b) Agency / Remote worker completes

TBD

4. Customer receives push updates on current status

Customer App - Push notification on process status
Customer App - View dashboard
Customer App - View process status

C. Reference Information

Source code

The source code can be found here:

Client layer

Cloud layer

Backend layer

D. Credits

Special thanks to Sebastian Dehn (sdehn@redhat.com) for implementing large parts of the client layer.

OpenShift Quicktip: Testdriving persistent storage with NFS

Creating a persistent volume for NFS storage plugin

The administrator is responsible for creating persistent volumes (PVs). The administrator assigns some external resource (a partition, an entire device, an NFS volume, or similar) to a PV.

  1. Login to OpenShift with an admin user
  2. Create the persistent volume (a minimal example of the file is sketched below):
oc create -f persistent-volume-nfs.yaml
  3. Check the status of the persistent volume:
oc get pv
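A minimal sketch of what persistent-volume-nfs.yaml could look like, assuming an NFS export at nfs.example.com:/exports/data (the actual file used here is in the repository linked below):

cat <<'EOF' > persistent-volume-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    # Hypothetical NFS server and export path - replace with your own
    server: nfs.example.com
    path: /exports/data
EOF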

Creating a persistent volume claim

The end user/developer is responsible for making a request for a volume via a persistent volume claim (PVC). They get some volume assigned and do not really need to know what backs it. Keep in mind that there is a 1:1 mapping between PVs and PVCs.

  1. Login to OpenShift
  2. Create persistent volume claim:
oc create -f persistent-volume-claim.yaml
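And a corresponding sketch of persistent-volume-claim.yaml, assuming the claim should bind to a volume like the one defined above:

cat <<'EOF' > persistent-volume-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF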

The source code can be found on my Github page: https://github.com/sebastianfaulhaber/openshift-v3-showcase/tree/master/openshift-resources

OpenShift Quicktip: Limiting resource consumption for users

A. Resource limits

Resource limits allow you to set boundaries (max/min & default) for the compute resources a developer can define on pod/container level (see https://docs.openshift.com/enterprise/3.1/dev_guide/compute_resources.html).

  1. Login to OpenShift with an admin user
  2. Change to the target project with
oc project <my-project>
  3. Import a limit range (see https://docs.openshift.com/enterprise/3.1/dev_guide/limits.html for the available options; a minimal example is sketched below)
oc create -f limit-range.json
  4. Browse to the OpenShift admin console, select the project and then “Settings”.
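A minimal sketch of what limit-range.json could contain; the values are illustrative and should be adapted to your sizing needs:

cat <<'EOF' > limit-range.json
{
  "kind": "LimitRange",
  "apiVersion": "v1",
  "metadata": { "name": "limits" },
  "spec": {
    "limits": [
      {
        "type": "Container",
        "max":     { "cpu": "1",    "memory": "1Gi" },
        "min":     { "cpu": "100m", "memory": "128Mi" },
        "default": { "cpu": "200m", "memory": "256Mi" }
      }
    ]
  }
}
EOF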

B. Quotas

A quota allows you to set hard limits on the overall resource consumption at project level. This is particularly useful for creating a t-shirt-size based accounting model (small, medium, large) for OpenShift. See also https://docs.openshift.com/enterprise/3.1/dev_guide/quota.html.

  1. Login to OpenShift with an admin user
  2. Change to the target project with
oc project <my-project>
  3. Create a quota for the project (a minimal example is sketched below):
oc create -f resource-quota.json
  4. Browse to the OpenShift admin console, select the project and then “Settings”.
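A minimal sketch of what resource-quota.json could contain; again, the numbers are illustrative:

cat <<'EOF' > resource-quota.json
{
  "kind": "ResourceQuota",
  "apiVersion": "v1",
  "metadata": { "name": "quota-small" },
  "spec": {
    "hard": {
      "cpu": "2",
      "memory": "2Gi",
      "pods": "10"
    }
  }
}
EOF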

The source code can be found on my Github page: https://github.com/sebastianfaulhaber/openshift-v3-showcase/tree/master/openshift-resources

IBM WebSphere Application Server Liberty Core on OpenShift V3 Tutorial

A. Synopsis

What this is about

This project demonstrates how to use IBM WebSphere Liberty (a lightweight Java EE container comparable to Apache Tomcat) on Red Hat’s leading Platform-as-a-Service (PaaS) solution, OpenShift Enterprise V3 (https://enterprise.openshift.com/). Since OpenShift is perfectly suited for running containerized workloads based on the Docker format, we could reuse the officially supported image built by IBM. Additionally, we’ve added OpenShift’s powerful templating mechanism in order to create a superior developer experience:

  • Self-service based provisioning of new IBM WebSphere Application Server Liberty Core containers
  • Existing, not yet containerized applications can simply be reused
  • No prior experience with Docker needed
  • Automated build & deploy life cycle

The source code can be found here: https://github.com/sebastianfaulhaber/openshift-v3-showcase/tree/master/websphere-liberty

Screenshots

1. Select WebSphere Liberty template

2. Provide details on application

3. Application artifacts successfully created

4. OpenShift automatically builds initial Docker image for application

5. Build and deployment completed successfully

6. IBM WebSphere Liberty startup screen

9. Demo application running

B. Installation

1. Setup OSE Environment

There are multiple ways to spin up a new OpenShift environment:

All-In-One VM

This community-provided Vagrant box is probably the most convenient and fastest way to start your OpenShift developer experience. It features a complete OpenShift installation within one VM that allows you to test all aspects of a container application platform.

See here for detailed instructions: http://www.openshift.org/vm/

On premise installation

The instructions for setting up an on-premise installation of OpenShift Enterprise V3 can be found here: https://docs.openshift.com/enterprise/3.1/install_config/install/index.html.

OpenShift Dedicated

OpenShift Dedicated is a new offering from OpenShift Online. It provides a hosted OpenShift 3 environment to run the containers powering your applications. The offering is an isolated instance hosted on Amazon Web Services (AWS), with increased security and management by the OpenShift operations team, so that you have peace of mind about the stability and availability of the platform.

See https://www.openshift.com/dedicated/

2. Enable OpenShift to run Docker images with USER in the Dockerfile

The currently provided version of IBM’s WebSphere Liberty Docker image requires the use of USER in the Dockerfile. Due to the security implications of USER statements, OpenShift restricts their use by default. In order to make this project work, you need to relax the security settings as described here: https://docs.openshift.com/enterprise/3.1/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile.

# Login to your OpenShift master via SSH
su -
oc edit scc restricted
# Change the runAsUser.Type strategy to RunAsAny

3. Import template into your OpenShift environment

wget https://raw.githubusercontent.com/sebastianfaulhaber/openshift-v3-showcase/master/websphere-liberty/websphere-liberty-template.json
oc create -f websphere-liberty-template.json -n openshift

C. User guide

1. How can I access the provided demo application?

This project provides a simple Java EE web application that can be used to verify that the showcase is working. It can be accessed after provisioning via: /Sample1/SimpleServlet (e.g. http://liberty-app-http-route-demo.apps.example.com/Sample1/SimpleServlet).

2. How can I use this showcase in my own OpenShift installation?

  1. Create a fork of the repository in your own Git environment
  2. Add your applications to the app/ folder. They will be picked up and deployed automatically.
  3. Specify the URL of the forked project as SOURCE_REPOSITORY_URL when creating a new application (see the sketch after this list).
  4. Done.
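From the CLI this could look roughly as follows; the template name and parameter defaults are assumptions, so check the imported template (oc get templates -n openshift) for the actual names:

# Hypothetical example - verify template and parameter names in your environment
oc new-app --template=websphere-liberty \
  -p SOURCE_REPOSITORY_URL=https://github.com/<your-user>/<your-fork>.git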

3. How can I automate the build & deployment lifecycle?

The project template comes with preconfigured OpenShift webhook triggers for Github and a generic system (see https://docs.openshift.com/enterprise/3.1/dev_guide/builds.html#webhook-triggers for more details).

20. Configure webhook triggers

In order to automate the build and deployment lifecycle you simply need to integrate the webhook URLs according to your SCM specific instructions:

4. How can I view the logs of my application?

The logs can be accessed via the OpenShift Enterprise console:
Browse > Pods > YOUR_LIBERTY_POD > Logs. Alternatively you could also use the CLI command oc logs YOUR_LIBERTY_POD (https://docs.openshift.com/enterprise/3.1/cli_reference/basic_cli_operations.html#troubleshooting-and-debugging-cli-operations).

21. View application logs

5. How can I connect to the container instance that is running my application?

You can open a terminal connection to the container via the OpenShift Enterprise console: Browse > Pods > YOUR_LIBERTY_POD > Terminal. Alternatively you could also use the CLI command oc rsh YOUR_LIBERTY_POD (https://docs.openshift.com/enterprise/3.1/cli_reference/basic_cli_operations.html#troubleshooting-and-debugging-cli-operations).

22. Connecting to the container

D. Reference Information

WebSphere specific

OpenShift specific

E. Credits

Special thanks to Chris Eberle <ceberle@redhat.com>

IBM WebSphere Application Server Liberty Core on OpenShift V2 Tutorial

A. Synopsis

What this is about

We’ve created an IBM WebSphere Application Server Liberty Core cartridge in order to demonstrate the power and flexibility of Red Hat’s Open Hybrid Cloud strategy. Liberty Core provides a lightweight alternative to the classic WebSphere Application Server ND (a cartridge for which is available here: https://github.com/juhoffma/openshift-origin-websphere-cartridge), mainly targeting web applications that use the Java EE Web Profile.

The cartridge currently supports the following features:

  • Provisioning of new IBM WebSphere Application Server Liberty Core instances in seconds (!)
  • Full build & Deploy life cycle (as with JBoss EAP cartridge)
  • Hot Deployment
  • Auto Scaling with web traffic
  • Jenkins Integration
  • Integration into JBoss Developer Studio

The source code can be found here: https://github.com/juhoffma/openshift-origin-liberty-cartridge

Screenshots

1. Create new Gear

2. Select WebSphere Application Server Cartridge

2. Select WebSphere Application Server Cartridge - Scaling

3. Cartridge creation is finished

4. Overview of newly created application

5. View of created sample application

6. HAProxy Scaling demo

B. Installation

1. Setup OSE Environment

You have the following deployment options for this cartridge:

2. Cartridge Installation

This cartridge does not ship the Liberty Profile binaries; these have to be installed manually before the cartridge works. The binaries can be installed in two different ways:

  • OPTION 1 – Install the Liberty Core underneath the versions directory
  • OPTION 2 – Install Liberty Core outside of the cartridge context

The following sections describe the 2 different methods.

Prepare the installation

Download the required IBM WebSphere Application Server Liberty Core Installer from the IBM developer site https://developer.ibm.com/wasdev/downloads/liberty-profile-using-non-eclipse-environments/:

node# cd /opt
node# wget https://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/wasdev/downloads/wlp/8.5.5.4/wlp-developers-runtime-8.5.5.4.jar

Clone the cartridge repository:

node# git clone https://github.com/juhoffma/openshift-origin-liberty-cartridge.git

Initialized empty Git repository in /opt/openshift-origin-liberty-cartridge/.git/
remote: Counting objects: 139, done.
remote: Compressing objects: 100% (103/103), done.
Receiving objects: 100% (139/139), 898.18 KiB | 1.33 MiB/s, done.
remote: Total 139 (delta 46), reused 106 (delta 15), pack-reused 0
Resolving deltas: 100% (46/46), done.

OPTION 1 – Install the Liberty Core underneath the versions directory

node# java -jar /opt/wlp-developers-runtime-8.5.5.4.jar --acceptLicense /opt/openshift-origin-liberty-cartridge/versions/8.5.5.4

Before you can use, extract, or install IBM WebSphere Application
Server for Developers V8.5.5, you must accept the terms of
International License Agreement for Non-Warranted Programs and
additional license information. Please read the following license
agreements carefully.


The --acceptLicense argument was found. This indicates that you have
accepted the terms of the license agreement.


Extracting files to /opt/openshift-origin-liberty-cartridge/versions/8.5.5.4/wlp
Successfully extracted all product files.

OPTION 2 – Install Liberty Core outside of the cartridge context

Binary installation

You can also install IBM WebSphere Application Server Liberty Core outside of the cartridge and define the location using a node-level variable, just as you can with the full WebSphere cartridge.

To make this work, all you have to do is create the file /etc/openshift/env/OPENSHIFT_LIBERTY_INSTALL_DIR and put the installation location into it. See the official documentation for an example of how to configure node-level variables.
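For example (assuming Liberty Core was extracted to /opt/IBM/wlp; run this on every node):

# Hypothetical install location - adjust to where you extracted Liberty Core
node# echo "/opt/IBM/wlp" > /etc/openshift/env/OPENSHIFT_LIBERTY_INSTALL_DIR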

Customize SELinux Configuration

Since IBM WebSphere Application Server Liberty Core is installed outside of the gear’s sandbox, you need to customize the SELinux permission settings so that the installation directory “/opt/IBM/” can be accessed with the appropriate permissions.
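One possible way to do this is to label the installation directory with a context the gears are allowed to read; the context type below is an assumption and should be verified against the SELinux policy of your OpenShift node:

# Hypothetical labeling - verify the correct context type for your installation
node# semanage fcontext -a -t openshift_var_lib_t "/opt/IBM(/.*)?"
node# restorecon -Rv /opt/IBM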

As a workaround and/or for testing purposes you could also temporarily disable SELinux policy enforcement:

setenforce 0

Install and activate the cartridge

node# oo-admin-cartridge --action install -s /opt/openshift-origin-liberty-cartridge --mco


1 / 1
vm.openshift.example.com
   output: install succeeded for /opt/openshift-origin-liberty-cartridge
Finished processing 1 / 1 hosts in 3174.10 ms

broker# oo-admin-ctl-cartridge -c import-node --activate --obsolete --force
Importing cartridges from node 'vm.openshift.example.com'.
Updating 26 cartridges ...
54f62e57e659c5cd31000001 # U mysql-5.5 (active)
54f62e57e659c5cd31000002 # U mysql-5.1 (active)
54f62e57e659c5cd31000003 # U jenkins-1 (active)
54f62e57e659c5cd31000004 # U nodejs-0.10 (active)
54f62e57e659c5cd31000005 # U haproxy-1.4 (active)
54f62e57e659c5cd31000006 # U jbosseap-6 (active)
54f62e57e659c5cd31000007 # U jbossews-2.0 (active)
54f62e57e659c5cd31000008 # U jbossews-1.0 (active)
54f62e57e659c5cd31000009 # U php-5.4 (active)
54f62e57e659c5cd3100000a # U php-5.3 (active)
54f62e57e659c5cd3100000b # U mongodb-2.4 (active)
54f62e57e659c5cd3100000c # U postgresql-9.2 (active)
54f62e57e659c5cd3100000d # U postgresql-8.4 (active)
54f62e57e659c5cd3100000e # U python-3.3 (active)
54f62e57e659c5cd3100000f # U python-2.7 (active)
54f62e57e659c5cd31000010 # U python-2.6 (active)
54f62e57e659c5cd31000011 # U perl-5.10 (active)
54f62e57e659c5cd31000012 # U diy-0.1 (active)
54f62e57e659c5cd31000013 # U jenkins-client-1 (active)
54f62e57e659c5cd31000014 # U ruby-1.8 (active)
54f62e57e659c5cd31000015 # U ruby-1.9 (active)
54f62e57e659c5cd31000016 # U ruby-2.0 (active)
54f62e57e659c5cd31000017 # U amq-6.1.1 (active)
54f62e57e659c5cd31000018 # U cron-1.4 (active)
54f62e57e659c5cd31000019 # U fuse-6.1.1 (active)
54f62e57e659c5cd3100001a # A hoffmann-liberty-8.5.5.4 (active)

Make sure you see the line reporting the cartridge 54f62e57e659c5cd3100001a # A hoffmann-liberty-8.5.5.4 (active)

broker# oo-admin-broker-cache --clear; oo-admin-console-cache --clear

C. Reference Information

WebSphere specific

OpenShift specific

JBoss Operations Network Showcase

A. Synopsis

What this is about

This demo project showcases some of the most common use cases regarding JBoss Operations Network, Red Hat’s leading enterprise middleware systems management tooling (more information on JON can be found here: http://www.redhat.com/de/technologies/jboss-middleware/operations-network).

  • Automated provisioning of resources (e.g. JBoss Enterprise Application Platform)
  • Integration of custom JMX MBeans into JON

The resources provided with this showcase are intended as a starting point and should help you set up these use cases in your own JBoss Operations Network environment.

Screenshots – Automated provisioning of resources

1. Deploy Bundle - Select Bundle
2. Deploy Bundle - Select Bundle
3. Deploy Bundle - Select Bundle Version
4. Deploy Bundle - Enter input vars
5. Deploy Bundle - Bundle Deployment successful
6. Inventory - Import EAP instance
7. Inventory - JBoss EAP successfully imported

Screenshots – Integration of custom JMX MBeans

1. Custom MBean overview
2. Execute an operation on a custom MBean
3. View the operations history
4. Monitoring of custom MBean metrics

B. Base installation and prerequisites

1. JBoss Operations Network (JON)

Follow the official installation instructions for JON V3.3 here: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Operations_Network/3.3/html/Installation_Guide/index.html

2. Apache Maven

You will need to install Apache Maven for building the source code for this showcase. Please follow the installation instructions here: http://maven.apache.org/download.cgi

C. Use Case – Automated provisioning of resources

1. Benefits

This use case describes how to create a standardized JBoss Enterprise Application Platform (EAP) installation package; in JON terms, this is a so-called “Bundle”. The Bundle can then be provisioned to one or more target systems in a highly automated fashion.

More information on the provisioning features can be found in the official JON documentation: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Operations_Network/3.3/html/Users_Guide/part-Deploying_Applications_and_Content.html.

2. Showcase components

  1. eap-bundle – This project provides the core ingredients for creating a bundle to provision instances of JBoss Enterprise Application Platform V6.3.

3. Installation

  1. Create Bundle Group “JBoss EAP V6 Base Installation”
    Create Bundle Group

  2. Define a new bundle
    Create Bundle - Upload Recipe
    Create Bundle - Assign Bundle Group
    Create Bundle - Upload Bundle Files
    Create Bundle - Summary

  3. Deploy the bundle to a target system
    Deploy Bundle - Select Bundle
    Deploy Bundle - Select Bundle
    Deploy Bundle - Define Destination
    Deploy Bundle - Select Bundle Version
    Deploy Bundle - Enter input vars
    Deploy Bundle - Deployment scheduled
    Deploy Bundle - Bundle Deployment successful
    Deploy Bundle - Filesystem view on deployment

  4. Import the newly created EAP instance into JON inventory
    Inventory - Import EAP instance

  5. Configure connection settings
    Inventory - Configure connection settings

  6. Start the server
    Inventory - JBoss EAP successfully imported
    Inventory - JBoss EAP successfully imported

D. Use Case – Integration of custom JMX MBeans

1. Benefits

It is a very common scenario that your applications have been enhanced with MBeans for management, runtime configuration or even monitoring. JBoss Operations Network allows you to integrate these MBeans into its management environment, which means you can leverage JON’s full systems management and monitoring capabilities:
  • Historical monitoring of your MBeans
  • Dashboarding and alerting
  • Execution of MBean operations

2. Showcase components

  1. helloworld-mbean – JEE application that shows different ways of exposing MBeans (taken from EAP quickstarts at https://github.com/jboss-developer/jboss-eap-quickstarts/).
  2. custom-jmx-plugin – JBoss Operations Network Agent plugin that integrates custom MBeans (provided by helloworld-mbean) via JMX into JON’s monitoring & dashboarding.

3. Installation

1. Build and deploy the MBean provider application

Build the application with Maven:

# OPTIONAL: Download the provided settings.xml to your local Maven conf dir
curl -o ~/.m2/settings.xml https://raw.githubusercontent.com/sebastianfaulhaber/jon-demo/master/doc/settings.xml

# Start the build
cd ./helloworld-mbean
mvn clean install

# Hot deploy the application to an instance of JBoss EAP
# you could use the instance that has been provisioned in the use case "Automated provisioning of resources"
cp ./helloworld-mbean-webapp/target/jboss-helloworld-mbean-helloworld-mbean-webapp.war <EAP_INSTALLATION_DIR>/standalone/deployments

2. Build and deploy the agent plugin

Build the application with Maven:

# OPTIONAL: Download the provided settings.xml to your local Maven conf dir
curl -o ~/.m2/settings.xml https://raw.githubusercontent.com/sebastianfaulhaber/jon-demo/master/doc/settings.xml

# Start the build
cd ./custom-jmx-plugin
mvn clean install

Deploy the agent plugin:

cp ./target/custom-jmx-plugin-1.0-SNAPSHOT.jar <JON_SERVER_INSTALL_DIR>/plugins/

The JON server periodically scans its “plugins” directory for updates and will pick up the agent plugin after some time; it is then pushed to the connected JON agents. The agents might need to be restarted to detect the plugin after the initial installation.
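A hedged sketch of how an agent restart could look, assuming a default agent installation (adjust the path to your environment):

# Hypothetical agent location - adjust to your installation
<JON_AGENT_INSTALL_DIR>/bin/rhq-agent-wrapper.sh restart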

In the end you should see a service group called “Myapp Services” that contains the application’s MBeans. You can now start adding the contained metrics to custom dashboards, defining alerts on them, and so on.

1. Custom MBean overview

Z. Appendix

The source code can be found here: https://github.com/sebastianfaulhaber/jon-demo

Demonstration of IBM WebSphere Application Server on OpenShift V2

I’ve recorded a small demonstration video showing the OpenShift V2 cartridge:

Cheers,
Sebastian