Behavior, Content, Money – 3 Things you should never give away for free!!!

BCmoney MobileTV

Integrate a PHP-based SOAP RPS game server in another language: a Java desktop GUI

Posted by bcmoney on February 23, 2017 in Gaming, Java, PHP, Web Services with No Comments

Rock-paper-scissors chart (Photo credit: Wikipedia)

Now that we have a Web Service ready to consume (even though it is SOAP based), it should be pretty easy to extend to Java, which also means it should be possible with a little bit of effort to create a Desktop GUI.
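At its core, consuming the service from Java is just a matter of POSTing a SOAP envelope over HTTP, so a desktop GUI only needs a thin request-building layer underneath it. Here is a minimal sketch in plain Java; the `urn:rps` namespace and the `choice` attribute are my own assumptions for illustration, while the `GamePlay` operation and the `game`/`player` format come from the service described in the earlier post:

```java
public class RpsSoapClient {
    static final String NS = "urn:rps"; // assumed namespace, not from the real WSDL

    // Build a SOAP 1.1 envelope wrapping the game/player request format.
    static String buildEnvelope(String operation, String gameId,
                                String p1Choice, String p2Choice) {
        return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
            + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<soap:Body><" + operation + " xmlns=\"" + NS + "\">"
            + "<game id=\"" + gameId + "\">"
            + "<player id=\"" + gameId + "#p1\" choice=\"" + p1Choice + "\"/>"
            + "<player id=\"" + gameId + "#p2\" choice=\"" + p2Choice + "\"/>"
            + "</game></" + operation + "></soap:Body></soap:Envelope>";
    }

    public static void main(String[] args) {
        // In the GUI this string would be POSTed to the endpoint with
        // Content-Type: text/xml and a SOAPAction header.
        System.out.println(buildEnvelope("GamePlay", "1234", "ROCK", "SCISSORS"));
    }
}
```

In a real client you would more likely let `wsimport` generate stubs from the WSDL, but hand-rolling the envelope like this keeps the moving parts visible while prototyping.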

For more on creating a Java-based SOAP server to have an all-Java version of this solution, see:



See the other parts here:

Exposing your PHP Rock Paper Scissors game via SOAP Web Services

Posted by bcmoney on January 14, 2017 in Gaming, PHP, Web Services with No Comments

Rock-paper-scissors chart (Photo credit: Wikipedia)

So why SOAP? Quite simply, this was what I used back when I was first working through PHP and learning the hard way how to expose some server-side “business logic” as a Web Service. This was over a decade ago (late 2006 through 2007), when Google’s SOAP Search API had yet to be phased out and was still one of the biggest reference implementations of SOA principles. In all truth I’d almost never use SOAP for creating Web Services from scratch anymore, but it does have a few minor advantages in niche cases thanks to WS-Security, SAML integration, well-formed response guarantees via schema validation, contract-first development for more stability amongst separate inter-dependent departments/organizations, etc.

I decided to dust this example off simply to have a historical reference of traditional SOA while moving a number of legacy APIs I support to RESTful architecture (since REST is the clear winner and most APIs I need to work with today are REST based anyway).

So this example will serve as a useful tool to go back and review every now and then, especially if your consulting ever takes you to an enterprise gig that still uses legacy SOA technologies. Yes, despite REST taking over 6 years ago (followed shortly by JSON replacing XML), there are still a decent number of SOAP-based WS deployments out there at big companies, the kind of big enterprises with big budgets at stake and/or political reasons to keep their legacy technology stacks running. REST/JSON may have won the internet thanks to its simplicity, but there’s something to be said for what the various organizations that came up with SOAP and the WS-* stack had in mind (aside from complexity and lucrative implementation contracts/tooling sales), namely robustness and predictability. Take Netflix and YouTube, two of the most frequently called APIs on the web: both are “RESTful-ish”, but each takes its own liberties with Fielding’s original REST thesis in unique ways, particularly around the Auth mechanisms and Usage Policies required to work with the data, DRM, Advertising, somewhat creative usage-restricting API metering, and/or pay-per-use schemes that come into play as soon as you want to do anything serious with the data. Both have also subjected their developer communities to significant amounts of non-backwards-compatible, disruptive versioning changes and feature deprecations.

The following endpoint is where you can make requests to initiate and get responses from the specific operations exposed by the SOAP Web Service:

In our case those are just a ServerTime check, a GameScore check, and a GamePlay operation to initiate a game (but lots of other operations could potentially be added, such as Leaderboard tracking by username or region, Multiplayer game listings to show available competitors, etc.).


Web Services Description Language (WSDL)
This is the “contract” part of SOAP that tells SOAP clients how to interact with our Web Service. In general, you can reach the WSDL of a SOAP-based Web Service by appending ?wsdl to its endpoint URL (or, with some frameworks, by requesting a separately hosted .wsdl file):

A useful tool for validating your WSDL is the W3C WSDL Validator service.

XML Schema (XSD)

I’m a big fan of keeping the same basic format between requests and responses, unlike many Web Services out there with vastly different request and response formats, which just creates unnecessary work writing (or annotating/generating) distinct XML parsers.

Request format:

  <game id="1234">
    <player id="1234#p1"/>
    <player id="1234#p2"/>
  </game>

Response format:

    <game id="1234">
        <player id="1234#p1" outcome="WON"/>
        <player id="1234#p2" outcome="LOST"/>
    </game>

Notice how only the data attached to each player element changes between request and response; the two formats are otherwise virtually identical.
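The payoff of that symmetry is that a single parser covers both directions. A quick sketch using Java's standard DOM APIs (the helper name is my own; the XML matches the formats above):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class GameXml {
    // One parser handles both request and response, since the formats match;
    // the "outcome" attribute is simply absent on requests.
    static String outcomeOf(String xml, String playerId) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList players = doc.getElementsByTagName("player");
        for (int i = 0; i < players.getLength(); i++) {
            Element p = (Element) players.item(i);
            if (playerId.equals(p.getAttribute("id"))) {
                return p.hasAttribute("outcome") ? p.getAttribute("outcome") : null;
            }
        }
        return null; // unknown player id
    }

    public static void main(String[] args) throws Exception {
        String response = "<game id=\"1234\">"
            + "<player id=\"1234#p1\" outcome=\"WON\"/>"
            + "<player id=\"1234#p2\" outcome=\"LOST\"/></game>";
        System.out.println(outcomeOf(response, "1234#p1"));
    }
}
```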

For more easily visualizing a WSDL’s available operations and the data formats within each operation’s response, check out the XML Grid – XSD/WSDL Viewer service.

Check out some of the classic SOAP API examples that are still around today, including Amazon’s Product Advertising API (WSDL), which I’ve previously used as a product data source in my XmasListz Facebook App, or the WebServiceX Currency Exchange API (WSDL), which was used in my post on how to work with SOAP in JavaScript & jQuery.

See the other parts here:

Raspberry Pi – Alexa Pi experiment

Posted by bcmoney on September 3, 2016 in IoT, Mobile, Web Services with No Comments


Turn your Raspberry Pi into a fully functioning Alexa (either by literally calling Amazon’s Alexa APIs, or by calling a variety of services in specialized areas as a stand-in).

HSVO project problems and NRC’s SAVOIR 2.0 SDK solution

Posted by bcmoney on December 13, 2015 in E-Government, E-Learning, IoT, Web Services with No Comments

In the last post I set the context for the HSVO project and how I wound up a member of it via my contract at NRC, setting the stage to help any potential readers understand all the partners and the objectives they had going in.

National Research Council of Canada, Institute for Information Technology (NRC-IIT) in Fredericton, NB (Photo credit: Wikipedia)

Now I’ll talk about the major problems faced on the project (problems also suffered by Healthcare IT & Medical Schools all over the world, and in any shared E-Learning initiative in general). Hopefully, based on the anecdotes shared here, any time you need to have so many partner systems work together you’ll quickly realize the importance of establishing an Electronic Data Interchange (EDI) amongst your partners, and specifically, a simple-to-parse Canonical Data Model (CDM). These consistent communication and data formats must form the foundation for any cross-organizational initiative like this involving high levels of integration between two or more devices, services, systems, apps and/or courses.


How we did it

NRC management was essentially the lead on the project as far as owning the largest piece being delivered (i.e. their mission assigned by the HSVO was to “provide the glue that would join all these separate devices being built by our research partners”). After about 6 months of initial research, the plan (formulated by my superiors before I was even hired on by NRC as an Application Developer to implement it) was to extend their existing SAVOIR 1.0 source code. It is common for government organizations like NRC to look to past work and projects/experiments from within their portfolio which can potentially be leveraged or extended beyond their initial expiry date and use cases. Nothing wrong with that, but it’s not always the best fit, especially when dealing with complex problems which might be better served by greenfielding a totally new solution based on Agile development, with direct interaction with the customer (in our case our research partners, but primarily NOSM, who the entire project was tailored to in the name of the promises of “E-Learning for Remote Medicine”).

That said, SAVOIR 1.0 was basically a glorified Application Launcher which acted as a sort of dock that could run any application installed on your Operating System with a single click. Think RocketDock, but without any of the slick animations, and implemented in Java so it could at least work cross-platform on Windows, Unix/Mac, or Linux. It was originally intended to simplify the lives of Architects as part of a separate project that finished up in 2008. Behind its code were also two basic Web Services:

  • User Management (called “SAVOIR_UserMgmt”): this allowed the system to keep track of who was using the SAVOIR dock to run their applications, and which applications they launched at which time. It also allowed system administrators to turn on an optional desktop-based login popup in Java, to force specific users to log in with their username/password before using the applications in the SAVOIR dock. Of course, if an application was installed outside of SAVOIR’s install pack, it could still be launched anonymously as usual at any time, by going directly to its “exe” file on the system or by clicking on a shortcut (such as in the Program Files menu).
  • Session Management (called “SAVOIR_SessionMgmt”): this allowed a unique identifier to be attached to each running instance of the SAVOIR dock, regardless of whether anyone was actually logged in, or whether the login feature was turned on by the system administrator for the network on which SAVOIR was running.

We delivered what could really be called SAVOIR 1.5 after approximately my first 6 months on the job, with some heavy modifications to the core Java Swing GUI, including the ability to drag & drop applications to the dock or slide/move them up and down in launch order, and, for the first time, the ability to launch particular websites/webapps in the browser. This browser-launch feature was probably the one we were most proud of, and it only became reasonably easy to do in Java 1.6+. Now any link could be placed on the Dock, either of the form:

  • file://path/application.exe (for locally installed apps)
  • http(s)://domain:port/path?param1=abc&paramN=etc#hash (for apps on the web)
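The dual launch path above can be sketched with the java.awt.Desktop API that arrived in Java 1.6 (the class and method names here are my own illustration, not SAVOIR's actual code):

```java
import java.awt.Desktop;
import java.io.File;
import java.net.URI;

public class DockLauncher {
    // Classify a dock link so the right Desktop call can be made.
    static String kindOf(String link) {
        if (link.startsWith("http://") || link.startsWith("https://")) return "BROWSE";
        if (link.startsWith("file://")) return "OPEN";
        return "UNSUPPORTED";
    }

    // Desktop.browse/open only became available in Java 1.6, which is what
    // made the browser-launch feature reasonably easy to add.
    static void launch(String link) throws Exception {
        Desktop desktop = Desktop.getDesktop();
        switch (kindOf(link)) {
            case "BROWSE": desktop.browse(new URI(link)); break;        // web app in default browser
            case "OPEN":   desktop.open(new File(new URI(link))); break; // locally installed app
            default: throw new IllegalArgumentException("Unsupported link: " + link);
        }
    }
}
```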

This gave the SAVOIR launcher a great deal of flexibility; we even developed a couple of canned demos of the new capability which we thought would delight our partners, popping up a simple search window (also in Java) and navigating straight to deep search results of the CMA Guidelines Infobase, the PubMed medical journal archive, Wolfram Alpha as a calculation tool, or Wikipedia as a general info resource, as per the user’s selection.

The reception was lukewarm at best though, which came as a bit of a surprise to me after all the hard work the team and I had put in, doing exactly what our superiors had requested. What I began to learn was that there was a growing disconnect between what our partners (who these partners were is covered in my last post) were expecting and what we were delivering. What our partners were telling us was that they did not simply want a customized version of SAVOIR 1.0; rather, they wanted a more unique application with some intelligence, one that would not just get out of the way so they could use their applications in a particular order, but could do a lot more heavy lifting and facilitate communication between their separate applications.

A brief word on EAI
Enterprise Application Integration (EAI) is the dilemma we face when trying to integrate large complex applications (but the concepts really apply to applications of any size). With two applications, the integration is easy: expose one or both as an API and send data, one or two ways, as needed:

It quickly becomes tough to manage as you add in more applications to support/integrate:

If you have any kind of external scalability requirement to support a lot of external services/devices, just forget about point-to-point integrations:
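The explosion being described is just the handshake problem: point-to-point EAI needs one integration per pair of systems, i.e. n(n-1)/2 links. A quick illustration:

```java
public class EaiMath {
    // Number of point-to-point integrations needed to fully connect n systems.
    static int links(int n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        for (int n : new int[]{2, 5, 10, 20}) {
            System.out.println(n + " systems -> " + links(n) + " point-to-point links");
        }
        // 2 -> 1, 5 -> 10, 10 -> 45, 20 -> 190.
        // A brokered hub (ESB) needs only n adapters, one per system.
    }
}
```

That quadratic growth versus the linear cost of one adapter per system is the entire sales pitch for an ESB.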

Of course, ESB vendors like MuleSoft, WSO2, TIBCO and even Apache ServiceMix evangelists promise their ESB products will instantly make your EAI efforts look and feel like this:

Not quite. Even with proper use of JMS for messaging, HTTP for addressing, and an ESB to broker transformations, what we learned the hard way is that despite all the hype of the ESB providers, you can’t expect to just plug the services into an ESB and walk away laughing, problem solved (it’s possible that certain members of our team who shan’t be named bought into this notion a little too much). You simply won’t be able to avoid the need to gain at least a rudimentary amount of working knowledge of each of the devices or APIs you want to integrate, before you can even try to do anything meaningful with the data inputs they send and the outputs they can receive. That also means you’ll need to gain some basic domain knowledge and/or work directly with a domain expert in the initial design stages (akin to Agile development). We lacked that domain expertise at NRC, and at first management was also reluctant to open direct lines of communication between our developers and the main clients (which, from the broader group, were NOSM and McGill). At first the reaction was frustration all around, then came lots of talk from the medical experts on our project about needing something they were calling “The Eye of Sauron”, followed by a rash decision to implement a Rule Engine (in our case Drools) which could live on the ESB to help determine the routing of messages based on business logic defined by our HSVO partners. We struggled with this: not just the implementation and integration of the Rule Engine as yet another endpoint on the ESB, but also whether that was helping us or hurting us in the long run.
Perhaps most significantly, what we struggled with was how to simplify the Domain-Specific Language (DSL) Drools required as input: either down to a simple enough format that it could be hand-coded in Excel (one of Drools’ supported input formats) and uploaded periodically before running E-Learning classroom simulations (aka “Scenarios”, as the team dubbed them), or to a consistent enough format that the rules could be automatically generated in the backend by an Authoring Tool (yet another piece of new technology that would have to be built to support this growing monstrosity). Next came discussions about the need for a “Rosetta Stone” behind the scenes of SAVOIR, which I codified as a “Term Dictionary” capable of mapping concepts between endpoints (i.e. Blood Pressure in one system may be the input “BP”, but in another system it might be “kPa”; in some it could be split into S-Systolic and D-Diastolic while in others it could be together as “S/D mmHg”). Finally, there was a need for a “Unit Converter” to convert different medical units of measurement (i.e. metric to imperial to bridge gaps from Canada/UK/Ireland to the United States, differences between Med School SOAP notes, etc). So much for an Application Launcher, huh? We went through several student placements, but none of them made a very significant dent in the Rule Engine (aka Eye of Sauron) piece, or the Message Translator/Mapper/Unit Converter (aka Rosetta Stone); so it was up to Roger Sanche at NOSM and myself to come up with a way to get this beast working. Working with the domain experts daily, we realized the main problem: yes, an interaction in one application/service/device must be able to fire off events in a predictable way to the Message Bus and get routed through to the next application/service/device, but we were doing this in support of multiple learners at multiple locations; this was the key.
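To make the “Term Dictionary” idea concrete, here is a minimal sketch (the class and endpoint names are hypothetical, and the real mapper also had to feed the Unit Converter mentioned above): each endpoint registers its local vocabulary against a shared canonical concept, so messages can be translated as they pass through the bus.

```java
import java.util.HashMap;
import java.util.Map;

public class TermDictionary {
    // Keyed by endpoint:localTerm, mapping to a shared canonical concept.
    private final Map<String, String> toCanonical = new HashMap<>();

    void register(String endpoint, String localTerm, String canonical) {
        toCanonical.put(endpoint + ":" + localTerm, canonical);
    }

    // Fall back to the local term when no mapping is known.
    String canonicalFor(String endpoint, String localTerm) {
        return toCanonical.getOrDefault(endpoint + ":" + localTerm, localTerm);
    }

    public static void main(String[] args) {
        TermDictionary dict = new TermDictionary();
        dict.register("patient-sim", "BP", "BLOOD_PRESSURE");
        dict.register("vitals-monitor", "S/D mmHg", "BLOOD_PRESSURE");
        // Two endpoints with different local terms resolve to one concept:
        System.out.println(dict.canonicalFor("patient-sim", "BP"));
        System.out.println(dict.canonicalFor("vitals-monitor", "S/D mmHg"));
    }
}
```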
It was one of the hardest things I’ve ever done, but after many late nights we pulled off a working integration that showed the recipe. We called it the SAVOIR SDK, mostly because we had spent our creative reasoning powers and were too exhausted to come up with anything else.

What we realized is that if we could apply the KISS principle (along with DRY and YAGNI) and simply start thinking of each of the complex devices we were trying to integrate as nothing more than a differing set of inputs/outputs, with some unique endpoint “addressing needs” based on their own scalability and protocols (i.e. an HTTP-based web app that can support thousands of users at once, or a UDP camera that only one person or classroom could use at a time), then we could start to see a different, more manageable picture. Finally we could realize something similar to the ESB simplification we were promised: to get SAVOIR 2.0 anywhere near our partners’ desired level of integration, we needed to create just three simple CDMs and thin wrappers (SDKs) for our partners to use to send and receive messages in a single consistent format. That simple realization finally got us the big breakthrough we needed, and a matching of Scenario (path to the scenario being worked through), sessionID (learner unique identifier), and instanceID (learner’s device’s OS/browser/tab identifier) as attributes of the services we were describing would be the last step in solving the message delivery problem. I’ll be the first to admit it’s such a simple concept. To this day I wish I’d bought the infamous EAI book sooner, but it was a chance landing on the book’s website that led us down the right path. Check out the schema we put together:

5 years later, a look back at the HSVO E-Learning project and NRC’s role

Posted by bcmoney on November 3, 2015 in E-Government, IoT, Web Services with No Comments

Today marks the 5-year anniversary of the conclusion of my 2-year on-site contract for NRC, which ran from November 3rd, 2008 to November 3rd, 2010. As I believe all the NDAs on the project we worked on have expired, I’ll be looking back at what that project was all about, what its challenges were, and (in a separate follow-on post) how we solved them.

National Research Council of Canada, Institute for Information Technology (NRC-IIT) in Fredericton, NB (Photo credit: Wikipedia)

With over 20 research facilities in nearly as many cities across the country, the National Research Council of Canada (NRC) is our nation’s largest government-sponsored, citizen-operated scientific & technological research organization.

How I got there

Stephen MacKay (then Research Group Leader at NRC-IIT Fredericton) was the first representative of NRC I would meet in person during my early career. It was my final summer of freedom, towards the end of my “startup run” after Grad School in Japan: the year between the conclusion of my Sony internship (2007-10-12) and my eventual hiring at NRC (2008-11-03), which I had spent as a self-starter entrepreneur living on the thinnest of shoestring budgets. I had been trying my best to set up a business around online video that I was convinced would change the way we monetize content and behavior online, more fairly in favor of content creators, and attempting to evangelize a model where site loyalty would be rewarded properly (as I still think it should be, although making a go of it, I’ll admit, requires not just initial funding; you must also guarantee a certain scale, which is the real trick, if that original revenue-share idea is to be sustainable). Think of that “not yet” failed business/technology as Blockchain, but where the rewards of a “BitCoin” don’t go out for solving a cryptographic hashing algorithm, but rather for duration of video viewing and, in particular, ecosystem ad/partner economic interaction. Apologies for the digression, but it’s important to set the tone, as I had just given it my all and built a product still in search of an audience (hah, even to this day).

In any case, I met Stephen MacKay at the 2008 CNSR Conference, which by chance took place in Halifax, NS, allowing me to attend on my limited “wannabe entrepreneur” budget. I had already spent the last of my “petty cash” booking a June flight to Toronto to speak at MobileMonday Toronto, an effort which I had hoped would expose me to some potential wealthy investors in the big city. Stephen was interested in my work and saw my potential; we exchanged cards and he said to look him up, as there might be some opportunities to work with NRC coming up in the near future. I continued my long 18-20 hour days working on “the dream” throughout that summer until my 25th birthday in October, by which time I had promised my parents and, most importantly, myself that I would have either secured funding or bitten the bullet and started looking for a full-time job.

So it was with great resignation that I finally, desperately reached out to Stephen to follow up on our meeting at CNSR and see what opportunities might be available. I had at first just attempted to gain an audience to pitch the business idea and early prototype to NRC as a candidate for their “new business incubation lab”. Stephen however urged me to look into some of the recent full and part-time contract postings and to work with NRC that way, as it was unlikely that they could take on any funding relationship with such an early-stage startup (to this day I don’t think they take enough risks on startups, preferring to help fairly established incumbents build out their products or research new ones, but that’s another story). I found the Application Development position profile to be quite interesting and relevant to the work I had been doing. They wanted someone who knew multimedia/video-conferencing (I had been in deep research on online & mobile video for over 2 years), who knew SOA and how to integrate web services (I had been integrating YouTube, GoogleSearch, Flickr, PayPal and several other key APIs into my startup), and who could program in Java (I had focused on Java for 4 years in University, plus 1-2 years of practical on-the-job work afterwards).

In fact, when I really think back, it was actually one of my would-be interviewers, Bruce Spencer, my eventual supervisor, who was the first person at NRC I corresponded with. I had briefly contacted him in 2006 while working on my thesis at the International University of Japan. That work focused on Semantic Web technologies and how they might benefit the growing MobileTV (now more commonly known as OTT) market. Bruce was a leading academic in Atlantic Canada, known for accomplishing intelligent content recommendations in the RACOFI project that led to the InDiscover music recommendation service, which was sold to Bell Labs in 2007. This was what led me to email him via my 4th-year “CS Advanced Topics – Semantic Web” teacher at Acadia University (Dr. Andre Trudel), who was my first go-to for Semantic Web questions as I worked through my thesis in Japan. So, quite a long-winded explanation, but that’s how I wound up at NRC for 2 years of my life. The rest will tell the story of what happened while I was there.

NRC’s Goal

According to their own website, their purpose is to be:

“the Government of Canada’s premier research and technology organization (RTO).”

They intend to accomplish this by:

“Working with clients and partners, to provide innovation support, strategic research, scientific and technical services to develop and deploy solutions to meet Canada’s current and future industrial and societal needs”

In many ways NRC acts as the public-facing counterpart of the military-focused government research department known as Defence Research and Development Canada (DRDC), which of course researches and develops technologies and solutions for our navy, army and air force in cooperation with established military-industrial provider companies (Aerospace, Transport, Armour, Ballistics/Weapons, Threat Detection, etc).

Within that operating model there are a few other similar departments (check out the full list of Canadian government departments), but based on my understanding, the ones which stand out differ as follows:

  • Innovation, Science and Economic Development Canada (ISED) – focuses on investments in domestic companies and research projects at a large scale, particularly those at late stages, to assist with implementation and real-world roll-outs and to realize immediate benefits to Canadians (i.e. jobs-based investments, or practical short-term ROI projects needing help to finalize work already underway)
  • International Development Research Centre (IDRC) – focuses on investing in projects beyond our borders with large-scale benefit potentials, particularly those within developing countries
  • Natural Sciences and Engineering Research Council of Canada (NSERC) – focuses on working with academics and students at post-secondary education institutions (Universities, Colleges, etc) and helping find companies to encourage joint investment in academic research projects with the potential to deliver innovations to Canadians.

See slides 22-29 for a nice summary of the NRC’s role in HSVO:

BC$ = Behavior, Content, Money

The goal of the BC$ project is to raise awareness and make changes with respect to the three pillars of information freedom - Behavior (pursuit of interests and passions), Content (sharing/exchanging ideas in various formats), Money (fairness and accessibility) - bringing to light the fact that:

1. We regularly hand over our browser histories, search histories and daily online activities to companies that want our money, or, to benefit from our use of their services with lucrative ad deals or sales of personal information.

2. We create and/or consume interesting content on their services, but we aren't adequately rewarded for our creative efforts or loyalty.

3. We pay money to be connected online (and possibly also over mobile), yet we lose both time and money by allowing companies to market to us with unsolicited advertisements, irrelevant product offers and unfairly structured service pricing plans.
