HTML5, CSS3, jQuery, JSON, Responsive Design...

Angular Release Candidate 5 - "major breaking changes"???

Michael Brown  August 31 2016 05:13:55 AM
Who’d be an AngularJS Developer?  Well, quite a lot of people if the stats are to be believed!  But oh Lordie, they really seem to be having a rough time of it with the upgrade to Angular 2.

I've just listened to a recent Adventures in Angular podcast, entitled Angular RC5 and Beyond.  I’m not much of a fan of Angular, as you can probably tell, but I like to keep up with it anyway.  If nothing else, it’s good to have reasons to show people/bosses that moving to Angular would be a truly terrible idea.  The Angular 2 rollout is giving me plenty of those!

Of Release Candidates

Anyway, RC5 refers to Angular Release Candidate 5.  "Aha", I thought; "they must be pretty close to release if they're on a fifth Release Candidate!"  However, I was disabused of this thought within the first few minutes, in which we’re told that Release Candidate 5 of Angular 2 contains “major breaking changes from release candidate 4”.

Say what?  Major breaking changes in a Release Candidate?  Thankfully, a couple of Google people are on hand to explain that in GoogleLand, things work a little differently.  A Release Candidate isn't a candidate to be released, as in the gold release; you know, how the term is applied by just about every other software company in the world.  No, it seems that for Google, Release Candidates are actually the first serious test releases geared towards public consumption.   Alphas and betas are mainly for internal testing, within Google only.  Angular 1 had seven Release Candidates, apparently.   Well, that's one approach, I suppose.

There's a telling moment about half way through the podcast.  As one of the Google guys is detailing change after change in RC5, the podcast host pauses proceedings to ask one of the other participants why he hasn’t spoken much yet.  “Oh I’m just sitting here all grumpy, thinking of all the code I have to change...across hundreds of projects”, comes the reply.  Quite so.  And I don't think he was referring to Angular 1 code, either.

NG Modules

One of the big new things in RC5, apparently, is NG Modules.  These are a way of breaking up your application into more manageable fragments or components.  (So like React has had from the get-go, then.)  It seems that Angular 1 had some kind of modules thingy in it too.  These were originally removed for Angular 2, but they’re back now.   Only they’re not quite the same as they were in Angular 1, but "it helps if you think of them that way”.


Almost as an afterthought, the Google guy drops another bombshell during the podcast's closing moments:  "did I mention that Angular 2 is now moving from SystemJS to Webpack?", he asks, laughing at what I took to be a joke, at first.  But no, he was serious: they really are moving to Webpack.  That may be all to the good, because Webpack rocks, IMHO.  But really, they want to be making a change like that in the fifth Release Candidate?  (Oh, I forgot; they're not really Release Candidates, are they!)


Goodbye, Chromebook, hello...Chromebook!

Michael Brown  August 21 2016 12:21:34 AM
It was my birthday last week.  One of my treats was a brand-new Toshiba Chromebook 2, bought to replace my aging Samsung model.

The latter has slowed down to the point of being barely useful.  To be honest, it was probably underspecced when I bought it three years ago, having an ARM processor and only two Gig of RAM.  But the truth is that Intel processors at the time simply could not match the battery life of the ARM processors:  the Samsung could give me over 8 hours of battery, which I'd never seen in a laptop before!

However, that ARM processor also came with some limitations, which I hadn’t appreciated when I bought it.  For one thing, some Chrome Apps didn't even run on the ARM version of the Chromebook; they would only run on Intel versions, which was something of a disappointment.  Maybe that’s less of a problem today.

Full Linux Install

Another problem was with the full Linux installation that I’d always intended to put on any Chromebook that I bought.  (With a Crouton-based install, you can switch instantaneously between ChromeOS and a full Linux install, which is a pretty neat trick!)  What I hadn’t realised though was that ARM versions of some Linux packages simply aren’t available.  Most of the biggies are present and correct, e.g. Chrome, Firefox, LibreOffice, Citrix Receiver, GIMP, as well as developer packages, such as Git, Node/npm and various web servers.  But the killer was that there’s no SublimeText, boo hoo!  SublimeText may be cross-platform, but it’s not Open Source, and its makers have shown zero interest in making an ARM-compatible version so far.  Sadly, I was never able to find a truly satisfactory replacement for that one.  I finally settled on the Chrome-based Caret editor, which does a half-decent job, but it’s no SublimeText.

The New Toshiba Chromebook 2

Intel had to raise its game to respond on the battery life front, and give the Devil its due, that’s exactly what Intel did.  Battery life is now on a par with the ARMs, but with the benefit of extra power and also that Linux package compatibility.  For example, here's Sublime Text running in an XFCE-based Linux installation, in a Chrome window on my new Toshiba Chromebook:

SublimeText on a Chromebook

Other benefits of the Toshiba over the Samsung:
  • More powerful (Intel processor and double the RAM), so performance is much faster
  • Much better screen: full HD 1920x1080 vs 1366x768 on the Samsung
  • Amazon delivers it to Australia!!  And likely to other countries too.  (Good luck finding any decent Chromebooks actually on sale in Australia!)

Local Storage

Local SSD storage is the same on both models: a disappointing 16Gig.  You'll often hear ChromeOS aficionados telling you that local storage doesn't matter "cos' on a Chromebook you do everything in the cloud".  IMHO, that's a bunch of crap.  Local storage is important on a Chromebook too, especially if you have that full Linux install eating into it!!

Now both of my models do come with an SD card slot, which allows me to boost that storage space significantly, and at no great cost.  But it's the Toshiba that shines here too, as you can see from the two photos below:

Samsung Chromebook with SD Card
Toshiba Chromebook 2 with SD Card

In both of these photos, the SD card is pushed in to its operational position, i.e., that's as far in as it will go.  See how far it sticks out on the Samsung?  What are the chances of my throwing that in my bag and then retrieving it a few hours later with the card still in one piece?  Not high, and that's why I never do it.  It sounds like a small thing, I know, but it's a royal pain in the rear to fish around for the SD card in my bag whenever I need to use it.  With the new Tosh, the SD card sits absolutely flush with the edge of the case, so I can leave it there all the time, giving me a permanent 48 Gig of storage!!

That Other OS

The cost of this new baby?  $300 US on Amazon, which translated to just over $400 Oz, including postage.

At which point I have little doubt that somebody is waiting to tell me "but for that kind of money you could have got a proper laptop that runs Windows apps".  But as you've probably worked out by now, I already know that.  And if I'd wanted to get a Windows laptop, then I would have got one.   The thing is that I don't like Windows much.  I don't like the way it works (or doesn't work), and most of the dev tools that I now live and breathe don't work natively on Windows.  (Although there is, apparently, a native Bash terminal coming in Windows 10 at some point, courtesy of Canonical.)

And what kind of Windows apps would a $400 Oz machine even be able to run?  Microsoft Office?  It might run; as in, it might actually start up.  Adobe Photoshop?  Ditto.  And how about all those Windows games?  Well, I suppose you might coax the new Doom into a decent frame rate, as long as you were prepared to compromise a little on the graphics!

Doom (circa 1993)

Domino server up time: eat this, Microsoft!

Michael Brown  August 19 2016 02:46:25 AM
There are some things that we just take for granted.

I have this Domino server in the cloud, on Amazon Web Services.  It just occurred to me that I hadn't updated the Amazon Linux that it's running on for a while now.  So I logged in to check it out and I was right: it has been a while.  517 days, in fact!

That's 1.42 years.

Or one year and five months, or thereabouts.
Domino server uptime
In fact, it would likely have been a lot longer than that, if I hadn't taken it down to upgrade it to Domino 9.0.1 in the first place.

You know what?  I think I'm just going to leave it as is, and see how long it goes for...

NodeJS posting data to Domino

Michael Brown  August 13 2016 02:09:17 AM
So recently, I was working on a project that was not Domino based, but rather used web tools and REST APIs.  What a breath of fresh air!  SublimeText, NodeJS, EsLint and all that other webbie-type goodness that looks great on your CV.

Moving back to working with our Domino-based CMS (Content Management System), I came down to Earth with a very rude bump.  You see, in that system, we store our web programming content in Notes Documents.  Our HTML, JavaScript and CSS are either typed/pasted directly into Notes Rich Text fields, or stored as attachments within those same Notes Rich Text fields.

Not to criticise the CMS system itself, which works rather well, as it happens.  It’s just the editing facilities, or lack thereof.  Typing text directly into a Rich Text field, you have no syntax checking, no linting, no colour coding: no visual feedback of any kind, in fact.  Not even of the limited kind that you get with the JavaScript Editor in the Notes Designer.

So I was faced with a choice:
  1. Go back to typing stuff directly into Notes fields, and finding my coding errors the hard way, i.e. when it fails in the browser.  Not fun.
  2. Use SublimeText/EsLint etc to get the code right on my hard drive, then copy and paste the results to the Notes field so I could test in the browser.  And kid myself that the last step isn’t a complete and utter productivity killer.

Obviously, neither option was particularly appealing.  Which got me to thinking… now, wouldn’t it be great if I could still use all those achingly trendy, client-side webbie-type tools, but have my code automatically synched up to my Notes Rich Text fields on the Domino server?  You know, in real time?  Then I’d have the best of both worlds.  But surely, not possible…

Actually, it is very possible (otherwise this would be a very short post!).  And I have built a system that does exactly that.  It’s based on NodeJS and npm on the client side, and a good old Notes Java agent on the server side.

Basic Approach

So here's the basic division of work between the NodeJS client and the Domino server:

Client/server Sequence diagram

(Sequence diagram created with PlantUML.)

The NodeJS client gathers up the user's source file, transpiling it if necessary, and posts it to a Domino agent as part of an encoded JSON object.  (Yes, I know JSON is actually a string, but I'll call it an object here.)  The agent works out where the target document is, based on the data passed in the JSON object.  It then posts the user's decoded data to a Rich Text field on that document (or attaches it), before sending a success or error message back to the client.  The agent runs as the web user, so a username and Domino HTTP password are passed from client to server (not shown in the diagram above).

The NodeJS client can even be set to run in the background and watch a file on your hard drive - multiple files, in fact - looking for changes to those files.  If it detects a change, the Node system can post the changes to the Domino server immediately.  You can refresh your browser a couple of seconds later, and your changes are there, on the Domino server.
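
As a taster, here's roughly how the file-watching part can work, using Node's built-in fs module.  This is only a sketch: the file names are made up, and postToDomino() is a stand-in for the Request-based call shown further down.

var fs = require("fs");

// Stand-in for the Request-based call to the Domino agent (shown later in this post)
function postToDomino(filePath) {
    console.log("Posting " + filePath + " to the Domino agent...");
}

var filesToWatch = ["app.js", "styles.css"];    // made-up file names

filesToWatch.forEach(function(filePath) {, current, previous) {
        // mtime changes whenever the file is saved to disk
        if (current.mtime.getTime() !== previous.mtime.getTime()) {
            postToDomino(filePath);
        }
    });
});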

This isn't theory.  I have a working system now that does exactly what I describe above.  I will post the source code to Github if anybody's interested, but in the meantime here are a few tasters of how things are done.

Posting Data from the NodeJS Client: Request Package

The key to posting data from client to server is the npm Request package.  This is kind of an equivalent of jQuery's Ajax call, only in a NodeJS terminal instead of in a browser.  The code below shows how you might call request to post data to a Domino agent:

const request = require("request");

var postConfig = {
    url: "",    // URL of your Domino agent goes here
    method: "POST",
    rejectUnauthorized: false,
    json: true,
    "auth": {
        "user": username,
        "pass": password
    },
    headers: {
        "content-type": "application/text"
    },
    body: encodeURIComponent(JSON.stringify(configObj.postData))
};

request(postConfig, function(err, httpResponse, body) {
    // Handle response from the server
    if (err) {
        return console.error("Post to Domino failed:", err);
    }
    console.log("Server replied with status " + httpResponse.statusCode + ":", body);
});

The actual data that you would post to that agent would look something like this:

{
    "targetdbpath": "mike/dummycms.nsf",
    "targetview": "cmsresources",
    "targetfieldname": "contentfield",
    "updatedbyfieldname": "lastupdatedby",
    "attachment": false,
    "devmode": true,
    "data": "my URLEncoded data goes here"
}

Server Side Java Agent

So here's how the server-side Java agent interprets the JSON data that's been posted to it:

import lotus.domino.*;
import org.json.*;
import;

public class JavaAgent extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();
            // The posted data arrives on the agent's context document
            Document currentDocument = agentContext.getDocumentContext();

            PrintWriter pw = getAgentOutput();
            pw.println("Content-Type: text/text"); // content type of the response

            // PostedContentDecoder (my own class) reassembles the request_content
            // field(s) and URL-decodes them; more on this below
            PostedContentDecoder contentDecoder = new PostedContentDecoder(currentDocument);
            String decodedString = contentDecoder.getDecodedRequestContent();

            // ... JSON parsing and document update code follows ...
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

It's a standard Domino Java agent.  I grab the context document from the agent context.

PostedContentDecoder is my own Java class, which grabs the actual content data from the request_content field of that document.  This is actually a bit more complicated than it sounds, because of the way Domino handles data greater than 64Kb in size that's posted to it.  If it's less than 64Kb, then Domino presents it as a single field called "request_content".  If it's more than 64Kb, Domino presents a series of request_content fields, called "request_content_001", "request_content_002" and so on, up to however many fields are needed to hold the data.  The PostedContentDecoder class takes care of these differences.  The class also takes care of URL decoding the data that was encoded by the client-side JavaScript call, encodeURIComponent() (see above), via the line below:

requestContentDecoded = URLDecoder.decode(requestContent, "UTF-8");  // requestContent holds the raw posted data

The final piece of the puzzle, in terms of interpreting the posted data on the server side, is to convert the JSON string into an actual Java object.  There's no native way of doing this in Java, but the huge advantage of Java over LotusScript server agents - and I did try LotusScript first - is that Java can easily import any number of 3rd-party .jar files to do the donkey work for it.  There are a number of such .jars that will convert JSON strings to Java objects, and vice versa.  Douglas Crockford's JSON reference page lists over 20 JSON packages for Java.

I went with Crockford's own org.json library, which you can download from the Maven Repository.  This gives you a new class, called JSONObject, and this is what you should use.  Don't try to define your own Java data class and then try to map that to the JSON data somehow.  I tried that at first, and ran into some weird Domino Java errors.

Here's some code that turns the JSON into a JSONObject.  It then prints the various object members to the Domino server console.

JSONObject obj = new JSONObject(decodedString);

boolean devMode = false;
if (obj.has("devmode")) {
    devMode = obj.getBoolean("devmode");
    System.out.println("devMode (variable) = " + devMode);
}

if (devMode) {
    System.out.println("targetdbpath=" + obj.getString("targetdbpath"));
    System.out.println("targetview=" + obj.getString("targetview"));
    System.out.println("targetdockey=" + obj.getString("targetdockey"));
    System.out.println("targetfieldname=" + obj.getString("targetfieldname"));
    System.out.println("updatedbyfieldname=" + obj.getString("updatedbyfieldname"));
    System.out.println("effectiveUserName=" + agentContext.getEffectiveUserName());
}

Now that I have the data, and know where it has to go, it's pretty much standard Notes agent stuff to paste it there.

array.prototype.pureSplice npm package

Michael Brown  June 25 2016 11:22:42 PM
I've just released my seventh npm package, array.prototype.pureSplice().  FYI, my seven packages now have over 2,000 downloads per month, combined.  Okay, that may not be in the same league as ReactJS (over 160,000 downloads per month) or AngularJS (over half a million downloads per month), but hey, it's a start!!!

So, pureSplice() is an array method to return a new array with a specified number of elements removed. Unlike JavaScript's native array.splice() method, array.pureSplice does not modify the source array. This is important if immutability of data is important to you and/or you are using libraries such as Redux.
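
To illustrate the idea, here's a minimal sketch of how such a method can work.  The parameter names are mine, and this isn't necessarily the package's exact implementation; the point is that the copy gets spliced, not the original.

// A minimal sketch, not necessarily the package's exact code
Array.prototype.pureSplice = function(start, deleteCount) {
    var copy = this.slice();            // shallow copy of the source array
    copy.splice(start, deleteCount);    // remove the elements from the copy
    return copy;                        // the source array is left untouched
};

var letters = ["a", "b", "c", "d"];
var fewerLetters = letters.pureSplice(1, 2);    // ["a", "d"]
console.log(letters.length);                    // still 4: the original is unmodified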

Full instructions for use are on the array.prototype.pureSplice page on  Also, a new feature on the npmjs site: you can now check how pureSplice() works in your browser, via Tonicdev.

How to show current Git branch name in your terminal

Michael Brown  April 23 2016 09:36:55 PM
If you're using Git for your source control (and really, if not, why not?), then seeing what branch you're currently working on is a rather important feature!  Sadly, it's not one that's built into the terminals on either OS X or Linux.

Fortunately, you can implement this by editing your bash profile config file.  This is how the terminal looks on my Mac after I did this.  You can see that the Git branch name, "master" in this case, is shown in green:
OS X terminal showing the Git branch name

Here's the code that you need to add to your bash profile config file:

parse_git_branch() {
    git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
}
export PS1="\u@\h \W\[\033[32m\]\$(parse_git_branch)\[\033[00m\] $ "

You'll find the files to edit in your home folder, on both OS X and Linux.

On OSX, it's the file ~/.bash_profile.

On Linux, it's ~/.profile.  (There may be differences here between distributions; that worked for me on Linux Mint.)

The files will be hidden by default.  Tip: on OS X, you can hit Command->Shift->full stop (that's a period to some of you) to show hidden files in an OS X File Open dialog.

Kendo UI with ES6 modules, via Webpack, SystemJS & Babel

Michael Brown  March 27 2016 07:14:36 PM
From my Github account: I've set up a boilerplate repository that shows how to use the KendoUI Pro widget system with the Webpack module loader and ES6 modules.

I discovered that Kendo is AMD compliant.  So my aim was to try and reduce the page load times by avoiding a full load of Kendo, which is 2.5 meg (900kb gzipped) for the file kendo.all.min.js!  By using Webpack to load up only the Kendo modules that I needed for my demo, I was able to reduce that to a single file of 900kb minified (300kb gzipped).
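
To give you a feel for what that looks like, the entry file ends up being something like the sketch below.  The "./kendo/js/..." paths are illustrative only; they depend on where the Kendo UI Pro source files actually live in your project.

// entry.js - a rough sketch; adjust the paths to wherever your Kendo source files live
import "./kendo/js/kendo.grid";          // Webpack follows the widget's AMD dependency chain...
import "./kendo/js/kendo.datepicker";    // ...and bundles only the modules these widgets need

// the widgets are then used exactly as before, e.g. $("#grid").kendoGrid({ ... });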

All this requires NodeJS/npm and a (shock, horror) build step!  Actually, it's not too bad when you get used to it, but I appreciate that for some web developers, requiring a build step for JavaScript is a step too far.  So, I've also included a SystemJS version in the repository.  SystemJS loads all the individual Kendo JS files on-the-fly for you, i.e. no build step.  It's slower, because it's loading the files individually, instead of as one combined file, and there can be quite a few files pulled in by Kendo.  For that reason, I'm not sure if SystemJS is viable for larger production projects, but see for yourself.  It should at least give you a feel for what module loaders are and what they can do for you (there's a small sketch after this list), i.e.:
  1. Break your code into manageable chunks, but without...
  • ... having to manage a boatload of script tags on your HTML pages.  This includes avoiding any mutual-dependency, infinite-loop type problems.
  • Banish global variables and the need to have all your functions in global scope.
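
Here's what that looks like in miniature.  The file names are made up, but the export/import syntax is the real ES6 thing:

// maths.js: exports only what it wants to share; nothing leaks into global scope
export function add(a, b) {
    return a + b;
}

// app.js: pulls in only what it needs, and the module loader resolves the dependency
import { add } from "./maths.js";
console.log(add(2, 3));    // 5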

Honestly, once you've coded using JavaScript modules, and see their benefits, then there's really no going back.  You can now have multiple developers working on an app, without running into save conflicts because they're all editing the same, massive JavaScript file!!  No more loading huge, third party function libraries when you only need one or two functions out of them anyway.

This is the future.  Modules are part of ES6, which is the next, official version of JavaScript.  It's not some third party, jury-rigged system that may or may not become a de facto "standard" one day, if only enough developers come on board.  This is going to happen.  Well, it will as soon as the browser makers get off their backsides!   Many of them have native support for numerous other ES6 features, but so far, only Chrome has tentative native support for modules (the import statement), and even that's stuck behind a compatibility flag.  Until then, we have to rely on loaders like Webpack, SystemJS and Browserify.

Array.prototype.move - another new npm package

Michael Brown  March 9 2016 02:18:25 AM
That's my fourth npm package, for anybody counting!

It adds an Array method (to the Array's prototype, yikes!) that allows you to move an element of that array from one index to another.


The syntax is:

myArray.move(moveFromPosition, moveToPosition)

  • `myArray` is your array.  It can be an array of objects, as well as an array of primitives (strings, numbers etc).
  • `moveFromPosition` is the index of the array element that you want to move, where zero is the first element.
  • `moveToPosition` is the index of the array where you want the element that you're moving to end up.

Example 1:

var simpleArray = ["Han Solo", "Luke Skywalker", "C3P0", "R2D2"];
simpleArray.move(3, 0);

will move R2 to the start of the array.

The method will also accept negative numbers for either of the "move" variables.  In that case, -1 is the last element of the array, -2 is the next to last element, and so on.

Example 2:

var simpleArray = ["Han Solo", "Luke Skywalker", "C3P0", "R2D2"];
simpleArray.move(0, -1);

will move Han to the end of the array.
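
For the curious, a method along these lines can be built on top of the native splice().  This is just a sketch of the general idea, not the package's actual code (which, as noted below, is Reid's, unchanged):

// A rough sketch only: negative indices count back from the end,
// then splice() does the actual moving
Array.prototype.move = function(moveFromPosition, moveToPosition) {
    if (moveFromPosition < 0) { moveFromPosition += this.length; }
    if (moveToPosition < 0) { moveToPosition += this.length; }
    var element = this.splice(moveFromPosition, 1)[0];    // remove the element...
    this.splice(moveToPosition, 0, element);              // ...and re-insert it
    return this;
};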


Installation and import instructions are on the package's npmjs page.


Taken from Reid's accepted answer to the most popular stackoverflow post on this topic.  All credit goes to Reid.  I've not changed his code at all.  I merely packaged it up and put it into npm.

I was inspired to do so by the author of a very similar npm package, in a kind of backhanded way!  When I pointed out, via his Github repository, that I was having a problem with his package, he behaved like an ignorant arsehole.  He insisted that it couldn't possibly be his package, and how dare I even suggest such a thing!  He then locked the comments on my bug report, so he didn't have to trifle with trash like me any further.  The irony is, I now think that he was probably right about his package: I don't think the fault was there, after all.  But I couldn't tell him that, of course, because he'd locked the comments.  Well, screw him.  I've got my own package now!

Web server for Chromebook

Michael Brown  February 29 2016 03:39:51 AM
I tend to use my Chromebook for web development when I'm on the road, or on the ferry in my case!  I have a $2000 Macbook Pro, which I love, but I'm usually too scared to take it too many places.  My $330 Chromebook on the other hand...

Of course, to do much build work with npm, Git, Gulp, Webpack, Babel etc, I have to switch to my Linux crouton, which is a bit of a cheat, but it works!  I also had to do that just to fire up a local web server, but not any more.  It seems that there is now a web server for Chrome.  It's called, wait for it, Web Server for Chrome, and you can install it from the Chrome Store.

Once installed, just launch it from the Chrome Launcher and tell it which folder your HTML is in.  It will give you a link for your browser.  Really, it couldn't be easier.  And being a Chrome app, it runs on all platforms on which Chrome itself runs.  (The screenshot below is from my Macbook Pro.)

Now if I could only get a Chrome version of Git...
Web Server for Chrome

Steam Link and Controller shipping to Australia now!

Michael Brown  December 27 2015 07:15:10 PM
... and possibly in other countries too.

I’ve been after one of Valve’s new Steam Link devices and one of their Steam Controllers for ages now.  It was initially released in the USA only and whenever I open Steam, it always says “Coming soon”.  The official release date for Australia is supposed to be Feb 2016.

So I was quite surprised when I came across a forum comment (can’t remember which forum now) where an Aussie user said he'd simply ordered a Steam Link from Amazon.  Amazon hardly ever ships electrical or techie stuff to Australia, so I assumed that he must have used a third party mailing service, such as ShipItTo or PriceUSA, to get around Amazon’s geographical restrictions.

But he didn’t.  I know because I just ordered one myself!  I have a shipping confirmation and an arrival date of Jan 11 for one Steam Link and one Steam Controller.  As part of the Amazon order I was also emailed activation codes for 1 copy of Counter-Strike: Global Offensive and 1 copy of Left 4 Dead 2, and yes, they worked.  Well, the Counter-Strike one did.  I already had Left 4 Dead 2.  I don’t know if these codes are transferable to another Steam subscriber.

I’ll post a review of both items when I’ve tested them out.