Noise Isolation vs. Cancellation vs. Reduction – protect your ears and work in peace & quiet

Developers’ day-to-day jobs can be stressful. With tight deadlines, high expectations, and often unwieldy applications and codebases to manage, the ability to concentrate is essential to succeeding in this role. Of course, this applies to non-technical roles as well, but you particularly want to maximize the productivity of the people building your company’s mission-critical applications, products, and services. That’s assuming that whatever industry you’re in has already been affected in some way by the pervasiveness of the internet, always-on mobile devices, and constantly evolving technology in all its various forms; consider, for instance, the frequent headlines about how modern cars now contain more lines of code than airplanes (not just passenger and cargo planes, but even military fighter jets).
Noise Isolation
Noise Isolation means that the headphones, earphones, or similar device are ergonomically designed to physically block as much noise as possible from entering the ear canal, through careful selection of construction materials and muffling/insulation layers. It does not do much to block low-frequency surrounding noise, and it does not make use of any power or electronics to do “Active” blocking of noise. This “Passive Noise Minimization” approach typically offers little if any protection for your ears against loud noises, but it can definitely help your podcasts, audiobooks, music, radio, E-Learning courses, etc. take over from the ambient and background noises in your office or around your home.
Ideal usage: task concentration, or, distraction-free multi-tasking
Noise Cancellation
Noise Cancellation, also known as Active Noise Control (ANC) or active noise reduction, is a method for reducing unwanted sound by adding a second sound specifically designed to cancel the first.
Ideal usage: …
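The core trick can be sketched in a few lines: sample the incoming noise, generate an inverted (“anti-phase”) copy, and play both together so they destructively interfere. Here is a minimal JavaScript sketch of the idea; the sample values are made-up numbers, not a real audio pipeline:

```javascript
// Simulated noise waveform: a handful of audio samples (amplitudes).
const noise = [0.8, -0.3, 0.5, -0.9, 0.2];

// ANC generates the "anti-noise": the same waveform with its
// phase inverted (every sample negated).
const antiNoise = noise.map((sample) => -sample);

// What reaches the ear is the sum of both waves. With a perfect
// inversion they destructively interfere and cancel to silence.
const atEar = noise.map((sample, i) => sample + antiNoise[i]);

console.log(atEar); // every residual sample is 0
```

Real ANC hardware has to do this continuously with microsecond latency and imperfect microphones, which is why cancellation works best on steady low-frequency drones (engines, fans) rather than sudden sounds.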
Noise Reduction
Noise Reduction typically refers to hearing-protection devices that carry a Noise Reduction Rating (NRR), which indicates how many decibels of attenuation they provide to protect your ears from sudden loud noises.
Ideal usage: …
Conclusion
Check out the video below for a nice overview of the difference certain Noise Cancelling technologies make to the sound you hear with and without music:
Taking over a large-scale Adobe Experience Manager (AEM) project


This week, we traveled to Toronto in order to negotiate with our primary Content Management System (CMS) vendor, Razorfish. We all know what the Agile Manifesto says about “Customer collaboration over contract negotiation”, so don’t get me wrong, I’ve never been a huge proponent of contract negotiations, but even I’ll admit that sometimes they do seem like a bit of a “necessary evil”. Particularly when certain vendors aren’t necessarily willing to “play ball” or “come to the table” to collaborate and work to find mutually beneficial solutions (won’t name names right now, but HINT: it’s definitely not Razorfish!).
ALC initially started working with Razorfish back in 2015 (when they were still called Nurun, before the larger rival digital interactive agency Razorfish acquired them) on the “Corporate CMS re-design project”, which aimed to upgrade and migrate the entire “corporate.playsphere.ca” sub-domain’s content over to a modern enterprise CMS, namely Adobe Experience Manager (AEM), which ALC has chosen as its corporate CMS. Of course, in November 2016, the even larger rival firm Sapient negotiated an agreement with Razorfish to merge, creating a somewhat “super digital interactive agency”. (UPDATE 2020-02-18: and at a borderline “disturbing” scale, Publicis then acquired the Sapient-Razorfish (SRF for short) conglomerate, creating a Frankenstein’s monster of a digital interactive agency with few remaining significant rivals for the big contracts; this entity is now called “Publicis-Sapient-Razorfish”.)
Since then, quite a bit has changed. There were a number of “pillars” in the Darwin programme (a collection of projects), which made it the single largest undertaking in ALC’s history according to most in-the-know people I’ve spoken to, coming in at a whopping ~$35 million total estimated cost. Whereas AEM was initially selected to replace only the “Corporate” part of PlaySphere, it has since been selected to replace the entire PlaySphere system (particularly the front-end portions) and provide a number of vendor integrations, in part because the “Corporate takeover” portion was the only piece of the Darwin programme that was actually on track.
The Darwin portfolio of projects rather ambitiously aims to simultaneously rejuvenate and completely replace both our legacy PlaySphere system and a very large number of its vendor integrations, alongside our Retail systems, part of our Call Center technologies, and a number of other supporting systems that are expected to get smaller updates. Of particular contention is the myriad of 3rd-party APIs we need to support and integrate, each provided by vendor partners and needing to be consolidated in a number of ways. The aim is to reduce the total number of vendor touchpoints (vendors we need to contract with), Software-as-a-Service (SaaS) providers, and other similar agencies/consultancies we need to work with and/or get support from.
As I shared with my team, I’m happy to report it was a very productive, jam-packed 2-day trip. Most importantly, they’ve tentatively agreed to move their code repository for the Darwin project’s new AEM-based ALC.ca replacement for our prior PlaySphere system from their internal Stash (Bitbucket Server) instance running within their network (where our entire codebase currently lives) over to our own Bitbucket Cloud instance. They will also move away from using their JIRA Server instance as their “source of truth” for all tickets and issues, to instead using our JIRA Cloud instance, which we have been using for over a year now. We’re aiming at mid-to-late May at the latest to get these cut-overs done; it will be a lot of work to first test out “mirroring the repo” between instances and then exporting and importing all the JIRA issues. All agreed, though, that to pull this off at least one additional trip would be recommended, bringing more of the team up to Toronto next time in order to “observe their day-to-day Agile methodologies” and see which pieces of that we may want to bring into our new team as it grows. Agile will be a new thing at ALC in general, so I want to be really certain we “get it right” (and yes, I realize there is no such perfect combination right out of the gates; rather, we need to just start somewhere and regularly evolve/tweak it as we go). Still, towards having some kind of plan together, I always love referencing this meme:

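For the repo cut-over itself, the usual approach is a bare `git clone --mirror` of the source followed by a `git push --mirror` to the new remote, which carries over every branch and tag. As a rough sketch, here is a small Node.js helper that just assembles those commands; both repository URLs below are hypothetical placeholders, not our real repos:

```javascript
// Build the shell commands for mirroring a Git repo from one
// Bitbucket instance to another. Both URLs are placeholders.
function mirrorCommands(sourceUrl, destUrl) {
  return [
    // 1. Bare clone including all refs (branches, tags, notes).
    `git clone --mirror ${sourceUrl} repo-mirror.git`,
    // 2. Push every ref to the new remote, pruning stale ones.
    `git -C repo-mirror.git push --mirror ${destUrl}`,
  ];
}

const cmds = mirrorCommands(
  "ssh://git@stash.vendor.example/darwin/aem-site.git", // their Stash
  "git@bitbucket.org:alc/aem-site.git"                  // our Bitbucket Cloud
);
cmds.forEach((c) => console.log(c));
```

Running the first command, then the second, periodically during the transition window also works as a crude one-way sync until the final cut-over date.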
After what I’ve seen so far, and my past experience at other companies, we will likely end up adopting some kind of hybrid of Scrum and Kanban. I am hearing that “Scrumban” term more and more, so we’ll see how that goes. Scrum seems like a great fit for project implementations, while Kanban seems like the no-brainer choice for all our enhancements, bug fixes, and “keep-the-lights-on” (KTLO) types of development activities.
With our current plan, it’s looking like we’ll finally launch one full year behind schedule, sometime in September 2017 (that’s without true Agile so far; more like an “incremental Waterfall” approach, mostly due to vendor limitations and nothing spelled out in our contracts about how we want and/or expect our partners to work). I’ve been told the September date is not negotiable and can’t slip no matter what, but I also keep hearing the famous line “September is a long month” (an inside joke reflecting the FUD). I will do what I can to prevent such a large set of initiatives and projects from ever needing to be cobbled together again; instead, hopefully we can just do a great job maintaining this new AEM platform, so that all we need are little feature-delivery sprints and minor projects.
Leading up to September 2017, we will be collaborating heavily with the SapientRazorfish team, bolstering our current team of 5 with their 15+ active Developers (although tapering down to eventually having only a few of them remain in a support role for at least a year during the “warranty period”, as we call it, post go-live). The plan from our launch date onwards is for our team to slowly but surely ramp up to full capacity, to be able to support the web application and Mobile App webview integrations that have been done within AEM entirely by ourselves, and to continue to build on that with various other business projects, enhancements, internally driven innovations, etc. It will be an interesting challenge, and we’ll see how it goes.
UPDATE (2017-09-17): We finally launched the darn thing, and it wasn’t even the last possible day of the month as many expected! What a whirlwind the past nearly two and a half years have been (it feels like I’ve done about 3-4 years’ worth of work myself, and I’m certain that if you add up all the person-hours on this project, including OT and the “extra efforts” that went into getting this beast across the finish line, you’d come to something like 100+ years of life force spent). But I can finally show off the new look of the webapp:

Example authoring, to choose which Components are allowed within a given Static Template in AEM:
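For reference, with AEM’s classic static templates the list of allowed components is typically stored as a design configuration on the paragraph system (“parsys”) node, usually set through Design Mode in the authoring UI. A rough sketch of what the resulting design node can look like in content XML follows; the site name and component paths are hypothetical placeholders:

```xml
<!-- /etc/designs/mysite/jcr:content/contentpage/par/.content.xml -->
<!-- "mysite" and the component paths below are placeholder examples. -->
<par
    jcr:primaryType="nt:unstructured"
    components="[mysite/components/content/text,mysite/components/content/image,group:MySite Teasers]"/>
```

The `components` multi-value property accepts either individual component resource types or `group:` entries that allow an entire component group at once.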

Feeling lucky? Give it a try yourself now, at https://www.alc.ca
Speech Recognition – Nuance’s Dragon NaturallySpeaking 14
Nuance’s marketing and promotional material often calls it Voice Recognition, which doesn’t give the average user much clarity about what exactly the product’s capabilities are; in fact, until recently, Nuance’s suite of Audio Recognition software has been strictly focused on Speech Recognition.
As such, they have emerged as one of the industry leaders in this field, now on version 14 of their flagship product Dragon NaturallySpeaking.
Nuance/Dragon Company Histories
They certainly have history on their side, the first academic iteration being created in 1975 by Dr. James Baker at Carnegie Mellon University in a partnership with the IBM Thomas J. Watson Research Center. The prototype reached a “beta version” by 1982, when Dr. Baker left the university to start a company with his wife, focusing on commercializing the DRAGON system they had developed together. Due to financial struggles and a desire to improve the underlying recognition engine before entering the consumer market, however, the first 1.0 production-grade version was not released until June of 1997. The company went through financial turmoil and several mergers & acquisitions, but the common theme was that investors and consumers were truly interested in the products and services that Dragon would make possible. It would finally find its stride when an Optical Character Recognition (OCR) and document scanning company with ties to famed futurist Ray Kurzweil, called ScanSoft, acquired the Dragon assets and then merged them with another fledgling Speech Recognition company named Nuance Communications, which itself also had roots in academia through SRI’s STAR laboratory.
Mainstream Breakthroughs
The following products and partnerships were the key milestones:
- Dragon NaturallySpeaking 9 achieves above 90% recognition accuracy with training
- Dragon NaturallySpeaking 11 achieves above 90% recognition accuracy without training
- Dragon Medical
- Dragon Legal
- Dragon Dictate iOS app
- LG Smart TV 2012
- Siri project/company partnership (speech recognition powered by Nuance/Dragon)
- Siri sale to Apple for iOS integration
- Apple Mac OSX
They’ve also recently announced that, after many years of requests, they will be opening up their software’s capabilities as a broader platform by publishing APIs and interconnectable Web Services that other developers can use to build Speech Recognition into their own applications.
Nuance’s Dragon NaturallySpeaking – Voice Command Cheat Sheet
Audio Recognition overview (TTS, STT, Voice vs. Speech)
It is still in many ways the early days of innovation in the several sub-categories of Audio Recognition.
Microphones
Thanks to technological advancements, microphones have become smaller and smaller (perhaps to some extent this has been driven by the post-war and Cold War eras, in which espionage became so critical that governments worldwide competed to produce better and better audio recording technologies). Either way, a good Microphone is the key technology for ensuring high-quality, accurate results. While software solutions are increasingly capable of making do with embedded microphones (such as the commodity-grade ones that tend to come installed in Mobile Phones, Laptops, or other devices), a good external Microphone is essential for high accuracy. Examples of external microphones include wearable headsets or standalone mics connected via Bluetooth, USB cable, or Analog/Digital cords. The technology has now improved to the point that the average person can produce audio on par with that of major production studios, all within a reasonable budget.
Speech Recognition
What was said?
Bell Labs pioneered advancements in this area with the creation of the first Text-To-Speech (TTS) technologies, and later Speech-To-Text (STT) during part of their ____ projects in the 19??’s.
Voice Recognition
Who said it?
Security companies have started adding Voice Recognition capabilities to their systems since _____ .
Agents
Something the Semantic Web promised but did not initially deliver on was the emergence of Intelligent Agents (i.e. code-powered Personal Assistants). Today, we finally see some of this promise being realized through things like Siri by Apple, Cortana by Microsoft, Google Now, and Alexa/Echo by Amazon.
Web APIs
Microsoft has offered a Windows-specific, OS-level Speech API (SAPI) since Windows XP, and developers have been integrating Voice/Speech into their Windows apps for a while now; soon it will also offer web-based APIs, per the announcement of “Project Oxford”. Project Oxford is aimed at building a set of intelligent services to support information retrieval, which can optionally tie into the Bing Search APIs (which support queries by content type, including Web, News, Images, and Video).
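To give a feel for what calling such a web-based speech service looks like from a developer’s perspective, here is a deliberately generic JavaScript sketch that just assembles an HTTP request descriptor. The endpoint URL, header names, and body fields are hypothetical placeholders for illustration only, not Project Oxford’s (or any vendor’s) actual contract:

```javascript
// Assemble a request descriptor for a hypothetical cloud
// speech-to-text service. Every name below is a placeholder.
function buildSpeechRequest(audioBase64, language) {
  return {
    method: "POST",
    url: "https://speech.example.com/v1/recognize", // placeholder endpoint
    headers: {
      "Content-Type": "application/json",
      "Api-Key": "YOUR-KEY-HERE", // placeholder auth header
    },
    body: JSON.stringify({ audio: audioBase64, lang: language }),
  };
}

const req = buildSpeechRequest("UklGRg==", "en-US");
console.log(req.method, req.url);
```

In a real integration you would hand a descriptor like this to your HTTP client of choice and parse the recognized text out of the JSON response.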
JS Podcatcher (a Podcast client written in JavaScript)

English: The “Made for iPod, iPhone, iPad” emblem appearing on accessories approved by Apple Inc. for iPod, iPhone, and iPad. (Photo credit: Wikipedia)
So just this month my 5-year-old iPhone3GS finally bit the dust. I had been hanging on and managed to extend its life well beyond its 3-year Telco contract (which I immediately cancelled the day I was out) by pairing it with a MiFi hotspot for much cheaper VoIP-based calling and using data-intensive applications only when on WiFi. That trusty iPhone3GS made it through a major liquid submersion (thanks to the good folks at Atlantic Cell Phone Repair) and two cracked screens (thanks to the good folks at iCracked). At some point I may even replace the screen again, which is what’s gone a third time. I’m pretty stubborn though, and now that I’ve finished off my Mobile contract for the MiFi as well, I really didn’t want, pretty much at all costs, to buy another discounted device, which usually requires agreeing to the terms of a foolishly one-sided/restrictive 2-year or 3-year contract; likewise, I really don’t want to shell out anywhere near the full asking price in the $500-$1000 range for a new smartphone. So it’s either go back to my old Nokia flip-phone and live in the early 2000’s on a basic voice-calling-only plan, or hack my old 4th generation iPod Touch into something with phone call abilities. Of course, I opted for the latter!
Luckily thanks to an excellent VoIP app called BRIA (of which a 4th gen. iOS 4 version is still available in the iTunes App Store), I was able to continue doing voice calling by using my Anveo VoIP service (highly recommend this low-cost VoIP provider, please enter Referral Code 5334764 if registering). I was already using Anveo through BRIA on the iPhone, over MiFi when on-the-go, for over a year and a half since I got out of that first contract. I’ve described Anveo in great detail in “My Experiment in Cutting Cords (and costs) with VoIP” where I went over setting the VoIP service up on an iPhone (with BRIA app) and just how much could actually be saved per month by taking the plunge and switching to VoIP instead of a traditional Telco calling/data plan. I’ve found that with a little patience and using replacements (such as Slingplayer in place of Bell MobileTV, or, SoundHound in place of Shazam) along with some occasional disappointment (can’t get older versions of Netflix, Skype, Fitocracy, and several other top apps), I was able to get a good amount (about half) of the apps I was most frequently using on my iPhone3GS, downloaded to the iPod4th gen, in their older iOS 4-supported versions.
One somewhat irreplaceable app, though, that I simply could not find, nor find a replacement for, was the basic “Podcasts” app built by Apple (common alternatives such as Overcast, Downcast, TuneIn, Slacker, and even RSSradio all did not work on my device either). I mean, seriously Apple, WTF!? Even the very first iPod devices became known, within a few years of their release, as the canonical “Podcatcher” (a Podcatcher being a podcast downloader/player).
The term “podcasting” itself was first mentioned by Ben Hammersley in a February 2004 article in The Guardian newspaper, as a portmanteau of the words “pod” (from the success in consumerizing digital music with Apple’s “iPod” line of products) and “broadcast” (as in traditional Radio/TV broadcasting to many receivers over a wide area, constantly). As such, the native “Podcasts” app has been around since the early days, as Podcatching (better known as receiving and listening to Podcasts) became one of the main functions of iPods, just as it continues to be core functionality on the many other iOS devices. Why, then, are older (iOS < 6) versions of the Podcasts app not still available through the iTunes App Store? The app existed back then, for those devices, and now it’s just plain unavailable it seems. Why not keep the old versions around? What if a legacy iPod user (anyone still on iOS 4 or lower, for that matter) accidentally wipes or restores their device to factory settings? Tough luck if they didn’t store a backup that had the legacy version of the app which still runs on their device. This is an example of planned obsolescence at its worst!!!
Apple be damned: could the Podcast app’s functionality be replaced with a quickly hacked-together web app, though? Being a developer, that’s the question I wanted an answer to. I realized it should definitely be doable, as Podcasts to me have always simply been RSS news feeds with links to Audio files embedded in them in a variety of ways. Thanks to Apple’s aforementioned “Podcatching” dominance and iTunes’ position of oligopoly, Podcasts also need to be garnished with plenty of Apple-specific syntactic metadata to satisfy the behemoth that is the iTunes Store and rank better therein, so you have to be able to parse that crap too.
All that to set the context for this experiment, which aims to concisely (I promise, hah, from here on) describe how I took my original RSS parser from the post “RSS Reader in jQuery vs. JavaScript” (on using JavaScript and/or jQuery to implement an RSS news reader) and modified it a few weeks ago to allow me to read the media links and embed codes.
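At its core, “podcatching” mostly means pulling the audio URLs out of the feed’s `<enclosure>` tags. A minimal sketch of that extraction step follows; it uses a simple regex rather than a proper XML parser (so it is only a starting point, not production-grade parsing), and the feed snippet is made up for demonstration:

```javascript
// Extract audio enclosure URLs from a podcast RSS feed string.
// A real podcatcher should use a proper XML parser; this regex
// approach is just a quick sketch of the core idea.
function extractEnclosureUrls(rssXml) {
  const urls = [];
  const re = /<enclosure\b[^>]*\burl="([^"]+)"[^>]*>/g;
  let match;
  while ((match = re.exec(rssXml)) !== null) {
    urls.push(match[1]); // captured value of the url="..." attribute
  }
  return urls;
}

// Made-up two-episode feed fragment for demonstration.
const feed = `
<rss><channel>
  <item><title>Ep 1</title>
    <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg"/></item>
  <item><title>Ep 2</title>
    <enclosure url="https://example.com/ep2.mp3" type="audio/mpeg"/></item>
</channel></rss>`;

console.log(extractEnclosureUrls(feed));
```

From there, each extracted URL can be dropped into an HTML5 `<audio>` element for playback, which is essentially what the web-app experiment does.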
BC$ = Behavior, Content, Money

The goal of the BC$ project is to raise awareness and make changes with respect to the three pillars of information freedom - Behavior (pursuit of interests and passions), Content (sharing/exchanging ideas in various formats), Money (fairness and accessibility) - bringing to light the fact that:
1. We regularly hand over our browser histories, search histories and daily online activities to companies that want our money, or, to benefit from our use of their services with lucrative ad deals or sales of personal information.
2. We create and/or consume interesting content on their services, but we aren't adequately rewarded for our creative efforts or loyalty.
3. We pay money to be connected online (and possibly also over mobile), yet we lose both time and money by allowing companies to market to us with unsolicited advertisements, irrelevant product offers and unfairly structured service pricing plans.