Tuesday, November 5, 2013

From Global Knowledge Clickbait to Discourse on Cisco's UC Domination

Bopping around on Twitter Monday, I ran across this tweet by Global Knowledge:


I figured I'd bite, but quickly determined that this whitepaper does NOT tell you how Cisco carved anything in anything.  It's just an overview of the CUCM solution.  This got me thinking it would be fun (Am I weird? Don't answer that.) to write a post that took into account all the things that made Cisco the leader in the Unified Communications space. So here goes my attempt at explaining the story.

Back in the day, you had the PBX.  And the PBX was a MONSTER.  It was complicated to manage, really expensive (both up-front and for recurring maintenance), and inflexible as all-get-out.  Initially, PBX services only augmented services provided by the PSTN, namely dial-tone.  Then came voicemail solutions, and ACD solutions, and ring-downs, and you get the idea.  By the time the mid-90's rolled around, computer networks, or Local Area Networks (LANs), were becoming more mainstream, and leading the vendor pack was Cisco.  Cisco was founded in 1984 as a network vendor that sold routers, used for connecting disparate network segments (and sometimes disparate protocols) together over various transport media.  These networks could be local on the LAN, or remote via circuits provided by the telcos.  In 1993, Cisco entered the LAN switching market by acquiring Crescendo Communications.  Over the next few years, Cisco rounded out the LAN switching portfolio by acquiring Kalpana and Grand Junction Networks.  With this dominance in computer networking, Cisco began to earn a loyal IT following because their solutions JUST WORKED.

In addition to the Crescendo acquisition, another interesting thing happened in 1993...the creation of the Cisco Certified Internetwork Expert (CCIE) certification.  Thus began Cisco's dominance in another area: mindshare in being THE certification for the networking industry.  Cisco certifications are valued even by Cisco competitors, such as Juniper and Brocade.  Most IT departments view Cisco certifications as a requirement for positions ranging from Help Desk Technician to Network Engineer.  By creating their certification program early on, they got the jump on other possible industry giants that might have wanted to create as reputable a certification portfolio.

During the mid-90's, the Internet Protocol (IP) was becoming the de facto standard, and with it more and more applications, as well as traditionally isolated solutions, were being ported to run over IP-based networks.  One of those solutions was the PBX.  Now, in most companies the PBX was owned and operated by the facilities department.  Because the PBX was a huge capital investment, akin to things like real estate and furniture, it made sense for the onus of the PBX to be on facilities.  Keep that in the back of your mind for a bit; we'll come back to it.  The first IP-based PBX was Selsius Systems' CallManager.  Selsius was formed in 1997, out of another company called Intecom.  Cisco took note of what Selsius was doing with IP-based voice and decided to make an offer for them.  They successfully beat out Nortel for the bid and acquired Selsius in November of 1998.  Oh, how things would have been different if Nortel had won that bid....  If you want to read more on the history of Selsius, check out this old post by Mark Nelson, a Cisco employee.

With IP-based solutions, the IT department started to own more and more corporate infrastructure.  Network engineering jobs saw an increase in demand, which required people with Cisco certifications like the CCNA, CCNP, and CCIE.  Therefore, Cisco's network market share increased because these certified folks wanted to work on the same gear they studied for.  See what they did there?

After the acquisition of Selsius, Cisco spent two years getting the product ready for prime time.  In 2000, Cisco released version 3.0 of Cisco CallManager, the first Cisco-branded release of the product.  Remember the facilities department and their grasp on the PBX?  Well, CallManager was pitched to IT departments, specifically NETWORK ENGINEERS.  This effectively supplanted the facilities department's long-held PBX vendor relationships.  Well played, Cisco, well played.  This was really a perfect storm.  Cisco was the IT department's friend, because their stuff "just worked".  Ever hear the phrase "Nobody ever got fired for buying x," where x equals a big IT vendor?  Because the IT department was being handed more and more solutions, it made sense for them to take the least risky path.  Even with the bugs that plagued those early CallManager releases, Cisco's world-class support allowed those customers to rest easy knowing that they were in good hands.  This path of least risk is further borne out by this brand survey, which puts Cisco at the top of the heap.  The author specifically calls out risk aversion as one of the main reasons for Cisco's leadership.

A strong component of the Cisco strategy has always been to sell and deliver solutions through the Value Added Reseller channel.  The channel acts as an extension of Cisco's sales and engineering staff, without Cisco incurring the direct expenses associated with those relationships.  Cisco certifications also play heavily into the channel space, because in order to sell certain Cisco products, the Cisco VAR (also referred to as a Cisco Partner) must have engineering talent on staff for the products in question.  Cisco Partners often sell other technology solutions as well, including those made by Microsoft, HP, IBM, etc.  This means that oftentimes, the partner sales rep has relationships with accounts that Cisco's own sales team might not have.  Talk about hedging your bets.  This channel delivery model has without a doubt been one of the key components of Cisco's dominance in various markets, including Unified Communications.

Oh, there's one more thing that Cisco did to help solidify their position as a market leader. Ever watch an episode of 24 and noticed a Cisco IP Phone? What better way to subtly influence a society full of TV-watchers than to put your products in the middle of TV shows! Genius! This started back in 2005, which was still relatively early for Cisco's UC portfolio.  But it got the image of those phones out there, you know, saving the world and stuff.

Everybody got the recipe for UC success?

Build a brand on a reliable premise, such as the plumbing of your corporate IT infrastructure.  Create a "club" mentality with certifications that aren't that easy to obtain, and that are respected by IT departments.  Encourage people to get said certifications because they will get you network engineering jobs.  Buy a VoIP company.  Stir new and old product offerings until well mixed.  Leverage partners for extending sales and engineering teams. Sell to your certified network engineers in IT departments everywhere. Make tons of money.  Sounds easy, eh?  Thanks for reading!

Monday, November 4, 2013

Some Cisco UC Basics, part three

In the olden days (read: three years ago), Cisco's desktop IM/softphone client combo was called Cisco Unified Personal Communicator (CUPC).  The experience was less than stellar.  In addition to CUPC, Cisco had an attempt at a softphone on iOS and Android called Cisco Mobile.  Nowadays, the desktop and mobile device landscape for Cisco centers entirely around a client they call Jabber.  You might remember that a few years ago Cisco bought Jabber, Inc. (the folks behind the XMPP protocol).  It's only been in the last couple of years that Cisco has visibly capitalized on this investment, which has basically entailed a major re-branding and consolidation of CUPC and Cisco Mobile.  Jabber works on Windows, Mac, Android, and iOS (no love for Linux, despite a ton of people asking for it, and the irony that all of the UC appliances run on top of Linux).  Do not be deceived: these Jabber clients are in no way the same code base.  A major shortcoming of Jabber to date has been a lack of feature parity between platforms.  That's getting better, but it still has a ways to go until the features on Jabber for Mac are the same as Jabber for Android.  Another change in the move to Jabber is that Cisco switched its presence protocol of choice from SIMPLE to XMPP.  That's a little ironic, because Jonathan Rosenberg, the creator of SIMPLE, actually works for Cisco.

The most exciting thing that Cisco has done in the last few years is purchase Tandberg, which, as a side note, gives a clue to Cisco's primary innovation mechanism: acquisitions.  Tandberg had really perfected video conferencing through the firewall via the H.460 protocol, H.323's solution for NAT traversal.  Basically, Tandberg had these two appliances, called Video Communication Server (VCS): one sitting inside the firewall, one outside the firewall or in the DMZ.  The internal one constantly connects to the outside box over TCP, asking if it has any calls for the internal endpoints.  If it does, then media and signaling ports are opened up by the internal box as requests to the external one, and magically the firewall doesn't have to have a bunch of pin holes added from the outside in (except when the outside VCS is sitting in the DMZ, where you'll need signaling and media ports opened from the outside to the DMZ).  Tandberg adapted this to work with SIP as well, so B2B video "just worked".  You would still have STUN, TURN, and ICE on the outside for endpoint purposes, but NAT traversal at the head-end works through this pairing of appliances inside and outside of the firewall.  Fast forward to now, and this coupling of appliances is going to be a major strategy for Cisco in providing VPN-less connectivity for the Jabber client, as well as B2B audio and video calling.  If you're a burgeoning UC admin, I highly recommend getting familiar with the concept of SIP DNS SRV records and SIP URI's, because they're going to become more significant in future UC deployments, regardless of whether Telepresence is in use or not.
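To illustrate what those SRV records look like, a hypothetical example.com zone might advertise its SIP service like this (the hosts, ports, and weights below are made-up placeholders, not a real deployment):

```
; _service._proto.name      class  type  prio  weight  port   target
_sip._tcp.example.com.      IN     SRV   10    60      5060   sip-edge1.example.com.
_sip._tcp.example.com.      IN     SRV   10    40      5060   sip-edge2.example.com.
_sips._tcp.example.com.     IN     SRV   10    100     5061   sip-edge1.example.com.
```

A remote system calling user@example.com looks up these records to discover which edge device to send the call to, and at what port, with no phone number and no pre-arranged trunk required.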

A few years before the Tandberg acquisition, Cisco purchased another company you may or may not have heard of before: WebEx.  WebEx had created an industry all on its own around web conferencing, and with Cisco's acquisition, everyone stood around and scratched their heads trying to figure out what Cisco's play was going to be.  So we waited.  And waited.  And then finally something interesting happened...Cisco created an integration option between their traditional conferencing platform, known as MeetingPlace, and WebEx for scheduling and sharing of web content.  MeetingPlace was a monolithic platform that required multiple nodes for audio, video, and web conferencing, as well as separate nodes for web-based scheduling.  MeetingPlace would integrate with CUCM via either H.323 or SIP.  So Cisco's integration between MeetingPlace and WebEx worked OK, but felt like an awkward blind date.  The integration was a little brittle, and the administrative interfaces didn't look anything alike, so there was a bit of a learning curve in juggling the two of them.  A little over a year ago, Cisco announced something that surprised a lot of people: Cisco's WebEx in an on-premise package called Cisco WebEx Meetings Server (CWMS).  This also marked the lack of an announcement for another release of MeetingPlace, so the writing was on the wall that Cisco was going to sunset the MeetingPlace product in favor of WebEx.

CWMS is an exciting premise, because it allows a customer to have a full copy of the WebEx platform hosted on their own servers.  The scalability ranges from 50 concurrent users all the way up to 2,000.  A downside to "on-prem WebEx" is that a lot of compute power is required to run even the basic 50-user configuration.  Also, the solution requires that VMware vCenter be deployed, which can add some complexity to the environment.  The full CWMS solution consists of an Internet Reverse Proxy (IRP) virtual machine, an Admin virtual machine, and for larger implementations (250+ users), a separate Media virtual machine.  For the smallest installation, a single physical host can handle an instance of CWMS with an IRP and Admin VM.  To make it redundant, you add a second physical host.  As you scale up, more physical hosts are required because the processor and memory requirements grow quite a bit.  At the 800-user mark, the total number of cores required for a NON-REDUNDANT configuration is 80, and the amount of memory required is 116 GB.  At the 2,000-user mark, there's an additional virtual machine type for web conferencing only, and you are required to have 3 Media virtual machines.  The total number of cores for a non-redundant configuration is 160, and the total amount of RAM is 276 GB.  Ouch.
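To put those sizing numbers in perspective, here's a quick sketch of the host math.  The core and RAM figures come from the paragraph above (non-redundant configurations); the per-host specs in the function are hypothetical placeholders, so treat this as illustrative only and check Cisco's sizing documentation for a real deployment:

```python
# Non-redundant CWMS resource requirements, per the figures quoted above.
cwms_sizing = {
    800:  {"cores": 80,  "ram_gb": 116},
    2000: {"cores": 160, "ram_gb": 276},
}

def hosts_needed(users, cores_per_host=20, ram_per_host_gb=96):
    """Rough count of physical hosts for a non-redundant deployment.

    cores_per_host and ram_per_host_gb are assumed example host specs,
    not anything Cisco publishes.
    """
    req = cwms_sizing[users]
    by_cpu = -(-req["cores"] // cores_per_host)    # ceiling division
    by_ram = -(-req["ram_gb"] // ram_per_host_gb)
    return max(by_cpu, by_ram)
```

With those assumed host specs, an 800-user system is CPU-bound (80 cores needed vs. 116 GB of RAM), which matches the "Ouch" above: cores, not memory, are what you run out of first.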

If you want to connect your video conferencing solution to WebEx, you might want to sit down.  WebEx was a proprietary platform when Cisco bought them.  So proprietary, in fact, that it took Cisco 5 years to announce that they were going to allow integration between their Telepresence portfolio and WebEx.  The integration was named WebEx Enabled Telepresence (WET), and requires a VCS Control/VCS Expressway pairing, DNS SRV records, the Telepresence Management Suite (TMS) product, a Multipoint Control Unit (MCU), and some endpoints.  The downside to this solution is that you'll likely need to hold an IPO just to afford everything.  To set up a meeting, the conference is scheduled through either TMS or WebEx, and when it's time for the meeting, all the local participants connect to the MCU as well as the WebEx "cloud".  Everyone can see everyone else; however, to the on-premise folks, the WebEx video will appear as a single feed, and vice versa for WebEx participants.  The experience can be a little frustrating if you have a large number of people on WebEx video.  This solution DOES NOT work with the on-premise CWMS.  At least not yet.

I hope this has been helpful to give you a snapshot of some current Cisco UC basics.  Let me know if there's anything you'd like to read about that has puzzled you, and I'll do my best to break it down.  Thanks for reading!

Friday, November 1, 2013

Ultra-teeny-tiny-book and Choosing Your Own Device

I should preface that I work for Softchoice Corporation, but this blog post is ENTIRELY my own opinion.

Also, one or more of the vendors in this post asked my company to sanitize their version of this blog post, removing all negative commentary.  The following is all the content they didn't want you to see.  Enjoy!

(UPDATE: I'm sending my device back.  It's too under-powered for me, and I'm just not productive with it.)

Ahh, the UltraBook.  Small form factor, good battery life, decent specs.  I've been trying out the Lenovo Helix, the tablet/laptop hybrid that made a splash at CES earlier this year.  My employer, Softchoice, has been trialing a Choose Your Own Device campaign sponsored by Intel and some other vendors, and I was lucky enough to be included in the program.  I chose the Helix because its listed specs were better than those of the other devices.  I'm a power user, so I need at least a Core i7 and 8 GB of RAM.  Storage isn't a big deal, since I keep the majority of my files on Box.net and can pick and choose what gets synchronized down to my device.  The Helix as it was listed indeed met my spec requirements.

After a small wait, the device finally arrived. And I was like a kid at Christmas time.  (Admittedly, this happens anytime I receive a package from Amazon too.)  I opened the package, turned the device on, and...squinted to see the screen.  The Helix has an 11.6" screen.  And since I've used 10" tablets for a couple of years now, I thought an extra 1.6" would be a bonus.  Nada.  The resolution on the Helix made using the device as a "laptop" near impossible.  Holding it closer to my eyes like I would a true tablet was fine, but when I tried to use the keyboard (you know, all laptoppy and stuff), I looked like a sad T-Rex.  So that's "-1" for Lenovo for pairing a tiny screen, high resolution, and a keyboard without also including a coupon for a visit to an eye doctor prior to shipment.

Then I tried the device's tablet mode.  Undocking the screen from the keyboard was like the starship Enterprise going through a saucer section separation. It just worked, and there were no problems whatsoever.  The device has USB ports as well as a display port on the bottom edge of the tablet, which is normally flush with the keyboard dock.  I definitely love the idea of being able to plug a USB device into my tablet.  So a "+1" to Lenovo for building the tablet-only portion of the Helix to stand alone nicely without the keyboard dock.

At this point, the Helix had been on for about 30 minutes.  I started to notice some heat radiating from the back right, so I placed my hand on that part of the device.  The temperature was uncomfortably hot for a tablet, and I had not really even been stress testing the unit.  Another "-1" to Lenovo.

I decided to see if anything was using my processor, so I opened Task Manager.  Lo and behold, I only had 4GB of RAM.  This was confusing to me, because I specifically remembered seeing 8GB of RAM in the spec sheet.  I checked the processor: Core i5.  And lastly, for giggles, the HD was a paltry 100GB instead of the 180GB advertised, though the real disappointment was the processor and memory specs.  Giving my company the benefit of the doubt, since this program is a "trial run", I decided to move forward with using the device to augment my desktop.

As an engineer, I can safely say that 4GB of RAM is almost useless in a world with tabbed browsers, Windows 8, Office 2013, and multiple office files nearing (and in some instances exceeding) 10MB in size. Thus began my frustrations with the Lenovo Helix.  I used it for all of 30 minutes on my first day of ownership, primarily because the Helix was just sluggish compared to my primary machine.  There was no way I was going to sacrifice productivity when I could use something else that was snappier.  Ok, so maybe I'll just stick with using it on the couch and when I travel.

This was also my first foray into Windows 8 on an OEM device.  I'm less than impressed with Microsoft's latest OS.  The interface isn't very intuitive and doesn't flow well between applications, especially if you have a mixture of Metro (full-screen) and traditional (windowed) apps.  The Windows 8 store is a mite...challenged (staying politically correct here).  The majority of the pre-loaded apps wouldn't update, nor could I install anything new from the store for at least a few days.  I think a Windows update sorted the problem out.  The app ecosystem is a bit light as well.  Some of the core apps that I use on my Android tablet were duplicated here, but the functionality was in some instances crippled.

My first road trip with the device was a success, other than the aforementioned issues in using the device as a laptop.  When I arrived back home, using the Helix had made me appreciate my main machine.  I can safely say that given another chance/choice, I would go with something that had a larger screen and better specs, even if I had to put a little skin in the game with my own money.

The idea of allowing users to choose their own device from a pre-set list is a much better approach than allowing carte blanche into your organization (like what happens with BYOD).  In the context of a CYOD model, I would encourage every IT department to work with the lines of business to establish user profiles that equate to classifications of specs.  This way, you're not giving engineers machines that are under-powered and not capable of running VMware Workstation or opening a 200MB Excel document.  Conversely, it doesn't make any sense to give someone in sales a Core i7 with 8GB of RAM.  Use cases will set the criteria for those device lists.  Also, build your CYO policy to account for what happens when a user chooses a device and that device falls short of the user's needs.  CYO is another tool in the utility belt of IT to accommodate the changing workforce.  In my opinion, it should always be chosen over BYOD, as it gives control back to IT, and it will likely end up being the "best tool for the job" for IT departments everywhere for the next few years.

Wednesday, October 23, 2013

Some Cisco UC Basics, part Deux

Cisco's call control product is UC Manager, which used to be named CallManager, and gets referred to as Communications Manager or CUCM.  It's designed to be scalable, up to 10,000 registered endpoints per node.  The per-node scalability is determined by the VMware OVA template applied in the initial build-out of the system.  Systems Integrators choose the correct OVA by evaluating the current number of endpoints in the environment plus a target growth number, which is normally provided by the customer.  The reason not every Systems Integrator goes right to the 10,000-user OVA is that the system specifications for those virtual machines are heavier, so available resources on the physical host are reduced, which in turn reduces the host's capacity to hold other virtual machines.

UC Manager nodes can be clustered together, and that involves a Cisco proprietary protocol that passes call and endpoint state between nodes within the cluster.  This proprietary protocol requires that every node maintain a connection to every other node, so the connections are fully meshed.  This impacts the bandwidth required between nodes in the cluster, as a single connection requires up to 1.5 Mbps of bandwidth.  This is a big deal if you are planning to cluster nodes over the WAN and have limited bandwidth to work with.  This protocol allows the entire cluster to act as one big registrar for endpoints.  Cisco allows up to 8 call processing nodes in a cluster, and they cap the number of phones per cluster at 40,000.
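The full-mesh requirement grows quadratically with cluster size, which is why it matters for WAN designs.  A quick back-of-the-envelope sketch (assuming roughly 1.5 Mbps per node-to-node connection, per the figure above):

```python
def mesh_links(nodes):
    """Number of point-to-point connections in a full mesh of `nodes` members."""
    return nodes * (nodes - 1) // 2

# An 8-node cluster (Cisco's call-processing maximum) has 28 total links.
# A single node clustered over the WAN maintains 7 of them, so at ~1.5 Mbps
# apiece that remote site needs on the order of 10 Mbps just for cluster traffic.
total_links = mesh_links(8)
remote_node_links = 8 - 1
```

The point of the sketch: adding "just one more node" across the WAN doesn't add one link, it adds a link to every existing node.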


In SIP terms, UC Manager is a B2BUA; however, media never flows through it, it's always directly between endpoints.  There isn't even a configuration option to change this.  This fact gives a little more predictability to the scalability considerations of the system.  Nowadays, SIP is definitely the protocol of choice for CallManager, both line-side to endpoints and trunk-side to PSTN gateways, although the protocol used to the gateway can vary depending on which Cisco voice engineer you talk to.


The gateway is almost always a Cisco router.  Voice functionality is built into Cisco's networking OS (IOS) and is enabled by a "UC" license.  The gateway runs one of four possible protocols: SCCP (Skinny), MGCP, H.323, or SIP.  The protocol used is usually determined by what the customer is connecting to the gateway.  Analog lines, for instance, usually work better with SCCP or MGCP.  PRI's are going to entail usage of MGCP, H.323, or SIP.  If the customer is setting up a SIP trunk to a telco, SIP would be the protocol used from the gateway to CUCM.  MGCP relies on CallManager for call control and dial-plan decisions.  H.323 and SIP are effectively peer-to-peer protocols, meaning separate dial-plans and call routing decisions are made on both CallManager and the gateway.  The gateway can also provide transcoding, ad-hoc conferencing, and Media Termination Point (media relaying and DTMF termination) services.  These services register to CUCM using SCCP and can easily be co-located with the other protocols.
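To make the peer-to-peer point concrete, here's a rough sketch of a SIP dial-peer on an IOS gateway routing four-digit extensions toward CUCM.  The dial-peer tag, pattern, IP address, and codec are hypothetical placeholders, not a tested configuration:

```
dial-peer voice 100 voip
 description Route 2xxx extensions to CUCM (example only)
 destination-pattern 2...
 session protocol sipv2
 session target ipv4:10.1.1.10
 codec g711ulaw
```

Note that a matching route pattern has to exist on CallManager for the reverse direction; that duplication of dial-plan logic on both boxes is exactly the peer-to-peer behavior described above, and it's what MGCP avoids by centralizing the dial-plan on CallManager.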

In the first part of this series, I mentioned that Cisco had chosen IBM Informix for the database component.  The way that CallManager passes configuration data between nodes is through a database Publisher/Subscriber model.  There are a few challenges to this approach in the CallManager space.  The first node installed becomes the Publisher server.  This role is locked in to the first installed node and cannot be changed later without more or less backing up the Publisher, re-installing, and restoring the backup.  So if you find yourself with a downed Publisher and no backup, you'll have a headache in front of you, because the environment will need to be rebuilt.  The Subscriber nodes will continue functioning; however, no configuration changes can be made without that Publisher.  Cisco has not provided the capability to promote a Subscriber server to a Publisher.  Also, for sometimes unknown reasons (and quite a few known ones), database replication will break.  There's a list of CLI commands to go through when this happens; that resolves the problem 80% of the time, and the remaining 20% of the time you will be on the phone with Cisco's Technical Assistance Center (TAC).
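For reference, the replication commands in question are run from Cisco's CLI wrapper rather than a bash shell.  This sequence is from memory, so verify it against Cisco's CUCM CLI documentation before relying on it:

```
utils dbreplication runtimestate
utils dbreplication status
utils dbreplication repair all
utils dbreplication reset all
```

The first two are read-only checks of replication health across the cluster; repair and reset are the progressively heavier hammers, and if reset doesn't bring replication back, that's usually when the TAC case gets opened.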


Stay tuned for part 3, where I'll talk about some of the other Cisco UC applications (client and server).

Monday, October 21, 2013

Some Cisco UC Basics, part 1

In part one of this series, I'm going to explain the overall Cisco UC landscape as it exists today (with a bit of historical information thrown in). 

The Cisco UC Product Suite consists of a slew of different appliance virtual machines, providing functions like traditional call control (dial-tone), voicemail, presence, E911 (specifically, providing more specific location information to the Public Safety Answering Point than just physical address), conferencing (think web-based content sharing and audio bridging), and telepresence (video conferencing).  All of those functions run separately, so there are no co-resident applications on a single virtual server instance.  On the physical host, multiple virtual machines are allowed; however, over-subscription of resources is not permitted.  For instance, if each virtual machine takes 2 processor cores and 4 GB of RAM, a physical host with 2 quad-core processors (8 total cores) and 32 GB of RAM could hold a maximum of 4 virtual machines.  Even though there is RAM to spare, the physical host runs out of available cores and thus cannot hold additional virtual machines.
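The no-oversubscription rule from that example reduces to a simple min() over the two resources.  A quick sketch using the figures from the paragraph above:

```python
def max_vms(host_cores, host_ram_gb, vm_cores, vm_ram_gb):
    """Max co-resident UC VMs when neither CPU nor RAM may be oversubscribed."""
    return min(host_cores // vm_cores, host_ram_gb // vm_ram_gb)

# 2 quad-core CPUs (8 cores) and 32 GB RAM; each VM takes 2 cores / 4 GB.
# CPU allows 4 VMs, RAM allows 8, so cores are the bottleneck: 4 VMs total.
capacity = max_vms(8, 32, 2, 4)
```

Whichever resource divides out smaller wins, which is why the example host strands 16 GB of RAM.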


All the individual services within the virtual machines run on top of a re-packaged Red Hat Enterprise Linux.  Even though this is the RHEL that a lot of companies know and love, the normal Linux bash shell is not accessible.  Cisco built a CLI wrapper that mimics some of the traditional IOS functionality and allows an administrator to perform basic OS administrative functions.  This re-packaged RHEL is referred to as the Cisco Telephony OS.  Cisco uses an IBM Informix database for storing all configuration, call states, and Call Detail Records.  Depending on the application, you may or may not be able to directly access the database, and might be limited to an API for accessing and/or modifying relevant information.  The graphical administrative interfaces and web-based API's all run on top of Apache Tomcat.

As far as scalability goes, Cisco is very rigid in their product system requirements.  This means you can't just go willy-nilly and throw a product instance on any old server or hypervisor.  The general rule of thumb is that their products should be on UCS servers, virtualized on VMware 5.x, and if there is any SAN connected to the servers, it should be connected via Fibre Channel.  However, they will allow customers to go outside of these requirements to some degree.  You may use any brand of server you want as long as the hardware specs meet the requirements, and you may also use iSCSI or NFS for storage.  The caveat is that the level of support you get from Cisco won't be as good as it would be if you had stayed on UCS servers and Fibre Channel or direct attached storage.


At first, customers didn't have the option to virtualize, so every appliance ran on bare-metal machines that Cisco OEM'ed from HP or IBM.  Version 8.0 introduced the option of hardware abstraction, and virtualization is going to be the only supported deployment model for these appliances in the upcoming 10.0 release.  There have been rumors that Cisco would open up the hypervisor requirement to allow something else.  Personally, I'm hoping for some KVM support, because today VMware is just another thing that customers have to purchase.

Thanks for reading, and stay tuned for part 2!



Saturday, June 8, 2013

The Future of Communications, Part 3: SIP Federati

For a while now, setting up SIP for voice to the outside world through these amazing SRV records has been kinda like being the first person in the world to own a fax machine: they probably didn’t get very many faxes, since there weren’t any other machines out there to send them anything.


The idea of communicating with an organization outside of your own is called “federation”, which is defined as the joining of two distinctly different networked systems.  The need to connect one domain to another first arose from the desire to share Instant Messaging services between organizations.  From this, we are seeing two divergent philosophies around federation.  The first is the idea that not just anyone should be able to federate with my domain: a user in my domain should make a request for the federation, and the systems administrators can work with the systems administrators of the other domain to set the relationship up at the server level.  The second is the idea that my domain is open to anyone that wants to send me an instant message, voice, or video call.


The open federation model is more along the same philosophical lines as email and SMTP, but creates the possibility of voice or video spam, unwanted calls by telemarketing fir...wait a minute, we already have these things on the PSTN!  Can you imagine if, early in SMTP’s infancy, we had adopted the “manual federation” mentality?  That protocol would have been worthless if it couldn’t have scaled with the growth of the Internet.  Sure, we got spam out of it, but there are always going to be people who abuse the system that’s in place.  Our world has been transformed by email and instant communication worldwide, the same way that the world was transformed by the telephone back in the late 1800’s.  It’s only logical that we take voice communication to the same level as email and video conferencing: ubiquitous B2B voice calls over SIP.


Another reason to take B2B SIP voice seriously is that we aren’t going to have the PSTN forever, at least not in its analog and TDM forms.  Efforts are underway to establish what the next phase of the PSTN should look like, and by all accounts SIP and direct IP-to-IP communications are the forerunners in the options race.  There have been a lot of dates thrown out for this “refreshing” of the PSTN, some as early as 2015, some as late as 10 years from now.  One thing’s for certain: the carriers will try to push for a solution that maximizes their profits and burgeoning monopolies (good article here).


I believe it’s up to IT departments everywhere to move the companies that they support into the new age of voice communications with SIP.  This doesn’t necessarily mean ripping and replacing existing infrastructure, as SIP can be added to existing PBXs through IP-to-TDM gateways that do the protocol and medium conversions, all while preserving the dial-plans in place.  This can be taken further and set up in your DMZ to enable SIP calls to your organization using the directory number @ domain name notation.  This, of course, is a one-way solution, since legacy PBX phones can’t dial SIP URI’s, but it’s a start, and it keeps the phone companies out of at least some of your voice calls.  The ideal solution would utilize a modern IP PBX like Cisco’s Unified Communications Manager or Microsoft Lync, both of which can federate with outside organizations.  Beware, though: Microsoft Lync’s philosophy, to the core, is vendor lock-in, so standards-based protocols are few and far between.


We are at a crucial time for the evolution of communications worldwide.  I believe the pieces are there to make rich-media voice, video, and text communications operate just like email: a decentralized, highly effective network of individually controlled servers and appliances.  Whatever solution you are considering, make sure that the manufacturer actively supports B2B SIP, or at least has it on a near-term roadmap, and operates in the realm of Internet standards.  Proprietary solutions may work better in one or two siloed areas, but overall, the vendor lock-in will restrict your flexibility and interoperability with the outside world.   If you want some help wading through the options available to you, talk to your local trusted VAR that specializes in Unified Communications, or find a good freelance consultant who knows this stuff.  Thanks for reading!

Thursday, June 6, 2013

The Future of Communications, Part 2: Can you see me now?

This is a continuation from Part 1 of my series, The Future of Communications.

Let's shift gears away from voice for a bit and talk about video (don't worry, we'll come back to voice).  Video teleconferencing, or VTC, has been around in practical form since the 1980s, utilizing the ISDN networks built by the carriers in anticipation of the "digital age" of communication.  When data networking became prevalent in the 1990s, video teleconferencing was one of the first communications methods to utilize the technology.  At the time, video conferencing over ISDN was extremely expensive and required dedicated ISDN circuits to communicate with outside organizations.  It turned out that the Internet, to which most larger businesses were connected by then, was a lot cheaper to use.  In the early 2000s, H.323, a protocol born out of the ITU, was used by VTC manufacturers to connect disparate video units together.  The challenges with H.323 were many; chief among them, the complexity of the protocol meant implementing it was not for the faint of heart.  The shift to SIP started to happen in the mid-2000s, when video teleconference manufacturers started including SIP software support in their endpoints and infrastructure components.


The ability to scale SIP the same way SMTP had been scaled is what set the protocol ablaze.  SIP was developed with the expectation that DNS, specifically service (SRV) records, would be used to provide a dynamic way to establish sessions with known Internet domains, and subsequently with the servers owned by those domains.  Here's what a sample SRV record for SIP looks like:


_sip._tcp.example.com. 86400 IN SRV 0 5 5060 sipserver.example.com.


In this example, an initiating SIP server or proxy would make a DNS query for the example.com domain to learn which server and port to send a SIP INVITE message to.  In this case the server is sipserver.example.com on TCP port 5060 (the standard unsecured SIP port); the 0 and 5 are the record's priority and weight, which allow traffic to be load-balanced across multiple servers.  Using this method, VTC manufacturers could scale to multiple VTC units inside an organization without having to assign a public IP to every single unit.  A simple Uniform Resource Identifier, or URI, could be used to identify individual endpoints.  Much like an email address, a URI consists of a username, the @ symbol, and the domain name.  For example, a screen in the boardroom of Example.com might be named boardroom@example.com and could be called using this simple, easy-to-remember URI.
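The lookup-and-select step above can be sketched in a few lines of Python. This is a toy parser for zone-file-style SRV text, not a real resolver, and it simplifies server selection (RFC 2782 actually calls for weighted-random choice among equal priorities):

```python
# Toy illustration of how a SIP proxy interprets an SRV answer.
# A real implementation would query DNS and follow RFC 2782's
# weighted-random selection; this sketch just parses and picks.

from collections import namedtuple

SrvRecord = namedtuple("SrvRecord", "priority weight port target")

def parse_srv(answer: str) -> SrvRecord:
    """Parse a zone-file style SRV record, e.g.
    '_sip._tcp.example.com. 86400 IN SRV 0 5 5060 sipserver.example.com.'"""
    fields = answer.split()
    # fields: name, ttl, class, type, priority, weight, port, target
    return SrvRecord(int(fields[4]), int(fields[5]), int(fields[6]),
                     fields[7].rstrip("."))

def pick_server(records: list) -> SrvRecord:
    # Lowest priority wins; higher weight breaks ties (simplified).
    return min(records, key=lambda r: (r.priority, -r.weight))

record = parse_srv(
    "_sip._tcp.example.com. 86400 IN SRV 0 5 5060 sipserver.example.com.")
print(record.target, record.port)  # sipserver.example.com 5060
```

The proxy would then open a TCP connection to that target and port and send its INVITE.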


This exact same capability exists for voice calls; however, the implementation of SIP in the enterprise has been focused on the station (phone) and trunk (carrier) sides rather than on the Business-to-Business (B2B) possibilities.  I'd say that I blame the carriers for this, but in the end, we all know what their motivations are and shouldn't be surprised by their greedy natures.  Carriers hate the idea of losing revenue streams, and while these same carriers can also provide the data services to run SIP calls over, currently they charge for both.  The same can be said for carriers that also have a cellular business, as voice plans still cost more than data plans (don't even get me started on texting).  If B2B SIP calls were more common, carriers would start to lose these revenue streams, because SIP calls run best over (typically) more reliable, higher-bandwidth WiFi networks.  At the end of the day, the advancement of B2B SIP falls squarely on the shoulders of the corporate world, the users of the equipment that could enable this capability.  If we complacently continue to accept the options we are given, instead of demanding change in the way our communications work, then this situation with the carriers is going to continue, and the "communications mafia" is going to keep collecting their monthly check from you.

Part 3 coming soon!



Wednesday, June 5, 2013

The Future of Communications, Part 1: In the beginning was the phone, and the phone was without dial-tone.

One could argue that the telephone paved the way for modern society as we know it, connecting friends, relatives, co-workers, employers, and employees in ways never before seen in human history.  We've all essentially grown up on the telephone, talking to our distant relatives when we were kids, or as teens sneaking to stay up late talking to the opposite sex.  In the last 10 years, we've witnessed the explosion that has been the cellular telephone industry, and coupled with that, the improvement in the technology of the handsets themselves, marrying telecommunications and computing capabilities.  The times, they are a-changin'.


Much of the hoopla (technical term) over the past few years in the telecom industry has been over the concept of Unified Communications, which is essentially just the idea of running voice services over a data (or Internet Protocol) network.  While the carriers have been doing it for years, this more recent push for Voice over IP (VoIP) has been typically seen as something done within an organization, utilizing a company’s already in-place routing and switching infrastructure, and even extending voice and IM services to those über-intelligent mobile handsets.  Voice services to the outside world have always been provided by the archaic, limited in function, soon to be obsolete Public Switched Telephone Network (PSTN).


Modern phone companies can trace their roots all the way back to Alexander Graham Bell's invention of the telephone, patented in 1876.  The technology has evolved, but the basics remain the same: you pick up the receiver, and you end up talking to someone on the other end.  The modern PSTN utilizes a numbering plan that allows regional phone companies to facilitate easy, operator-less calls between parties.  This includes the intra-country dial-plan, as well as interoperation with other countries via country codes.  For example, in the US our international dialing prefix is 011, which we follow with the country code and then that country's dial-plan.  For me to call the office of the City of Melbourne, Australia, I would dial 011 61 3 9658 9658.  There will be some pricey international charges for that call, as the phone companies bill customers for long distance and international calls.  This has led a lot of businesses to restrict who can dial internationally from their PBX.
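The prefix-plus-country-code arithmetic can be shown in a trivial Python helper. The helper name is my own; only the 011 prefix, the 61 country code, and the Melbourne number come from the example above:

```python
# How a US caller's dialed digits are assembled for an international call:
# international prefix (011) + country code + the national number.

US_INTL_PREFIX = "011"

def dial_string(country_code: str, national_number: str) -> str:
    """Return the digits a US caller dials for an international call."""
    digits = national_number.replace(" ", "")  # strip display spacing
    return f"{US_INTL_PREFIX}{country_code}{digits}"

# City of Melbourne office, country code 61, national number 3 9658 9658:
print(dial_string("61", "3 9658 9658"))  # 01161396589658
```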


What if I wanted to send an email to a friend who lives in Australia?  Well, when I click "send", my mail server fires off a request to the local Domain Name System (DNS) server to see what Mail eXchange (MX) record exists for the domain name in my friend's email address.  Once my mail server knows the destination server's address, it sends the email to the Simple Mail Transfer Protocol (SMTP) service running on port 25 of that server, and voila! the email is delivered.  Have you ever stopped to ask yourself why we don't pay international email charges?  Quite simply, the Internet was designed as a foundation for whatever services and protocols clever, inventive computer scientists could dream up.  Email just happens to be one of them.
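The MX-selection step the mail server performs can be sketched like so. The hostnames here are made up, and a real server would get the records from DNS rather than a hard-coded list; the point is the rule itself, that SMTP tries the exchanger with the lowest preference value first:

```python
# Sketch of MX selection: each MX record pairs a preference value with a
# mail exchanger hostname, and lower preference means "try me first".
# The records below are hypothetical stand-ins for a real DNS answer.

def pick_mx(mx_records: list) -> str:
    """Return the mail exchanger with the lowest preference value."""
    preference, host = min(mx_records)
    return host

records = [
    (20, "backup-mx.example.com.au"),  # fallback exchanger
    (10, "mail.example.com.au"),       # preferred exchanger
]
print(pick_mx(records))  # mail.example.com.au
```

Only if the preferred exchanger is unreachable does the sender fall back to the higher-preference hosts, which is how email gets its resilience without any central operator.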


As it turns out, there's another protocol out there capable of doing the exact same thing that email does, but for voice communication.  Session Initiation Protocol (SIP) was designed to mimic the same decentralized session establishment that SMTP utilizes, but for any type of interactive session.  This has led to the use of SIP for a myriad of services, such as video, Instant Messaging, and especially voice communication, since voice is one of those legacy technologies that was due for some market disruption.  SIP was created back in 1996 and ratified in its current iteration in 2002.  SIP came out of the Internet Engineering Task Force (IETF), as opposed to the normal batch of telecommunications protocols, which are controlled by the International Telecommunications Union (ITU).  This highlights a very key point in our story: SIP came out of the standards body responsible for the protocols that run the Internet, not the one made up of all the world's phone companies.  Thus, the focus, intent, and application of SIP has typically come from companies like Cisco, who at their very core are Internet Protocol dependent.


One of the biggest challenges with SIP early on was that the original standards documents (called RFCs, or Requests for Comments) left a lot of room for interpretation in implementing certain features.  I called SIP the "Subject to Interpretation Protocol" for years, having seen firsthand all the challenges with interoperability of various products and services.  With SIP coming into its prime in the last 5 years and telecom manufacturers developing very effective methods for ensuring interoperability, telcos have started offering SIP-based trunking services to businesses all over the world.  This is a great thing for the enterprise, the small and medium business, and especially the phone companies.  They get to remain the voice clearinghouse, charging for access fees and international calling, and, of course, benefiting from those pesky billing mistakes that always manage to sneak past accounts payable departments everywhere.

Part 2 coming soon!