Sadly, I am no longer employed by IBM, so this blog, which started as a team blog for the IBM Business Partner Technical Strategy Enablement (IBPTSE) Telecom team, no longer represents IBM in any way, shape or form. When I became the Chief Telecom Architect for the WebSphere brand worldwide, I continued to write posts. The WebSphere brand merged into the new Cloud brand in 2015 and I retained the same role, working with Telcos all around the world to design software solutions to solve their business problems. Now, as an independent Telecom architect, my hope is to continue/resurrect this blog on Telecom business issues and technology.
Thanks for visiting. Please comment on posts and leave your thoughts.
If, like me, you are hearing 'Blockchain this, blockchain that', it almost seems like blockchain will achieve world peace, solve global hunger and feed your pets for you! We're obviously at the 'peak of inflated expectations' of the Gartner hype cycle.
I saw a tweet yesterday from an ex-colleague at IBM that spoke about using blockchain to combat fraud in a Telco. While I can see that as a possible use case, I was thinking about other opportunities for blockchain.
Perhaps I need to explain blockchain briefly so that those unfamiliar with it can follow the Telecom use cases too. Wikipedia defines it like this:
"A blockchain... is a distributed database that maintains a continuously growing list of records, called blocks, secured from tampering and revision. Each block contains a timestamp and a link to a previous block. By design, blockchains are inherently resistant to modification of the data — once recorded, the data in a block cannot be altered retroactively. Through the use of a peer-to-peer network and a distributed timestamping server, a blockchain database is managed autonomously. Blockchains are "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. The ledger itself can also be programmed to trigger transactions automatically."
So, it's an immutable record of changes to something. I was thinking about that yesterday, and there were a number of use cases in Telecom I could think of that could use blockchain. I'm not suggesting that they should use blockchain or that it's needed, just that they could. These are the use cases I came up with:
Fraud prevention - being immutable makes it harder to 'slip one by' the normal accounting checks and balances that any large company has. I suppose the real question is 'exactly which records need to be stored in a blockchain to enable that fraud prevention?' The obvious ones are the billing records.
Billing - maintaining the state of post-paid billing accounts: who is making payments, billing amounts and other billing events (such as rate changes, grace periods etc)
Tracking changes to the network - at the moment, many of the changes in a Telco's network may be made by staff, but increasingly, maintenance and management of the network is being outsourced to external companies, and you want to keep an eye on them to ensure they're doing what they say they're doing. In the new world of Software Defined Networks (SDN) utilising Network Function Virtualisation (NFV) to build and change the network architecture at a rate we've not seen before, it becomes important for a Telco to be able to track changes to the network to diagnose faults and customer complaints. Over a 24 hour period, a path on a network that supports enterprise customer X may change tens of times - a much higher frequency than would be possible if the network elements were physical.
Tracking changes to accounts by customers and telco staff - I could imagine a situation where a customer claims that they didn't request a configuration change, but a blockchain-based record of changes could be used to track back through all the changes in a customer's account to determine what happened and when - potentially enabling a Telco to limit its liability to the customer... or vice versa...
Tracking purchases - A blockchain record of purchases would allow a CSP to rebuild a customer's liability from base information; provided there was an immutable record of the data records as well...
xDRs - any type of Data Record (CDRs, EDRs...) could be stored in a blockchain to facilitate rebuilding of a client's history and billing records from base data. The problem with using a blockchain to store xDRs is the size requirement. I know that large CSPs in India, for example, produce between five and ten BILLION records per day. It wouldn't take long for that to build up to a very large storage requirement - even if you store only the mediated data records, it's going to be very large. I guess the question is: 'what is the return on investment?' - is it worthwhile doing? I can't think of a business case to justify such an investment, but there may be one out there.
Assurance events - Recording records associated with trouble tickets and problem resolution.
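To make the 'immutable record' idea a little more concrete, here's a toy sketch of how hash-chained blocks make retroactive edits detectable. This is purely illustrative (the record contents and field names are invented for the example); a real deployment would use a platform such as Hyperledger Fabric rather than anything hand-rolled like this:

```python
import hashlib
import json
import time

def make_block(records, prev_hash):
    """Build a block whose hash covers its records, timestamp and the previous block's hash."""
    block = {"timestamp": time.time(), "records": records, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash; any tampered record or broken link fails verification."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block(["account 42: plan changed to post-paid"], "0" * 64)
chain = [genesis, make_block(["account 42: rate updated"], genesis["hash"])]
assert verify_chain(chain)

chain[0]["records"][0] = "account 42: nothing happened"  # tamper with history
assert not verify_chain(chain)  # the tampering is immediately detectable
```

That detectability, combined with distributing copies across many parties, is what the fraud-prevention and audit use cases above are really leaning on.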
I don't for a second think that all of these can be justified in terms of cost/benefit analysis, but I could see blockchain being used in these scenarios.
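On the xDR storage question above, a quick back-of-envelope calculation shows why the volumes are daunting. The record size here is my own assumption (mediated record sizes vary widely by CSP and record type):

```python
# Back-of-envelope xDR volume at a large CSP.
records_per_day = 7.5e9   # midpoint of the 5-10 billion records/day figure
bytes_per_record = 500    # assumed average mediated record size

daily_tb = records_per_day * bytes_per_record / 1e12
yearly_pb = daily_tb * 365 / 1000
print(f"{daily_tb:.2f} TB/day, {yearly_pb:.2f} PB/year")
```

Several terabytes per day, petabytes per year - and a blockchain replicates that across every participating node, which is why the return-on-investment question matters so much here.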
Do you have any ideas? Please leave a comment below.
I realise I missed the usual business case that blockchain is used for - a financial ledger. Obviously storing a CSP's financial data in a blockchain would work (and make sense) as it would in ANY other enterprise. I really wanted to illustrate the CSP specific use cases for blockchain.
This post is an update to my earlier post which is now sadly mostly incorrect because IBM's web site has been completely restructured and none of the links I provided previously are valid any more.
I know this isn't strictly related to my normal Industries, but it is applicable for anyone who wants to chat with IBMers, so I thought it was valuable enough to share. For a number of years now, my email signature has included a link for non-IBMers to contact me via Sametime. If you're an IBMer reading this, you might consider linking to this post in your email signature to allow your customers and partners to chat with you via Sametime.
Here is a step by step guide to setting it up so that you can chat with IBMers over Sametime/IBM Instant Messaging.
There are a few things you'll need for this to work:
An ibm.com id - these are free; sign up for an IBMid if you don't already have one
A Sametime/IBM Instant Messaging compatible client installed on your computer/device. Previously a web client was available, however that link is no longer working, so a 'fat client' install would seem to be the way to go. You can download the latest Sametime client from the Lotus Greenhouse site, which will also require a (free) ID to be created. This is a different ID to the IBMid mentioned above, but just as quick and easy to get. You can use other non-IBM clients such as Adium or Pidgin, but those clients will require some 'hacking' to allow them to connect to the IBM Instant Messaging Gateway - if you're keen, please check out this blog post from nomaen that details that configuration. Personally, the IBM client does the job really nicely and is available for Windows, Mac, and Linux (RPM and DEB), so I'd just go that route.
Once you have your client installed, you'll want to set up a server community for the IBM IM Gateway. The details you need are:
Host Server : extst.ibm.com
Server Community Port : 80
Connection : Direct connection using HTTP protocol
See these screen dumps for reference...
Once you login with your IBMid, you'll be presented with the ST client and no one in your buddylist. Sending instant messages to yourself isn't very interesting, and what you really want to do is chat with IBMers, so let's add an IBMer to your buddylist so that you can chat with them...
You will need to know their Internet email address as you have to manually type it in; you will not be able to search for them. Select the 'Add external person by email address' radio button, then type in their email address and name, and assign a group if you want to group your contacts. If you don't know their email address, you can search here to find it.
Once you click on 'add' a popup will appear telling you that the IBMer will need to approve you to be able to see their status and chat with them through the IM Gateway.
NB. In the buddylist - the au1.ibm.com is my internal Sametime community id (which is the same as my email address) and the optusnet.com.au email address is my ibm.com id.
Once you've added your IBM contacts, you're up and running and the interface should look something like this (below):
A chat session between my two IDs (my IBMid and my internal id) looks like this in both the standalone client (used for my external IBMid) and the embedded client in my IBM Notes client (on Linux)
and the internal view of the same conversation:
You might notice that all the rich text, file and image functions are greyed out - that's because they are not supported by the external IBM gateway, so you'll be restricted to plain text in your chats...
This capability is not well known among IBMers, but I have spoken with a number of partners, exIBMers and my wife via this facility in the past.
Hopefully, this post will spread the word a bit more....
The TeleManagement Forum (TMF) have defined a set of four frameworks collectively known as Frameworx. The key frameworks that will deliver business value to the CSP are the Information Framework (SID) and the Process Framework (eTOM). Both of these can deliver increased business agility - which will reduce time to market and lower IT costs. In particular, if a CSP is undertaking multiple major IT projects in the near term, TMF Frameworx alignment will ease the pain associated with those major projects.
Without a Services Oriented Architecture (SOA) - the situation at many CSPs currently - there is no common integration layer and no common way to perform the format transformations that let multiple systems communicate correctly. A typical illustration of this point-to-point integration might look like the illustration to the right:
Each of the orange ovals represents a transformation of information so that the two systems can understand each other - each of which must be developed and maintained independently. These transformations will typically be built with a range of different technologies and methods, thus increasing the IT costs of integrating and maintaining such transformations, not to mention maintaining competency within the IT organisation.
A basic SOA environment introduces the concept of an Enterprise Service Bus, which provides a common way to integrate systems together and a common way of building transformations between the information models used by multiple systems. The illustration below shows this basic Services Oriented Architecture - note that we still have the same number of transformations to build and maintain, but now they can be built using a common method, tools and skills.
If we now introduce a standard information model such as the SID from the TeleManagement Forum, we can reduce the number of transformations that need to be built and maintained to one per system, as shown in the illustration below. Ensuring that all the traffic across the ESB is SID aligned means that as the CSP changes systems (such as CRM or Billing), the effort required to integrate the new system into the environment is dramatically reduced. That will enable the introduction of new systems faster than could otherwise have been achieved. It will also reduce the ongoing IT maintenance costs.
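To put rough numbers on that reduction, here's a quick sketch counting one-way transformations. It assumes a full mesh in the point-to-point case, which is the worst case - real estates are usually sparser than this, but the trend is the same:

```python
def point_to_point(n: int) -> int:
    # Full mesh: every ordered pair of systems needs its own transformation.
    return n * (n - 1)

def canonical_model(n: int) -> int:
    # Shared model (e.g. SID on the ESB): one transformation per system.
    return n

for n in (5, 10, 20):
    print(f"{n} systems: point-to-point={point_to_point(n)}, "
          f"SID-aligned={canonical_model(n)}")
```

Point-to-point grows quadratically with the number of systems, while the canonical-model approach grows linearly - which is where the integration and maintenance savings come from.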
As I'm sure you're aware, most end to end business processes need to orchestrate multiple systems. If we take the next step and insulate those end to end business processes from the functions that are specific to the various end point systems using a standard Process Framework such as eTOM, then business processes can be independent of systems such as CRM, Billing, Provisioning etc. That means that if those systems change in the future (as many CSPs are looking to do), the end to end business processes will not need to change - in fact the process will not even be aware that the end system has changed.
When changing (say) the CRM system, you will need to remap the eTOM business services to the specific native services and rebuild a single integration and a single transformation to/from the standard data model (SID). This is a significant reduction in the effort required to introduce new systems into the CSP's environment. Additionally, if the CSP decides to take a phased approach to the migration of the CRM systems (as opposed to a big bang), the eTOM aligned business processes can dynamically select which of the two CRM systems should be used for a particular process instance.
What does that mean for the CSP?
Putting in place a robust integration and process orchestration environment that is aligned to TMF Frameworx should be the CSP's first priority; this will not only allow the integration and migration efforts of subsequent major projects to be minimised, it will also reduce the time to market for new processes and products that the CSP might offer into the market.
Telekom Slovenia is a perfect example of this. When the Slovenian government forced Mobitel (Slovenia) and Telekom Slovenia to merge, the alignment with the SID and eTOM within Mobitel allowed the merged organisation to meet the government's deadlines for the specific target KPIs:
Be able to provide subscribers with a joint bill
Enable CSR from both organisations to sell/service products from both organisations
Offer a quad-play product that combined offerings from both Telekom Slovenia and Mobitel
All within six months.
When a CSP is undertaking multiple concurrent major IT replacement projects, there are a number of recommendations that IBM would make based on past observations with other CSPs that have also undertaken significant and multiple system replacement projects:
Use TMF Frameworx to minimise integration work (this requires an integration and process orchestration environment, such as the one the ESB/SOA project is building, to be in place)
Use TMF eTOM to build system independent business processes so that as those major systems change, end to end business processes do not need to change and can dynamically select the legacy or new system during the migration phases of the system replacement projects.
To achieve 1 and 2, the CSP will first need to have in place SOA and BPM infrastructure that is capable of integrating with ALL of the systems within the CSP (not just (say) CRM or ERP)
If you have the luxury of time, don't try to run the projects simultaneously, rather run them linearly. If this cannot be achieved due to business constraints, limit the concurrent projects to as few systems as possible, and preferably to systems that don't have a lot of interaction with each other.
Operators hoping to engage in widespread deployment of voice over LTE in order to gain spectral efficiencies in their network may face some unhappy customers because one vendor's recent tests showed that VoLTE calls can slash a device's talk-time battery life by half.
For years now, we've known that higher speed mobile networks would mean more power required in handsets to maintain the higher bandwidth connections. I recall it being raised as a concern when UMTS (3G) was being rolled out while GPRS and EDGE were the dominant technologies in the mobile data networks. In fact, while I am travelling, I often switch off my 3G/3.5G network capability and drop back to GPRS and EDGE just to make my battery last through the day. It's interesting that it has been quantified like this.
When you think about it though, it makes sense. VoLTE (Voice over LTE) is not using a traditional GSM or CDMA circuit, rather it is using a packet data network to encapsulate the voice traffic - so it is voice over a data network. We've known for a long time that data traffic (particularly higher speed data traffic) uses a lot more power than voice traffic. More power equals less talk time from the same charge.
This study is a US based one, so it brings the baggage of CDMA rather than the GSM that the rest of the world uses, but I think there are lessons here for the GSM carriers around the world too. CDMA battery life (from my experience) has been on a par with GSM battery life, so I think it would be reasonable to equate the CDMA battery life in this study with GSM battery life.
I am seeing more and more countries around the world clawing back the 2G spectrum for use with Digital TV, LTE or other local requirements. At some point in the future (at least for some markets), the only voice traffic will be VoLTE, and those subscribers will have severely reduced standby and talk time compared to mobile phones of a few years back. Will that lead to a backlash in the community? By that point it may be too late, with the spectrum re-deployed for other uses. Will we end up with VoLTE being the only voice option in some countries while others still have CDMA or GSM voice networks - and will that complicate things for phone manufacturers? Remember the days of so-called 'Global Phones' that had to be made to cater to all the different spectrums used around the world? Yes, multi-band phones became pervasive, but will Global Phones that retain backward compatibility with GSM networks be so popular when the primary channel for mobile phone distribution is still the carriers themselves - carriers that have committed to VoLTE in their own country?
Who knows. I do think that we'll end up with a big group of primarily voice subscribers who aren't going to be happy campers!
Last week, I was at the TeleManagement Forum's (TMF) Africa Summit event in Johannesburg, South Africa. The main reason for me attending was to finish off my TMF certifications in the process framework (eTOM) - I am currently Level 3, and if I have passed the exam, I will be Level 4 certified. It was a really tough exam (75% pass mark), so I don't know if I did enough to get over the line.
Regardless, the event was well attended with 200-230 attendees for the two days of the conference. It was interesting to hear the presenter's thoughts on telco usage within Africa into the future. Many seemed to think that video would drive future traffic for telcos. I am not so sure.
In other markets around the world, video was also projected to drive 3G network adoption, yet this has not happened anywhere. Why do all these people think that Africa will be different?
I see similar usage patterns in parts of Asia, yet video has not taken off there. Skype carries many more voice-only calls than video calls. Apple's Facetime video chat hasn't taken off like Apple predicted. 3G video calls make up a tiny proportion of all calls made.
Personally, I think that voice (despite its declining popularity, relatively speaking, in the developed world) will remain the key application in Africa for the foreseeable future, especially voice over LTE. I also think that social networking (be it Facebook, Friendster, MySpace or some other African-specific tool) will drive consumer data (LTE) traffic. Humans are social animals, and I think these sorts of social interactions will apply just as much in the African scenario as they have in others.
The other day, I was at a customer proof of concept, where the customer asked for 99.9999% availability within the Proof of Concept environment. Let me explain briefly the environment for the Proof of Concept - we were allocated ONE HP Proliant server, with twelve cores and needed to run the following:
IBM BPM Advanced (BPM Adv)
WebSphere Operational Decision Management (WODM)
WebSphere Services Registry & Repository(WSRR)
Oracle DB (not sure what version the customer installed).
Obviously we needed to use VMWare to deploy the software since installing all of the software on the server (and being able to demonstrate any level of redundancy) would be impossible.
Any of you that understand High Availability as I do would say it can't be done in a Proof of Concept - and I agree, yet our competitor claims they have demonstrated six nines (99.9999% availability) in this Proof of Concept environment - it was deployed on the customer's hardware; hardware that did not have any redundancy at all. I call shenanigans on the competitor claims. Unfortunately for us, the customer swallowed the claim hook line and sinker.
I want to explain why their claim of six nines cannot be substantiated and why the customer should be sceptical as soon as a vendor - any vendor - makes such claims. First, let's think about what 99.9999% availability really means. To quantify that figure, it means 31.5 seconds of unplanned downtime per year! For a start, how could you possibly measure availability for a year over a two-week period? Our POC server VMs didn't crash for the entire time we had them running - does that entitle us to claim 100% availability? No way.
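For reference, here's the quick arithmetic behind that 31.5 seconds figure, and the downtime budget for each level of availability:

```python
# Maximum unplanned downtime per year allowed by each availability level.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def max_downtime_seconds(nines: int) -> float:
    """Downtime budget per year for an availability of (1 - 10**-nines)."""
    return 10 ** -nines * SECONDS_PER_YEAR

for nines in (3, 4, 5, 6):
    print(f"{nines} nines: {max_downtime_seconds(nines):,.1f} s/year")
```

Six nines allows roughly 31.6 seconds per year; five nines roughly 5.3 minutes. Claiming to have measured either over a two-week PoC is, to put it politely, optimistic.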
The simple fact is that the Proof of Concept was deployed in a virtualised environment on a single physical machine - without redundant Hard Drives or power supplies - there is no way we or our competition could possibly claim any level of availability given the unknowns of the environment.
In order to achieve high levels of availability, there can be no single point of failure. That means no failure points in the Network, the Hardware or the Software. For example, that means:
Multiple redundant Network Interface Connectors
RAID 1+0 drive array,
Multiple redundant power supplies,
Multiple redundant network switches,
Multiple redundant network backbones
Minimise unused OS services
Use Software clustering capabilities (WebSphere n+x clustering *)
Active automated management of the software and OS
Database replication / clustering (eg Oracle RAC or DB2 HADP)
HA on network software elements (eg DNS servers etc)
We need to go back to the Telco and impress upon them that six nines availability depends on all of the above factors (and probably some others!) and not just about measuring the availability of the software over a short (and non-representative) sample period.
Typically this level of HA is very expensive, indeed every additional '9' increases the cost exponentially - that is: six nines (99.9999% availability) is exponentially more expensive than five nines(99.999% availability). I found this great diagram that illustrates the cost versus HA level.
This diagram is actually from an IBM Redbook (see http://www.redbooks.ibm.com/redbooks/pdfs/sg247700.pdf ) which has a terrific section on High Availability - it illustrates how there is a compromise point between the level of high availability (aiming for continuous availability) and the cost of the infrastructure to provide that level of availability.
n is number of servers needed to handle load requirements
x is the number of redundant nodes in the cluster - to achieve six 9's, this should be in excess of 2
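As an illustration of why x matters, here's a sketch that estimates the availability of an n+x cluster from the availability of a single node. It assumes node failures are independent, which real shared infrastructure (SANs, switches, hypervisors) routinely violates, so treat the numbers as an upper bound:

```python
from math import comb

def cluster_availability(n: int, x: int, node_avail: float) -> float:
    """P(at least n of n+x independent nodes are up) - binomial tail sum."""
    total = n + x
    return sum(comb(total, k) * node_avail ** k * (1 - node_avail) ** (total - k)
               for k in range(n, total + 1))

# e.g. 4 nodes carry the load, 2 spares, each node itself 99.9% available
print(f"{cluster_availability(4, 2, 0.999):.9f}")
```

Even with each node at only three nines, two spares push the software layer past six nines on paper - which is exactly why the hardware, network and database layers in the list above end up being the limiting factors.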
Further to my last post, it now looks like the WAC is completely dead and buried.
One thing that is creating a lot of chatter at the moment, though, is TelcoML (Telco Markup Language) - there is a lot of discussion about it on the TeleManagement Forum (TMF) community site, and while I don't intend to get into a big discussion about TelcoML, I do want to talk about Telco standards in general.
The Telco standards that seem to take hold are the ones with strong engineering background - I am thinking of networking standards like SS7, INAP, CAMEL, SigTRAN etc, but the Telco standards focussed on the IT domain (like Parlay, ParlayX, OneAPI, ParlayREST and perhaps TelcoML) seem to struggle to get real penetration - sure standards are good - they make it easier and cheaper for Telcos to integrate and introduce new software; they make it easier for ISVs to build software that can be deployed at any telco. So, why don't they stick?
Why do we see a progression of standards that are well designed and have the collaboration of a core set of telcos around the world (I'm thinking of the WAC here), yet nothing comes of them? If we look at Parlay, for example, sure, CORBA is hard, so I get why it didn't take off. But ParlayX with web services is easy - pretty much every IDE in the world can build a SOAP request from the WSDL for that web service - so why didn't it take off? I've spoken to telcos all around the world about ParlayX, but it's rare to find one that is truly committed to the standard. Sure, the RFPs say 'must have ParlayX', but after they implement the software (Telecom Web Services Server in IBM's case), they either continue to offer their previous in-house developed interfaces for those network services and don't use ParlayX, or they just don't follow through with their plans to expose the services externally. Why did we bother?

ParlayX stagnated for many years with little real adoption from Telcos. Along comes GSMA with OneAPI and the mantra 'ParlayX web services are still too complicated; let's simplify them and also provide a REST based interface'. No new services, just the same ones as ParlayX, but simplified. Yes, I responded to a lot of Requests For Proposal (RFP) asking for OneAPI support, but I have not seen one telco that has actually exposed those OneAPI interfaces to 3rd party developers as they originally intended. So now OneAPI doesn't really exist any more and we have ParlayREST as a replacement. Will that get any more take up? I don't think so.
The TMF Frameworx seem to have more adoption, but they are the exception to the rule.
I am not really sure why Telco standards efforts have such a tough time of it, but I suspect that it comes down to:
Lack of long term thinking within telcos - there are often too many tactical requirements to be fulfilled and the long term strategy never gets going (this is like governments with four-year terms not being able to get 20-year projects over the line - they're too worried about patching up the day-to-day things and getting re-elected)
Senior executives in Telcos that truly don't appreciate the benefits of standardisation - I am not sure if this is because executives come from a non-technical background or some other reason.
What to do? I guess I will keep preaching about standards - it is fundamental to IBM's strategy and operations, after all - and keep up with the new ones as they come along. Let's hope that Telcos start to understand why they should be using standards as much as possible; after all, they will make their life easier and their operations cheaper.
"Apigee, the API management company that was most recently spotted powering that new “print to Walgreens” feature in half a dozen or so mobile applications, is now acquiring the technology assets of WAC, aka the Wholesale Applications Community. WAC, an alliance of global telecom companies, like AT&T, Verizon, Sprint, Deutsche Telecom, China Mobile, Orange, and others (and pegged by TechCrunch writer Jason Kincaid back in 2010 as “a disaster in the making“) was intent on building a platform that would allow mobile developers to build an application once, then run it on any carrier, OS or device. The group also developed network API technology, which is another key piece to today’s acquisition."
I think this is a really interesting development. The Wholesale Application Community (WAC) was supposed to give Telcos a way of minimizing the revenue losses to the likes of Apple's App Store and Google Play. IBM's Telecom Solution Lab in France built a demonstration, shown at Mobile World Congress (MWC) in 2011, demonstrating how a Telco's own app store could incorporate applications from the WAC App Store as well as other app stores within their own combined app store. I've demonstrated this a number of times around the world, and the thing that always seemed odd to me is that applications in the WAC App Store could not be native applications (for Android, Blackberry, WinMob or Symbian) but rather could ONLY be HTML5 based apps. That was always going to limit the number of apps that would be in the WAC App Store, and indeed, since the WAC was announced at MWC 2010, the number of apps in the store has never really taken off.
I'm not sure if this is effectively the end of the road for the WAC, or if it's just a stop on their journey. Certainly, the Telcos that I have dealt with that form the core WAC Telco members remain dedicated to the WAC. I guess we'll have to wait and see what happens.
This makes for an interesting comparison to the National Emergency Warning System (NEWS) that was implemented in Australia last year as a result of the Black Saturday bushfires. Here is the URL for this bookmark: gizmodo.com/5857897/this-is-not-a-test-the-emergency-alert-system-is-worthless-without-social-networks
Of particular interest is that the USA has avoided the SMS channel, when in Australia that has been the primary channel - alternates like TV and Radio are seen as less pervasive and thus a lower priority. I don't think that NEWS here in Oz is connected to twitter, facebook, foursquare or any other social networking site either, but that could be an extension to NEWS - the problem is getting everyone to "friend" the NEWS system so that they see updates and warnings!
While I can understand HP getting out of the PC business - it's a very competitive marketplace with low margins; after all, that is why IBM sold its PC division to Lenovo - what surprises me is the timing. Only 18 months after buying Palm for US$1.2 Billion, they're cutting their losses and shedding it.
Since I don't live in the US, I can't comment on the marketing push that HP put behind the Pre and the TouchPad, but I've never seen any marketing for them. When your competitor is Apple, the only way to make any dent is to push and push hard. They needed to out-market Apple, and I'm sure I don't need to tell you how difficult and expensive that would be!
Yesterday, IBM launched the latest iteration of the Service Provider Delivery Environment (SPDE), a software framework for Telecom that has been around since 2000. Over the years, it has evolved with changes in market requirements and architecture maturity. The link below is for the launch:
The following enhancements are part of the new SPDE 4.0 Framework:
1. CSP Business Function Domains – a clear articulation of “communications service provider business domains” that describe the business functions that are common to any service provider across the world. These business domains offer us a simpler way to introduce the SPDE capabilities to a LOB audience, as well as to other client and partner constituents that are new to SPDE:
Sales & Marketing
2. New Capabilities - In the areas of cloud, B2B commerce, enterprise marketing management, business analytics, and service delivery.
3. Introduction of the SPDE Enabled Business Projects - that deliver solutions to address common business and IT needs for the LOB (CIO/CTO/CMO) and represent repeatable solutions and patterns harvested from client engagements.
4. Improved alignment with TeleManagement Forum (TMF) Industry Standards - a clearly defined depiction of the areas of alignment to TMF Frameworx, key industry standards that underpin much of the communications industry investment.
5. Simplified Graphics and Messaging - to improve ease of adoption and consumability by a broader LOB audience.
Built on best practices and patterns from client engagements with CSPs around the world, IBM SPDE 4.0 is the blueprint that enables Smarter Communications by helping deliver value-added services that launch smarter services, drive smarter operations and build smarter networks. IBM is leading a conversation in the marketplace about how our world is becoming smarter, and software is at the very heart of this change. IBM's Industry Frameworks play a critical role in our ability to deliver smarter planet solutions by pulling together deep industry expertise, technology and a dynamic infrastructure from across the company to provide clients with offerings targeted to their industry-specific needs.
Disclaimer: I have 'borrowed' some of the text from an IBM Marketing email about the new SPDE 4.0 framework - so not my words...
I am in Dublin at the moment for TeleManagement World 2011, which has changed locations from Nice, France last year. It looks to be a very interesting conference. I've already done two days of training and now we're beginning the sessions. The keynote session has the Irish Minister for Communications, Mr Rabitte, who is talking about the challenges that CSPs face all around the world. He is also talking about an innovation programme that the Irish Government have started called 'Examplar', which is part of their NGN Trial network. I'll see if I can get some more info over the next few days... Steven Shurrock, the new CEO at O2 Ireland, who has been in the role for just six months, is very bullish about the opportunities in Ireland for data services. After Steven, we saw a host of keynote speakers focused on a number of themes; the common ones included:
Standards compliance - including certification against standards. Particularly with the TMF Frameworx standards
Horizontal platforms and moving away from silos is their IT strategy
SOA is the basis for all of the new IT initiatives
I have recorded a number of the keynote speakers on video, but for the time being those files are very large. Once I have had a chance to transcode them to a smaller size, I'll add them to the blog as well - while not particularly technical, they're very interesting from a Telecom perspective.
OK, I know over the past six months or so, my blog has sat idle. For that I apologise. I could blame workload, personal issues, the amount of travel etc etc, but I am just going to cop it on the chin and say that I am sorry to anybody out there that can be bothered to read my posts. In light of the fresh start, I am going to change the name of the blog from Telco Talk to ...
Well, that's the thing, I haven't decided yet what I should change it to. The content isn't going to change - it will continue to be Telco focused, so I don't want to start a new blog from scratch. I will just rename this one. I just need some inspiration for the new name. Within IBM, our global marketing folks have decreed that we should no longer use the term "Telco" and that instead we should use "Communications Service Provider" or CSP for short. As a result, I was thinking about changing the blog name to "CSP Comms" or "CSP Communiqué". Before I change it, I would like your opinion (if there is anyone out there) or suggestion of a new name.
I'll be watching my blog comments with bated breath, so please comment and suggest names.
The threatened ban was narrowly averted in the UAE, and it looks as if India will avoid a ban after all. I wonder if RIM installed (or promised to install) a Network Operations Centre in the UAE (which is what I saw as a possible way of appeasing the authorities) or if they have come up with some other way to give the UAE authorities access to the encrypted traffic.
In the meantime, India has hinted (per my previous post) that they will be going after private VPN traffic in addition to the Blackberry traffic. We'll see where that ends up soon I guess.
I know I have been lax in posting recently. I've had a lot of work on and I am sorry for not getting to the blog.
That said, over the past few weeks, I have been watching what seems to be a snowballing issue of governments spying on their citizens in the name of protection from terrorism. First cab off the rank was India a couple of years ago, asking Research In Motion (RIM) for access to the data stream for Indian Blackberry users, then asking for the encryption keys. That went quiet until recently (1Jul10), when the Indian Government again asked RIM for access to the Blackberry traffic and gave RIM 15 days to comply (see this post: Indian govt gives RIM, Skype 15 days notice, warns Google - Telecompaper). That deadline has passed, and the Indian government yesterday gave RIM a new deadline of 31Aug10 (see Indian govt gives 31 August deadline for BlackBerry solution - Telecompaper). In parallel, a number of other nations have asked their CSPs or RIM for access to the data sent via Blackberry devices.
First was the United Arab Emirates (UAE), who will put a ban on Blackberry devices in place which will force the local Communications Service Providers (CSPs) to halt the service from 11Oct10. RIM are meeting with the UAE government, but who knows where that will lead, with the Canadian government stepping in to defend its golden-haired child, RIM. Following the UAE ban, Saudi Arabia, Lebanon and more recently Indonesia have all said they will also consider a ban on RIM devices. As an interesting aside, I read an article a week ago (see UAE cellular carrier rolls out spyware as a 3G "update") that suggested that the UAE government sent all Etisalat Blackberry subscribers an email advising them to update their devices with a 'special update' - it turns out that the update was a Trojan which in fact delivered a spyware application to the Blackberry devices to allow the government to monitor all the traffic! (wow!)
Much of the hubbub seems to be around the use of Blackberry Messenger, an instant messaging function similar to Lotus Sametime Mobile, but hosted by RIM themselves, which allows all Blackberry users (even on different networks and telcos) to chat to each other via their devices.
I guess at this stage, it might be helpful to describe how RIM's service works. From a historical point of view, RIM were a pager company. Pagers need a Network Operations Centre (NOC) to act as a single point from which to send all the messages out to the pagers. That's where all the RIM contact centre staff sat: they answered phones, typed messages into their internal systems and sent the messages out to the subscribers. RIM had the brilliant idea to make their pagers two-way so that the person being paged could respond, initially with just an acknowledgement that they had read the message, and then later with full text messages. That's the point at which the pagers gained QWERTY keyboards. From there, RIM made the leap in functionality to support emails as well as pager messages - after all, they had a full keyboard, a well established NOC-based delivery system and a return path via the NOC for messages sent from the device. The only thing that remained was a link into an enterprise email system. That's where the Blackberry Enterprise Server (BES) comes in. The BES sits inside the enterprise network, connects to the Lotus Domino or MS Exchange servers and acts as a connection to the NOC in Canada (the home of RIM and the location of the RIM NOC). The connection from the device to the NOC is encrypted, and from the NOC to the BES is encrypted. Because of that encryption, there is no way for a government such as India, the UAE, Indonesia, Saudi Arabia or others to intercept the traffic over either of the links (to or from the NOC).
Last time I spoke to someone at RIM about this topology, they told me that RIM did not support putting the BES in the DMZ (where I would have put it) - since then, this situation may have changed.
Blackberry Messenger traffic doesn't go to the BES; instead it goes from the device up to the NOC and then back down to the second Blackberry, which means that non-enterprise subscribers also have access to the messenger service - and this appears to be the crux of what the various governments are concerned about. Anybody, including a terrorist, could buy a Blackberry phone and have access to the encrypted Blackberry Messenger service without needing to connect their device to a BES. That explains why the governments don't seem to be chasing after the other VPN vendors (including IBM with Lotus Mobile Connect) to get access to the encrypted traffic between the device and the enterprise VPN server. Importantly, other VPN vendors typically don't have a NOC in the mix (apart from the USA-based Good, who have a very similar model to RIM). I guess the governments don't see the threat from the enterprise customers, but rather from the individuals who buy Blackberry devices.
To illustrate how a VPN like Lotus Mobile Connect differs from the Blackberry topology above, have a look at the diagram below:
Lotus Mobile Connect topology
If we extend that thought a little more, a terrorist cell could set themselves up as a pseudo-enterprise by deploying a traditional VPN solution in conjunction with an enterprise-type instant messaging server and therefore avoid the ban on Blackberries. The VPN server and IM server could even be located in another country, which would avoid the possibility of the government easily getting a court order to intercept traffic within the enterprise environment (on the other end of the VPN). It will be interesting to see if those governments try to extend the reach of their prying to this type of IM strategy...
When I last posted about New Zealand's National Broadband project, it seemed to me to be much more focused on the subscribers and the products they would have available to them (and the retailers that sold them) than on the high speed backbone network. My impressions may have been tainted by the Telecom New Zealand Undertaking In Progress (UIP) project that I was involved with - the rather public forced split of Telecom New Zealand's Retail, Wholesale and Network departments to ensure equivalency of input for all retail and wholesale partners for (only) broadband services.
My understanding of the situation has developed somewhat since then, and we can see that the situation in New Zealand involves a similar structure to what is happening in Australia with the Communications Alliance and the NBN Company. In New Zealand, the companies are a little different. Certainly, we have the NZ Government Ministry of Economic Development (MED) as one participant, then we have Crown Fibre Holdings (not much of a web site there!) - set up by the Government to manage the process of selecting the companies to build the National Broadband Network and to manage the government's investment in the NBN. Together with the companies that are bidding for the deal, Crown Fibre Holdings will form Local Fibre Companies (LFCs) which (combined) will match the government's contribution to the NBN. That will mean the total project will cost NZ$3 Billion** with the LFCs kicking in NZ$1.5B and the NZ government contributing NZ$1.5B. I don't have the full schedule, but from a couple of sources I have compiled an overview of the progress to date:
21 October 2009 - Communications and Information Technology Minister Steven Joyce announced the government's process for selecting private sector co-investment partners.
13 November 2009 - Intention to respond due.
9 December 2009 - The Ministry and Crown Fibre Holdings released clarifications and amendments.
14 January 2010 - The Ministry and Crown Fibre Holdings released additional clarifications and amendments with respect to the Invitation to Participate.
29 January 2010 - Proposals must be lodged
4 February 2010 - Crown Fibre Holdings notify respondents of handover of responsibility for the partner selection process
October 2010 - Successful respondents announced/notified.
What I find a bit interesting is that the government are only looking to cover 75% of the population by 2019. For a small country (compared to Australia at least), that seems to me a very low target to aim for. If we compare that with Australia's NBN project, their target is 90% coverage at greater than 100Mbps and 10% at greater than 12Mbps (that's 100% coverage!) by 2017. Admittedly, the Australian project has about a year's head start, but it's also a MUCH bigger country with a population nearly five times larger. Let's have a quick look at the comparisons:
[Comparison table: cost per person (US$/person) and cost per area (US$/km2), Australia vs New Zealand]
* 100% coverage is split between greater than 100Mbps (90%) and greater than 12Mbps (10%)
** One Billion uses the short scale definition = 10^9 = 1,000,000,000
What do I take from this quick comparison? Let's take a quick look at the numbers. Obviously, Australia is a much bigger country (28.4 times larger) and has a much larger population (5.2 times larger), so it is reasonable (in my opinion) that the cost per potential NBN customer should be higher for Australia (and it is, at 2.2 times higher). The thing that makes me ponder is the cost per square kilometre: New Zealand's is nearly twice that of Australia's. Given that the New Zealand target is only 70% of the population, which enables them to avoid areas that are physically difficult to provide coverage to (I'm no NZ geologist, but I would imagine many of the South Island's most mountainous areas would pose significant problems for cablers), I find myself wondering why the NZ network is going to be so expensive. I guess it could be a matter of scale - but I thought the biggest cost was actually laying the cables rather than the back end systems which every broadband network will need (routers, switches, administration and management systems). Maybe I am missing something - does anyone have any ideas?
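The per-person and per-square-kilometre figures are easy to reproduce as a back-of-envelope calculation. The numbers below are my own rough assumptions (circa 2010 project budgets, populations and land areas), not official figures, so treat this as an illustration of the comparison rather than the definitive sums:

```python
# Back-of-envelope comparison of the two national broadband projects.
# All figures are rough assumptions (circa 2010), not official numbers:
# cost in local-currency dollars, population in people, area in square km.
projects = {
    "Australia":   {"cost": 43e9, "population": 22.3e6, "area_km2": 7.69e6},
    "New Zealand": {"cost": 3e9,  "population": 4.37e6, "area_km2": 268e3},
}

for name, p in projects.items():
    per_person = p["cost"] / p["population"]  # cost per potential customer
    per_km2 = p["cost"] / p["area_km2"]       # cost per square kilometre
    print(f"{name}: ~${per_person:,.0f}/person, ~${per_km2:,.0f}/km2")
```

With these assumed figures, Australia comes out well ahead per person while New Zealand comes out roughly twice as expensive per square kilometre, which matches the pattern discussed above.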
edit: I've just found this quote in Wikipedia which (I think) is truly revealing when you consider New Zealand's 70% coverage target:
"New Zealand is a predominantly urban country, with 72% of the population living in 16 main urban areas and 53% living in the four largest cities."
By only extending the NBN to those 16 main urban areas and nowhere else, they've achieved their target! You wouldn't want to live in country New Zealand and be dependent on a fast network!
I was looking this morning at where some of the traffic for this blog comes from. Someone had used Google to search for "ibm sdp cloud", which I am glad to say yielded this blog as the third and fourth results. Above Telco Talk in the results was a post from 2005 from fellow MyDeveloperworks blogger Bobby Woolf, What is in RAD 6.0 - which is interesting in that the post wasn't about Service Delivery Platforms and the term "SDP" is only mentioned in the comments on the post, yet it rated higher in Google's index than my posts, which have been about cloud, SDPs or both! That's another conversation though...
The thing that really caught my attention was a new whitepaper from IBM on Smarter Homes. This has been an ongoing area of interest for me for a few years now. The new whitepaper, "The IBM vision of a smarter home enabled by cloud technology", is interesting - it talks about some of the concepts that I have seen coming over the past few years, but it also introduces the concept of cloud-based services providers as the key enabler outside the home to allow smarter homes to deliver on their lofty promises. In the introduction, the whitepaper states:
A common services delivery platform based on industry standards supports cooperative interconnection and creation of new services. Implementation inside the cloud delivers quick development of services at lower cost, with shorter time to market, facilitating rapid experimentation and improvement. The emergence of cloud computing, Web services and service-oriented architecture (SOA), together with new standards, is the key that will open up the field for the new smarter home services.
The dependence (from our homes) on external networks and external Communications Service Providers presents an opportunity for CSPs to provide much more than just the pipe to the house. This is an area that some Telcos are trying to tap into already. Here in Australia, Telstra have recently introduced a home-based smart device called the T-Hub, which is intended to arrest some of the decline in homes installing or keeping land line phones (in Australia, more and more homes are buying a naked DSL or Hybrid Fibre Coax (HFC) service for Internet, using mobile phones for voice calls and not having a home phone service at all). I recently cancelled my Telstra home phone service, so I cannot buy one of the T-Hubs, and apparently it won't work with my home phone service via my HFC connection. It is an intriguing idea though. I find myself wondering if Telstra's toe in the Smarter Home pond is too little, too late. For years, Telstra's Innovation Centres (one in Melbourne and one in Sydney) had standing demonstrations of smarter home technology (I think the previous Telstra CEO, Sol Trujillo, closed them down). I even helped to install a Smarter Healthcare demo at the Sydney Telstra Innovation Centre a few years ago (more on that later), and their demos were every bit as good as the demos that IBM has at the Austin (Texas, USA) and La Gaude (France) Telecom Solutions Labs.
Further into the whitepaper, when talking about cloud-based Service Delivery Platforms (p. 10), there is a nice summary of why a Telco would consider a cloud deployment of their SDP:
An SDP in the cloud supports the expansion of the services scope by enabling new services in existing markets and by expanding existing services into new markets with minimum risk. By exposing standard service interfaces in the network, it enables third parties to integrate their services quickly, or to build new services based on the service components provided in the SDP. This creates the opportunity for new business models, for instance, for media distribution and advertising throughout multiple delivery scenarios.
I think this illustrates what all Telcos should be thinking about - the agility needed to compete in today's marketplace. Cloud is one way to enhance that agility but also adds elasticity - the ability to grow and shrink as the market demands grow and shrink. Sorry for rambling a bit there... some semi-random thoughts kept popping up when talking about Smarter homes and Telcos. Anyway, I would encourage you to have a read of the whitepaper for yourself. It's available at:
In just five months, Bharti Airtel's App Store has had over 13 million downloads. What a terrific example of a Telco app store in action and (presumably) making money for the Telco. This article came across my screen this afternoon, and it ties in with my previous posts about Bharti's App Store and about carriers all over Asia wanting app stores of their own to try to arrest some of the revenue bleeding to Apple (and, to a lesser extent, Google, Nokia and RIM) through single-brand (phone) app stores.
The article is really brief, barely a footnote, but it does lay out some interesting facts:
13 Million downloads since Feb '10
Over 71,00 Applications available, up from 1250 at launch
Support for 780 different devices
1.2 downloads per second
I guess having over 200 million subscribers does help achieve these sorts of numbers. I have a bit of background on Airtel's App Central store and the technology it uses, much of it IBM technology. IBM Portal and Mobile Portal Accelerator are used to drive the interface, which is able to support over 8,000 different devices, from iPhones to WebTVs (remember them? They seem to be making a bit of a comeback at the moment) and everything in between. These screen dumps are from their old mobile site - I will post some new ones if I can get them soon.
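As a quick sanity check on the headline rate: 13 million downloads since February 2010 works out at roughly one download per second averaged over the whole period (the ~150-day window is my own approximation of "since Feb '10"), so the quoted 1.2 per second presumably reflects the current rate rather than the five-month average:

```python
# Rough check of Airtel's quoted download rate.
# The ~150-day window (Feb '10 to the article date) is my own assumption.
downloads = 13_000_000
seconds = 150 * 24 * 60 * 60  # ~150 days in seconds
rate = downloads / seconds
print(f"~{rate:.2f} downloads per second on average")
```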
Since I penned my last post, I have done some more reading on Facetime and watched Steve Jobs' launch of Facetime. While I will happily admit that Apple have in fact used some standards within their Facetime technology (Jobs lists H.264, AAC, SIP, STUN, TURN, ICE, RTP and SRTP as all being used), I am somewhat bemused by the "standards" discussion that most of the media seem to be focusing on with regard to Facetime. Almost everyone that refers to compliance with standards is talking about interoperability with current PC-based video chat capabilities - from the likes of Skype, MS Messenger, GTalk and others. Am I the only one that has noticed the iPhone 4 is not a PC and is in fact a mobile phone? Why is no one else questioning interoperability with existing video-chat-capable mobile phones?
After thinking on this for a little while, I guess it might be that most of the media coverage about the iPhone 4 is coming from the USA - where it was launched. It's only natural. The problem is that the US telecoms market is not representative of the rest of the world - which has had video calling for ages and doesn't really use it. Perhaps it was the overflowing Apple Kool-Aid fountain at the iPhone 4 launch that got the audience clapping when Jobs placed a video call, or perhaps it was just that they had never seen a video call before - I wasn't there, so I can't be sure. Right now, the Facetime capability on the iPhone 4 is only for WiFi connections - which makes it pretty limiting. Apparently there is no setup required and no buddy list; you just use the phone number to make a video call - which is the way video calling already works (see the screen dump of my phone to the right and the short video below), but the WiFi limitation on the iPhone 4 will mean that you have to guess when the recipient is WiFi connected. At least with the standard 3GPP video call, the networks are ubiquitous enough to pretty much guarantee that if the recipient is connected to a network, they can receive a video call or at least a phone call. Jobs didn't explain what would happen if the recipient was not WiFi connected - does it just make a voice call instead? I hope so.
If you look at the pixelation and general poor quality of the video call, consider that I am in a UMTS coverage area, not HSPA (the phone would indicate 3.5G if I were), so this is what was available more than seven years ago in Australia, and longer ago in other countries. If I were in an HSDPA coverage area, I would expect the video call to be higher quality due to the increased bandwidth available.
I recall that in 2003, Hutchison 3 launched their 3G network in Australia with much fanfare. Video calling was a key part of the 3G launch in Australia for all of the telcos. This article from the 14Apr03 Sydney Morning Herald (one day before the first official 3G network in Australia) illustrates what I am talking about. The authors say that the network's "...main feature is that it makes video calling possible via mobile phone." Think about it for a second. That's from more than seven years ago, and Australia was far from the first country to get a 3G network. A lifetime in today's technology evolution. Still the crowds clapped and cheered as Jobs made a video call. If I had been in the audience, I think I would have yawned at that point.
The other interesting thing that I noticed in Jobs' speech was his swipe at the Telcos. He implied that they needed to get their networks in order to support video calls. Evidence from the rest of the world would suggest that is not the case - perhaps it is in the USA, or perhaps he is trying to deflect blame for not allowing Facetime over 3G connections away from Apple and back to the likes of AT&T, who have copped a lot of flak over their alleged influence on Apple's App Store policies involving applications that could be seen to compete with services from AT&T. I am not sure how much stick AT&T deserve on that front, but it's pretty obvious from Jobs' comment that he is not in love with carriers - and certainly from what I've seen, carriers are not in love with Apple. It will be interesting to see how long the relationship lasts. My guess is that as long as Apple devices continue to be popular, both parties will be forced to share the same bed.
On another related point, I have been searching the Internet to find what standards body Apple submitted Facetime to for certification - Jobs says in the launch that it will be done "tomorrow" - this could be marketing speak for 'in the future' or it could literally mean the day after he launched the iPhone 4. If anyone knows please let me know - I want to have a look into the way Facetime works.
Thanks very much to my colleague Geoff Nicholls for taking the Video Call in the video above.
I came across this article today - Apple wanting to propose their new Facetime technology for video chat now that they finally have a camera on the front of their iPhone 4. I'm now on my second phone with a front-facing camera (that's at least four years that my phones have had video chat capabilities), and video chat has not proved to be much more than a curiosity where Telcos have launched it around the world. I recall the first 3G network launch in Australia - for Hutchison's '3' network - video chat was seen as the next big thing, the killer application, yet apart from featuring in some reality shows on TV, very few people used it. I wonder why Steve Jobs thinks this will be any different. At least the video chat capabilities already in the market comply with a standard, which means that on my Nokia phone, I can have a video call with someone on a (say) Motorola phone. With Apple's Facetime, it's only iPhone 4 to iPhone 4 (which does not support a 4G network like LTE or WiMax, I hasten to add). If Apple really is worried about standards, as the Computerworld article suggests, then I have to ask why Apple doesn't make their software comply with existing 3GPP video call standards instead of 'inventing their own'. If Apple were truly concerned about interoperability, that would have been a more sensible path.
According to Wikipedia, in Q2 2007 there were "...over 131 million UMTS users (and hence potential videophone users), on 134 networks in 59 countries." Today, in 2010, I would feel very confident in doubling those figures given the rate at which UMTS networks (and latterly HSPA networks) have been deployed throughout the world. Of note is that the Chinese 3G standard (TD-SCDMA) also supports the same video call protocol. That protocol, 3G-324M (see this article from commdesign.com for a great explanation of the protocol and its history - from way back in 2003!), has been around for a while, and yes, it was developed because the original UMTS networks couldn't support IPv6 or the low latency connectivity needed to provide a good quality video call over a purely IP infrastructure. But things have changed, with LTE gathering steam all around the world (110 telcos across 48 countries according to 3GPP) and mobile WiMax being deployed in the USA by Sprint and at a few other locations around the world (see the WiMax Forum's April 2010 report - note that the majority of these WiMax deployments are not for mobile WiMax, and as far as I know, Sprint are the first to be actively deploying WiMax-enabled mobile phones as opposed to mobile broadband USB modems). So perhaps it is time to revisit those video calling standards and update them with something that can take advantage of these faster networks. I think that would be a valid thing to do right now. If it were up to me, I would be looking at SIP-based solutions and learning from the success that companies like Skype have had with their video calling (albeit only on PCs and with proprietary technology) - wouldn't it be great if you could video call anyone from any device?
I guess the thing that annoys me most about Apple's arrogance is the way they ignore the prior work in the field. Wouldn't it be better to make Facetime compatible with the hundreds of millions of handsets already deployed rather than introduce yet another incompatible technology and proclaim it as "...going to be a standard"?
Yes, I should have posted this a week ago during the TeleManagement World conference - I've been busy since then and the wireless network at the conference was not available in most of the session rooms - at least that is my excuse.
At Impact 2010 in Las Vegas we heard from IBM Business Partner GBM about the ICE project. At TMW 2010, it was ICE themselves presenting on their journey down the TeleManagement Forum Frameworx path. Ricardo Mata, Sub-Director of the VertICE (OSS) Project at ICE (see his picture to the right), presented on ICE's projects to move Costa Rica's legacy carrier to a position that will allow them to remain competitive when the government opens up the market to international competitors such as Telefonica, who are champing at the bit to get in there. ICE used IBM's middleware to integrate components from a range of vendors and align them to the TeleManagement Forum's Frameworx (the new name for eTOM, TAM and SID). In terms of what ICE wanted to achieve with this project (they call it PESSO), this diagram shows it really well.
I wish I could share with you the entire slide pack, but I think I might incur the wrath of the TeleManagement Forum if I were to do that. If you want to see these great presentations from Telcos from all around the world, you will just have to stump up the cash and get yourself to Nice next year. Finally, I want to illustrate the integration architecture that ICE used - this diagram is similar to the one from Impact, but I think it importantly shows ICE's view of the architecture rather than IBM's or GBM's.
For the benefit of those that don't understand some of the acronyms in the architecture diagram above, let me explain them a bit:
ESB - Enterprise Services Bus
TOCP - Telecom Operations Content Pack (the old name for WebSphere Telecom Content Pack, IBM's product to help Telcos align with the TMF Frameworx)
NGOSS - Next Generation Operations Support Systems (the old name for TMF Frameworx)
Here is the URL for this bookmark: http://apcmag.com/telstra-to-block-ipad-micro-sims-in-other-devices.htm
Interesting... in the rest of the world (and as I heard repeatedly last week at TeleManagement World in Nice, France), Telcos are suffering from all-you-can-eat plans - particularly plans for devices like the iPhone which encourage users to be online all the time and to consume rich media like movies. I heard from a number of Telcos that teenagers are preferring to watch movies on their iPhones in their bedrooms rather than in the lounge room on the normal TV (not that they can always get access to the same movies on the TV) - surely a larger screen will encourage more of that sort of behaviour. This is driving too much traffic on Telcos' 3G networks with flat rate plans. Optus have also announced a similar all-you-can-eat plan for their iPads.
At almost the same time, both Optus and VHA (Vodafone Hutchison Australia) have offered unlimited 3G plans for just AU$50. It makes me wonder if these Telcos in Australia are listening to other Telcos around the world. There's been a lot of press about AT&T's network problems associated with iPhone users. I know the world would be a perfect place if we learnt from everyone else's mistakes, but come on - you don't need to be a genius to see how this could damage their business. I guess they see this as a competitive pressure - if their rivals do it, then they have to as well. I had hoped that the Australian Telcos would be (jointly) a bit more sensible.
I do not have any Apple products, and I'll admit to a bit of jealousy at an all-you-can-eat plan for only AU$50 when I get about 1 GB for a similar amount on my Nokia E71 - it doesn't seem fair that I get so much less for similar money on the same network, just because of the device I choose to use...
While IBM missed out on winning the TeleManagement Excellence Awards this year (congratulations to the four competition winners - see the winners on the TMF web site), we do have a great stand with multiple demos (I haven't counted, but I think there are six) and a small meeting area. Check out the photos below:
TeleManagement World conference, 2010. Nice, France.
Liu Aili, Board Director for China Mobile, presented this morning at the TeleManagement World conference in Nice, France. Mr Liu spoke of China Mobile's challenges. For them, Internet-based competitors pose a real threat: despite the size of China Mobile (more than 528 million subscribers), they see companies like Google (with GTalk) and Skype, but also device manufacturers such as Apple and Nokia, providing on-device applications and value added services on their own devices, which reduces China Mobile's function to that of a bit carrier. As Mr Liu put it, these companies "moved our cheese".
For China Mobile to compete with these Internet-based companies, they needed to radically reduce their costs. To do this, they started a project about six years ago to move from their existing legacy network to an all-IP network. This architectural move reduced their Capex by a massive 68%. The reduction came through lower administration and management costs (by re-organising their operational management system and spreading it across all of their IP networks).
Strategy for IP transformation
China Mobile's network services are predominantly occupied by low-value services - straight 2G services. They undertook a detailed analysis of network utilisation and management tools to better manage their network and control the customer experience. For them, all-IP is not the same as all-in-one IP: they are separating their IP customers into high and low value services with security barriers in place - a separate virtual network for high-value services and another for low-value standard services. He did not state it directly, but I took it to mean that they have different Service Level Agreements (SLAs) associated with the high and low value services.
From a network administration perspective, they have implemented network management agents at as many points as possible - including every router to enable efficient and rapid fault discovery and correction.
For China Mobile, IP skill levels among their staff were a key success factor - Mr Liu spoke of it multiple times, including the comprehensive training schemes they implemented for their staff.
"IP Transformation has been a huge task... the job is fare from finished" Mr Lui said. Despite this, he also said that right now, almost all of their voice traffic is already carried over their IP infrastructure
In summary, Mr Liu made the following points:
IP transformation simplifies the network, but makes O&M more complex.
Operators must invest in OSS systems to make IP networks and transformation more efficient.
(there was a third point that I missed - I will add it once I can download the presentations)
The yo-yo mobile interface for MyDeveloperWorks is back again! Had I known it was available, I would have been using it all week instead of Skyfire to post blog entries from Impact. I just hope it is here to stay this time! :-)
For those of you that don't know about the Lotus Connections Mobile interface, it looks like this on my Nokia e71 and is available from https://www.ibm.com/developerworks/mydeveloperworks/mobile: (I have it zoomed out to 75%, so those of you getting on in life like me, you might prefer it at 100% or greater... :-) )
<edit> It was nice while it lasted - but the mobile interface is down again! </edit>
Well, Impact 2010 is over. It's been four and a half days of terrific content, catching up with other IBMers, customers and business partners. All of the Telco-related sessions finished Wednesday, so for the last day and a half I have been concentrating on product updates to the Business Process Management products. I went to a WebSphere Process Server V7 update yesterday and a WebSphere Services Registry and Repository session this morning. By far the best session of the last two days was the final one, which covered how to get started and be successful on your first BPM project. The presenter had lots of recommendations that made a lot of sense. Once the presentation is posted to the Impact collaboration site, I will summarise it (I didn't think to do it as the session ran, as I did for the telco sessions - sorry!)
In the WPS update, Eric Herness (BPM Chief Architect), along with Amy Dickson (WPS Product Manager) and Kevin Barker (WBI Architect), went through the many improvements introduced with WPS V7 as well as the improvements in WebSphere Integration Developer.
As I write this (on my phone) I am sitting downstairs in the Venetian waiting for the time to tick over before I head to the airport. Unfortunately, McCarran Airport (LAS) doesn't have an American Airlines lounge*, so I might as well wait here where I have free wifi and food as be at the airport. From there, I go to Los Angeles (LAX) and then finally home (after 15 or so hours in the air) to Melbourne.
Next week, I will be heading to the TeleManagement World in Nice, France so if I have wifi connections during the sessions, I will post from the sessions there as well. I hope you'll join me there or failing that, at least read about it here.
* The observant and well travelled among you will know that LAS does actually provide free wifi, but sitting at the airport is not as nice as sitting in the comfy chairs at the hotel.... #ibmimpact
In Costa Rica, the government-owned telco ICE is being forced to open up its market to competitors because of the Central American Free Trade Agreement (CAFTA) that Costa Rica has joined. This represents a huge change for ICE who, as a power and communications provider without a competitor in their market, had no competitive forces pushing them to modernise their systems and processes. For instance, fulfilment of basic services took weeks as a result.
GBM, an IBM business partner and IBM Software group proposed to ICE that they base their new OSS/BSS architecture on the TeleManagement Forum's Frameworx (eTOM, TAM, SID, TNA) - for which they used the WebSphere Telecom Content Pack and IBM Dynamic Process Edition to ensure ICE would have the standards compliance and dynamic BPM capabilities. By using WTCP and DPE, ICE reduced the effort required to build and deploy their new processes by an estimated 20-50%. A fundamental principle of Dynamic BPM is the Business Services layer which sits on top of the BPM layer which in turn sits on the SOA layer. A Business Service is abstracted up from the physical process. For instance, a business service might be 'Check Technical Availability' which would apply regardless of the service you are talking about - mobile, POTS or xDSL. These business services are defined within the Telecom Content Pack which enables system integrators like GBM to accelerate the architecture work on projects like this one for ICE.
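To make the Business Services idea concrete, here is a minimal sketch of a 'Check Technical Availability' business service that abstracts over the physical fulfilment back ends for mobile and xDSL. This is my own illustration, not code from WTCP or DPE, and all class and method names are hypothetical:

```python
from abc import ABC, abstractmethod

class AvailabilityChecker(ABC):
    """One concrete checker per physical network/service type."""
    @abstractmethod
    def check(self, address: str) -> bool: ...

class MobileChecker(AvailabilityChecker):
    def check(self, address: str) -> bool:
        # would query a radio coverage database; stubbed here
        return True

class XdslChecker(AvailabilityChecker):
    def check(self, address: str) -> bool:
        # would query line-length/DSLAM records; stubbed here
        return address.startswith("12")

class CheckTechnicalAvailability:
    """The business service: callers never see the physical systems."""
    def __init__(self) -> None:
        self._checkers = {"mobile": MobileChecker(), "xdsl": XdslChecker()}

    def __call__(self, service_type: str, address: str) -> bool:
        return self._checkers[service_type].check(address)

check = CheckTechnicalAvailability()
print(check("mobile", "10 Main St"))  # True
```

The point is that the caller invokes one business service regardless of the service type; swapping or adding a back end (say, POTS) changes nothing above the abstraction.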
GBM made use of IBM's Rapid Delivery Environment (RDE): they sent a number of their architects to the IBM Telecom Solution Lab in Austin, Texas for six weeks to conduct a proof of concept and to learn how to apply WTCP to a real customer situation such as that faced by ICE. The RDE allowed GBM to work with the IBM experts to build the first few scenarios so that GBM could continue the work locally in Costa Rica without a lot of assistance from IBM. The other benefit of using the RDE is access to the eTOM levels 4, 5 and 6 assets - the connections to the physical systems that the RDE has previously developed, for instance the connection to the Oracle Infranet billing engine, which can then be reused by other customers who also engage with the RDE.
GBM and ICE have not yet been able to measure the acceleration that WTCP and DPE provided, but anecdotal evidence suggests it was significant. In preparation for CAFTA, ICE have already launched a 3G network and are preparing to launch pre-paid services to compete with the several new operators that will enter the market this year. #ibmimpact
At this morning's keynote session with Beth Smith (IBM) and Shanker Ramamurthy (IBM), one of my customers - Globe Telecom from the Philippines - was mentioned. Unfortunately they could not be here to see it for themselves, so I thought I would post the photos and short video I took of it. <edit> I have replaced my shonky video with an extract of the relevant section from the official Impact videos on YouTube </edit>.
Some official videos have been uploaded to YouTube - taken from the real (read good quality) cameras at the event. I have extracted out the relevant Telco section and added subtitles to clarify what Beth and Shankar are actually saying.
AT&T are part way through a major SOA/BPM project which if you know a little about their history* must be an enormous task. They are introducing modelling tools and reverse modelling their existing systems as well as using a tool from iRise to prototype the user interfaces and reduce the risk of not hitting the business requirements.
They have deployed Rational RequisitePro to capture requirements without the need to get users away from their beloved MS Word. In the last five months, registered requirements have gone from 15,000 in January to over 30,000 now, which certainly illustrates the traction they are achieving with their business people. Users access RequisitePro via Citrix sessions and the tools are available to thousands of business users.
AT&T are also exposing WebSphere Business Modeler and iRise to a smaller set of subject matter expert users - building a Centre of Excellence in UI design and process modelling. So far, they have modelled over 800 process flows based on eTOM models which have been extended to meet their specific requirements. All of these are stored within a common Rational Asset Manager instance which helps their business analysts to improve asset use and reuse.
Those process models feed directly into a model-driven development (MDD) method aligned with the requirements and process models. That MDD method uses WebSphere Integration Developer (WID) and Rational Software Architect (RSA) for development and the WebSphere Process Server (WPS) runtime, with WebSphere Business Modeler and WebSphere Services Registry and Repository (WSRR) in support of the runtime. IBM GBS have put in place processes to support AT&T's development life cycle and governance requirements.
Key success factors that AT&T see include:
Solve Critical Business Problems
Win over senior Exec support
Achieve Business Partner Alignment
Integrated Tools Approach
Communicate, communicate, communicate!
* AT&T have been through multiple de-mergers and mergers and acquisitions over the past 10 years resulting in a hugely complex IT environment. #ibmimpact
I have just seen Amy Wohl of Amy D Wohl Opinions present on cloud computing. She went through the various cloud models and spoke about Community Clouds, by which she means multiple community-focused clouds as part of a larger (private) cloud. An example is the Vietnamese Government, which bought an IBM CloudBurst to provide multiple virtual private clouds to small businesses in Vietnam so that they can have access to computing power they would not otherwise be able to afford. For Telcos, this could be an offering to their local community groups - perhaps local schools, bars, sporting clubs and service clubs - but also potentially for commercial organisations such as small businesses.
She also made the interesting point that (in her opinion) we are too early in the cloud evolution to define standards. She believes that any standards set now would stifle innovation in cloud technology and interoperability. I was interested to hear this since I attended a web conference call a few weeks ago run by the TeleManagement Forum's effort to create standards around clouds, particularly for enterprise use rather than public clouds. I guess the enterprise cloud market is the type of cloud user most likely to need interoperability first, hence the emphasis on standards.
Amy co-presented with John Falkl from IBM, who discussed BPM within the cloud. Given BPM is a business function, subjects such as security are usually among the biggest hurdles for cloud services. There are multiple factors that fall under the title of 'security', such as encryption, roles, authentication (especially when using federated or external authentication services), legal data protection requirements and authorisations. John also pointed out a number of factors that should be considered in enterprise cloud services, including governance models (which he sees as an extension of normal enterprise governance models). John's view on standards for cloud services is that they will most likely start with Web Services standards such as WS-Provisioning, and he mentioned that there are multiple efforts around cloud standards. I might see if I can have a chat with both John and Amy after the session to get their views on the TMF's efforts around cloud standards. If that discussion is interesting, I will report back.
Amy made a really interesting point during the Q&A - she said that when she was at Microsoft a few weeks ago she asked about transactional activity in their cloud, and they said that MS could not do it.... Very interesting, especially when you consider that transactional integrity is a core capability of IBM's cloud.
<edit> I asked Amy about the TMF Cloud standardisation - she hadn't heard about it, but did say that she thought that TMF's approach was right - asking the enterprise customers to specify their requirements - she also thought they were probably the right place to start for any cloud standards too. </edit> #ibmimpact
Gridit is a Finnish online retail services company, founded only in 2009 and owned by nine local network providers. Think of them as an aggregated application store that sells a broad range of services and products from those nine network companies as well as third-party content providers. They plan to sell services and content such as:
They do not make exclusive agreements with the content/service providers and provide their customers with freedom of choice. For Gridit, the customer is king - they will seek out new content providers if there is demand from the customers. Gridit also interact with local network providers and 3rd party content providers giving the customers a single point of contact and billing for the services that they resell.
What Gridit are providing is pretty similar to an app store solution we deployed last year in Vietnam - also a joint venture, by a number of Telcos and a bank, which provided a retail online store for products and services from those communications providers as well as 3rd party content providers. The difference is that Gridit are also offering a hosted wholesale service. I could go to Gridit and build my new company 'Larmourcom' and offer products and services from a range of providers that Gridit front-ends for Larmourcom. Gridit can stand up an online commerce portal for Larmourcom and also provide an interface to the back-end providers to allow for traditional and non-traditional service assurance, fulfilment and billing processes.
To achieve this abstraction from the back end providers, Gridit have used WebSphere Telecom Content Pack to provide an architectural framework and accelerator for all of those services. IBM has helped Gridit to map those processes as defined within the TeleManagement Forum's standards (eTOM, TAM, SID) and map them to the lower level processes to wherever the content or services come from.
Like the Vietnamese app store, Gridit are also using WebSphere Commerce to provide the online commerce and catalogue. The benefit Gridit expect to see (as a result of a Business Value Assessment that was conducted) is 48% faster time to value by using Dynamic BPM and the Telecom Content Pack versus a traditional BPM model. That is real business value and a great story for both Gridit and IBM. #ibmimpact
Orange in France are using WebSphere sMash to provide an easy development environment, using PHP and Groovy, to build Telco-enabled applications that consume Orange's Application Programming Interfaces (APIs), which are exposed through pre-built widgets. The custom Orange API is not compliant with either OneAPI or ParlayX, and I would normally not endorse a custom API like this, but time-to-market pressures meant that Orange had to move before the (OneAPI) standards were in place. What I would take from their experience in France is their model and use cases, all of which could be done now using standards for those APIs. Interestingly, I think that Orange could also use IBM Mashup Center to support developers with even fewer skills than the PHP and Groovy developers they're currently targeting.
#ibmimpact Once I get back to my PC, I will insert an Orange video that positions the usage and simplicity of their offering.
Telus is a Communications Service Provider in Canada, the second largest in their market with 12M connections (wireline, mobile and broadband). Telus have a very complex mix of products, services and systems, and they need to maximise their investments while still being able to grow and keep a lid on their costs. New projects still need to be implemented through good times and bad, so they need an architecture that will allow Telus to continue to grow and control costs through a range of economic conditions. Telus selected an agile strategy: a reasonable investment early on, with the plan to become agile and support new 'projects' through small add-ons in terms of investment. Ed Jung from Telus characterised the 'projects' in the later stages as rule or policy changes which may or may not require a formal release.
To achieve this agility, Telus are using WebSphere Telecom Content Pack (WTCP) as an accelerator to keep costs down, while still maintaining standards compliance for their architecture. He sees the key success factors as:
Selecting a key implementation partner (IBM)
Using standards where possible to maintain consistency
Telus elected to start with fulfilment scenarios within their IPTV system. The basis for this is a data mapping to and from a common model which, within the TeleManagement Forum's standards, relates to the SID. Ed sees this common model as key to their success.
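The common-model idea amounts to mapping each system's record shape into and out of one canonical form, so systems only ever integrate with the model, never with each other. A minimal sketch of that pattern, with entirely hypothetical field names (loosely in the spirit of the SID, not its actual schema):

```python
from dataclasses import dataclass

@dataclass
class CommonCustomerOrder:
    """Hypothetical canonical order record shared by all systems."""
    order_id: str
    service_type: str
    address: str

def from_iptv_system(rec: dict) -> CommonCustomerOrder:
    """Map one back-end system's record shape into the common model."""
    return CommonCustomerOrder(
        order_id=rec["ORD_NO"],
        service_type="iptv",
        address=rec["SVC_ADDR"],
    )

def to_billing_system(order: CommonCustomerOrder) -> dict:
    """Map the common model out to another system's expected shape."""
    return {
        "orderRef": order.order_id,
        "product": order.service_type,
        "installAddress": order.address,
    }

order = from_iptv_system({"ORD_NO": "A-100", "SVC_ADDR": "1 Queen St"})
print(to_billing_system(order)["orderRef"])  # A-100
```

With N systems this needs N mappings rather than N×(N-1) point-to-point translations, which is the main reason a common model drives down integration cost.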
Dynamic endpoint selection is used within Telus to enable their processes to integrate and participate with their BPM layer. Ed suggests the key factors for a successful WTCP project are:
Adopt a reference architecture
Select a good partner
Seed money for lab trials
Choose correct pilots
Put governance in place (business and architects)
Configure data / reduce code
Ed thinks that last point (configure data / reduce code) is the best description of an agile architecture that really drives lower total cost of ownership for projects as well as lower capital expenditure for each project.
Craig Hayman is up now and making some great announcements. He went through them too quickly to capture them all on my phone, but I took a photo which I will add to this post later. They included Cast Iron, a new acquisition (announced today) which will add to IBM's cloud integration capabilities.
WebSphere Lombardi Edition, and the bringing together of BPM BlueWorks and Blueprint in a cloud initiative, are just some of the new announcements. The others are in the photo below:
Below is the official YouTube video of Craig Hayman's speech
Robert LeBlanc is speaking now and previewing the 2010 CEO study - always an interesting read, and it looks like there will be similar revelations in this year's report too. Findings like: 75% of successful businesses make extensive use of BPM and SOA. Robert said there would be preview copies available, which hopefully I will be able to get hold of. The study should be available mid-May. Robert is discussing agile businesses and how individual IBM customers are becoming more agile.
Kaiser Permanente is a healthcare provider that is really changing the way they work. Their CEO is speaking about the evolution of medical records from paper charts to electronic records, predictive analytics and personalised records. They're making these revolutionary changes using IBM SOA and BPM technology. It's impressive to see the real changes they have made that have a real impact on patient care, efficiency and capabilities.
The next customer example Robert is giving is Ford. FoMoCo Executive VP Paul Nussbaum is talking about their OneIT initiative; its focus on standardisation, process simplification and consolidation allowed Ford to survive and thrive through the Global Financial Crisis.
Well, I'm here! Las Vegas for this year's Impact conference. As I sit here listening to Steve Mills talk about IBM's BPM and SOA strategy since 2002, it strikes me that the basic story around SOA and BPM has not changed in all that time. Sure, things have changed, but those changes represent growth on top of the same SOA and BPM story. A key add-on that Steve is talking about now is the Smarter Planet initiative, which was launched in 2008 and builds on the SOA basics to really improve our world.
I'm really looking forward to this week, to see the latest and greatest from IBM, IBM Business Partners and Customers. #ibmimpact
I am sitting here in Singapore reading today's Straits Times, keeping up with affairs in the region and around the world, when on page 3 (the most important page in a newspaper after the front page) I find an article about the leaked/lost next-generation iPhone that Gizmodo reportedly paid US$5,000 for (other online reports I've read have suggested other amounts, such as US$350; I'm not sure who is right). The article occupied almost half of page 3... for the next-gen iPhone... that seems excessive to me for a non-specialist publication, but I guess it is reflective of the general hype that exists around Apple products. The previous hype was around the next-gen MacBooks with faster processors, and prior to that the iPad. I've read articles suggesting that the iPad will revolutionise newspapers and home computing and telcos. I'm not so sure. While I think a lot of iPads will be sold worldwide (once it is released outside of the USA), I also think a lot of those devices will get plenty of use through a honeymoon period and then sit idle until they are eventually disposed of. I am so sick of the hype around all these Apple products. There are some things that Apple do really well (UI and design) and some they do really poorly (business use support, locking in users). I respect them, but I do not like them.
It reminds me of a great parody that The Onion did a while ago:
Ok, this is my first attempt at writing a blog post on the full web interface via Skyfire (a proxied browser for mobile devices similar to Opera Mini). I am using my Nokia e71. The big advantage of doing it this way is access to all the rich text options and images that are already uploaded to myDW. Let's test that by inserting an image... On second thoughts, that didn't work too well. I tried to insert an image, but to do that, you have to move the cursor from the text insertion mode so that I can click on the insert image button and Skyfire got a bit confused at that point... Oh well, just text then. That after all was all I would get with the mobile interface if it were available. The mobile interface is definitely faster though...
I'm off to Impact 2010 in Las Vegas in a couple of weeks' time, then a couple of weeks after that, I am off to TeleManagement World in Nice, France - that's two conferences in three weeks. Now that I've tested posting from my phone (without the Connections mobile interface) and proved the concept, I have a model that will allow me to post from the conference floors.
Guilty of not posting what I should have over the past few weeks. First a quickie - IBM's nominations in the TeleManagement Forum excellence awards for this year have dropped down to two, that is to say, IBM has made the finalist lists for two categories:
Business Innovation award
Industry Leadership award
While it's a shame we didn't make the cut for the Solution Excellence award (I am not sure which solution was nominated) I am still proud that we've made the finalist cut for two categories. If you are a TMF member - please go and vote at http://www.tmforum.org/ExcellenceAwards2010/Finalists/8647/Home.html#1 (you choose who you want to vote for, you can probably guess who I voted for! )
I have been working on a post about our newly announced Industry Framework for the Media & Entertainment Industry - you should expect that post to come along soon! (oh and don't forget to vote in the TMF awards!)
I spotted this article this morning - I don't know much about it yet, but I will try to find out some more over the next week or so. I would however note the section of the article that states:
"... In its defense, IBM claims all its solution will do is identify and
block large sources of SPAM SMS- not scan every single message to see if
it’s in accordance with the Chinese Government’s guidelines...."
I know that some Telcos I have worked with have what they call "Anti-SPAM" servers on their network. The key difference between those and this new one at China Mobile is that this new solution looks to be part of the mobile-to-mobile SMS traffic, whereas the others I have seen are all about mobile-originated traffic to shortcodes (for application traffic). This has become a problem for some telcos who offer unlimited (or close to unlimited) SMS plans. Existing systems that I know of simply count the number of SMSs sent by a given MSISDN (phone number) to a particular shortcode - if it exceeds 50 within a 24-hour period, they simply drop the messages. Those systems present an interesting conundrum for SMS voting and SMS competition entries. A subscriber thinks they have entered/voted (say) 200 times by sending 200 short messages, but the actual count that the application (the voting or competition entry database) sees is only 50 for that 24-hour period. For unlimited SMS plans, there is no real penalty to the subscriber other than their votes/entries not being as high as they thought. But for mobile plans that pay for each SMS sent, the subscriber is not getting what they pay for... I can understand why a subscriber on a pay-as-you-consume mobile plan would be very upset with their messages getting dropped, not that a true spammer would ever use a mobile phone plan like that.
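The counting scheme those shortcode anti-SPAM systems use can be sketched in a few lines. This is my own illustration of the idea (hypothetical names, not any vendor's actual implementation): keep a per-MSISDN, per-shortcode record of send times over a sliding 24-hour window and drop everything beyond the 50th message.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 24 * 60 * 60  # sliding 24-hour window
LIMIT = 50                     # max messages per MSISDN per shortcode

# timestamps of accepted messages, keyed by (msisdn, shortcode)
_sent = defaultdict(deque)

def accept_sms(msisdn, shortcode, now=None):
    """Return True if the message should be delivered, False if dropped."""
    now = time.time() if now is None else now
    q = _sent[(msisdn, shortcode)]
    # expire entries that have fallen out of the 24-hour window
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= LIMIT:
        return False  # over the cap: silently drop, as these systems do
    q.append(now)
    return True

# a subscriber "votes" 60 times in one minute: only 50 get through
delivered = sum(accept_sms("61400000000", "1990", now=t) for t in range(60))
print(delivered)  # 50
```

Note the conundrum described above falls straight out of this logic: the sender gets no error on the dropped messages, so their perceived vote count and the application's actual count diverge silently.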
I found out today that IBM has been nominated for three of the four categories for this year's TeleManagement Forum's awards. IBM is the only company to have three nominations. (Click on the image to see all of the nominees for the TM Forum awards). It makes me proud to say "I'm building a smarter planet. I'm an IBMer"
IBM has been nominated for an award in:
Business Innovation Award
Industry Leadership Award
Solution Excellence Award
The other award (Operational Excellence Award) has only Telcos nominated.
Let me start by apologising. I have been very busy over the past few weeks - this week is my first at home in five. That's my best excuse for not posting (other than drawing a blank when it comes to topics). I know I have quite a few people who read my ramblings and I really appreciate it. Unfortunately, my day job keeps getting in the way. The other big news I have (big for me, not so much for almost anyone reading this) is that the Industry Business Partner Technical Enablement team is being disbanded and wound into the IBM channels infrastructure. That means there will no longer be any industry speciality in the technical enablement that we provide to our business partners - but of course our partners are not being left out in the cold. The channels team will continue to provide first-rate technical enablement and assistance, and IBM will continue to have industry specialists. For Business Partners, it will just be a matter of engaging with (non-channels) IBMers in the industry teams as well as the channels team. I would expect that the channels team will provide the conduit to industry specialists such as me when specialised industry skills are needed.
By now you might be wondering if my team is going away, what is happening to yours truly? Well, I have a position with the ... wait for it.... GMU BPM Tiger Team focused on telecommunications. And I thought IBPTSE was a mouthful. I will continue to be a Telecom specialist architect in this new team. Let me break down those acronyms a bit for you.
GMU is Growth Market Unit which equates to the whole world less North America, Japan and Western and Northern Europe.
BPM is Business Process Management and is the layer of intelligence that sits on top of a Service Oriented Architecture; it is the business processes, the workflows, the business rules etc that form the basis for the business strategy.
Tiger Team is a small team of the best of the best resources to chase down deals. What is unusual for this tiger team is the focus on industry - most other Tiger teams in IBM are focused on a particular brand such as Rational, Lotus, WebSphere, Tivoli or Information Management.
This move has been in the works for a few weeks for me, but it's now at a stage where I can talk about it. I would like to take this opportunity to thank everyone associated with the IBPTSE team around the world, particularly Jim Toohey, my manager. Over the past three years that I have been in the team, we have accomplished a lot of things that make me feel very proud. Multiple deals, partners enabled, partners validated against our SPDE framework for Telcos. Despite me being the only team member in Australia, I have always felt a part of a team despite the geographical challenges. Thanks guys!
Providing a National Broadband Network within a country is seen by many governments as a way to help their population and country compete with other countries. I have been involved in three NBN projects: Australia, Singapore and New Zealand. I don't claim to be an expert in all three projects (which are ongoing), but I thought I would share some observations and comparisons between the three.
Where Australia and Singapore have both opted to build a new network with (potentially) new companies running it, New Zealand has taken a different path. The Kiwis have decided to split the incumbent (and formerly monopoly) Telecom New Zealand into three semi-separated 'companies': Retail, Wholesale and Chorus (the network), but only for the 'regulated products', which for the New Zealand government means 'broadband'. They all still report to a single TNZ CEO. I have not seen any direction in terms of Fibre to the Home or Fibre to the Node; the product is just defined as 'broadband'. The really strange thing with this split is that the three business units will continue to operate as they did in the past for other non-regulated products such as voice.
As an aside, the Kiwi government not regulating voice seems an odd decision to me - especially when you compare it to countries like Australia and the USA, where the government has mandated that the Telcos provide equivalent voice services to the entire population. Sure, New Zealand is a much smaller country, but it is not without its own geographic challenges in providing services to all Kiwis, yet voice remains unregulated.
A key part of the separation is that these three business units are obliged to provide the same level of service to external companies as they provide to Telecom and its other business units. For example, if Vodafone wants to sell a Telecom Wholesale product, then Telecom Wholesale MUST treat Vodafone identically to the way they treat Telecom Retail. Likewise, Chorus must do the same for its customers, which would include ISPs as well as potentially other local Telcos (Vodafone, TelstraClear and 2degrees). This equivalency of input seems to me to be an attempt to get to a similar place to Singapore (more on that later). Telecom NZ have already spent tens of millions of NZ dollars to this point and they don't have a lot to show for it yet. It seems to me like the Government is trying to get to an NBN state of play by using Telecom's current network and perhaps adding to it as needed. For the Kiwi population, that's nothing flash like fibre to the home, but more like fibre to the node with a DSL last-mile connection. That will obviously limit the sorts of services that could be delivered over the network. When other countries are talking about speeds in excess of 100 Mbps to the home, New Zealand will be limited to DSL speeds until the network is extended to a full FTTH deployment (not planned at the moment, as far as I am aware).
Singapore, rather than splitting up an existing telco (like Singtel or Starhub), has gone to tender for the three layers - Network, Wholesale and Retail. The government (Singapore Ltd) has decided that there should be only one network, run by one company (Nucleus Connect - providing Fibre to the Home), that there would be a maximum of three wholesale companies, and as many retail companies as the market will support. A big difference from New Zealand is that the Singapore government wants the wholesalers to offer a range of value added services - what they refer to as 'sit forward' services that engage the population, rather than 'sit back' services that do not. Retail companies would be free to pick and choose wholesale products from different wholesalers to provide differentiation of services.
Singapore, New Zealand and Australia are vastly different countries - Singapore is only 700km2 in size, Australia is a continent in its own right, and New Zealand is at the smaller end of in between. This is naturally going to have a dramatic effect on each Government's approach to an NBN. Singapore's highly structured approach is typical of the way Singapore does things. Australia's approach is less controlled - due to the nature of the political environment in Australia rather than its size - and New Zealand's approach seems somewhat half-hearted by comparison. I am not sure why the NZ government has not elected to build a new network independent of Telecom NZ's current network.
In Australia, on the other hand, the government has set up the Communications Alliance to manage the NBN and subcontract to the likes of Telstra, Optus and others. The interesting thing with that approach (other than the false start that has already cost Australian taxpayers AU$30 million), and the thing that sets it apart from Singapore, is that it doesn't seem to have any focus on value added services - it's all about the network. Even the wholesaler plan for Australia is talking about layer 2 protocols (see the Communications Alliance Wiki). All of the documents I have seen from the Communications Alliance are about the network - all very low level stuff.
Of course, these three countries are not the only countries that are going through a NBN project. For example the Philippines had a shot at one a few years ago - the bid was won by ZTE, but then a huge scandal caused the project to be abandoned. It came back a while later as the Government Broadband Network (GBN) but that doesn't really help the average Filipino. It's interesting to see how these projects develop around the world...
A colleague of mine at IBM, Anthony Behan, has just had an article published in BillingOSS magazine. I'll admit that I had never heard of the magazine before, but this particular issue has quite a few articles about Cloud computing in a Telco environment. I don't agree with all of the content in the e-zine, but it is still an interesting read nonetheless. Check out the full issue at http://www.billingoss.com/101 and Anthony's article on p. 48.
The image is a screen capture of Anthony's article from the billingoss.com web site.
Last week, Bharti Airtel launched their new App Store - upping the competitive stakes in India. As I mentioned in my post 'App Stores - Are they right for Telcos?', Telcos are looking to add value beyond just providing the transport. Time will tell how successful they are, but I think it could be worth watching. Bharti have a huge subscriber base and India has one of the lowest ARPU values in the world, so I guess they see it as a vital step to raise their ARPU above their competitors'.
New Delhi, February 09, 2010: Bharti Airtel, Asia's leading integrated telecom service provider, today announced the launch of India's first mobile applications store - Airtel App Central. Now, Airtel mobile customers can transform their basic phone into a Smart Phone by accessing over 1250 Apps across 25 categories for their business, games, books, social networking and other needs. Offering an easy single click purchase - with no credit card required - the cost is automatically added to the customer's mobile bill or deducted from the available talk-time. Starting as low as Rs. 5, Airtel App Central will offer local and regional Apps for customers across the length and breadth of the country.
I had hoped to write an insightful post this week about National Broadband Network projects, contrasting the approaches of the three countries I have been involved in. In Australia (where I live) there has been a LOT of bad media coverage for the NBN project - the first attempt at which wasted AU$30 million of taxpayers' money. Australia, New Zealand and Singapore are all tackling what is essentially the same problem in vastly different ways. Of course there are really good reasons for those differences, and I wanted to explain those as well... but, in my first week back from leave, things have gone nuts - this week, I've had four separate Service Delivery Platform RFI/RFPs plus some ongoing work with Globe and other partners in Japan and New Zealand. The time I had hoped to set aside for the post just hasn't happened.
All I can say is that I am sorry, and I hope to get that to you early next week while I am in Singapore and Bangkok. If you would like to see some other Telecom topics discussed, please feel free to comment and I will try to get to them...
Next week, I will be running a Telco training class for our System Integrator business partners in Bangkok - teaching, demonstrating and helping them to come to grips with IBM's software offerings in the Telecom industry. It should be good - I am looking forward to it.
On the Wednesday of the week before last (the week before my leave), at about 1am my time, I got an urgent request for an RFI response to be presented back to the customer at Friday noon (GMT+8 - 3pm for me, and 2.5 business days for the locals in that timezone). This RFI asked lots of hypothetical questions about what this particular telco might do with their Service Delivery Platform (SDP). It had plenty of requirements like "Email service" or "App Store Service" and so on. These 'use cases' made up 25% of the overall score, but did not have any more detail than I have quoted here. Two to four words for each use case. Crazy! If I am responding to this, such loose scope means I can interpret the use cases any way that I want. It also means that to meet all the use cases (14 in all) - ranging from 'Instant Messaging and Presence Service (IMPS)' to 'Media Content and Management Service' to 'Next-Generation Network Convergence innovative services' - the proposal and the system would have to be a monster with lots of components. The real problem with such vague requirements is that vendors will answer the way they think the customer wants them to, rather than the customer telling them what they want to see in the response. The result will be six or eight different responses that vary so much that they cannot be compared - which is the whole point of running the RFI process: to compare vendors and ultimately select one to grant the project to.
On top of the poor quality of the RFI itself, the lack of time to respond creates great difficulties for the people responding. 'So what? I don't care, it's their job' you might expect them to say, and to an extent you are correct, but think about it like this: a short timeframe to respond means that the vendor has to find whoever they can internally to respond - they don't have time to find the best person. A short timeframe means that the customer is more likely to get a cookie cutter solution (one that the vendor has done before) rather than a solution that is designed to meet their actual needs. A short timeframe means that the vendor may not have enough time to do a proper risk assessment and quality assurance on the proposal - both of which will increase the cost quoted in the proposal.
All of these factors should be of interest to the Telco asking for the proposal, because they all have a direct effect on the quality and price of the proposal and ultimately the success of the project.
I know this problem is not unique to the Telecom industry, but of all the industries I have worked with in my IT career, the Telcos seem to do it most often. I could go on and on quoting examples of ultra short lead times to write proposals - sometimes as little as 24 hours (to answer 600 questions, in that case) - but all it would do is get me riled up thinking about them.
The whole subject reminds me of what my boss in a photolab (long before my IT career began) would say "Quality, Speed, Price: Pick two". Think about it - it rings true doesn't it?
I will be away on leave, so no posts this week, but as a consolation prize, vskinner should be publishing an interview with me in her blog Yin meets Yang
Responding to Val's interview request has given me an idea for some future blog posts - publishing interviews with some of our key Telco partners: those in the Service Provider Delivery Environment validation programme and those that I work with from a NEP or System Integrator perspective. If you think this sounds like a good idea, please comment and let me know.
In the meantime, I am going to enjoy some time away from work. See you in a week's time.
Sizing of software components (and therefore also hardware) is a task that I often need to perform. I spend a lot of time on it, so I figured I would share how I go about doing it and what factors I take into account. It is an inexact science. While I talk about sizing Telecom Web Services Server (TWSS) for the most part, the same principles apply to any sizing exercise. Please also note that the numbers stated are examples only and should NOT be used to perform any sizing calculations of your own!
Inevitably, when asked to do a sizing, I am forced to make assumptions about traffic predictions. I don't like doing it, but it is rare for customers to have really thought through the impact that their traffic estimates/projections will have on the sizing of a solution or its price.
Assumptions are OK
Just as long as you state them - in fact, they can be viewed as a way to wiggle out of any commitment to the sizing should ANY of the assumptions not hold true once the solution has been deployed.
Let me give you an example: I have seen RFPs that have asked for 500 Transactions Per Second (TPS), but neglected to state anywhere what a Transaction actually is. When talking about a product like Telecom Web Services Server, you might assume that the transactions they're talking about are SMS, but in reality they might be talking about MMS or some custom transaction - a factor which would have a very significant effect on the sizing estimate. Almost always, different transaction types will place different loads on systems.
Similarly, it is rare for a WebSphere Process Server opportunity (at a Telco, anyway) to fully define the processes that will be implemented and their volumes once the system goes into production. So, what do I do in these cases?
My first step is to try to get the customer to clear up the confusion. I often make multiple attempts at explaining to the customer why we need such specific information - it is to their benefit, after all: they're much more likely to get the right-sized system for their needs. This is not always successful, so my next step is to make assumptions to fill in the holes in the customer's information. I am always careful to write those assumptions down and include them with my sizing estimates. At this point, industry experience and thinking about potential use cases really helps to make my assumptions reasonable (or so I think, anyway).
For instance, if a telco has stated that the Parlay X Gateway must be able to service 5,760,000 SMS messages per day, I think it would be reasonable to assume that very close to 100% of those would be sent within a 16 hour window (while people are awake, and to avoid complaints to the telco about SMS messages that come in at all hours - remembering that we are talking about applications sending SMS messages, nothing to do with user-to-user SMS). That gets us down to 360,000 (5,760,000/16) SMS per hour, or 100 TPS for SendSMS over SMPP. Now this is fine as an average number, but I guarantee that the distribution of those messages will not be even, so you have to make an assumption that the peak usage will be somewhat higher than 100 TPS, remembering that we have to size for peak load, not average. How much higher will depend on use cases. If the customer can't give you those, then pick a number that your gut tells you is reasonable - let's say 35% higher than average, which is roughly 135 TPS of SendSMS over SMPP. (I say roughly because if that is your peak load, then as our total is constant for the day (5,760,000), the load must be lower during the non-busy hours. As we are making up numbers here anyway, I wouldn't worry about this discrepancy, and erring on the side of oversizing is the safer option anyway - provided you don't overdo the oversizing.)
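The arithmetic above can be sketched in a few lines - a back-of-the-envelope check using the assumed 16 hour window and 35% peak uplift stated in the text:

```python
# Back-of-the-envelope peak TPS estimate for application-driven SMS.
# Assumptions (as stated above): all traffic falls in a 16 hour window,
# and the busy-hour peak is 35% above the average.

DAILY_SMS = 5_760_000
ACTIVE_HOURS = 16
PEAK_UPLIFT = 1.35  # assumed 35% above average

avg_tps = DAILY_SMS / (ACTIVE_HOURS * 3600)  # 100 TPS average
peak_tps = avg_tps * PEAK_UPLIFT             # 135 TPS peak

print(f"average: {avg_tps:.0f} TPS, peak: {peak_tps:.0f} TPS")
```

Change the window or the uplift assumption and the peak figure you must size for moves accordingly - which is exactly why those assumptions need to be written down.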
Assumptions are your friend
As I said, I prefer not to make lots of assumptions, but stating stringent assumptions can be your friend if the system does not perform as you predicted and the influencing factors are not exactly as stated in your assumptions. For instance, if you work on the basis of a 35% increase in load during the busy hour and it turns out to be 200%, your sizing is going to be way off. But because you asked the customer for the increase in load during the busy hour and they did not give you the information, you were forced to make an assumption - they know their business better than we ever could, and if they can't or won't predict such an increase during the busy hour, then we cannot reasonably be expected to predict it accurately either. The assumptions you stated will save your (and IBM's) neck. If you didn't explicitly state your assumptions, then you would be leaving yourself open to all sorts of consequences - and not good ones at that.
Understand the hardware that you are deploying to
I saw a sizing estimate the other week that was supposed to handle about 500 TPS of SendSMS over SMPP, but the machine quoted would have been able to handle around 850 TPS; I would call that overdoing the oversizing. This overestimate happened because the person who did the sizing failed to take into account the differences between the chosen deployment platform and the platform that the TWSS performance team did their testing on.
If you look at the way that our Processor Value Unit (PVU) based software licensing works, you will pretty quickly come to the conclusion that not all processors are equal. PVUs are based on the architecture of the CPU - some value a processor at just 30 PVUs per core (Sparc eight core CPUs), older Intel CPUs are 50 PVUs per core, while newer ones are 70 PVUs per core. PowerPC chips range from 80 PVUs per core to 120 PVUs per core. Basically, the higher the PVU rating, the more powerful each core is on that CPU. CPUs that are rated at higher PVUs per core are more likely to be able to handle more load per core than ones with lower PVU ratings.
Unfortunately, PVUs are not granular enough to use as the basis for sizing (remember them though - we will come back to PVUs later in the discussion). To compare the performance of different hardware, I use RPE2 benchmark scores. IBM's Systems and Technology Group (Hardware) keeps track of RPE2 scores for IBM hardware (System p and x, at least). Since pricing is done by CPU core, you should also do your sizing estimate by CPU core. For TWSS sizing, I use a spreadsheet from Ivan Heninger (ex WebSphere Software for Telecom Performance Team lead). Ivan's spreadsheet works on the basis of CPU cores for (very old) HS21 blades. Newer servers/CPUs and PowerPC servers are pretty much all faster than the old clunkers Ivan had for his testing. To bridge the gap between the capabilities of his old test environment and modern hardware, I use RPE2 scores. Since Ivan's spreadsheet delivers a number-of-cores-required result, I break the RPE2 score for the server down to an RPE2 score per core, then use the ratio between the RPE2 score per core for the new server and the test servers to figure out how many cores of the new hardware are required to meet the performance requirement.
So now, using the spreadsheet, you key in the TPS required for the various transaction types - let's say 500 TPS of SendSMS over SMPP (just to keep it simple; normally you would also have to take into account Push WAP and MMS messages, not to mention other transaction types such as Location requests which are not covered by the spreadsheet). That's 12 x 2 cores on Ivan's old clunkers, but on newer hardware such as newer HS21s with 3 GHz CPUs it's 6 x 2 cores, and on JS12 blades it is also 6 x 2 cores. 'Oh, that's easy,' you say, 'the HS21s are only 50 PVUs each - I'll just go with Linux on HS21 blades and that will be the best bang for the buck for the customer.' Well, don't forget that Intel no longer makes dual-core CPUs for servers - they're all quad-core - so in the above example you have to buy 8 x 2 cores, rather than the 6 x 2 cores for the JS12/JS22 blades.
Note the x 2 after each number: that is because for TWSS in production deployments, you must separate the TWSS Access Gateway (AG) and the TWSS Service Platform (SP). The x 2 indicates that the AG and the SP each require that number of cores.
Let's work that through. Let's first say that TWSS is $850 per PVU.
For the fast HS21s, that's 8 x 2 x 50 x $850 = $680,000 for the TWSS licences alone. For JS12s, that's 6 x 2 x 80 x $850 = $816,000 for the TWSS licences alone.
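Here is a sketch of the core-scaling and licence arithmetic. The RPE2-per-core figures are made-up placeholders to show the method (not real benchmark scores); the PVU ratings, core counts and $850/PVU price are the example numbers from the text:

```python
import math

# Hypothetical RPE2-per-core figures -- placeholders, NOT real benchmark scores.
# The only thing that matters for the method is the ratio to the reference box.
RPE2_PER_CORE = {"old_hs21": 1.0, "new_hs21": 2.0, "js12": 2.0}

REF_CORES_NEEDED = 12        # cores of the old reference HS21 per tier (from the spreadsheet)
TIERS = 2                    # Access Gateway + Service Platform (the "x 2")
PVU_PER_CORE = {"new_hs21": 50, "js12": 80}
PRICE_PER_PVU = 850          # example price from the text

def cores_on(target):
    """Scale the reference core count by the RPE2-per-core ratio, rounding up."""
    ratio = RPE2_PER_CORE[target] / RPE2_PER_CORE["old_hs21"]
    return math.ceil(REF_CORES_NEEDED / ratio)

def licence_cost(target, cores_bought):
    """Licence cost = cores x tiers x PVUs-per-core x price-per-PVU."""
    return cores_bought * TIERS * PVU_PER_CORE[target] * PRICE_PER_PVU

js12_cost = licence_cost("js12", cores_on("js12"))  # 6 x 2 x 80 x 850 = 816,000
# New HS21s are quad-core only, so the 6 cores needed round up to 8 bought.
hs21_cost = licence_cost("new_hs21", 8)             # 8 x 2 x 50 x 850 = 680,000

print(js12_cost, hs21_cost)
```

The quad-core rounding is the point of the exercise: sizing must always land on a multiple of the cores per CPU actually purchasable.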
Also (and all sales people who are pricing this should know this), the pre-requisites for TWSS must be licensed separately as well. That means the appropriate number of PVUs of WESB (for the TWSS AG) and the appropriate number of PVUs of WAS ND (for the TWSS SP), as well as the database. It's pretty easy to see how the numbers can add up quickly and how much your sizing estimate can affect the price of the solution.
Database sizing for TWSS
For the database, we of course prefer to use DB2, but most telcos will demand Oracle in my experience. For TWSS, the size of the server is usually not the bottleneck in the environment; what is important is the DB writes and reads per second - which equates to disk input/output - to achieve high transaction rates with TWSS. It is VITAL to have an appropriate number of disk spindles in the database disk array to achieve the throughput required - the spreadsheet will give you the number of disk drives that need to be in a striped array to achieve the throughput. For the above 500 TPS example, it is 14.6 disks = 15 disks, since you can't buy only part of a disk. While striping (RAID 0) will give you throughput across your disk array, if one drive fails, you're sunk. To achieve protection, you must go with RAID 1+0 (sometimes called RAID 10), which gives you both mirroring (RAID 1) and striping (RAID 0). RAID 1+0 immediately doubles your disk count, so we're up to 30 disks in the array. Our friends at STG should be able to advise on the most suitable disk array unit to go with. In terms of CPU for the database server, as I said, it does not get heavily loaded. The spreadsheet indicates that 70.7% of the reference HS21 (Ivan's clunker) would be suitable, so a single-CPU JS12 or HS21 blade - even an old one - would be suitable.
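The disk count works out like this - the 14.6-spindle figure comes from the sizing spreadsheet, and the doubling is the RAID 1+0 mirroring:

```python
import math

# Spindle count for throughput, from the sizing spreadsheet (500 TPS example).
spindles_for_throughput = 14.6

striped_disks = math.ceil(spindles_for_throughput)  # 15 -- can't buy part of a disk
raid10_disks = striped_disks * 2                    # mirroring doubles the count: 30

# Usable capacity: half the drives are mirrors, so 15 x drive size.
usable_gb = striped_disks * 136                     # with the smallest (136 GB) drives

print(f"{raid10_disks} disks, {usable_gb} GB usable")
```

The capacity falls out as a side effect - we buy the drives for I/O, not for space.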
Every time I do a TWSS sizing, I get asked how much capacity we need in the RAID 1+0 disk array - despite always asking for the smallest disks possible. Remember, we are going for a (potentially) large array to get throughput, not storage space. In reality, I would expect a single 32 GB HDD would easily handle the size requirements for the database, so space is not an issue at all when we have 30 disks in our array. To answer the question about what size: the smallest possible, since that will also be the cheapest possible, provided it does not compromise the seek and data transfer rates of the drive. In the hypothetical 30 drive array, if we select the smallest drive now available (136 GB), we would have a massive 2 TB of usable space (15 x 136 GB, since half the drives are mirrors), which is way over what we need in terms of space, but it is the only way we can currently get the throughput needed for the disk I/O on our database server. Exactly the same principles apply regardless of whether DB2 or Oracle is used for the database.
Something that I have yet to see empirical data on is how Solid State Drives (SSDs), with their higher I/O rates, would perform in a RAID 1+0 array. In such an I/O-intensive application, I suspect they would allow us to drop the total number of 'disks' in the array quite significantly, but I don't have any real data to back that up or to size an array of SSDs.
We have also considered using an in-memory database such as SolidDB, either as the working database or as a 'cache' in front of DB2, but the problem there is that the level of SQL supported by SolidDB is not the same as that supported by DB2 or Oracle's conventional database. Porting the TWSS code to use SolidDB would require a significant investment in development.
Remember: sizing estimates must always be in multiples of the number of cores per CPU.
Make sure you have enough overhead built into your calculations for other processes that may be using CPU cycles on your servers. I assume that the TWSS processes will only ever use a maximum of 50% of the CPU - that leaves the other 50% for other tasks and processes that may be running on the system. I always state that with my assumptions as well. As an example, I would say:
To achieve 500 TPS (peak) of SendSMS over SMPP at 50% CPU utilisation, you will need 960 PVUs of TWSS on JS12 (BladeCenter JS12 P6 4.0GHz-4MB (1ch/2co)) blades or 800 PVUs of TWSS on HS21 (BladeCenter HS21 XM Xeon L5430 Quad Core 2.66GHz (1ch/4co)) blades. I would then list the assumptions that I had made to get to the 500 TPS figure, such as:
There is no allowance for Push WAP or MMS in the sizing estimate
500 TPS is the peak load and not an average load
The SMSC has an SMPP interface available
All application-driven SMS traffic will occur during a 16 hour window
What about High Availability?
I think that High Availability (HA) is probably a topic in its own right, but it does have a significant effect on sizing, so I will talk about it in that regard. HA is generally specified in nines - by that I mean that if a customer asks for "five nines", they mean 99.999% availability per annum (that's about 5.2 minutes per year of unplanned down time). Three nines (99.9% availability) or even two nines (99%) are also sometimes asked for. Often, customers will ask for five nines without realising the significant impact that such a requirement will have on the software, hardware and services sizing.
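The nines translate directly into allowed unplanned downtime - a quick sketch:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability):
    """Allowed unplanned downtime per year for a given availability fraction."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, avail in [("two nines", 0.99), ("three nines", 0.999), ("five nines", 0.99999)]:
    print(f"{label}: {downtime_minutes(avail):.1f} minutes/year")
# five nines allow only about 5.26 minutes of unplanned downtime per year
```

Seeing that two nines allow over 5,000 minutes a year while five nines allow barely five is usually enough to make a customer reconsider what they actually need.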
If we start adding additional nodes into clusters for server components, that will not only improve the availability of each component, it will also increase the transaction capacity - and the price. The trick is to find the right balance between hardware sizing and HA requirements. For example, suppose a customer wants 400 TPS of Transaction X, but also wants HA. Let's assume a single JS22 (2 x dual core PowerPC) blade can handle the 400 TPS requirement. We could go with JS22 blades and just add more to the cluster to build up the availability and remove single points of failure. As soon as we do that, we are also increasing the license cost and the actual capacity of the component - so with three nodes in the cluster, we would have 1200 TPS capability and three times the price of what they actually need, just to get HA. If instead we use JS12 blades (1 x dual core PowerPC), which have half the computing power of a JS22, we could have three JS12s in a cluster, achieve 3 x 200 (say) TPS = 600 TPS, and even with a single node in the cluster down, still achieve their 400 TPS requirement. With JS12s, we meet the performance requirement, we have the same level of HA as we did with 3 x JS22s, but the licensing price will be half that of the JS22 based solution (at 1.5 x the single JS22 option).
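The JS22-versus-JS12 trade-off can be checked with a tiny sketch (the per-blade TPS figures are the example numbers from the text, not benchmarks):

```python
# N-1 check: does the cluster still meet the target with one node down?
TARGET_TPS = 400  # example requirement from the text

def survives_one_failure(nodes, tps_per_node, target=TARGET_TPS):
    """True if the cluster still meets the target with one node failed."""
    return (nodes - 1) * tps_per_node >= target

# Option A: three JS22 blades at 400 TPS each -- triple the capacity needed.
# Option B: three JS12 blades at 200 TPS each -- half the cores, same HA.
print(survives_one_failure(3, 400))  # JS22 cluster: 800 TPS with a node down
print(survives_one_failure(3, 200))  # JS12 cluster: 400 TPS with a node down
print(survives_one_failure(2, 200))  # two JS12s alone would not survive a failure
```

Both three-node options pass the N-1 check, so the cheaper JS12 cluster wins.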
I guess the point I am trying to get across is to think about your options and consider whether there are ways to fiddle with the deployment hardware to get the most appropriate sizing for the customer and their requirements. The whole thing just requires a bit of thinking...
What other tools are available for sizing?
IBMers have a range of tools available to help with sizing - the TWSS spreadsheet I was talking about earlier, various online tools and of course Techline. Techline is also available to our IBM Business Partners via the Partnerworld web site (you need to be a registered Business Partner to access the Techline pages on the Partnerworld site). For more mainstream products such as WAS, WPS, Portal etc, Techline is the team to help Business Partners - they have questionnaires that they will use to gather all the parameters they need to do the sizing. Techline is the initial contact point for sizing support. For more specialised product support (like TWSS and the other WebSphere Software for Telecom products) you may need to contact your local IBM team for help. If you are a partner, feel free to contact me directly for assistance with sizing WsT products.
There is an IBM class for IT Architects called 'Architecting for Performance' - don't let the title put you off, others can do it. I did it, and I am neither an architect (I am a specialist) nor from IBM Global Services (although everyone else in the class was!). If you get the opportunity to attend the class, I recommend it - you work through plenty of exercises, and while you don't do any component sizing, you do some whole system sizing, which is a similar process. I am not sure if the class is open to Business Partners; if it is, I would also encourage architects and specialists from our BPs to do the class. Let me take that on as a task - I will see if it is available externally and report back.
Sizing estimation is not an exact science
As I glance back over this post, I guess that I have been rambling a bit, but hopefully you now understand some of the factors in doing a sizing estimate. The introduction of assumptions and other factors beyond your knowledge and control makes sizing inexact - it will always be an estimate and you cannot guarantee its accuracy. That is something that you should also state with your assumptions.