Sadly, I am no longer employed by IBM, so this blog, which started as a team blog for the IBM Business Partner Technical Strategy Enablement (IBPTSE) Telecom team, no longer represents IBM in any shape or form. When I became the Chief Telecom Architect for IBM's WebSphere brand worldwide, I continued to write posts. The WebSphere brand merged into the new Cloud brand in 2015, and I retained the same role, working with Telcos all around the world to design software solutions to solve their business problems. I left IBM in 2017 and am now working for DGIT Systems, a small IT company focused on helping Telcos be more agile through alignment with TMForum Frameworx - particularly for Order Management and Fulfilment solutions. My hope is to continue/resurrect this blog on Telecom business issues and technology.
Thanks for visiting. Please comment on posts and leave your thoughts.
(OK, it's not strictly Telco related, but check the footnote to see my personal connection with the J9 VM in particular)
WebSphere Liberty is the high-performance Java Enterprise Edition server that's ultra-lightweight - it includes an OSGi container and uses the IBM J9 VM at its core, which IBM has also donated to the open source community (via Eclipse - see https://projects.eclipse.org/projects/technology.openj9).1
1. A little history lesson on the IBM J9 VM - it was originally developed by IBM's (now defunct) Pervasive Computing division as IBM's lightweight J2ME VM. It was then ported to the J2SE and J2EE platforms. When it was developed, I was in a Tech Sales role for the Pervasive Computing division, so I have a soft spot for the J9 VM.
An ex-colleague of mine (Violet Le - now the Marketing Director at Imageware) asked me about the drivers for Analytics in Telcos. I'll admit that it's a subject I haven't really given a lot of thought to - all the projects I've worked on in the past that included Analytics had a larger business case that I was trying to solve (Marketing, Future Planning, Sales etc.). I've never worked on an Analytics project for the sake of analytics, nor have I designed a solution that was just (or mainly) analytics.
There is definite value in analytics in providing insight into how the business is running - enabling the business to plan for the future and to manage how it runs in the present. Both Strategic and Tactical cases for analytics would seem to me to be of value to any business. An analytics system that delivers insight into the business (customer behaviour, sales effectiveness, capacity usage and predictions etc.) is great, but at the end of the day, a Telco needs to do something with that information/insight to actually deliver business benefits.
As I'm no analytics specialist, I won't try to describe how to define or build those systems. What I will try to do is describe the bits around the analytics systems that make use of that insight to deliver real value for the CSP.
What are the business cases that I've seen?
Sales & Marketing
Driving promotions to positively affect subscriber retention or acquisition... I did a project with Globe Telecom in the Philippines which was primarily aimed at driving SMS-based outbound marketing promotions targeted on subscriber behaviour. An example: if a subscriber had a pre-paid balance of less than (say) 5 pesos, and the subscriber then topped up by more than 20 pesos and less than 50 pesos, send a promo encouraging the subscriber to top up by more than 100 pesos... all the interaction is via SMS (via a ParlayX SMS API).
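That promo rule is simple enough to sketch in code. This is a hypothetical illustration only - the peso thresholds come from the example above, but the function and field names are my own invention:

```python
# Hedged sketch of the top-up promo trigger rule (hypothetical names).
# Thresholds (5, 20, 50 pesos) come from the example; in the real project
# the outbound promo SMS was sent via a ParlayX SMS API (not shown here).

def should_send_topup_promo(balance_before: float, topup_amount: float) -> bool:
    """Return True when a top-up event should trigger the promo SMS."""
    low_balance = balance_before < 5          # pre-paid balance under 5 pesos
    mid_topup = 20 < topup_amount < 50        # top-up strictly between 20 and 50 pesos
    return low_balance and mid_topup

# Example: balance was 3 pesos, subscriber topped up 30 pesos -> send the
# "top up by more than 100 pesos" promo.
print(should_send_topup_promo(3, 30))
```

The real rules engine in such a system would hold dozens of rules like this, evaluated against each charging/top-up event in real time.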
Back in 2013, I did an Ignite presentation at the IBM Impact Conference in Las Vegas - here is the presentation (Smarter Marketing for Telecom - Impact 2013). The session was videoed; although the recording seemed to be no longer on YouTube, happily I've found that the video is still available - just not easy to find. Here it is for your enjoyment!
Social networking analysis to determine who should be targeted. For years, IBM's Research group pushed a Social Networking Analysis capability that looked at social networking connections to determine which subscribers are followers and which are community leaders and influencers, and targeted promotions based on that assessment.
Ensuring utilisation of the network is optimised for the load requirements. I worked with a telco in Hong Kong that wanted to dynamically adjust the quality of service (QoS) level delivered to a specific user based on their location (in real time) and a historical analysis of the traffic on the network. For example, if a subscriber was entering an MTR (subway) station and the analytics showed that particular station typically saw very high numbers of subscribers all watching YouTube clips at that time of day on that day of the week, then lower the QoS setting for that subscriber - UNLESS they were a premium or post-paid customer, in which case keep the QoS settings the same. The rating as a premium subscriber could be derived from their past behaviour and spend - from a traditional analytics engine.
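A hedged sketch of that QoS decision rule, with invented names throughout: the congestion flag would come from the historical traffic analysis, and the premium rating from the traditional analytics engine:

```python
# Illustrative-only sketch of the Hong Kong QoS rule described above.
# All names are hypothetical; the inputs would be fed by real analytics.

def qos_for_subscriber(is_premium_or_postpaid: bool,
                       cell_congested_now: bool,
                       default_qos: str = "normal") -> str:
    """Decide the QoS level for a subscriber entering a cell/station.

    cell_congested_now: from a historical model, e.g. "this MTR station is
        typically saturated with video traffic at this hour on this weekday".
    is_premium_or_postpaid: derived from past behaviour and spend.
    """
    if cell_congested_now and not is_premium_or_postpaid:
        return "reduced"          # throttle pre-paid users in a busy cell
    return default_qos            # premium/post-paid keep their settings

# Pre-paid subscriber entering a congested station at peak time:
print(qos_for_subscriber(False, True))
```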
Long-term planning on the network. SDN/NFV will allow networks to be more agile, which will reduce the need for traditional offline analytics to drive network planning and make the real-time view more relevant as networks adapt to real-time loads dynamically... as traffic increases in particular sections of the network, real-time analytics and predictions will drive the SDN to scale up that part of the network on demand. This is where new next-gen AIs may be useful in predicting where the load will be in the network and then using SDN to increase capacity BEFORE the load is detected... read Watson from IBM and similar.
A few years ago, a number of ex-colleagues (from IBM) formed a company on the back of a real-time marketing use case for Telcos, and since then they've gone ahead in leaps and bounds. (Check them out if you're interested - the company name is Knowesis.) <edit> Unfortunately, I can't link to them without setting off the spam protections on mydeveloperworks, but I'm sure you can figure it out... www.knowesis.com will do it!</edit>
Do you have significant use cases for analytics in a CSP? I'm sure there are more, and I'm not claiming this is an exhaustive list - merely the cases that I've seen multiple times in my time as a solution architect focused on the telecommunications industry.
I wouldn't normally just post a link to someone else's work here, but in this case Frank Wong - a colleague of mine at my new company (DGIT Systems) - has done some terrific work in helping to eliminate the mismatch between the data model used by the TMF's REST-based APIs and the TMF's Information Model (SID). I know this was an issue that IBM was also looking to resolve. In the effort to encourage the use of a simple REST interface, the data model used in the TMF's APIs has been greatly simplified from the comprehensive (some might say complex) data model that is the TMF's Information Model (SID). This meant that a CSP using the SID internally to connect internal systems needed to map to the simplified API data model to expose those APIs externally. There was no easy one-to-one mapping, which meant that one could not simply create an API for an existing business service (eTOM or otherwise) - a lot more custom data modelling work would be required.
Across many industries, including the Telecommunications sector, there seems to be a strong movement towards a MicroServices Architecture and (somewhat) away from Service Oriented Architecture. I've seen this move in a CSP here in Australia. The TeleManagement Forum have a significant project that is trying to standardise the REST APIs that a CSP might publish.
The TMF state:
"TM Forum’s Open API program is a global initiative to enable end to end seamless connectivity, interoperability and portability across complex ecosystem based services. The program is creating an Open API suite which is a set of standard REST based APIs enabling rapid, repeatable, and flexible integration among operations and management systems, making it easier to create, build and operate complex innovative services.
"TM Forum is bringing different stakeholders from across industries to work together and build key partnerships to create the APIs and connections. The reference architecture and APIs we are co-creating are critical enablers of our API program and open innovation approach for building innovative new digital services in a number of key areas, including IoT applications, smart cities, mobile banking and more."
Laurent Leboucher, Vice President of APIs & Ecosystems, Orange
TM Forum REST based APIs are technology agnostic and can be used in any digital service scenario, including B2B value fabrics, Internet of Things, Smart Health, Smart Grid, Big Data, NFV, Next Generation OSS/BSS and much more."
I've been a part of a number of projects where these REST APIs have been exposed primarily to a CSP's trading partners - my very first Service Delivery Platform exposed APIs to external developers. Back then, it was ParlayX Web services (REST didn't really exist, and there were certainly no Telco standards in place for REST-based interfaces) that exposed the functionality of network elements to third-party developers. Many of the APIs that the TMF have defined seem to be more focused on OSS/BSS functions instead. Now that the TMF have quite a number of Open APIs defined, some network-focused APIs are coming onto the list - for instance, a Location API would typically have been exposed using the ParlayX Web Services or ParlayREST interfaces to the network's Location Based Server (LBS). As a result, there does seem to be a small amount of crossover between the new TMF APIs and the older ParlayREST APIs.
Does this mean that the new TMF OpenAPIs are of no use? Not at all. There are certainly advantages to exposing functions that a CSP has to external developers and REST based OpenAPIs make the consumption of those functions easier than the ParlayX web services or Parlay CORBA services have been in the past. Ease of consumption is not to be underestimated. An API that is easy to include in an application and provides a real capability that would have been otherwise difficult to provide stands a much greater chance of wide usage.
Sure, there is a place for externalising the OSS/BSS functions of a CSP. Trading partners could place orders against a CSP, they could bill to a subscriber's post or pre-paid accounts, they could update the subscriber profile held by the CSP. All relevant use cases for externalising the TMF Open APIs.
The big question in my mind is will REST APIs be of use internally?
REST-based APIs being easier to integrate internally will drive some value. But in CSPs that have significant investments in a Service Oriented Architecture (SOA), I'm struggling to see the business value in abandoning that in favour of a MicroServices Architecture where there is no common integration tool and no common orchestration capability - rather, lots and lots of point-to-point integrations through REST APIs.
For those of us who have been around a while, you will have seen point-to-point integrations and the headaches they cause - complex dependencies in mesh architectures make maintenance hard and expensive. Changing a (say) billing system that is integrated through multiple point-to-point connections is a nightmare - even if those interfaces are described by a standardised API. The plain truth of the matter is that not all of those interfaces will be adequately described by the TMF's Open APIs, so custom-specified APIs will arise and make swapping out the billing system expensive. Additionally, not all of a CSP's internal systems will have TMF Open API compliant interfaces - many won't even support REST interfaces natively. Changing all of a CSP's systems to ensure they have a REST interface is a non-trivial task.
A Hybrid environment may be needed.
I'd suggest that a Hybrid approach is needed - existing Enterprise Service Buses may be able to interface with REST APIs. Certainly IBM's Integration Bus and the (now superseded) WebSphere Enterprise Service Bus could connect to REST APIs just as easily as they could connect to Web Services, files and other connectivity options. The protocol transformation capabilities of an ESB can provide REST APIs for systems that would otherwise not have supported such modern interfaces. Similarly, where a function is not provided by a single system, a traditional orchestration (BPM) capability can coordinate multiple systems to provide a single interface to that capability, even if (behind the scenes) there are multiple endpoint systems involved in providing the functionality of that transaction/interface. The diagram below shows my thinking on what should be in place....
Think about it - orchestration is everywhere in a Telco: the Order to Cash process, the Ticket to Resolution process, the service and resource fulfilment processes and even the NFV MANO processes. Orchestration is everywhere...
There is a hierarchy to processes in a Telco - just as the TMF recognises that there is a hierarchy in business services (within the eTOM Process Framework). At the highest level, the Order to Cash process might look like this:
Each task in this swimlane diagram will have multiple sub-processes. If we delve down into the provision resources task, for instance, a CSP will need processes that interrogate the resource catalog and network inventory to determine where in the network that resource can be placed and what characteristics need to be set, then tell the resource manager to provision that resource. If it's a physical resource, that may involve allocating a technician to install the physical resource. If it's a virtual resource such as a Virtual Network Function (VNF), then the Network Function Virtualisation (NFV) orchestration engine will need to be told to provision that VNF. If we go one level deeper, the NFV Orchestration engine will need to tell the NFV Manager to provision that VNF and then update the network inventory.
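To make that layering concrete, here is an illustrative-only sketch of the "provision resources" decision flow, with every sub-system stubbed out. All function names and data fields are my own invention, not any product's API:

```python
# Hypothetical sketch of the layered "provision resources" flow.
# Each stub stands in for a real system (catalog, inventory, workforce
# management, NFV orchestrator/manager).

def query_catalog_and_inventory(request):
    # Consult the resource catalog and network inventory: where can the
    # resource go, and with what characteristics?
    return {"site": "EXCH-01", "characteristics": {"vlan": 100}}

def dispatch_technician(location, request):
    # Physical resource: raise a work order for a field technician.
    return f"work order raised for {location['site']}"

def nfv_orchestrate(location, request):
    # Virtual resource: the NFV Orchestrator tells the NFV Manager to
    # spin up the VNF (one level deeper in the hierarchy).
    return f"VNF {request['name']} provisioned at {location['site']}"

def update_inventory(location, request):
    # Finally, record the new resource in the network inventory.
    return True

def provision_resource(request):
    location = query_catalog_and_inventory(request)
    if request["type"] == "physical":
        result = dispatch_technician(location, request)
    else:
        result = nfv_orchestrate(location, request)
    update_inventory(location, request)
    return result

print(provision_resource({"type": "virtual", "name": "vFW"}))
```

The point is not the code itself but that the same orchestration pattern (decide, delegate, record) repeats at each layer of the hierarchy.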
Perhaps the diagram below will help you to understand what I mean:
This diagram is a very simplified hierarchical process model designed to show the layers of process. As you can see, there are many layers of orchestration required in a CSP and as long as the orchestration engine is flexible enough and can handle the integration points with the many systems it needs to interact with, there is no real reason why the same orchestration engine couldn't be used by all levels of process.
Over the past couple of years as NFV has risen significantly in popularity and interest, I've seen many players in the market talk about orchestration engines that just handle NFV orchestration and nothing else. To me, that seems like a waste. Why put in an orchestration engine that is just used for NFV when you also still need orchestration engines for the higher process layers as well? I'd suggest that a common orchestration and common integration capability makes the most sense delivering:
High levels of reuse
Maximising utilisation of software capabilities
Common Admin and Development skills for all levels of process (be they business focussed or service or resource focussed)
Common Integration patterns (enabling developers and management staff to work across all layers of the business)
Greater Business Agility - able to react to changing business and technical conditions faster
There are a number of Integration platforms - typically marketed as Enterprise Service Buses (ESB) that can handle integration through Web Services, XML/HTTP, File, CORBA/IIOP even Socket/RPC connections for those legacy systems that many telcos still have hanging around. An ESB can work well in a MicroServices environment too - so don't think that just because you have a ESB, you're fighting against MicroServices - you are not. MicroServices can make use of the ESB for connectivity to conventional Web Services (SOA) as well as legacy systems.
A common Orchestration layer would drive consistency in processes at all layers of a Telco - and there are a number of Business Process Management orchestration engines out there that have the flexibility to work with the Integration layer to orchestrate processes from the lowest level (such as within a Network Function Virtualisation (NFV) environment) all the way up to the highest levels of business process. The orchestrations should be defined in a standard language such as Business Process Execution Language (BPEL) or Business Process Model and Notation (BPMN).
To me, it makes no sense to re-invent the wheel and have orchestration engines just for the NFV environment, different orchestration engines for the Service Order Management, the Resource Order Management, the Customer Order Management, the Service Assurance, the Billing, the Partner/Supplier management etc etc - all of these orchestration requirements could be handled by a single orchestration engine. Additionally, this would make disaster recovery simpler and faster and cheaper as well (fewer software components to be restored in a disaster situation).
A link to this blog entry popped up in my LinkedIn feed today, which in turn linked to a Developerworks article - Combine business process management and blockchain - which steps you through a use case and allows you to build your own basic BPM & Blockchain demo. Complex processes could save and get data to/from Blockchain, ensuring that every process in any organisation (within the same company and across company boundaries) is using the most up-to-date data.
I thought it would be appropriate to paste in a link given my previous post on Blockchain in Telcos. As I think about this topic more, I can see a few more use cases in Telecom. I'll explore them in subsequent posts, but for now, I think it's important that we be pragmatic about this. Re-engineering processes to make good use of blockchain is non-trivial and therefore will have a cost associated with it. Will the advantages in transparency and resilience be worth the cost of making the changes? Speaking of resilience, don't forget the damage that a failure can cause. British Airways' IT system (which I believe is outsourced, but I cannot be sure) was down for the better part of three days - failures like that have the potential to bring down a business. We don't know yet what will happen to BA in the long term, but you certainly don't want the same sort of failure happening to your business.
If, like me, you are hearing 'Blockchain this, blockchain that', it almost seems like blockchain will deliver world peace, solve global hunger and feed your pets for you! We're obviously at the 'peak of inflated expectations' of the Gartner hype cycle.
I saw a tweet yesterday from an ex-colleague at IBM that spoke about using blockchain to combat fraud in a Telco. While I can see that as a possible use case, I was thinking about other opportunities for blockchain.
Perhaps I need to explain blockchain briefly so that those that don't understand it can also understand the Telecom use cases for blockchain. Wikipedia defines it like this:
"A blockchain... is a distributed database that maintains a continuously growing list of records, called blocks, secured from tampering and revision. Each block contains a timestamp and a link to a previous block. By design, blockchains are inherently resistant to modification of the data — once recorded, the data in a block cannot be altered retroactively. Through the use of a peer-to-peer network and a distributed timestamping server, a blockchain database is managed autonomously. Blockchains are "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. The ledger itself can also be programmed to trigger transactions automatically."
So, it's an immutable record of changes to something. I was thinking about that yesterday and there were a number of use cases in Telecom that I could think of that could use blockchain. I'm not suggesting that they should use blockchain or that it's needed, just that they could. These are the Use cases I came up with:
Fraud prevention : being immutable makes it harder to 'slip one by' the normal accounting checks and balances that any large company has. I suppose the real question is 'exactly which records need to be stored in a blockchain to enable that fraud prevention?' The obvious one is the billing records.
Billing - maintaining the state of post-paid billing accounts: who is making payments, billing amounts and other billing events (such as rate changes, grace periods etc.)
Tracking changes to the network. At the moment, many of the changes being made in a Telco's network may be made by staff, but increasingly, maintenance and management of the network is being outsourced to external companies, and you want to keep an eye on them to ensure they're doing what they say they're doing. In the new world of Software Defined Networks (SDN) utilising Network Function Virtualisation (NFV) to build and change the network architecture at a rate that we've not seen before, it becomes important for a Telco to be able to track changes to the network to diagnose faults and customer complaints. Over a 24-hour period, a path on a network that supports enterprise customer X may change tens of times - a much higher frequency than would be possible if the network elements were physical.
Tracking changes to accounts by customers and telco staff - I could imagine a situation where a customer claims that they didn't request a configuration change, but a blockchain-based record of changes could be used to track back through all the changes in a customer's account to determine what happened and when - potentially enabling a Telco to limit its liability to the customer... or vice versa...
Tracking purchases - A blockchain record of purchases would allow a CSP to rebuild a customer's liability from base information; provided there was an immutable record of the data records as well...
xDRs - any type of Data Record (CDRs, EDRs...) could be stored in a blockchain to facilitate rebuilding of a client's history and billing records from base data. The problem with using a blockchain to store xDRs is the size requirement. I know that large CSPs in India, for example, produce between five and ten BILLION records per day. It wouldn't take long for that to build up to a very large storage requirement - even if you store only the mediated data records, it's going to be very large. I guess the question is: 'what is the return on investment?' - is it worthwhile doing? I can't think of a business case to justify such an investment, but there may be one out there.
Assurance events - Recording records associated with trouble tickets and problem resolution.
I don't for a second think that all of these can be justified in terms of cost/benefit analysis, but I could see blockchain being used in these scenarios.
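To make the 'immutable record' idea behind all these use cases concrete, here is a toy hash-chained ledger over CDR-like records. This is a sketch only - a single-node hash chain, not a real distributed blockchain (no peers, no consensus, no distributed timestamping) - and all names are my own:

```python
# Toy hash-chained ledger: each block's hash covers its record and the
# previous block's hash, so a retroactive edit anywhere breaks the chain.
import hashlib
import json

def make_block(record, prev_hash):
    body = {"record": record, "prev_hash": prev_hash}
    block = dict(body)
    # Deterministic serialisation so the hash is reproducible.
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain):
    prev = "0" * 64  # genesis block links to an all-zero hash
    for block in chain:
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

# Chain two CDR-like records together.
genesis = make_block({"cdr": "call A->B, 120s"}, "0" * 64)
chain = [genesis, make_block({"cdr": "call B->C, 30s"}, genesis["hash"])]
print(verify_chain(chain))            # the untouched chain verifies

chain[0]["record"]["cdr"] = "tampered"
print(verify_chain(chain))            # the retroactive edit is detected
```

This also makes the xDR storage concern above tangible: every record carries the overhead of its block, so billions of records per day add up very quickly.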
Do you have any ideas? Please leave a comment below.
I realise I missed the usual business case that blockchain is used for - a financial ledger. Obviously storing a CSP's financial data in a blockchain would work (and make sense) as it would in ANY other enterprise. I really wanted to illustrate the CSP specific use cases for blockchain.
This post is an update to my earlier post which is now sadly mostly incorrect because IBM's web site has been completely restructured and none of the links I provided previously are valid any more.
I know this isn't strictly related to my normal industries, but it is applicable for anyone who wants to chat with IBMers, so I thought it was valuable enough to share. For a number of years now, my email signature has included a link for non-IBMers to contact me via Sametime. If you're an IBMer reading this, you might consider linking to this post in your email signature to allow your customers and partners to chat with you via Sametime.
Here is a step by step guide to setting it up so that you can chat with IBMers over Sametime/IBM Instant Messaging.
There are a few things you'll need for this to work:
An ibm.com id - these are free and available from Sign up for an IBMid if you don't already have one
A Sametime/IBM Instant Messaging compatible client installed on your computer/device. Previously a web client was available; however, that link is no longer working, so a 'fat client' install would seem to be the way to go. You can download the latest Sametime client from the Lotus Greenhouse site, which will also require a (free) ID to be created. This is a different ID to the IBMid mentioned above, but just as quick and easy to get. You can use other non-IBM clients such as Adium or Pidgin, but those clients will require some 'hacking' to allow them to connect to the IBM Instant Messaging Gateway - if you're keen, please check out this blog post from nomaen that details that configuration. Personally, the IBM client does the job really nicely and is available for Windows, Mac, and Linux (RPM and DEB), so I'd just go that route.
Once you have your client installed, you'll want to set up a server community for the IBM IM Gateway. The details you need are:
Host Server : extst.ibm.com
Server Community Port : 80
Connection : Direct connection using HTTP protocol
See these screen dumps for reference...
Once you login with your IBMid, you'll be presented with the ST client and no one in your buddy list. Sending instant messages to yourself isn't very interesting, and what you really want to do is chat with IBMers - so let's add an IBMer to your buddy list so that you can chat with them...
You will need to know their Internet email address, as you have to type it in manually - you will not be able to search for them. Select the 'Add external person by email address' radio button, then type in their email address and name, and assign a group if you want to group your contacts. If you don't know their email address, you can search here to find it.
Once you click on 'add' a popup will appear telling you that the IBMer will need to approve you to be able to see their status and chat with them through the IM Gateway.
NB. In the buddylist - the au1.ibm.com is my internal Sametime community id (which is the same as my email address) and the optusnet.com.au email address is my ibm.com id.
Once you've added your IBM contacts, you're up and running and the interface should look something like this (below):
A chat session between my two IDs (my IBMid and my internal id) looks like this in both the standalone client (used for my external IBMid and the embedded client in my IBM Notes client - on Linux)
and the internal view of the same conversation:
You might notice that all the rich text, file, image functions are greyed out - that's because they are not supported by the external IBM gateway so you'll be restricted to plain text in your chats...
This capability is not well known among IBMers, but I have spoken with a number of partners, exIBMers and my wife via this facility in the past.
Hopefully, this post will spread the word a bit more....
The TeleManagement Forum (TMF) have defined a set of four frameworks collectively known as Frameworx. The key frameworks that will deliver business value to the CSP are the Information Framework (SID) and the Process Framework (eTOM). Both of these can deliver increased business agility - which will reduce time to market and lower IT costs. In particular, if a CSP is undertaking multiple major IT projects in the near term, TMF Frameworx alignment will ease the pain associated with those major projects.
Without a Services Oriented Architecture (SOA) - which is the situation many CSPs are currently in - there is no common integration layer and no common way to perform the format transformations with which multiple systems can communicate correctly. A typical illustration of this point-to-point integration might look like the illustration to the right:
Each of the orange ovals represents a transformation of information so that the two systems can understand each other - each of which must be developed and maintained independently. These transformations will typically be built with a range of different technologies and methods, thus increasing the IT costs of building and maintaining such transformations, not to mention maintaining competency within the IT organisation.
A basic SOA environment introduces the concept of an Enterprise Service Bus which provides a common way to integrate systems together and a common way of building transformation of information model used by multiple systems. The Illustration below shows this basic Services Oriented Architecture - note that we still have the same number of transformations to build and maintain, but now they can be built using a common method, tools and skills.
If we now introduce a standard information model such as the SID from the TeleManagement Forum, we can reduce the number of transformations that need to be built and maintained to one per system, as shown in the illustration below. Ensuring that all the traffic across the ESB is SID-aligned means that as the CSP changes systems (such as CRM or Billing), the effort required to integrate the new system into the environment is dramatically reduced. That will enable the introduction of new systems faster than could otherwise have been achieved. It will also reduce the ongoing IT maintenance costs.
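A minimal sketch of that 'one transformation per system' idea. The canonical record here is invented for illustration - the real SID Customer/Party entities are far richer - and all field names are my own, not from any actual CRM or billing product:

```python
# Hedged sketch: each system gets exactly one transformation, to/from a
# canonical (SID-like) record carried on the ESB. All names hypothetical.

def crm_to_canonical(crm_record: dict) -> dict:
    """Transform a native CRM record into the canonical model."""
    return {
        "partyId": crm_record["cust_no"],
        "name": crm_record["full_name"],
        "contactMedium": [{"type": "email", "value": crm_record["email_addr"]}],
    }

def canonical_to_billing(canonical: dict) -> dict:
    """Transform the canonical model into the billing system's format."""
    return {
        "account_ref": canonical["partyId"],
        "bill_name": canonical["name"],
    }

# CRM -> canonical -> billing; neither end system knows about the other.
canonical = crm_to_canonical(
    {"cust_no": "C-1001", "full_name": "Jane Citizen",
     "email_addr": "jane@example.com"})
print(canonical_to_billing(canonical))
```

The payoff is in the swap-out scenario: replacing the CRM only requires rewriting `crm_to_canonical()`; the billing side, and every other system on the bus, is untouched.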
As I'm sure you're aware, most end to end business processes need to orchestrate multiple systems. If we take the next step and insulate those end to end business processes from the functions that are specific to the various end point systems using a standard Process Framework such as eTOM, then business process can be independent of systems such as CRM, Billing, Provisioning etc. That means that if those systems change in the future (as many CSPs are looking to do) the end to end business processes will not need to change - in fact the process will not even be aware that the end system has changed.
When changing (say) the CRM system, you will need to remap the eTOM business services to the specific native services and rebuild a single integration and a single transformation to/from the standard data model (SID). This is a significant reduction in the effort required to introduce new systems into the CSP's environment. Additionally, if the CSP decides to take a phased approach to the migration of the CRM systems (as opposed to a big bang), the eTOM-aligned business processes can dynamically select which of the two CRM systems should be used for a particular process instance.
What that means for the CSP.
Putting in place a robust integration and process orchestration environment that is aligned to TMF Frameworx should be the CSP's first priority; this will not only allow the subsequent major projects integration and migration efforts to be minimised, it will also reduce the time to market for new processes and product that the CSP might offer into the market.
Telekom Slovenia is a perfect example of this. When the Slovenian government forced Mobitel (Slovenia) and Telekom Slovenia to merge, the alignment with the SID and eTOM within Mobitel allowed the merged organisation to meet the government's deadlines for the specific target KPIs:
Be able to provide subscribers with a joint bill
Enable CSR from both organisations to sell/service products from both organisations
Offer a quad-play product that combined offerings from both Telekom Slovenia and Mobitel
All within six months.
When a CSP is undertaking multiple concurrent major IT replacement projects, there are a number of recommendations that IBM would make, based on past observations of other CSPs that have also undertaken significant and multiple system replacement projects:
Use TMF Frameworx to minimise integration work (requires integration and process orchestration environment such as the ESB/SOA project is building) to be in place
Use TMF eTOM to build system-independent business processes, so that as those major systems change, end-to-end business processes do not need to change and can dynamically select the legacy or new system during the migration phases of the system replacement projects.
To achieve 1 and 2, the CSP will first need the SOA and BPM infrastructure in place, capable of integrating with ALL of the systems within the CSP (not just (say) CRM or ERP)
If you have the luxury of time, don't try to run the projects simultaneously; run them sequentially instead. If this cannot be achieved due to business constraints, limit the concurrent projects to as few systems as possible, and preferably to systems that don't have a lot of interaction with each other.
Operators hoping to engage in widespread deployment of voice over LTE in order to gain spectral efficiencies in their network may face some unhappy customers because one vendor's recent tests showed that VoLTE calls can slash a device's talk-time battery life by half.
For years now, we've known that higher speed mobile networks would mean more power required in handsets to maintain the higher bandwidth connections. I recall it being raised as a concern when UMTS (3G) was being rolled out while GPRS and EDGE were the dominant technologies in the mobile data networks. In fact, while I am travelling, I often switch off my 3G/3.5G network capability and drop back to GPRS and EDGE just to make my battery last through the day. It's interesting that it has now been quantified like this.
When you think about it though, it makes sense. VoLTE (Voice over LTE) is not using a traditional GSM or CDMA circuit, rather it is using a packet data network to encapsulate the voice traffic - so it is voice over a data network. We've known for a long time that data traffic (particularly higher speed data traffic) uses a lot more power than voice traffic. More power equals less talk time from the same charge.
This study is a US-based one, so it brings the baggage of CDMA rather than the GSM used by most of the rest of the world, but I think there are lessons here for the GSM carriers too. CDMA battery life (in my experience) has been on a par with GSM battery life, so I think it would be reasonable to equate the CDMA battery life in this study with GSM battery life.
I am seeing more and more countries around the world clawing back 2G spectrum for use with Digital TV, LTE or other local requirements. At some point in the future (at least in some markets), the only voice traffic will be VoLTE, and those subscribers will have severely reduced standby and talk time compared to mobile phones of a few years back. Will that lead to a backlash in the community? By that point it may be too late, with the spectrum redeployed for other uses. Will we end up with VoLTE being the only voice option in some countries while others still have CDMA or GSM voice networks, and will that complicate things for phone manufacturers? Remember the days of so-called 'Global phones' that had to cater to all the different spectrums used around the world? Yes, multi-band phones became pervasive, but will Global Phones that retain backward compatibility with GSM networks be so popular when the primary channel for mobile phone distribution is still the carriers themselves, and they have committed to VoLTE in their own country?
Who knows. I do think that we'll end up with a big group of primarily voice subscribers who aren't going to be happy campers!
Last week, I was at the TeleManagement Forum's (TMF) Africa Summit event in Johannesburg, South Africa. The main reason for me attending was to finish off my TMF certifications (I am currently Level 3) in the process framework (eTOM); if I have passed the exam, I will be Level 4 certified. It was a really tough exam (75% pass mark), so I don't know if I did enough to get over the line.
Regardless, the event was well attended with 200-230 attendees for the two days of the conference. It was interesting to hear the presenter's thoughts on telco usage within Africa into the future. Many seemed to think that video would drive future traffic for telcos. I am not so sure.
In other markets around the world, video was also projected to drive 3G network adoption, yet this has not happened anywhere. Why do all these people think that Africa will be different?
I see similar usage patterns in parts of Asia, yet video has not taken off there. Skype carries many more voice-only calls than video calls. Apple's FaceTime video chat hasn't taken off like Apple predicted. 3G video calls make up a tiny proportion of all calls made.
Personally, I think that voice (despite its declining relative popularity in the developed world) will remain the key application in Africa for the foreseeable future, especially voice over LTE. I also think that social networking (be it Facebook, Friendster, MySpace or some other Africa-specific tool) will drive consumer data (LTE) traffic. Humans are social animals, and I think these sorts of social interactions will apply just as much in the African scenario as they have in others.
The other day, I was at a customer proof of concept, where the customer asked for 99.9999% availability within the Proof of Concept environment. Let me briefly explain the environment for the Proof of Concept: we were allocated ONE HP ProLiant server with twelve cores and needed to run the following:
IBM BPM Advanced (BPM Adv)
WebSphere Operational Decision Management (WODM)
WebSphere Services Registry & Repository(WSRR)
Oracle DB (not sure what version the customer installed).
Obviously we needed to use VMware to deploy the software, since installing all of the software directly on the server (and being able to demonstrate any level of redundancy) would be impossible.
Anyone who understands High Availability as I do would say it can't be done in a Proof of Concept - and I agree. Yet our competitor claims they have demonstrated six nines (99.9999% availability) in this Proof of Concept environment, deployed on the customer's hardware; hardware that did not have any redundancy at all. I call shenanigans on the competitor's claims. Unfortunately for us, the customer swallowed the claim hook, line and sinker.
I want to explain why their claim of six nines cannot be substantiated and why the customer should be sceptical as soon as a vendor - any vendor - makes such claims. First, let's think about what 99.9999% availability really means. To quantify that figure: it means 31.5 seconds of unplanned downtime per year! For a start, how could you possibly measure availability for a year over a two-week period? Our PoC server VMs didn't crash for the entire time we had them running - does that entitle us to claim 100% availability? No way.
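The arithmetic is easy to check for yourself; here is a quick sketch of the unplanned-downtime budget at each availability level:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 (ignoring leap years)

def downtime_budget(nines):
    """Maximum unplanned downtime per year, in seconds, for N nines."""
    availability = 1 - 10 ** (-nines)
    return SECONDS_PER_YEAR * (1 - availability)

for n in range(3, 7):
    print(f"{n} nines: {downtime_budget(n):,.1f} seconds/year")
# Six nines works out to about 31.5 seconds of downtime per year --
# clearly not something you can demonstrate over a two-week PoC.
```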
The simple fact is that the Proof of Concept was deployed in a virtualised environment on a single physical machine - without redundant Hard Drives or power supplies - there is no way we or our competition could possibly claim any level of availability given the unknowns of the environment.
In order to achieve high levels of availability, there can be no single point of failure. That means no failure points in the Network, the Hardware or the Software. For example, that means:
Multiple redundant Network Interface Controllers
RAID 1+0 drive arrays
Multiple redundant power supplies
Multiple redundant network switches
Multiple redundant network backbones
Minimised unused OS services
Software clustering capabilities (WebSphere n+x clustering *)
Active automated management of the software and OS
Database replication/clustering (e.g. Oracle RAC or DB2 HADR)
HA on network software elements (e.g. DNS servers)
We need to go back to the Telco and impress upon them that six nines availability depends on all of the above factors (and probably some others!) and is not just a matter of measuring the availability of the software over a short (and non-representative) sample period.
Typically this level of HA is very expensive; indeed, every additional '9' increases the cost exponentially - that is, six nines (99.9999% availability) is exponentially more expensive than five nines (99.999% availability). I found this great diagram that illustrates cost versus HA level.
This diagram is actually from an IBM Redbook (see http://www.redbooks.ibm.com/redbooks/pdfs/sg247700.pdf), which has a terrific section on High Availability - it illustrates the compromise point between the level of high availability (aiming for continuous availability) and the cost of the infrastructure to provide that level of availability.
n is number of servers needed to handle load requirements
x is the number of redundant nodes in the cluster - to achieve six 9's, this should be in excess of 2
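As a back-of-envelope illustration of why x needs to exceed 2, here is a sketch that makes a deliberately generous assumption: nodes fail independently and each node is 99% available on its own. Real clusters share power, network and storage failure modes, so treat the numbers as an upper bound, not a prediction.

```python
from math import comb

def cluster_availability(node_availability, n, x):
    """Probability that at least n of the n+x cluster members are up,
    assuming independent node failures (a generous assumption)."""
    total = n + x
    p_down = 1 - node_availability
    # Sum the probability of 0..x simultaneous node failures,
    # i.e. all the states in which the cluster can still carry the load
    return sum(
        comb(total, k) * p_down**k * node_availability**(total - k)
        for k in range(x + 1)
    )

# With n=4 nodes needed for load and 99%-available nodes:
print(cluster_availability(0.99, 4, 2))  # about 0.99998: well short of six nines
print(cluster_availability(0.99, 4, 3))  # one more spare node crosses six nines
```

Even under these idealised assumptions, x=2 falls short of six nines, which is the point of the note above.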
Further to my last post, it now looks like the WAC is completely dead and buried.
One thing that is creating a lot of chatter at the moment is TelcoML (Telco Markup Language) - there is a lot of discussion about it on the TeleManagement Forum (TMF) community site. While I don't intend to get into a big discussion about TelcoML, I do want to talk about Telco standards in general.
The Telco standards that seem to take hold are the ones with a strong engineering background - I am thinking of networking standards like SS7, INAP, CAMEL, SIGTRAN etc. The Telco standards focussed on the IT domain (like Parlay, ParlayX, OneAPI, ParlayREST and perhaps TelcoML) seem to struggle to get real penetration. Sure, standards are good - they make it easier and cheaper for Telcos to integrate and introduce new software, and they make it easier for ISVs to build software that can be deployed at any telco. So, why don't they stick?
Why do we see a progression of standards that are well designed and have the collaboration of a core set of telcos around the world (I'm thinking of the WAC here), yet nothing comes of them? If we look at Parlay, for example: sure, CORBA is hard, so I get why it didn't take off. But ParlayX with web services is easy - pretty much every IDE in the world can build a SOAP request from the WSDL for the web service - so why didn't it take off? I've spoken to telcos all around the world about ParlayX, but it's rare to find one that is truly committed to the standard. Sure, the RFPs say 'must have ParlayX', but after they implement the software (Telecom Web Services Server in IBM's case), they either continue to offer their previous in-house developed interfaces for those network services and don't use ParlayX, or they just don't follow through with their plans to expose the services externally. Why did we bother? ParlayX stagnated for many years with little real adoption from Telcos. Along came GSMA with OneAPI and the mantra 'ParlayX web services are still too complicated; let's simplify them and also provide a REST-based interface'. No new services, just the same ones as ParlayX, but simplified. Yes, I responded to a lot of Requests For Proposal (RFPs) asking for OneAPI support, but I have not seen one telco that has actually exposed those OneAPI interfaces to 3rd party developers as they originally intended. So now OneAPI doesn't really exist any more, and we have ParlayREST as a replacement. Will that get any more take-up? I don't think so.
The TMF Frameworx seem to have more adoption, but they are the exception to the rule.
I am not really sure why Telco standards efforts have such a tough time of it, but I suspect that it comes down to:
Lack of long-term thinking within telcos - there are often too many tactical requirements to be fulfilled, and the long-term strategy never gets going (much like governments with four-year terms not being able to get 20-year projects over the line - they're too worried about getting the day-to-day things patched up and then getting re-elected)
Senior executives in Telcos that truly don't appreciate the benefits of standardisation - I am not sure if this is because executives come from a non-technical background or some other reason.
What to do? I guess I will keep preaching about standards - they are fundamental to IBM's strategy and operations, after all - and keep up with the new ones as they come along. Let's hope that Telcos start to understand why they should be using standards as much as possible; after all, standards will make their life easier and their operations cheaper.
"Apigee, the API management company that was most recently spotted powering that new “print to Walgreens” feature in half a dozen or so mobile applications, is now acquiring the technology assets of WAC, aka the Wholesale Applications Community. WAC, an alliance of global telecom companies, like AT&T, Verizon, Sprint, Deutsche Telecom, China Mobile, Orange, and others (and pegged by TechCrunch writer Jason Kincaid back in 2010 as “a disaster in the making“) was intent on building a platform that would allow mobile developers to build an application once, then run it on any carrier, OS or device. The group also developed network API technology, which is another key piece to today’s acquisition."
I think this is a really interesting development. The Wholesale Application Community (WAC) was supposed to give Telcos a way of minimising the revenue losses to the likes of Apple's App Store and Google Play. IBM's Telecom Solution Lab in France built a demonstration, shown at Mobile World Congress (MWC) in 2011, of how a Telco's own app store could incorporate applications from the WAC App Store as well as other app stores within its own combined app store. I've demonstrated this a number of times around the world, and the thing that always seemed odd to me is that applications in the WAC App Store could not be native applications (for Android, Blackberry, WinMob or Symbian); they could ONLY be HTML5-based apps. That was always going to limit the number of apps in the WAC App Store, and since the WAC was announced at MWC 2010, the number of apps in the store has never really taken off.
I'm not sure if this is effectively the end of the road for the WAC, or if it's just a stop on their journey. Certainly, the Telcos that I have dealt with that form the core WAC Telco members remain dedicated to the WAC. I guess we'll have to wait and see what happens.
Here is the URL for this bookmark: gizmodo.com/5857897/this-is-not-a-test-the-emergency-alert-system-is-worthless-without-social-networks
This makes for an interesting comparison to the National Emergency Warning System (NEWS) that was implemented in Australia last year as a result of the Black Saturday bushfires. Of particular interest is that the USA has avoided the SMS channel, when in Australia that has been the primary channel - alternatives like TV and Radio are seen as less pervasive and thus a lower priority. I don't think that NEWS here in Oz is connected to Twitter, Facebook, Foursquare or any other social networking site either, but that could be an extension to NEWS - the problem is getting everyone to "friend" the NEWS system so that they see updates and warnings!
While I can understand HP getting out of the PC business - it's a very competitive marketplace with low margins; after all, that is why IBM sold its PC division to Lenovo - what surprises me is the timing. Only 18 months after buying Palm for US$1.2 Billion, they're cutting their losses and shedding it.
Since I don't live in the US, I can't comment on the marketing push that HP put behind the Pre and the TouchPad, but I've never seen any marketing for them. When your competitor is Apple, the only way to make any dent is to push and push hard. They needed to out-market Apple, and I'm sure I don't need to tell you how difficult and expensive that would be!
Yesterday, IBM launched the latest iteration of the Service Provider Delivery Environment (SPDE), a software framework for Telecom that has been around since 2000. Over the years, it has evolved with changes in market requirements and architecture maturity. The link below is for the launch:
The following enhancements are part of the new SPDE 4.0 Framework:
1. CSP Business Function Domains – a clear articulation of “communications service provider business domains” that describe the business functions that are common to any service provider across the world. These business domains offer us a simpler way to introduce the SPDE capabilities to a LOB audience, as well as to other client and partner constituents that are new to SPDE:
Sales & Marketing
2. New Capabilities - In the areas of cloud, B2B commerce, enterprise marketing management, business analytics, and service delivery.
3. Introduction of the SPDE Enabled Business Projects - that deliver solutions to address common business and IT needs for the LOB (CIO/CTO/CMO) and represent repeatable solutions and patterns harvested from client engagements.
4. Improved alignment with TeleManagement Forum (TMF) Industry Standards - a clearly defined depiction of the areas of alignment to TMF Frameworx - key industry standards that underpin much of the communications industry investment.
5. Simplified Graphics and Messaging - to improve ease of adoption and consumability by a broader LOB audience.
Built on best practices and patterns from client engagements with CSPs around the world, IBM SPDE 4.0 is the blueprint that enables Smarter Communications by helping deliver value-added services that launch smarter services, drive smarter operations and build smarter networks. IBM is leading a conversation in the marketplace about how our world is becoming smarter, and software is at the very heart of this change. IBM's Industry Frameworks play a critical role in our ability to deliver smarter planet solutions by pulling together deep industry expertise, technology and a dynamic infrastructure from across the company to provide clients with offerings targeted to their industry-specific needs. Disclaimer: I have 'borrowed' some of the text from an IBM Marketing email about the new SPDE 4.0 framework - so not my words...
I am in Dublin at the moment for TeleManagement World 2011, which has changed location from Nice, France, last year. It looks to be a very interesting conference. I've already done two days of training, and now we're beginning the sessions. The keynote session has the Irish Minister for Communications, Mr Rabbitte, who is talking about the challenges that CSPs face all the world around. He is also talking about an innovation programme that the Irish Government has started called 'Exemplar', which is part of their NGN trial network. I'll see if I can get some more info over the next few days... Steven Shurrock, the new CEO at O2 Ireland, who has been in the role for just six months, is very bullish about the opportunities in Ireland for data services. After Steven, we saw a host of keynote speakers focused on a number of themes; common themes included:
Standards compliance - including certification against standards. Particularly with the TMF Frameworx standards
Horizontal platforms and moving away from silos is their IT strategy
SOA is the basis for all of the new IT initiatives
I have recorded a number of the keynote speakers on video, but for the time being those files are very large. Once I have had a chance to transcode them to a smaller size, I'll add them to the blog as well - while not particularly technical, they're very interesting from a Telecom perspective.
OK, I know that over the past six months or so my blog has sat idle. For that I apologise. I could blame workload, personal issues, the amount of travel and so on, but I am just going to cop it on the chin and say that I am sorry to anybody out there who can be bothered to read my posts. In light of the fresh start, I am going to change the name of the blog from Telco Talk to ...
Well, that's the thing: I haven't decided yet what I should change it to. The content isn't going to change - it will continue to be Telco focused, so I don't want to start a new blog from scratch; I will just rename this one. I just need some inspiration for the new name. Within IBM, our global marketing folks have decreed that we should no longer use the term "Telco" and that instead we should use "Communications Service Provider", or CSP for short. As a result, I was thinking about changing the blog name to "CSP Comms" or "CSP Communiqué". Before I change it, I would like your opinion (if there is anyone out there) or suggestions for a new name.
I'll be watching my blog comments with bated breath, so please comment and suggest names.
The threatened ban was narrowly averted, and India looks as if it will avoid a ban after all. I wonder if RIM installed (or promised to install) a Network Operations Centre in the UAE (which is what I saw as a possible way of appeasing the authorities) or if they have come up with some other way to give the UAE authorities access to the encrypted traffic.
In the meantime, India has hinted (per my previous post) that they will be going after private VPN traffic in addition to the Blackberry traffic. We'll see where that ends up soon I guess.
I know I have been lax in posting recently. I've had a lot of work on and I am sorry for not getting to the blog.
That said, over the past few weeks I have been watching what seems to be a snowballing issue of governments spying on their citizens in the name of protection from terrorism. First cab off the rank was India, a couple of years ago, asking Research In Motion (RIM) for access to the data stream for Indian Blackberry users, then asking for the encryption keys. That went quiet until recently: on 1Jul10, the Indian Government again asked RIM for access to the Blackberry traffic and gave RIM 15 days to comply (see Indian govt gives RIM, Skype 15 days notice, warns Google - Telecompaper). That deadline has passed, and the Indian government yesterday gave RIM a new deadline of 31Aug10 (see Indian govt gives 31 August deadline for BlackBerry solution - Telecompaper). In parallel, a number of other nations have asked their CSPs or RIM for access to the data sent via Blackberry devices.
First was the United Arab Emirates (UAE), which will put a ban on Blackberry devices in place, forcing the local Communications Service Providers (CSPs) to halt the service from 11Oct10. RIM are meeting with the UAE government, but who knows where that will lead, with the Canadian government stepping in to defend its golden-haired child, RIM. Following the UAE ban, Saudi Arabia, Lebanon and more recently Indonesia have all said they will also consider a ban on RIM devices. As an interesting aside, I read an article a week ago (see UAE cellular carrier rolls out spyware as a 3G "update") suggesting that the UAE government sent all Etisalat Blackberry subscribers an email advising them to update their devices with a 'special update' - it turns out that the update was a Trojan which delivered a spyware application to the Blackberry devices, allowing the government to monitor all the traffic! (wow!)
Much of the hubbub seems to be around the use of Blackberry Messenger, an Instant Messaging function similar to Lotus Sametime Mobile, but hosted by RIM themselves which allows all Blackberry users (even on different networks and telcos) to chat to each other via their devices.
I guess at this stage it might be helpful to describe how RIM's service works. From a historical point of view, RIM were a pager company. Pagers need a Network Operations Centre (NOC) to act as a single point from which to send all the messages out to the pagers. That's where all the RIM contact centre staff sat and answered phones, typed messages into their internal systems and sent the messages out to the subscribers. RIM had the brilliant idea of making their pagers two-way, so that the person being paged could respond, initially with just an acknowledgement that they had read the message, and later with full text messages. That's the point at which the pagers gained QWERTY keyboards. From there, RIM made the leap in functionality to support emails as well as pager messages; after all, they had a full keyboard, a well-established NOC-based delivery system and a return path via the NOC for messages sent from the device. The only thing that remained was a link into an enterprise email system. That's where the Blackberry Enterprise Server (BES) comes in. The BES sits inside the enterprise network, connects to the Lotus Domino or MS Exchange servers and acts as a connection to the NOC in Canada (the home of RIM and the location of the RIM NOC). The connection from the device to the NOC is encrypted, and from the NOC to the BES is encrypted. Because of that encryption, there is no way for a government such as India, the UAE, Indonesia, Saudi Arabia or others to intercept the traffic over either of the links (to or from the NOC).
Last time I spoke to someone at RIM about this topology, they told me that RIM did not support putting the BES in the DMZ (where I would have put it) - since then, this situation may have changed.
Blackberry Messenger traffic doesn't go to the BES; instead it goes from the device up to the NOC and then back down to the second Blackberry, which means that non-enterprise subscribers also have access to the messenger service. This appears to be the crux of what the various governments are concerned about: anybody, including a terrorist, could buy a Blackberry phone and have access to the encrypted Blackberry Messenger service without needing to connect their device to a BES. That explains why the governments don't seem to be chasing the other VPN vendors (including IBM, with Lotus Mobile Connect) for access to the encrypted traffic between the device and the enterprise VPN server. Importantly, other VPN vendors typically don't have a NOC in the mix (apart from the USA-based Good, who have a very similar model to RIM). I guess the governments don't see the threat coming from the enterprise customers, but rather from the individuals who buy Blackberry devices.
To illustrate how a VPN like Lotus Mobile Connect differs from the Blackberry topology above, have a look at the diagram below:
Lotus Mobile Connect topology
If we extend that thought a little further, a terrorist cell could set themselves up as a pseudo-enterprise by deploying a traditional VPN solution in conjunction with an enterprise-type instant messaging server, and thereby avoid the ban on Blackberries. The VPN server and IM server could even be located in another country, which would avoid the possibility of the government easily getting a court order to intercept traffic within the enterprise environment (on the other end of the VPN). It will be interesting to see if those governments try to extend the reach of their prying to this type of IM strategy...
When I last posted about New Zealand's National Broadband project, it seemed to me to be much more focused on the subscribers and the products they would have available to them (and the retailers that sold them) than on the high speed backbone network. My impressions may have been tainted by the work I was doing with the Telecom New Zealand Undertaking In Progress (UIP) project - the rather public forced split of Telecom New Zealand's Retail, Wholesale and Network departments to ensure equivalency of input for all retail and wholesale partners for (only) broadband services.
My understanding of the situation has developed somewhat since then, and we can see that New Zealand also has a structure similar to what is happening in Australia with the Communications Alliance and the NBN Company. In New Zealand, the companies are a little different. We have the NZ Government Ministry of Economic Development (MED) as one participant, then Crown Fibre Holdings (not much of a web site there!), set up by the Government to manage the process of selecting the companies to build the National Broadband Network and to manage the government's investment in the NBN. Together with the companies that are bidding for the deal, Crown Fibre Holdings will form Local Fibre Companies (LFCs) which (combined) will match the government's contribution to the NBN. That means the total project will cost NZ$3 Billion**, with the LFCs kicking in NZ$1.5B and the NZ government contributing NZ$1.5B. I don't have the full schedule, but from a couple of sources I have compiled an overview of the progress to date:
21 October 2009 - Communications and Information Technology Minister Steven Joyce announced the government's process for selecting private sector co-investment partners.
13 November 2009 - Intention to respond due.
9 December 2009 - The Ministry and Crown Fibre Holdings released clarifications and amendments
14 January 2010 - The Ministry and Crown Fibre Holdings released additional clarifications and amendments with respect to the Invitation to Participate.
29 January 2010 - Proposals must be lodged
4 February 2010 - Crown Fibre Holdings notify respondents of handover of responsibility for the partner selection process
October 2010 - Successful respondents announced/notified.
What I find a bit interesting is that the government are only looking to cover 75% of the population by 2019. For a small country (compared to Australia at least), that seems to me to be a very low target to aim for. If we compare that with Australia's NBN project, the target there is 90% coverage at greater than 100Mbps and 10% at greater than 12Mbps (that's 100% coverage!) by 2017. Admittedly, the Australian project has about a year's head start, but it's also a MUCH bigger country with a population nearly five times larger. Let's have a quick look at the comparisons:
[Comparison table: cost per person (US$/person) and cost per area (US$/km²) for the Australian and New Zealand NBN projects]
* 100% coverage is split between greater than 100Mbps (90%) and greater than 12Mbps (10%)
** One Billion is using the short scale definition = 10^9 = 1,000,000,000
What do I take from this quick comparison? Let's take a quick look at the numbers. Obviously, Australia is a much bigger country (28.4 times larger) and has a much larger population (5.2 times larger), so it is reasonable (in my opinion) that the cost per potential NBN customer should be higher for Australia (and it is, at 2.2 times higher). The thing that makes me ponder is the cost per square kilometre: New Zealand's is nearly twice that of Australia's. When the New Zealand target is only 70% of the population, which enables them to avoid areas that are physically difficult to provide coverage to (I'm no NZ geologist, but I would imagine much of the South Island's most mountainous terrain would pose significant problems for cablers), I find myself wondering why the NZ network is going to be so expensive. I guess it could be a matter of scale - but I thought the biggest cost was actually laying the cables, rather than the back-end systems that every broadband network will need (routers, switches, administration and management systems). Maybe I am missing something - does anyone have any ideas?
edit: I've just found this quote in Wikipedia which (I think) is truly revealing when you consider New Zealand's 70% coverage target:
"New Zealand is a predominantly urban country, with 72% of the population living in 16 main urban areas and 53% living in the four largest cities."
By only extending the NBN to those 16 main urban areas and nowhere else, they've achieved their target! You wouldn't want to live in country New Zealand and be dependent on a fast network!
I was looking at where some of the traffic for this blog comes from this morning. Someone had used Google to search for "ibm sdp cloud", which I am glad to say yielded this blog as the third and fourth results. Above Telco Talk in the results was a 2005 post from fellow MyDeveloperWorks blogger Bobby Woolf, What is in RAD 6.0 - which is interesting in that the post wasn't about Service Delivery Platforms and the term "SDP" is only mentioned in the comments on the post, yet it rated higher in Google's index than my posts, which have been about cloud, SDPs or both! That's another conversation though...
The thing that really caught my attention was a new whitepaper from IBM on Smarter Homes. This has been an ongoing area of interest for me for a few years now. This new whitepaper "The IBM vision of a smarter home enabled by cloud technology" is interesting - it talks about some of the concepts that I have seen coming over the past few years, but it also introduces the concept of cloud-based services providers as the key enabler outside the home to allow smarter homes to deliver on their lofty promises. The introduction of the whitepaper states:
A common services delivery platform based on industry standards supports cooperative interconnection and creation of new services. Implementation inside the cloud delivers quick development of services at lower cost, with shorter time to market, facilitating rapid experimentation and improvement. The emergence of cloud computing, Web services and service-oriented architecture (SOA), together with new standards, is the key that will open up the field for the new smarter home services.
The dependence on external networks (from our homes) and external Communications Service Providers presents an opportunity for them to provide much more than just the pipe to the house. This is an area that some Telcos are trying to tap into already. Here in Australia, Telstra have recently introduced a home-based smart device called the T-Hub which is intended to arrest some of the decline in homes installing or keeping land line phones (in Australia, more and more homes are buying a naked DSL or Hybrid Fibre Coax (HFC) service for Internet and using mobile phones for voice calls, not having a home phone service at all). I recently cancelled my Telstra Home Phone service, so I cannot buy one of the T-Hubs, and apparently it won't work with my home phone service via my HFC connection. It is an intriguing idea though. I find myself wondering if Telstra's toe in the Smarter Home pond is too little too late. For years, in Telstra's Innovation Centres (one in Melbourne and one in Sydney) they had standing demonstrations of smarter home technology (I think the previous Telstra CEO, Sol Trujillo, closed them down). I even helped to install a Smarter Healthcare demo at the Sydney Telstra Innovation Centre a few years ago (more on that later) and their demos were every bit as good as the demos that IBM has at the Austin (Texas, USA) and La Gaude (France) Telecom Solutions Labs.
Further into the whitepaper, when talking about cloud-based Service Delivery Platforms (p. 10), there is a nice summary of why a Telco would consider a cloud deployment of their SDP:
An SDP in the cloud supports the expansion of the services scope by enabling new services in existing markets and by expanding existing services into new markets with minimum risk. By exposing standard service interfaces in the network, it enables third parties to integrate their services quickly, or to build new services based on the service components provided in the SDP. This creates the opportunity for new business models, for instance, for media distribution and advertising throughout multiple delivery scenarios.
I think this illustrates what all Telcos should be thinking about - the agility needed to compete in today's marketplace. Cloud is one way to enhance that agility but also adds elasticity - the ability to grow and shrink as the market demands grow and shrink. Sorry for rambling a bit there... some semi-random thoughts kept popping up when talking about Smarter homes and Telcos. Anyway, I would encourage you to have a read of the whitepaper for yourself. It's available at:
In just five months, Bharti Airtel's App Store has had over 13 million downloads. What a terrific example of a Telco app store in action and (presumably) making money for the Telco. This article came across my screen this afternoon, and it ties in with my previous posts about Bharti's App Store and about carriers wanting to get into app stores (something I've seen all over Asia) to try and arrest some of the revenue bleeding to Apple (and to a lesser extent Google, Nokia and RIM) through single-brand (phone) app stores.
The article is really brief, barely a footnote, but it does lay out some interesting facts:
13 Million downloads since Feb '10
Over 71,000 applications available, up from 1,250 at launch
Support for 780 different devices
1.2 downloads per second
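Those figures can be roughly sanity-checked against each other. The 150-day window below is my own assumption for "five months since Feb '10"; only the 13 million downloads figure comes from the article:

```python
# Rough consistency check: do 13 million downloads over ~5 months
# square with the quoted per-second rate?
downloads = 13_000_000
days = 150  # assumed length of the Feb-Jul 2010 window

per_second = downloads / (days * 24 * 3600)
print(f"Average rate: {per_second:.2f} downloads/second")  # ~1.00
```

The average works out to about one download per second, so the quoted 1.2/second is plausible as a more recent (accelerating) rate rather than the whole-period average.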
I guess having over 200 million subscribers does help achieve these sorts of numbers. I have a bit of background on Airtel's App Central store and the technology it uses, much of it IBM technology. IBM Portal and Mobile Portal Accelerator are used to drive the interface, which is able to support over 8,000 different devices from iPhones to WebTVs (remember them? They seem to be making a bit of a comeback at the moment) and everything in between. These screen dumps are from their old mobile site - I will post some new ones if I can get them soon.
Since I penned my last post, I have done some more reading on Facetime and watched Steve Jobs' launch of Facetime. While I will happily admit that Apple have in fact used some standards within their Facetime technology (Jobs lists H.264, AAC, SIP, STUN, TURN, ICE, RTP and SRTP as all being used), I am somewhat bemused by the "standards" discussion that most of the media seem to be focusing on with regard to Facetime. Almost everyone that refers to compliance with standards is talking about interoperability with current PC-based video chat capabilities - from the likes of Skype, MS Messenger, GTalk and others. Am I the only one that has noticed the iPhone 4 is not a PC and is in fact a mobile phone? Why is no one else questioning interoperability with existing video-call-capable mobile phones?
After thinking on this for a little while, I guess it might be that most of the media coverage about the iPhone 4 is coming from the USA - where it was launched. It's only natural. The problem is that the US telecoms market is not representative of the rest of the world - which has had video calling for ages and doesn't really use it. Perhaps it was the overflowing Apple Kool-Aid fountain at the iPhone 4 launch that got the audience clapping when Jobs placed a video call, or perhaps it was just that they had never seen a video call before - I wasn't there so I can't be sure. Right now, the Facetime capability on the iPhone 4 is only for WiFi connections - which makes it pretty limiting. Apparently, there is no setup required and no buddy list; you just use the phone number to make a video call - which is the way video calling already works (see the screen dump of my phone to the right and the short video below), but the WiFi limitation on the iPhone 4 will mean that you have to guess when the recipient is WiFi connected. At least with the standard 3GPP video call, the networks are ubiquitous enough to pretty much guarantee that if the recipient is connected to a network, they can receive a video call or at least a voice call. Jobs didn't explain what would happen if the recipient was not WiFi connected - does it just make a voice call instead? I hope so.
If you look at the pixelation and general poor quality of the video call, consider that I am in a UMTS coverage area, not HSPA (the phone would indicate 3.5G if I were), so this is what was available more than seven years ago in Australia, longer in other countries. If I were in a HSDPA coverage area, I would expect the video call to be higher quality due to the increased bandwidth available.
I recall that in 2003, Hutchison 3 launched their 3G network in Australia with much fanfare. Video calling was a key part of the 3G launch in Australia for all of the telcos. This article from the 14 April 2003 Sydney Morning Herald (one day before the first official 3G network launch in Australia) illustrates what I am talking about. The authors say that the network's "...main feature is that it makes video calling possible via mobile phone." Think about it for a second. That's from more than seven years ago, and Australia was far from the first country to get a 3G network. A lifetime in today's technology evolution. Still the crowds clapped and cheered as Jobs made a video call. If I had been in the audience, I think I would have yawned at that point.
The other interesting thing that I noticed in Jobs' speech was his swipe at the Telcos. He implied that they needed to get their networks in order to support video calls. Evidence from the rest of the world would suggest that is not the case - perhaps it is in the USA, or perhaps he is trying to deflect the blame for not allowing Facetime over 3G connections away from Apple and back to the likes of AT&T, who have copped a lot of flak over their alleged influence on Apple's App Store policies involving applications that could be seen as competitive with services from AT&T. I am not sure how much stick AT&T deserve on that front, but it's pretty obvious from Jobs' comments that he is not in love with carriers - and certainly from what I've seen, carriers are not in love with Apple. It will be interesting to see how long the relationship lasts. My guess is that as long as Apple devices continue to be popular, both parties will be forced to share the same bed.
On another related point, I have been searching the Internet to find what standards body Apple submitted Facetime to for certification - Jobs says in the launch that it will be done "tomorrow" - this could be marketing speak for 'in the future' or it could literally mean the day after he launched the iPhone 4. If anyone knows please let me know - I want to have a look into the way Facetime works.
Thanks very much to my colleague Geoff Nicholls for taking the Video Call in the video above.
I came across this article today - Apple wanting to propose their new Facetime technology for video chat now that they finally have a camera on the front of their iPhone 4. I'm now on my second phone with a camera on the front of the phone (that's at least four years that my phones have had video chat capabilities), a capability which has not proved to be much more than a curiosity where Telcos have launched it around the world. I recall the first 3G network launch in Australia - for Hutchison's '3' network - video chat was seen as the next big thing, the killer application, yet apart from featuring in some reality shows on TV, very few people used it. I wonder why Steve Jobs thinks this will be any different. At least the video chat capabilities already in the market comply with a standard, which means that on my Nokia phone, I can have a video call with someone on a (say) Motorola phone. With Apple's Facetime, it's only iPhone 4 to iPhone 4 (which does not support a 4G network like LTE or WiMax, I hasten to add). If Apple really is worried about standards as the Computerworld article suggests, then I have to ask why Apple doesn't make their software comply with existing 3GPP video call standards instead of 'inventing their own'. If Apple were truly concerned about interoperability, that would have been a more sensible path.
According to Wikipedia, in Q2 2007 there were "...over 131 million UMTS users (and hence potential videophone users), on 134 networks in 59 countries." Today, in 2010, I would feel very confident in doubling those figures given the rate at which UMTS networks (and latterly, HSPA networks) have been deployed throughout the world. Of note is that the Chinese 3G standard (TD-SCDMA) also supports the same video call protocol. That protocol (3G-324M - see this article from commdesign.com for a great explanation of the protocol and its history, from way back in 2003!) has been around for a while, and yes, it was developed because the original UMTS networks couldn't support IPv6 or the low-latency connectivity needed to provide a good quality video call over a purely IP infrastructure. But things have changed, with LTE gathering steam all around the world (110 telcos across 48 countries according to 3GPP) and mobile WiMax being deployed in the USA by Sprint and at a few other locations around the world (see the WiMax Forum's April 2010 report - note that the majority of these WiMax deployments are not for mobile WiMax and, as far as I know, Sprint are the first to be actively deploying WiMax-enabled mobile phones as opposed to mobile broadband USB modems), so perhaps it is time to revisit those video calling standards and update them with something that can take advantage of these faster networks. I think that would be a valid thing to do right now. If it were up to me, I would be looking at SIP-based solutions and learning from the success that companies like Skype have had with their video calling (albeit only on PCs and with proprietary technology) - wouldn't it be great if you could video call anyone from any device?
I guess the thing that annoys me most about Apple's arrogance is the way it ignores the prior work in the field. Wouldn't it be better to make Facetime compatible with the hundreds of millions of handsets already deployed rather than introduce yet another incompatible technology and proclaim it as "...going to be a standard"?
Yes, I should have posted this a week ago during the TeleManagement World conference - I've been busy since then and the wireless network at the conference was not available in most of the session rooms - at least that is my excuse.
At Impact 2010 in Las Vegas we heard from the IBM Business Partner (GBM) on the ICE project. At TMW 2010, it was ICE themselves presenting on ICE and their journey down the TeleManagement Forum Frameworx path. Ricardo Mata, Sub-Director of the VertICE (OSS) Project at ICE (see his picture to the right), presented on ICE's projects to move Costa Rica's legacy carrier to a position that will allow them to remain competitive when the government opens up the market to international competitors such as Telefonica, who are champing at the bit to get in there. ICE used IBM's middleware to integrate components from a range of vendors and align them to the TeleManagement Forum's Frameworx (the new name for eTOM, TAM and SID). In terms of what ICE wanted to achieve with this project (they call it PESSO), this diagram shows it really well.
I wish I could share with you the entire slide pack, but I think I might incur the wrath of the TeleManagement Forum if I were to do that. If you want to see these great presentations from Telcos from all around the world, you will just have to stump up the cash and get yourself to Nice next year. Finally, I want to illustrate the integration architecture that ICE used - this diagram is similar to the one from Impact, but I think it importantly shows ICE's view of the architecture rather than IBM's or GBM's.
For the benefit of those that don't understand some of the acronyms in the architecture diagram above, let me explain them a bit:
ESB - Enterprise Services Bus
TOCP - Telecom Operations Content Pack (the old name for WebSphere Telecom Content Pack) - IBM's product to help Telcos align with the TMF Frameworx
NGOSS - Next Generation Operations Support Systems (the old name for TMF Frameworx)
Here is the URL for this bookmark: http://apcmag.com/telstra-to-block-ipad-micro-sims-in-other-devices.htm Interesting... in the rest of the world (and as I heard repeatedly last week at TeleManagement World in Nice, France) Telcos are suffering from all-you-can-eat plans - particularly plans for devices like the iPhone which encourage users to be online all the time and to consume rich media like movies. I heard from a number of Telcos that teenagers are preferring to watch movies on their iPhones in their bedrooms rather than in the lounge room on the normal TV (not that they can always get access to the same movies on the TV) - surely a larger screen will encourage more of that sort of behaviour. This is driving too much traffic on Telcos' 3G networks with flat-rate plans. Optus have also announced a similar all-you-can-eat plan for their iPads.
At almost the same time, both Optus and VHA (Vodafone Hutchison Australia) have offered unlimited 3G plans for just AU$50. It makes me wonder if these Telcos in Australia are listening to other Telcos around the world. There's been a lot of press about AT&T's network problems associated with iPhone users. I know the world would be a perfect place if we learnt from everyone else's mistakes, but come on - you don't need to be a genius to see how this could damage their business. I guess they see this as a competitive pressure - if their rivals do it, then they have to as well - but I had hoped that the Australian Telcos would be (jointly) a bit more sensible. I do not have any Apple products, and I'll admit to a bit of jealousy at an all-you-can-eat plan for only AU$50 when I get about 1 GB for a similar amount on my Nokia E71 - it doesn't seem fair that I get so much less for similar money on the same network, just because of the device I choose to use...
While IBM missed out on winning the TeleManagement Excellence awards this year (congratulations to the four competition winners - see the winners on the TMF web site), we do have a great stand with multiple demos (I haven't counted, but I think there are six) and a small meeting area. Check out the photos below:
TeleManagement World conference, 2010. Nice France.
Liu Aili, Board Director for China Mobile, presented this morning at the TeleManagement World conference in Nice, France. Mr Liu spoke of China Mobile's challenges. For them, Internet-based competitors pose a real threat: despite the size of China Mobile (more than 528 million subscribers), they see companies like Google (with GTalk) and Skype, but also device manufacturers such as Apple and Nokia, providing on-device applications and value-added services on their own devices, which reduces China Mobile's function to that of a bit carrier. As Mr Liu put it, these companies "moved our cheese".
For China Mobile to compete with these Internet-based companies, they needed to radically reduce their costs. To do this, they started a project about six years ago to move from their existing legacy network to an all-IP network. This architectural move reduced their Capex by a massive 68%. The reduction came through reduced administration and management costs (by re-organising their operational management system and spreading it across all of their IP networks).
Strategy for IP transformation
China Mobile's network is predominantly occupied by low-value services - straight 2G services. They undertook a detailed analysis of network utilisation and management tools to better manage their network and control the customer experience. For them, all-IP is not the same as all-in-one IP: they are separating their IP customers into high and low value services with security barriers in place - they have a separate virtual network for high-value services and another for low-value standard services. He did not state it directly, but I took it to mean that they have different Service Level Agreements (SLAs) associated with the high and low value services.
From a network administration perspective, they have implemented network management agents at as many points as possible - including every router - to enable efficient and rapid fault discovery and correction.
For China Mobile, the IP skill level among their staff was a key success factor - Mr Liu spoke of it multiple times, including the comprehensive training schemes they have implemented for their staff.
"IP transformation has been a huge task... the job is far from finished," Mr Liu said. Despite this, he also said that right now, almost all of their voice traffic is already carried over their IP infrastructure.
In summary, Mr Liu made the following points:
IP transformation simplifies the network, but makes O&M more complex.
Operators must invest in OSS systems to make IP networks and transformation more efficient.
(there was a third point that I missed - I will add it once I can download the presentations)
The yo-yo mobile interface for MyDeveloperWorks is back again! Had I known it was available, I would have been using it all week instead of Skyfire to post blog entries from Impact this week. I just hope it is here to stay this time! :-)
For those of you that don't know about the Lotus Connections Mobile interface, it looks like this on my Nokia e71 and is available from https://www.ibm.com/developerworks/mydeveloperworks/mobile: (I have it zoomed out to 75%, so those of you getting on in life like me, you might prefer it at 100% or greater... :-) )
<edit> It was nice while it lasted - but the mobile interface is down again! </edit>
Well, Impact 2010 is over. It's been four and a half days of terrific content, catching up with other IBMers, customers and business partners. All of the Telco-related sessions finished Wednesday, so for the last day and a half I have been concentrating on product updates to the Business Process Management products. I went to a WebSphere Process Server V7 update yesterday and a WebSphere Service Registry and Repository session this morning. By far the best session of the last two days was the final one, which covered how to get started and be successful on your first BPM project. The presenter had lots of recommendations which made a lot of sense. Once the presentation is posted to the Impact collaboration site, I will summarise it (I didn't think to do it as the session ran, as I did for the telco sessions - sorry!)
In the WPS update, Eric Herness (BPM Chief Architect), Amy Dickson (WPS Product Manager) and Kevin Barker (WBI Architect) went through the many improvements that were introduced with WPS V7, as well as the improvements in WebSphere Integration Developer.
As I write this (on my phone) I am sitting downstairs in the Venetian waiting for the time to tick over before I head to the airport. Unfortunately, McCarran Airport (LAS) doesn't have an American Airlines lounge*, so I might as well wait here where I have free wifi and food rather than be at the airport. From there, I go to Los Angeles (LAX) and then finally home (after 15 or so hours in the air) to Melbourne.
Next week, I will be heading to TeleManagement World in Nice, France, so if I have a wifi connection during the sessions, I will post from there as well. I hope you'll join me there or, failing that, at least read about it here.
* The observant and well travelled among you will know that LAS does actually provide free wifi, but sitting at the airport is not as nice as sitting in the comfy chairs at the hotel.... #ibmimpact
In Costa Rica, the government-owned telco ICE is being forced to open up its market to competitors because of the Central American Free Trade Agreement (CAFTA) that Costa Rica has joined. This represents a huge change for ICE, who were a power and communications provider without a competitor in their market; they didn't have any competitive forces pushing them to modernise their systems and processes. For instance, fulfilment of basic services took weeks as a result.
GBM, an IBM Business Partner, together with IBM Software Group, proposed that ICE base their new OSS/BSS architecture on the TeleManagement Forum's Frameworx (eTOM, TAM, SID, TNA), for which they used the WebSphere Telecom Content Pack and IBM Dynamic Process Edition to ensure ICE would have the standards compliance and dynamic BPM capabilities. By using WTCP and DPE, ICE reduced the effort required to build and deploy their new processes by an estimated 20-50%. A fundamental principle of Dynamic BPM is the Business Services layer, which sits on top of the BPM layer, which in turn sits on the SOA layer. A Business Service is abstracted up from the physical process. For instance, a business service might be 'Check Technical Availability', which would apply regardless of the service you are talking about - mobile, POTS or xDSL. These business services are defined within the Telecom Content Pack, which enables system integrators like GBM to accelerate the architecture work on projects like this one for ICE.
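The 'Check Technical Availability' idea can be sketched in a few lines of code. This is purely illustrative - the class and method names below are mine, not the actual WebSphere Telecom Content Pack interfaces - but it shows why the business layer can stay unchanged while the product line varies:

```python
# Illustrative sketch of a business service abstracted above the
# physical process. All names here are hypothetical, not WTCP APIs.
from abc import ABC, abstractmethod

class TechnicalAvailabilityCheck(ABC):
    """Business service: 'Check Technical Availability'.

    The ordering process calls this one abstraction; each product
    line binds it to its own physical availability process."""

    @abstractmethod
    def check(self, address: str) -> bool: ...

class MobileAvailability(TechnicalAvailabilityCheck):
    def check(self, address: str) -> bool:
        # In reality: query radio coverage maps for the address.
        return True

class XDSLAvailability(TechnicalAvailabilityCheck):
    def check(self, address: str) -> bool:
        # In reality: query line-length / DSLAM port records.
        return True

def qualify_order(service: TechnicalAvailabilityCheck, address: str) -> str:
    # The process depends only on the business service, so swapping
    # mobile for xDSL requires no change to the process itself.
    return "qualified" if service.check(address) else "rejected"

print(qualify_order(MobileAvailability(), "1 Example St"))  # qualified
print(qualify_order(XDSLAvailability(), "1 Example St"))    # qualified
```

The point of the pattern is that the BPM layer is written once against the business service, and the SOA layer supplies the per-product bindings underneath it.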
GBM made use of IBM's Rapid Delivery Environment (RDE) - they sent a number of their architects to the IBM Telecom Solution Lab in Austin, Texas for six weeks to conduct a proof of concept and to learn how to apply WTCP to a real customer situation such as that faced by ICE. The RDE allowed GBM to work with the IBM experts to build the first few scenarios so that GBM could continue the work locally in Costa Rica without a lot of assistance from IBM. The other benefit of using the RDE is getting access to the eTOM levels 4, 5 and 6 assets - the connections to the physical systems that the RDE has previously developed - for instance, the connection to the Oracle Infranet billing engine, which can then be reused by other customers who also engage with the RDE.
GBM and ICE have not yet been able to measure the acceleration that WTCP and DPE provided, but anecdotal evidence suggests that it was significant. In preparation for CAFTA, ICE have already launched a 3G network and are preparing to launch pre-paid services to compete with the several new operators that will enter the market this year. #ibmimpact
At this morning's keynote session with Beth Smith (IBM) and Shanker Ramamurthy (IBM), one of my customers - Globe Telecom from the Philippines - was mentioned. Unfortunately they could not be here to see it for themselves, so I thought I would post the photos and short video I took of it. <edit> I have replaced my shonky video with an extract of the relevant section from the official Impact videos on YouTube </edit>.
Some official videos have been uploaded to YouTube - taken from the real (read good quality) cameras at the event. I have extracted out the relevant Telco section and added subtitles to clarify what Beth and Shankar are actually saying.
AT&T are part way through a major SOA/BPM project which if you know a little about their history* must be an enormous task. They are introducing modelling tools and reverse modelling their existing systems as well as using a tool from iRise to prototype the user interfaces and reduce the risk of not hitting the business requirements.
They have deployed Rational RequisitePro to capture requirements without needing to get users away from their beloved MS Word. In the last five months, their registered requirements have gone from 15,000 in January to over 30,000 now, which certainly illustrates the traction that they are achieving with their business people. Users access RequisitePro via Citrix sessions, and the tools are available to thousands of business users.
AT&T are also exposing WebSphere Business Modeler and iRise to a smaller set of subject-matter-expert users - building a Centre of Excellence in UI design and process modelling. So far, they have modelled over 800 process flows based on eTOM models which have been extended to meet their specific requirements. All of these are stored within a common Rational Asset Manager instance, which helps their business analysts improve asset use and reuse.
Those process models feed directly into the model-driven development (MDD) method, which is aligned with the requirements and process models. That MDD method uses WebSphere Integration Developer (WID) and Rational Software Architect (RSA) for development and the WebSphere Process Server (WPS) runtime, with WebSphere Business Modeler and WebSphere Service Registry and Repository (WSRR) in support of the runtime. IBM GBS have put in place processes to support AT&T's development life cycle and governance requirements.
Key success factors that AT&T see include:
Solve Critical Business Problems
Win over senior Exec support
Achieve Business Partner Alignment
Integrated Tools Approach
Communicate, communicate, communicate!
* AT&T have been through multiple de-mergers and mergers and acquisitions over the past 10 years resulting in a hugely complex IT environment. #ibmimpact
I have just seen Amy Wohl of Amy D Wohl Opinions present on cloud computing; she went through the various cloud models and spoke about Community Clouds. What she means by that is multiple community-focused clouds as part of a larger (private) cloud. An example is the Vietnamese government, which bought an IBM CloudBurst to provide multiple virtual private clouds to small businesses in Vietnam so that they can have access to computing power that they would not otherwise be able to afford. For Telcos, this could be an offering to their local community groups - perhaps local schools, bars, sporting clubs, service clubs etc. - but also potentially for commercial organisations, perhaps small businesses.
She also made the interesting point that (in her opinion) we are too early in the cloud evolution to actually define standards. She believes that any standards set now would stifle innovation in cloud technology and interoperability. I was interested to hear this since I attended a web conference call a few weeks ago run by the TeleManagement Forum as part of its effort to create standards around clouds, particularly for enterprise use rather than public clouds. I guess enterprise cloud users are the most likely to need interoperability first, thus the emphasis on standards.
Amy co-presented with John Falkl from IBM, who discussed BPM within the cloud. Given that BPM is a business function, subjects such as security are usually among the biggest hurdles for cloud services. There are multiple factors that fall under the title of 'security', such as encryption, roles, authentication (especially when using federated or external authentication services), legal data protection requirements and authorisations. John also pointed out a number of considerations for enterprise cloud services, including governance models (which he sees as an extension of normal enterprise governance models). John's view of standards for cloud services is that they will most likely start with Web Services standards such as WS-Provisioning, and he mentioned that there were multiple efforts around cloud standards under way. I might see if I can have a chat with both John and Amy after the session to get their views on the TMF's efforts around cloud standards. If that discussion is interesting, I will report back.
Amy made a really interesting point during the Q&A - she said that when she was at Microsoft a few weeks ago she asked about transactional activity in their cloud, and they said that MS could not do it... Very interesting, especially when you consider that transactional integrity is a core capability of IBM's cloud offering.
<edit> I asked Amy about the TMF Cloud standardisation - she hadn't heard about it, but did say that she thought that TMF's approach was right - asking the enterprise customers to specify their requirements - she also thought they were probably the right place to start for any cloud standards too. </edit> #ibmimpact
Gridit is a Finnish company, founded only in 2009, that provides online retail services. They are owned by nine local network providers. Think of them as an aggregated application store that sells a broad range of services and products from those nine network companies as well as third-party content providers. They plan to sell services and content such as:
They do not make exclusive agreements with the content/service providers and provide their customers with freedom of choice. For Gridit, the customer is king - they will seek out new content providers if there is demand from the customers. Gridit also interact with local network providers and 3rd party content providers giving the customers a single point of contact and billing for the services that they resell.
What Gridit are providing is pretty similar to an app store solution we deployed last year in Vietnam, which was also a joint venture by a number of Telcos and a bank providing a retail online store for products and services from those communications providers as well as third-party content providers. The difference is that Gridit are also offering a hosted wholesale service: I could go to Gridit and build my new company 'Larmourcom' and offer products and services from a range of providers that Gridit front-ends for Larmourcom. Gridit can stand up an online commerce portal for Larmourcom and also provide an interface to the back-end providers to allow for traditional and non-traditional service assurance, fulfilment and billing processes.
To achieve this abstraction from the back end providers, Gridit have used WebSphere Telecom Content Pack to provide an architectural framework and accelerator for all of those services. IBM has helped Gridit map the processes defined within the TeleManagement Forum's standards (eTOM, TAM, SID) to the lower level processes, wherever the content or services come from.
Like the Vietnamese app store, Gridit are also using WebSphere Commerce to provide the online commerce and catalogue. The benefit Gridit expect to see (as a result of a Business Value Assessment that was conducted) is 48% faster time to value by using Dynamic BPM and Telecom Content Pack versus a traditional BPM model. That is real business value and a great story for both Gridit and IBM. #ibmimpact
Orange in France are using WebSphere sMash to provide an easy development environment using PHP and Groovy to build Telco enabled applications that consume Orange Application Programming Interfaces (APIs) exposed through pre-built widgets. The custom Orange API is not compliant with either OneAPI or ParlayX, and I would normally not endorse a custom API like this, but time to market pressure meant that Orange had to move before the (OneAPI) standards were in place. What I would take from their experience in France is their model and use cases, all of which could be done now using standards for those APIs. Interestingly, I think that Orange could also use IBM Mashup Center to support developers with even fewer skills than the PHP and Groovy developers they're currently targeting.
#ibmimpact Once I get back to my PC, I will insert an Orange video that positions the usage and simplicity of their offering.
Telus is a Communications Service Provider in Canada, the second largest in their market with 12M connections (wireline, mobile and broadband). Telus have a very complex mix of products, services and systems, and they need to maximise their investments while still being able to grow and keep a lid on their costs. New projects still need to be implemented through good times and bad, so they need an architecture that will allow Telus to continue to grow and contain costs through a range of economic conditions. Telus selected an agile method/strategy: a reasonable investment early on, with the plan to become agile and support new 'projects' through small add-ons in terms of investment. Ed Jung from Telus characterised the 'projects' in the later stages as rule or policy changes which may or may not require a formal release.
To achieve this agility, Telus are using WebSphere Telecom Content Pack (WTCP) as an accelerator to keep costs down, while still maintaining standards compliance for their architecture. He sees the key success factors as:
Selecting a key implementation partner (IBM)
Using standards where possible to maintain consistency
For Telus, they elected to start with fulfilment scenarios within their IPTV system. The basis for this is a data mapping to and from a common model (which, within the TeleManagement Forum's standards, relates to the SID). Ed sees this common model as key to their success.
Dynamic endpoint selection is used within Telus to enable their processes to integrate and participate with their BPM layer. Ed suggests the key factors for a successful WTCP project are:
Adopt a reference architecture
Select a good partner
Seed money for lab trials
Choose correct pilots
Put governance in place (business and architects)
Configure data / reduce code
Ed thinks that last point (configure data / reduce code) is the best description of an agile architecture that really drives a lower total cost of ownership for projects as well as a lower capital expenditure for each project.
Craig Hayman is up now and making some great announcements. He went through them too quickly to capture them all on my phone, but I took a photo which I will add to this post later. They included Cast Iron, a new acquisition (announced today) which will add to IBM's cloud integration capabilities.
WebSphere Lombardi Edition, and the bringing together of BPM BlueWorks and Blueprint in a cloud initiative, are just some of the new announcements. The others are in the photo below:
Below is the official YouTube video of Craig Hayman's speech
Robert LeBlanc is speaking now and previewing the 2010 CEO study - always an interesting read, and it looks like there will be similar revelations coming out of this year's report too. Findings like: 75% of successful businesses make extensive use of BPM and SOA. Robert said there would be preview copies available, which hopefully I will be able to get hold of. The study should be available mid-May. Robert is discussing agile businesses and how individual IBM customers are becoming more agile.
Kaiser Permanente is a healthcare provider that is really making changes to the way they work. Their CEO is speaking about the evolution of medical records from paper charts, to electronic records, predictive analytics and personalised records. They're making these revolutionary changes by using IBM SOA & BPM technology. It's impressive to see the real changes they have made that have a real impact on patient care, efficiency and capabilities.
The next customer example that Robert is giving is Ford. FoMoCo Exec VP Paul Nussbaum is talking about their OneIT initiative; its focus on standardisation, process simplification and consolidation allowed Ford to survive and thrive through the Global Financial Crisis.
Well, I'm here! Las Vegas for this year's Impact conference. As I sit here listening to Steve Mills talk about IBM's BPM and SOA strategy since 2002, it strikes me that the basic story around SOA and BPM has not changed in all that time. Sure, things have changed, but those changes represent growth on top of the same SOA & BPM story. A key add-on that Steve is talking about now is the Smarter Planet initiative, which was launched in 2008 and builds on the SOA basics to really improve our world.
I'm really looking forward to this week, to see the latest and greatest from IBM, IBM Business Partners and Customers. #ibmimpact
I am sitting here in Singapore reading today's Straits Times, keeping up with the affairs in the region and around the world, when on page 3 (the most important page in a newspaper after the front page) I find an article about the leaked/lost next generation iPhone that Gizmodo reportedly paid US$5,000 for (other online reports that I've read have suggested other amounts, such as US$350 - I'm not sure who is right). The article occupied almost half of page 3... for the next gen iPhone... That seems excessive to me for a non-specialist publication, but I guess it is reflective of the general hype that exists around Apple products. The previous hype was around the next gen MacBooks with faster processors, and prior to that the iPad. I've read articles suggesting that the iPad will revolutionise newspapers and home computing and telcos. I'm not so sure. While I think a lot of iPads will be sold worldwide (once released outside of the USA), I also think a lot of those devices will get a lot of use through a honeymoon period and then sit idle until they are eventually disposed of. I am so sick of the hype around all these Apple products. There are some things that Apple do really well (UI and design) and some they do really poorly (business use support, locking in users). I respect them, but I do not like them.
It reminds me of a great parody that The Onion did a while ago:
Ok, this is my first attempt at writing a blog post on the full web interface via Skyfire (a proxied browser for mobile devices, similar to Opera Mini). I am using my Nokia E71. The big advantage of doing it this way is access to all the rich text options and images that are already uploaded to myDW. Let's test that by inserting an image... On second thoughts, that didn't work too well. I tried to insert an image, but to do that you have to move the cursor out of text insertion mode so that you can click on the insert image button, and Skyfire got a bit confused at that point... Oh well, just text then - which, after all, is all I would get with the mobile interface if it were available. The mobile interface is definitely faster though...
I'm off to Impact 2010 in Las Vegas in a couple of weeks' time, then a couple of weeks after that I am off to TeleManagement World in Nice, France - that's two conferences in three weeks. Now that I've tested posting from my phone (without the Connections mobile interface) and proved the concept, I have a model that will allow me to post from the conference floors.
Guilty of not posting what I should have over the past few weeks. First a quickie - IBM's nominations in the TeleManagement Forum excellence awards for this year have dropped down to two, that is to say, IBM has made the finalist lists for two categories:
Business Innovation award
Industry Leadership award
While it's a shame we didn't make the cut for the Solution Excellence award (I am not sure which solution was nominated), I am still proud that we've made the finalist cut for two categories. If you are a TMF member - please go and vote at http://www.tmforum.org/ExcellenceAwards2010/Finalists/8647/Home.html#1 (you choose who you want to vote for, but you can probably guess who I voted for!)
I have been working on a post about our newly announced Industry Framework for the Media & Entertainment Industry - you should expect that post to come along soon! (oh and don't forget to vote in the TMF awards!)
I spotted this article this morning - I don't know much about it yet, but I will try to find out some more over the next week or so. I would however note the section of the article that states:
"... In its defense, IBM claims all its solution will do is identify and block large sources of SPAM SMS - not scan every single message to see if it's in accordance with the Chinese Government's guidelines...."
I know that some Telcos I have worked with have what they call "Anti-SPAM" servers on their network. The key difference between those and this new one at China Mobile is that the new solution looks to be part of the mobile to mobile SMS traffic, whereas the others I have seen are all about mobile originated traffic to shortcodes (for application traffic). This has become a problem for some telcos who offer unlimited (or close to unlimited) SMS plans. Existing systems that I know of simply count the number of SMSes sent by a given MSISDN (phone number) to a particular shortcode - if it exceeds 50 within a 24 hour period, they simply drop the messages. Those systems present an interesting conundrum for SMS voting and SMS competition entries. A subscriber thinks they have entered/voted (say) 200 times by sending 200 short messages, but the actual count that the application (the voting or competition entry database) sees is only 50 for that 24 hour period. If we're talking about unlimited SMS plans, there is no real penalty to the subscriber other than their actual votes/entries not being as high as they thought. But for mobile plans that charge for each SMS sent, the subscriber is not getting what they pay for... I can understand why a subscriber on a pay as you consume mobile plan would be very upset with their messages getting dropped - not that a true spammer would ever use a mobile phone plan like that.
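To make the counting logic concrete, here is a minimal sketch of a per-MSISDN, per-shortcode counter of the kind described above. This is my own illustration, not any particular Telco's implementation; the function name and the 50-in-24-hours limit are taken from the behaviour described in the post.

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 24 * 60 * 60  # the 24 hour counting window
LIMIT = 50                     # max messages per MSISDN per shortcode in the window

# timestamps of accepted messages, keyed by (msisdn, shortcode)
_sent = defaultdict(deque)

def allow_sms(msisdn, shortcode, now=None):
    """Return True if the message should be delivered, False to drop it."""
    now = time.time() if now is None else now
    q = _sent[(msisdn, shortcode)]
    # discard timestamps that have aged out of the 24 hour window
    while q and now - q[0] >= WINDOW_SECONDS:
        q.popleft()
    if len(q) >= LIMIT:
        return False  # silently dropped - the sender is never told
    q.append(now)
    return True
```

The "silently dropped" return is exactly what creates the voting conundrum: the subscriber's handset reports 200 messages sent, but only the first 50 per day ever reach the application.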
I found out today that IBM has been nominated for three of the four categories for this year's TeleManagement Forum's awards. IBM is the only company to have three nominations. (Click on the image to see all of the nominees for the TM Forum awards). It makes me proud to say "I'm building a smarter planet. I'm an IBMer"
IBM has been nominated for an award in:
Business Innovation Award
Industry Leadership Award
Solution Excellence Award
The other award (Operational Excellence Award) has only Telcos nominated.
Let me start by apologising. I have been very busy over the past few weeks - this week is the first at home in five weeks. That's my best excuse for not posting (other than drawing a blank when it comes to topics). I know I have quite a few people who read my ramblings and I really appreciate it. Unfortunately, my day job keeps getting in the way. The other big news I have (big for me, not so much for almost anyone reading this) is that the Industry Business Partner Technical Enablement team is being disbanded and wound into the IBM channels infrastructure. That means there will no longer be any industry speciality in the technical enablement that we provide to our business partners - but of course our partners are not being left out in the cold either. The channels team will continue to provide first rate technical enablement and assistance, and IBM will continue to have industry specialists. For Business Partners, it will just be a matter of engaging with (non-channels) IBMers in the industry teams as well as the channels team. I would expect that the channels team will provide the conduit to industry specialists such as me when specialised industry skills are needed.
By now you might be wondering: if my team is going away, what is happening to yours truly? Well, I have a position with the... wait for it... GMU BPM Tiger Team, focused on telecommunications. And I thought IBPTSE was a mouthful. I will continue to be a Telecom specialist architect in this new team. Let me break down those acronyms a bit for you.
GMU is Growth Market Unit which equates to the whole world less North America, Japan and Western and Northern Europe.
BPM is Business Process Management and is the layer of intelligence that sits on top of a Service Oriented Architecture; it is the business processes, the workflows, the business rules etc that form the basis for the business strategy.
Tiger Team is a small team of the best of the best resources to chase down deals. What is unusual for this tiger team is the focus on industry - most other Tiger teams in IBM are focused on a particular brand such as Rational, Lotus, WebSphere, Tivoli or Information Management.
This move has been in the works for a few weeks for me, but it's now at a stage where I can talk about it. I would like to take this opportunity to thank everyone associated with the IBPTSE team around the world, particularly Jim Toohey, my manager. Over the past three years that I have been in the team, we have accomplished a lot of things that make me feel very proud: multiple deals, partners enabled, partners validated against our SPDE framework for Telcos. Despite being the only team member in Australia and the geographical challenges that brings, I have always felt part of the team. Thanks guys!
Providing a National Broadband Network within a country is seen by many governments as a way to help their population and country compete with other countries. I have been involved in three NBN projects: Australia, Singapore and New Zealand. I don't claim to be an expert in all three projects (which are ongoing) but I thought I would share some observations and comparisons between the three projects.
Where Australia and Singapore have both opted to build a new network with (potentially) new companies running it, New Zealand has taken a different path. The Kiwis have decided to split the incumbent (and formerly monopoly) Telecom New Zealand into three semi-separated 'companies': Retail, Wholesale and Chorus (the network), but only for the 'regulated products', which for the New Zealand government means 'broadband'. They all still report to a single TNZ CEO. I have not seen any direction in terms of Fibre to the Home or Fibre to the Node; the product is just defined as 'broadband'. The really strange thing with this split is that the three business units will continue to operate as they did in the past for other non-regulated products such as voice.
As an aside, the Kiwi government not regulating voice seems an odd decision to me - especially when you compare it to countries like Australia and the USA, where the government has mandated that the Telcos provide equivalent voice services to the entire population. Sure, New Zealand is a much smaller country, but it is not without its own geographic challenges in providing services to all Kiwis, yet voice remains unregulated.
A key part of the separation is that these three business units are obliged to provide the same level of service to external companies as they provide to Telecom and its other business units. For example, if Vodafone wants to sell a Telecom Wholesale product, then Telecom Wholesale MUST treat Vodafone identically to the way they treat Telecom Retail. Likewise Chorus must do the same for its customers, which would include ISPs as well as potentially other local Telcos (Vodafone, TelstraClear and 2Degrees). This equivalency of input seems to me to be an attempt to get to a similar place to Singapore (more on that later). Telecom NZ have already spent tens of millions of NZ$ to this point and they don't have a lot to show for it yet. It seems to me like the Government is trying to get to an NBN state of play by using Telecom's current network and perhaps adding to it as needed. For the Kiwi population, that's not anything flash like fibre to the home, but more like fibre to the node with a DSL last mile connection. That will obviously limit the sorts of services that could be delivered over the network. While other countries are talking about speeds in excess of 100Mbps to the home, New Zealand will be limited to DSL speeds until the network is extended to a full FTTH deployment (not planned at the moment, as far as I am aware).
Singapore, rather than splitting up an existing telco (like Singtel or Starhub), have gone to tender for the three layers - Network, Wholesale and Retail. The government (Singapore Ltd) has decided that there should be only one network, run by one company (Nucleus Connect - providing Fibre to the Home), that there would be a maximum of three wholesale companies, and as many retail companies as the market will support. A big difference to New Zealand is that the Singapore government wants the wholesalers to offer a range of value added services - what they refer to as 'sit forward' services that engage the population, rather than 'sit back' services that do not. Retail companies would be free to pick and choose wholesale products from different wholesalers to provide differentiation of services.
Singapore, New Zealand and Australia are vastly different countries - Singapore is only 700km2 in size, Australia is a continent in its own right and New Zealand is at the smaller end of in between. This is naturally going to have a dramatic effect on each Government's approach to an NBN. Singapore's highly structured approach is typical of the way Singapore does things. Australia's approach is less controlled - due to the nature of the political environment in Australia rather than its size - and New Zealand's approach seems somewhat half-hearted by comparison. I am not sure why the NZ government has not elected to build a new network independent of Telecom NZ's current network.
In Australia on the other hand, the government have set up the Communications Alliance to manage the NBN and subcontract to the likes of Telstra, Optus and others. The interesting thing with that approach (other than the false start that has already cost the Australian taxpayers AU$30 million), and the thing that sets it apart from Singapore, is that it doesn't seem to have any focus on value added services - it's all about the network. Even the wholesaler plan for Australia is talking about layer 2 protocols (see the Communications Alliance Wiki). All of the documents I have seen from Communications Alliance are about the network - all very low level stuff.
Of course, these three countries are not the only countries that are going through a NBN project. For example the Philippines had a shot at one a few years ago - the bid was won by ZTE, but then a huge scandal caused the project to be abandoned. It came back a while later as the Government Broadband Network (GBN) but that doesn't really help the average Filipino. It's interesting to see how these projects develop around the world...
A colleague of mine at IBM, Anthony Behan, has just had an article published in BillingOSS magazine. I'll admit that I had never heard of the magazine before, but this particular issue has quite a few articles about Cloud computing in a Telco environment. While I don't agree with all of the content in the e-zine, it is an interesting read none the less. Check out the full issue at http://www.billingoss.com/101 and Anthony's article on p. 48.
The image is a screen capture of Anthony's article from the billingoss.com web site.
Last week, Bharti Airtel launched their new App Store - upping the competitive stakes in India. As I mentioned in my post 'App Stores, Are they right for Telcos?', Telcos are looking to add value beyond just providing the transport. Time will tell how successful they are, but I think it could be worth watching. Bharti have a huge subscriber base and India has one of the lowest ARPU values in the world, so I guess they see it as a vital step to raise their ARPU above their competitors'.
New Delhi, February 09, 2010: Bharti Airtel, Asia's leading integrated telecom service provider, today announced the launch of India's first mobile applications store - Airtel App Central. Now, Airtel mobile customers can transform their basic phone into a Smart Phone by accessing over 1250 Apps across 25 categories for their business, games, books, social networking and other needs. Offering an easy single click purchase - with no credit card required - the cost is automatically added to the customer's mobile bill or deducted from the available talk-time. Starting as low as Rs. 5, Airtel App Central will offer local and regional Apps for customers across the length and breadth of the country.
I had hoped to write an insightful post this week about National Broadband Network projects, contrasting the way the three I have been involved in are dealing with them. In Australia (where I live) there has been a LOT of bad media coverage for the NBN project - the first attempt at which wasted AU$30 million of taxpayers' money. Australia, New Zealand and Singapore are all tackling what is essentially the same problem in vastly different ways. Of course there are really good reasons for those differences, and I wanted to explain those as well... but on my first week back from leave, things have gone nuts - this week I've had four separate Service Delivery Platform RFI/RFPs plus some ongoing work with Globe and other partners in Japan and New Zealand. The time I had hoped to set aside for the post just hasn't happened.
All I can say is that I am sorry, and I hope to get it to you early next week while I am in Singapore and Bangkok. If you would like to see some other Telecom topics discussed, please feel free to comment and I will try to get to them...
Next week, I will be running a Telco training class for our System Integrator business partners in Bangkok - teaching, demonstrating and helping them come to grips with IBM's software offerings in the Telecom industry. It should be good; I am looking forward to it.
On the Wednesday of the week before last (the week before my leave), at about 1am my time, I got an urgent request for an RFI response to be presented back to the customer at Friday noon (GMT+8 - 3pm for me - 2.5 business days for the locals in that timezone). This RFI asked lots of hypothetical questions about what this particular telco might do with their Service Delivery Platform (SDP). It had plenty of requirements like "Email service" or "App Store Service" and so on. These 'use cases' made up 25% of the overall score, but did not have any more detail than I have quoted here - two to four words for each use case. Crazy! If I am responding to this, such loose scope means I can interpret the use cases any way I want. It also means that to meet all 14 use cases - ranging from 'Instant Messaging Presence Service (IMPS)' to 'Media Content and Management Service' to 'Next-Generation Network Convergence innovative services' - the proposal and the system would have to be a monster with lots of components. The real problem with such vague requirements is that vendors will answer the way they think the customer wants them to, rather than the customer telling them what they want to see in the response. The result will be six or eight different responses that vary so much that they cannot be compared - which defeats the whole point of running an RFI process: to compare vendors and ultimately select one to grant the project to.
On top of the poor quality of the RFI itself, the lack of time to respond creates great difficulties for the people responding. 'So what? I don't care, it's their job' you might say, and to an extent you are correct, but think about it like this: a short timeframe to respond means that the vendor has to find whoever they can internally to respond - they don't have time to find the best person. A short timeframe means the customer is more likely to get a cookie cutter solution (one the vendor has done before) rather than a solution designed to meet their actual needs. A short timeframe means the vendor may not have enough time to do a proper risk assessment and quality assurance on the proposal - both of which will increase the price quoted in the proposal.
All of these factors should be of interest to the Telco that is asking for the proposal because they all have a direct effect on the quality and price of the project and ultimately the success of the project.
I know this problem is not unique to the Telecom industry, but of all the industries I have worked with in my IT career, the Telcos seem to do it more often. I could go on and on quoting examples of ultra short lead times to write proposals - sometimes as little as 24 hours (to answer 600 questions in that case), but all it would do is get me riled up thinking about them.
The whole subject reminds me of what my boss in a photo lab (long before my IT career began) would say: "Quality, Speed, Price: pick two". Think about it - it rings true, doesn't it?
I will be away on leave, so no posts this week, but as a consolation prize, vskinner should be publishing an interview with me on her blog, Yin meets Yang
Responding to Val's interview request has given me an idea for some future blog posts - publishing interviews with some of our key Telco partners: those in the Service Provider Delivery Environment validation programme and those that I work with from a NEP or System Integrator perspective. If you think this sounds like a good idea, please comment and let me know.
In the meantime, I am going to enjoy some time away from work. See you in a week's time.
Sizing of software components (and therefore also hardware) is a task that I often need to perform. I spend a lot of time on it, so I figured I would share how I go about doing it and what factors I take into account. It is an inexact science. While I talk about sizing Telecom Web Services Server for the most part, the same principles apply to any sizing exercise. Please also note that the numbers stated are examples only and should NOT be used to perform any sizing calculations of your own!
Inevitably, when asked to do a sizing, I am forced to make assumptions about traffic predictions. I don't like doing it, but it is rare for customers to have really thought through the impact that their traffic estimates/projections will have on the sizing of a solution or its price.
Assumptions are OK
Just as long as you state them - in fact, they could be viewed as a way to wiggle out of any commitment to the sizing should ANY of the assumptions not hold true once the solution has been deployed.
Let me give you an example - I have seen RFPs that have asked for 500 Transactions Per Second (TPS), but neglected to state anywhere what a Transaction actually is. When talking about a product like Telecom Web Services Server, you might assume that the transactions they're talking about are SMS, but in reality they might be talking about MMS or some custom transaction - a factor which would have a very significant effect on the sizing estimate. Almost always, different transaction types will place different loads on systems.
Similarly, it is rare for a WebSphere Process Server opportunity (at a Telco anyway) to fully define the processes that will be implemented and their volumes once the system goes into production. So, what do I do in these cases?
My first step is to try to get the customer to clear up the confusion (I often make multiple attempts at explaining to the customer why we need such specific information - it is to their benefit after all; they're much more likely to get the right-sized system for their needs). This is not always successful, so my next step is to make assumptions to fill in the holes in the customer's information. I am always careful to write those assumptions down and include them with my sizing estimations. At this point, industry experience and thinking about potential use cases really help to make the assumptions reasonable (or I think so anyway).
For instance, if a telco has stated that the Parlay X Gateway must be able to service 5,760,000 SMS messages per day, I think it would be reasonable to assume that very close to 100% of those would be sent within a 16 hour window (while people are awake, and to avoid complaints to the telco about SMS messages that come in at all hours of the day - remembering that we are talking about applications sending SMS messages, nothing to do with user to user SMS). That gets us down to 360,000 (5,760,000/16) SMS per hour, or 100 TPS for SendSMS over SMPP. Now, this is fine as an average number, but I guarantee that the distribution of those messages will not be even, so you have to make an assumption that the peak usage will be somewhat higher than 100 TPS, remembering that we have to size for peak load, not average. How much higher will depend on use cases. If the customer can't give you those, then pick a number that your gut tells you is reasonable - let's say 35% higher than average, which is roughly 135 TPS of SendSMS over SMPP. (I say roughly because if that is your peak load, then as our total is constant for the day (5,760,000), the load must be lower during the non-busy hours. As we are making up numbers here anyway, I wouldn't worry about this discrepancy, and erring on the side of over sizing is certainly the safer option - provided you don't overdo the over sizing.)
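The arithmetic above can be sketched as a short calculation. The function name and the 16-hour/35% defaults are just the illustrative figures from this post, not real sizing parameters:

```python
def peak_tps(messages_per_day, active_hours=16, peak_uplift=0.35):
    """Convert a daily message volume into an estimated busy-hour (peak) TPS.

    Assumes all traffic arrives within `active_hours` and that the peak
    rate sits `peak_uplift` (35% here) above the flat average.
    """
    per_hour = messages_per_day / active_hours   # 5,760,000 / 16 = 360,000 per hour
    average_tps = per_hour / 3600                # 360,000 / 3,600 = 100 TPS average
    return average_tps * (1 + peak_uplift)       # 100 * 1.35 = roughly 135 TPS peak

print(round(peak_tps(5_760_000)))  # → 135
```

Remember: size for the peak figure, not the average, and write both assumptions (the active window and the uplift) into the sizing document.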
Assumptions are your friend
As I said, I prefer not to make lots of assumptions, but stating stringent assumptions can be your friend if the system does not perform as you predicted and the influencing factors are not exactly as you stated in your assumptions. For instance, if you work on the basis of a 35% increase in load during the busy hour and it turns out to be 200%, your sizing is going to be way off. But because you asked the customer for the increase in load during the busy hour and they did not give you the information, you were forced to make an assumption - they know their business better than we ever could, and if they can't or won't predict such an increase during the busy hour, then we cannot reasonably be expected to predict it accurately either. The assumptions you stated will save your (and IBM's) neck. If you didn't explicitly state your assumptions, then you would be leaving yourself open to all sorts of consequences - and not good ones at that.
Understand the hardware that you are deploying to
I saw a sizing estimate the other week that was supposed to handle about 500 TPS of SendSMS over SMPP, but the machine quoted would have been able to handle around 850 TPS; I would call that overdoing the over sizing. This over estimate happened because the person who did the sizing failed to take into account the differences between the chosen deployment platform and the platform that the TWSS performance team did their testing on.
If you look at the way our Processor Value Unit (PVU) based software licensing works, you will pretty quickly come to the conclusion that not all processors are equal. PVUs are based on the architecture of the CPU - some are valued at just 30 PVUs per core (Sparc eight-core CPUs), older Intel CPUs are 50 PVUs per core, while newer ones are 70 PVUs per core. PowerPC chips range from 80 to 120 PVUs per core. Basically, the higher the PVU rating, the more powerful each core on that CPU is. CPUs that are rated at higher PVUs per core are more likely to be able to handle more load per core than ones with lower PVU ratings.
Unfortunately, PVUs are not granular enough to use as the basis for sizing (remember them though - we will come back to PVUs later in the discussion). To compare the performance of different hardware, I use RPE2 benchmark scores. IBM's Systems and Technology Group (Hardware) keeps track of RPE2 scores for IBM hardware (System p and x at least). Since pricing is done by CPU core, you should also do your sizing estimate by CPU core. For TWSS sizing, I use a spreadsheet from Ivan Heninger (ex WebSphere Software for Telecom Performance Team lead). Ivan's spreadsheet works on the basis of CPU cores for (very old) HS21 blades. Newer servers/CPUs and PowerPC servers are pretty much all faster than the old clunkers Ivan had for his testing. To bridge the gap between the capabilities of his old test environment and modern hardware, I use RPE2 scores. Since Ivan's spreadsheet delivers a number-of-cores-required result, I break the RPE2 score for the server down to an RPE2 score per core, then use the ratio between the RPE2 score per core for the new server and the test servers to figure out how many cores of the new hardware are required to meet the performance requirement.
So now, using the spreadsheet, you key in the TPS required for the various transaction types - let's say 500 TPS of SendSMS over SMPP (just to keep it simple; normally, you would also have to take into account the Push WAP and MMS messages as well, not to mention other transaction types such as Location requests, which are not covered by the spreadsheet). That's 12 x 2 cores for Ivan's old clunkers, but on newer hardware such as newer HS21s with 3 GHz CPUs, that's 6 x 2 cores, and on JS12 blades it is also 6 x 2 cores. "Oh, that's easy," you say, "the HS21s are only 50 PVUs each - I'll just go with Linux on HS21 blades and that will be the best bang for the buck for the customer." Well, don't forget that Intel no longer makes dual-core CPUs for servers - they're all quad-core - so in the above example, you have to buy 8 x 2 cores rather than the 6 x 2 cores for the JS12/JS22 blades.
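The RPE2 scaling I described can be sketched as a small helper. This is my own illustration, not Ivan's spreadsheet, and the RPE2 figures in the example calls are hypothetical placeholders - plug in the real per-server scores from STG:

```python
import math

def cores_on_new_hw(ref_cores, ref_rpe2_per_core, new_rpe2_per_core,
                    cores_per_cpu):
    """Scale a core count from the reference (test) blades to new hardware,
    rounding up to whole CPUs since you can only buy full chips."""
    raw = ref_cores * ref_rpe2_per_core / new_rpe2_per_core
    return math.ceil(raw / cores_per_cpu) * cores_per_cpu

# 12 cores needed on the old test blades; suppose the new cores benchmark
# twice as fast (hypothetical RPE2-per-core numbers):
print(cores_on_new_hw(12, 1000, 2000, cores_per_cpu=2))  # dual-core: 6
print(cores_on_new_hw(12, 1000, 2000, cores_per_cpu=4))  # quad-core: 8
```

Note how the quad-core constraint pushes the answer from 6 cores up to 8 - exactly the HS21 effect described above.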
Notice the x 2 after each number: that is because for TWSS in production deployments, you must separate the TWSS Access Gateway and the TWSS Service Platform. The x 2 indicates that the AG and the SP each require that number of cores.
Let's work that through, assuming TWSS is $850 per PVU:
For the faster HS21s - that's 8 x 2 x 50 x $850 = $680,000 for the TWSS licences alone
For JS12s - that's 6 x 2 x 80 x $850 = $816,000 for the TWSS licences alone
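Sketching that licence arithmetic (per-PVU price as stated above; both the AG and SP tiers need licences):

```python
def twss_licence_cost(cores_per_tier, pvu_per_core, price_per_pvu=850,
                      tiers=2):
    """Cores x 2 tiers (AG + SP) x PVU rating x $/PVU."""
    return cores_per_tier * tiers * pvu_per_core * price_per_pvu

print(twss_licence_cost(8, 50))   # HS21 blades: 680000
print(twss_licence_cost(6, 80))   # JS12 blades: 816000
```

So the cheaper-per-PVU Intel blades still come out ahead here despite needing more cores - and this covers the TWSS licences only.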
Remember (and all sales people who are pricing this should know this) that the pre-requisites for TWSS must be licensed separately as well. That means the appropriate number of PVUs for WESB (for the TWSS AG) and the appropriate number of PVUs for WAS ND (for the TWSS SP), as well as the Database. It's pretty easy to see how the numbers can add up pretty quickly and how much your sizing estimate can affect the price of the overall solution.
Database sizing for TWSS
For the database, of course we prefer to use DB2, but most telcos will demand Oracle in my experience. For TWSS, the size of the database server is usually not the bottleneck in the environment; what is important for high transaction rates with TWSS is the DB writes and reads per second, which equates to disk input/output. It is VITAL to have an appropriate number of disk spindles in the database disk array to achieve the throughput required - the spreadsheet will give you the number of disk drives that need to be in a RAID 0 array to achieve the throughput. For the above 500 TPS example, it is 14.6 disks = 15 disks, since you can't buy only part of a disk. While RAID 0 will give you striping and consequently throughput across your disk array, if one drive fails, you're sunk. To achieve protection, you must go with RAID 1+0 (sometimes called RAID 10), which gives you both mirroring (RAID 1) and striping (RAID 0). RAID 1+0 immediately doubles your disk count, so we're up to 30 disks in the array. Our friends at STG should be able to advise on the most suitable disk array unit to go with. In terms of CPU for the database server, as I said, it does not get heavily loaded. The spreadsheet indicates that 70.7% of the reference HS21 (Ivan's clunker) would be suitable, so a single-CPU JS12 or HS21 blade - even an old one - would be suitable.
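The disk arithmetic works out like this (the raw spindle count is the spreadsheet output quoted above):

```python
import math

def raid10_disks(spindles_for_throughput):
    """Round the spreadsheet's raw spindle figure up to whole disks,
    then double it for RAID 1+0 mirroring."""
    striped = math.ceil(spindles_for_throughput)  # can't buy part of a disk
    return striped * 2                            # mirror every striped disk

print(raid10_disks(14.6))  # -> 30
```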
Every time I do a TWSS sizing, I get asked how much capacity we need in the RAID 1+0 disk array - despite my always asking for the smallest disks possible. Remember, we are going for a (potentially) large array to get throughput, not storage space. In reality, I would expect a single 32 GB HDD to easily handle the space requirements for the database, so space is not an issue at all when we have 30 disks in our array. To answer the question about what size: the smallest possible, since that will also be the cheapest possible, provided it does not compromise the seek and data transfer rates of the drive. In the hypothetical 30-drive array, if we select the smallest drive now available (136 GB), we would have a massive 1.9 TB of space available ((15-1) x 136 GB), which is way over what we need in terms of space, but it is the only way we can currently get the throughput needed for the disk I/O on our database server. Exactly the same principles apply regardless of whether DB2 or Oracle is used for the database.
Something that I have yet to see empirical data on is how Solid State Drives (SSDs), with their higher I/O rates, would perform in a RAID 1+0 array. In such an I/O-intensive application, I suspect they would allow us to drop the total number of 'disks' in the array quite significantly, but I don't have any real data to back that up or to size an array of SSDs.
We have also considered using an in-memory database such as SolidDB, either as the working database or as a 'cache' in front of DB2, but the problem there is that the level of SQL supported by SolidDB is not the same as that supported by DB2 or Oracle's conventional database. Porting the TWSS code to use SolidDB would require a significant investment in development.
Remember: sizing estimates must always be multiples of the number of cores per CPU.
Make sure you have enough of an overhead built into your calculations for other processes that may be using CPU cycles on your servers. I assume that the TWSS processes will only ever use a maximum of 50% of the CPU - that leaves the other 50% for other tasks and processes that may be running on the system. As a result, I always state that with my assumptions as well. As an example, I would say:
To achieve 500 TPS (peak) of SendSMS over SMPP at 50% CPU utilisation, you will need 960 PVUs of TWSS on JS12 (BladeCenter JS12 P6 4.0GHz-4MB (1ch/2co)) blades or 800 PVUs of TWSS on HS21 (BladeCenter HS21 XM Xeon L5430 Quad Core 2.66GHz (1ch/4co)) blades. I would then list the assumptions that I had made to get to the 500 TPS figure, such as:
There is no allowance made for PushWAP or MMS included in the sizing estimate.
500 TPS is the peak load and not an average load
The SMSC has an SMPP interface available
All application driven SMS traffic will be during a 16 hour window
What about High Availability?
I think that High Availability (HA) is probably a topic in its own right, but it does have a significant effect on sizing, so I will talk about it in that regard. HA is generally specified in nines - by that I mean if a customer asks for "five nines", they mean 99.999% availability per annum (that's about 5.2 minutes per year of unplanned downtime). Three nines (99.9% availability) or even two nines (99%) are also sometimes asked for. Often, customers will ask for five nines, not realising the significant impact that such a requirement will have on the software, hardware and services sizing.
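For reference, the 'nines' translate into unplanned downtime budgets like this (plain arithmetic, nothing product-specific):

```python
# Downtime budget per year for each availability level.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for label, availability in [("two nines", 0.99), ("three nines", 0.999),
                            ("five nines", 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: {downtime:,.2f} minutes/year")
# two nines: 5,259.60 minutes/year
# three nines: 525.96 minutes/year
# five nines: 5.26 minutes/year
```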
If we start adding additional nodes into clusters for server components, that will not only improve the availability of that component, it will also increase the transaction capacity - and the price. The trick is to find the right balance between hardware sizing and HA requirements. For example: say a customer wanted 400 TPS of Transaction X, but also wanted HA. Let's assume a single JS22 (2 x dual-core PowerPC) blade can handle the 400 TPS requirement. We could go with JS22 blades and just add more to the cluster to build up the availability and remove single points of failure. As soon as we do that, we are also increasing the licence cost and the actual capacity of the component - so with three nodes in the cluster, we would have 1200 TPS capability and three times the price of what they actually need, just to get HA. If we use JS12 blades (1 x dual-core PowerPC), which have half the computing power of a JS22, we could have three JS12s in a cluster, achieve 3 x 200 (say) TPS = 600 TPS and, even if a single node in the cluster is down, still achieve their 400 TPS requirement. With JS12s, we meet the performance requirement, we have the same level of HA as we did with 3 x JS22s, but the licensing price will be half that of the JS22-based solution (at 1.5 x the single-JS22 option).
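That N+1 trade-off can be sketched as: find the smallest cluster of a given blade that still meets the target with one node down, then compare licence costs. (This is a simplification of the reasoning above - real sizing also has to respect whole-CPU boundaries and per-component splits.)

```python
import math

def ha_cluster_size(target_tps, tps_per_node):
    """Smallest cluster that still meets target_tps with one node failed:
    need (n - 1) * tps_per_node >= target_tps."""
    return math.ceil(target_tps / tps_per_node) + 1

# JS22-class blade (400 TPS each) vs JS12-class blade (200 TPS each):
print(ha_cluster_size(400, 400))  # 2 nodes - but each node is a big blade
print(ha_cluster_size(400, 200))  # 3 nodes of the half-size blade
```

With the half-size blades, three nodes cost 1.5x the licences of a single big blade, versus 2x or more for clusters of the big blades.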
I guess the point I am trying to get across is to think about your options and consider whether there are ways to fiddle with the deployment hardware to get the most appropriate sizing for the customer and their requirements. The whole thing just requires a bit of thinking...
What other tools are available for sizing?
IBMers have a range of tools available to help with sizing - the TWSS spreadsheet I was talking about earlier, various online tools and, of course, Techline. Techline is also available to our IBM Business Partners via the PartnerWorld web site (you need to be a registered Business Partner to access the Techline pages on the PartnerWorld site). For more mainstream products such as WAS, WPS, Portal etc., Techline is the team to help Business Partners - they have questionnaires that they will use to gather all the parameters they need to do the sizing. Techline is the initial contact point for sizing support. For more specialised product support (like for TWSS and the other WebSphere Software for Telecom products) you may need to contact your local IBM team for help. If you are a partner, feel free to contact me directly for assistance with sizing WsT products.
There is an IBM class for IT Architects called 'Architecting for Performance' - don't let the title put you off, others can do it - I did it and I am neither an architect (I am a specialist) nor from IBM Global Services (although everyone else in the class was!). If you get the opportunity to attend the class, I recommend it - you work through plenty of exercises, and while you don't do any component sizing, you do some whole-system sizing, which is a similar process. I am not sure if the class is open to Business Partners; if it is, I would also encourage architects and specialists from our BPs to do the class as well. Let me take that on as a task - I will see if it is available externally and report back.
Sizing estimation is not an exact science
As I glance back over this post, I guess that I have been rambling a bit, but hopefully you now understand some of the factors in doing a sizing estimate. The introduction of assumptions and other factors beyond your knowledge and control makes sizing non-exact - it will always be an estimate and you cannot guarantee its accuracy. That is something that you should also state with your assumptions.
I know lots of people are saying that Apple invented the Application Store (App Store) for their iPhone/iTouch range of devices, but they would be wrong. App stores have been around for years - I have been a customer of Handango since before I joined IBM's Pervasive Computing team, and that team has been gone for over three years now. Handango is an Internet-based app store that has supported multiple handheld PDA and phone platforms. Others that I've used in the past include Tucows, although Tucows is more than just mobile applications - they also cover Win32, Linux, Mac etc. as well. The big things that Apple did differently from Handango and their Internet brethren were:
Restrict applications to a single platform (I count the iTouch and the iPhone as the same thing since the key difference lies in the Mobile Phone part, not the computing part of the device)
Restrict the development tooling and platform environment by license restrictions (All applications must be approved by Apple and must not breach their license agreement - you still can't get a Java Virtual Machine on an iPhone for instance)
Force users to install via their iTunes installation on their PC/Mac or over the air from their device. Not being an iPhone user, I am not 100% sure of this point. Is there an iTunes install for Linux? (Other platforms allow apps to be installed via Bluetooth, memory cards, IR and direct USB copying.)
Of course, Apple's device competitors are trying to catch the same wave that Apple has been riding and deploy their own application store equivalents. We've seen efforts from Google, Nokia, Palm and Research In Motion (RIM - makers of the BlackBerry) and, interestingly, all have been somewhat successful - successful at attracting developers, which is key to then attracting users. Here are their app stores:
Personally, I am not a fan of Apple's restrictive market practices and much prefer the more open ecosystem that surrounds the Symbian and Windows Mobile platforms. I have in the past written applications for Palm Garnet (nee PalmOS), Symbian and Windows Mobile for use within a corporate environment - something that is not possible with Apple's licensing policies, which force developers to upload apps to the App Store so that Apple can approve them and then include them in the App Store catalogue. If I only want to write an application for my customer, I cannot deploy it directly to the customer's iPhones unless they have been jailbroken - the only alternative is for Apple to look at and approve the application, then sign it. While the others also have the concept of signed and certified applications, you can install unsigned or un-certified applications on the other major platforms if you want (except for Android, which appears to be going down a similar, if less restrictive, path to Apple's).
Telcos and App Stores
In the past year, Telcos all around the world have watched Apple's App Store take off and seen their interaction with iPhone subscribers reduced to being the supplier of the pipe to the Internet - way down from the high-value position that most carriers aspire to in order to improve ARPU. I've seen requests from many Telcos in that time for Application Store or Widget Store capability. The telcos - understandably - want to raise their profile in the eyes of the subscriber and their worth in the value proposition. I have seen request-for-proposal documents from telcos in China, Taiwan, Vietnam and the USA, and queries from telcos in Thailand, the Philippines, Singapore, Japan and other countries. App/Widget Stores are certainly one of the topics of the moment for Telcos.
The key differentiators that a Telco has that separates it from Apple's App Store are:
Support for multiple smartphone platforms - Symbian, BlackBerry, Windows Mobile, Garnet (and presumably soon WebOS and Android as well)
The ability to sell things other than on-device applications - this might include pre-paid top-ups, ringtones, ringback tones and telco-hosted applications (which could be delivered by the Telco's Service Delivery Platform (SDP))
In fact, IBM has won and has (partially, at this stage) implemented an app store in Vietnam. Because of the Telecom environment in Vietnam, this App Store is not actually within a telco, but is instead run by an external company*. The app store was implemented with a combination of WebSphere Portal (to provide the user interface), WebSphere Commerce (to provide the catalog and sales part of the App Store) and WebSphere Message Broker (for integration requirements). I was involved from the very initial stages of that project.
The company intends to launch a Mobile Commerce and Advertising Platform (MCAP), which is a multi-channel platform enabling its members to do small-value electronic transactions (m-commerce and e-commerce). Some of their use cases include:
Mobile phone content buying and selling (logo, ring tone, ring-back-tone...)
Purchasing small value digital products such as software, e-books, etc.
Buying and selling of other services and products such as information services, souvenirs, electronic tickets, promotion vouchers, etc.
Low value payment services, such as prepaid top-up, game top-up, bill payment, etc.
Online marketing, advertisement and promotion services over Internet and Mobile.
I don't often get involved in WebSphere Commerce projects (it tends to be a very specialized field), but we do have a number of Telcos who are using WebSphere Commerce - not necessarily in App Stores - and based on the experience in Vietnam, it would not be a big leap to add that capability to their existing deployments. The use of WebSphere Portal provides an easy and extensible user interface primarily targeted at the PC, and with the addition of the Mobile Portal Accelerator (nee WebSphere Everyplace Mobile Portal Enable) to the existing Portal, that user interface can be extended to over 10,000 separate devices, providing subscribers with an optimized experience for their device.
Where does this leave those Telcos who haven't made the leap to their own app store? In my opinion, they still have time to catch the wave, and certainly if they want to avoid the Apple effect and being reduced to a bit-pipe provider, then they need to do something to add value in the eyes of the subscriber. Apple's model doesn't help them with that, but perhaps the other device-specific app stores won't be so carrier-unfriendly. I will see what I can find out on this issue and report back in another post.
Bye for now
* Once that customer has agreed to be a formal reference, I will share additional details in a future post.
If you want some background reading on App Stores, here are a couple of articles I would suggest:
There are a number of opportunities as I see them for Telcos over the next few years. There are definite opportunities in social networking, which will enable a carrier to move from a traditional communication model with their subscribers to a more collaborative and open 'Shared Social Space'. For mobile operators, this market movement presents both opportunities and risks for the telco making that journey.
Extension of Social Networking to the mobile where operators continue to enjoy exclusivity
Extend enterprise offerings to include Social Networking and collaborative services
Offer users of mobile, online, and possibly IPTV, a unified rich Social network-experience across the “3 screens”
Virtual mobile social networking operators reducing network owners to bit-pipe
One of the most obvious moves for a mobile carrier is to simply allow mobile access to social networking tools. While this might satisfy the subscribers who want mobile access to Facebook, LinkedIn, MySpace etc., they are effectively reducing themselves to a bit-pipe (all of those companies already have mobile interfaces for their platforms). If Telcos are going to effectively fight off those Internet-based rivals, the Telco MUST offer some value beyond just the pipe. That's where the Telcos have to use their closeness to the subscriber and their brand to best advantage to ensure that the carriers do not get relegated to the 'plumbing'.
I spoke about the advantages that Telcos have over the Internet-based Social Networking providers; this is where the Telco must play their hand and use those advantages, because failure to do so will result in them being just providers of bandwidth. One way for Telcos to exert more ownership and maintain their value for subscribers is through User Generated Content (UGC). I've spoken to a number of Telcos in the past year about UGC in Australia, Malaysia and Thailand - for some reason, those that I spoke to all had the idea that it is a space they 'should' move into, yet not one of them actually had the guts to step up and do it.
This is contrary to the way I think they should be moving, and it feels to me like they are stuck in an old telco product thought mode. The whole idea behind long-tail applications is that you have a very short time to market for your products and try out as many as you can. Kill off the ones that fail and keep and extend/enhance the ones that do well. I speak to Telcos a lot about long-tail applications - usually with respect to technology like IBM Mashup Center, WebSphere sMash and Telecom Web Services Server, but also with respect to traditional SOA as well. For example, Globe Telecom in the Philippines is executing a long-tail strategy for new products brilliantly - with the help of an SDP and Unified Service Creation environment based on IBM's software, they are able to bring new products to market in as little as 15 days. It used to take them at least eight months! They know that some products will work and some will fail, but by taking advantage of the short time to market, they can launch many products very quickly. This strategy is proving very successful for Globe, their resellers and their subscribers.
Sorry - got on a bit of a tangent there... back to UGC.
A Telco that deploys a User Generated Content framework has a number of opportunities for revenue - some better advised than others. For instance, one carrier I spoke with last year wanted to charge artists/publishers to upload content. To me, that would be a good way to prevent their platform from ever taking off. The table below shows the various revenue opportunities from the artists/publishers, the consumers, the advertisers and others.
Artist / Publisher
Free apart from Data Charges ($)
Free apart from Data Charges ($)
Own Ringtone/RBT ($)
Artist Wallpaper ($)
Premium Subscriptions: ($)
Automated alerts e.g. fan club started, x number of downloads made, etc
Subscriber Consumer Viewing:
Free apart from Data Charges ($)
Consumer Purchases (any channel):
Artist Wallpaper ($)
Premium Subscriptions: ($)
Automated alerts e.g. Photo tagged
Broadcast Ad Revenue
Ad inserts before video starts ($)
Ad inserts could vary by site the video is published to e.g. Youth brand, Facebook, YouTube, etc
Ad Funded Content
Advertiser decides to do a “Powered by …” sponsorship for a given artist via their Fan Club for a week. ($)
Different ad pricing by site published to ($)
Different pricing by artist popularity ($)
Voted #1 by Telco XYZ Social Community
Community Testing ($)
Selling the value of the community to relevant companies that want to test new products/ideas within the community.
Cross Selling / Up Selling to Artists and Consumers
Service revenue from enabling 3rd parties to consume and use the common web services exposed from the UGC platform.
Currency – encourage the community members to do some rewarding. To do
this they’ll need to purchase synthetic currency from the carrier.
As you can see, there are plenty of opportunities to establish new revenue streams from UGC - but any telco looking to move into this area needs to tread a careful line between getting revenue and encouraging usage.
2009, what a year! The Industry Business Partner Technical Strategy Enablement team has had a bumper year and has had a huge influence on partners and deals worth many millions of dollars on a worldwide basis. Jim Toohey (feel free to leave him a message on his profile!), our team manager, recently sent a detailed summary of the team's achievements for 2009 - think of this as a summary of the summary.
I won't go into the number of hours that were spent with each partner or customer, or list all of them out, but I will try to illustrate our reach. (I wish I could do this all with Google Maps, but it can't easily shade the maps different colours, so I have used Google Charts instead.) The map below shows where all of our team members are located; shaded in WebSphere Purple are the countries in which we dealt with partners or customers (in person and remotely). Given the small size of our team (19 in all), I reckon this is pretty impressive:
The embedded Google map below with the 'IBM pins' will show you exactly where all of our team are based in the world - click on each pin to see who works where. (The map is zoom-able.)
Note: The links for the individual's profiles will only work for IBMers as they go to IBM's internal Connections implementation - not all of my teammates have MyDeveloperworks accounts I am ashamed to admit...
Our team is split between 13 of us (including me) dedicated to IBM Business Partners in the Telco, Media & Entertainment and Energy & Utilities industries, while the remaining six are dedicated to RFID and Voice Partners. Unfortunately, I can't boast too much here about specific achievements due to the public nature of this blog, but hopefully this post will give you some idea of the reach of our team.
Did you know that the vast majority of calls carried on the 3.5 billion GSM connections in the world today are protected by a 21-year-old 64-bit encryption algorithm? You should now, given that the A5/1 privacy algorithm, devised in 1988, has been deciphered by German computer engineer Karsten Nohl and published as a torrent for fellow code-cracking enthusiasts and less benevolent forces to exploit. Yikes! This harks back to the old days of eavesdroppers on analogue phone signals and all those illegally taped conversations (I recall some conversations between the late Princess Diana of Wales and her bodyguard, for example). OK, we're probably not quite there yet, but by the sounds of this article, we aren't far from it now...
As I see it, Telcos (particularly in the countries that I deal with) are in a perfect position to transform their subscribers' enthusiasm for social networking into real business benefits:
Combining traditional Calling Circle applications (aka Family & Friends, or VPNs as the Telcos would call them) with online (PC or Mobile) communities to share information. These could be short-lived, around:
21st Birthday Party
Or they could be longer term communities such as :
Service Clubs (Lions, Apex, Rotary etc)
These are just some that come to mind off the top of my head. I am picturing discounted call and text rates for community members, as well as discounted data rates for mobile access to the web community, including blogs, activities, profiles, discussions etc. Think about these sorts of integrated scenarios for Telcos:
Sending SMS messages to blog subscribers every time that blogger publishes a new post
Emails or SMS message to a community - based on either their profile or their current location.
Microblogging aggregation - the subscriber sends an SMS to a shortcode, which then updates all the other microblogging services that subscriber uses (Facebook, Twitter, MySpace, Friendster etc.)
Write blog posts on your mobile phone either via a MMS message (including images, video or audio), the phone web interface, an email interface or (for shorter entries) SMS messages.
Bloggers could receive SMS messages whenever someone comments on their posts
.... the list goes on...
In these days where churn is a significant issue for most Telcos - especially in countries where mobile number portability (MNP) has been introduced - anything a telco can do to make themselves more sticky for their subscribers is a good thing. Add to that the potential additional revenue from extra data and messaging usage, and we have a proposition that a lot of telcos would be interested in. I wouldn't see this as having a major effect on ARPU, but every little bit helps.
I can picture a wide range of services that telcos could combine with their Social Networking offerings that would draw out additional revenue from their subscribers. While there are plenty of Internet-based companies offering blogging, file sharing, profiles, microblogging etc., none of them have the established relationship that a Telco has with its subscriber base. Additionally, very few of them have a local presence outside of their home country. Telcos are localised in nature - whether through government heritage, government regulation, language or social reasons - and Telcos need to take advantage of that fact. OK, their in-country competitors have the same advantage, but in this race, the real competitors are the Internet providers. Obviously, a Telco that can move on this territory before their local competition will have a significant advantage in the marketplace.
My gut tells me that within each country, we are just waiting for the first Telco to offer these sort of converged services before all the others in that marketplace decide that they need to as well - the Domino effect.
Speaking about the Domino effect, I am struck by the irony of the naming of that principle and what is happening in the Vietnam telco market right now. The US Government coined the term the Domino effect to justify entering the Vietnam War in support of South Vietnam (to prevent the fall of the rest of South East Asia to communism), yet in the Telecom industry in Vietnam, we are seeing a Domino effect with respect to Service Delivery Platforms right now - one telco goes down the SDP path, so now they are all going down the SDP path...
Now that I have rambled onto the subject of SDPs: a telco could offer Social Networking services without having an SDP in place, but in order to offer true integration between the Social Networking offering and the traditional telco services, an SDP will be required - unless they want to go down the custom-code path, and I think we all know where that ends up - Spaghetti Junction!
As I alluded to in my earlier post (Telcos capitalising on Social Networking tools), Telcos can use Social Networking tools to their advantage in a number of ways. I also mentioned the Idea Factory for Telecom - an adaptation of the basic Idea Factory offering now owned by Software Group Services for Lotus. This offering was originally put together by the High Performance On Demand Services (HiPODS) team and required no fewer than six servers at a minimum. That is because the Idea Factory (or Innovation Factory, as it was previously known but renamed due to trademark issues) was originally offered well before Lotus Connections was released. These days, small to medium implementations can be done with Lotus Connections, IBM Mashup Center and a number of templates and add-ons (widgets) for Connections. A Proof of Concept could potentially be done with a single server. Larger Idea Factory implementations - particularly where Telcos are hosting the service for their enterprise customers and MVNOs - would also require a WebSphere Portal instance as well.
Probably the best explanation I can give you of the Idea Factory is for you to watch the recorded demonstration I have available below - in fact, I have quite a number of variations of the same demo customised for different Telcos. The demo below is the most recent, which was recorded before Connections 2.5 was released and so was done with a beta version of Connections - see if you can spot the fault in the video! I tried to cover it up as much as possible because I needed to show this video to customers, but it's in there and you will see it if you know what you are looking for...
For online access to the latest Idea Factory (V2) recorded demo, just launch it below... Note that this is a lower resolution version for online use. I also have a larger version that I used for offline demos; it is 24Mb in size, so I will share it with anyone who requests it rather than make it generally available.
Way back in 2007, there was a good whitepaper about the Idea Factory - I have uploaded it to Collaboration_to_innovation-leveraging_web20.pdf. This document is now quite out of date with respect to the technology used to deploy the Idea Factory (then called the Innovation Factory) - these days, we would use Lotus Connections 2.5 as the base platform, add widgets for the polling/surveys requirement, set up activity templates to manage the ideation process, and use IBM Mashup Center rather than QEDWiki, an IBM Research-developed Mashup environment (check youtube.com - there are lots of QEDWiki demos available).
The Idea Factory for Telecom fosters collaboration by incorporating a self-service portal for consistent user experience and integration.
Aside from this limitation, the concepts expressed in the whitepaper and the usage of the Idea Factory remain relevant. I guess one point that this whitepaper makes is that IBM has been in this Web 2.0 game for a while now - longer than we have had Generally Available product to support the concepts.
If we look at the above diagram, Lotus Connections will take care of most of the Collaborative Services and the Portal (UI) requirements while IBM Mashup Center takes care of the Services Cluster and the Services Catalog (called the Widget Catalog in the Mashup Center).
I was going to use this post to talk about the Idea Factory for Telecom, but I noticed this press release this morning about SK Telecom's (South Korea) use of Cloud computing and I thought I would share what I have seen with Cloud computing in Telcos. The press release follows:
ARMONK, N.Y. - 16 Dec 2009: IBM (NYSE: IBM) today announced that it has successfully built Korea's first cloud computing environment for a private sector company, SK Telecom, the largest telecommunications company in Korea with over 24 million customers. The cloud environment provides developers with the necessary software and hardware to develop applications that will allow SK Telecom to offer up to 20 new services to their customers by the end of 2009, such as sports news feeds and a photo service.
I can't claim to have been closely involved with this deal at SK Telecom, but we have spoken to other Telcos in ASEAN about using Cloud Computing in a similar way. Where a telco has a developer ecosystem, a private cloud could be deployed so that its developers can deploy their test applications within that cloud. We proposed using the WebSphere CloudBurst appliance to allow developers to self-manage and deploy the virtual servers for their applications. The diagram below illustrates what I am talking about:
I guess this could be where I tie in the Idea Factory for Telecom after all. The Idea Factory would be used to support the whole developer ecosystem, while the CloudBurst appliance would be used to support the advanced developers who want to be able to deploy their Java applications within the cloud.
In my view, this is a somewhat obvious use for Cloud within a Telco, and SK Telecom's deployment of Cloud in this manner is proof of that point. The somewhat less obvious use of Cloud within a Telco is the use of Cloud infrastructure for their core SDP and OSS/BSS infrastructure. I could not imagine a Telco being willing to deploy such core systems in a public cloud, but there is a possibility of deploying them in a private cloud. The team at Bharti Airtel are working to move the SDP infrastructure there to a Cloud environment - giving them the flexibility to rapidly scale up and down to suit changing market forces. The other BIG thing that moving to a cloud will change is where the SDP components are deployed. Once the SDP components such as WebSphere Process Server, Telecom Web Services Server, WebSphere Services Registry and Repository and the other components are in a private cloud, it becomes very easy to move to a hosted private cloud or even a public cloud. If we think for a moment about the SDP running in a hosted cloud environment, then it is not a huge leap to host another Telco's SDP in the same hosted cloud. Now we have a hosted environment in which potentially many telcos have their Service Delivery Platform running.
This diagram illustrates the various SDP deployment options, including the Cloud options. What is happening at Bharti is a move from a traditional outsourcing model to a private cloud, then on to potentially a single-client hosted private cloud, and then eventually to a multi-client cloud option. What do you think? Can you think of any other cloud scenarios in a telco?
PostScript: Taking into account the comments from the internal version of this blog, I have modified the developer ecosystem a bit - using IBM CloudBurst instead of WebSphere CloudBurst would certainly give a Telco a much greater developer platform choice. I've left the view above because that is what we proposed to the ASEAN telco, but in retrospect, the IBM CloudBurst option would have better suited their needs - although IBM CloudBurst carries a significant price premium that WebSphere CloudBurst does not. That said, in a cloud environment (customer hosted private, IBM hosted private, multi or single tenanted) for a Service Delivery Platform, using IBM CloudBurst would seem to me to be the right way to go.
OK. Internally, a Telco is like any other big company when it comes to collaboration among its staff. Social Networking tools help employees make contacts, learn and share more, find information more rapidly and maintain social networks beyond the physical boundaries of their own work location. If you're curious about what I am talking about, I recommend you have a look at the great videos on YouTube from Jean Francois Chenier (an IBMer). I have embedded the first of the series below:
It's pretty easy to see how, within any large company, social networking software such as Lotus Connections makes sense - provided you have enough people who actually use it. It seems to me to be something like group calendaring: it depends on a significant proportion of the user population using the tool to be effective.
The way I see it, it is only a small step beyond the internal deployment of social networking tools to extend them to a Telco's trading partners. That might include vendors, resellers (of Telco products - I was initially thinking retail, but that could include MVNOs), enterprise customers and others. Situations where employees of the Telco and employees of external companies need to work together - sharing ideas, files and information and generally collaborating - would seem to be a valid deployment of social networking tools.
IBM already has an offering that uses social networking tools to build communities around the Ideation (idea generation and growth) process - a kind of virtual brainstorming combined with idea and thought sharing. The intent of the offering is to make it easier for companies to find and help evolve ideas for the next product to take to market. In a Telco, this might be ideas for applications like "Meet-on-click"** that a telco could take to market. That offering is called the 'Idea Factory' and is not actually unique to the Telecom industry - Kraft Foods use the Idea Factory to come up with new ideas for product packaging. When deployed in a Telco, we often combine the Idea Factory with IBM Mashup Center (V2.0 of Mashup Center was recently released, by the way) - an offering I usually call the "Idea Factory for Telecom". The Mashup Center is used as a rapid prototyping environment for the ideas that are evolving within the Idea Factory. In my view, this is a great way to build an active and dynamic developer community for the Telco.
China Telecom have demonstrated how effective the Idea Factory can be in a Telco environment - with a nine-fold year-on-year improvement in a competition to find new applications (3 to 27 new products). Their Idea Factory deployment predated IBM Mashup Center, so they didn't get the benefit of a rapid prototyping tool, which I believe could increase the quality of the new product ideas even further.
While I am a big fan of the Idea Factory, I see that as just a starting point for social networking tools hosted by a Telco that extend beyond just their developer community and into their (much) larger subscriber base. Think about building many local communities based around schools, churches, scout troops, national holidays, religious events, local football teams, mothers' groups... anything really. The community would have access to a shared virtual community on the web, accessible from a PC or (more importantly for many developing nations) from a mobile phone. They would have microblogging, blogs, file sharing, discussion forums, profiles and contacts, AND be tied into more traditional Telco services such as calling circles. The Telco could provide discounted call and text rates between community members. Sound good? I think so. For the Telco, I see a number of benefits:
Decreased likelihood to churn - increased 'stickiness'
Stronger loyalty to the Telco brand
Increased revenue due to increase in call and text volumes and increased mobile data usage once a reasonable proportion of the community is using the tools
An additional weapon against the Internet based competitors (such as Facebook, Skype, MySpace etc)
Telcos in my opinion have a significant advantage over the Internet companies when it comes to offerings like this. They have:
An existing relationship with the (post or pre paid) customer
More local footprint via people on the ground and reseller/franchisees
Existing monetary arrangements with the customers
Greater trust by customers (typically)
Telcos could easily become the local aggregation point for social networking within that community - for instance, with a Facebook connection, subscribers could update their Facebook wall without the need to launch Facebook. Microblogging entries could automatically update status in Facebook, LinkedIn and MySpace and send a tweet out on Twitter.
I think this is going to be big - web based social networking giants like Facebook, MySpace and LinkedIn have proved how popular web based social networking can be - add the local context to it and I think you have a winner for Telcos in many markets.
Now that I have started this thought, I think the next few posts could well be along similar vein - looking at the Idea Factory for Telecom, Telco focused Developer Ecosystems, User generated content and Public focused and Telco integrated social networking capabilities....
Here endeth my thought (for now)
* I am a shareholder of Telstra ** Meet-on-click is an application I often demonstrate which I can build from scratch in about three minutes using the Mashup Center and some widgets that consume TWSS Web Services. It enables a group of friends to see what is on in the area and where the rest of the group are, and to send an SMS inviting the friends to go out or set up a multi-way conference call so that the group can discuss the suggested venue.
I know this isn't strictly related to my normal Industries, but it is applicable for any DW member, so I thought it was valuable enough to share and might even prove useful in dealing with IBMers. For a number of years now, my email signature has included a link for non-IBMers to contact me via Sametime. That link connects to https://www.ibm.com/collaboration/instantmessaging
This doesn't seem to be well known among IBMers, but I have spoken with a number of partners, ex-IBMers and my wife via this facility in the past. All they need is an ibm.com account, and anyone can sign up for one of those. If you have ever downloaded anything from ibm.com in the past, or signed up to developerWorks, then you will already have one (which is the case for most partners and IBMers). The Sametime client that the ibm.com site launches is the (old) Sametime Connect 3.1 Java client. It looks like this:
NB. In the buddylist - alarmour @ au.ibm.com is my internal Sametime community id (which is the same as my email address) and alarmour @ optusnet.com.au is my ibm.com id.
Despite its age, and despite having been superseded by Sametime 6.5, Sametime 7, Sametime 7.5 and Sametime 8, it still works! As an example, check out the short conversation I had with my other personality!
In my normal Sametime client, my external id comes in as alarmour @ optusnet.com.au.ibm.ext (my ibm.com id prepended to "ibm.ext") - I can add this external id to my buddylist so that I can see when my external self is logged on. In fact, I can add the external community to my standard sametime setup and log in from there as well. If you know the name of the IBMer that you want to add to your buddylist, but don't know their email address, you can get that from the ibm.com web site through this employee search facility.
I am not sure what is going on with the status of my ibm.com id not showing up as online (in the screen dump above) - I do see when my wife is logged on, and some others that regularly log in too (although they are using a more modern client rather than the old 3.1 Java client). After a while, it did correct itself though.
What I did for my wife was to download the free trial version of the Sametime client (from DW!), then use the config information from the Java client so that Sametime started automatically when her PC starts - that way, she can chat with me regardless of the Sametime client I am using to connect to messaging.ibm.com (I often use the mobile client, which does not support multiple communities). Such a setup also means that she does not need to go to ibm.com in a browser to chat with me - the client is just sitting minimised in the systray on her PC.
Hopefully, this post will spread the word a bit more....
Update: The version of the Sametime Web Client has been updated and the launch URL has changed - I have corrected it above and added a new screen capture of the new client:
Would you like to be able to create bookmarks, blog posts and activity entries with a simple button in your browser? It's easy and works in Firefox and Internet Explorer (possibly others, but I have not tested them). All you need to do is create a new bookmark in your bookmarks bar.
Create a new bookmark and paste in the following in the "Location" field of the bookmark:
Wow! Optus in Australia are blocking their own Android subscribers from buying apps from Google's App Store - only permitting subscribers to buy apps from Optus's own app store. Imagine the uproar if they did the same thing for the Apple iPhone and Apple's app store!
In response to the presentation I built (and blogged about - see Telecom Systems Evolution), a number of IBMers asked for costings for each phase. While that would be possible at the early stages, where you don't have to take scaling into account, later in the phases that really does become impossible. For instance, let's say phase 4 was US$1m for Telecom New Zealand (with about 2 million subscribers and a heavy post-paid mix of subscribers) and compare that with Bharti Airtel in India, with 120 million subscribers and a heavy pre-paid mix. Because of the variations in telcos around the world, there is no way any pricing added to the presentation would be in any way applicable - that is something that has to happen on a per-telco basis.
That said, we are working on a "starting point" or "lite" implementation and wrapping some costs around that - we think this would be a good starting point for many SDP projects - kind of a standard phase 1. The architecture would be built with expansion in mind, even though at phase 1 the components would not be excessively sized. So far, we have a near-final Bill of Materials and a number of diagrams. I thought I would share those diagrams with you here. We still need to document all the assumptions and limitations that this "SDP Lite" phase 1 architecture would entail.
I would appreciate feedback on these diagrams, so please comment.
The IBM Telco specialists should recognise this - it is the Service Provider Delivery Environment (SPDE) Version 3 - IBM's industry framework for Telecoms. I included it here for reference and comparison with the following diagram, which illustrates the areas of impact that SDP Lite will have on SPDE 3.0.
Now, see the areas of impact that SDP Lite has on SPDE 3.0 - the orange shades indicate areas of impact. If we map the IBM products in the Bill of Materials over the top of that, we get...
That's it for the logical views of SDP Lite. The next one is a marketecture diagram to help explain the key principles and functions of SDP Lite. I am only showing the production environment in this diagram, but you could also have a separate, duplicate environment dedicated to both test and ISVs.
For those that want to understand the Deployment units and their basic layout, I have a deployment unit diagram
Finally, an illustration to show how the SDP Lite infrastructure could support a developer ecosystem - the shaded components could be added in subsequent phases; for now, it is all about secure exposure of web services and REST interfaces to a developer ecosystem that is using conventional Integrated Development Environments (IDEs). Note also that the hosting servers for the developer applications are out of scope and could be hosted anywhere on the Internet. Keep in mind that this is supposed to provide a starting point - an initial deployment: infrastructure that could grow, but could be used immediately for smaller trials. With that in mind, in estimating the performance, this is what we get:
This SDP Lite infrastructure would allow a telco to begin offering a range of new services and products as well as form the basis for a larger and more functionally rich topology later on. Some of the use cases that come to mind immediately include:
Service exposure to 3rd party ISVs - Web Services and/or REST. This represents a new revenue opportunity for many telcos.
Build composite applications that consume and combine existing services to automate processes like retailer commissions
Marketing campaigns based around bundled products. For instance: top up your prepaid mobile broadband and get unlimited text messages for 24 hours on your mobile - Globe Telecom in the Philippines are doing exactly this - check out the following Globe TV commercial:
Begin to build a developer ecosystem around the exposed services
Build composite applications that consume and combine existing services to offer new products to subscribers such as
Family Finder - Allow parents to see where their kids are and where they have been
Meet on Click - Friends can see where each other is, what is on locally and send invitations to catch up via SMS, MMS or PushWAP
Emergency notifications - Notify everyone in a specified geographical area of some danger
Location based marketing - Send SMS/MMS/PushWAP messages to subscribers who have opted in when they get within 500m of a retail outlet.
lots of others.....
50% CPU Maximum
JS12 Blades (one for TWSS Access Gateway, One for TWSS Service Platform)
SMS via SMPP
PushWAP via SMPP
MMS via MM7
LBS via MLP
Parlay via CORBA
Presence via SIP
IMS/VoIP via SIP
Expected max transaction rates
100 TPS* for SendSMS via SMPP
10 TPS* for SendMMS via MM7
Other transactions could be added into the mix, but would lower the SMS or MMS transaction rate.
50% CPU Maximum
80% Shortlived transactions
20% Long Lived transactions
Expected max transaction rate
20 TPS* of the above transaction split
50% CPU Maximum
JS12 Blades (one for WebSeal, One for TAM Policy Manager)
Expected max transaction rate
DB2 Enterprise Edition
50% CPU Maximum
Minimum of 12 Disks in RAID 1+0 Array
So, assuming that both internal and external users/systems were using the system concurrently, we could support up to 40 TPS from external developers (limited by TAMeb). If we assume the developers are using 90% TWSS services and 10% composite services, and consuming 50% of the TAM CPU, then internal systems would have 74 TPS of TWSS services and 16 TPS of WPS services available.
For External Use: 32.7 TPS of SendSMS + 3.2 TPS of SendMMS; 3.2 short-lived + 0.8 long-lived TPS
For Internal Use: 67.3 TPS of SendSMS + 6.8 TPS of SendMMS; 12.8 short-lived + 3.2 long-lived TPS
These are really rough numbers and I would like to add some more assumptions around them. Of course, if your Telco won't have a developer programme, then 100% of the transactions would be available for internal consumption.
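The capacity split above can be reproduced with a few lines of arithmetic. Here is a minimal sketch in Python, using only the planning numbers quoted in this post (any small differences from the figures above come down to rounding):

```python
# Rough capacity split between external developers and internal systems,
# based on the maximum transaction rates quoted above. These are planning
# numbers from this post, not measured results.

TWSS_MAX = 100 + 10   # 100 TPS SendSMS + 10 TPS SendMMS
WPS_MAX = 20          # 20 TPS of the 80/20 short/long-lived mix
EXTERNAL_MAX = 40     # external developer traffic, limited by TAMeb

# Assume external developers use 90% TWSS services, 10% composite (WPS)
ext_twss = EXTERNAL_MAX * 0.90   # 36 TPS
ext_wps = EXTERNAL_MAX * 0.10    # 4 TPS

# Whatever the developers don't consume is left for internal systems
int_twss = TWSS_MAX - ext_twss   # 74 TPS
int_wps = WPS_MAX - ext_wps      # 16 TPS

# Split each TWSS pool back into SMS/MMS in the 100:10 ratio,
# and each WPS pool into the 80/20 short/long-lived mix
for label, twss, wps in (("External", ext_twss, ext_wps),
                         ("Internal", int_twss, int_wps)):
    print(f"{label}: {twss * 100 / 110:.1f} SMS + {twss * 10 / 110:.1f} MMS TPS, "
          f"{wps * 0.8:.1f} short-lived + {wps * 0.2:.1f} long-lived TPS")
```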
What do you think? Are we on the right track for your Telco customer? Would you like to see some changes?
@Alex, regardless of why Oracle chose to compare the RISC and CISC platforms, the result is not meaningless. Java App Servers are licensed on the cores that they run on. For most companies, bang for buck is a key measure. As the comparison is JEE-compliant Java App Servers, we can legitimately compare app servers running on different underlying hardware architectures and come up with a legitimate measure of bang for buck.
Let's look at IBM's and Oracle's licensing for this benchmark; the WebLogic instance would require 24 CPU-equivalent*** (48 x 0.5) licenses of WebLogic App Server (Enterprise Edition) while the Power system would require 480 Processor Value Units (4 x 120) of WebSphere Application Server Network Deployment. To compare the two models, if we take the list price for WebLogic Enterprise Edition (US$30,500* per CPU equivalent) and IBM's WebSphere App Server ND (US$174/PVU*), then we see that the Oracle WebLogic App Server license (US$732,000) will cost 776% more than the IBM WebSphere App Server license (US$83,520).
Oracle are claiming "...nearly 7 times..." the performance despite the fact that 9,455.17/1,197.51 = 7.90 (to 2 decimal places), which in my book is nearly 8 times the performance, not nearly 7 times. I think their marketing people got their percentages mixed up - 7.90 times the performance of the IBM score is a 690% improvement on the IBM score.
So, let's give them the benefit of the doubt and assume the Oracle marketing folks made a mistake and their benchmark system can deliver 7.9 times the performance of the IBM benchmark system. They are doing it for 8.7 times the price of the IBM system in terms of app server licensing. That is not looking like the spectacular win that Oracle are claiming it to be... In the bang for buck war (at least in software licenses), IBM still wins.
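For anyone who wants to check my numbers, the whole comparison boils down to two ratios. A quick sketch in Python, using the list prices and benchmark scores quoted above:

```python
# Price/performance comparison from the benchmark discussion above.
# Scores and list prices are the ones quoted in the post.

oracle_score = 9455.17    # Oracle's published benchmark result
ibm_score = 1197.51       # IBM's published benchmark result

# License sizing: 24 CPU-equivalents of WebLogic EE vs 480 PVUs of WAS ND
weblogic_cost = 24 * 30500    # = US$732,000
websphere_cost = 480 * 174    # = US$83,520

perf_ratio = oracle_score / ibm_score          # ~7.90x the performance
price_ratio = weblogic_cost / websphere_cost   # ~8.76x the license cost

print(f"Performance: {perf_ratio:.2f}x ({(perf_ratio - 1) * 100:.0f}% improvement)")
print(f"License cost: {price_ratio:.2f}x ({(price_ratio - 1) * 100:.0f}% more)")
```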
I had a request the other week to create a number of topology diagrams that showed how a Telco might start small and grow their environment to add new capabilities and services. This was specifically for a telco in Vietnam, but I figured it would make sense to generalise the presentation and the images to make them usable for other opportunities. We've had similar requests from other telcos recently as well. The presentation steps through 11 phases, from a pilot/trial environment through to a full-blown system. Each slide has speaker notes explaining what is being added at each phase in terms of products and capabilities. This presentation is not meant to make any recommendations on how to evolve from a small system to a more complex and capable one. What it is supposed to do is illustrate one possible evolution... Note that it focuses only on the IBM components; some other components would also be required for some phases (such as a transcoding engine in the media extension phase).
Below are three of the diagrams - Phase 1, Phase 6 and Phase 11 and the speaker notes that go along with that phase - to give you a feel for the flow...
Phase 1 - Test Environment
At this first stage, an initial deployment might be considered a proof of concept or a trial – which could become the test and or ISV environment, The functions that this could offer are:
Composite applications that bring together functions provided by the network. For instance an application that consumes SMS messaging and integrates the location of the handset into an app.
WSRR will get them down the path of SOA Governance - it is important to get this in early to ensure that the governance model is maintained and the Telco will not need to rework services that are created at this stage.
Complex workflows and business processes can be built which include human tasks (such as prototype processes for the production implementation )
Phase 6 - Developer Ecosystem including Web 2.0
Phase 6 introduces the Developer Ecosystem components such as :
Idea Factory for Telecom - which will help make a dispersed group of developers into a community. It enables the sharing of ideas and a framework for the Telco to manage the evolution of the ideas that are generated within the community. It also provides a rapid prototyping capability via...
IBM Mashup Center, which allows users to drag widgets onto a workspace and simply wire them together. It is both the development and the runtime architecture. This means that developers don't need deep development skills in order to build new applications.
WebSphere sMash which provides a PHP and Groovy scripting environment (both development using the Dojo toolkit and the runtime environment)
This combined with the web services exposure deployed in phase 4 means that the developer ecosystem can now cater for all levels of developers – those with no skills can use the drag and drop mashup environment, script developers can use sMash and more advanced developers can use the web services interface. In the backup slides there is an illustration of this.
For advanced developers the Telco can support developers across a range of IDEs ranging from Rational and Eclipse (where we have Telecom Toolkits available for free) to other IDEs (such as Microsoft Visual Studio or Sun Netbeans) where the IDE has tools to assist developers with consuming web services. In all the IDEs, developers will consume the Web Services Description Language (WSDL) file from a UDDI directory in the DMZ. The UDDI directory (part of WPS) is populated from the WSRR internal services repository.
Phase 11 - IMS integration and extension
When the Telco goes down the IP Multimedia Subsystem (IMS) path, the software deployed already has IMS enablement, but at this point we can also add WebSphere Presence Server (PS) and WebSphere XML Document Management Server (XDMS - formerly WebSphere Grouplist Manager), which provide services for the IMS services plane. The core infrastructure that was deployed way back in phases 1 and 2 is critical to the IMS services plane.
It is important to understand that the phases I have split this down into are purely arbitrary and are not necessarily what would happen in a real telco. Which function occurs at what point, and in combination with which other functions, is something that must be driven by the business requirements of the telco. The intent is to illustrate how a telco could start small and add function incrementally, building on previous investments. Still want it? Great - feel free to download it from MyDeveloperworks files. Please let me know what you think.
Verizon Wireless, AT&T and several major international carriers and vendors threw their support behind an IMS-based approach to delivering voice and SMS services over LTE networks. The level of operator support--the approach also is supported by Orange, Telefonica, TeliaSonera and Vodafone--sits in sharp contrast to another approach, called Voice over LTE via Generic Access, or VoLGA, which is supported by T-Mobile International.
Vendors including Alcatel-Lucent, Ericsson, Nokia Siemens Networks, Nokia, Samsung Electronics and Sony Ericsson also voiced their support for the initiative, dubbed One Voice. The companies said they concluded that an IMS-based approach "is the most applicable approach to meeting the consumers' expectations for service quality, reliability and availability when moving from existing circuit-switched telephony services to IP-based LTE services. This approach will also open the path to service convergence, as IMS is able to simultaneously serve broadband wireline and LTE wireless networks."
The companies said that the purpose of the initiative is to create the largest LTE ecosystem possible, and to avoid fragmentation of technical solutions.
Interestingly, both Alcatel-Lucent and Ericsson also support the VoLGA approach, and Nokia Siemens has supported its own solution, called Fast Track Voice, which proposes having mobile switching center servers handle VoIP traffic over LTE networks. VoLGA proponents argue that their approach should be used as an interim solution. All three vendors said they do not see a conflict in supporting the different approaches.
Is it just me or when you read "VoLGA" do you think "Vulgar" - I think the ALu acronym police need to get out from behind their desks and make an arrest for that one!
I've drawn up a representation of the situation as I see it. NSN going it alone with Fast Track Voice, almost everyone else supporting VoLGA and planning to move to OneVoice.
The thing that I find really interesting is the inference by ALu that they will continue to stand by VoLGA as well as support it as an interim step to OneVoice, while NSN seem to be saying that Fast Track Voice is only an interim step on the path to OneVoice. It's also interesting to note that the VoLGA consortium seems to be mainly Network Equipment Providers (NEPs) while OneVoice is both NEPs and Telcos.... I suppose the most appropriate message is "watch this space"...
PS. On rereading this post, I imagine some of you are going 'Huh?' I apologise for the Telco jargon. Let me take a moment to try and explain some of the terms that appear in this post.
IMS - IP Multimedia Subsystem (not IBM's mainframe database that helped put man on the moon). This is a specification controlled by 3GPP (a Telco standards body) to describe a next-generation IP-based telephony environment. Most telcos today still run a legacy switched environment based on very specialised protocols such as SS7 and Sigtran. These protocols are not IP based and as such require very specialised (read expensive) skills to work with them. The other thing is that they are not really standardised - each NEP has their own version of the SS7 protocols. IMS promises to bring much cheaper skills and shorter development cycles to the Telco's core platform - something they have not had before. IBM has a number of products that are targeted at telcos' IMS infrastructure (WebSphere IMS Connector, WebSphere Presence Server and WebSphere XML Document Management Server).
LTE - Long Term Evolution is seen by most NEPs as the next logical evolution step for carriers with GSM networks. That evolutionary path goes something like this: GSM -> GPRS -> EDGE -> UMTS -> HSDPA -> LTE. LTE promises to deliver high-bandwidth mobile connections. The main rival to LTE is WiMAX, which you may have heard of before.
ALu - Alcatel-Lucent (a very common abbreviation for the joint company)
Yes, our team is focused on SDP, but this article was interesting because it is in our part of the world (I live about 45km from it) in AP, and Telstra make extensive use of Netcool in their Network Operations Centre. I wonder if the folks with only two screens suffer from 'screen envy' when so many others have four screens?
A South African information technology company proved it was faster for them to transmit data with a carrier pigeon than to send it using Telkom, the country's leading internet service provider.
Internet speed and connectivity in Africa's largest economy are poor because of a bandwidth shortage. It is also expensive.
Local news agency SAPA reported that the 11-month-old pigeon, Winston, took one hour and eight minutes to fly the 80km from Unlimited IT's offices near Pietermaritzburg to the coastal city of Durban with a data card strapped to his leg.
Including downloading, the transfer took two hours, six minutes and 57 seconds -- the time it took for only four percent of the data to be transferred using a Telkom line.
Okay, it was a bit of a stunt. I am sure that if I posted a 32GB SD card to Sydney (standard mail service - often next-day delivery, but sometimes the day after that), it would arrive faster than I could transfer that content from my home office. What does that prove in terms of available bandwidth? Not much really - SD cards can hold an incredible amount of information these days. I have worked with customers in the past who shipped hard drives around when they needed to transfer large amounts of data. Even today, on most networks, it would be faster to courier a 1TB HDD anywhere in the world than to transfer that much data over the wire.
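The courier-versus-wire claim is easy to sanity-check with some back-of-envelope arithmetic. This is just a sketch of my own (the helper name and the link speeds are illustrative, not from any real measurement):

```python
# Back-of-envelope: how long does it take to push a given payload
# over a link of a given speed? Decimal units throughout.

def transfer_hours(payload_gb: float, link_mbps: float) -> float:
    """Hours to move payload_gb gigabytes over a link_mbps link,
    assuming the link runs flat out with no protocol overhead."""
    payload_megabits = payload_gb * 1000 * 8  # 1 GB = 8000 megabits
    seconds = payload_megabits / link_mbps
    return seconds / 3600

# A 1TB (1000 GB) drive over a 10 Mbps consumer link:
print(round(transfer_hours(1000, 10), 1))   # roughly 222 hours, i.e. over nine days

# The same drive over a 100 Mbps link - still slower than most couriers:
print(round(transfer_hours(1000, 100), 1))  # roughly 22 hours
```

Even before you account for protocol overhead and contention, the pigeon (or the postman) wins comfortably.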
The article did get me thinking though. I travel quite a bit around Asia and have experienced first hand the speed of networks in many countries. I've seen networks slower than a dial-up modem (in the IBM office in Vietnam) - in fact, I reckon that my mobile phone used as a modem over an EDGE connection (3G in Vietnam is very patchy) would have been faster than the IBM office network connection. This is not a unique situation - in many countries I visit, the network is faster in my hotel than at the local IBM office.
How does this affect the way we behave? Let's look at a specific example. Last year, I was doing a lot of work for the Globe Telecom SDP project that we eventually won with NSN in the Philippines. I was using Cattail (an IBM Research project for sharing files - similar functionality to the Lotus Connections Files capability that we now have in MyDeveloperworks) to upload files so that the local IBM team in the Philippines could get to them rather than clog up their mail boxes. Smart - or so I thought. With Cattail, you are able to see who is downloading your files - often quite interesting, as it was in this case. I noticed that only one person in the Philippines was downloading the files, despite my notifying about 12 people that they each needed to look at the content. After a while I asked this one person why no one else was downloading the files from Cattail - he told me that because the network was so slow, most people were unable to even load the Cattail page to begin the download, so he went through the pain for everyone, then emailed the files around the local team! So much for not clogging up their mail files.
I am constantly frustrated by the US-centric assumption that the whole world has the same bandwidth available to them as they do. Even in Australia, I am paying AU$68 per month for 12GB of traffic - typically around 2 Mbps actual (10 Mbps claimed capacity) downstream and 250 kbps actual upstream. By US standards, that must seem slow, but by the standards of developing nations in ASEAN, that's pretty darn good. There is still a huge digital divide between the haves (the US) and the have-nots (developing nations) - while some countries will have fibre to the home deployed (or being deployed) over the next few years - Singapore will be done very quickly, I anticipate - I won't have that sort of speed available to me until 2012, the Australian federal government claims (I expect it will be more like 2020, though, as I do not live in the inner suburbs of Melbourne).
So, what point was I trying to make? I am not sure. I am frustrated at my bandwidth sometimes (usually not), but in the countries that I visit, the whole nation must feel frustrated. I often see web page sizes in excess of 500KB - a ridiculously large size and unusable in most of Asia. Application designers need to be mindful of the bandwidth available if they hope to be successful in Asia. If you have thoughts, please comment...
For the second year in a row, IBM AIX UNIX running on the Power or “P” series
servers, scored the highest reliability ratings among 15 different server
operating system platforms – including Linux, Mac OS X, UNIX and Windows.
Those are the results of the ITIC 2009 Global Server Hardware and Server OS
Reliability Survey which polled C-level executives and IT managers at 400
corporations from 20 countries worldwide. The results indicate that the IBM AIX
operating system, running on Big Blue's Power servers (System p5), is
the clear winner, offering rock-solid reliability. The IBM servers running AIX
consistently scored at least 99.99% uptime - just 15 minutes of unplanned
downtime per server, per annum.
I am working with a number of IBM business partners and I found a need to explain to them how our Software licensing works. I found that many of our sales staff don't fully understand it either, so I figured I would post the explanation I wrote for the business partners to try and explain it so more people "get it". The other thing that struck me in speaking with some partners was that - despite some of them partnering with Oracle more often than they have with us in the past - they had a simplistic view of Oracle's licensing, thinking that it was simply CPU based. Oracle's licensing scheme is similar to our own PVU scheme in weighting different multi-core CPUs differently for licensing purposes.
First - IBM's PVU scheme
The majority of the IBM runtime components are priced per PVU. The Processor Value Unit (PVU) is an arbitrary measure that IBM came up with to cater for multi-core CPUs and the fact that some platforms offer more processing power per CPU core than others. Depending on the brand and model, a processor core is rated at anywhere from 30 to 120 PVUs.
For example, an Intel single-core CPU is 100 PVUs. Intel multi-core CPUs are rated at 50 PVUs per processor core (or 70 PVUs per core for the newer Intel chips), so a dual-core CPU would be 100 or 140 PVUs and a quad-core CPU would be 200 or 280 PVUs. Prior to the latest generation of Intel multi-core CPUs, Intel's multi-core architecture was such that a single dual-core CPU offered similar processing power to a single-core CPU, so to be fair to customers using Intel multi-core CPUs, IBM rated each core at only 50 PVUs. The latest chips have improved their per-core processing power over previous generations and are now rated at 70 PVUs per core as a result.
IBM PowerPC chips are more efficient, so the rating is 80 PVUs per core for POWER6 blades, although other PowerPC CPUs are rated at 50, 100 or 120 PVUs per core. Compare this with Oracle's definition of a processor, taken from their licensing agreement:
"Processor: shall be defined as all processors where the Oracle programs are installed and/or running. Programs licensed on a processor basis may be accessed by your internal users (including agents and contractors) and by your third party users. The number of required licenses shall be determined by multiplying the total number of cores of the processor by a core processor licensing factor specified on the Oracle Processor Core Factor Table which can be accessed at http://oracle.com/contracts. All cores on all multicore chips for each licensed program are to be aggregated before multiplying by the appropriate core processor licensing factor and all fractions of a number are to be rounded up to the next whole number. When licensing Oracle programs with Standard Edition One or Standard Edition in the product name, a processor is counted equivalent to an occupied socket; however, in the case of multi-chip modules, each chip in the multi-chip module is counted as one occupied socket."
This basically means that Intel quad-core CPUs are priced at twice the price of an Intel single-core CPU (a factor of 0.50 per core) - exactly the same as IBM's pricing for Intel quad-core CPUs. Likewise, for PowerPC dual-core CPUs they apply a factor of 0.75, since Oracle does not differentiate the processing power of manufacturers other than Intel, AMD or Sun and just applies a generic multiplier of 0.75. Oracle has since introduced a more comprehensive factor table to calculate their per-CPU licensing price (in March this year, I think), adding further multipliers such as 0.25, 0.5 and 1.0. Oracle's core factor table is available at http://www.oracle.com/corporate/contracts/library/processor-core-factor-table.pdf
To illustrate, if the Oracle product license cost is $100 per CPU and the IBM price is $1 per PVU, then the following table illustrates how Oracle and IBM pricing will change depending on the processor that software is deployed on.
Assuming the base software price is $100/CPU (Oracle) or $1 per PVU (IBM):

| Processor | Oracle cost calculation: Price x RoundUp(CPU cores x multiplier) | Oracle extended software cost | IBM PVU rating (PVU x CPU cores) |
| --- | --- | --- | --- |
| Single core CPU (any) | 100 x 1 | $100 | 100 |
| Intel/AMD Quad Core (older) | 100 x RoundUp(4 x 0.5) = 100 x 2 | $200 | 200 |
| Intel/AMD Quad Core (new) | 100 x RoundUp(4 x 0.5) = 100 x 2 | $200 | 280 |
| Sun UltraSPARC T1 Hexa-core (1.0 or 1.2 GHz) | 100 x RoundUp(6 x 0.25) = 100 x 2 | $200 | |
| Sun UltraSPARC T1 Hexa-core (1.4 GHz or higher) | 100 x RoundUp(6 x 0.5) = 100 x 3 | $300 | |
| Sun UltraSPARC T2 Hexa-core | 100 x RoundUp(6 x 0.75) = 100 x 5 | $500 | |
| IBM POWER6 Dual Core (520, JS12, JS22 servers) | 100 x RoundUp(2 x 1.0) = 100 x 2 | $200 | 160 |
| IBM POWER6 Dual Core (550, 560, 570, 575, 595 servers) | 100 x RoundUp(2 x 1.0) = 100 x 2 | $200 | |
| IBM Power5 Quad Core | 100 x RoundUp(4 x 0.75) = 100 x 3 | $300 | |
This illustrates that both IBM and Oracle understand that not all multi-core CPUs are created equal - some are more like multiple single-core CPUs simply placed on a single die. It also shows that Oracle and IBM both understand that CPU architectures such as Sun SPARC and Intel/AMD x86 offer less processing power per CPU core than the IBM PowerPC architecture.
Let's dispel the myth that Oracle prices per CPU only - their multipliers provide a similar pricing strategy to IBM's PVU based pricing - sometimes IBM has the price advantage, sometimes Oracle has the price advantage. Oracle first introduced this type of multi-core licensing back in 2005, although back then the multiplier was set at a generic 0.75 per CPU core for all processor types - regardless of CPU processing power.
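To make the two schemes concrete, here is a rough sketch of both calculations in Python. The function names are mine, and the factors and PVU ratings are only the examples quoted in this post - both vendors revise their tables over time, so treat the numbers as illustrations:

```python
import math

def oracle_cost(price_per_cpu: float, cores: int, core_factor: float) -> float:
    """Oracle-style: aggregate the cores, multiply by the core factor,
    then round UP to a whole number of processor licences."""
    return price_per_cpu * math.ceil(cores * core_factor)

def ibm_cost(price_per_pvu: float, cores: int, pvu_per_core: int) -> float:
    """IBM-style: every core carries a fixed PVU rating."""
    return price_per_pvu * cores * pvu_per_core

# Older Intel quad core at $100/CPU (Oracle) vs $1/PVU (IBM):
print(oracle_cost(100, 4, 0.5))   # 200
print(ibm_cost(1, 4, 50))         # 200 - same money, different arithmetic

# Sun UltraSPARC T2 hexa-core: RoundUp(6 x 0.75) = 5 licences
print(oracle_cost(100, 6, 0.75))  # 500
```

The round-up step is what makes the odd-core-count cases (like the hexa-core SPARC chips) come out differently under the two schemes.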
Note - as both Oracle and IBM have the right to change their pricing at any time, I can only vouch for the accuracy of this post at the time it was originally posted (Nov09).
I was at a workshop with a customer in Manila recently when they started to talk about compression over a client link (especially from Nokia S60 mobile phones) - a key value proposition of Lotus Mobile Connect. Not since I was in a Pervasive Computing Technical Sales role in Australia / New Zealand had I seen an opportunity for a hosted Lotus Mobile Connect (LMC) deployment. For those of you that weren't aware that LMC supports a hosted deployment - it does.
If you have the Mobility Client installed (the client for Lotus Mobile Connect - on any platform) you will notice a field labelled "Organizational Unit" - ever wondered what that is for? It's simple really: it is there so that, in a hosted deployment, the LMC authentication mechanism is able to distinguish between "John Smith" at Company A and "John Smith" at Company B.
Typically, you would use Tivoli Directory Integrator (TDI) to enable a federated directory model so that the individual client companies can manage their own internal directory - and because TDI uses LDAP to communicate with those directories, it doesn't matter what those client directories are (Domino, MS Active Directory, Sun Directory Server, Novell Groupwise Directory, openLDAP etc) - as long as they support LDAP V3.
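As a sketch of the multi-tenant idea behind that field (this is not LMC's actual implementation - every hostname, DN and function name below is invented for illustration):

```python
# The same common name can exist at several client companies, so the
# Organizational Unit selects which federated directory to authenticate
# against. All hostnames and DNs here are made up.

TENANT_DIRECTORIES = {
    "CompanyA": {"host": "ldap.companya.example",
                 "base_dn": "ou=people,dc=companya,dc=example"},
    "CompanyB": {"host": "ldap.companyb.example",
                 "base_dn": "ou=people,dc=companyb,dc=example"},
}

def resolve_user_dn(common_name: str, org_unit: str) -> str:
    """Build the DN to authenticate as, in the directory owned by org_unit."""
    tenant = TENANT_DIRECTORIES[org_unit]  # KeyError for unknown tenants
    return f"cn={common_name},{tenant['base_dn']}"

# Two different John Smiths resolve into two different directories:
print(resolve_user_dn("John Smith", "CompanyA"))
print(resolve_user_dn("John Smith", "CompanyB"))
```

In a real deployment, TDI would sit behind a lookup like this and speak LDAP V3 to whatever directory product each client company happens to run.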
Basically, there are two deployment topologies that enable LMC to be deployed in a hosted environment... (I have deliberately left TDI out of the diagrams, since their purpose is to illustrate the client options and whether encryption is broken at the Telco or runs end-to-end.)
Secondly, there is a lower-security (and cheaper to deploy) topology that still gives end users the advantages of LMC, but without end-to-end encryption. This model requires that the client companies trust the Telco, since there is a break in encryption at the Telco's hosting centre, and it would not be suitable in high-security/privacy industries such as finance, health, government, military or emergency services.
A Telco might offer the lower security model as their standard product
and the end-to-end model as the premium service with a price premium...
This is a potential salable product to a Telco's enterprise customers in its own right, but consider also the offerings that come from LotusLive (particularly LotusLive Notes and LotusLive iNotes). In a market like the Philippines, or many others across Asia, I suspect there is a business to be made by offering Domino capabilities, or even just plain old hosted email, kept separate from the masses of a Telco's standard data customers who all get an email address like email@example.com. Using LotusLive Notes or iNotes would allow a small business to maintain their own virtual email system and keep their own email domain and internal email addresses, but without the headaches of looking after their own servers. If we think about the LotusLive offerings in a Telco - where the LotusLive products are rebranded to suit the telco - they could easily go along with a hosted LMC offering. This would provide secure access by remote or mobile users to their own network and their own virtual email environment.
I had hoped that, for the deployment of Domino in LotusLive Notes, some code changes had been made to make Domino work in a multi-realm environment - alas, no. Consequently, there is a minimum customer deployment size of 1000 users - way bigger than most Telcos would be looking for, and way too big for the Philippine market.
As it stands, LotusLive iNotes is not much better at 500 users, but so far it looks like that is an IBM decision, and if the Telco is to take on the level-one support, then it would be up to the Telco to decide what the minimum customer size should be. Indeed, some legacy Outblaze (from whom we bought assets to deploy LotusLive iNotes) customers have ISP/ASP customers that resell their service to end customers with 5, 10 or 20 users.
Perhaps a diagram is in order to explain how it might all come together. I have refined my diagrams that illustrate the hosted deployment of LMC with LotusLive iNotes (or ANY LotusLive product for that matter - Engage, Connections, Meetings etc). First, the premium offering:
Or, in a slightly less secure deployment (with a break in encryption at the Telco - probably not acceptable for a bank or government department, but fine for many smaller businesses):
As I see it, a Telco offering this type of service could charge a premium for the end to end encryption model while the second model might be a cheaper service.
As an adjunct to the LMC and LotusLive iNotes offering, a Telco might
also offer Lotus Foundations for an on-premise offering to SMBs. I am
not sure if Foundations will interest every Telco, but we already have
some success with Telco sold Foundations in Singapore.
If you are interested in understanding this hosted model for Lotus Mobile Connect (LMC) or LotusLive iNotes, please let us know... It could make for an interesting series of blog posts.
It struck me today that many of our business partners, at least the ones I deal with, don't have the foggiest idea what IBM offers them in terms of online resources, assistance (paid or free), publicly available demonstrations etc. As I was preparing a presentation on the subject for the Global and Medium System Integrator Telecom training that we ran in Malaysia a couple of months ago, I thought it would be worthwhile sharing it with a larger Business Partner audience as well. I've uploaded a file to my collection on LotusLive - it is somewhat biased toward AP in terms of listed resources, maps on slides etc, but could easily be made specific to another geography. This is the basic agenda and flow:
Partner Programme Management
Partnerworld Industry Networks
Virtual Innovation Centre
IBM Developer Relations
Logging a call – types of calls etc
Business Partner Technical Strategy Enablement team
This article describes how you can develop an offline charging application using the Rf interface in IBM® WebSphere® IP Multimedia Subsystem (IMS) Connector V6.2, presents a sample asynchronous offline charging adapter to enable multi-threaded throughput of the Rf client, and discusses performance tuning based on the Rf interface.
WebSphere IP Multimedia Subsystem Connector V6.2 (hereafter referred to
as WebSphere IMS Connector) is an important component of the IBM
Service Delivery Platform for Telecommunications. In the IP Multimedia
Subsystem (IMS) architecture, the WebSphere IMS Connector connects SIP
applications with IMS core elements and provides functions of offline
charging (through the Rf interface), online charging (Ro interface),
and subscriber profile management (Sh interface).
This article describes how to develop an offline charging application,
leveraging the Rf interface in WebSphere IMS Connector V6.2. An
asynchronous offline charging adapter that implements an asynchronous
callback interface to enable multi-threaded throughput of the Rf client
is then presented. This discussion concludes with a look at performance
tuning with WebSphere IMS Connector.
This article assumes a basic understanding of the IP Multimedia Subsystem,
Diameter protocol, Java™ programming, and Web services standards.
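I don't have the WebSphere IMS Connector API in front of me, but the asynchronous-callback pattern the article describes can be sketched generically: charging requests are handed to a worker pool so the caller is never blocked, and a callback receives each answer. None of the class or method names below come from the actual product:

```python
# Generic sketch of an asynchronous offline-charging adapter.
# _send_acr() is a stand-in for building and sending a Diameter Rf
# Accounting-Request; a real adapter would talk to the charging
# collection function there. All names here are invented.

from concurrent.futures import ThreadPoolExecutor

class OfflineChargingAdapter:
    def __init__(self, workers: int = 4):
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def _send_acr(self, session_id: str) -> str:
        # Placeholder for the blocking request/answer exchange.
        return f"ACA for {session_id}"

    def charge(self, session_id: str, on_answer) -> None:
        """Submit an accounting request; invoke on_answer with the answer."""
        future = self._pool.submit(self._send_acr, session_id)
        future.add_done_callback(lambda f: on_answer(f.result()))

    def shutdown(self):
        self._pool.shutdown(wait=True)

answers = []
adapter = OfflineChargingAdapter()
adapter.charge("session-001", answers.append)
adapter.shutdown()
print(answers)  # ['ACA for session-001']
```

The point of the callback is that one client thread can keep many accounting requests in flight, which is where the multi-threaded throughput mentioned above comes from.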
It occurred to me the other day, when talking to a customer about Web 2.0 - the participatory web - that I have a great example at home. I don't need to talk about YouTube, Flickr or Wikipedia - my five-year-old son is a great example of Web 2.0 in action.

My son Max, like many boys his age, is a big fan of the cartoon character Ben 10. At home, he watches Cartoon Network to get his Ben 10 fix. In conjunction with the TV show, Cartoon Network has a number of games available on their web site - for them, it is all about encouraging their viewers to keep watching, and the way they do that is to offer games based on their shows to encourage more intense interest in them. They have recently launched a game creator which allows their web site users to build their own Ben 10 games. Max loves the game creator. It enables him to build his own games using a Shockwave interface, then share them with other users of the Cartoon Network web site.

That is the perfect example of the Web 2.0 concept of the participatory web. Max has OK computer skills for his age, but he still has a long way to go, yet he is able to - and really enjoys - creating his own games. More than that, he loves sharing his creations with others. That sort of participation, sharing and creation is exactly what Web 2.0 is all about.

I am such a proud dad!

If you want to try it out for yourself, his game is available at http://gamecreator.cartoonnetwork.com.au/?id=141081
Some important things to note: the games are rated, and stats are recorded on the number of attempts and time played - clicking on the "share this game" link adds to the share count. Max doesn't (yet) have many friends with email accounts, so that's not a big deal to him, but older kids (and me!) find it a useful capability for sharing original games around.

What can we learn from this as it relates to Telcos? Well, here are some things that I've learnt from my son:
Make it easy - in the telco space the closest Web 2.0 equivalent we have is the Mashup Center. Frankly, I think it is pretty easy to use, so I think we're doing ok on that score
Provide a rating capability - Max loves it when his games get blue balls (the rating visual that Game Creator uses) - likewise, the Widget library in the Mashup Center has this capability
Provide usage stats - I think it's really interesting to see which of Max's games are getting played (let alone being voted for). I am not sure whether the Mashup Center or the Widget Library does this; if not, I think it would be a good addition.
Relate the participation back to your business - for Cartoon Network, that's all about getting web users to watch the show by getting players excited about the characters. For Telcos, the Mashup environment should likewise encourage users to use Telco services and to think of the Telco as more than just their carrier - as their technology partner for the future...
So, I've found a real-world example that I can now use in my Web 2.0 for Telco presentations... :-)
The Industry Business Partner Technical Strategy Enablement (IBPTSE) team is focused on helping IBM's business partners with their technical strategy with respect to IBM Software - specifically WebSphere software. We focus on three industries:
Telecommunications
Media & Entertainment
Energy & Utilities
I guess you're wondering what to expect from this blog. Over the next few posts, I plan to talk about resources that are available to IBM Business Partners - specifically in our team's focus industries - some of the latest issues we are seeing in those industries, some discussion of IBM software technology, and some thoughts on IBM's Smarter Planet initiatives.
In terms of introductions, our team has people around the world, with coverage of North America, Europe and Asia Pacific. I live in Australia and have team mates in China, Taiwan and India and between us, we cover the whole of Asia Pacific.