Let me start by apologising. I have been very busy over the past few weeks - this week is the first at home in five weeks. That's my best excuse for not posting (other than drawing a blank when it comes to topics). I know I have quite a few people who read my ramblings and I really appreciate it. Unfortunately, my day job keeps getting in the way. The other big news I have (big for me, not so much for almost anyone reading this) is that the Industry Business Partner Technical Enablement team is being disbanded and wound into the IBM channels infrastructure. That means that there will no longer be any industry speciality in the technical enablement that we provide to our business partners - but of course our partners are not being left out in the cold either. The channels team will continue to provide first rate technical enablement and assistance, and IBM will continue to have industry specialists. For Business Partners, it will just be a matter of engaging with (non-channels) IBMers in the industry teams as well as the channels team. I would expect that the channels team will provide the conduit to industry specialists such as me when specialised industry skills are needed.
By now you might be wondering if my team is going away, what is happening to yours truly? Well, I have a position with the ... wait for it.... GMU BPM Tiger Team focused on telecommunications. And I thought IBPTSE was a mouthful. I will continue to be a Telecom specialist architect in this new team. Let me break down those acronyms a bit for you.
- GMU is Growth Market Unit which equates to the whole world less North America, Japan and Western and Northern Europe.
- BPM is Business Process Management and is the layer of intelligence that sits on top of a Service Oriented Architecture; it is the business processes, the workflows, the business rules etc. that form the basis of the business strategy.
- Tiger Team is a small team of the best of the best resources to chase down deals. What is unusual for this tiger team is the focus on industry - most other Tiger teams in IBM are focused on a particular brand such as Rational, Lotus, WebSphere, Tivoli or Information Management.
This move has been in the works for a few weeks for me, but it's now at a stage where I can talk about it. I would like to take this opportunity to thank everyone associated with the IBPTSE team around the world, particularly Jim Toohey, my manager. Over the past three years that I have been in the team, we have accomplished a lot of things that make me feel very proud: multiple deals, partners enabled, partners validated against our SPDE framework for Telcos. As the only team member in Australia, I have always felt part of the team despite the geographical challenges. Thanks guys!
It occurred to me the other day, when talking to a customer about Web 2.0 - the participatory web - that I have a great example at home. I don't need to talk about YouTube, Flickr or Wikipedia - my five year old son is a great example of Web 2.0 in action.
My son Max - like many boys of his age - is a big fan of the cartoon character Ben Ten. At home, he watches Cartoon Network to get his Ben Ten fix. In conjunction with the TV show, Cartoon Network have a number of games available on their web site - for them, it is all about encouraging viewers to keep watching, and they do that by offering games based on their shows to build more intense interest in them. They have recently launched a game creator which allows their web site users to build their own Ben Ten games.
Max (my son) loves the game creator. It enables him to build his own games using a shockwave interface, then share that game with other users of the Cartoon Network web site.
That is the perfect example of the Web 2.0 concept of the participatory web. Max has ok computer skills for his age, and he still has a long way to go, yet he is able to create - and really enjoys creating - his own games. More than that, he loves sharing his creations with others. That sort of participation, sharing and creation is exactly what Web 2.0 is all about. I am such a proud dad!
If you want to try it out for yourself, this game is available at http://gamecreator.cartoonnetwork.com.au/?id=141081
Some important things to note: the games are rated, and stats are recorded on the number of attempts and time played - clicking on the "share this game" link adds to the share count. Max doesn't (yet) have many friends with email accounts, so that's not a big deal to him, but older kids (and me!) find it a useful capability for sharing original games around...
What can we learn from this as it relates to Telcos? Well, here are some things that I've learnt from my son:
- Make it easy - in the telco space, the closest Web 2.0 equivalent we have is the Mashup Center. Frankly, I think it is pretty easy to use, so I think we're doing ok on that score.
- Provide a rating capability - Max loves it when his games get blue balls (the rating visual that Game Creator uses) - likewise, the Widget library in the Mashup Center has this capability
- Provide usage stats - I think it's really interesting to see which of Max's Games are getting played (let alone being voted for). I am not sure if Mashup Center or the Widget Library does this or not. If not, I think it would be a good addition.
- Relate the participation back to your business - for Cartoon Network, that's all about getting web users to watch the show by getting players excited about the characters. For Telcos, the Mashup environment should also encourage users to use Telco services and to think of that telco as more than just their carrier - as their technology partner for the future...
So, I've found a real world example that I can now use in my Web 2.0 for Telco presentations... :-)..
I know this isn't strictly related to my normal Industries, but it is applicable for any DW member, so I thought it was valuable enough to share and might even prove useful in dealing with IBMers. For a number of years now, my email signature has included a link for non-IBMers to contact me via Sametime. That link connects to https://www.ibm.com/collaboration/instantmessaging
This doesn't seem to be well known among IBMers, but I have spoken with a number of partners, ex-IBMers and my wife via this facility in the past. All they need is an ibm.com account, and anyone can sign up for one of those. If you have ever downloaded anything from ibm.com in the past, or signed up to developerWorks, then you will already have one (which is the case for most partners and IBMers). The Sametime client that the ibm.com site launches is the (old) Sametime Connect 3.1 Java client. It looks like this:
NB. In the buddy list, alarmour @ au.ibm.com is my internal Sametime community id (which is the same as my email address) and alarmour @ optusnet.com.au is my ibm.com id.
Despite its age, and having now been superseded by Sametime 6.5, Sametime 7, Sametime 7.5 and Sametime 8, it still works! As an example, check out the short conversation I had with my other personality!
In my normal Sametime client, my external id comes in as alarmour @ optusnet.com.au.ibm.ext (my ibm.com id prepended to "ibm.ext") - I can add this external id to my buddy list so that I can see when my external self is logged on. In fact, I can add the external community to my standard Sametime setup and log in from there as well. If you know the name of the IBMer that you want to add to your buddy list, but don't know their email address, you can get that from the ibm.com web site through this employee search facility.
I am not sure what is going on with the status of my ibm.com id not showing up as online (in the screen dump above) - I do see when my wife is logged on, and some others that regularly log in too (although they are using a more modern client rather than the old 3.1 Java client). After a while, it did correct itself though.
What I did for my wife was to download the free trial version of the Sametime client (from DW!), then use the config information from the Java client so that Sametime started automatically when her PC starts - that way, she can chat with me regardless of the Sametime client I am using to connect to messaging.ibm.com (I often use the mobile client, which does not support multiple communities). Such a setup also means that she does not need to go to ibm.com in a browser to chat with me - the client is just sitting minimised in the systray on her PC.
Hopefully, this post will spread the word a bit more....
Update: The version of the Sametime Web Client has been updated and the launch URL has changed - I have corrected it above and added a new screen capture of the new client:
I had a request the other week to create a number of topology diagrams that showed how a Telco might start small and grow their environment to add new capabilities and services. This was specifically for a telco in Vietnam, but I figured it would make sense to generalise the presentation and the images to make it usable for other opportunities. We've had similar requests from other telcos recently as well. The presentation steps through 11 phases, from a pilot/trial environment through to a full blown system. Each slide has speaker notes explaining what is being added at each phase in terms of products and capabilities. This presentation is not meant to make any recommendations on how to evolve from a small system to a more complex and capable one; what it is supposed to do is illustrate one possible evolution... Note that it focuses only on the IBM components, and some other components would also be required for some phases (such as a transcoding engine in the media extension phase).
Below are three of the diagrams - Phase 1, Phase 6 and Phase 11 - along with the speaker notes that go with each phase, to give you a feel for the flow...
Phase 1 - Test Environment
At this first stage, an initial deployment might be considered a proof of concept or a trial - which could become the test and/or ISV environment. The functions that this could offer are:
- Composite applications that bring together functions provided by the network. For instance an application that consumes SMS messaging and integrates the location of the handset into an app.
- WSRR will get them down the path of SOA Governance - it is important to get this in early to ensure that the governance model is maintained and the Telco will not need to rework services that are created at this stage.
- Complex workflows and business processes can be built which include human tasks (such as prototype processes for the production implementation)
Phase 6 - Developer Ecosystem including Web 2.0
Phase 6 introduces the Developer Ecosystem components such as:
- Idea Factory for Telecom - which will help make a dispersed group of developers into a community. It enables the sharing of ideas and provides a framework for the Telco to manage the evolution of the ideas that are generated within the community. It also provides a rapid prototyping capability via...
- IBM Mashup Center - which allows users to drag widgets onto a workspace and simply wire them together. It is both the development and the runtime architecture. This means that developers don't need deep development skills in order to build new applications.
- WebSphere sMash - which provides a PHP and Groovy scripting environment (both development using the Dojo toolkit and the runtime environment)
This, combined with the web services exposure deployed in phase 4, means that the developer ecosystem can now cater for all levels of developers - those with no skills can use the drag and drop mashup environment, script developers can use sMash, and more advanced developers can use the web services interface. In the backup slides there is an illustration of this.
For advanced developers, the Telco can support a range of IDEs, from Rational and Eclipse (where we have Telecom Toolkits available for free) to other IDEs (such as Microsoft Visual Studio or Sun NetBeans) where the IDE has tools to assist developers with consuming web services. In all of these IDEs, developers will consume the Web Services Description Language (WSDL) file from a UDDI directory in the DMZ. The UDDI directory (part of WPS) is populated from the WSRR internal services repository.
Phase 11 - IMS integration and extension
When the Telco goes down the IP Multimedia Subsystem (IMS) path, the software deployed already has IMS enablement, but at this point we can also add WebSphere Presence Server (PS) and WebSphere XML Document Management Server (XDMS - formerly WebSphere Grouplist Manager), which provide services for the IMS services plane. The core infrastructure that was deployed way back in phases 1 and 2 is critical to the IMS services plane.
It is important to understand that the phases I have split this into are purely arbitrary and are not necessarily what would happen in a real telco. Which function occurs at what point, and in combination with which other functions, is something that must be driven by the business requirements of the telco. The intent is to illustrate how a telco could start small and add function incrementally, building on previous investments. Still want it? Great - feel free to download it from MyDeveloperworks files. Please let me know what you think.
I noticed this article at FierceWireless today:
Verizon, AT&T, others rally on IMS approach to voice over LTE
Verizon Wireless, AT&T and several major international carriers and vendors threw their support behind an IMS-based approach to delivering voice and SMS services over LTE networks. The level of operator support--the approach also is supported by Orange, Telefonica, TeliaSonera and Vodafone--sits in sharp contrast to another approach, called Voice over LTE via Generic Access, or VoLGA, which is supported by T-Mobile International.
Vendors including Alcatel-Lucent, Ericsson, Nokia Siemens Networks, Nokia, Samsung Electronics and Sony Ericsson also voiced their support for the initiative, dubbed One Voice. The companies said they concluded that an IMS-based approach "is the most applicable approach to meeting the consumers' expectations for service quality, reliability and availability when moving from existing circuit-switched telephony services to IP-based LTE services. This approach will also open the path to service convergence, as IMS is able to simultaneously serve broadband wireline and LTE wireless networks."
The companies said that the purpose of the initiative is to create the largest LTE ecosystem possible, and to avoid fragmentation of technical solutions.
Interestingly, both Alcatel-Lucent and Ericsson also support the VoLGA approach, and Nokia Siemens has supported its own solution, called Fast Track Voice, which proposes having mobile switching center servers handle VoIP traffic over LTE networks. VoLGA proponents argue that their approach should be used as an interim solution. All three vendors said they do not see a conflict in supporting the different approaches.
For more: See this release - SMS over LTE
See also this related article on UnStrung
Is it just me, or when you read "VoLGA" do you think "Vulgar"? I think the ALu acronym police need to get out from behind their desks and make an arrest for that one!
I've drawn up a representation of the situation as I see it. NSN going it alone with Fast Track Voice, almost everyone else supporting VoLGA and planning to move to OneVoice.
The thing that I find really interesting is the inference by ALu that they will continue to stand by VoLGA as well as support it as an interim step to OneVoice, while NSN seem to be saying that Fast Track Voice is only an interim step on the path to OneVoice. It's also interesting to note that the VoLGA consortium seems to be mainly Network Equipment Providers (NEPs), while OneVoice is both NEPs and Telcos... I suppose the most appropriate message is "watch this space"...
PS. On rereading this post, I imagine some of you are going 'Huh?' I apologise for the Telco jargon. Let me take a moment to try and explain some of the terms that appear in this post.
- IMS - IP Multimedia Subsystem (not IBM's mainframe database that helped put man on the moon). This is a specification controlled by 3GPP (a Telco standards body) to describe a next generation IP based telephony environment. Most telcos today still run a legacy switched environment based on very specialised protocols such as SS7 and Sigtran. These protocols are not IP based and as such require very specialised (read expensive) skills to work with them. The other thing is that they are not really standardised - each NEP has their own version of the SS7 protocols. IMS promises to bring much cheaper skills and shorter development cycles to the Telco's core platform - something they have not had before. IBM has a number of products that are targeted at telcos' IMS infrastructure (WebSphere IMS Connector, WebSphere Presence Server and WebSphere XML Document Management Server)
- LTE - Long Term Evolution is seen by most NEPs as the next logical evolution step for carriers with GSM networks. That evolutionary path goes something like this: GSM->GPRS->EDGE->UMTS->HSDPA->LTE. LTE promises to deliver high bandwidth mobile connections. The main rival to LTE is WiMax, which you may have heard of before.
- ALu - Alcatel Lucent (a very common abbreviation for the joint company)
I get regular emailed updates from one of the newspapers here in Australia (The Sydney Morning Herald in this case). A few months ago, there was an interesting article about an IT company in South Africa who found it was much faster to transfer data by carrier pigeon than electronically. For reference, it is available here: http://www.smh.com.au/technology/technology-news/carrier-pigeon-faster-than-south-african-isp-20090910-fi9h.html
To quote the article:
Carrier pigeon faster than South African ISP
September 10, 2009 - 10:53AM
A South African information technology company proved it was faster for them to transmit data with a carrier pigeon than to send it using Telkom, the country's leading internet service provider.
Internet speed and connectivity in Africa's largest economy are poor because of a bandwidth shortage. It is also expensive.
Local news agency SAPA reported the 11-month-old pigeon, Winston, took one hour and eight minutes to fly the 80 km from Unlimited IT's offices near Pietermaritzburg to the coastal city of Durban with a data card strapped to his leg.
Including downloading, the transfer took two hours, six minutes and 57 seconds -- the time it took for only four percent of the data to be transferred using a Telkom line.
Okay, it was a bit of a stunt. I am sure if I posted a 32Gb SD card to Sydney (standard mail service - often next day delivery, but sometimes the day after that), it would arrive faster than I could transfer that content from my home office. What does that prove in terms of available bandwidth? Not much really - SD cards can hold an incredible amount of information these days. I have worked with customers in the past who shipped hard drives around when they needed to transfer large amounts of data. Even today, on most networks, it would be faster to courier a 1Tb HDD anywhere in the world than to transfer that much data over the wire.
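To put some rough numbers behind the courier claim, here is a quick back-of-the-envelope sketch. The 2 Mbps line speed and the 24-hour courier time are my own illustrative assumptions, not figures from the article:

```python
def transfer_seconds(size_bytes, bandwidth_bps):
    """Time in seconds to move size_bytes over a link of bandwidth_bps (bits/second)."""
    return size_bytes * 8 / bandwidth_bps

TB = 10**12  # roughly the capacity of a 1Tb HDD

# Pushing 1Tb over a 2 Mbps link takes about 46 days...
wire_days = transfer_seconds(TB, 2_000_000) / 86400

# ...while an overnight courier (24 hours door to door) works out to an
# effective throughput of over 90 Mbps for the same payload
courier_effective_mbps = (TB * 8 / 86400) / 1_000_000
```

The point of the comparison: bandwidth of the medium matters far less than the sheer capacity you can put in a box.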
The article did get me thinking though. I travel quite a bit around Asia and have experienced first hand the speed of networks in many countries. I've seen networks slower than a dial-up modem (in the IBM office in Vietnam) - in fact, I reckon that my mobile phone as a modem over an EDGE connection (3G in Vietnam is very patchy) would have been faster than the IBM office network connection. This is not a unique situation - in many countries I visit, the network speed is faster in my hotel than it is at the local IBM office.
How does this affect the way we behave? Let's look at a specific example. Last year, I was doing a lot of work for the Globe Telecom SDP project that we eventually won with NSN in the Philippines. I was using Cattail (an IBM Research project for sharing files - similar functionality to the Lotus Connections Files capability that we now have in MyDeveloperworks) to upload files so that the local IBM Philippines team could get to them, rather than clog up their mail boxes. Smart - or so I thought. With Cattail, you are able to see who is downloading your files - often quite interesting, as it was in this case. I noticed that only one person in the Philippines was downloading the files, despite my notifying about 12 people that they each needed to look at the content. After a while, I asked this one person why no one else was downloading the files from Cattail. He told me that because the network was so slow, most people were unable to even load the Cattail page to begin the download, so he went through the pain for everyone, then emailed the files around the local team! So much for not clogging up their mail files.
I am constantly frustrated by the US-centric assumption that the whole world has the same bandwidth available to them as they do. Even in Australia, I am paying AU$68 per month for 12Gb of traffic - typically around 2 Mbps actual (10Mbps claimed capacity) downstream and 250 kbps actual upstream. By US standards, that must seem slow, but by the standards of developing nations in ASEAN, that's pretty darn good. There is still a huge digital divide between the haves (the US) and the have-nots (developing nations). While some countries will have fibre to the home deployed (or being deployed) over the next few years - Singapore will be done very quickly, I anticipate - I won't have that sort of speed available to me until 2012, the Australian federal government claims (I expect it will be more like 2020 though, as I do not live in the inner suburbs of Melbourne).
So, what point was I trying to make? I am not sure. I am frustrated at my bandwidth sometimes (usually not), but in the countries that I visit, the whole nation must feel frustrated. I often see web page sizes in excess of 500kb - a ridiculously large size, and unusable in most of Asia. Application designers need to be mindful of bandwidth availability if they hope to be successful in Asia. If you have thoughts, please comment...
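To see why a 500kb page hurts, a rough load-time estimate is enough. This sketch ignores latency, compression and caching, and the link speeds are illustrative:

```python
def page_load_seconds(page_kb, link_kbps):
    """Rough seconds to fetch a page of page_kb kilobytes over a link_kbps kilobits/sec link."""
    return page_kb * 8 / link_kbps

# A 500kb page on a dial-up-class 56 kbps link vs a 2 Mbps connection
dialup = page_load_seconds(500, 56)       # over a minute
broadband = page_load_seconds(500, 2000)  # a couple of seconds
```

A page that feels instant on a good connection can take longer than a minute on the links many users actually have.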
PS. The other thing this article reminded me of was RFC 1149 - A Standard for the Transmission of IP Datagrams on Avian Carriers - although I know that carrier pigeon transmission of IP packets (datagrams) would not go anywhere near the throughput achieved by strapping an SD card to the pigeon's leg.
I stumbled across this report this evening. It states:
For the second year in a row, IBM AIX UNIX running on the Power or "P" series servers scored the highest reliability ratings among 15 different server operating system platforms - including Linux, Mac OS X, UNIX and Windows. Those are the results of the ITIC 2009 Global Server Hardware and Server OS Reliability Survey, which polled C-level executives and IT managers at 400 corporations from 20 countries worldwide. The results indicate that the IBM AIX operating system running on Big Blue's Power servers (System p5s) is the clear winner, offering rock solid reliability. The IBM servers running AIX consistently score at least 99.99%, or just 15 minutes of unplanned downtime per server, per annum.
It is very satisfying to know that the platform I have been recommending to our clients (usually AIX on Power Blades (JS12 or JS22)) is the most reliable platform* out there.
*Distributed platforms at least.
I am working with a number of IBM business partners, and I found a need to explain to them how our software licensing works. I found that many of our sales staff don't fully understand it either, so I figured I would post the explanation I wrote for the business partners, to try and explain it so more people "get it". The other thing that struck me in speaking with some partners was that - despite some of them partnering with Oracle more often than they have with us in the past - they had a simplistic view of Oracle's licensing, thinking that it was simply CPU based. Oracle's licensing scheme is similar to our own PVU scheme in weighting different multi-core CPUs differently for licensing purposes.
First - IBM's PVU scheme
The majority of the IBM runtime components are priced per PVU. The Processor Value Unit, or PVU, is an arbitrary notion that IBM came up with to cater for multi-core CPUs and the fact that some platforms offer more processing power per CPU core than others. Different brands of processor core are rated at anywhere from 30 PVUs to 120 PVUs per core.
For example, an Intel single-core CPU is 100 PVUs. Intel multi-core CPUs are considered to be equivalent to 50 PVUs per processor core (or 70 PVUs per core for the newer Intel chips), so a dual core CPU would be 100 or 140 PVUs and a quad core CPU would be 200 or 280 PVUs. Prior to the latest generation of Intel multi-core CPUs, the Intel multi-core architecture was such that a single dual core CPU offered similar processing power to a single core CPU, so to be fair to customers that use Intel multi-core CPUs, IBM rated each core at only 50 PVUs. The latest chips have improved their processing power per core over previous generations, and they are now rated at 70 PVUs per core as a result.
IBM PowerPC chips are more efficient, and therefore the PVU rating is 80 PVUs per core for Power 6 blades, although other PowerPC CPUs are rated at 50, 100 or 120 PVUs per core.
The PVU calculator is available at https://www-112.ibm.com/software/howtobuy/passportadvantage/valueunitcalculator/vucalc.wss
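The PVU arithmetic itself is simple: per-core rating times core count times the per-PVU price. Here is a small sketch using only the example ratings mentioned above - this is not the full official table, so always use the PVU calculator for real quotes:

```python
# Per-core PVU ratings taken from the examples in this post (not the complete IBM table)
PVU_PER_CORE = {
    "intel_single_core": 100,
    "intel_multi_core_older": 50,
    "intel_multi_core_newer": 70,
    "power6_blade": 80,
}

def ibm_pvus(cpu_type, cores):
    """Total PVUs for a machine: per-core rating multiplied by core count."""
    return PVU_PER_CORE[cpu_type] * cores

def ibm_cost(cpu_type, cores, price_per_pvu):
    """Licence cost: total PVUs times the per-PVU price of the product."""
    return ibm_pvus(cpu_type, cores) * price_per_pvu

# A newer Intel quad core at $1/PVU works out to 280 PVUs, i.e. $280
print(ibm_cost("intel_multi_core_newer", 4, 1.0))
```

The same two-line calculation underlies the comparison table further down.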
Now - let's look at how Oracle do it
For multi-core CPUs, Oracle have a similar scheme to IBM. This quote is from Oracle's current price list on their web site -
New reference http://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf
"Processor: shall be defined as all processors where the Oracle programs are installed and/or running. Programs licensed on a processor basis may be accessed by your internal users (including agents and contractors) and by your third party users. The number of required licenses shall be determined by multiplying the total number of cores of the processor by a core processor licensing factor specified on the Oracle Processor Core Factor Table which can be accessed at http://oracle.com/contracts. All cores on all multicore chips for each licensed program are to be aggregated before multiplying by the appropriate core processor licensing factor and all fractions of a number are to be rounded up to the next whole number. When licensing Oracle programs with Standard Edition One or Standard Edition in the product name, a processor is counted equivalent to an occupied socket; however, in the case of multi-chip modules, each chip in the multi-chip module is counted as one occupied socket.."
This basically means that Intel quad core CPUs are priced at twice the price of an Intel single core CPU (a multiplier of 0.50 per core) - exactly the same as IBM's pricing for Intel quad core CPUs.
Likewise, for PowerPC dual core CPUs, they apply a factor of 0.75, since they do not differentiate the processing power of manufacturers other than Intel, AMD or Sun and just apply a generic multiplier of 0.75.
Oracle have introduced a more comprehensive factor table to calculate their per-CPU licensing price (introduced in March this year, I think), in which they added multipliers of 0.5 and 1.0 to the table. Oracle's core factor table is available at http://www.oracle.com/corporate/contracts/library/processor-core-factor-table.pdf
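Oracle's counting rule from the licence text above boils down to: aggregate all the cores, multiply by the core factor, and round any fraction up. A sketch - the factors shown are only the ones discussed in this post, so check Oracle's current table before relying on them:

```python
import math

# Core factors as discussed above - an illustrative subset, not Oracle's full table
CORE_FACTOR = {
    "intel_amd_x86": 0.50,
    "sun_t1_low_ghz": 0.25,
    "power": 1.00,
    "generic_other": 0.75,
}

def oracle_processor_licences(total_cores, factor):
    """Aggregate cores x factor, with fractions rounded UP to a whole licence."""
    return math.ceil(total_cores * factor)

# An Intel quad core needs 2 processor licences;
# a 6-core Sun T1 at 1.0 GHz also needs 2 (6 x 0.25 = 1.5, rounded up)
print(oracle_processor_licences(4, CORE_FACTOR["intel_amd_x86"]),
      oracle_processor_licences(6, CORE_FACTOR["sun_t1_low_ghz"]))
```

The round-up step is where Oracle's scheme diverges slightly from IBM's, since PVUs never involve fractions.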
To illustrate, if the Oracle product license cost is $100 per CPU and the IBM price is $1 per PVU, then the following table illustrates how Oracle and IBM pricing will change depending on the processor that software is deployed on.
Assuming the base software price is $100/CPU (Oracle) or $1 per PVU (IBM)
| CPU Type | Oracle cost calculation: Price x RoundUp(CPU cores x multiplier) | Oracle extended software cost | IBM PVU rating (PVU x CPU cores) | IBM extended cost |
| --- | --- | --- | --- | --- |
| Single core CPU (any) | 100 x 1 | $100.00 | 100 | $100.00 |
| Intel/AMD quad core (older) | 100 x RoundUp(4 x 0.5) = 100 x 2 | $200.00 | 50 x 4 = 200 | $200.00 |
| Intel/AMD quad core (newer) | 100 x RoundUp(4 x 0.5) = 100 x 2 | $200.00 | 70 x 4 = 280 | $280.00 |
| Sun UltraSPARC T1 hexa-core (1.0 or 1.2 GHz) | 100 x RoundUp(6 x 0.25) = 100 x 2 | $200.00 | | |
| Sun UltraSPARC T1 hexa-core (1.4 GHz or higher) | 100 x RoundUp(6 x 0.5) = 100 x 3 | $300.00 | | |
| Sun UltraSPARC T2 hexa-core | 100 x RoundUp(6 x 0.75) = 100 x 5 | $500.00 | | |
| IBM PowerPC dual core POWER6 (520, JS12, JS22 servers) | 100 x RoundUp(2 x 1.0) = 100 x 2 | $200.00 | 80 x 2 = 160 | $160.00 |
| IBM PowerPC dual core POWER6 (550, 560, 570, 575, 595 servers) | 100 x RoundUp(2 x 1.0) = 100 x 2 | $200.00 | | |
| IBM Power5 quad core | 100 x RoundUp(4 x 0.75) = 100 x 3 | $300.00 | | |
This illustrates that both IBM and Oracle understand that not all multi-core CPUs are created equal - some are more like multiple single core CPUs just placed on a single die. It also shows that Oracle and IBM both understand that CPU architectures such as Sun SPARC and Intel/AMD x86 offer less processing power per CPU core than the IBM PowerPC architecture.
Let's dispel the myth that Oracle price per CPU only - their multipliers provide a similar pricing strategy to IBM's PVU based pricing. Sometimes IBM has the price advantage, sometimes Oracle does. Oracle first introduced this type of multi-core licensing back in 2005, although back then the multiplier was set at a generic 0.75 per CPU core for all processor types - regardless of CPU processing power.
Note - as both Oracle and IBM have the right to change their pricing at any time, I can only vouch for the accuracy of this post at the time it was originally posted (Nov09).
Published back in April is a new document on developerWorks, "Develop an offline charging application based on WebSphere IMS Connector", which looks like a very useful document, so I figured it would make sense to bring it to your attention...
To quote the article:
This article describes how you can develop an offline charging application using the Rf interface in IBM® WebSphere® IP Multimedia Subsystem (IMS) Connector V6.2, presents a sample asynchronous offline charging adapter to enable multi-threaded throughput of the Rf client, and discusses performance tuning based on the Rf interface.
WebSphere IP Multimedia Subsystem Connector V6.2 (hereafter referred to as WebSphere IMS Connector) is an important component of the IBM Service Delivery Platform for Telecommunications. In the IP Multimedia Subsystem (IMS) architecture, the WebSphere IMS Connector connects SIP applications with IMS core elements and provides functions of offline charging (through the Rf interface), online charging (Ro interface), and subscriber profile management (Sh interface).
This article describes how to develop an offline charging application, leveraging the Rf interface in WebSphere IMS Connector V6.2. An asynchronous offline charging adapter that implements an asynchronous callback interface to enable multi-threaded throughput of the Rf client is then presented. The discussion concludes with a look at performance tuning with WebSphere IMS Connector.
This article assumes a basic understanding of the IP Multimedia Subsystem, the Diameter protocol, Java™ programming, and Web services standards.
Last week, I was at the TeleManagement Forum's (TMF) Africa Summit event in Johannesburg, South Africa. The main reason for me attending was to finish off my TMF certifications in the process framework (eTOM) - I am Level 3 currently, and if I have passed the exam, I will be Level 4 certified. It was a really tough exam (75% pass mark), so I don't know if I did enough to get over the line.
Regardless, the event was well attended, with 200-230 attendees over the two days of the conference. It was interesting to hear the presenters' thoughts on telco usage within Africa into the future. Many seemed to think that video would drive future traffic for telcos. I am not so sure.
In other markets around the world, video was also projected to drive 3G network adoption, yet this has not happened anywhere. Why do all these people think that Africa will be different?
I see similar usage patterns in parts of Asia, yet video has not taken off there. Skype carries many more voice-only calls than video calls. Apple's Facetime video chat hasn't taken off like Apple predicted. 3G video calls make up a tiny proportion of all calls made.
Personally, I think that voice (despite its declining popularity, relatively speaking, in the developed world) will remain the key application in Africa for the foreseeable future, especially voice over LTE. I also think that social networking (be it Facebook, Friendster, MySpace or some other Africa-specific tool) will drive consumer data (LTE) traffic. Humans are social animals, and I think these sorts of social interactions will apply just as much in the African scenario as they have in others.
The other day, I was at a customer proof of concept, where the customer asked for 99.9999% availability within the Proof of Concept environment. Let me explain briefly the environment for the Proof of Concept - we were allocated ONE HP Proliant server, with twelve cores and needed to run the following:
- IBM BPM Advanced (BPM Adv)
- WebSphere Operational Decision Management (WODM)
- WebSphere Services Registry & Repository(WSRR)
- Oracle DB (not sure what version the customer installed).
Obviously we needed to use VMware to deploy the software, since installing all of the software directly on the one server (while still being able to demonstrate any level of redundancy) would be impossible.
Any of you that understand High Availability as I do would say it can't be done in a Proof of Concept - and I agree, yet our competitor claims to have demonstrated six nines (99.9999% availability) in this Proof of Concept environment - it was deployed on the customer's hardware; hardware that did not have any redundancy at all. I call shenanigans on the competitor's claims. Unfortunately for us, the customer swallowed the claim hook, line and sinker.
I want to explain why their claim of six nines cannot be substantiated and why the customer should be sceptical as soon as a vendor - any vendor - makes such claims. First, let's think about what 99.9999% availability really means. To quantify that figure, it means just 31.5 seconds of unplanned downtime per year! For a start, how could you possibly measure availability for a year over a two-week period? Our POC server VMs didn't crash for the entire time we had them running - does that entitle us to claim 100% availability? No way.
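To make that arithmetic concrete, here is the back-of-the-envelope calculation (a sketch of my own, nothing vendor-specific):

```java
// The arithmetic behind the "31.5 seconds" figure: how much unplanned
// downtime a given availability target actually permits over a full year.
class AvailabilityMath {
    static final double SECONDS_PER_YEAR = 365.25 * 24 * 3600; // 31,557,600 s

    static double downtimeSecondsPerYear(double availability) {
        return SECONDS_PER_YEAR * (1.0 - availability);
    }
}
```

Five nines (0.99999) permits about 316 seconds of downtime a year (a little over five minutes); six nines (0.999999) permits about 31.6 seconds - less time than a single unplanned reboot.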
The simple fact is that the Proof of Concept was deployed in a virtualised environment on a single physical machine - without redundant Hard Drives or power supplies - there is no way we or our competition could possibly claim any level of availability given the unknowns of the environment.
In order to achieve high levels of availability, there can be no single point of failure. That means no failure points in the Network, the Hardware or the Software. For example, that means:
- Multiple redundant Network Interface Cards (NICs)
- RAID 1+0 drive array
- Multiple redundant power supplies
- Multiple redundant network switches
- Multiple redundant network backbones
- Hardened OS
- Minimise unused OS services
- Use Software clustering capabilities (WebSphere n+x clustering *)
- Active automated management of the software and OS
- Database replication / clustering (eg Oracle RAC or DB2 HADR)
- HA on network software elements (eg DNS servers etc)
We need to go back to the Telco and impress upon them that six nines availability depends on all of the above factors (and probably some others!) and is not just a matter of measuring the availability of the software over a short (and non-representative) sample period.
Typically this level of HA is very expensive; indeed, every additional '9' increases the cost exponentially - that is, six nines (99.9999% availability) is exponentially more expensive than five nines (99.999% availability). I found this great diagram that illustrates the cost versus HA level.
This diagram is actually from an IBM Redbook (see http://www.redbooks.ibm.com/redbooks/pdfs/sg247700.pdf), which has a terrific section on high availability - it illustrates how there is a compromise point between the level of high availability (aiming for continuous availability) and the cost of the infrastructure to provide that level of availability.
- n is number of servers needed to handle load requirements
- x is the number of redundant nodes in the cluster - to achieve six 9's, this should be in excess of 2
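One way to see why x matters is a toy probability model (my own illustration, not from the Redbook): treat each node as independently available with probability a, and ask how likely it is that at least n of the n+x nodes are up at once.

```java
// Toy model of n+x clustering: the probability that at least n of the
// (n + x) nodes are up, assuming each node is independently available
// with probability a. Real HA also depends on failover time, shared
// infrastructure, and everything else in the list above, so treat this
// as an upper bound on what software clustering alone can deliver.
class ClusterAvailability {

    static double availability(int n, int x, double a) {
        int total = n + x;
        double p = 0.0;
        for (int up = n; up <= total; up++) {
            p += choose(total, up) * Math.pow(a, up) * Math.pow(1.0 - a, total - up);
        }
        return p;
    }

    // Binomial coefficient, computed iteratively in doubles.
    static double choose(int n, int k) {
        double c = 1.0;
        for (int i = 0; i < k; i++) {
            c = c * (n - i) / (i + 1);
        }
        return c;
    }
}
```

With a = 0.99, availability(2, 2, 0.99) comes out around 0.999996 - even two spare nodes on fairly ordinary servers only gets you near five nines, before you count network, storage, or failover gaps. That is the exponential-cost curve in miniature.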
Further to my last post, it now looks like the WAC is completely dead and buried.
One thing that is creating a lot of chatter at the moment, though, is TelcoML (Telco Markup Language) - there is a lot of discussion about it on the TeleManagement Forum (TMF) community site, and while I don't intend to get into a big discussion about TelcoML, I do want to talk about Telco standards in general.
The Telco standards that seem to take hold are the ones with a strong engineering background - I am thinking of networking standards like SS7, INAP, CAMEL, SigTRAN etc - but the Telco standards focussed on the IT domain (like Parlay, ParlayX, OneAPI, ParlayREST and perhaps TelcoML) seem to struggle to get real penetration. Sure, standards are good - they make it easier and cheaper for Telcos to integrate and introduce new software, and they make it easier for ISVs to build software that can be deployed at any telco. So, why don't they stick?
Why do we see a progression of standards that are well designed and have the collaboration of a core set of telcos around the world (I'm thinking of the WAC here), yet nothing comes of them? If we look at Parlay for example, sure, CORBA is hard, so I get why it didn't take off, but ParlayX with web services is easy - pretty much every IDE in the world can build a SOAP request from the WSDL for that web service - so why didn't it take off? I've spoken to telcos all around the world about ParlayX, but it's rare to find one that is truly committed to the standard. Sure, the RFPs say 'must have ParlayX', but then after they implement the software (Telecom Web Services Server in IBM's case) they either continue to offer their previous in-house developed interfaces for those network services and don't use ParlayX, or they just don't follow through with their plans to expose the services externally - why did we bother? ParlayX stagnated for many years with little real adoption from Telcos. Along came GSMA with OneAPI, with the mantra 'ParlayX web services are still too complicated; let's simplify them and also provide a REST based interface'. No new services, just the same ones as ParlayX, but simplified. Yes, I responded to a lot of Requests For Proposal (RFPs) asking for OneAPI support, but I have not seen one telco that has actually exposed those OneAPI interfaces to 3rd party developers as they originally intended. So now OneAPI doesn't really exist any more and we have ParlayREST as a replacement. Will that get any more uptake? I don't think so.
The TMF Frameworx seem to have more adoption, but they are the exception to the rule.
I am not really sure why Telco standards efforts have such a tough time of it, but I suspect that it comes down to:
- Lack of long term thinking within telcos - there are often too many tactical requirements to be fulfilled, so the long term strategy never gets going (much like governments with four-year terms not being able to get 20-year projects over the line - they're too worried about getting the day to day things patched up and then getting re-elected)
- Senior executives in Telcos who truly don't appreciate the benefits of standardisation - I am not sure if this is because executives come from a non-technical background, or for some other reason.
What to do? I guess I will keep preaching about standards - they are fundamental to IBM's strategy and operations, after all - and keep up with the new ones as they come along. Let's hope that Telcos start to understand why they should be using standards as much as possible; after all, standards will make their lives easier and their operations cheaper.
Here is the URL for this bookmark: gizmodo.com/5857897/this-is-not-a-test-the-emergency-alert-system-is-worthless-without-social-networks
This makes for an interesting comparison to the National Emergency
Warning System (NEWS) that was implemented in Australia last year as a
result of the Black Saturday bushfires. Of particular interest is that the USA has avoided the SMS channel, when in Australia that has been the primary channel - alternatives like TV and radio are seen as less pervasive and thus a lower priority. I don't think that NEWS here in Oz is connected to Twitter, Facebook, Foursquare or any other social networking site either, but that could be an extension to NEWS - the problem is getting everyone to "friend" the NEWS system so that they see updates and warnings!
I was looking at where some of the traffic for this blog comes from this morning. Someone had used Google to search for "ibm sdp cloud", which I am glad to say yielded this blog as the third and fourth results. Above Telco Talk
in the results was a post from 2005 from fellow MyDeveloperworks blogger Bobby Woolf
with his post What is in RAD 6.0
- which is interesting in that the post wasn't about Service Delivery Platforms and the term "SDP" is only mentioned in the comments on the post, yet it rated higher in Google's index than my posts which have been about cloud, SDPs or both! That's another conversation though...
The thing that really caught my attention was a new whitepaper from IBM on Smarter Homes. This has been an ongoing area of interest for me for a few years now. The new whitepaper, "The IBM vision of a smarter home enabled by cloud technology", is interesting - it talks about some of the concepts that I have seen coming over the past few years, but it also introduces the concept of cloud-based services providers as the key enabler outside the home, enabling smarter homes to deliver on their lofty promises. In the introduction of the whitepaper, it states:
A common services delivery platform based on industry standards supports cooperative interconnection and creation of new services. Implementation inside the cloud delivers quick development of services at lower cost, with shorter time to market, facilitating rapid experimentation and improvement. The emergence of cloud computing, Web services and service-oriented architecture (SOA), together with new standards, is the key that will open up the field for the new smarter home services.
The dependence on external networks (from our homes) and external Communications Service Providers presents an opportunity for them to provide much more than just the pipe to the house. This is an area that some Telcos are trying to tap into already. Here in Australia, Telstra
have recently introduced a home based smart device called the T-Hub
which is intended to arrest some of the decline in homes installing or keeping land line phones (in Australia, more and more homes are buying a naked DSL or Hybrid Fibre Coax (HFC) service for Internet and using mobile phones for voice calls, rather than having a home phone service at all). I recently cancelled my Telstra Home Phone service, so I cannot buy one of the T-Hubs, and apparently it won't work with my home phone service via my HFC connection. It is an intriguing idea though. I find myself wondering if Telstra's toe in the Smarter Home pond is too little, too late. For years, Telstra's Innovation Centres (one in Melbourne and one in Sydney) had standing demonstrations of smarter home technology (I think the previous Telstra CEO, Sol Trujillo, closed them down). I even helped to install a Smarter Healthcare demo at the Sydney Telstra Innovation Centre a few years ago (more on that later), and their demos were every bit as good as the demos that IBM has at the Austin (Texas, USA) and La Gaude (France) Telecom Solutions Labs.
Further into the whitepaper, when talking about cloud-based Service Delivery Platforms (p. 10), there is a nice summary of why a Telco would consider a cloud deployment of their SDP:
An SDP in the cloud supports the expansion of the services scope by enabling new services in existing markets and by expanding existing services into new markets with minimum risk. By exposing standard service interfaces in the network, it enables third parties to integrate their services quickly, or to build new services based on the service components provided in the SDP. This creates the opportunity for new business models, for instance, for media distribution and advertising throughout multiple delivery scenarios.
I think this illustrates what all Telcos should be thinking about - the agility needed to compete in today's marketplace. Cloud is one way to enhance that agility, but it also adds elasticity - the ability to grow and shrink as market demands grow and shrink. Sorry for rambling a bit there... some semi-random thoughts kept popping up when talking about Smarter Homes and Telcos. Anyway, I would encourage you to have a read of the whitepaper for yourself. It's available at:
Disclaimer: I own a small number of shares in Telstra Corp.
Since I penned my last post
, I have done some more reading on Facetime and watched Steve Jobs' launch of Facetime
. While I will happily admit that Apple have in fact used some standards within their Facetime technology (Jobs lists H.264 among the standards being used), I am somewhat bemused by the "standards" discussion that most of the media seem to be focusing on with regard to Facetime. Almost everyone that refers to compliance with standards is talking about interoperability with current PC-based video chat capabilities - from the likes of Skype, MS Messenger, GTalk and others. Am I the only one that has noticed the iPhone 4 is not a PC, and is in fact a mobile phone? Why is it that no one else is questioning interoperability with existing video-chat-capable mobile phones?
After thinking on this for a little while, I guess it might be that most of the media coverage about the iPhone 4 is coming from the USA - where it was launched. It's only natural. The problem with the US telecoms market is that it is not representative of the rest of the world - which has had video calling for ages and doesn't really use it. Perhaps it was the overflowing Apple Kool-Aid fountain at the iPhone 4 launch that got the audience clapping when Jobs placed a video call, or perhaps it was just that they had never seen a video call before - I wasn't there so I can't be sure. Right now, the Facetime capability on the iPhone 4 is only for WiFi connections - which makes it pretty limiting. Apparently there is no setup required and no buddy list; you just use the phone number to make a video call - which is the way video calling already works (see the screen dump of my phone to the right and the short video below), but the WiFi limitation on the iPhone 4 will mean that you have to guess when the recipient is WiFi connected. At least with the standard 3GPP video call, the networks are ubiquitous enough to pretty much guarantee that if the recipient is connected to a network, they can receive a video call, or at least a phone call. Jobs didn't explain what happens if the recipient is not WiFi connected - does it just make a voice call instead? I hope so.
If you look at the pixelation and general poor quality of the video call, consider that I am in a UMTS coverage area, not HSPA (the phone would indicate 3.5G if I were), so this is what was available more than seven years ago in Australia, and longer ago in other countries. If I were in a HSDPA coverage area, I would expect the video call to be of higher quality due to the increased bandwidth available.
I recall that in 2003, Hutchison 3 launched their 3G network in Australia with much fanfare. Video calling was a key part of the 3G launch in Australia for all of the telcos. This article from the 14 April 2003 Sydney Morning Herald (one day before the first official 3G network launch in Australia) illustrates what I am talking about. The authors say that the network's "...main feature is that it makes video calling possible via mobile phone." Think about it for a second. That's from more than seven years ago, and Australia was far from the first country to get a 3G network - a lifetime in today's technology evolution. Still the crowds clapped and cheered as Jobs made a video call. If I had been in the audience, I think I would have yawned at that point.
The other interesting thing that I noticed in Jobs' speech was his swipe at the Telcos. He implied that they needed to get their networks in order to support video calls. Evidence from the rest of the world would suggest that is not the case - perhaps it is in the USA, or perhaps he is trying to deflect the blame for not allowing Facetime over 3G connections away from Apple and back to the likes of AT&T, who have copped a lot of flak over their alleged influence on Apple's App Store policies involving applications that could be seen to be competitive with services from AT&T. I am not sure how much stick AT&T deserves on that front, but it's pretty obvious from Jobs' comment that he is not in love with carriers - and certainly, from what I've seen, carriers are not in love with Apple. It might be interesting to see how long the relationship lasts. My guess is that as long as Apple devices continue to be popular, both parties will be forced to share the same bed.
On another related point, I have been searching the Internet to find out which standards body Apple submitted Facetime to for certification - Jobs says in the launch that it would be done "tomorrow" - this could be marketing speak for 'in the future', or it could literally mean the day after he launched the iPhone 4. If anyone knows, please let me know
- I want to have a look into the way Facetime works.
Thanks very much to my colleague Geoff Nicholls for taking the Video Call in the video above.