Providing a National Broadband Network (NBN) within a country is seen by many governments as a way to help their population and country compete with other countries. I have been involved in three NBN projects: Australia, Singapore and New Zealand. I don't claim to be an expert in all three projects (which are ongoing), but I thought I would share some observations and comparisons between the three projects.
Where Australia and Singapore have both opted to build a new network with (potentially) new companies running it, New Zealand has taken a different path. The Kiwis have decided to split the incumbent (and formerly monopoly) Telecom New Zealand into three semi-separated 'companies' - Retail, Wholesale and Chorus (the network) - but only for the 'regulated products', which for the New Zealand government means 'broadband'. They all still report to a single TNZ CEO. I have not seen any direction in terms of Fibre to the Home or Fibre to the Node; the product has simply been defined as 'broadband'. The really strange thing with this split is that the three business units will continue to operate as they did in the past for other non-regulated products such as voice.
As an aside, the Kiwi government not regulating voice seems an odd decision to me - especially when you compare it to countries like Australia and the USA, where the government has mandated that the Telcos provide equivalent voice services to the entire population. Sure, New Zealand is a much smaller country, but it is not without its own geographic challenges in providing services to all Kiwis.
A key part of the separation is that these three business units are obliged to provide the same level of service to external companies as they provide to Telecom and its other business units. For example, if Vodafone wants to sell a Telecom Wholesale product, then Telecom Wholesale MUST treat Vodafone identically to the way it treats Telecom Retail. Likewise, Chorus must do the same for its customers, which would include ISPs as well as potentially other local Telcos (Vodafone, Telstra Clear and 2Degrees). This equivalence of input seems to me to be an attempt to get to a similar place to Singapore (more on that later). Telecom NZ have already spent tens of millions of NZ$ to this point and they don't have a lot to show for it yet. It seems to me like the Government is trying to get to an NBN state of play by using Telecom's current network and perhaps adding to that as needed. For the Kiwi population, that's not anything flash like Fibre to the Home, but more like Fibre to the Node with a DSL last mile connection. That will obviously limit the sorts of services that could be delivered over that network. When other countries are talking about speeds in excess of 100Mbps to the home, New Zealand will be limited to DSL speeds until the network is extended to a full FTTH deployment (not planned at the moment, as far as I am aware).
Singapore, rather than split up an existing telco (like Singtel or Starhub), have gone to tender for the three layers - Network, Wholesale and Retail. The government (Singapore Ltd) has decided that there should be only one network, run by one company (Nucleus Connect - providing Fibre to the Home), that there would be a maximum of three wholesale companies, and as many retail companies as the market will support. A big difference from New Zealand is that the Singapore government wants the wholesalers to offer a range of value added services - what they refer to as 'sit forward' services that engage the population, rather than 'sit back' services that do not. Retail companies would be free to pick and choose wholesale products from different wholesalers to provide differentiation of services.
Singapore, New Zealand and Australia are vastly different countries - Singapore is only 700 km² in size, Australia is a continent in its own right, and New Zealand is at the smaller end of in between. This is naturally going to have a dramatic effect on each Government's approach to an NBN. Singapore's highly structured approach is typical of the way Singapore does things. Australia's approach is less controlled - due to the nature of the political environment in Australia rather than its size - and New Zealand's approach seems somewhat half-hearted by comparison. I am not sure why the NZ government has not elected to build a new network independent of Telecom NZ's current network.
In Australia, on the other hand, the government has set up the Communications Alliance to manage the NBN and subcontract to the likes of Telstra, Optus and others. The interesting thing with that approach (other than the false start that has already cost the Australian taxpayers AU$30 million), and the thing that sets it apart from Singapore, is that it doesn't seem to have any focus on value added services (unlike Singapore's approach) - it's all about the network. Even the wholesaler plan for Australia is talking about layer 2 protocols (see the Communications Alliance Wiki). All of the documents I have seen from the Communications Alliance are about the network - all very low level stuff.
Of course, these three countries are not the only countries that are going through a NBN project. For example the Philippines had a shot at one a few years ago - the bid was won by ZTE, but then a huge scandal caused the project to be abandoned. It came back a while later as the Government Broadband Network (GBN) but that doesn't really help the average Filipino. It's interesting to see how these projects develop around the world...
Interesting - looks like RIM dodged a bullet in the UAE.
Here is the URL for this news: www.google.com/hostednews/afp/article/ALeqM5iMtJnqeRckjmlWVOoB1KWqtYmbLw?docId=CNG.aec298041bd87d0d6ae2ef88e13bcbcd.6a1
The threatened ban was narrowly averted, and India looks as if it will avoid a ban after all. I wonder if RIM installed (or promised to install) a Network Operations Centre in the UAE (which is what I saw as a possible way of appeasing the authorities) or if they have come up with some other way to give the UAE authorities access to the encrypted traffic.
In the meantime, India has hinted (per my previous post) that they will be going after private VPN traffic in addition to the Blackberry traffic. We'll see where that ends up soon I guess.
Andrew_Larmour 0300000243 Etiquetas:  telecom telco mobile_portal bharti app_store andrew_larmour airtel 2.746 vistas
In just five months, Bharti Airtel's App Store has had over 13 million downloads. What a terrific example of a Telco App Store in action and (presumably) making money for the Telco. This article came across my screen this afternoon, and it ties in with my previous posts about Bharti's App Store and about carriers wanting app stores of their own (something I've seen all over Asia) to try to arrest some of the revenue bleeding to Apple (and to a lesser extent Google, Nokia and RIM) through single brand (phone) app stores.
http://www.telecompaper.com/news/printarticle.aspx?cid=742043 - Thursday 24 June 2010 | 03:29 AM CET, Telecompaper
The article is really brief, barely a footnote, but it does lay out some interesting facts:
Airtel's App Central on a PC
I am sitting here in Singapore reading today's Straits Times, keeping up with the affairs in the region and around the world, when on page 3 (the most important page in a newspaper after the front page) I find an article about the leaked/lost next generation iPhone that Gizmodo reportedly paid US$5,000 for (other online reports that I've read have suggested other amounts, such as US$350 - I'm not sure who is right). The article occupied almost half of page 3... for the next gen iPhone... that seems excessive to me for a non-specialist publication, but I guess it is reflective of the general hype that exists around Apple products. The previous hype was around the next gen MacBooks with faster processors, and prior to that the iPad. I've read articles suggesting that the iPad will revolutionise newspapers and home computing and telcos. I'm not so sure. I think a lot of iPads will be sold worldwide (once the device is released outside of the USA), but I also think a lot of those devices will get a lot of use through a honeymoon period and then sit idle until they are eventually disposed of. I am so sick of the hype around all these Apple products. There are some things that Apple do really well (UI and design) and some they do really poorly (business use support, locking in users). I respect them, but I do not like them.
It reminds me of a great parody that The Onion did a while ago:
Apple Introduces Revolutionary New Laptop With No Keyboard
App Stores Background
I know lots of people are saying that Apple invented the Application Store (App Store) for their iPhone/iPod touch range of devices, but they would be wrong. App stores have been around for years - I have been a customer of Handango since before I joined IBM's Pervasive Computing team, and that team has been gone for over three years now. Handango are an Internet based app store that have supported multiple handheld PDA and phone platforms. Others that I've used in the past include Tucows, although Tucows is more than just mobile applications - they also cover Win32, Linux, Mac etc. as well. The big things that Apple did differently from Handango and their Internet brethren were:
Of course, Apple's device competitors are trying to catch the same wave that Apple have been riding and deploy their own application store equivalents. We've seen efforts from Google, Nokia, Palm and Research In Motion (RIM - makers of the Blackberry) and, interestingly, all have been somewhat successful - successful at attracting developers, which is key to then attracting users. Here are their app stores:
Personally, I am not a fan of Apple's restrictive market practices and much prefer the more open ecosystem that surrounds the Symbian and Windows Mobile platforms. I have in the past written applications for Palm Garnet (nee PalmOS), Symbian and Windows Mobile for use within a corporate environment - something that is not possible with Apple's licensing policies, which force developers to upload apps so that Apple can approve them and then include them in the App Store catalogue. If I only want to write an application for my customer, I cannot deploy it directly to the customer's iPhones unless they have been jailbroken - the only alternative is for Apple to look at the application, approve it and then sign it. While the others also have the concept of signed and certified applications, you can install unsigned or un-certified applications on the other major platforms if you want (except for Android, which appears to be going down a similar, if less restrictive, path to Apple).
Telcos and App Stores
In the past year, Telcos all around the world have watched Apple's App Store take off and seen their interaction with iPhone subscribers reduced to being the supplier of the pipe to the Internet - way down from the high value position that most carriers aspire to in order to improve ARPU. I've seen requests from many Telcos in that time for Application Store or Widget Store capability. The telcos - understandably - want to raise their profile in the eyes of the subscriber and their worth in the value proposition. I have seen request for proposal documents from telcos in China, Taiwan, Vietnam and the USA, and queries from telcos in Thailand, the Philippines, Singapore, Japan and other countries. App/Widget Stores are certainly one of the topics of the moment for Telcos.
The key differentiators that a Telco has that separates it from Apple's App Store are:
In fact, IBM has won and has (partially, at this stage) implemented an app store in Vietnam. Because of the Telecom environment in Vietnam, this App Store is not actually within a telco, but is instead at an external company*. The app store was implemented with a combination of WebSphere Portal (to provide the user interface), WebSphere Commerce (to provide the catalog and sales part of the App Store) and WebSphere Message Broker (for integration requirements). I was involved from the very initial stages of that project.
The company intends to launch a Mobile Commerce and Advertising Platform (MCAP), which is a multi-channel platform enabling its members to do small value electronic transactions (m-commerce and e-commerce). Some of their use cases include:
I don't often get involved in WebSphere Commerce projects (it tends to be a very specialized field), but we do have a number of Telcos who are using WebSphere Commerce - not necessarily in App Stores - and, based on the experience in Vietnam, it would not be a big leap to add that capability to their existing deployments.
The usage of WebSphere Portal provides an easy and extensible user interface primarily targeted at the PC, and with the addition of the Mobile Portal Accelerator (nee WebSphere Everyplace Mobile Portal Enable) to the existing Portal, that user interface can be extended to over 10,000 separate devices, providing subscribers with an optimized experience for their device.
Where does this leave those Telcos who haven't made the leap to their own app store? In my opinion, they still have time to catch the wave, and certainly if they want to avoid the Apple effect and being reduced to a bit pipe provider, then they need to do something to add value in the eyes of the subscriber. Apple's model doesn't help them with that, but perhaps the other device specific app stores won't be so carrier unfriendly. I will see what I can find out on this issue and report back in another post.
Bye for now
* Once that customer has agreed to be a formal reference, I will share additional details in a future post.
If you want some background reading on App Stores, here are a couple of articles I would suggest:
Here is the URL for this bookmark: www.bbc.co.uk/news/world-south-asia-15071086?utm_source=twitterfeed
Why TMF Frameworx?
The TeleManagement Forum (TMF) have defined a set of four frameworks collectively known as Frameworx. The key frameworks that will deliver business value to the CSP are the Information Framework (SID) and the Process Framework (eTOM). Both of these can deliver increased business agility, which will reduce time to market and lower IT costs. In particular, if a CSP is undertaking multiple major IT projects in the near term, TMF Frameworx alignment will ease the pain associated with those major projects.
Without a Service Oriented Architecture (SOA) - the situation many CSPs are in currently - there is no common integration layer and no common way to perform the format transformations through which multiple systems can communicate correctly. A typical illustration of this point to point integration might look like the illustration to the right:
Each of the orange ovals represents a transformation of information so that the two systems can understand each other - each of which must be developed and maintained independently. These transformations will typically be built with a range of different technologies and methods, thus increasing the IT costs of building and maintaining such transformations, not to mention maintaining competency within the IT organisation.
A basic SOA environment introduces the concept of an Enterprise Service Bus (ESB), which provides a common way to integrate systems together and a common way of building transformations between the information models used by multiple systems. The illustration below shows this basic Service Oriented Architecture - note that we still have the same number of transformations to build and maintain, but now they can be built using common methods, tools and skills.
If we now introduce a standard information model such as the SID from the TeleManagement Forum, we can reduce the number of transformations that need to be built and maintained to one per system, as shown in the illustration below. Ensuring that all the traffic across the ESB is SID aligned means that as the CSP changes systems (such as CRM or Billing), the effort required to integrate the new system into the environment is dramatically reduced. That will enable the introduction of new systems faster than could otherwise be achieved. It will also reduce the ongoing IT maintenance costs.
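To put a number on that argument, here is a minimal sketch (in Python, purely illustrative - the function names are mine, not from any TMF material) comparing the transformation count for point to point integration against a canonical model such as the SID:

```python
# Illustrative arithmetic only: with point to point integration, every
# ordered pair of systems needs its own transformation; with a canonical
# information model (e.g. the SID), each system needs just one mapping
# to/from the shared model.

def point_to_point_transforms(n_systems: int) -> int:
    # one transformation per ordered pair of systems
    return n_systems * (n_systems - 1)

def canonical_model_transforms(n_systems: int) -> int:
    # one transformation per system, to/from the canonical model
    return n_systems

for n in (4, 8, 12):
    print(n, point_to_point_transforms(n), canonical_model_transforms(n))
```

With twelve systems (not unusual in a CSP's OSS/BSS estate) that is the difference between maintaining 132 transformations and maintaining 12.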
As I'm sure you're aware, most end to end business processes need to orchestrate multiple systems. If we take the next step and insulate those end to end business processes from the functions that are specific to the various end point systems, using a standard process framework such as eTOM, then business processes can be independent of systems such as CRM, Billing, Provisioning etc. That means that if those systems change in the future (as many CSPs are looking to do), the end to end business processes will not need to change - in fact, the processes will not even be aware that the end system has changed.
When changing (say) the CRM system, you will need to remap the eTOM business services to the specific native services and rebuild a single integration and a single transformation to/from the standard data model (SID). This is a significant reduction in the effort required to introduce new systems into the CSP's environment. Additionally, if the CSP decides to take a phased approach to the migration of the CRM systems (as opposed to a big bang), the eTOM aligned business processes can dynamically select which of the two CRM systems should be used for a particular process instance.
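The dynamic selection idea can be sketched in a few lines. This is a hypothetical illustration (the class names, the cutover rule and the `get_customer` operation are all my own invention, not a real product API): the business process calls an abstract service, and a router decides per instance which CRM actually handles it.

```python
# Hypothetical sketch of phased-migration routing: the eTOM aligned
# process only ever sees the router; which concrete CRM serves each
# instance is decided by a cutover rule the process never knows about.

class LegacyCRM:
    def get_customer(self, cid: str) -> dict:
        return {"id": cid, "source": "legacy"}

class NewCRM:
    def get_customer(self, cid: str) -> dict:
        return {"id": cid, "source": "new"}

class CrmRouter:
    """Routes each request to whichever CRM owns that customer record
    during the phased migration."""
    def __init__(self, legacy, new, migrated_ids):
        self.legacy = legacy
        self.new = new
        self.migrated_ids = migrated_ids  # customers already cut over

    def get_customer(self, cid: str) -> dict:
        target = self.new if cid in self.migrated_ids else self.legacy
        return target.get_customer(cid)

router = CrmRouter(LegacyCRM(), NewCRM(), migrated_ids={"C42"})
print(router.get_customer("C42")["source"])  # served by the new CRM
print(router.get_customer("C07")["source"])  # still on the legacy CRM
```

When the migration completes, only the router's rule changes; the business processes calling it are untouched, which is exactly the insulation the eTOM alignment is meant to buy.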
What this means for the CSP
Putting in place a robust integration and process orchestration environment that is aligned to TMF Frameworx should be the CSP's first priority; this will not only allow the integration and migration efforts of subsequent major projects to be minimised, it will also reduce the time to market for new processes and products that the CSP might offer into the market.
Telekom Slovenia is a perfect example of this. When the Slovenian government forced Mobitel (Slovenia) and Telekom Slovenia to merge, having the alignment with the SID and eTOM within Mobitel allowed the merged organisation to meet the government's deadlines for the specific target KPIs:
When a CSP is undertaking multiple concurrent major IT replacement projects, there are a number of recommendations that IBM would make based on past observations with other CSPs that have also undertaken significant and multiple system replacement projects:
I was going to use this post to talk about the Idea Factory for Telecom, but I noticed this press release this morning about SK Telecom's (South Korea) use of Cloud computing, and I thought I would share what I have seen with Cloud computing in Telcos. The press release follows:
ARMONK, N.Y. - 16 Dec 2009: IBM (NYSE: IBM) today announced that it has successfully built Korea's first cloud computing environment for a private sector company, SK Telecom, the largest telecommunications company in Korea with over 24 million customers. The cloud environment provides developers with the necessary software and hardware to develop applications that will allow SK Telecom to offer up to 20 new services to their customers by the end of 2009, such as sports news feeds and a photo service.
I can't claim to have been closely involved with this deal at SK Telecom, but we have spoken to other Telcos in ASEAN about using Cloud computing in a similar way. Where telcos have a developer ecosystem, a private cloud could be deployed for their developers to deploy and test their applications within. We proposed using the WebSphere Cloudburst appliance to allow developers to self manage and deploy the virtual servers for their applications. The diagram below illustrates what I am talking about:
I guess this could be where I tie in the Idea Factory for Telecom after all. The Idea Factory would be used to support the whole developer ecosystem, while the Cloudburst appliance would be used to support the advanced developers who want to be able to deploy their Java applications within the cloud.
In my view, this is a somewhat obvious use for Cloud within a Telco, and SK Telecom's deployment of Cloud in this manner is proof of that point. The somewhat less obvious use of Cloud within a Telco is the use of Cloud infrastructure for their core SDP and OSS/BSS systems. I could not imagine a Telco being willing to deploy such core systems in a public cloud, but there is a possibility of deploying them in a private cloud.
The team at Bharti Airtel are working to move their SDP infrastructure to a Cloud environment - giving them the flexibility to rapidly scale up and down to suit changing market forces. The other BIG thing that moving to a cloud will change is where the SDP components are deployed. Once the SDP components such as WebSphere Process Server, Telecom Web Services Server, WebSphere Service Registry and Repository and the other components are in a private cloud, it becomes very easy to move to a hosted private cloud or even a public cloud. If we think for a moment about the SDP running in a hosted cloud environment, then it is not a huge leap to host another Telco's SDP in the same hosted cloud. Now we have a hosted environment in which potentially many telcos have their Service Delivery Platform running.
This diagram illustrates the various SDP deployment options, including the Cloud options. What is happening at Bharti is a move from a traditional outsourcing model to a private cloud, then on to potentially a single client hosted private cloud and eventually to a multi client cloud option.
What do you think? Can you think of any other cloud scenarios in a telco?
Taking into account the comments from the internal version of this blog, I have modified the developer ecosystem a bit - using IBM Cloudburst instead of WebSphere Cloudburst would certainly give a Telco a much greater developer platform choice. I've left the view above because that is what we proposed to the ASEAN telco, but in retrospect, the IBM Cloudburst option would have better suited their needs - although IBM Cloudburst has a significant price premium associated with it that WebSphere Cloudburst does not. That said, in a cloud environment (customer hosted private, IBM hosted private, multi or single tenanted) for a Service Delivery Platform, using IBM Cloudburst would seem to me to be the right way to go.
I know this isn't strictly related to my normal industries, but it is applicable for any DW member, so I thought it was valuable enough to share - it might even prove useful in dealing with IBMers. For a number of years now, my email signature has included a link for non-IBMers to contact me via Sametime. That link connects to https://www.ibm.com/collaboration/instantmessaging
This doesn't seem to be well known among IBMers, but I have spoken with a number of partners, ex-IBMers and my wife via this facility in the past. All they need is an ibm.com account, and anyone can sign up for one of those. If you have ever downloaded anything from ibm.com in the past, or signed up to developerWorks, then you will already have one (which is the case for most partners and IBMers). The Sametime client that the ibm.com site launches is the (old) Sametime Connect 3.1 Java client. It looks like this:
NB. In the buddylist - alarmour @ au.ibm.com is my internal Sametime community id (which is the same as my email address) and alarmour @ optusnet.com.au is my ibm.com id.
Despite its age, and now being superseded by Sametime 6.5, Sametime 7, Sametime 7.5 and Sametime 8, it still works! As an example, check out the short conversation I had with my other personality!
In my normal Sametime client, my external id comes in as alarmour @ optusnet.com.au.ibm.ext (my ibm.com id with "ibm.ext" appended) - I can add this external id to my buddylist so that I can see when my external self is logged on. In fact, I can add the external community to my standard Sametime setup and log in from there as well. If you know the name of the IBMer that you want to add to your buddylist, but don't know their email address, you can get it from the ibm.com web site through the employee search facility.
I am not sure what is going on with the status of my ibm.com id not showing up as online (in the screen dump above) - I do see when my wife is logged on, and some others that regularly log in too (although they are using a more modern client rather than the old 3.1 Java client). After a while, it did correct itself though.
What I did for my wife was to download the free trial version of the Sametime client (from DW!), then use the config information from the Java client so that Sametime started automatically when her PC starts - that way, she can chat with me regardless of the Sametime client I am using to connect to messaging.ibm.com (I often use the mobile client, which does not support multiple communities). Such a setup also means that she does not need to go to ibm.com in a browser to chat with me - the client is just sitting minimised in the systray on her PC.
Hopefully, this post will spread the word a bit more....
Update: The version of the Sametime Web Client has been updated and the launch URL has changed - I have corrected it above and added a new screen capture of the new client:
Would you like to be able to create bookmarks, blog posts and activity entries with a simple button in your browser? It's easy and works in Firefox and Internet Explorer (possibly others, but I have not tested them). All you need to do is create a new bookmark in your bookmarks bar.
Create a new bookmark and paste in the following in the "Location" field of the bookmark:
When you click on the bookmark, it will pop up a window allowing you to fill in the details (see below)
I've met with Celcom (a Telco in Malaysia) a few times this year, they have a funny sign in the lift well of every floor... So much for all the IBM sales staff that were with me!
Apologies for the quality of the photo - I only had my phone camera with me at the time.
I know I have been lax in posting recently. I've had a lot of work on and I am sorry for not getting to the blog.
That said, over the past few weeks I have been watching what seems to be a snowballing issue of governments spying on their citizens in the name of protection from terrorism. First cab off the rank was India, a couple of years ago, asking Research In Motion (RIM) for access to the data stream for Indian Blackberry users, then asking for the encryption keys. That went quiet until recently (1 Jul 10), when the Indian Government again asked RIM for access to the Blackberry traffic and gave RIM 15 days to comply (see Indian govt gives RIM, Skype 15 days notice, warns Google - Telecompaper). That deadline has passed, and the Indian government yesterday gave RIM a new deadline of 31 Aug 10 (see Indian govt gives 31 August deadline for BlackBerry solution - Telecompaper). In parallel, a number of other nations have asked their CSPs or RIM for access to the data sent via Blackberry devices.
First was the United Arab Emirates (UAE), which will put a ban on Blackberry devices in place, forcing the local Communications Service Providers (CSPs) to halt the service from 11 Oct 10. RIM are meeting with the UAE government, but who knows where that will lead, with the Canadian government stepping in to defend its golden-haired child, RIM. Following the UAE ban, Saudi Arabia, Lebanon and, more recently, Indonesia have all said they will also consider a ban on RIM devices. As an interesting aside, I read an article a week ago (see UAE cellular carrier rolls out spyware as a 3G "update") suggesting that the UAE government sent all Etisalat Blackberry subscribers an email advising them to update their devices with a 'special update' - it turns out that the update was a Trojan which delivered a spyware application to the Blackberry devices to allow the government to monitor all the traffic! (wow!)
Much of the hubbub seems to be around the use of Blackberry Messenger, an Instant Messaging function similar to Lotus Sametime Mobile but hosted by RIM themselves, which allows all Blackberry users (even on different networks and telcos) to chat to each other via their devices.
I guess at this stage it might be helpful to describe how RIM's service works. From a historical point of view, RIM were a pager company. Pagers need a Network Operations Centre (NOC) to act as a single point from which to send all the messages out to the pagers. That's where all the RIM contact centre staff sat and answered phones, typed messages into their internal systems and sent the messages out to the subscribers. RIM had the brilliant idea to make their pagers two way, so that the person being paged could respond, initially with just an acknowledgement that they had read the message, and later with full text messages. That's the point at which the pagers gained QWERTY keyboards. From there, RIM made the leap in functionality to support emails as well as pager messages - after all, they had a full keyboard, a well established NOC based delivery system and a return path via the NOC for messages sent from the device. The only thing that remained was a link into an enterprise email system. That's where the Blackberry Enterprise Server (BES) comes in. The BES sits inside the enterprise network, connects to the Lotus Domino or MS Exchange servers, and acts as a connection to the NOC in Canada (the home of RIM and the location of the RIM NOC). The connection from the device to the NOC is encrypted, and from the NOC to the BES is encrypted. Because of that encryption, there is no way for a government such as India, the UAE, Indonesia, Saudi Arabia or others to intercept the traffic over either of the links (to or from the NOC).
Last time I spoke to someone at RIM about this topology, they told me that RIM did not support putting the BES in the DMZ (where I would have put it) - since then, this situation may have changed.
Blackberry Messenger traffic doesn't go to the BES; instead it goes from the device up to the NOC and then back down to the second Blackberry, which means that non-enterprise subscribers also have access to the messenger service - and this appears to be the crux of what the various governments are concerned about. Anybody, including a terrorist, could buy a Blackberry phone and have access to the encrypted Blackberry Messenger service without needing to connect their device to a BES. That explains why the governments don't seem to be chasing the other VPN vendors (including IBM, with Lotus Mobile Connect) for access to the encrypted traffic between the device and the enterprise VPN server. Importantly, other VPN vendors typically don't have a NOC in the mix (apart from the USA based Good, who have a very similar model to RIM). I guess the governments don't see the threat as coming from the enterprise customers, but rather from the individuals who buy Blackberry devices.
To illustrate how a VPN like Lotus Mobile Connect differs from the Blackberry topology above, have a look at the diagram below:
Lotus Mobile Connect topology
If we extend that thought a little further, a terrorist cell could set themselves up as a pseudo enterprise by deploying a traditional VPN solution in conjunction with an enterprise type instant messaging server, and therefore avoid the ban on Blackberries. The VPN server and IM server could even be located in another country, which would avoid the possibility of the government easily getting a court order to intercept traffic within the enterprise environment (on the other end of the VPN). It will be interesting to see if those governments try to extend the reach of their prying to this type of IM strategy...
Did you know that the vast majority of calls carried out on the 3.5 billion GSM connections in the world today are protected by a 21-year old 64-bit encryption algorithm? You should now, given that the A5/1 privacy algorithm, devised in 1988, has been deciphered by German computer engineer Karsten Nohl and published as a torrent for fellow code cracking enthusiasts and less benevolent forces to exploit.
Here is the URL for this bookmark: http://www.engadget.com/2009/12/29/gsm-call-encryption-code-cracked-published-for-the-whole-world/
Yikes! This harks back to the old days of eavesdroppers on analogue phone signals and all those illegally taped conversations (I recall some between the late Diana, Princess of Wales and her bodyguard, for example). OK, we're probably not quite there yet, but by the sounds of this article, we aren't far from it now...
The other day, I was at a customer Proof of Concept, where the customer asked for 99.9999% availability within the Proof of Concept environment. Let me briefly explain the environment for the Proof of Concept: we were allocated ONE HP ProLiant server with twelve cores and needed to run the following:
Any of you who understand High Availability as I do would say it can't be done in a Proof of Concept, and I agree. Yet our competitor claims they have demonstrated six nines (99.9999% availability) in this Proof of Concept environment, deployed on the customer's hardware; hardware that did not have any redundancy at all. I call shenanigans on the competitor's claims. Unfortunately for us, the customer swallowed the claim hook, line and sinker.
I want to explain why their claim of six nines cannot be substantiated and why the customer should be sceptical as soon as a vendor, any vendor, makes such claims. First, let's think about what 99.9999% availability really means. To quantify that figure: it means 31.5 seconds of unplanned downtime per year! For a start, how could you possibly measure availability for a year over a two-week period? Our POC server VMs didn't crash for the entire time we had them running; does that entitle us to claim 100% availability? No way.
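To put those availability figures in perspective, here is a quick back-of-the-envelope script (just the arithmetic behind the "nines", nothing vendor-specific):

```python
# Unplanned downtime allowed per year for a given availability level.
# 99.9999% availability leaves only 0.0001% of the year for downtime.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60

def downtime_per_year(availability_pct: float) -> float:
    """Seconds of unplanned downtime allowed per year."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("two nines", 99.0), ("three nines", 99.9),
                   ("five nines", 99.999), ("six nines", 99.9999)]:
    print(f"{label}: {downtime_per_year(pct):.1f} seconds/year")
```

Six nines works out to roughly 31.6 seconds a year, which is why measuring it over a two-week POC is meaningless.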
The simple fact is that the Proof of Concept was deployed in a virtualised environment on a single physical machine, without redundant hard drives or power supplies; there is no way we or our competition could possibly claim any level of availability given the unknowns of the environment.
In order to achieve high levels of availability, there can be no single point of failure. That means no failure points in the Network, the Hardware or the Software. For example, that means:
We need to go back to the Telco and impress upon them that six nines availability depends on all of the above factors (and probably some others!), not just on measuring the availability of the software over a short (and non-representative) sample period.
Typically this level of HA is very expensive; indeed, every additional '9' increases the cost exponentially. That is, six nines (99.9999% availability) is exponentially more expensive than five nines (99.999% availability). I found this great diagram that illustrates the cost versus HA level.
This diagram is actually from an IBM Redbook (see http://www.redbooks.ibm.com/redbooks/pdfs/sg247700.pdf ), which has a terrific section on High Availability; it illustrates how there is a compromise point between the level of high availability (aiming for continuous availability) and the cost of the infrastructure to provide that level of availability.
Sizing of software components (and therefore also hardware) is a task that I often need to perform. I spend a lot of time on it, so I figured I would share how I go about it and what factors I take into account. It is an inexact science. While I talk about sizing Telecom Web Services Server (TWSS) for the most part, the same principles apply to any sizing exercise. Please also note that the numbers stated are examples only and should NOT be used to perform any sizing calculations of your own!
Inevitably, when asked to do a sizing, I am forced to make assumptions about traffic predictions. I don't like doing it, but it is rare for customers to have really thought through the impact that their traffic estimates/projections will have on the sizing of a solution or its price.
Assumptions are OK
Just as long as you state them; in fact, they could be viewed as a way to wriggle out of any commitment to the sizing should ANY of the assumptions not hold true once the solution has been deployed. Let me give you an example: I have seen RFPs that asked for 500 Transactions Per Second (TPS) but neglected to state anywhere what a transaction actually is. When talking about a product like Telecom Web Services Server, you might assume that the transactions they're talking about are SMS, but in reality they might be talking about MMS or some custom transaction, a factor which would have a very significant effect on the sizing estimate. Almost always, different transaction types will place different loads on systems.
Similarly, it is rare for a WebSphere Process Server opportunity (at a telco, anyway) to fully define the processes that will be implemented and their volumes once the system goes into production. So, what do I do in these cases? My first step is to try to get the customer to clear up the confusion. I often have multiple attempts at explaining to the customer why we need such specific information (it is to their benefit, after all: they're much more likely to get the right-sized system for their needs). This is not always successful, so my next step is to make assumptions to fill in the holes in the customer's information. I am always careful to write those assumptions down and include them with my sizing estimates. At this point, industry experience and thinking about potential use cases really help to make the assumptions reasonable (or so I think, anyway).
For instance, if a telco has stated that the Parlay X Gateway must be able to service 5,760,000 SMS messages per day, I think it would be reasonable to assume that very close to 100% of those would be sent within a 16-hour window (while people are awake, and to avoid complaints to the telco about SMS messages that come in at all hours of the day; remember we are talking about applications sending SMS messages, nothing to do with user-to-user SMS). That gets us down to 360,000 (5,760,000/16) SMS per hour, or 100 TPS for SendSMS over SMPP. Now, this is fine as an average number, but I guarantee that the distribution of those messages will not be even, so you have to assume that the peak usage will be somewhat higher than 100 TPS, remembering that we have to size for peak load, not average. How much higher will depend on the use cases. If the customer can't give you those, then pick a number that your gut tells you is reasonable; let's say 35% higher than average, which is roughly 135 TPS of SendSMS over SMPP. (I say roughly because if that is your peak load, then as our total for the day is constant at 5,760,000, the load must be lower during the non-busy hours. As we are making up numbers here anyway, I wouldn't worry about this discrepancy, and erring on the side of over-sizing is the safer option anyway, provided you don't overdo the over-sizing.)
Having said that, I prefer not to make lots of assumptions, but stating stringent assumptions can be your friend if the system does not perform as you predicted and the influencing factors are not exactly as you stated in your assumptions. For instance, if you work on the basis of a 35% increase in load during the busy hour and it turns out to be 200%, your sizing is going to be way off; but because you asked the customer for the increase in load during the busy hour and they did not give you the information, you were forced to make an assumption. They know their business better than we ever could, and if they can't or won't predict such an increase during the busy hour, then we cannot reasonably be expected to predict it accurately either. The assumptions you stated will save your (and IBM's) neck. If you didn't explicitly state your assumptions, you would be leaving yourself open to all sorts of consequences, and not good ones at that.
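The busy-window arithmetic above can be sketched in a few lines (the 16-hour window and the 35% peak uplift are, as stated, assumptions you should write down):

```python
# Rough peak-TPS estimate from a daily message volume, using the
# example numbers above: 5,760,000 SMS/day, a 16-hour busy window,
# and an assumed 35% uplift from average to peak.
def peak_tps(messages_per_day: int, busy_window_hours: float = 16,
             peak_uplift: float = 0.35) -> float:
    avg_tps = messages_per_day / (busy_window_hours * 3600)
    return avg_tps * (1 + peak_uplift)

print(peak_tps(5_760_000))  # average is 100 TPS, peak roughly 135 TPS
```

Changing either assumption moves the answer a lot, which is exactly why both must be stated alongside the estimate.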
Understand the hardware that you are deploying to
I saw a sizing estimate the other week that was supposed to handle about 500 TPS of SendSMS over SMPP, but the machine quoted would have been able to handle around 850 TPS; I would call that overdoing the over-sizing. This over-estimate happened because the person who did the sizing failed to take into account the differences between the chosen deployment platform and the platform that the TWSS performance team did their testing on.
If you look at the way our Processor Value Unit (PVU) based software licensing works, you will pretty quickly come to the conclusion that not all processors are equal. PVUs are based on the architecture of the CPU: some value a processor at just 30 PVUs per core (Sparc eight-core CPUs), older Intel CPUs are 50 PVUs per core, while newer ones are 70 PVUs per core. PowerPC chips range from 80 to 120 PVUs per core. Basically, the higher the PVU rating, the more powerful each core on that CPU is.
CPUs that are rated at higher PVUs per core are more likely to be able to handle more load per core than ones with lower PVU ratings. Unfortunately, PVUs are not granular enough to use as the basis for sizing (remember them, though; we will come back to PVUs later in the discussion). To compare the performance of different hardware, I use RPE2 benchmark scores. IBM's Systems and Technology Group (Hardware) keeps track of RPE2 scores for IBM hardware (System p and x at least). Since pricing is done by CPU core, you should also do your sizing estimate by CPU core. For TWSS sizing, I use a spreadsheet from Ivan Heninger (ex WebSphere Software for Telecom Performance Team lead). Ivan's spreadsheet works on the basis of CPU cores for (very old) HS21 blades. Newer servers/CPUs and PowerPC servers are pretty much all faster than the old clunkers Ivan had for his testing. To bridge the gap between the capabilities of his old test environment and modern hardware, I use RPE2 scores. Since Ivan's spreadsheet delivers a number-of-cores-required result, I break the RPE2 score for the server down to an RPE2 score per core, then use the ratio between the RPE2 score per core for the new server and the test servers to figure out how many cores of the new hardware are required to meet the performance requirement.
So now, using the spreadsheet, you key in the TPS required for the various transaction types; let's say 500 TPS of SendSMS over SMPP (just to keep it simple; normally you would also have to take into account the Push WAP and MMS messages, not to mention other transaction types such as Location requests, which are not covered by the spreadsheet). That's 12 x 2 cores for Ivan's old clunkers, but on newer hardware such as newer HS21s with 3 GHz CPUs it's 6 x 2 cores, and on JS12 blades it is also 6 x 2 cores. "Oh, that's easy," you say, "the HS21s are only 50 PVUs each; I'll just go with Linux on HS21 blades and that will be the best bang for the buck for the customer." Well, don't forget that Intel no longer makes dual-core CPUs for servers; they're all quad-core, so in the above example you have to buy 8 x 2 cores rather than the 6 x 2 cores for the JS12/JS22 blades.
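The RPE2-ratio translation and the whole-CPU rounding can be sketched as follows; the RPE2 figures here are made-up illustrative numbers, not real benchmark results:

```python
import math

def cores_on_new_hw(ref_cores: int, ref_rpe2_per_core: float,
                    new_rpe2_per_core: float, cores_per_cpu: int) -> int:
    """Translate a core count from the reference (test) server to new
    hardware using the ratio of per-core RPE2 scores, then round up to
    a whole number of physical CPUs (you can't buy part of a CPU)."""
    raw_cores = ref_cores * ref_rpe2_per_core / new_rpe2_per_core
    whole_cpus = math.ceil(raw_cores / cores_per_cpu)
    return whole_cpus * cores_per_cpu

# Hypothetical numbers: 12 reference cores, new cores roughly twice as
# fast. On quad-core CPUs you must buy 8 cores; on dual-core, only 6.
print(cores_on_new_hw(12, 10.0, 20.0, 4))  # 8
print(cores_on_new_hw(12, 10.0, 20.0, 2))  # 6
```

This is the quad-core effect described above: the raw requirement is 6 cores either way, but Intel's quad-core packaging forces you up to 8.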
Note the x 2 after each number: that is because for TWSS in production deployments, you must separate the TWSS Access Gateway and the TWSS Service Platform. The x 2 indicates that the AG and the SP both require that number of cores.
Let's work that through:
For the fast HS21s, that's 8 cores x 2 components x 50 PVUs x $850 = $680,000 for the TWSS licences alone
Also (and all sales people pricing this should know it), the pre-requisites for TWSS must be licensed separately as well. That means the appropriate number of PVUs of WESB (for the TWSS AG) and the appropriate number of PVUs of WAS ND (for the TWSS SP), as well as the database. It's pretty easy to see how the numbers add up quickly and how much your sizing estimate can affect the price of the solution.
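The licence arithmetic from the worked example, as a tiny helper (the $850-per-PVU price is purely illustrative, taken from the example above):

```python
# PVU-based licence cost: cores per tier, times the number of tiers
# (AG and SP must be licensed separately), times PVUs per core, times
# an illustrative per-PVU price.
def licence_cost(cores_per_tier: int, tiers: int, pvu_per_core: int,
                 price_per_pvu: int) -> int:
    return cores_per_tier * tiers * pvu_per_core * price_per_pvu

# 8 cores each for AG and SP on 50 PVU/core HS21 blades at $850/PVU:
print(licence_cost(8, 2, 50, 850))  # 680000
```

Run the same helper against the JS12/JS22 option (different cores and PVU rating) and you can compare the options side by side.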
For the database, of course we prefer to use DB2, but most telcos will demand Oracle in my experience. For TWSS, the size of the database server is usually not the bottleneck in the environment; what matters is the database writes and reads per second, which equates to disk input/output, to achieve high transaction rates with TWSS. It is VITAL to have an appropriate number of disk spindles in the database disk array to achieve the throughput required; the spreadsheet will give you the number of disk drives that need to be in a striped array to achieve the throughput. For the above 500 TPS example, it is 14.6 disks = 15 disks, since you can't buy part of a disk. While RAID 0 will give you striping and consequently throughput across your disk array, if one drive fails, you're sunk. To get protection as well, you must go with RAID 1+0 (sometimes called RAID 10), which gives you both mirroring (RAID 1) and striping (RAID 0). RAID 1+0 immediately doubles your disk count, so we're up to 30 disks in the array. Our friends at STG should be able to advise on the most suitable disk array unit to go with. In terms of CPU for the database server, as I said, it does not get heavily loaded. The spreadsheet indicates that 70.7% of the reference HS21 (Ivan's clunker) would be suitable, so a single-CPU JS12 or HS21 blade, even an old one, would do.
Every time I do a TWSS sizing, I get asked how much capacity we need in the RAID 1+0 disk array, despite my always asking for the smallest disks possible. Remember, we are going for a (potentially) large array to get throughput, not storage space. In reality, I would expect a single 32 GB HDD could easily handle the size requirements for the database, so space is not an issue at all when we have 30 disks in our array. To answer the question about what size: the smallest possible, since that will also be the cheapest possible, provided it does not compromise the seek and data transfer rates of the drive. In the hypothetical 30-drive array, if we select the smallest drive now available (136 GB), we would have a massive ~2 TB of usable space (15 x 136 GB, since half the drives hold mirror copies), which is way over what we need, but it is the only way we can currently get the throughput needed for the disk I/O on our database server. Exactly the same principles apply regardless of whether DB2 or Oracle is used for the database.
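The spindle-count and capacity arithmetic can be sketched like this (the 14.6-spindle figure and 136 GB drive size are from the worked example above):

```python
import math

def raid10_disks(throughput_spindles: float) -> int:
    """Disks needed in a RAID 1+0 array: round the throughput-driven
    spindle count up to whole disks, then double it for mirroring."""
    return math.ceil(throughput_spindles) * 2

def usable_capacity_gb(total_disks: int, disk_size_gb: int) -> float:
    """RAID 1+0 usable capacity: half the disks hold mirror copies."""
    return (total_disks / 2) * disk_size_gb

disks = raid10_disks(14.6)                    # 14.6 -> 15 -> 30 disks
print(disks, usable_capacity_gb(disks, 136))  # 30 disks, 2040.0 GB
```

Note that the disk count is driven entirely by I/O throughput; the resulting ~2 TB of space is just a side effect.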
Something that I have yet to see empirical data on is how Solid State Drives (SSDs), with their higher I/O rates, will perform in a RAID 1+0 array. In such an I/O-intensive application, I suspect they would allow us to drop the total number of 'disks' in the array quite significantly, but I don't have any real data to back that up or to size an array of SSDs.
We have also considered using an in-memory database such as SolidDB, either as the working database or as a 'cache' in front of DB2, but the problem there is that the level of SQL supported by SolidDB is not the same as that supported by DB2 or Oracle's conventional database. Porting the TWSS code to use SolidDB would require a significant investment in development.
Remember: Sizing estimates must always be multiples of the number of cores per CPU
Make sure you have enough overhead built into your calculations for other processes that may be using CPU cycles on your servers. I assume that the TWSS processes will only ever use a maximum of 50% of the CPU; that leaves the other 50% for other tasks and processes that may be running on the system. As a result, I always state that with my assumptions as well. As an example, I would say:
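That 50% utilisation assumption, combined with the rule that estimates must be multiples of the cores per CPU, can be sketched as follows (the utilisation target and cores-per-CPU defaults are illustrative):

```python
import math

def cores_with_headroom(raw_cores: float, target_utilisation: float = 0.5,
                        cores_per_cpu: int = 2) -> int:
    """Scale a raw core estimate so the product only ever uses
    target_utilisation of each CPU, then round up to whole CPUs."""
    needed = raw_cores / target_utilisation
    return math.ceil(needed / cores_per_cpu) * cores_per_cpu

# A raw estimate of 3 cores becomes 6 cores at a 50% utilisation target:
print(cores_with_headroom(3.0))  # 6
```

As with every other assumption in this post, the 50% figure belongs in the written list of assumptions that accompanies the estimate.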
What about High Availability?
Well, I think that High Availability (HA) is probably a topic in its own right, but it does have a significant effect on the sizing, so I will talk about it in that regard. HA is generally specified in nines; by that I mean that if a customer asks for "five nines", they mean 99.999% availability per annum (that's about 5.2 minutes per year of unplanned downtime). Three nines (99.9% availability) or even two nines (99%) are also sometimes asked for. Often, customers will ask for five nines without realising the significant impact that such a requirement will have on the software, hardware and services sizing. If we start adding additional nodes into clusters for server components, that will not only improve the availability of those components, it will also increase the transaction capacity and the price. The trick is to find the right balance between hardware sizing and HA requirements. For example, say a customer wanted 400 TPS of Transaction X, but also wanted HA. Let's assume a single JS22 (2 x dual-core PowerPC) blade can handle the 400 TPS requirement. We could go with JS22 blades and just add more to the cluster to build up the availability and remove single points of failure. As soon as we do that, we are also increasing the licence cost and the actual capacity of the component, so with three nodes in the cluster we would have 1200 TPS capability and three times the price of what they actually need, just to get HA. If instead we use JS12 blades (1 x dual-core PowerPC), which have half the computing power of a JS22, we could have three JS12s in a cluster, achieve 3 x 200 (say) TPS = 600 TPS, and even with a single node in the cluster down, still achieve the 400 TPS requirement. With JS12s, we meet the performance requirement, we have the same level of HA as we did with 3 x JS22s, but the licensing price will be half that of the JS22-based solution (at 1.5 x the single-JS22 option).
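The cluster arithmetic in that example can be sketched as an N+1 style sizing helper (the TPS-per-blade figures are the assumed numbers from the example above):

```python
import math

def nodes_for_ha(required_tps: float, tps_per_node: float,
                 tolerated_failures: int = 1) -> int:
    """Smallest cluster that still meets required_tps with
    tolerated_failures nodes down (N+1 style sizing)."""
    working_nodes = math.ceil(required_tps / tps_per_node)
    return working_nodes + tolerated_failures

# 400 TPS required, JS12-class blades assumed to handle 200 TPS each:
js12_nodes = nodes_for_ha(400, 200)
print(js12_nodes)  # 3 blades; at half a JS22 licence each, that is
                   # 1.5 JS22-equivalents versus 3.0 for three JS22s
```

Shrinking the node size lets you buy HA without paying for capacity the customer will never use.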
I guess the point I am trying to get across is to think about your options and consider whether there are ways to fiddle with the deployment hardware to get the most appropriate sizing for the customer and their requirements. The whole thing just requires a bit of thinking...
IBMers have a range of tools available to help with sizing: the TWSS spreadsheet I was talking about earlier, various online tools and, of course, Techline. Techline is also available to our IBM Business Partners via the PartnerWorld web site (you need to be a registered Business Partner to access the Techline pages on the PartnerWorld site). For more mainstream products such as WAS, WPS, Portal etc., Techline is the team to help Business Partners; they have questionnaires that they will use to get all the parameters they need to do the sizing. Techline is the initial contact point for sizing support. For more specialised product support (like TWSS and the other WebSphere Software for Telecom products) you may need to contact your local IBM team for help. If you are a partner, feel free to contact me directly for assistance with sizing WsT products.
There is an IBM class for IT Architects called 'Architecting for Performance'. Don't let the title put you off, others can do it too: I did, and I am neither an architect (I am a specialist) nor from IBM Global Services (although everyone else in the class was!). If you get the opportunity to attend, I recommend it: you work through plenty of exercises, and while you don't do any component sizing, you do some whole-system sizing, which is a similar process. I am not sure if the class is open to Business Partners; if it is, I would also encourage architects and specialists from our BPs to do the class. Let me take that on as a task: I will see if it is available externally and report back.
As I glance back over this post, I guess I have been rambling a bit, but hopefully you now understand some of the factors involved in doing a sizing estimate. The introduction of assumptions and other factors beyond your knowledge and control makes sizing inexact; it will always be an estimate and you cannot guarantee its accuracy. That is something you should also state with your assumptions.