Modified by AndrewLarmour
Think about it - orchestration is everywhere in a Telco: the Order to Cash process, the Ticket to Resolution process, the service and resource fulfilment processes and even the NFV MANO processes. Orchestration is everywhere...
There is a hierarchy to processes in a Telco - just as the TMF recognises that there is a hierarchy in business services (within the eTOM Process Framework). At the highest level, the Order to Cash process might look like this:
Each task in this swimlane diagram will have multiple sub-processes. If we delve down into the provision resources task for instance, a CSP will need processes that will interrogate the resource catalog and network inventory to determine where in the network that resource can be put and what characteristics need to be set, then tell the resource manager to provision that resource. If it's a physical resource, that may involve allocating a technician to install the physical resource. If it's a virtual resource such as a Virtual Network Function (VNF) then the Network Function Virtualisation (NFV) orchestration engine will need to be told to provision that VNF. If we go one level deeper, the NFV Orchestration engine will need to tell the NFV Manager to provision that VNF and then update the network inventory.
Perhaps the diagram below will help you to understand what I mean:
This diagram is a very simplified hierarchical process model designed to show the layers of process. As you can see, there are many layers of orchestration required in a CSP and as long as the orchestration engine is flexible enough and can handle the integration points with the many systems it needs to interact with, there is no real reason why the same orchestration engine couldn't be used by all levels of process.
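As a rough illustration of the idea, here is a toy sketch (in Python, with made-up process and task names - this is not a real orchestration product's API) of one generic engine executing process definitions at every layer. A sub-process is just another named process in the same engine:

```python
# Toy sketch only: process and task names are illustrative assumptions.
def run(engine, process_name, context):
    """Execute each step of a named process; a step may itself be a process."""
    for step in engine[process_name]:
        if step in engine:                 # the step is a sub-process: recurse
            run(engine, step, context)
        else:                              # the step is a leaf task: record it
            context["executed"].append(step)

# One engine holds the definitions for every layer of the hierarchy,
# from Order to Cash down to NFV provisioning.
engine = {
    "order_to_cash":       ["capture_order", "provision_resources", "bill_customer"],
    "provision_resources": ["query_resource_catalog", "query_inventory", "provision_vnf"],
    "provision_vnf":       ["instruct_nfv_manager", "update_inventory"],
}

ctx = {"executed": []}
run(engine, "order_to_cash", ctx)
```

The point of the sketch is simply that nothing in the engine cares which layer a process belongs to - the business-level and NFV-level definitions live side by side.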
Over the past couple of years, as NFV has risen significantly in popularity and interest, I've seen many players in the market talk about orchestration engines that handle NFV orchestration and nothing else. To me, that seems like a waste. Why put in an orchestration engine that is used only for NFV when you still need orchestration engines for the higher process layers as well? I'd suggest that a common orchestration and common integration capability makes the most sense, delivering:
- High levels of reuse
- Maximising utilisation of software capabilities
- Common Admin and Development skills for all levels of process (be they business focussed or service or resource focussed)
- Common tooling
- Common Integration patterns (enabling developers and management staff to work across all layers of the business)
- Greater Business Agility - able to react to changing business and technical conditions faster
There are a number of integration platforms - typically marketed as Enterprise Service Buses (ESBs) - that can handle integration through Web Services, XML/HTTP, File, CORBA/IIOP and even Socket/RPC connections for those legacy systems that many telcos still have hanging around. An ESB can work well in a MicroServices environment too - so don't think that just because you have an ESB, you're fighting against MicroServices; you are not. MicroServices can make use of the ESB for connectivity to conventional Web Services (SOA) as well as legacy systems.
A common orchestration layer would drive consistency in processes at all layers of a Telco - and there are a number of Business Process Management orchestration engines out there that have the flexibility to work with the integration layer to orchestrate processes from the lowest level (such as within a Network Function Virtualisation (NFV) environment) all the way up to the highest levels of business process. The orchestrations should be defined in a standard language such as Business Process Execution Language (BPEL) or Business Process Model and Notation (BPMN).
To me, it makes no sense to re-invent the wheel and have orchestration engines just for the NFV environment, and different orchestration engines for Service Order Management, Resource Order Management, Customer Order Management, Service Assurance, Billing, Partner/Supplier management etc. - all of these orchestration requirements could be handled by a single orchestration engine. Additionally, this would make disaster recovery simpler, faster and cheaper as well (fewer software components to be restored in a disaster situation).
A link to this blog entry popped up in my LinkedIn feed today which in turn linked to a developerWorks article - Combine business process management and blockchain - which steps you through a use case and allows you to build your own basic BPM & Blockchain demo. Complex processes could save and get data to/from the Blockchain, ensuring that every process in any organisation (within the same company and across company boundaries) is using the most up to date data.
I thought it would be appropriate to paste in a link given my previous post on Blockchain in Telcos. As I think about this topic more, I can see a few more use cases in Telecom. I'll explore them in subsequent posts, but for now, I think it's important that we be pragmatic about this. Re-engineering processes to make good use of blockchain is non-trivial and therefore will have a cost associated with it. Will the advantages in transparency and resilience be worth the cost of making the changes? Speaking of resilience, don't forget the damage that a failure can cause. British Airways' IT system (which I believe is outsourced, but I cannot be sure) was down for the better part of three days - failures like that have the potential to bring down a business. We don't know yet what will happen to BA in the long term, but you certainly don't want the same sort of failure happening to your business.
If, like me, you are hearing 'Blockchain this, blockchain that', it almost seems like blockchain will deliver world peace, solve global hunger and feed your pets for you! We're obviously at the 'peak of inflated expectations' of the Gartner hype cycle.
I saw a tweet yesterday from an ex-colleague at IBM that spoke about using blockchain to combat fraud in a Telco. While I can see that as a possible use case, I was thinking about other opportunities for blockchain.
Perhaps I need to explain blockchain briefly so that those that don't understand it can also understand the Telecom use cases for blockchain. Wikipedia defines it like this:
"A blockchain... is a distributed database that maintains a continuously growing list of records, called blocks, secured from tampering and revision. Each block contains a timestamp and a link to a previous block. By design, blockchains are inherently resistant to modification of the data — once recorded, the data in a block cannot be altered retroactively. Through the use of a peer-to-peer network and a distributed timestamping server, a blockchain database is managed autonomously. Blockchains are "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. The ledger itself can also be programmed to trigger transactions automatically."
So, it's an immutable record of changes to something. I was thinking about that yesterday and there were a number of use cases in Telecom that I could think of that could use blockchain. I'm not suggesting that they should use blockchain or that it's needed, just that they could. These are the Use cases I came up with:
- Fraud prevention : being immutable makes it harder to 'slip one by' the normal accounting checks and balances that any large company has. I suppose the real question is 'exactly which records need to be stored in a blockchain to enable that fraud prevention?' The obvious one is the billing records.
- Billing - maintaining the state of post-paid billing accounts: who is making payments, billing amounts and other billing events (such as rate changes, grace periods etc)
- Tracking changes to the network. At the moment, many of the changes being made in a Telco's network may be made by staff, but increasingly, maintenance and management of the network is being outsourced to external companies, and you want to keep an eye on them to ensure they're doing what they say they're doing. In the new world of Software Defined Networks (SDN) utilising Network Function Virtualisation (NFV) to build and change the network architecture at a rate that we've not seen before, it becomes important for a Telco to be able to track changes to the network to diagnose faults and customer complaints. Over a 24 hour period, a path on a network that supports enterprise customer X may change tens of times - a much higher frequency than would be possible if the network elements were physical.
- Tracking changes to accounts by customers and telco staff - I could imagine a situation where a customer claims that they didn't request a configuration change; a blockchain based record of changes could be used to track back through all the changes in a customer's account to determine what happened and when - potentially enabling a Telco to limit its liability to the customer... or vice versa...
- Tracking purchases - A blockchain record of purchases would allow a CSP to rebuild a customer's liability from base information; provided there was an immutable record of the data records as well...
- xDRs - any type of Data Record (CDRs, EDRs...) could be stored in a blockchain to facilitate rebuilding a client's history and billing records from base data. The problem with using a blockchain to store xDRs is the size requirement. I know that large CSPs in India, for example, produce between five and ten BILLION records per day. It wouldn't take long for that to build up to a very large storage requirement - even if you store only the mediated data records, it's going to be very large. I guess the question is: 'what is the return on investment - is it worthwhile doing?' I can't think of a business case to justify such an investment, but there may be one out there.
- Assurance events - Recording records associated with trouble tickets and problem resolution.
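To put the xDR storage concern in the list above into numbers, here is a back-of-envelope calculation. The 500 bytes per mediated record is my assumption for illustration only; real record sizes vary by CSP and mediation layer:

```python
# Assumed figures: 10 billion xDRs/day (upper end of the Indian CSP example),
# 500 bytes per mediated record (an illustrative assumption, not a measured value).
records_per_day = 10_000_000_000
bytes_per_record = 500

per_day_tb = records_per_day * bytes_per_record / 1e12   # terabytes per day
per_year_pb = per_day_tb * 365 / 1000                    # petabytes per year
print(f"~{per_day_tb:.0f} TB/day, ~{per_year_pb:.2f} PB/year before replication")
```

Even with these rough assumptions that is petabytes per year before any replication across blockchain peers, which is why the return-on-investment question matters so much here.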
I don't for a second think that all of these can be justified in terms of cost/benefit analysis, but I could see blockchain being used in these scenarios.
Do you have any ideas? Please leave a comment below.
I realise I missed the usual business case that blockchain is used for - a financial ledger. Obviously storing a CSP's financial data in a blockchain would work (and make sense) as it would in ANY other enterprise. I really wanted to illustrate the CSP specific use cases for blockchain.
This post is an update to my earlier post which is now sadly mostly incorrect because IBM's web site has been completely restructured and none of the links I provided previously are valid any more.
I know this isn't strictly related to my normal Industries, but it is applicable for anyone who wants to chat with IBMers, so I thought it was valuable enough to share. For a number of years now, my email signature has included a link for non-IBMers to contact me via Sametime. If you're an IBMer reading this, you might consider linking to this post in your email signature to allow your customers and partners to chat with you via Sametime.
Here is a step by step guide to setting it up so that you can chat with IBMers over Sametime/IBM Instant Messaging.
There are a few things you'll need for this to work:
- An ibm.com id - these are free; sign up for an IBMid if you don't already have one
- A Sametime/IBM Instant Messaging compatible client installed on your computer/device. Previously a web client was available, but that link is no longer working, so a 'fat client' install would seem to be the way to go. You can download the latest Sametime client from the Lotus Greenhouse site, which will also require a (free) ID to be created. This is a different ID to the IBMid mentioned above, but just as quick and easy to get. You can use non-IBM clients such as Adium or Pidgin, but those clients will require some 'hacking' to allow them to connect to the IBM Instant Messaging Gateway - if you're keen, please check out this blog post from nomaen that details that configuration. Personally, the IBM client does the job really nicely and is available for Windows, Mac, and Linux (RPM and DEB), so I'd just go that route.
Once you have your client installed, you'll want to set up a server community for the IBM IM Gateway. The details you need are:
- Host Server : extst.ibm.com
- Server Community Port : 80
- Connection : Direct connection using HTTP protocol
See these screen dumps for reference...
Once you login with your IBMid, you'll be presented with the ST client and no one in your buddylist. Sending instant messages to yourself isn't very interesting, and what you really want to do is chat with IBMers - so let's add an IBMer to your buddylist so that you can chat with them...
You will need to know their Internet email address, as you have to manually type it in; you will not be able to search for them. Select the 'Add external person by email address' radio button, then type in their email address and name, and assign a group if you want to group your contacts. If you don't know their email address, you can search here to find it.
Once you click on 'add' a popup will appear telling you that the IBMer will need to approve you to be able to see their status and chat with them through the IM Gateway.
NB. In the buddylist - the au1.ibm.com is my internal Sametime community id (which is the same as my email address) and the optusnet.com.au email address is my ibm.com id.
Once you've added your IBM contacts, you're up and running and the interface should look something like this (below):
A chat session between my two IDs (my IBMid and my internal id) looks like this in both the standalone client (used for my external IBMid) and the embedded client in my IBM Notes client (on Linux)
and the internal view of the same conversation:
You might notice that all the rich text, file, image functions are greyed out - that's because they are not supported by the external IBM gateway so you'll be restricted to plain text in your chats...
This capability is not well known among IBMers, but I have spoken with a number of partners, ex-IBMers and my wife via this facility in the past.
Hopefully, this post will spread the word a bit more....
Why TMF Frameworx?
The TeleManagement Forum (TMF) has defined a set of four frameworks collectively known as Frameworx. The key frameworks that will deliver business value to the CSP are the Information Framework (SID) and the Process Framework (eTOM). Both of these can deliver increased business agility - which will reduce time to market and lower IT costs. In particular, if a CSP is undertaking multiple major IT projects in the near term, TMF Frameworx alignment will ease the pain associated with those major projects.
Without a Services Oriented Architecture (SOA) - the situation many CSPs are in currently - there is no common integration layer and no common way to perform the format transformations through which multiple systems can communicate correctly. A typical illustration of this point to point integration might look like the Illustration to the right:
Each of the orange ovals represents a transformation of information so that the two systems can understand each other - and each must be developed and maintained independently. These transformations will typically be built with a range of different technologies and methods, thus increasing the IT costs of building and maintaining the transformations, not to mention maintaining competency within the IT organisation.
A basic SOA environment introduces the concept of an Enterprise Service Bus which provides a common way to integrate systems together and a common way of building transformation of information model used by multiple systems. The Illustration below shows this basic Services Oriented Architecture - note that we still have the same number of transformations to build and maintain, but now they can be built using a common method, tools and skills.
If we now introduce a standard information model such as the SID from the TeleManagement Forum, we can reduce the number of transformations that need to be built and maintained to one per system, as shown in the Illustration below. Ensuring that all the traffic across the ESB is SID aligned means that as the CSP changes systems (such as CRM or Billing) the effort required to integrate the new system into the environment is dramatically reduced. That will enable the introduction of new systems faster than could otherwise be achieved. It will also reduce the ongoing IT maintenance costs.
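The reduction in transformations can be quantified with a simple, idealised count. Assuming every system needs to exchange data with every other system, point-to-point integration needs a transformation per pair of systems, while a canonical model such as the SID needs only one per system:

```python
# Idealised counts, assuming full connectivity between n systems.
def point_to_point_transforms(n):
    return n * (n - 1) // 2   # one transformation per pair of systems

def canonical_model_transforms(n):
    return n                  # one transformation per system, to/from the SID

for n in (5, 10, 20):
    print(f"{n} systems: {point_to_point_transforms(n)} point-to-point "
          f"vs {canonical_model_transforms(n)} with a canonical model")
```

The gap grows quadratically: at twenty systems, point-to-point integration needs 190 transformations against 20 with a shared information model. Real estates will fall somewhere short of full connectivity, but the shape of the saving is the same.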
As I'm sure you're aware, most end to end business processes need to orchestrate multiple systems. If we take the next step and insulate those end to end business processes from the functions that are specific to the various end point systems using a standard Process Framework such as eTOM, then business process can be independent of systems such as CRM, Billing, Provisioning etc. That means that if those systems change in the future (as many CSPs are looking to do) the end to end business processes will not need to change - in fact the process will not even be aware that the end system has changed.
When changing (say) the CRM system, you will need to remap the eTOM business services to the specific native services and rebuild a single integration and a single transformation to/from the standard data model (SID). This is a significant reduction in the effort required to introduce new systems into the CSP's environment. Additionally, if the CSP decides to take a phased approach to the migration of the CRM systems (as opposed to a big bang), the eTOM aligned business processes can dynamically select which of the two CRM systems should be used for each particular process instance.
What that means for the CSP.
Putting in place a robust integration and process orchestration environment that is aligned to TMF Frameworx should be the CSP's first priority; this will not only allow the subsequent major projects integration and migration efforts to be minimised, it will also reduce the time to market for new processes and product that the CSP might offer into the market.
Telekom Slovenia is a perfect example of this. When the Slovenian government forced Mobitel (Slovenia) and Telekom Slovenia to merge, having the alignment with the SID and eTOM within Mobitel allowed the merged organisation to meet the government's deadlines for the specific target KPIs:
When a CSP is undertaking multiple concurrent major IT replacement projects, there are a number of recommendations that IBM would make based on past observations with other CSPs that have also undertaken significant and multiple system replacement projects:
1. Use TMF Frameworx to minimise integration work (this requires an integration and process orchestration environment, such as the one the ESB/SOA project is building, to be in place).
2. Use the TMF eTOM to build system-independent business processes, so that as those major systems change, end to end business processes do not need to change and can dynamically select the legacy or new system during the migration phases of the system replacement projects.
3. To achieve 1 and 2, the CSP will need to have the SOA and BPM infrastructure in place first - infrastructure that is capable of integrating with ALL of the systems within the CSP (not just limited to (say) CRM or ERP).
4. If you have the luxury of time, don't try to run the projects simultaneously; rather, run them linearly. If this cannot be achieved due to business constraints, limit the concurrent projects to as few systems as possible, and preferably to systems that don't have a lot of interaction with each other.
Operators hoping to engage in widespread deployment of voice over LTE in order to gain spectral efficiencies in their network may face some unhappy customers because one vendor's recent tests showed that VoLTE calls can slash a device's talk-time battery life by half.
For years now, we've known that higher speed mobile networks would mean more power required in handsets to maintain the higher bandwidth connections. I recall it being raised as a concern when UMTS (3G) was being rolled out while GPRS and EDGE were the dominant technologies in the mobile data networks. In fact, while I am travelling, I often switch off my 3G/3.5G network capability and drop back to GPRS and EDGE just to make my battery last through the day. It's interesting that it has now been quantified like this.
When you think about it though, it makes sense. VoLTE (Voice over LTE) is not using a traditional GSM or CDMA circuit, rather it is using a packet data network to encapsulate the voice traffic - so it is voice over a data network. We've known for a long time that data traffic (particularly higher speed data traffic) uses a lot more power than voice traffic. More power equals less talk time from the same charge.
This study is a US based one, so it brings the baggage of CDMA rather than the GSM that the rest of the world uses, but I think there are lessons here for the GSM carriers around the world too. CDMA battery life (in my experience) has been on a par with GSM battery life, so I think it would be reasonable to equate the CDMA battery life in this study with GSM battery life.
I am seeing more and more countries around the world clawing back 2G spectrum for use with Digital TV, LTE or other local requirements. At some point in the future (at least in some markets) the only voice traffic will be VoLTE, and those subscribers will have severely reduced standby and talk time compared to mobile phones of a few years back. Will that lead to a backlash in the community? By that point it may be too late, with the spectrum re-deployed for other uses. Will we end up with VoLTE being the only voice option in some countries while others still have CDMA or GSM voice networks - and will that complicate things for phone manufacturers? Remember the days of so-called 'Global Phones' that had to cater to all the different spectrum bands used around the world. Multi-band phones did become pervasive, but will Global Phones that retain backward compatibility with GSM networks be so popular when the primary channel for mobile phone distribution is still the carriers themselves - carriers that have committed to VoLTE in their own country?
Who knows. I do think that we'll end up with a big group of primarily voice subscribers who aren't going to be happy campers!
Last week, I was at the TeleManagement Forum's (TMF) Africa Summit event in Johannesburg, South Africa. The main reason for me attending was to finish off my TMF certifications (I am Level 3 currently) in the process framework (eTOM) - if I have passed the exam, I will be Level 4 certified. It was a really tough exam (75% pass mark), so I don't know if I did enough to get over the line.
Regardless, the event was well attended with 200-230 attendees for the two days of the conference. It was interesting to hear the presenter's thoughts on telco usage within Africa into the future. Many seemed to think that video would drive future traffic for telcos. I am not so sure.
In other markets around the world, video was also projected to drive 3G network adoption, yet this has not happened anywhere. Why do all these people think that Africa will be different?
I see similar usage patterns in parts of Asia, yet video has not taken off there. Skype carries many more voice-only calls than video calls. Apple's FaceTime video chat hasn't taken off like Apple predicted. 3G video calls make up a tiny proportion of all calls made.
Personally, I think that voice (despite its declining popularity, relatively speaking, in the developed world) will remain the key application - especially voice over LTE - for the foreseeable future in Africa. I also think that social networking (be it Facebook, Friendster, MySpace or some other African specific tool) will drive consumer data (LTE) traffic. Humans are social animals, and I think these sorts of social interactions will apply just as much in the African scenario as they have in others.
The other day, I was at a customer proof of concept, where the customer asked for 99.9999% availability within the Proof of Concept environment. Let me explain briefly the environment for the Proof of Concept - we were allocated ONE HP Proliant server, with twelve cores and needed to run the following:
- IBM BPM Advanced (BPM Adv)
- WebSphere Operational Decision Management (WODM)
- WebSphere Services Registry & Repository(WSRR)
- Oracle DB (not sure what version the customer installed).
Obviously we needed to use VMWare to deploy the software since installing all of the software on the server (and being able to demonstrate any level of redundancy) would be impossible.
Any of you that understand High Availability as I do would say it can't be done in a Proof of Concept - and I agree. Yet our competitor claims they have demonstrated six nines (99.9999% availability) in this Proof of Concept environment - deployed on the customer's hardware; hardware that did not have any redundancy at all. I call shenanigans on the competitor's claims. Unfortunately for us, the customer swallowed the claim hook, line and sinker.
I want to explain why their claim of six nines cannot be substantiated and why the customer should be sceptical as soon as a vendor - any vendor - makes such claims. First, let's think about what 99.9999% availability really means. To quantify that figure: it means 31.5 seconds of unplanned downtime per year! For a start, how could you possibly measure availability for a year over a two-week period? Our POC server VMs didn't crash for the entire time we had them running - does that entitle us to claim 100% availability? No way.
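The arithmetic behind that 31.5 seconds is worth spelling out. A quick calculation of the unplanned-downtime budget for each number of nines:

```python
# Seconds in an average year (365.25 days accounts for leap years).
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_per_year(availability):
    """Unplanned downtime budget (seconds/year) for a given availability fraction."""
    return (1 - availability) * SECONDS_PER_YEAR

for nines, a in [(3, 0.999), (4, 0.9999), (5, 0.99999), (6, 0.999999)]:
    print(f"{nines} nines: {downtime_per_year(a):8.1f} seconds/year")
```

Six nines works out to about 31.6 seconds per year - so a two-week POC on a single non-redundant box simply cannot say anything meaningful about it.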
The simple fact is that the Proof of Concept was deployed in a virtualised environment on a single physical machine - without redundant Hard Drives or power supplies - there is no way we or our competition could possibly claim any level of availability given the unknowns of the environment.
In order to achieve high levels of availability, there can be no single point of failure. That means no failure points in the Network, the Hardware or the Software. For example, that means:
- Multiple redundant Network Interface Connectors
- RAID 1+0 drive array,
- Multiple redundant power supplies,
- Multiple redundant network switches,
- Multiple redundant network backbones
- Hardened OS
- Minimise unused OS services
- Use Software clustering capabilities (WebSphere n+x clustering *)
- Active automated management of the software and OS
- Database replication / clustering (eg Oracle RAC or DB2 HADR)
- HA on network software elements (eg DNS servers etc)
We need to go back to the Telco and impress upon them that six nines availability depends on all of the above factors (and probably some others!) and not just about measuring the availability of the software over a short (and non-representative) sample period.
Typically this level of HA is very expensive; indeed, every additional '9' increases the cost exponentially - that is, six nines (99.9999% availability) is exponentially more expensive than five nines (99.999% availability). I found this great diagram that illustrates the cost versus HA level.
This diagram is actually from an IBM Redbook (see http://www.redbooks.ibm.com/redbooks/pdfs/sg247700.pdf) which has a terrific section on High Availability - it illustrates how there is a compromise point between the level of high availability (aiming for continuous availability) and the cost of the infrastructure to provide that level of availability.
- n is number of servers needed to handle load requirements
- x is the number of redundant nodes in the cluster - to achieve six 9's, this should be in excess of 2
Further to my last post, it now looks like the WAC is completely dead and buried.
One thing that is creating a lot of chatter at the moment, though, is TelcoML (Telco Markup Language) - there is a lot of discussion about it on the TeleManagement Forum (TMF) community site, and while I don't intend to get into a big discussion about TelcoML, I do want to talk about Telco standards in general.
The Telco standards that seem to take hold are the ones with a strong engineering background - I am thinking of networking standards like SS7, INAP, CAMEL, SigTRAN etc - but the Telco standards focussed on the IT domain (like Parlay, ParlayX, OneAPI, ParlayREST and perhaps TelcoML) seem to struggle to get real penetration. Sure, standards are good - they make it easier and cheaper for Telcos to integrate and introduce new software, and they make it easier for ISVs to build software that can be deployed at any telco. So, why don't they stick?
Why do we see a progression of standards that are well designed and have the collaboration of a core set of telcos around the world (I'm thinking of the WAC here), yet nothing comes of them? If we look at Parlay for example, sure, CORBA is hard, so I get why it didn't take off, but ParlayX with web services is easy - pretty much every IDE in the world can build a SOAP request from the WSDL for that web service - so why didn't it take off? I've spoken to telcos all around the world about ParlayX, but it's rare to find one that is truly committed to the standard. Sure, the RFPs say 'must have ParlayX', but after they implement the software (Telecom Web Services Server in IBM's case) they either continue to offer their previous in-house developed interfaces for those network services and don't use ParlayX, or they just don't follow through with their plans to expose the services externally - so why did we bother? ParlayX stagnated for many years with little real adoption from telcos. Along came the GSMA with OneAPI and the mantra 'ParlayX web services are still too complicated; let's simplify them and also provide a REST based interface'. No new services, just the same ones as ParlayX, but simplified. Yes, I responded to a lot of Requests For Proposal (RFP) asking for OneAPI support, but I have not seen one telco that has actually exposed those OneAPI interfaces to 3rd party developers as they originally intended. So now OneAPI doesn't really exist any more and we have ParlayREST as a replacement. Will that get any more uptake? I don't think so.
The TMF Frameworx seem to have more adoption, but they are the exception to the rule.
I am not really sure why Telco standards efforts have such a tough time of it, but I suspect that it comes down to:
- Lack of long term thinking within telcos - there are often too many tactical requirements to be fulfilled, and the long term strategy never gets going (much like governments with four-year terms not being able to get 20-year projects over the line - they're too worried about getting the day to day things patched up and then getting re-elected)
- Senior executives in Telcos that truly don't appreciate the benefits of standardisation - I am not sure if this is because executives come from a non-technical background or some other reason.
What to do? I guess I will keep preaching about standards - they are fundamental to IBM's strategy and operations, after all - and keep up with the new ones as they come along. Let's hope that Telcos start to understand why they should be using standards as much as possible; after all, they will make their lives easier and their operations cheaper.
"Apigee, the API management company that was most recently spotted powering that new “print to Walgreens” feature in half a dozen or so mobile applications, is now acquiring the technology assets of WAC, aka the Wholesale Applications Community. WAC, an alliance of global telecom companies, like AT&T, Verizon, Sprint, Deutsche Telecom, China Mobile, Orange, and others (and pegged by TechCrunch writer Jason Kincaid back in 2010 as “a disaster in the making“) was intent on building a platform that would allow mobile developers to build an application once, then run it on any carrier, OS or device. The group also developed network API technology, which is another key piece to today’s acquisition."
Follow the link to continue reading. techcrunch.com/2012/07/17/wac-whacked-telecom-backed-alliance-merges-into-gsma-assets-acquired-by-api-management-service-apigee/
I think this is a really interesting development. The Wholesale Application Community (WAC) was supposed to give Telcos a way of minimizing the revenue losses to the likes of Apple's App Store and Google Play. IBM's Telecom Solution Lab in France built a demonstration, shown at Mobile World Congress (MWC) in 2011, demonstrating how a Telco's own app store could incorporate applications from the WAC App Store as well as other app stores within their own combined app store. I've demonstrated this a number of times around the world, and the thing that always seemed odd to me is that applications in the WAC App Store could not be native applications (for Android, Blackberry, WinMob or Symbian); rather, they could ONLY be HTML5 based apps. That was always going to limit the number of apps that would be in the WAC App Store, and since the WAC was announced at MWC 2010, the number of apps in the store has never really taken off.
I'm not sure if this is effectively the end of the road for the WAC, or if it's just a stop on their journey. Certainly, the Telcos that I have dealt with that form the core WAC Telco members remain dedicated to the WAC. I guess we'll have to wait and see what happens.
This makes for an interesting comparison to the National Emergency Warning System (NEWS) that was implemented in Australia last year as a result of the Black Saturday bushfires. Of particular interest is that the USA has avoided the SMS channel, whereas in Australia that has been the primary channel - alternatives like TV and radio are seen as less pervasive and thus a lower priority. I don't think that NEWS here in Oz is connected to Twitter, Facebook, Foursquare or any other social networking site either, but that could be an extension to NEWS - the problem is getting everyone to "friend" the NEWS system so that they see updates and warnings!
Here is the URL for this bookmark: gizmodo.com/5857897/this-is-not-a-test-the-emergency-alert-system-is-worthless-without-social-networks
Here is the URL for this bookmark: www.telecomtv.com/comspace_newsDetail.aspx?n=47960
Wow! HP getting out of PCs and abandoning their very recent and very significant investment in Palm - then on top of that, they're looking to...
I can understand HP getting out of the PC business - it's a very competitive marketplace with low margins; after all, that is why IBM sold its PC division to Lenovo. What surprises me is the timing. Only 18 months after buying Palm for US$1.2 Billion, they're cutting their losses and shedding it.
Since I don't live in the US, I can't comment on the marketing push that HP put behind the Pre and the TouchPad, but I've never seen any marketing for them. When your competitor is Apple, the only way to make any dent is to push and push hard. They needed to out-market Apple, and I'm sure I don't need to tell you how difficult and expensive that would be!
Yesterday, IBM launched the latest iteration of the Service Provider Delivery Environment (SPDE), a software framework for Telecom that has been around since 2000. Over the years, it has evolved with changes in market requirements and architecture maturity. The link below is for the launch: http://www-01.ibm.com/software/industry/communications/framework/index.html
The following enhancements are part of the new SPDE 4.0 Framework:
1. CSP Business Function Domains - a clear articulation of “communications service provider business domains” that describe the business functions common to any service provider across the world. These business domains offer us a simpler way to introduce the SPDE capabilities to a LOB audience, as well as to other client and partner constituents that are new to SPDE:
- Customer Management
- Sales & Marketing
- Operations Support
- Subscriber Services
- Corporate Management
- Information Technology
- Network Technology
2. New Capabilities - in the areas of cloud, B2B commerce, enterprise marketing management, business analytics, and service delivery.
3. Introduction of the SPDE Enabled Business Projects - these deliver solutions to address common business and IT needs for the LOB (CIO/CTO/CMO) and represent repeatable solutions and patterns harvested from client engagements.
4. Improved alignment with TeleManagement Forum (TMF) Industry Standards - a clearly defined depiction of the areas of alignment to TMF Frameworx, key industry standards that underpin much of the communications industry investment.
5. Simplified Graphics and Messaging - to improve ease of adoption and consumability by a broader LOB audience.
Built on best practices and patterns from client engagements with CSPs around the world, IBM SPDE 4.0 is the blueprint that enables Smarter Communications by helping deliver value-added services that launch smarter services, drive smarter operations and build smarter networks. IBM is leading a conversation in the marketplace about how our world is becoming smarter, and software is at the very heart of this change. IBM's Industry Frameworks play a critical role in our ability to deliver smarter planet solutions by pulling together deep industry expertise, technology and a dynamic infrastructure from across the company to provide clients with offerings targeted to their industry-specific needs.
Disclaimer. I have 'borrowed' some of the text from an IBM Marketing email about the new SPDE 4.0 framework - so not my words...
I am in Dublin at the moment for TeleManagement World 2011, which has changed locations from Nice, France last year. It looks to be a very interesting conference. I've already done two days of training and now we're beginning the sessions. The keynote session has the Irish Minister for Communications, Mr Rabbitte, who is talking about the challenges that CSPs face all around the world. He is also talking about an innovation programme that the Irish Government have started called 'Exemplar', which is part of their NGN trial network. I'll see if I can get some more info over the next few days...
Steven Shurrock, the new CEO at O2 Ireland, who has been in the role for just six months, is very bullish about the opportunities in Ireland for data services. After Steven, we saw a host of keynote speakers focused on a number of themes; the common threads included:
- Standards compliance - including certification against standards. Particularly with the TMF Frameworx standards
- Horizontal platforms and moving away from silos as the core IT strategy
- SOA is the basis for all of the new IT initiatives
I have recorded video of a number of keynote speakers but, for the time being, those files are very large. Once I have had a chance to transcode them to a smaller size, I'll add them to the blog as well - while not particularly technical, they're very interesting from a Telecom perspective.
OK, I know over the past six months or so, my blog has sat idle. For that I apologise. I could blame workload, personal issues, the amount of travel etc etc, but I am just going to cop it on the chin and say that I am sorry to anybody out there that can be bothered to read my posts. In light of the fresh start, I am going to change the name of the blog from Telco Talk to ...
Well, that's the thing, I haven't decided yet what I should change it to. The content isn't going to change - it will continue to be Telco focused, so I don't want to start a new blog from scratch. I will just rename this one. I just need some inspiration for the new name. Within IBM, our global marketing folks have decreed that we should no longer use the term "Telco" and that instead we should use "Communications Service Provider", or CSP for short. As a result, I was thinking about changing the blog name to "CSP Comms" or "CSP Communiqué". Before I change it, I would like your opinion (if there is anyone out there) or suggestion of a new name.
I'll be watching my blog comments with bated breath, so please comment and suggest names.
Interesting - looks like RIM dodged a bullet in the UAE.
Here is the URL for this news: www.google.com/hostednews/afp/article/ALeqM5iMtJnqeRckjmlWVOoB1KWqtYmbLw?docId=CNG.aec298041bd87d0d6ae2ef88e13bcbcd.6a1
The threatened ban was narrowly averted, and India looks as if it will avoid a ban after all. I wonder if RIM installed (or promised to install) a Network Operations Centre in the UAE (which is what I saw as a possible way of appeasing the authorities) or if they have come up with some other way to give the UAE authorities access to the encrypted traffic.
In the meantime, India has hinted (per my previous post) that they will be going after private VPN traffic in addition to the Blackberry traffic. We'll see where that ends up soon I guess.
I know I have been lax in posting recently. I've had a lot of work on and I am sorry for not getting to the blog.
That said, over the past few weeks, I have been watching what seems to be a snowballing issue of governments spying on their citizens in the name of protection from terrorism. First cab off the rank was India a couple of years ago asking Research In Motion (RIM) for access to the data stream for Indian Blackberry users, then asking for the encryption keys. That went quiet until recently (1Jul10), the Indian Government again asked RIM for access to the Blackberry traffic and gave RIM 15 days to comply (See this post in Indian govt gives RIM, Skype 15 days notice, warns Google - Telecompaper
). That has passed and the Indian government yesterday gave RIM a new deadline of 31Aug10 (See Indian govt gives 31 August deadline for BlackBerry solution - Telecompaper
). In parallel, a number of other nations have asked their CSPs or RIM for access to the data sent via Blackberry devices.
First, was the United Arab Emirates (UAE) who will put a ban on Blackberry devices in place which will force the local Communications Service Providers (CSPs) to halt the service from 11Oct10
. RIM are meeting with the UAE government, but who knows where that will lead, with the Canadian government stepping in to defend its golden-haired child, RIM. Following the UAE ban, Saudi Arabia, Lebanon and more recently Indonesia have all said they will also consider a ban on RIM devices. As an interesting aside, I read an article a week ago (See UAE cellular carrier rolls out spyware as a 3G "update"
) that suggested that the UAE government sent all Etisalat Blackberry subscribers an email advising them to update their devices with a 'special update' - it turns out that the update was just a Trojan which in fact delivered a spyware application to the Blackberry devices to allow the government to monitor all the traffic! (wow!)
Much of the hubbub seems to be around the use of Blackberry Messenger, an Instant Messaging function similar to Lotus Sametime Mobile, but hosted by RIM themselves which allows all Blackberry users (even on different networks and telcos) to chat to each other via their devices.
I guess at this stage, it might be helpful to describe how RIM's service works. From a historical point of view, RIM were a pager company. Pagers need a Network Operations Centre (NOC) to act as a single point from which to send all the messages out to the pagers. That's where all the RIM contact centre staff sat and answered phones, typed messages into their internal systems and sent the messages out to the subscribers. RIM had the brilliant idea to make their pagers two way, so that the person being paged could respond initially with just an acknowledgement that they had read the message, and then later with full text messages. That's the point at which the pagers gained QWERTY keyboards. From there, RIM made the leap in functionality to support emails as well as pager messages; after all, they had a full keyboard now, a well-established NOC based delivery system and a return path via the NOC for messages sent from the device. The only thing that remained was a link into an enterprise email system. That's where the Blackberry Enterprise Server (BES) comes in. The BES sits inside the enterprise network, connects to the Lotus Domino or MS Exchange servers and acts as a connection to the NOC in Canada (the home of RIM and the location of the RIM NOC). The connection from the device to the NOC is encrypted, and from the NOC to the BES is encrypted. Because of that encryption, there is no way for a government such as India, the UAE, Indonesia, Saudi Arabia or others to intercept the traffic over either of the links (to or from the NOC).
Last time I spoke to someone at RIM about this topology, they told me that RIM did not support putting the BES in the DMZ (where I would have put it) - since then, this situation may have changed.
Blackberry Messenger traffic doesn't get to the BES; instead it goes from the device up to the NOC and then back down to the second Blackberry, which means that non-enterprise subscribers also have access to the messenger service. This appears to be the crux of what the various governments are concerned about. Anybody, including a terrorist, could buy a Blackberry phone and have access to the encrypted Blackberry Messenger service without needing to connect their device to a BES, which explains why they don't seem to be chasing after the other VPN vendors (including IBM with Lotus Mobile Connect) to get access to the encrypted traffic between the device and the enterprise VPN server. Importantly, other VPN vendors typically don't have a NOC in the mix (apart from the USA-based Good, who have a very similar model to RIM). I guess the governments don't see the threat coming from the enterprise customers, but rather from the individuals who buy Blackberry devices.
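To make those two paths concrete, here's a tiny sketch of the hops as I understand them - the hop labels are my own, not RIM terminology:

```python
# Illustrative model of the two Blackberry traffic paths described above;
# the hop names are my own labels, not RIM terminology.
EMAIL_PATH = [
    "device",
    "encrypted carrier link",
    "RIM NOC (Canada)",
    "encrypted link",
    "BES (inside enterprise)",
    "Domino / Exchange server",
]
MESSENGER_PATH = [
    "device A",
    "encrypted carrier link",
    "RIM NOC (Canada)",
    "encrypted carrier link",
    "device B",
]

# Messenger traffic never touches an enterprise BES, which is why a consumer
# with no BES still gets end-to-NOC encryption - the governments' concern.
print("BES in messenger path?", any("BES" in hop for hop in MESSENGER_PATH))
```

The key point the sketch makes is that every link to and from the NOC is encrypted, so there is nowhere in-country for an authority to tap the traffic in the clear.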
To illustrate how a VPN like Lotus Mobile Connect differs from the Blackberry topology above, have a look at the diagram below:
Lotus Mobile Connect topology
If we extend that thought a little more, a terrorist cell could set themselves up as a pseudo-enterprise by deploying a traditional VPN solution in conjunction with an enterprise-type instant messaging server and therefore avoid the ban on Blackberries. The VPN server and IM server could even be located in another country, which would avoid the possibility of the government easily getting a court order to intercept traffic within the enterprise environment (on the other end of the VPN). It will be interesting to see if those governments try to extend the reach of their prying to this type of IM strategy...
Since I last posted
about New Zealand's National Broadband project, it has seemed to me to be much more focused on the subscribers and the products they would have available to them (and the retailers that sold them) than on the high-speed backbone network. My impressions may have been tainted by the work I was doing with the Telecom New Zealand Undertaking In Progress (UIP) project that I was involved with - the rather public forced split of Telecom New Zealand's Retail, Wholesale and Network departments to ensure equivalency of input for all retail and wholesale partners for (only) broadband services.
My understanding of the situation has developed somewhat since then, and we can see that New Zealand has a structure similar to what is happening in Australia with the Communications Alliance and the NBN Company. In New Zealand, the companies are a little different. Certainly, we have the NZ Government Ministry of Economic Development
(MED) as one participant, then we have Crown Fibre Holdings
(not much of a web site there!) - set up by the Government to manage the process of selecting the companies to build the National Broadband Network and to manage the government's investment in the NBN. Together with the companies that are bidding for the deal, Crown Fibre Holdings will form Local Fibre Companies (LFCs) which (combined) will match the government's contribution to the NBN. That will mean the total project will cost NZ$3 Billion**, with the LFCs kicking in NZ$1.5B and the NZ government contributing NZ$1.5B. I don't have the full schedule, but from a couple of sources, I have compiled an overview of the progress to date:
- 21 October 2009 - Communications and Information Technology Minister Steven Joyce announced the government's process for selecting private sector co-investment partners.
- 13 November 2009 - Intention to respond due.
- 9 December 2009 - The Ministry and Crown Fibre Holdings released clarifications and amendments
- 14 January 2010 - The Ministry and Crown Fibre Holdings released additional clarifications and amendments with respect to the Invitation to Participate.
- 29 January 2010 - Proposals must be lodged
- 4 February 2010 - Crown Fibre Holdings notify respondents of handover of responsibility for the partner selection process
- August 2010 - Refined Proposals to be re-submitted to the government (See http://www.totaltele.com/view.aspx?C=0&ID=456818 )
- October 2010 - Successful respondents announced/notified.
What I find a bit interesting is that the government are only looking to cover 75% of the population by 2019. For a small country (compared to Australia at least), that seems to me to be a very low target to aim for. If we compare that with Australia's NBN project, their target is 90% coverage at greater than 100Mbps and 10% at greater than 12Mbps (that's 100% coverage!) by 2017. Admittedly, the Australian project has about a year's head start, but it's also a MUCH bigger country with a population nearly five times larger. Let's have a quick look at the comparisons:
[Comparison table not reproduced here: cost per person (US$/person) and cost per area (US$/km²) for the Australian and New Zealand NBN projects]
* 100% coverage is split between greater than 100Mbps (90%) and greater than 12Mbps (10%)
** One Billion is using the short scale definition = 10^9
What do I take from this quick comparison? Let's take a quick look at the numbers. Obviously, Australia is a much bigger country (28.4 times larger) and has a much larger population (5.2 times larger), so it is reasonable (in my opinion) that the cost per potential NBN customer should be higher for Australia (and it is, at 2.2 times higher), but the thing that makes me ponder is the cost per square kilometre: New Zealand's is nearly twice that of Australia's. Given that the New Zealand target is only 75% of the population, which enables them to avoid areas that are physically difficult to provide coverage to (I'm no NZ geologist, but I would imagine the South Island's most mountainous areas would pose significant problems for cablers), I find myself wondering why the NZ network is going to be so expensive. I guess it could be a matter of scale - but I thought the biggest cost was actually laying the cables rather than the back-end systems that every broadband network will need (routers, switches, administration and management systems). Maybe I am missing something - does anyone have any ideas?
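For anyone who wants to play with the arithmetic themselves, here's the back-of-the-envelope calculation. The dollar figures, populations and land areas below are rough illustrative assumptions of mine, not the exact inputs behind the table above:

```python
# Back-of-the-envelope NBN comparison. All input figures are illustrative
# assumptions (approximate 2010 values), not the exact inputs to the table.
def per_capita_and_per_km2(cost_usd, population, area_km2):
    """Return (cost per person, cost per square kilometre)."""
    return cost_usd / population, cost_usd / area_km2

# New Zealand: assumed ~US$2.1B total, ~4.3M people, ~268,000 km2
nz_person, nz_km2 = per_capita_and_per_km2(2.1e9, 4.3e6, 268_000)
# Australia: assumed ~US$33B, ~22M people, ~7,690,000 km2
au_person, au_km2 = per_capita_and_per_km2(33e9, 22e6, 7_690_000)

print(f"NZ: {nz_person:,.0f} US$/person, {nz_km2:,.0f} US$/km2")
print(f"AU: {au_person:,.0f} US$/person, {au_km2:,.0f} US$/km2")
```

Whatever reasonable figures you plug in, the pattern holds: Australia is dearer per person, but New Zealand comes out dearer per square kilometre, which is the oddity I'm pondering above.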
I've just found this quote in Wikipedia
which (I think) is truly revealing when you consider New Zealand's 75% coverage target:
"New Zealand is a predominantly urban country, with 72% of the population living in 16 main urban areas and 53% living in the four largest cities of Auckland, Christchurch, Wellington, and Hamilton." By only extending the NBN to those 16 main urban areas and nowhere else - they've achieved their target!
You wouldn't want to live in country New Zealand and be dependent on a fast network!
I was looking at where some of the traffic for this blog comes from this morning. Someone had used Google to search for "ibm sdp cloud", which I am glad to say yielded this blog as the third and fourth results. Above Telco Talk
in the results was a post from 2005 from fellow MyDeveloperworks blogger Bobby Woolf
with his post What is in RAD 6.0
- which is interesting in that the post wasn't about Service Delivery Platforms and the term "SDP" is only mentioned in the comments on the post, yet it rated higher in Google's index than my posts which have been about cloud, SDPs or both! That's another conversation though...
The thing that really caught my attention was a new whitepaper from IBM on Smarter Homes. This has been an ongoing area of interest for me for a few years now. The new whitepaper, "The IBM vision of a smarter home enabled by cloud technology", is interesting - it talks about some of the concepts that I have seen coming over the past few years, but it also introduces the concept of cloud-based service providers as the key enabler outside the home, enabling smarter homes to deliver on their lofty promises. In the introduction of the whitepaper, it states:
A common services delivery platform based on industry standards supports cooperative interconnection and creation of new services. Implementation inside the cloud delivers quick development of services at lower cost, with shorter time to market, facilitating rapid experimentation and improvement. The emergence of cloud computing, Web services and service-oriented architecture (SOA), together with new standards, is the key that will open up the field for the new smarter home services.
The dependence on external networks (from our homes) and external Communications Service Providers presents an opportunity for them to provide much more than just the pipe to the house. This is an area that some Telcos are trying to tap into already. Here in Australia, Telstra
have recently introduced a home based smart device called the T-Hub
which is intended to arrest some of the decline in homes installing or keeping land line phones (in Australia, more and more homes are buying a naked DSL or Hybrid Fibre Coax (HCF) service for Internet and using mobile phones for voice calls and not having a home phone service at all). I recently cancelled my Telstra Home Phone service, so I cannot buy one of the T-Hubs and apparently it won't work with my home phone service via my HCF connection. It is an intriguing idea though. I find myself wondering if Telstra's toe in the Smarter Home pond is too little too late. For years, in Telstra's Innovation Centres (one in Melbourne and one in Sydney) they had standing demonstrations of smarter home technology (I think the previous Telstra CEO, Sol Tujilllo closed them down). I even helped to install a Smarter Healthcare demo at the Sydney Telstra Innovation Centre a few years ago (more n that later) and their demos were every bit as good as the demos that IBM has at the Austin (Texas, USA) and LaGaude (France) Telecom Solutions Labs.
Further into the whitepaper, when talking about cloud-based Service Delivery Platforms (p. 10), there is a nice summary of why a Telco would consider a cloud deployment of their SDP:
An SDP in the cloud supports the expansion of the services scope by enabling new services in existing markets and by expanding existing services into new markets with minimum risk. By exposing standard service interfaces in the network, it enables third parties to integrate their services quickly, or to build new services based on the service components provided in the SDP. This creates the opportunity for new business models, for instance, for media distribution and advertising throughout multiple delivery scenarios.
I think this illustrates what all Telcos should be thinking about - the agility needed to compete in today's marketplace. Cloud is one way to enhance that agility but also adds elasticity - the ability to grow and shrink as the market demands grow and shrink. Sorry for rambling a bit there... some semi-random thoughts kept popping up when talking about Smarter homes and Telcos. Anyway, I would encourage you to have a read of the whitepaper for yourself. It's available at:
Disclaimer: I own a small number of shares in Telstra Corp.
In just five months, Bharti Airtel's App store has had over 13 Million downloads. What a terrific example of a Telco App Store in action and (presumably) making money for the Telco. This article came across my screen this afternoon and given my previous posts about Bharti's App Store
and carriers wanting to get into them
(something I've seen all over Asia) as they try to arrest some of the revenue bleeding to Apple (and, to a lesser extent, Google, Nokia and RIM) through single-brand (phone) app stores, it seemed worth sharing. http://www.telecompaper.com/news/printarticle.aspx?cid=742043
- Thursday 24 June 2010 | 03:29 AM
The article is really brief, barely a footnote, but it does lay out some interesting facts:
- 13 Million downloads since Feb '10
- Over 71,000 applications available, up from 1,250 at launch
- Support for 780 different devices
- 1.2 downloads per second
I guess having over 200 Million subscribers does help achieve these sorts of numbers. I have a bit of background on Airtel's App Central store and the technology it uses, much of it IBM technology. IBM Portal and Mobile Portal Accelerator are used to drive the interface, which is able to support over 8,000 different devices from iPhones to WebTVs (remember them? They seem to be making a bit of a comeback at the moment) and everything in-between. These screen dumps are from their old mobile site - I will post some new ones if I can get them soon.
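As a quick sanity check on the article's figures, 13 million downloads over roughly five months really does work out to about 1.2 per second (the day count below is my own rough assumption):

```python
# Sanity-check the "1.2 downloads per second" claim.
# 130 days is my rough assumption for early Feb '10 to late June '10.
downloads = 13_000_000
days = 130
per_second = downloads / (days * 24 * 60 * 60)
print(f"{per_second:.1f} downloads/second")
```

So the article's rate is consistent with its own download count, which is always reassuring.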
Airtel's App Central on a PC
Since I penned my last post
, I have done some more reading on Facetime and watched Steve Jobs' launch of Facetime. While I will happily admit that Apple have in fact used some standards within their Facetime technology (Jobs lists H.264, among others, as being used), I am somewhat bemused by the "standards" discussion that most of the media seem to be focusing on with regard to Facetime. Almost everyone that refers to compliance with standards is talking about interoperability with current PC-based video chat capabilities - from the likes of Skype, MS Messenger, GTalk and others. Am I the only one that has noticed the iPhone 4 is not a PC and is in fact a mobile phone? Why is it that no one else is questioning interoperability with existing video-call-capable mobile phones?
After thinking on this for a little while, I guess it might be that most of the media coverage about the iPhone 4 is coming from the USA - where it was launched. It's only natural. The problem with the US telecoms market is that it is not representative of the rest of the world, which has had video calling for ages and doesn't really use it. Perhaps it was the overflowing Apple Kool-Aid fountain at the iPhone 4 launch that got the audience clapping when Jobs placed a video call, or perhaps it was just that they had never seen a video call before - I wasn't there, so I can't be sure. Right now, the Facetime capability on the iPhone 4 is only for WiFi connections, which makes it pretty limiting. Apparently, there is no setup required, no buddy list; you just use the phone number to make a video call - which is the way video calling already works (see the screen dump of my phone to the right and the short video below), but the WiFi limitation on the iPhone 4 will mean that you have to guess when the recipient is WiFi connected. At least with the standard 3GPP video call, the networks are ubiquitous enough to pretty much guarantee that if the recipient is connected to a network, they can receive a video call or at least a phone call. Jobs didn't explain what would happen if the recipient was not WiFi connected - does it just make a voice call instead? I hope so.
If you look at the pixelation and general poor quality of the video call, consider that I am in a UMTS coverage area, not HSPA (the phone would indicate 3.5G if I were), so this is what was available more than seven years ago in Australia, and longer in other countries. If I were in a HSDPA coverage area, I would expect the video call to be of higher quality due to the increased bandwidth available.
I recall that in 2003, Hutchison 3 launched their 3G network in Australia with much fanfare. Video calling was a key part of the 3G launch in Australia for all of the telcos. This article from the 14Apr03 Sydney Morning Herald
(one day before the first official 3G network in Australia) illustrates what I am talking about. The authors say that the network's "...main feature is that it makes video calling possible via mobile phone." Think about it for a second. That's from more than seven years ago, and Australia was far from the first country to get a 3G network. A lifetime in today's technology evolution. Still the crowds clapped and cheered as Jobs made a video call. If I had been in the audience, I think I would have yawned at that point.
The other interesting thing that I noticed in Jobs' speech was his swipe at the Telcos. He implied that they needed to get their networks in order to support video calls. Evidence from the rest of the world would suggest that is not the case - perhaps it is in the USA, or perhaps he is trying to deflect blame for not allowing Facetime over 3G connections away from Apple and back to the likes of AT&T, who have copped a lot of flak over their alleged influence on Apple's App Store policies involving applications that could be seen to be competitive with services from AT&T. I am not sure how much stick AT&T deserve on that front, but it's pretty obvious from Jobs' comment that he is not in love with carriers - and certainly from what I've seen, carriers are not in love with Apple. It might be interesting to see how long the relationship lasts. My guess is that as long as Apple devices continue to be popular, both parties will be forced to share the same bed.
On another related point, I have been searching the Internet to find what standards body Apple submitted Facetime to for certification - Jobs says in the launch that it will be done "tomorrow" - this could be marketing speak for 'in the future' or it could literally mean the day after he launched the iPhone 4. If anyone knows please let me know
- I want to have a look into the way Facetime works.
Thanks very much to my colleague Geoff Nicholls for taking the Video Call in the video above.
Regarding this article: http://www.computerworld.com/s/article/9177819/Jobs_has_lofty_goal_for_iPhone_4_s_FaceTime_video_chat_with_open_standard
I came across this article today - Apple wanting to propose their new Facetime technology for video chat as a standard, now that they finally have a camera on the front of their iPhone 4. I'm now on my second phone with a camera on the front (that's at least four years that my phones have had video chat capabilities), and the capability has not proved to be much more than a curiosity where Telcos have launched it around the world. I recall the first 3G network launch in Australia - for Hutchison's '3' network - video chat was seen as the next big thing, the killer application, yet apart from featuring in some reality shows on TV, very few people used it. I wonder why Steve Jobs thinks this will be any different. At least the video chat capabilities that are already in the market comply with a standard, which means that on my Nokia phone, I can have a video call with someone on a (say) Motorola phone. With Apple's Facetime, it's only iPhone 4 to iPhone 4 (which does not support a 4G network like LTE or WiMax, I hasten to add). If Apple really is worried about standards, as the Computerworld article suggests, then I have to ask why Apple doesn't make their software comply with the existing 3GPP video call standards instead of inventing their own. If Apple were truly concerned about interoperability, that would have been a more sensible path.
According to Wikipedia
, in Q2 2007 there were "...over 131 million UMTS users (and hence potential videophone users), on 134 networks in 59 countries." Today, in 2010, I would feel very confident in doubling those figures given the rate at which UMTS networks (and, more latterly, HSPA networks) have been deployed throughout the world. Of note is that the Chinese 3G standard (TD-SCDMA) also supports the same video call protocol. That protocol (3G-324M - see this article from commdesign.com
for a great explanation of the protocol and its history, from way back in 2003!) has been around for a while and yes, it was developed because the original UMTS networks couldn't support IPv6 or the low-latency connectivity needed to provide a good quality video call over a purely IP infrastructure. But things have changed, with LTE gathering steam all around the world (110 telcos across 48 countries according to 3GPP
) and mobile WiMax being deployed in the USA by Sprint and at a few other locations around the world (See WiMax Forum's April 2010 report
- note that the majority of these WiMax deployments are not for mobile WiMax and as far as I know, Sprint are the first to be actively deploying WiMax enabled mobile phones as opposed to mobile broadband USB modems) so, perhaps it is time to revisit those video calling standards and update them with something that can take advantage of these faster networks. I think that would be a valid thing to do right now. If it were up to me, I would be looking at SIP based solutions and learning from the success that companies like Skype
have had with their video calling (albeit only on PCs and with proprietary technology) - wouldn't it be great if you could video call anyone from any device?
I guess the thing that annoys me most about Apple's arrogance is the way they ignore the prior work in the field. Wouldn't it be better to make Facetime compatible with the hundreds of millions of handsets already deployed rather than introduce yet another incompatible technology and proclaim it as "... going to be a standard"?
My 2c worth...
Yes, I should have posted this a week ago during the TeleManagement World conference - I've been busy since then and the wireless network at the conference was not available in most of the session rooms - at least that is my excuse.
At Impact 2010 in Las Vegas we heard from the IBM Business Partner (GBM) on the ICE project. At TMW 2010, it was ICE themselves presenting on ICE and their journey down the TeleManagement Forum Frameworx path. Ricardo Mata, Sub-Director of the VertICE (OSS) Project at ICE (see his picture to the right), presented on ICE's projects to move Costa Rica's legacy carrier to a position that will allow it to remain competitive when the government opens up the market to international competitors such as Telefonica, who are champing at the bit to get in there. ICE used IBM's middleware to integrate components from a range of vendors and align them to the TeleManagement Forum's Frameworx (the new name for eTOM, TAM and SID). In terms of what ICE wanted to achieve with this project (they call it PESSO), this diagram shows it really well.
I wish I could share with you the entire slide pack, but I think I might incur the wrath of the TeleManagement Forum if I were to do that. If you want to see these great presentations from Telcos from all around the world, you will just have to stump up the cash and get yourself to Nice next year. Finally, I want to illustrate the integration architecture that ICE used - this diagram is similar to the one from Impact, but importantly, I think, it shows ICE's view of the architecture rather than IBM's or GBM's.
For the benefit of those who don't understand some of the acronyms in the architecture diagram above, let me explain them a bit:
- ESB - Enterprise Service Bus
- TOCP - Telecom Operations Content Pack (the old name for WebSphere Telecom Content Pack) - IBM's product to help Telcos get in line with the TMF Frameworx
- NGOSS - Next Generation Operations Support Systems (the old name for TMF Frameworx)
- TAM - Telecom Applications Map
- SID - Shared Information/Data Model
Here is the URL for this bookmark: http://apcmag.com/telstra-to-block-ipad-micro-sims-in-other-devices.htm
Interesting... in the rest of the world (and as I heard repeatedly last week at TeleManagement World in Nice, France) Telcos are suffering from all you can eat plans - particularly plans for devices like the iPhone, which encourage users to be online all the time and to consume rich media like movies. I heard from a number of Telcos that teenagers prefer to watch movies on their iPhones in their bedrooms rather than in the lounge room on the normal TV (not that they can always get access to the same movies on the TV) - surely a larger screen will encourage more of that sort of behaviour. This is driving too much traffic on Telcos' 3G networks with flat rate plans. Optus have also announced a similar all you can eat plan for their iPads.
At almost the same time, both Optus and Vodafone Hutchison Australia have offered unlimited 3G plans for just AU$50. It makes me wonder if these Telcos in Australia are listening to other Telcos around the world. There's been a lot of press about AT&T's network problems associated with iPhone users. I know the world would be a perfect place if we learnt from everyone else's mistakes, but come on - you don't need to be a genius to see how this could damage their business. I guess they see this as a competitive pressure - if their rivals do it, then they have to as well - I had hoped that the Australian Telcos would be (jointly) a bit more sensible.
I do not have any Apple products, and I'll admit to a bit of jealousy at an all you can eat plan for only AU$50 when I get about 1 GB for a similar amount on my Nokia E71 - it doesn't seem fair that I get so much less for similar money on the same network, just because of the device I choose to use...