
Wednesday 26 August 2009

Whither WiMAX?

There are strongly divided opinions about WiMAX.

Its supporters have a vision of a world where many devices are wireless mobile enabled, so that, for example, a camera can take a picture and have it automatically and immediately uploaded to the photographer's own picture archive on a server somewhere, gaming devices can use mobility and geography as part of the game, a phone can make a free VoIP call, or the expected new generation of digital notebooks as well as laptops can access videos, books, online newspapers, and social networking sites seamlessly. In short: access to the web anywhere, on any device, at high-end broadband speeds.

On the other hand, observers point to the approaching ‘Long Term Evolution’ (LTE) standard for transforming 3G mobile networks (UMTS) into 4G networks based on IP. These will offer speeds in excess of current WiMAX technology, but less than next generation WiMAX (and both are in excess of speeds likely to be achievable over current fixed wire local loops). They say that there is no business case to build a WiMAX network when the mobile companies can deliver the same through the current mobile networks enhanced to 4G.

So should the title of this article be “Wither WiMAX?”

The position in the UK is not promising. At present, Ofcom has licensed only two operators to provide fixed point services based on the 802.16d standard. This has limited capacity and does not enable mobility (802.16e is ready for deployment and enables mobility, including cell handoff). Therefore at present WiMAX can compete with fixed broadband services but not mobile – yet mobile is the raison d'être of WiMAX. Meanwhile the WiFi hotspot operators and GSM/3G operators are signing up mobile broadband customers as fast as they can, mopping up the pent-up demand that would otherwise come from potential WiMAX customers.

If this particular cork were removed, how would WiMAX operators respond? They are broadly ready to deploy 802.16e technology but face significant problems in getting a return on investment.

Firstly, it will be a major undertaking, and investment, to build out a network of base stations to support wide area coverage. Mobility is not an easy sell unless coverage extends to at least the main areas a mobile user may go to – this means at least metropolitan cover in the area you live and work, and probably most of the main metropolitan areas in the country, to be a convincing proposition. The big issue in base station building is acquiring sites appropriately located for WiMAX frequency and wave propagation characteristics, which are much more significantly affected by topography and are less able to penetrate buildings than GSM frequencies. The mobile phone companies, on the other hand, already have all their base station sites, so the cost of deployment is much lower and the speed of deployment much greater – even though LTE is behind WiMAX in terms of development, it may be able to overtake WiMAX on deployment.

Secondly, the WiMAX-enabled user devices are, by and large, not on the market yet. Smart phones and PDAs on the market today are WiFi and GPRS/UMTS enabled but not WiMAX. The latest laptops are starting to ship with WiMAX as well as WiFi built in, but the rest of us will need to buy a dongle transceiver.

Thirdly, the technology is still on the leading edge, and despite deployments elsewhere in the world – most notably by Sprint in the USA – is not yet routine in terms of deployment, network management, or coverage prediction.

And finally, can the services offered be differentiated – and priced – competitively with both WiFi and GPRS/UMTS? On the one hand, WiMAX may compete on speed, especially uplink (it is a symmetrical service, unlike wired broadband, so uplink is very fast compared with normal domestic broadband), which might suit those needing capacity to upload large files on the road, such as photos or videos, or even some fixed-location users for similar reasons. But this advantage is offset by very limited coverage until a lot of network building has taken place – and that will take years, not months.

It seems to me that WiMAX has its greatest potential where mobile (especially mobile data) is less well developed, particularly in developing countries, and where fixed broadband is not universally available; but with regret I have to conclude that the odds against WiMAX succeeding in the UK are long.

Thursday 20 August 2009

What's in a name?

I've been in the IT industry for - well, you don't need to know, but suffice it to say many years. There has been such huge technology change in that time: when I started as a trainee, there were no personal computers, only monolithic mainframes which you programmed by writing code with paper and pencil, and then sat at your desk and effectively executed the program line by line on paper. When you were happy, you sent this to a room wherein young ladies of the type much admired by male trainees typed it onto punched cards for loading onto the mainframe.

But it's not so much the technology change, constant and amazing though it is, which I am interested in exploring here; it is the continual change in job titles. Thus I was a Programmer, but these days of course you don't have Programmers, you have Developers. You don't have Programmers because you don't write Programs any more, you write Software. I don't know why the change arose, because despite the technology now enabling code writing and testing to be done online and in real time, it is still essentially the same job of producing logic instructions to achieve a business requirement.

In the main, however, the issue driving changes in job titles is essentially one of 'job inflation'. When I started my career, a company was administered by the Board (as it still is today), but headed by a Managing Director, assisted by the Finance Director, Operations Director, Sales Director and so on. Then there was the management layer, where the managers were all called - well, Managers. At the Project level, you had a Project Leader running the show, and within the Project there were Teams, each led by a Team Leader.

Then it was decided that being called a Manager was not sufficiently important sounding, especially if you were in Sales, and if you were called a Director you sounded much more impressive to a prospective client. That in turn meant that the Board's important titles were diluted, because you couldn't tell if a Director was really important or just moderately important.

Much the same thing happened in the States, where Directors were known as Vice-Presidents. A few years back, I encountered the delightful title of "Vice-President of Blue Sky", which I gather meant he looked for new business markets. (So were existing customers, by implication, Grey Sky?)

So the Board members became Chief Officers, and the Managing Director as a title now means the head of a subsidiary or division.

Interestingly, there is now a trend where the management layer is populated not by Managers or Directors, but by Heads of Department. My own favourite job title, which I came across recently, is "Head of Web", a title I now espouse when assisting the technologically challenged amongst my family. They of course, in their ignorance, still think a Head of Web is what you find in those corners you don't dust as assiduously as you should.

Now, large developments are not just big Projects, today they are Programmes, and Leaders have become Managers (since the Management layer no longer needs the term, and it sounds more impressive than Leader). Except that even quite modest developments are now Programmes, so that what used to be a Team is now a Project. Thus the leader of a small team is now a Project Manager, and quite possibly has no Project Management training or accreditation. It is getting harder to decide if it is better to present myself as a Project Manager, a Programme Director or a Head of IT Implementation.

We have also started to see that the leader of some project teams is known simply as 'Lead'. Thus the head of the Testing team, who until recently was known as the Test Manager (Head of Testing is obviously too grand for what is an essential but regrettably unglamorous role), now seems to be increasingly termed 'Test Lead'. Clearly this is not the same as Test Leader, because we don't want to go back to the past, and Lead sounds more modern - or at least, with the use of an adjective as a noun, more American.

So, where next? I suspect that the next round of title inflation is overdue, that the Director will become a Chief of something, if not a Chief Officer, and all Project Managers will become Project Directors. This will only be acceptable to the Board members if they in turn have their escalation route mapped out.

I have a suggestion: how about Minister? After all, the important Government ministers are now all Secretaries of State, so their importance can still be maintained. And I would love to go into a meeting with my Board level boss and be able to agree by means of a simple "Yes, Minister!"


Wednesday 12 August 2009

Agile Waterfalls?

There is something of an ongoing debate at present concerning project methodologies between the traditional approach known as Waterfall, and the new kid on the block, Agile, which has become somewhat - dare I say it - trendy. So is it appropriate to change IT development strategy from Waterfall to Agile?

Now there are many different flavours of each of these models, but broadly speaking the principles of each are as follows. Waterfall requires that the project moves through defined stages (broadly - requirements confirmation, design, development, test, acceptance, deployment), whereas Agile is about reducing time into service for software developments by establishing a collaborative relationship between developers and users, and using iterative development techniques to ensure best correlation between requirement and solution. This is normally achieved by time-boxing the overall interval between releases to typically 3 months, and by reducing the project stages to four:

  • Release planning, where the approach to the release, including dates, budgets, resources and functionality focus areas, is agreed with the principal business sponsors;

  • The hothouse, an intense, competitive workshop over normally 3 or 4 days to agree the requirements scope and the business case for the release;

  • Iterative development, a period comprising typically four or so iterations through the development cycle, delivering successively refined prototypes of the solution;

  • And finally, deployment into service.

So should all developments in future move to an Agile model? Agile certainly has some key benefits that are highly relevant to certain project scenarios.

Implementation of new products and services in a fast-moving business can happen more responsively due to the short release cycles. This in turn leads to increased customer satisfaction due to rapid delivery of usable functionality.

There is a closer relationship between business benefit and functional delivery prioritisation, which has to be a good thing. Furthermore, the close co-operation between users and developers leads to a better understanding of the business by the developers, and an increased sense of collaboration and ownership of the solution by the business.

New requirements that emerge during a short implementation timeframe can be accommodated by a method which allows for some flexibility of requirement and prototyping during the development iterations.

However, there are also disadvantages of Agile compared with Waterfall.

Pressure for throughput of development can lead to short-termism of design - a lack of attention to the overall solution architecture and the way that this supports future flexibility and maintainability. Similarly, the rigidity of the time-boxed development cycles can lead to releases being changed in scope at a late stage.

Agile relies largely for its success on a high level of involvement of user communities with the development teams through the iterations, in order to maintain a close link between requirement and solution. This is much harder to deliver effectively where the development teams are off-shore.

Developers need to be experienced and responsible in order to maximise business benefit from the development cycles. This can also be an issue with Indian development teams due to the strongly hierarchical culture in India. Furthermore, it can also increase the average cost per developer day, as it is harder to use more junior resources effectively.

Requirements documentation is often insufficiently precise to form a sound basis for acceptance (or indeed for business process amendment and staff retraining) and can lead to commercial issues between supplier and user, as well as scope creep during development iterations.

Finally, because of the fluidity of requirements and delivered scope inherent in the Agile approach, it is harder to tie a supplier’s contract price to functionality.

In conclusion, it seems to me that where there is the need to provide change on an existing environment, and to keep that change focused on maximum return in a short timescale, then Agile provides a compelling case. However, for major developments - particularly for large scale IT refresh programmes (e.g. billing system replacement) or for greenfield developments for start-ups - there seems still to be a strong case for these being managed in a Waterfall based approach. But there are lessons to be learned here from Agile experiences: large Waterfall projects also benefit from being broken into appropriate phases for frequent delivery of value, from establishing and maintaining a close involvement of the relevant communities within the business, and from a more progressive approach to Acceptance.

Although the advantages of Agile are widely promoted at present, it is also becoming increasingly clear that not every project would benefit from being implemented through wholly Agile techniques.

Monday 10 August 2009

Management by Targets

There's been quite a lot in the news over the past year about the management styles of the banks that led them to reward the wrong behaviour and drove them to the brink of insolvency. That started me thinking about the way that management style has changed in the past three decades I have been working.

When I began work in the seventies, the office environment was quite different. There were no office computers for a start (the only ones used by business were in large rooms tended like temple gods by priests and priestesses behind locked doors, and used only for applications with high returns given the then astronomical cost of computer power). There were in relative terms large numbers of secretarial staff, as all documents needed to be typed. Businesses communicated by phone, letter and occasionally by telex.

The management environment for staff was different too. Staff expected to change jobs less often. The emphasis was on doing well, and employers encouraged staff to improve - there was much focus on training, development of individual capability in the right directions for the business, and getting staff to the point where promotion could be envisaged.

Staff performance assessment was consequently more qualitative than quantitative. This had the drawback that assessment was more subjective, but usually there were enough people involved in the review processes that there was little scope for personality clashes to spoil a person's career. What we didn't have were individual targets (sales personnel excepted), but I don't think that made us any less committed to doing well. We did have bonuses, usually related in some way to company performance, and relatively modest in scale compared with today's incentives.

Today of course every member of staff in all the companies in which I've recently worked has individual targets, and often large bonuses based largely on achievement of their own personal targets. These targets are by their nature short term, and so increasingly are corporate objectives - show revenue and profit growth not just this year, but this quarter or even this month, and we'll worry about next year when we get there.

The results are predictable. Managers drive their reports rather than lead them, often with unpleasant "macho management" (at least the real bullies of old are now curbed by modern employment law). Staff work in stressed conditions, focus single-mindedly on their targets even when it becomes clear these are not in the company's interests (the company being powerless to change them once committed), and matters of true good performance and development of capabilities are lost. How many staff in, or aspiring to, management grades work the contracted number of hours per day or week? Many routinely work up to a couple of hours a day extra, by getting in a bit early, cutting the lunch hour short, and leaving an hour or more late - and this is not paid overtime.

It certainly riles me to go into a staff assessment interview, and have a discussion on the lines of:

"You didn't achieve your targets."

"No, but they became less important due to x and y, and I couldn't achieve them. I think I am good at my job, don't you?"

"That's not for me to say: you didn't achieve your targets."

This inability to recognise competence unless it is measurable - and only targets are measured - is one of the reasons why I've focused more and more on contract work rather than employment, where you are judged on performance and "results" (in a general sense) and not on targets.

There are of course some notable advances in modern management methods. In particular I would note the improved way that women are treated nowadays, which in most of the offices I've worked in recently is genuinely on an equal footing with men.

However, there is no getting away from the fact that target-based staff management, rather than competence-based reward, seems to me a doubtful improvement - a view the victims of the excesses of the City traders and bankers might well share.

Saturday 8 August 2009

The Thrill of the Start-up

I have been involved now in several telecoms service providers either as greenfield start-ups or as 'early life growth' companies, and there is no doubt they are very exciting places to be. There is a vibrancy about them - a will to overcome problems, to get their message and their services into the marketplace, and to dare the market not to let them succeed!

Regrettably, not all do succeed. Many have particular problems with the IT implementation, which is sometimes the cause but more often, I think, the symptom of the underlying issues. Start-ups have certain characteristics which are both the nature of the beast and the source of problems with the IT requirements.

Firstly, they are almost always developing the IT infrastructure before the business has determined in detail how the network will be configured and managed or how the launch products will be defined. This is because to set up significant IT infrastructure generally takes as long as building a core network, and longer than creating products and services. This means that significant architecture and design decisions are taken before all the data impacting those decisions are known.

Secondly, since the company starts from a base of no staff, there is almost no-one to be involved in the detailed planning of business processes: whether the products will be sold through resellers or retailers and, if so, how to engage and remunerate them; how the logistics around supply and sale of handsets/SIM/routers/numbers/etc (delete as appropriate) will be managed; and so on ad nauseam. Therefore once again the IT infrastructure starts out in a certain direction, and too often the business later decides on a different course, with consequent impacts (not always taken into account) on the IT build programme.

Thirdly, business plans made well before launch have a habit of being adjusted the closer to launch you get. Suddenly, volume assumptions about wholesale vs. direct, business vs. residential, self care vs. call centre, in house processing vs. external - the list is again endless - get turned on their head, and you find you have either too much or too little IT capacity (sometimes both!). More pertinently, you may find sudden adjustments in Capex or Opex budgets which significantly compromise the IT support you can deliver to the business.

And fourthly, start-ups have a habit of changing their ownership during the run up to launch, as funding needs necessitate new investors with an incentive to investigate how their money is being spent. Often these investors have strong preconceptions about how the infrastructure should be architected - for example, corporate policies about specific suppliers, perhaps a new requirement to converge solutions with other group operating companies - and you may find at best a distraction from the build issues, at worst a major reappraisal of direction.

I have therefore come to some simple principles to help guide IT delivery for a start-up.
  • Design the functionality as openly as possible, and keep it all simple, basic and targeted on the major volume transactions. Requirements will change, so be ready to go in different directions. Complexities and automation for the exceptional transactions can be added later.

  • Design the architecture so that the systems are scalable, but build the initial configurations for the early volumes - resist the temptation to build for 4 years out on the basis you have the capital available!

  • Define and use the change management process from the earliest days.

  • Set up and use rigorously one tool omitted from the PRINCE2 methodology, but in my view essential for such projects: an Assumptions and Decisions Register. You will certainly have a conversation with someone very senior and new to the organisation, probably about a month before launch, along the lines of 'Why on earth did you do it this way?' You will, I guarantee, need to produce your record of all significant business and design decisions or assumptions, who endorsed them, and why.
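
For what it's worth, the register needn't be anything elaborate - a spreadsheet with a handful of columns is enough. The sketch below (Python, purely illustrative; the field names and the example entry are my own assumptions, not part of PRINCE2 or any particular start-up) shows the kind of information worth capturing: what was assumed or decided, why, by whom, and when.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class RegisterEntry:
    """One assumption or decision, recorded at the time it was made."""
    entry_id: int
    entry_type: str        # "assumption" or "decision"
    summary: str           # what was assumed or decided
    rationale: str         # why, given what was known at the time
    endorsed_by: str       # who signed it off
    raised_on: date
    status: str = "open"   # open / superseded / confirmed

@dataclass
class AssumptionsAndDecisionsRegister:
    entries: List[RegisterEntry] = field(default_factory=list)

    def add(self, entry: RegisterEntry) -> None:
        self.entries.append(entry)

    def open_assumptions(self) -> List[RegisterEntry]:
        """The list to walk through with that very senior newcomer."""
        return [e for e in self.entries
                if e.entry_type == "assumption" and e.status == "open"]

# Hypothetical example entry:
register = AssumptionsAndDecisionsRegister()
register.add(RegisterEntry(
    entry_id=1,
    entry_type="decision",
    summary="Launch billing supports direct retail customers only; no reseller settlement.",
    rationale="Reseller channel not yet agreed by the business at design freeze.",
    endorsed_by="Programme Sponsor",
    raised_on=date(2009, 3, 2),
))
```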

But despite these trials and tribulations, if given the choice between mature organisation and start-up, I would prefer to work for the start-up - they are just more lively and more fun!

Friday 7 August 2009

Permanent or Contract?

For the first twenty-odd years of my career, I was a Permie - a salaried employee of the company I served. It didn't really occur to me to do anything else - I was a Project Manager, and Project Managers are professionally risk-averse. And I saw contracting as a risky business - you couldn't know where the next job would come from, or how much you would earn from it.

Being made redundant for the first time didn't really change that view. This wasn't a time of recession, it was simply because my employer wanted to reorganise and relocate, and I didn't want to move. I cast around until I found a new permanent position.

The second time was in a recession. The outlook appeared bleak. I was lucky - I called a client I'd previously worked for as a consultant, and he wanted me back - as a contractor. And so I had my 'road to Damascus' moment.

Now I have several years' experience as a contractor - or as it is now more politely referred to, an Interim Manager. I can look at both types of role, and have some observations to make from the resource side of the fence.

Firstly, it is clear that the decline in job security has eroded the difference in risk between permanent and contract. It is no longer 'safer' to be in a permanent role - as soon as the company hits choppy waters, over the side goes anyone now seen as 'non-essential'. Nor indeed are the benefits any longer compelling - nobody now expects an employer to provide a generous pension scheme, and most employees find a company car is a tax liability.

Secondly, as I hit the latter stages of my career, I find there is a difference in the way companies approach permanent and contract staff. When I go for a permanent job interview (and I still do from time to time) I get the impression they are disappointed that I am nearing retirement age. They seem to want either someone aiming to step up to the role, and so keen to prove themselves, or else someone not for the current role but for some future one not yet available. Often the job description requires someone who is 'ambitious' - but ambitious for what? I am no longer ambitious, if that means wanting to get there and immediately seek advancement. I don't want to work my way up to CEO, but that's not the same as not wanting to do as good a job as I can in the role I was hired for. On the other hand, for contract work, companies do indeed require experience - they want the grey hairs and battle scars, and are much less inclined to take a punt on someone keen to make a name for themselves. This is one reason why I've focused more on the contract market as I get older.

Thirdly, contractors seem to be more self-reliant. We have to be - we don't have an employer taking care of national insurance, taxation, health cover, life insurance, car allowance and the rest. Employees sometimes look at contractors' fee rates and think we are pampered - but take into account the costs contractors bear that employees don't, and the periods between contracts, and the higher remuneration isn't so significant. But I have noticed that the contractor community is stacked with individuals who have the 'make it happen' attitude - an attribute our clients generally really need.

Fourthly, most of the contractors I have worked with take tremendous pride in their work. We all seem to work hard - often harder than the permanent staff - and derive great satisfaction from a job well done. That in some ways runs contrary to popular opinion, that contractors are 'only out for themselves' - but the real truth is that contractors live by their reputation, and so must constantly prove their worth.

It is clear to me that for the resourcing of projects, which are by definition finite in duration, employers are well advised to look at the contract market. The overall costs of such staff are certainly comparable to those of permanent staff (particularly if it comes to redundancies), and it is easier to get a resource which is closely matched in experience and capability to the new project in hand. Furthermore, the issue of what to do with the resources at the end of the project is no longer a concern!

Thursday 6 August 2009

How to get CRM wrong

I've been involved with the delivery or enhancement of information systems that support customer services for many years, and I am eternally disappointed at how companies seem determined to miss the point of customer service. Customer service often seems from a consumer's point of view to be something they hate, and can result in really negative views of the organisation in question; and yet companies seem unable to address this.
Accordingly, since so many companies seem determined to deliver bad customer service, I'd like to help them by presenting my guide to making CRM unsuccessful!
Here are my top 5 decisions to ensure a reduced business benefit from CRM investment. (I could do many more, but in the interests of brevity...)
Customer service is a cost of doing business - so we will just try to handle problems as cheaply and efficiently as possible
Most of the significant inadequacies in customer service arise from the view that the cost of it must be kept under strict budget constraints. However, the results of poor customer service are felt in increased customer churn, longer payment times, reduced ARPU, negative recommendations by customers to their social circle, and adverse brand image generally. Conveniently, none of these are reliably quantifiable, and none affect the budgets of the Customer Service operation.
As with so much else in life, customer service costs are about value for money - the service levels (both measurable and 'feelgood') should be carefully defined, and only then can strenuous efforts be made to deliver them at a sustainable cost.
This is especially true with offshore call centres - there is a tendency for these to be staffed with agents who clearly don't know the company or products they represent, don't understand how these products are used in the context of British life, and don't understand UK geography. This can only work with very careful attention to training.
We will stream customer contacts by subject of call
Like everybody else, I have had some bad experiences with calling Customer Service lines. There are plenty of ways to annoy the customer before even speaking to an agent: the interminable "press 1 for x, press 2 for y" menus, the long wait whilst "none of our agents is available", and the meaningless platitude that "your call is important to us" when all the evidence is to the contrary. But worst of all, on getting through, finding that the person you speak to is not the right one after all, and you need to be handed on to another department (or even in some cases asked to call another number altogether). I have been on calls to companies who should remain unnamed where I've been passed from one department to another, only to be transferred back to where I started.
Every company has a good reason for streaming. Mostly it is about the complexity of the issues that can arise and the degree of staff training needed to cover them all; often it is about systems and the lack of ability to join together all the data that may be required; and sometimes it is because the company needs to be careful about which authorisations it permits which staff to have (a significant portion of fraud is regrettably internal). Too few companies design and manage the streaming in an externally focused way rather than to suit themselves.
We can solve a lot of problems through the IVR programming
This should actually read, "we can create a lot of problems through IVR programming". Besides the menu chains so often leading to dead ends when the choices don't exactly tally with what the customer is seeking, often with standard messages instead of a person and with no way back but to start again, there is a tendency to request meaningless information along the way. I am particularly irritated by the recorded announcement asking me to key my account number using the telephone keypad, only for the agent, when I finally get through, to begin by asking me for my account number. It's no good arguing; this poor soul can't help it - the systems haven't passed the data on with the call.
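The remedy is conceptually simple: whatever the IVR collects should travel with the call to the agent's desktop. Here is a minimal sketch of the idea in Python - the hand-off function, field names and call flow are all invented for illustration, since every telephony platform has its own interface:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class CallContext:
    """Data gathered in the IVR that should travel with the call transfer."""
    calling_number: str
    account_number: Optional[str] = None
    menu_path: str = ""              # e.g. "billing > query a charge"

def cti_transfer(call_id: str, queue: str, attached_data: dict) -> None:
    """Stand-in for the real CTI/telephony platform call (every vendor's API differs)."""
    print(f"Transferring call {call_id} to '{queue}' with data: {attached_data}")

def hand_off_to_agent(call_id: str, context: CallContext, queue: str) -> None:
    # The point: whatever the IVR collected goes with the call, so the agent's
    # screen pops with the account already identified and the customer is not
    # asked to repeat the account number they have just keyed in.
    cti_transfer(call_id, queue=queue, attached_data=asdict(context))

# Illustrative flow (all values hypothetical):
ctx = CallContext(calling_number="+441992000000", menu_path="billing > query a charge")
ctx.account_number = "12345678"      # keyed by the customer at the IVR prompt
hand_off_to_agent(call_id="abc-001", context=ctx, queue="billing")
```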
Not only should IVR routings and messages be carefully set up, they should also be frequently reviewed and revised. I believe that companies should force their senior managers and directors to use their own products, not as corporate facilities but as real external customers. If the CEO of, say, a mobile operator had to call customer service like everybody else when a problem occurred, instead of getting their PA to call the CS Director, the situation would improve dramatically.
Once we've got self-care set up, all we need to do is monitor the daily reports and let it take care of itself
Self-care seems to be particularly widely abused by Customer Service operations. There is no doubt that some self care works really well: for example, those energy companies I've used have excellent and efficient web sites and IVR systems to take requested meter readings. For anything except a routine request, however, it seems to break down.
There are a few simple rules that need to be followed for self-care (whether web or IVR based) to work better. Firstly, design the flows from a customer viewpoint - think of minimising the number of clicks or time taken, ease of use, and escape routes. Secondly, design usage data reports such that you can tell what is working and what isn't. Thirdly, keep on and on reviewing how easy it still is to use after changes for new products, new processes, new journey flows and so on.
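On the second rule, the reporting needn't be sophisticated to be useful. A minimal sketch of the kind of drop-off report I have in mind follows - the journey names, event fields and figures are hypothetical, purely to illustrate the shape of the analysis:

```python
from collections import Counter

# Hypothetical self-care events: (session_id, journey, last_step_reached, completed)
events = [
    ("s1", "submit meter reading", "confirm", True),
    ("s2", "submit meter reading", "enter reading", False),
    ("s3", "change tariff", "choose tariff", False),
    ("s4", "change tariff", "choose tariff", False),
    ("s5", "submit meter reading", "confirm", True),
]

# Completion rate per journey tells you which flows work and which don't.
started = Counter(j for _, j, _, _ in events)
completed = Counter(j for _, j, _, done in events if done)
for journey in started:
    rate = completed[journey] / started[journey]
    print(f"{journey}: {completed[journey]}/{started[journey]} completed ({rate:.0%})")

# Where a journey leaks, the step at which sessions stall shows what to fix.
abandoned_at = Counter((j, step) for _, j, step, done in events if not done)
print(abandoned_at.most_common(3))
```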
We aim to deliver the same level of customer service to all of our customers
Unfortunately, this usually means dragging everyone down to the lowest common denominator. You should aim to treat every customer with the same level of respect and care, but not all customers are equal. This mantra is often an excuse for the failure to recognise customers who might have cause for complaint, or who are particularly valuable and should not be aggravated, or who are at risk of churn. These failures are usually for systems reasons - again, the inability to pull together the elements of customer data that can identify such conditions and warn the customer service operator.

Customer Service has such a bad reputation amongst the public at large - but we don't have to accept that 'industry standard' poor levels of service need to apply to our own organisations.

Wednesday 5 August 2009

Prepaid/postpaid convergence - is it worth it?

Suppliers of billing software to the mobile industry are advancing the capabilities for prepaid/postpaid convergence in leaps and bounds - but like many a handset feature, is it something customers really want?

It is worth remembering that operators developed the prepaid market largely because of the fraud risks with postpaid: the period from a new user being granted service, to first bill, reminders and dunning processes, to service cutoff could be up to 10 weeks. Therefore operators set a high creditworthiness threshold, and so limited the market sector they could address.

It was the advent of IN platforms for in-call management - initially aimed at number translation and routing services - that provided the opportunity for developing prepaid services. These proved very popular with customers: people liked being able to predict costs with no surprises at bill time, and also appreciated the simpler and more immediate buying experience.

However, as a result, operators ended with two distinct systems infrastructures for rating, billing and customer care. This got more complex with the advent of various data services and increasingly diverse billing arrangements. But does that matter?

There is no doubt that running two significant infrastructures is costly. This is true from a pure IT point of view, but even more so from a business point of view. The different capabilities of prepaid and postpaid billing systems tend to mean that an operator will have separate products, with separate marketing and product management functions (and a more complex marketing message), impacts on staff training and sales, considerable complexities in migration of a subscriber from pre to post or vice versa, probably separate call centres and all of the CRM that goes along with that - rewards, customer retention and recovery, revenue assurance and fraud detection, and so on. Simplification of these business functions would bring management and cost benefits.

What about the customer point of view? Would a convergent solution provide additional customer benefits?

Well, there are certainly customer options available with a convergent solution that can't be made available (at least easily) on a diverse solution. For example, it would be possible to vary payment method by date/time, or by numbers called or services used. Consider these scenarios:

1. A business provides an employee phone where calls during the working day are postpaid by the business, and calls made during the evening are prepaid, with top-ups the responsibility of the employee. The business can provide a phone for business use without preventing reasonable use by the employee, whilst the employee can use the phone for personal calls when not at work without worrying about the company view of the cost.

2. Parents can provide phones for their children knowing there will always be enough credit available for them to call home or to receive calls, but that they will not run up big bills on calls to friends, or downloads, games or mobile surfing to Facebook etc, for the parents to pay.

3. A business can provide phones to its delivery drivers but limit usage to calls to the company office or corporate VPN, paid by the company. The driver can still use the phone for personal calls on a prepaid basis if he tops up the credit.
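
To make the mechanism concrete, here is a minimal sketch of how a convergent charging function might choose which 'purse' a call is charged to, roughly covering scenarios 1 and 3 above. It is illustrative only: the rule structure, names and working-hours boundary are my own assumptions, not any operator's or vendor's implementation.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Subscriber:
    msisdn: str
    corporate_account: str       # postpaid account paid by the employer
    prepaid_balance: float       # personal prepaid credit, topped up by the user
    company_numbers: tuple = ()  # destinations always chargeable to the employer

def payment_leg(sub: Subscriber, called_number: str, call_time: time) -> str:
    """Decide which purse a call is charged to on a convergent platform."""
    working_hours = time(8, 0) <= call_time <= time(18, 0)

    # Scenario 3: calls to the company office/VPN are always postpaid on the
    # corporate account, whatever the time of day.
    if called_number in sub.company_numbers:
        return f"postpaid:{sub.corporate_account}"

    # Scenario 1: other calls in the working day go to the corporate account;
    # evening calls come out of the employee's own prepaid credit.
    if working_hours:
        return f"postpaid:{sub.corporate_account}"
    if sub.prepaid_balance > 0:
        return "prepaid:personal"
    return "blocked:no-credit"

# Illustrative use (all values hypothetical):
sub = Subscriber(msisdn="447700900123", corporate_account="ACME-001",
                 prepaid_balance=4.50, company_numbers=("441279000000",))
print(payment_leg(sub, "441279000000", time(20, 15)))   # postpaid:ACME-001
print(payment_leg(sub, "447700900456", time(20, 15)))   # prepaid:personal
```

The point is that the payment method becomes a per-event routing decision on a single platform, rather than a property of which infrastructure the subscriber happens to be provisioned on.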

There is no doubt too that there are benefits for the operator in being able to treat all customers the same however they pay. Any product can now be sold to any customer, greatly simplifying the marketing message, since payment method becomes just a purchase option. Any postpaid account can apply a top-up at any time. Discounts applied to hierarchies can be more easily applied to mixed pre/post hierarchies, with again a simplified sales message. And of course with only one solution architecture to update, new products should be more easily and more rapidly introduced.

But still - is it worth it? If you were a new operator with greenfield systems and processes, no doubt you would go the convergent route. But for a major operator with long established legacy systems - which will by now be horribly complex and intertwined - unpicking this to become convergent is a major undertaking: costly, risky, and lengthy (and hence highly impacted by business change). Some operators have taken the plunge, but they are still a minority. A large mobile business needs a lot of convincing that the benefits justify the pain of getting there. And as a customer, there are many more important factors when buying your mobile service.

So whilst I envisage convergence slowly increasing its presence in the industry, I don't expect to see convergence revolutionising mobile sales in the High Street any time soon.