
Google/Verizon: More Heat than Light?

Written by GoS. Posted in Blog

Few topics in networking seem to generate as much furore as “net neutrality”. Lots of factors are at work here, including fear of creeping government interference and suspicion of big business. The latter isn’t helped when two industry titans, in this case Google and Verizon, hold “secret” discussions leading to a set of proposals for what, and how much, should be regulated (in the US, at any rate). However, it’s interesting that any agreement was possible at all between these two, given that Google has been a staunch supporter of neutrality, while Verizon, as a service provider, could be expected to favour tiered services. The compromise affirms non-discrimination for “broadband Internet access” and a requirement of “transparency” on the one hand, while allowing “reasonable network management” and “additional or differentiated services” on the other. An exclusion for wireless access and the caveat of “lawful” applied to content, applications and services have critics such as the EFF hot under the collar, but on the whole this does seem to be a step forward in the debate.

What net neutrality diehards don’t seem to want to acknowledge is that packet networks rely on statistical multiplexing in order to be economical. While a service provider may offer, say, a 20 Mbit/s connection to every subscriber, it’s infeasible to actually deliver packets at that speed to all of them simultaneously. The network relies on the fact that users generally don’t exploit their full bandwidth allocation all of the time in order to deliver it to all users some of the time. When applications such as video streaming and peer-to-peer filesharing break this assumption, something has to give. Insisting on a “right” to run such applications regardless is akin to demanding a “right” to drive a car at its maximum speed all the time; in both cases this disregards the rights of other people using the same infrastructure. We’ve come to accept a system of speed limits on the roads, and some equivalent mechanisms for networks are inevitable if they are to continue to function in the face of insatiable demand. Net neutralists need to engage in a debate about which limits are fair and how they are to be enforced, and Google should be applauded for doing just this rather than condemned as evil.
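
To put some rough numbers on this, here’s a quick Python sketch of statistical multiplexing. All the figures (50 subscribers sold 20 Mbit/s each, active 5% of the time, sharing a 100 Mbit/s backhaul link) are invented purely for illustration, not taken from any real network:

    import random

    SUBSCRIBERS = 50       # users sharing the link
    PEAK_MBPS = 20         # advertised per-user speed
    ACTIVITY = 0.05        # fraction of time a user actually transmits
    BACKHAUL_MBPS = 100    # shared capacity: a tenth of the 1000 Mbit/s sum of peaks
    TRIALS = 100_000

    overloads = 0
    for _ in range(TRIALS):
        # Each subscriber is independently either idle or sending at full rate.
        active = sum(random.random() < ACTIVITY for _ in range(SUBSCRIBERS))
        if active * PEAK_MBPS > BACKHAUL_MBPS:
            overloads += 1

    print(f"Demand exceeded capacity in {100 * overloads / TRIALS:.1f}% of intervals")

With these made-up numbers the shared link is overloaded only a few percent of the time, despite being provisioned at a tenth of the theoretical peak demand. That gap is what makes the economics work, and it disappears when applications transmit flat-out around the clock.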

Between QoS and QoE

Written by GoS. Posted in Blog

Quality of Service (QoS) has long been regarded in some quarters as fiddly, technical, and difficult to relate to everyday concerns. The new buzzword is “quality of experience” (QoE), which sounds, on the face of it, much more intuitive. Who wouldn’t want a good quality of experience from their network service? Obviously this is something that service providers should strive to improve! On closer inspection, however, things are not so simple: a user’s “quality of experience” is fundamentally subjective, so it’s not easy to measure, and in any case it depends on many factors that are outside a service provider’s control. For example, you might not enjoy an otherwise excellent video conference if a lot of your screen pixels were dead, you had a migraine, or you got fired in the course of it! So “QoE” has come to mean those aspects of the delivery of a service to the end user that it is possible to measure. This brings it very close to another useful, but less widely used, concept: “quality of application” (QoA), the performance of the distributed system delivering the application or service. This is almost the same thing, but looked at from the inside out: an engineering view of service delivery.

So what determines QoA? Let’s start from the user: the first part of the system they interact with is some sort of endpoint, whose performance may be inadequate. For instance, we had an issue here with Skype, running on a dedicated but very old laptop that turned out to be underpowered for the job (I believe the latest versions of Skype check for this sort of problem and give a warning). Replacing the computer with something less antiquated made a huge difference! On the other side of the equation is generally a server, or sometimes several, for example when a web server needs to query a database in order to display a page. The increasing trend towards centralising servers in data centres has spawned a batch of server monitoring tools to keep track of this critical aspect of QoA.
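
At its simplest, such monitoring boils down to timing requests end to end. Here’s a minimal Python sketch; the URL and alert threshold are placeholders invented for the example, not a real service:

    import time
    import urllib.request

    URL = "http://example.com/"   # hypothetical endpoint to monitor
    SLOW_THRESHOLD_S = 0.5        # illustrative alert threshold

    samples = []
    for _ in range(10):
        start = time.monotonic()
        with urllib.request.urlopen(URL, timeout=5) as response:
            response.read()       # include the transfer in the measurement
        samples.append(time.monotonic() - start)

    mean_ms = 1000 * sum(samples) / len(samples)
    worst_ms = 1000 * max(samples)
    print(f"mean {mean_ms:.1f} ms, worst {worst_ms:.1f} ms")
    if max(samples) > SLOW_THRESHOLD_S:
        print("server is degrading QoA for at least some requests")

Note that a probe like this lumps server time and network time together, which brings us to the third component.
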
Finally, there’s the network that connects all the endpoints and servers together, whose performance we call “QoS”. QoS is complicated because the network has to do lots of things at once, all of which need to get consistent service (just like a server supporting a bunch of VMs). Not every application needs low latency, but every application needs some bound on the delay and loss its packets will experience when crossing the network. Otherwise its QoA will drop, taking QoE with it, and maybe giving you that migraine in the first place, or even getting you fired if it was your job to do something about it!
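
The way delay explodes under contention shows up even in the simplest textbook queueing model. As a sketch, here’s the classic M/M/1 mean-delay formula evaluated at a few utilisation levels (the link speed is invented, and real traffic is of course messier than this idealised model assumes):

    SERVICE_RATE = 1000.0   # packets per second the link can forward (invented)

    for utilisation in (0.5, 0.8, 0.9, 0.95, 0.99):
        arrival_rate = utilisation * SERVICE_RATE
        # Classic M/M/1 result: mean time in system = 1 / (mu - lambda)
        delay_ms = 1000.0 / (SERVICE_RATE - arrival_rate)
        print(f"utilisation {utilisation:.0%}: mean delay {delay_ms:6.1f} ms")

Mean delay grows without bound as utilisation approaches 100%, which is why something has to keep contention for the link under control if applications are to get the delay and loss bounds they need.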

Peter Thompson, Chief Scientist

Optimising Networks for Video Delivery

Written by GoS. Posted in News

GoS CTO Charles Twist has been invited to speak at IIR’s forthcoming “Optimising Networks for Video Delivery Conference”, taking place in September in Warsaw. This unique event is co-located with the well-established Carrier Ethernet World Congress and Transport Network Strategies conferences.

Charles will discuss the issues behind “Monitoring, measuring and controlling video QoE to the edge device whilst protecting other services”. Topics covered will include:

  • Guaranteeing QoE even with network congestion
  • Balancing maximum QoE against efficient use of bandwidth
  • Ensuring consistent, predictable and efficient QoE for network planning
  • Achieving service demarcation and diagnostics across any device
