Financialization in telecom

The normal role of finance in an economy is to facilitate trade and production efficiently; profits are generated through those transactions. However, due to dysfunctional factors, it can become more profitable to use financial methods to generate profits without trade or production. This abnormal role of finance in the economy is termed financialization.

Financialization is “an economic, social and moral disaster: net disinvestment, loss of shareholder value, crippled capacity to innovate, destruction of jobs, exploitation of workers, runaway executive compensation, windfall gains for activist insiders, rapidly increasing inequality and sustained economic stagnation.” [2]

Financialization in the telecom industry has become a destructive force. “AT&T and Verizon say 10Mbps is too fast for “broadband,” 4Mbps is enough” is the best indicator yet of the depth of financialization in telecom. Providing better services would severely limit the telcos’ financial engineering activities. It is ironic coming from the heirs to a legend built on the promise of providing the “best possible service.”

It seems the telcos no longer consider it their business to provide the services their customers need, as illustrated by these reports:

Now, contrast that with how other industries operate, for example, utilities, auto, or computing. Here are some highlights:

The loss of direction by the dominant communication providers has cascading negative effects on the industry. It has decimated a once-thriving telecom technology supply chain: Nortel is no more [2, 3], Alcatel-Lucent “has not earned any money 2006-2013” [2, 3], and Motorola has shrunk dramatically [2, 3].

With all this going on, one would think there would be an earnest effort to find out what is wrong. Instead, the preoccupation in the media and industry is with the “net neutrality” confusion, which the FCC Chairman summed up: “the idea of net neutrality has been discussed for a decade with no lasting results.”

Posted in Communication industry, Telecom industry

Tragedy of Internet Commons

Verizon’s explanation of its recent dispute with Netflix highlights the problems of inadequate ownership rights [2, 3, 4] and the lack of commonly accepted sustainable practices on the Internet. An unrelated factor that complicates matters further is that the Internet was not designed to carry video streams.

There are historical precedents for the conflicts we are witnessing with the Internet — the “Tragedy of the Commons” [2, 3, 4, 5, 6, 7, 8, 9, 10]. In medieval England and Europe there was a practice of sharing a common parcel of land as grazing grounds for cattle. Herdsmen would bring their cattle to the common grass fields. The tragedy is that the benefits of bringing an additional animal belong solely to its herdsman, but the costs of overgrazing are shared by all.
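The incentive problem above can be sketched as a toy payoff model. The benefit and cost figures here are invented purely for illustration:

```python
# Toy payoff model of the "Tragedy of the Commons" described above.
# Assumption (mine, for illustration): each animal yields a fixed private
# benefit to its herdsman, while the grazing cost it imposes is split
# evenly among all N herdsmen sharing the field.

def marginal_payoff(private_benefit, shared_cost, num_herdsmen):
    """Net gain to one herdsman from adding one more animal."""
    return private_benefit - shared_cost / num_herdsmen

# With 10 herdsmen, even when the collective cost (0.9) nearly equals the
# private benefit (1.0), the individual still nets a positive payoff,
# so every herdsman keeps adding animals and the commons is overgrazed.
gain = marginal_payoff(private_benefit=1.0, shared_cost=0.9, num_herdsmen=10)
print(gain)  # 0.91 -> positive, so adding cattle is individually rational
```

The point of the sketch: as long as the private benefit exceeds the individual's *share* of the collective cost, overuse is the rational choice for each participant.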

With the Internet we have a similar conflict. As with the grazing grounds, the conflict results from ill-defined property rights and insufficient regulation — self-imposed or external.

The ownership issues related to the Internet are complex. The Internet Transit Map provides a logical overview of the Internet; the connections marked Cloud Access (4) and LAN Switching (7) are the areas of this conflict. The logical structure of the conflicting area is shown in the Internet Commons Architecture. The conflict arises from the multiplicity of ownership and the lack of commonly accepted sustainable practices.

Unlike the medieval grasslands, different parts of the Internet Commons are owned by different parties. The Internet Commons Architecture is one simplified logical representation of the connections in a shared data center.

This is how the ownership in a Commons Data Center may be distributed. The Data Center (1) building and land is owned by an internet landlord. The high speed communication lines (2) and the Transmission Switch (3) are owned by Internet Service Providers, who provide connectivity for that facility. The ownership of the Cabinets (5) belongs to different Data Center Operators. Within the Cabinets (5), there are Servers (8), LAN/SAN Switches (7), and Distribution Switches (6). In addition, there is cabling connecting these communication systems and servers. The cabinets and the systems within a cabinet may be owned by the same company, or the space within a cabinet may be leased out to several companies, who in turn own the systems within it.
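The ownership distribution described above can be captured as a simple lookup table. The owner categories follow the text; the data structure itself is just one illustrative way to organize it:

```python
# Illustrative mapping of the numbered Commons Data Center components to
# the owner categories described in the text. Component numbers follow
# the Internet Commons Architecture diagram.
ownership = {
    1: ("Data Center building and land", "internet landlord"),
    2: ("High speed communication lines", "Internet Service Provider"),
    3: ("Transmission Switch", "Internet Service Provider"),
    5: ("Cabinets", "Data Center Operators"),
    6: ("Distribution Switches", "cabinet owner or lessee"),
    7: ("LAN/SAN Switches", "cabinet owner or lessee"),
    8: ("Servers", "cabinet owner or lessee"),
}

def owner_of(component_id):
    """Describe who owns a numbered component."""
    name, owner = ownership[component_id]
    return f"{name} ({component_id}): owned by {owner}"

print(owner_of(3))  # Transmission Switch (3): owned by Internet Service Provider
```

Laying the components out this way makes the multiplicity of ownership, which the text identifies as the root of the conflict, immediately visible.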

“Cloud Services,” or Software as a Service (SaaS) [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (10), is another innovation in the Data Center. Without owning anything in the Data Center, a SaaS customer can run software on the servers only when needed, paying only for the usage.

The companies leasing space in cabinets may use their own cables for connections, or the Data Center operator may provide some of the cables. And “Cloud companies” own nothing in the Data Center; they use the services of “Cloud Providers” and generate internet traffic.

In the case of the Netflix-Verizon conflict, the communication lines (2) and Transmission Switches (3) were installed by Verizon. Netflix was using “Cloud Services” (10) provided by Amazon.

About the economics: adding servers (8), LAN Switches (7) and related cables (9) is relatively inexpensive compared to the Transmission lines (2), Transmission Switches (3), Distribution Switches (6), and associated cabling (2, 4).

As time went on, Netflix’s usage of “cloud services” increased, even though it owned none of the systems in the Data Center. The resulting increase in internet traffic made it necessary to upgrade the Data Center (1) infrastructure, including the Transmission lines (2), Transmission Switch (3), Distribution Switch (6), and associated cabling (4). The dispute is over who should pay for the upgrade.
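The dynamic described above, traffic growth eventually forcing an infrastructure upgrade, can be sketched with a toy compounding model. The traffic and capacity figures are invented, not Netflix's or Verizon's actual numbers:

```python
# Toy model: given a starting traffic level and a monthly growth rate,
# estimate how many months until traffic exceeds the installed
# transmission capacity, at which point the Transmission lines (2),
# switches (3, 6) and cabling (4) must be upgraded.

def months_until_upgrade(traffic_gbps, capacity_gbps, monthly_growth):
    months = 0
    while traffic_gbps <= capacity_gbps:
        traffic_gbps *= 1 + monthly_growth   # compound growth each month
        months += 1
    return months

# E.g. 40 Gbps of streaming traffic growing 10% per month against a
# 100 Gbps transmission plant:
print(months_until_upgrade(40, 100, 0.10))  # 10
```

Even modest monthly growth compounds quickly, which is why the question of who funds the recurring upgrades is contentious.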

The Netflix-Verizon dispute illustrates the need for better clarity on ownership rights, responsibilities and usage rights at various transit points on the Internet — since competing commercial interests are involved.

If you have topics for discussion and/or have questions, please include them in your comments below, or send them directly.

Posted in Communication industry, Telecom industry

Internet Fast and Slow Lanes

One area of confusion in the current internet debate is the “fast lane” and “slow lane” controversy. The FCC Chairman Tom Wheeler said, “I will not allow some companies to force Internet users into a slow lane so that others with special privileges can have superior service.” A closer examination of the way the Internet is constructed reveals that this is not a real issue; the real issues are entirely different.

There are two kinds of internet connections:

      1. retail connection (subscriber access), and
      2. wholesale connection (among carriers and content providers).

The retail connections are marked (2) and (6) in the Internet Transit Map. The wholesale connections are marked (1), (4) and (5).

Using a transportation analogy, the wholesale connections are freeways, and the retail connections are on/off-ramps. The agreements Netflix has with Comcast, Verizon and others are (as far as I can tell) for wholesale connections.

How the retail connections operate is what matters for most regular users of the Internet, except those operating remote servers in co-location centers. The retail connection is regular Internet access, or broadband access.

Common technologies used for internet access are xDSL [2 (pdf), 3], cable [2 (pdf), 3 (pdf)], fiber (pdf) [2, 3, 4, 5, 6, 7, 8, 9 (pdf)], WiFi [2, 3, 4], and 3G/4G/LTE [2]. These connections let a single subscriber connect one or a few computers. Wholesale connections, on the other hand, handle very large numbers of connections, thousands or even millions.

Now for the economics of retail and wholesale connections. The cost of a retail connection is incurred on a per-subscriber basis, while the cost of a wholesale connection is distributed over all its potential users. So wholesale connections are very cost effective, while retail connections are very cost sensitive.
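A quick arithmetic sketch makes the asymmetry concrete. The dollar figures below are invented solely to illustrate the amortization effect:

```python
# Sketch of the retail vs. wholesale cost asymmetry described above.
# Cost figures are invented for illustration only.

def cost_per_user(link_cost, users):
    """Cost of a connection amortized over the users sharing it."""
    return link_cost / users

retail = cost_per_user(link_cost=500, users=1)                 # one subscriber line
wholesale = cost_per_user(link_cost=2_000_000, users=100_000)  # shared backbone link

print(retail, wholesale)  # 500.0 20.0
```

A wholesale link can cost thousands of times more than a subscriber line yet still be far cheaper per user, which is exactly why retail connections are the cost-sensitive part of the network.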

Also, the downstream (towards the subscriber) and upstream (towards the service provider) speeds differ for most of these technologies.

Now to performance, which is what the “fast lane” and “slow lane” controversy is about. Depending on the technology, the performance (peak speed, sustained speed, average speed) varies widely. For xDSL the peak speed ranges from 144 kbps to 52 Mbps and more downstream, depending on distance, and from 144 kbps to 6 Mbps upstream. For cable the speed limit is 30 Mbps, but most providers offer 1 Mbps to 6 Mbps downstream and 128 kbps to 768 kbps upstream. Unlike other access technologies, cable is a shared medium: the available bandwidth on a cable connection is shared by all the subscribers on the shared path. So the actual speed can be much lower if your neighbors share the same cable and are using it at the same time.
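The shared-medium effect above is easy to quantify. An even split among active subscribers is assumed here for simplicity; real cable (DOCSIS) scheduling is more sophisticated:

```python
# Sketch of why cable speeds drop under contention: the medium's
# capacity is shared by all active subscribers on the same path.
# An even split is assumed; actual scheduling is more complex.

def effective_speed_mbps(shared_capacity_mbps, active_subscribers):
    return shared_capacity_mbps / max(active_subscribers, 1)

# 30 Mbps of shared cable capacity:
print(effective_speed_mbps(30, 1))   # 30.0 -> alone on the segment
print(effective_speed_mbps(30, 10))  # 3.0  -> ten neighbors active at once
```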

Peak speed for fiber differs by service provider. Peak speed for Google Fiber is 1 Gbps upstream and downstream. Verizon FiOS offers speeds up to 500 Mbps (downstream) and 100 Mbps (upstream). AT&T U-Verse offers speeds up to 300 Mbps.

The peak speed is what is normally advertised for the subscriber connection, but the actual speed depends on many factors. For example, if you are watching a video clip, there is a constant downstream flow of data (about 8 Mbps for MPEG2 [2]) after you have selected the address (URL) of the video. But if you are using online chat, there are gaps between interactions, and the amount of data transferred is small, a few bps.
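The 8 Mbps MPEG2 figure above translates into a substantial data volume over time, a simple back-of-envelope calculation:

```python
# Back-of-envelope arithmetic for the MPEG2 example: a constant 8 Mbps
# downstream works out to roughly 3.6 GB per hour of video.

def gigabytes_per_hour(stream_mbps):
    bits_per_hour = stream_mbps * 1_000_000 * 3600   # Mbps -> bits/hour
    return bits_per_hour / 8 / 1_000_000_000         # bits -> bytes -> GB

print(gigabytes_per_hour(8))  # 3.6
```

Compare that to chat traffic of a few bps, and the enormous spread in per-application load on the access network is clear.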

The peak speed is the maximum capacity of the connection, determined by the access technology. The average (and sustained) speed, however, depends on the destination system from which the data is requested and on the delay in (how busy are) all the intermediate “Core Routing” and “Edge Routing” nodes. The “Edge Router” serving each subscriber is usually the bottleneck, since many subscribers are connected to it and it can become overloaded.

Optimum performance for subscriber devices requires efficient performance by the “Edge Router” connecting them, since it has to manage the traffic from all the subscribers attached to it. Thus the “Edge Router” can become a performance bottleneck, decreasing the data speed (increasing the delay) experienced by subscribers. So rather than focusing on the “fast lane” and “slow lane”, what is relevant is the performance and capacity of the “Edge Routers.”
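Why a busy Edge Router dominates the subscriber's experience can be illustrated with the textbook M/M/1 queueing approximation, in which delay blows up as utilization approaches 100%. This is a standard model, not a claim about any specific router product:

```python
# M/M/1 approximation of router delay: mean time in system is
# 1 / (service_rate - arrival_rate), which explodes as load nears capacity.

def mm1_delay_ms(service_rate_pkts_per_ms, arrival_rate_pkts_per_ms):
    rho = arrival_rate_pkts_per_ms / service_rate_pkts_per_ms  # utilization
    if rho >= 1:
        return float("inf")  # overload: the queue grows without bound
    return 1.0 / (service_rate_pkts_per_ms - arrival_rate_pkts_per_ms)

print(mm1_delay_ms(100, 50))  # 0.02 ms at 50% load
print(mm1_delay_ms(100, 99))  # 1.0 ms at 99% load, 50x worse
```

The nonlinearity is the key point: a router running near capacity degrades every subscriber behind it, which is why Edge Router capacity, not "lanes", determines what users actually experience.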

Service providers have a built-in incentive not to upgrade “Edge Routers” for optimum performance, since doing so increases the overall network load. “Edge Routers” of some network providers regularly underperform.

One past FCC decision makes matters worse: the FCC has ruled that service providers may throttle user data for traffic management purposes. This is a mistake. Instead, the FCC should develop performance measures for “Edge Routers,” since these provide a simplified way to assure subscriber service (speed) levels.

The issue of “bandwidth abuse” (heavy bandwidth users) [2, 3, 4] is a separate issue and needs to be handled separately.

If you have topics for discussion and/or have questions, please include them in your comments below, or send them directly.

Posted in Communication industry, Net neutrality, Telecom industry

An Internet Transit Map

“In the world of tech policy there are few issues more conflict-laden and wrapped up in misunderstandings than net neutrality,” says Doug Brake in The Hill. There is no shortage of internet maps [2, 3]. But what is missing is a “transit map” for the Internet — a user’s guide to the Internet.

A transit map, of the kind commonly used for subways, is a simplified logical diagram that makes a transit network easy to use. To make matters worse, as the future universal medium for human communication and interaction, the Internet naturally raises issues of technology, jurisprudence, economics, commerce, finance, consumer protection, market monopoly, government oversight, politics, and economic development — to name a few. So it is easy to add issues to the discussion that are not relevant or material and create confusion.

Unlike many media-anointed experts, I have spent years designing and developing network systems and applications, and have invented and patented technologies for improving networks. To help clarify the issues, I created an Internet Transit Map (below). The Internet Transit Map is a simplified logical diagram (“reference model”) of the Internet, intended to provide clarity for discussions about regulating the Internet.

The Internet Transit Map (ITM) shows the top level logical systems and critical interconnections. There are two primary types of routers:

        1. Core routers, and
        2. Edge routers.

Different manufacturers offer different products that differ in functionality, performance and capacity.

Details about products Cisco offers are available here [2, 3].

Details about Juniper products are available here [2, 3].

Details about products from Alcatel-Lucent are available here [2, 3].

Details about solutions from Ericsson are available here [2, 3].

The nodes marked “Core Routing” and “Edge Routing” in the Internet Transit Map may represent a single product configuration, or an entire network. For example, one “Core Routing” node could represent the full “Core Routing” in the Internet backbone provided by Sprint (data), or by Deutsche Telekom (IP Transit), or by CenturyLink. Other maps of physical networks are available here [2, 3, 4, 5].

In addition, there are at least seven types of interconnections (interfaces) — numbered 1 through 7 — that are critical for the proper functioning of the Internet. These interconnections are made up of different types of hardware products and the software stacks that operate over them. Compatibility and interoperability of hardware and software at these interconnections are essential for the Internet to function properly.

The technologies, systems and protocols used in each of these critical components of the Internet are so different that any generalized discussion of “the Internet” to ensure its “openness” is meaningless. Issues need to be identified, discussed and resolved with respect to each of the interconnections (1-7) in the Internet Transit Map.

So far I have identified the following areas of confusion:

      • Internet origins
      • Internet “Fast lane” and “Slow lane”
      • Myth of unregulated Internet
      • Freedom for innovation

I plan to discuss them in the coming days and weeks. If you have topics for discussion and/or have questions, please include them in your comments below, or send them directly.

Posted in Communication industry, Net neutrality

Recommendations to the FCC for the path forward

The FCC Chairman, Tom Wheeler, wrote accurately in his blog, “the idea of net neutrality (or the Open Internet) has been discussed for a decade with no lasting results.” The stalemate is the result of an attempt to solve technology problems using legal and political methods.

Designing network systems involves making tradeoffs for efficient resource allocation, functionality and deadlock prevention. This inherently involves treating different sets of bits differently. Hence trying to mandate “principles of equality” in network design is a meaningless exercise, as has been demonstrated.

The legal root of the current conundrum is the FCC’s past mistake of classifying the Internet as an “information service,” exempt from FCC regulations. Having exempted the Internet from regulation, the FCC was logically rebuffed by the appeals court when it tried to regulate the Internet through fuzzy linguistics.

The intrinsic dilemma with Internet access is that the cost per connection is not constant, but varies with technology and with the “deployment distance” of each termination. The result is higher costs in less populated areas (disregarding the affordability factor). Hence, some form of network deployment subsidy is necessary in the current market configuration.
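The deployment-distance economics can be sketched numerically. The cost figures below are invented for illustration; only the structure of the calculation, fixed plus per-kilometer plant cost divided over subscribers, reflects the argument:

```python
# Sketch of "deployment distance" economics: cost per connection rises
# in sparse areas because more plant is laid per subscriber served.
# All dollar figures are invented for illustration.

def cost_per_subscriber(fixed_cost, cost_per_km, route_km, subscribers):
    return (fixed_cost + cost_per_km * route_km) / subscribers

urban = cost_per_subscriber(fixed_cost=100_000, cost_per_km=20_000,
                            route_km=10, subscribers=5000)
rural = cost_per_subscriber(fixed_cost=100_000, cost_per_km=20_000,
                            route_km=100, subscribers=200)

print(urban, rural)  # 60.0 10500.0
```

A two-orders-of-magnitude gap between dense and sparse deployments is the structural reason the text gives for some form of deployment subsidy.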

Recommendations

Therefore, the long term solution is to declare that Internet access will be regulated to conform to “common carrier principles” that have evolved over the centuries — starting with ferry operators — taking into account the unique attributes of the Internet. In exchange for accepting network deployment subsidies, network providers must comply with the common carrier principles for Internet access, to be formulated.

Obviously, this requires action by the Congress, which is messy, complicated and long drawn-out. But a declaration to that effect will provide clarity in the marketplace, and may even speed up the Congressional process.

The rise of the Internet, the divestiture of AT&T [2, 3, 4, 5, 6], and the downsizing of Bell Labs [2, 3, 4, 5, 6, 7, 8] created a market vacuum. Bell Labs used to be the final technology authority regarding networks. Parts of Bell Labs were absorbed into different entities, dispersing the knowledge and expertise accumulated over a century. The current FCC regulatory framework presupposes the existence of an external technology authority.

The current network market structure is vastly different from the monopoly market of the FCC’s formation. As history has shown, the FCC cannot fulfill its mandate without independent technology expertise. The current legal-centric FCC processes may have been adequate when quasi-independent technology expertise was available externally, but changes are necessary to effectively manage the changed market structure. The simplest solution would be to develop that expertise internally, within the FCC, and enhance the current legal-centric processes to factor in technology-driven constraints, limitations and possibilities.

The legal-centric FCC processes also have a secondary deleterious effect. Issues related to networks belong primarily in the technology domain, and superimposing a legal framework on technology evolution can create unhelpful distortions. Technology issues and problems need to be addressed as such. The “net neutrality” discussion is an illustrative example of how not to do technology policy.

In addition to the complications created by the rapid development of new technologies and wholesale changes in market structure, the idiosyncratic behaviors of financial markets were also in play — as in the dot-com bubble. The result is widespread misconceptions about the Internet. One critical issue is the misidentification of the success factors of the Internet.

The “magic of the Internet” is the universal adoption of public common standards and practices, which produced the astonishing benefits generated. The key to continuing that magic is making sure the common standards and practices are followed, so that unified network and application interoperability is maintained, helping the continued development of a vibrant marketplace.

The focus of regulatory oversight needs to shift from the current “Internet focus” to “open standards, interfaces and practices.” A critical precursor to that step is for the FCC to become strictly “technology neutral,” allowing the free market to create the best possible network capabilities on top of public common standards, interfaces and practices.

Posted in Communication industry, Net neutrality

How to learn effectively using the Internet

It is common knowledge that the Internet is a treasure trove of information. And one of the often repeated applications of the Internet and broadband is education. But using the Internet as a learning tool is easier said than done.

The reasons for this challenge are many. Foremost is that learning is hard work, and the Internet is full of distractions that divert attention easily. The heavy use of the Internet for commerce, advertising and marketing does not help either.

To get started, the essential skill is self-discipline [2, 3], since this is self-directed learning. Curiosity and motivation to learn are also essential.

The system described here, Net Learning Cluster™, was developed by the author out of a practical need (more details below).

Net Learning Cluster (NLC) is a structured use of easily available tools on the Internet. It helps with learning subject matter, with concentration, and with language skills: reading, editing, authoring, messaging and more.

Net Learning Cluster turns Internet browsing into a goal-directed activity of creating (bookmarking) blog posts. Social bookmarking [2, 3] is one of the most popular social media applications on the Internet. WordPress makes this process as easy as 1, 2, 3, 4.

This is how it works.

Allocate dedicated time for this learning effort. Your learning goal needs to be sufficiently broad to be effective, and it must be pursued regularly.

For each Net Learning Cluster session, have a large number of articles and web pages ready for review, so that you don’t spend session time searching for material. With each web page or article, ask yourself: Am I interested in reading this again? Next week? Next month? Next year? Learn to arrive at this decision rapidly. If the answer is ‘No’, proceed to the next one. This step will improve your reading, comprehension, and decision-making skills.
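The triage step above can be sketched as a small decision helper. The questions mirror the text; the function interface is my own invention:

```python
# Sketch of the NLC triage decision: bookmark an item only if you would
# want to read it again at some future horizon.

def keep_for_later(interesting_next_week, interesting_next_month,
                   interesting_next_year):
    """Return True if the item deserves a bookmark blog post."""
    return any([interesting_next_week, interesting_next_month,
                interesting_next_year])

print(keep_for_later(False, False, True))   # True  -> create a blog post
print(keep_for_later(False, False, False))  # False -> move to the next item
```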

If the answer is yes, identify the main idea or key points and create a blog post with a link to the web page. Business Exchange used this idea. For examples, please review the posts in the News links blog.

If the information is something you feel strongly that others need to read as well, then create a synopsis. The aim of the synopsis is to motivate the reader to visit the original site. Then use the power of the blogging tools, and of the Internet as a rich medium, to make the synopsis as compelling as possible. However, be mindful of copyright, since the original content belongs to someone else.

To learn how to create compelling synopses from articles while respecting copyright, study how blog posts are constructed in the Net economy.

Once you have practiced these steps, or if you already have ideas of your own that you want to write about, Net discussions is the place for original articles.

To use this methodology for self-directed learning, it is not necessary to use the blogs cited as examples; you may create your own. But the collaboration available in the cited example blogs will be missing. There is at least one research report using this methodology.

This methodology, Net Learning Cluster, was developed by the author with a learning objective: How does the United States government work? More specifically, how does the Federal Communications Commission (FCC) operate? The article, Net neutrality: issues and solution, is a result of this exercise.

If you have reached this far, take the next step: request an invitation to become a contributor.

Posted in Communication industry

Maybe AT&T should fully divest from the communication industry

AT&T, vigorously pursuing innovation, has devised an invention to help manage people who are not using networks properly. The invention is currently a patent application for better managing “bandwidth abuse” [2, 3].

This is interesting on many levels. First, you would expect inventions to improve the use of networks, not to limit it. Or maybe AT&T has very good ideas about what the “good uses” and “bad uses” of networks are, and its self-appointed role is to ensure everyone’s “good behavior” on the networks.

However, with the Internet emerging as the medium for communications, the real controlling issue is First Amendment rights [2, 3, 4, 5]. If the United States is to adhere to its founding principles, this is clearly outside the purview of AT&T’s commercial interests.

This also creates an interesting dilemma. For example, if Verizon or Comcast were also interested in correcting their misbehaving users, would they be infringing AT&T’s patent (assuming it gets granted)? And would they be required to pay royalties to AT&T?

AT&T in its current incarnation is clearly motivated by financial profits, as explained by its CEO, Randall Stephenson. It is worth pointing out that AT&T became a legend, and the most admired company of its glory days, not by pursuing profits but by pursuing high ideals. Theodore Vail, who architected its growth, developed a “strategy to achieve a single communication system offering the best possible service,” subordinating the maximization of profit. And there was a vigorous campaign around “One policy, one system, and universal service” to help implement a unified, coherent national network policy. The result: at its peak, AT&T employed more than 1 million people, was admired by all, and was affectionately called Ma Bell.

Now, AT&T’s network assets are not helpful for maximizing financial profits to match the “financial games” of the innovators in the financial industry — for example, creating and dominating a whole new market like the CDS (credit-default swap), or the clever trades by Blackstone.

There is a clear solution to the conundrum AT&T finds itself in: fully divest from the communication industry and concentrate wholeheartedly on financial operations. Then it could beat JP Morgan, Goldman Sachs et al. at their own game.

Posted in Telecom industry

What is strange about the telecom industry?

The FCC has unanimously voted to allow AT&T to conduct trials of turning off the phone network. This must be puzzling, at least to those unfamiliar with the telecom industry’s inner workings. In normal markets you hardly ever see such debates. For instance, there is no public debate about how jet engines or future light bulbs should be designed, even though everyone uses them. GE and Rolls-Royce introduced major enhancements without most people even noticing. New light bulbs pack major innovations and plug into the old sockets, without any accompanying brouhaha. What is different about telecom?

The short answer is the Internet. Because the Internet has become everyone’s everyday tool, the dysfunctional behaviors in the telecom industry have become a matter of public debate. Anyone who uses Facebook or Twitter seems to believe that they have a say in the way the networks are to be designed.

For the long answer, you have to go back into history. Years of litigation resulted in the divestiture (break-up) of AT&T in 1984. Bell Labs [2, 3] was the final authority on networks at that time, when the Internet was in its infancy and the predominant network was the telecom (phone) network. One side effect of the divestiture was the dissipation of the technology know-how and knowledge accumulated over a century at Bell Labs. The reconstructed Bell Labs is regaining its moorings only now. While Bell Labs was in decline, the Internet and its protagonists were rising in prominence, culminating in the dot-com bubble.

This is critical because the technologies, vocabulary, and principles underlying the telecom networks and the Internet are as different as German and French, even though the two are an inseparable mesh.

The concurrent decline of the telecom stakeholders and ascent of the Internet stakeholders resulted in the neglect of telecom infrastructure and the underlying products. In fact, the primary suppliers (Nortel and Lucent) for the North American telecom networks are no longer viable business entities — Nortel is no more, and Lucent merged with Alcatel. The net result is that the cost of operating the telecom networks has been increasing, while the number of customers using the voice components of those networks has been declining. AT&T therefore wants to replace the voice telecom network with the Internet.

But the problem is that the Internet is not an exact replacement for the telecom networks for voice services. The best example is the 911 emergency service. The result is the convoluted public debate we are witnessing.

Texting has been suggested as a replacement for the 911 service. However, it is not a good substitute. Dialing three digits — 9, 1, 1 — and speaking on the phone are simple to do. Texting is not that simple: among the reasons are the lack of uniform operation across devices and the need to enter alphanumeric text.

A better solution is a mandatory “Alert Button” on all messaging devices, including phones, smartphones, tablets, dashboards, remote controllers, radio/TV receivers, wearables, keyboards, game consoles, door openers, and connected devices. The Alert Button could internally send text messages. In addition, implementing different types of Alerts in conjunction with GPS would be trivial: medical, fire, disaster, rescue, roadside assistance, burglary/intruder, equipment failure, threshold alarms, diagnostics, operating parameters, and more. This has the potential to create a market for a whole new class of Alert devices and services, similar to the pagers of the past.
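To make the Alert Button idea concrete, here is a minimal sketch of what such a device's text payload might contain, an alert type plus GPS coordinates. All field names and the type list are hypothetical; no such standard exists:

```python
# Hypothetical "Alert Button" payload: alert type + GPS fix, serialized
# as JSON for transmission over an ordinary text/data channel.
import json

ALERT_TYPES = {"medical", "fire", "disaster", "rescue", "roadside",
               "intruder", "equipment_failure"}

def build_alert(alert_type, lat, lon, device_id):
    if alert_type not in ALERT_TYPES:
        raise ValueError(f"unknown alert type: {alert_type}")
    return json.dumps({"type": alert_type, "lat": lat, "lon": lon,
                       "device": device_id})

print(build_alert("medical", 38.89, -77.03, "tablet-042"))
```

The point of the sketch: once the button press carries a structured type and a location, dispatching different Alert categories becomes a routing decision rather than a new device per service.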

Another issue missing from the debate is transparency. What are the goals of the trials? What are the objectives? How will the results of the trials be evaluated? And how will the decisions be made?

Posted in Communication industry

Net neutrality: issues and solution

As expected, the FCC’s Net Neutrality rules have been struck down by the court. The reason is that the FCC’s position is self-contradictory.

A 2002 FCC rule classified broadband/Internet as an information service, outside the purview of FCC regulations. But the 2010 FCC Net Neutrality ruling placed conditions on Internet service providers. Verizon’s challenge brought out this contradiction, and the court invalidated the FCC’s 2010 Net Neutrality rules.

What is the way forward? The solution is not restoring Net Neutrality, as some have suggested.

Net Neutrality is a legal construct imposed on how networks are to be designed. Legal principles have no place in the design of networks, or for that matter, in the design of any technology products. But legal rules may be applied in the manner in which the technology systems are operated, or used.

Designing network systems involves making tradeoffs for efficient resource allocation, functionality and deadlock prevention. This inherently involves treating different sets of bits differently. Hence, prudent engineering practices cannot be reconciled with the legal principles embodied in Net Neutrality.

However, imposing telephone-centric common carrier regulations is also not appropriate for future networks, if they are to evolve to their full potential. The reason is that the phone-centric regulations were a compromise: in exchange for a government-sanctioned monopoly [2, 3], AT&T agreed to be regulated as a utility. The network industry is no longer a monopoly but has many entrenched competitors, and one of the critical issues is limiting abuses of market power.

One issue, probably the most important, absent from the extensive discussions is the technology constraints involved. One critical factor behind the current impasse is the lingering aftereffect of the dot-com bubble, which created and reinforced the idea that all future networks are to be IP (Internet Protocol) based — “All-IP.” However, there is no technology rationale for this conclusion.

There are four primary data types: voice, data, video, and connected devices. Each has its own unique characteristics and requirements. Developments in digital technology enable transmission of all these data types over packets (the Internet). That does not mean that the Internet is the optimum network for all these needs.
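The differing requirements of the four data types can be tabulated. The figures below are ballpark engineering rules of thumb of my own choosing, for illustration, not values from any standard:

```python
# Illustrative (not authoritative) service requirements for the four
# primary data types named above.
requirements = {
    "voice":   {"latency_ms": 150,  "tolerates_loss": True,  "bandwidth": "low, constant"},
    "data":    {"latency_ms": 1000, "tolerates_loss": False, "bandwidth": "bursty"},
    "video":   {"latency_ms": 300,  "tolerates_loss": True,  "bandwidth": "high, sustained"},
    "devices": {"latency_ms": 50,   "tolerates_loss": False, "bandwidth": "tiny, frequent"},
}

def stricter_latency(a, b):
    """Which of two data types has the tighter latency requirement?"""
    return a if requirements[a]["latency_ms"] < requirements[b]["latency_ms"] else b

print(stricter_latency("voice", "video"))  # voice
```

Even this crude table shows why a single network tuned for one traffic profile is a compromise for the others, which is the argument the transportation analogy below makes.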

The situation is similar to transportation. It is feasible to build flying cars that also operate in water, but nobody argues that flying cars should be the universal transportation solution. We have cars, buses, trains, boats, ships, planes, spacecraft, and so on, each designed for different transportation needs and requirements.

If we design networks in a similar manner — different networks optimized for different services targeting different data types — the issues surrounding Net Neutrality get simplified, allowing simpler resolution. However, such an approach would provide only limited, if any, opportunities for political drama.

Posted in Net neutrality