Around my neighborhood, little children are going around imagining non-existent ghosts and collecting treats. It turns out that grownups (only slightly grown up, it turns out) in Washington are creating ghosts of their own. A recent, relevant one is “spoofing” of Caller ID. Normally, the Caller ID information is determined by the interface over which the call is made. But a certain class of users can specify, per call, the exact information that will be delivered to the called user, and subsequent networks will carry and deliver it. Those networks have always had the ability to mark the authenticity of the information being delivered, though they seldom did; yet the service was marketed as if the delivered information were authentic. Naturally, some people will take advantage of this for nefarious purposes. Some policy makers want to address the matter legislatively. This note argues that that is ill advised.
Almost all service providers call out their Caller ID offering either by charging extra for it or by declaring it an enhanced service in one respect or another. If so, shouldn’t we expect them to verify the veracity of that piece of information? Or, minimally, shouldn’t they be required to declare whether the information has been verified or not? The signaling protocol already allows this indication to be carried within the network. Why can’t the service providers deliver it to the user?
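For the curious, here is roughly what such transparency could look like. SS7/ISUP already carries a screening indicator alongside the calling party number (the values below follow ITU-T Q.763); a terminating provider could simply surface it. This Python sketch is illustrative only: the function name and display format are my own invention, not anyone's actual practice.

```python
# Sketch: surfacing the ISUP screening indicator (ITU-T Q.763) to the
# called user. The annotation format is hypothetical.
SCREENING = {
    0b00: "user provided, not verified",
    0b01: "user provided, verified and passed",
    0b10: "user provided, verified and failed",
    0b11: "network provided",
}

def annotate_caller_id(number: str, screening_bits: int) -> str:
    """Return the Caller ID string a transparent network could deliver."""
    label = SCREENING.get(screening_bits, "unknown")
    return f"{number} [{label}]"

print(annotate_caller_id("+1-202-555-0100", 0b00))
# -> +1-202-555-0100 [user provided, not verified]
```

The point is that the network need not judge the call for the user; it need only pass along what it already knows.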
Instead of this simple solution, the proposal just wants to declare certain practices illegal. This is not effective: it forces us to build a legal case and prosecute offenders through lengthy legal proceedings. A couple of weeks back, Alec talked about this in his blog, quoting a proposed solution from TalkPlus CTO John Todd. In my opinion this solution is also not satisfactory.
According to the description available at the ETel site, the proposal calls for the formation of an entity that will stipulate, monitor and enforce an authentication scheme. Of course John has thought this through and has a very detailed proposal. My concern is that we may be recreating the old telco model, in the sense that this becomes a bureaucratic entity slowing new entrants. In any event, it is difficult to ensure that such an entity will be global in nature. So the only effective action we can take is to inform the user whether the information has been verified or not, and let the user decide what to do with it. After all, the intelligence has moved to the end, and delivering transparent information is sufficient. Earlier, I had suggested as much.
Only recently, both Tom Evslin and Alec Saunders wrote about the anomalies of the inter-carrier compensation (ICC) scheme in the US when they talked about FuturePhone. Readers of these pages may remember that many “virtual number” offerings are based on the ICC regimen. As you recall, when a carrier delivers a call to another, the receiving carrier gets compensated. (By the way, doesn’t this suggest that carriers should give Call Waiting away for free, thereby increasing their revenue from termination charges?) Apparently the FCC has issued for comment a reform plan called the “Missoula Plan,” which was filed by the National Association of Regulatory Utility Commissioners’
Task Force on Inter-carrier Compensation and is supported by AT&T, BellSouth and many of the carriers serving rural areas. (If you have read Tom and Alec, you will know that rural carriers benefit enormously from the existing scheme.) What is interesting is that Verizon opposes the proposed reform. There is a news report stating that the Florida Attorney General also opposes the plan, arguing that charges to the consumer will go up about $3.50 per month because the plan shifts the full burden of these charges to local customers. My translation: these charges would be collected per phone number and not based on usage. If I am correct, then companies like FuturePhone need to revisit their business plan, because driving traffic to their local network will no longer increase revenue. Also, many of the “free phone number for life” offers may disappear. Finally, I think the proposed reform comes close to the “bill and keep” scheme suggested by Tom. I hope he weighs in on this.
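To see why the business plan changes, here is a back-of-the-envelope sketch in Python. Every number below is a made-up assumption for illustration; nothing here reflects actual tariffs or any carrier's real traffic.

```python
# Illustrative contrast between usage-based termination revenue and
# flat per-line recovery. All rates and volumes are assumptions.

def per_minute_revenue(minutes_terminated: float, rate_per_minute: float) -> float:
    """Usage-based termination: driving more traffic raises revenue."""
    return minutes_terminated * rate_per_minute

def flat_revenue(lines: int, monthly_charge: float) -> float:
    """Per-line recovery: revenue is fixed regardless of traffic."""
    return lines * monthly_charge

# A rural carrier terminating 10M minutes/month at a hypothetical $0.02/min:
print(per_minute_revenue(10_000_000, 0.02))   # 200000.0
# The same carrier with 5,000 local lines at $3.50/month under the reform:
print(flat_revenue(5_000, 3.50))              # 17500.0
```

Under the first model, attracting inbound minutes is the whole game; under the second, inbound traffic is pure cost, which is exactly why a traffic-pumping business would need to be rethought.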
Generally it goes unchallenged when anything derisive is said about the PSTN, or anything glorifying IP. It is almost, nay, exactly like the electoral behavior happening in the US now. In this rant, I am taking on the big challenge of confronting myths that are well established, and people who are “rock stars” for having established them. In my opinion this all started with the “Stupid Network” paper by David Isenberg. You probably know that paper very well: it argues that the PSTN network architecture is fatally flawed. Even though it has become part of the knowledge base of every communications engineer, its major claims need to be diluted or its scope narrowed. But the misunderstanding does not stop there; subsequently, many others have invoked this paper and the “end-to-end” paper to assert certain behaviors of the PSTN without proper analysis or justification, while hiding behind these two papers' authority. The purpose of this long essay is to critically analyze these two papers and bring out certain commonly but falsely held views.
First let us look at the “end-to-end” paper. A common and widely held interpretation is that the paper advocates that most data transfer functions must be done at the end-points; at least that is how it is quoted most of the time. But a careful reading of the paper, and an analysis of contemporaneous papers, suggests a more nuanced claim. A major architectural debate at the time was where to do error correction. The ISO crowd, as evidenced in X.25, had concluded that link-by-link error correction was the preferred approach. The ARPA crowd had concluded that it had to be end-to-end, but there was disagreement about what is meant by the “end”. You see, in those days the network interface was an outboard card, and if error correction was done by that card, the data could still be lost before it was delivered to the application in the main processor. So some form of verification must be done at the “real end”. Given that it has to be done there anyway, the reasoning went, full verification may as well be done at that end. The “end-to-end” paper makes this abundantly explicit; if you need further elaboration, I recommend the RFC authored by Padlipsky. The paper extends this line of reasoning to other functions as well, but it does not suggest that all functions be done only at the end. It grants that, depending on network characteristics and application requirements, it might be optimal to do some functions at intermediate points. But, the paper points out, this should be determined by the applications, and the network architecture must allow for a dynamic decision. As such, the paper is more reasoned and less dogmatic than the claims lately made in its name. It is rather like how Darwin was invoked by “social Darwinists”.
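The core argument is easy to restate in code. A minimal Python sketch, assuming nothing about the actual protocols of the day: even if every link were perfectly reliable, the receiving application still re-verifies, because corruption can occur after the interface card hands the data off. The function names and the use of SHA-256 are mine, purely for illustration.

```python
import hashlib

def send(payload: bytes) -> tuple:
    """Sender computes an end-to-end digest before handing data down the stack."""
    return payload, hashlib.sha256(payload).hexdigest()

def deliver(payload: bytes, digest: str) -> bytes:
    """The 'real end' re-verifies, regardless of what each link did en route."""
    if hashlib.sha256(payload).hexdigest() != digest:
        raise IOError("end-to-end check failed; sender must retransmit")
    return payload

data, tag = send(b"file contents")
assert deliver(data, tag) == b"file contents"
```

Since this final check is unavoidable, per-link error correction becomes a performance optimization, not a correctness requirement, which is precisely the paper's point.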
At this point, I want to suggest to those who claim that IP is built on the ISO model that many of the early architects of IP networking would find the claim amusing, if not offensive. IP most decidedly does not conform to the ISO reference model, which Padlipsky calls ISORM (to be pronounced “eye-sore-m”). So please stop equating the IP protocol stack to the ISO stack; the IP architects may get allergic reactions.
Next let us consider some of the points made by the “Stupid Network” paper. To begin with, the implication of the title as to the nature of the PSTN is erroneous, and the associated claim that PSTN end points are not intelligent is false. The PBX is an example of an intelligent device outside the control of the carriers. If you think the PBX is an exception, consider fax machines and data modems: aren't they intelligent devices? These two examples also puncture the myth that the PSTN is designed (sometimes the claim is diluted to “optimized”) for voice. It is legitimate to debate the merits of statistical multiplexing and to compare and contrast it with time division multiplexing. But if you grant that transport costs have come down close to zero, the differences between them are not that critical.
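For concreteness, here is a toy Python comparison of the two multiplexing approaches. The 40% talk-activity figure is an assumption for illustration, not a measurement.

```python
# Toy comparison: TDM reserves a fixed slot per call whether or not
# anyone is speaking; statistical multiplexing shares capacity.
calls = 24
channel_kbps = 64      # classic 64 kbit/s PCM voice channel
activity = 0.4         # assumed average talk-spurt duty cycle (illustrative)

tdm_kbps = calls * channel_kbps                    # fixed slot per call
stat_mux_avg_kbps = calls * channel_kbps * activity  # average offered load

print(tdm_kbps)           # 1536
print(stat_mux_avg_kbps)  # 614.4
```

Statistical multiplexing wins on the average, but needs buffering and headroom for bursts; when transport is nearly free, that efficiency gap simply stops mattering, which is the point made above.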
The second point the paper makes that I take issue with is its agreement with the PSTN carriers that only the carriers are in a position to realize services on behalf of users. I wish the paper had taken issue with that claim, rather than accepting it and concluding that it is an architectural flaw of the PSTN. Even when the paper was written, the end had intelligence, and TAPI and TSAPI were developed to take advantage of it. Both were the results of efforts independent of the carriers.
The next claim the paper makes that is not totally accurate is that, unlike in IP, the end points do not have freedom in selecting the codec. STU-III is an example where the end points decide which codec to use for a given call, and the network is not consulted or even notified. Let us even take the “sand that gave rise to the pearl” of the paper: AT&T TrueVoice. Of course two compatible end-points could have realized the improved voice quality without any further assistance from the network. But that was not AT&T's objective at the time. They wanted demonstrably improved voice quality between ANY two end points, and within a short span of time. An implication of intelligence at the end is that new features can be realized only at the rate at which end devices are adopted by the user community. In short, the PSTN didn't stand in TrueVoice's way; conversely, there is no hope for an IP network to deliver on TrueVoice's objectives any faster.
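What STU-III style end-to-end selection amounts to is simple: the endpoints intersect their capability lists and the network never participates. A Python sketch, with illustrative function and codec names of my own choosing:

```python
# Sketch of end-to-end codec selection: the two endpoints agree between
# themselves; the transport network is neither consulted nor notified.
def pick_codec(offer, answer):
    """Return the first codec in the offerer's preference order that
    the answerer also supports, or None if there is no overlap."""
    for codec in offer:
        if codec in answer:
            return codec
    return None

print(pick_codec(["G.729", "G.711u"], ["G.711u", "G.729"]))  # G.729
```

The network's only job is to carry the agreed-upon bits, which is exactly the freedom the paper claims the PSTN lacks.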
In my opinion, the sad aspect of the paper is that a generation of engineers didn't pay attention to developing intelligent end points for the PSTN. Most computers as early as the early '90s were bundled with telephony applications offering an address book, call logs and voice mail. Now these features are available in the VoIP world, but they are services offered by VoIP service providers and not (for the most part) realized at the ATA. So where are the intelligent end devices? Where is the angst when one reads about services in the middle that could be realized at the end? Shouldn't we at least attempt to realize them at the end?
A couple of days back, Sightspeed announced a partnership with AMD whereby AMD will rebrand the Sightspeed client as AMD Live Communicator. This is a piece of good news for them. I used the occasion to talk to their CEO, Peter (who has started to blog), to bring myself up to date since we last spoke. They have introduced a couple of capabilities that I think are worth noting.
The first is a new feature they call Sightspeed Web, which allows a Sightspeed subscriber to have a Sightspeed session with someone who is not a subscriber. To do this, the subscriber sends a specific URL to the friend, who downloads an ActiveX client in the course of visiting that URL. The ActiveX client has almost all the features of the standalone client. This is very similar to what one could do with FWD, and it is very useful for enterprises that want to add click-to-(video)-call to their web sites.
The second capability worth noting is their claim that their NAT/firewall traversal is very effective and can handle even symmetric NATs; indeed, so effective that only a small fraction of calls require relay nodes. Since this runs counter to my understanding, I asked for further clarification. The following is the response I received from their CTO, Aron:
“I can’t elaborate much further on how it works, but we basically have the best understanding of firewalls, NAT/Firewall’s and NAT devices in the marketplace. With this knowledge we built a solution which maps the current state of your firewall (symmetric included) when you want to start a call. By knowing the current state we can then build a direct peer2peer connection between the endpoints. The types of firewall’s that require persistent relays are usually ones we have not analyzed in our labs or ones which don’t follow any known patterns. Relays are also used for misconfigured firewalls that limit where destination traffic can be sent too. We find a fair number of Enterprise firewalls incorrectly limit destination traffic by default.”

In my opinion, this is a major development that will impact others as well. It suggests that one can further classify symmetric NAT/FWs and solve the traversal problem for a subset of them.
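One plausible reading of Aron's answer, sketched in Python: some symmetric NATs allocate external ports in a predictable (here, sequential) pattern, so an endpoint that probes two servers can predict the port the NAT will assign for the actual peer. The NAT below is simulated; a real client would use STUN-style probes, and this covers only one of many possible allocation patterns. I am guessing at their technique, not describing it.

```python
# Simulated symmetric NAT with sequential port allocation, one of the
# "known patterns" that would make direct connections predictable.
class SequentialSymmetricNat:
    def __init__(self, base_port, delta=1):
        self.next_port = base_port
        self.delta = delta
        self.mappings = {}

    def map(self, dest):
        """Symmetric behavior: a fresh external port per destination."""
        if dest not in self.mappings:
            self.mappings[dest] = self.next_port
            self.next_port += self.delta
        return self.mappings[dest]

nat = SequentialSymmetricNat(base_port=40000)
p1 = nat.map(("stun1.example.com", 3478))   # probe server 1
p2 = nat.map(("stun2.example.com", 3478))   # probe server 2
delta = p2 - p1                              # observed allocation step
predicted = p2 + delta                       # guess the port for the peer
actual = nat.map(("peer.example.net", 5060))
print(predicted, actual)                     # 40002 40002
```

NATs whose allocation follows no observable pattern would defeat this, which would explain why a residual fraction of calls still needs relays.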
Three decades back, a scrappy young man used a new distribution technology to reach a nationwide audience for his video programming. He called it the Superstation. I am sure you know that I am referring to Ted Turner, who used satellite and cable technologies to broadcast the signal of a small local Atlanta TV station to multiple cities. This was revolutionary at the time. Now another scrappy man is using another revolutionary distribution technology, the Internet, to distribute video programming that can potentially reach an audience the world over. He calls it Network2.tv, and the man is none other than our own Jeff Pulver. This is not just a video hosting service; the value-added service here is the selection process. In keeping with the current trend in naming things, why shouldn’t it be called Superstation 2.0?
Copyright © 2003-2009 Moca Educational Products.