
Security concerns emerge with the new gTLDs

By Jonathan French

The new gTLDs (generic top-level domains) now being implemented already raise a few security concerns. One of the biggest is ‘name collision’, which occurs when the same domain name is used in two different places, such as on a private network and on the public Internet.

An example of this would be a company that uses .corp in an internal domain name. Under the new gTLD processes, the .corp gTLD could be bought by a different company for their use on the Internet. If that happens, when a user tries to go to internal locations on a company network using .corp, there is a chance that they could actually get data back from the now legitimate .corp servers on the Internet.
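The risk described above can be sketched in a few lines of code. This is a minimal, hypothetical Python check, not any official tool: it flags hostnames whose rightmost label matches an applied-for gTLD that is commonly used as an internal suffix (the suffix list here is illustrative, drawn from the examples in this article):

```python
# Hypothetical check: flag internal-style hostnames whose suffix is an
# applied-for gTLD, so queries that leak onto the public Internet could
# collide with a newly delegated TLD. The suffix list is illustrative.
AT_RISK_SUFFIXES = {"corp", "home", "mail", "site", "global"}

def collision_risk(hostname: str) -> bool:
    """Return True if the hostname's rightmost label is an at-risk suffix."""
    labels = hostname.rstrip(".").lower().split(".")
    return len(labels) > 1 and labels[-1] in AT_RISK_SUFFIXES

# Example: an internal intranet name vs. an ordinary public name
print(collision_risk("fileserver.corp"))   # prints True
print(collision_risk("example.com"))       # prints False
```

A check like this could be run against internal DNS configurations or hosts files to see whether a company is exposed to the scenario described above.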

Using an internal domain suffix like this is a very common practice among businesses, so any problems with .corp could be widespread. The owners of a newly delegated gTLD's servers could also manipulate their records to redirect wayward queries, opening the door to malware or phishing attacks on unsuspecting systems.

The new gTLDs are an unprecedented change to the domain name system and will be watched closely. As the population of TLD servers grows, a hacker has more servers to attempt to compromise. The root servers and the current TLD servers have been very secure and reliable so far, but the new gTLD registries may have flaws or poor security practices, making it easier for someone to gain access and cause problems.

This isn’t just an issue with anyone using .corp, either. It could affect anyone using internal networks. Why? An internal name you’re using now could one day be registered as a gTLD and cause name collisions for you.

Interisle Consulting Group monitored inbound traffic to the root servers over a 48-hour period. Of the traffic examined, 3 percent was for TLDs that are not yet registered but soon will be (.corp, .home, .site, .global, etc.), and a further 19 percent was for syntactically valid but unregistered TLDs that could be registered in the future.
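The kind of classification behind those figures can be illustrated with a small sketch. This hypothetical Python snippet (the TLD sets and the sample query list are invented for illustration, not Interisle's actual data or method) buckets query names by the status of their TLD:

```python
# Hypothetical sketch of classifying DNS query names seen at the root
# by the status of their TLD. The sets and sample queries are invented.
DELEGATED = {"com", "net", "org", "uk"}          # already in the root zone
APPLIED_FOR = {"corp", "home", "site", "global"} # applied-for new gTLDs

def classify(qname: str) -> str:
    """Bucket a query name by the registration status of its TLD."""
    tld = qname.rstrip(".").lower().rsplit(".", 1)[-1]
    if tld in DELEGATED:
        return "delegated"
    if tld in APPLIED_FOR:
        return "applied-for"
    return "other-unregistered"

queries = ["www.example.com", "printer.corp", "router.home", "nas.lan"]
counts = {}
for q in queries:
    counts[classify(q)] = counts.get(classify(q), 0) + 1
print(counts)  # prints {'delegated': 1, 'applied-for': 2, 'other-unregistered': 1}
```

Run over real root-server query logs, tallies like these are what produce the 3 percent and 19 percent figures quoted above.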

Many vendors ship systems configured by default to use these (currently unregistered) gTLDs, which is why so much of this traffic hits the root servers. You may never type ‘website.home’ into your browser, but some piece of software or hardware may well be trying it in the background. So any gTLD that gets registered could unknowingly cause name collisions for certain software and hardware vendors.

ICANN is working on mitigation techniques to try to avoid problems like this. Some gTLD applications, including .corp and .home, currently have a hold placed on them until further investigation can be done. There is a good chance that some of the most commonly used ones will not be allowed if they could cause real problems for the Internet.

Another concern is the root server network. There are currently 13 logical root servers in the domain name system, with 377 physical server sites using anycast at the time of writing. With the new gTLDs being implemented (up to 1,000 a year), this will slowly increase the load on the root servers.

By some projections, the increased traffic will be negligible and easy to manage. The main concern with the root servers is the provisioning involved: their current operation and maintenance is a very solid system, and changes to the root zone happen at a rate of only about one per year for each TLD.

The provisioning and modification of the new gTLDs will greatly increase the workload and maintenance of the 13 root servers. As with many things, the more something is changed, the more likely it is to break.

Fortunately the domain name system is built with redundancy. If there are any failures, these redundant systems should be able to handle it. In the case of incorrect data, though, there is the potential for large issues.

We just have to hope that, with close attention to detail, these problems will not be something to worry about.

Many oppose the new gTLD rollout, but one of the more prominent voices against it is Verisign. Verisign is widely known for its certificate services, but its core business is running the .com and .net gTLD servers (and a few others). Verisign is concerned about the name collision issues as well as implementation problems, and believes the new gTLDs may cause bigger problems than the experts think, potentially affecting many companies and individuals on the Internet. That is why it recommends more investigation and testing before the changes go live.

ICANN has set a limit of up to 1,000 new gTLDs per year and believes that rate is slow enough not to overburden anyone during the provisioning process for each new gTLD.

The first gTLDs are expected to hit the Internet around November this year as part of the phased rollout. For the most part, it will be sort of a cosmetic change in DNS and we don’t expect problems. Experience teaches, however, that technology doesn’t always conform to what we expect.

The author
Jonathan French is a security analyst at AppRiver.

• Date: 11th September 2013 • World • Type: Article • Topic: ISM
