The Internet's organic structure explains why it's so resistant to random failures, but researchers now say those same features make it vulnerable to cyberattacks. The findings could help security experts strengthen weak links in the Net's chain.

THE LATEST STUDY, published in Thursday's issue of the journal Nature, builds on earlier studies by the researchers, Reka Albert, Hawoong Jeong and Albert-Laszlo Barabasi.

They found that samples of the World Wide Web didn't have a random structure: Instead, the connections exhibited a hierarchy similar to that found in naturally occurring networks such as trees and living cells, with a small proportion of highly connected nodes branching off to a large number of less connected nodes. The structure was the same at different scales, meaning that the results could be extended to the Web as a whole, they said.

This "scale-free" pattern is reflected in the structure of the Internet itself, the global network of routers and lines knitting computers together, as well as in the connections between Web pages sitting on those computers.

In the new study, the trio focused on the implications for the Internet's survivability in the face of failure. Turning again to their "small-world" samples, they found that the Internet, the Web and other scale-free networks could stand up to even unrealistically high rates of random failures.

"Even when as many as 5 percent of the nodes fail, the communication network between the remaining nodes in the network is unaffected," they reported.

That's because in most cases, a random failure, say, the breakdown of an Internet data router, will affect nodes with little connectivity. The flow of Net traffic can simply take another, no less convenient route. In contrast, the performance of a randomly connected network, also known as an exponential network, degrades steadily as failures increase.
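The robustness result can be illustrated with a toy simulation. This is a sketch for illustration only, not the researchers' code: the preferential-attachment model, the graph size (2,000 nodes) and the attachment parameter are all assumptions. The idea is to grow a scale-free graph, knock out 5 percent of its nodes at random, and measure how much of the surviving network still hangs together in one piece.

```python
import random

def scale_free_graph(n, m, rng):
    """Grow a scale-free graph by preferential attachment (Barabasi-Albert
    style): each new node links to m existing nodes, chosen in proportion
    to their current degree."""
    adj = {v: set() for v in range(n)}
    endpoints = list(range(m))  # each node listed once per edge endpoint
    for new in range(m, n):
        targets = set()
        while len(targets) < m:            # degree-weighted, distinct picks
            targets.add(rng.choice(endpoints))
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
        endpoints.extend(targets)
        endpoints.extend([new] * m)
    return adj

def giant_component_fraction(adj, removed):
    """Fraction of surviving nodes that sit in the largest connected piece."""
    alive = set(adj) - removed
    if not alive:
        return 0.0
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        stack, size = [start], 0
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w in alive and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best / len(alive)

rng = random.Random(2000)
g = scale_free_graph(2000, 2, rng)
failed = set(rng.sample(sorted(g), 100))   # random failure of 5% of nodes
frac = giant_component_fraction(g, failed)
print(f"giant component after 5% random failure: {frac:.2f}")
```

Because random failures overwhelmingly land on the many low-degree nodes, the largest connected piece typically stays close to the whole surviving network, matching the article's point about scale-free tolerance.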
This explains why the Internet chugs right along even though pieces of the network frequently break down.

"The system evolved into this stage in which it's completely tolerant against errors, and it's not only because of redundancy," Barabasi said in a telephone interview from Romania, where he's on sabbatical. "It's much more than redundancy."

But the researchers say there's a flip side: Although the structure is particularly well-suited to tolerate random errors, it's also particularly vulnerable to deliberate attack. If just 1 percent of the most highly connected Internet routers or Web sites were incapacitated, the network's average performance would be cut in half, said Yuhai Tu of IBM's T.J. Watson Research Center.

"With only 4 percent of its most important nodes destroyed, the Internet loses its integrity, becoming fragmented into small disconnected domains," he wrote in a commentary published in Nature.

This vulnerability represented the "Achilles' heel" of the Internet, Tu said. An attack on the key Internet access points would be far more serious than an attack on the biggest Web sites, Barabasi said. "Then there's no Internet, there's no e-mail, there's no Web, because you can't get from A to B," he said.

Internet security experts who reviewed the research said it meshed with their own real-world experience, although they cautioned that other factors helped guard against cyberattacks.

The Internet traffic network has some intersections so key that "if you take down this limited number of points, you could take down a great deal of connectivity," said Jim Jones, director for technical operations for response services at Global Integrity Corp., a Virginia-based computer security firm.

However, Jones noted that the Notre Dame study was based on a snapshot of the network. "The Internet is not static, and the snapshot they took of it may not match the backup connections that may kick in. ... There are redundancies that wouldn't necessarily show up," he said.
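The attack scenario Tu describes can be mimicked in the same kind of toy setting. Again, this is a hypothetical sketch, not the study's Internet data: the preferential-attachment model and the 4 percent figure applied to it are assumptions. It removes the best-connected nodes first and compares the damage with an equally large random outage.

```python
import random

def scale_free_graph(n, m, rng):
    """Preferential-attachment graph: new nodes favor high-degree nodes."""
    adj = {v: set() for v in range(n)}
    endpoints = list(range(m))  # each node listed once per edge endpoint
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
        endpoints.extend(targets)
        endpoints.extend([new] * m)
    return adj

def giant_fraction(adj, removed):
    """Fraction of surviving nodes in the largest connected component."""
    alive = set(adj) - removed
    if not alive:
        return 0.0
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        stack, size = [start], 0
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w in alive and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best / len(alive)

rng = random.Random(7)
g = scale_free_graph(2000, 2, rng)
k = len(g) * 4 // 100                      # take out 4% of the nodes
hubs = sorted(g, key=lambda v: len(g[v]), reverse=True)
attack = set(hubs[:k])                     # attack: the best-connected nodes
chance = set(rng.sample(sorted(g), k))     # failure: same count, at random
print("after targeted attack:", round(giant_fraction(g, attack), 2))
print("after random failure: ", round(giant_fraction(g, chance), 2))
```

Removing the hubs strips out a disproportionate share of the edges, so the targeted run fragments the graph far more than the random one, which is the asymmetry the researchers flag.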
A potential attacker would have to have detailed knowledge of how the Net works, how to take down key Internet locations that are heavily protected precisely because they're so essential, and how to cut off the backup avenues.

Another expert took issue with the Notre Dame researchers' claim that the Internet was inherently vulnerable to attack. "Actually, these problems aren't inherent at all ... and we can eliminate them," said David A. Fisher of the CERT Coordination Center in Pittsburgh, which is among the nation's pre-eminent computer security organizations. As the Internet becomes more distributed and less hierarchical, the number of potential targets should decline, he said. "There are only a few that are noticeable, but it's not because of a weakness," he said.

Jones said the research validates what's known about Internet security a little more rigorously, "and certainly gives us some better directions" on how to address the Net's natural vulnerabilities.

"The structure of the Internet is a product more of economics than design," he noted. To cite an exaggerated example, it's cheaper to link less connected nodes to a bigger-bandwidth backbone than to install fiber-optic lines and heavy-duty routers in every household.

"If we were to design (the Internet) as an exponential network, we would have decent survivability to random error and incredible survivability to directed attack ... but it would be inconceivable to do that because of the cost," he said.

Nevertheless, making the Internet more uniformly connected might be a goal to shoot for, he said. To cite a real-world example, Jones compared the current Internet to the Napster file-swapping site, which funnels traffic through a central point, while the more dispersed Gnutella service has more of an exponential structure. "With Gnutella, I have no one point to focus on," Jones said.
Mark Rasch, a former federal prosecutor who is now Global Integrity's vice president for cyber law, said an exponential approach would also make it "much more difficult to regulate Internet content, because it grows organically."

Rasch said another way to beef up the Internet's survivability in the face of attack would be to "build off a second Internet," such as the high-bandwidth Internet2 infrastructure currently in development.

But Fisher said redundancy alone would not protect the Internet from deliberate attack.

"We're concerned not so much with failures in the traditional sense, but with intelligent adversaries who are trying to cause failures," he said.

"It turns out that redundancy doesn't help at all. What helps is redundancy coupled with diversity."

If network administrators use the same setup for all their primary and backup systems, that simply leads to a situation in which "one attack fits everybody," he said. In the long run, Fisher said, the best way to fend off mass attacks against any network is to eliminate the single points of failure, wherever they exist.

Wednesday, July 26, 2000