The terrible world of the Deep Web, where contract killers and drug dealers ply their trade on the internet.
Most people use the internet daily, yet most of us only know a fraction of it. To put that fraction in perspective, we only see the very tip of the iceberg; most of the ice is submerged, invisible except to those who know how to find it. This submerged network is known as the deep web (also called the Deepnet, Invisible Web, or Hidden Web).
We usually use the term “Surface Web” to refer to the “normal” internet: the information and pages you can easily find by searching on any search engine, such as Google or Yahoo. These search engines index sites and store the information in a database that you and I can easily query with a keyword or a phrase. They can only collect static pages (like this one), not dynamic pages, and this indexed surface is estimated to hold only 0.03% of the information on the World Wide Web.
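The indexing step described above can be sketched in a few lines: crawl static pages, then build a keyword-to-URL lookup. This is a minimal illustrative sketch with hypothetical sample pages; real search engines use far more sophisticated crawlers, ranking, and storage.

```python
# Minimal sketch of how a search engine indexes static pages and
# answers a keyword query (illustrative only; the pages are made up).
from collections import defaultdict

# Hypothetical already-crawled static pages: URL -> page text.
pages = {
    "http://example.com/a": "deep web hidden content",
    "http://example.com/b": "surface web search engine",
}

def build_index(pages):
    """Map each word to the set of URLs whose text contains it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, keyword):
    """Return the URLs indexed under a single keyword."""
    return sorted(index.get(keyword.lower(), set()))

index = build_index(pages)
print(search(index, "web"))  # both sample pages mention "web"
```

Dynamic pages, by contrast, only exist as responses to form submissions or database queries, so a crawler following links never sees them; that is exactly the content that ends up in the deep web.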
The rest is hidden in the so-called “Deep Web”, also known as the invisible web or the deep Internet. This huge unknown space of the World Wide Web contains all the information that cannot be found with a simple Google search. The data is not necessarily hidden in any way; it is just difficult for today's traditional search engine technology to find and make sense of it.
It is unknown exactly how big the deep web is, but BrightPlanet has estimated that it could be around 500 times larger than the surface Internet. Considering that Google by itself covers around 8 billion pages, that is truly staggering.
The vast majority of invisible web pages contain valuable information. A report published earlier estimated that 54% of sites are records of valuable information or restricted documents, such as reports from NASA or NOAA. However, not everything is as harmless as it may sound. There is a dark side to the deep internet, and that side is as illegal and dangerous as it gets. This dangerous and illegal part of the web is called the dark web.
On the “Dark Web” network, users intentionally hide information. Often you can only access these sites using special browser software, which keeps both the source and the people visiting it anonymous. Once securely connected, you enter a world you never thought existed. Here you will find everything from human kidneys for sale to prostitution, weapons, and drugs. Anonymity allows the transfer, legal or illegal, of information, goods, and every type of service you can imagine, all around the world.
Automatically determining if a Web resource is a member of the surface Web or the deep Web is difficult. If a resource is indexed by a search engine, it is not necessarily a member of the surface Web, because the resource could have been found using another method (e.g., the Sitemap Protocol, mod_oai, OAIster) instead of traditional crawling. If a search engine provides a backlink for a resource, one may assume that the resource is in the surface Web. Unfortunately, search engines do not always provide all backlinks to resources. Furthermore, a resource may reside in the surface Web even though it has yet to be found by a search engine.
Most of the work of classifying search results has been in categorizing the surface Web by topic. For classification of deep Web resources, Ipeirotis et al. presented an algorithm that classifies a deep Web site into the category that generates the largest number of hits for some carefully selected, topically-focused queries. Deep Web directories under development include OAIster at the University of Michigan, Intute at the University of Manchester, Infomine at the University of California at Riverside, and DirectSearch (by Gary Price). This classification poses a challenge while searching the deep Web whereby two levels of categorization are required. The first level is to categorize sites into vertical topics (e.g., health, travel, automobiles) and sub-topics according to the nature of the content underlying their databases.
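The query-probing idea attributed to Ipeirotis et al. can be sketched very simply: send a handful of topically-focused probe queries for each candidate category to a deep Web site's search form, and assign the category whose probes return the most hits. The hit counts below are hypothetical stand-ins for real query results.

```python
# Sketch of classification by query probing: the site is assigned to
# the category whose topically-focused probe queries generate the
# largest total number of hits. All numbers here are hypothetical.

probe_hits = {
    # category -> hits returned by that category's probe queries
    "health":      {"cancer": 120, "vaccine": 80},
    "travel":      {"flight": 5, "hotel": 2},
    "automobiles": {"engine": 10, "sedan": 1},
}

def classify(probe_hits):
    """Pick the category with the largest total hit count."""
    return max(probe_hits, key=lambda cat: sum(probe_hits[cat].values()))

print(classify(probe_hits))  # "health" dominates the hit counts
```

A real implementation would have to issue the probes through each site's own query interface and handle sites that match several categories, but the core decision rule is just this argmax over hit counts.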
The more difficult challenge is to categorize and map the information extracted from multiple deep Web sources according to end-user needs. Deep Web search reports cannot display URLs like traditional search reports. End users expect their search tools not only to find what they are looking for, but also to be intuitive and user-friendly. To be meaningful, search reports have to convey something about the nature of the content underlying the sources, or else the end user will be lost in a sea of URLs that give no indication of what content lies beneath them. The format in which search results should be presented varies widely with the particular topic of the search and the type of content being exposed. The challenge is to find and map similar data elements from multiple disparate sources so that search results can be presented in a unified format on the search report, irrespective of their source.
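The "map similar data elements" step can be pictured as simple schema mapping: each source labels the same underlying fields differently, and a per-source field map renames them into one unified record format. The sources, field names, and records below are entirely hypothetical.

```python
# Hedged sketch of unifying records from disparate deep-Web sources.
# Each source uses its own field names for the same underlying data.

source_a = {"title": "Used sedan", "cost": "5000", "loc": "Oslo"}
source_b = {"name": "Family car", "price": "7000", "city": "Bergen"}

# Per-source mapping from local field names to the unified schema.
field_maps = {
    "a": {"title": "title", "cost": "price", "loc": "location"},
    "b": {"name": "title", "price": "price", "city": "location"},
}

def unify(record, source):
    """Rename a record's fields according to its source's field map."""
    mapping = field_maps[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

results = [unify(source_a, "a"), unify(source_b, "b")]
print(results)  # both records now share the keys title/price/location
```

Real deep Web integration also has to detect which fields correspond in the first place (schema matching), which is the genuinely hard part; the renaming itself is the easy step shown here.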
Among the most famous Tor darknet content is a collection of hidden websites whose addresses end with “.onion”. Tor activity is very difficult to track because traffic is relayed through a chain of Tor-compatible nodes around the world, with a layer of encryption added or removed at each hop.
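That layered ("onion") relaying can be illustrated with a toy model: the client wraps a message in one layer per relay, and each relay peels exactly one layer before passing it on. The XOR "encryption" below is purely illustrative and is not how Tor actually encrypts traffic; real Tor circuits use proper public-key and symmetric cryptography.

```python
# Toy illustration of onion routing: each relay removes one layer.
# XOR with a per-relay key stands in for real cryptography here.

def xor(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR; applying it twice with the same key undoes it."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relay_keys = [b"k1", b"k2", b"k3"]  # one key per relay in the circuit

def wrap(message: bytes, keys):
    """Client side: add one layer per relay, innermost layer last."""
    for key in reversed(keys):
        message = xor(message, key)
    return message

def route(onion: bytes, keys):
    """Each relay in turn peels off its own layer."""
    for key in keys:
        onion = xor(onion, key)
    return onion

onion = wrap(b"hello .onion", relay_keys)
print(route(onion, relay_keys))  # the original message re-emerges
```

The point of the layering is that no single relay sees both who sent the message and what it says: each one only knows its own key and the next hop.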