SEARCH ENGINES, computer science assignment help

User Generated

Zbr1

Computer Science

Description

I have two sample papers. I only need paraphrasing for the paper that I have. Please make sure to do your best.

Unformatted Attachment Preview

Running head: SEARCH ENGINES

With the continuous growth of the internet, the web remains a huge repository of information. The amount of information stored on the internet grows vastly day by day, which calls for systems that can manage such huge volumes of information. This includes performing information retrieval and even managing information relevancy. This information is of distinct types and needs to be classified to improve relevancy. Some information can be refined according to its popularity through human-maintained lists, while other topics can rely on automated systems, which use automated ranks to filter and search information. To facilitate all this, large-scale systems have been built to help in the searching and indexing of information.

Search engines have their own way of working. Google is one of the companies with a very powerful search engine system. A search engine usually uses a program known as a "web crawler". This kind of program has the ability to browse information and automatically store it in a central repository for archival. The web crawler visits different web pages; every time it visits a web page, it copies all the links associated with that page and adds them to its index. The crawler keeps repeating this process until it has built a huge database of web pages. Certain web sites are designed with a mechanism that keeps web crawlers away: a robots.txt file. By publishing a robots.txt file, a site can tell crawlers which pages they may not visit, and a well-behaved crawler will not access the disallowed pages or the links associated with them.

There are certain areas of concern in the design of a search engine. The quality of search results is one of the design goals for developing search engines.
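The crawl loop described above (visit a page, copy out its links, queue the unseen ones, repeat) and the robots.txt check can be sketched in a few lines of Python. This is a minimal illustration, not production crawler code: the `crawl`, `LinkExtractor`, and `allowed` names are made up for this sketch, and `fetch` is a hypothetical callable standing in for a real HTTP download so the example stays self-contained.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.robotparser import RobotFileParser

class LinkExtractor(HTMLParser):
    """Collects every href found in <a> tags -- the 'copy all the links'
    step the crawler performs on each page it visits."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

def crawl(start_url, fetch, max_pages=100):
    """Breadth-first crawl: visit a page, record its links, queue unseen
    ones, and repeat until the queue is empty or a page budget is hit.
    `fetch(url)` must return the page's HTML as a string."""
    seen, queue, index = set(), [start_url], {}
    while queue and len(index) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        parser = LinkExtractor(url)
        parser.feed(fetch(url))
        index[url] = parser.links          # archive this page's links
        queue.extend(l for l in parser.links if l not in seen)
    return index

def allowed(robots_lines, url, agent="*"):
    """robots.txt check: a well-behaved crawler skips any URL the site's
    robots.txt disallows for its user agent."""
    rp = RobotFileParser()
    rp.parse(robots_lines)
    return rp.can_fetch(agent, url)
```

For example, crawling a three-page site supplied as an in-memory dict (in place of real network fetches) builds an index covering all three pages, and `allowed` rejects URLs under a `Disallow:` path.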
This requires search engine developers to use algorithms that can sift information and surface results that are relevant and clear. One of the main challenges is that some web page owners use tricks to fool the search engine into giving them higher rankings even when the page has no relevant content. This degrades the search experience, since people will tire of a system that brings up junk results.

Page ranks are also incorporated in search engines. The search engine uses certain metrics to place pages according to their ranks. One way a search engine prioritizes web pages is through the use of certain keywords (Halasz & Halasz, 2013). Keywords are paramount in determining which page to place before another. Search engines rely on intuitive justification too: they use calculated algorithms to decide where each website should be ranked.

Search engines, especially Google, use anchor texts extensively. This is because anchor texts can provide extra information about sites that have not been indexed. Anchor text often describes a page more clearly and relevantly than the page's own text. The relevancy can be viewed from the point that a search engine can index a page it has never crawled, because the anchor texts pointing to it already supply existing and relevant information. Anchor texts also help web surfers get access to non-indexed content such as pictures and databases.

The above information shows that search engines are paramount in the world of the World Wide Web. The growth of online information pushes the need for search engines with better capabilities for searching and indexing information.

References

Halasz, J., & Halasz, J. (2013). How search engines work -- really! Search Engine Land. Retrieved 3 June 2016, from http://searchengineland.com/how-search-engines-work-really-171556

Explanation & Answer

Hi!👋 I have finished paraphrasing your work! For the most part, I simply replaced words or re-worded sentences. I did not want to add or take away too much as I don't really know much about the topic itself. Please let me know if you have any questions or concerns about any of my changes.

Running head: SEARCH ENGINES

With the constant growth of the internet, the web is becoming a huge archive of current and old information. The amount of information being stored on the internet is growing tremendously, and on a daily basis. A collection of this magnitude calls for a body of systems that can manage these huge chunks of information. This includes performing the information retrieval tasks and even supervising the information relevancy. The information is of such a variety that it needs to be classified specifically to have more re...

