A website is a group of webpages linked together: a site (a location) on the World Wide Web. Many websites are used for advertising, but they can serve any purpose. Each website contains a home page, which is the first document users see when they enter the site. The site might also contain additional documents and files. Each site is owned and managed by an individual, company, or organization.
A server is a computer or device on a network that manages network resources. For example, a file server is a computer and storage device dedicated to storing files; any user on the network can store files on the server. A print server is a computer that manages one or more printers, and a network server is a computer that manages network traffic. A database server is a computer system that processes database queries.
Servers are often dedicated, meaning that they perform no other tasks besides their server tasks. On multiprocessing operating systems, however, a single computer can execute several programs at once. A server in this case could refer to the program that is managing resources rather than the entire computer.
Often referred to simply as Apache, it is an open-source Web server developed by a loosely knit group of programmers. The first version of Apache, based on the NCSA httpd Web server, was developed in 1995.
Core development of the Apache Web server is performed by a group of about 20 volunteer programmers, called the Apache Group. However, because the source code is freely available, anyone can adapt the server for specific needs, and there is a large public library of Apache add-ons. In many respects, development of Apache is similar to development of the Linux operating system.
The original version of Apache was written for UNIX, but there are now versions that run under OS/2, Windows and other platforms.
The name is a tribute to the Native American Apache tribe, which was well known for its endurance and skill in warfare. A common misunderstanding is that the server was named Apache because it was developed from existing NCSA code plus various patches, making it "a patchy" server.
To understand how all of these work, you must first understand how the World Wide Web (WWW) works. The Web is a system of Internet servers that support specially formatted documents. The documents are formatted in a markup language called HTML (HyperText Markup Language), which supports links to other documents as well as graphics, audio, and video files. This means you can jump from one document to another simply by clicking on hot spots. Not all Internet servers are part of the World Wide Web.
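As an illustration, here is a minimal sketch of an HTML document containing one such clickable link; the page content and the linked URL are hypothetical examples.

```html
<!-- A minimal HTML page; the linked URL below is a hypothetical example. -->
<html>
  <head>
    <title>My First Page</title>
  </head>
  <body>
    <h1>Welcome</h1>
    <!-- The anchor (<a>) tag is the "hot spot": clicking it jumps the
         browser to another document on the Web. -->
    <p>Visit <a href="http://www.example.com/other.html">another page</a>.</p>
  </body>
</html>
```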
There are several applications called Web browsers that make it easy to access the World Wide Web; two of the most popular are Netscape Navigator and Microsoft's Internet Explorer. These are just general answers to your questions. There is a great deal of technical knowledge in each subject you asked about. I hope this will send you in the right direction.
Check out the link provided; it will get you started.
2006-06-25 03:55:01 · answer #1 · answered by goathead 2
Intro:
The Internet (also known simply as the Net) is the worldwide, publicly accessible system of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It consists of millions of smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked Web pages and other documents of the World Wide Web.
Briefly, the Internet can be understood as "a network of networks".
Contrary to some common usage, the Internet and the World Wide Web are not synonymous: the Internet is a collection of interconnected computer networks, linked by copper wires, fiber-optic cables, wireless connections etc.; the Web is a collection of interconnected documents, linked by hyperlinks and URLs, and is accessible using the Internet. The Internet also provides many other services including e-mail, file sharing and others.
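To make the distinction concrete, here is a small sketch in Python (using the standard urllib.parse module; the URL is just an illustrative example) showing how a single Web URL names both an Internet host and a document on that host:

```python
# A URL ties the Web to the Internet: urllib.parse splits a Web address
# into the pieces the underlying network needs. The URL is an example.
from urllib.parse import urlparse

parts = urlparse("http://www.example.com/docs/page.html")
print(parts.scheme)  # 'http' -> the application protocol to speak
print(parts.netloc)  # 'www.example.com' -> the Internet host to contact
print(parts.path)    # '/docs/page.html' -> the document on that host
```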
Creation of the Internet:
The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency (ARPA, later known as the Defense Advanced Research Projects Agency) in February 1958 to regain a technological lead. ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi-Automatic Ground Environment program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO and saw universal networking as a potential unifying human revolution. Licklider recruited Lawrence Roberts to head a project to implement a network, and Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the U.S. Air Force recommending packet switching (as opposed to circuit switching) to make a network highly robust and survivable. After much work, the first node went live at UCLA on October 29, 1969, on what would be called the ARPANET, the forerunner of today's Internet.
The first TCP/IP wide area network was operational by 1 January 1983, when the hosts on the ARPANET were switched over to TCP/IP. (This date is held by some to be technically the birth of the Internet.) In 1985, the United States' National Science Foundation (NSF) commissioned the construction of a university network backbone that would become the NSFNet, and the network was later opened to commercial interests. Important separate networks that offered gateways into, and later merged with, the NSFNet include Usenet, Bitnet, and various commercial and educational X.25 networks such as CompuServe and JANET. Telenet (later called Sprintnet) was a large privately funded national computer network with free dialup access in cities throughout the U.S. that had been in operation since the 1970s. This network eventually merged with the others in the 1990s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over these pre-existing communication networks allowed for great ease of growth. Use of the term Internet to describe a single global TCP/IP network originated around this time.
The network gained a public face in the 1990s. In August 1991, CERN, which straddles the border between France and Switzerland, publicized the new World Wide Web project, two years after Tim Berners-Lee had begun creating HTML, HTTP and the first few web pages at CERN.
An early popular web browser was ViolaWWW, which was based upon HyperCard. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic and technical Internet. By 1996 the word "Internet" was common public currency, but it referred almost entirely to the World Wide Web.
Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks such as FidoNet have remained separate). This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.
Internet protocols:
In this context, there are three layers of protocols:
* At the lowest level is IP, which defines the datagrams or packets that carry blocks of data from one node to another. The vast majority of today's Internet uses version four of IP (IPv4); although IPv6 is standardised, it exists only as "islands" of connectivity, and many ISPs have no IPv6 connectivity at all.
* Next come TCP and UDP, the protocols by which one host sends data to another. The former makes a virtual 'connection', which gives some level of guarantee of reliability. The latter is a best-effort, connectionless transport, in which data packets lost in transit are not re-sent.
* On top comes the application protocol. This defines the specific messages and data formats sent and understood by the applications running at each end of the communication (a short sketch of all three layers follows this list).
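As a rough sketch of how these layers stack in practice, the Python snippet below fetches a page from the illustrative host example.com: the operating system handles the IP datagrams, socket.create_connection provides the reliable TCP "virtual connection", and HTTP is the application protocol spoken across it.

```python
# Sketch of the three layers: IP (handled by the OS), TCP (the reliable
# stream opened below), and HTTP (the application protocol we speak).
import socket

# TCP layer: open a connection-oriented stream to port 80. Underneath,
# the OS wraps every chunk of data in IP datagrams.
with socket.create_connection(("example.com", 80)) as sock:
    # Application layer: an HTTP/1.1 request, sent as plain bytes.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # Read the response until the server closes the connection.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"
```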
Internet structure:
There have been many analyses of the Internet and its structure. For example, it has been determined that the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.
Just as commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as:
* GEANT
* GLORIAD
* Internet2
* JANET (the UK's Joint Academic Network aka UKERNA)
These in turn are built around relatively smaller networks.
The World Wide Web:
Through keyword-driven Internet research using search engines like Google, millions of people worldwide have easy, instant access to a vast and diverse body of online information. Compared to encyclopedias and traditional libraries, the World Wide Web has enabled a sudden and extreme decentralization of information and data.
Many individuals, and some companies and groups, have adopted the use of 'weblogs' or blogs, which are largely used as easily updated online diaries. Some commercial organizations encourage staff to fill them with advice on their areas of specialization, hoping that visitors will be impressed by the expert knowledge and free information and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work.
In everyday use the two terms are sometimes confused, so the distinction between the World Wide Web and the Internet itself is worth keeping in mind.
Server:
A Web server serves pages for viewing in a Web browser, while an application server provides methods that client applications can call. A little more precisely, you can say that:
A Web server exclusively handles HTTP requests, whereas an application server serves business logic to application programs through any number of protocols.
A Web server handles the HTTP protocol. When the Web server receives an HTTP request, it responds with an HTTP response, such as sending back an HTML page. To process a request, a Web server may respond with a static HTML page or image, send a redirect, or delegate the dynamic response generation to some other program such as CGI scripts, JSPs (JavaServer Pages), servlets, ASPs (Active Server Pages), server-side JavaScript, or some other server-side technology. Whatever their purpose, such server-side programs generate a response, most often in HTML, for viewing in a Web browser.
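For illustration, here is a minimal sketch of that request/response cycle using Python's built-in http.server module; the paths and page content are hypothetical. The handler answers one path with a static HTML page, another with a redirect, and everything else with an error:

```python
# A toy Web server: each incoming HTTP request gets an HTTP response.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            # Static HTML response.
            body = b"<html><body><h1>Hello from a Web server</h1></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        elif self.path == "/old-page":
            # Redirect response for a hypothetical moved page.
            self.send_response(301)
            self.send_header("Location", "/")
            self.end_headers()
        else:
            self.send_error(404)  # anything else: not found

if __name__ == "__main__":
    # Serve on localhost:8000 until interrupted.
    HTTPServer(("localhost", 8000), DemoHandler).serve_forever()
```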
Understand that a Web server's delegation model is fairly simple. When a request comes into the Web server, the Web server simply passes the request to the program best able to handle it. The Web server provides little beyond an environment in which the server-side program can execute and pass back the generated responses. The server-side program usually provides for itself such functions as transaction processing, database connectivity, and messaging.
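A rough sketch of that delegation model, in the style of CGI: the server itself does no application work; it just runs an external program and relays whatever the program prints as the response. The script path cgi-bin/hello.py is a hypothetical example.

```python
# CGI-style delegation: pass the request to an external program and
# return its output as the response body.
import os
import subprocess

def delegate_to_cgi(script_path: str, query_string: str) -> bytes:
    # Per CGI convention, request metadata travels in environment variables.
    env = dict(os.environ, REQUEST_METHOD="GET", QUERY_STRING=query_string)
    # The external program generates the response (usually HTML) on stdout;
    # the Web server simply relays it.
    result = subprocess.run(
        ["python3", script_path], env=env, capture_output=True, check=True
    )
    return result.stdout

# Usage: called when a request maps to a CGI script (hypothetical path).
# body = delegate_to_cgi("cgi-bin/hello.py", "name=world")
```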
While a Web server may not itself support transactions or database connection pooling, it may employ various strategies for fault tolerance and scalability, such as load balancing, caching, and clustering: features oftentimes erroneously believed to be reserved for application servers.
2006-07-07 23:37:55 · answer #5 · answered by Fusion 3