
Computer networking


A computer network can be as simple as two computers connected to each other.


A computer network can also consist of, and usually does consist of, more than two computers.

Characteristics of a Computer Network

The primary purpose of a computer network is to share resources:

  • You can play music from a CD in one computer while sitting at another computer
  • You may have a computer that doesn’t have a DVD or Blu-ray (BD) player. In this case, you can insert a movie disc (DVD or BD) into the computer that has the player, and then watch the movie on the computer that lacks one
  • You may have a computer with a CD/DVD/BD writer or a backup system that the other computer(s) don’t have. In this case, you can burn discs or make backups on the computer that has the writer or backup system, using data from a computer that doesn’t
  • You can connect a printer, scanner, or fax machine to one computer and let the other computers on the network print, scan, or fax through it
  • You can place a disc with pictures in one computer and let other computers access those pictures
  • You can create files and store them on one computer, then access those files from the other computer(s) connected to it
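On a small network, file sharing like this can be tried with Python’s built-in http.server module. The folder contents, file name, and addresses below are made up for illustration; on a real network, the second computer would use computer A’s LAN address instead of 127.0.0.1.

```python
import os
import tempfile
import threading
import urllib.request
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# A stand-in for a shared folder on "computer A"
share_dir = tempfile.mkdtemp()
with open(os.path.join(share_dir, "notes.txt"), "w") as f:
    f.write("hello from computer A")

# Serve the folder over HTTP; port 0 asks the OS for any free port
handler = partial(SimpleHTTPRequestHandler, directory=share_dir)
server = ThreadingHTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Computer B" fetches the file; on a real LAN it would use A's address
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/notes.txt").read()
print(data.decode())  # hello from computer A
server.shutdown()
```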

Peer-to-Peer Networking

Based on their layout (not the physical layout but the logical layout, also referred to as the topology), there are various types of networks. A network is referred to as peer-to-peer if most of its computers are similar and run workstation operating systems.

In a peer-to-peer network, each computer holds its own files and resources. Other computers can access these resources, but the computer that holds a particular resource must be turned on for the other computers to access it. For example, if a printer is connected to computer A and computer B wants to print to that printer, computer A must be turned on.

Introduction

In a network, computers and other (optional) devices are connected to share resources. When a computer or device A requests a resource from another computer or device B, A is referred to as a client. Because all or most items that are part of a network live in association or cooperation, almost any one of them can act as a client at some point. Based on this, there can be different types of clients.

A workstation is a computer on which a person performs everyday, regular assignments. A workstation is primarily a personal computer (PC), but it can also be a laptop. Almost any modern PC can be used as a workstation and participate in a network.

Before building a computer network, you should plan it. In some cases, you may want to use one or more computers you already have; in others, you can purchase new computers.

Introduction to the Computers of a Network

If you already have one or more computers that you plan to use as workstations, you can start by checking the hardware parts installed in them. As mentioned already, you can use existing computers or purchase new ones.

The computers used in a network must meet some requirements. The system requirements depend on the (type of) operating system (we will come back to operating systems in another section). For our network, we will use computers that run Microsoft Windows 7. At the time of this writing, the system requirements for Microsoft Windows 7 can be found at http://windows.microsoft.com/en-US/windows7/products/system-requirements.

Using Barebone Computers

A computer is referred to as “barebone” if it is built almost from scratch by assembling its parts. You can build your own computer or you can purchase one. Before starting, get a list of the hardware requirements the computer should have.

You can purchase or acquire a computer with all of its parts or only some of them. To get this type of computer:

  • You can purchase parts separately and assemble them
  • You can shop in a web store that sells “barebone” kits (Tiger Direct and Amazon have them)

After getting the parts, you must assemble them appropriately and make sure the computer can boot and present you with a BIOS screen. After assembling the computer, you will have to install the operating system, which you must acquire separately.

Computer Accessories and Peripherals

Keyboard and Mouse

When using a computer, there are different ways you can control it. The primary accessories used to perform routine operations are the keyboard and the mouse. If you are using an existing computer for your network and either the mouse, the keyboard, or both are missing or not functioning, you should replace the failing item(s).

If you are building your own computer or are acquiring a barebone, make sure you purchase a keyboard and a mouse for the computer.

There are also wireless keyboards and mice. If you purchase them, they come with easy-to-follow instructions for installing and configuring them. Our advice is that you should still keep a PS/2 keyboard and mouse on hand.

Monitors

A monitor is a display that a user looks at when performing daily assignments.


To use a monitor, a computer must have an appropriate port in the back. Most computers have a blue port with 15 small holes (a VGA port).


Networking services


Network service

Network services are applications hosted by servers on a computer network to provide functionality for members or users of the network, or to help the network itself operate.

The World Wide Web, e-mail, printing and network file sharing are examples of well-known network services. Network services such as DNS (Domain Name System) map human-friendly names to IP addresses (people remember names like “nm.lan” better than numbers like “210.121.67.18”), while DHCP ensures that the equipment on the network has a valid IP address.

Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.
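As a sketch of what such a protocol looks like, here is a toy line-oriented protocol in Python. The verbs, reply codes, and framing are invented for illustration; real services (HTTP, SMTP, and so on) define these in their specifications.

```python
# A toy service protocol: every message is one line, "VERB argument\n".
# The verbs and replies below are made up for this example.
VALID_VERBS = {"GET", "PUT", "QUIT"}

def encode_request(verb: str, arg: str) -> bytes:
    """Client side: format a request message."""
    if verb not in VALID_VERBS:
        raise ValueError(f"unknown verb: {verb}")
    return f"{verb} {arg}\n".encode("ascii")

def handle_request(raw: bytes) -> bytes:
    """Server side: parse a request and produce a reply."""
    verb, _, arg = raw.decode("ascii").rstrip("\n").partition(" ")
    if verb not in VALID_VERBS:
        return b"400 bad request\n"
    return f"200 OK {verb} {arg}\n".encode("ascii")

reply = handle_request(encode_request("GET", "status"))
print(reply)  # b'200 OK GET status\n'
```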

Network performance

Quality of service

Depending on the installation requirements, network performance is usually measured by the quality of service of a telecommunications product. The parameters that affect this typically include throughput, jitter, bit error rate, and latency.
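For example, latency and jitter can be estimated from round-trip-time samples. The numbers below are made up, and the jitter formula here is a simplified mean of consecutive differences rather than the smoothed estimator real tools use.

```python
# Round-trip-time samples in milliseconds (illustrative values only)
rtt_ms = [20.0, 22.0, 19.0, 25.0, 21.0]

# Average latency: mean of the samples
latency = sum(rtt_ms) / len(rtt_ms)

# Simplified jitter: mean absolute difference between consecutive samples
jitter = sum(abs(b - a) for a, b in zip(rtt_ms, rtt_ms[1:])) / (len(rtt_ms) - 1)

print(f"avg latency: {latency:.1f} ms, jitter: {jitter:.2f} ms")
```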

The following list gives examples of network performance measures for a circuit-switched network and one type of packet-switched network:

  • Circuit-switched networks: In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads. Other types of performance measures can include the level of noise and echo.
  • ATM: In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique and modem enhancements.

There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modelled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed.

Network congestion

Network congestion occurs when a link or node is carrying so much data that its quality of service deteriorates. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either only to a small increase in network throughput, or to an actual reduction in network throughput.

Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion—even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.

Modern networks use congestion control and congestion avoidance techniques to try to avoid congestion collapse. These include: exponential backoff in protocols such as 802.11’s CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers. Another method to avoid the negative effects of network congestion is implementing priority schemes, so that some packets are transmitted with higher priority than others. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for some services. An example of this is 802.1p. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn standard, which provides high-speed (up to 1 Gbit/s) local area networking over existing home wires (power lines, phone lines and coaxial cables).
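The exponential backoff idea mentioned above can be sketched in a few lines of Python. The base delay and cap below are arbitrary illustration values, not the timing of any particular standard.

```python
import random

def backoff_delays(attempts: int, base: float = 0.05, cap: float = 2.0):
    """Truncated exponential backoff with jitter: the waiting window
    doubles after each failed attempt, up to a cap, and the actual
    delay is a random point inside the window."""
    delays = []
    for attempt in range(attempts):
        window = min(cap, base * (2 ** attempt))  # doubles, then saturates
        delays.append(random.uniform(0, window))
    return delays

for i, d in enumerate(backoff_delays(5)):
    print(f"retry {i}: wait up to {d:.3f}s")
```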

For the Internet, RFC 2914 addresses the subject of congestion control in detail.

Network resilience

Network resilience is “the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation.”

Security

Main article: Computer security

Network security

Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources. Network security is the authorization of access to data in a network, which is controlled by the network administrator. Users are assigned an ID and password that allows them access to information and programs within their authority. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies and individuals.

Network surveillance

Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.

Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity.

Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent and investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.

However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to numerous lawsuits such as Hepting v. AT&T.[31][32] The hacktivist group Anonymous has hacked into government websites in protest of what it considers “draconian surveillance”.[33][34]

End to end encryption

End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet providers or application service providers, from discovering or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.

Examples of end-to-end encryption include PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio.

Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox. Some such systems, for example LavaBit and SecretInk, have even described themselves as offering “end-to-end” encryption when they do not. Some systems that normally offer end-to-end encryption have turned out to contain a back door that subverts negotiation of the encryption key between the communicating parties, for example Skype or Hushmail.

The end-to-end encryption paradigm does not directly address risks at the communications endpoints themselves, such as the technical exploitation of clients, poor-quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the end points and the times and quantities of messages that are sent.
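As a toy illustration of the end-to-end idea (not of any real E2EE system), the sketch below uses a one-time pad shared only by the two endpoints, so an intermediary relaying the message sees only ciphertext. Real systems negotiate keys (e.g., with Diffie-Hellman) and use authenticated ciphers rather than a pad like this.

```python
import secrets

def otp(key: bytes, data: bytes) -> bytes:
    # XOR with the key; XOR is its own inverse, so the same function
    # both encrypts and decrypts
    return bytes(k ^ d for k, d in zip(key, data))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = otp(key, message)   # this is all an intermediary would see
print(otp(key, ciphertext).decode())  # meet at noon
```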

Network & internet protocol


Definition: An IP address is a binary number that uniquely identifies computers and other devices on a TCP/IP network.

Administrators set up and manage the IP addressing scheme for their networks. When troubleshooting connection problems, users of computer networks also should be familiar with how to find their IP address and reset it if necessary.

IP Address Standards and Notation

Two IP addressing standards are in use today. The IPv4 standard is most familiar to people and supported everywhere on the Internet, but the newer IPv6 standard is gradually replacing it. IPv4 addresses consist of four bytes (32 bits), while IPv6 addresses are 16 bytes (128 bits) long.

An IPv4 address consists of a set of four numbers, each between 0 and 255. Computers store and work with an IP address as one combined (binary) value, but network devices display it in human-readable form. IPv4 uses dots to separate the individual numbers, giving addresses that range from 0.0.0.0 to 255.255.255.255.


IPv6 uses colons instead of dots to separate the numbers and also uses hexadecimal rather than decimal digits.
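Python’s standard ipaddress module makes the notation and sizes above easy to verify:

```python
import ipaddress

v4 = ipaddress.ip_address("210.121.67.18")
v6 = ipaddress.ip_address("2001:db8:2e::7334")

print(v4.version, len(v4.packed))  # 4 4  -> four bytes (32 bits)
print(v6.version, len(v6.packed))  # 6 16 -> sixteen bytes (128 bits)
print(int(v4))  # the single combined value computers work with: 3531162386
```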


Public and Private IP Addresses

An IP address can be private, for use on a local area network (LAN), or public, for use on the Internet or another wide area network (WAN). IP addresses can be assigned statically (set on a computer by a system administrator) or dynamically (assigned on demand by another device on the network).
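Whether an address is private or public can be checked with the same ipaddress module; the addresses below are common examples:

```python
import ipaddress

for addr in ["192.168.1.10", "10.0.0.5", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    kind = "private (LAN)" if ip.is_private else "public (Internet)"
    print(f"{addr}: {kind}")
```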


Working With a Device’s IP Address

Every device connected to an IP network, including computers, phones, printers, and Internet of Things gadgets, receives an IP address. Some devices, such as network routers, can even have multiple addresses, one for each active network interface.

How to look up an IP address in use varies depending on the device but generally involves navigating through system settings menus. In a troubleshooting scenario, experienced users can look at their client IP address and verify whether it is valid for the network they are trying to use. Users can also sometimes restore a broken connection by learning how to release and renew their device’s IP address. Being familiar with IP lookup and renewal procedures comes in handy when calling computer tech support personnel, as they may need the user to walk through these steps to get their device back online.
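One way to look up the address a device is using is sketched below in Python. “Connecting” a UDP socket merely asks the operating system to pick a route and source address; no packet is sent. The destination address is from the reserved TEST-NET documentation range and is never contacted, and on a machine with no network route at all the code falls back to loopback.

```python
import ipaddress
import socket

# Ask the OS which local address it would use for outbound traffic.
# connect() on a UDP socket only selects a route; nothing is transmitted.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    s.connect(("198.51.100.1", 53))  # TEST-NET-2 address; never contacted
    my_ip = s.getsockname()[0]
except OSError:
    my_ip = "127.0.0.1"  # no usable route; fall back to loopback
finally:
    s.close()

print(my_ip, "(private)" if ipaddress.ip_address(my_ip).is_private else "(public)")
```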

IP geolocation technology enables Web sites to roughly determine a person’s country and city location by the IP addresses their network is using. Sites sometimes use this information to control the kinds of content and advertising a user sees online. Some users prefer greater anonymity online and take steps to hide their IP address and avoid this kind of tracking.


World Wide Web {www}


The World Wide Web (WWW) is an information space where documents and other web resources are identified by URLs, interlinked by hypertext links, and can be accessed via the Internet.[1] The World Wide Web was invented by English scientist Tim Berners-Lee in 1989. He wrote the first web browser in 1990 while employed at CERN in Switzerland.[2][3]

It has become known simply as the Web. When used attributively (as in web page, web browser, website, web server, web traffic, web search, web user, web technology, etc.) it is invariably written in lower case. Otherwise the initial capital is often retained (‘the Web’), but lower case is becoming increasingly common (‘the web’).

The World Wide Web was central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet.[4][5][6]

Web pages are primarily text documents formatted and annotated with Hypertext Markup Language (HTML). In addition to formatted text, web pages may contain images, video, and software components that are rendered in the user’s web browser as coherent pages of multimedia content. Embedded hyperlinks permit users to navigate between web pages. Multiple web pages with a common theme, a common domain name, or both, may be called a website. Website content can be largely provided by the publisher, or it can be interactive, where users contribute content or the content depends upon the users and their actions. Websites may be mostly informative, primarily for entertainment, or largely for commercial purposes.

Function

The World Wide Web functions as a layer on top of the Internet, helping to make it more functional. The advent of the Mosaic web browser helped to make the web much more usable.

The terms Internet and World Wide Web are often used without much distinction. However, the two are not the same. The Internet is a global system of interconnected computer networks. In contrast, the World Wide Web is a global collection of text documents and other resources, linked by hyperlinks and URIs. Web resources are usually accessed using HTTP, which is one of many Internet communication protocols.

Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser, or by following a hyperlink to that page or resource. The web browser then initiates a series of background communication messages to fetch and display the requested page. In the 1990s, using a browser to view web pages—and to move from one web page to another through hyperlinks—came to be known as ‘browsing,’ ‘web surfing’ (after channel surfing), or ‘navigating the Web’. Early studies of this new behavior investigated user patterns in using web browsers. One study, for example, found five user patterns: exploratory surfing, window surfing, evolved surfing, bounded navigation and targeted navigation.

The following example demonstrates the functioning of a web browser when accessing a page at the URL http://www.example.org/home.html. The browser resolves the server name of the URL (www.example.org) into an Internet Protocol address using the globally distributed Domain Name System (DNS). This lookup returns an IP address such as 203.0.113.4 or 2001:db8:2e::7334. The browser then requests the resource by sending an HTTP request across the Internet to the computer at that address. It requests service from a specific TCP port number that is well known for the HTTP service, so that the receiving host can distinguish an HTTP request from other network protocols it may be servicing. The HTTP protocol normally uses port number 80. The content of the HTTP request can be as simple as two lines of text:

GET /home.html HTTP/1.1
Host: www.example.org

The computer receiving the HTTP request delivers it to web server software listening for requests on port 80. If the web server can fulfill the request it sends an HTTP response back to the browser indicating success:

HTTP/1.0 200 OK
Content-Type: text/html; charset=UTF-8

followed by the content of the requested page. HyperText Markup Language (HTML) for a basic web page might look like this:

<html>
  <head>
    <title>Example.org – The World Wide Web</title>
  </head>
  <body>
    <p>The World Wide Web, abbreviated as WWW and commonly known ...</p>
  </body>
</html>

The web browser parses the HTML and interprets the markup (<title>, <p> for paragraph, and such) that surrounds the words to format the text on the screen. Many web pages use HTML to reference the URLs of other resources such as images, other embedded media, scripts that affect page behavior, and Cascading Style Sheets that affect page layout. The browser makes additional HTTP requests to the web server for these other Internet media types. As it receives their content from the web server, the browser progressively renders the page onto the screen as specified by its HTML and these additional resources.
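A minimal sketch of that parsing step, using Python’s html.parser to pull the <title> out of a page like the one above:

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collects the text inside the <title> element, a tiny slice of
    what a browser's HTML parser does."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

page = """<html><head><title>Example.org – The World Wide Web</title></head>
<body><p>The World Wide Web, abbreviated as WWW ...</p></body></html>"""

p = TitleParser()
p.feed(page)
print(p.title)  # Example.org – The World Wide Web
```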

Linking

Most web pages contain hyperlinks to other related pages and perhaps to downloadable files, source documents, definitions and other web resources. In the underlying HTML, a hyperlink looks like this: <a href="http://www.example.org/home.html">Example.org Homepage</a>

Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks

Such a collection of useful, related resources, interconnected via hypertext links is dubbed a web of information. Publication on the Internet created what Tim Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November 1990.[9]

The hyperlink structure of the WWW is described by the webgraph: the nodes of the webgraph correspond to the web pages (or URLs) and the directed edges between them correspond to the hyperlinks.
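A webgraph can be sketched as an adjacency list; the page names below are invented. Counting in-degrees, for instance, shows how often each page is linked to:

```python
# Nodes are pages, directed edges are hyperlinks (page names invented)
webgraph = {
    "home.html":  ["about.html", "links.html"],
    "about.html": ["home.html"],
    "links.html": ["home.html", "gone.html"],
    "gone.html":  [],  # still linked to, but has no outbound links
}

# In-degree: how many pages link *to* each page
in_degree = {page: 0 for page in webgraph}
for links in webgraph.values():
    for target in links:
        in_degree[target] += 1

print(in_degree)  # home.html is linked from two pages
```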

Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This makes hyperlinks obsolete, a phenomenon referred to in some circles as link rot, and the hyperlinks affected by it are often called dead links. The ephemeral nature of the Web has prompted many efforts to archive web sites. The Internet Archive, active since 1996, is the best known of such efforts.

Dynamic updates of web pages

Main article: Ajax (programming)

JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within web pages. The standardised version is ECMAScript. To make web pages more interactive, some web applications also use JavaScript techniques such as Ajax (asynchronous JavaScript and XML). Client-side script is delivered with the page and can make additional HTTP requests to the server, either in response to user actions such as mouse movements or clicks, or based on elapsed time. The server’s responses are used to modify the current page rather than creating a new page with each response, so the server needs only to provide limited, incremental information. Multiple Ajax requests can be handled at the same time, and users can interact with the page while data is retrieved. Web pages may also regularly poll the server to check whether new information is available.

WWW prefix

Many hostnames used for the World Wide Web begin with www because of the long-standing practice of naming Internet hosts according to the services they provide. The hostname of a web server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a USENET news server. These host names appear as Domain Name System (DNS) or subdomain names, as in http://www.example.com. The use of www is not required by any technical or policy standard and many web sites do not use it; indeed, the first ever web server was called nxoc01.cern.ch. According to Paolo Palazzi, who worked at CERN along with Tim Berners-Lee, the popular use of www as a subdomain was accidental; the World Wide Web project page was intended to be published at http://www.cern.ch while info.cern.ch was intended to be the CERN home page; however, the DNS records were never switched, and the practice of prepending www to an institution’s website domain name was subsequently copied. Many established websites still use the prefix, or they employ other subdomain names such as www2, secure or en for special purposes. Many such web servers are set up so that both the main domain name (e.g., example.com) and the www subdomain (e.g., http://www.example.com) refer to the same site; others require one form or the other, or they may map to different web sites.

The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since, currently, only a subdomain can be used in a CNAME, the same result cannot be achieved by using the bare domain root.

When a user submits an incomplete domain name to a web browser in its address bar input field, some web browsers automatically try adding the prefix “www” to the beginning of it and possibly “.com”, “.org” and “.net” at the end, depending on what might be missing. For example, entering ‘microsoft’ may be transformed to http://www.microsoft.com and ‘openoffice’ to http://www.openoffice.org. This feature started appearing in early versions of Mozilla Firefox, when it still had the working title ‘Firebird’ in early 2003, from an earlier practice in browsers such as Lynx. It is reported that Microsoft was granted a US patent for the same idea in 2008, but only for mobile devices.

In English, www is usually read as double-u double-u double-u. Some users pronounce it dub-dub-dub, particularly in New Zealand. Stephen Fry, in his “Podgrammes” series of podcasts, pronounces it wuh wuh wuh. The English writer Douglas Adams once quipped in The Independent on Sunday (1999): “The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it’s short for”. In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng, which satisfies www and literally means “myriad dimensional net”, a translation that reflects the design concept and proliferation of the World Wide Web. Tim Berners-Lee’s web-space states that World Wide Web is officially spelled as three separate words, each capitalised, with no intervening hyphens.

Use of the www prefix is declining as Web 2.0 web applications seek to brand their domain names and make them easily pronounceable. As the mobile web grows in popularity, services like Gmail.com, Outlook.com, MySpace.com, Facebook.com and Twitter.com are most often mentioned without adding “www.” (or, indeed, “.com”) to the domain.

Scheme specifiers

The scheme specifiers http:// and https:// at the start of a web URI refer to Hypertext Transfer Protocol or HTTP Secure, respectively. They specify the communication protocol to use for the request and response. The HTTP protocol is fundamental to the operation of the World Wide Web, and the added encryption layer in HTTPS is essential when browsers send or retrieve confidential data, such as passwords or banking information. Web browsers usually automatically prepend http:// to user-entered URIs, if omitted.
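The scheme can be picked apart with Python’s urllib.parse, and the browser habit of prepending http:// is easy to mimic (the normalise helper here is our own, not a library function):

```python
from urllib.parse import urlsplit

def normalise(entered: str) -> str:
    # Mimic browsers that prepend a scheme when the user omits one
    return entered if "://" in entered else "http://" + entered

for text in ["www.example.org/home.html", "https://example.com/login"]:
    parts = urlsplit(normalise(text))
    print(parts.scheme, parts.netloc, parts.path)
```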

Web security

For criminals, the web has become the preferred way to spread malware. Cybercrime on the web can include identity theft, fraud, espionage and intelligence gathering. Web-based vulnerabilities now outnumber traditional computer security concerns, and as measured by Google, about one in ten web pages may contain malicious code. Most web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia. The most common of all malware threats is SQL injection attacks against websites. Through HTML and URIs, the Web was vulnerable to attacks like cross-site scripting (XSS) that came with the introduction of JavaScript and were exacerbated to some degree by Web 2.0 and Ajax web design that favors the use of scripts. Today, by one estimate, 70% of all websites are open to XSS attacks on their users. Phishing is another common threat to the Web. “RSA, the Security Division of EMC, today announced the findings of its January 2013 Fraud Report, estimating the global losses from phishing at $1.5 Billion in 2012”. Two of the well-known phishing methods are Covert Redirect and Open Redirect.

Proposed solutions vary to extremes. Large security vendors like McAfee already design governance and compliance suites to meet post-9/11 regulations, and some, like Finjan, have recommended active real-time inspection of code and all content regardless of its source. Some have argued that for enterprises to see security as a business opportunity rather than a cost center, “ubiquitous, always-on digital rights management” enforced in the infrastructure by a handful of organizations must replace the hundreds of companies that today secure data and networks. Jonathan Zittrain has said users sharing responsibility for computing safety is far preferable to locking down the Internet.

Privacy

Main article: Internet privacy

Every time a client requests a web page, the server can identify the request’s IP address and usually logs it. Also, unless set not to do so, most web browsers record requested web pages in a viewable history feature, and usually cache much of the content locally. Unless the server-browser communication uses HTTPS encryption, web requests and responses travel in plain text across the Internet and can be viewed, recorded, and cached by intermediate systems.

When a web page asks for, and the user supplies, personally identifiable information—such as their real name, address, e-mail address, etc.—web-based entities can associate current web traffic with that individual. If the website uses HTTP cookies, username and password authentication, or other tracking techniques, it can relate other web visits, before and after, to the identifiable information provided. In this way it is possible for a web-based organisation to develop and build a profile of the individual people who use its site or sites. It may be able to build a record for an individual that includes information about their leisure activities, their shopping interests, their profession, and other aspects of their demographic profile. These profiles are obviously of potential interest to marketeers, advertisers and others. Depending on the website’s terms and conditions and the local laws that apply, information from these profiles may be sold, shared, or passed to other organisations without the user being informed. For many ordinary people, this means little more than some unexpected e-mails in their in-box, or some uncannily relevant advertising on a future web page. For others, it can mean that time spent indulging an unusual interest can result in a deluge of further targeted marketing that may be unwelcome. Law enforcement, counter terrorism and espionage agencies can also identify, target and track individuals based on their interests or proclivities on the Web.

Social networking sites try to get users to use their real names, interests, and locations. They believe this makes the social networking experience more realistic, and therefore more engaging for all their users. On the other hand, uploaded photographs or unguarded statements can be identified to an individual, who may regret this exposure. Employers, schools, parents, and other relatives may be influenced by aspects of social networking profiles that the posting individual did not intend for these audiences. On-line bullies may make use of personal information to harass or stalk users. Modern social networking websites allow fine grained control of the privacy settings for each individual posting, but these can be complex and not easy to find or use, especially for beginners.

Photographs and videos posted onto websites have caused particular problems, as they can add a person's face to an online profile. With modern and potential facial recognition technology, it may then be possible to match that face to other, previously anonymous, images, events, and scenarios that have been imaged elsewhere. Because of image caching, mirroring, and copying, it is difficult to remove an image from the World Wide Web.

Standards

Main article: Web standards

Many formal standards and other technical specifications and software define the operation of different aspects of the World Wide Web, the Internet, and computer information exchange. Many of the documents are the work of the World Wide Web Consortium (W3C), headed by Berners-Lee, but some are produced by the Internet Engineering Task Force (IETF) and other organizations.

Usually, when web standards are discussed, a core set of W3C and IETF publications is seen as foundational.

Additional publications provide definitions of other essential technologies for the World Wide Web, including, but not limited to, the following:

  • Uniform Resource Identifier (URI), which is a universal system for referencing resources on the Internet, such as hypertext documents and images. URIs, often called URLs, are defined by the IETF’s RFC 3986 / STD 66: Uniform Resource Identifier (URI): Generic Syntax, as well as its predecessors and numerous URI scheme-defining RFCs;
  • HyperText Transfer Protocol (HTTP), especially as defined by RFC 2616: HTTP/1.1 and RFC 2617: HTTP Authentication, which specify how the browser and server authenticate each other.
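As an illustration of what RFC 3986's generic syntax defines, Python's standard `urllib.parse` module can split a URI into its scheme, authority, path, query, and fragment components. This is only a sketch of the syntax the standard describes; the example URI is hypothetical.

```python
from urllib.parse import urlsplit

# RFC 3986 generic syntax: scheme://authority/path?query#fragment
uri = "https://example.com:8080/docs/page.html?lang=en#section2"
parts = urlsplit(uri)

print(parts.scheme)    # https
print(parts.netloc)    # example.com:8080  (the authority component)
print(parts.path)      # /docs/page.html
print(parts.query)     # lang=en
print(parts.fragment)  # section2
```

Each RFC that defines a URI scheme (http, mailto, ftp, and so on) then constrains what may appear in these components.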

Accessibility

Main article: Web accessibility

There are methods for accessing the Web in alternative media and formats to facilitate use by individuals with disabilities. These disabilities may be visual, auditory, physical, speech-related, cognitive, or neurological, or some combination. Accessibility features also help people with temporary disabilities, like a broken arm, and aging users as their abilities change. The Web receives information as well as providing information and interacting with society. The World Wide Web Consortium considers it essential that the Web be accessible, so it can provide equal access and equal opportunity to people with disabilities. Tim Berners-Lee once noted, "The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect." Many countries regulate web accessibility as a requirement for websites. International cooperation in the W3C Web Accessibility Initiative led to simple guidelines that web content authors as well as software developers can use to make the Web accessible to persons who may or may not be using assistive technology.

Speed issues

Frustration over congestion issues in the Internet infrastructure and the high latency that results in slow browsing has led to a pejorative name for the World Wide Web: the World Wide Wait. Speeding up the Internet is an ongoing discussion over the use of peering and QoS technologies. Other solutions to reduce congestion can be found at the W3C. Guidelines for web response times are:

  • 0.1 second (one tenth of a second). Ideal response time. The user does not sense any interruption.
  • 1 second. Highest acceptable response time. Download times above 1 second interrupt the user experience.
  • 10 seconds. Unacceptable response time. The user experience is interrupted and the user is likely to leave the site or system.

Web caching

A web cache is a server computer, located either on the public Internet or within an enterprise, that stores recently accessed web pages to improve response time for users when the same content is requested again within a certain time after the original request.

Most web browsers also implement a browser cache for recently obtained data, usually on the local disk drive. HTTP requests by a browser may ask only for data that has changed since the last access. Web pages and resources may contain expiration information to control caching to secure sensitive data, such as in online banking, or to facilitate frequently updated sites, such as news media. Even sites with highly dynamic content may permit basic resources to be refreshed only occasionally. Web site designers find it worthwhile to collate resources such as CSS data and JavaScript into a few site-wide files so that they can be cached efficiently.
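The expiration mechanism described above can be sketched as a freshness check against a `Cache-Control: max-age` directive. This is a minimal, illustrative model of what a browser cache does, not a real browser API; the function names here (`parse_max_age`, `is_fresh`) are hypothetical.

```python
# Minimal sketch of a browser cache's freshness check, assuming the
# server sent a Cache-Control header with a max-age directive.

def parse_max_age(cache_control: str):
    """Extract the max-age value (in seconds) from a Cache-Control header."""
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return int(directive[len("max-age="):])
    return None

def is_fresh(cache_control: str, age_seconds: int) -> bool:
    """A cached response may be reused without contacting the server
    while its age is below max-age; after that, the browser revalidates,
    e.g. with a conditional If-Modified-Since request."""
    max_age = parse_max_age(cache_control)
    return max_age is not None and age_seconds < max_age

print(is_fresh("public, max-age=3600", 120))   # True: still fresh
print(is_fresh("public, max-age=3600", 7200))  # False: must revalidate
```

A news site would use a short max-age so readers see updates quickly, while a banking site can disable caching of sensitive pages entirely with directives such as `no-store`.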

Enterprise firewalls often cache Web resources requested by one user for the benefit of many. Some search engines store cached content of frequently accessed websites.