Friday, January 30, 2009

Programmable read-only memory

A programmable read-only memory (PROM) or field programmable read-only memory (FPROM) is a form of digital memory where the setting of each bit is locked by a fuse or antifuse. Such PROMs are used to store programs permanently. The key difference from a strict ROM is that the programming is applied after the device is constructed. They are frequently seen in video game consoles, or such products as electronic dictionaries, where PROMs for different languages can be substituted.

Programming

A typical PROM comes with all bits reading as 1. Burning a fuse during programming causes its bit to read as 0. The memory can be programmed just once after manufacturing by "blowing" the fuses (using a device called a PROM programmer, or burner), which is an irreversible process. Blowing a fuse opens a connection while blowing an antifuse closes a connection (hence the name). Programming is done by applying high-voltage pulses which are not encountered during normal operation (typically 12 to 21 volts). Read-only means that, unlike the case with conventional memory, the programming cannot be changed (at least not by the end user).
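As a rough illustration (not any particular device), the one-way nature of fuse programming can be modeled in a few lines of Python: bits start at 1, "burning" can only clear a bit to 0, and a cleared bit can never be restored.

    # Minimal sketch of one-time-programmable memory: bits default to 1,
    # burning can only clear a bit to 0, and the change is irreversible.
    class PROM:
        def __init__(self, size_bits):
            self.bits = [1] * size_bits          # a blank PROM reads all 1s

        def burn(self, address, value):
            if value == 1 and self.bits[address] == 0:
                raise ValueError("cannot restore a blown fuse to 1")
            if value == 0:
                self.bits[address] = 0           # blowing the fuse is permanent

        def read(self, address):
            return self.bits[address]

    prom = PROM(8)
    prom.burn(3, 0)
    print(prom.read(3), prom.read(4))            # prints: 0 1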

Advantages

• Reliable
• Stores data permanently
• Moderate price
• Built from integrated circuits rather than discrete components
• Fast: read access times are typically between 35 ns and 60 ns

Invention

PROM was invented in 1956 by Wen Tsing Chow, working for the Arma Division of the American Bosch Arma Corporation in Garden City, New York. The invention was conceived at the request of the United States Air Force to come up with a more flexible and secure way of storing the targeting constants in the Atlas E/F ICBM's airborne digital computer. The patent and associated technology were held under a secrecy order for several years while the Atlas E/F was the main operational missile of the United States ICBM force. The term "burn," referring to the process of programming a PROM, also appears in the original patent, as one of the original implementations was to literally burn the internal whiskers of diodes with a current overload to produce a circuit discontinuity. The first PROM programming machines were also developed by Arma engineers under Mr. Chow's direction and were located in Arma's Garden City lab and at Air Force Strategic Air Command (SAC) headquarters.


Magnetic tape sound recording

Magnetic tape has been used for sound recording for more than 75 years. Tape revolutionized both the radio broadcast and music recording industries. It did this by giving artists and producers the power to record and re-record audio with minimal loss in quality, as well as to edit and rearrange recordings with ease. The alternative recording technologies of the era, transcription discs and wire recorders, could not provide anywhere near this level of quality and functionality. Ever since early refinements improved the fidelity of the reproduced sound, magnetic tape has been the highest-quality analog sound recording medium available. Despite this, as of 2007, magnetic tape is largely being replaced by digital systems for most sound recording purposes.

Prior to the development of magnetic tape, magnetic wire recorders had successfully demonstrated the concept of magnetic recording, but they never offered audio quality comparable to the recording and broadcast standards of the time. Some individuals and organizations developed innovative uses for magnetic wire recorders while others investigated variations of the technology. One particularly important variation was the application of an oxide powder to a long strip of paper. This German invention was the start of a long string of innovations that led to modern magnetic tape.

Random access memory

Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today it takes the form of integrated circuits that allow the stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.

This contrasts with storage mechanisms such as tapes, magnetic discs and optical discs, which rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than the data transfer, and the retrieval time varies depending on the physical location of the next item.

The word RAM is mostly associated with volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. However, many other types of memory are RAM as well (i.e., Random Access Memory), including most types of ROM and a kind of flash memory called NOR-Flash.

History Of Random access memory

An early type of widespread writable random access memory was the magnetic core memory, developed from 1949 to 1952, and subsequently used in most computers up until the development of the static and dynamic integrated RAM circuits in the late 1960s and early 1970s. Before this, computers used relays, delay lines or various kinds of vacuum tube arrangements to implement "main" memory functions (i.e., hundreds or thousands of bits), some of which were random access, some not. Latches built out of vacuum tube triodes, and later, out of discrete transistors, were used for smaller and faster memories such as registers and (random access) register banks. Prior to the development of integrated ROM circuits, permanent (or read-only) random access memory was often constructed using semiconductor diode matrices driven by address decoders.

Magnetic storage

Magnetic storage and magnetic recording are terms from engineering referring to the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetization in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. As of 2007, magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, the term magnetic storage is preferred and in the field of audio and video production, the term magnetic recording is more commonly used. The distinction is less technical and more a matter of preference.

History of Magnetic storage

Magnetic storage in the form of audio recording on a wire was publicized by Oberlin Smith in 1888. He had filed a patent in September 1878 but did not pursue the idea, as his business was machine tools. The first publicly demonstrated (at the Paris Exposition of 1900) magnetic recorder was invented by Valdemar Poulsen in 1898. Poulsen's device recorded a signal on a wire wrapped around a drum. In 1928, Fritz Pfleumer developed the first magnetic tape recorder. Early magnetic storage devices were designed to record analog audio signals. Computers, and now most audio and video magnetic storage devices, record digital data.

In early computers, magnetic storage was also used for primary storage, in the form of magnetic drum memory, core memory, core rope memory, thin-film memory, twistor memory or bubble memory. Unlike in modern computers, magnetic tape was also often used for secondary storage.

Wednesday, January 28, 2009

Front-end versus back-end

With modern media content retrieval and output technology, there is much overlap between visual communications (front-end) and information technology (back-end). Large print publications (thick books, especially instructional in nature) and electronic pages (web pages) require metadata for automatic indexing, automatic reformatting, database publishing, dynamic page display and end-user interactivity. Much of the metadata (meta tags) must be hand-coded or specified during the page layout process. This divides the task of page layout between artists and engineers, or tasks the artist/engineer with doing both.

More complex projects may require two separate designs: page layout design as the front-end, and function coding as the back-end. In this case, the front-end may be designed using alternative page layout technology, such as image editing software, or on paper with hand-rendering methods. Most image editing software includes features for converting a page layout for use in a "What You See Is What You Get" (WYSIWYG) editor, or features to export graphics for desktop publishing software. WYSIWYG editors and desktop publishing software allow front-end design prior to back-end coding in most cases. Interface design and database publishing may involve more technical knowledge, or collaboration with information technology engineering, in the front-end.

Grids versus templates

Grids and templates are page layout design patterns used in advertising campaigns and multiple page publications, including websites.

* A grid is a set of guidelines, visible in the design process and invisible to the end-user/audience, for aligning and repeating elements on a page. A page layout may or may not stay within those guidelines, depending on how much repetition or variety the design style in the series calls for. Grids are meant to be flexible. Using a grid to lay out elements on the page may require as much or more graphic design skill than was required to design the grid.

* In contrast, a template is more rigid. A template involves repeated elements mostly visible to the end-user/audience. Using a template to lay out elements usually involves less graphic design skill than was required to design the template. Templates are used for minimal modification of background elements and frequent modification (or swapping) of foreground content.

Most desktop publishing software allows for grids in the form of a page filled with automatic dots placed at specified equal horizontal and vertical distances apart. Automatic margins and booklet spine (gutter) lines may be specified for global use throughout the document. Multiple additional horizontal and vertical lines may be placed at any point on the page. Shapes invisible to the end-user/audience may also be placed on the page as guidelines for page layout and print processing. Software templates are achieved by duplicating a template data file, or with master page features in a multiple-page document. Master pages may include both grid elements and template elements such as headers and footers, automatic page numbering, and automatic table of contents features.

Page layout

Page layout is the part of graphic design that deals with the arrangement and style treatment of elements (content) on a page. Beginning with early illuminated pages in hand-copied books of the Middle Ages and proceeding to intricate modern magazine and catalog layouts, proper page design has long been a consideration in printed material. With print media, elements usually consist of type (text), images (pictures), and occasionally place-holder graphics for elements that are not printed with ink, such as die/laser cutting, foil stamping or blind embossing.

Since the advent of personal computing, page layout skills have expanded to electronic media as well as print media. The electronic page is better known as a graphical user interface (GUI) when interactive elements are included. Page layout for interactive media overlaps with (and is often called) interface design. This usually includes interactive elements and multimedia in addition to text and still images. Interactivity takes page layout skills from planning attraction and eye flow to the next level: planning the user experience in collaboration with software engineers and creative directors.

A page layout may be designed as a rough paper-and-pencil sketch before production, or produced directly in its final form during the design process. Both design and production may be achieved using hand tools or page layout software. Producing the most popular kind of electronic page, the web page, may require knowledge of markup languages along with WYSIWYG editors to compensate for incompatibilities between platforms. Special considerations must be made for how the layout of an HTML page will change (reflow) when resized by the end-user. Cascading style sheets are often required to keep the page layout consistent between web browsers.

Tuesday, January 27, 2009

Human-computer interaction

Human–computer interaction is the study of interaction between people (users) and computers. It is often regarded as the intersection of computer science, behavioral sciences, design and several other fields of study. Interaction between users and computers occurs at the user interface (or simply interface), which includes both software and hardware, for example, general-purpose computer peripherals and large-scale mechanical systems, such as aircraft and power plants. The following definition is given by the Association for Computing Machinery[1]:

"Human-computer interaction is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them."

Because human-computer interaction studies a human and a machine in conjunction, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, and human performance are relevant. Engineering and design methods are also relevant. Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success. HCI is also sometimes referred to as man–machine interaction (MMI) or computer–human interaction (CHI).

Web template system

Dynamic web pages usually consist of a static part (HTML) and a dynamic part: code that generates HTML based on variables in a template or on program logic. The text to be generated can come from a database, thereby making it possible to dramatically reduce the number of pages in a site.

Consider the example of a real estate agent with 500 houses for sale. In a static web site, the agent would have to create 500 pages in order to make the information available. In a dynamic website, the agent would simply connect the dynamic page to a database table of 500 records.

In a template, variables from the programming language can be inserted without using code, thereby removing the need for programming knowledge to make updates to the pages in a web site. A syntax is made available to distinguish between HTML and variables; for example, JSP provides tags for outputting variables, and in Smarty, {$variable} is used.

Many template engines do support limited logic tags, like IF and FOREACH. These are to be used only for decisions that need to be made for the presentation layer, in order to keep a clean separation from the business logic layer, or the M(odel) in the MVC pattern.
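As a hedged illustration of the idea, the real-estate example above can be rendered with the Python Jinja2 engine (standing in for the JSP and Smarty engines mentioned above; the field names and records are invented for this sketch): the template holds the presentation, a FOR-style tag loops over the records, and the data could just as well come from a database query.

    # Illustration only: Jinja2 stands in for the template engines named above;
    # the "houses" records and their field names are made up for this sketch.
    from jinja2 import Template

    listing_template = Template("""
    <ul>
    {% for house in houses %}
      <li>{{ house.address }} - {{ house.price }}</li>
    {% endfor %}
    </ul>
    """)

    houses = [
        {"address": "12 Oak St", "price": 250000},
        {"address": "48 Elm Ave", "price": 315000},
    ]
    print(listing_template.render(houses=houses))   # one template, any number of records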

Style sheet (web development)

Web style sheets are a form of separation of presentation and content for web design in which the markup (i.e., HTML or XHTML) of a webpage contains the page's semantic content and structure, but does not define its visual layout (style). Instead, the style is defined in an external stylesheet file using a language such as CSS or XSL. This design approach is identified as a "separation" because it largely supersedes the antecedent methodology in which a page's markup defined both style and structure.

The philosophy underlying this methodology is a specific case of separation of concerns.

Separation of style and content has many benefits, but has only become practical in recent years due to improvements in popular web browsers' CSS implementations.

Speed

Overall, a user's experience of a site that uses style sheets will generally be quicker than of one that does not. "Overall" because the first page will probably load more slowly, since both the style sheet and the content need to be transferred. Subsequent pages will load faster because no style information needs to be downloaded; the CSS file will already be in the browser's cache.

Maintainability

Holding all the presentation styles in one file significantly reduces maintenance time and reduces the chance of human errors, thereby improving presentation consistency. For example, the font color associated with a type of text element may be specified — and therefore easily modified — throughout an entire website simply by changing one short string of characters in a single file. The alternate approach, using styles embedded in each individual page, would require a cumbersome, time consuming, and error-prone edit of every file.

Accessibility

Sites that use CSS with either XHTML or HTML are easier to tweak so that they appear extremely similar in different browsers (Internet Explorer, Mozilla Firefox, Opera, Safari, etc.).

Sites using CSS "degrade gracefully" in browsers unable to display graphical content, such as Lynx, or those so very old that they cannot use CSS. Browsers ignore CSS that they do not understand, such as CSS 3 statements. This enables a wide variety of user agents to be able to access the content of a site even if they cannot render the stylesheet or are not designed with graphical capability in mind. For example, a browser using a refreshable braille display for output could disregard layout information entirely, and the user would still have access to all page content.

Customization

If a page's layout information is all stored externally, a user can decide to disable the layout information entirely, leaving the site's bare content still in a readable form. Site authors may also offer multiple stylesheets, which can be used to completely change the appearance of the site without altering any of its content.

Most modern web browsers also allow the user to define their own stylesheet, which can include rules that override the author's layout rules. This allows users, for example, to bold every hyperlink on every page they visit.

Consistency

Because the semantic file contains only the meanings an author intends to convey, the styling of the various elements of the document's content is very consistent. For example, headings, emphasized text, lists and mathematical expressions all receive consistently applied style properties from the external stylesheet. Authors need not concern themselves with the style properties at the time of composition. These presentational details can be deferred until the moment of presentation.

Portability

The deferment of presentational details until the time of presentation means that a document can be easily re-purposed for an entirely different presentation medium with merely the application of a new stylesheet already prepared for the new medium and consistent with elemental or structural vocabulary of the semantic document. A carefully authored document for a web page can easily be printed to a hard-bound volume complete with headers and footers, page numbers and a generated table of contents simply by applying a new stylesheet.

Website Planning

Before creating and uploading a website, it is important to take the time to plan exactly what is needed in the website. Thoroughly considering the audience or target market, as well as defining the purpose and deciding what content will be developed are extremely important.

Purpose

It is essential to define the purpose of the website as one of the first steps in the planning process. A purpose statement should show focus based on what the website will accomplish and what the users will get from it. A clearly defined purpose will help the rest of the planning process as the audience is identified and the content of the site is developed. Setting short- and long-term goals for the website will help make the purpose clear and plan for the future, when expansion, modification, and improvement will take place. Goal-setting practices and measurable objectives should be identified to track the progress of the site and determine success.

Audience

Defining the audience is a key step in the website planning process. The audience is the group of people who are expected to visit your website – the market being targeted. These people will be viewing the website for a specific reason and it is important to know exactly what they are looking for when they visit the site. A clearly defined purpose or goal of the site, as well as an understanding of what visitors want to do or feel when they come to your site, will help to identify the target audience. After considering who is most likely to need or use the content, a list of characteristics common to those users can be drawn up, such as:

* Audience Characteristics
* Information Preferences
* Computer Specifications
* Web Experience

Taking into account the characteristics of the audience will allow an effective website to be created that will deliver the desired content to the target audience.

Content

Content evaluation and organization requires that the purpose of the website be clearly defined. Collecting a list of the necessary content then organizing it according to the audience's needs is a key step in website planning. In the process of gathering the content being offered, any items that do not support the defined purpose or accomplish target audience objectives should be removed. It is a good idea to test the content and purpose on a focus group and compare the offerings to the audience needs. The next step is to organize the basic information structure by categorizing the content and organizing it according to user needs. Each category should be named with a concise and descriptive title that will become a link on the website. Planning for the site's content ensures that the wants or needs of the target audience and the purpose of the site will be fulfilled.

Friday, January 23, 2009

ICANN

The Internet Corporation for Assigned Names and Numbers (ICANN) is the authority that coordinates the assignment of unique identifiers on the Internet, including domain names, Internet Protocol (IP) addresses, and protocol port and parameter numbers. A globally unified namespace (i.e., a system of names in which there is at most one holder for each possible name) is essential for the Internet to function. ICANN is headquartered in Marina del Rey, California, but is overseen by an international board of directors drawn from across the Internet technical, business, academic, and non-commercial communities. The US government continues to have the primary role in approving changes to the root zone file that lies at the heart of the domain name system.

Because the Internet is a distributed network comprising many voluntarily interconnected networks, the Internet has no governing body. ICANN's role in coordinating the assignment of unique identifiers distinguishes it as perhaps the only central coordinating body on the global Internet, but the scope of its authority extends only to the Internet's systems of domain names, IP addresses, protocol ports and parameter numbers.

Internet protocols

The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet.

The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF). The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting discussions and final standards are published in Requests for Comments (RFCs), freely available on the IETF web site.

The principal methods of networking that enable the Internet are contained in a series of RFCs that constitute the Internet Standards. These standards describe a system known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the space (Application Layer) of the software application, e.g., a web browser application, and just below it is the Transport Layer, which connects applications on different hosts via the network (e.g., the client-server model). The underlying network consists of two layers: the Internet Layer, which enables computers to connect to one another via intermediate (transit) networks and thus is the layer that establishes internetworking and the Internet, and lastly, at the bottom, a software layer that provides connectivity between hosts on the same local link (therefore called the Link Layer), e.g., a local area network (LAN) or a dial-up connection. This model is also known as the TCP/IP model of networking. While other models have been developed, such as the Open Systems Interconnection (OSI) model, they are not compatible in the details of description, nor in implementation.

The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems for computers on the Internet and facilitates the internetworking of networks. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to approximately 4.3 billion (2^32) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion. A new protocol version, IPv6, was developed which provides vastly larger addressing capabilities and more efficient routing of data traffic. IPv6 is currently in the commercial deployment phase around the world.

IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not accessible with IPv4 software. This means software upgrades are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems can already operate with both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development.
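The difference in address space between the two versions can be illustrated with Python's standard ipaddress module (the addresses below are reserved documentation examples, not real hosts):

    # IPv4 offers 2**32 addresses; IPv6 offers 2**128.
    import ipaddress

    v4 = ipaddress.ip_address("192.0.2.1")      # documentation-range IPv4 address
    v6 = ipaddress.ip_address("2001:db8::1")    # documentation-range IPv6 address
    print(v4.version, v6.version)               # prints: 4 6
    print(2**32)                                # roughly 4.3 billion IPv4 addresses
    print(2**128)                               # roughly 3.4e38 IPv6 addresses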

Thursday, January 22, 2009

History Of Graphics Interchange Format

CompuServe introduced the GIF format in 1987 to provide a color image format for their file downloading areas, replacing their earlier run-length encoding (RLE) format, which was black and white only. GIF became popular because it used LZW data compression, which was more efficient than the run-length encoding that formats such as PCX and MacPaint used, and fairly large images could therefore be downloaded in a reasonably short time, even with very slow modems.

The original version of the GIF format was called 87a. In 1989, CompuServe devised an enhanced version, called 89a, that added support for multiple images in a stream, interlacing and storage of application-specific metadata. The two versions can be distinguished by looking at the first six bytes of the file, which, when interpreted as ASCII, read "GIF87a" and "GIF89a", respectively.
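Because the version is stored in those first six bytes, distinguishing the two variants is straightforward; a minimal Python sketch (the file name is hypothetical):

    # Read the six-byte GIF signature and report the version.
    def gif_version(path):
        with open(path, "rb") as f:
            header = f.read(6)
        if header == b"GIF87a":
            return "87a"
        if header == b"GIF89a":
            return "89a"
        raise ValueError("not a GIF file")

    print(gif_version("example.gif"))   # hypothetical file name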

GIF was one of the first two image formats commonly used on Web sites, the other being the black and white XBM. JPEG came later with the Mosaic browser.

The GIF89a feature of storing multiple images in one file, accompanied by control data, is used extensively on the Web to produce simple animations. The optional interlacing feature, which stores image scan lines out of order in such a fashion that even a partially downloaded image is somewhat recognizable, also helped GIF's popularity, as a user could abort the download if it was not what was required.

Graphics Interchange Format

The Graphics Interchange Format (GIF) is a bitmap image format that was introduced by CompuServe in 1987 and has since come into widespread usage on the World Wide Web due to its wide support and portability.

The format supports up to 8 bits per pixel, allowing a single image to reference a palette of up to 256 distinct colors chosen from the 24-bit RGB color space. It also supports animations and allows a separate palette of 256 colors for each frame. The color limitation makes the GIF format unsuitable for reproducing color photographs and other images with continuous color, but it is well-suited for simpler images such as graphics or logos with solid areas of color.

GIF images are compressed using the Lempel-Ziv-Welch (LZW) lossless data compression technique to reduce the file size without degrading the visual quality. This compression technique was patented in 1985. Controversy over the licensing agreement between the patent holder, Unisys, and CompuServe in 1994 inspired the development of the Portable Network Graphics (PNG) standard; since then all the relevant patents have expired.

Design Complies with Best Practices of Modern Web Design

The IN.gov design is built based on today’s modern Web design standards. Content is marked up semantically so that the underlying HTML markup properly designates the content displayed. Additionally, content and design are separated in accordance with modern Web design standards.

This separation of content and design has many benefits, which include:

• making it easier to make enterprise style changes across the entire site with the change of one cascading style sheet (CSS);
• faster page load times;
• pages that are more accessible to impaired users who use alternative devices; and,
• allowing the same content to be appropriately formatted for other devices, such as handhelds and printers, with only a different style sheet.

Web Design Requirements and Standards

Though standardization of Web design across an entire Web site (regardless of internal divisions of an entity) is found across the thousands of Web sites of private sector entities, governmental entities are notoriously bad at securing executive support and reaching consensus on a standard design. This was certainly true for the state of Indiana, which once had more than 75 agencies with different-looking Web sites. As a result, sites were agency-focused rather than customer-focused; confusing due to lack of consistency; presented the same types of information and functions (e.g., navigation and search) differently; and usually had a stale design, an illogical structure, and pages that were often out of date. All this, despite the fact that external customers demand an easy-to-use Web site and do not care whether the Web site is supported by the private or public sector.

To meet the expectations of its external customers, the governor’s office and the IN.gov Program took on the bold initiative to become the first state in the nation to implement a common set of design requirements and standards. These requirements and standards are derived from the work of a multi-agency task force that was established in 2007 to create a standard “look and feel” for state agency Web sites.

Wednesday, January 21, 2009

Difference between TCP and UDP

TCP ("Transmission Control Protocol") is a connection-oriented protocol, which means that upon communication it requires handshaking to set up end-to-end connection. A connection can be made from client to server, and from then on any data can be sent
along that connection.

* Reliable - TCP manages message acknowledgment, retransmission and timeout. Many attempts to reliably deliver the message are made. If data is lost along the way, it is retransmitted. In TCP, there is either no missing data or, in the case of multiple timeouts, the connection is dropped.
* Ordered - if two messages are sent along a connection, one after the other, the first message will reach the receiving application first. When data packets arrive in the wrong order, the TCP layer holds the later data until the earlier data can be rearranged and delivered to the application.
* Heavyweight - TCP requires three packets just to set up a socket, before any actual data can be sent. It handles connections, reliability and congestion control. It is a large transport protocol designed on top of IP.
* Streaming - Data is read as a "stream," with nothing distinguishing where one packet ends and another begins. Packets may be split or merged into bigger or smaller data streams arbitrarily.

UDP is a simpler message-based connectionless protocol. In connectionless protocols, no effort is made to set up a dedicated end-to-end connection. Communication is achieved by transmitting information in one direction, from source to destination, without checking whether the destination is still there or whether it is prepared to receive the information. (A minimal socket sketch contrasting the two protocols follows the list below.)
* Unreliable - When a message is sent, it cannot be known if it will reach its destination; it could get lost along the way. There is no concept of acknowledgment, retransmission and timeout.
* Not ordered - If two messages are sent to the same recipient, the order in which they arrive cannot be predicted.
* Lightweight - There is no ordering of messages, no tracking connections, etc. It is a small transport layer designed on top of IP.
* Datagrams - Packets are sent individually and are guaranteed to be whole if they arrive. Packets have definite boundaries and are never split or merged into data streams.
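The contrast shows up directly in the sockets API. A rough Python sketch (the host and port are placeholders, there is no error handling, and a server is assumed to be listening at that address): the TCP client must connect, completing the handshake, before it can send, while the UDP client simply addresses each datagram and sends it.

    import socket

    HOST, PORT = "127.0.0.1", 9999     # placeholder address; assumes a listener

    # TCP: connection-oriented, ordered byte stream.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect((HOST, PORT))          # the three-way handshake happens here
    tcp.sendall(b"hello over tcp")     # bytes arrive reliably and in order
    tcp.close()

    # UDP: connectionless datagrams, no handshake, no delivery guarantee.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello over udp", (HOST, PORT))   # each datagram stands alone
    udp.close()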

User Datagram Protocol

The User Datagram Protocol (UDP) is one of the core members of the Internet Protocol Suite, the set of network protocols used for the Internet. With UDP, computer applications can send messages, sometimes known as datagrams, to other hosts on an Internet Protocol (IP) network without requiring prior communications to set up special transmission channels or data paths. UDP is sometimes called the Universal Datagram Protocol. The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768.

UDP uses a simple transmission model without implicit hand-shaking dialogues for guaranteeing reliability, ordering, or data integrity. Thus, UDP provides an unreliable service and datagrams may arrive out of order, appear duplicated, or go missing without notice. UDP assumes that error checking and correction is either not necessary or performed in the application, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to using delayed packets. If error correction facilities are needed at the network interface level, an application may use the Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP) which are designed for this purpose.

UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients. Unlike TCP, UDP is compatible with packet broadcast (sending to all hosts on a local network) and multicasting (sending to all subscribers).

Common network applications that use UDP include: the Domain Name System (DNS), streaming media applications such as IPTV, Voice over IP (VoIP), Trivial File Transfer Protocol (TFTP) and many online games.

History Of Internet Protocol Suite

The Internet Protocol Suite resulted from work done by Defense Advanced Research Projects Agency (DARPA) in the early 1970s. After building the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn was hired at the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across them. In the spring of 1973, Vinton Cerf, the developer of the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman and Louis Pouzin, designer of the CYCLADES network, with important influences on this design.

With the role of the network reduced to the bare minimum, it became possible to join almost any networks together, no matter what their characteristics were, thereby solving Kahn's initial problem. One popular saying has it that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string." There is even an implementation designed to run using homing pigeons, IP over Avian Carriers, documented in RFC 1149.

A computer called a router (a name changed from gateway to avoid confusion with other types of gateways) is provided with an interface to each network, and forwards packets back and forth between them. Requirements for routers are defined in (Request for Comments 1812).

The idea was worked out in more detailed form by Cerf's networking research group at Stanford in the 1973–74 period, resulting in the first TCP specification (Request for Comments 675). (The early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which was contemporaneous, was also a significant technical influence; people moved between the two.)

DARPA then contracted with BBN Technologies, Stanford University, and the University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3 in the spring of 1978, and then stability with TCP/IP v4 — the standard protocol still in use on the Internet today.

In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November, 1977, a three-network TCP/IP test was conducted between the U.S., UK, and Norway. Between 1978 and 1983, several other TCP/IP prototypes were developed at multiple research centers. A full switchover to TCP/IP on the ARPANET took place January 1, 1983.

In March 1982, the US Department of Defense made TCP/IP the standard for all military computer networking. In 1985, the Internet Architecture Board held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, helping popularize the protocol and leading to its increasing commercial use.

On November 9, 2005 Kahn and Cerf were presented with the Presidential Medal of Freedom for their contribution to American culture.

Internet Protocol Suite

The Internet Protocol Suite (commonly known as TCP/IP) is the set of communications protocols used for the Internet and other similar networks. It is named after two of the most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP), which were the first two networking protocols defined in this standard. Today's IP networking represents a synthesis of several developments that began to evolve in the 1960s and 1970s, namely the Internet and LANs (local area networks), which emerged in the mid- to late 1980s, together with the invention of the World Wide Web by Tim Berners-Lee in 1989, which exploded in use with the availability of the first popular web browser, Mosaic.

The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each layer solves a set of problems involving the transmission of data, and provides a well-defined service to the upper layer protocols based on using services from some lower layers. Upper layers are logically closer to the user and deal with more abstract data, relying on lower layer protocols to translate data into forms that can eventually be physically transmitted.

The TCP/IP model consists of four layers (RFC 1122).[1][2] From lowest to highest, these are the Link Layer, the Internet Layer, the Transport Layer, and the Application Layer.

Tuesday, January 20, 2009

Web log analysis software

Web log analysis software (also called a web log analyzer) is a simple kind of Web analytics software that parses a log file from a web server, and based on the values contained in the log file, derives indicators about who, when, and how a web server is visited. Usually reports are generated from the log files immediately, but the log files can alternatively be parsed to a database and reports generated on demand.

Indicators reported by most web log analyzers

* Number of visits and number of unique visitors
* Visit duration and last visits
* Authenticated users, and last authenticated visits
* Days of the week and rush hours
* Domains/countries of visitors' hosts
* Hosts list
* Total number of page views
* Most viewed, entry and exit pages
* File types
* Operating systems used
* Browsers used
* Robots
* HTTP referrers
* Search engines, key phrases and keywords used to find the analyzed web site
* HTTP errors
* Some log analyzers also report on who is on the site, conversion tracking and page navigation.
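A few of the indicators above, such as total page views and unique visiting hosts, can be derived with a short Python sketch; it assumes an Apache-style common log format and a hypothetical log file name.

    # Count page views and unique visiting hosts from an Apache-style access log.
    import re
    from collections import Counter

    LOG_LINE = re.compile(
        r'^(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" (?P<status>\d{3}) \S+'
    )

    hosts = Counter()
    pageviews = 0
    with open("access.log") as f:       # hypothetical log file name
        for line in f:
            m = LOG_LINE.match(line)
            if not m:
                continue
            pageviews += 1
            hosts[m.group("host")] += 1

    print("page views:", pageviews)
    print("unique hosts:", len(hosts))
    print("top visitors:", hosts.most_common(3))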

Bandwidth throttling

Bandwidth throttling is a method of ensuring that a bandwidth-intensive device, such as a server, will limit ("throttle") the quantity of data it transmits and/or accepts within a specified period of time. For website servers and web applications, bandwidth throttling helps limit network congestion and server crashes, whereas for ISPs, bandwidth throttling can be used to limit users' speeds across certain applications (such as BitTorrent), or to limit upload speeds.

A server, such as a web server, is a host computer connected to a network, such as the Internet, which provides data in response to requests by client computers. Understandably, there are periods where client requests may peak (certain hours of the day, for example). Such peaks may cause congestion of data (bottlenecks) across the connection or cause the server to crash, resulting in downtime. In order to prevent such issues, a server administrator may implement bandwidth throttling to control the number of requests a server responds to within a specified period of time.

When a server using bandwidth throttling has reached the allowed bandwidth set by the administrator, it will block further read attempts, usually moving them into a queue to be processed once bandwidth use returns to an acceptable level. Bandwidth throttling will usually continue to allow write requests (such as a user submitting a form) and transmission requests, unless bandwidth use fails to return to an acceptable level.
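One common way to implement such a limit is a token bucket: each response consumes part of a byte budget that refills at a fixed rate, and once the budget is exhausted further sends must wait. A rough, single-threaded Python sketch (the rate and chunk sizes are arbitrary, and a real server would integrate this with its request queue):

    # Rough token-bucket throttle: callers ask permission to send N bytes and
    # are delayed once the configured bytes-per-second budget is exhausted.
    import time

    class Throttle:
        def __init__(self, bytes_per_second):
            self.rate = bytes_per_second
            self.tokens = bytes_per_second
            self.last = time.monotonic()

        def consume(self, nbytes):
            while True:
                now = time.monotonic()
                self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)   # wait for budget

    throttle = Throttle(bytes_per_second=64000)
    for chunk in [b"x" * 16000] * 8:        # pretend these are response chunks
        throttle.consume(len(chunk))        # blocks once the budget is used up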

Likewise, some software, such as peer-to-peer (P2P) network programs, have similar bandwidth throttling features, which allow a user to set desired maximum upload and download rates, so as not to consume the entire available bandwidth of his or her Internet connection.

Professional Web Designing Tips

Web design combines innovation with technical and graphical skills. Web page design is the planning, modeling, creation and execution of electronic media content delivered over the Internet in the form of a website. A competent web designer can use simple tools to produce a professional website. A web page brings together techniques, tools, designs, programs and graphics such as templates, background colors, images, content, fonts, code, page colors, tables and navigation. Some web design tips:

Page design is one of the most important elements of a web design. It covers background color, images, color, fonts and graphics.

A good website design provides adequate hyperlinks, images, content and CSS.

Experts make use of Flash, CSS and languages such as HTML, JavaScript, PHP and XML to create a professional, attractive and impressive web design.

A design should also take account of search engine optimization and web content. Content plays a major role in web design, and a professional website designer knows the importance of quality content.

Proper modeling, execution and formatting of web designs, such as those created by atomic55.net, make for an effective website design.

Web server

The term web server can mean one of two things:

1. A computer program that is responsible for accepting HTTP requests from clients (user agents such as web browsers), and serving them HTTP responses along with optional data contents, which usually are web pages such as HTML documents and linked objects (images, etc.).

2. A computer that runs a computer program as described above.

Common features

Although web server programs differ in detail, they all share some basic common features.

1. HTTP: every web server program operates by accepting HTTP requests from the client and providing an HTTP response to the client. The HTTP response usually consists of an HTML document, but can also be a raw file, an image, or some other type of document (defined by MIME types). If an error is found in the client request, or occurs while trying to serve it, the web server has to send an error response, which may include a custom HTML or text message to better explain the problem to end users.

2. Logging: web servers usually also have the capability of logging detailed information about client requests and server responses to log files; this allows the webmaster to collect statistics by running log analyzers on these files.

In practice, many web servers also implement the following features (a minimal request-handling sketch follows the list):

1. Authentication: optional authorization requests (user name and password) before allowing access to some or all kinds of resources.

2. Handling of static content (file content recorded in server's filesystem(s)) and dynamic content by supporting one or more related interfaces (SSI, CGI, SCGI, FastCGI, JSP, PHP, ASP, ASP.NET, Server API such as NSAPI, ISAPI, etc.).

3. HTTPS support (via SSL or TLS) to allow secure (encrypted) connections to the server, on the standard port 443 instead of the usual port 80.

4. Content compression (e.g., by gzip encoding) to reduce the size of the responses (to lower bandwidth usage, etc.).

5. Virtual hosting to serve many web sites using one IP address.

6. Large file support, to be able to serve files whose size is greater than 2 GB on a 32-bit OS.

7. Bandwidth throttling to limit the speed of responses in order to not saturate the network and to be able to serve more clients.
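A minimal sketch of the two basic features listed first, HTTP request handling and logging, using Python's standard http.server module (the port, directory and log file name are arbitrary; a production server would add the optional features above):

    # Serve static files from the current directory and log each request.
    # This is a toy illustration of the HTTP + logging features, nothing more.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class LoggingHandler(SimpleHTTPRequestHandler):
        def log_message(self, fmt, *args):
            # Append one line per request to an access log instead of stderr.
            with open("access.log", "a") as log:
                log.write("%s - %s\n" % (self.client_address[0], fmt % args))

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), LoggingHandler).serve_forever()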

Monday, January 19, 2009

Domain name

The main purpose of a domain name is to provide symbolic representations, i.e., recognizable names, to mostly numerically addressed Internet resources. This abstraction allows any resource (e.g., website) to be moved to a different physical location in the address topology of the network, globally or locally in an intranet, in effect changing the IP address. This translation from domain names to IP addresses (and vice versa) is accomplished with the global facilities of Domain Name System (DNS).

By allowing the use of unique alphabetical addresses instead of numeric ones, domain names allow Internet users to more easily find and communicate with web sites and other IP-based communications services. The flexibility of the domain name system allows multiple IP addresses to be assigned to a single domain name, or multiple domain names to be served from a single IP address. This means that one server may have multiple roles (such as hosting multiple independent websites), or that one role can be spread among many servers. One IP address can also be assigned to several servers, as used in anycast networking.
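This name-to-address translation is exposed directly by most languages' standard libraries; a small Python example (the host name is only an example, and the printed address will vary):

    # Resolve a domain name to its IP addresses via the system's DNS resolver.
    import socket

    infos = socket.getaddrinfo("example.com", 80, proto=socket.IPPROTO_TCP)
    for family, _type, _proto, _canon, sockaddr in infos:
        print(family.name, sockaddr[0])    # e.g. AF_INET 93.184.216.34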

Personal web page

Personal web pages are World Wide Web pages created by an individual to contain content of a personal nature. The content can be about that person or about something he or she is interested in. Personal web pages can be the entire content of a domain name belonging to the person (which would then be a personal website), or can be a page or pages that are part of a larger domain on which other pages are located; GeoCities is an example of one such larger site. Another example would be a student's website for school. Personal web pages are often used solely for informative or entertainment purposes. Defining "personal web page" is difficult, because many domains, or combinations of web pages under the control of a single individual, can be used for commercial purposes, ranging from the mere presentation of advertising to electronic commerce: the sale of goods, services or information. In fact, eBay began as the personal web page of Pierre Omidyar.

Personal web pages may be as simple as a single page or may be as elaborate as an online database with gigabytes of data. Many Internet service providers offer a few megabytes of space for customers to host their own personal web pages.

The content of personal web pages varies and can, depending on the hosting server, contain anything that any other websites do. However, typical personal web pages contain images, text and a collection of hyperlinks. Many can contain biographical information, résumés, and blogs. Many personal pages will include information about the author's hobbies and pastimes, and information of interest to friends and family of the author.

Monday, January 12, 2009

Java applet

A Java applet is an applet delivered to the users in the form of Java bytecode. Java applets can run in a Web browser using a Java Virtual Machine (JVM), or in Sun's AppletViewer, a stand-alone tool for testing applets. Java applets were introduced in the first version of the Java language in 1995. Java applets are usually written in the Java programming language but they can also be written in other languages that compile to Java bytecode such as Jython.

Applets are used to provide interactive features to web applications that cannot be provided by HTML. Since Java bytecode is platform-independent, Java applets can be executed by browsers on many platforms, including Windows, Unix, Mac OS and Linux. There are open-source tools such as applet2app which can be used to convert an applet into a stand-alone Java application, Windows executable or Linux executable. This has the advantage of running a Java applet offline without the need for web browser software.

A Java servlet is sometimes informally described as being "like" a server-side applet, but it differs in its functions and in each of the characteristics described here for applets.

Java (software platform)

Java refers to a number of computer software products and specifications from Sun Microsystems that together provide a system for developing application software and deploying it in a cross-platform environment. Java is used in a wide variety of computing platforms from embedded devices and mobile phones on the low end, to enterprise servers and supercomputers on the high end. Java is nearly ubiquitous in mobile phones, Web servers and enterprise applications, and while less common on desktop computers, Java applets are often used to provide improved functionality while browsing the World Wide Web.

Writing in the Java programming language is the primary way to produce code that will be deployed as Java bytecode, though there are compilers available for other languages such as JavaScript, Python and Ruby, and a native Java scripting language called Groovy. Java syntax borrows heavily from C and C++ but it eliminates certain low-level constructs such as pointers and has a very simple memory model where every object is allocated on the heap and all variables of object types are references. Memory management is handled through integrated automatic garbage collection performed by the Java Virtual Machine (JVM).

On 13 November 2006, Sun Microsystems made the bulk of its implementation of Java available under the GNU General Public License, although there are still a few parts distributed as precompiled binaries due to intellectual property restrictions.

Thursday, January 8, 2009

Internet Telephony (VoIP)

VoIP stands for Voice-over-Internet Protocol, referring to the protocol that underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP can be free or cost much less than a traditional telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL.

VoIP is maturing into a competitive alternative to traditional telephone service. Interoperability between different providers has improved and the ability to call or receive a call from a traditional telephone is available. Simple, inexpensive VoIP network adapters are available that eliminate the need for a personal computer.

Voice quality can still vary from call to call but is often equal to and can even exceed that of traditional calls.

Remaining problems for VoIP include emergency telephone number dialling and reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally available. Traditional phones are line-powered and operate during a power failure; VoIP does not do so without a backup power source for the phone equipment and the Internet access devices.

VoIP has also become increasingly popular for gaming applications, as a form of communication between players. Popular VoIP clients for gaming include Ventrilo and Teamspeak, and others. PlayStation 3 and Xbox 360 also offer VoIP chat features.

The World Wide Web

Many people use the terms Internet and World Wide Web (or just the Web) interchangeably, but, as discussed above, the two terms are not synonymous.

The World Wide Web is a huge set of interlinked documents, images and other resources, linked by hyperlinks and URLs. These hyperlinks and URLs allow the web servers and other machines that store originals, and cached copies of, these resources to deliver them as required using HTTP (Hypertext Transfer Protocol). HTTP is only one of the communication protocols used on the Internet.

Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.

Software products that can access the resources of the Web are correctly termed user agents. In normal use, web browsers, such as Internet Explorer, Firefox and Apple Safari, access web pages and allow users to navigate from one to another via hyperlinks. Web documents may contain almost any combination of computer data including graphics, sounds, text, video, multimedia and interactive content including games, office applications and scientific demonstrations.

Through keyword-driven Internet research using search engines like Yahoo! and Google, millions of people worldwide have easy, instant access to a vast and diverse amount of online information. Compared to encyclopedias and traditional libraries, the World Wide Web has enabled a sudden and extreme decentralization of information and data.

Using the Web, it is also easier than ever before for individuals and organisations to publish ideas and information to an extremely large audience. Anyone can find ways to publish a web page, a blog or build a website for very little initial cost. Publishing and maintaining large, professional websites full of attractive, diverse and up-to-date information is still a difficult and expensive proposition, however.

Many individuals and some companies and groups use "web logs" or blogs, which are largely used as easily updatable online diaries. Some commercial organisations encourage staff to fill them with advice on their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work.

Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and MySpace currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts.

Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and services directly via the Web continues to grow.

In the early days, web pages were usually created as sets of complete and isolated HTML text files stored on a web server. More recently, websites are more often created using content management or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organisation or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.
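
As a rough illustration of that content-management pattern (contributors write raw content into a database, while visitors receive the final HTML), here is a hedged, toy sketch in Python. The table and function names are invented for this example and do not correspond to any particular CMS or wiki package.

    # Contributors store raw content in a database; visitors get rendered HTML.
    import sqlite3
    from html import escape

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE articles (slug TEXT PRIMARY KEY, title TEXT, body TEXT)")

    def save_article(slug, title, body):
        """What an editing page does: store raw content, not HTML files."""
        db.execute("INSERT OR REPLACE INTO articles VALUES (?, ?, ?)", (slug, title, body))
        db.commit()

    def render_article(slug):
        """What a visitor's request triggers: content is turned into HTML on demand."""
        row = db.execute("SELECT title, body FROM articles WHERE slug = ?", (slug,)).fetchone()
        if row is None:
            return "<h1>404 Not Found</h1>"
        title, body = row
        return f"<html><body><h1>{escape(title)}</h1><p>{escape(body)}</p></body></html>"

    save_article("welcome", "Welcome", "This content lives in a database, not in an HTML file.")
    print(render_article("welcome"))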

Internet and the workplace

The Internet is allowing greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections and Web applications.

The Internet viewed on mobile devices

The Internet can now be accessed virtually anywhere by numerous means. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet from anywhere there is a cellular network supporting that device's technology.

Within the limitations imposed by the small screen and other limited facilities of such a pocket-sized device, all the services of the Internet, including email and web browsing, may be available in this way. Service providers may restrict the range of these services and charges for data access may be significant, compared to home usage.

Common uses

E-mail

The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Even today it can be important to distinguish between Internet and internal e-mail systems. Internet e-mail may travel and be stored unencrypted on many other networks and machines out of both the sender's and the recipient's control. During this time it is quite possible for the content to be read and even tampered with by third parties, if anyone considers it important enough. Purely internal or intranet mail systems, where the information never leaves the corporation's or organization's network, are much more secure, although in any organization there will be IT and other personnel whose job may involve monitoring, and occasionally accessing, the e-mail of other employees not addressed to them.

Wednesday, January 7, 2009

Static and Dynamic web page

A static Web page is a Web page that always comprises the same information in response to all download requests from all users. Contrast with Dynamic web page.

It displays the same information to all users, in all contexts, providing classical hypertext navigation through "static" documents.

Advantages

* Quick and easy to put together, even by someone who doesn't have much experience.
* Ideal for demonstrating how a site will look.
* Cache friendly; one copy can be served to many people.

Disadvantages

* Difficult to maintain when a site gets large.
* Difficult to keep consistent and up to date.
* Offers little visitor personalization (any personalization would have to be done client-side).
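
The "cache friendly" advantage listed above follows from the fact that a static page is just a file handed out verbatim to every visitor. As a minimal sketch (not how production static hosts are built), Python's standard library can serve a directory of fixed HTML files:

    # Every visitor receives the same files, byte for byte, from the file system.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serves the current directory (index.html and friends) on port 8000;
    # equivalent to running:  python -m http.server 8000
    HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()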

Dynamic web page

Classical hypertext navigation occurs among "static" documents, and, for web users, this experience is reproduced using static web pages. However, web navigation can also provide an interactive experience that is termed "dynamic". Content (text, images, form fields, etc.) on a web page can change in response to different contexts or conditions. There are two ways to create this kind of interactivity:

1. Using client-side scripting to change interface behaviors within a specific web page, in response to mouse or keyboard actions or at specified timing events. In this case the dynamic behavior occurs within the presentation.
2. Using server-side scripting to change the supplied page source between pages, adjusting which web pages or web content are delivered to the browser and when. Server responses may be determined by such conditions as data in a posted HTML form, parameters in the URL, the type of browser being used, the passage of time, or a database or server state.

The result of either technique is described as a dynamic web page, and both may be used simultaneously.

Web pages that adhere to the first definition use presentation technology broadly described as rich interface pages. Client-side scripting languages such as JavaScript or ActionScript, used for Dynamic HTML (DHTML) and Flash respectively, are frequently used to orchestrate the media types (sound, animations, changing text, etc.) of the presentation. The scripting also allows the use of remote scripting, a technique by which the DHTML page requests additional information from a server, using a hidden frame, XMLHttpRequest, or a web service.

Web pages that adhere to the second definition are often created with the help of server-side languages such as PHP, Perl, ASP or ASP.NET, and JSP. These server-side languages typically run through the Common Gateway Interface (CGI) or an equivalent server interface to produce dynamic web pages. Such pages can also use client-side techniques of the first kind (DHTML, etc.).
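
A hedged sketch of the server-side approach follows. Real sites would use PHP, CGI scripts, ASP.NET and so on, as noted above; this example uses only Python's standard library to show the core idea of generating the page source per request and varying it with a URL parameter. The port number and the "name" parameter are arbitrary choices for the demonstration.

    from html import escape
    from http.server import HTTPServer, BaseHTTPRequestHandler
    from urllib.parse import urlparse, parse_qs

    class DynamicPage(BaseHTTPRequestHandler):
        def do_GET(self):
            # a condition taken from the URL, e.g. /?name=Alice
            params = parse_qs(urlparse(self.path).query)
            name = escape(params.get("name", ["world"])[0])
            html = f"<html><body><h1>Hello, {name}!</h1></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(html.encode("utf-8"))   # page source built per request

    HTTPServer(("", 8000), DynamicPage).serve_forever()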

Web integration

Web Integration means leveraging the enormous success of the web browser to access services and information on the Web. Such services include, for example, looking up news archives, searching for cheap flights and ordering cinema tickets, and even editing Wikipedia. Information can include, for example, search results from Google or content from any other online information source, including RSS feeds. Web Integration allows for fast integration of any web-browsable content, data, and applications into portals, wireless devices, content management systems, applications, databases, RSS feeds, REST or web services.

Web Integration is the engine behind most mashup (web application hybrid) sites today. Some of them use commercial Web Integration products, while others use technologies such as Python or Perl.
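
As a hedged illustration of the mash-up idea, the sketch below pulls the titles of two ordinary web pages (no RSS feed required) and combines them into a new page. The URLs are documentation placeholders; a real mash-up would point at the sites it actually integrates and would extract far richer data than a page title.

    import re
    from urllib.request import urlopen

    def page_title(url):
        """Scrape the <title> of any browsable page -- the 'web as database' idea."""
        html = urlopen(url).read().decode("utf-8", errors="replace")
        match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
        return match.group(1).strip() if match else url

    sources = ["http://example.com/", "http://example.org/"]    # placeholder sources
    items = "".join(f"<li>{page_title(url)}</li>" for url in sources)
    print(f"<html><body><h1>My mash-up</h1><ul>{items}</ul></body></html>")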

Benefits

The web front-end is the most widespread interface on the web: by definition, everything on the Web is accessible from a web browser and can thus be accessed with Web Integration. This gives many benefits, such as:

* Anything on the web can be mashed up as-is, so the entire web can be used for mash-ups.
* The web's human interface is easy to understand; no deep programming skills are needed to work with Web Integration.
* Mash-up applications can be built without any interference with the sites being integrated.
* The entire Internet becomes a database of information, even when there is no RSS or other data feed, as long as the information is available from a web browser.

Types of Integration

# Integration at the presentation layer. This layer is the human user interface, either web-based or a platform-specific GUI or terminal interface, and it is how a user interacts with an application. Integration at the presentation layer provides access to the user interface of a remote application.
# Integration at the functional layer. This type of integration provides direct access to the business logic of applications. It is attained by interacting with an application's API or with its web services (see the sketch after this list).
# Integration at the data layer. In this case, the integrating application accesses one or more databases used by a remote application.
# Complex integration. Commercial web-integration solutions, as a rule, combine all three types of integration.
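
A brief sketch of functional-layer integration: instead of scraping a user interface, the integrating application calls the remote application's web-service API and receives structured data. The endpoint URL and field names below are hypothetical, invented only for this illustration.

    import json
    from urllib.request import urlopen

    def lookup_order(order_id):
        # hypothetical API endpoint of a remote business application
        url = f"http://erp.example.com/api/orders/{order_id}"
        with urlopen(url) as response:
            order = json.load(response)            # structured data, not HTML
        return order["status"], order["total"]     # hypothetical fields

    status, total = lookup_order(42)
    print(f"Order 42 is {status}, total {total}")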

User Interface and Interaction Design

User interface design or user interface engineering is the design of computers, appliances, machines, mobile communication devices, software applications, and websites with a focus on the user's experience and interaction. Where traditional graphic design seeks to make the object or application physically attractive, the goal of user interface design is to make the user's interaction as simple and efficient as possible in terms of accomplishing user goals, an approach often called user-centered design. Where good graphic or industrial design is bold and eye-catching, good user interface design aims to facilitate finishing the task at hand rather than drawing unnecessary attention to itself. Graphic design may be used to apply a theme or style to the interface without compromising its usability. The design process must balance the meaning of the visual elements, which should conform to the user's mental model of operation, against the functionality required from a technical engineering perspective, in order to create a system that is both usable and adaptable to changing user needs.

User Interface design is involved in a wide range of projects from computer systems, to cars, to commercial planes; all of these projects involve much of the same basic human interaction yet also require some unique skills and knowledge. As a result, user interface designers tend to specialize in certain types of projects and have skills centered around their expertise, whether that be software design, user research, web design, or industrial design.

User Interface and Interaction Design

Designing the visual composition and temporal behavior of a GUI is an important part of software application programming. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline known as usability. Techniques of user-centered design are used to ensure that the visual language introduced in the design is well tailored to the tasks it must perform.

Typically, the user interacts with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of the user. A model-view-controller (MVC) architecture allows for a flexible structure in which the interface is independent from, and indirectly linked to, application functionality, so the GUI can be easily customized. This allows the user to select or design a different skin at will, and eases the designer's work of changing the interface as user needs evolve. Nevertheless, good user interface design relates to the user, not the system architecture.
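
To make the model-view-controller idea concrete, here is a deliberately tiny sketch (class names are illustrative, not from any framework): the model holds the application state, the views render it, and the controller links the two, so a different "skin" can be swapped in without touching the model.

    class CounterModel:                       # application functionality and state
        def __init__(self):
            self.value = 0
        def increment(self):
            self.value += 1

    class PlainView:                          # one possible "skin"
        def render(self, model):
            return f"Count: {model.value}"

    class FancyView:                          # another skin, swapped in at will
        def render(self, model):
            return f"*** The count is now {model.value} ***"

    class CounterController:                  # indirect link between interface and model
        def __init__(self, model, view):
            self.model, self.view = model, view
        def click(self):
            self.model.increment()
            return self.view.render(self.model)

    controller = CounterController(CounterModel(), PlainView())
    print(controller.click())                 # Count: 1
    controller.view = FancyView()             # change the skin; the model is untouched
    print(controller.click())                 # *** The count is now 2 ***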

The visible graphical interface features of an application are sometimes referred to as "chrome".[4] Larger widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message or drawing. Smaller ones usually act as a user-input tool.

A GUI may be designed for the rigorous requirements of a vertical market. This is known as an "application specific graphical user interface." Examples of an application specific GUI are:

* Touchscreen point of sale software used by waitstaff in a busy restaurant
* Self-service checkouts used in a retail store
* Automated teller machines (ATM)
* Airline self-ticketing and check-in
* Information kiosks in a public space, like a train station or a museum
* Monitors or control screens in an embedded industrial application which employ a real time operating system (RTOS).

The latest cell phones and handheld game systems also employ application specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and touch screen multimedia centers.

Tuesday, January 6, 2009

Web page

A web page or webpage is a resource of information that is suitable for the World Wide Web and can be accessed through a web browser. This information is usually in HTML or XHTML format, and may provide navigation to other web pages via hypertext links.

Web pages may be retrieved from a local computer or from a remote web server. The web server may restrict access only to a private network, e.g. a corporate intranet, or it may publish pages on the World Wide Web. Web pages are requested and served from web servers using Hypertext Transfer Protocol (HTTP).

Web pages may consist of files of static text stored within the web server's file system (static web pages), or the web server may construct the (X)HTML for each web page when it is requested by a browser (dynamic web pages). Client-side scripting can make web pages more responsive to user input once in the client browser.

File hosting service

A file hosting service, online file storage service, or online media center is an Internet hosting service specifically designed to host static content, typically large files that are not web pages. Such services usually allow access via the Web and FTP. They can be optimized for serving many users (as implied by the term "hosting") or for single-user storage (as implied by the term "storage"). Related services are video sharing, virtual storage and remote backup.

Uses

Software file hosting

Shareware authors often use file hosting services to serve their software. The inherent problem with offering free downloads is the high bandwidth cost. These hosts also offer the authors additional services such as statistics and other marketing features.

Personal file storage

Personal file storage services are aimed at private individuals, offering a sort of "network storage" for personal backup, file access, or file distribution. Users can upload their files and share them publicly or keep them password-protected.

Prior to the advent of personal file storage services, off-site backup services were not typically affordable for individual and small office computer users.

Sometimes people prefer to host their files on a publicly accessible HTTP server. In this case they generally choose paid hosting and use it for this purpose. Many free hosting providers do not allow the storage of files for non-website-related use.

Content caching

Content providers who may encounter bandwidth congestion can use services that specialize in distributing cached or static content. This is the case for companies with a major Internet presence.

Cyberspace

Cyberspace is the global domain of electromagnetics, accessed through electronic technology and exploited through the modulation of electromagnetic energy to achieve a wide range of communication and control system capabilities. The term is rooted in the science of cybernetics and Norbert Wiener's pioneering work in electronic communication and control science, a forerunner to current information theory and computer science. Through its electromagnetic nature, cyberspace integrates a number of capabilities (sensors, signals, connections, transmissions, processors, controllers) and generates a virtual interactive experience accessed for the purpose of communication and control, regardless of geographic location.

In pragmatic terms, cyberspace comprises the interdependent network of information technology infrastructures (ITI): telecommunications networks such as the Internet, computer systems, integrated sensors, system control networks, and the embedded processors and controllers common to global control and communications. As a social experience, individuals can interact, exchange ideas, share information, provide social support, conduct business, direct actions, create artistic media, play simulation games, engage in political discussion, and so on. The term was originally coined by the cyberpunk science fiction author William Gibson.[1] The now-ubiquitous term has become a conventional means of describing anything associated with computers, information technology, the Internet and the diverse Internet culture.

Cognitive metaphor

The cognitive metaphor of a website is the association of the site concept with an experience outside of the site's environment. It is used to enhance the level of comfort the user experiences when using the website, since the association relates the navigational schemes, processes, and informational areas of the site to something familiar. For example, a tabbed metaphor can be used to organize a site's information because users can relate the site's organization to a file drawer of tabbed file folders. This relationship to a familiar file drawer of folders allows a user who is unfamiliar with a website to navigate it comfortably and with less aggravation.

Literature and Cognitive Metaphor

"The most recent linguistic approach to literature is that of cognitive metaphor, which claims that metaphor is not a mode of language, but a mode of thought. Metaphors project structures from source domains of schematized bodily or enculturated experience into abstract target domains. We conceive the abstract idea of life in terms of our experiences of a journey, a year, or a day. We do not understand Robert Frost's "Stopping by Woods on a Snowy Evening" to be about a horse-and-wagon journey but about life. We understand Emily Dickinson's "Because I Could Not Stop for Death" as a poem about the end of the human life span, not a trip in a carriage. This work is redefining the critical notion of imagery. Perhaps for this reason, cognitive metaphor has significant promise for some kind of rapprochement between linguistics and literary study."

Monday, January 5, 2009

Colocation centre for webhosting service

A colocation centre (collocation center) ("colo") or carrier hotel is a type of data center where multiple customers locate network, server and storage gear and interconnect to a variety of telecommunications and other network service provider(s) with a minimum of cost and complexity.

Increasingly, organizations are recognizing the benefits of colocating their mission-critical equipment within a data centre. Colocation is becoming popular because of the time and cost savings a company can realize as a result of using shared data centre infrastructure. Significant economies of scale (large power and mechanical systems) favour large colocation facilities, typically 4,500 to 9,500 square metres (roughly 50,000 to 100,000 square feet). With IT and communications facilities in safe, secure hands, telecommunications, Internet, ASP and content providers, as well as enterprises, enjoy lower latency and the freedom to focus on their core business.

Additionally, customers reduce their traffic back-haul costs and free up their internal networks for other uses. Moreover, by outsourcing network traffic to a colocation service provider with greater bandwidth capacity, web site access speeds should improve considerably.

Major types of colocation customers are:

* Web commerce companies, who use the facilities for a safe environment and cost-effective, redundant connections to the Internet
* Major enterprises, who use the facility for disaster avoidance, offsite data backup and business continuity
* Telecommunication companies, who use the facilities to exchange traffic with other telecommunications companies and to gain access to potential clients

Clustered hosting

Clustered hosting technology is designed to eliminate the problems inherent with typical shared hosting infrastructures. This technology provides customers with a “clustered” handling of security, load balancing, and necessary website resources.

A clustered hosting platform is data-driven, which means that no human interaction is needed to provision a new account to the platform.

Clustered hosting "virtualizes" the resources beyond the limits of one physical server, and as a result, a website is not limited to one server. They share the processing power of many servers and their applications are distributed in real-time. This means that they can purchase as much computing power as they want from a virtually inexhaustible source, since even the largest customer never consumes more than a fraction of a percent of the total server pool. Customer account changes (to add new resources or change settings) are propagated immediately to every server in the cluster. This is different from typical shared hosting architectures that usually require changes to a configuration file that becomes live after the server is rebooted during off hours, or are pushed on a cyclic basis every few hours.

Multiple tiers of security are integrated into the clustered hosting platform. In a typical hosting environment, the security layer is usually not integrated in the platform. The stock solutions used for shared hosting do not solve core issues around integrating security between the application and the operating system. At best, most typical hosts will implement a firewall solution, and weaknesses inherent with the operating system will remain exploitable to those that penetrate the firewall.

Clustered hosting network-layer protections employ intelligent routing, redundant switching fabric, and built-in firewall and proxy technology. Clustered hosting provides considerable advantages over traditional hosting architectures in mitigating denial-of-service and other network attacks, because such attacks can be dispersed over a large pool of servers, and if individual hardware components are affected, they automatically drop out of traffic handling for the duration of the attack.

Grids versus conventional supercomputers

Distributed" or "grid" computing in general is a special type of parallel computing[citation needed] which relies on complete computers (with onboard CPU, storage, power supply, network interface, etc.) connected to a network (private, public or the Internet) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus.

The primary advantage of distributed computing is that each node can be purchased as commodity hardware, which when combined can produce similar computing resources to a multiprocessor supercomputer, but at lower cost. This is due to the economies of scale of producing commodity hardware, compared to the lower efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. This arrangement is thus well-suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors.

The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public Internet. There are also some differences in programming and deployment: it can be costly and difficult to write programs that run in the environment of a supercomputer, which may have a custom operating system or require the program to address concurrency issues.

If a problem can be adequately parallelized, a "thin" layer of "grid" infrastructure can allow conventional, standalone programs to run on multiple machines (but each given a different part of the same problem). This makes it possible to write and debug on a single conventional machine, and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.
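
A hedged sketch of that "thin layer" idea: a cleanly parallelizable problem is cut into independent chunks, and the same unmodified worker runs on each chunk with no communication in between. Here local processes stand in for separate machines; real grid middleware would dispatch the chunks across a network.

    from multiprocessing import Pool

    def worker(chunk):
        """A conventional, standalone computation: sum of squares over one chunk."""
        start, stop = chunk
        return sum(n * n for n in range(start, stop))

    if __name__ == "__main__":
        # carve the full problem (0..1_000_000) into independent chunks
        chunks = [(i, i + 100_000) for i in range(0, 1_000_000, 100_000)]
        with Pool() as pool:
            partial_results = pool.map(worker, chunks)   # each chunk could run anywhere
        print("total:", sum(partial_results))            # combine results at the end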

Grid computing

Grid computing (or the use of a computational grid) is the application of several computers to a single problem at the same time - usually to a scientific or technical problem that requires a great number of computer processing cycles or access to large amounts of data. According to John Patrick, IBM's vice-president for Internet strategies, "the next big thing will be grid computing."

Grid computing depends on software to divide and apportion pieces of a program among several computers, sometimes up to many thousands. Grid computing can also be thought of as distributed, large-scale cluster computing, as well as a form of network-distributed parallel processing. It can be small, confined to a network of computer workstations within a corporation, or it can be large, a public collaboration across many companies or networks.

It is a form of distributed computing whereby a "super and virtual computer" is composed of a cluster of networked, loosely-coupled computers, acting in concert to perform very large tasks. This technology has been applied to computationally-intensive scientific, mathematical, and academic problems through volunteer computing, and it is used in commercial enterprises for such diverse applications as drug discovery, economic forecasting, seismic analysis, and back-office data processing in support of e-commerce and web services.

What distinguishes grid computing from conventional cluster computing systems is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed. Also, while a computing grid may be dedicated to a specialized application, it is often constructed with the aid of general purpose grid software libraries and middleware.

Friday, January 2, 2009

Website awards for advertising and design

Several advertising and design award schemes now include categories for websites and other interactive media. Among the most prominent are:

Addy Awards

The Addy awards are operated by the American Advertising Federation, which is based in Washington DC, USA. In addition to awards for print, poster, and television advertisements, there are several categories for interactive media. These include: Business to Business Websites, Consumer Websites, Banners & Pop-Ups, E-Cards, Micro & Mini Websites, Online Games, Online Newsletters, Podcasts, Mobile Marketing, Internet Commercials, and Webisodes. Selection is based on a judgement of creative quality.

Clio Awards

The Clio awards program, which is based in New York, USA, is operated by Nielsen, the Dutch media conglomerate formerly known as VNU. Clio awards recognise excellence in advertising and design. There is an interactive category, which grants awards to websites. Other categories include: TV/Cinema, Print, Poster, and Billboard. The Clio jury comprises more than 100 judges drawn from more than 60 countries. Awards are granted during the four-day Clio Festival, held each May in Miami, Florida.

D&AD

The D&AD awards program is operated by D&AD, a non-profit organization based in London, England, and founded in 1962, which represents the global creative, design and advertising communities. D&AD offers several annual awards for websites, including awards for: Websites, Microsites, New Uses of Websites, Writing, Sound Design, Interface & Navigation, and Photography. Two levels of award are granted: the Yellow Pencil (equivalent to a silver award) and the coveted Black Pencil (equivalent to a gold award).

Website awards

The internet industry has established various award schemes for websites, following the example of the Tony, Oscar, BAFTA, Cannes Film Festival and Emmy awards which are granted in the fields of theatre, film and television. This article covers notable English language website award schemes.

* General website awards
* Best-on-topic website awards
* Website awards for advertising and design
* Aggregating award sites

General website awards

There are numerous general website award schemes, many of which carry little credibility. Among the most prominent general website award schemes are:

Favourite Website Awards

The Favourite Website Awards (FWA) scheme has been operated since 2000 by FWA, which is based in Knebworth, England. FWA claims to be the world's most visited website award program, receiving more than 1 million visits per month. FWA selects a Site of the Day, a Site of the Month, and a Site of the Year. Recent Sites of the Day are shown as thumbnails on the FWA front page. FWA bases its selections on: Design 40%, Navigation 25%, Graphics 15%, Content 15%, and Personality 5%. As a result of the emphasis on design and graphics, winning websites tend to be strikingly designed and visually attractive. In addition to the Site of the Year, there is an annual People's Choice Award decided by online public vote.
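
FWA's published weighting amounts to a simple weighted average. In the sketch below the weights come from the criteria just listed, while the candidate scores (out of 10) are made up purely to show the arithmetic.

    # Weighted average using FWA's published criteria weights.
    weights = {"Design": 0.40, "Navigation": 0.25, "Graphics": 0.15,
               "Content": 0.15, "Personality": 0.05}

    scores = {"Design": 9, "Navigation": 7, "Graphics": 8,
              "Content": 6, "Personality": 8}               # hypothetical judging scores

    overall = sum(weights[c] * scores[c] for c in weights)
    print(f"Weighted score: {overall:.2f} / 10")            # 0.40*9 + 0.25*7 + ... = 7.85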

Interactive Media Awards

The Interactive Media Awards (IMA) scheme has been operated since 2004 by the non-profit Interactive Media Council Inc., which is based in New York, USA. Entries are judged on the following criteria: Design, Content, Feature Functionality, Usability, and Standards Compliance & Cross-Browser Compatibility. IMA takes the view that a website rich in graphic design is of little value if its content is weak, boring and useless. Awards are granted annually in 100 categories. These are divided into four quarterly judging rounds. Categories include Advertising, Agriculture, Arts & Culture, Banking, Community, Education, Energy, Legal, News, Politics, Real Estate, School, Spirituality, and Sports.

WebAwards

The WebAwards scheme has been operated since 1997 by the Web Marketing Association, based in Simsbury, Connecticut, USA. It grants annual awards to websites in 96 industry categories including Advertising, Architecture, Automobile, Banks, Broadcasting, Insurance, Investor Relations, Legal, Leisure, Media, Medical, Military, Movies, Music, News, Pharmaceuticals, Political, Real Estate, Retail, School, Sports, Technology, Travel, and University. Entries are judged by three or more expert judges on seven criteria, each of which is given equal weight. The criteria are: Design, Innovation, Content, Technology, Interactivity, Copywriting, and Ease of use.

Webbys

The Webby Awards scheme has been operated since 1996 by the International Academy of Digital Arts and Sciences, which is based in New York, USA. Awards are granted each spring for websites which demonstrate Best Practice in: Content, Structure & Navigation, Visual Design, Interactivity, Functionality, and Overall Experience. Other award categories include: Activism, Commerce, Fashion, Humor, Kids, News, Politics, Science, and Sports. There are Business Website awards covering categories which include: Automotive, Financial Services, Professional Services, Retail, and Travel. The main awards are decided by a panel of judges. The related People's Voice Awards are decided by online public vote.

Best-on-topic website awards

Beesker. The Beesker award scheme, operated by Extonet Ltd of Cambridge, UK, selects the world's best website on each of several hundred narrow topics, from Aardvarks to Zippers. Selection is based on depth and reliability of content and clarity of presentation. Some 400 topics are covered in the following categories: Arts & Entertainment, Education, Food & Drink, Hobbies, Home & Garden, Natural World, People, Places & Travel, Sports, and Technology.

History of Website builder

The first websites were created in the early 1990s. These sites were written by hand in a markup language called HTML. The early versions of HTML were very basic, giving websites only basic structure (headings and paragraphs) and the ability to link using hypertext. As the Web and web design progressed, the markup language became more complex and flexible, adding the ability to place objects like images and tables on a page. Features like tables, originally intended for displaying tabular information, were soon subverted for use as invisible layout devices. Page layout using tables made pages difficult to update, as adding information generally meant rewriting the whole page. With the advent of Cascading Style Sheets (CSS), formatting became separated from content, making pages easier to edit. Database integration technologies such as server-side scripting, and design standards from the W3C, further changed and enhanced the way pages are built.

Software was written to help design web pages, and by 1998 Dreamweaver had been established as the industry leader; however, purists criticised the quality of the code produced by such software as bloated and reliant on tables. As the industry moved towards W3C standards, Dreamweaver, amongst others, was criticised for not being compliant. The Acid2 test, developed by the Web Standards Project, is used to test compliance with standards, and most modern web builders now support CSS and are more or less compliant, though many professionals still prefer to write optimized source code by hand.

Open source software for building websites took much longer to become established, mainly because of problems with browser compliance with standards. Most open source developers are more interested in being standards-compliant than commercially viable, whereas those producing software for sale need it to work with Internet Explorer, which is still not completely standards-compliant. The W3C started Amaya in 1996 to showcase Web technologies in a fully-featured Web client, providing a framework that integrated many W3C technologies in a single, consistent environment. Amaya started as an HTML and CSS style sheet editor and now supports XML, XHTML, MathML, and SVG.

With the coming of the second generation of the Web, also known as Web 2.0, many more people are surfing the web, few of whom have any technical knowledge. These users want an easy and stress-free experience. With the explosion of commerce on the Internet, more and more people need their own website, so software designers created better and simpler WYSIWYG web builders. As more people connected to the web over broadband, it became possible to use web builders online rather than buy or download one. Web hosts began to provide web-building software as part of the package, claiming that a website can be created in 10 minutes without any technical knowledge. These online web builders are easy to use and offer small businesses and private individuals a quicker and cheaper alternative to employing a professional web designer or learning to write source code. They can produce colourful and professional-looking pages but are tied to a single web host.