Thursday, April 23, 2009

HTTP status codes

The following is a list of Hypertext Transfer Protocol (HTTP) response status codes. This includes codes from IETF Internet standards as well as other RFCs and specifications, along with some additional codes in common use. The first digit of the status code specifies one of five classes of response; the bare minimum for an HTTP client is that it recognises these five classes. Microsoft IIS may use additional decimal sub-codes to provide more specific information, but these are not listed here. The phrases used are the standard examples, but any human-readable alternative can be provided. Unless otherwise stated, the status code is part of the HTTP/1.1 standard.

1xx Informational:
Request received, continuing process.

This class of status code indicates a provisional response, consisting only of the Status-Line and optional headers, and is terminated by an empty line. Since HTTP/1.0 did not define any 1xx status codes, servers must not send a 1xx response to an HTTP/1.0 client except under experimental conditions.

100 Continue
This means that the server has received the request headers, and that the client should proceed to send the request body (in the case of a request for which a body needs to be sent; for example, a POST request). If the request body is large, sending it to a server when a request has already been rejected based upon inappropriate headers is inefficient. To have a server check if the request could be accepted based on the request's headers alone, a client must send Expect: 100-continue as a header in its initial request and check if a 100 Continue status code is received in response before continuing (or receive 417 Expectation Failed and not continue).
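
For illustration, the exchange might look like this (host and content length are invented):

Client (headers only):
POST /upload HTTP/1.1
Host: www.example.com
Content-Length: 12345
Expect: 100-continue

Server:
HTTP/1.1 100 Continue

(The client then sends the 12345-byte request body, and the server follows with a final status code such as 200 OK.)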

101 Switching Protocols
102 Processing (WebDAV)

2xx Success:
The action was successfully received, understood, and accepted.

This class of status code indicates that the client's request was successfully received, understood, and accepted.

200 OK
Standard response for successful HTTP requests. The actual response will depend on the request method used. In a GET request, the response will contain an entity corresponding to the requested resource. In a POST request the response will contain an entity describing or containing the result of the action.
201 Created
The request has been fulfilled and resulted in a new resource being created.
202 Accepted
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.
203 Non-Authoritative Information (since HTTP/1.1)
204 No Content
205 Reset Content
206 Partial Content
The server is serving only part of the resource due to a Range header sent by the client. This is used by tools like wget to resume interrupted downloads, or to split a download into multiple simultaneous streams.
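
As a sketch (file name and sizes invented), the client requests a byte range and the server answers with 206:

Client:
GET /big-download.zip HTTP/1.1
Host: www.example.com
Range: bytes=500000-999999

Server:
HTTP/1.1 206 Partial Content
Content-Range: bytes 500000-999999/2000000
Content-Length: 500000
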
207 Multi-Status (WebDAV)
The message body that follows is an XML message and can contain a number of separate response codes, depending on how many sub-requests were made.
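
A multi-status body might look roughly like this (the paths and per-resource statuses are invented for illustration):

HTTP/1.1 207 Multi-Status
Content-Type: application/xml; charset="utf-8"

<?xml version="1.0" encoding="utf-8"?>
<d:multistatus xmlns:d="DAV:">
  <d:response>
    <d:href>/container/resource1</d:href>
    <d:status>HTTP/1.1 200 OK</d:status>
  </d:response>
  <d:response>
    <d:href>/container/resource2</d:href>
    <d:status>HTTP/1.1 423 Locked</d:status>
  </d:response>
</d:multistatus>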

3xx Redirection:
The client must take additional action to complete the request.

This class of status code indicates that further action needs to be taken by the user agent in order to fulfil the request. The action required may be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD. A user agent should not automatically redirect a request more than five times, since such redirections usually indicate an infinite loop.

300 Multiple Choices
Indicates multiple options for the resource that the client may follow. For instance, it could be used to present different format options for a video, to list files with different extensions, or to offer word sense disambiguation.
301 Moved Permanently
This and all future requests should be directed to the given URI.
302 Found
This is the most popular redirect code, but also an example of industry practice contradicting the standard. The HTTP/1.0 specification (RFC 1945) required the client to perform a temporary redirect (the original describing phrase was "Moved Temporarily"), but popular browsers implemented it as a 303 See Other. Therefore, HTTP/1.1 added status codes 303 and 307 to disambiguate between the two behaviours. However, the majority of Web applications and frameworks still use the 302 status code as if it were the 303.
303 See Other (since HTTP/1.1)
The response to the request can be found under another URI using a GET method. When received in response to a PUT, it should be assumed that the server has received the data and the redirect should be issued with a separate GET message.
304 Not Modified
Indicates that the resource has not been modified since it was last requested. Typically, the HTTP client provides a header like If-Modified-Since to supply a time against which to compare. Using this saves bandwidth and reprocessing on both the server and the client.
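
For example (host and date invented for illustration):

Client:
GET /index.html HTTP/1.1
Host: www.example.com
If-Modified-Since: Tue, 21 Apr 2009 10:00:00 GMT

Server:
HTTP/1.1 304 Not Modified
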
305 Use Proxy (since HTTP/1.1)
Many HTTP clients (such as Mozilla and Internet Explorer) do not correctly handle responses with this status code, primarily for security reasons.
306 Switch Proxy
No longer used.
307 Temporary Redirect (since HTTP/1.1)
In this case, the request should be repeated with another URI, but future requests can still use the original URI. In contrast to 303, the request method should not be changed when reissuing the original request. For instance, a POST request must be repeated using another POST request.

4xx Client Error:
The request contains bad syntax or cannot be fulfilled.

The 4xx class of status code is intended for cases in which the client seems to have erred. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents should display any included entity to the user. These are typically the most common error codes encountered while online.

400 Bad Request
The request contains bad syntax or cannot be fulfilled.
401 Unauthorized
Similar to 403 Forbidden, but specifically for use when authentication is possible but has failed or not yet been provided. See Basic access authentication and Digest access authentication.
402 Payment Required
The original intention was that this code might be used as part of some form of digital cash or micropayment scheme, but that has not happened, and this code has never been used.
403 Forbidden
The request was a legal request, but the server is refusing to respond to it. Unlike a 401 Unauthorized response, authenticating will make no difference.
404 Not Found
The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.
405 Method Not Allowed
A request was made of a resource using a request method not supported by that resource; for example, using GET on a form which requires data to be presented via POST, or using PUT on a read-only resource.
406 Not Acceptable
The requested resource is only capable of generating content not acceptable according to the Accept headers sent in the request.
407 Proxy Authentication Required
408 Request Timeout
The server timed out waiting for the request; the client did not produce a request within the time the server was prepared to wait.
409 Conflict
Indicates that the request could not be processed because of conflict in the request, such as an edit conflict.
410 Gone
Indicates that the resource requested is no longer available and will not be available again. This should be used when a resource has been intentionally removed; however, it is not necessary to return this code and a 404 Not Found can be issued instead. Upon receiving a 410 status code, the client should not request the resource again in the future. Clients such as search engines should remove the resource from their indexes.
411 Length Required
The request did not specify the length of its content, which is required by the requested resource.
412 Precondition Failed
413 Request Entity Too Large
The request is larger than the server is willing or able to process.
414 Request-URI Too Long
The URI provided was too long for the server to process.
415 Unsupported Media Type
The request entity has a media type which the server or resource does not support. For example, the client uploaded an image as image/svg+xml, but the server requires that images use a different format.
416 Requested Range Not Satisfiable
The client has asked for a portion of the file, but the server cannot supply that portion (for example, if the client asked for a part of the file that lies beyond the end of the file).
417 Expectation Failed
418 I'm a teapot
The HTCPCP server is a teapot. The responding entity MAY be short and stout. Defined by the April Fools' specification RFC 2324. See Hyper Text Coffee Pot Control Protocol for more information.
422 Unprocessable Entity (WebDAV) (RFC 4918)
The request was well-formed but was unable to be followed due to semantic errors.
423 Locked (WebDAV) (RFC 4918)
The resource that is being accessed is locked.
424 Failed Dependency (WebDAV) (RFC 4918)
The request failed due to failure of a previous request (e.g. a PROPPATCH).
425 Unordered Collection
Defined in drafts of WebDAV Advanced Collections, but not present in "Web Distributed Authoring and Versioning (WebDAV) Ordered Collections Protocol" (RFC 3648).
426 Upgrade Required (RFC 2817)
The client should switch to TLS/1.0.
449 Retry With
A Microsoft extension. The request should be retried after doing the appropriate action.
450 Blocked
A Microsoft extension. Used for blocking sites with Windows Parental Controls.

5xx Server Error:
The server failed to fulfil an apparently valid request.

Response status codes beginning with the digit "5" indicate cases in which the server is aware that it has encountered an error or is otherwise incapable of performing the request. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and indicate whether it is a temporary or permanent condition. Likewise, user agents should display any included entity to the user. These response codes are applicable to any request method.

500 Internal Server Error
A generic error message, given when no more specific message is suitable.
501 Not Implemented
The server either does not recognise the request method, or it lacks the ability to fulfil the request.
502 Bad Gateway
503 Service Unavailable
504 Gateway Timeout
505 HTTP Version Not Supported
506 Variant Also Negotiates (RFC 2295)
507 Insufficient Storage (WebDAV) (RFC 4918)
509 Bandwidth Limit Exceeded (Apache bw/limited extension)
This status code, while used by many servers, is not specified in any RFCs.
510 Not Extended (RFC 2774)
Further extensions to the request are required for the server to fulfil it.

Wednesday, April 22, 2009

Hypertext Transfer Protocol (http)

Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. Its use for retrieving inter-linked resources led to the establishment of the World Wide Web.

HTTP development was coordinated by the World Wide Web Consortium and the Internet Engineering Task Force (IETF), culminating in the publication of a series of Requests for Comments (RFCs), most notably RFC 2616 (June 1999), which defines HTTP/1.1, the version of HTTP in common use.

HTTP is a request/response standard between a client and a server. A client is the end-user; the server is the web site. The client making an HTTP request—using a web browser, spider, or other end-user tool—is referred to as the user agent. The responding server—which stores or creates resources such as HTML files and images—is called the origin server. In between the user agent and origin server may be several intermediaries, such as proxies, gateways, and tunnels. HTTP is not constrained to using TCP/IP and its supporting layers, although this is its most popular application on the Internet. Indeed, HTTP can be "implemented on top of any other protocol on the Internet, or on other networks". HTTP only presumes a reliable transport; any protocol that provides such guarantees can be used.

Typically, an HTTP client initiates a request. It establishes a Transmission Control Protocol (TCP) connection to a particular port on a host (port 80 by default; see List of TCP and UDP port numbers). An HTTP server listening on that port waits for the client to send a request message. Upon receiving the request, the server sends back a status line, such as "HTTP/1.1 200 OK", and a message of its own, the body of which is perhaps the requested resource, an error message, or some other information.

Resources to be accessed by HTTP are identified using Uniform Resource Identifiers (URIs) (or, more specifically, Uniform Resource Locators (URLs)) using the http: or https: URI schemes.

Request message:
The request message consists of the following:

* Request line, such as GET /images/logo.gif HTTP/1.1, which requests a resource called /images/logo.gif from the server
* Headers, such as Accept-Language: en
* An empty line
* An optional message body

The request line and headers must all end with CRLF (that is, a carriage return followed by a line feed). The empty line must consist of only CRLF and no other whitespace. In the HTTP/1.1 protocol, all headers except Host are optional.

A request line containing only the path name is accepted by servers to maintain compatibility with HTTP clients before the HTTP/1.0 specification.
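
Putting these pieces together, a complete request/response exchange (host and sizes invented) might look like this:

Client:
GET /images/logo.gif HTTP/1.1
Host: www.example.com
Accept-Language: en

Server:
HTTP/1.1 200 OK
Content-Type: image/gif
Content-Length: 3195

(binary image data follows)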

Request methods:

HTTP defines eight methods (sometimes referred to as "verbs") indicating the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or the output of an executable residing on the server.

HEAD
Asks for a response identical to the one that would correspond to a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.

GET

Requests a representation of the specified resource. Note that GET should not be used for operations that cause side-effects, such as using it for taking actions in web applications. One reason for this is that GET may be used arbitrarily by robots or crawlers, which should not need to consider the side effects that a request should cause. See safe methods below.

POST
Submits data to be processed (e.g., from an HTML form) to the identified resource. The data is included in the body of the request. This may result in the creation of a new resource, the update of existing resources, or both.

PUT
Uploads a representation of the specified resource.

DELETE
Deletes the specified resource.

TRACE
Echoes back the received request, so that a client can see what intermediate servers are adding or changing in the request.

OPTIONS

Returns the HTTP methods that the server supports for the specified URL. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.
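
For example (the Allow list shown here is purely illustrative; servers report their own supported methods):

Client:
OPTIONS * HTTP/1.1
Host: www.example.com

Server:
HTTP/1.1 200 OK
Allow: GET, HEAD, POST, OPTIONS, TRACE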

CONNECT
Converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.
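
A sketch of the exchange (host invented; the reason phrase varies by proxy):

Client:
CONNECT www.example.com:443 HTTP/1.1
Host: www.example.com:443

Proxy:
HTTP/1.1 200 Connection Established

(The connection then carries the raw, typically TLS-encrypted, traffic between the client and the destination.)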

HTTP servers are required to implement at least the GET and HEAD methods and, whenever possible, also the OPTIONS method.

Safe methods
Some methods (for example, HEAD, GET, OPTIONS and TRACE) are defined as safe, which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, caching, the serving of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to the context of the application's state should therefore be considered safe.

By contrast, methods such as POST, PUT and DELETE are intended for actions which may cause side effects either on the server, or external side effects such as financial transactions or transmission of email. Such methods are therefore not usually used by conforming web robots or web crawlers, which tend to make requests without regard to context or consequences.

Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way, and careless or deliberate programming can just as easily (or more easily, due to lack of user agent precautions) cause non-trivial changes on the server. This is discouraged, because it can cause problems for Web caching, search engines and other automated agents, which can make unintended changes on the server.

Idempotent methods and web applications

Methods PUT and DELETE are defined to be idempotent, meaning that multiple identical requests should have the same effect as a single request. Methods GET, HEAD, OPTIONS and TRACE, being prescribed as safe, should also be idempotent, as HTTP is a stateless protocol.

By contrast, the POST method is not necessarily idempotent, and therefore sending an identical POST request multiple times may further affect state or cause further side effects (such as financial transactions). In some cases this may be desirable, but in other cases this could be due to an accident, such as when a user does not realize that their action will result in sending another request, or they did not receive adequate feedback that their first request was successful. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may re-submit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once.

Note that whether a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. Ignoring this recommendation, however, may result in undesirable consequences if a user agent assumes that repeating the same request is safe when it isn't.

Status codes:
In HTTP/1.0 and since, the first line of the HTTP response is called the status line and includes a numeric status code (such as "404") and a textual reason phrase (such as "Not Found"). The way the user agent handles the response depends primarily on the code and secondarily on the response headers. Custom status codes can be used, since if the user agent encounters a code it does not recognize, it can use the first digit of the code to determine the general class of the response.

Also, the standard reason phrases are only recommendations and can be replaced with "local equivalents" at the web developer's discretion. If the status code indicates a problem, the user agent might display the reason phrase to the user to provide further information about the nature of the problem. The standard also allows the user agent to attempt to interpret the reason phrase, though this might be unwise since the standard explicitly specifies that status codes are machine-readable and reason phrases are human-readable.

List of HTTP status codes
1. 1xx Informational
2. 2xx Success
3. 3xx Redirection
4. 4xx Client Error
5. 5xx Server Error

Persistent connections:
In HTTP/0.9 and 1.0, the connection is closed after a single request/response pair. In HTTP/1.1 a keep-alive mechanism was introduced, whereby a connection can be reused for more than one request.

Such persistent connections reduce lag perceptibly, because the client does not need to re-negotiate the TCP connection after the first request has been sent.

Version 1.1 of the protocol made bandwidth optimization improvements over HTTP/1.0. For example, HTTP/1.1 introduced chunked transfer encoding to allow content on persistent connections to be streamed rather than buffered. HTTP pipelining further reduces lag time, allowing clients to send multiple requests before the previous responses have been received. Another improvement to the protocol was byte serving, whereby a server transmits just the portion of a resource explicitly requested by a client.
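
As a rough sketch of chunked transfer encoding (the payload is invented), each chunk is preceded by its size in bytes written in hexadecimal, and a zero-size chunk ends the message:

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

5
Hello
8
, world!
0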

HTTP session state:
HTTP is a stateless protocol. The advantage of a stateless protocol is that hosts do not need to retain information about users between requests, but this forces web developers to use alternative methods for maintaining users' states. For example, when a host needs to customize the content of a website for a user, the web application must be written to track the user's progress from page to page. A common method for solving this problem involves sending and receiving cookies. Other methods include server side sessions, hidden variables (when the current page is a form), and URL encoded parameters (such as /index.php?session_id=some_unique_session_code).
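
For instance (cookie name and value invented), the server sets a session cookie that the client sends back on each later request:

Server:
HTTP/1.1 200 OK
Set-Cookie: session_id=abc123

Client (subsequent request):
GET /profile HTTP/1.1
Host: www.example.com
Cookie: session_id=abc123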

Secure HTTP:
There are currently two methods of establishing a secure HTTP connection: the HTTPS URI scheme and the HTTP 1.1 Upgrade header, introduced by RFC 2817. Browser support for the Upgrade header is, however, nearly non-existent, so the HTTPS URI scheme is still the dominant method of establishing a secure HTTP connection. Secure HTTP is indicated by the https:// URL prefix instead of http://.

HTTPS URI scheme
HTTPS: is a URI scheme syntactically identical to the http: scheme used for normal HTTP connections, but which signals the browser to use an added encryption layer of SSL/TLS to protect the traffic. SSL is especially suited for HTTP since it can provide some protection even if only one side of the communication is authenticated. This is the case with HTTP transactions over the Internet, where typically only the server is authenticated (by the client examining the server's certificate).

HTTP 1.1 Upgrade header
HTTP 1.1 introduced support for the Upgrade header. In the exchange, the client begins by making a clear-text request, which is later upgraded to TLS. Either the client or the server may request (or demand) that the connection be upgraded. The most common usage is a clear-text request by the client followed by a server demand to upgrade the connection, which looks like this:

Client:
GET /encrypted-area HTTP/1.1
Host: www.example.com

Server:
HTTP/1.1 426 Upgrade Required
Upgrade: TLS/1.0, HTTP/1.1
Connection: Upgrade

The server returns a 426 status code because codes in the 4xx range indicate a client failure (see List of HTTP status codes), which correctly alerts legacy clients that the failure was client-related.
The benefits of using this method for establishing a secure connection are:
* it removes messy and problematic redirection and URL rewriting on the server side,
* it allows virtual hosting of secured websites (although HTTPS also allows this using Server Name Indication), and
* it reduces user confusion by providing a single way to access a particular resource.

A weakness with this method is that the requirement for secure HTTP cannot be specified in the URI. In practice, the (untrusted) server is thus responsible for enabling secure HTTP, not the (trusted) client.

Search Engine Optimization (SEO)

Search engine optimization, or SEO, is the art of placing your website in the first few pages of a search engine for a strategically defined set of keywords. In simple words, it means that your website will appear on the first page of a search engine like Google when someone searches for your product or service.

Typically, the earlier a site appears in the search results list, the more visitors it will receive from the search engine. SEO may target different kinds of search, including image search, local search, and industry-specific vertical search engines.
As an Internet marketing strategy, SEO considers how search engines work and what people search for. Optimizing a website primarily involves editing its content and HTML coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines.

The acronym "SEO" can also refer to "search engine optimizers," a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees who perform SEO services in-house. Search engine optimizers may offer SEO as a stand-alone service or as a part of a broader marketing campaign. Because effective SEO may require changes to the HTML source code of a site, SEO tactics may be incorporated into web site development and design. The term "search engine friendly" may be used to describe web site designs, menus, content management systems and shopping carts that are easy to optimize.

Another class of techniques, known as black hat SEO or Spamdexing, use methods such as link farms and keyword stuffing that degrade both the relevance of search results and the user-experience of search engines. Search engines look for sites that employ these techniques in order to remove them from their indices.

Why Search Engine Optimization?

# Major search engines command over 400 million searches every day, day after day. A well designed search engine optimization (SEO) program helps you get this piece of the pie, which you might otherwise be losing to your competition.
# SEO offers a much better return on investment than other traditional forms of internet marketing like banner campaigns and email marketing.
# Search Engine Optimization helps you capture targeted traffic... people who are already looking for the product or service you offer.
# Search Engine Optimization by an efficient SEO Company is a long term and permanent answer to your traffic woes. Once a website has been optimized for search engines it can stay at the top for long periods of time.

Webmasters with search engines:
By 1997 search engines recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Infoseek, adjusted their algorithms in an effort to prevent webmasters from manipulating rankings.

Due to the high marketing value of targeted search results, there is potential for an adversarial relationship between search engines and SEOs. In 2005, an annual conference, AIRWeb (Adversarial Information Retrieval on the Web), was created to discuss and minimize the damaging effects of aggressive web content providers.

SEO companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients. Wired magazine reported that the same company sued blogger and SEO Aaron Wall for writing about the ban. Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.

Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, chats, and seminars. In fact, with the advent of paid inclusion, some search engines now have a vested interest in the health of the optimization community. Major search engines provide information and guidelines to help with site optimization. Google has a Sitemaps program to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website. Google guidelines are a list of suggested practices Google has provided as guidance to webmasters. Yahoo! Site Explorer provides a way for webmasters to submit URLs, determine how many pages are in the Yahoo! index and view link information.


Getting indexed


The leading search engines, Google, Yahoo! and Microsoft, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search-engine-indexed pages do not need to be submitted because they are found automatically. Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or cost per click. Such programs usually guarantee inclusion in the database, but do not guarantee specific ranking within the search results. Yahoo!'s paid inclusion program has drawn criticism from advertisers and competitors. Two major directories, the Yahoo! Directory and the Open Directory Project, both require manual submission and human editorial review. Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that aren't discoverable by automatically following links.
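
A minimal XML Sitemap, following the sitemaps.org protocol (the URL and date are invented), looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2009-04-22</lastmod>
  </url>
</urlset>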

Search engine crawlers may look at a number of different factors when crawling a site. Not every page is indexed by the search engines. Distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.

Preventing crawling
To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and it instructs the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.
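
For illustration (paths invented), a robots.txt that keeps spiders out of cart and internal-search pages, followed by the per-page meta tag alternative:

User-agent: *
Disallow: /cart/
Disallow: /search/

<meta name="robots" content="noindex, nofollow">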

Search engine optimization methodology includes:
1. Website Review:
To rank on Google or any other search engine, a site needs to be indexed first. Hence, the first step in the process of search engine optimization is to ensure that your site pages can be crawled and indexed by the search engine spiders. In this step we analyze your code and identify possible spider stoppers in a website. These may include broken links, missing tags, complicated link structure and other subtle factors that may have been overlooked when the site was designed.

2. Goal Analysis
Why are you indulging in SEO services? What do you expect to gain from your SEO campaign? Do you have practical search engine ranking targets that you would like to achieve?

It is important to have goals clearly defined in an SEO campaign. These goals can be in terms of an increase in revenue from organic search engine traffic, an increase in ROI, an increase in traffic, or just rankings for branding. Keeping these goals in perspective, we customize and present the best SEO strategy possible given the time-frame, resources and other practical constraints.

3. Competition Analysis

This step involves studying what your competition is doing. Websites of competitors that have undergone search engine optimization offer valuable keyword and optimization insights. Analyzing an already optimized site allows us to determine its lead over your site with regard to rankings. We then determine the SEO techniques being employed and the segments being actively targeted on search engines. After this step, we are able to tell you exactly what your competitors are doing and what you should do to beat them on search engines.

4. Keyword Identification
Many online businesses, despite having great search engine rankings, either do not get enough traffic to their site or do not convert enough visitors. A major reason for this is that the keywords they are targeting may not be the keywords being searched for on the search engines. Keyword identification is a very important part of search engine optimization and includes researching keywords that will not only get great traffic but are also most relevant to your business.

5. On Page Optimization

This step in the process of SEO involves the actual optimization of your web pages. Here, pages will be optimized with regards to tags, link structures, images, body text, and other visible and invisible parts.

6. Building Incoming Links
Good incoming links are often the difference between good rankings and great rankings. Incoming links can be reciprocal links or one way links from directories, articles and news releases. Our link popularity campaigns are human powered and completely manual.

7. Search Engine Submissions

Manual submission to search engines and directories is a process often referred to as search engine submission. Many advertise a quick and simple piece of software or a service which helps submit to over 1000 engines for a few dollars, but few tell you that it is the top 10 engines that command over 85% of the internet's search engine traffic.

8. Analysis and Tweaking
Search engine optimization is a long-term solution to your traffic woes. Our comprehensive SEO services involve continuous fine-tuning of the website based on traffic trends and ranking trends. Search engines often change their algorithms; we tweak your website to compensate for the changes, enabling you to stay on top.

White hat versus black hat:
SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and those techniques of which search engines do not approve. The search engines attempt to minimize the effect of the latter, among them spamdexing. Some industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO. White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.

An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility, although the two are not identical.

Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.

Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or eliminating their listings from their databases altogether. Such penalties can be applied either automatically by the search engines' algorithms, or by a manual site review. One infamous example was the February 2006 Google removal of both BMW Germany and Ricoh Germany for use of deceptive practices. Both companies, however, quickly apologized, fixed the offending pages, and were restored to Google's list.

Tuesday, April 21, 2009

Verification and Validation (V & V)

In software project management, software testing, and software engineering, Verification and Validation (V&V) is the process of checking that a software system meets specifications and that it fulfils its intended purpose. It is normally part of the software testing process of a project.

Definitions

Verification and validation is also known as software quality control.
Verification checks that the product conforms to its specification (low-level checking), i.e., you built the product right; this is typically done through static techniques such as reviews and inspections. Validation checks that the product design satisfies or fits the intended usage (high-level checking), i.e., you built the right product; this is done through dynamic testing and other forms of review.
According to the Capability Maturity Model Integration (CMMI-SW v1.1), "Validation - The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610] Verification - The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]".

In other words, validation ensures that the product actually meets the user's needs, and that the specifications were correct in the first place, while verification is ensuring that the product has been built according to the requirements and design specifications. Validation ensures that ‘you built the right thing’. Verification ensures that ‘you built it right’. Validation confirms that the product, as provided, will fulfill its intended use.

Validation and Verification
Verification is one aspect of testing a product's fitness for purpose. Validation is the complementary aspect. Often one refers to the overall checking process as V & V.

* Validation: "Are we trying to make the right thing?", i.e., does the product do what the user really requires?
Have we built the right software (i.e., is this what the customer wants?)?
It is product based.

* Verification: "Have we made what we were trying to make?", i.e., does the product conform to the specifications?
Have we built the software right (i.e., does it match the specification?)?
It is process based.

The verification process consists of static and dynamic parts. E.g., for a software product one can inspect the source code (static) and run against specific test cases (dynamic). Validation usually can only be done dynamically, i.e., the product is tested by putting it through typical usages and atypical usages ("Can we break it?").


Within the modeling and simulation community, the definitions of validation, verification and accreditation are similar:

* Validation: The process of determining the degree to which a model, simulation, or federation of models and simulations, and their associated data are accurate representations of the real world from the perspective of the intended use(s).
* Accreditation is the formal certification that a model or simulation is acceptable to be used for a specific purpose.
* Verification: The process of determining that a computer model, simulation, or federation of models and simulations implementations and their associated data accurately represents the developer's conceptual description and specifications.

Both verification and validation are related to the concepts of quality and of software quality assurance. By themselves, verification and validation do not guarantee software quality; planning, traceability, configuration management and other aspects of software engineering are required.

Classification of methods

In mission-critical systems where flawless performance is absolutely necessary, formal methods can be used to ensure the correct operation of a system. However, often for non-mission-critical systems, formal methods prove to be very costly and an alternative method of V&V must be sought out. In this case, syntactic methods are often used.

Test cases
A test case is a tool used in the V&V process.

The QA team prepares test cases for verification: to determine whether the process that was followed to develop the final product is right.

The QC team uses test cases for validation: to determine whether the product is built according to the requirements of the user. Other methods, such as reviews, when used early in the Software Development Life Cycle, provide for validation.

Independent Verification and Validation

Verification and validation is often carried out by a group separate from the development team; in this case, the process is called "Independent Verification and Validation", or IV&V.

Friday, April 17, 2009

Risk Based Testing

What exactly is risk?
It is the possibility of a negative or undesirable outcome.
Risk can be defined as the chance of an event, hazard, threat or situation occurring and its undesirable consequences, a potential problem. The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).

In the future, a risk has some likelihood between 0% and 100%; it is a possibility, not a certainty. In the past, however, either the risk has materialized and become an outcome or issue, or it has not; the likelihood of a risk in the past is either 0% or 100%.
The likelihood of a risk becoming an outcome is one factor to consider when thinking about the level of risk associated with its possible negative consequences. The more likely the outcome is, the worse the risk. However, likelihood is not the only consideration.
For example, most people are likely to catch a cold in the course of their lives, usually more than once. The typical healthy individual suffers no serious consequences. Therefore, the overall level of risk associated with colds is low for this person. But the risk of a cold for an elderly person with breathing difficulties would be high. The potential consequences or impact is an important consideration affecting the level of risk, too.

Classification of Risk
1. Product Risks (Factors relating to what is produced by the work, i.e. the thing we are testing)
2. Project Risks (Factors relating to the way the work is carried out, i.e. the test project)

1. Product Risks:


Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product, such as:

o Failure-prone software delivered.
o The potential that the software/hardware could cause harm to an individual or company.
o Poor software characteristics (e.g. functionality, reliability, usability and performance).
o Software that does not perform its intended functions.
Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.

Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.

You can think of a product risk as the possibility that the system or software might fail to satisfy some reasonable customer, user, or stakeholder expectation. Product risks are sometimes referred to as 'quality risks', as they are risks to the quality of the product. Unsatisfactory software might omit some key functions that the customers specified, the users required or the stakeholders were promised. Unsatisfactory software might be unreliable and frequently fail to behave normally. Unsatisfactory software might fail in ways that cause financial or other damage to a user or the company that user works for. Unsatisfactory software might have problems related to a particular quality characteristic, which might not be functionality, but rather security, reliability, usability, maintainability or performance.

Risk-based testing is the idea that we can organize our testing efforts in a way that reduces the residual level of product risk when the system ships. Risk-based testing uses risk to prioritize and emphasize the appropriate tests during test execution, but it is about more than that. Risk-based testing starts early in the project, identifying risks to system quality and using that knowledge of risk to guide test planning, specification, preparation and execution. Risk-based testing involves both mitigation (testing to provide opportunities to reduce the likelihood of defects, especially high-impact defects) and contingency (testing to identify work-arounds to make the defects that do get past us less painful). Risk-based testing also involves measuring how well we are doing at finding and removing defects in critical areas, as was shown in Table 1. Risk-based testing can also involve using risk analysis to identify proactive opportunities to remove or prevent defects through non-testing activities and to help us select which test activities to perform.



Risk-based testing starts with product risk analysis. One technique for risk analysis is a close reading of the requirements specification, design specifications, user documentation and other items. Another technique is brainstorming with many of the project stakeholders. Another is a sequence of one-on-one or small-group sessions with the business and technology experts in the company. Some people use all these techniques when they can. To us, a team-based approach that involves the key stakeholders and experts is preferable to a purely document-based approach, as team approaches draw on the knowledge, wisdom and insight of the entire team to determine what to test and how much.

While you could perform the risk analysis by asking, 'What should we worry about?', usually more structure is required to avoid missing things. One way to provide that structure is to look for specific risks in particular product risk categories. You could consider risks in the areas of functionality, localization, usability, reliability, performance and supportability. You might have a checklist of typical or past risks that should be considered. You might also want to review the tests that failed and the bugs that you found in a previous release or a similar product. These lists and reflections serve to jog the memory, forcing you to think about risks of particular kinds, as well as helping you structure the documentation of the product risks.

When we talk about specific risks, we mean a particular kind of defect or failure that might occur. For example, if you were testing the calculator utility that is bundled with Microsoft Windows, you might identify 'incorrect calculation' as a specific risk within the category of functionality. However, this is too broad. Consider incorrect addition. This is a high-impact kind of defect, as everyone who uses the calculator will see it. It is unlikely, since addition is not a complex algorithm. Contrast that with an incorrect sine calculation. This is a low-impact kind of defect, since few people use the sine function on the Windows calculator. It is more likely to have a defect, though, since sine functions are hard to calculate.

After identifying the risk items, you and, if applicable, the stakeholders should review the list to assign the likelihood of problems and the impact of problems associated with each one. There are many ways to go about this assignment of likelihood and impact. You can do this with all the stakeholders at once. You can have the business people determine impact and the technical people determine likelihood, and then merge the determinations. Either way, the reason for identifying risks first and then assessing their level is that the risks are relative to each other.

The scales used to rate likelihood and impact vary. Some people rate them high, medium and low. Some use a 1-10 scale. The problem with a 1-10 scale is that it is often difficult to tell a 2 from a 3 or a 7 from an 8, unless the differences between each rating are clearly defined. A five-point scale (very high, high, medium, low and very low) tends to work well.

Given two classifications of risk levels, likelihood and impact, we have a problem, though: we need a single, aggregate risk rating to guide our testing effort. As with rating scales, practices vary. One approach is to convert each risk classification into a number and then either add or multiply the numbers to calculate a risk priority number. For example, suppose a particular risk has a high likelihood and a medium impact. The risk priority number would then be 6 (2 times 3).
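
A minimal sketch of this calculation in Python (the five-point-to-number mapping and the sample risk items are our own assumptions for illustration; lower numbers mean higher risk):

# Map the five-point scale to numbers; a lower product means a higher-priority risk.
RATING = {'very high': 1, 'high': 2, 'medium': 3, 'low': 4, 'very low': 5}

def risk_priority(likelihood, impact):
    # Multiply the likelihood and impact ratings to get the risk priority number.
    return RATING[likelihood] * RATING[impact]

# Hypothetical risk items based on the calculator example above.
risks = [('incorrect addition', 'low', 'very high'),
         ('incorrect sine calculation', 'high', 'low')]

for name, likelihood, impact in risks:
    print(name, '->', risk_priority(likelihood, impact))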

Armed with a risk priority number, we can now decide on the various risk-mitigation options available to us. Do we use formal training for programmers or analysts, rely on cross-training and reviews, or assume they know enough? Do we perform extensive testing, cursory testing or no testing at all? Should we ensure unit testing and system testing coverage of this risk? These options and more are available to us.

As you go through this process, make sure you capture the key information in a document. We're not fond of excessive documentation, but this quantity of information simply cannot be managed in your head. We recommend a lightweight table like the one shown in Table 2; we usually capture this in a spreadsheet.



Let's finish this section with two quick tips about product risk analysis. First, remember to consider both likelihood and impact. While it might make you feel like a hero to find lots of defects, testing is also about building confidence in key functions. We need to test the things that probably won't break but would be catastrophic if they did.

Second, risk analyses, especially early ones, are educated guesses. Make sure that you follow up and revisit the risk analysis at key project milestones. For example, if you're following a V-model, you might perform the initial analysis during the requirements phase, then review and revise it at the end of the design and implementation phases, as well as prior to starting unit test, integration test, and system test. We also recommend revisiting the risk analysis during testing. You might find that you have discovered new risks, found that some risks weren't as risky as you thought, or increased your confidence in the risk analysis.

2. Project Risks:
Project risks are the risks that surround the project’s capability to deliver its objectives, such as:

o Organizational factors:
- skill and staff shortages;
- personal and training issues;
- political issues, such as problems with testers communicating their needs and test results, and failure to follow up on information found in testing and reviews (e.g. not improving development and testing practices);
- improper attitude toward or expectations of testing (e.g. not appreciating the value of finding defects during testing).

o Technical issues:
- problems in defining the right requirements;
- the extent that requirements can be met given existing constraints;
- the quality of the design, code and tests.

o Supplier issues:
- failure of a third party;
- contractual issues.

When analyzing, managing and mitigating these risks, the test manager is following well-established project management principles. The 'Standard for Software Test Documentation' (IEEE 829) outline for test plans requires risks and contingencies to be stated.

To deal with the project risks that apply to testing, we can use the same concepts we apply to identifying, prioritizing and managing product risks.

Remembering that a risk is the possibility of a negative outcome, what project risks affect testing? There are direct risks, such as the late delivery of the test items to the test team or availability issues with the test environment. There are also indirect risks, such as excessive delays in repairing defects found in testing or problems with getting professional system administration support for the test environment.

To discover project risks, ask yourself and other project participants and stakeholders:
- What could go wrong on the project to delay or invalidate the test plan, the test strategy and the test estimate?
- What are unacceptable outcomes of testing or in testing?
- What are the likelihoods and impacts of each of these risks?
This process is very much like the risk analysis process for products.

For any risk, product or project, you have four typical options:
• Mitigate: Take steps in advance to reduce the likelihood (and possibly the impact) of the risk.
• Contingency: Have a plan in place to reduce the impact should the risk become an outcome.
• Transfer: Convince some other member of the team or project stakeholder to reduce the likelihood or accept the impact of the risk.
• Ignore: Do nothing about the risk, which is usually a smart option only when there is little that can be done or when the likelihood and impact are low.

There is another typical risk-management option, buying insurance, which is not usually pursued for project or product risks on software projects, though it is not unheard of.

Here are some typical risks along with some options for managing them.
• Logistics or product quality problems that block tests:
These can be mitigated through careful planning, good defect triage and management, and robust test design.

• Test items that won't install in the test environment:
These can be mitigated through smoke (or acceptance) testing prior to starting test phases or as part of a nightly build or continuous integration. Having a defined uninstall process is a good contingency plan.

• Excessive change to the product that invalidates test results or requires updates to test cases, expected results and environments:
These can be mitigated through good change-control processes, robust test design and lightweight test documentation. When severe incidents occur, transference of the risk by escalation to management is often in order.

• Insufficient or unrealistic test environments that yield misleading results:
One option is to transfer the risks to management by explaining the limits on test results obtained in limited environments. Mitigation, sometimes complete alleviation, can be achieved by outsourcing tests such as performance tests that are particularly sensitive to proper test environments.

Here are some additional risks to consider and perhaps to manage:

• Organizational issues such as shortages of people, skills or training, problems with communicating and responding to test results, bad expectations of what testing can achieve and complexity of the project team or organization.

• Supplier issues such as problems with underlying platforms or hardware, failure to consider testing issues in the contract or failure to properly respond to the issues when they arise.

• Technical problems related to ambiguous, conflicting or unprioritized requirements, an excessively large number of requirements given other project constraints, high system complexity and quality problems with the design, the code or the tests.

There may be other risks that apply to your project, and not all projects are subject to the same risks.

Finally, don't forget that test items can also have risks associated with them.
For example, there is a risk that the test plan will omit tests for a functional area or that the test cases do not exercise the critical areas of the system.

Friday, April 10, 2009

Capability Maturity Model (CMM)

The Capability Maturity Model (CMM) in software engineering is a model of the maturity of the capability of certain business processes. A maturity model can be described as a structured collection of elements that describe certain aspects of maturity in an organization, and aids in the definition and understanding of an organization's processes.

Maturity model
A maturity model can be described as a structured collection of elements that describe certain aspects of maturity in an organization. A maturity model may provide, for example:

* a place to start
* the benefit of a community’s prior experiences
* a common language and a shared vision
* a framework for prioritizing actions
* a way to define what improvement means for your organization.

A maturity model can be used as a benchmark for comparison and as an aid to understanding, for example in the comparative assessment of different organizations where there is something in common that can serve as a basis for comparison. In the case of the CMM, that basis would be the organizations' software development processes.

Capability Maturity Model Structure
The Capability Maturity Model involves the following aspects:

* Maturity Levels: A five-level process maturity continuum, where the uppermost (fifth) level is a notional ideal state in which processes are systematically managed through a combination of process optimization and continuous process improvement.
* Key Process Areas: A Key Process Area (KPA) identifies a cluster of related activities that, when performed collectively, achieve a set of goals considered important.
* Goals: The goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area.
* Common Features: Common features include practices that implement and institutionalize a key process area. There are five types of common features: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.
* Key Practices: The key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the KPAs.

Levels of the Capability Maturity Model
There are five levels defined along the continuum of the CMM, and, according to the SEI: "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief."

Level 1 - Ad hoc (Chaotic)
It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes.

Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.

Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place (i.e., they are the AS-IS processes) and used to establish consistency of process performance across the organization.

Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management can effectively control the AS-IS process (e.g., for software development). In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Process capability is established at this level.

Level 5 - Optimizing
It is a characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements.
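Since the level numbers and names recur throughout CMM literature, it can be convenient to keep them in a small lookup table. The following sketch is illustrative only; the dictionary and helper function are hypothetical conveniences, not part of the model itself.

    # The five CMM maturity levels as described above.
    CMM_LEVELS = {
        1: "Ad hoc (Chaotic)",
        2: "Repeatable",
        3: "Defined",
        4: "Managed",
        5: "Optimizing",
    }

    def describe_level(level: int) -> str:
        """Hypothetical helper: render a level as 'Level N - Name'."""
        if level not in CMM_LEVELS:
            raise ValueError("CMM defines maturity levels 1 through 5 only")
        return f"Level {level} - {CMM_LEVELS[level]}"

    print(describe_level(3))  # Level 3 - Defined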

Software process framework for SEI's Capability Maturity Model

The documented software process framework is intended to guide those wishing to assess an organization's or project's consistency with the CMM. For each maturity level there are five checklist types: policy, standard, process, procedure and level overview.

Friday, April 3, 2009

Error message

An error message is a message displayed when an unexpected condition occurs, usually on a computer or other device. Error messages are often displayed using dialog boxes. They are used when user intervention is required, to indicate that a desired operation has failed, or to give very important warnings, such as a warning that the hard disk is out of space. Error messages are pervasive throughout computing and are part of every operating system and computer hardware device. Proper design of error messages is an important topic in usability and other fields of human-computer interaction.

Common error messages

These computer-related error messages can occur in almost any program.

* File not found occurs when the requested file cannot be found. The file may have been damaged, moved, deleted, or a bug may have caused the error.
* The device is not ready most often occurs when there is no floppy disk (or a bad disk) in the disk drive and the system tries to perform tasks involving the floppy disk.
* Access is denied occurs if the user has insufficient privileges to a file, or if it has been locked by some program or user.
* Out of memory occurs when the system has run out of memory or tries to load a file too large to store in RAM. The fix is to close some programs or get more memory.
* Low Disk Space occurs when the hard drive is full. To fix this, close some programs (to free swap file usage) and delete some files (normally temporary files, or other files after they have been backed up), or get a bigger hard drive.
* [program name] has encountered a problem and needs to close. We are sorry for the inconvenience. Windows XP message displayed when a program causes a general protection fault or invalid page fault.
* The blue screen of death.

Error message usability checklist


To be effective, an error message must contain the following information:
* Message ID - For many applications, a reference number for the error is an invaluable piece of information. This number will help network support personnel to easily diagnose the error and possible courses of action. An index also serves as a universally understood indicator of an error in situations where various languages are used.

* Timestamp - The date and time of the onset of an error should be displayed so that the help centre can correlate the event to log files with the same timestamp.

* Message type and severity - Classify the error as either an automatically recoverable error, manually recoverable error, or non-recoverable error, or some other appropriate label. Inform the user about the level of severity, and tell the user about the possible consequences of the exception.

* User and process details - Display information that identifies the user who “triggered” the error in the form of a user ID or whatever system is used to identify users for the specific system. The process or task that the system was trying to perform should also be documented in case users have multiple applications running and are not sure which one caused the error. This information is important in diagnosing errors from a helpdesk perspective.

* Short message with details button - The displayed message should be clear and concise, in "novice user" language. It should contain the reference number and the basic information regarding the type of error, severity, and corrective action. There should also be a details button which shows an advanced user more information about the specifics of the error. The details button can also be used to explain corrective actions to the user.

* Program state and configuration - Show information about the error that is relevant to the user. For example, if the user entered an invalid value in a field, the error message should tell the user which field caused the error and what the invalid values were. Showing current program field values and applicable configuration information in the message will allow the user to deduce the cause of the error and correct it. If a piece of hardware is not configured properly, the current configuration can point the user to the real problem. For example, if you have the wrong printer set as your default, the error message should show which printer is currently set to default so the user can clearly see that there is a problem. i.e. "Printer Epson 123A not ready" instead of "Printer not ready".
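Taken together, the checklist items above suggest a simple structure for error records. The sketch below is a minimal, hypothetical Python example; the class name and fields are assumptions chosen to mirror the checklist, not an established API.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class ErrorMessage:
        """Hypothetical error record mirroring the usability checklist."""
        message_id: str       # reference number for support staff
        severity: str         # e.g. "manually recoverable", "non-recoverable"
        user_id: str          # who "triggered" the error
        process: str          # task the system was performing at the time
        short_message: str    # concise text in "novice user" language
        details: str = ""     # longer text shown via a "Details" button
        timestamp: datetime = field(default_factory=datetime.now)

        def summary(self) -> str:
            # The short form a dialog box would display.
            return f"[{self.message_id}] {self.short_message}"

    msg = ErrorMessage(
        message_id="3555",
        severity="manually recoverable",
        user_id="jsmith",
        process="print spooler",
        short_message="Printer Epson 123A not ready.",
        details="Check that the printer is switched on and connected.")
    print(msg.summary())  # [3555] Printer Epson 123A not ready.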

Message format
The format of error messages is not static; it depends on many things. However, the three main factors that influence the format design are as follows.

Technical limitations

The strengths and restrictions of the technology you're working with should be among the first things you take into account when planning error messages. You must be careful to ensure that the medium you use to communicate the error supports the size, shape, and style of your error message.

Amount of information presented

The nature of the error message will determine the amount of information required. If the error message is short, such as "Sorry, our web site is currently undergoing maintenance. Please try again later.", it may be more effective as a pop-up window than as a separately loaded page. This is because short messages can easily be drowned out by other content on a page, while a short error message on an otherwise empty page will look out of place.

User input required


Finally, you should choose an error message format based on the type of input you require from the user to correct the problem. Errors that simply inform the user of a problem they cannot fix, such as a busy server on a web site, are best suited to pop-ups. In this situation, all you need to do is inform the user of the problem, as there is no corrective action aside from trying again later. The only control available to the user should be the "OK" button. However, controls such as "Retry/OK" and "Cancel" should be used if the user is being prompted to take corrective action. For example, "Windows encountered an error and needs to restart. Would you like to restart now?" should include options both to restart and to cancel. The buttons should correspond to the available options, i.e. "Restart now" rather than "OK", and "Later" rather than "Cancel". This makes the options clearer to the user and is thereby conducive to correct decision making.
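As a concrete illustration of matching controls to the required input, here is a minimal sketch using Python's tkinter message boxes (chosen purely for illustration; the principle applies to any UI toolkit). Note that the stock dialogs fix their button labels, so custom labels such as "Restart now" and "Later" would require a custom dialog.

    import tkinter as tk
    from tkinter import messagebox

    root = tk.Tk()
    root.withdraw()  # dialogs only; no main window needed

    # A problem the user cannot fix: inform them, offering only "OK".
    messagebox.showerror(
        title="Server busy",
        message="Sorry, our web site is currently undergoing maintenance. "
                "Please try again later.")

    # A problem with a corrective action: offer a real choice of buttons.
    if messagebox.askretrycancel(
            title="Connection failed",
            message="The connection to the server was lost. Retry now?"):
        pass  # retry the operation here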

Presentation guidelines
The presentation and appearance of error messages are critical factors that heavily influence how well a user comprehends and responds to an error message. The following three principles should be adhered to when designing an error message.

Capture the user's attention

The visual attributes of an error message, including its color, size, and location, can and should be used to grab the user's attention and inform them that an error has occurred. Usually, the color red, a bold font, and a location at the top of the page and in front of any other window are good ways to let the user know that an error is present. In addition, an exclamation point is often used as an iconographic symbol to express importance.

Explain what went wrong

To effectively communicate with users, the application must speak their language. Messages must explain the problem using terminology that even a novice user can understand. The key idea is to explain what went wrong, not just show the user an error code. This is not to say that error codes should never be included in the error message; rather, they should be presented in a proper context where they could prove useful. For example, an error message could reference error code 3555 and display the message "Please contact our help desk and reference error code 3555 for assistance".

Show where error occurred and suggest possible solutions

Pointing users to solutions can be accomplished many ways. The language used is an important factor in getting users to understand what went wrong and consequently, how to fix it. For example, a poor error message might read, "You have entered an invalid string character in Field 123A". A better error message reads, "The zip code field contains an invalid character. Only numbers may be entered". Novice users won’t know what a string character or field 123A is, but they will recognize what the zip code field and “numbers” are.

Additionally, you can show the user exactly where the problem occurred by providing visual cues such as highlighting the field label with color, font treatment, and iconographic images.

Once an error is located, instructions should be given to the user on corrective action, once again explained in the users’ language.

Providing examples of acceptable input is also a powerful technique for suggesting solutions for certain types of errors.
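For instance, a field validator can combine all three techniques: name the offending field, use novice-friendly language, and give an example of acceptable input. The sketch below is hypothetical; the function name and the five-digit format are assumptions for illustration.

    import re

    def validate_zip_code(value: str) -> str | None:
        """Return a user-friendly error message, or None if the value is valid."""
        if not re.fullmatch(r"\d{5}", value):
            return ("The zip code field contains an invalid character. "
                    "Only numbers may be entered, for example 90210.")
        return None

    print(validate_zip_code("9021A"))  # names the field and suggests a fix
    print(validate_zip_code("90210"))  # None: the input is accepted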

Design heuristics
The following is a general set of guidelines to aid in the effective design of error messages. Many more can be found in different texts; this list is by no means exhaustive. Those included below are a few of the commonly accepted heuristics.

* Do provide useful information that helps the user diagnose the problem - i.e. Use “Link target does not exist” instead of “Bad link”.
* Do be precise - i.e. “Missing file name extension” instead of “File not found.”
* Do describe the problem - i.e. “Disk full” instead of “File error.”
* Do use a neutral tone - i.e. Change the tone in “Bad input” to “Command is unrecognizable.” to avoid blaming the user.
* Do use complete sentences - i.e. Use “Binding is too long.” rather than “Binding too long.”
* Do not personify the system, unless you mean to convey a sense of naturalness - i.e. “Node parameter cannot use Windows NT protocols.” is better than “Parameter node does not speak any of our protocols.”
* Avoid displaying an error message at all, if possible, i.e. disable the Paste option if no data is on the clipboard, instead of showing an error like “No data on clipboard” when the user attempts a paste operation.

Different message box symbols:
[Icon images illustrating the different message box symbols are not reproduced here.]

Acceptance Testing

Introduction
In software engineering, acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria and thus whether the customer should accept the system.
The main types of software testing are:
* Component.
* Interface.
* System.
* Acceptance.
* Release.
Acceptance testing checks the system against the "Requirements". It is similar to systems testing in that the whole system is checked, but the important difference is the change in focus:
* Systems testing checks that the system that was specified has been delivered.
* Acceptance testing checks that the system delivers what was requested.
The customer, and not the developer, should always do acceptance testing. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment.

The forms of the tests may follow those in system testing, but at all times they are informed by the business needs.

Acceptance testing comprises the test procedures that lead to formal 'acceptance' of new or changed systems. User Acceptance Testing (UAT) is a critical phase of any systems project and requires significant participation by the end users. To be of real use, an Acceptance Test Plan should be developed in order to plan precisely, and in detail, the means by which 'acceptance' will be achieved. The final part of the UAT can also include a parallel run to prove the system against the current system.

Factors influencing Acceptance Testing
The User Acceptance Test Plan will vary from system to system but, in general, the testing should be planned in order to provide a realistic and adequate exposure of the system to all reasonably expected events. The testing can be based upon the User Requirements Specification to which the system should conform.
As in any system, though, problems will arise, and it is important to have determined in advance the expected and required responses from the various parties concerned, including users, the project team, vendors and possibly consultants or contractors.
In order to agree what such responses should be, the end users and the project team need to develop and agree a range of 'severity levels'. These levels will range from (say) 1 to 6 and will represent the relative severity, in terms of business or commercial impact, of a problem found with the system during testing. Here is an example that has been used successfully, in which '1' is the most severe and '6' has the least impact:
1. 'Show Stopper': it is impossible to continue with testing because of the severity of this error/bug.
2. Critical problem: testing can continue, but we cannot go into production (live) with this problem.
3. Major problem: testing can continue, but this feature will cause severe disruption to business processes in live operation.
4. Medium problem: testing can continue, and the system is likely to go live with only minimal departure from agreed business processes.
5. Minor problem: both testing and live operations may progress; this problem should be corrected, but little or no change to business processes is envisaged.
6. 'Cosmetic' problem: e.g. colours, fonts, pitch size. However, if such features are key to the business requirements, they will warrant a higher severity level.
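A severity scale like this can also be encoded directly in test-management tooling. The sketch below is a hypothetical Python rendering of the 1-6 scale above, with two gating checks implied by the level definitions; the names and rules are assumptions for illustration.

    from enum import IntEnum

    class Severity(IntEnum):
        """Example scale: 1 is the most severe, 6 the least."""
        SHOW_STOPPER = 1   # testing cannot continue
        CRITICAL = 2       # testing continues, but no go-live
        MAJOR = 3          # severe disruption to live business processes
        MEDIUM = 4         # minimal departure from agreed processes
        MINOR = 5          # testing and live operation may progress
        COSMETIC = 6       # colours, fonts, pitch size

    def testing_can_continue(open_problems):
        # Testing halts only on a show stopper.
        return all(p > Severity.SHOW_STOPPER for p in open_problems)

    def can_go_live(open_problems):
        # Going live is blocked by anything rated critical or worse.
        return all(p > Severity.CRITICAL for p in open_problems)

    found = [Severity.MAJOR, Severity.MINOR]
    print(testing_can_continue(found))  # True
    print(can_go_live(found))           # True: nothing critical or worse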
The users of the system, in consultation with the executive sponsor of the project, must then agree upon the responsibilities and required actions for each category of problem. For example, you may demand that any problem at severity level 1 receive a priority response and that all testing cease until such level 1 problems are resolved.
Caution: even where the severity levels and the responses to each have been agreed by all parties, the allocation of a problem to its appropriate severity level can be subjective and open to question. To avoid the risk of lengthy and protracted exchanges over the categorisation of problems, we strongly advise that a range of examples be agreed in advance, to ensure that there are no fundamental areas of disagreement or, if there are, that these are known in advance and your organisation is forewarned.
Finally, it is crucial to agree the criteria for acceptance. Because no system is entirely fault-free, the end users and the vendor must agree on the maximum number of acceptable outstanding problems in any particular category. Again, prior consideration of this is advisable.
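Such acceptance criteria can be expressed as a simple threshold table. The sketch below is hypothetical, reusing the 1-6 severity scale from the earlier example; the limits shown are invented for illustration and would be negotiated per project.

    # Hypothetical criteria: maximum unresolved problems allowed per
    # severity level (1 = most severe) at sign-off.
    MAX_OUTSTANDING = {1: 0, 2: 0, 3: 2, 4: 5, 5: 10, 6: 20}

    def meets_acceptance_criteria(open_counts):
        """open_counts maps severity level -> number of unresolved problems."""
        return all(open_counts.get(level, 0) <= limit
                   for level, limit in MAX_OUTSTANDING.items())

    print(meets_acceptance_criteria({1: 0, 2: 0, 3: 1, 4: 4}))  # True
    print(meets_acceptance_criteria({1: 1}))                    # False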
N.B. In some cases, users may agree to accept ('sign off') the system subject to a range of conditions. These conditions need to be analysed, as they may, perhaps unintentionally, seek additional functionality which could be classified as scope creep. In any event, any and all fixes from the software developers must be subjected to rigorous system testing and, where appropriate, regression testing.


Conclusion

Hence the goal of acceptance testing is to verify the overall quality, correct operation, scalability, completeness, usability, portability and robustness of the functional components supplied by the software system.