
2007-03-05 17:16:02 · 5 answers · asked by PK 1 in Computers & Internet Software

5 answers

Beta

You're referring to beta software. A program is called a beta when it has just been compiled and needs to be tested to ensure stability. This process lets the programmers and developers receive feedback from testers about issues they find, to make the software more stable before releasing it to the general public.

A beta version or beta release usually represents the first version of a computer program that implements all features in the initial requirements analysis. It is likely to be useful for internal demonstrations and previews to select customers, but unstable and not yet ready for release. Some developers refer to this stage as a preview, a technical preview (TP), or early access. As the second major stage in the release lifecycle, following the alpha stage, it is named after the Greek letter beta, the second letter in the Greek alphabet.

2007-03-05 17:24:44 · answer #1 · answered by Vincent 6 · 1 0

Software testing is the process of finding the errors in developed software, in order to deliver an error-free product to the end user.

2016-04-15 00:08:49 · answer #2 · answered by broad 2 · 0 0

A test for a computer product prior to commercial release. Beta testing is the last stage of testing, and normally involves sending the product to beta test sites outside the company for real-world exposure, or offering the product as a free trial download over the Internet. Beta testing is often preceded by a round of testing called alpha testing.
http://www.webopedia.com/TERM/b/beta_test.html

Start your research here: http://www.google.com/search?q=beta%20testing&sourceid=groowe&ie=utf-8&oe=utf-8

Make it a great day!

2007-03-05 17:25:21 · answer #3 · answered by Hokiefire 6 · 0 0

Software testing is the method of testing the product for all it's worth.
It helps the developer find the bugs so that they can be removed.

2007-03-05 18:17:07 · answer #4 · answered by Anonymous · 0 0

Hi,

It's impossible to prove that software written for some useful purpose is correct without experimentation. There are methods, like formal mathematical proofs, requirements management, and requirements tracing, that try to ensure software is built correctly in the first place, but by themselves they can't prove the software will satisfy the purpose it was written for.

So we test software systems to provide confidence they do what is expected.

Defining what "is expected" of software means understanding who is going to use it, how it's going to be used, and what characteristics amount to "the right quality". Quality is not an absolute; it has subjective (e.g., ease of use) and objective (e.g., conformance to standards) components.

The art and science of software testing has matured a great deal over the years, especially as software systems have grown vastly more complex and interconnected. Software systems that run the Internet, control the space shuttle, and process banking transactions can be built of literally millions of lines of code, each of which is like a cog in a giant watch. A line of code may affect only a small area of the system, or have consequences for the entire system. There's a saying in the software business about how volatile software can be:

"it was just a one-line code change!"

I've seen this on a banner in a research and development lab at a major computer manufacturer. Yes, it's just one line of code, but it can crash the system.

So what to do?

Good software test engineers have a model of the test process in their minds. They know how software is written, how it works, what the requirements for it are, and how customers are really going to use it.

The software test process parallels the development process. Individual developers will test their own code ("unit test") before integrating it with code written by other developers in their group. Multiple groups may then integrate their code into a larger system, and that system in turn may be integrated into software that powers a whole product line spread across multiple business units.

The progression of testing typically follows a set of stages. In practice, each stage has entry and exit criteria that make sure the software is ready to be integrated into the next step toward delivering functionality for a product line.

Unit test: Typically done by the developers themselves

Unit test is like taking a magnifying glass and making sure the widget you created is doing its little widget job before you approve its inclusion in the machine. In software this involves "single stepping" through the code using a debugger, creating small software test programs that exercise the APIs of the code, running static analysis tools over the code, measuring code coverage (statement coverage, branch coverage, path coverage, etc.), and so on. Unit testing is also called "white box" or "structural" testing, which points out that the testing is done with open knowledge of how the code is written. It seeks to make sure the code was written in a way that is correct compared to the software system specification, the software functional specification, and any relevant coding guidelines.
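
To make that concrete, here is a minimal sketch using Python's built-in unittest framework; the apply_discount function is hypothetical, invented just for illustration:

# A minimal unit-test sketch using Python's built-in unittest framework.
# The function under test (apply_discount) is hypothetical.

import unittest

def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_value(self):
        self.assertAlmostEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main()

Running the file executes all three tests and reports any failures, which is about the fastest feedback loop a developer has.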

Once all the code "chunks" being developed by a group are integrated into a system that has some functionality, it's time for feature test.

Feature test measures the integrated code against the "feature specification". Feature test is usually done using "black box" techniques, meaning the testers are not concerned with how the code was written; they are concerned that the end result does what it's supposed to. Coverage is measured using a combination of requirements tracing (writing test cases based on the feature specification in a way that makes sure all specs are covered), conformance testing (if the software implements a public standard), and code coverage (to give the testers and the development team feedback about how much code is executed when the features are exercised). A small sketch of a requirements-traced black-box test follows.
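
As a minimal sketch of requirements tracing in black-box test code: each test name carries the requirement ID it covers, so coverage against the spec can be audited. The REQ-7 identifier and the login function are made up for illustration:

# Black-box feature tests traced back to a (made-up) feature specification.
# Each test name carries the requirement ID it covers; login() is hypothetical.

import unittest

VALID_USERS = {"alice": "s3cret"}

def login(username, password):
    """Spec REQ-7: accept only a known user with the right password."""
    return VALID_USERS.get(username) == password

class TestLoginFeature(unittest.TestCase):
    def test_REQ7_valid_credentials_accepted(self):
        self.assertTrue(login("alice", "s3cret"))

    def test_REQ7_wrong_password_rejected(self):
        self.assertFalse(login("alice", "wrong"))

    def test_REQ7_unknown_user_rejected(self):
        self.assertFalse(login("bob", "s3cret"))

if __name__ == "__main__":
    unittest.main()

Note that the tests never look inside login(); they only check its observable behavior against the spec, which is exactly the black-box stance.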

Once the features from one group are tested, they are integrated with the features from other groups to create a software system. Usually a series of integration tests makes sure that all the contributed software features run together before progressing to system test.

System test is concerned with the behavior of all the integrated software. Behaviors often evaluated are:
Reliability: This is done using customer "use cases" (scenarios for how the system will be used) and running the system under varying but heavy, customer-realistic loads over periods of time. If you carefully measure the time the system runs and the number of defects found, you can estimate an MTBF (Mean Time Between Failures) value that is useful for predicting when a system is ready for release; a rough calculation is sketched after this list.
Performance: Speed of the system, transactions per second, packets per second etc.
Scale: An aspect of performance: how many users can the system handle? How many connections?
Stress: What does it take to break this system? Often stress tests that impose unrealistic loads on the system under test reveal problems that will happen over time under more normal load.
Interoperability: Does the system speak correctly with other systems?
etc... this list can be long.
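
As a rough illustration of the MTBF estimate mentioned under Reliability, here is a minimal sketch; the run durations and failure counts are invented:

# Rough MTBF estimate from reliability runs: total hours the system ran
# divided by the number of failures observed. The numbers are invented.

run_hours = [72.0, 120.0, 96.0]   # duration of each reliability run
failures  = [1, 2, 1]             # failures observed in each run

mtbf = sum(run_hours) / sum(failures)
print(f"Estimated MTBF: {mtbf:.1f} hours")  # 288 / 4 = 72.0 hours

Even this crude ratio, tracked from build to build, gives a useful signal of whether reliability is trending toward release readiness.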

Then what?

Well, your system may be integrated with other systems, and a whole new level of "system solution" testing may be required.

Work up front and during the process, in creating good specifications, in writing and reviewing test plans for each stage of testing, in monitoring and accumulating metrics at each stage, and in holding "milestone" meetings when moving from one stage to the next, all leads to a good business decision.

In the private sector at least, the decision to release is ultimately a business decision. Your competitors are racing to market with new features, and you need to as well. Nobody wants to get something to market first only to have it fail miserably. That said, if you are not first to market with a new kind of product, you won't likely ever be a major player. So at the end of the day, you manage the software development and test process to minimize errors and quantify risk, so you can make business decisions based on useful information.

There are a number of great books (I like Boris Beizer's "Software Testing Techniques", "Black-Box Testing", and "Software System Testing and Quality Assurance") and zillions of websites. I'll provide this one, which has a lot of links to mainstream test literature.
http://www.softwareqatest.com/qatlnks1.html

Finally, "an ounce of prevention is worth a pound of cure". Preventing software defects in the first place can be the most cost-effective way to minimize errors (aka "bugs"). You can't prevent all defects, so knowing what you can realistically prevent, what you must test for, and how to balance the two is part of your overall QA/engineering-productivity plan.

and worthy of another whole question!

2007-03-09 04:12:42 · answer #5 · answered by Anonymoose 4 · 0 0
