Is your Web-App Selenium-Test Compatible?


By Pallavi Sharma

If you are trying to automate your web application using Selenium, there are a few things you should know before you start the quest. Most Selenium commands require a "target", which in layman's terms is the object in your web application on which you wish to perform a desired action.

Selenium follows a location strategy to locate elements in your application, which offers four modes:

1. Locating by Identifier

2. Locating by XPath

3. Locating by DOM

4. Locating by CSS

You will find a detailed description of these locating techniques on the Selenium website [http://seleniumhq.org/], where they are explained well. But what is not explained is when to use which one, and the negatives of each locating strategy.

1. Locating by Identifier: Locating an element using an identifier means that the element must have either an "ID" attribute or a "NAME" attribute which is unique on that page. You may use the "type" or "index" property in combination with these identifiers to help Selenium uniquely identify the element.

Selenium is right to expect that it will find an element uniquely when locating by identifier. The W3C standards also state that if an "ID" attribute is provided, it must be unique on the page. But since HTML has no inherent check against such slip-ups, many developers end up duplicating IDs in their web applications while developing, forgetting that the application also has to undergo testing.

So if you know beforehand that Selenium is the automation tool which suits your web application scenario best, ensure your developers haven't made such mistakes.

2. Locating by XPath: Locating an element using an XPath is not a very straightforward solution, but it helps immensely when you have to use multiple attributes to locate an element. XPath is a powerful way of using any attribute by which you would want the tool to find the element and perform an action on it. It is useful mostly in cases where the developers have used the same names and IDs across the website and didn't care for the W3C standards.

But if you can expect your developers to overlook the W3C standards for unique IDs, you may very well expect them to leave "unclosed" tags in your application! And if your web application has even a single unclosed tag, Selenium won't be able to detect the element using XPath.

So if this is the locating method you expect to use to find an element, ensure you don't have unclosed tags in the application.

3. Locating by DOM: The last resort for finding an element, and the weakest one too. It is not advisable to use, especially if your website undergoes many changes, or if the intention of testing your website is functional/regression coverage.

4. Locating by CSS: If your initial thought is "what if my web app doesn't use CSS, will this work?", the answer is "Yes": this locator type works whether you have CSS stylesheets or not. It is faster than XPath and, as stated on the Selenium website, experienced users like to use the CSS way of locating an element. However, this too doesn't work if the DOM is broken, and it is browser dependent, as different browsers handle CSS differently. The sketch below shows all four strategies side by side.
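
As a minimal illustration using the Selenium RC Java client (the form and button names here are hypothetical, and selenium is an initialized com.thoughtworks.selenium.DefaultSelenium instance), the same submit button can be targeted with each of the four strategies:

// Hypothetical page: a login form with a submit button named "loginBtn"
selenium.click("identifier=loginBtn");                                   // matches an id or, failing that, a name attribute
selenium.click("xpath=//input[@name='loginBtn' and @type='submit']");    // combine any attributes you need
selenium.click("dom=document.forms['loginForm'].elements['loginBtn']");  // traverse the DOM via JavaScript
selenium.click("css=form#loginForm input[name='loginBtn']");             // CSS selector; fast, but browser dependent

If the identifier line fails because IDs are duplicated, the XPath and CSS lines show how additional attributes can disambiguate the element.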

To summarize the above: it is vital to run a W3C standards check on your website, so that you can throw all issues back at your developers to fix before you jump into testing your website with Selenium. A good starting point is the W3C validator [http://validator.w3.org/], available freely online. It clearly lists issues like duplicate IDs, unclosed tags, and other slip-ups. Selenium is a powerful tool for testing your web application across operating systems and browsers, but if the DOM is broken then none of the locators will work, because Selenium uses the browser's JavaScript engine; the results will therefore also depend on which browser you use for the test.

It is not mentioned clearly on the website http://www.seleniumhq.org when to use which locator strategy, and the information can be overwhelming: why have they provided all these locator categories to the users? Couldn't there have been a simpler, more straightforward way of handling elements, using just a single locator type which works under any circumstance? I don't know the answer to this yet, but maybe I will…

Till the next blog, happy testing your sites with Selenium.

How to become a good Software Test Engineer?


A good Software Test Engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful, as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk or critical areas of an application on which to focus testing efforts when time is limited.

Software Testing Types


ACCEPTANCE TESTING. Testing to verify a product meets customer specified requirements. A customer usually does this type of testing on a product that is developed externally.

BLACK BOX TESTING. Testing without knowledge of the internal workings of the item being tested. Tests are usually functional.

COMPATIBILITY TESTING. Testing to ensure compatibility of an application or Web site with different browsers, OSs, and hardware platforms. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.

CONFORMANCE TESTING. Verifying implementation conformance to industry standards. Producing tests for the behavior of an implementation to be sure it provides the portability, interoperability, and/or compatibility a standard defines.

FUNCTIONAL TESTING. Validating an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests which perform a feature by feature validation of behavior, using a wide range of normal and erroneous input data. This can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Functional testing can be performed on an automated or manual basis using black box or white box methodologies.

INTEGRATION TESTING. Testing in which modules are combined and tested as a group. Modules are typically code modules, individual applications, client and server applications on a network, etc. Integration Testing follows unit testing and precedes system testing.

LOAD TESTING. Load testing is a generic term covering Performance Testing and Stress Testing.

PERFORMANCE TESTING. Performance testing can be applied to understand your application or WWW site’s scalability, or to benchmark the performance in an environment of third party products such as servers and middleware for potential purchase. This sort of testing is particularly useful to identify performance bottlenecks in high use applications. Performance testing generally involves an automated test suite as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.

REGRESSION TESTING. Similar in scope to a functional test, a regression test allows a consistent, repeatable validation of each new release of a product or Web site. Such testing ensures reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process. Though regression testing can be performed manually an automated test suite is often used to reduce the time and resources needed to perform the required testing.

SMOKE TESTING. A quick-and-dirty test that the major functions of a piece of software work without bothering with finer details. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

STRESS TESTING. Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. A graceful degradation under load leading to non-catastrophic failure is the desired result. Often Stress Testing is performed using the same process as Performance Testing but employing a very high level of simulated load.

SYSTEM TESTING. Testing conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.

UNIT TESTING. Functional and reliability testing in an Engineering environment. Producing tests for the behavior of components of a product to ensure their correct behavior prior to system integration.
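
As a minimal sketch in Java (assuming JUnit 4 on the classpath and a hypothetical Calculator class under test), such a component-level test might look like this:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {
    @Test
    public void addReturnsSumOfOperands() {
        Calculator calc = new Calculator(); // hypothetical component under test
        assertEquals(5, calc.add(2, 3));    // verify correct behavior prior to system integration
    }
}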

WHITE BOX TESTING. Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing.

SQL Injection Prevention Techniques (Series-II)

By Atish Singh

(Continued from SQL Injection Prevention Techniques (Series-I))

Stored Procedures

Stored procedures have the same effect as the use of prepared statements when implemented safely*. They require the developer to define the SQL code first, and then pass in the parameters. The difference between prepared statements and stored procedures is that the SQL code for a stored procedure is defined and stored in the database itself, and then called from the application. Both of these techniques have the same effect in preventing SQL injection, so it's your choice which approach makes the most sense for you.

Safe Java Stored Procedure Example

The following code example uses a CallableStatement, Java’s implementation of the stored procedure interface, to execute the same database query. The “sp_getAccountBalance” stored procedure would have to be predefined in the database and implement the same functionality as the query defined above.

String custname = request.getParameter("customerName"); // This should REALLY be validated
try {
    CallableStatement cs = connection.prepareCall("{call sp_getAccountBalance(?)}");
    cs.setString(1, custname);
    ResultSet results = cs.executeQuery();
    // ... result set handling
} catch (SQLException se) {
    // ... logging and error handling
}

Safe VB .NET Stored Procedure Example

The following code example uses a SqlCommand, .NET’s implementation of the stored procedure interface, to execute the same database query. The “sp_getAccountBalance” stored procedure would have to be predefined in the database and implement the same functionality as the query defined above.

Try
    Dim command As SqlCommand = new SqlCommand("sp_getAccountBalance", connection)
    command.CommandType = CommandType.StoredProcedure
    command.Parameters.Add(new SqlParameter("@CustomerName", CustomerName.Text))
    Dim reader As SqlDataReader = command.ExecuteReader()
    ' ...
Catch se As SqlException
    ' error handling
End Try

There are some additional security and non-security benefits of stored procedures that are also worth considering. One security benefit is that if you make exclusive use of stored procedures for your database, you can restrict all database user accounts to only have access to the stored procedures. This means that database accounts do not have permission to submit dynamic queries to the database, giving you far greater confidence that you do not have any SQL injection vulnerabilities in the applications that access the database. Some non-security benefits include performance gains (in most situations), and having all the SQL code in one location potentially simplifies maintenance of the code and keeps the SQL code out of the application developers' hands, leaving it for the database developers to develop and maintain.

Escaping all User Supplied Input

Each DBMS supports a character-escaping scheme with which you can escape special characters to indicate to the DBMS that the characters you are providing in the query are intended as data, not code. If you escape all user-supplied input using the proper escaping scheme for the database you are using, the DBMS will not confuse that input with SQL code written by the developer, thus avoiding any possible SQL injection vulnerabilities.
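
One way to do this in Java is with the OWASP ESAPI library. The sketch below is an illustration under that assumption, not the only scheme; it reuses the account-balance query and the request object from the earlier examples, and uses an Oracle-specific codec:

import org.owasp.esapi.ESAPI;
import org.owasp.esapi.codecs.Codec;
import org.owasp.esapi.codecs.OracleCodec;

// Escape the user-supplied value with the Oracle escaping scheme so the
// DBMS treats it strictly as data, not as SQL code. Prepared statements
// remain the preferred defense.
Codec ORACLE_CODEC = new OracleCodec();
String custname = request.getParameter("customerName");
String query = "SELECT account_balance FROM user_data WHERE user_name = '"
             + ESAPI.encoder().encodeForSQL(ORACLE_CODEC, custname) + "'";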

Additional Defenses

Least Privilege

To minimize the potential damage of a successful SQL injection attack, you should minimize the privileges assigned to every database account in your environment. Do not assign DBA or admin type access rights to your application accounts. We understand that this is easy, and that everything just 'works' when you do it this way, but it is very dangerous. Start from the ground up to determine what access rights your application accounts require, rather than trying to figure out what access rights you need to take away. Make sure that accounts that only need read access are granted read access only to the tables they need. If an account only needs access to portions of a table, consider creating a view that limits access to that portion of the data, and assign the account access to the view instead of the underlying table. Rarely, if ever, grant create or delete access to database accounts.

If you adopt a policy where you use stored procedures everywhere, and don't allow application accounts to directly execute their own queries, then restrict those accounts so they can only execute the stored procedures they need. Don't grant them any rights directly to the tables in the database.

SQL injection is not the only threat to your database data. Attackers can simply change a parameter value from one of the legal values they are presented with to a value that is unauthorized for them, but that the application itself is authorized to access. As such, minimizing the privileges granted to your application will reduce the likelihood of such unauthorized access attempts, even when an attacker is not trying to use SQL injection as part of their exploit.

While you are at it, you should minimize the privileges of the operating system account on which the DBMS runs. Don’t run your DBMS as root or system! Most DBMSs run out of the box with a very powerful system account. For example, MySQL runs as system on Windows by default! Change the DBMS’s OS account to something more appropriate, with restricted privileges.

White List Input Validation

It is always recommended to prevent attacks as early as possible in the processing of the user’s (attacker’s) request. Input validation can be used to detect unauthorized input before it is passed to the SQL query. Developers frequently perform black list validation in order to try to detect attack characters and patterns like the ‘ character or the string 1=1, but this is a massively flawed approach as it is typically trivial for an attacker to avoid getting caught by such filters. Moreover, these filters frequently prevent authorized input, like O’Brian, when the ‘ character is being filtered out.
White list validation is appropriate for all input fields provided by the user. White list validation involves defining exactly what IS authorized, and by definition, everything else is not authorized. If it's well-structured data, like dates, social security numbers, zip codes, e-mail addresses, etc., then the developer should be able to define a very strong validation pattern, usually based on regular expressions, for validating such input. If the input field comes from a fixed set of options, like a drop-down list or radio buttons, then the input needs to match exactly one of the values offered to the user in the first place. The most difficult fields to validate are so-called 'free text' fields, like blog entries. However, even those types of fields can be validated to some degree: you can at least exclude all non-printable characters and define a maximum size for the input field, as sketched below.
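
As a minimal sketch in Java (the field, a US zip code, and its pattern are illustrative assumptions), white list validation might look like this:

import java.util.regex.Pattern;

public class InputValidator {
    // The white list: exactly what IS authorized; everything else is rejected.
    private static final Pattern ZIP_CODE = Pattern.compile("^\\d{5}(-\\d{4})?$");

    public static String validateZipCode(String input) {
        if (input == null || !ZIP_CODE.matcher(input).matches()) {
            throw new IllegalArgumentException("Invalid zip code");
        }
        return input; // safe to pass onward, ideally still as a bind variable
    }
}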


SQL Injection Prevention Techniques (Series-I)


By Atish Singh

Here a set of techniques is presented which help prevent SQL Injection, one of the most dangerous security vulnerabilities. These techniques apply across technologies such as Java, .NET, PHP, and so on.

Preventing SQL Injection requires a lot of defensive measures to be taken for your application. The basic defensive measures are considered the primary defenses and consist of the following programming techniques:

Primary defenses:
1. Parameterization (prepared statements)
2. Stored procedures
3. Escaping all user supplied input

Applying the primary defense techniques will secure your application from the basic security vulnerabilities; if you want to take the application's security a step further, you also need the extra defensive measures defined as follows:

Extra Defenses:
1. Least Privilege
2. White List Input Validation

Let's take an example that is unsafe and vulnerable to SQL Injection.
The following (Java) example is UNSAFE, and would allow an attacker to inject code into the query that would be executed by the database.
String query = "SELECT account_balance FROM user_data WHERE user_name = " + request.getParameter("customerName");

try {
    Statement statement = connection.createStatement();
    ResultSet results = statement.executeQuery(query);
} catch (SQLException se) {
    // ... logging and error handling
}

In the preceding example, the unvalidated "customerName" parameter, which is simply appended to the query, allows an attacker to inject any SQL code they want. Unfortunately, this method of accessing databases is all too common among programmers.
Considering the preceding example, let’s now discuss the defensive measures that can be used to prevent SQL Injections.

Primary Defenses

Prepared Statement
Prepared statements ensure that an attacker is not able to change the intent of a query, even if SQL commands are inserted by an attacker. In the safe example below, if an attacker were to enter the userID xyz' or '1'='1, the parameterized query would not be vulnerable and would instead look for a user name which literally matched the entire string xyz' or '1'='1.

Language specific recommendations:
• Java EE – use PreparedStatement() with bind variables
• .NET – use parameterized queries like SqlCommand() or OleDbCommand() with bind variables
• PHP – use PDO with strongly typed parameterized queries (using bindParam())
• Hibernate – use createQuery() with bind variables (called named parameters in Hibernate)

Safe Java Prepared Statement Example
The following code example uses a PreparedStatement, Java’s implementation of a parameterized query, to execute the same database query.
String custname = request.getParameter("customerName"); // this should REALLY be validated too

// perform input validation to detect attacks
String query = "SELECT account_balance FROM user_data WHERE user_name = ?";

PreparedStatement pstmt = connection.prepareStatement(query);
pstmt.setString(1, custname);
ResultSet results = pstmt.executeQuery();

Safe C# .NET Prepared Statement Example
With .NET, it's even more straightforward. The creation and execution of the query don't change. All you have to do is pass the parameters to the query using the Parameters.Add() call, as shown here.
String query = "SELECT account_balance FROM user_data WHERE user_name = ?";
try {
    OleDbCommand command = new OleDbCommand(query, connection);
    command.Parameters.Add(new OleDbParameter("customerName", CustomerName.Text));
    OleDbDataReader reader = command.ExecuteReader();
    // ...
} catch (OleDbException se) {
    // error handling
}

Hibernate Query Language (HQL) Prepared Statement (Named Parameters) Examples

First is an unsafe HQL Statement

Query unsafeHQLQuery = session.createQuery("from Inventory where productID='" + userSuppliedParameter + "'");

Here is a safe version of the same query using named parameters

Query safeHQLQuery = session.createQuery("from Inventory where productID=:productid");
safeHQLQuery.setParameter("productid", userSuppliedParameter);


Performance Testing of Web Services – I

By Ravinder Singroha

In this series of blogs, we will understand what we mean by performance testing of a web service, and in each upcoming part we will take up the various commercial and open source tools available to assist us in this venture. Let us first begin with a very basic question:

What is Web Service?

Web services are an XML-based technology that allows applications to communicate with each other, regardless of environment, by exchanging messages in a standardized format (XML) via web interfaces (SOAP and WSDL APIs). In simpler terms, a web service is a web-enabled API. To read and understand more about web services, you can visit Pallavi's blog.
Let us now understand what we mean by performance testing and why we should do it.

What is Performance Testing?

Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Qualitative attributes such as reliability, scalability and interoperability may also be evaluated. Performance testing can verify whether a system or software meets the specifications claimed by its manufacturer or vendor. The process can compare two or more devices or programs in terms of parameters such as speed, data transfer rate, bandwidth, throughput, efficiency or reliability. For further reading, visit Himanshu’s blog.

Why Does a Web Service Need Performance Testing?

The next question that comes to mind is why our web services need performance testing. Here I am going to assume that you are either an experienced reader or that you took my recommendations seriously and visited the blogs mentioned above.
Consider a setup where a common service provider is used by many service requestors. These requestors want assurance that the web service meets acceptable performance criteria when under stress/load. The goal of performance or load testing of a web service application is to find out how the web service scales as the number of clients accessing it increases, and to gather performance and stability information about the server, such as throughput, CPU usage, and the time taken to get a response from a web method, when there are, say, 5, 50, 200, 500 or more concurrent users.
The service requestors require these statistics to ensure that the provider is meeting the SLAs set, and that they can expect a reliable user experience when the web service is used under load (which may be caused by an increase in the number of users accessing it) and under volume stress (which may be caused when large requests and responses are generated).
In addition, the service providers need to ensure that they have enough web server resources where the service is hosted to cater to the expected load, and that the web service is scalable enough to function correctly under stress.

Web Service Performance Criteria:

There are various performance criteria which one should consider when testing a web service for performance; a rough measurement sketch in Java follows the criteria below.

Server Side:

Throughput: Measured as the number of requests served per second.

Latency (Transaction Time): The time taken between a service request arriving and being serviced. We test the response time as we:
a. Increase the size of the web service request
b. Increase the number of virtual users

Resource Utilization: We should be able to figure out the resource demands of the web service under various virtual user workloads/request volumes, so that optimum resources can be provisioned for the expected load.

Client Side:

Latency: The time taken for a service call to return the earliest response bytes; this includes network latency.

Throughput: The average byte flow per unit time, including latency.
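
To make these criteria concrete, here is a rough harness sketch in plain Java; the endpoint URL, user count, and request count are hypothetical, and a real test would use a dedicated tool rather than this bare-bones loop:

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class WebServiceLoadSketch {
    public static void main(String[] args) throws Exception {
        final int users = 50;                                 // hypothetical concurrent virtual users
        final int requestsPerUser = 20;
        final String endpoint = "http://example.com/service"; // hypothetical web service endpoint
        final AtomicLong latencyNanos = new AtomicLong();
        final AtomicLong completed = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        long start = System.nanoTime();
        for (int i = 0; i < users; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    for (int j = 0; j < requestsPerUser; j++) {
                        try {
                            long t0 = System.nanoTime();
                            HttpURLConnection con = (HttpURLConnection) new URL(endpoint).openConnection();
                            con.getResponseCode();            // blocks until the earliest response bytes arrive
                            latencyNanos.addAndGet(System.nanoTime() - t0);
                            completed.incrementAndGet();
                            con.disconnect();
                        } catch (Exception e) {
                            // a real harness would record this as a failed request
                        }
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        long elapsedMs = (System.nanoTime() - start) / 1000000L;
        System.out.println("Completed requests : " + completed.get());
        System.out.println("Avg latency (ms)   : " + latencyNanos.get() / Math.max(1, completed.get()) / 1000000L);
        System.out.println("Throughput (req/s) : " + completed.get() * 1000.0 / Math.max(1, elapsedMs));
    }
}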

Conclusion:
Loosely coupled service environments provide unique benefits for the organizations that utilize them, but they introduce new challenges. Both service providers and consumers should be aware of the performance of the service so that they can ensure it meets the SLAs specified. Many tools and profilers are available to detect and optimize the performance of a web service. In the next blogs in this series, I will take up a few tools and show how to do performance testing of a web service with them.


Agent Controller Issue on Starting RAServer Process


By Kuldeep Singh

Introduction: This document has been prepared to help resolve an issue that might occur during invocation of the RPT Agent Controller process on a Linux machine.

Requirement: Our requirement was to generate load from a Linux machine (client) against the application server.

For this, we installed the load generating tool (Rational Performance Tester, version 7.0.2) on a Windows machine (OS: Windows XP Professional SP2) and the RPT Agent Controller process (version 7.0.2.1) on a Linux machine (OS: Red Hat Enterprise Linux AS release 4, Nahant).

Listed below are some of the issues which were encountered while distributing load from the load generating machine (RPT) to the Linux machine.

On executing the performance schedule, we got the following error: "Connection failed on host 172.23.244.207".

————————————————————

Security Message

Connection failed on host 172.23.244.207

Reason:
IWAT0284E The agent controller is not available on host 172.23.244.207
Make sure that:
*the agent controller is installed.
*the agent controller is configured to communicate with your machine
*you have the correct host name and port number for the agent controller.

————————————————————–

Possible reason: The above error might have occurred because the Agent Controller is not installed, or is not running, on the Linux machine. On Linux the Agent Controller process (RAServer) is not started automatically, so we have to start it manually.

Starting and Stopping Agent Controller on Linux machine:
• To start the Agent Controller process (RAServer) on the Linux machine, move to the installation location's bin directory (e.g. /opt/IBM/AgentController/bin), then execute the following command:
./RAStart.sh

• To stop the Agent Controller process (RAServer) on the Linux machine, move to the installation location's bin directory (e.g. /opt/IBM/AgentController/bin), then execute the following command:
./RAStop.sh

While trying to start the Agent Controller process on the Linux machine, we may get the following errors. (The section below describes each error, its reason, and its resolution.)

Error:
1) Starting Agent Controller
"RAServer: error while loading shared libraries: libstdc++-libc6.2-2.so.3: cannot open shared object file: Error 40 No such file or directory.
RAServer failed to start."

Possible reason: The Agent Controller is compiled against the libstdc++-libc6.2-2.so.3 shared library, so this shared library must exist under the /usr/lib directory. If it does not exist, you have to install the compat-libstdc++ RPM package that comes with the operating system installation media.
Note: To check that the libstdc++-libc6.2-2.so.3 shared library is available in the /usr/lib directory, move to /usr/lib and execute the following command at the shell prompt:
# ls -l libstdc*

Resolution:
The solution is to install the standard C++ compatibility libraries to satisfy this library dependency. The version of Linux on the client machine determines which RPM or software package needs to be installed.
In our case, since we are using Red Hat Enterprise Linux AS Release 4 (Nahant) on the Linux machine, we need to install the compat-libstdc++-296-2.96-132.7.2.i386.rpm package, which is located on Red Hat 4.0 Installation Disc 3.
Note: For more information on which RPM package needs to be installed, browse the following link:

We can also download the required RPM package from the following links:
http://rpmfind.net/linux/rpm2html/search.php?query=libstdc%2B%2B-libc6.2-2.so.3&submit=Search
http://rpmfind.net/linux/RPM.

How to Install the Required RPM Package:
1) Insert the required disc in the CD-ROM drive and change to the RedHat/RPMS directory at the shell prompt (quoting the path, since it contains a space):
cd "/media/CDROM/Red Hat/RPMS/"
2) Execute the following command:
rpm -ivh compat-libstdc++-296-2.96-132.7.2.i386.rpm
If the installation is successful, you will see the following message:
Preparing…   ########################################### [100%]
1:compat-libstdc++-296   ################################## [100%]
RPM prints the name of the package and then prints a succession of hash marks as a progress meter while the package is installed.
Note: For more information on RPM packages, browse the following link:
http://www.faqs.org/docs/securing/chap3sec20.html

Now we can start the Agent Controller process (RAServer) on the Linux machine. The following message should be displayed when the Agent Controller process starts successfully:

Starting Agent Controller
RAServer Started Successfully

2) "RAServer failed to Start" Error
Possible reason: This failure is usually caused when TCP/IP port 10002 is not free. The Agent Controller listens on this port by default. It can also occur when the Agent Controller was stopped and restarted before the port could be released.
If the Agent Controller failed to start, you can resolve this as follows:
• If port 10002 is being used by another process, you can change the port number by editing the serviceconfig.xml file, which is located in the installation location's config directory (/opt/IBM/AgentController/Config/).

• If the Agent Controller was just stopped, wait a few minutes and try to start it again.

Updates about QTP 10 (I)


QTP 10 revolves around 3 pivotal features, alongside several minor features (which turned out to be quite revolutionary):

I. QC integration – which (mostly) boils down to Resource Management and Source Control:

Resource Management: Although you can keep saving your resources as attachments (for backward compatibility), you can also upgrade to a new, fuller mode of work. This includes a whole new Resources module in QC, and allows for some very neat tricks with function libraries, tests, object repositories, etc.

It should be noted, though, that other types of files (external Excel / XML files, for example) remain unmanaged attachments.

1. Resources have full metadata and a special view pane – you can view object repositories, data tables, and function library code right from QC.

2. Resources are aware of their dependencies – who relies on them, and whom they rely on. This enables a very strong warning system – when changing or deleting a resource, you'll be alerted to the repercussions, namely which tests, if any, might break. Also, the ability to immediately know who uses a shared object repository is very useful, nearly revolutionary.

3. A very neat trick is a live, automatically updated path system – when moving a function library between folders, QC will automatically update all the tests which depend on it, so they will use it at its new location. This makes the once-critical problem of hard-coded path links a non-issue. Very impressive.

4. A word about the user interface – when opening a QC resource / test from QTP, the file dialog shows the items with large, crisp icons, very similar to Word’s save dialog. Everything is very clear and intuitive, as is the ability to revert back to saving / opening a test from the File-System.

5. And what about your existing projects? Well, when upgrading to QC 10, a wizard will automatically transform all your unmanaged attachments into managed resources (if you'd like it to).

Source Control: This includes a very rich line of features which are very well executed, and effectively allow you to manage a QTP project as any other code project:

1. First, the basics – QTP and QC 10 introduce a new Check-in/Check-out ability. It works similar to what you’d expect – a checked out item will be locked to all other users, and you can immediately know an item’s status by looking at its icon (green/red locks).

2. An interesting twist regards the manner in which a test or resource is locked – it's at the user level (not the local machine level). This means that if you log into QC from a different machine, you'll have access to all your checked-out items, even if they were originally checked out on a different local machine. The ability is implemented very well, both on QTP's end and on QC's end.

A major enabler for source control is the new versioning feature of QC. It manifests both as a kind of instant versioning for a single resource, and as a project-wide "base-line version" which allows you to revert your entire test framework to a previous version. Both types of versioning are supported by a surprisingly robust comparison mechanism. You can select two versions of a resource or test and see a very detailed comparison of their respective changes. For function libraries this amounts to a "simple" text comparison, but this feature truly shines in full test comparisons.

It will present changes in the different actions and their resources (data table, object repositories, inner code), as well as in the global test settings, associated function libraries, recovery scenarios, and pretty much anything else you could think of. The ability to drill down into the data is very impressive, and the effective, concise manner in which the data is presented in the top-level view is downright unbelievable. A nice touch is a small screen capture of the change, in case you don't remember what "Run all rows –> Changed into –> Run a single iteration only" means (for example).

Now to the versioning mechanism itself: whenever you check an item in, a new "version" is created, and you'll be able to revert back to it with ease. The snapshots are visible both from QC and QTP, and you can very easily choose which one to open. This gives you a kind of instant undo for a single file which worked in the past but is broken in the present.

The second mechanism is the ability to select several folders and create a full-blown "base-line version" of them and everything they relate to. Defects, inner connections, tests, history data, resources – all these and more will be "frozen" and preserved as a base-line. You can then choose to revert back to an old base-line and truly regain all the abilities that were present at that time. As all the resources, attachments, tests, and reports will be restored, you don't have to worry about forgetting anything, or leaving some minor resource at the wrong version. This is versioning with a vengeance – it allows you to track the AUT's versions alongside your own automation versions, enabling, among other things, running automation efforts on several AUT versions at once.

In conclusion – the new abilities inherent in the connection of QTP and QC "Atlantis" are (or at least seem to be) revolutionary. At last, QTP projects can be natively managed as code projects, and some of the supporting infrastructure is surprisingly robust and useful.


Application Security | PCI DSS Overview


As the number of security breaches has increased, regulatory and industry requirements have become more stringent. One of the most popular compliance standards is PCI DSS. It was developed by the major credit card companies as a guideline to help organizations that process card payments prevent credit card fraud, cracking, and various other security vulnerabilities and threats. A company processing, storing, or transmitting payment card data must be PCI DSS compliant, or risk losing its ability to process credit card payments and being audited and/or fined. Here is a brief overview of what PCI DSS is all about.

What is PCI DSS?

· PCI stands for Payment Card Industry.

· PCI DSS stands for PCI Data Security Standard (DSS), currently at version 1.2. PCI DSS is a set of comprehensive requirements for enhancing payment account data security. It was developed by a council (the PCI SSC) which includes American Express, Visa International, MasterCard Worldwide, and Japan Credit Bureau (JCB). The council is responsible for developing and managing the PCI DSS standards, and for establishing and maintaining Qualified Security Assessors (QSAs) and Approved Scanning Vendors (ASVs).

Who must comply with PCI?

Any company that stores, processes, or transmits cardholder data must comply with PCI. Compliance with PCI assures the organization that its IT infrastructure and business processes are secure. It can also serve as a great marketing tool for a company and instill greater confidence in customers' and stakeholders' minds.

Scope of PCI DSS

All systems that store, process, or transmit cardholder data, including:

a) Applications processing cardholder data (e.g. e-commerce applications, sales processing applications)

b) Network Infrastructure

c) Storage Area Networks

d) Data extracts including cardholder data

e) Backups

f) Log Files

g) Paper records

h) People

i) Organization-wide processes and structure

j) Third parties that store or transmit cardholder data on the organization's behalf, such as suppliers and dealers

Who can help you achieve PCI DSS compliance?

a) Consulting agencies: Consulting agencies can help you find gaps, implement processes to fill the gaps, and conduct a pre-audit to prepare you for the final audit by a QSA.

b) QSA: A security company qualified by the PCI SSC to assess compliance with the PCI DSS standard. QSAs are certified by the PCI SSC to perform on-site security assessments for verification of compliance with PCI DSS.

A list of QSAs can be found at

http://www.pcisecuritystandards.org/pdfs/pci_qsa_list.pdf