Thursday, December 31, 2009

Groovy and Jemmy

In this demo, we cover two things:

1. Binding your Java application to the Groovy console.

2. Automating manual steps in the Groovy console.

1. Binding your Java application to the Groovy console

//BindJavaApplication.groovy

import org.netbeans.jemmy.*;
import org.netbeans.jemmy.explorer.*;
import org.netbeans.jemmy.operators.*;

// Fire up the SwingSet2 Application

new ClassReference("SwingSet2").startApplication();

// Get a reference to the SwingSet JFrame and put it into the console's script binding.

mainFrame = new JFrameOperator("SwingSet")

def bind = new Binding()

bind.setProperty("testScript",this)

bind.setProperty("mainFrame",mainFrame)


// Fire up the Groovy console.

def console = new groovy.ui.Console(this.class.classLoader,bind)

console.run()

Thread.sleep(700000) // keep the script alive while the console is in use


2. Automating manual steps in the Groovy console

import org.netbeans.jemmy.*;
import org.netbeans.jemmy.explorer.*;
import org.netbeans.jemmy.operators.*;

// Get the first button (index 0) on the frame
buttonObj = new JButtonOperator(mainFrame, 0)

// To click the button titled "OK" instead:
// buttonObj = new JButtonOperator(mainFrame, "OK")

// Push the button on a separate thread so that a blocking
// dialog does not freeze the Groovy console
t = new Thread({ buttonObj.push() })

t.start()

And that's it: we have just clicked the first button of SwingSet.


Cheers !! Enjoy Automation.

~NV


Monday, July 27, 2009

AFT (Automation Framework Team) should not take developers' false catches

What do we mean by "developers' false catches"?

It means those implementations or unexpected application behaviors that should not be worked around in the automation framework.

Why should these not be candidates for automation? It has been observed that taking such requirements into automation may yield a short-term ROI for the organization, but it never does in the long term. You may ask "why" again; below are a couple of examples that shed more light on this question.

Case Study #1

"Product Team has new requirements in build 1.2.x: object inputs are changed with prefix or suffix strings, and that affects more than 1000 automated test-case(s)"

Due to limited resources and a short deadline to certify build 1.2.x of the product, the product lead comes to the AFT and asks for help. He requests that the Automation Framework be updated to manage the prefix and suffix inputs, so that his team does not have to update their test-case(s). As a service team (AFT), we provided a solution at the framework level. That solution may help them certify the product, but it also carries live wires to take care of:

1. From the Automation Framework development perspective,
The Automation Framework becomes dependent on product builds.
According to standard automation framework protocols, the framework should not depend on the product; an independent framework can be reused to automate other products as well, and it keeps maintenance effort and complexity low, which helps give consistent behavior across automation projects.
2. From the product team's perspective,
The product team loses track of how test-case(s) exercise product behavior. Since the Automation Framework injects the input data for build 1.2.x, the product team has no control over updating that input data, and they lose the transparency between test-case(s) and their test-data inputs. Mapping test-data inputs to test-case(s) per build can then require high effort from both the product team and the Automation Framework Team.


Case Study #2
"Product Team has more than 800 automated test-case(s) which work fine with a couple of synchronization points (like a status of “Processing On Server”). In the next iteration, the team gets a new build and, say, the “Processing On Server” synchronization point is broken."

We always face some issues with the product when automating. Some are very common: the application's behavior changes and the automation framework runs into synchronization issues. These become a real concern when they block the team from certifying the product. I see that many organizations do not care about this kind of issue, since it does not affect their ability to certify the build manually against core functionality. But because automation works by synchronizing on object behavior, these issues become "real matters" to resolve. In one of my experiences, such issues (broken synchronization points or unexpected application behaviors) made batch executions of the automation hang and crashed the application.

In such a condition, the product team wants to certify the build and requests that these kinds of issues be resolved at the automation framework level. As the solution-provider team (AFT), we resolved these issues at the automation framework level and helped them certify the build.

But why should these issues be resolved at the application level and not at the automation framework level? Because:

1. From the Automation Framework perspective,
To resolve these kinds of issues, the automation framework gets injected with code to handle broken synchronization points. This unnecessarily degrades test-case execution performance and pulls garbage code into the automation framework project.
2. From the product team's perspective,
Most of the time, these issues are related to the performance of the application, and so they should be fixed at the application level, with priority, in the QA cycle.
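As an illustration, a framework-level work-around for a broken synchronization point often amounts to a polling wrapper like the hypothetical sketch below (class name, signature, and timings are invented, not taken from a real framework):

```java
import java.util.function.BooleanSupplier;

// Hypothetical sketch of a framework-level work-around: polling for a status
// (e.g. "Processing On Server") instead of the application exposing a
// reliable synchronization point.
public class SyncWaiter {

    // Polls the condition until it holds or the timeout expires.
    // Returns true if the condition became true in time.
    public static boolean waitFor(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out: the sync point never appeared
            }
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }
}
```

Each such wrapper adds dead waiting time to every affected test-case, which is the execution-performance degradation that point 1 above describes.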

To conclude:
The AFT supports product teams by unblocking them to certify builds with hot fixes or work-arounds at the framework level. However, it is the product teams' responsibility to prioritize fixing those issues (unexpected application behaviors, i.e. false catches) at the application level and to deliver a quality product.

After all, we deliver the product only; not the product plus an automation framework to handle the product's uncertainties :)



Please, send your precious comments here,

With Regards,
Nimesh - VN.

[Photo: In this Nov. 19, 1978 file photo, Philadelphia Eagles' Herm Edwards (46) pounces on a ball fumbled by New York Giants quarterback Joe Pisarcik (9), right foreground, in the last minutes of the game. Credit: G. Paul Burnett]

Everyone is talking about automation tools like QTP,...

Everyone is talking about automation tools like QTP, WinRunner, LoadRunner, Silk Test, Selenium, and so on. But sooner or later these tools come into real practice, and someone has to automate test-case(s) with them in an existing automation framework.

How far can we go toward our goal?


This post is an opportunity to discuss the characteristics of test-case(s) when designing them for automation.
Below are a couple of thoughts I believe automated test-case(s) should embody.

§ Concise – Test-case(s) should be as simple as possible.
i.e. A test-case should not call multiple other complex test-case(s); it should have minimum dependencies.

§ Self Checking – Test-case(s) should have verification steps and should report their results such that no human interpretation is necessary. For example, in one project I found a team with around 1500 test-case(s), of which more than 44% had steps like:

1. Open the menu item.

2. Enter the A/C number and other inputs on screen A.

3. Click the "Next" button on screen A.

4. Click the "Next" button on screen B.

5. Click the "Next" button on screen C.

6. Click the "Close" button on screen D – here, one product was created.

Here, all screens (A to D) have "Next" and "Close" buttons, and that raises a real concern: the default values on each screen are never verified. The test-case for each screen should verify all of that screen's default values, so that the user or the automation tool can confirm it has reached the intended screen in the functional flow while running a specific scenario. Then, if a new build introduces a new screen with "Next" and "Close" buttons in the middle of the flow (A to D), the automated test-case(s) will catch it.
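The per-screen default-value check could be sketched roughly as below. The field names and values are hypothetical; in a real suite the "actual" map would be read from the screen by the automation tool.

```java
import java.util.Map;

// Hypothetical sketch of a self-checking step: compare each screen's expected
// default values against what is actually shown before clicking "Next".
public class ScreenVerifier {

    // Returns true only if every expected default matches the actual screen
    // state, so a new screen inserted mid-flow fails the test immediately.
    public static boolean defaultsMatch(Map<String, String> expected,
                                        Map<String, String> actual) {
        for (Map.Entry<String, String> e : expected.entrySet()) {
            if (!e.getValue().equals(actual.get(e.getKey()))) {
                return false;
            }
        }
        return true;
    }
}
```

A check like this would run once per screen (A to D), turning each "Click Next" step into a verification point instead of a blind click.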

§ Repeatable – Test-case(s) can be run repeatedly without human intervention. DDT (Data-Driven Testing), i.e. test-step parameterization, is one of the big benefits of automation.
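As a rough illustration of DDT, one test body is applied to every data row. The validation rule and the data are invented here; in practice the rows would come from a spreadsheet or CSV.

```java
import java.util.List;

// Hypothetical data-driven sketch: the same reusable check runs over every
// data row, so adding coverage means adding rows, not new test code.
public class DdtRunner {

    // Illustrative test body: validate that each A/C number is 8 digits.
    // Returns how many rows pass.
    public static int countPassing(List<String> accountNumbers) {
        int passed = 0;
        for (String acct : accountNumbers) {
            if (acct.matches("\\d{8}")) { // invented rule for the example
                passed++;
            }
        }
        return passed;
    }
}
```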

§ Robust – Test-case(s) should produce the same result now and forever; automated test-case(s) should have consistent output. This is not an easy characteristic to achieve. In a hybrid automation framework, the automation team can create BO keywords to set up a baseline, and these keywords should be practiced well enough that each test-case can initialize from the baseline, so the other test-case(s) in the queue can execute regardless of whether the previous test-case(s) passed or failed.

§ Necessary – Everything in each test-case should contribute to the specification of the desired behavior. All prerequisites should be set up properly; a test-case should have no, or only minimal, prerequisites to set manually.

§ Clear – Each step in a test-case should be easy to understand.

§ Efficient – Test-case(s) should run in a reasonable amount of time. If the manual test-case(s) take 100 minutes, then the automated test-case(s) should not take more than 50-60 minutes.

§ Specific – Each failing step should point to a specific piece of broken functionality.

§ Independent – Each test-case can be run by itself, or in a suite with an arbitrary set of other tests, in any order. To avoid dependencies in a hybrid automation framework, the automation team can develop BO keywords and add them to each test-case.
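A baseline keyword of the kind mentioned above might look like this hypothetical sketch, where every test-case calls resetBaseline() before its own steps (the class name and the baseline state are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a baseline "BO keyword": each test-case starts by
// resetting to a known state, so it never depends on what an earlier
// test-case in the queue left behind.
public class Baseline {
    private final List<String> state = new ArrayList<>();

    public void resetBaseline() {
        state.clear();           // wipe whatever the last test-case left
        state.add("logged-in");  // illustrative known starting point
    }

    public List<String> state() {
        return state;
    }
}
```

With this pattern, tests can run alone, in any order, or after a failed test, and still begin from the same state.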

§ Maintainable – Automated test-case(s) should be maintained and extended with test-case management tools like TestLink, HPQC, Test Director, or many other freeware tools that plug easily into an automation framework.

§      Traceable – Test-case(s) should be traceable to the requirements.

So, when you have test-case(s) to automate, keep the above protocols in mind to do your tasks more effectively and efficiently. I would like to hear more thoughts on this; please add your precious comments here.

Enjoy Automation / NV
