General QA interview questions

1. How do you determine when to stop testing? OR You said you have x,y and z types of testing in your team, do you think it is enough?
(Global Relay Lead SDET)

2. How do you triage bugs? Who does it?
(Global Relay Lead SDET)

3. Your resume says you have done functional testing, performance testing, etc. What is your strategy when you design those tests? OR What strategy do you apply to design test cases? (Global Relay Lead SDET)

4. Do you perform unit testing? OR who performs unit testing?
(Global Relay Lead SDET)

5. What do you do when you find a bug during the last day of the release?
(Amazon phone interview)

6. Explain your team's QA process.

7. Explain your team's deployment process.

8. What do you do when you find a bug in the production environment?

9. How do you decide testing is finished?
(Cvent)

10. You say it's a bug and the dev says it's not. How do you handle this situation?
(Cvent)



A bug is found in the production environment but not in dev. What will you do?

- Verify whether you missed any test case during execution

- Check the test data and steps, and make sure it's a valid bug

- Once you know the bug is valid, start analysing

- Check the release version, test data, test environment, config files, environment variables, and database schema

- If it is due to performance, the network, or the database, narrow down the problem based on the available information

- Replicate the setup and try to reproduce the bug in the QA/dev environment

- Verify that everything is deployed properly
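One concrete way to work through the checklist above is to diff the two environments' configuration. This is a minimal sketch, assuming config can be loaded as key/value pairs; all names and values here are illustrative, not from any real system.

```python
# Check for config drift when a bug reproduces in prod but not in dev:
# diff the two environments' settings to find candidate causes.
def config_diff(prod, qa):
    """Return keys whose values differ between the two environments."""
    keys = set(prod) | set(qa)
    return {k: (prod.get(k), qa.get(k)) for k in keys if prod.get(k) != qa.get(k)}

# Illustrative values only: the schema version differs here, which
# would be a lead worth investigating.
prod_cfg = {"release": "2.4.1", "db_schema": "v12", "cache_ttl": 300}
qa_cfg = {"release": "2.4.1", "db_schema": "v11", "cache_ttl": 300}

print(config_diff(prod_cfg, qa_cfg))  # {'db_schema': ('v12', 'v11')}
```

Anything the diff flags (schema version, env variables, feature flags) becomes the first thing to replicate on the QA stack.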


What is the release process in your team?

We follow agile testing with two-week sprints. At the beginning of the sprint, during planning, we learn about the stories, go over the requirements, discuss roadblocks, and QA gets to clarify questions about the ticket. For the next two days the QA team works on developing test cases. I write test cases, put them in TestRail, and also add them to the GitHub ticket in a well-formatted table. I work closely with the devs to understand the products and features being developed; the more I know about the system, the better I can test it. Depending on the type of story, I may have to write just FE or BE test cases, or database- and network-related tests. I cover negative and edge cases along with all the functional and E2E test cases.

The devs start implementing and write unit tests. When development finishes, they create a pull request; the static code analysis tool Codacy checks code coverage, code complexity, and code duplication on every commit and on the pull request, and peer review also happens. Code gets merged only after the full set of unit tests passes. Jenkins automatically merges the code, runs the unit tests, and deploys to the dev stack.

The post-merge tests then get triggered. These include sanity test cases and partial regression test cases, which are E2E tests developed by QA using Selenium.
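A post-merge job like this usually selects a subset of the E2E tests by tag. Here is a hypothetical sketch of that selection; the test names and tags are illustrative, not from a real project.

```python
# Each E2E test carries suite tags; the post-merge job picks tests by
# suite name ("sanity" after merge, "regression" for fuller runs).
SUITES = {
    "test_login": {"sanity", "regression"},
    "test_checkout": {"sanity", "regression"},
    "test_profile_edit": {"regression"},
}

def select(suite):
    """Return the tests tagged with the given suite, in a stable order."""
    return sorted(t for t, tags in SUITES.items() if suite in tags)

print(select("sanity"))      # ['test_checkout', 'test_login']
print(select("regression"))  # all three tests
```

In practice the same idea is often expressed with pytest markers and `pytest -m sanity`, but the principle is the same: tag once, select per pipeline stage.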

If the tests fail, the commit that caused the failures is identified, the developer is notified, and a PASS/FAIL status is marked on the commit. The dev starts fixing the issues. Once the feature is all PASS in the automation pipeline, the dev moves the ticket to the "testing" pipeline.

Once it comes to QA for testing, QA executes the smoke tests first, then the test cases that were already written. When all the test cases pass, I put a "PASS on dev" stamp on the ticket and move it along.

If it fails, it goes back to dev to fix the reported failures. QA analyses each failure to find the root cause: collect logs, analyse them, and point out where the issue is happening (FE or BE, etc.).

Now we are at the end of the sprint and need to deploy to production. QA runs partial regression tests manually on dev to make sure everything is fine before the final deployment.

The dev stack gets deployed to staging, and automatic sanity and regression tests are triggered on staging.

QA does new feature testing manually.

Once things look good on staging, the production deployment happens. Jenkins triggers sanity as well as regression tests on the prod test client, and QA verifies the new features manually.

Performance and security tests run on staging; we kick these off at the end of the week and let them run over the weekend.


Defining test strategy 

What do we have now? Requirement documents, design documents, and a test plan.

In some cases, we also have organizational-level test strategy documents to refer to.

The test strategy defines the testing types and why those types of testing are required.

It also includes who will test and when.

Clear steps

A. Understand documents/requirements/products

B. Agree on testing types, why these are required

C. Test case writing

D. Test case coverage

E. Test cycle creation - smoke, sanity, regression etc 

F. Automation: decide what to automate, and also which test cases should be part of CI (pre- and post-merge test suites)

G. Make sure unit tests are written

H. Plan for parallel execution

I. Analyse past failures and customer complaints; get to know the important customer touch points and understand the critical areas of the project

J. Give importance to exploratory testing

  1. Go over documents, dive deep, discuss, understand, and give feedback

  2. Start test case writing 

Very important: test case coverage. Test cases shall cover all functions and modules.

Test cases shall map to requirements to make sure each requirement is covered

No redundant test cases

Assign priority, severity and testing types correctly

Reuse existing test cases where possible, and write new ones to be reusable

Clear steps: anyone should be able to execute the test cases

State assumptions explicitly

Test data should be kept separately 

Need to cover all types of testing

Reduce unnecessary execution of huge test case sets
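The requirement-mapping point above can be sketched as a quick traceability check. All requirement and test-case IDs here are made up for illustration.

```python
# Traceability check: every requirement must be covered by at least one
# test case; report any coverage gaps.
REQUIREMENTS = ["REQ-1", "REQ-2", "REQ-3"]
TEST_MAP = {"TC-01": ["REQ-1"], "TC-02": ["REQ-1", "REQ-2"]}

def uncovered(requirements, test_map):
    """Return the requirements that no test case maps to."""
    covered = {req for reqs in test_map.values() for req in reqs}
    return [r for r in requirements if r not in covered]

print(uncovered(REQUIREMENTS, TEST_MAP))  # ['REQ-3'] -- a coverage gap
```

Running a check like this before each release keeps the requirement-to-test mapping honest as both lists grow.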

  1. Create a smart test cycle: understand what has been modified and what is going to be delivered, and define the test cycle based on that.

  2. Smoke, sanity, regression: keep a clear distinction between them.

  3. Automation: decide what to automate, and carefully choose the automation framework.

  4. Decide which tests go in CI

  5. Make sure unit tests are written

  6. Parallel execution: needed if you must support a whole lot of browser and OS combinations

  7. During testing, document and report your bugs well. In addition to just filing bugs, give metrics about the overall quality of the product so that decisions can be made to improve it.

  8. Have a very good understanding of the product architecture, design, and code so that you can give feedback. For example, when a fix has been developed for a bug, check whether it is a quick fix or a permanent fix; quick fixes only add more issues later.

  9. Communication is essential: report quality metrics, and if a bug is opened, own it, triage it, and take it through to closure.

  10. Analyse past failures and customer complaints to come up with areas to focus on
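The "smart test cycle" idea in step 1 can be sketched as selecting tests from a module-to-test map plus a fixed sanity set. The mapping below is hypothetical; a real one would come from code-coverage or ownership data.

```python
# Map changed modules to the test groups that cover them, and run only
# those plus the sanity set, instead of the full regression suite.
COVERAGE = {
    "payments": ["test_checkout", "test_refund"],
    "search": ["test_search_filters", "test_search_ranking"],
    "auth": ["test_login", "test_password_reset"],
}
SANITY = ["test_homepage_loads", "test_login"]

def build_cycle(changed_modules):
    """Sanity tests plus everything covering the changed modules."""
    selected = set(SANITY)
    for module in changed_modules:
        selected.update(COVERAGE.get(module, []))
    return sorted(selected)

print(build_cycle(["payments"]))
# ['test_checkout', 'test_homepage_loads', 'test_login', 'test_refund']
```

The payoff is step 1's goal directly: a small, targeted cycle per change instead of re-running everything.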


For e-commerce sites, the testing strategy could focus on:

Availability testing

Usability

Security 

Performance load/stress

In addition to functional tests

Also,

Test all functions

Gauge how friendly it is to the users

Make it secure: nobody wants to submit personal data unless the site is secure

We use online retail sites so much in day-to-day life; use that experience to think like a user and do more exploratory testing

Important customer touch points must be tested with utmost care and priority 

SEO friendliness

Practically, due to time and budget considerations, it is not possible to perform exhaustive testing for every set of test data, especially when there is a large pool of input combinations.

  • We need techniques that select test cases intelligently from the pool so that all test scenarios are still covered.

  • We use two techniques to achieve this: Equivalence Partitioning and Boundary Value Analysis.
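The two techniques can be shown concretely for a field that accepts integers in [1, 100]; this range is just an example, not from any requirement.

```python
def boundary_values(lo, hi):
    """Boundary Value Analysis: just below, at, and just above each bound."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def partition_representatives(lo, hi):
    """Equivalence Partitioning: one representative input per class,
    instead of testing every value in the range."""
    return {"invalid_low": lo - 10, "valid": (lo + hi) // 2, "invalid_high": hi + 10}

print(boundary_values(1, 100))            # [0, 1, 2, 99, 100, 101]
print(partition_representatives(1, 100))  # the valid representative is 50
```

Six boundary values plus three partition representatives replace a hundred possible inputs while still exercising every class the spec distinguishes.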

SEO friendliness: too many JavaScript and Flash elements in the UI decrease search engine ranking. Search engines crawl websites to rank them, and crawlers cannot crawl JavaScript and Flash elements well.
