Encouraging management to move away from manual testing and do things right

I am working on a project that is quite large and complicated for a web application. The first problem is that no one is interested in any tooling that could actually address the project's problems (lack of time, no schedule adjustments in response to constantly changing requirements). It doesn't matter that these tools are inexpensive ($500 for a company making millions) and don't require much configuration (although the project badly needs things like build-automation tools to free up time).

In any case, this means that testing is done manually, because documentation is the deliverable, and the actual technical design, implementation, and testing of the site suffer as a result (are we developers or document authors? What are we even trying to do here? These are the questions that come to mind). The site is large and complex (not Facebook-scale or anything like that), and running the tests manually, as I was ordered to do (despite my warnings), tells me that this is not quality testing, and therefore no quality product can come out of it.

What benefits can I present to the right people to make the case for automated testing (which they know I can implement)? I know the screen resolution can be changed from the command line with a third-party Windows utility, so even that could be part of an automated build. Instead, I will probably have to run all of these browser, screen-resolution, and window-size permutations by hand. Also, how are recorded tests replayed? Do they break when windows are minimized? The big problem is that I end up babysitting the tests while the PC does none of the work, when my job is precisely to make the computer do the work. And given the lack of resources, this ties up the dev box: it is used for development and then also for testing. It would be far better to automate all this as an overnight run, when the box is free.
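The permutation problem described above (browsers × screen resolutions × window sizes) is exactly what a machine should enumerate, not a person. A minimal sketch, with illustrative values and a placeholder `run_suite` function standing in for whatever actually switches the display mode and drives the browser (none of these names come from the original post):

```python
import itertools

# Assumed test matrix -- the concrete browsers and resolutions are
# illustrative, not taken from the post.
BROWSERS = ["firefox", "chrome", "ie"]
RESOLUTIONS = [(1024, 768), (1280, 1024), (1920, 1080)]
WINDOW_SIZES = ["maximized", "800x600"]


def build_matrix():
    """Enumerate every browser/resolution/window-size combination."""
    return list(itertools.product(BROWSERS, RESOLUTIONS, WINDOW_SIZES))


def run_suite(browser, resolution, window):
    """Placeholder: in a real overnight build this step would call the
    third-party resolution-switching CLI and then launch the browser
    tests.  Here it only reports which configuration would run."""
    w, h = resolution
    return f"{browser} @ {w}x{h} ({window}): OK"


if __name__ == "__main__":
    for browser, resolution, window in build_matrix():
        print(run_suite(browser, resolution, window))
```

Even this toy matrix is 3 × 3 × 2 = 18 runs per build: tedious by hand, trivial for a scheduled overnight job on the otherwise idle dev box.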

But the tricky part is finding a strategy that lets you adopt the tool with minimal drag on the schedule in the first release that uses it. Testing is always squeezed at the end of the schedule, which is why it is the activity most sensitive to schedule pressure. Anything you can do to show management how to reduce or eliminate the learning curve and the setup time for automated tests is likely to improve your chances of getting the tool adopted.
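One way to attack the setup-time objection is to hide the tool behind a single script whose only visible output is the one-line status a manager reads the next morning. A minimal sketch (the function and suite names are hypothetical, invented for this example):

```python
def summarize(results):
    """Collapse per-suite pass/fail results into the one-line status
    a manager actually reads after the overnight run."""
    failed = sorted(name for name, ok in results.items() if not ok)
    if not failed:
        return f"NIGHTLY: all {len(results)} suites passed"
    return f"NIGHTLY: {len(failed)}/{len(results)} suites FAILED: {', '.join(failed)}"


if __name__ == "__main__":
    # Illustrative results -- the suite names are made up for the example.
    print(summarize({"login": True, "checkout": False, "search": True}))
```

The point is the interface, not the implementation: if adopting the tool means "run one script, read one line", the learning-curve argument against it largely disappears.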


Source: https://habr.com/ru/post/1731110/

