TestChat

Ministry of Testing
Thank you everyone for getting involved! Thoroughly enjoyed it and can't believe the hour is up! We'll be back again on the 28th of February for our next #TestChat
Ministry of Testing
OK, the first question of the evening - What crossovers are there with current testing and testing AI? What existing testing methods can be used to test AI? #TestChat
Marcin Placzek
I believe we can still use session-based testing for AI testing. It will be appropriate for a first validation and for discovering the next test paths #TestChat
Heather Reid
I'm thinking exploratory testing is going to be more valuable than ever with AI #TestChat
Amit Bansal
#TestChat - from what I understand about AI testing, it's a highly specialized field. Therefore, extensive and huge data sets will play a key role here
Robert Ryan
If the output is meant to mimic human intelligence - then surely something which satisfies a human would be the measure. But we have humans/people who have extremes of behaviour.
magdao
Exploratory for sure. I'm also guessing methods from usability testing might come in handy, as there will be a large grey area of what inputs can lead to useful outputs and an application should probably be able to guide the user to the effective ones. #TestChat
Amit Bansal
#TestChat - I believe this would also crossover/overlap with exploratory and boundary testing, would you agree?
Richard Bradshaw
Exploratory testing will be key, as I believe it will be very rare for there to be a single repeatable answer. So we'll have to continuously be asking questions. #testchat
Ministry of Testing
True, so we still have to test edge cases, negative tests, etc. #TestChat
Robert Ryan
For example, if we had an AI system which suggested stock market investments, then there could be a risky AI or a very cautious AI engine.
Alex is in your softwares testing all your things
Part of my exploratory testing involves generating my own mental model of the product - it seems like AI would continuously be changing...?
Noemi Ferrera
I think it really depends on the AI application itself. Analytics could give us a lot of information about the system being well implemented and achieving its purpose. Also, user input.
Richard Bradshaw
Modelling is a key skill that will be in high demand. A lot of attempts at automated checks will have asserts against some model instead of a fixed oracle. So those models will need to be carefully designed
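A minimal sketch of the idea of asserting against a model rather than a fixed oracle. The `predict` and `reference_model` functions here are hypothetical stand-ins for the system under test and a simpler approximation of its expected behaviour; the names and tolerance are illustrative, not from any real framework:

```python
import math

def reference_model(x):
    # Hypothetical: a crude approximation of what the system "should" do.
    return 2.0 * x

def predict(x):
    # Hypothetical stand-in for the AI system under test.
    return 2.0 * x + 0.01

def check_against_model(x, tolerance=0.05):
    """Pass if the system's output is close to the model, not exactly equal."""
    expected = reference_model(x)
    actual = predict(x)
    return math.isclose(actual, expected, abs_tol=tolerance)

assert check_against_model(3.0)                        # within tolerance of the model
assert not check_against_model(3.0, tolerance=0.001)   # an overly strict tolerance fails
```

The design point is that the tolerance itself becomes part of the test design, which is what makes the model worth designing carefully.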
Ministry of Testing
@alangshall Yes it would act as a continuously learning system - so your model would have to adjust #testchat
Niranjani
Exploratory testing could be used to generate data and feed to the AI engine so that it can make better judgements.
magdao
#TestChat If the AI is expected to learn, perhaps some lessons from teaching, as in good old IRL school methods to judge "quality characteristics" and progress, might prove applicable.
Alex is in your softwares testing all your things
So exploratory testing would need to be inherently repetitive - running the same steps multiple times to assess that the results fall into a desired range or pattern?
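One way to picture this repetitive assessment is a statistical check over many runs: assert on the distribution of outcomes rather than one fixed answer. The `noisy_system` function below is a hypothetical non-deterministic component, and the ranges are arbitrary examples:

```python
import random
import statistics

def noisy_system(rng):
    # Hypothetical stand-in for an AI component whose output varies run to run.
    return 10.0 + rng.gauss(0.0, 0.5)

def run_repeatedly(n=100, seed=42):
    # A fixed seed keeps the "exploratory" repetition reproducible.
    rng = random.Random(seed)
    return [noisy_system(rng) for _ in range(n)]

results = run_repeatedly()
mean = statistics.mean(results)

# Assert that results fall into a desired range/pattern, not a single value.
assert 9.5 < mean < 10.5
assert all(7.0 < r < 13.0 for r in results)
```

Running the same steps many times and checking the aggregate is one practical answer to "there is no single repeatable answer".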
Marcin Placzek
@TheTestLynx I like this approach. Of course, it's usable only for existing solutions. However, it may reveal a lot of usability issues or even incorrect modelling of the chat bot #testchat
Noemi Ferrera
@PlaczekMarcin are we testing the AI application's output or that the AI application is AI?
Ministry of Testing
@magda0s Which methods could be employed? Or can you think of any off the top of your head?
Niranjani
Yeah and also let the AI engine take all other parameters into consideration like environment, timezone, locale, etc.
FirstName Surname
how would we minimise the probe effect - would we "wipe" what the AI learns for each test?
Robert Ryan
Should stress testing an AI system result in it giving incorrect answers (as humans might under stress)? I presume we do not want these human traits to be implemented.
magdao
#TestChat My teaching experience lies way back and is specifically related to language teaching but there are measurements or rather classifications for understanding texts, for example.
Noemi Ferrera
Some existing methods would still be valid (for testing the output of the app) But what about using AI to test an AI application (output)? Because, it would actually be possible :)
magdao
#TestChat So there would be ways to watch how the AI learns to react to natural inputs. Or text formulation, something along the lines of what Robert Ryan mentioned as cautious vs. risk-happy AI: can it express the answer in various ways depending on user "clues"?
Marcin Placzek
@TheTestLynx I think it's a trap and involves huge 'unknown' area.
Ministry of Testing
@magda0s Ah that would be an interesting area to investigate, the overlap and use of certain standards etc. #TestChat
Noemi Ferrera
One thing about AI is that most of the systems use feedback for learning, so if we stress it with wrong data we might actually break it
Aya Akl
Exploratory testing, and monkey tests that are smart enough to perform random actions with different data and assert on the results, so that we know which sequences of actions would break things.
Ministry of Testing
@TheTestLynx Ah this was an interesting point from the masterclass. Depending on the app - some negative testing is needed, but should it be limited to prevent breaking? Or is it needed to make sure that the app doesn't fall over when it discovers a negative case? #TestChat
Noemi Ferrera
I think we would need actual examples to understand this, AI is a really big field. I'll prepare something on the matter.
Ministry of Testing
OK folks, last question. What skills do you believe are necessary to get a job as an AI tester? Is it a case of building on existing knowledge or exploring new emerging technology? Or both? #TestChat
magdao
I guess for some areas the skills will be hard computer science - on the training side (after all, the training inputs need to be validated/organized as well). #TestChat
magdao
On the other hand I see a lot of room for crossover from psychology or humanities. Even if I don't really expect a strong AI, ever, if we want the little ones to emulate/enhance the human mind, the same ways of working could very well apply. #TestChat
FirstName Surname
I'm more concerned that AI will replace the job of tester :( #TestChat
Ministry of Testing
@magda0s That's true. I'm guessing it's hard to know about it as it is such a new technology. It might be a case of finding projects and learning as you go at the minute #TestChat
Noemi Ferrera
I think it could be analytics, could be manual testing, could be AI knowledge, could be automation knowledge...same as with any other app, since it's basically a different way of implementing applications
Ministry of Testing
@WotUpBiches Never - sure, we've heard that time and again. There will always be need of a tester - especially if there's need of a dev! #TestChat
Aya Akl
@WotUpBiches I don't think so; automation didn't. It will be a different type of testing with lots of cases and new studies.
Ministry of Testing
@magda0s That would be an interesting area to explore #TestChat
magdao
@WotUpBiches I had that worry when I worked as a translator and automated translation was a scare. Then I saw it in practice, and as long as you don't make the entire chain perfectly clean and consistent, robots won't manage to take over ;) #TestChat
Marcin Placzek
I think it will be a new area of knowledge which will draw on communication and cultural studies.
Ministry of Testing
@TheTestLynx I think the ability to perform data analysis will definitely become more key moving forward. #TestChat
Aya Akl
It will for sure be both: building on existing skills, broadening them to suit AI, and learning different methodologies.
Alex is in your softwares testing all your things
Some amount of Education as a discipline as well. How to formally assess knowledge, how knowledge is generated, etc, etc.
Robert Ryan
Perhaps using Specification by Example (aka BDD) is appropriate, since we would be writing tests in the domain language of the users. Perhaps there will be a need for training - and maybe it will be the tester's role to confirm to the system what is right and what is wrong.
Marcin Placzek
@WotUpBiches I can agree that it might be the end for testers, but I believe QA will still be needed
Robert Ryan
If something is hard to define - then this is where the human will remain. If you can define it easily, then it can be automated.
Ministry of Testing
@PlaczekMarcin Is that not part of what testers help with? Along with the rest of the dev team too. #TestChat
Ministry of Testing
Very specific answer there - only Python?