# Non-exhaustive testing gotchas #3408

**Closed** · evilhonda3030 started this conversation in **Ideas** · 1 comment
---

**Reply:** I'll close this discussion and open three topic-focused ones. I'll also reformulate my proposals for more productive discussion.
---
My project is getting bigger and exhaustive tests slow us down. The symptoms are, all in all, the things Krzysztof Zabłocki uncovered once upon a time. To mitigate this I started experimenting with `exhaustivity = .off`, and it turned out to be less intuitive than I expected. Here is the list of the things I'm concerned about:
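For context, switching a `TestStore` to non-exhaustive mode looks like this (the `Feature` reducer and the action name are hypothetical stand-ins):

```swift
import ComposableArchitecture
import XCTest

@MainActor
final class FeatureTests: XCTestCase {
  func testNonExhaustive() async {
    // Hypothetical Feature reducer; only the setup matters here.
    let store = TestStore(initialState: Feature.State()) {
      Feature()
    }

    // Opt out of exhaustive assertions: unasserted state changes and
    // received actions no longer fail the test.
    store.exhaustivity = .off
    // Use .off(showSkippedAssertions: true) to still log what was skipped.

    await store.send(.refreshButtonTapped)
  }
}
```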
## 1. Little information on how to wait for / control the effects a sent action causes

**Use case**

I want to send an action, wait for its effects to finish, and assert on the results. In other words, I don't want to care about implementation details.
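A sketch of the test shape I'm after (the state and action names are hypothetical; whether asserting after `finish()` is even supported is exactly what's unclear):

```swift
import ComposableArchitecture

@MainActor
func exampleTest(store: TestStoreOf<Feature>) async {
  store.exhaustivity = .off

  // Send the action that kicks off the effects...
  await store.send(.refreshButtonTapped)

  // ...wait for every in-flight effect to complete...
  await store.finish()

  // ...and assert only on the final state, not the intermediate steps.
  store.assert {
    $0.isLoading = false
    $0.items = [.mock]
  }
}
```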
**Issues**

- The `assert` docs say we can do the following, while the `finish()` docs say that interacting with the `TestStore` after `finish()` causes undefined behaviour. This is confusing, since I want to interact with the `TestStore` some more, and no documentation says how I can proceed.
- `TestStoreTask.finish()` does not apply state changes. Is it a limitation of having the `receive` API? `TestStore.finish()` does apply them, but I don't know if relying on that is actually correct, since `finish()` declares that further interactions are impossible.
- `skipInFlightEffects()` does not support a timeout; I assume `megaYield` might not be sufficient.

**Summary**

- Should we use `TestStoreTask.finish`? Or add a helper method to the library?
- `.finish()` is not required when there are few effects, thanks to `megaYield`. But I have to admit that relying on implementation details is dangerous: I was able to write a (rather insane) test that breaks `megaYield`. It would be great to communicate this through the docs.
- Should `skipInFlightEffects()` support a `timeout`?
- Does this have undefined behaviour?
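For concreteness, the `TestStoreTask` route discussed above looks like this (the action name is hypothetical):

```swift
import ComposableArchitecture

@MainActor
func taskFinishExample(store: TestStoreOf<Feature>) async {
  // `send` returns a TestStoreTask handle for the effect(s) it started.
  let task = await store.send(.refreshButtonTapped)

  // Waits for that effect to complete. Per the issue above, this does not
  // apply the effect's state changes the way `store.finish()` does.
  await task.finish()

  // Alternative: skip whatever is still in flight without waiting for it.
  // await store.skipInFlightEffects()
}
```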
## 2. Asserting that an action is NOT received

**Use case**

Filter out unnecessary events of an `AsyncStream` in `.run` to avoid performance pitfalls.

**Issues**

**Summary**
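For reference, this is the kind of filtering meant above, with a hypothetical feature, event type, and dependency:

```swift
import ComposableArchitecture

@Reducer
struct Ticker {
  @ObservableState
  struct State: Equatable { var count = 0 }

  enum Action { case task, relevantEventReceived }

  // Hypothetical dependency that vends a stream of events.
  @Dependency(\.eventStream) var eventStream

  var body: some ReducerOf<Self> {
    Reduce { state, action in
      switch action {
      case .task:
        return .run { send in
          // Filter inside the effect: irrelevant events never become
          // actions, so tests never have to `receive` (or skip) them.
          for await event in self.eventStream() where event.isRelevant {
            await send(.relevantEventReceived)
          }
        }
      case .relevantEventReceived:
        state.count += 1
        return .none
      }
    }
  }
}
```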
## 3. Fragile tests due to unimplemented dependencies

**Use case**

I don't want my test to break because `Feature` starts using a new dependency; it should only fail when an expectation fails.

**Summary**

- Should we not use `testValue` at all, and instead make `testValue` equal to `previewValue` when defining a dependency? The downside is poor support for exhaustive testing when it is needed.
- Should something like `store.dependencies = .preview` be supported at the beginning of a test?

I'm attaching a simple project which implements some tests that fail:
NonExhaustiveTestingGotchas.zip
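The first proposal above would amount to something like this in a `DependencyKey` conformance (the `AnalyticsClient` is a hypothetical example):

```swift
import Dependencies

// Hypothetical client used to illustrate the proposal: point testValue at
// previewValue so a newly introduced dependency doesn't fail existing tests.
struct AnalyticsClient {
  var track: @Sendable (String) async -> Void
}

extension AnalyticsClient: DependencyKey {
  static let liveValue = AnalyticsClient { event in
    // A real implementation would report `event` to a backend.
  }
  static let previewValue = AnalyticsClient { _ in }  // harmless no-op

  // Proposal: reuse previewValue instead of an unimplemented stub, so tests
  // that never assert on analytics keep passing. The trade-off: exhaustive
  // tests lose the automatic "unimplemented dependency" failure.
  static let testValue = previewValue
}
```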