Thank you to all the volunteers, speakers, and sponsors who came together to make South Florida .NET Code Camp 2017 happen. Thank you for providing Code for Fort Lauderdale with a community table to tell people how we’re trying to improve our city and county. I enjoyed meeting and talking with all the attendees, and I learned a lot from those conversations. Below are some notes from one of the sessions I was able to attend.

Your Application: Understanding What Is and What Should Never Be
by David V. Corbin

Here is the PowerPoint for this talk. When testing your application, it’s important to have a narrow focus. Take the simple example of calculating the slope of a line.

y = mx + b

m = Δy / Δx = rise / run

What happens when the line is vertical? The run is 0. How does the program handle division by zero?
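The vertical-line case above can be made explicit in code. A minimal Python sketch (the talk was .NET-oriented; the function name `slope` is my own illustration):

```python
from typing import Optional

def slope(x1: float, y1: float, x2: float, y2: float) -> Optional[float]:
    """Return the slope m = rise / run, or None for a vertical line.

    Returning None, rather than letting a divide-by-zero escape, makes
    the vertical-line case an explicit part of the function's contract.
    """
    run = x2 - x1
    if run == 0:
        return None  # vertical line: slope is undefined
    return (y2 - y1) / run

# The "narrow focus" tests: one happy path, one edge case.
assert slope(0, 0, 2, 4) == 2.0
assert slope(1, 5, 1, 9) is None  # vertical line handled, no crash
```

The point is not this particular return convention but that the edge case is a deliberate, tested decision instead of an accidental crash.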

The testing taxonomy contains top-level “kingdoms”:

Transitory Testing

  • Thinking about the problem
  • Ad hoc execution
  • Local helper programs
  • No long term record
  • How can you possibly know what a tester did in their head?

Durable Testing

  • Why do we skimp on durable testing? The perceived cost is high. We’re not being effectively lazy: “maximize the amount of work not done” (from the Agile Manifesto). Once you get through the mind shift, it is easier for most things, though some things you have to pay to implement.
  • Tests exist with expected results
  • Audit trail showing the test was done
  • Manual tests
  • Unit tests
    • Unit Tests and System Tests are the endpoints of a spectrum
  • Automated tests
  • System Testing
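What makes a test “durable” in the sense above is that the input, the expected result, and the check all live in source control, so every run leaves an audit trail in the test report. A minimal Python sketch (the function and its values are hypothetical, for illustration only):

```python
def add_sales_tax(amount: float, rate: float) -> float:
    """Hypothetical business rule used only to illustrate a durable test."""
    return round(amount * (1 + rate), 2)

def test_add_sales_tax():
    # Expected results are recorded with the test, not in someone's head.
    assert add_sales_tax(100.00, 0.07) == 107.00
    assert add_sales_tax(19.99, 0.07) == 21.39

test_add_sales_tax()
```

Contrast this with transitory testing: once the test exists as an artifact, anyone can rerun it and see the same expected results.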

UI/Browser -> Logic -> Code -> Logic -> DAL -> Stored Procedures (T-SQL) -> Data

Component Tests

  • API Tests
  • Integration Tests
  • Sub-system Tests

Familiar laments: “We never tried that set of inputs.” “We never did those two things at the same time.” “It worked in the last version!” Get rid of regression errors permanently: “I hate the same pain twice.”

It’s important to understand the current state of the application and the constraints on its future state. For example: this action should not take longer than a given time period. Record such constraints in an artifact that can be tested automatically. Testing should be a game in the mathematical sense: when there is a set of decisions with a desired outcome, game theory applies.
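A timing constraint can itself become an executable artifact. A Python sketch, with a hypothetical operation and an illustrative one-second budget:

```python
import time

def lookup(key, table):
    """Hypothetical operation whose performance we want to constrain."""
    return table.get(key)

def test_lookup_completes_within_budget():
    # The constraint ("this action should not take longer than X") is
    # written down as a runnable test instead of tribal knowledge.
    table = {i: i * i for i in range(100_000)}
    start = time.perf_counter()
    for i in range(10_000):
        lookup(i, table)
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0, f"took {elapsed:.3f}s, budget is 1.0s"

test_lookup_completes_within_budget()
```

Real performance tests need more care (warm-up, repeated runs, a controlled environment), but even a coarse budget check catches gross regressions automatically.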

Where do we get value in our organization and in our situation?

How are we measuring our testing?

  • Code coverage
    • Low numbers indicate large amounts of untested code
    • High numbers are often meaningless
  • Cyclomatic complexity
    • The absolute minimum number of test paths you need to run
    • Does not detect data-driven scenarios
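A tiny Python sketch of why high coverage and low complexity can both mislead: the function below has a single path (cyclomatic complexity 1), and one test exercises every line, yet a data-driven defect remains.

```python
def reciprocal_scaled(x, scale=10):
    """One straight-line path: cyclomatic complexity of 1."""
    return scale / x

# This single call covers 100% of the lines...
assert reciprocal_scaled(2) == 5.0

# ...but reciprocal_scaled(0) would raise ZeroDivisionError. Coverage
# and path counts say nothing about which *data* values were exercised.
```

This is the gap the “data specific considerations” below are about: the structure was fully tested, the data was not.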

Data-Specific Considerations

  • Reveals many errors in logic/calculation
  • Can be hard to identify

Time-Specific Considerations

  • Discovers problems that are often not found pre-production
  • Virtually impossible without special design considerations

I/O Rewriting

  • Multi-threaded and async operations
    • Often the most difficult to understand, categorize and test
    • Careful design is your best defense
    • Use the latest async/await features
  • How do you test whether a collection was modified? You can with unit tests.
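Testing that a collection was (or was not) modified amounts to snapshotting it before the call and comparing afterward. A Python sketch with hypothetical names:

```python
def top_three(scores):
    # A safe implementation sorts a copy; a buggy one might call
    # scores.sort() and silently reorder the caller's list.
    return sorted(scores, reverse=True)[:3]

def test_input_collection_not_modified():
    scores = [3, 1, 4, 1, 5]
    snapshot = list(scores)      # capture state before the call
    top_three(scores)
    assert scores == snapshot    # no side effect on the caller's data

test_input_collection_not_modified()
```

The same snapshot-and-compare idea extends to concurrent scenarios, though those usually also need repeated runs under load to flush out timing-dependent modification.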

Negative Testing

  • The art of testing what is not there
  • Common problems
    • Unexpected field changes
    • Unexpected events
    • Unexpected allocations
    • Unexpected reference retention
  • Nobody achieves perfection.
    • Forget about always and never.
    • Exploratory testing is your best defense for catching the gaps.
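One concrete shape for “testing what is not there”: snapshot an object’s fields, perform the operation, and assert that only the expected field changed. A Python sketch (the `Order` class and `deposit` function are hypothetical):

```python
class Order:
    def __init__(self):
        self.balance = 0.0
        self.status = "open"
        self.customer_id = 42

def deposit(order, amount):
    order.balance += amount

order = Order()
before = dict(vars(order))      # snapshot every field
deposit(order, 25.0)
after = dict(vars(order))

# Negative check: the ONLY field that changed is the one we expected.
changed = {k for k in before if before[k] != after[k]}
assert changed == {"balance"}   # no unexpected field changes
```

The positive test (`balance` went up) is easy to remember; the negative test (nothing else was touched) is the one that catches the surprises.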

Multiple Views with Long Term Focus

  • Deep understanding encompasses a range:
    • A wide view
    • A deep view
  • It is impossible to get to the point of Understanding Everything
  • One will never be Done
  • It is a continuing quest for understanding

What is Software Quality?

  • Grady Booch (UML)
  • Look at what is not quality
    • Surprise
    • If things happen according to expectation, then you have your desired level of software quality
    • Understanding reduces surprises
    • There will always be bugs/defects/surprises
    • An increase in known issues is a good thing: issues are being surfaced rather than staying hidden
  • One cannot test everything!
    • Don’t attempt to.
    • Create a simple prioritization matrix.
    • Identify a small target for your next sprint.
    • Strive for continual improvement.
    • Add a robust definition of done.
    • Experiment and try to make each time a little bit better.
