Test Coverage Report



Results

Testing started at 3:48 PM ...
10/29 15:48:47: Launching 'EnergyUsageDatabaseT...' on Pixel 2 API 30.
Running tests

$ adb shell am instrument -w -r -e debug false -e class 'eco.emergi.embit.EnergyUsageDatabaseTest' eco.emergi.embit.test/androidx.test.runner.AndroidJUnitRunner
Connected to process 16611 on device 'emulator-5554'.

Started running tests

Tests ran to completion.

EnergyUsageDao.kt has 60% line coverage. The shortfall comes from functions that are not used anywhere in the app and for which we therefore did not write tests.

Interpretation

Which part of the code is this report for?
This test suite is for the part of our app which locally stores and retrieves streaming energy consumption data. Specifically, it tests our data persistence layer, for which we used the Room Android library.

Do the results represent a healthy test suite, an unhealthy test suite, or somewhere in between? Why?
Our results reflect a healthy test suite, though it will certainly need to expand as we add new features. Everything passes at this time, but several features remain to be built that will require additional tests. I expect the logic for scheduling the recording of battery information to be particularly difficult to test, which also makes it hard to write those tests in advance; in that sense, our test suite could be considered somewhere in between.

Are there parts of the code that don’t seem worth testing? If so, why not?
There are quite a few sections of the database code that are not worth testing. The database table file, EnergyUsage.kt, is implicitly tested whenever we exercise EnergyUsageDao, and the same goes for EnergyUsageDatabase.kt. Both are very standard Room implementations, so the only place testing is required is the DAO itself in EnergyUsageDao.kt. We also decided not to test functionality that is not used anywhere in the code. These are standard functions, and since we are not planning to use them, there was no reason to spend our limited time on them. Moreover, most of the functions in the EnergyUsageDao.kt interface consist only of a name and a standard Room annotation.
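To illustrate why the DAO surface is the main thing worth testing, here is a minimal in-memory sketch (hypothetical names and signatures, not our actual Room code) of the kind of contract EnergyUsageDao exposes. The real interface is just annotated method signatures whose bodies Room generates, so only the queries themselves carry logic worth exercising; a fake like the one below also lets callers be unit-tested without an emulator.

```kotlin
// Hypothetical, simplified stand-ins for the Room entity and DAO contract.
data class EnergyUsage(val timestampMillis: Long, val microWattHours: Long)

interface EnergyUsageDao {
    fun insert(usage: EnergyUsage)
    fun getAll(): List<EnergyUsage>
    fun getSince(timestampMillis: Long): List<EnergyUsage>
}

// In-memory fake: lets code that depends on the DAO be tested on the JVM,
// while the real Room-generated implementation is covered by the
// instrumented EnergyUsageDatabaseTest on the emulator.
class FakeEnergyUsageDao : EnergyUsageDao {
    private val rows = mutableListOf<EnergyUsage>()
    override fun insert(usage: EnergyUsage) { rows.add(usage) }
    override fun getAll(): List<EnergyUsage> = rows.toList()
    override fun getSince(timestampMillis: Long): List<EnergyUsage> =
        rows.filter { it.timestampMillis >= timestampMillis }
}
```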

What are the top testing priorities in the current code? How important are they?
A high-priority untested section of code is our energy-consumption monitoring logic. We have code that listens for events from the Android energy-monitoring class BatteryManager. This code is critical to the operation of our app, since it (along with the OS clock) is the source of all data we store and post to the client's endpoint(s).
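The part of that logic most amenable to testing is the pure computation between battery readings. The sketch below is a hypothetical helper, not our actual listener code: it assumes charge readings in µAh (the unit BatteryManager.BATTERY_PROPERTY_CHARGE_COUNTER reports) and an assumed nominal cell voltage, and all names are illustrative.

```kotlin
// Hypothetical sample type: a charge-counter reading at a point in time.
data class BatterySample(val timestampMillis: Long, val chargeMicroAmpHours: Long)

// Energy consumed between two samples, in microwatt-hours.
// nominalVolts is an assumption (typical Li-ion nominal voltage);
// a negative charge delta means the device was charging, so we report 0.
fun consumedMicroWattHours(
    previous: BatterySample,
    current: BatterySample,
    nominalVolts: Double = 3.85
): Long {
    val deltaMicroAmpHours = previous.chargeMicroAmpHours - current.chargeMicroAmpHours
    return if (deltaMicroAmpHours <= 0) 0L
    else (deltaMicroAmpHours * nominalVolts).toLong()
}
```

Factoring the arithmetic out like this would let us cover the critical path in plain JVM unit tests, leaving only the thin BatteryManager wiring for instrumented tests.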

Are there any parts of the code that seem important to test, but you’re not sure how to test them?
Going forward, it will be important to test our application's interface with the client's REST endpoint(s). This will be non-trivial to test since the endpoint(s) haven't yet been built, and we may need to mock them for testing purposes. This then introduces another collection of code to test: the mock endpoint(s). The need to create and test the mock endpoint(s) extends beyond the temporary blocker; by keeping the tests independent of the real endpoints, the tests will prove helpful to diagnose issues as they arise since they will be specific to our code (unlike broader integration tests).