[Android]Jetpack for test-related environments

I’ve been betting on Truth since I wrote “[Android]Checking Android Testing Support Library 1.0”. I encountered an issue using AssertJ with ATSL 1.x, and I talked about it in https://www.slideshare.net/KazuMatsu/20171215-andoirdtestnight before.

In the talk https://www.youtube.com/watch?v=wYMIadv9iF8 , Google introduced Jetpack. The pack has various libraries, including test-related ones such as Espresso.

What surprised me was the Truth Android extension. They announced that the extension will be bundled in the pack, which means they will bundle Truth as an assertion library for Android. For me, this was very good news, since it shows my bet was a success.
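
As a quick illustration (my own sketch, not code from the announcement; the class and values are placeholders), assertions with Truth read like this in a Kotlin test:

import com.google.common.truth.Truth.assertThat
import org.junit.Test

class TruthSketchTest {
  @Test fun noteTitles_containExpectedEntry() {
    // Core Truth assertions; the Android extension is expected to add
    // Android-specific subjects on top of these.
    val titles = listOf("note A", "note B")
    assertThat(titles).hasSize(2)
    assertThat(titles).contains("note A")
  }
}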

Robolectric 4.0, Nitrogen and Android Test Orchestrator were also interesting to me.

We can see the Robolectric announcement at http://robolectric.org/blog/2018/05/09/robolectric-4-0-alpha/ . According to the release note, we can write test code in the style of instrumented tests but run it via Robolectric. In the video, we can hear about running tests without an emulator, and I predicted that it meant this feature. In recent years, Googlers have committed to Robolectric heavily, so I also thought it would be integrated into one of the testing libraries.

@RunWith(AndroidJUnit4::class)
class OnDeviceTest {
  @get:Rule val rule = ActivityTestRule(NoteListActivity::class.java)

  @Test fun clickingOnTitle_shouldLaunchEditAction() {
    onView(withId(R.id.button)).perform(click())
    intended(hasAction(equalTo("android.intent.action.EDIT")))
  }
}

I’ve noted the things from Google I/O that were interesting to me, especially around testing.


Read “A Practical Guide to Testing in DevOps”

A couple of weeks ago, I read “A Practical Guide to Testing in DevOps”.

The book explains and describes DevOps x testing with many references and keywords.

you can see why people struggle to understand where testing fits in a model that doesn’t mention it at all. For me, testing fits at each and every single point in this model.
> by Dan Ashby, “Continuous Testing in DevOps…”

In the book, we see many discussions that go beyond team boundaries. There are explanations of the separate test teams and activities that exist in many traditional-style organisations, but recent Agile and DevOps culture breaks down those separations, which feels like a natural thing.

We can also see some concrete steps for pair work and test activities, for example what to do in the first 10 minutes, the next 20 minutes, and so on. I believe we can quickly imagine how to apply them to our own activities.

The comparison of testing too deeply versus too shallowly also helps us.

The book is useful for understanding recent development and testing culture, and almost all of the stories fit my current environment well. Some of the words were new to me, but we have already been working on such things, so the words and definitions were easy to pick up.

I’d love to recommend this book to anyone who would like to catch up with recent development styles, including testing.

Read “Antifragile Systems and Teams”

I read “Antifragile Systems and Teams”. Overall, I’d say this is a DevOps book. It is also short, so I’ll just leave a few lines about it here.

Anyway, have you heard of “antifragile”? I hadn’t known the word before.

Let me quote Taleb’s words:

“Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better”
> https://en.wikipedia.org/wiki/Antifragile

As the word suggests, antifragile is anti + fragile, but it doesn’t simply mean “not fragile”; it is something beyond robust and resilient. This book tries to show how to build such systems and teams, especially in the software development industry.

Lately, I believe the DevOps methodology has become common, and we can picture agile, flexible, robust and autonomous teams and systems. This book also tries to show use cases and examples for that.

What surprised me was how often QA (Quality Assurance) engineers appeared. I expected this kind of book to tell an idealised story, but this one was closer to a real DevOps story. I have also read DevOps and testing books similar to this one.

The book is quite short, only about 20 pages as a PDF, but they summarise things well enough. So if you have an opportunity to read it, please do.

Happy DevOps! 🙂

[iOS]xccov and JSON format

Up to Xcode 9.2, we could get formatted coverage data using, for example, https://github.com/SlatherOrg/slather. The library formats llvm-cov output.

But as of Xcode 9.3, xccov is officially introduced. The command can be run via xcrun.

xccov is a new command-line utility for inspecting the contents of Xcode coverage reports. It can be used to view coverage data in both human-readable and machine parseable format. To learn more, enter man xccov in Terminal. (37172926)

I tried the feature with https://github.com/KazuCocoa/test.examples and wrote a simple Ruby gem script to get the target names and their line coverage.

Run

$ git clone https://github.com/KazuCocoa/test.examples && cd test.examples
$ xcodebuild -workspace test.examples.xcworkspace -scheme test.examples -derivedDataPath Build/ -destination 'platform=iOS Simulator,OS=11.3,name=iPhone 7' -enableCodeCoverage YES clean build test CODE_SIGN_IDENTITY="" CODE_SIGNING_REQUIRED=NO
$ xcrun xccov view --only-targets --json Build/Logs/Test/*.xccovreport > result.json

Get JSON

We can get the JSON output in a couple of forms.

# Output: https://gist.github.com/KazuCocoa/40eaa3ac9de5e52c1a3795c49657dd4b
$ xcrun xccov view --json Build/Logs/Test/*.xccovreport | jq .

# Output: https://gist.github.com/KazuCocoa/879554e02934d368f976959d3f8cec4b
$ xcrun xccov view --only-targets --json Build/Logs/Test/*.xccovreport | jq .

Parse the JSON with Ruby

parsed = Xccov::Parse.new(file: './result.json')
parsed.targets_line_coverage["test.examples.app"] #=> 0.35

Conclusion

As of Xcode 9.3, we can get formatted coverage data using the xcrun command. I’ve seen some scripts that use Slather and Nokogiri to handle the XML output and aggregate it, but such scripts can now be much simpler. Does this make you happy?

Tasting mabl, which provides an ML-driven test automation service

Lately, ML-related technologies have been stepping into various industry sectors, and many engineers are trying to integrate them into their services.

The movement has also reached the test/quality industry, and I have some ideas for using these technologies myself.

A few weeks ago, I found a service named mabl, which provides an ML-driven test automation service. The service runs tests using ML technologies; it does not test ML technologies themselves. I tried the service recently, and I’ve put some results and thoughts here.

URL: https://www.mabl.com

The target URL is simple, https://www.google.co.jp

I didn’t do anything except enter the URL into the designated field. Then the journey started and I just waited for a while.

The service appears to crawl the target site and check all the links. The following are the results; I can see some error messages that may indicate broken links.

[Screenshots: crawl results and detected errors]

In the test section, we can see some test cases and their results. The test cases are generated automatically. According to the titles and some logs, the tests aim to collect the available transitions, meaning the available links and some data.

[Screenshot: test section with generated test cases]

We can also see a train section. I guess this section plays the role of the teacher in ML terms: as we run tests iteratively, the ML logic learns the test target and will be able to conduct automated tests efficiently.

[Screenshots: train section]

I believe that if we run tests iteratively and collect test data, cases and results, the ML will become more intelligent and will be able to run tests automatically and effectively.

Anyway, it looks promising, and I’d love to try my own ideas using ML technologies.

[Elixir]property-based testing with stream_data

Elixir will bundle property-based testing in the core.
http://elixir-lang.github.io/blog/2017/10/31/stream-data-property-based-testing-and-data-generation-for-elixir/

We can see the prototype of the library at https://github.com/whatyouhide/stream_data .

The following lines are an example where I applied the library before.

  use ExUnitProperties

  property "get body" do
    check all body <- StreamData.binary(),
              max_runs: 50 do
      value = %{"request" => %{"method" => "GET",
            "path" => "/request/path", "port" => 8080},
            "response" => %{"body" => body, "cookies" => %{},
            "headers" => %{"Content-Type" => "text/html; charset=UTF-8",
              "Server" => "GFE/2.0"}, "status_code" => 200}}
      assert Body.get_body(value) == body
    end
  end

https://github.com/KazuCocoa/http_proxy/pull/47/commits/11bf6ff5a763c97c3ec6c7bc4a1811de53a6d0c2

We can see more details at https://hexdocs.pm/stream_data/StreamData.html , where you can see how flexible the generator functions are.

[Flutter]some thoughts on Flutter

I’ve created a repository that contains a Flutter app.
https://github.com/KazuCocoa/noteapp

The app is implemented in Dart, but I have no solid experience with the language.
The framework provides a very helpful IDE experience. On the testing side, it provides test frameworks such as presenter-level tests and a unit-level testing set.

Like React Native, the generated app has no resource IDs, so it’s difficult to mature device-level test automation.


Anyway, I like Flutter more than React Native because of the IDE and related guidance, but it can be difficult to adopt the framework in teams that lack people to invest in it.

[ML]Backtesting and Cross-validation

This article is a memo to myself from reading https://eng.uber.com/omphalos/ .

The linked article is about a backtesting tool used at Uber to validate ML-related models.
I’m not sure about some of the words and I’d like to memorise them, so I’ve published this article.

Backtesting is a term used in oceanography, meteorology and the financial industry to refer to testing a predictive model using existing historic data. Backtesting is a kind of retrodiction, and a special type of cross-validation applied to time series data.

I hadn’t known the word even though I’m in the test/quality world…
A similar concept is applied in our Kage: cross-validation, which is

a model validation technique for assessing how the results of a statistical analysis will generalize to an independent data set.
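
To make the idea concrete for myself, here is a toy sketch (my own, not from the Uber article) of rolling-origin splits, the kind of cross-validation that backtesting applies to time series: train on an expanding prefix of the history, then test on the next window.

// Toy rolling-origin splits: train on the history up to a point, test on the next horizon.
fun <T> rollingOriginSplits(series: List<T>, minTrain: Int, horizon: Int): List<Pair<List<T>, List<T>>> {
  val splits = mutableListOf<Pair<List<T>, List<T>>>()
  var end = minTrain
  while (end + horizon <= series.size) {
    splits += series.subList(0, end) to series.subList(end, end + horizon)
    end += horizon
  }
  return splits
}

fun main() {
  val observations = (1..10).toList()
  for ((train, test) in rollingOriginSplits(observations, minTrain = 4, horizon = 2)) {
    // Each split respects time order: the test window always comes after the training data.
    println("train=$train test=$test")
  }
}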

Happy testing!

Tasting ReportPortal again

Today, ReportPortal pre-4.0 was released.

I’ve posted about trying ReportPortal before, and I thought the tool looked helpful for us. I know of Allure2, which is close to this tool, but ReportPortal is a more pluggable reporting tool.

The tool can analyse test results using ML technology. We can see a video of the feature at https://www.youtube.com/watch?v=d2ekWI2exZ4 and it looks interesting. I guess the feature will be a handy way to try analysing test results.

We can run the tool easily with docker-compose, and we can also add test data via the demo data section. Read http://reportportal.io/docs/Project-configuration . That way, we can see some dashboards to confirm the behaviour.

We can also handle data via the API, and the portal provides LDAP authentication, though it’s still in beta.

Reporting and managing bugs together with their scenarios helps developers, and the tool also supports automated test scenarios from several testing frameworks.

[Android]Checking Android Testing Support Library 1.0

Update: Aug 9, 2017

I encountered a “no tests found” error when running tests with AssertJ and AndroidJUnitRunner 1.0.0:
https://stackoverflow.com/questions/45402645/instrumented-tests-failure-with-androidjunitrunner-1-0-0-and-assertj


A few days ago, Android Testing Support Library 1.0 was released.
I picked up some stuff that is awesome for me, and I think this release will also help enhance test automation for other third-party libraries.

IdlingResources

They help synchronise against the following (a small usage sketch comes after the list):

  • Executors
    • com.android.support.test.espresso.idling:idling-concurrent:3.0.0
  • network requests and responses
    • com.android.support.test.espresso.idling:idling-net:3.0.0
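
As a rough sketch of how these fit together (my own example with placeholder names; package names assume the 3.0 android.support.test artifacts), a CountingIdlingResource can be registered so Espresso waits while background work is in flight:

import android.support.test.espresso.IdlingRegistry
import android.support.test.espresso.idling.CountingIdlingResource
import org.junit.After
import org.junit.Before
import org.junit.Test

class IdlingSketchTest {
  // Hypothetical resource the app would increment before background work
  // starts and decrement when the work finishes.
  private val busy = CountingIdlingResource("background-work")

  @Before fun registerIdlingResource() {
    IdlingRegistry.getInstance().register(busy)
  }

  @After fun unregisterIdlingResource() {
    IdlingRegistry.getInstance().unregister(busy)
  }

  @Test fun espressoWaitsWhileBusy() {
    // Espresso interactions placed here wait until `busy` reports idle.
  }
}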

New view matchers/actions/methods

Parameterised testing
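
A minimal sketch of a parameterised JUnit4 test that the runner can execute (the class, data and names are made up for illustration):

import org.junit.Assert.assertEquals
import org.junit.Test
import org.junit.runner.RunWith
import org.junit.runners.Parameterized

@RunWith(Parameterized::class)
class SquareTest(private val input: Int, private val expected: Int) {
  companion object {
    @JvmStatic
    @Parameterized.Parameters(name = "{0}^2 = {1}")
    fun data() = listOf(
        arrayOf(1, 1),
        arrayOf(2, 4),
        arrayOf(3, 9)
    )
  }

  @Test fun squaresCorrectly() {
    assertEquals(expected, input * input)
  }
}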

GrantPermissionRule
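
A sketch of granting a runtime permission up front so a test never hits the permission dialog (my own example; the package name assumes the android.support.test rules artifact):

import android.Manifest
import android.support.test.rule.GrantPermissionRule
import org.junit.Rule
import org.junit.Test

class LocationFeatureTest {
  // The rule grants the permission before each test runs.
  @get:Rule
  val permissionRule: GrantPermissionRule =
      GrantPermissionRule.grant(Manifest.permission.ACCESS_FINE_LOCATION)

  @Test fun featureWorksWithPermissionAlreadyGranted() {
    // Exercise the permission-protected feature here without a dialog appearing.
  }
}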

Understand how to write and think about tests for Android

Android Test Orchestrator

Runner-related command options

  • -e classLoader – Provide the ability to pass class loaders using runner args
  • -e filter – Add support for custom JUnit filters to be specified using runner args
  • -e runnerBuilder – Allows developers to provide their own implementations of RunnerBuilder that can determine whether and how they can run against a specific class