Read “ジョブ理論”. The original title fits the content better.

I read ジョブ理論 (the Japanese edition of Competing Against Luck). I'm fairly sure I bought it sometime last year, and I rediscovered it while tidying up my Kindle library…

As for the content, the book introduces the term “job” to deal with how hard it is to discover and pin down problems in the first place, and demonstrates this through a variety of real-world examples. It reminded me of something a professor I studied under in university repeated until I was sick of hearing it: the hardest part is discovering and defining the problem. This book offers one technique for doing exactly that.

Since it calls itself a “theory”, I looked at how well established it actually is: much of the support is empirical case studies, and it seems too early to call it “jobs theory” in a strong sense (the author acknowledges this as well). The original title is Competing Against Luck: The Story of Innovation and Customer Choice, and personally that one fits both the content and my impression after reading much more naturally.

The flow is: align your viewpoint with the end user or customer, who “hires” something in order to make progress toward something; grasp things through that lens, then work it down into an implementation.

Reading this reminded me of my university days, when I entered business-development competitions and dabbled in service development. What interested me then, too, was the problem-definition skill you cannot escape once you set out to build something as a service.

That said, words like “quality” and “process” appear everywhere, and the discussion is peppered with terms you often hear in the testing/quality community, which gave the book a contemporary feel.

A few quotes.

Processes combine both formally defined, documented procedures and informal, habitual behaviors that have evolved over the years.

Knowledge about production is not the same as knowing what customers are seeking.

The book also lists, as anti-patterns, the following logical fallacies about believing things are going well:

  • The fallacy of active versus passive data
  • The fallacy of surface growth
  • The fallacy of conforming data

The outcomes expected from jobs theory, listed at the end of the book, also felt like they connect to the Orchestration vs Choreography discussion often heard in the microservices context:

  • Distributed decision-making
  • Resource optimization
  • Improved motivation
  • The ability to measure what matters

Summary

The content itself wasn't especially new, but bringing in the term “jobs theory” as a way of thinking and expressing it is impressive… I really respect people with this kind of verbalization ability…

If I had that level of verbalization ability, I could probably deliver much more in my own work, too.


[Flutter]Some thoughts on Flutter

I've created a repository containing a Flutter app.
https://github.com/KazuCocoa/noteapp

The app is implemented in Dart, a language I have no solid experience with.
The framework provides a very helpful IDE experience. On the testing side, it provides test frameworks such as presenter tests and unit-level tests.

Like RN, the generated app has no resource IDs. So, it's difficult to mature device-level test automation.


Anyway, I like Flutter better than RN because of the IDE and its related guidance, but it's difficult to use the framework without the human resources for it in some kinds of teams.

Watching FB’s Android at Scale

I watched FB’s Android at Scale: https://code.facebook.com/posts/1958159731104103/android-scale-2018-recap/

Here are my notes on the parts I found interesting.

Automated Testing Practices @ Scale: Waseem Ahmad, Facebook

I already knew most of the tips and shared the same thinking.

App Modularization and Module Lazy-Loading: Mona Huang, Instagram

  • How they split up their modularized app
  • How they divide modules to fit their structure


Model-View-Presenter @ Scale: Sam Thompson and Zach Westlake, Pinterest

  • Their engineering team grew, reaching 40+ engineers in 2016
  • Their approach (shown in the talk's slides)


Conclusion

My company and team have also started trying the same kinds of things, and there is a long way to go.
Hopefully, some of our members will talk about our activities and publish our tips to the world…

Read “Android アプリ設計パターン入門”

I read a book on Android application architecture patterns.
https://peaks.cc/books/architecture_patterns

It mostly served to organize my own thinking.

Personally, the MVP/MVVM chapters were the best, along with the material on Activity/Fragment and the rest.
The Repository layer is stable across the various architectures, but I especially wanted to grasp where Presenters, ViewModels, and the like stand these days.

I can follow simple discussions, or existing code written in MVP/MVVM and the like, but if told to write it from scratch I couldn't do it cold without pulling up a reference. This book helped me organize a lot of that in my head.
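To organize it for myself, here is roughly the minimal MVP shape in Kotlin. This is my own sketch, not code from the book, and the names (NoteContract, NotePresenter) are illustrative:

interface NoteContract {
    // The passive View only renders what the Presenter hands it.
    interface View {
        fun showNotes(notes: List<String>)
    }

    interface Presenter {
        fun attach(view: View)
        fun loadNotes()
    }
}

class NotePresenter(private val repository: () -> List<String>) : NoteContract.Presenter {
    private var view: NoteContract.View? = null

    override fun attach(view: NoteContract.View) {
        this.view = view
    }

    override fun loadNotes() {
        // Pull from the repository layer and push the result to the view,
        // keeping the presentation logic unit-testable without Android classes.
        view?.showNotes(repository())
    }
}

fun main() {
    val presenter = NotePresenter { listOf("note1", "note2") }
    presenter.attach(object : NoteContract.View {
        override fun showNotes(notes: List<String>) = println(notes)
    })
    presenter.loadNotes()
}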

For iOS, objc.io also released its books recently, so mobile app architecture seems to have gained a bit of a foundation.
https://www.objc.io/books/

[Android][Java][JUnit]Some links for JUnit 5

I discussed JUnit 5 with my team. This post collects the links and quotes we mainly discussed then.

Core principles

Parameterized tests

https://github.com/junit-team/junit5/wiki/Core-Principles#parameterized-tests
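For a concrete feel, a JUnit 5 parameterized test in Kotlin looks roughly like this (my own minimal example, not one from the wiki):

import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.ValueSource

class PalindromeTest {
    // Runs once per value in @ValueSource; unlike dynamic tests,
    // each invocation gets the full @BeforeEach/@AfterEach lifecycle.
    @ParameterizedTest
    @ValueSource(strings = ["racecar", "level", "noon"])
    fun `is a palindrome`(candidate: String) {
        assertTrue(candidate == candidate.reversed())
    }
}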

Dynamic tests

http://www.baeldung.com/junit5-dynamic-tests

The DynamicTests are executed differently than the standard @Tests and do not support lifecycle callbacks. Meaning, the @BeforeEach and the @AfterEach methods will not be called for the DynamicTests.

https://stackoverflow.com/questions/44096293/how-are-dynamic-tests-different-from-parameterized-tests-in-junit-5#comment75557358_44114477

the dynamic test run was almost x10 faster to complete.
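To make the contrast concrete, here is a minimal dynamic-test example in Kotlin (my own illustration; the names are arbitrary):

import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.DynamicTest
import org.junit.jupiter.api.DynamicTest.dynamicTest
import org.junit.jupiter.api.TestFactory

class SquareTest {
    // The tests are generated at runtime from data; as quoted above,
    // @BeforeEach/@AfterEach are NOT called around each dynamic test.
    @TestFactory
    fun squares(): List<DynamicTest> =
            listOf(1 to 1, 2 to 4, 3 to 9).map { (input, expected) ->
                dynamicTest("square of $input") {
                    assertEquals(expected, input * input)
                }
            }
}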

Migrating from JUnit 4 to JUnit 5

http://www.baeldung.com/junit-5-migration

By the way, I tried introducing some JUnit 5 features into a private project to catch up with the JUnit 5 syntax.

https://github.com/KazuCocoa/EspressoEnv/commit/3c325488de59e5c644117048559a7d3b85a9abe3

Read “Software Design X-Rays”, measure code quality

This month I read Software Design X-Rays, and I'm leaving a summary and my memos here.

The book shows how to measure one aspect of code quality, how to find hotspots, and how to fix or improve them. We can also learn some helpful git one-liners for measuring these quickly, and get an opportunity to consider what kind of code is, or will become, a hotspot.

We know certain situations tend to become hotspots, such as:

  • Files/Classes/Methods which have a large number of lines
  • Files/Classes/Methods which have complicated logic
  • Files/Classes/Methods which change frequently
  • Files/Classes/Methods which are modified by many developers
  • Complicated project structure

In this book we can see these, along with suggestions for how to improve them and real data from OSS projects. The discussion of large-scale systems was especially interesting.

Human memory is fragile and cognitive biases are real

The important thing when considering this kind of topic is to state what technical debt actually is. Technical debt depends on several factors, e.g. the business phase.

So this book also mentions such things and at times addresses the organisation itself. We must keep in mind what is suitable for our own context, and when.

Visualising the codebase

The book visualises codebases. The author develops and runs https://codescene.io/ to measure and visualise them.
We can see examples on the site for OSS projects such as .NET and Ruby on Rails; hotspots appear as large, red circles…

The book also shows the Ebbinghaus forgetting curve, by which we quickly forget information learned on day one.

Among the measurements, the indentation-based complexity metric impressed me the most. I had considered something similar myself and can agree with it as a simple measurement.
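As a rough illustration of the idea, here is a tiny Kotlin sketch that scores a file by its indentation. The scoring details (four spaces or one tab per level, and the example path) are my assumptions, not the book's exact recipe:

import java.io.File

// Score a source file by summing the indentation depth of each non-blank line,
// a cheap proxy for logical complexity: the deeper the nesting, the higher the score.
fun indentationComplexity(file: File, spacesPerLevel: Int = 4): Int =
        file.readLines()
                .filter { it.isNotBlank() }
                .sumOf { line ->
                    val indent = line.takeWhile { it == ' ' || it == '\t' }
                    indent.count { it == '\t' } + indent.count { it == ' ' } / spacesPerLevel
                }

fun main() {
    // Hypothetical path; point it at any source file in your project.
    println(indentationComplexity(File("src/main/kotlin/Example.kt")))
}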

Not only coding

One of the interesting things about this book is that it also discusses the organisation and the development process.

The data shows that a lack of process and a complicated organisational structure lead to hotspots. So the book also takes team-oriented measurements, mapping the codebase onto the organisation/team structure. In that mapping, the author considers boundaries on both the codebase side and the organisation side.

A common fifteen-minute coffee break is the cheapest team building exercise you ever get, and we shouldn’t underestimate the value of informal communication channels that help us notice new opportunities and work well together.

Detecting hotspots

  • How frequently each file changes (see the sketch after this list for one way to combine these measurements):
git log --format=format: --name-only | egrep -v '^$' | sort | uniq -c | sort -r
git log --format=format: --name-only --after=2016-01-01 -- drivers/gpu/ | sort | uniq -c | sort -r
  • Count the number of commits:
git rev-list --count HEAD
  • Get the authors:
git shortlog -s -- .
git shortlog -s --after=2016-09-19 -- . | wc -l
  • Get the age of code:
git log -1 --format="%ad" --date=short -- particular/file.path
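And here is the combination sketch mentioned above: ranking files by change frequency times current line count, one simple hotspot proxy in the spirit of the book. This is my own Kotlin code, not a script from the book, and it assumes git is on the PATH and the working directory is a repository:

import java.io.File

// Count how many commits touched each file, via the same git log invocation as above.
fun changeFrequencies(repo: File): Map<String, Int> {
    val process = ProcessBuilder("git", "log", "--format=format:", "--name-only")
            .directory(repo)
            .start()
    return process.inputStream.bufferedReader().readLines()
            .filter { it.isNotBlank() }
            .groupingBy { it }
            .eachCount()
}

fun main() {
    val repo = File(".")
    changeFrequencies(repo)
            .mapNotNull { (path, changes) ->
                val file = File(repo, path)
                // Skip deleted/renamed paths that no longer exist in the working tree.
                if (file.isFile) Triple(path, changes, file.readLines().size) else null
            }
            .sortedByDescending { (_, changes, lines) -> changes * lines }
            .take(10)
            .forEach { (path, changes, lines) ->
                println("$path changes=$changes lines=$lines score=${changes * lines}")
            }
}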

Conclusion

The book is still in beta, but it's worthwhile for test/quality-related engineers, and for managers and developers as well.

Recently we frequently see topics related to autonomous teams and team autonomy. I read “Designing Autonomous Teams and Services” before.
Recent Agile, microservices, and other movements are stepping toward autonomy. Personally, I find that direction lovely.

[ML]Backtesting and Cross-validation

This post is a memo to myself from reading https://eng.uber.com/omphalos/

The linked article describes a backtesting tool used at Uber to validate ML-related models.
I wasn't sure about some of the words and I'd like to memorise them, so I've published this post.

Backtesting is a term used in oceanography, meteorology and the financial industry to refer to testing a predictive model using existing historic data. Backtesting is a kind of retrodiction, and a special type of cross-validation applied to time series data.

I didn't know the word even though I work in the test/quality world…
A similar concept is applied in our Kage, which is

a model validation technique for assessing how the results of a statistical analysis will generalize to an independent data set.
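As I understand it, backtesting a time series boils down to sliding the train/test boundary forward in time so we never validate on the past. Here is a minimal Kotlin sketch of such rolling splits (my own illustration; it is not Uber's implementation):

// Produce rolling-origin (train, test) splits over an ordered series.
fun <T> rollingSplits(series: List<T>, trainSize: Int, testSize: Int): List<Pair<List<T>, List<T>>> {
    val splits = mutableListOf<Pair<List<T>, List<T>>>()
    var start = 0
    while (start + trainSize + testSize <= series.size) {
        val train = series.subList(start, start + trainSize)
        val test = series.subList(start + trainSize, start + trainSize + testSize)
        splits += train to test
        start += testSize // slide forward; the test window always follows the training window
    }
    return splits
}

fun main() {
    // Ten ordered observations; train on 4 points, test on the next 2.
    rollingSplits((1..10).toList(), trainSize = 4, testSize = 2)
            .forEach { (train, test) -> println("train=$train test=$test") }
}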

Have a good testing!

[Android]Custom Lint for Android x Kotlin

We can implement custom lint checks in Kotlin.
So, here is an example of such a custom lint check.

Some helpful links.

You can also find examples in GitHub repositories. Let's try implementing a custom lint check and reduce easy mistakes in your projects.

Gradle

dependencies {
    compileOnly "com.android.tools.lint:lint-api:26.0.1"
    compileOnly "com.android.tools.lint:lint-checks:26.0.1"
}

sourceCompatibility = "1.8"
targetCompatibility = "1.8"

jar {
    manifest {
        attributes("Lint-Registry-v2": "your.custom.lint.package.LintClassName")
    }
}

Detector

import com.android.tools.lint.detector.api.Category
import com.android.tools.lint.detector.api.Detector
import com.android.tools.lint.detector.api.Implementation
import com.android.tools.lint.detector.api.Issue
import com.android.tools.lint.detector.api.JavaContext
import com.android.tools.lint.detector.api.Scope
import com.android.tools.lint.detector.api.Severity
import org.jetbrains.uast.UClass

class YourCustomLintDetector : Detector(), Detector.UastScanner {
    companion object {
        private val issueId = "yourCustomLint"
        private val issueDescription = "Description for yourCustomLint"
        private val issueExplanation = "Some more concrete explanation for yourCustomLint"
        // https://android.googlesource.com/platform/tools/base/+/master/lint/libs/lint-api/src/main/java/com/android/tools/lint/detector/api/Category.java#122
        private val issueCategory = Category.CORRECTNESS
        private val issuePriority = 10
        private val issueSeverity = Severity.FATAL
        private val implementation = Implementation(YourCustomLintDetector::class.java, Scope.JAVA_FILE_SCOPE)

        internal val issue = Issue.create(issueId, issueDescription, issueExplanation, issueCategory, issuePriority, issueSeverity, implementation)
    }

    override fun applicableSuperClasses() = listOf("some.super.class.of.yourCustomLint")

    // If a target class, one which inherits "some.super.class.of.yourCustomLint", has no "yourCustomLintName" method, the following lines report it to the user.
    override fun visitClass(context: JavaContext, declaration: UClass) {
        val isOverridden = declaration.methods
                .firstOrNull { method -> method.isOverride && method.name.contains("yourCustomLintName") } != null

        if (!isOverridden) {
            declaration.uastAnchor?.let {
                context.report(issue, context.getLocation(it), "$issueDescription \n $issueExplanation")
            }
        }
    }
}

Registry

import com.android.tools.lint.client.api.IssueRegistry
import com.android.tools.lint.detector.api.Issue
// plus an import for YourCustomLintDetector if it lives in a different package

class LintClassName : IssueRegistry() {
    override fun getIssues(): List<Issue> = listOf(YourCustomLintDetector.issue)
}

Check

Add the following lines to your Gradle dependencies.

dependencies {
  lintChecks project(':your-lint-project')
}

Test

You can implement tests for the custom rule.

https://android.googlesource.com/platform/tools/base/+/studio-master-dev/lint/libs/lint-tests/src/main/java/com/android/tools/lint/checks/infrastructure/LintDetectorTest.java

import com.android.tools.lint.checks.infrastructure.LintDetectorTest

class YourCustomLintDetectorErrorTest : LintDetectorTest() {
    fun testBasic() {
        lint().files(
                java("Implement Java Code which should raise lint errors."))
                .allowCompilationErrors()
                .run()
                .expect("Error message the lint should raise")
    }

    override fun getDetector() = YourCustomLintDetector()

    override fun getIssues() = listOf(YourCustomLintDetector.issue)
}
import com.android.tools.lint.checks.infrastructure.LintDetectorTest

class YourCustomLintDetectorCleanTest : LintDetectorTest() {
    fun testBasic() {
        lint().files(
                java("Implement Java Code which should not raise lint errors."))
                .allowCompilationErrors()
                .run()
                .expectClean()
    }

    override fun getDetector() = YourCustomLintDetector()

    override fun getIssues() = listOf(YourCustomLintDetector.issue)
}

Conclusion

I described an example of implementing a custom lint check shipped as a Gradle module. I believe custom lint checks help your projects, and if you work with teammates, the feature becomes even more powerful.

Have a good testing 🙂

[iOS]memo for ios-device-control

Around five months ago, Google open-sourced ios-device-control to automate iOS-related work. The repository supports both real devices and simulators.

https://github.com/google/ios-device-control

It provides some examples; https://github.com/google/ios-device-control/blob/master/java/com/google/iosdevicecontrol/examples/ExampleSimulatorDeviceControl.java is one that uses the Simulator. When we run the example, we see the following output.

$ java -jar SimulatorDevice-jar-with-dependencies.jar
Jan 15, 2018 12:15:03 AM com.google.iosdevicecontrol.examples.ExampleSimulatorDeviceControl main
INFO: Screenshot written to: /var/folders/y6/524wp8fx0xj5q1rf6fktjrb00000gn/T/screenshot4392370072269065541.png

Google already has EarlGrey for UI-level test automation, and this repository doesn't compete with it since, I guess, this tool is mainly for handling the devices themselves. I know everyone hoping to enhance iOS test automation struggles with these difficulties.

Have a good automation 🙂

[NPM][Appium] Restrict installed module size

I found https://github.com/appium/appium/issues/9912 and learned about npm's --production flag to reduce the installed module size.

Install without devDependencies

$ npm install --production

https://docs.npmjs.com/cli/install#description

With the --production flag (or when the NODE_ENV environment variable is set to production), npm will not install modules listed in devDependencies.

Remove devDependencies resources

https://docs.npmjs.com/cli/prune

$ npm install
$ cd node_modules/appium
$ npm prune --production