Compose Datasets, Don't Inherit Them

In relatively young disciplines like deep learning, people tend to leave old principles behind. Sometimes this is a good thing, because times have changed and old truths, e.g. that over-completeness is bad, have to go. Other times, such principles have stuck around for a reason, and yet people still over-eagerly try to throw them out the window. I am no exception in this regard, so let me tell you how I “re-learned” the tried-and-true design pattern of “Composition over Inheritance”…

30 January 2022 · 13 min
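The pattern the title hints at can be illustrated with a small sketch. This is not code from the post itself, but a minimal, hypothetical example assuming PyTorch-style datasets: a wrapper dataset (composition) stays reusable across any base dataset, where a subclass (inheritance) would be tied to one concrete parent class.

```python
import torch
from torch.utils.data import Dataset


class SquaresDataset(Dataset):
    """A toy dataset: the squares of 0..n-1."""

    def __init__(self, n: int = 100):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        return torch.tensor(float(idx) ** 2)


class NoisyDataset(Dataset):
    """Composition: wraps *any* dataset and adds Gaussian noise.

    An inheritance-based NoisySquaresDataset(SquaresDataset) would
    have to be re-implemented for every new base dataset; this
    wrapper works with all of them.
    """

    def __init__(self, wrapped: Dataset, std: float = 0.1):
        self.wrapped = wrapped
        self.std = std

    def __len__(self):
        return len(self.wrapped)

    def __getitem__(self, idx):
        return self.wrapped[idx] + torch.randn(()) * self.std


noisy_squares = NoisyDataset(SquaresDataset(), std=0.5)
print(noisy_squares[3])  # the square of 3, plus a bit of noise
```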

Make DL4J Readable Again

A while ago, I stumbled upon an article about the language Kotlin and how to use it for Data Science. I found it interesting, as some of Python’s quirks were starting to bother me and I wanted to try something new. A day later, I had completed the Kotlin tutorials using Kotlin Koans in IntelliJ IDEA (which is an excellent way to get started with Kotlin). Hungry to test out my new language skills, I looked around for a project idea…

20 September 2020 · 13 min

How to Trust Your Deep Learning Code

Deep learning is a discipline where the correctness of code is hard to assess. Random initialization, huge datasets, and the limited interpretability of weights mean that finding out why your model is not training is trial and error most of the time. In classical software development, automated unit tests are the bread and butter for determining whether your code does what it is supposed to do. They help developers trust their code and be confident when introducing changes…

1 August 2020 · 27 min
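As a taste of what such a test can look like, here is a minimal, hypothetical sketch (not code from the post, and assuming a PyTorch model): it asserts that a single training step actually changes every parameter, one of the few properties that can be checked deterministically despite random initialization.

```python
import torch
from torch import nn


def test_parameters_change_after_one_training_step():
    torch.manual_seed(0)  # make the test reproducible
    model = nn.Linear(4, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    inputs = torch.randn(8, 4)
    targets = torch.randn(8, 1)

    # Snapshot the parameters before training.
    before = [p.clone() for p in model.parameters()]

    # One ordinary training step.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # Every parameter should have moved; a frozen or detached
    # parameter is a common, silent bug in deep learning code.
    for old, new in zip(before, model.parameters()):
        assert not torch.equal(old, new), "parameter did not update"


test_parameters_change_after_one_training_step()
```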