I finally got my first MacBook

Since I started freelancing, building simple web pages and graphic designs, I always wanted a MacBook. For many people in first-world countries, having a Mac may not seem like a big deal. But here in Argentina, Apple products are considerably more expensive than in the rest of the world. You can check for yourself.

For me that meant that to buy a three-year-old 13-inch MacBook Pro, I would have to work a full year without spending a penny. You can guess that I wasn’t making a lot of money for a long time. But I also remember that I was earning the average Argentinian salary for a recently graduated software engineer, which was half of what the average truck driver earns.

But even before working with other engineers I had an even worse manual job, printing bags. It was very physically exhausting for my arms whenever I did it for more than seven hours straight. The good thing is that I was able to watch a lot of YouTube videos while working, because the job only required very mechanical, repetitive movements.

My favourite videos were the DEF CON hacking conference talks. Most of the time they presented very simple hacking solutions I was able to understand. I was amazed by how simple, even unsophisticated, hacking can be.

At those conferences most speakers present using their Macs, so I was constantly exposed to the Apple logo as a badge for developers. Most cool developers presenting their findings and projects do it on a Mac. But that was an impossible dream: I was also working on my own apps, and just running a relational database would consume all my monthly earnings in AWS expenses.

Things also got worse with time: not only did I stop getting bag jobs, the Twitter jobs dried up too. Apparently Twitter had become very competitive. At one point, I remember having nothing to eat in the refrigerator. I could buy food, of course, but then I wouldn’t have had enough left to run a proper RDBMS. Don’t worry, I was fine: my parents and grandmother brought food. Still, that kind of situation makes you very defensive with money; you don’t know what the next month will look like.

Why should you care about all of that? Because if you didn’t know it about me, you wouldn’t be able to understand how significant it is for me to be in a position where I can eat whatever I want and also type on my very own MacBook.

For me a MacBook signified the final achievement, the one thing that could signal whether or not I had made it. Now I have it, I hold it in my hands. This week I cried hugging my new MacBook; I walked around hugging it tightly against my chest. I’m full of joy.

You may wonder what I want next, now that I finally got the thing I had been desiring for 10 years. Well, it was a milestone, a point on a roadmap; the adventure has just started.

About updating dependencies

Have you noticed how differently communities manage their ecosystems? How, for the most part, communities want the most up-to-date versions of everything? Of course: we want the shiniest and most feature-complete versions of our software.

But always having the most up-to-date version of something is not always desirable, especially in production. We can also agree, though, that having old, outdated software isn’t an option. Or is it? I would say that, in my Erlang experience, we need a good balance between the two.

In general we trust the ability of our own code to scale, but we don’t trust our dependencies. We have to know and understand the inner workings of each one of our dependencies before using them in production.

Developers in general don’t test their libraries under heavy load, and many libraries start changing their behavior when you push your system and its dependencies to their limits. An extremely common example is libraries that need to store state: in Erlang they usually rely on a gen_server, which works just fine until you push to prod and run out of memory very quickly.

Another, less common, example in Erlang is a dependency that calls C code. If the C code doesn’t finish fast, it basically destroys all scheduling: the Erlang VM loses its preemptive scheduling capabilities while it isn’t executing Erlang code.

There are many other considerations, too: does this lib use any kind of pooling? How does it scale? Can its internal structure handle all the calls we will be making?

When I was an RTB developer, I didn’t just update the dependencies. Updating was a separate task in itself: I had to read all the source-code changes and search the mailing lists for any problems other developers had, and only after sufficient time had passed did I start updating. For the most part I tried to always stay one version behind the latest one.

When I worked in RTB, we even had to be able to answer yes to at least one of these questions before we were allowed to use a new dependency:

  • Is it a critical update, without which our system crashes?
  • Is it indispensable for our system, something that doesn’t make sense to write ourselves, while at the same time we know its inner workings?
  • Do you have production experience with that particular dependency?

The more moving parts we have, the more complexity we get; and the more frequently we update our software, the more complexity and bugs we add. The emergent behavior of our systems can’t be predicted.

Expert Python topics you should know

What are the main topics that distinguish an advanced developer from a merely effective Python programmer? You are good at Python when the code you write is elegantly simple and idiomatic.

Each language and community has its own way of resolving certain kinds of problems. That specific way of doing things is what we call idiomatic. We want our code to be idiomatic because not only will we be writing code that is easier to understand, we will also be resolving problems using well-known and tested techniques. Being idiomatic means creating simple code that relies on existing solutions for common problems. We don’t reinvent the wheel.
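
To make “idiomatic” concrete, here is a tiny, made-up example: both snippets below compute the squares of the even numbers, but only the second one reads the way a Python programmer expects.

nums = [1, 2, 3, 4, 5]

# non-idiomatic: index-driven loop with manual accumulation
squares = []
for i in range(len(nums)):
    if nums[i] % 2 == 0:
        squares.append(nums[i] ** 2)

# idiomatic: a list comprehension says the same thing at a glance
squares = [n ** 2 for n in nums if n % 2 == 0]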

In this post I will describe the main topics that can make your code more idiomatic, and some advanced functionality you need to be familiar with as an advanced Python developer.

Multiple Python versions

There will be situations where you need multiple versions of Python. You may be just fine using your system’s default Python 2 or 3, but there are situations when a client or project requires a very specific version. You may also need to work on several projects, each of which uses a different specific version. In this scenario you need a way to manage your Python versions. And this is not the same as managing dependency versions: I’m talking about the version of the Python language itself.

The solution to this problem is very simple: just use pyenv. With it, you will be able to have any version you want at your disposal, very easily.

$ pyenv versions # lists all installed versions
$ pyenv install 3.7.4 # installs a specific version
$ pyenv global 3.7.4 # activates the specific version
$ pyenv local 3.7.4 # version for a directory

pyenv also installs the development headers that you will need when building C/C++ extensions. But you shouldn’t worry about the exact path: CMake‘s find_package is going to help you with that.

find_package(Python3 COMPONENTS Development)
target_include_directories(<project_name>
    PUBLIC ${Python3_INCLUDE_DIRS})

Dunder methods

Dunder (or magic) methods are methods that start and end with a double underscore, like __init__ or __str__. These methods are the mechanism we use to interact directly with Python’s data model. A language’s data model describes:

  • How values are stored in memory.
  • Object identity, equality and truthiness.
  • Name resolution, function/method dispatching.
  • Basic types, type/value composition.
  • Evaluation order, eagerness/laziness.

Basically, the __ methods allow us to interact with core concepts of the Python language. You can also see them as a mechanism for implementing behaviours, like interface methods.
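
As a small illustration (the Vector class is made up for this example), here is how a few common dunders hook a class into the data model:

class Vector:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __repr__(self):  # controls how instances are displayed
        return f"Vector({self.x}, {self.y})"
    def __eq__(self, other):  # gives meaning to ==
        return (self.x, self.y) == (other.x, other.y)
    def __add__(self, other):  # gives meaning to +
        return Vector(self.x + other.x, self.y + other.y)

Vector(1, 2) + Vector(3, 4)  # Vector(4, 6)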

For a detailed description of many useful dunders and related concepts, I recommend you read this guide.

@ Function decorators

Decorators are nothing more than a special case of higher-order functions with @ syntax support. It’s pretty much equivalent to doing function composition. We can use decorators not only for normal functions but also for class methods.

from functools import wraps
def add10(f):
    @wraps(f)
    def g(*args, **kwargs):
        return f(*args, **kwargs) + 10
    return g
@add10
def add1(a):
    return a + 1
add1(0)  # 11

wraps from functools is itself another decorator; it keeps the metadata of the original wrapped function.
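
A quick way to see what wraps buys us, reusing the add10 and add1 definitions from above:

add1.__name__  # 'add1', preserved thanks to @wraps
# without the @wraps(f) line inside add10 this would report 'g',
# and help(), debuggers and profilers would show the wrapper instead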

Interfaces

Interfaces help us enforce the implementation of certain characteristics by other code that commits to doing so.

One of the characteristics we can enforce is the definition of specific methods, much like we would with a normal Java interface:

from abc import ABC, abstractmethod
class Animal(ABC):
    @abstractmethod
    def make_sound(self):
        return "indistinguishable noise"
class Cat(Animal):
    def make_sound(self):
        return "miauu"
class Dog(Animal):
    def make_something(self):
        return "eat"

We used the @abstractmethod decorator (its cousin @abstractclassmethod is deprecated) to enforce the definition of a specific method in child classes:

>>> Cat().make_sound()
'miauu'
>>> Dog()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't instantiate abstract class Dog with abstract methods make_sound

But the previous enforcement requires us to actually try to create an instance. If we want to be even stricter, we can use a metaclass to make the script fail while the class definition is being loaded.

class Animal(type):
    def __new__(cls, name, bases, body):
        if 'make_sound' not in body:
            raise TypeError('no make_sound method')
        return super().__new__(cls, name, bases, body)
class Cat(metaclass=Animal):
    def make_sound(self):
        return "miauu"
class Dog(metaclass=Animal):
    def make_something(self):
        return "eat"
Traceback (most recent call last):
  [...] class Dog(metaclass=Animal):
TypeError: no make_sound method

In the same way that instantiating a class creates an object, instantiating a metaclass creates a class. Metaclasses are a way of controlling the creation of classes. This example also indirectly shows that the __new__ dunder is responsible for creating the instance, while __init__ initializes the instance previously created by __new__.
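
A minimal sketch of that last point, outside of any metaclass (the Point class is made up for illustration):

class Point:
    def __new__(cls, *args, **kwargs):
        # __new__ allocates and returns the raw instance
        print("__new__ runs first")
        return super().__new__(cls)
    def __init__(self, x, y):
        # __init__ only fills in the instance __new__ already created
        print("__init__ runs second")
        self.x, self.y = x, y

Point(1, 2)  # prints "__new__ runs first", then "__init__ runs second"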

Since Python 3.6, instead of having the __new__ method inside a metaclass, we can just use __init_subclass__. Our interface example then becomes the following:

from _collections_abc import _check_methods
class Animal():
    def __init_subclass__(cls, *args, **kwargs):
        if _check_methods(cls, 'make_sound') is NotImplemented:
            raise TypeError("make_sound not implemented")
class Cat(Animal):
    [...] # same as in the previous example
class Dog(Animal):
    [...] # same as in the previous example

If you use _check_methods you will have extra style points.

Context manager

Context managers, or in mundane words: classes with __enter__ and __exit__ methods. Context managers give us support for the RAII pattern through the with syntax. An important thing to remember when implementing __exit__ is to check the exception values, because you get to choose whether or not to propagate the exception that happened inside the with block: returning a truthy value suppresses the exception. But under no circumstances are you expected to re-raise that exception inside __exit__.
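
A minimal sketch of that suppression rule, essentially a hand-rolled version of what contextlib.suppress already does:

class Suppress:
    def __init__(self, *exc_types):
        self.exc_types = exc_types
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        # a truthy return value swallows the exception, falsy propagates it
        return exc_type is not None and issubclass(exc_type, self.exc_types)

with Suppress(ZeroDivisionError):
    1 / 0  # suppressed; execution continues after the with block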

For example, let’s suppose we had nothing better to do than to use the low-level http.client library; we could wrap HTTPConnection inside a context manager:

from http.client import HTTPConnection
from contextlib import AbstractContextManager # not really necessary but looks cool
class Conn(AbstractContextManager):
    def __init__(self, host):
        self.host = host
    def __enter__(self):
        self.conn = HTTPConnection(self.host, 80)
        return self.conn
    def __exit__(self, *args):
        self.conn.close()
with Conn('example.com') as conn:
    conn.request('GET', '/')
    res = conn.getresponse()
    print(res.status, res.reason)

That code is very verbose; we can fix that using the contextlib module (I recommend you read its whole documentation). If we want, we can use a generator instead of a full AbstractContextManager class implementation:

from http.client import HTTPConnection
from contextlib import contextmanager
@contextmanager
def Conn(host):
    conn = HTTPConnection(host, 80)
    try:
        yield conn
    finally:
        conn.close()

We can also get rid of the Conn class completely with closing:

from contextlib import closing
with closing(HTTPConnection("example.com", 80)) as conn:
    conn.request('GET', '/')
    res = conn.getresponse()
    print(res.status, res.reason)

Asynchronous programming

When we use async and await we are doing cooperative concurrency (not parallelism). You may want to check some documentation or a tutorial if you are not familiar with those terms.

In practice we have a bunch of async functions and an event loop. And that’s it. But what happens when you actually want to define a truly parallel operation? What if some important client wants some custom, crazy high-performance network code? How can we create low-level parallel code that, from the point of view of Python, appears to be asynchronous code?
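
To make that picture concrete, a minimal sketch of “a bunch of async functions and an event loop” (the names are made up):

import asyncio

async def greet(name):
    await asyncio.sleep(0.1)  # cooperative: yields control to the event loop
    return f"hello {name}"

async def main():
    # both coroutines make progress interleaved on a single thread
    print(await asyncio.gather(greet("a"), greet("b")))

asyncio.run(main())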

First we need a way to turn a blocking function into a coroutine. The following make_async function does exactly that:

import asyncio
import inspect
from functools import wraps
from concurrent.futures import ThreadPoolExecutor
def make_async(g):
    @wraps(g)
    async def f(*args, **kwargs):
        loop = asyncio.get_event_loop()
        return await loop.run_in_executor(
            ThreadPoolExecutor(),
            lambda: g(*args, **kwargs)
        )
    # replace g with f in the module that called make_async
    frm = inspect.stack()[1]
    mod = inspect.getmodule(frm[0])
    setattr(mod, g.__name__, f)

A very neat function, right? Yehaa! But this will only truly work if the g function releases the GIL. The following code, which uses pybind11, defines a C++ module x with a send_message function that releases the GIL internally.

#include <pybind11/pybind11.h>
#include <thread>
#include <chrono>
std::string send_message(std::string input)
{
    pybind11::gil_scoped_release release; // GIL RELEASE
    std::this_thread::sleep_for(std::chrono::seconds(1));
    return input + " done!";
}
PYBIND11_MODULE(x, m) {
    m.def("send_message", &amp;send_message, 
          "sends something though the network");
}

The pybind11::gil_scoped_release object releases the GIL when it is constructed and reacquires it when it is destroyed, at the end of the function call.

import asyncio
from x import send_message
make_async(send_message) # using the function from above
async def send(msg):
    print("sending", msg)
    result = await send_message(msg)
    print("sent", result)
    return result
loop = asyncio.get_event_loop()
to_send = [loop.create_task(send(str(i))) for i in range(3)]
loop.run_until_complete(asyncio.wait(to_send))

And the output is what you would expect:

sending 0
sending 1
sending 2
sent 0 done!
sent 1 done!
sent 2 done!

Profiling

Profiling is one of those techniques we reach for when we really fucked something up. It’s a great tool to know for when you are suffering, trying to figure out why you are getting esoteric crashes or why something isn’t working as it should.

For call trees and CPU time we can use cProfile and KCacheGrind:

$ python -m cProfile -o script.profile main.py
$ pyprof2calltree -i script.profile -o script.calltree
$ kcachegrind script.calltree

But cProfile, profile and hotshot aren’t that useful if we have multi-threaded code or if a bottleneck is generated by non-explicit function calls. A much more effective profiler is yappi. You won’t go back to cProfile after playing around with it. And don’t take my word for it: the PyCharm IDE uses yappi by default if you have it installed.

To use yappi we need to add some code to our script:

import yappi
yappi.start(builtins=True)
# YOUR CODE GOES HERE
# a context manager would be great for this
func_stats = yappi.get_func_stats()
func_stats.save('script.calltree', 'CALLGRIND')
yappi.stop()
yappi.clear_stats()

After the profiling ends we can open the profiling file with:

$ kcachegrind script.calltree

Another important aspect of profiling is to record memory usage.

We can take a memory snapshot at any moment with pympler:

from pympler import muppy, summary
all_objects = muppy.get_objects()
summary.print_(summary.summarize(all_objects), limit=100)

The main features of pympler can be accessed through ClassTracker.
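
A minimal sketch of tracking per-class memory over time with ClassTracker (the Document class is a made-up stand-in for whatever you want to track):

from pympler.classtracker import ClassTracker

class Document:
    def __init__(self, body):
        self.body = body

tracker = ClassTracker()
tracker.track_class(Document)
tracker.create_snapshot()  # baseline

docs = [Document("x" * 1000) for _ in range(1000)]
tracker.create_snapshot()  # after allocating

tracker.stats.print_summary()  # per-snapshot sizes of tracked classes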

Network analysis

Some applications are very hard to understand, and we need to start seeing them as black boxes. Or maybe we have a very obscure problem when we send data through the network.

The most practical kind of analysis consists of modifying your software to record every request it receives or sends, and in a perfect world you would want to be able to turn that feature on and off in production.
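
A minimal sketch of that idea, assuming a WSGI application (the traffic_logger middleware and the LOG_TRAFFIC variable are hypothetical, just to show the runtime on/off switch):

import logging
import os

def traffic_logger(app):
    # hypothetical WSGI middleware: logs each request when enabled
    def middleware(environ, start_response):
        if os.environ.get("LOG_TRAFFIC") == "1":  # toggle without redeploying
            logging.info("%s %s", environ["REQUEST_METHOD"],
                         environ["PATH_INFO"])
        return app(environ, start_response)
    return middleware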

But most mortals don’t understand their own systems well enough, nor want to make that time investment. Even then, there are things we can do:

  • Incoming traffic: ensure that our server receives plain HTTP traffic. We can do this by sitting behind a load balancer or a reverse proxy, so we keep serving HTTPS to the outside. Then we can simply use wireshark to read the incoming traffic.
  • Outgoing traffic: we need a proxy, and if we are making HTTPS requests we also have to install custom certificates so we can be “the man in the middle”. This requires the use of mitmproxy.

In normal scenarios, where you understand and control the codebase, you should be logging and analysing the traffic internally without relying on the previous tricks, especially because you may not be able to do “mitm attacks” against a production server under heavy load without slowing everything down.

Logs are your best friend…

Logging

Most of the time, logging just works and you shouldn’t worry. But under heavy load we can approach logging in a few ways:

  • Don’t log at all, and only use metrics instead. Or,
  • Send the logs through the network: logging locally will put your server and your code under heavy load, and you may need code specially designed to handle the logging.
  • Avoid logging to a disk that you use for something else: don’t add load to a disk that other tasks depend on.
  • If you want to write your local logs yourself, please ensure that you rotate them. You could use RotatingFileHandler (see the sketch below), but logrotate is better.

Obviously, how you approach any logging problem will depend on how often you log and on how important and how big the logs are. In most cases you can just log and forget it exists.
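
For completeness, a minimal RotatingFileHandler setup (file name and sizes are arbitrary):

import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler("app.log",
                              maxBytes=10 * 1024 * 1024,  # rotate at 10 MB
                              backupCount=5)  # keep app.log.1 .. app.log.5
logging.basicConfig(level=logging.INFO, handlers=[handler])
logging.info("rotation configured")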

How to create an initramfs after you compile a linux kernel

In summary, you build an initramfs by putting all the files the OS requires in a folder, converting everything in there into a single .cpio archive, and finally gzipping that.

From your perspective, the initramfs is a gzipped cpio file that contains the initial file system the kernel needs to run. And that includes the /init script.

I strongly recommend you read these two files from the Linux kernel source code, even if you don’t fully understand them:

  • Documentation/early-userspace/README: why we need an early user space, what it is, and the different tools to compile it with the kernel.
  • Documentation/filesystems/ramfs-rootfs-initramfs.txt: an explanation of the RAM filesystems you will be using while booting.

What I wanted was to create an initramfs with everything I could need for testing my kernel with qemu, without recompiling the kernel. For this reason, I didn’t change any option while compiling the kernel.

When we first boot, we need at least some tools to start working. This includes the init process and tools like ls, mount, mv, etc. To get those user-space tools you can use BusyBox. BusyBox packs many useful commands into just 1.1MB:

acpid, add-shell, addgroup, adduser, adjtimex, arch, arp, arping, ash, awk,
base64, basename, bc, beep, blkdiscard, blkid, blockdev, bootchartd, brctl,
bunzip2, bzcat, bzip2, cal, cat, chat, chattr, chgrp, chmod, chown, chpasswd,
chpst, chroot, chrt, chvt, cksum, clear, cmp, comm, conspy, cp, cpio, crond,
crontab, cryptpw, cttyhack, cut, date, dc, dd, deallocvt, delgroup, deluser,
depmod, devmem, df, dhcprelay, diff, dirname, dmesg, dnsd, dnsdomainname,
dos2unix, dpkg, dpkg-deb, du, dumpkmap, dumpleases, echo, ed, egrep, eject,
env, envdir, envuidgid, ether-wake, expand, expr, factor, fakeidentd,
fallocate, false, fatattr, fbset, fbsplash, fdflush, fdformat, fdisk,
fgconsole, fgrep, find, findfs, flock, fold, free, freeramdisk, fsck,
fsck.minix, fsfreeze, fstrim, fsync, ftpd, ftpget, ftpput, fuser, getopt,
getty, grep, groups, gunzip, gzip, halt, hd, hdparm, head, hexdump, hexedit,
hostid, hostname, httpd, hush, hwclock, i2cdetect, i2cdump, i2cget, i2cset,
i2ctransfer, id, ifconfig, ifdown, ifenslave, ifplugd, ifup, inetd, init,
insmod, install, ionice, iostat, ip, ipaddr, ipcalc, ipcrm, ipcs, iplink,
ipneigh, iproute, iprule, iptunnel, kbd_mode, kill, killall, killall5, klogd,
last, less, link, linux32, linux64, linuxrc, ln, loadfont, loadkmap, logger,
login, logname, logread, losetup, lpd, lpq, lpr, ls, lsattr, lsmod, lsof,
lspci, lsscsi, lsusb, lzcat, lzma, lzop, makedevs, makemime, man, md5sum,
mdev, mesg, microcom, mkdir, mkdosfs, mke2fs, mkfifo, mkfs.ext2, mkfs.minix,
mkfs.vfat, mknod, mkpasswd, mkswap, mktemp, modinfo, modprobe, more, mount,
mountpoint, mpstat, mt, mv, nameif, nanddump, nandwrite, nbd-client, nc,
netstat, nice, nl, nmeter, nohup, nologin, nproc, nsenter, nslookup, ntpd,
nuke, od, openvt, partprobe, passwd, paste, patch, pgrep, pidof, ping, ping6,
pipe_progress, pivot_root, pkill, pmap, popmaildir, poweroff, powertop,
printenv, printf, ps, pscan, pstree, pwd, pwdx, raidautorun, rdate, rdev,
readahead, readlink, readprofile, realpath, reboot, reformime, remove-shell,
renice, reset, resize, resume, rev, rm, rmdir, rmmod, route, rpm, rpm2cpio,
rtcwake, run-init, run-parts, runlevel, runsv, runsvdir, rx, script,
scriptreplay, sed, sendmail, seq, setarch, setconsole, setfattr, setfont,
setkeycodes, setlogcons, setpriv, setserial, setsid, setuidgid, sh, sha1sum,
sha256sum, sha3sum, sha512sum, showkey, shred, shuf, slattach, sleep, smemcap,
softlimit, sort, split, ssl_client, start-stop-daemon, stat, strings, stty,
su, sulogin, sum, sv, svc, svlogd, svok, swapoff, swapon, switch_root, sync,
sysctl, syslogd, tac, tail, tar, taskset, tc, tcpsvd, tee, telnet, telnetd,
test, tftp, tftpd, time, timeout, top, touch, tr, traceroute, traceroute6,
true, truncate, ts, tty, ttysize, tunctl, ubiattach, ubidetach, ubimkvol,
ubirename, ubirmvol, ubirsvol, ubiupdatevol, udhcpc, udhcpc6, udhcpd, udpsvd,
uevent, umount, uname, unexpand, uniq, unix2dos, unlink, unlzma, unshare,
unxz, unzip, uptime, users, usleep, uudecode, uuencode, vconfig, vi, vlock,
volname, w, wall, watch, watchdog, wc, wget, which, who, whoami, whois,
xargs, xxd, xz, xzcat, yes, zcat, zcip

I’m listing all those programs because at some point I thought that all those basic programs would be available to me after compiling the kernel, via some sort of magic. But they are not. You need to compile or download them separately first.

The other thing I realized is that you need to build your initramfs on your own. I mean, you need to choose which software you want available when your kernel starts.

I used this script to create my initramfs:

#!/bin/bash

ARCH="x86_64"
BB_VER="1.31.0"

# Dirs
mkdir -p root
cd root
mkdir -p bin dev etc lib mnt proc sbin sys tmp var
cd -

# Utils
if [ ! -f "root/bin/busybox" ]; then
    curl -L "https://www.busybox.net/downloads/binaries/${BB_VER}-defconfig-multiarch-musl/busybox-${ARCH}" >root/bin/busybox
fi
cd root/bin
chmod +x busybox
ln -s busybox mount
ln -s busybox sh
cd -

# Init process

cat >>root/init << EOF
#!/bin/busybox sh
/bin/busybox --install -s /bin
mount -t devtmpfs  devtmpfs  /dev
mount -t proc      proc      /proc
mount -t sysfs     sysfs     /sys
mount -t tmpfs     tmpfs     /tmp
setsid cttyhack sh
exec /bin/sh
EOF
chmod +x root/init

# initramfs creation

cd root
find . | cpio -ov --format=newc | gzip -9 >../initramfs
cd -

This script outputs an initramfs containing an /init shell script, some directories, and the BusyBox binary.

The basic directories are created: bin, dev, etc, lib, mnt, proc, sbin, sys, tmp, var.

After BusyBox is downloaded, two symbolic links are created, mount and sh: exactly the commands used directly in the init script.

The init script installs all the other required symbolic links, for all the commands I listed above while talking about BusyBox. It also mounts the basic file systems so we can start playing around.

gen_init_cpio

In my case I used a simple bash script, but you may want to use usr/gen_init_cpio (it lives in the Linux kernel source tree) for that. It’s a ready-made tool for creating an initramfs from a file that describes the file system structure. It can be called like this:

$ usr/gen_init_cpio descriptor | gzip >initramfs

And the descriptor may look like this:

file /init my-init.sh 07555 0 0
dir /bin 0755 0 0
nod /dev/zero 0666 0 0 c 1 5
file /bin/busybox /bin/busybox 0755 0 0

To learn more about the syntax just call the command without any arguments: usr/gen_init_cpio.

Create a functor for a binary tree in Scala

I’m trying to really, really understand the Scala cats library. So while I was reading the book “Scala with Cats”, I found an exercise that asked me to create a functor for a binary tree:

sealed trait Tree[+A]
final case class Branch[A](left: Tree[A], right: Tree[A])
  extends Tree[A]
final case class Leaf[A](value: A) extends Tree[A]

The most immediate solution is to use recursion. Done this way, the solution is extremely simple:

override def map[A, B](tree: Tree[A])(func: A => B): Tree[B] = {
  tree match {
    case Branch(left, right) =>
      Branch(map(left)(func), map(right)(func))
    case Leaf(value) =>
      Leaf(func(value))
  }
}

But I felt that this code wouldn’t scale well in a practical scenario with very deep trees. So I decided to use my knowledge of data structures in C++ to do this. It wasn’t that easy; I found many problems:

  • In C++ you can copy the whole thing and then update each field. A normal case class without vars won’t allow you to update anything.
  • In C++ it is much saner to use std::stack than custom classes with relationships between them. In other words, in a GC’ed language I shouldn’t have to care about the instantiation relationship between objects.
  • scala.collection.mutable.Stack is deprecated, and many say you should use a var of type List as a replacement. I believe they say that because the Stack implementation was just a wrapper around List.
  • scala.collection.mutable.ArrayStack isn’t deprecated, and it is implemented using arrays. I believe that when you want a mutable stack you should definitely use it instead of an immutable List.
  • Stacks are very good for traversal or folding, but I found them extremely hard to use when you want to create an immutable data structure.

I wasted many hours trying to resolve the problem using stacks. But the code was so complex that I couldn’t even verify the logic without debugging; for me that is not a real solution. I also tried very hard to avoid creating any auxiliary data structure, but I found myself making the code even harder to read.

After some thinking I decided that I needed this class:

case class Node[A,B](var value: Tree[A],
                     var parent: Option[Node[A,B]] = None,
                     var output: Option[Tree[B]] = None,
                     var vars: List[Tree[B]] = List())

Variable explanations:

  • value contains the original input value Tree[A].
  • I also need a parent, and only a parent rather than children, because to instantiate a Tree[T] my algorithm only needs to traverse in an upward direction. In other words, a Tree[T] can only be created once I have all its children. parent is also an Option because I use the None value to identify the root node and stop the loop.
  • vars is used to store all the children. I opted for a List because I can easily represent that no child has been created yet, or one, or two, or even more if the tree ever grows more children. Once vars has two elements, output can be created.
  • output holds the result. It is an Option because the Tree can only be created once we have all the required vars.

I know you may be thinking that I shouldn’t be using vars in the Node class. But because my real objective was to support arbitrarily deep trees, I believe it is a good trade-off. Also, Node can be private.

The map function for the Tree functor is this:

override def map[A, B](inputTree: Tree[A])(func: A => B): Tree[B] = {
  var current = Node[A,B](value = inputTree)
  while(!(current.parent.isEmpty && current.output.isDefined)){
    current match {
      case Node(_, Some(parent), Some(mappedResult), _) =>
        parent.vars = mappedResult :: parent.vars
        current = parent
      case Node(_, _, _, right :: left :: _) =>
        current.output = Branch(left, right).some
      case Node(Leaf(leafVal), _, _, _) =>
        current.output = Leaf(func(leafVal)).some
      case Node(Branch(left, _), _, _, List()) =>
        current = Node(left, current.some)
      case Node(Branch(_, right), _, _, List(_)) =>
        current = Node(right, current.some)
    }
  }
  current.output.get
}

It has mutation inside the function, but it’s minimal and self-contained. You could even define the Node class inside the map function if you wanted.

Trampoline

I find the non-recursive version of the functor a little difficult to read. After all, it is an algorithm that is recursive by nature. Also, you may be forced to do it recursively or functionally anyway.

Welcome to cats’ Eval.defer. It allows us to have stack safety.

def deferredMap[A,B](tree: Tree[A])(func: A => B): Eval[Tree[B]] = {
  tree match {
    case Leaf(value) => Eval.now(Leaf(func(value)))
    case Branch(left, right) => for {
      mappedLeft  <- Eval.defer(deferredMap(left)(func))
      mappedRight <- Eval.defer(deferredMap(right)(func))
      mapped      <- Eval.now(Branch(mappedLeft, mappedRight))
    } yield mapped
  }
}

This is much simpler than the previous version with a mutable data structure. The only problem is that it is harder to come up with this solution in the first place; it requires you to know cats and the Eval monad.

If I hadn’t used Eval.defer, a stack overflow would have happened. You may be thinking about using Eval.later, but it doesn’t work: later is just like a lazy val, and it will blow your stack when you call .value.

Testing it

I thought the best way to test that these functions aren’t really consuming my stack was to create a deeply nested tree in the first place. And since I didn’t want to test my knowledge of mutable data structures again, I used Eval once more to write a function that generates a random tree of a given depth.

def branchRandomSwap[A](a: Tree[A], b: Tree[A]): Tree[A] =
  if(math.random() > 0.5) branch(a, b) else branch(b, a)
def random(n: BigInt): Eval[Tree[Float]] =
  n > 1 match {
    case true =>
      for {
        elem1 <- Eval.defer(random(n-1))
        elem2 <- Eval.now(leaf(2f))
        branch <- Eval.now(branchRandomSwap(elem1, elem2))
      } yield branch
    case false =>
      Eval.now(Leaf(1f))
  }

If you did everything correctly, the following code shouldn’t crash:

val randomTree = random(100000).value
randomTree.map(_*2)

Prepare your Android emulator for reverse engineering

Do you want to start reverse engineering some Android app but don’t know where to start? Well, this is the perfect post for you.

This post comes into existence because I needed to see the encrypted network traffic of an Android application that wasn’t available in my country, and I didn’t have any Android phone to spare and root.

In this tutorial we will talk about:

  • Root access to an emulated android device.
  • Man In The Middle for HTTPS traffic.
  • SSL Pinning removal.
  • .apk decompilation and debugging.

Root access to the Android Emulator

Most real-world applications need all the Google services activated on the phone to work. But if you run your Android emulator with an image that has Google Play installed, you won’t be able to call adb root. Any Android image with Google Play installed is considered a production image, and therefore you don’t get root access by default.

The way we interact with the “root” capabilities of a phone is different on an emulated phone compared with a physical one. On a non-production image you already have root access just by issuing:

adb root
adb remount

And that’s it. You have root access and can do whatever you want in your adb shell. You may still find that you can’t write to the file system; that is because you need to start the emulator with a writable filesystem:

emulator -avd <your-avd-name> -writable-system

I noticed that a side effect of using -writable-system is that the snapshot functionality breaks, and you can no longer rely on it.

Installing gapps

Because the previous commands only really work on a non-production image, you lack all the Google apps, including the services that your target application may need. For that reason you need to install GApps yourself. You can download them from The Open GApps Project.

It will be very hard to find a tutorial explaining how to install Open GApps on an emulated phone, because almost nobody does that. But here you go:

unzip open_gapps-<your-version>.zip 'Core/*'
rm Core/setup*
lzip -d Core/*.lz
for f in $(ls Core/*.tar); do
    tar -x --strip-components 2 -f $f
done
adb remount
adb push etc /system
adb push framework /system
adb push app /system
adb push priv-app /system
adb shell stop
adb shell start

Normal Android Root

Normally, you “root” your Android device by modifying your phone so that normal user space can do things that only root would be able to do. This is accomplished by installing two pieces:

  • An app that allows you to control which normal apps get access to root privileges.
  • A modified su command installed in your phone’s $PATH.

It’s very hard to achieve that “root” state in an emulator. Most of the time my emulator crashed, or the rooting didn’t work. And remember, you can’t use snapshots while doing this. Well, some people say they used snapshots to maintain their root state, but I couldn’t.

Because this way of rooting the phone leaves you with a very unstable system, and it is a very hard state to maintain, I recommend that if you really need this, you do it on a physical phone. But if you are adventurous, these were the links that taught me how to do it:

But my advice is that you try to avoid the “normal root” on an emulated device. Otherwise, a world of pain will be waiting for you.

Man In The Middle on Android

To really see the traffic of apps we need to do a MITM “attack” on our virtualized phone. Doing MITM over HTTP is trivial, but doing it with HTTPS is a little more involved. And it’s more annoying if you need to work around SSL pinning.

For this part of the post, you will need two pieces of software installed on your host machine:

  • mitmproxy: a proxy that allows us to inspect all HTTP/S traffic.
  • frida: dynamic instrumentation that allows us to modify an app’s code at runtime without recompiling anything.

The first time you run mitmproxy, it generates the certificates you need to install on your Android device. You will find them in:

$ ls ~/.mitmproxy/
mitmproxy-ca-cert.cer  mitmproxy-ca-cert.pem  mitmproxy-ca.pem
mitmproxy-ca-cert.p12  mitmproxy-ca.p12       mitmproxy-dhparam.pem

In our case, the only certificate we will be using is mitmproxy-ca-cert.cer:

adb push mitmproxy-ca-cert.cer /system/etc/security/cacerts/
adb reboot

Then, you may need to activate it in the system settings.

At this point you should be able to start seeing some HTTPS traffic in mitmproxy. But more often than not, your target application will still fail because it only trusts specific certificates. If that’s the case, it’s time to install frida on your phone. First, download an executable compatible with your phone’s CPU from GitHub. Then, as described here, install and run frida:

$ adb root
$ adb remount
$ adb push frida-server /data/local/tmp/
$ adb shell "chmod 755 /data/local/tmp/frida-server"
$ adb shell "/data/local/tmp/frida-server &amp;"

And as you may have guessed, frida-server must always be running on the phone for this to work. Then we need to create a frida script to remove the SSL pinning from the phone. In the most basic case this should be enough:

setTimeout(function(){
    Java.perform(function () {
        var TrustManagerImpl = Java.use('com.android.org.conscrypt.TrustManagerImpl');
        TrustManagerImpl.verifyChain.implementation = function (untrustedChain, trustAnchorChain, host, clientAuth, ocspData, tlsSctData) {
            return untrustedChain;
        }
    });
});

Then you run the script with:

frida -l <script-name>.js -U -f <installed-app-package> --no-pause

But in many cases you will want something more complex, like this. You should read this article for more details.

APK Decompilation

If you are analyzing an app looking for something, you may need to “decompile” it and get something you can read. Sadly, there is nothing that can convert an apk back to Java code, but we can get .smali files. Basically, we use a special program that converts the apk’s .dex Dalvik bytecode into something we “can” read. Those smali files are like assembly language, very hard to follow, and it gets even harder if the developer used some obfuscation.

I use apktool for this, and I recommend that you do too. But please always download the latest version, and don’t rely on your package manager for this. You use it like this:

# decompile
java -jar apktool.jar -r d <input-apk> -o <decompiler-location>
# compile
java -jar apktool.jar b <decompiled-location> -o <output-apk>

The next step with the decompiled apk is to try to debug it and follow the flow and state of the program while reading the .smali files. For that you need to use the Android Studio IDE with the smali plugin.

To actually be able to debug, you need to force the app to wait for the debugger before it can start, and then, when the phone is waiting for the debugger, you attach to it from the IDE. Yes, there is no magic “run” button here. A detailed but old tutorial is here.

While debugging you will be changing the source code, and because of those changes the signatures won’t match anymore, so you need to re-sign the .apk yourself. Assuming you have a self-signed certificate named default:

apktool b <decompiled-location> -o out.apk
jarsigner -keystore default.keystore -verbose out.apk default
adb install -r out.apk

How to gradually start adding type hints to a Python project

Wouldn’t it be cool to start adding types to an existing code base? Yes, but usually that requires changing the underlying data types we use. For example, we could have code that heavily relies on untyped dictionary manipulation:

from math import sqrt

def v2_modulus(v):
    return sqrt(v["x"] ** 2 + v["y"] ** 2)

This is a simple case, but we could also have .get, del, .clear or other dict calls, so my first approach wouldn’t be to create a dataclass or any other class. We can use TypedDict:

from math import sqrt
from typing import TypedDict

class Vec2(TypedDict):
    x: float
    y: float

def v2_modulus(v: Vec2):
    return sqrt(v["j"] ** 2 + v["n"] ** 2)

Ohoo, sorry, I made a mistake while retyping this: I wrote v["j"] and v["n"], which clearly are not valid keys for our new Vec2. Luckily, I was able to notice that with mypy:

error: TypedDict "Vec2" has no key 'j'
error: TypedDict "Vec2" has no key 'n'

What if, instead of using dictionaries, we were one of those functional-minded cool kids who like immutability and namedtuples? Well, we are in a better situation, because mypy can check whether we access a defined attribute or not. But it won’t catch type errors:

from collections import namedtuple

Vec2 = namedtuple("Vec2", "x y")
v = Vec2(1, 2)
v.x + v.z
v.x + {}

It can’t catch v.x + {} because namedtuple lacks typing information; it only defines fields:

error: "Vec2" has no attribute "z"

But you may have guessed it: for every untyped data structure there is a typed counterpart. Introducing typing.NamedTuple:

from typing import NamedTuple

class Vec2(NamedTuple):
    x: float
    y: float

v = Vec2(1, 2)
v.x + v.z
v.x + {}

This will catch all the problems with the previous code:

error: "Vec2" has no attribute "z"
error: Unsupported operand types for + ("float" and "Dict[<nothing>, <nothing>]")

What if we only used raw tuples? Well, that’s even easier:

from typing import Tuple

Vec2 = Tuple[float, float]

v: Vec2 = (1, 2)
v[0] + v[1]
v.x + v.z

The change here is that instead of creating an object of a given type, we add a type hint to the newly created variable to indicate its type.

error: "Tuple[float, float]" has no attribute "x"
error: "Tuple[float, float]" has no attribute "z"

Now imagine that we want to fetch an Account based on its id:

class Account: ...
def get_acc(acc_id: int) -> Account: ...
get_acc(1+2)

Having account_id defined as an int, a string, or whatever “real” type it is, is a really bad idea. An int is an int and shouldn’t be used as an account identifier, at least from a typing perspective. For this kind of situation we can use NewType.

from typing import NewType

AccountId = NewType("AccountId", int)
class Account: ...
def get_acc(acc_id: AccountId) -> Account: ...
get_acc(AccountId(1)+2)

That would produce an error:

error: Argument 1 to "get_acc" has incompatible type "int"; expected "AccountId"

Structural Duck-Typing

Believe it or not, duck typing is nothing more than the implicit definition of an interface, one that throws an exception when it is not honored. What happens if we have a lot of duck-typed code, like our Vector examples:

from dataclasses import dataclass
from typing import NamedTuple
from math import sqrt


@dataclass
class Vector2:
    x: float
    y: float


class Vector3(NamedTuple):
    x: float
    y: float
    z: float


class Vector4:
    def __init__(self, x, y, z, w):
        self.x = x
        self.y = y
        self.z = z
        self.w = w


def modulus(v2):
    return sqrt(v2.x ** 2 + v2.y ** 2)


modulus(Vector2(1,1))
modulus(Vector3(1,1,1))
modulus(Vector4(1,1,1,1))

We really don’t want to start touching those Vector types at all, but what we can do is add some extra typing to modulus to indicate that, semantically, it only needs the Vector2 part of those types.

from math import sqrt
from typing import Protocol

class V2(Protocol):
    @property
    def x(self) -> float: ...
    @property
    def y(self) -> float: ...


def modulus(v2: V2):
    return sqrt(v2.x ** 2 + v2.y ** 2)

V2 is a protocol with two read-only (@property) attributes: x and y. The read-only part is important in this example because of the use of NamedTuple in Vector3: named-tuple fields can’t be assigned, so they wouldn’t satisfy a protocol that declares plain, writable attributes.
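
A quick sketch of that pitfall, reusing Vector3 from above (MutableV2 and mutable_modulus are hypothetical names, only here to show the failure mode):

class MutableV2(Protocol):
    x: float  # plain protocol attributes are assumed writable by mypy
    y: float

def mutable_modulus(v2: MutableV2) -> float:
    return sqrt(v2.x ** 2 + v2.y ** 2)

mutable_modulus(Vector3(1, 1, 1))
# mypy rejects this call: Vector3's fields are read-only,
# so Vector3 is not compatible with the writable MutableV2 protocol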

At the time of writing, structural typing is not supported for modules by mypy: mypy#5018. Worst case, you can wrap modules in classes and call it a day.