Leaving Amazon for a sabbatical

It’s been over a week since I left Amazon, where I worked for over three years. It seems like the right time to write out some thoughts on the journey.

Disclaimer

First of all, a disclaimer which should be obvious, but one never knows. All thoughts are mine and come from personal experience. Amazon is a huge company with a total employee count close to a million, including a couple hundred thousand software developers. Others will have gone through different experiences, since they interacted with different people and teams.
Another disclaimer is that I worked on the corporate side of the company, which is different from warehouses and deliveries. Although one of my teams was in transportation and we provided solutions to yard (roughly “parking outside warehouses”) associates, I still don’t feel confident speaking for that side.

What is good about Amazon?

There are plenty of positive things I would like to say about the company. Off the bat: I would recommend working there. Not to everyone, which is why I left, but I’m guessing it will suit most people. There is a large number of teams to try out, and internal relocation policies make resettling possible.

As a first-time corporate employee, the most impressive thing about software development at Amazon was its internal knowledge. There are plenty of resources to go through and learn from: documentation, videos, designs, discussions, an internal “stack overflow”… At times it can be challenging to find what you’re looking for, but that’s due to the ever-changing environment and the vast amount of inherited information, which make scaling the knowledge base difficult. In addition to the written information, all teams have access to principal engineers, or folks who have been around for ages, and it’s great to pick their brains too.

Another positive is AWS availability. One can prototype at will and try out new features and services. It often simplified my design process, as I could quickly verify whether something works instead of digging through layers of documentation. It also removes the burden of constantly thinking about costs. When using AWS on my own it matters whether I’m paying $10 or $200 a month, but for work projects that’s just a prototyping cost (the threshold is usually agreed with the manager).

Some might also find the opportunity to wear many hats beneficial. I certainly enjoyed it, and it was a great learning experience. SDEs at Amazon are defined by both their scope of influence and their role. You often have to scope projects/products, design solutions, create prototypes, lead teams, and communicate with stakeholders and customers. I’ve seen many opinions on the internet about wearing many hats as a software developer, but I’m of the opinion that one needs to know what they are building and why before actually doing it. Developers aren’t “coding monkeys” and they should have a say in whatever they’re constructing. The question is more about balance but, since I’m talking about my experience at Amazon, that balance can be shifted as required.

Why then have I left?

The decision to leave wasn’t sudden; it had been growing in me for over a year. I came to Amazon as a software developer with a PhD and machine learning (ML) experience. I was promised that I’d be able to utilize my skills in challenging analytical/ML-related projects. That hasn’t happened. In the first year I was in a team working under the Away Team model, which doesn’t have projects of its own. It’s more of a mercenary arrangement: helping others out, with some self-interest. Long story short: others like to keep the interesting bits for themselves. The second team felt like salvation. More promises came, and I somehow believed them even though they were far in the future. Then the future came and it was disappointing. More promises. I was eventually included in and given more analytical projects, but they weren’t challenging; the challenge there was managing others’ work rather than working out solutions myself. Higher hopes were tied to my latest org, Economical Technologies aka EconTech, which is a “reinforcement learning first” org. I was there for about 3 months and it was kind of cosy, with great expectations, but… everything is just too slow. Not only EconTech but the whole of Amazon. Taking all my experiences, my low faith in any promise, and the expected covid-19 measures, I did a mental forecast for the next year and it showed no difference. Given no expected progress, team inclusivity deteriorating due to the pandemic, my salary being slashed by the super-low stock grant after the fourth year, and simple annoyance at Amazon’s stance on the growing global wealth gap while it gets richer, well, it’s time to go.

Before moving to the next thought, a quick explanation of what I mean by writing that Amazon is slow. Maybe an analogy to forest fires is suitable? On the whole a fire spreads quickly and is super destructive, but if you focus on a specific point on its circumference then you’ll see that it’s rather slow. Slow, but at a constant, steady pace: always that one metre a minute further from the centre. The thing is that the circle at this point is huge, so adding a tiny bit in all directions can feel like exponential growth. Amazon as a company is super fast. There are plenty of new services and ideas each year, and it’s expanding its tentacles almost everywhere. Super impressive! However, if we focus on individual products/teams, then it’s a different story. Most teams are slow. The phrase “it’s always day 1” to me means that everyone is new to the company and they haven’t figured out how to communicate effectively. And there’s newcomer syndrome, where everyone wants to impress others on their first day, which leads to mutually self-imposed high expectations born of ill-read peer pressure. Many will work long, unproductive and mindless hours, only decreasing the quality of the product. It’s slow because all of this produces only a half-baked product. Is that bad? Well, only if you want to consume that product; otherwise put it on display and it looks awesome.

What am I going to do now?

I’m taking a sabbatical for the next 6+ months. In my case, a sabbatical means taking the time to focus on the skills I loved using, i.e. analytical thinking and artificial intelligence. During this time I’d like to catch up on all the advancements in the machine learning world and see how they could apply to the current pandemic world. I’m especially interested in focusing more on (Deep) Reinforcement Learning and creating environments/agents. There are a few thoughts on how I could give back to society by creating my own product. More on that will probably come once I have more clarity on the problem and solutions.

Having written that, I’d like to be clear that I’m not closing myself off from the outside world. I’m happy to hear about all opportunities, but I’ll be extremely picky and will prioritise interesting challenges.

And if that doesn’t pay off?

Except for money, I’m not losing anything. I don’t need much in life, and I acknowledge that I come from a fortunate position. I have everything I’ve ever wanted, and there are plans for the things that are missing. Everything goes into emergency funds and retirement. If not now, then when?

AI Traineree – PyTorch Deep Reinforcement Learning lib

tl;dr: AiTraineree is a new Deep Reinforcement Learning lib based on PyTorch.

A few months ago, through some coincidences at work and some news from newsletters, I discovered the world of Deep Reinforcement Learning. Until then it was “one of those” things, but on closer inspection… I couldn’t take my eyes off it. Something happened, and then it clicked. I started playing around with some gyms from OpenAI and did a nanodegree course on Udacity, and the feeling grew. So, let me share the feeling.

I’ve started yet another Python lib to play around with Deep Reinforcement Learning. It already has some of the more popular agents (DQN, PPO, DDPG, MADDPG) and is easy to use with the OpenAI gyms. The intention for the lib is to have a bigger zoo of agents, be compatible with more environments, and provide tools for better development/debugging. Although it is a work-in-progress project, it is already usable. What distinguishes it from many others is the unification of types, making sure that all components play nicely with each other. The lib is also based on PyTorch; I’ve seen many smaller DRL projects using PyTorch, but they usually contain a single agent tied to a specific environment.
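To make that agent/environment contract concrete, here is a minimal, self-contained sketch of the interaction loop such a lib standardises. Note that `TinyEnv` and `RandomAgent` are toy stand-ins of my own; they illustrate the gym-style interface, not AiTraineree’s actual API:

```python
import random

class TinyEnv:
    """A toy gym-like environment: reach state 5 to finish the episode."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action: 0 = stay, 1 = move forward
        self.state += action
        done = self.state >= 5
        reward = 1.0 if done else 0.0
        return self.state, reward, done, {}

class RandomAgent:
    """A placeholder agent exposing the act/step interface a DRL lib unifies."""
    def act(self, state):
        return random.choice([0, 1])

    def step(self, state, action, reward, next_state, done):
        pass  # a real agent (DQN, PPO, ...) would learn from the transition here

env, agent = TinyEnv(), RandomAgent()
state, done, total_reward = env.reset(), False, 0.0
while not done:
    action = agent.act(state)
    next_state, reward, done, _ = env.step(action)
    agent.step(state, action, reward, next_state, done)
    state, total_reward = next_state, total_reward + reward
print(total_reward)  # the episode always ends with the terminal reward of 1.0
```

Once every agent and environment speaks this shape of `act`/`step`, swapping a DQN for a PPO agent, or one gym for another, is a one-line change.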

Let me know if you want anything specific in the lib. In the coming weeks I’m planning to make significant contributions to it.

Timed OS light/dark theme switching

tl;dr: A GitHub gist with commands walk-through is available here.

What

The ability to adjust themes, and in particular the dark mode, has been one of the trendiest tech features of 2019/2020. Many sites and apps now allow you to flip between the “normal” and the “dark” mode.

Why

Although I don’t belong to the die-hard zealots one can find on the internet, I do appreciate this feature in a dark environment, as I’m rather light sensitive and most devices have their lowest brightness set too high for me. It was a nice surprise that Ubuntu 20.04 came with a global theme and a couple of defaults. This lets me decide when it’s dark and then switch to the dark mode. Since many pages, e.g. stackoverflow.com or duckduckgo.com, now detect the OS’s theme mode, they will also switch into it. Neat. So, when the light goes down, my dark mode goes on, and we’re all happy.

But obviously the night comes every day, so why should I spend those 3 seconds of manual labour when I can make it automatic?

How

I won’t go into too much detail, but basically the proposed solution uses the service manager systemd, and more specifically its systemctl command. There are two “services”, one for each theme flip, and they are run daily at specific times.

For systemd to automagically detect your services and timers, they can be placed in ~/.config/systemd/user. It’s likely that this directory doesn’t exist yet, so create it. The code also expects a directory ~/.scripts where some random utility scripts are placed.

The walkthrough code is below. Please note that none of the files exist beforehand, so you have to create them and fill them with the content that the cat commands show. Also, the script changes the default terminal profile and expects two profiles called “Dark” and “Light”, for night and day respectively.

user@host:~/$ mkdir -p ~/.config/systemd/user
user@host:~/$ mkdir -p ~/.scripts

user@host:~/$ cat ~/.config/systemd/user/light.service  # Create this file
[Unit]
Description=Automatically change the "Window Theme" to "light" in the morning.

[Service]
Environment=DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
ExecStart=%h/.scripts/profile_changer.sh light

user@host:~/$ cat ~/.config/systemd/user/light.timer  # Create this file
[Unit]
Description=Automatically change the "Window Theme" to "light" in the morning.

[Timer]
OnCalendar=*-*-* 06:00:00
Persistent=true

[Install]
WantedBy=default.target

user@host:~/$ cat ~/.config/systemd/user/dark.service  # Create this file
[Unit]
Description=Automatically change the "Window Theme" to "dark" in the evening.

[Service]
Environment=DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
ExecStart=%h/.scripts/profile_changer.sh dark

user@host:~/$ cat ~/.config/systemd/user/dark.timer  # Create this file
[Unit]
Description=Automatically change the "Window Theme" to "dark" in the evening.

[Timer]
OnCalendar=*-*-* 19:00:00
Persistent=true

[Install]
WantedBy=default.target

user@host:~/$ cat ~/.scripts/profile_changer.sh  # Create this file
#!/bin/bash

get_uuid() {
  # Print the UUID linked to the profile name sent in parameter
  local profile_name=$1
  profiles=($(gsettings get org.gnome.Terminal.ProfilesList list | tr -d "[]\',"))
  for i in ${!profiles[*]}
    do
      local name="$(dconf read /org/gnome/terminal/legacy/profiles:/:${profiles[i]}/visible-name)"
      if [[ "${name,,}" = "'${profile_name,,}'" ]]
        then echo "${profiles[i]}"
        return 0
      fi
  done
  echo "$profile_name"
}

if [ "$1" == "dark" ]; then
  THEME='Yaru-dark'
elif [ "$1" == "light" ]; then
  THEME='Yaru-light'
fi
UUID=$(get_uuid "$1")

/usr/bin/gsettings set org.gnome.desktop.interface gtk-theme $THEME
/usr/bin/gsettings set org.gnome.Terminal.ProfilesList default $UUID

user@host:~$ chmod a+x .scripts/profile_changer.sh  # Make script executable
user@host:~$ systemctl --user daemon-reload
user@host:~$ systemctl --user enable dark.timer light.timer
user@host:~$ systemctl --user start dark.timer light.timer

The last three commands refresh the service daemon so it picks up the file changes, enable the timers to run in the background on startup, and start them now.

That’s less work than I initially expected. As most of the time, most of the work came from StackExchange, specifically an AskUbuntu thread. Luckily, most of the time there’s someone with a similar question and someone with a good answer.

Project closing note: Personal Progress

I’ve been evolving my productivity process for a long time, and there are a few aspects that really work for me. One of them is holding official starting and ending ceremonies. By official I mean a session where you pretend to report findings to your boss, although they and everyone else are (suspiciously) quiet. Often that report means going through the project template and discussing all the notes. Then comes ceremonially moving the project file/note to a directory with all the other completed (not necessarily successful) projects and writing out the learnings.

Since I’ve already shared a bit about Personal Progress, I thought I might share the concluding words as well. Below is the header of my project file for Personal Progress. What follows the header is the project template, followed by notes written at every health check.


Personal Progress

Activity: Completed
Duration: Mid-term
Status: Success

Learnings

Time estimation

Although I was fairly confident in the time estimation, the target date had to be changed a few times. These changes were related to another project taking higher priority and a few requirement changes to this one. The requirement changes were due to the technology used and the companies behind it. Changing the target date for this project is not a concern; it has little impact on anything.

Technology

Many components I worked with were new to me. The frontend is in React with Bootstrap, the backend in Python with Flask, MongoDB as the database, proxying with Nginx, and they all live in separate Docker containers. Deployment is via docker-compose with docker-machine to Digital Ocean droplets. Despite some frustrating moments I really enjoyed learning all of that. I will definitely try to reuse the stack for future projects.
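For the curious, that layout corresponds roughly to a compose file like the one below. This is an illustrative sketch, not the project’s actual configuration; the service names, images and ports are my assumptions:

```yaml
version: "3"
services:
  frontend:            # React + Bootstrap, built into static files
    build: ./frontend
  backend:             # Python + Flask API
    build: ./backend
    depends_on:
      - mongo
  mongo:               # the database
    image: mongo:4
    volumes:
      - mongo-data:/data/db
  nginx:               # reverse proxy in front of frontend and backend
    image: nginx:stable
    ports:
      - "80:80"
    depends_on:
      - frontend
      - backend
volumes:
  mongo-data:
```

The nice part of this shape is that `docker-compose up` brings up the whole stack locally, and the same file deploys to a droplet through docker-machine.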

Personal information

This project allows me to keep track of my personal views. Replying to the same questions every few months/years will let me learn more about myself.


 

What’s not written explicitly in the header is that the project is complete even though the webpage is not. There are some bugs to fix and features that would be nice to have; however, they’re not at the top of the priority list. The result is good enough for what it’s intended to be. If there are requests to change something, they will be taken care of, but there’s no value in thinking about it now and being reminded every two weeks. Mental freedom and being fair with myself are more important.

Speeding up EEMD / CEEMDAN

tl;dr: PyEMD documentation has a section on speeding up tweaks.

As the author of the PyEMD package, probably the most common question I receive is “Why does it take so long to execute EEMD/CEEMDAN?”. That’s a reasonable question, because EEMD and CEEMDAN can be quite slow. Unfortunately, that is more about the nature of these methods than the implementation. (Not saying that the implementation cannot be improved.)

The question is often followed by a description of the signal: it has 20k+ samples with some weekly seasonality, collected over a couple of years at sub-hour frequency. From the perspective of EMD et al. this means there are many extrema, which in turn means that one needs plenty of disk/memory space to accommodate interim results (especially splines) and that there’s a “higher chance” of obtaining that odd extremum which gets propagated through all the siftings. Unfortunately, it is expected for a full EEMD/CEEMDAN evaluation to take minutes even if a single EMD takes a couple of seconds.

Even though EEMD can be parallelized across the trials in the ensemble, every added noise will cause slight changes to the signal. EMD is not robust; some perturbations will have no effect and others might return a couple more IMFs than expected. CEEMDAN is even worse performance-wise because its components depend on each other, so it is serial in nature with parts that are parallelizable.
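To make the parallelization point concrete: each EEMD trial decomposes an independently noise-perturbed copy of the signal, so the trials can run concurrently and be averaged afterwards. A minimal sketch with a stand-in decomposition (this is not PyEMD’s API; `np.cumsum` is a cheap placeholder for the actual, expensive sifting):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def one_trial(seed, signal, noise_width=0.05):
    """One EEMD trial: perturb the signal with fresh noise, then decompose it."""
    rng = np.random.default_rng(seed)
    perturbed = signal + noise_width * rng.standard_normal(signal.size)
    return np.cumsum(perturbed)  # placeholder for a real EMD decomposition

signal = np.sin(np.linspace(0, 10, 1000))

# The trials share nothing, so they can run concurrently and in any order...
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda s: one_trial(s, signal), range(8)))

# ...and EEMD simply averages the per-trial results at the end.
ensemble_mean = np.mean(results, axis=0)
```

CEEMDAN has no such luck: each of its components is computed from the residue of the previous one, so only the inner per-component work can be spread across workers.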

I have added a F.A.Q. section to PyEMD’s Readme file and updated PyEMD’s documentation with a chapter on factors that affect performance. These include the data type used, the number of iterations and the envelope spline selection. Let me know if something is unclear or there are other factors to be added. It’s been a while since I last played with EMD, so maybe there are some significant improvements that I should be aware of.
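As a quick illustration of the data-type factor: halving the float width halves the memory that every interim array and spline occupies, at the price of precision. A plain numpy sketch (whether a given PyEMD version preserves your dtype end-to-end is worth checking against its docs):

```python
import numpy as np

# A 20k-sample signal, like the ones from the question
t = np.linspace(0, 1, 20_000)
signal64 = np.sin(2 * np.pi * 7 * t) + 0.1 * np.random.randn(t.size)

signal32 = signal64.astype(np.float32)  # half the memory per sample
signal16 = signal64.astype(np.float16)  # a quarter, but noticeably lossy

print(signal64.nbytes, signal32.nbytes, signal16.nbytes)

# The precision cost of the float32 downcast is tiny for a unit-scale signal:
max_err32 = np.max(np.abs(signal64 - signal32))
```

Since sifting allocates many arrays of the signal’s length, that memory factor multiplies through the whole decomposition.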

Let me just do this manually

Have you seen the cartoons drawn by XKCD or, rather, Randall Munroe? A definite recommendation to check out his comic strips. They are quite geeky but cover a wide range of topics. In particular, he has a cartoon which is a drawing of a spreadsheet of “how often used” vs “time saved”. It’s good generic guidance to consider when you’re tired of a mundane task and thinking about automating it. My personal spreadsheet is nonexistent, but if I were to make one there would be additional dimensions, e.g. the learning value. Even if I’m not saving much time, the knowledge gained and the scratching of the curiosity itch are a big win.
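The cartoon’s arithmetic is easy to redo for the theme-switching task from earlier. Assuming two 3-second flips a day (my numbers, not Munroe’s) over the comic’s five-year horizon:

```python
SECONDS_PER_DAY = 2 * 3   # flip to dark and back to light, 3 s each
HORIZON_DAYS = 5 * 365    # the comic amortizes over five years

budget_seconds = SECONDS_PER_DAY * HORIZON_DAYS
budget_hours = budget_seconds / 3600
print(f"{budget_hours:.1f} hours")  # prints "3.0 hours"
```

Roughly three hours of justified automation work, so an evening spent on systemd timers sits comfortably inside the budget, even before counting the learning value.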

Recently I’ve adopted and embraced containerization through the Docker family. These lightweight, all-inclusive environments allow you to develop, build and deploy locally, and make sure that once deployed they’ll behave the same. Smack everything in containers, create a yaml template for docker-compose and deploy to a remote host through docker-machine. Quick and easy. Except for some caveats.

One of the issues that almost made me regret all these containers was getting CA certificates to terminate TLS on the remote host using Let’s Encrypt. In short, to obtain the certificate you need to prove that you’re in control of the domain by responding in a specific way to a specific request. Fine, but to do that you first need to make the domain responsive, so you need to have some certificates, which you don’t have, since obtaining them is the whole point. What to do? Get some self-signed certificates, ask for help, get new certificates, replace the old ones and show that you have the new ones. Doing this manually takes a couple of minutes and can be done with a combination of ssh-ing and running a script. Having it happen automatically on deployment to any host is not that simple.

A number of blogs have tried to describe what to do in this situation, but most (that I’ve seen) still focus on using docker-compose from within the remote host. Unfortunately, that isn’t what I want; for small projects, I want to run a single command from the local host and have everything done automagically. So I spent days trying out solutions. Two of my favourites, which I’ll return to in the future, are Traefik and the docker-letsencrypt-nginx-proxy-companion sidecars. The former is an nginx replacement with a dashboard and a Let’s Encrypt solution, whereas the latter is a container that works with two others to do some magic. In both cases, one has to configure the relations either through environment variables or labels, and these should then just work. Well, they should, but I haven’t actually managed to make them work. The Traefik approach is nicely documented by the Digital Ocean writers, though it takes a while to configure everything properly. The other, nginx-based one is a bit outdated, and updating it, for example to docker-compose v3, didn’t go that well.

All in all, I tried to make things run smoothly and automatically, so that I’d never need to do them again. What I ended up doing was spending 10 minutes doing things manually: copying the certificates over with `scp` and updating the volume references. Quick and easy. Even writing documentation on how to do it again in the future took me only a couple of extra minutes, not days.

Have I done what I set out to do? Yes. I learned new technologies by testing them out and now know what lives where; I also optimized future releases by writing better documentation, including an explanation of why the other approaches won’t help.

AWS Glue? No thank you.

I’ve been postponing describing my experience with AWS Glue for a couple of months now. It’s one of those things that I really want to get out of my system, but it hurts to even speak up. Let’s end this pain, let’s end it now. Ahem… AWS Glue sucks.

We had a use case of running a daily ETL job from/to Redshift. The transformation was rather simple, but given that developers would maintain the logic, it was easier to write code than convoluted SQL. The job was small enough (<50k rows) that a Lambda with a longer timeout would probably have been just fine; however, since more projects were coming that required larger-scale processing, we were looking for potential candidates. This was a great opportunity to try out a service that boasts so much about itself in the official blurb.

Issues started right away. The documentation is/was really terrible. It describes how "great things are" rather than what they do. There were two pages dedicated to Redshift, and they were convoluted enough that even the AWS support team had difficulties understanding them. When deploying through CloudFormation some options were missing and had to be updated manually, like activating the trigger for a cron job(?!). At the time of writing, also, only Python 2.7 was available, with examples seemingly written by Golang users, something like:

orgs = orgs.drop_fields(['other_names',
'identifiers']).rename_field(
'id', 'org_id').rename_field(
'name', 'org_name')

No doubt AWS Glue will be updated, and most likely it’s much better now than it was two months ago. However, it left me with such a terrible aftertaste that it’s going to be hard to convince me to give it another shot in the near future. For simple tasks Lambda should be enough, and for larger jobs on a single data source use EMR. In cases where there are multiple sources with dependencies, orchestrate everything using Data Pipeline. Glue seems to be an on-demand EMR with limited, sub-optimal configuration, thus leaving you with limited control.

Toggling academia status to halted

There has been a significant update to my title. Since the end of November, I am officially a PhD. The relief is immense. Obviously, life goes on and nothing has significantly changed on the outside, but I can see that my approach to things has lightened up and the “yes, can do” attitude has returned. I’m open to new projects and ideas.

Surprisingly enough, just before submitting the final version, I started (again?) to recognise the greater contribution that the work has, and might yet have. The machine learning community is again gradually incorporating model-based approaches and going smaller on distance (calculus). Such progress opens up opportunities to apply my work to a broader area of interest.

When will this finish…

For the past few years, my life has been on hold. Yes, I go to work and do something there, but the majority of my time I still spend on the PhD. It’s such an existential trap. It’s close to two years that I’ve been trying to impress a single person who doesn’t really care. It’s close to four years that I’ve been trying to improve an idea I once had and thought might work, because the previous three years gave no results.

When I started the PhD I was motivated, interested in everything and shaking with the excitement that I’d be pushing humanity forward. Now I just want to do the minimum required. In hindsight, I’ve wasted my life. Nothing good is coming from this. Hopefully, that is “yet”. December is in or out and, at this stage, I don’t really care.

AWS Polly GUI

Although learning and book knowledge are the best, my personal relationship with reading is not the friendliest. Staying focused on text is a huge struggle and I often need to re-read sentences to actually take them in. That’s why I sometimes use text-to-speech (TTS) software or services.

A few years ago I discovered Ivona, a text-to-speech software far superior to any other TTS solution. It was able to quickly read text from my clipboard out loud (and clearly). Not only was it better than the others, it also supported Polish, my language. Even though the default software wasn’t useful for my use cases, i.e. scientific papers have unusual formatting, it wasn’t that difficult to write a wrapper and GUI around Ivona. Unfortunately, it’s not supported anymore and one cannot download the offline version.

Currently, Ivona is owned by Amazon and its voices are accessible through the AWS Polly service. It’s a relatively cheap service, but one still has to have an internet connection, and it doesn’t come with any GUI. At least officially.

So I’ve written an application to use AWS Polly. It’s a simple graphical interface with some formatting options for the text, but it does its job. The AWS Polly GUI is accessible from my GitHub page. It runs on Python 3 with PyQt5.

Features are updated as needed, so if something might be helpful to anyone, feel free to contact me or create an issue on the repository. I’m using this for my own work, so I’m not planning on leaving it to one side.