Easy clipboard in bash

Those who know me know that I’m not a fan of using the mouse. Whenever possible I try to avoid it. Although, after years of practice, I am fluent with the keyboard, there were still some tasks for which I needed the mouse. Well, until quite recently.

The number one annoying task was copying things from the terminal to the clipboard. Depending on what exactly I needed to do with it, I’d either select something with the mouse and then copy it, or stream the output to a file and then select it within an editor. Some people find it weird, but yes, copying through an editor is often much faster.

The revolution came with `xclip`. I’m not yet fluent in doing everything with it, but even with the limited experience I have, it feels like a superpower. This program allows copying into and from the X clipboard.

Most Unix distributions have it installed by default, or at least available in the package manager. On Ubuntu one can install it with:
$ sudo apt-get update
$ sudo apt-get install xclip

Examples:
Copy the current directory into the clipboard:
$ pwd | xclip

Paste whatever you have in the clipboard:
$ xclip -o

Copy to the global clipboard so you can use it in any other program:
$ echo $PATH | xclip -selection clipboard

If this is too long to type, one can always use aliases, i.e. set a shorter name for a long command. Either type them directly into the terminal (but that only affects the current session), or update your ~/.bashrc file with:
alias "c=xclip"
alias "v=xclip -o"
alias "cs=xclip -selection clipboard"
alias "vs=xclip -o -selection clipboard"

Then copying the content of a file to the clipboard becomes:
$ cat file.txt | cs

… and CTRL+V wherever one wishes.

Yes, this is awesome!

Source:
StackOverflow: How can I copy the output of a command directly into my clipboard?

Kuramoto in Python

Code for Kuramoto in Python is available here or from the code subpage.
An explanation of how to use it is at the bottom of this post.

Tiny introduction

The Kuramoto model [1, 2] is probably one of the most popular and successful models for coupled oscillators. There is plenty of information about it, but in brief: it models each oscillator’s phase as depending on scaled phase differences over all pairs of oscillators.

A while ago I actually wrote about a specific method that I used to find optimal parameters for a Kuramoto model given some observations. (See Bayesian Dynamic Inference here.)

Mathematically speaking, this model is defined by a set of coupled ordinary differential equations (ODEs). Given N oscillators, the dynamics of each oscillator’s phase \phi_i, for i \in \{1, \dots, N\}, is defined by \dot\phi_i = \omega_i + \sum_{j=1}^N k_{ij} \sin(\phi_i-\phi_j), where the summation is over all other oscillators. Note that one can leave in the coupling with itself, because in that case \sin(\phi_i - \phi_i) = 0, so it contributes nothing to the final solution.
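To make this concrete, below is a minimal sketch of the right-hand side of these ODEs integrated with SciPy. This is a toy illustration using the sign convention written above, not the kuramoto.py module described later:

import numpy as np
from scipy.integrate import odeint

def kuramoto_rhs(phi, t, omega, K):
    # phi: (N,) phases; omega: (N,) intrinsic frequencies;
    # K[i, j]: coupling strength scaling sin(phi_i - phi_j)
    phase_diff = phi[:, None] - phi[None, :]  # matrix of phi_i - phi_j
    return omega + np.sum(K * np.sin(phase_diff), axis=1)

# Values from the first experiment described below
omega = np.array([28.0, 19.0, 11.0])
K = np.array([[0.0, 0.2,  1.1],
              [0.5, 0.0, -0.7],
              [0.3, 0.9,  0.0]])
phi0 = np.array([0.0, np.pi, 0.0])
t = np.arange(0, 40, 0.01)
phi = odeint(kuramoto_rhs, phi0, t, args=(omega, K))  # shape (len(t), N)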

A quick note: the name Kuramoto coupling model is sometimes used for any model whose coupling function can be represented as a sum of harmonic functions. For example, second order means that we include both k \sin(\Delta\phi) and k^{(2)}\sin(2\Delta\phi) terms. In general, a Kuramoto model of order M would describe \dot\phi_i = \omega_i + \sum_{m=1}^{M} \sum_{j=1}^N k_{ij}^{(m)} \sin(m (\phi_i-\phi_j)).
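For completeness, the sketch above extends naturally to order M. Here K is assumed to be an (M, N, N) stack of coupling matrices, one per harmonic; again, this is just an illustration, not necessarily the convention kuramoto.py uses:

def kuramoto_rhs_order_M(phi, t, omega, K):
    # K: (M, N, N); K[m-1] is the coupling matrix for harmonic m
    phase_diff = phi[:, None] - phi[None, :]
    dphi = omega.astype(float)  # copy, so we can accumulate in place
    for m, Km in enumerate(K, start=1):
        dphi += np.sum(Km * np.sin(m * phase_diff), axis=1)
    return dphi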

Examples

Below are two examples. Both use the same core values, with the exception that for the second system the coupling function is defined by two harmonic terms. This naturally changes the dynamics, but it shouldn’t change the average value much, which stays close to the intrinsic frequency.

Exact values for the first experiment are presented in the table below. Each row is a different oscillator with initial phase Φ0 and intrinsic frequency ω. The following columns denote the coupling strengths to the respective oscillators (self-coupling is zero).

Osc  Φ0  ω   k.1  k.2  k.3
1    0   28  0    0.2  1.1
2    π   19  0.5  0    -0.7
3    0   11  0.3  0.9  0

Phase dynamics, i.e. the time derivatives of the obtained phases, are presented in Fig. 1. One can see that all plots are centred on the intrinsic frequency with some modulation.

[Image: kuramoto_dphi_simple]

Fig. 1. Phase dynamics, i.e. the time derivative, in the simple Kuramoto model.

The table below shows the values used in the second simulation. The extra three columns denote the scaling values used for the second harmonic.

Osc  Φ0  ω   k.1  k.2  k.3   k(2).1  k(2).2  k(2).3
1    0   28  0    0.2  1.1   0       -0.5    0.2
2    π   19  0.5  0    -0.7  -0.4    0       1.0
3    0   11  0.3  0.9  0     0.8     0.8     0

Again the phase dynamics are presented in the form of a plot (Fig. 2). I think the difference is clear. Despite having similar mean values (approximately equal to the intrinsic frequencies), the modulations have changed. Not only has the frequency content of these modulations changed, but so has their amplitude.

[Image: kuramoto_dphi_2harmonic]

Fig. 2. Phase dynamics, i.e. the time derivative, in the Kuramoto model with the second harmonic included.

Using the code

Apart from downloading the code from either GitHub or the code subpage, one is expected to have the SciPy module installed. Kuramoto uses it to solve the differential equations.

The script below shows an example of how one can use the Kuramoto module. When you run it, make sure that kuramoto.py is in your path (e.g. in the same directory). Note also that most of the code just defines the initial parameters and plots the results; the actual execution of the module takes two lines. Fig. 1 is the expected output.

import numpy as np
import pylab as plt
from kuramoto import Kuramoto

# Defining time array
t0, t1, dt = 0, 40, 0.01
T = np.arange(t0, t1, dt)

# Y0, W, K are the initial phases, intrinsic frequencies
# and the coupling matrix, respectively
Y0 = np.array([0, np.pi, 0])
W = np.array([28, 19, 11])

K1 = np.array([[  0, 0.2,  1.1],
               [0.5,   0, -0.7],
               [0.3, 0.9,    0]])
               
K2 = np.array([[   0, -0.5, 0.2],
               [-0.4,    0, 1.0],
               [ 0.8,  0.8,   0]])
K = np.dstack((K1, K2)).T
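# K now has shape (2, N, N): np.dstack stacks into (N, N, 2) and .T
# reverses the axes, so each slice K[m] ends up transposed relative to Km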

# Passing parameters as a dictionary
init_params = {'W':W, 'K':K, 'Y0':Y0}

# Running Kuramoto model
kuramoto = Kuramoto(init_params)
odePhi = kuramoto.solve(T).T

# Computing phase dynamics
phaseDynamics = np.diff(odePhi)/dt

# Plotting response
nOsc = len(W)
for osc in range(nOsc):
    plt.subplot(nOsc, 1, 1+osc)
    plt.plot(T[:-1], phaseDynamics[osc])
    plt.ylabel("$\dot\phi_{%i}$" %(osc+1))
plt.show()

[1] Y. Kuramoto, “Chemical Oscillations, Waves, and Turbulence,” 1984, doi: 10.1007/978-3-642-69689-3.
[2] Steven H. Strogatz, “From Kuramoto to Crawford: exploring the onset of synchronization in populations of coupled oscillators,” 2000, doi: 10.1016/S0167-2789(00)00094-4.

Matrix Multiplication with Python 3.5

Only recently have I started to use Python 3. It’s been out for a good 8+ years, and all those excuses about incompatibility with some packages were just laziness. Most packages I use have already been ported, and if I ever find that something is incompatible… well, I’ll think about it then. But for now let me pat myself on the back for this great leap, because:

Python 3.5 (with 3.5.3 released today) has an operator for matrix multiplication! Check out: PEP 465 — A dedicated infix operator for matrix multiplication. The choice of operator, @, is a bit unfortunate because of its use in decorators and its general association with internet handles, but seeing how few possibilities were left, it’s probably the best choice.
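A quick sanity check (NumPy supports @ from version 1.10 onwards):

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A @ B)                 # matrix product, equivalent to A.dot(B)
print(A @ np.array([1, 0]))  # matrix-vector product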

Yes, this is big news for me. The number of times I have confused myself with my own matrix operations is just too damn high! I cannot agree more with the author of PEP 465, so let me shamelessly copy & paste (paraphrased) his reasoning. Behold!

(…) encounter many mathematical formulas that look like:

S = (H\beta - r)^T (H V H^T)^{-1} (H\beta - r)

Here the various variables are all vectors or matrices (details for the curious are in the references of PEP 465).

Now we need to write code to perform this calculation. In current numpy, matrix multiplication can be performed using either the function or method call syntax. Neither provides a particularly readable translation of the formula:

import numpy as np
from numpy.linalg import inv, solve

# Using dot function:
S = np.dot((np.dot(H, beta) - r).T,
           np.dot(inv(np.dot(np.dot(H, V), H.T)), np.dot(H, beta) - r))

# Using dot method:
S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)

With the @ operator, the direct translation of the above formula becomes:

S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)

Notice that there is now a transparent, 1-to-1 mapping between the symbols in the original formula and the code that implements it.

Of course, an experienced programmer will probably notice that this is not the best way to compute this expression. The repeated computation of H\beta - r should perhaps be factored out; and, expressions of the form dot(inv(A), B) should almost always be replaced by the more numerically stable solve(A, B). When using @, performing these two refactorings gives us:

# Version 1 (as above)
S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)

# Version 2
trans_coef = H @ beta - r
S = trans_coef.T @ inv(H @ V @ H.T) @ trans_coef

# Version 3
S = trans_coef.T @ solve(H @ V @ H.T, trans_coef)

Notice that when comparing between each pair of steps, it’s very easy to see exactly what was changed. If we apply the equivalent transformations to the code using the .dot method, then the changes are much harder to read out or verify for correctness:

# Version 1 (as above)
S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)

# Version 2
trans_coef = H.dot(beta) - r
S = trans_coef.T.dot(inv(H.dot(V).dot(H.T))).dot(trans_coef)

# Version 3
S = trans_coef.T.dot(solve(H.dot(V).dot(H.T)), trans_coef)

Readability counts! The statements using @ are shorter, contain more whitespace, can be directly and easily compared both to each other and to the textbook formula, and contain only meaningful parentheses. This last point is particularly important for readability: when using function-call syntax, the required parentheses on every operation create visual clutter that makes it very difficult to parse out the overall structure of the formula by eye, even for a relatively simple formula like this one. Eyes are terrible at parsing non-regular languages. I made and caught many errors while trying to write out the ‘dot’ formulas above. I know they still contain at least one error, maybe more. (Exercise: find it. Or them.) The @ examples, by contrast, are not only correct, they’re obviously correct at a glance.

Again: yes!

More links

A while ago I started to get a taste of how it feels to work in industry, and it feels quite nice. Maybe that’s the specifics of my field projected onto industry, or me being tired of how academia works, but I’m enjoying immensely learning all the details about computer science, programming and the newest technologies.

In addition to my last post about Data Science, which is still my main daily ‘look for’, I’ve started to dive deep into computer science. Obviously there are plenty of good information sources and excellent tutorials. The aggregators I’m exploiting right now are:


I’m planning to add a subpage with links for further reference. Any suggestions are welcome!

Data Science podcasts

I’m an avid podcast listener. Whenever a task requires only my eyes, or not much focus, I’ll try to do it with my headphones on. The great thing about podcasts is that they are more up-to-date than audiobooks and reasonably short, so there’s always one that fits.

Wanting to stay more current with machine learning topics, I’ve found a few podcasts. These are my recommendations:

Machine Learning competition on Seizure Prediction

tl;dr: Read the subject and click on the link below.

Some of you, i.e. those lucky ones with a connection to the outside world, are probably aware of the machine learning community trying aggressively to change our lives for the better. Regardless of whether we like it or not, they’re doing it. Some do it for money, others for fame, and the wicked ones just for fun.

Kaggle is a website that hosts machine learning competitions. They provide data (usually donated by companies or public organisations) and set a goal. Past competitions have included detecting and classifying specific whale species from satellite images, driving a remote car based on an hour of recording, identifying patients who will return based on historical records, and more. It’s actually pretty big; some prizes can be as large as $500,000.

The reason behind this post is one of Kaggle’s recent competitions — Predicting seizures in long-term human intracranial EEG recordings. The challenge is “to distinguish between ten minute long data clips covering an hour prior to a seizure, and ten minute iEEG clips of interictal activity.” Yes, many people have tried this, and it’s ongoing research in many labs. The difference here is that you can actually see people’s attempts and their code. You can read their discussions and follow their logic. It looks like an amazing source of information! Moreover, good contestants are really good at machine learning, and they often do their work properly, i.e. complete the challenge.

All of this is really important to me, as my background is largely in EEG analysis and machine learning. The timing is a bit unfortunate, but I might give it a go.

http://blog.kaggle.com/2016/10/14/getting-started-in-the-seizure-prediction-competition-impact-history-useful-resources/

Facebook birthday

This blog needs some lightening up, and what better way to do that than to post some graphs? Exactly! I like to collect data on various things and then make graphs of them. Things are much better when presented with axes and some numbers around them.

Not so long ago I had a birthday. Although it happens rather regularly, the last one was special: for the first time I had notifications enabled on Facebook. It appears that people (a.k.a. friends) are rather nice and are willing to write something positive about you on that day. Here’s to the memory of that special day!

Summary:
I’ve scraped Facebook posts and messages sent within 48 hours of 00:01 AM on my birthday. The send times are aggregated in the graph and the text content is summarised in word stats.


The green graph displays the wish density as a function of time, obtained using Gaussian kernel density estimation. Blue dots show the cumulative wish count. Both are normalised so that the highest value is 1. Statistically speaking, green and blue show (up to constants) the probability density function and the cumulative distribution function, respectively.
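For the curious, such a graph takes only a few lines to produce. Below is a minimal sketch, where wish_times is a made-up stand-in for the actual scraped send times (in hours from the birthday midnight):

import numpy as np
import pylab as plt
from scipy.stats import gaussian_kde

# Stand-in data: replace with the actual scraped send times (hours)
wish_times = np.random.uniform(0, 48, size=89)

t = np.linspace(0, 48, 500)
density = gaussian_kde(wish_times)(t)                 # green curve
cumulative = np.searchsorted(np.sort(wish_times), t)  # blue dots

plt.plot(t, density / density.max(), 'g-')
plt.plot(t, cumulative / cumulative.max(), 'b.')
plt.xlabel('Hours from birthday midnight')
plt.show()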

What is nice about this graph is that it shows at what time of day my peers are generally active. It seems the majority is active in the morning (10-11 am) and after 8 pm, which reassures me that they are perfectly normal, average people.

The extracted content provides additional insight. Although there isn’t much data, I think it nicely hints at my background. I’ll leave the interpretation to you, but I will point out that the percentage of wishers relative to all friends falls close to the typical ratio of people who read posts, i.e. 12-16% (even though the two are unrelated, since the birthday notification goes to everyone).

Wishes:
Happy Birthday: 34 (38.20%)
Wszystkiego najlepszego/All the best: 11 (12.36%)
(tylko) Najlepszego/(only) The best: 10 (11.24%)
Sto lat/100+ years: 16 (17.98%)

Emoji/Emoticons:
Smileys: 25 (28.09%)
Kisses: 13 (14.61%)
Hearts: 3 (3.37%)

Reference:
Dawid: 20 (22.47%)
You: 17 (19.1%)
Boy: 2 (2.23%)

Total number of exclamation marks (!), by run length:
!: 82
!!: 11
!!!: 4
!!!!: 2
!!!!!: 1
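(For reference, run-length stats like the ones above take only a couple of lines; a sketch, with all_text standing in for the concatenated wishes:)

import re
from collections import Counter

all_text = "Happy Birthday!!! Sto lat! All the best!!"  # stand-in text
run_counts = Counter(re.findall(r'!+', all_text))
for run, count in sorted(run_counts.items(), key=lambda kv: len(kv[0])):
    print(run, count)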