A recent challenge I had was to convert two given NumPy arrays to a common type without losing information. You can follow this link for the Stack Overflow entry.

Initially I thought about comparing dtypes, because Python 2 allows comparisons like:

```
>>> np.float32 < np.float64
True
>>> np.int32 < np.float32
True
>>> np.int64 > np.float16
False
```

… which kind of makes sense(?). Well, except that *int64* vs *float16* case, which looks suspicious. It turns out that these are **type** objects and Python 2 is simply comparing their locations in memory^{[Citation needed]}, which is obviously not reliable. In Python 3 such a comparison is forbidden, and it fails with a `TypeError`.
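The Python 3 failure is easy to confirm directly; a quick sketch:

```python
import numpy as np

# In Python 3, ordering comparisons between type objects are undefined,
# so instead of silently comparing memory addresses this raises TypeError.
try:
    np.float32 < np.float64
except TypeError as exc:
    print("comparison rejected:", exc)
```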

As the answer to my Stack Overflow question suggests, one could use dtype.kind and dtype.itemsize to create one's own ordering. This is not difficult, but it would have to cover all the type families: (unsigned) ints, floats, bools, complex…
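Such a hand-rolled ordering might look like the sketch below (my own illustration, not the Stack Overflow answer's code). Note that a simple (kind, itemsize) key is only a total order for picking the "wider" dtype; it does not reproduce NumPy's real promotion rules, where e.g. int64 vs float16 promotes to float64 rather than either input:

```python
import numpy as np

# Hypothetical ranking of dtype kinds: bool < unsigned int < signed int
# < float < complex, with ties broken by item size.
KIND_RANK = {'b': 0, 'u': 1, 'i': 2, 'f': 3, 'c': 4}

def dtype_key(dt):
    """Sort key so that 'wider' dtypes compare greater."""
    dt = np.dtype(dt)
    return (KIND_RANK[dt.kind], dt.itemsize)

# Pick the wider of two dtypes under this ordering.
wider = max(np.dtype('int16'), np.dtype('float32'), key=dtype_key)
print(wider)  # float32
```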

Fortunately, for my purposes, NumPy has a function which does what I want, namely **numpy.find_common_type**. It determines the common type following the standard coercion rules. With the help of this function my conversion looks like this:

```python
import numpy as np

def common_dtype(x, y):
    dtype = np.find_common_type([x.dtype, y.dtype], [])
    if x.dtype != dtype:
        x = x.astype(dtype)
    if y.dtype != dtype:
        y = y.astype(dtype)
    return x, y
```
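A side note from today's perspective: np.find_common_type was deprecated in NumPy 1.25 and later removed, so an equivalent sketch with np.promote_types (my adaptation, not part of the original post) would be:

```python
import numpy as np

def common_dtype_modern(x, y):
    # np.promote_types applies the same standard coercion rules.
    dtype = np.promote_types(x.dtype, y.dtype)
    # copy=False returns the array unchanged when it already has the dtype.
    return x.astype(dtype, copy=False), y.astype(dtype, copy=False)

a, b = common_dtype_modern(np.arange(3, dtype=np.int16),
                           np.ones(3, dtype=np.float32))
print(a.dtype, b.dtype)  # float32 float32
```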

What should we expect from such a function?

- *float32* for *float16* and *float32*
- *float32* for *int16* and *float32*
- *int32* for *int32* and *uint16* (every *uint16* value fits in *int32*)
- *float64* for *int32* and *float16*

Behaves as it should.
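These expectations can be checked with np.result_type, which follows the same promotion rules (a sketch of mine, not from the original post):

```python
import numpy as np

# Each call promotes a pair of dtypes under NumPy's coercion rules.
print(np.result_type(np.float16, np.float32))  # float32
print(np.result_type(np.int16, np.float32))    # float32
print(np.result_type(np.int32, np.uint16))     # int32
print(np.result_type(np.int32, np.float16))    # float64
```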