Old-school traders and investors used to look through the financial pages to find the prices of securities and derivatives. Those pages were updated once a day.

A quiet revolution has swept through the trading floors since the early 2000s. Bloomberg Terminals made intraday updates on security and derivative prices available to every terminal user. These updates arrived every hour, every minute, every second… until, eventually, even more frequent, sub-second updates became available over Bloomberg’s B-PIPE.

Exchanges, such as CME, made entire order books – the so-called level-two (L2) data – available to their subscribers. The rise of automated (electronic and algorithmic) trading drove ever higher update frequencies and ever finer timestamp granularity, which went from millisecond, through microsecond, to nanosecond precision (!).

[Image: The New York Stock Exchange building on Wall Street in Manhattan, New York, USA – October 26, 2008.]

At the same time, quantitative researchers began to notice that they could extract more information about the markets by studying higher-frequency, intraday data. Some effects, such as the so-called leverage effect, persist on finer time scales and can be measured more effectively using high-frequency data.
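As a rough illustration of the kind of measurement involved, here is a minimal sketch in Python (the function name, parameters, and the use of pandas are illustrative assumptions, not part of any original research): it estimates the correlation between daily returns and the realized volatility of the following days, computed from intraday prices. A persistently negative value is the classical signature of the leverage effect.

```python
import numpy as np
import pandas as pd

def leverage_effect(mid_prices: pd.Series, max_lag: int = 5) -> pd.Series:
    """Correlation between daily log-returns and realized volatility
    `lag` days later, for lag = 1..max_lag.

    `mid_prices` is assumed to be an intraday price series with a
    DatetimeIndex. A persistently negative correlation is the classical
    signature of the leverage effect."""
    log_prices = np.log(mid_prices)
    intraday_returns = log_prices.diff().dropna()
    # Daily realized volatility: square root of the sum of squared
    # intraday log-returns within each day.
    realized_vol = np.sqrt((intraday_returns ** 2).resample("1D").sum())
    # Daily close-to-close log-returns.
    daily_returns = log_prices.resample("1D").last().diff()
    daily_returns, realized_vol = daily_returns.align(realized_vol, join="inner")
    return pd.Series(
        {lag: daily_returns.corr(realized_vol.shift(-lag))
         for lag in range(1, max_lag + 1)},
        name="return-volatility correlation",
    )
```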

A new direction in finance was born. The early pioneers referred to it as high-frequency finance; academics and practitioners later came to call it market microstructure.

This development was enabled by the abundance of rich data in financial markets and by banks’ eagerness to put that data to use.

Data science is the art, craft, and science of making sense of data – and not only making sense of it, but drawing nontrivial, practically useful conclusions that can (and must!) be acted upon constructively to achieve better outcomes. It matters because we desperately need those better outcomes – as a species and as individual people.

The question is: why focus only on high-frequency finance when there are so many incomparably more important fields of human life? The availability of sensors, such as Google Nest, makes high-frequency data a reality in the home as well as in industry, where, coupled with the latest advances in ML/AI, it can drive a new revolution – in predictive maintenance or in medicine.

At the moment, medicine is one of the most traditional and bureaucratically cumbersome domains. It takes an enormous amount of time to sift through candidate compounds to arrive at the right molecular composition, and even longer to go through an extended period of clinical trials and get anything useful to the patients who are suffering and dying in the meantime.

It does not have to be this way. High-frequency data in medicine is rapidly becoming available through the advent of noninvasive sensors such as Abbott’s FreeStyle Libre (for glucose) and OrSense NBM 200MP (for haemoglobin levels (Hb), calculated haematocrit (Hct), oxygen saturation (SpO2), low perfusion oximetry (SoO2) and pulse rate). ECG data is now available through the latest Apple Watch and FitBands, while EEG data is available through Muse and OMNIFIT headbands.

High-frequency medicine would import the specialised know-how from high-frequency finance and combine it with recent advances in data science and ML/AI. High-frequency readings from noninvasive sensors (glucose, haemoglobin levels, haematocrit, oxygen saturation, low perfusion oximetry and pulse rate, possibly also ECG and EEG readings) would be combined into a single high-frequency multivariate time series.
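As a sketch of what such a combination might look like in practice (the sensor names, sampling grid, and the function below are illustrative assumptions, not a prescribed design), asynchronous readings can be resampled onto a common time grid and merged into a single DataFrame:

```python
import pandas as pd

def combine_vitals(streams, freq="1min"):
    """Merge asynchronous sensor streams into one multivariate time series.

    `streams` maps a signal name (e.g. "glucose", "spo2", "pulse") to a
    pandas Series of readings indexed by timestamp. Each stream is
    resampled onto the common grid `freq` (mean within each bin), gaps are
    forward-filled, and the result has one full vector of vitals per step."""
    aligned = {name: readings.resample(freq).mean().ffill()
               for name, readings in streams.items()}
    return pd.DataFrame(aligned).dropna()

# Hypothetical usage, with each variable holding a timestamped pd.Series:
# vitals = combine_vitals({"glucose": glucose, "hb": haemoglobin,
#                          "spo2": spo2, "pulse": pulse})
```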

Classification and regression algorithms – the two main types of supervised learning in ML/AI – would be applied to this combined time series. Classification algorithms would help diagnose diseases at early stages, when they can still be treated with minimum discomfort to the would-be patient. Regression algorithms would be applied to predict – and prevent – anomalies in the vitals time series.
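A minimal sketch of these two roles, assuming a combined `vitals` DataFrame like the one above, a hypothetical per-window diagnosis label collected alongside the recordings, and scikit-learn models as stand-ins (all of which are illustrative assumptions):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, RandomForestClassifier

def make_windows(vitals: pd.DataFrame, window: int = 60) -> pd.DataFrame:
    """Summarise each signal over a trailing window: for every column of
    the combined vitals series, compute the rolling mean and standard
    deviation over the last `window` observations."""
    features = pd.concat(
        [vitals.rolling(window).mean().add_suffix("_mean"),
         vitals.rolling(window).std().add_suffix("_std")],
        axis=1,
    )
    return features.dropna()

def fit_diagnosis_classifier(vitals: pd.DataFrame, diagnosis: pd.Series, window: int = 60):
    """Classification: learn to flag early signs of disease from windowed
    vitals. `diagnosis` is a hypothetical per-timestamp 0/1 label."""
    X = make_windows(vitals, window)
    y = diagnosis.reindex(X.index).dropna()
    return RandomForestClassifier().fit(X.loc[y.index], y)

def fit_vitals_forecaster(vitals: pd.DataFrame, target: str = "glucose", window: int = 60):
    """Regression: predict the next reading of `target`; a large gap between
    the prediction and the reading that actually arrives can be flagged as
    an anomaly worth investigating."""
    X = make_windows(vitals, window)
    y = vitals[target].shift(-1).reindex(X.index).dropna()
    return GradientBoostingRegressor().fit(X.loc[y.index], y)
```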

In order for these algorithms to work, they must be calibrated on real data. We must therefore start collecting high-frequency multivariate time series of vital signs – from healthy volunteers and from patients of as many kinds as possible (cardiological, oncological, etc.). These data must be combined with information about disease progression to calibrate the classification and regression ML algorithms. Once calibrated, the algorithms can be used to save lives.
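In this sketch, calibration amounts to fitting the models on the earlier part of the labelled recordings and checking them on the later, held-out part; the chronological split and the metrics below are illustrative choices rather than a prescribed protocol:

```python
from sklearn.metrics import mean_absolute_error, roc_auc_score

def calibrate_and_evaluate(X, y_class, y_reg, classifier, regressor, test_fraction=0.25):
    """Fit the diagnosis classifier and the vitals forecaster on the earlier
    part of the recordings and evaluate them on the later, held-out part.

    X        : windowed vitals features (pandas DataFrame, rows in time order)
    y_class  : per-window disease-progression labels (0/1, same index as X)
    y_reg    : per-window regression target, e.g. the next glucose reading
    """
    split = int(len(X) * (1 - test_fraction))
    classifier.fit(X.iloc[:split], y_class.iloc[:split])
    regressor.fit(X.iloc[:split], y_reg.iloc[:split])
    return {
        "diagnosis_auc": roc_auc_score(
            y_class.iloc[split:], classifier.predict_proba(X.iloc[split:])[:, 1]),
        "forecast_mae": mean_absolute_error(
            y_reg.iloc[split:], regressor.predict(X.iloc[split:])),
    }
```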

Data, therefore, is our greatest enabler. It can be collected with minimal discomfort to existing patients and to healthy volunteers. There is simply no time to waste: this data is urgently required to save lives.

The Bible tells us that, after the Flood, Noah got drunk and lay naked in his tent. Two of his sons, Shem and Japheth, averting their eyes, covered their father with a garment (while Ham stupidly laughed at his father’s predicament). Humanity’s health crisis represents God’s dis-grace. Let us do what Shem and Japheth once did and cover this disgrace with data and ML/AI solutions. Maybe then we’ll also be able to recognise what God’s compassion must look like.

For more information, please refer to https://ai.thalesians.com/
