New York, New York, United States – 11-12-2022 (PR Distribution™) –
The coronavirus pandemic that broke out in late 2019 and claimed over 6 million lives exposed many problems in global healthcare, such as infrastructure widely unprepared for mass infection and shortages of hospital beds and medical staff. Instead of prevention and anticipation, officials, epidemiologists, infectious disease specialists, and virologists were forced to act in the middle of an ongoing, unfamiliar crisis.
Many lessons were learned over the past two years. For example, we realized that any organizational decision (be it lockdowns, COVID hospital deployments, or vaccine development) requires digitized global data on infection spread.
COVID-19 triggered a rapid increase of available digital data that governments can rely on when taking epidemiological measures.
Intelligent analysis of such an unprecedented amount of data, in its turn, would be impossible without high-performance computing resources (such as cloud computing), advanced ML algorithms, and neural networks.
Supervising EU lockdowns
The Italian Ministry of Innovation, in collaboration with the University of Pavia, adapted big data analytics to its needs, with the government starting to collect anonymized information on user movements from Facebook and Italian IT companies. Later, the energy company Enel X and Here Technologies, a mapping content developer, created the City Analytics map, which enabled Italian transportation agencies to track passenger mobility.
The French government went even further in its masking campaign and armed itself with an AI-based video analytics tool from DatakaLab that informed local transportation authorities about maskless passengers. AI Hub, a Singapore tech company, developed the SafeDistancer app to monitor social distancing, based on AI and computer vision technologies. The app processes smartphone images to detect people in the camera frame and alerts the user when people come too close together.
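The proximity-alert step of such an app can be sketched in a few lines. The snippet below is an illustrative approximation only (SafeDistancer's actual pipeline is not public); it assumes a person detector has already produced bounding boxes in the camera frame and uses raw pixel distance as a stand-in for physical distance.

```python
import math

def centers(boxes):
    """Convert (x, y, width, height) bounding boxes to center points."""
    return [(x + w / 2, y + h / 2) for x, y, w, h in boxes]

def too_close_pairs(boxes, min_dist_px):
    """Return index pairs of detected people closer than min_dist_px.

    Pixel distance is a simplification: a production system would
    calibrate against the camera's perspective to estimate real-world
    separation before raising an alert.
    """
    pts = centers(boxes)
    pairs = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if math.dist(pts[i], pts[j]) < min_dist_px:
                pairs.append((i, j))
    return pairs

# Hypothetical detections: (x, y, width, height) in pixels
detections = [(10, 10, 40, 80), (30, 12, 40, 80), (300, 15, 40, 80)]
alerts = too_close_pairs(detections, min_dist_px=100)
```

Here the first two detections sit close together and would trigger an alert, while the third is far enough away to be ignored.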
Infection waves and empty hospital beds
It would be strange if COVID-era AI solutions were used only for monitoring and enforcing restrictions. As the pandemic progressed, countries began to adopt tools that could optimize the workload on medical facilities overwhelmed with admissions.
The UK’s National Health Service (NHS) partnered with Microsoft, Amazon Web Services, Google, Faculty, and Palantir to introduce a digital platform based on big data, AI, and cloud computing. The platform consolidated data on COVID patients’ length of stay and hospital occupancy levels, collected from Public Health England and the NHS. The platform also provided recommendations on hospital staffing and medical equipment supplies.
Telemedicine for symptom management
When it became clear that medics were stretched thin and hospitals turned into hotbeds of infection, telemedicine started to gain momentum, assisted by IoMT (Internet of Medical Things) gadgets like smartwatches, pulse oximeters, or common smartphones with specialized apps.
Telemedicine services enabled doctors to remotely monitor patients who did not require inpatient therapy. For instance, Mayo Clinic and Baptist Health, an NPO from Kentucky, adopted the practice of remote monitoring. In collaboration with Current Health Ltd, they monitored patients with mild and moderate COVID-19 cases.
Apple Inc teamed up with the White House Coronavirus Task Force, the Centers for Disease Control and Prevention (CDC), and the US Department of Health and Human Services (HHS) to release the “COVID-19 app”, where patients could provide information on their symptoms.
Helping study and treat the coronavirus
Big data and AI also helped scientists understand the specifics of the new virus and its effect on the human body. Researchers at New York University and Columbia University partnered with two Chinese hospitals and developed an AI tool that predicted which COVID patients would experience more severe symptoms. The service also identified how liver enzymes, myalgia, and hemoglobin levels affected patients’ deterioration.
Superfast vaccine launch
During the coronavirus pandemic, among other things, innovative medicines were developed and registered with unprecedented speed. That was how the first two clinically approved mRNA vaccines against COVID-19 were launched, one produced by Moderna and the other by Pfizer and BioNTech.
Just a few days after Chinese scientists published the gene sequence of the new virus, Moderna specialists from Massachusetts prepared a plan for vaccine development. Forty-two days later, they delivered the first batch of the initial vaccine version to the US National Institutes of Health for Phase I trials. The vaccine was tested on people for the first time in early March, even though medication development generally takes years or even decades.
Fragments and leaks
Unfortunately, despite the vast amounts of data and numerous analytical tools that have become available over recent years, the usability of those terabytes of data remains an issue.
Scientists complain that the technologies implemented by healthcare systems of several countries (or even one country) are siloed and fragmented. The absence of cross-system compatibility means that collection, logging, distribution, and exchange of medical data between hospitals and state authorities, let alone different healthcare ministries, can only be done manually.
About the Author
Rustam Gilfanov is an IT entrepreneur, a co-founder of a large IT company, and a venture partner of the LongeVC Fund.
Company Name: Blacklight
Full Name: Vlad
Phone: +7 499 340 33 83
For the original news story, please visit https://www.prdistribution.com/news/what-science-learned-from-covid-19-and-how-big-data-can-change-medicine/9381212.