The Women of Healthcare Standards

It’s an unfortunate reality that men significantly outnumber women in leadership positions in health IT – which is part of why Healthcare IT News’ sister publication Women in HIT profiles the work of women in the industry.

Today, Healthcare IT News is featuring an interview with Maria Palombini, director of the healthcare and life sciences practice at the IEEE Standards Association.

Palombini offers an in-depth look at four high-profile women in health IT who are contributing significantly to creating and promoting healthcare standards: Heather Flannery, CEO of ConsenSys Health and chair of the HIMSS Blockchain Task Force; Dr. Madhuri Gore, professor and tech director at the Dr. S.R. Chandrasekhar Institute of Speech and Hearing; Florence Hudson, CEO of FDHint and executive director of the Northeast Big Data Innovation Hub at Columbia University; and Dr. Ida Sim, professor of medicine at the University of California San Francisco and co-director of informatics and research innovation at UCSF’s Clinical and Translational Sciences Institute.

Palombini talks about the importance and influence of these four healthcare leaders, the value of the female perspective in healthcare, and some of the challenges they and their teams are facing in addressing a critical piece of the digital health puzzle.

Q. Heather Flannery is CEO of ConsenSys Health and chair of the HIMSS Blockchain Task Force. What kind of standards work is she doing and how does her leadership help boost standards?

A. Heather Leigh Flannery serves as the chair of IEEE P2418.6: Standard for the Framework of Distributed Ledger Technology Use in Healthcare and the Life and Social Sciences. The working group chair may be appointed by the IEEE Standards Committee or elected by the working group members.

Heather was the author and submitter of the IEEE P2418.6 Project Authorization Request, which is a structured and highly detailed document that essentially states the reason the standard project exists and what it intends to do.

The project is part of IEEE SA’s blockchain initiative, launched in 2018. As an early mover in this field, the working group aims to provide a common framework for distributed ledger technology usage, implementation and interaction in healthcare and the life and social sciences, addressing scalability, security and privacy challenges. The framework covers DLT [distributed ledger technology] tokens, smart contracts, transactions, assets, networks, off-chain data storage and access architectural patterns, and both permissioned and permissionless DLT.

One of the most valuable contributions is using the IEEE global platform to drive engagement with others working on standards in this field. We have about 150 active members in this working group, with comprehensive involvement from both the public and private sectors, representatives from governments around the world and multiple parties across academia.

While this technology aims to improve efficiency, healthcare brings unique challenges and demands special attention. The purpose of this standard is twofold. First, it is to provide a common semantic model and framework for the usage of blockchain and DLT in healthcare and the life and social sciences, under which a body of detailed, complementary standards specific to myriad niche use cases can subsequently be developed.

Second, it is to clarify and rationalize the use of DLT in healthcare and the life and social sciences in concert with converging innovations relevant to the sector, including, but not limited to, the family of artificial intelligence technologies and the internet of medical things, delivering healthcare-specific coordination of these adjacent standards activities. This involves both opportunities for value creation and risk-mitigation challenges.

Of great significance, IEEE SA Open Source has been incorporated into IEEE P2418.6.

Beyond her roles as CEO of ConsenSys Health and chair of the HIMSS Blockchain Task Force, Flannery served as the FY19 co-chair of the global HIMSS Blockchain in Healthcare Task Force and chairs the Healthcare Special Interest Group at the Enterprise Ethereum Alliance. She is an associate editor of the peer-reviewed journal Frontiers Blockchain for Science.

Flannery is also an Innovation Fellow at EP3 Foundation, has served as industry faculty for the U.S. Department of Health and Human Services Office of the National Coordinator for Health IT, and is an active consultant, advisor and keynote speaker.

Q. Dr. Madhuri Gore is a professor and tech director at the Dr. S.R. Chandrasekhar Institute of Speech and Hearing. What kind of standards work is she doing, and how does her leadership help boost standards?

A. Dr. Madhuri Gore is the vice chair of IEEE P2650 – Standard for Enabling Mobile Device Platforms to Be Used as Pre-Screening Audiometric Systems. This standard will establish the performance, interoperability and validation requirements of a mobile device platform, typically a mobile phone in conjunction with a portable or wearable device and associated software, to be used as an audiometric pre-screening device. Gore was appointed vice chair by the chair of the standards working group, with the support of her fellow group members.

This project is particularly important to emerging economies, including Gore’s native India, where diagnostic screenings are out of reach for much of the population due to accessibility, affordability and other roadblocks. According to the World Health Organization, 1.5 billion people globally live with hearing loss, and 430 million suffer from disabling levels of hearing loss that can be mitigated. With the emergence of mobile devices, the opportunity exists to meet these challenges.

The potential impact of standards for mobile devices used in hearing pre-screening is enormous, both in scope and in the secondary issues that accompany hearing loss. The standard will enable use in remote and rural areas, and will also drive awareness and prevention of secondary issues such as depression, unemployment, cognitive decline and dementia, and academic underachievement.

Gore has worked as an audiologist since 1982, including an extensive focus on children with hearing loss. She has conducted neonatal hearing screening, and participated in school screening and rural hearing screening programs for the early identification of hearing loss.

Additionally, Gore is experienced with cochlear implants and has been part of a team that provided guidelines to the government of Karnataka for its cochlear implant program. She currently holds a post as a professor in the Department of Hearing Studies at the Dr. S.R. Chandrasekhar Institute of Speech and Hearing. Gore previously served as president of the Indian Speech and Hearing Association and vice president of CIGI.

Q. Florence Hudson is CEO of FDHint and executive director of the Northeast Big Data Innovation Hub at Columbia University. What kind of standards work is she doing and how does her leadership help boost standards?

A. Florence Hudson chairs the working group for IEEE P2933: Standard for Clinical Internet of Things Data and Device Interoperability with TIPPSS – Trust, Identity, Privacy, Protection, Safety, Security.

Florence originally initiated this work as a pre-standards incubation activity under the IEEE SA Global Wearables and Medical IoT Interoperability and Intelligence Program. After the incubation work amassed more than 100 volunteers, the group achieved consensus to move forward and submitted the PAR to become an official IEEE standards working group.

As the author, submitter and original leader of this incubation activity, Florence was supported by the IEEE Standards Committee and the working group members as chair of the working group. This standard, the first under development in partnership with Underwriters Laboratories, will establish a framework built on Trust, Identity, Privacy, Protection, Safety, Security principles for clinical internet of things data and device validation and interoperability.

This includes wearable clinical IoT and interoperability with healthcare systems, including electronic health records, other clinical IoT devices, in-hospital devices, and future devices and connected healthcare systems.

The primary reason for this standards project is that everything is reachable and hackable, including things you might not imagine, such as the weight scale in your home that reports real-time findings to your doctor.

For example, in 2017, the FDA recalled more than 465,000 pacemakers due to hacking concerns. Adjacency is an issue: you can be within 50 feet or so and hack a person’s wearable device. Thus, the mission is to protect healthcare devices and data against device, hardware, software and service hacks.

Of course, the risks of a hack can be enormous, including loss of information and privacy – and, for some, loss of life. An expert in TIPPSS, Hudson says this is the new and best cybersecurity paradigm for healthcare IoT and other uses.

Focus on this issue began about five years ago, and Hudson was instrumental in launching the standards working group in 2019. The group comprises more than 250 members from 22 countries and six continents.

Members include representatives of device manufacturers, regulators, the National Institutes of Health, the National Cancer Institute, providers, payers, patient advocates, pharmaceutical companies, technologists, EHR/EMR vendors, researchers, academics, startups, the Hyperledger community and, of course, Underwriters Laboratories.

Hudson is executive director of the Northeast Big Data Innovation Hub at Columbia University, and founder and CEO of FDHint, a global advanced technology and diversity and inclusion consulting firm. She leads the NSF-funded COVID Information Commons, an open resource for exploring research and enabling global collaboration to address the COVID-19 pandemic.

Hudson’s career includes her leadership as vice president and chief technology officer at IBM, senior vice president and chief innovation officer at Internet2, special advisor to the NSF Cybersecurity Center of Excellence, and aerospace engineer at NASA and Grumman. Also an author, Hudson published a book about TIPPSS.

She currently serves on boards for Princeton University, California Polytechnic State University (San Luis Obispo), Stony Brook University, Blockchain in Healthcare Today, and the IEEE Engineering in Medicine and Biology Society. She earned a BSE in Mechanical and Aerospace Engineering from Princeton University, and executive business education at Harvard and Columbia universities.

Q. And Dr. Ida Sim is a professor of medicine at the University of California San Francisco; co-director of informatics and research innovation at UCSF’s Clinical and Translational Sciences Institute; and co-founder of Open mHealth. What kind of standards work is she doing, and how does her leadership help boost standards?

A. Dr. Sim serves as chair of IEEE 1752.1-2021: Standard for Mobile Health Data.

Mobile and wearable devices are increasingly being developed for healthcare purposes. Mobile health data encompasses personal health data collected from sensors and mobile applications, including digital biomarkers: physiological and behavioral measures collected by means of digital devices such as portables, wearables, implantables or ingestibles that characterize, influence or predict health-related outcomes.

Sim is the author and submitter of the PAR for the IEEE 1752.1 standard and for the newly approved IEEE P1752.2 standards working group. The standards working groups are a product of work envisioned at Open mHealth, a nonprofit making patient-generated data accessible through an open data standard and community.

Standardizing mHealth data and metadata will improve the ease and accuracy of aggregating data across multiple mobile health sources (semantic interoperability) and will reduce the costs of using these data for biomedical discovery, improving health and managing disease. As a starting point, this working group focused on specifications for standardized representations of quantitative sleep and physical activity measures, minimum metadata and subjective reports (surveys), as defined by the IEEE 1752.1 standard.

The purpose of this standard is to provide standard semantics that enable meaningful description, exchange, sharing and use of such mHealth data. Data and associated metadata complying with this standard will be sufficiently clear and complete to support their use for a broad set of consumer health, biomedical research and clinical care needs.

Standardizing mHealth data and metadata will yield several benefits, including making data exchange and reuse predictable and consistent; making data aggregation across multiple sources easier and more accurate; facilitating the development and validation of digital biomarkers; and reducing the costs of using mHealth data for care and research. Consider that the data may come from millions of people; it needs to arrive in a standardized way.

Additionally, with an Open mHealth approach to data sharing, common schemas structure the data, and open-source tools validate it, pull in data from large and popular device manufacturers, and store and share it securely with others.
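
To make the schema idea concrete, here is a minimal Python sketch of schema-based validation in the spirit of Open mHealth’s JSON Schema approach. The schema, field names and values below are illustrative stand-ins, not the actual IEEE 1752.1 specification.

    # A minimal sketch of schema-based validation for an mHealth data point.
    # The schema is illustrative, not the actual IEEE 1752.1 specification.
    from jsonschema import validate, ValidationError

    STEP_COUNT_SCHEMA = {
        "type": "object",
        "properties": {
            "step_count": {"type": "integer", "minimum": 0},
            "effective_time_frame": {
                "type": "object",
                "properties": {
                    "start_date_time": {"type": "string"},
                    "end_date_time": {"type": "string"},
                },
                "required": ["start_date_time", "end_date_time"],
            },
        },
        "required": ["step_count", "effective_time_frame"],
    }

    data_point = {
        "step_count": 4823,
        "effective_time_frame": {
            "start_date_time": "2021-07-01T00:00:00Z",
            "end_date_time": "2021-07-01T23:59:59Z",
        },
    }

    try:
        validate(instance=data_point, schema=STEP_COUNT_SCHEMA)
        print("Data point conforms to the schema.")
    except ValidationError as err:
        print(f"Schema violation: {err.message}")

Any device vendor emitting data in this shape can be checked with the same tooling, which is the interoperability payoff the answer describes.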

This working group is composed of 243 global members representing industry, academia, government, regulatory agencies, clinical researchers and more.

Now, the focus is moving forward with a new work group: P1752.2, the Standard for Mobile Health Data: Representation of Cardiovascular, Respiratory, and Metabolic Measures. The first working group meeting was held this past July. This will include a pilot using synthetic data to test the standard.

Sim is a professor of medicine at the University of California, San Francisco, and co-directs informatics and research innovation at UCSF’s Clinical and Translational Sciences Institute. She also is the director of digital health for the division of general internal medicine.

Sim’s research focuses on open integrated architectures of mobile technologies for clinical research and primary care. She is a global leader in the policy and technology of large-scale sharing of clinical trials and mobile health data. In 2011, she co-founded Open mHealth.

In 2017, she co-founded Vivli, a global data-sharing platform for finding, requesting and analyzing participant-level clinical trials data. Sim has served on multiple advisory committees on health information infrastructure for clinical care and research, including committees of the National Research Council and National Academy of Medicine.

She is a recipient of the United States Presidential Early Career Award for Scientists and Engineers, a Fellow of the American College of Medical Informatics [and] a member of the American Society for Clinical Investigation [as well as] a practicing primary care physician.

Q. What special value does the female perspective bring to healthcare and standards?

A. Women are leading the way in every aspect of the digital health tech transformation: founding startups, running major organizations as CEOs and solving complex everyday challenges in the adoption and use of health technologies. Above and beyond their experience and expertise, consider that 77% of frontline health and long-term care workers are women, according to Becker’s Hospital Review.

Women hold 30% of C-suite positions at healthcare companies, 13% of them in the healthcare innovation space, according to the consulting firm Oliver Wyman. Of the hundreds of companies in the investment firm Start-Up Health’s VC fund portfolio, 32% were founded by women. Lastly, more than 80% of household decision-makers for healthcare are women, according to the U.S. Department of Labor.

Women are playing a leading role in healthcare, and rightfully so, whether it be in healthcare technology or leading from the C-suite – and developing standards.

Here at IEEE SA, we have a large representation of women across the globe, including these four extremely talented people leading standards development within our healthcare practice. Their work focuses on security, validation and protecting the privacy of patient data while distributing and integrating it for diagnostics, monitoring and clinical research.

Each of them leads a very diverse group of hundreds of volunteer professionals from across the globe, unified in their commitment to the development and adoption of standards, which ultimately lead to better healthcare outcomes.

Q. What are some of the challenges these women and their teams face in addressing a critical piece in the puzzle impeding trust, security and validation in digital health?

A. When these practice leaders presented during a recent webinar hosted by IEEE SA, 58% of viewers polled said the foremost challenge is distrust in the use of medical technologies due to threats and vulnerability risks. Other concerns included a lack of data and/or device interoperability; a lack of patient identity, data and device validation; and a lack of accessibility and feasibility.

These underlying issues are symptomatic of innovation outpacing trusted adoption in the market. The growing use of devices in, on and around the body for mobile and remote patient monitoring only heightens these concerns while so many critical challenges remain unresolved.

The development of consensus-driven standards addressing persistent questions about validation, interoperability, feasibility, privacy and ethics can deliver the credibility and trust that all stakeholders – patients, clinicians, regulators, researchers and more – are seeking in order to drive wide adoption for healthcare delivery and clinical research.

More critical focus should be placed on both technical and policy considerations. A global community of leaders in healthcare, technology and policy is needed to develop mutual understanding and recommendations for standards that address the threats and vulnerabilities embedded in the connected healthcare arena. Many gaps remain in connected healthcare as it relates to security, privacy, ethics, trust and identity, including data and device validation and interoperability.

Resulting recommendations could include technical solutions such as a systems-of-systems reference architecture and/or an integrated systems design approach for more comprehensive visibility and detection across the many connected elements within these systems. The future of medical devices, mobile or stationary, will be heavily AI-powered, which will add another element of uncertainty when it comes to trust and adoption.

To truly realize the potential of these devices, their impact on patient outcomes and their contribution to precision medicine, the issues of security, privacy, interoperability and validation need to be addressed in the form of consensus-driven standards, applied so that users can seamlessly trust that products and services will work in an ethical, secure and verified manner.

We should shed this baggage of continued uncertainty and open the doors to future innovation.

Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Article: healthcareitnews.com

How AI Bias Happens – and How to Eliminate It

Artificial intelligence holds great promise for healthcare, and it is already being put to use by many forward-looking hospitals and health systems.

One challenge for healthcare CIOs and clinical users of AI-powered health technologies is the biases that may pop up in algorithms. These biases, such as algorithms that improperly skew results because of race, can compromise the ultimate work of AI – and clinicians.

We spoke recently with Dr. Sanjiv M. Narayan, co-director of the Stanford Arrhythmia Center, director of its Atrial Fibrillation Program and professor of medicine at Stanford University School of Medicine. He offered his perspective on how biases arise in AI – and what healthcare organizations can do to prevent them.

Q. How do biases make their way into artificial intelligence?

A. There is an increasing focus on bias in artificial intelligence, and while there is no cause for panic yet, some concern is reasonable. AI is embedded in systems from wall to wall these days, and if these systems are biased, then so are their results. This may benefit us, harm us or benefit someone else.

A major issue is that bias is rarely obvious. Think about your results from a search engine “tuned to your preferences.” We already are conditioned to expect that this will differ from somebody else’s search on the same topic using the same search engine. But are these searches really tuned to our preferences, or to someone else’s preferences, such as a vendor’s? The same applies across all systems.

Bias in AI occurs when results cannot be generalized widely. We often think of bias resulting from preferences or exclusions in training data, but bias can also be introduced by how data is obtained, how algorithms are designed, and how AI outputs are interpreted.

How does bias get into AI? Everybody thinks of bias in training data – the data used to develop an algorithm before it is tested on the wide world. But this is only the tip of the iceberg.

All data is biased. This is not paranoia. This is fact. Bias may not be deliberate. It may be unavoidable because of the way that measurements are made – but it means that we must estimate the error (confidence intervals) around each data point to interpret the results.
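
To make that error estimate concrete, here is a minimal Python sketch that computes a 95% confidence interval for a sample mean; the height values are invented for illustration.

    import math
    import statistics

    heights_cm = [162.0, 175.5, 158.2, 181.3, 169.9, 173.4, 165.7, 178.1]

    n = len(heights_cm)
    mean = statistics.mean(heights_cm)
    sem = statistics.stdev(heights_cm) / math.sqrt(n)  # standard error of the mean

    # 1.96 is the normal-approximation critical value; with a sample this small,
    # a t critical value (about 2.36 for 7 degrees of freedom) would be wider.
    low, high = mean - 1.96 * sem, mean + 1.96 * sem
    print(f"mean = {mean:.1f} cm, 95% CI = ({low:.1f}, {high:.1f})")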

Think of heights in the U.S. If you collected them and put them all onto a chart, you’d find overlapping groups (or clusters) of taller and shorter people, broadly indicating adults and children, and those in between. However, who was surveyed to get the heights? Was this done during the weekdays or on weekends, when different groups of people are working?

If heights were measured at medical offices, people without health insurance may be left out. If done in the suburbs, you’ll get a different group of people compared to those in the countryside or those in cities. How large was the sample?

Bias in training data is the bias that everybody thinks about. AI is trained to learn patterns in data. If a particular dataset has bias, then AI – being a good learner – will learn that too.

A now classic example is Amazon. Some years ago, Amazon introduced a new AI-based algorithm to screen and recruit new employees. The company was disappointed when this new process did nothing to help diversity, equity and inclusion.

“All data is biased. This is not paranoia. This is fact.”

Dr. Sanjiv M. Narayan, Stanford University School of Medicine

When they looked closely, it turned out that the data used for training came from applications submitted to Amazon, primarily by white men, over a 10-year period. Using this system, new applicant resumes were downgraded if they contained the terms “women’s” or “women’s colleges.” Amazon stopped using this system.

On another front, AI algorithms are designed to learn patterns in data and match them to an output. There are many AI algorithms, and each has strengths and weaknesses. Deep learning is acknowledged as one of the most powerful today, yet it performs best on large data sets that are well labeled for the precise output desired.

Such labeling is not always available, and so other algorithms are often used to do this labeling automatically. Sometimes, labeling is done not by hand, but by using an algorithm trained for a different, but similar, task. This approach, termed transfer learning, is very powerful. However, it can introduce bias that is not always appreciated.

Other algorithms involve steps called auto-encoders, which compress large data into reduced sets of features that are easier to learn. This process of feature extraction, for which many techniques exist, can introduce bias by discarding information that could have made the AI smarter during wider use – information that is lost even if the original data was not biased.
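
To illustrate, here is a minimal Python sketch using principal component analysis as a stand-in for the auto-encoder step described above; both compress data into fewer features, and whatever is discarded is unavailable to any downstream model. The dataset is a generic stand-in.

    # Feature extraction can silently discard information: PCA compresses
    # 30 features down to 2, and the variance left behind is gone for good,
    # even if the original data was unbiased.
    from sklearn.datasets import load_breast_cancer
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    X, _ = load_breast_cancer(return_X_y=True)
    X_scaled = StandardScaler().fit_transform(X)

    pca = PCA(n_components=2)
    X_reduced = pca.fit_transform(X_scaled)

    kept = pca.explained_variance_ratio_.sum()
    print(f"Reduced {X.shape[1]} features to 2, keeping {kept:.0%} of the variance;")
    print(f"the remaining {1 - kept:.0%} is invisible to any downstream model.")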

There are many other examples where choosing one algorithm over another can modify results from the AI.

Then there is bias in reporting results. Despite its name, AI is typically not “intelligent” in the human sense. AI is a fast, efficient way of classifying data – your smartphone recognizing your face, a medical device recognizing an abnormal pattern on a wearable device or a self-driving car recognizing a dog about to run in front of you.

The internal workings of AI involve mathematical pattern recognition, and at some point all of this math has to be put into a bin of Yes or No. (It’s your face or not, it’s an abnormal or normal heart rhythm, and so on.) This process often requires some fine-tuning, whether to reduce bias in data collection, in the training set or in the algorithm, or to broaden the system’s usefulness.

For instance, you may decide to make your self-driving car very cautious, so that if it senses any disturbance at the side of the road it alarms “caution,” even if the internal AI would not have sounded the alarm.
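
A minimal Python sketch of that kind of tuning, with invented scores and thresholds: the model’s raw output is unchanged, but lowering the alarm threshold makes the system more cautious.

    def should_alarm(disturbance_score: float, threshold: float) -> bool:
        """Map a continuous model score onto a Yes/No alarm decision."""
        return disturbance_score >= threshold

    score = 0.35  # hypothetical model output for a roadside disturbance

    print(should_alarm(score, threshold=0.50))  # False: default tuning stays quiet
    print(should_alarm(score, threshold=0.25))  # True: cautious tuning alarms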

Q. What kind of work are you currently doing with AI?

A. I am a professor and physician at Stanford University. I treat patients with heart conditions, and my lab has for a long time done research into improving therapy in individual patients using AI and computer methods to better understand disease processes and health.

In cardiology, we are fortunate in having many ways to measure the heart that increasingly are available as wearable devices and that can directly guide treatment. This is very exciting, but also introduces challenges. One major issue that is emerging in medicine is AI bias.

Bias in medical AI is a major problem, because making a wrong diagnosis or suggesting [the] wrong therapy could be catastrophic. Each of the types of bias I have described can apply to medicine. Bias in data collection is a critical problem. Typically, we only have access to data from patients we see.

However, what about patients without insurance, or those who only choose to seek medical attention when very sick? How will AI work when they ultimately do present to the emergency room? The AI may have been trained on people who were less sick, younger or of different demographics.

Another interesting example involves wearables, which can tell your pulse by measuring light reflectance from your skin [photoplethysmography]. Some of these algorithms are less accurate in people of color. Companies are working on solutions that address this bias by validating across all skin tones.

Other challenges in medical AI include ensuring the accuracy of AI systems (validation); ensuring that multiple systems can be compared for accuracy, ideally using the same testing data, which may be proprietary to each specific system; and ensuring that patients have access to their data. The Heart Rhythm Society recently called for this “transparent sharing” of data.

Q. What is one practice for keeping biases out of AI?

A. Understanding the various causes of bias is the first step in the adoption of what is sometimes called effective “algorithmic hygiene.” An essential practice is to ensure as much as possible that training data are representative.

Representative of what? No data set can represent the entire universe of options. Thus, it is important to identify the target application and audience upfront, and then tailor the training data to that target.
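
One way to act on that advice, sketched in Python under the assumption that the target population’s class balance is known: a stratified split keeps the training data representative of that balance. The data here is synthetic.

    import numpy as np
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                   # synthetic feature matrix
    y = rng.choice([0, 1], size=1000, p=[0.9, 0.1])  # rare positive class

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0
    )

    # Both splits preserve roughly the 10% positive rate of the full population.
    print(f"train positives: {y_train.mean():.1%}, test positives: {y_test.mean():.1%}")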

A related approach is to train multiple versions of the algorithm, each on a different available dataset, and then compare their outputs on the same inputs. If the classifications agree across models, the models can be combined.

A similar approach is to feed the multiple datasets to the AI and train it on all of them at once. The advantage of this approach is that the AI learns to reinforce the similarities between the input datasets while still generalizing to each one. Both approaches are sketched below.
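
A minimal Python sketch of both approaches, using synthetic data as a stand-in: per-dataset models whose agreement is checked before combining them, and a single model trained on the pooled datasets.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Simulate three separately collected datasets by splitting one synthetic task.
    X_full, y_full = make_classification(n_samples=1000, n_features=10, random_state=0)
    chunks = np.array_split(np.arange(900), 3)
    datasets = [(X_full[idx], y_full[idx]) for idx in chunks]
    X_probe = X_full[900:]  # held-out points for checking agreement

    # Approach 1: one model per dataset; high agreement on held-out points is
    # the signal that the models can reasonably be combined (e.g., by voting).
    models = [LogisticRegression(max_iter=1000).fit(X, y) for X, y in datasets]
    predictions = np.array([m.predict(X_probe) for m in models])
    agreement = (predictions == predictions[0]).all(axis=0).mean()
    print(f"models agree on {agreement:.0%} of held-out points")

    # Approach 2: pool the datasets and train a single model on the union,
    # letting it reinforce what the sources share while still fitting each one.
    X_all = np.vstack([X for X, _ in datasets])
    y_all = np.concatenate([y for _, y in datasets])
    pooled_model = LogisticRegression(max_iter=1000).fit(X_all, y_all)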

As AI systems continue to be used, one common design is to keep updating the training dataset so that the system becomes increasingly tailored to its user base. This can introduce unintended consequences. First, as the AI becomes more and more tailored to the user base, it may drift from the carefully curated data originally used for training, introducing bias.

Second, the system may become less accurate over time, because the oversight used to ensure AI accuracy may no longer be in place in the real world. A good example of this is Microsoft’s Tay chatbot, which was designed to be a friendly companion but, on release, rapidly learned undesirable language and behaviors and had to be shut down.

Finally, the AI is no longer the same as the original version, which is an issue for regulation of medical devices as outlined in the Food and Drug Administration guidelines on Software as a Medical Device.

Q. What is another best practice for preventing AI bias?

A. There are multiple approaches to eliminating bias in AI, and none are foolproof. These range from formulating an application so that it is relatively free of bias, to collecting data in a relatively unbiased way, to designing mathematical algorithms that minimize bias.

The technology of AI is moving inexorably toward greater integration across all aspects of life. As this happens, bias becomes more likely to occur through the compounding of complex systems and, paradoxically, less easy to identify and prevent.

It remains to be seen how this field of ethical AI develops and whether quite different approaches are developed for highly regulated fields such as medicine, where transparency and explainable AI are of critical importance, and other endeavors.

Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Source Here: healthcareitnews.com

At Historic Abortion Arguments, Conservatives Signal Changes

Members of the Supreme Court’s conservative majority are suggesting they may make sweeping changes to limit abortion rights in the United States.

Original Article: theday.com

Pandemic Worriers Shown to Have Impaired General Cognitive Abilities

The COVID-19 pandemic has tested our psychological limits. Some have been more affected than others by the stress of potential illness and the confusion of constantly changing health information and new restrictions. A new study finds the pandemic may have also impaired people’s cognitive abilities and altered risk perception, at a time when making the right health choices is critically important.

Source Here: medicalxpress.com
