Nation-state Threat Actors Are Motivated by Intelligence, Cash

As cyberattacks continue to hamper the operations of critical infrastructure, including hospitals, it may be tempting to think of the hackers as if they’re the main characters in the 1995 film of the same name: Kids who want to stir up trouble, and maybe make some cash doing it.

But “this is not a teenager in a hoodie doing these kinds of attacks; these are elaborate, sophisticated, organized criminal gangs,” as Errol Weiss, chief security officer at H-ISAC, warned at HIMSS21 this past summer.

And some of these gangs have the muscle of nation-states behind them – making them even more potentially threatening to healthcare organizations of all shapes and sizes.

Weiss, who will be appearing this December at the virtual HIMSS Healthcare Cybersecurity Forum, spoke with Healthcare IT News this past week just as news broke that the U.S. Department of Justice had charged two men for their alleged involvement in deploying Russia-linked REvil ransomware.

He discussed the motivations for nation-state threat actors, what can be done to tamp down on ransomware and why it’s so important for everyone to protect themselves and their data.

Q. Let’s get started with just a quick overview of the threat landscape. How do nation-state threat actors stand out in terms of cyber attackers?

A. There are a lot of threat actors out there, and they target all over the place. There are the criminals who are basically taking the shotgun approach: They're just launching their attacks, and anybody who takes the bait becomes a victim, and the criminals will figure out how to monetize whatever they've gotten access to. There's a spectrum of those threat actors.

Focusing on the nation-state threat: They’re very patient. They take years to run the attack. They’re so good at it, and they’re so difficult to detect that once they’re in, they’re in for months, or maybe a year, before anybody ever figures that part out. We can provide some examples of Russian state actors, Chinese state actors, and maybe lesser-known ones, like those linked to Iran, North Korea, Vietnam.

Q. How has that changed over the years?

A. A few years ago, you could count maybe a few dozen countries that had a decent, offensive cyber capability. And now it’s probably the opposite, where there are only a few dozen countries that don’t have a decent cyber offensive capability. So it’s really come a long way. It’s not unusual to hear about actors like this now.

In the past, too, when we've talked about nation-state objectives, it was usually about cyber espionage: They were out to try and short-circuit their way to some competitive advantage over their adversaries, often by targeting research and development.

So, of course, the big motivation over the past few years has been around COVID-19. One of the obvious objectives there is vaccine development, treatments, anything that they can get their hands on in terms of being able to help their own population. And I think that makes a lot of sense.

But with these other countries sweeping in – I’m sure they’re still concerned about protecting the population from COVID-19. But they are also motivated by cash. Their objective is also, just like cybercriminals, to steal money. So they’re using ransomware to raise cash. It’s not just about intellectual property.

Q. We’ve certainly seen service disruptions, at least temporary ones, as secondary effects of ransomware attacks on hospitals and health systems. Do you think there’s a possibility that nation-states will deploy ransomware specifically to disrupt services?

A. I can’t say I could find any real-world examples of something like that. But I think it’s certainly feasible, right? There’s been a lot of media coverage and some conjecture that there’s a connection between ransomware events, disabling hospital services, and causing some patient impact.

I think that any reasonable person would probably agree that, of course, if an organization has to use paper, or they’re diverting ambulances, because their IT systems are down, there’s probably going to be some level of patient impact.

Could an adversary use a tactic like that to cause some level of disruption and essentially create a terrorist-level kind of an event? I think the answer is yes.

Especially if you are capitalizing on some natural disaster, and then making it even worse by interfering with the ability of first responders to do what they have to do, or with the ability of a hospital or health provider to help patients. I think that's certainly possible, unfortunately.

Q. I just love talking to security professionals. It's always so cheerful. That raises the question for me: The Biden administration has signaled that it would treat some ransomware attacks as akin to terrorism, and that it might respond to ransomware with military action. In the future, do you foresee a sort of ceasefire agreement?

A. I think you're on the right path. When we see events happening like that, I think this is where citizens would expect assistance of that order from the government. We don't have the ability to launch bombs or take over countries. That's where we would need the government, the military, to be able to do that.

We can do things from a malware and ransomware defense standpoint; we can try to work with the civil courts to make it harder for the bad guys to deploy malware. But when it comes to arresting people, it's law enforcement that has to do that. I can't do that.

So in much the same way, if there was a terrorist event like that, that was really causing that kind of disruption or impact to society, I would expect some kind of response from the government of that order.

“A few years ago, you could count maybe a few dozen countries that had a decent, offensive cyber capability. And now it’s probably the opposite.”

Errol Weiss, H-ISAC

Q. We’re also seeing some movement from Congress to try to implement some carrots and sticks when it comes to cyber incident response. Do you have any thoughts as to the efficacy of those proposed measures?

A. I think it was kind of a knee-jerk reaction – we’re starting to see these mandatory incident reporting requirements, and I’m not sure that’s the right way to go.

Personally, I think when it comes to the ransomware problem that we’re having today, I think it’s being fueled by the underground economy of digital currency. And that’s where I really think we need to address it. I don’t think we’re ever going to be able to get rid of digital currency. I think it’s here to stay.

But I think we've got to figure out how we can appropriately control it and regulate it so that it can't be used for what I see as so many underground, illegal activities. Criminals are able to move money around very, very easily, without any of the consequences that are established today within legitimate banking institutions.

I mean, let’s face it. Humans have been paying ransoms for a long time – a lot longer than the internet’s been around. And it’s gotten worse for all kinds of reasons.

I think we need to address some of the underlying issues here. The first payments just encouraged the actors to keep going, and now we’re seeing ransom payments that people never would have thought of five years ago. Millions of dollars. It’s unheard of.

Q. Given that environment, what are you hoping audience members will take away from your Fireside Chat this December?

A. These attacks are real. It’s not the science fiction of spy novels anymore. Everyone has got a piece of this puzzle that the adversaries are interested in. These nation-states that I mentioned have got intelligence objectives in order to capture information and protect their country. They’re trying to protect their citizens and their populations.

Right or wrong: We’ve been spying on each other for years, we’re gonna continue to do it. The internet’s an enabling way to do that.

People that are working on things like COVID-19 vaccines, treatment plans and preventive mechanisms are of high interest for adversaries. Whether you’re working on a clinical trial, or you’ve got patients that are being tested in trials, the data that is sitting inside these institutions is a treasure trove.

We’re all sitting on this data that has enormous value for other people. And while you may not have a direct role in that project or the study, you are an avenue for the adversary to obtain that information. That’s where everybody needs to be on alert.

Errol Weiss will continue the discussion at the digital Healthcare Cybersecurity Forum event with Jigar Kadakia, chief information security and privacy officer at Mass General Brigham. Their Fireside Chat, “Focus on Nation State Threats Targeting Health Providers,” is scheduled to air at 3:55 p.m. ET on Monday, December 6.

Kat Jercich is senior editor of Healthcare IT News.
Twitter: @kjercich
Email: kjercich@himss.org
Healthcare IT News is a HIMSS Media publication.

Original Post: healthcareitnews.com

How AI Bias Happens – and How to Eliminate It

Artificial intelligence holds great promise for healthcare, and it is already being put to use by many forward-looking hospitals and health systems.

One challenge for healthcare CIOs and clinical users of AI-powered health technologies is the biases that may pop up in algorithms. These biases, such as algorithms that improperly skew results because of race, can compromise the ultimate work of AI – and clinicians.

We spoke recently with Dr. Sanjiv M. Narayan, co-director of the Stanford Arrhythmia Center, director of its Atrial Fibrillation Program and professor of medicine at Stanford University School of Medicine. He offered his perspective on how biases arise in AI – and what healthcare organizations can do to prevent them.

Q. How do biases make their way into artificial intelligence?

A. There is an increasing focus on bias in artificial intelligence, and while there is no cause for panic yet, some concern is reasonable. AI is embedded in systems from wall to wall these days, and if these systems are biased, then so are their results. This may benefit us, harm us or benefit someone else.

A major issue is that bias is rarely obvious. Think about your results from a search engine “tuned to your preferences.” We already are conditioned to expect that this will differ from somebody else’s search on the same topic using the same search engine. But, are these searches really tuned to our preferences, or to someone else’s preferences, such as a vendor? The same applies across all systems.

Bias in AI occurs when results cannot be generalized widely. We often think of bias resulting from preferences or exclusions in training data, but bias can also be introduced by how data is obtained, how algorithms are designed, and how AI outputs are interpreted.

How does bias get into AI? Everybody thinks of bias in training data – the data used to develop an algorithm before it is tested on the wide world. But this is only the tip of the iceberg.

All data is biased. This is not paranoia. This is fact. Bias may not be deliberate. It may be unavoidable because of the way that measurements are made – but it means that we must estimate the error (confidence intervals) around each data point to interpret the results.

Think of heights in the U.S. If you collected them and put them all onto a chart, you’d find overlapping groups (or clusters) of taller and shorter people, broadly indicating adults and children, and those in between. However, who was surveyed to get the heights? Was this done during the weekdays or on weekends, when different groups of people are working?

If heights were measured at medical offices, people without health insurance may be left out. If done in the suburbs, you’ll get a different group of people compared to those in the countryside or those in cities. How large was the sample?
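
To make the sampling issue concrete, here is a minimal, synthetic sketch (the population sizes and heights are invented for illustration; numpy is assumed) showing how an estimate, and the confidence interval around it, shifts when the people being measured are not representative of the whole population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented population: a mix of adults and children (heights in cm).
adults = rng.normal(170, 8, size=70_000)
children = rng.normal(125, 15, size=30_000)
population = np.concatenate([adults, children])

def mean_with_ci(sample):
    """Sample mean with an approximate 95% confidence interval."""
    mean = sample.mean()
    half_width = 1.96 * sample.std(ddof=1) / np.sqrt(len(sample))
    return mean, mean - half_width, mean + half_width

# An unbiased sample: everyone has the same chance of being measured.
random_sample = rng.choice(population, size=2_000, replace=False)

# A biased sampling frame: weekday clinic visitors, say, are mostly adults,
# so children are badly under-represented in the measurements.
clinic_sample = np.concatenate([
    rng.choice(adults, size=1_800, replace=False),
    rng.choice(children, size=200, replace=False),
])

print(f"True population mean: {population.mean():.1f} cm")
for name, sample in [("Random sample", random_sample), ("Clinic sample", clinic_sample)]:
    mean, lo, hi = mean_with_ci(sample)
    print(f"{name}: {mean:.1f} cm (95% CI {lo:.1f}-{hi:.1f})")
```

Note that the clinic-only estimate is not just off; its narrow confidence interval gives false reassurance that it is precise, which is exactly why the error around each data point has to be interpreted alongside how the data was gathered.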

Bias in training data is the bias that everybody thinks about. AI is trained to learn patterns in data. If a particular dataset has bias, then AI – being a good learner – will learn that too.

A now classic example is Amazon. Some years ago, Amazon introduced a new AI-based algorithm to screen and recruit new employees. The company was disappointed when this new process did nothing to help diversity, equity and inclusion.

“All data is biased. This is not paranoia. This is fact.”

Dr. Sanjiv M. Narayan, Stanford University School of Medicine

When they looked closely, it turned out that the data used for training came from applications submitted to Amazon primarily from white men over a 10-year period. Using this system, new applicant resumes were downgraded if they contained the terms "women's" or "women's colleges." Amazon stopped using this system.
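
The mechanism is easy to reproduce with synthetic data. The sketch below is purely illustrative (it is not Amazon's system; the feature names, numbers and scikit-learn setup are invented for the example): a model fitted to historical decisions that penalized an irrelevant term learns to penalize that term itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

# Synthetic "resumes": a genuine qualification score plus a flag for a term
# (e.g., a women's organization) that is irrelevant to job performance.
skill = rng.normal(0, 1, n)
mentions_womens_org = rng.integers(0, 2, n)

# Historical hiring decisions: driven by skill, but with a biased penalty
# applied whenever the term appeared (mirroring a skewed applicant history).
hired = (skill - 1.0 * mentions_womens_org + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, mentions_womens_org])
model = LogisticRegression().fit(X, hired)

# The model dutifully reproduces the historical bias: the coefficient on the
# irrelevant term is strongly negative.
print("coef(skill)        = %+.2f" % model.coef_[0][0])
print("coef(women's term) = %+.2f" % model.coef_[0][1])
```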

On another front, AI algorithms are designed to learn patterns in data and match them to an output. There are many AI algorithms, and each has strengths and weaknesses. Deep learning is acknowledged as one of the most powerful today, yet it performs best on large data sets that are well labeled for the precise output desired.

Such labeling is not always available, and so other algorithms are often used to do this labeling automatically. Sometimes, labeling is done not by hand, but by using an algorithm trained for a different, but similar, task. This approach, termed transfer learning, is very powerful. However, it can introduce bias that is not always appreciated.
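
Here is a small illustration of how that can go wrong, using a simple pseudo-labelling setup as a stand-in for reusing a model trained on one task to label data for another (synthetic data; scikit-learn assumed). The labeler works well on the region it was trained on, but its mistakes elsewhere are inherited by the model trained on its automatic labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

def sample(n, x0_low, x0_high):
    """Toy data: the true label is 1 when x1 lies above the curve x1 = x0**2."""
    x0 = rng.uniform(x0_low, x0_high, n)
    x1 = rng.uniform(-1, 4, n)
    return np.column_stack([x0, x1]), (x1 > x0 ** 2).astype(int)

# Hand-labelled source data from a narrow region; unlabelled target data elsewhere.
X_src, y_src = sample(3_000, -0.5, 0.5)
X_tgt, y_tgt = sample(3_000, 1.0, 2.0)

# A linear "labeler" fitted on the source region, then reused to label the target.
labeler = LogisticRegression(max_iter=1_000).fit(X_src, y_src)
pseudo = labeler.predict(X_tgt)

# A model trained only on those automatic labels inherits the labeler's mistakes.
student = LogisticRegression(max_iter=1_000).fit(X_tgt, pseudo)

print("Labeler accuracy on target:", round(accuracy_score(y_tgt, pseudo), 3))
print("Student accuracy on target:", round(accuracy_score(y_tgt, student.predict(X_tgt)), 3))
```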

Other algorithms involve steps called auto-encoders, which process large data into reduced sets of features that are easier to learn. This process of feature extraction, for which many techniques exist, can introduce bias by discarding information that could have made the AI smarter in wider use – information that is lost even if the original data was not biased.
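
A minimal sketch of that failure mode, using PCA as a stand-in for an auto-encoder-style reduction (synthetic data; the point is the variance-driven compression, not the specific technique): the compression keeps a high-variance but useless feature and throws away the low-variance feature that actually carried the signal.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n = 4_000

# Feature 0: large variance but unrelated to the label (e.g., a noisy sensor).
# Feature 1: small variance but carries nearly all of the label signal.
noise_feature = rng.normal(0, 10, n)
signal_feature = rng.normal(0, 1, n)
y = (signal_feature > 0).astype(int)
X = np.column_stack([noise_feature, signal_feature])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Variance-driven compression to one feature keeps the noise, drops the signal.
pca = PCA(n_components=1).fit(X_train)
clf_full = LogisticRegression().fit(X_train, y_train)
clf_reduced = LogisticRegression().fit(pca.transform(X_train), y_train)

print("Accuracy on full features   :",
      round(accuracy_score(y_test, clf_full.predict(X_test)), 3))
print("Accuracy on reduced features:",
      round(accuracy_score(y_test, clf_reduced.predict(pca.transform(X_test))), 3))
```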

There are many other examples where choosing one algorithm over another can modify results from the AI.

Then there is bias in reporting results. Despite its name, AI is typically not “intelligent” in the human sense. AI is a fast, efficient way of classifying data – your smartphone recognizing your face, a medical device recognizing an abnormal pattern on a wearable device or a self-driving car recognizing a dog about to run in front of you.

The internal workings of AI involve mathematical pattern recognition, and at some point all of that math has to be put into a bin of Yes or No. (It's your face or not, it's an abnormal or normal heart rhythm, and so on.) This process often requires some fine-tuning, which may be done to reduce bias in data collection, in the training set or in the algorithm, or to attempt to broaden the system's usefulness.

For instance, you may decide to make your self-driving car very cautious, so that if it senses any disturbance at the side of the road it alarms “caution,” even if the internal AI would have not sounded the alarm.
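
In code, that "caution" knob is often just the threshold applied to the model's score. A toy sketch (synthetic hazard-detector data, invented numbers, scikit-learn assumed) shows the trade: lowering the threshold catches more real hazards at the cost of more needless alarms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5_000

# Toy "hazard detector": one noisy feature, label 1 means a real hazard.
x = rng.normal(0, 1, (n, 1))
y = (x[:, 0] + rng.normal(0, 1, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(x, y, random_state=0)
scores = LogisticRegression().fit(X_train, y_train).predict_proba(X_test)[:, 1]

# The same underlying model, binned into yes/no at two different thresholds.
for threshold in (0.5, 0.2):
    alarm = scores >= threshold
    missed = np.mean(~alarm[y_test == 1])      # real hazards not flagged
    false_alarm = np.mean(alarm[y_test == 0])  # needless "caution" alerts
    print(f"threshold={threshold}: miss rate {missed:.2f}, false-alarm rate {false_alarm:.2f}")
```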

Q. What kind of work are you currently doing with AI?

A. I am a professor and physician at Stanford University. I treat patients with heart conditions, and my lab has for a long time done research into improving therapy in individual patients using AI and computer methods to better understand disease processes and health.

In cardiology, we are fortunate in having many ways to measure the heart that increasingly are available as wearable devices and that can directly guide treatment. This is very exciting, but also introduces challenges. One major issue that is emerging in medicine is AI bias.

Bias in medical AI is a major problem, because making a wrong diagnosis or suggesting [the] wrong therapy could be catastrophic. Each of the types of bias I have described can apply to medicine. Bias in data collection is a critical problem. Typically, we only have access to data from patients we see.

However, what about patients without insurance, or those who only choose to seek medical attention when very sick? How will AI work when they ultimately do present to the emergency room? The AI may have been trained on people who were less sick, younger or of different demographics.

Another interesting example involves wearables, which can tell your pulse by measuring light reflectance from your skin [photo-plethysmography]. Some of these algorithms are less accurate in people of color. Companies are working on solutions that address this bias by working on all skin tones.

Other challenges in medical AI include ensuring the accuracy of AI systems (validation); ensuring that multiple systems can be compared for accuracy, which ideally would use the same testing data, even though that data may be proprietary to each specific system; and ensuring that patients have access to their data. The Heart Rhythm Society recently called for this "transparent sharing" of data.
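
As a sketch of what shared testing data buys you (entirely synthetic data standing in for patient records; scikit-learn assumed), two different systems evaluated on one common held-out set can be compared head to head, which is impossible when each vendor reports results on its own private test set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a clinical dataset (no real patient data).
X, y = make_classification(n_samples=3_000, n_features=20, n_informative=6, random_state=0)

# One shared, held-out test set so both "vendor" systems are judged on the same cases.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

system_a = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
system_b = RandomForestClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("System A", system_a), ("System B", system_b)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC {auc:.3f} on the shared test set")
```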

Q. What is one practice for keeping biases out of AI?

A. Understanding the various causes of bias is the first step in the adoption of what is sometimes called effective “algorithmic hygiene.” An essential practice is to ensure as much as possible that training data are representative.

Representative of what? No data set can represent the entire universe of options. Thus, it is important to identify the target application and audience upfront, and then tailor the training data to that target.
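
One way to make that tailoring explicit (an illustrative choice, not a prescription from the interview) is to compare the composition of the training data against the intended deployment population and reweight toward it. A minimal sketch with invented group names, proportions and scikit-learn assumed:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 5_000

# Invented demographic groups and their shares in the intended deployment population.
target_mix = {"group_a": 0.6, "group_b": 0.4}

# Training data that happens to over-represent group_a (easier to collect, say).
groups = rng.choice(list(target_mix), size=n, p=[0.9, 0.1])
X = rng.normal(0, 1, (n, 3))
y = (X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)

# Compare the training mix with the target mix, then reweight records toward the target.
train_mix = {g: float(np.mean(groups == g)) for g in target_mix}
weights = np.array([target_mix[g] / train_mix[g] for g in groups])

print("training mix:", {g: round(p, 2) for g, p in train_mix.items()})
print("target mix  :", target_mix)
model = LogisticRegression().fit(X, y, sample_weight=weights)
```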

A related approach is to train multiple versions of the algorithm, one per available dataset, with each version learning to classify its own dataset. If the classification outputs agree across the models, the models can be combined.

A similar approach is to input the multiple datasets to the AI, and train it to learn all at once. The advantage of this approach is that the AI will learn to reinforce the similarities between input datasets, and yet generalize to each dataset.
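
A small sketch of both ideas, with three synthetic "site" datasets standing in for real ones (the data-generating function and names are invented; scikit-learn assumed): train one model per dataset and check how often they agree on common cases, or pool everything and train a single model at once.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def site_data(seed, n=1_500):
    """Synthetic stand-in for one site's records; all sites share the same true rule,
    but each has its own sampling quirks (a small site-specific feature shift)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(0, 1, (n, 5)) + rng.normal(0, 0.3, 5)
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)
    return X, y

sites = [site_data(seed) for seed in (1, 2, 3)]
X_check, y_check = site_data(99, n=500)  # common cases used to compare the models

# Approach 1: one model per dataset, then check whether they agree on the same cases.
site_models = [LogisticRegression().fit(X, y) for X, y in sites]
preds = np.array([m.predict(X_check) for m in site_models])
agreement = np.mean(np.all(preds == preds[0], axis=0))
print(f"Site models agree on {agreement:.0%} of the check cases")

# Approach 2: pool all the datasets and train a single model on everything at once.
X_all = np.vstack([X for X, _ in sites])
y_all = np.concatenate([y for _, y in sites])
pooled = LogisticRegression().fit(X_all, y_all)
print("Pooled-model accuracy on the check cases:", round(pooled.score(X_check, y_check), 3))
```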

As AI systems continue to be used, one design choice is to keep updating their training dataset so that they become increasingly tailored to their user base. This can introduce unintended consequences. First, as the AI becomes more and more tailored to the user base, it may drift away from the carefully curated data originally used for training, introducing new bias.

Second, the system may become less accurate over time, because the oversight used to ensure AI accuracy may no longer be in place in the real world. A good example of this is Microsoft's Tay chatbot, which was designed to be a friendly companion but, on release, rapidly learned undesirable language and behaviors and had to be shut down.

Finally, the AI is no longer the same as the original version, which is an issue for regulation of medical devices as outlined in the Food and Drug Administration guidelines on Software as a Medical Device.

Q. What is another best practice for preventing AI bias?

A. There are multiple approaches to eliminate bias in AI, and none are foolproof. These range from approaches to formulate an application so that it is relatively free of bias, to collecting data in a relatively unbiased way, to designing mathematical algorithms to minimize bias.

The technology of AI is moving inexorably toward greater integration across all aspects of life. As this happens, bias is more likely to occur through the compounding of complex systems but also, paradoxically, less easy to identify and prevent.

It remains to be seen how this field of ethical AI develops and whether quite different approaches are developed for highly regulated fields such as medicine, where transparency and explainable AI are of critical importance, and other endeavors.

Twitter: @SiwickiHealthIT
Email the writer: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

Source Here: healthcareitnews.com

At Historic Abortion Arguments, Conservatives Signal Changes

Members of the Supreme Court's conservative majority are suggesting they may make sweeping changes to limit abortion rights in the United States.

Original Article: theday.com

Pandemic Worriers Shown to Have Impaired General Cognitive Abilities

The COVID-19 pandemic has tested our psychological limits. Some have been more affected than others by the stress of potential illness and the confusion of constantly changing health information and new restrictions. A new study finds the pandemic may have also impaired people’s cognitive abilities and altered risk perception, at a time when making the right health choices is critically important.

Source Here: medicalxpress.com
