Which is worse for an intrusion detection system: false positives or false negatives, and why?

Scientists can sometimes make mistakes or misinterpret data. One mistake that scientists can make is concluding that something is true when it is actually false or concluding that something is false when it is actually true. A false positive is when a scientist determines something is true when it is actually false (also called a type I error). A false positive is a “false alarm.” A false negative is saying something is false when it is actually true (also called a type II error). A false negative means something that is there was not detected; something was missed.


For example, a teacher puts out a jar full of candy and asks each student to hypothesize how many candy pieces are in the jar. John hypothesizes that there are 42 candies. John counts the number of candies in the jar. There are 42 candies—John is correct! However, John did not realize that he accidentally missed a few candy pieces that fell on the floor while he was counting. There are actually 46 pieces of candy. In this example, John has made the mistake of a false positive. He said something was true (that his hypothesis of 42 candies in the jar is correct) when it was actually false (there are really 46 candies in the jar). In other words, he accepted his hypothesis when his hypothesis was actually false.

Sarah also makes a hypothesis about the number of candies in the jar. Sarah hypothesizes that the jar contains 46 candies. Sarah also counts the number of candies in the jar. Like John, Sarah accidentally misses a few candy pieces and counts 42 pieces. Sarah rejects her hypothesis. Sarah has made the mistake of a false negative. She said her hypothesis of 46 was false when it was actually true (there really were 46 candies in the jar). This means that Sarah rejected her hypothesis when it was actually correct.

SF Table 1.3 shows how the decision about accepting or rejecting a hypothesis creates true or false conditions based on the relationship between the hypothesis and reality.

SF Table 1.3. Relationship between reality and hypothesis decisions
Decision            | Reality: Hypothesis True        | Reality: Hypothesis False
Hypothesis Accepted | True Positive (correct outcome) | False Positive
Hypothesis Rejected | False Negative                  | True Negative (correct outcome)
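The four outcomes in SF Table 1.3 can be sketched as a small function (a minimal illustration, not part of the original table):

```python
# Label the outcome of a decision, given whether the hypothesis was
# accepted and whether it was actually true (SF Table 1.3).
def classify_outcome(accepted: bool, actually_true: bool) -> str:
    if accepted and actually_true:
        return "true positive"    # correct outcome
    if accepted and not actually_true:
        return "false positive"   # Type I error: a "false alarm"
    if not accepted and actually_true:
        return "false negative"   # Type II error: something was missed
    return "true negative"        # correct outcome

# John accepted his hypothesis (42 candies) when it was actually false.
print(classify_outcome(accepted=True, actually_true=False))   # false positive
# Sarah rejected her hypothesis (46 candies) when it was actually true.
print(classify_outcome(accepted=False, actually_true=True))   # false negative
```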

All tests have a chance of resulting in false positive and false negative errors. They are an unavoidable problem in scientific testing, and they create problems in data analysis in many scientific fields. For example, a blood test can be used to screen for a number of diseases, including diabetes. To test for diabetes, doctors look at the sugar level in the blood when a person has not eaten recently. High blood sugar while fasting is an indicator of diabetes. If a patient did not fast before their blood test, the test may show high levels of blood sugar, and the patient may be diagnosed with diabetes when they actually do not have the disease. This is a false positive, and it can lead to unnecessary medical treatment. On the other hand, a false negative is when the test shows that a patient does not have diabetes when they actually do. In this case the patient may not get treatment and may get worse because their disease was undetected.

These examples demonstrate that scientists have to be careful when they make decisions. They try to minimize errors by collecting additional information or performing a test multiple times. This is difficult because reducing one type of error often increases the other type of error. Based on the consequences of their decision, one type of error may be preferable to the other.
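The trade-off between the two error types can be seen by sweeping a decision threshold, as in the blood-sugar example above. The numbers below are purely hypothetical glucose readings, chosen only to illustrate the pattern:

```python
# Toy data (hypothetical fasting glucose readings): two overlapping groups.
healthy = [85, 90, 95, 100, 110, 115]      # people without the disease
diabetic = [105, 120, 130, 140, 150, 160]  # people with the disease

def error_counts(threshold):
    """Flag 'diabetes' for any reading at or above the threshold."""
    false_positives = sum(1 for x in healthy if x >= threshold)
    false_negatives = sum(1 for x in diabetic if x < threshold)
    return false_positives, false_negatives

# Lowering the threshold reduces false negatives but raises false positives,
# and vice versa -- reducing one error type increases the other.
for t in (90, 110, 130):
    fp, fn = error_counts(t)
    print(f"threshold {t}: {fp} false positives, {fn} false negatives")
```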

In criminal courts, it is generally considered preferable to make a false negative error, where a guilty person is found innocent, than to convict someone who is innocent (a false positive). On the other hand, with security metal detectors, security personnel would prefer the metal detector to indicate it found metal even when none is present (a false positive) than to fail to detect metal when it actually is present (a false negative). A false negative could be a security risk.

Because scientists know they might have made an error, they are clear about their procedure and how confident they are in their decision when they share their results.


If you’re working with antivirus software, anti-malware software, or intrusion prevention systems, you may run into cases where you might get a false positive or a false negative. Let’s look at both of these situations, and see how we can resolve these particular issues.

A false positive is when you receive an alert from a security device telling you that there was a problem, but the security device is actually incorrect. This is a positive result, but it's a false positive, which means there wasn't really a problem to begin with.

If you're getting a message from an intrusion detection system or intrusion prevention system, these alerts are usually based on signatures. A piece of information has gone through the IPS that matches a signature, and the system is informing you of that match. Since we generally have to rely on these signatures, you always want to be sure that you're updating to the latest signatures so that many of these false positives don't occur.
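At its simplest, signature-based detection is substring matching against known-bad patterns. Real IDS/IPS rule languages (Snort rules, for example) are far richer than this, but a minimal sketch with made-up patterns looks like:

```python
# Hypothetical signature database: byte pattern -> alert name.
# Real signatures also carry protocol, port, and offset conditions.
signatures = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection attempt",
}

def inspect(payload: bytes):
    """Return the alert names for every signature found in the payload."""
    return [name for pattern, name in signatures.items() if pattern in payload]

print(inspect(b"GET /../../etc/passwd HTTP/1.1"))  # ['path traversal attempt']
print(inspect(b"GET /index.html HTTP/1.1"))        # []
```

A benign payload that happens to contain a signature byte pattern would trigger an alert here anyway, which is exactly how signature-based false positives arise, and why signature updates that refine these patterns matter.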

These false positives can also occur with antivirus or anti-malware software. For instance, in April 2010, McAfee VirusScan decided that the Windows system program svchost.exe was a virus. That was certainly a false positive: svchost.exe is an integral part of the Windows XP operating system. The software removed that file, which meant that affected Windows XP SP3 devices could not boot. You had to correct that before rebooting the machine, or once it was rebooted, you had to go through a recovery process.

In October of 2011, Microsoft Security Essentials thought that the Chrome Browser was a piece of malware called Zbot, and it deleted the entire browser. So you would try to load the Chrome browser onto your machine, and it would simply be deleted because of the false positive associated with this inside of Microsoft Security Essentials.

If you're trying to determine whether something is really a false positive or not, you might want to get a second opinion. A good choice is the now Google-owned virustotal.com, where you can point to a particular file on the internet or upload your own files, and see what the results might be across many different types of security software.

This is the virustotal.com website, and I've chosen to upload a file called GPpdate that I received in my email. I suspect the virus writer was trying to get me to run this, thinking it was GPUpdate, the Group Policy Update tool inside of Microsoft Windows. Let's choose to scan that file. It's going to be uploaded, and this file has been seen before by VirusTotal. But I'm going to ask it to re-analyze this file so that we're able to see the process it goes through.

Behind the scenes, VirusTotal runs a lot of different antivirus engines against this particular file. So it gives us the name of the file, and it tells us how many of the different antivirus and anti-malware engines detect this particular file as being malicious or benign. And you can see, so far only one out of the 27, then one out of the 36 that have been checked, is showing up as malicious software. Looks like we've got two now out of 54.

So as it goes through the scan, you can see the different software, like Avast!, Doctor Web, F-Secure, Fortinet, and Kaspersky. There's a lot of different software represented, but only 2 out of those 54 recognized this particular file as something malicious.

You can also get more details on these files. It'll even run through different types of heuristics. For instance, F-Secure found that this was indeed suspicious, and Symantec also categorizes this as suspicious. It doesn't have an exact match for this particular file, but it does notice that the file is doing things that it should not be doing, so it generically categorizes it. This gives you at least some idea, when you suspect a false positive, of whether a file really is malicious or whether it's something that's not going to harm any of your computers.

The opposite of a false positive is a false negative. That means that you did not receive any alerts, no bells went off, there were no sirens, but something bad actually did get through your security systems. This got right through your defenses, and it’s difficult now to go back to determine if there was a false negative or not, because there’s no way to really rewind and know exactly where this might have come into your network.

This is completely silent, so if you had to reconstruct how a piece of malware got into your environment, it becomes a lot more difficult. You want to be sure to check the industry tests for hits and misses. Generally, antivirus software and intrusion prevention software and hardware go through a number of industry tests where certain files are sent through. You can then examine how many of them were identified, how many produced false positives, and how many were missed completely; the complete misses can then be categorized as false negatives.
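Scoring a product in such a test comes down to a few ratios. The counts below are hypothetical, not from any real industry test:

```python
# Hypothetical industry-test results for one security product.
malicious_sent = 1000    # known-bad samples sent through the product
malicious_caught = 970   # known-bad samples that raised an alert
benign_sent = 5000       # known-good samples sent through
benign_flagged = 15      # known-good samples wrongly flagged (false positives)

detection_rate = malicious_caught / malicious_sent
false_negative_rate = (malicious_sent - malicious_caught) / malicious_sent
false_positive_rate = benign_flagged / benign_sent

print(f"detection rate:      {detection_rate:.1%}")       # 97.0%
print(f"false negative rate: {false_negative_rate:.1%}")  # 3.0%
print(f"false positive rate: {false_positive_rate:.1%}")  # 0.3%
```

Because false negatives are silent in production, these published test rates are often the only visibility you get into how much a product misses.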

What is worse in firewall detection, a false negative or a false positive, and why?

Q: Which is worse in terms of Firewall detection, and why? A false positive or a false negative? A: A false negative is worse by far. A false positive is simply a legitimate result that just got incorrectly flagged.

Which is worse in IT security: a false positive or a false negative?

Though both of these are a problem, a false negative is more damaging because it lets a problem go undetected, creating a false sense of security. Whereas a false positive may consume a lot of a tester's energy and time, a false negative allows a bug to remain in the software.

Is it better to have too many false positives or too many false negatives? Explain.

Based on the consequences of their decision, one type of error may be preferable to the other. In criminal courts, it is generally considered preferable to make a false negative error, where a guilty person is found innocent, than to convict someone who is innocent (a false positive).

Why should we reduce false positives in the intrusion detection systems?

Computational power and valuable resources are wasted when irrelevant data is processed and flagged, an analyst is alerted, and the data is finally disregarded. To make intrusion detection systems more efficient, the false positive rate must be reduced.
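A back-of-the-envelope calculation shows why even a small false positive rate wastes significant analyst time at network scale. All numbers here are hypothetical:

```python
# Hypothetical daily workload for an IDS and its analysts.
events_per_day = 1_000_000    # benign events inspected by the IDS each day
false_positive_rate = 0.001   # 0.1% of benign events wrongly flagged
minutes_per_alert = 5         # analyst triage time per alert

false_alerts = events_per_day * false_positive_rate
wasted_hours = false_alerts * minutes_per_alert / 60

# Even a 0.1% false positive rate produces 1,000 false alerts a day,
# which is roughly 83 analyst-hours of wasted triage.
print(f"{false_alerts:.0f} false alerts/day, ~{wasted_hours:.0f} analyst-hours wasted")
```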