MSE is indeed impressive with their new cloud system. It blacklists threats very quickly with recent updates. But it also generates way more false positives.

I personally tend to interpret the FP portion of such 3rd-party tests as follows: if a product performs poorly in the FP test, it is indeed bad (and there is actually a noticeable gap between 1~2 FPs and 4~5 FPs in such a test when using the product in real life). For some "low FP" products in such tests, one can easily make them generate an FP using simple tools and innocent code (like hello world). On the other hand, if a product performs well in such an FP test, it doesn't necessarily mean it is indeed low in FPs in real life. And for some static ML engines with low FPs in such tests, one can easily trigger an FP by randomly padding zeroes and ones at the end of a benign file (see the sketch at the end of this post). Such a product is susceptible to "pool pollution" attacks and can be bypassed with clever social engineering.

So there is a trade-off here: you can do conservative detection in favor of fewer FPs and leave aggressive detection to IT administrators, or you can be more aggressive in detection and optimize 3rd-party FP test scores using certain white samples in the training set (e.g. a majority of AVC's FP test samples are software from the top download category of software distribution sites). But whether the optimization in the latter case is good enough to suppress FPs in real life is questionable.

As I said, a bad FP score indicates a bad product, but a good FP score doesn't necessarily mean the actual FP rate of a product is low. ESET had 0 FPs in Aug 2017, while MSE had 2 FPs in Jan 2018 and 1 FP in Feb 2018, from 1,500,000 samples, which is 0.00006%. There is no free lunch here, and my personal experience indicates that ESET tends to favor fewer FPs over more aggressive detection in consumer products.
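To make the padding trick mentioned above concrete, here is a minimal sketch, assuming a locally available benign executable named hello_world.exe and an arbitrary padding size; the file names, the 64 KB figure, and the use of Python are my own illustrative choices, not anything taken from the tests or products discussed.

```python
# Hypothetical sketch of the "pad a benign file" trick described above.
# It copies a known-clean executable and appends random 0x00/0x01 bytes,
# which can shift the byte-level features some static ML engines score on.
# File names and padding size are illustrative assumptions only.
import random
import shutil

SRC = "hello_world.exe"        # assumed benign sample
DST = "hello_world_padded.exe" # padded copy to rescan
PAD_BYTES = 64 * 1024          # arbitrary amount of trailing padding

shutil.copyfile(SRC, DST)
with open(DST, "ab") as f:
    # Append random zeroes and ones; the program itself still behaves the same,
    # since loaders generally ignore trailing overlay data in executable files.
    f.write(bytes(random.choice((0, 1)) for _ in range(PAD_BYTES)))

print(f"Wrote {DST} with {PAD_BYTES} bytes of random 0/1 padding appended.")
```

The point of the sketch is only that the padded copy is functionally identical to the original, so any change in verdict between the two files would come purely from the engine's reaction to meaningless trailing bytes.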