Another Security Vendor Shows Its Security Culture

 (Hint, it’s not in a good way)

More news about large vendors making rank-amateur mistakes and pretending that shouldn't affect the value of their products and company.  Pricey, decidedly non-entry-level security company Trend Micro was hit by Google's own code team, the same group that outed AVG's disastrous Web TuneUp, in another angry exchange.  Let's read one of Google's engineers' thoughts on this issue, originally from an email exchange with folks at Trend Micro.

“I don’t even know what to say—how could you enable this thing *by default* on all your customer machines without getting an audit from a competent security consultant?”

And further on:

“So this means, anyone on the internet can steal all of your passwords completely silently, as well as execute arbitrary code with zero user interaction. I really hope the gravity of this is clear to you because I’m astonished about this.”

Yet again, we see supposedly veteran companies making mistakes they simply wouldn't make if they had sufficient processes and culture in place to prevent these mishaps.

You know what also bothers the heck out of me?  This statement from Trend Micro itself.  Excerpted, obviously.

“As part of our standard investigation we checked and verified that the only product affected by these issues is our consumer Trend Micro Password Manager and no commercial or enterprise products are affected.”  (My emphasis.)

So… those users who may have lost personal account information are less deserving of secure development practices, and the more secure product they yield, than commercial or enterprise users?  You are a security company; trying to justify exceptionally poor development because the customers are smaller (less wealthy is what they really mean) is akin to a doctor going light on hygiene because his patients are mainly poor.  That defeats the purpose of getting security, or medicine, in the first place, if those providing it are only going to introduce worse damage in the process.

I think this is the problem: Trend Micro has the idea that it can separate the security of wealthier customers from that of poorer customers.  I believe they believe that if you are unwilling or unable to spend more money with them, then they have no innate pact with you to provide a secure product, or at least decent development practices, in exchange for what small amount of currency you can give them.  This is wrong.  Secure development practices are as integral to security software as sanitation and hygiene are to surgery, and they cannot be skimped on, even if the service is free.  Not that Trend Micro offers anything free.

Disagreeing with Giants and Badly Framed Arguments

There has been a continued dialogue in IT security over the priorities between business ability/flexibility and security.  The general consensus, from college courses and instructors to private-sector CIOs and crypto-gods like Bruce Schneier, is that security should always follow after business opportunity.

That is fine.  That is what we call a hypothesis, or conjecture, because it is a statement without facts, evidence, or even experiments to back it up.  And until recently I bought into it as well.

But no longer.  This change came about from reading articles supposedly demonstrating the above conjecture through… a poll.  Yes.  You read that right.  A single poll was touted by security god and generally more-critical thinker Bruce Schneier as demonstrating the proof of his opinion.  Let's drop those links for you to look at.

This is the first appearance of his post.


This is the most recent one and the one that triggered my thoughts.


This link deals with the original poll and article.

            So let’s examine the claims.

“This article demonstrates that security is less important than functionality.”

(Referencing the link at the end of the article)

Did you even read the article?  It took an informal poll of people generally dealing with IT security and/or budgets, gave them a question requiring a yes-or-no answer, without reference to numbers, metrics, or studies that would make this poll anything other than an opinion poll, and then regurgitated those numbers back at us as if they mean anything.

Now let us be clear: an opinion poll can prove that vanilla or chocolate is America's favorite ice cream flavor, but opinion polls cannot, by design, prove anything outside of opinions.  Like the coefficient of friction.  Or gravity.  An opinion poll of scientists proves nothing other than that X amount of Y believe Z, or not.

But Mr. Schneier does not use the word 'prove'; he uses the word 'demonstrate'.  I believe my disagreement stands with whichever word you wish to use: what this poll shows is the prevailing attitudes of the 800+ IT and executive personnel interviewed with a flawed question that badly frames the wrong argument.  So, because I do not trust that the reader has gone and read the article about the original poll, let's quote it inline.

“When asked about their preference if they needed to choose between IT security and business flexibility, 71 percent of respondents said that security should be equally or more important than business flexibility.”

            “But…(irrational thought process and wording removed) when the same people were asked if they would take the risk of a potential security threat in order to achieve the biggest deal of their life, 69 percent of respondents say they would take the risk.”

First of all, what is “the biggest deal of their life,” and how does that scenario apply to the real-world conundrum stated above, that business flexibility stands above security?  This is a weasel phrase, a completely subjective griffin of a concept that can in no way be formally quantified for proper dissection.  We need numbers.

Second of all, what is 'a security risk'?  That runs the gamut from nothing to everything!  It is not a useful phrase; it gives me nothing other than what I, from my own personal experience, can read into it, and therefore numbers derived from it mean nothing.  That is not a demonstration of anything valuable, and it certainly does not demonstrate any guiding principle beyond prevailing attitudes.

I'd like to point out that the vast majority of business-decision conflicts with security are not a “biggest deal of our lives” versus “security breach.”  That makes the scenario less than favorable for getting any real, objective data out of it for us all to look at and decide for ourselves.  And by “less than favorable,” I mean it is an incorrect model to look at, think of, or poll people on, and it will create false results if you treat it otherwise.

If I had wanted to find out how humane or compassionate people are, I wouldn't dream up a scenario with Mother Teresa and Adolf Hitler and poll people on that; it doesn't inform on the subject at hand, which is always real people, real-world problems, real-world solutions.  And neither does this poll question.


However, that is OK, because what I now think and believe is that I, Bruce Schneier, and most of the security community have been looking at this the wrong way, brainwashed by decades of prior belief and absolutely zero research and proof.  That is a big statement to make, so let's get into why I believe it.

First of all, the idea that a business is the one that suffers a breach isn't exactly true.  I know: Sony clearly suffered a hack two years ago, Experian this year, and Target… often, as well.  So what can I mean when I say businesses and organizations do not suffer breaches?

            Well, who is doing the hacking, what are they after, and how are they going about getting it?

The vast majority of attacks nowadays are done by professional criminal gangs for the express purpose of monetary gain through identity theft, medical-record theft, credit card fraud, and so on.  On the flip side, corporate and government espionage does exist, but the idea that the most common and realistic breach scenarios involve people seeking secret or unique intellectual property, like software pirates and attacks on government contractors, is dishonest at best.  The truth is that customer data is the gold at the end of the rainbow for the vast majority of attackers and victims, not the company's info.

So here is where we have to split hairs.  You, as the business owner, may think the personally identifiable information you hold on your customers is yours, but the truth is, it isn't.  That security breach at Experian I mentioned?  Yeah, that outed the financial data of 1 out of every 3 Americans, and it wasn't even the first time Experian did that.  Over 100 million Americans affected, having to watch their credit reports for fraud and fight the charges when they occur over their entire lifetimes, and we say the company suffered the breach?  That doesn't fit the reality at all.  Experian didn't lose their data; they lost my data.  They lost your data.  His data.  And while Experian gave those customers a subscription to identity-theft-protection products, which have no good track record (some famous ones have been sued for failing at the job), once your Social Security number is leaked to malicious actors, it stays leaked forever, waiting to be resold to another botnet or scammer or what have you.  Twenty years from now, those 100 million Americans' information will still be floating around, while Experian merely wrote off the expense of a LifeLock-esque product for those customers over a 3-to-5-year span, and probably passed the cost back onto the customers, who were the ones affected to begin with.

I do agree, Experian is only one example and thus cannot be indicative of a greater trend without more examples, but if we look at the math (and I welcome anyone who has already done that legwork for us), I believe customer data is overwhelmingly the most common target of hackers and hacks, and that the problem is most keenly felt by the customers of attacked companies, not the companies themselves.

So that is the problem.  Business owners only see their own financial loss or gain, when the fact is, those numbers are the tip of the iceberg of a security breach's effects.

To me, it is like buying a brand-new door, coming home the next day to find it busted in and your home and effects burned to the ground, only to hear on the radio how the door company is bemoaning the loss of its door, fully ignoring that the vast majority of the misery created by the crime was suffered not by the stockholders but by the customer using the product.

So of course, if this is MY data and this is the biggest deal of MY life, I'd take that risk.  But that is never the issue, or at least it is such a rare, magical instance that I wonder why it wasn't included as part of a BuzzFeed quiz rather than on a legitimate web source for IT info.

See, this is how you can frame an argument badly and lead everyone completely astray from the facts and problems.  It is a problem that can occur not only out of maliciousness but out of ignorance.

The real question for these CIOs, CISOs, etc. is this: since the numbers currently available put the average cost of identity theft at $1,500.00 USD per victim, how much will your business deal make you, versus what it could potentially cost your customers?  How much does it need to make before it is worth the potential loss of customers' data?  Would you want those metrics, once created, to be made public?  What kind of fallout would you expect from those revelations, if any?  And would you use a publicly available risk-benefit metric, computed on your company by an impartial third party, as a selling point against your competitors if you were found to take less risk with customer data for more reward than they do?
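To make that question concrete, here is a minimal sketch of such a risk-benefit metric in Python.  The $1,500-per-victim figure is the one cited above; every other number (deal size, records at risk, breach probability) is a hypothetical input a CIO would have to supply, not data from any real company.

```python
# Sketch of a deal-versus-customer-risk metric.  Only the $1,500
# average identity-theft cost comes from the text; the rest are
# hypothetical inputs.

COST_PER_VICTIM = 1500.00  # average identity-theft cost per victim, USD


def expected_customer_loss(records_at_risk, breach_probability):
    """Expected cost pushed onto customers if the deal's risk is accepted."""
    return records_at_risk * breach_probability * COST_PER_VICTIM


def deal_is_defensible(deal_revenue, records_at_risk, breach_probability):
    """True only if the deal earns more than the expected harm to customers."""
    return deal_revenue > expected_customer_loss(records_at_risk, breach_probability)


# Hypothetical example: a $2M deal that puts 100,000 customer records
# behind a system with an estimated 5% chance of breach.
loss = expected_customer_loss(100_000, 0.05)  # 100,000 * 0.05 * $1,500 = $7,500,000
print(f"Expected customer loss: ${loss:,.0f}")
print("Defensible?", deal_is_defensible(2_000_000, 100_000, 0.05))  # False
```

Notice the asymmetry this exposes: the business books the $2M; the customers eat the expected $7.5M.  That is the whole point of making the metric public.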


Oh, and, why didn’t anyone else think of this already? I cannot be the first.

Ring in the New Year with anti-virus vendor malfeasance!

Like Lenovo with their Superfish debacle, and like AVG with a previous issue last year, it was discovered by security researchers (well, Google's own code team for the Chrome browser project) that anti-virus vendor AVG had a product that, when installed, overrode Chrome's prior protections for web browsing and instituted its own that, very simply, did not work, creating a huge risk for users of the free anti-virus product.

I understand: it was a free product, and there is a lot you shouldn't expect from free products, especially the kind of polish you'd get on a paid-for product.  But there is just no place for this level of bad judgement in our professional products.

Let's be clear, this isn't opinion; these developers violated well-known practices when they did this to their end users' computers.  Let's read one of the original reports:

“This extension adds numerous JavaScript API's to chrome, apparently so that they can hijack search settings and the new tab page. The installation process is quite complicated so that they can bypass the chrome malware checks, which specifically tries to stop abuse of the extension API.”

What is funny is that the exploit they created was the first kind of exploit I myself ever successfully tried out… in a lab, in full compliance with the law, of course.  If a computer writes its instructions at known locations in memory every single time, we can attack that code and space!  I used a buffer-overrun exploit to achieve this code misdirection, but in this case that wouldn't even have been needed.  One article calls the lack of secure design a “common problem with programmers,” but I disagree.  A 20-plus-year-old design flaw in secure programming shouldn't find its way into a security product like this.  Bad QA processes, bad development processes, bad customer relations: those are the issues here.

It is difficult to believe that AVG developers worked so hard to defeat a protection mechanism unknowingly or innocently.  Bad business is bad for business, and as noted, AVG is developing quite a history of introducing security defects into their security products.

BadBIOS is Back! Or is it…?

A couple of years ago, there was a major scare about a type of malware that was supposedly able to infect different computers' BIOS systems, as well as infect devices using inaudible sound frequencies.


               And now, in 2015, we have the same idea floating around, that devices like TVs and phones can communicate using inaudible sound frequencies and become infected.

This is absolute horse crap.  Let us be clear on how (A) manufacturers work and (B) your computer works.

Manufacturers minimize cost at all times.  If they sell you an FM/AM radio, then even though the CB band, FRS bands, and various other radio bands exist, your run-of-the-mill radio only picks up FM/AM, and only within certain bands.  Phones and computers work the same way, in hardware as well as software.  Want to know why your $700 iPhone sounds like crap when you are on hold and classical music is playing?  Because phone manufacturers and carriers don't give you the bandwidth to carry the variety of sound an orchestra plays; they give you the minimum for human speech and hearing only.  It would be different for someone making speakers specifically for ultrasonic or subsonic sound creation or amplification, but those would be unlikely, inefficient, or too expensive to also produce audible sound.
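The hold-music point is just the Nyquist limit at work: a channel sampled at fs Hz can only carry frequencies up to fs/2.  A quick sketch, using 8 kHz (standard narrowband telephony sampling) and 44.1 kHz (CD audio) as the two illustrative rates:

```python
# Nyquist limit: a digital audio channel sampled at fs Hz can
# represent frequencies only up to fs / 2.

def nyquist_limit_hz(sample_rate_hz):
    """Highest frequency representable at a given sample rate."""
    return sample_rate_hz / 2


print(nyquist_limit_hz(8_000))   # 4000.0 Hz -- narrowband telephony: speech survives, violins don't
print(nyquist_limit_hz(44_100))  # 22050.0 Hz -- CD audio: the full audible range
```

An orchestra's harmonics run well past 10 kHz, so everything above the 4 kHz telephony ceiling is simply discarded, which is exactly why hold music sounds the way it does.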

Now, if one were to buy a “software-defined radio,” those are meant for observing a huge range of radio bands and frequencies, but they are also many times more expensive.  Expect your speaker and microphone manufacturers to do the same minimization of costs.  Why would a speaker be made, at greater expense, to create a sound that cannot be heard by users?  Your guess is as good as mine!  And why would a microphone be designed, at extra cost, to pick up sounds that can be neither created nor heard by people?

They aren't.  Or at least, you have to pay extra and spend effort finding these types of devices.  Yes, we have ultrasonic range finders, but without a microphone to receive the signals, they can't infect anything.

How your computer works: your computer listens for commands through various routes.  Human interface devices (HIDs) like the keyboard and mouse are one thing it listens for, typically through a USB port nowadays.  There are various other routes, like the many internal cables within your system and motherboard.  Is it possible that your computer could be hacked so that the audio cable, and the information it carries, is rerouted or altered for malicious means?  Sure, that's possible, at some level.  So how can your infected computer play sounds, even these inaudible ones we do not believe are possible through the vast majority of commercial speakers, and infect another computer?  Well, the second computer would have to have a microphone attached.  Let me repeat: your computer has no ears!  You have to add a microphone to get it to hear anything, and THAT microphone needs to be configured to listen for inaudible sounds and record them.  And even then, there has to be an actual vulnerability in the software to get it to somehow execute this inaudible sound code.  So both your computers have to be already hacked and connected to hardware devices we simply never see in clients' offices.

Researchers claimed to have built proof-of-concept exploits with this attack, but again, it requires both computers to be hacked already and to have hardware set up and configured in ways you simply do not use.  In addition, they did not say what sound frequencies were used, only that they were inaudible and that a sonar-like application stack was used to communicate with similarly set-up computers within 20 meters of each other.  However, as stated by one researcher:

“My theory is that this technology could be used to provide targeted malware a means of external communication for contact with a command and control server. The infected system would receive commands from the server and assuming that the initial infection on the covert system was via USB drive, perhaps the malware could store stolen data on the USB.”

What he means is that using sound as the initial infection route is highly unlikely.  At best, it provides a novel way to exfiltrate data out of infected computers to other infected computers, with sound cards and microphones set up for exactly this exploit, within 20 meters of each other, at the current rate of two letters a second: 20 bits per second, far slower than even a circa-1984 modem.
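A quick back-of-the-envelope makes the slowness concrete.  The 20 bits-per-second figure is the reported channel rate; the 1 MB payload is an arbitrary example of mine:

```python
# How long the reported ~20 bit/s acoustic channel would take to
# move a payload.  Payload size is an arbitrary illustrative choice.

def exfiltration_time_days(payload_bytes, bits_per_second=20):
    """Days needed to move payload_bytes over the acoustic channel."""
    return payload_bytes * 8 / bits_per_second / 86_400  # 86,400 s per day


# A single 1 MB file:
print(round(exfiltration_time_days(1_000_000), 1))  # ~4.6 days
```

Nearly five days of uninterrupted, undetected ultrasonic chirping to move one megabyte.  Useful for smuggling out a key or a password list, perhaps, but not much else.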

The problem is the number of security bloggers who do not understand the difference between exfiltration of data and malicious infection.  This technique is almost impossible to use as an initial route of infection, but if you have already been attacked, and both computers have the necessary equipment to create and understand those audio signals, it can exfiltrate data almost invisibly.

Bloggers like this guy:

       “Hackers Can Infect Your Computer Even If It’s Not Connected To The Internet”.

No, they haven't been able to do that yet, and such a method does not seem very practical.  The researchers were very keen to admit the limitations of this attack, limitations not apparent in the title.  Which makes the title false.

And this one from Dark Reading, who says “the proofs of concept brought forward are not all farfetched.”

Yes.  Yes, they are.  At least this blogger recognizes that the technique is only good for getting data out, not for infecting computers, but the ideas presented in all the proofs of concept are exceedingly specific and do not match any network configuration we have seen or heard of.

If you or someone you know has ultrasound speakers and ultrasound-capable microphones connected to their computer at all times, feel free to let us know; there is probably at least one of you out there.

In the meantime, do not worry about malicious inaudible sounds coming from your TV.  It is the audible sounds they blare all day that contain the real danger.  Keep an open mind and a pinch of salt available at all times.

Using Data to Drive Security Programs

Micro$oft has produced a white paper concerning their data metrics on security issues and how businesses often do not use, or do not have, these metrics when choosing where to spend IT security money and time.

It is a very interesting read, and I'd recommend all business owners and IT managers read through it.  For the tl;dr crowd (Too Long; Didn't Read): the biggest threats to most businesses are social engineering, malicious emails, and poor patching practices, yet these too often receive very little attention in organizations of all sizes.  Let's go through these big three issues and discuss what they are and how you can mitigate their effects on your business.

Social engineering: Social engineering can take many forms, but it typically relies on human kindness and fear to achieve its ends.  A couple of examples:

A security researcher picked a woman at random from the phone book, looking for unusual names that may indicate an older person, like Geraldine or Esther, and told her he was from a credit card company's fraud services and that someone was attempting to open credit in her name.  He then convinced the lady to give him her Social Security number so he could check it against the credit offer.  Of course, there was no fake credit application, but now he had her real SSN.  He told her the SSNs didn't match, that they were denying the “fake” credit application, and that they would be in contact later.

In another, a call to a secretary has the person on the other end purporting to be a new employee under one of the less stable managers in the company, requesting information they shouldn't have on the pretense that they need it or they are going to get yelled at, fired, etc.  Just give them some info so they can do their job….

If you ever watch mystery series or read mystery books, social engineering should be familiar to you.  It relies on human kindness and connection, and it can break almost every other security implementation in place.  It is one of the biggest threats to companies, and it requires regular security training and education to prevent.

Malicious emails: Despite the numerous calls for the death of email, businesses are still using it, with basically no replacement in sight.

               Emails can be dangerous for a number of reasons, and they often interconnect with social engineering.

Malicious payloads: Very simply, the email itself can be infected and present a security risk whether you click on anything or not.  This happens quite a lot, as email and email clients are almost guaranteed to be in place, and there are only a limited number of platforms for attackers to hit.

Malicious links: This has two issues, one being bad or spoofed links to known good sites, the other being malicious payloads delivered by the link itself.  Examples:

You get an email that says it is from Experian Card Services, claiming they have detected fraud on your account.  The email contains a link asking you to log in with your full name and SSN; the link's display text looks like Experian's site, but when you examine the actual link, it points to some look-alike site or a jumble of numbers and letters.

You get an email from your local IT security company telling you to follow a link to download a Micro$oft white paper PDF on social engineering, but the PDF itself carries a malicious payload.  Again, the vast majority of businesses have a way to view PDF files, and they use a limited number of clients, i.e. attack surfaces, to do it, making this a fairly effective attack.
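The first example hinges on a link whose visible text and real target disagree.  Here is a minimal sketch of that check in Python; the domains are made up for illustration (`.example.net` is a reserved domain), and a real mail filter does far more than this:

```python
# Minimal spoofed-link check: does the text a link *shows* name a
# different host than where the link actually *goes*?  Hypothetical
# helper for illustration only; real filters inspect much more.
from urllib.parse import urlparse


def looks_spoofed(display_text, actual_href):
    """True when the displayed host and the real target host differ."""
    shown = urlparse(display_text if "://" in display_text
                     else "https://" + display_text).hostname
    actual = urlparse(actual_href).hostname
    return shown is not None and actual is not None and shown != actual


print(looks_spoofed("www.experian.com",
                    "https://experian-cardservices.example.net/login"))  # True
print(looks_spoofed("www.experian.com",
                    "https://www.experian.com/fraud"))                   # False
```

This is exactly the check you can do by hand: hover over the link and compare what it says against where it goes.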

The ultimate solutions to dangerous emails are system hardening and social engineering training.

               System hardening, because email is ubiquitous, and has often offered a way straight into someone’s network without needing a human to click a bad link.

Social engineering training, because this is the best way to spot and stop the attacks.  The ultimate answer to an email that may be faked from a legitimate company is to not use any of the contact info in the email.  It may provide a phone number, an email address, whatever, but you should not trust anything in the email.  Use Bank of America?  Don't trust the email link; use your own previously created bookmarks to go to the site and check what you need to.

               Poor patching practices.

M$'s definition of poor patching practices focuses on the fact that not all unpatched programs represent the same level of threat, and that too often unimportant programs get patched while important ones stay unpatched because… reasons.  They make the point that unpatched programs are an issue, but that only a few serve as mainstay attack vectors, because companies do not patch them.  One of the best excuses I saw in the white paper, and that I have seen in person, comes from companies who cannot update their programs because it will break what they are doing.

Personally, we consider scenarios where patching breaks functionality to be absolutely substandard and nonfunctional, and the fault of bad developer practices and poor support rather than user error; bad development and support are usually 98% of the problem here.  If you see yourself in this situation, you need to extricate yourself from it immediately.

So part of poor patching practices is attempting to patch everything: limited resources typically mean the easiest issues get fixed while the biggest security issues remain.
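That triage idea can be sketched in a few lines of Python: rank unpatched systems by the risk they carry, not by how easy the patch is.  All the names and scores below are hypothetical inputs, not data from the white paper.

```python
# Patch triage sketch: severity-times-exposure beats easiest-first.
# System names, severity/exposure scores, and effort values are all
# hypothetical.

unpatched = [
    {"name": "internal wiki", "severity": 4.0, "exposure": 0.1, "effort": 1},
    {"name": "VPN gateway",   "severity": 9.8, "exposure": 1.0, "effort": 8},
    {"name": "mail server",   "severity": 8.1, "exposure": 0.9, "effort": 5},
]


def risk(item):
    """Risk retired by patching: severity weighted by exposure."""
    return item["severity"] * item["exposure"]


# The failure mode described above: patch whatever is easiest first.
easiest_first = sorted(unpatched, key=lambda i: i["effort"])
# The sane ordering: patch whatever retires the most risk first.
risk_first = sorted(unpatched, key=risk, reverse=True)

print([i["name"] for i in easiest_first])  # ['internal wiki', 'mail server', 'VPN gateway']
print([i["name"] for i in risk_first])     # ['VPN gateway', 'mail server', 'internal wiki']
```

Easiest-first burns the limited patching budget on the internal wiki while the internet-facing VPN gateway, the actual mainstay attack vector, waits at the back of the queue.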

One important claim here, one that we whole-heartedly disagree with but that you have to decide for yourself whether you agree with, is this:

“It is well accepted in the computer security industry that it’s easier to get fired by causing a substantial operational interruption than it is by deciding to accept residual risk by leaving high-risk programs unpatched.”

We think this is exactly the problem.  When the Office of Personnel Management got hacked in February, it wasn't Katherine Archuleta, the head of OPM, who had her details leaked; it was millions of contractors who do not even work at OPM.

When Experian got hacked, it wasn't the president who had his SSN and details leaked; it was millions of their clients.

And it is all because of this silly, irrational, factually ridiculous, and childish sentiment that an operational shutdown now is more damaging than your clients getting hacked in the future.

For example, the OPM hack was a direct result of the government not caring, or not having the time and resources, to modernize and sanitize the network.  (Sanitize as in make it sane, not sanitary.  To be clear, their network and security practices were insane and useless.)

However, the result of these horrific, substandard practices and attitudes was the complete stoppage of OPM's work for OVER SIX MONTHS.

I can't reiterate that enough: because some know-nothing bean counters decided against a good, decent framework for the computer networks that hold and determine national clearances, they had to completely shut down what they did for over six months.

And I also can't help but draw comparisons across American culture for this inability to withstand hardship now for results later.  It is a kind of mental blinder that large companies and the government continually use as a reason to do actual harm to customers while saving themselves a measly buck or two.

In conclusion, managers and owners need to wake up to the real threats their customers face, and they need to care about their customers and about how mismanagement can have profound negative effects on individuals' financial and personal lives.  The tools for this are education, metrics, rational thinking, and budgeting.