How quickly the world changes. Twenty-two years ago, the internet began hitting its stride and became widely available on college campuses. Seventeen years ago, the dot-com bubble burst at the same time that DSL came into our homes for the first time. Thirteen years ago, Facebook was born. Ten years ago, the first iPhone appeared, and we could really start to expect Wi-Fi to be available everywhere.
Today, we have global-scale cyber attacks that use leaked military-grade security exploits and tools to ransom our data in exchange for cryptographic currency based on blockchains. How on Earth did we get here?
In the beginning, viruses were simple self-replicating programs, not much more. Over time, they began to include more destructive components: payloads designed to delete files or even to drive physical equipment to destroy itself. A community of virus writers formed on the internet, where programmers could post their code and share ideas on how to defeat security measures. Destructive viruses, in and of themselves, are not particularly profitable, however, which at first relegated the practice of writing them to hobbyists and government entities. Their profitable successor was malware: software designed to hijack a victim’s computer, typically to display ads. For years, a cat-and-mouse chase played out between the publishers of malware and the writers of antimalware and antivirus software. Then, in 2009, came a game-changing technology, blockchain: a distributed, decentralized, yet mathematically trustworthy public accounting ledger that could reliably record transactions. With such a platform available, Bitcoin (and other cryptographic currencies) were born shortly thereafter, using the blockchain as the basis for the currency. This permitted secure, decentralized, peer-to-peer transactions of bitcoins without an intermediary like a bank.
Fast forward to today.
It’s now profitable to release viruses that encrypt and ransom people’s data. One thing born out of the hobbyist days of virus writing was the modular toolkit, designed to incorporate new security exploits as they appear. These toolkits let people with sophisticated knowledge package exploits so that people with far less sophisticated knowledge can use them without putting in the effort.
So, when a government entity such as the NSA experiences a leak of its cyber-weaponry, as it did this April, losing both the exploits it had hoarded for use against foreign powers and the software built to deploy them, that artillery is very quickly incorporated into virus-writing toolkits. Add a scheme for making money off of it, and you get global cyber-attacks.
What can we do to stop this?
In a nutshell:
Back up your data offsite
Apply security updates weekly
Implement application whitelisting software with or without antivirus software
Encrypt sensitive data and/or put a physical barrier between it and the rest of your building
Don’t run as an administrator
Use firewalls internally on your servers and desktops
Plan for the worst case scenario and determine the fastest ways to recover
Back Up Your Data Offsite
There are myriad backup services available to back up all of your data to the cloud. They can be spendy, but they are worth it.
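As a minimal illustration of the idea (the paths and the offsite host below are hypothetical stand-ins, not a recommendation for any particular service), a scheduled script might bundle your data into a dated archive before shipping it offsite:

```shell
#!/bin/sh
# Minimal backup sketch: archive a data directory with today's date.
# SRC and DEST are hypothetical paths; adjust for your environment.
SRC="$HOME/company-data"
DEST="$HOME/backups"

mkdir -p "$SRC" "$DEST"
echo "example record" > "$SRC/ledger.txt"   # stand-in for real data

ARCHIVE="$DEST/backup-$(date +%Y-%m-%d).tar.gz"
tar -czf "$ARCHIVE" -C "$SRC" .

# The offsite step would copy the archive to a remote or cloud target, e.g.:
#   scp "$ARCHIVE" backup-user@offsite.example.com:/backups/
tar -tzf "$ARCHIVE"    # list contents to verify the archive is readable
```

A cloud backup service automates all of this, plus versioning and retention; the key property either way is that a copy of your data lives somewhere a virus on your network cannot reach.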
Apply Security Updates
Understand that new exploits unknown to the public (so-called “zero-day exploits”) are constantly being found, which gives the hackers of the world a slight edge over everyone else. Attackers constantly adapt their strategies for slipping past defenses, and those techniques are disseminated quickly. Security updates from Microsoft or Apple plug those holes. It is critical, therefore, to ensure that all computer systems are updated regularly.
So, if it is so critical, why do so many companies not immediately install every software update that becomes available? There is always a certain amount of risk in updating production servers. It can be difficult to test every interaction between Microsoft’s or Apple’s new updates and the software you currently use to run your company. This risk to productivity is what can make IT people reluctant to apply the updates needed to protect against virus and hacker attacks. Additionally, applying updates typically requires restarting production servers, taking critical resources offline for up to 20 minutes (depending on your server infrastructure) if all goes well. If things do not go well… well, let’s just say that it’s best to do these updates after hours. If your infrastructure is so critical that it can’t be taken offline for 20 minutes, then it is worth spending the money to give it redundancy: a second, third, or fourth server that can operate while the others are being updated.
Update your desktops and servers weekly. The exploits used by the NSA were patched by Microsoft in March. That means the May and June cyberattacks of this year could have been averted entirely if people had kept current on their updates.
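The exact mechanism depends on your platform; as one illustration, on a Linux server a weekly after-hours maintenance window can be scheduled with cron (the timing and the Debian/Ubuntu commands below are examples, not a prescription):

```shell
# Illustrative crontab entry: every Sunday at 02:00, fetch and apply
# available updates, then reboot only if the system says one is required.
# (Debian/Ubuntu syntax; adapt the commands for your distribution.)
0 2 * * 0  apt-get update && apt-get -y upgrade && [ -f /var/run/reboot-required ] && reboot
```

On Windows networks, the equivalent is an update policy (e.g., scheduled installs through Windows Update or WSUS) that applies patches during a window you choose.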
Application whitelisting / Antivirus software
Both of these defenses have their pros and cons. Antivirus software depends on an ever-growing database of patterns against which it matches executable files. New variants are detected either by updating this database or by using heuristics. In short, antivirus looks for patterns of behavior, and for what it already knows about, in order to block something malicious from running on your network.
Whitelisting takes the opposite approach: instead of matching files against a list of known-bad patterns, it matches programs against a list of those you explicitly permit to run. This is more effective because the software does not need to know about new threats; any program that has not explicitly been given clearance to run is blocked. It requires a little more administrator intervention to allow software updates for your programs (Microsoft and Apple updates are given a free pass to update Windows and OS X), but it’s very good at 1) keeping your users from running bad programs that will infect your system and 2) preventing bad things from running accidentally, e.g., after someone clicks a link inside a scam email.
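Commercial whitelisting products handle all of this automatically, but the core idea can be sketched in a few lines of shell (the file names here are hypothetical): keep a list of approved program fingerprints, and refuse to run anything whose fingerprint is not on it.

```shell
#!/bin/sh
# Sketch of hash-based application whitelisting.
# allowlist.txt holds the SHA-256 hashes of approved programs.

printf '#!/bin/sh\necho approved tool\n' > good.sh
printf '#!/bin/sh\necho malware\n'       > bad.sh
chmod +x good.sh bad.sh

# Only good.sh is approved.
sha256sum good.sh | awk '{print $1}' > allowlist.txt

run_if_whitelisted() {
    hash=$(sha256sum "$1" | awk '{print $1}')
    if grep -qx "$hash" allowlist.txt; then
        "./$1"
    else
        echo "BLOCKED: $1 is not on the whitelist"
    fi
}

run_if_whitelisted good.sh   # on the list, so it runs
run_if_whitelisted bad.sh    # not on the list, so it is blocked
```

Note the useful asymmetry: the script never needs to know anything about `bad.sh` specifically; being unknown is enough to get it blocked, which is exactly why whitelisting copes with brand-new threats better than pattern matching does.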
Limit File Permissions

It may sound like tedious groundwork, but figuring out who should have the ability to change certain files, versus merely read them, can save some real headaches.
For example, if you have a shared folder where everyone can change every file, then any time someone runs a virus that encrypts data, that virus will have access to try to encrypt all of those files. If a given person can read all of the files in the shared folder but only has permission to change files in, say, “Folder A”, then if that person runs a virus, the virus will only be able to encrypt the contents of “Folder A”. Deciding who has access to change your files can dramatically reduce the time involved in cleaning up your network if you get hit.
To summarize in geek speak: programs can only modify files on your system to the extent that the user accounts running them have permission to modify those files.
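The principle can be demonstrated with ordinary file permissions (the folder names are hypothetical, and this assumes you run it as a regular user, since root bypasses permission checks): a program, malicious or not, can only write where the account running it is allowed to write.

```shell
#!/bin/sh
# Demonstration: a program can only modify files its user may modify.
mkdir -p shared/folder-a shared/folder-b
echo "report"  > shared/folder-a/report.txt
echo "payroll" > shared/folder-b/payroll.txt

# This user may change folder-a but only read folder-b.
chmod -R u+w shared/folder-a
chmod -R a-w shared/folder-b

# A "virus" running as this user tries to overwrite both files:
echo "ENCRYPTED" > shared/folder-a/report.txt          # succeeds
echo "ENCRYPTED" > shared/folder-b/payroll.txt 2>/dev/null \
    || echo "write to folder-b denied"                 # fails for a non-root user
```

On a Windows file server the same design is done with NTFS and share permissions rather than `chmod`, but the outcome is identical: the blast radius of a ransomware infection shrinks to whatever the infected account could change.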
Replace / Disable Administrator accounts
Did you know that on every Windows network, the first administrator account always has a security identifier (SID) ending in 500? No other account ends that way, making it easy for hackers to target the one account that has access to everything. If you use a Windows domain, it’s important to disable the built-in administrator account and replace it with an account that has equivalent permissions. The same goes for workstations.
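On a Windows machine, the built-in account can be located by that well-known SID suffix and then disabled from an elevated command prompt (the replacement account name and password below are placeholders; test this on a non-critical machine first so you don’t lock yourself out):

```bat
:: Find the account whose SID ends in -500 (the built-in Administrator).
wmic useraccount where "sid like '%-500'" get name,sid

:: Create a replacement admin account, then disable the built-in one.
net user SecondaryAdmin <choose-a-strong-password> /add
net localgroup Administrators SecondaryAdmin /add
net user Administrator /active:no
```

In a domain, the same policy is usually pushed centrally via Group Policy rather than run machine by machine.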
Encrypt sensitive data and / or put a physical barrier between it and the rest of your building
Were you aware that anyone with physical access to a computer can boot it from a CD or USB stick and reset the local administrator password? More frightening still, physical access to a server opens up numerous other ways to gain control.
That is why it is so important to put your servers in a locked room. Anyone with the right knowledge and physical access could take over your network in about 20 minutes or less. Use good locks.
This is also why it is so important to ensure that your storage drives are encrypted. If you have sensitive data on a workstation or server, encrypting the drive can defeat this kind of attack and keep your data safe.
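On Windows, for example, drive encryption status can be checked, and BitLocker enabled, from an elevated prompt (a sketch only; confirm your edition supports BitLocker and sort out recovery-key storage for your environment before enabling it):

```bat
:: Check whether the system drive is already encrypted.
manage-bde -status C:

:: Turn on BitLocker for the system drive with a numerical recovery password.
manage-bde -on C: -recoverypassword
```

Linux and macOS have equivalents (LUKS and FileVault, respectively); whichever you use, store the recovery keys somewhere other than the encrypted machine itself.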
Don’t run as an administrator
Security is annoying. This is a fact of life. Its whole purpose is to get in the way of your trying to do something.
However, getting back to the earlier idea that a program can only modify the parts of your computer that the account running it is permitted to modify, it makes sense for your everyday account not to have complete access to the whole machine. With a more limited account, a virus you accidentally run can only change what your account can change; everything else is off limits. If you need to install programs or make changes, you keep a second account with that access, which you invoke temporarily. Talk to your IT manager about how to set this up and how to use it.
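In practice this is the familiar elevation pattern (the domain and account names below are hypothetical): do day-to-day work in a standard account, and invoke the privileged account only for the one task that needs it.

```bat
:: Windows: run a single installer under a separate admin account,
:: without logging out of your standard account.
runas /user:MYDOMAIN\helpdesk-admin "msiexec /i C:\installers\app.msi"
```

On Linux and macOS, `sudo` serves the same purpose: your session stays unprivileged, and only the one command runs with elevated rights.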
Use firewalls internally on your servers and desktops
Some IT administrators disable the firewalls on workstations and servers because working out all the rules for a system takes time and can cause trouble. Still, keeping firewalls up throughout the network can stop the spread of viruses and harden machines against network penetration attempts.
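As an illustration on Windows (the rule shown is an example, not a complete policy), the built-in firewall can be switched on for every profile and then opened only for the traffic a given server actually needs:

```bat
:: Turn the Windows firewall on for all network profiles.
netsh advfirewall set allprofiles state on

:: Allow only the traffic this server needs, e.g. an internal HTTPS app.
netsh advfirewall firewall add rule name="Internal web app" dir=in action=allow protocol=TCP localport=443
```

The payoff is that a worm which compromises one machine still has to get past a firewall on every other machine, instead of roaming a flat, open internal network.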
Make a Plan
In business, you do not just make plans for growth; you also make plans for when things go sour. In IT, the philosophy should be the same. A given organization’s disaster recovery plan should list how the IT department will respond to specific threats with specific actions. It should read like, “If the entire office is infected with a virus, X personnel will be sent home, Y system will have the highest priority to come back online, followed by Z system,” etc.
In short, identify your most critical data assets to your business and have a plan on how to keep them functioning during a crisis. If they cannot be kept functioning, get an estimate on how long it would take to bring them back online under different scenarios. One of the most useful tools to guide your thinking is to put a dollar cost on each segment of your business that describes how much would be lost if they weren’t able to work. If those numbers are large, it might make sense to invest in some redundant systems.
Try to include answers to the following questions in your plan: Is there a place in town where a computer can be built quickly? Does it make sense to have a spare workstation or server squirreled away somewhere on standby? Perhaps it makes sense to provide services in the cloud for certain segments of your business so you don’t have to invest in the IT infrastructure?
Unfortunately, at this time, our government entities are finding exploits and not informing the software manufacturers (like Microsoft and Apple) about how to secure them. As a result, when our security organizations are compromised (a concerning trend) and have their knowledge leaked on the world stage, it becomes a problem for everyone. As citizens, this is not something within our power to control.
What is in our power to control is to take strong precautions against the kind of attacks that will result from this. Even implementing two or three of the suggestions above may be all that you need to turn a complete disaster into something you just read about on the news that happened to someone else.
- Nick Kohler