In my last post I talked about some of the acquisition tools that are available to use for imaging evidence. This post will demonstrate how to use the tools I mentioned: dd, dcfldd, and FTK Imager.
For dd and dcfldd I’ll be using the SANS SIFT kit, and for the FTK Imager demo I’ll be using a Windows 7 machine.
First let’s start with dd:
I’ll break down the command:
sudo – allows me to run a command as a different user, in this case the root user, which has privileges to make changes to the system. Root access is needed to read the /dev/sdc device.
dd – the invocation of the dd command itself.
if=/dev/sdc – tells dd that the input file is the /dev/sdc device. Notice that I put /dev/sdc, not /dev/sdc1; the 1 refers to the first partition of the USB drive. I want to image the entire drive, so I leave off the 1 and dd will image the whole drive front to back.
bs=4096 – the block size, which tells dd how many bytes to read and write at a time. The default block size is 512 bytes; it can be changed to a larger size, which may affect performance. Typically I use a block size of 4096 bytes or 4KB.
of=ntfs_usb1.dd – where the output of the dd command is placed. Because I gave only the name of the file rather than the full path, the image file is created in the current working directory. Notice that the file name ends with the .dd extension. This is a raw file, literally ones and zeros; it cannot be read by normal means, and forensic software has to be used to view its contents.
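Putting those pieces together, the full invocation looks like this. The device and file names are the ones from my example and will differ on your system; since imaging a real device needs root access and the right device node, the demonstration below uses a scratch file standing in for the USB drive so it can be tried safely anywhere.

```shell
# The imaging command as described, run against a real device:
#   sudo dd if=/dev/sdc bs=4096 of=ntfs_usb1.dd
# Safe demonstration using a scratch file in place of the USB drive:
printf 'sample drive contents' > fake_drive.bin
dd if=fake_drive.bin of=ntfs_usb1.dd bs=4096
cmp -s fake_drive.bin ntfs_usb1.dd && echo "image matches source"
```

The cmp check at the end confirms the image is byte-for-byte identical to the source, which is the whole point of a raw image.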
After imaging to file I take MD5 hashes of both the USB drive and the image file to make sure that the image file is exactly the same as the USB drive.
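The verification step looks like this. Against real evidence you would hash the device node itself (sudo md5sum /dev/sdc) and compare it to the hash of the image file; scratch files stand in for the drive and image below so the example can be run anywhere.

```shell
# Against real evidence: sudo md5sum /dev/sdc; md5sum ntfs_usb1.dd
# Demonstration with scratch files standing in for the drive and image:
printf 'evidence bytes' > drive.bin
cp drive.bin image.dd
md5sum drive.bin image.dd   # the two digests should be identical
```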
Next is dcfldd, this program is almost identical to the dd command:
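The invocation mirrors dd, with extra options for hashing on the fly: hash= picks the algorithm and hashlog= saves the digest to a file. The device and file names below are carried over from the dd example; the runnable demonstration uses a scratch file, with a fallback to plain dd in case dcfldd is not installed.

```shell
# Against a real drive:
#   sudo dcfldd if=/dev/sdc bs=4096 hash=md5 hashlog=usb1.md5 of=ntfs_usb1.dd
# Safe demonstration on a scratch file:
printf 'sample drive contents' > fake_drive.bin
if command -v dcfldd >/dev/null 2>&1; then
    dcfldd if=fake_drive.bin of=usb1.dd bs=4096 hash=md5 hashlog=usb1.md5
else
    dd if=fake_drive.bin of=usb1.dd bs=4096   # fallback when dcfldd is absent
fi
cmp -s fake_drive.bin usb1.dd && echo "copy verified"
```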
Notice that dcfldd shows what it has copied so far.
After imaging is complete the same output screen as dd will show.
After dcfldd completed imaging the USB drive I took an MD5 hash of the USB drive and compared it to the hash that dcfldd generated during the imaging process.
The last tool is GUI based and has far more options than the command line tools used above.
So here are the three tools that I use the most when it comes to forensic imaging. I hope you enjoyed this post. My next post will be a mock case where I will go through the first two steps of the forensic process: acquisition and examination. Thanks for reading!
In the previous post I discussed some of the first steps in the acquisition process: finding the physical or digital evidence at the crime scene, starting the chain of custody, recording on the chain of custody document when a change of control takes place, hashing the image, and making the copy of the original or best evidence to use for forensic examination. The only task left in the acquisition process is storing the original evidence. In this post I’ll also introduce some acquisition tools and describe some of their features.
Whether the best evidence in a case is digital or physical changes how the original evidence should be stored. If a physical hard drive is the original evidence then the usual storage method is to place the hard drive on a shelf in a climate controlled room. There are several problems with this method. Original evidence can sit in storage for years before it is called upon for a case, and the hard drive can break down while it sits. If this happens then the evidence will be changed and the case will most likely be thrown out. With physical hard drives there is not much that can be done to prevent this. With digital evidence, however, measures can be taken to safeguard it from these problems. The best thing to do with digital evidence is to upload it to a managed RAID system that has regular backups. (RAID stands for redundant array of independent disks; this type of system is designed to be a more robust form of data storage.) Another method is to keep offsite backups of the evidence. The main copy can be on a computer system at the police station and the backup at a separate location, for example. If disaster strikes the main location and the main storage system is damaged or destroyed, the backup can be used.
There are multiple disk imaging tools to choose from; some use the command line and others use a GUI (graphical user interface). Let’s start with one of the oldest tools still in use: dd.
dd is a command line tool used to capture forensic images from hard drives, USB drives, and other forms of media. Some say dd stands for data description; others believe it stands for data dump. I’ve heard both terms used, so I use them interchangeably; they refer to the same tool. dd is built into Unix-like operating systems as part of the GNU Coreutils package and has many features:
Forensic image creation
Drive wiping
Data copying
Dcfldd is an upgraded version of dd created by the US Department of Defense Computer Forensics Laboratory. Dcfldd has many more features than its dd counterpart:
Hashing of the data on the fly
Meaning that while the imaging is in progress the program is creating a hash
Displays progress of the imaging process
Bit for bit verification of the image
MD5 and SHA-256 hashing of data
FTK Imager is a GUI based tool made by AccessData. FTK Imager can be run from a forensic system or from a USB drive. This tool has a plethora of features:
Forensic image creation
Memory image creation
Local file system mounting
This feature will allow the examiner to take a peek at what’s inside the hard drive and determine if further examination is needed
Image mounting
Deleted file recovery
Hashing of the imaged media
File and folder exporting from forensic images
These are three great tools that can be used to acquire forensic images in the field. In my next post I’ll show how to use each of these tools. Thanks for reading.
Have you ever seen Law and Order or CSI? In these shows a crime takes place and it’s the detective’s job to solve the crime and place the criminal(s) behind bars. During the investigation police tape is used to cordon off the crime scene so nothing is disturbed and everything at the scene stays exactly as it was when the crime took place. The preservation of the crime scene is a vital step in the process of solving a crime. The crime scene concept can also be applied to digital forensics. In this case the crime scene can be a computer’s hard drive, RAM, or a USB drive. But how can this “crime scene” be preserved so it can be analyzed for evidence? The answer is imaging.
What is imaging? Imaging is the process of taking a bit for bit copy of the original data contents from a computer system.
This original data can come from several different sources:
Hard drive
RAM
Removable media
CD
DVD
USB drives
To relate this to physical police forensic work, taking an image is like the police cordoning off the crime scene with police tape.
Why would an image need to be taken? So the data on the computer can be examined. Let’s put this into a scenario: a company suspects that an employee has illegal pictures on his company computer. The only way to find out if this is true is to examine the contents of the computer. It is not wise to check for the illegal pictures using the computer in question, because doing so may alter the data on it. Taking an image solves this problem. Because the image is a bit for bit copy, everything on the hard drive is preserved, including when the pictures in question were put on the computer, when they were accessed, and where they may have come from.
There are two different types of acquisition: dead and live. Dead acquisition is when an image of a “dead” hard drive or removable media is taken. A hard drive is considered dead when a questionable operating system is not interacting with it; dead in this case doesn’t necessarily mean broken or beyond repair. An OS becomes questionable when it is suspected of being infected with malware or a virus. A hard drive that is removed from a computer is also considered dead. Only non-volatile sources of data storage can be imaged while dead. Non-volatile means that the contents of the storage device are preserved when the device is removed from power.
Here are some examples of non-volatile storage:
Platter hard drives
Solid state hard drives
CDs
DVDs
Flash drives
SIM cards
Before taking an image of a dead device it is good practice to label the hard drive using an evidence tag. This tag will contain information like:
Case name and evidence number
Date the evidence was taken
Model and serial numbers of the hard drive
Hard drive capacity
Which computer it came from
Type of evidence
Original evidence – the name says it all, this is the evidence that came from the computer in question
Best evidence – in some cases you will not be able to take the original evidence for a case. So the first copy that is taken is the “best evidence”; all other copies that are to be used for forensic examination will be taken from the best evidence.
Working copy – A working copy is a copy of the original evidence that is to be tested using forensic tools
When imaging a dead device always use a write blocker. A write blocker is a physical device that blocks a computer system from writing to the device connected to it. For example, if I have a dead hard drive that I want to image, the prudent course of action is to connect the dead hard drive to the write blocker, then connect the write blocker to my forensic computer system. This setup allows me to image the hard drive in question without altering its contents, thus maintaining the hard drive’s integrity.
Live acquisition is when an image of a hard drive or other form of storage is taken while the suspect OS is interacting with the evidence. This is when volatile storage is imaged. For example, when a computer becomes compromised, RAM will most of the time hold records or evidence of programs that are not supposed to be running on the computer in question. The only way to acquire RAM is live imaging, because RAM is volatile evidence: when power is removed from RAM its contents are cleared.
After the image is taken a hash needs to be taken of the evidence. Hashing is a method of taking a file or input of any length and producing a fingerprint or unique value which is used to identify a file. If the slightest bit in a file is changed then the hash of the file will radically change. There is also a chance that two different files can have the same hash. This is known as a hash collision. But the odds of this happening are astronomically small.
The two most common hashing algorithms are:
MD5 – Message digest 5
SHA-2 – Secure Hash Algorithm
The formula for making a hash is: hash = hashing algorithm(input or file)
The file is fed into the algorithm, which produces a seemingly random, fixed-length string of numbers and characters determined by the hashing algorithm used.
Here’s an example of hashing, and what happens when a file’s contents are changed:
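Here is what that looks like with md5sum: hash a short sentence, then change a single letter and hash it again. The file name and text are made up for the demonstration.

```shell
printf 'The quick brown fox' > note.txt
md5sum note.txt
printf 'The quick brown fix' > note.txt   # one letter changed
md5sum note.txt                           # the digest changes completely
```

Even though only one character differs between the two versions of the file, the two digests have nothing visibly in common.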
After the original evidence is imaged a copy of the image is taken so it can be examined. A hash is then taken of this copy and compared against the hash of the original evidence. If the hashes match then the copy is exactly the same as the original evidence.
In addition to hashing, chain of custody must be used to ensure the evidence has not been tampered with. Chain of custody is a paper trail that starts when the evidence is seized by law enforcement and continues through final disposition in the court of law. The chain of custody will start with the name and contact information of the first responder who took the item in as evidence for his or her case. The document will then record the name and contact information of the next person who takes control of the evidence, along with the signatures of both people confirming that the exchange of control took place. This is quite common in police work when a first responder has to pass the original evidence to a digital forensics investigator. The chain of custody document should continue to record who takes control of the original evidence until that evidence is imaged. Once the image of the original is made and the hash verifies it is an exact copy of the original, the hash carries the chain of custody until final disposition in court. In other words, the hash of the image is its link to the chain of custody document.
For my next post I’ll be discussing best practices on storing original evidence, both physical and digital, and some of the tools I use to image disk drives. Thanks for reading!
Have you ever seen the movie Live Free or Die Hard? It’s a 2007 movie featuring John McClane trying to stop a cyber terrorist. I remember some of my feelings when I was watching it. I was thrilled and excited by what can be done with digital information. It can be used in many different ways, either for good or for evil. One of my favorite parts of the movie was seeing the actors roll out the rubber keyboards and start typing on a computer. Then all of this crazy hacking stuff started happening. This was the movie that pushed me to start studying information security and hacking. I first studied ethical hacking, otherwise known as penetration testing. A little later on I discovered digital forensics. From that point on I was in love; I had found my calling.
So what is digital forensics? It is a subset of forensic science that focuses on the recovery and examination of data or evidence found in computing devices. A great analogy would be that digital forensics is just like the forensics you see on shows like Law and Order or CSI, but instead of a physical crime scene there is a hard drive that “contains” the crime scene. Digital forensics is used to unravel the events that have taken place on a computer system. Events may be criminal related, and some examples of crimes that digital forensics deals with are:
Intellectual property theft
Network intrusion
Credit card theft
We have seen quite a bit of the last crime in recent months. Both Target and Home Depot have been victims of credit card theft on a massive scale. The only way to find out how the thieves breached these systems is to examine what happened on the affected systems.
There are three steps in the digital forensics process:
Acquisition
Examination of the evidence
Reporting
The first step is acquiring the evidence for future examination. Depending on the situation the investigator may be grabbing a physical hard drive, contents in RAM, CDs, DVDs, USB drive(s) or the contents of a computer’s hard drive. When obtaining the contents of a hard drive or RAM the best practice is to obtain a bit for bit copy of the original evidence called an image. After the image is taken a hash should be generated for the original evidence. This hash will later be used in the next step of the forensic process. This hashing process is similar to what I described in the passwords post. If the slightest bit is changed in a file then its hash will change dramatically. So when a hash is taken for both the original and the copy of the evidence if the hashes are the same then their contents are exactly the same. This is essentially preserving a crime scene exactly as the criminal left it so it can be examined for evidence.
The next step is examination of the evidence. As a rule of thumb an investigator should never examine the original or best evidence. So before examination a copy of the original evidence should be taken and a hash should be taken of the copy. Then the hash values for both the original and the copy are compared. If they match then the contents of both the original and the copy are the same. Depending on what the investigator is looking for he or she will use a series of tools that will comb through the contents of the working evidence to find what they are looking for.
The last step is the single most important step in the digital forensics process: reporting. After the examination a report explaining what was found during the investigation must be written and presented to upper management or the owners of the affected system. Most of the time these people will not be technically savvy so the findings of the investigation must be translated into language they can understand. After all if the report is not done well then proper action may not be taken and that will make the entire investigation process almost meaningless.
For my next few posts I’ll be focusing on each step of the forensic process. I’ll also be posting some of my own tests that I perform in my lab so you can see what happens to a computer when it is hacked and when evidence is analyzed. If there are any comments please feel free to leave a comment below.
I always used to download and update my programs the hard way. I would wait until a program complained that an update was available, then download the update and patch the program. I’m sure many of you do the same thing. It’s quite time consuming. That was my routine until I discovered Ninite.
Ninite is a service that supports the installation and update of multiple programs simultaneously. The Ninite service supports a number of popular programs including iTunes, Skype, and Steam. The way Ninite works is you visit the Ninite website, select the program(s) that you wish to update or install, then click the “get installer” button. Your computer will download a program that installs and updates the selected programs in the background without any configuration required. Ninite installs the programs in their default locations with the default settings.
Ninite is very easy to use, I’ll demonstrate it below:
Ninite is a fantastic tool that will save time when updating and installing programs on computers. For my next post I’m going to start diving into my specialty: digital forensics. This post will explain what digital forensics is and why it is important today. Thanks again for reading and if you have any questions or concerns please comment below.
A little while back I was using a simple way to back up my computer’s data: I would drag and drop folders between my original hard drive and the backup. Eventually, when I had a great deal of data on my computer, it became difficult to keep track of what was backed up and what wasn’t. I could have just continued to drag and drop my folders and files onto the backup drive, but I did not want to deal with all of the duplicate warnings that came along with it. Strangely enough I never used the built-in Windows backup program; before I had a chance to do so I was shown an interesting backup utility called free file sync.
So what is free file sync? Free file sync is a program that synchronizes one hard drive’s data contents to another, which makes it perfect for backing up data. Free file sync is open source software, meaning its source code (the actual programming code) is openly available for the public to view and edit. What is great about open source software is that it is usually developed by a public community of developers, so updates happen very often, and that is the case with free file sync.
Free file sync’s GUI shows the two hard drives in two separate tables. On the left is the primary hard drive and on the right is the backup or secondary hard drive.
Free file sync has some great features:
Multiple drives can be backed up at the same time
Compares contents of one drive against the contents of another
Multiple ways to back up data
Two-way
Two-way
Mirroring
Mirroring is where the backup drive is changed to match the primary drive. This is the type of backup method I use and recommend. If I make changes to the primary drive, say deleting a few files and adding some others those changes will be written to the backup when I use free file sync.
Update
Updating is where new and updated files are copied to the backup drive. Any files that are deleted from the primary drive that were previously backed up will still remain on the backup drive.
Custom
The custom setting is where the user can configure the way free file sync will back up hard drive contents. There are five options that can be turned on or off to create the custom setting:
Copy new items to the right
Overwrite right items
Leave as unresolved conflict
Overwrite left item
Copy new items to the left
Cross platform support
Free file sync can be used on Windows, Mac OS-X, and Linux
For Mac users I recommend using the built-in Time Machine for backing up data and settings.
You can take the contents of an entire hard drive and just place them into a folder on the backup drive. It’s all up to user preference.
Free file sync is easy to use. I will demonstrate its use in the tutorial below.
Free file sync is an easy tool to use for backing up a single folder or an entire hard drive’s contents. For the next post I’ll be showing another tool I use that allows me to install and update multiple programs on my computer at the same time. As always if there are any questions or concerns please email me at hackingdefense@icloud.com or leave a comment below.
I remember when I got my first computer back around 2000. It was a great machine when it first came out. It operated quickly, my programs ran quickly, and it didn’t act up too much. All of that changed after about three months of use. It started getting sluggish, programs would not run properly, and it would lock up quite a bit. I didn’t know what to do at the time, so eventually I just bought a new computer. A few months after that I learned what was causing the original computer to act up: a lack of maintenance. I didn’t have any AV (anti-virus) or anti-malware programs on it. I also did not regularly run two built-in Windows tools: chkdsk (check disk) and defrag. If I had used these tools I might have been able to keep the older computer healthy. This post is about how to keep your computer virus and malware free, and it will show you which built-in Windows tools help with file system maintenance.
First let’s start with anti-virus programs. What exactly is an anti-virus program? These programs fall under an umbrella category called HIDS (host-based intrusion detection systems). Some may disagree with this classification, but I believe AV falls under this category since these programs monitor the internals of the computer system for unwanted software. Examples of AV programs are:
Microsoft Security Essentials
Norton Anti-Virus
McAfee Anti-Virus
How do these programs work? After installation the program usually updates itself with files called signatures, which it uses to pick up unwanted software on the computer. When the AV program scans the computer it looks for programs that match the signatures in its database. If there are any matches the program flags them, informs the user of what it has found, and gives the user options on what to do with the unwanted programs. Most AV programs have the same set of features: virus detection, real time protection, signature downloads, etc.
In my opinion there is nothing but pros when it comes to having AV software. No computer in use today should be without some form of AV software.
Here are some of the pros:
Instant detection of viruses
Deletion of viruses
Quarantine of viruses
There are no cons to having an anti- virus program installed on your computer. Today with hacking being so widespread anti-virus is critical to the safety and security of computers.
I use Microsoft Security essentials. It’s a free anti-virus solution that is available from Microsoft.
Another type of program that is extremely useful for computer security is anti-malware. These programs do essentially what anti-virus does, but they are built to target malware. Although a virus is technically one kind of malware, anti-virus and anti-malware programs tend to catch different threats. In my experience it’s best to have both an anti-virus and an anti-malware program installed on a computer at all times. Some examples of anti-malware programs are:
Features, pros, and cons for the anti-malware are pretty much the same as the features for the anti-virus software packages. These days never have a computer that does not have some form of anti-malware installed on it.
The next couple of tools that are useful are two built in windows utilities that assist with maintaining the file system and hard drive(s) of the computer. The first is chkdsk (check disk) and the second is disk defragment.
The first tool, chkdsk (pronounced check disk), is a built-in Windows tool that checks the hard drive for errors in the file system. These errors can prevent the computer from functioning if they are not repaired, and this tool helps fix them. chkdsk can be used from both the Windows GUI (graphical user interface) and the command line.
This is the second method for checking the disk drive, using the command line.
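From an elevated command prompt, the basic invocation looks like this. C: is an example target drive; /f tells chkdsk to fix the errors it finds, and /r additionally locates bad sectors and recovers readable information.

```shell
rem Check drive C: and fix file system errors:
chkdsk C: /f
rem Also scan for bad sectors and recover readable data:
chkdsk C: /r
```

If the drive is in use, chkdsk will offer to schedule the check for the next restart.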
The second tool, disk defragmenter, is used to organize the contents of a hard drive. As the hard drive is used its contents become fragmented, and as fragmentation builds up the performance of the computer slows down. This tool helps mitigate that problem. It can also be used from the GUI and the command line. I will demonstrate both methods in the tutorial below.
Here’s the command line method of using defragment.
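From an elevated command prompt, the defrag tool can either analyze a drive or defragment it. C: is an example target; /A analyzes only, while /U prints progress and /V prints a verbose report.

```shell
rem Analyze only - report how fragmented the volume is:
defrag C: /A
rem Defragment, showing progress (/U) and a verbose summary (/V):
defrag C: /U /V
```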
One of the most important things you can do with your computer is to keep Windows patched. What exactly is a patch? A patch is a piece of code that fixes a flaw in a program. When Microsoft finds a flaw in Windows they create a patch to fix the problem. Sometimes you will see a window pop up that says updates are ready to be installed on the computer. These are the patches that Microsoft releases to fix problems, usually on the second Tuesday of every month, known as Patch Tuesday. Always keep your computer patched with updates. There is a GUI window that will allow you to check for updates anytime you want. I’ll show this in the tutorial below.
The settings for Windows update can also be set to download important updates automatically.
The last thing is probably the most important task of all: back up your data. I remember losing all of my data on one of my older computers because I didn’t back it up. Do not make this mistake. With all of the information that is on the average computer these days, it is a real pain to start from scratch if data is lost.
I run my anti-virus, anti-malware, patch updates, and my data backup once a week. This has been good practice for me and I’m sure it will work well for you.
There is a real neat tool that helps with backups called free file sync. I use this tool to back up my data. I’ll cover this tool in the first post of a new blog post series called Rob’s tool box. Thanks for reading this post and if there are any questions or comments feel free to comment below.
Have you ever had the experience of forgetting a password? Most people now have their passwords recorded somewhere. I remember I had a piece of printer paper that held all of my usernames and passwords, about 30-40 different combinations. That’s quite a few passwords to remember. Over the course of a few years I kept creating new accounts and services and adding the login information to this single piece of paper. Then I lost the paper and was at a standstill. I scrambled until I found it, then breathed a sigh of relief. After that experience I decided to go with electronic password storage. I started with a program called Keepass and from there moved to Lastpass. Electronic storage is much easier for me to handle. I don’t have to worry about remembering a piece of paper with all of my passwords recorded on it. I’ll go into more detail about Keepass and Lastpass later, but first, what is a password?
A good definition of a password is a combination of letters, numbers, and special characters (such as @, #, $ and *) that, when passed to an OS, application, or web service, allows the authorized user access to their account. Passwords can also be combined into passphrases, such as quotes from TV shows or sayings that the user prefers. I recommend this approach because passphrases are easy for the user to remember and they are usually long. When setting a password for an online service, application, or account, make sure to check the password requirements. Sometimes they are very lax; the maximum length of the password might be only 10-12 characters. I have seen this in several places and personally I don’t like it. I usually like my password length to be somewhere between 15-20 characters, or longer depending on the account it is protecting.
How do passwords work? I’ll use Windows passwords as an example. When you create an account you fill out your username and password information, but what does Windows do with this password? When the account is registered Windows takes the password and hands it over to a cryptographic algorithm. (Cryptographic algorithms are sets of rules used to scramble a human readable word.) After the password is scrambled it is called a cryptographic representation or hash. After the hashing process the password hash cannot be reversed back into the password. This is why attackers have to resort to password cracking. This hash is stored by Windows and used as a comparative value when someone tries to log into the account the password is protecting. When you try to log into your user account, Windows takes the password you typed at the log in screen, hashes it, then compares it to the original hash that was created during registration. If the hashes match then you are granted access; if not then you are denied access.
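That store-then-compare flow can be sketched in a few lines of shell. MD5 stands in here for the NT hash Windows actually uses, and the password is made up; the point is that only digests are stored and compared, never the password itself.

```shell
# At account creation: hash the password and store only the digest.
stored=$(printf 'S3cret-passphrase!' | md5sum | cut -d' ' -f1)

# At login: hash whatever was typed and compare the digests.
attempt=$(printf 'S3cret-passphrase!' | md5sum | cut -d' ' -f1)
if [ "$stored" = "$attempt" ]; then
    echo "access granted"
else
    echo "access denied"
fi
```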
Windows comes with a nice built-in password security feature: the password policy. This is used to control how the passwords for all of the accounts on the system are created. The administrator can set a level of complexity, length, and password age. This feature can be configured through the Local Security Policy dialog on Windows 7 Professional and Ultimate and Windows 8 Professional and Ultimate. For other versions of Windows 7 and 8 the command prompt must be used to change the password policy.
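From an elevated command prompt, the net accounts command covers the length and age parts of the policy (complexity rules still require the Local Security Policy tools). The values below are examples:

```shell
rem Show the current password policy:
net accounts
rem Require at least 12 characters and force a change every 90 days:
net accounts /minpwlen:12 /maxpwage:90
```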
In my experience I have found some practices with passwords are good and others not so much. I’ll walk through the rules I use when creating and dealing with passwords.
First and foremost, NEVER EVER use the same password for multiple accounts and services. If that password is cracked, the attacker gains access to every account that uses it. Reusing a password is never a good idea.
With passwords, most users would think that having special characters is the most important factor; it’s not. The most important factor in password creation is length. A brute-force attack can crack any password no matter how complex; it’s just a matter of time. This is why length is so important: the longer the password, the longer it takes to crack. Depending on its length, a password can take years or even decades to break. In my experience a good password length is 15 characters. Do not use passwords shorter than seven characters; these are fairly easy to guess and crack. Special characters are a plus, but the most important thing is to make sure the passwords you use are of a good length. Also, do not base your passwords on your name, email address, or date of birth, since that makes them easier for an attacker to guess.
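To see why length beats complexity, here is a rough back-of-the-envelope calculation in Python. The alphabet size (95 printable ASCII characters) and the guess rate (10 billion guesses per second) are illustrative assumptions, not measured figures; the point is that every extra character multiplies the attacker’s work by the alphabet size.

```python
# Rough time for an exhaustive brute-force search of every possible password.
GUESSES_PER_SECOND = 10_000_000_000   # assumed attacker speed, for illustration
ALPHABET = 95                         # printable ASCII characters

def years_to_exhaust(length: int) -> float:
    keyspace = ALPHABET ** length     # every possible password of this length
    seconds = keyspace / GUESSES_PER_SECOND
    return seconds / (60 * 60 * 24 * 365)

for n in (7, 10, 15):
    print(f"{n} characters: about {years_to_exhaust(n):.2e} years")
```

Under these assumptions a 7-character password falls in well under a day, while 15 characters pushes the search into the trillions of years, which is why I treat length as the deciding factor.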
I have used two different programs to store my passwords after I stopped recording all of them on a sheet of paper.
KeePass stores your passwords in a separate file that is opened with a master key. I like to think of this file as the password vault. When the file is closed, KeePass encrypts it. Some additional features are: password generation, grouping of passwords, adding notes to password entries, and much more.
Overall KeePass is a great program for anyone looking to store their passwords on their computer or mobile device.
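The vault idea can be sketched as follows. KeePass itself derives its encryption key with AES-KDF or Argon2; this illustration uses the standard library’s PBKDF2 as a stand-in to show how a master password becomes an encryption key, and why a wrong master password simply fails to open the vault.

```python
import hashlib
import hmac
import os

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    # A deliberately slow key-derivation function makes guessing the master
    # password expensive. (KeePass uses AES-KDF or Argon2; PBKDF2 is a
    # stdlib stand-in for the same idea.)
    return hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"),
                               salt, 200_000)

salt = os.urandom(16)   # random salt, stored alongside the encrypted vault file
key = derive_vault_key("my master password", salt)

# The right master password reproduces the same key; a wrong one does not.
assert hmac.compare_digest(key, derive_vault_key("my master password", salt))
assert not hmac.compare_digest(key, derive_vault_key("wrong guess", salt))
```

The derived key would then be used to encrypt and decrypt the vault file; only someone who knows the master password can re-derive it.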
Another great program for storing passwords is LastPass. It differs from KeePass in that the passwords are not stored in a file on your computer but on an online server. LastPass has many features: website password and username auto-filling, secure password generation, and saving passwords as you create new accounts on websites.
There are three different versions of LastPass:
LastPass browser extension
The main version: a plugin that runs inside your web browser and fills in your credentials on the sites you visit
LastPass Pocket
A standalone program that can be installed on a USB drive, making your password vault portable
LastPass mobile
To use the mobile version of LastPass you have to be a premium subscriber. It’s only $12 per year, that’s $1 per month, more than worth it in my opinion.
In my experience LastPass has many pros:
Password generation
Note taking for each password entry
Password auto filling
Password entry sorting
Security checking for passwords
In my opinion there is one con with LastPass:
You have to remember the security key that opens the LastPass password vault. If this key is lost or forgotten, the vault cannot be opened.
Using these programs and steps has made passwords easy for me to use and manage. I hope these tips and tricks also make it easy for you as well. If there are any questions or concerns please feel free to drop a comment on the post or email me at: hackingdefense@icloud.com
JP Morgan admitted that it was the victim of a cyber-attack. 76 million US households and 7 million businesses were affected by the breach. The compromised data included names, email addresses, and other contact information. According to the bank, their customers’ money is “safe”.
Imagine yourself at a Best Buy or other electronics store. You’re looking at a brand new computer tower with Windows on it and your heart is set on buying it. After you get it home, you go through the setup of the machine and the user account(s). After a few months of using the computer, BAM! Everything starts acting odd and you don’t know why. All you did was browse the web and install a program. In the background, without you knowing it, that program downloaded and installed additional programs on the computer. How did it do this without you knowing?
Every day people use their home computers for many things: email, homework, writing blog posts, banking, and other sensitive activities. Your files have to be protected from unauthorized access, and the first step is to never let a malicious person gain access to an administrator account. The problem is that the setup of the new computer in the story had you create an administrator account as the one you use for regular, everyday work.
What is an administrator account? First I have to talk about the concept of privileges. Windows has different types of user accounts; the important types are standard user and administrator. An administrator account has the ability, or “privileges”, to make changes to the system. Some of these changes include: installing and uninstalling programs, deleting certain files, changing security settings, and modifying network settings. Standard user accounts do not have the privileges that administrator accounts have. This type of account can create files like documents and spreadsheets, and can delete the files it creates. However, these accounts cannot make changes to the system or access any file that does not belong to them. When a user logged into a standard account tries to make a system change, Windows prompts for the administrator’s password (if there is one). Unless this password is entered correctly, the change will not take place. This feature is called User Account Control (UAC), and its primary purpose is to make sure unwanted system changes do not happen.
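If you ever want a script to tell you which kind of account it is running under, here is a small cross-platform Python check. The Windows branch asks the shell whether the process token is elevated; on Linux and macOS it simply checks for root.

```python
import ctypes
import os

def is_admin() -> bool:
    """Return True when the current process has administrator/root privileges."""
    if os.name == "nt":
        # Windows: ask the shell whether this process token is elevated.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    # POSIX (Linux/macOS): effective user id 0 means root.
    return os.geteuid() == 0

print("running with admin privileges:", is_admin())
```

Running this from your daily-use account should print False once you have downgraded it to a standard user, which is a quick sanity check that the separation described below actually took effect.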
For regular computer usage an administrator account should never be used, because you don’t want changes made to the computer by mistake. Also, if a hacker gains control of your user account and it’s an administrator account, the hacker has complete control of your computer. Another reason to separate the user and administrator accounts: if a user logged into an administrator account clicks a link that delivers a malicious program, the program can install itself without the user realizing it. If the same thing happens on a standard user account, User Account Control is triggered, alerting the user that a program is trying to install itself. This is one way to stop programs from installing themselves without your consent.
Some of you may think it’s inconvenient to type in a password every time you want to install a program. Think of it this way: you’re trading a little convenience for security. At airports, getting to the gates can take a while because of security. Computer security is the same way: if you can put up with entering a password every time you make a system change, you gain a layer of defense not only against attackers but also against user error and programs installing themselves without your knowledge.
Setting up separate user and admin accounts can sound hard, but it is not. This can be done in one of two ways: the Windows GUI (Graphical User Interface) or the CLI (Command Line Interface). Personally I prefer the command line for its simplicity and speed, but with the command line you need to know certain commands and syntax in order to create the accounts. I’ll cover the GUI first:
Start off by making sure the account you are using is an administrator account
This is required because the admins are the only accounts that can make changes to a system. This includes creating user accounts.
Click Start -> Control Panel -> User Accounts and Family Safety -> User Accounts
After you confirm that the account you are logged into is an administrator account the next step is to create a second administrator account that you know the username and password to. This account will take over the administrator privileges that your daily usage account will no longer have.
Click Start -> Control Panel -> User Accounts and Family Safety -> Add or remove user accounts
After the setup of the new admin account is complete you can proceed to downgrade your regular usage account to a standard user account.
Close the currently open windows and click Start -> Control Panel -> User Accounts and Family Safety -> User Accounts
Log out of the changed account then log back in for the changes to take place.
As an additional measure I alter my User Account Control settings to make them more sensitive.
For the more advanced user, the command line can be used to change user account types. Each account in Windows belongs to a group; examples are the Users group and the Administrators group.
Open an administrator command prompt. Click Start -> type cmd -> right-click the cmd icon -> click Run as administrator
Confirm that the account you want to downgrade is an administrator by listing the members of the Administrators group
Syntax: net localgroup administrators
Then remove that account from the Administrators group, and make sure it is a member of the standard Users group (replace username with the account’s name; the /add is only needed if it is not already in Users)
Syntax: net localgroup administrators username /delete
Syntax: net localgroup users username /add
Log out of the account and log back in to have the changes take place
After separating these accounts, make sure they are both protected with strong passwords. A strong password should be long and contain several different types of alphanumeric and special characters. In a later post I will cover how quickly common passwords can be broken, tools that can create and store passwords, passphrases, and how to make a strong password that is easy to remember.
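A strong password of the kind described above can be generated with a few lines of Python. The secrets module draws from the operating system’s cryptographic random number generator (unlike the ordinary random module, which is not suitable for passwords); the special-character set here is just the handful of examples used earlier in this post.

```python
import secrets
import string

# Letters, digits, plus the special characters mentioned earlier in the post.
ALPHABET = string.ascii_letters + string.digits + "@#$*"

def make_password(length: int = 15) -> str:
    # secrets.choice uses the OS's cryptographic RNG for each character.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_password())      # a fresh 15-character password each run
print(make_password(20))    # longer, for higher-value accounts
```

The default of 15 characters matches the length I recommend; bump it up for accounts that protect something especially valuable.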
I hope you have enjoyed and learned from this post. If you have any comments or concerns please feel free to use the comment box below.
Teaching the computing world how to protect themselves against hackers.