Acquisition: How to use three of my favorite tools

In my last post I talked about some of the acquisition tools that are available to use for imaging evidence. This post will demonstrate how to use the tools I mentioned: dd, dcfldd, and FTK Imager.

For dd and dcfldd I’ll be using the SANS SIFT kit, and for the FTK Imager demo I’ll be using a Windows 7 machine.

First let’s start with dd:

With the dd command I need to know the location of the mounted USB device that I’m going to image. The mount command will show where the USB device is in the Linux filesystem. The third line from the bottom reads: /dev/sdc1 on /media/Thumb Drive. This is the device I’m looking for: /dev/sdc1 is where the USB device sits within the Linux filesystem.
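If the mount output is crowded, lsblk (where available) gives a cleaner view. A quick sketch; /dev/sdc1 and the mount point are just the examples from this post, not fixed names:

```shell
# List every mounted filesystem; a plugged-in USB stick appears as a line like
#   /dev/sdc1 on /media/Thumb Drive type vfat (rw,...)
# (/dev/sdc1 and the mount point are examples, not fixed names)
mount

# Where available, lsblk shows a cleaner tree of all block devices,
# partitions included, whether mounted or not
lsblk 2>/dev/null || true
```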
Now that I know the location of the USB device I can start the imaging process. In this screenshot I invoked the dd command to image the USB bit for bit and to send the image file to a location of my choosing.

I’ll break down the command:

  • sudo – runs the command as a different user, in this case the root user, who has privileges to make changes to the system. This is required because root access is needed to read the /dev/sdc device.
  • dd – the invocation of the dd command itself.
  • if=/dev/sdc – tells dd that the input file is the /dev/sdc device. Notice that I put /dev/sdc, not /dev/sdc1: the 1 denotes the first partition of the USB drive. I want to image the entire drive, so dropping the 1 lets dd image the whole drive front to back.
  • bs= – the block size, which tells dd how many bytes to copy at one time. The default block size is 512 bytes; it can be changed to a larger size, which may affect performance. Typically I use a block size of 4096 bytes (4 KB).
  • of=ntfs_usb1.dd – where the output of the dd command is placed. Because I gave only a file name rather than a full path, the image file is written to the current working directory. Notice that the file name ends with the dd extension. This is a raw file, literally ones and zeros; it cannot be read by normal means, and forensic software has to be used to view its contents.
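The whole command can be rehearsed safely against an ordinary file standing in for /dev/sdc; every option behaves exactly as in the breakdown above, and nothing needs root:

```shell
# Create a 64 KB file standing in for the USB drive (in the field this
# would be /dev/sdc, and the dd command would need sudo)
dd if=/dev/urandom of=fake_drive.bin bs=4096 count=16 status=none

# Image it bit for bit with a 4096-byte block size, writing the raw
# image to ntfs_usb1.dd in the current working directory
dd if=fake_drive.bin of=ntfs_usb1.dd bs=4096 status=none

# cmp exits silently on success, so this prints only if the copy is exact
cmp fake_drive.bin ntfs_usb1.dd && echo "bit-for-bit identical"
```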

dd image completion
This screen will show after the dd command has completed imaging the USB drive.

After imaging to file I take MD5 hashes of both the USB drive and the image file to make sure that the image file is exactly the same as the USB drive.

md5 sum of original and image
Notice that the random-looking strings of numbers and letters before ntfs_usb1.dd and /dev/sdc are exactly the same. This verifies that the USB drive and the image file are identical.
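The same verification can be scripted so the two hashes are compared automatically rather than by eye. A sketch using a file in place of /dev/sdc:

```shell
# Stand-ins for the original device and its image
dd if=/dev/urandom of=fake_drive.bin bs=4096 count=16 status=none
dd if=fake_drive.bin of=ntfs_usb1.dd bs=4096 status=none

# Hash both and compare the hex digests directly
orig_hash=$(md5sum fake_drive.bin | awk '{print $1}')
img_hash=$(md5sum ntfs_usb1.dd | awk '{print $1}')
[ "$orig_hash" = "$img_hash" ] && echo "verified: image matches original"
```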

Next is dcfldd, a program almost identical to dd:

Using dcfldd to take an image
The only differences between the dcfldd command and the dd command shown above are: dcfldd after sudo, which invokes the dcfldd program; hash=md5, which tells dcfldd to use MD5 as the hashing algorithm for image verification; and md5log=md5hash.txt, which tells dcfldd to write the MD5 hash it generates to a text file named md5hash.txt.
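On systems where dcfldd isn’t installed, its hash-on-the-fly behavior can be approximated with plain coreutils: tee writes the image while md5sum hashes the very same stream. A sketch with example filenames:

```shell
# Stand-in for the source device (/dev/sdc in the post)
dd if=/dev/urandom of=fake_drive.bin bs=4096 count=16 status=none

# One pass: tee writes the image file while md5sum hashes the same
# stream, mimicking dcfldd's hash=md5 and md5log=md5hash.txt options
dd if=fake_drive.bin bs=4096 status=none \
  | tee usb_image.dd \
  | md5sum | awk '{print $1}' > md5hash.txt

# Re-hashing the finished image must reproduce the logged hash
md5sum usb_image.dd
cat md5hash.txt
```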

Notice that dcfldd shows what it has copied so far.

After imaging is complete, the same completion output as dd is shown.

dcfldd image completion

After dcfldd completed imaging the USB drive, I took an MD5 hash of the USB drive and compared it to the hash that dcfldd generated during the imaging process.

md5 hashes of image file and original
Both hashes match

The last tool is GUI based and has far more options than the command line tools used above.

After starting FTK Imager here’s the screen that you will see.
Click on create image
Select the source of the evidence. In this case it’s a physical drive.
Next select which drive to image; the drop-down list will show all of the drives that are connected to and recognized by the system.
After drive selection
Next a destination for the image has to be specified. Click add.
Select which format the image is going to be. In my case I chose Raw (dd).
Next FTK Imager will ask you to fill in some case information.
Next select a destination for the image file. I chose to place the image file on the desktop. Also notice the image fragment size: FTK Imager can split the image file into multiple pieces based on the size entered in the fragment box. If the size is zero, FTK Imager will not fragment the image file. The image file can also be compressed and encrypted.
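Fragmented images are simple to reason about because reassembly is just concatenation in order. FTK Imager uses its own fragment naming; the coreutils split command illustrates the same principle with made-up filenames:

```shell
# A 100 KB stand-in image, hashed before splitting
dd if=/dev/urandom of=evidence.dd bs=1024 count=100 status=none
md5sum evidence.dd | awk '{print $1}' > evidence.md5

# Split into 40 KB fragments: frag.00, frag.01, frag.02
split -b 40k -d evidence.dd frag.

# Reassembly is concatenation in suffix order; the hash must survive it
cat frag.* > rejoined.dd
[ "$(md5sum rejoined.dd | awk '{print $1}')" = "$(cat evidence.md5)" ] \
  && echo "fragments reassemble cleanly"
```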
After all of the options are selected, click start to begin the imaging process.
FTK Imager will display the current progress of the imaging.
After imaging is complete FTK Imager will show hash reports and other data related to the imaging process. The most important thing is to make sure that the hashes match.

So here are the three tools that I use the most when it comes to forensic imaging. I hope you enjoyed this post. My next post will be a mock case where I will go through the first two steps of the forensic process: acquisition and examination. Thanks for reading!

Acquisition: Storing the evidence and imaging tools

In the previous post I discussed some of the first steps in the acquisition process. Finding the physical or digital evidence at the crime scene, starting the chain of custody, recording when change of control takes place on the chain of custody document, image hashing, and making the copy of the original or best evidence to use for forensic examination. The only task left in the acquisition process is storing the original evidence. In this post I’ll also introduce some acquisition tools and describe some of their features.

Whether the best evidence in a case is physical or digital changes how the original evidence should be stored. If a physical hard drive is the original evidence, the usual method is to place the drive on a shelf in a climate-controlled room. There are several problems with this approach. Original evidence can sit in storage for years before it is called upon for a case, and the hard drive can break down while it is in storage. If that happens the evidence will have changed and the case will most likely be thrown out. With physical hard drives there is not much that can be done about this. With digital evidence, however, measures can be taken to safeguard against these problems. The best option is to upload it to a managed RAID system that has regular backups. (RAID stands for redundant array of independent disks, a design that makes data storage more robust.) Another option is offsite backups of the evidence: the main copy can live on a system at the police station and the backup at a separate location, for example. If disaster strikes the main location and the main storage system is damaged or destroyed, the backup can be used.

There are multiple disk imaging tools to choose from; some use the command line and others a GUI (graphical user interface). Let’s start with one of the oldest tools still in use: dd.

dd is a command line tool that is used to capture forensic images from hard drives, USB drives, and other forms of media. The name comes from the “data definition” statement of IBM’s JCL, though the tool is often nicknamed data dump; both names refer to the same tool. dd ships with Unix and Unix-like operating systems, is part of the GNU Coreutils package on Linux, and has many features:

  • Forensic image creation
  • Drive wiping
  • Data copying

Dcfldd is an upgraded version of the dd program that was created by the US Department of Defense Computer Forensics Lab. Dcfldd has many more features than its dd counterpart:

  • Hashing of the data on the fly
    • Meaning that while the imaging is in progress the program is creating a hash
  • Displays progress of the imaging process
  • Bit-for-bit verification of the image
  • MD5 and SHA-256 hashing of data

FTK Imager is a GUI based tool made by AccessData. FTK Imager can be run from a forensic system or from a USB drive. This tool has a plethora of features:

  • Forensic image creation
  • Memory image creation
  • Local file system mounting
    • This feature will allow the examiner to take a peek at what’s inside the hard drive and determine if further examination is needed
  • Image mounting
  • Deleted file recovery
  • Hashing of the imaged media
  • File and folder exporting from forensic images

These are three great tools that can be used to acquire forensic images in the field. In my next post I’ll show how to use each of these tools. Thanks for reading.

Acquisition: Collecting the evidence

Have you ever seen Law and Order or CSI? In these shows a crime takes place and it’s the detective’s job to solve the crime and place the criminal(s) behind bars. During the investigation police tape is used to cut the crime scene off so nothing is disturbed and everything at the scene is how it was when the crime took place. The preservation of the crime scene is a vital step in the process of solving a crime. The crime scene concept can also be applied to digital forensics. In this case the crime scene can be a computer’s hard drive, RAM, or a USB drive. But how can this “crime scene” be preserved so it can be analyzed for evidence? The answer to this is imaging.

What is imaging? Imaging is the process of taking a bit for bit copy of the original data contents from a computer system.

This original data can come from several different sources:

  • Hard drive
  • RAM
  • Removable media
    • CD
    • DVD
    • USB drives

To relate this to physical police forensic work, taking an image is like the police cutting off the crime scene by using police tape.

Why would an image need to be taken? This is so the data on the computer can be examined. Let’s put this into a scenario. We have a company that has an employee that may have illegal pictures on his company computer. Now the only way to find out if this is true is to examine the contents of the computer. It is not wise to check for the illegal pictures using the computer in question. This may alter the data that is on the computer. Taking an image solves this problem. Because the image is a bit for bit copy everything that is in the hard drive is preserved including when the pictures in question were put on the computer, when they were accessed, and where they may have come from.

There are two different types of acquisition: dead and live. Dead acquisition is when an image of a “dead” hard drive or removable media is taken. A hard drive is considered dead when a questionable operating system is not interacting with it; dead in this case doesn’t necessarily mean broken or unrepairable. An OS becomes questionable when it is suspected of being infected with malware or a virus. A hard drive that has been removed from a computer is also considered dead. Only non volatile sources of data storage can be imaged while dead. Non volatile means that the contents of the storage device are preserved when the device is removed from power.

Here are some examples of non volatile storage:

  • Platter hard drives
  • Solid state hard drives
  • CDs
  • DVDs
  • Flash drives
  • SIM cards

Before taking an image of a dead device it is good practice to label the hard drive using an evidence tag. This tag will contain information like:

  • Case name and evidence number
  • Date the evidence was taken
  • Model and serial numbers of the hard drive
  • Hard drive capacity
  • Which computer it came from
  • Type of evidence
    • Original evidence – the name says it all, this is the evidence that came from the computer in question
    • Best evidence – in some cases you will not be able to take the original evidence for a case, so the first copy that is taken is the “best evidence”; all other copies that are to be used for forensic examination will be taken from the best evidence.
    • Working copy – A working copy is a copy of the original evidence that is to be tested using forensic tools

When imaging a dead device, always use a write blocker. A write blocker is a physical device that prevents a computer system from writing to the device connected to it. For example, if I have a dead hard drive that I want to image, the prudent course of action is to connect the dead hard drive to the write blocker and then connect the write blocker to my forensic computer system. This setup allows me to image the hard drive in question without altering its contents, thus maintaining the hard drive’s integrity.

Live acquisition is when an image of a hard drive or other form of storage is taken while the suspect OS is interacting with the evidence. This is when volatile storage is imaged. For example, when a computer is compromised, RAM will most of the time hold records or evidence of programs that are not supposed to be running on the computer in question. The only way to acquire RAM is live imaging, because RAM is volatile evidence: when power is removed from RAM, its contents are cleared.

After the image is taken a hash needs to be taken of the evidence. Hashing is a method of taking a file or input of any length and producing a fingerprint or unique value which is used to identify a file. If the slightest bit in a file is changed then the hash of the file will radically change. There is also a chance that two different files can have the same hash. This is known as a hash collision. But the odds of this happening are astronomically small.

The two most common hashing algorithms are:

  • MD5 – Message digest 5
  • SHA-2 – Secure Hash Algorithm

The formula for making a hash is: hash = hashing_algorithm(input)

The file is fed into the algorithm, which produces a fixed-length string of numbers and characters that looks random but is entirely determined by the input and the hashing algorithm used.

Here’s an example of hashing and what will happen when a file’s contents are changed

To show what happens to a file’s hash value when its contents change, I created a text file on a Linux system. Notice in the command that the text in quotes starts with a capital T.
MD5 hash of the text file
Next I used the md5sum command to calculate the MD5 hash of the text file. The seemingly random string of numbers and letters is the MD5 hash of the text file.

 

I then used the nano text editor to change the first letter of the text file to lowercase
Finally I used the md5sum command again on the text file. The hash of the file is very different because of the change I made to the text. This shows that even the slightest change will affect the hash of the file.
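A few lines of shell reproduce the experiment end to end (the exact sentence here is made up; any text shows the same avalanche effect):

```shell
# Create the file with a capital T, hash it
printf 'This is a test\n' > test.txt
md5sum test.txt | awk '{print $1}' > before.md5

# Change a single character: the leading T becomes lowercase
sed -i 's/^This/this/' test.txt
md5sum test.txt | awk '{print $1}' > after.md5

# The two digests look nothing alike despite the one-character change
cat before.md5 after.md5
```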

After the original evidence is imaged a copy of the image is taken so it can be examined. A hash is then taken of this copy and compared against the hash of the original evidence. If the hashes match then the copy is exactly the same as the original evidence.

In addition to hashing, chain of custody must be used to ensure the evidence has not been tampered with. Chain of custody is a paper trail that runs from the moment evidence is seized by law enforcement through final disposition in a court of law. It starts with the name and contact information of the first responder who took the item in as evidence. The document then records the name and contact information of the next person who takes control of the evidence, along with signatures from both people confirming that the exchange of control took place. This is quite common in police work, where a first responder has to pass the original evidence to a digital forensics investigator. The chain of custody document should continue to record who takes control of the original evidence until that evidence is imaged. Once the image of the original is made and the hash verifies it is an exact copy, the hash carries the chain of custody until final disposition in court; in other words, the hash of the image is its link to the chain of custody document.

For my next post I’ll be discussing best practices for storing original evidence, both physical and digital, and some of the tools I use to image disk drives. Thanks for reading!

 

Digital forensics: Detective work in cyberspace

Have you ever seen the movie Live Free or Die Hard? This 2007 movie featured John McClane trying to stop a cyber terrorist. I remember how I felt watching it: thrilled and excited by what can be done with digital information. It can be used in many different ways, for good or for evil. One of my favorite parts of the movie was seeing the actors roll out the rubber keyboards and start typing on a computer, and then all of this crazy hacking stuff started happening. This was the movie that pushed me to start studying information security and hacking. I first studied ethical hacking, otherwise known as penetration testing, and a little later on I discovered digital forensics. From that point on I was in love; I had found my calling.

So what is digital forensics? It is a subset of forensic science that focuses on the recovery and examination of data or evidence found in computing devices. A great analogy is that digital forensics is just like the forensics you see on shows like Law and Order or CSI, but instead of a physical crime scene there is a hard drive that “contains” the crime scene. Digital forensics is used to unravel the events that have taken place on a computer system. Events may be criminal related, and some examples of crimes that digital forensics deals with are:

  • Intellectual property theft
  • Network intrusion
  • Credit card theft

The last crime we have seen quite a bit of in recent months: both Target and Home Depot have been victims of credit card theft on a massive scale. The only way to find out how the thieves breached these companies is to examine what happened on the affected systems.

There are three steps in the digital forensics process:

  • Acquisition
  • Examination of the evidence
  • Reporting

The first step is acquiring the evidence for future examination. Depending on the situation, the investigator may be grabbing a physical hard drive, the contents of RAM, CDs, DVDs, or USB drive(s). When obtaining the contents of a hard drive or RAM, the best practice is to obtain a bit for bit copy of the original evidence called an image. After the image is taken, a hash should be generated for the original evidence; this hash will be used in the next step of the forensic process. The hashing process is similar to what I described in the passwords post: if the slightest bit is changed in a file, its hash changes dramatically. So when a hash is taken of both the original and the copy of the evidence and the hashes are the same, their contents are exactly the same. This is essentially preserving a crime scene exactly as the criminal left it so it can be examined for evidence.

The next step is examination of the evidence. As a rule of thumb an investigator should never examine the original or best evidence. So before examination a copy of the original evidence should be taken and a hash should be taken of the copy. Then the hash values for both the original and the copy are compared. If they match then the contents of both the original and the copy are the same. Depending on what the investigator is looking for he or she will use a series of tools that will comb through the contents of the working evidence to find what they are looking for.

The last step is the single most important step in the digital forensics process: reporting. After the examination a report explaining what was found during the investigation must be written and presented to upper management or the owners of the affected system. Most of the time these people will not be technically savvy so the findings of the investigation must be translated into language they can understand. After all if the report is not done well then proper action may not be taken and that will make the entire investigation process almost meaningless.

For my next few posts I’ll be focusing on each step of the forensic process. I’ll also be posting some of my own tests that I perform in my lab, so you can see what happens to a computer when it is hacked and when evidence is analyzed. If you have any questions or comments, please feel free to leave them below.

Rob’s toolbox: Ninite

I always used to download and update my programs the hard way. I would wait until a program complained that an update was available, then download the update and patch the program. I’m sure many of you do the same thing. It’s quite time consuming. Then I discovered Ninite.

Ninite is a service that supports the installation and update of multiple programs simultaneously. The Ninite service supports a number of popular programs including iTunes, Skype, and Steam. The way Ninite works is you visit the Ninite website and select the program(s) that you wish to update or install then click the “get installer” button and your computer will download a program that will install and update the programs you selected in the background without any configuration required. Ninite will install the programs in their default locations and with the default settings.

Ninite is very easy to use, I’ll demonstrate it below:

Ninite website
Open your favorite web browser and head to http://www.ninite.com
Ninite screen showing programs
As you can see on the bottom of the Ninite website are the programs that the Ninite installer supports.
Ninite screen with programs selected
Click the boxes next to the programs you wish to have Ninite install and update. Then click get installer.
Ninite installer screen
Next you will see this screen. It shows what programs you want downloaded, a confirmation page of sorts. Click download installer to download the Ninite program. You can either choose to run the program or save it to your computer. I recommend saving it, because then you can run it at any time to update the programs you selected. Ninite will usually download the program to the default location (the user’s Downloads folder) unless you specify somewhere else.

 

Locate the Ninite program icon and double click it to run it. After that Ninite will take care of the rest.

Ninite is a fantastic tool that will save time when updating and installing programs on computers. For my next post I’m going to start diving into my specialty: digital forensics. This post will explain what digital forensics is and why it is important today. Thanks again for reading and if you have any questions or concerns please comment below.

Rob’s toolbox: Free file sync

A little while back I was using a simple way to back up my computer’s data: I would drag and drop the folders between my original hard drive and the backup. Eventually, when I had a great deal of data on my computer, it became difficult to keep track of what was backed up and what wasn’t. I could have just continued to drag and drop my folders and files onto the backup drive, but I did not want to deal with all of the duplicate warnings that came along when I backed up my data. Strangely enough I never used the Windows built-in backup program; before I had a chance to do so I was shown an interesting backup utility called free file sync.

So what is free file sync? Free file sync is a program that synchronizes one hard drive’s data contents to another, which makes it perfect for backing up data. Free file sync is open source software, meaning its source code (the actual programming code) is openly available for the public to view and edit. What is great about open source software is that it is usually developed by a public community of developers, so, as with free file sync, updates happen quite often.

Download link for free file sync: http://sourceforge.net/projects/freefilesync/

Free file sync’s GUI shows the two hard drives in two separate tables. On the left is the primary hard drive and on the right is the backup or secondary hard drive.

Free file sync's GUI interface
Free file sync’s GUI interface

Free file sync has some great features:

  • Multiple drives can be backed up at the same time
  • Compares contents of one drive against the contents of another
  • Multiple ways to backup data
    • Two-way
      • Two-way updating is where changes to one hard drive will be reflected on the other when the backup is done. This occurs both ways. For example, say I have two hard drives: A and B. I want to back up the contents of drive A to drive B. Free file sync will compare A to B, see what is different, and write changes to B based on those differences. If I create a text file on A and then back up to B, the same text file will be written to B. But with two-way, if I change a file on B, the changes will be written to A when the backup is done. In my opinion this isn’t a good approach if the two drives are a primary and a backup; the only time I would write from the backup to the primary is if I were restoring the contents of the primary drive from the backup.
    • Mirroring
      • Mirroring is where the backup drive is changed to match the primary drive. This is the type of backup method I use and recommend. If I make changes to the primary drive, say deleting a few files and adding some others those changes will be written to the backup when I use free file sync.
    • Update
      • Updating is where new and updated files are copied to the backup drive. Any files that are deleted from the primary drive that were previously backed up will still remain on the backup drive.
    • Custom
      • The custom setting is where the user can configure the way free file sync will back up hard drive contents. There are five options that can be turned on or off to create the custom setting:
        • Copy new items to the right
        • Overwrite right items
        • Leave as unresolved conflict
        • Overwrite left item
        • Copy new items to the left
  • Cross platform support
    • Free file sync can be used on Windows, Mac OS-X, and Linux
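To make the Mirror mode concrete, here is a deliberately crude equivalent using only coreutils (directory and file names are made up; rsync with --delete is the usual command-line tool for real mirroring):

```shell
# A primary drive with two files
mkdir -p primary
printf 'quarterly report\n' > primary/report.txt
printf 'old notes\n' > primary/notes.txt

# Crude mirror: make the backup exactly match the primary
rm -rf backup && cp -a primary backup

# Delete a file on the primary, then mirror again...
rm primary/notes.txt
rm -rf backup && cp -a primary backup

# ...and the deletion is reflected on the backup, unlike Update mode
ls backup
```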

For Mac users I recommend using the built in time machine for backing up data and settings.

You can take the contents of an entire hard drive and just place them into a folder on the backup drive. It’s all up to user preference.

Free file sync is easy to use. I will demonstrate its use in the tutorial below.

The highlighted section is where the primary hard drive's contents will be
The highlighted section is where the primary hard drive’s contents will be

 

In the highlighted section click the browse button and select the folder(s) and/or file(s) to backup to the back up hard drive
In the highlighted section click the browse button and select the folder(s) and/or file(s) to back up to the back up hard drive

 

This is where the backup drive's contents will be. Click on the browse button and select the location where you want your back up files to go.
This is where the backup drive’s contents will be. Click on the browse button and select the location where you want your back up files to go.
Click on the compare button and free file sync will compare the two locations to each other. By default, whatever is not on the left side (primary hard drive) will be placed into the right side (backup hard drive).
If you are satisfied with where the files are going to be placed, click on the synchronize button. Make sure you have the backup settings you want; click on the gear icon next to the synchronize button to change them. After that, click on the start button to begin syncing the two drives.
After the syncing is complete free file sync will report how much data was transferred and how long the back up took.

Free file sync is an easy tool to use for backing up a single folder or an entire hard drive’s contents. For the next post I’ll be showing another tool I use that allows me to install and update multiple programs on my computer at the same time. As always if there are any questions or concerns please email me at hackingdefense@icloud.com or leave a comment below.

Keeping your computer healthy

I remember when I got my first computer back around 2000. It was a great machine when it first came out: it operated quickly, my programs ran quickly, and it didn’t act up too much. All of that changed after about three months of use. It started getting sluggish, programs would not run properly, and it would lock up quite a bit. I didn’t know what to do at the time, so I just bought a new computer. A few months later I learned what was causing the original computer to act up: a lack of maintenance. I didn’t have any AV (anti-virus) or anti-malware programs on it, and I did not regularly run two built-in Windows tools, chkdsk (check disk) and defrag. If I had used this set of tools I might have been able to keep the older computer healthy. This post is about how to keep your computer virus and malware free, and about the built-in Windows tools that help with file system maintenance.

First let’s start with anti-virus programs. What exactly is an anti-virus program? These programs fall under an umbrella called HIDS (host-based intrusion detection systems). Some may disagree with this classification, but I believe AV falls under this category since these programs monitor the internals of a computer system for unwanted software. Examples of AV programs are:

  • Microsoft Security Essentials
  • Norton Anti-Virus
  • McAfee Anti-Virus

How do these programs work? After installation the program usually downloads files called signatures. The program uses these signatures to pick up unwanted software on the computer. When the AV program scans the computer it looks for programs that match the signatures in its database. If there are any matches, the program flags them, informs the user about what it has found, and gives the user options on what to do with the unwanted programs. Most AV programs have the same set of features: virus detection, real time protection, signature downloads, etc.
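The signature-matching loop described above can be sketched in a few lines. This is a toy illustration only, not how any real AV engine works: the "known-bad" sample and the threat name are made up, and real products use much more than plain file hashes.

```python
import hashlib

def file_signature(data: bytes) -> str:
    """A simple signature: the SHA-256 digest of the file's contents."""
    return hashlib.sha256(data).hexdigest()

# Toy signature database mapping known-bad hashes to threat names
# (the "malware" here is just a made-up byte string).
known_bad = b"totally-not-malware-sample"
SIGNATURES = {file_signature(known_bad): "Example.Trojan.A"}

def scan(files: dict) -> list:
    """Flag every file whose signature matches an entry in the database."""
    hits = []
    for name, data in files.items():
        if file_signature(data) in SIGNATURES:
            hits.append((name, SIGNATURES[file_signature(data)]))
    return hits

files = {"notes.txt": b"harmless text", "download.exe": known_bad}
print(scan(files))  # [('download.exe', 'Example.Trojan.A')]
```

This also shows why signature updates matter: the scanner can only flag what is already in its database.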

In my opinion there is nothing but pros when it comes to having AV software. No computer in use today should be without some form of AV software.

Here are some of the pros:

  • Instant detection of viruses
  • Deletion of viruses
  • Quarantine of viruses

There are no cons to having an anti-virus program installed on your computer. Today, with hacking being so widespread, anti-virus is critical to the safety and security of computers.

I use Microsoft Security Essentials. It’s a free anti-virus solution that is available from Microsoft.

Download website: http://windows.microsoft.com/en-us/windows/security-essentials-download

Another type of program that is extremely useful for computer security is anti-malware. These programs do essentially what anti-virus does, but they are built to target malware. Although a virus is technically one kind of malware, anti-virus and anti-malware tools specialize in different classes of threats. In my experience it’s best to have both an anti-virus and an anti-malware program installed on a computer at all times. Some examples of anti-malware programs are:

  • Malwarebytes Anti-Malware
  • Spybot – Search & Destroy

Features, pros, and cons for anti-malware programs are pretty much the same as those for anti-virus packages. These days, never run a computer without some form of anti-malware installed on it.

The next couple of tools are two built-in Windows utilities that assist with maintaining the file system and hard drive(s) of the computer. The first is chkdsk (check disk) and the second is Disk Defragmenter.

The first tool, chkdsk (pronounced check disk), is a built-in Windows tool that checks the hard drive for errors in the file system. These errors can prevent the computer from functioning if they are not repaired, and this tool helps fix them. It can be used from both the Windows GUI (graphical user interface) and the command line.

Click the start bar then Computer. After that, right click the C drive and click on Properties.
Under Error-checking click on Check now
This screen will allow you to check the C drive for errors. I normally click the checkbox for scanning the disk drive for bad sectors. Under normal circumstances Windows will not allow this scan to take place, because the drive is currently “mounted” and in use. If Windows does allow the check to run it usually takes quite a while. Using the command line is a much faster method for checking the disk drive for errors.

This is the second method for checking the disk drive, using the command line.

Click the start bar then enter “cmd” into the search bar. After that right click on the cmd icon then click run as administrator. You may have to enter an administrator password in order to open the command prompt window.
After the admin command prompt comes up type “chkdsk” into the command line. Since there is no drive letter specified after the command Windows will check the boot drive (C:) by default.
After hitting Enter for the check disk command, Windows will go through checking the disk for errors. In the output, notice that the program says the /F parameter was not specified. This means Windows will only check for errors, not fix them. In order for Windows to perform a complete check the C drive must be “unmounted”. This is done by typing “chkdsk /F” at the command prompt. Windows will then ask if it can perform a disk check on the next reboot of the system. Enter y, then hit Enter. The next time you reboot the computer, Windows will run check disk while the system is booting and fix any errors it finds.
This is what is displayed when the /F parameter is used on the C drive. Enter y and the next time the computer boots the drive will be completely checked and any errors will be fixed.

The second tool, Disk Defragmenter, is used to organize the contents of a hard drive. As the hard drive is used its contents become fragmented, and as fragmentation increases the performance of the computer slows down. This tool helps mitigate that problem. It can also be used from both the GUI and the command line; I will demonstrate both methods in the tutorial below.

Click the start bar then Computer. Then right click the C drive and left click Properties.
Under Defragmentation click Defragment now
This is the Disk Defragmenter screen. Click on the C drive then click Analyze disk. Always have Windows analyze the disk first; when this is done Windows checks to see how much of the C drive is fragmented. Based on this percentage Windows will tell you either to defragment the disk or to leave it alone. If the drive needs to be defragmented, click Defragment disk after the analysis is complete. This method is much slower than the command line method. Also, Windows might complain that the disk drive is in use and refuse to defragment it.

Here’s the command line method of using defragment.

Using an administrator command line prompt (see above on how to open one), type in “defrag c: /a”. “defrag” invokes the defrag program, “c:” is the letter of the drive you want to defragment, and “/a” is the parameter that tells the defrag program to analyze the drive. To defragment the drive without analyzing it first, just type in “defrag c:”.
This is what is displayed in the command prompt when defrag is invoked without any parameters.

One of the most important things you can do with your computer is keep Windows patched. What exactly is a patch? A patch is a piece of code that fixes a flaw in a program. When Microsoft finds a flaw in Windows they create a patch to fix the problem. Sometimes you will see a window pop up saying updates are ready to be installed on the computer; these are the patches Microsoft releases to fix problems. These patches usually come out on the second Tuesday of every month, which is called Patch Tuesday. Always keep your computer patched with updates. There is a GUI window that allows you to check for updates anytime you want, shown in the tutorial below.

Click the start bar then enter “windows update” into the search bar and hit enter.
This is the Windows update screen. On the left click “check for updates”. Windows will then check for updates.
After Windows completes checking for updates it will tell you what updates are available to download and install. These updates will be split into two categories: important and optional. Always download all of the important updates.
After clicking on “1 optional update is available” this screen pops up. It tells you the details of the optional updates that are available to download and install. You will see a check box to the left of the Name column header; checking it selects all of the updates at once. Do this when installing important updates.

The settings for Windows update can also be set to download important updates automatically.

On the left side of the Windows update screen click on “change settings”
This is the screen that changes the settings for Windows update. Install updates automatically is usually selected by default and is also the setting I recommend.

The last thing is probably the most important task of all: back up your data. I remember on one of my older computers I lost all of my data because I didn’t back it up. Do not make this mistake. With all of the information that is on the average computer these days it is a real pain to have to start from scratch if data is lost.

I run my anti-virus, anti-malware, patch updates, and my data backup once a week. This has been good practice for me and I’m sure it will work well for you.

There is a really neat tool that helps with backups called FreeFileSync. I use this tool to back up my data and I’ll cover it in the first post of a new blog series called Rob’s tool box. Thanks for reading this post, and if there are any questions or comments feel free to comment below.

Revealing the secrets of the password

Have you ever had the experience of forgetting a password? Most people now have their passwords recorded somewhere. I remember I had a piece of printer paper with all of my usernames and passwords on it, about 30-40 different combinations. That’s quite a few passwords to remember. I had been creating new accounts and services over the course of a few years and putting the login information on this single piece of paper. Then I lost the paper and was at a standstill. I scrambled until I found it, then breathed a sigh of relief. After that experience I decided to go with electronic password storage. I started with a program called Keepass and from there went to Lastpass. Electronic storage is much easier for me to handle: I don’t have to worry about remembering, or losing, a piece of paper with all of my passwords recorded on it. I’ll go into more detail about Keepass and Lastpass later, but first, what is a password?

A good definition of a password is a combination of letters, numbers, and special characters (such as @, #, $, and *) that, when passed to an OS, application, or web service, allows the authorized user access to their account. Passwords can also be combined into passphrases. These can be quotes from TV shows or sayings that the user prefers. I recommend this approach because they are easy for the user to remember and they are usually long. When setting a password for an online service, application, or account, make sure to check the password requirements. Sometimes they are very lax; the maximum length of the password might be only 10-12 characters. I have seen this in several places and personally I don’t like it. I usually like my password length to be somewhere between 15-20 characters, longer if necessary depending on the account it is protecting.

How do passwords work? I’ll use Windows passwords as an example. When you create an account you fill out your username and password information, but what does Windows do with this password? When the account is registered, Windows hands the password to a cryptographic algorithm. (Cryptographic algorithms are sets of rules used to scramble a human-readable word.) After the password is scrambled it is called a cryptographic representation, or hash. The hashing process cannot be reversed, which is why attackers must resort to password cracking. Windows stores this hash and uses it as a comparison value when someone tries to log into the account the password is protecting. When you try to log into your user account, Windows takes the password you typed at the login screen, hashes it, then compares it to the original hash created during registration. If the hashes match you are granted access; if not, you are denied.
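The register-then-compare flow can be sketched in a few lines. This is a generic illustration using PBKDF2 from Python’s standard library, not the hashing scheme Windows itself uses; note that the sketch adds a random salt, which classic Windows password hashes do not.

```python
import hashlib, hmac, os

ITERATIONS = 100_000  # slow the hash down to make offline cracking harder

def register(password: str):
    """At account creation: hash the password and store only salt + hash."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def login(attempt: str, salt: bytes, stored: bytes) -> bool:
    """At login: hash the typed password and compare it to the stored hash."""
    digest = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, stored)  # constant-time comparison

salt, stored = register("correct horse battery staple")
print(login("correct horse battery staple", salt, stored))  # True
print(login("wrong guess", salt, stored))                   # False
```

Nothing here ever "decrypts" the stored value; the only way in is to produce a password that hashes to the same digest, which is exactly what password cracking tries to do.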

In the red box are the account hashes from a Windows 7 machine. You will see names like account1 followed by a random set of characters: account1 is the name of one of the registered accounts on the machine and the random set of characters is the password hash. The format of the output is account name:SID (security identifier):password hash. Note: this picture is the output of a hacking tool I ran against my own VM (virtual machine).

Windows comes with a nice built-in password security feature: the password policy. This is used to control how the passwords for all of the accounts on the system are created. The administrator can set a level of complexity, a minimum length, and a password age. This feature can be configured through the Local Security Policy dialog box on the Professional and Ultimate editions of Windows 7 and 8; for other editions the command prompt must be used to change the password policy.

In my experience I have found some practices with passwords are good and others not so much. I’ll walk through the rules I use when creating and dealing with passwords.

First and foremost, NEVER EVER use the same password for multiple accounts and services. If that password is cracked, the attacker will have access to every account that uses it. Reusing a password is never a good idea.

With passwords most users would think that having special characters is the most important factor; it’s not. The most important factor in password creation is length. Brute force attacks can crack any password no matter how complex; it’s just a matter of time. This is why length is so important: the longer the password, the longer it takes to crack. Some passwords can take years or decades to crack based on length alone. In my experience a good password length is 15 characters. Do not use passwords shorter than seven characters; these are fairly easy to guess and crack. Special characters are a plus, but the most important thing is to make sure your passwords are of a good length. Do not base your passwords on your name, email address, or date of birth; doing so makes them easier for an attacker to guess.
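The length argument is easy to put numbers on. The sketch below estimates worst-case brute-force time as keyspace size divided by guess rate; the 10-billion-guesses-per-second figure is an assumption standing in for a fast offline cracking rig, not a measured benchmark.

```python
def seconds_to_search(charset_size: int, length: int, guesses_per_sec: float) -> float:
    """Worst-case time to exhaust every password of the given length."""
    return charset_size ** length / guesses_per_sec

RATE = 1e10  # assumed guesses per second (hypothetical offline cracking rig)

for length in (7, 10, 15):
    secs = seconds_to_search(26, length, RATE)  # lowercase letters only
    years = secs / (3600 * 24 * 365)
    print(f"{length} lowercase characters: about {years:.3g} years")
```

With these assumptions a 7-character lowercase password falls in under a second, while a 15-character one takes thousands of years; each extra character multiplies the work by the size of the character set, which is why length beats complexity.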

I have used two different programs to store my passwords after I stopped recording all of them on a sheet of paper.

The first program is Keepass available at: http://keepass.info/download.html

Keepass stores the passwords that are placed in the program in a separate file. This file is opened using a master key. I like to think of this file as the password vault. When the file is closed the file is encrypted by the Keepass program. Some additional features are: password creation, grouping of passwords, adding notes to password entries and much more.

All of the features can be viewed here: http://keepass.info/features.html

Here are the pros of Keepass in my opinion:

  • It can hold a large number of passwords
  • It can create complex passwords with a few clicks of the mouse
  • It can hold notes for each password entry

This program has a couple of drawbacks in my opinion.

  • It doesn’t have an auto fill feature
  • In order to use the mobile app a copy of the password file must be saved to the mobile device.

Here are the websites where you can download the mobile versions of Keepass.

Website for android app: https://play.google.com/store/apps/details?id=com.android.keepass&hl=en

Website for iOS app: https://itunes.apple.com/us/app/minikeepass-secure-password/id451661808?mt=8

Overall Keepass is a great program for anyone that is looking to store their passwords on their computer or mobile device.

Another great program for storing passwords is called Lastpass. This program differs from Keepass in that the passwords are not stored in a file on your computer but on an online server. Lastpass has many features: website password and username auto filling, secure password generation, and saving passwords as you create new accounts on websites.

There are three different versions of Lastpass:

  • Lastpass Pocket
  • Chrome extension
    • The app is controlled from the Chrome web browser
  • Mobile app
    • iOS
    • Android
    • In order to use the mobile version of Lastpass you have to be a premium subscriber. It’s only $12 per year. That’s $1 per month, more than worth it in my opinion.

In my experience Lastpass has many pros.

  • Password generation
  • Note taking for each password entry
  • Password auto filling
  • Password entry sorting
  • Security checking for passwords

In my opinion there is one con with Lastpass:

  • You have to remember the security key to open the Lastpass password vault. If this key is lost or forgotten then the vault cannot be opened.

Using these programs and steps has made passwords easy for me to use and manage. I hope these tips and tricks also make it easy for you as well. If there are any questions or concerns please feel free to drop a comment on the post or email me at: hackingdefense@icloud.com

Windows User Accounts: How to build your first line of defense against hacking

Imagine yourself at a Best Buy or other electronics store. You’re looking at a brand new computer tower that has Windows on it and your heart is set on buying it. After you get it home you go through the setup of the machine and the user account(s). After a few months of using the computer, BAM!!!! Everything starts acting odd and you do not know why. All you did was browse the web and install a program on the computer. In the background, without you knowing it, the program you installed downloaded and installed additional programs. How did it do this without you knowing?

Every day many people use their home computers for many things: email, homework, writing blog posts, and banking or other sensitive activities. Your files have to be protected from unauthorized access, and the first step is to not let a malicious person gain access to an administrator account. The problem with the new computer in the story above is that the account you set up during installation, the one you use for everyday work, is an administrator account.

What is an administrator account? First I have to talk about the concept of privileges. With Windows there are different types of user accounts. The important types are: standard user and administrator. An administrator account has the ability or “privileges” to make changes to the system. Some of these changes include:  installing and uninstalling programs, deleting certain files, changing security settings, and modifying the network settings. Standard user accounts do not have the privileges that administrator accounts have. This type of account can create files like documents and spreadsheets; they can also delete the files they create. However these accounts cannot make any changes to the system or access any file that does not belong to them. When a user logged into a standard user account tries to make system changes Windows will prompt the user for the administrator’s password (if there is one). Unless this password is put in correctly the system change will not take place. This feature is called user account control (UAC) and its primary purpose is to make sure unwanted system changes do not take place.
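The privilege split described above can be modeled in a few lines. This is purely a toy illustration of the idea, with made-up account names; real UAC is far more involved, and a real system would store only a hash of the administrator password rather than the password itself.

```python
ADMIN_PASSWORD = "s3cret"  # stand-in; a real system stores only a salted hash

class Account:
    def __init__(self, name: str, is_admin: bool):
        self.name = name
        self.is_admin = is_admin

def make_system_change(user: Account, typed_password: str = "") -> str:
    """Admins change the system freely; standard users must supply
    the administrator password first (the UAC prompt)."""
    if user.is_admin:
        return "change applied"
    if typed_password == ADMIN_PASSWORD:
        return "change applied (elevated)"
    return "change blocked: administrator password required"

root = Account("admin", is_admin=True)
alice = Account("alice", is_admin=False)
print(make_system_change(root))             # change applied
print(make_system_change(alice))            # blocked without the password
print(make_system_change(alice, "s3cret"))  # change applied (elevated)
```

The point of the model: a program running as the standard user cannot quietly take the first branch, so any system change forces the password prompt, which is exactly the alert that stops background installs.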

For regular computer usage an administrator account should never be used, because you don’t want changes made to the computer by mistake. Also, if a hacker gains control of your user account and it’s an administrator account, the hacker has complete control of your computer. Another reason to separate the user and administrator accounts is that if a user logged into an administrator account clicks on a link that contains a malicious program, the program will install itself without the user realizing it. If the same thing is done from a standard user account, user account control will be triggered, alerting the user that a program is trying to install itself. This is one way to stop programs from installing without you wanting them to.

For more information on how user account control works check out the Microsoft page that describes the User account control technology: http://windows.microsoft.com/en-us/windows7/products/features/user-account-control

Some of you may think that it’s inconvenient to have to type in a password every time you want to install a program on your computer. Think of it this way: you’re trading a little convenience for security. With airports getting to the gates can take a while because of airport security. Computer security is the same way; if you can put up with inputting a password every time you want to make a system change then you will have a layer of defense not only against attackers but also against user error and programs installing themselves without you knowing about it.

Setting up separate user and admin accounts can sound hard, but it is not. This can be done in one of two ways: the Windows GUI (Graphical User Interface) or the CLI (Command Line Interface). Personally I prefer the command line due to its simplicity and speed, but with the command line you need to know certain commands and syntax in order to create the accounts. I’ll cover the GUI first:

  1. Start off by making sure the account you are using is an administrator account
    1. This is required because the admins are the only accounts that can make changes to a system. This includes creating user accounts.
    2. Click start -> control panel -> user accounts and family safety -> User accounts
Click on the start menu then click control panel
Click on user accounts and family safety
Click on User accounts
User account screen. In the top right corner of the window you will see your account picture, account type, and whether the account is password protected. Make sure your account type says “Administrator”.

After you confirm that the account you are logged into is an administrator account the next step is to create a second administrator account that you know the username and password to. This account will take over the administrator privileges that your daily usage account will no longer have.

Click start -> control panel -> User Accounts and family safety -> click add or remove user accounts

Click add or remove user accounts
Click create new account
Select a username for the new account and make sure to click administrator, then click create account.
Your newly created account will then show up on the manage accounts screen. Next you need to set a password for this new account. Click on the new account’s icon.
Then click create a password.
Select a password for the new administrator account. Make sure it is strong. You can also make up a password hint if you wish.

After the setup of the new admin account is complete you can proceed to downgrade your regular usage account to a standard user account.

Close the currently open windows and click start -> control panel -> User accounts and family safety -> User accounts

Click change your account type. UAC will not trigger because you are already logged in as an administrator. Click standard user, then click change account type. After this, the user accounts window should say “Standard user”.

Log out of the changed account then log back in for the changes to take place.

As an additional measure I alter my User account control settings to make them more sensitive.

Click on change User Account Control settings
This screen will pop up. These are the default settings for UAC. Personally I don’t like that Windows does not inform me when I make changes to the system. I make mistakes, and I would like Windows to double check that I really want to make a change.
This is the UAC setting I recommend. It may be annoying that UAC will always trigger but I prefer it. This will stop any unknown software from installing itself in the background.

For the more advanced user the command line can be used to change user account types. Each account in Windows belongs to a group. Examples of groups are: the user group and the administrator group.
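The group idea can be sketched as sets of account names. This toy model (with made-up account names) mirrors what the net localgroup commands in this section do: removing an account from the administrators group does not delete the account itself.

```python
# Toy model: each local group is a set of account names (names are made up).
groups = {
    "administrators": {"Administrator", "rob"},
    "users": {"Administrator", "rob", "alice"},
}

def is_admin(account: str) -> bool:
    """An account has admin privileges only through group membership."""
    return account in groups["administrators"]

def downgrade(account: str) -> None:
    """Remove the account from administrators only; the account itself
    (its membership in 'users') is untouched."""
    groups["administrators"].discard(account)

print(is_admin("rob"))           # True
downgrade("rob")
print(is_admin("rob"))           # False
print("rob" in groups["users"])  # True: the account still exists
```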

  1. Open an administrator command line prompt. Click start -> type cmd -> right click on the cmd icon -> click run as administrator
Click start -> then type cmd into the search box -> then right click the command prompt icon and click run as administrator. After that click yes and this should show up. Make sure the top of the prompt reads “Administrator”. Another indicator that the command prompt is an administrator prompt is that the current working directory is C:\Windows\System32.
  1. Confirm that the account you want to downgrade is an administrator by listing the accounts that have administrator level access
    1. Syntax: net localgroup administrators
Typing the command net localgroup administrators will show which user accounts are admins on the system. Make sure the user account you want to downgrade is in this list. If it is not then your work is already done.
Type the command: net localgroup administrators (accountName) /del (accountName is the name of the account you want to downgrade to a standard user). This command will not delete the user or the administrators group; it only removes the selected user account from the group, making it a standard account.
Type the command net localgroup administrators again and check whether the account you wanted to downgrade has been removed from the group. If it has, your work is done. Congratulations, you have successfully downgraded an account from administrator to a standard user using the command line.

Log out of the account and log back in to have the changes take place

After separating these accounts make sure they are both protected with strong passwords. A strong password should be long and contain several different types of alphanumeric and special characters. In a later post I will cover how quickly common passwords can be broken, tools that can create and store passwords, passphrases, and how to make a strong password that is easy to remember.

I hope you have enjoyed and learned from this post. If you have any comments or concerns please feel free to use the comment box below.

Teaching the computing world how to protect themselves against hackers.