M2 - Computers, Mobile Devices, and Online Platforms Overview
Class: CYBR-405
Notes:
Computer Overview
Computer Operating Systems
All computer systems are managed by an operating system that performs various functions. Review the types of functions the operating system performs below.
- Manages hardware resources: RAM, disk storage, and devices (printers, scanners, network cards, keyboards, CD/DVD drives, etc.).
- Manages software resources: applications and system processes.
- Performs time-sharing functions involving input/output of data and memory allocation.
- Acts as an intermediary between users, hardware, and applications.
Boot Process
The boot process is the set of steps required to take the computer from a no-power state to a fully operational computer that is ready for users to perform tasks. The following steps illustrate several of the milestones in the boot process.
Step 1: Power On
- For the forensics examiner, power to a system can substantially impact how much information can be gathered from a system. Disk encryption products, such as Windows BitLocker and macOS FileVault 2, are available to prevent unauthorized access to the data on the disk, but information in memory (RAM) remains unencrypted. This would include decryption keys and passwords.
- A forensics examiner may need to collect a live RAM memory capture from a running system in order to obtain the encryption key needed for accessing the encrypted disk once the system is powered off.
- In such cases, the evidence is the memory (RAM). There are several open-source and commercial forensic tools capable of safely acquiring a RAM memory capture from running systems. Once a RAM capture is collected, other forensic tools can be used to parse the content of the RAM image in order to retrieve valuable information about the system.
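The value of a RAM capture can be illustrated with a simple string-carving pass, often the first step an examiner takes before applying dedicated memory-forensics tools. This is a minimal sketch, not a substitute for such tools: the sample buffer and its contents are invented for the demonstration, and a real capture would be read from a file instead.

```python
import re

def extract_strings(data: bytes, min_len: int = 6):
    """Return printable-ASCII runs of at least min_len bytes from a raw capture."""
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in pattern.finditer(data)]

# Toy demonstration on a synthetic buffer; a real capture would be read with
# open("memory.raw", "rb").read() (the file name is hypothetical).
sample = b"\x00\x01password=hunter2\x00\xffBitLocker key?\x00ab\x00"
print(extract_strings(sample))  # -> ['password=hunter2', 'BitLocker key?']
```

Real memory images also hold structured artifacts (process lists, network connections, loaded drivers) that purpose-built parsers recover far more reliably than raw string carving.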
Step 2: Central Processing Unit initialized
- Powering the main board or motherboard of a computer causes the computer's clock to start. The computer then begins a self-initialization procedure. This includes a quick hardware check of critical processor and RAM components. Once the processor assesses itself as "fit," the boot process continues.
Step 3: Load Local System ROM
- The processor then references the system's Read-Only Memory (ROM) for firmware. Firmware is software written for a specific hardware implementation. The firmware is loaded into a predetermined memory location. The boot process will fail if memory is absent. Once the firmware is loaded and verified, the boot process continues.
Step 3B: BIOS and UEFI:
- For PC computers, such firmware is located on a memory chip on the motherboard and contains the system's Basic Input/Output System (BIOS) that controls the interaction between the motherboard, hardware, and operating system.
- The new standard that has replaced the BIOS in computers is Unified Extensible Firmware Interface (UEFI). Advantages of UEFI over BIOS firmware include the ability to use large disk drive volumes over 2 TB with a GUID Partition Table (GPT).
Step 4: Run Power On Self-Test (POST)
- The firmware's first responsibility is to inventory and test the system's Random-Access Memory (RAM) fully. Following the memory, the firmware begins to inventory primary computer resources. All of the following items must be available and fit, or POST will fail:
- Computer bus - The communication pathways of data.
- Input-output (IO) devices such as keyboard, mouse, and video card. A monitor is not required for a computer to pass POST.
- Memory Components - Any specialized memory devices and almost all disk storage.
- Specialized Components - This inventory includes detecting SATA, RAID, USB and network hardware controllers.
- The POST memory test writes data to each memory location and then compares the result of a subsequent read from that location.
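The write-then-read-compare test described above can be sketched in a few lines. This simulates the POST logic against an ordinary byte buffer; real POST code runs against physical RAM before any operating system exists, and the test patterns chosen here are illustrative.

```python
def post_memory_test(memory: bytearray, patterns=(0x55, 0xAA)) -> bool:
    """Write each test pattern to every location, read it back, and compare.
    A failed comparison indicates a faulty memory cell."""
    for pattern in patterns:
        for addr in range(len(memory)):
            memory[addr] = pattern          # write
            if memory[addr] != pattern:     # read back and compare
                print(f"memory fault at address {addr:#06x}")
                return False
    return True

ram = bytearray(4096)          # stand-in for a 4 KiB RAM region
print(post_memory_test(ram))   # a healthy buffer passes
```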
- During the boot process, the system may display firmware information that could help better identify the system. This information can include the firmware manufacturer, version number, version date, serial number, setup program key, various logos, etc.
Step 5: Take Hardware Inventory
- In this step, the firmware continues by completing two specific functions:
- Inventory any devices attached to the specialized components identified in the previous step
- Run a POST step for these newly inventoried devices
- This last step is usually performed by the specialized controller itself.
Step 6: Determine Operating System Location
- The BIOS firmware now attempts to load an operating system (OS). To determine which OS should be loaded, the firmware of the system will:
- Assess whether any user interrupts were provided during the boot process. Most computers allow the user to stop the boot process at this point and enter a firmware configuration mode. From this mode, the user can override defaults for just one boot session or permanently alter the firmware configuration.
- Read local firmware settings to assess:
- The storage device that contains the default boot records.
- Whether the BIOS firmware should prompt the user to select a boot device. This selection provides the user an opportunity to override default configurations and is necessary for systems to boot from an optical disk (CD or DVD) or a USB thumb drive or external disk. This prompt can also be obtained by the user interrupting the normal boot process to select the boot device (usually by selecting F12 during the boot process).
- Locate the boot drive (always the C: drive in Windows systems) and, if the boot drive is not present, the order of drives to attempt booting from.
- The order of disk drives that will be presented to the operating system. For example, a Windows computer may have three physical disk drives. Upon boot-up, the computer's BIOS firmware can report disk0 as "C:", disk1 as "D:" and disk2 as "E:".
- Because computers can have more than one disk and each disk can be reported differently to the operating system, the examiner should document the assigned boot order of disks by the firmware prior to removing any hard disks from the computer.
Step 7: Locate Master Boot Record (MBR)
- The Master Boot Record (MBR), created when the disk is partitioned, is the most important data structure on the disk. The MBR contains the partition table and executable code required to start the operating system. The first 512 bytes of the hard disk (logical block address 0) are reserved for boot records; the record associated with the disk itself is referred to as the Master Boot Record, and each partition likewise reserves its first sector for a boot record.
- Marking the end of the MBR is a 2-byte signature word (or end-of-sector marker). This marker is always set to 0x55AA. A disk signature (a 4-byte unique number at offset 0x01B8) identifies the disk to the operating system. The figure below shows the first sector (sector 0) of an MBR disk drive. The disk signature is circled in red, the sector is highlighted in yellow, and the end-of-sector marker is highlighted in pink.
![Sector 0 of an MBR disk drive](/CYBR-405/Visual%20Aids/Pasted%20image%2020260130213208.png)
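The structures described above are easy to verify programmatically. The sketch below parses a 512-byte sector image, reading the 4-byte disk signature at offset 0x1B8 (byte 440) and checking the end-of-sector marker (bytes 55 AA at offsets 510-511). The sector here is synthetic, built in memory rather than read from a real drive.

```python
import struct

def parse_mbr(sector0: bytes) -> dict:
    """Pull the disk signature and end-of-sector marker out of sector 0."""
    assert len(sector0) == 512
    disk_sig = struct.unpack_from("<I", sector0, 0x1B8)[0]  # 4-byte disk signature
    marker = sector0[0x1FE:0x200]                           # last two bytes
    return {
        "disk_signature": f"{disk_sig:#010x}",
        "valid_marker": marker == b"\x55\xaa",  # signature word stored as 55 AA
    }

# Synthetic sector for illustration (values are made up, not from a real drive)
sector = bytearray(512)
struct.pack_into("<I", sector, 0x1B8, 0xDEADBEEF)
sector[0x1FE:0x200] = b"\x55\xaa"
print(parse_mbr(bytes(sector)))
```

A real examination would read sector 0 from a forensic image and decode the 64-byte partition table at offset 446 the same way.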
Step 8: Load the Operating System
- The boot loader's final step is to load the operating system into memory and turn over control of the CPU to the operating system. The now-running OS begins by gathering an inventory of attached devices. This inventory is much more thorough than the one performed previously by the firmware. Inventoried devices are initialized. This includes all memory, additional processors, peripheral devices such as printers, touch screens, advanced keyboards, mice, and video cards.
Step 9: Load the First Process
- The final stage of the boot process is similar for most OS installations. At this point, the operating system is now loaded and the system is prepared to start the first process. A default first process is configured when the OS and boot loaders are compiled. In several cases, the boot loader will allow the user to override the default first process. The responsibilities of the first process vary widely, but the ultimate result is a fully booted operating system.
- The first process in UNIX and Linux is "init" which begins loading all necessary drivers and starts all default processes for the system.
- In Windows (legacy BIOS boot), the first process, "ntdetect.com," detects hardware and returns the information to the boot loader. The boot loader then calls "ntoskrnl.exe" with the information returned from ntdetect.com as parameters.
- Once all processes are started, the system is booted and ready for the user to use.
- Most operating systems support a hibernation mode (distinct from sleep, which keeps RAM powered) in which the memory of a system and the current state of the processor are stored as a file on disk. Upon awakening from hibernation, the file is copied back into RAM and the CPU state is restored. In these cases, the loading of the first process is circumvented in order to continue the previously booted session. Hibernation is often configured on laptop computers to prevent data loss should the battery run low. The hibernation file is called hiberfil.sys in Windows and is typically located in the root directory of the C-drive.
- Forensics examiners should always remember to check for copies of memory (often called a "memory image") left behind by a system that has gone into hibernation. Forensic software is available that can parse the hiberfil.sys for valuable artifacts.
Programs and Processes
Each computer has a set of processes that run from start-up or because a program has been started. These processes often vary depending on the operating system in use.
A process is an instance of a program that is currently being executed. It is a dynamic, active entity. Processes are created when programs execute, and they reside in main memory.
A program is a set of instruction codes designed to complete a certain task. It is a passive, static entity stored in the secondary memory (disk) of the computer system.
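The distinction can be demonstrated by starting the same program twice: one passive program on disk yields two independent processes, each with its own process ID and memory. A small sketch using Python's standard library:

```python
import subprocess
import sys

# One program (the Python interpreter plus a one-line script stored on disk
# as instructions) started twice produces two distinct processes.
program = [sys.executable, "-c", "import os; print(os.getpid())"]

p1 = subprocess.run(program, capture_output=True, text=True)
p2 = subprocess.run(program, capture_output=True, text=True)

print("PID of first instance: ", p1.stdout.strip())
print("PID of second instance:", p2.stdout.strip())  # differs from the first
```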
Memory and Memory Footprints
Understanding how memory is managed and utilized by different processes is crucial for digital forensics recovery. This section highlights four key aspects of memory management: Memory Availability, Memory Allocation, Memory Boundaries, and In-Process Memory Initialization. By exploring these areas, you will gain insight into how each affects what an examiner may recover.
Memory Availability
The memory manager attempts to guarantee RAM is available if needed. To accomplish this goal, the memory manager works with the process manager to move non-active processes and lower priority data from RAM onto specialized memory files on disk. The net effect is to swap RAM memory for disk memory. The act of exchanging memory in this way is called "swapping". Windows operating systems allocate special files for swapping, called Pagefile.sys. While UNIX and Linux can perform swapping this way, swapping in these systems is, by convention, performed by allocating a dedicated partition on the disk that the memory manager can use as extra memory blocks.
Actively used swap files (called "page" files in Windows) may be deleted when the system is shut down cleanly, depending on configuration. However, an unclean shutdown caused by a system crash or power loss will leave these swap files behind. In most cases, they will remain until a user deletes them. A forensics examiner can use special forensic software tools to analyze Pagefile.sys for useful artifacts.
Memory Allocation
Requests for new memory to be allocated are sent to the memory manager from various components within the operating system. These include the process manager, the network manager, device drivers, and others. The memory manager keeps track of which portions of RAM are not currently in use and allocates portions from this pool as needed.
Memory Boundaries
Requests by one process to access memory are sent to the process manager and memory manager. The memory manager then has the opportunity to validate the request and ensure that a process remains within the borders of its own memory space. Boundary checks are notorious sources for security faults (resulting in system crashes) because the memory manager does not validate every memory access (only specific access requests). Such system crashes are often referred to as the blue screen of death (BSOD) in Windows systems.
In-Process Memory Initialization
At times, a process will request memory from the memory manager and require that allocated memory be of a certain size. The memory is allocated as requested, but not defined with any preset value.
The result is allocated memory that contains whatever content that was last stored in that memory location. Although the operating system has the opportunity to set the memory to some particular value, it often does not.
This behavior is a trade-off between the efficiency and reliability of the operating system that has existed since the early implementations of operating systems. Recent advances in computer hardware have enabled operating systems to include "secure" features that pre-set allocated memory to random or constant uniform values.
Other Types of Memories
Other types of memory to address include file cache, crash dumps, and file slack.
File Cache
Copies of files loaded from disk are stored in memory. The memory that holds these files is called the file cache. The memory for the file cache is allocated dynamically according to the size and number of files used by running processes.
Because files are cached into memory while being actively used by processes, and because the memory manager will swap RAM to disk, the forensics examiner may find swap files and other memory copies stored on disk that contain alternative versions of files.
Crash Dump
During the life of any computer, the operating system may encounter unrecoverable errors that cause the system to crash. In an effort to assist remediation efforts by software vendors, a copy of the memory for a specific process (if only a process crashed), or all memory (if the OS crashed) is taken. These dumps are placed as files on disk.
Crash dumps and hibernation are nearly identical functions. The chief difference is that crash dumps will remain after the system is booted, but hibernation memory files will be removed once the computer's memory is restored. These dumps often litter a computer system and annoy users by consuming unwanted space. They can also contain valuable artifacts.
File Slack
File slack is defined as the unused blocks following the last block of the file and continuing to the end of the cluster.
Although it is possible a skilled user could identify these blocks and store information in them, the more probable scenario is that the blocks were never reset to a default value when they were freed from a deleted file, thus leaving information that may be recoverable for an investigation.
For example, fragments of prior email messages and "deleted" word processing documents can be found in file slack. On large hard drives, file slack can involve several hundred megabytes of data.
File slack also exists on other storage devices like flash drives. From a digital forensic standpoint, file slack is very important as both a source of computer evidence and a security risk.
![File slack diagram](/CYBR-405/Visual%20Aids/Pasted%20image%2020260130221401.png)
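The amount of slack for a given file follows directly from the cluster size: it is the distance from the end of the file to the end of its last cluster. A quick sketch of the arithmetic (the 4 KiB default cluster size is an assumption; actual sizes vary by file system and volume size):

```python
def slack_bytes(file_size: int, cluster_size: int = 4096) -> int:
    """Bytes left over between end-of-file and the end of its last cluster."""
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder

print(slack_bytes(10_000))      # 10,000-byte file in 4 KiB clusters -> 2288
print(slack_bytes(8_192))       # exact multiple of the cluster size -> 0
```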
File systems
All computer devices that digitally store data must have a file system designed to control how data is written to, stored on, accessed, and deleted from the storage device. Just as there are several types of operating systems, there are multiple types of file systems. However, the file system must be compatible with the operating system.
File Allocation Table (FAT)
A File Allocation Table (FAT) provides basic functionality but lacks security features. Clusters, or groups of contiguous sectors which result in contiguous blocks, are represented by a 12-, 16-, or 32-bit string (FAT12, FAT16, or FAT32). The number of bits limits the number of clusters that the file system can address to 4,096; 65,536; and 4,294,967,296, respectively.
Since the number of addressable clusters is finite and fairly limited, each file will be assigned larger clusters of blocks to accommodate storage needs, resulting in large amounts of file slack. The user is impacted because large portions of disk space can be wasted due to file slack. The forensic examiner can use the file slack to look for large contiguous blocks containing data from older files (when the cluster was previously assigned to another file) for evidence that may have been deleted by the user.
Central Tables
The File Allocation Table (FAT) uses central tables to store the location of each file on disk, the location of each file in the directory structure, attributes about the file, and which clusters are used. This creates a potential limitation when the file system driver needs to extend beyond the first track to allow additional data to be stored. Further, a difference in one table may cause a cluster to be reused, which will result in data loss or corruption. As files are created and deleted, performance can suffer as the set of tables becomes significantly fragmented across the first track.
File Deletion
Deletion of a file is performed by adding the clusters to the free list of available clusters, disassociating clusters from the deleted file, and flagging the directory entry as "deleted" by replacing the first character of the file name with 0xE5. This information benefits the forensics examiner in that they can tell when a system has marked files for deletion, but the data has not actually been removed. Because the actual data remains on the disk, it can be recovered.
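The 0xE5 flag makes deleted entries easy to spot in a raw directory listing. A minimal sketch over synthetic 32-byte FAT directory entries (the names and contents below are invented, not from a real volume):

```python
DELETED = 0xE5  # first byte of a 32-byte FAT directory entry for a deleted file

def deleted_entries(directory: bytes):
    """Yield (offset, 8.3 name field) for directory entries flagged as deleted."""
    for off in range(0, len(directory), 32):
        entry = directory[off:off + 32]
        if len(entry) == 32 and entry[0] == DELETED:
            yield off, entry[:11]  # first character of the name is overwritten

# Two synthetic entries: one live, one deleted
live    = b"REPORT  DOC" + bytes(21)
deleted = bytes([0xE5]) + b"ECRET  TXT" + bytes(21)
print(list(deleted_entries(live + deleted)))
```

Because only the first name character is overwritten and the data clusters are untouched, recovery tools can often restore both the file name and its content.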
Timestamps
Timestamps are stored on FAT file systems in local time with no allowance for Coordinated Universal Time (UTC). This could cause anomalies in timeline analysis for files that have been transferred from one system to another using portable media formatted with FAT.
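FAT packs each timestamp into 16-bit date and time fields (years since 1980, month, day; hours, minutes, and seconds at two-second resolution), with no time-zone information. A sketch of decoding them; the packed values below are constructed by hand for illustration:

```python
def decode_fat_datetime(date: int, time: int) -> str:
    """Unpack the packed 16-bit FAT date and time fields (stored in local time)."""
    year   = 1980 + (date >> 9)      # bits 15-9: years since 1980
    month  = (date >> 5) & 0x0F      # bits 8-5
    day    = date & 0x1F             # bits 4-0
    hour   = time >> 11              # bits 15-11
    minute = (time >> 5) & 0x3F      # bits 10-5
    second = (time & 0x1F) * 2       # bits 4-0, two-second resolution
    return f"{year:04d}-{month:02d}-{day:02d} {hour:02d}:{minute:02d}:{second:02d}"

# 2024-07-04 13:30:10 packed by hand for the demonstration
date = ((2024 - 1980) << 9) | (7 << 5) | 4
time = (13 << 11) | (30 << 5) | (10 // 2)
print(decode_fat_datetime(date, time))  # -> 2024-07-04 13:30:10
```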
USB Drives
The only version of FAT still in common use today is FAT32, due to its 4 GB maximum file size coupled with its cross-compatibility between multiple operating system types (Windows, macOS, and Linux). However, FAT32 is practical only for removable media like USB drives or SD cards.
exFAT
A newer version of FAT, Extended FAT (exFAT), was introduced in 2006 and does not have a realistic file size limitation like the preceding FAT versions. Timestamps on exFAT systems are stored with an offset from UTC, so the issue with FAT timestamps is minimized. exFAT is a practical alternative for portable devices that require storage of large files and interoperability between multiple operating system types.
Microsoft NTFS
New Technology File System (NTFS), used by Windows operating systems, records file and folder timestamps in the Master File Table (MFT). The timestamps stored by NTFS include the last modified, accessed, metadata (MFT) change, and born (created) times, often referred to by forensic examiners as the MACB times.
Accessed time can be updated by events besides a user opening a file, so it is not a reliable indicator of user interaction, even if enabled (last-access updating is disabled by default on many Windows versions). Times stored by NTFS are in Coordinated Universal Time (UTC) and at much finer resolution: NTFS stores times as the number of 100-nanosecond intervals since midnight, January 1, 1601, UTC.
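Converting that 100-nanosecond count (often called a FILETIME) to a readable date is a common examiner task. A sketch, using the well-known FILETIME value for the UNIX epoch as a sanity check:

```python
from datetime import datetime, timedelta, timezone

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_utc(filetime: int) -> datetime:
    """Convert 100-nanosecond intervals since 1601-01-01 UTC to a datetime."""
    return EPOCH_1601 + timedelta(microseconds=filetime // 10)

# 116444736000000000 is the FILETIME for 1970-01-01 00:00:00 UTC
print(filetime_to_utc(116444736000000000))  # -> 1970-01-01 00:00:00+00:00
```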
NTFS also offers the following enhanced security benefits:
- File and folder security through access control lists (ACL) that are applied to both user accounts as well as through directory and network shares.
- File system journaling that keeps track of changes to files and folders through a series of hidden system metafiles located in the root of every NTFS partition volume.
The names of the NTFS metafiles start with a dollar sign ($) and can only be viewed using certain software tools.
- $MFT (the Master File Table itself; contains an entry for every file and folder on the volume)
- $MFTMirr (backup of the $MFT)
- $Extend (stores optional extensions such as quotas, reparse point data, and object identifiers)
- $LogFile (transaction logging file)
- $Volume (stores the volume label, identifier, and version)
- $AttrDef (attribute definition)
- $Bitmap (contains the allocation status of all clusters of the volume)
- $Boot (contains the volume's boot sector and bootstrap code)
- $BadClus (records the location of damaged disk clusters)
- $Secure (contains security and access control information)
HFS+
Hierarchical File System Plus (HFS+), created by Apple Computer, records the following file and folder timestamps: Created, Modified, Accessed, Record Change, and Added Date. The record change time reflects when the object's metadata is changed in the file system catalog (comparable to the $MFT change time in NTFS). The Added Date timestamp reflects when the object is moved to its current location. For example, if a file on an HFS+ volume is moved to a different location within the same volume, the Added Date time is updated. Access time can be set by simply selecting the "Get Info" option from the context menu and is not updated if the file is opened but not saved. Timestamps are stored in HFS+ timestamp format (number of seconds since midnight, January 1, 1904, GMT).
Examiners need to be aware that there are other time formats used on macOS filesystems, depending on what artifact or application stored the time.
![macOS timestamp formats](/CYBR-405/Visual%20Aids/Pasted%20image%2020260130224032.png)
APFS
Apple File System (APFS) replaced HFS+ in macOS in 2016. APFS is optimized for solid-state drives (SSD) and flash storage media. APFS also features built-in multi-key encryption at the disk level. This will make a forensic examination of systems using APFS difficult without the proper access credentials to bypass encryption. APFS timestamps are stored as the number of nanoseconds since January 1, 1970, UTC. As with HFS+, other timestamp formats will be encountered as listed in the previous chart.
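The HFS+ and APFS epochs described above make timestamp conversion straightforward. A sketch (HFS+ values are treated as UTC here; as noted, some artifacts and applications use other conventions):

```python
from datetime import datetime, timedelta, timezone

HFS_EPOCH = datetime(1904, 1, 1, tzinfo=timezone.utc)

def hfs_plus_to_utc(seconds: int) -> datetime:
    """HFS+ stores timestamps as seconds since 1904-01-01."""
    return HFS_EPOCH + timedelta(seconds=seconds)

def apfs_to_utc(nanoseconds: int) -> datetime:
    """APFS stores timestamps as nanoseconds since 1970-01-01 UTC."""
    return datetime.fromtimestamp(nanoseconds / 1_000_000_000, tz=timezone.utc)

print(hfs_plus_to_utc(0))                    # -> 1904-01-01 00:00:00+00:00
print(apfs_to_utc(86_400 * 1_000_000_000))   # one day into the 1970 epoch
```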
EXT4
4th Extended File System (EXT4) is the current file system used by Linux systems. The EXT4 file system divides a partition into block groups that contain file system metadata. The EXT4 file system employs extents, extent trees, directory indexing with HTrees, and flex block groups. EXT4 timestamps are stored in UNIX time. An in-depth analysis of the Linux EXT4 file system is beyond the scope of this training.
Folder or Directory Structures
Boundaries of hierarchical structures can provide clues to the location of critical information. What are some of the issues digital forensic examiners must address in analyzing these hierarchical structures?
Device Capacity
- Files, folders, and directory structures require a physical device. Since these devices have a limited capacity, the capacity of the device itself becomes the boundary.
Data Prioritization
- Some users may have applied a greater or lesser priority to the system resources. These boundaries delineate those priorities. A storage quota is a good example.
Data Separation
- A user may create a boundary for junk mail or stored music files to keep them separate from work-related email documents.
Computer Dependencies
- Among other reasons, some older computers required boot-related content to be located within the first few megabytes on the physical storage device.
Performance Level
- Physical storage devices store and retrieve data with different performances based on the data location. Performance may also be affected by the partition structure used.
Availability
- When the layout of folders is not set up correctly, programs or services might fail, which can in turn cause dependent programs or services to fail, because the organization of folders directly affects whether an application or service can run.
Computer Storage: Logic Disk Partitions
Computer operating systems manage information through the creation of disk partitions (or disk slices, in macOS and Linux). A partition is a logical 'container' on a physical disk that the operating system usually assigns a drive letter. A disk partition is created using a disk partitioning tool.
Windows Systems
Disk Manager or DiskPart is used to partition disks in Windows.
The boot partition in Windows is always designated the C: drive. Additional partitions can be created that are assigned drive letters D:, E:, etc. On a physical disk using the legacy MBR partitioning scheme, up to four primary partitions may be created; GPT disks support many more.
Apple MacOS Systems
Disk Utility is used in macOS to partition disks. Apple macOS file systems (HFS+ and the newer Apple File System, or APFS) do not assign drive letters to partitions. Similar to Linux, macOS file systems use the following disk and volume naming convention:
- disk0 (first physical disk)
- disk0s1 (first logical partition, or slice, on disk 0)
- disk0s2 (second logical partition, or slice, on disk 0)
- disk1s1 (first logical partition, or slice, on the second physical disk)
- and so on
Linux File Systems
fdisk is used to partition disks in Linux. File systems used by Linux (fourth extended filesystem, or ext4) do not assign drive letters to partitions. Instead, these file systems assign a different naming scheme to logical disk partitions:
- sda1 (refers to the first slice, or volume, on the first disk drive)
- sda2 (refers to the second slice, or volume, on the first disk drive)
- sdb1 (refers to the first slice, or volume on the second disk drive) and so on.
Windows Default Configuration
The Windows default configuration is to present the file system to users as a series of disjointed "drives" or partitions.
The booted partition must be labeled "C:". Any remaining partitions will be loaded in the order that the firmware discovers them. Recall that ntdetect.com is the first process that is run as the system is booting. It gathers information from the firmware and then passes this information onto ntoskrnl.exe as parameters. The hard drives, partitions, and partition details are some of the information passed as parameters at this step.
The additional partitions will be loaded as assigned and automatically given a drive letter. These drive letters can be (re)assigned to each partition except for the boot partition, which must remain C:. These labels persist across reboots. The Windows operating system allows a partition to be set as "hidden." A hidden partition will not provide any visible record of itself to the user, but will still allow the user to access the data on it. The user is only required to know the partition label assigned.
C: Drive
In practice, Windows installations rarely have more than one or two partitions (volumes C: and D:). By default, and almost without exception, systems files for the Windows operating system are located on the C-drive.
The C-drive has several characteristic directories (or folders) that are uniform over all installations.
- C:\
- C:\Program Files or C:\Program Files (x86)
- C:\Users
- C:\Windows
- C:\Windows\System32\
- C:\Windows\System32\config\
- [X:]\$RECYCLE.BIN
C:\Program Files or C:\Program Files (x86)
This directory is the location where applications will load their program files.
By convention, user data is not stored here, but rather it is stored in the user's home directory (C:\Users\<user>\).
64-bit programs use the C:\Program Files\ directory and 32-bit programs will use the C:\Program Files (x86) directory.
C:\Users\
This directory contains user home directories and the default settings. Personal settings are provided as a convenience to the user. Also, for a small measure of security, some personal settings are hidden by default. Key directories include:
- C:\Users\<user>\AppData\: This hidden directory is the location for most user settings and some data pertaining to applications where the user did not specifically request the data be saved. For example, Microsoft Outlook will make a copy of an email downloaded from a server and store that copy in this directory.
- C:\Users\<user>\AppData\Local\: Additional or complementary application and operating system configuration data is stored here. This directory includes temporary storage such as web cache, web history, and additional application data.
- C:\Users\<user>\AppData\Local\History\: Contains database files in which Internet Explorer records information about web pages visited by the user. Note, other browser versions will save Internet history in different locations within the user's AppData\Local\ directory.
- C:\Users\<user>\AppData\Local\Microsoft\Windows\INetCookies\: Stores web cookies. A cookie is a file placed on the system by a web server and is used to identify users and possibly prepare customized web pages. Some cookies are configured to store site login information for the user.
- C:\Users\<user>\Desktop\: Contains the content of the user's "Desktop" (files, folders, shortcuts, etc.).
- C:\Users\<user>\Favorites\: Stores saved website shortcuts (bookmarks).
- C:\Users\<user>\NTUSER.dat: This file contains the registry entries specific to this user.
- C:\Users\<user>\AppData\Roaming\Microsoft\Windows\Start Menu\: This directory conventionally holds references to applications. The names and hierarchy mirror the user's "Start Menu."
The automatic data gathering performed on the user's behalf is a goldmine of information about user habits, preferences, and activities. Preventing automatic data gathering requires expertise. In practice, this information is intact and will be available for the forensics examiner.
C:\Windows
This directory contains all of the files necessary for the system. By convention, applications can install files into this folder.
Some temporary information (such as crash dump files) and swap memory files are located here.
Windows is notorious for not cleaning up after itself or after other programs very well. A forensics examiner may need to prove that a particular application was installed on a system. Though the system registry may be clean and the application installation removed, leftover files may remain in this folder. The forensics examiner can use these files as evidence of a particular application's presence on a computer.
C:\Windows\System32\
Software applications install dynamic link library (DLL) files into this directory. DLLs contain executable code for program operations.
C:\Windows\System32\config\
Windows stores important system registry files in this directory: SAM, SYSTEM, SECURITY, and SOFTWARE. These files contain important information about the computer settings, installed software, and user accounts.
[X:]\$RECYCLE.BIN
At the root of every drive volume is a $RECYCLE.BIN folder containing deleted files. Files moved to the Recycle Bin are eventually released to unallocated space, either when the user empties the Recycle Bin or by the OS. Within $RECYCLE.BIN is a sub-folder named with the security identifier (SID) of the user who deleted files from that volume.
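Each deleted file in $RECYCLE.BIN is accompanied by a small metadata record (an "$I" file). The sketch below parses the commonly documented Windows 10 layout (version 2: 8-byte version, 8-byte original size, 8-byte deletion FILETIME, 4-byte path length, then the UTF-16LE original path); the record here is synthetic, with an invented path and time.

```python
import struct
from datetime import datetime, timedelta, timezone

def parse_dollar_i(data: bytes) -> dict:
    """Parse a Windows 10-style $I recycle-bin metadata record (version 2)."""
    version, size, filetime, path_len = struct.unpack_from("<QQQI", data, 0)
    path = data[28:28 + path_len * 2].decode("utf-16-le").rstrip("\x00")
    deleted = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(
        microseconds=filetime // 10)
    return {"version": version, "size": size, "deleted_utc": deleted, "path": path}

# Synthetic $I record for illustration (path and deletion time are made up)
path = "C:\\Users\\alice\\secret.docx\x00"
blob = struct.pack("<QQQI", 2, 4096, 116444736000000000, len(path)) + \
       path.encode("utf-16-le")
print(parse_dollar_i(blob))
```

The original file content lives in a matching "$R" file in the same SID sub-folder; the $I record is what preserves the original path and deletion time for the examiner.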
UNIX/Linux Default Configuration
The UNIX/Linux approach is to provide users with a single uniform presentation of the file system. The actual implementation details are largely abstracted or hidden from the user. The file system is extended by "mounting" a partition onto the initial (root) partition's directory structure. A change from one mounted file system driver to another is transparent to the user, as each driver appears as additional files within an existing folder.
Because of the differences in administrator style, convention, administrator skill level, and installation purpose, the forensics examiner should record the available partitions and how each partition is assigned. A list of available partitions is located in "/proc/partitions" on Linux servers. The same information can be obtained through tools such as "format" on Solaris. The mapping of these partitions is located in "/etc/fstab" (Linux) and "/etc/vfstab" (Sun UNIX).
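Since /proc/partitions is plain text, recording the available partitions can be as simple as parsing its columns. A sketch in Python (the sample string mimics the Linux format; a real run would read the file itself):

```python
def parse_proc_partitions(text):
    """Parse /proc/partitions-style text into
    (device name, size in 1 KiB blocks) tuples."""
    rows = []
    for line in text.splitlines():
        fields = line.split()
        # Data rows have exactly four fields: major, minor, #blocks, name.
        if len(fields) == 4 and fields[0].isdigit():
            _major, _minor, blocks, name = fields
            rows.append((name, int(blocks)))
    return rows

sample = """major minor  #blocks  name

   8        0  488386584 sda
   8        1     524288 sda1
   8        2  487861248 sda2
"""
print(parse_proc_partitions(sample))
```

The same records, paired with the mappings from /etc/fstab, document which partition carries which mount point.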
root directory, or "/"
The root directory, or "/" (pronounced "slash"), is the first file system the kernel loads once all devices are initialized. In some cases, /boot will be located on a separate partition: because /boot stores the kernel and critical operating system components, older hardware needed it to be addressable early in the boot process. Although more recent hardware does not share this limitation, many system administrators maintain the convention of separating /boot from /. In cases where / and /boot are located on the same partition, / will be the first file system driver loaded.
/boot
This directory structure contains the core operating system kernel, its drivers, and the configuration files for boot loaders such as GRUB and LILO.
/etc (pronounced "et-see")
This directory structure is generally very small and always located on the same partition as /. It contains all boot scripts and nearly all configuration files for the system. Configuration files are almost uniformly plain text.
/usr
This directory structure contains all software packages provided by the vendor or distributor and installed for use.
/opt and /usr/local/
These two directory structures are reserved for user-compiled or optional software packages. There has been tremendous discussion as to which structure should be the convention for storing these files; however, no widely accepted standard has been set.
/root
This directory structure is the system administrator's home directory or default directory. It is always located on the same partition as / in case the operating system is booted without mounting additional file systems.
/tmp
This directory is the conventional storage location for all temporary files and is typically mounted as its own partition, though some inexperienced administrators include it as part of /. In Sun UNIX, a special driver finds all partitions labeled as temporary and joins them together so that /tmp is the sum of multiple partitions. The free space in these partitions is used by the memory manager for swapping files from RAM to disk. Forensic examiners should be aware of the boot process: many operating systems clear all content from /tmp at boot.
/bin and /sbin
These directories contain the critical binary programs necessary for the system to load. For example, the program "mount" will be located in either /bin or /sbin. Without the mount program, no other adjuncts to the file system would be possible. These programs often contain less functionality than other programs of the same name that are made available after subsequent mounts.
/home
This directory contains the home directories for users. UNIX and Linux users without administrative privileges can typically only store data in their own home directories or in directories designated as temporary storage such as /tmp. The user's home directory should be the initial focus of an investigation.
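Because /etc/passwd maps each account to its home directory, a quick sketch can enumerate candidate home directories to examine (the non-login-shell filter is a rough heuristic, not a rule):

```python
def home_directories(passwd_text):
    """Map each user in /etc/passwd-style text to their home directory,
    skipping accounts with non-login shells (a rough heuristic for
    filtering out system accounts)."""
    homes = {}
    for line in passwd_text.splitlines():
        if not line or line.startswith("#"):
            continue
        fields = line.split(":")
        if len(fields) == 7:
            user, _pw, _uid, _gid, _gecos, home, shell = fields
            if not shell.endswith(("nologin", "false")):
                homes[user] = home
    return homes

sample = """root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
alice:x:1000:1000:Alice:/home/alice:/bin/bash
"""
print(home_directories(sample))
```

On a real image, the examiner would feed this the passwd file from the mounted evidence volume, not the examination workstation's own.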
/dev
This directory structure contains the file interfaces to device drivers. The operating system interprets file access as driver access. Therefore, the user is provided a simple means to more directly access and control hardware, drivers, and the operating system. Linux predominately populates /dev at install time with all known file-to-driver interfaces. New file device interfaces can be added by a system administrator or by the driver when it is first loaded.
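The file-as-driver-interface idea is visible directly in a file's mode bits; a small sketch using Python's stat module:

```python
import os
import stat

def device_kind(path):
    """Classify a /dev entry as 'block', 'char', or 'other'
    by inspecting its mode bits."""
    mode = os.stat(path).st_mode
    if stat.S_ISBLK(mode):
        return "block"
    if stat.S_ISCHR(mode):
        return "char"
    return "other"

# On a typical Linux system, /dev/null is a character device
# and /dev/sda (if present) is a block device.
print(device_kind("/dev/null"))
```

Block devices (disks) are usually the ones of forensic interest, since they expose the raw storage to be imaged.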
/mnt or /media
These two directory structures are the most common locations for adding removable media (e.g., floppy disks, USBs, CD-ROMs, and DVD-ROMs) to the file systems. Other conventions include /cdrom and /floppy.
/lib
This directory structure contains all files required to support the operating system and those provided by the operating system to applications as helpers to access operating system features.
/proc
This directory structure is a completely virtual representation or view of the running operating system. It (or something similar) is not found on every version of UNIX, but is common to all Linux installs. It contains key information about hardware, current settings of operating system related software (such as firewalls), performance statistics, and all related information about a file.
/var
This directory structure is the conventional storage location for backups, log files, state files, and generally any data file that pertains to the system or any non-user specific application.
Data Storage
Digital Storage Media
Digital storage media devices store information in digital format as binary 1s and 0s. The optical disc uses microscopic pits and lands that are read with a laser and an optical pickup.
The forensic examiner will need to be familiar with the way data is stored on digital media in order to facilitate the recovery and analysis of the data.
Digital storage media include all of the following.
- Hard disk drives (HDD)
- Solid-state drives (SSD)
- USB thumb drives, flash drives (SD, microSD card)
- Magnetic tape
- Optical discs (CD, DVD, Blu-Ray)
External Storage Media
External storage is data storage that is outside of the computer's main storage or memory. The two most common types use flash memory circuits.
USB Drives
USB storage media range from the size of a fingernail to a multi-drive external storage bay. All sizes require a USB interface with the computer. Standard USB drives store data using flash memory circuits.
External SATA (eSATA) Drives
Many computers feature external SATA ports that use special SATA cables to connect an external HDD or SSD. The eSATA drive needs a power source via a separate power cable. This allows high-speed data transfers between the computer and an external drive that can be disconnected, transported, or stored away from the computer.
Understanding How Storage Devices Store Data: Digital Storage Units
The basic unit of a storage device is a bit: a binary switch that is either on or off (e.g., 1 or 0, representing yes or no, or true or false). These bits are the basic units of any storage media. Eight bits strung together form a byte, and two bytes form a 16-bit word (word size varies by architecture). Storage media capacity is measured in how many bytes can be stored. Larger capacities are represented by "byte" prefixed with a descriptor that indicates how many such bytes are in that unit.
Transfer vs Storage
Data transfer rates are referred to in bits per second: e.g., 400 Mbps refers to 400 megabits per second. Data storage is always referred to in bytes: e.g., 400 MB refers to 400 megabytes of data.
It is important not to confuse megabits and megabytes, as the latter is eight times larger since there are 8 bits in 1 byte. Review the table for more information.
![Digital storage units table](/CYBR-405/Visual%20Aids/Pasted%20image%2020260131125521.png)
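The bits-versus-bytes distinction can be sketched numerically; the conversion follows directly from 8 bits per byte, and the prefixes are the decimal (SI) capacity units:

```python
# 1 byte = 8 bits, so a rate in megabits/s divides by 8
# to give megabytes/s.
def mbps_to_megabytes_per_second(mbps):
    return mbps / 8

# Decimal (SI) capacity prefixes used for storage sizes.
UNITS = {"KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}

def to_bytes(value, unit):
    """Convert a capacity like (400, "MB") to a raw byte count."""
    return value * UNITS[unit]

print(mbps_to_megabytes_per_second(400))  # 400 Mbps -> 50.0 MB/s
print(to_bytes(400, "MB"))                # 400 MB -> 400000000 bytes
```

Note that some tools instead report binary (1024-based) units such as MiB; confusing the two is a common source of apparent capacity discrepancies.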
Digital Architecture
Disk drives consist of platters that use both sides to store data. Each platter surface has one drive head that reads and writes data. Each platter surface contains rings referred to as tracks. Each track contains sectors. By default, most file systems format the drives using 512 bytes per sector. Multiple platters are stacked on a spindle. Corresponding tracks on each platter surface make up a cylinder.
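The geometry above implies a simple capacity formula: cylinders × heads × sectors per track × bytes per sector. A sketch using a hypothetical legacy drive geometry:

```python
def chs_capacity_bytes(cylinders, heads, sectors_per_track, bytes_per_sector=512):
    """Classic cylinder/head/sector (CHS) geometry capacity:
    cylinders x heads x sectors per track x bytes per sector."""
    return cylinders * heads * sectors_per_track * bytes_per_sector

# A hypothetical legacy geometry: 1024 cylinders, 16 heads, 63 sectors/track.
print(chs_capacity_bytes(1024, 16, 63))  # 528482304 bytes (~504 MiB)
```

Modern drives report logical block addresses rather than true physical geometry, but the arithmetic still illustrates how sector size and counts determine capacity.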
For examples look at: The Memory Hierarchy#Storage Technologies and Trends
Data Transfer Technologies
Several dominant standards exist for transferring data between the main memory of the computer and the disk media. A digital forensic examiner may see FireWire, USB, SATA, or PCI Express during an investigation.
FireWire
FireWire (IEEE 1394) is the specification for transferring and addressing data on connected devices. A FireWire bus supports up to 63 external devices. Unlike other specifications, FireWire devices can communicate directly with each other without having to go through a central or master controlling device. FireWire supports transfer speeds up to 800 Mbps. FireWire is primarily used between A/V equipment with attached storage, such as camcorders, digital audio controllers, and digital video recorders. FireWire also provides power to attached devices over the same cable used for data transfer.
USB
The USB interface is designed to replace older parallel and serial interfaces.
In most cases, external storage media can be added and removed from a computer system ad hoc. Most USB-connected storage devices are presented to the system through virtual SCSI adapters.
USB 3.0 supports transfer speeds up to 5 Gbps. USB 3.1, commonly delivered over the USB-C connector, supports up to 10 Gbps data transfer speeds.
Serial-ATA (SATA)
The Serial ATA interface attaches each disk to the main computer with its own cable. Compared with the older parallel ATA interface, it offers higher bit throughput, hot-swapping, and more efficient airflow due to smaller cables. This is the predominant type of storage interface in use today.
PCI Express (Peripheral Component Interconnect Express)
PCI Express (PCIe) cards are serial computer expansion cards that offer high-speed data transfer rates of up to 128 GBps. PCIe expansion cards can connect to solid-state storage devices in order to obtain tremendous data transfer speeds.
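These rated link speeds translate into best-case acquisition times for a given drive; a sketch (it ignores protocol overhead and drive read speed, so real imaging runs take longer):

```python
def imaging_time_seconds(drive_bytes, link_gbps):
    """Best-case seconds to read a drive over a link rated in
    gigabits per second. Ignores protocol overhead, so actual
    imaging times will be longer."""
    return (drive_bytes * 8) / (link_gbps * 10**9)

drive = 500 * 10**9  # a hypothetical 500 GB evidence drive
for name, gbps in [("FireWire 800", 0.8), ("USB 3.0", 5.0), ("10 Gbps link", 10.0)]:
    print(f"{name}: {imaging_time_seconds(drive, gbps):.0f} s")
```

The spread (minutes versus over an hour for the same drive) is why the interface available on a write blocker materially affects acquisition planning.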
Drive Storage Types
SATA Hard Disk Drive (HDD)
The standard data storage device in use today is a hard disk drive (HDD) that uses spinning magnetic platters where movable heads read and write data to the surface of the platters. HDDs can hold many terabytes of data at a very economical cost.
Solid-State Drive (SSD)
Solid-state drives use integrated circuit assemblies, such as NAND-based flash memory, to store data. Internal SSDs feature a SATA connection for attaching to the computer motherboard. SSDs have no moving components. The downside to SSD technology is the tendency for stored data to slowly degrade over time if the drive is left without power. Also, NAND flash memory technology limits the life of the SSD, as only a finite number of writes is available to the memory. As a result, forensic examiners should not use SSD media for long-term storage of digital evidence.
Mini-SATA (mSATA)
mSATA is a low-profile interface that connects small solid-state storage devices (about the size of a business card) to a computer's motherboard.
M.2 interface
The M.2 interface, formerly known as the Next Generation Form Factor (NGFF), is a low-profile interface designed to replace the mSATA standard. M.2 can be designed to support multiple computer bus interfaces: PCIe, SATA, and USB 3.0.
Cloud Computing
Introduction to Cloud Computing
Imagine racks of servers operating in a data center. Together, these servers become a massive pool of resources. Divide this "pool" of physical computers into multiple virtual servers and you create a "cloud."
Cloud computing refers to an intangible resource. It is the process where you store data via a third-party provider in what has become known as a cloud. Cloud computing is a solution to the expense and related overhead of owning and managing computer resources.
Cloud computing is essentially a virtual infrastructure with software applications hosted in a cloud-based environment. This infrastructure makes it possible to have broad-based deployments of applications, application workload sharing environments, and self-serve applications. Cloud computing has developed into a convenient, on-demand, elastic, location-independent, and economical public computing resource.
Cloud Computing Defined
By purchasing access to various types of services and software, you can reach a cloud provider's resources over the Internet from any laptop or mobile device.
The concept of cloud computing is not new. Due to the improvement of Internet speeds and the development of exceptional mobile Internet devices, cloud computing has seen dramatic increases in usage.
Cloud Computing:
A model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. (NIST Special Publication 800-145)
The Virtual Office
You can put your public-facing information in a public cloud while keeping customer-sensitive data stored in a private cloud. This split concept is called a hybrid cloud.
Examples of the public-facing data in a hybrid cloud include Facebook, LinkedIn, and Twitter profiles; email; or websites hosted at GoDaddy or other Internet service providers (ISPs).
Cloud computing has revolutionized the concept of the "office." By providing easy mobility, it lets individuals work anywhere they can access the Internet.
This has given rise to "virtual corporations" (corporations that operate from an Internet connection rather than a brick-and-mortar physical location). Virtual corporations have a considerable advantage over traditional businesses due to much lower operating costs, no fixed lease expenses, and minimal related expenses usually associated with a traditional office.
Digital Forensic Challenges with Cloud Computing
When someone adopts a cloud computing model, this non-traditional infrastructure brings with it certain technical and trust issues. These challenges are of significant importance to forensic investigators because, beyond collecting the evidence from various cloud providers, the judge and jury must place a great deal of trust in the authenticity and integrity of the data extracted from many layers of the cloud.
Forensic examiners, court evaluators, and law enforcement have particular challenges when it comes to finding and acquiring evidence from remote cloud computing platforms. Not only are the forensics procedures different, but many examiners lack the appropriate tools and experience for these specialized tasks.
While the goals of all forensic examiners are the same, digital forensic examiners face some non-conventional problems including forensically sound acquisition of remote data, large distributed data volumes, chain of custody, and data ownership. Proving ownership of digital evidence is subject to the same admissibility tests as paper records.
Trustworthiness of Evidence from the Cloud
Digital cloud evidence collections should follow clearly-defined segregation of duties between client and provider. The procedure framework for the search and seizure process occurs in three steps:
- Acquisition
- Authentication
- Analysis
Each case is unique; therefore, the method used to collect, analyze, and verify data must be carefully considered in order to ensure it will be admissible as evidence. Acquisition of data requires a warrant or a subpoena; the cloud provider should then supply the data specified in that order.
Ultimately, it is the judge or jury that must decide if they believe and trust the evidence presented to them. There is reason to be concerned about the trustworthiness of evidence from the cloud. The level of trust influences how a digital exam should be conducted and how to approach the investigation.
Forensic Issues in Cloud Computing
Challenges facing investigators of cloud crime scenes include:
- Hard drive imaging and analysis of deleted files or fragments from cloud data isn't possible.
- Computer settings and registry information (passwords, login dates) aren't usually accessible.
- The cloud environment limits the scope of the examination to what is provided by the login ID and password.
- Cloud storage is volatile; there could be corruption of data over time.
- Cloud environments are moveable; they can be moved or reconfigured frequently.
- Many cloud environments are operated out of other countries that have different privacy laws and restrictions.
- Often, dates don't reflect the date of user activity, but rather the date files were moved to the cloud.
- Information stored in the cloud may be spread among multiple hard drives and servers in different locations.
- Clients are often notified about law enforcement subpoenas by the cloud service provider.
- Cloud data possessed by a third party is more likely to be subject to legal challenges.
- Investigators may not have access to log files or a computer's registry.
- No cloud environment is the same; investigators familiar with one may not be familiar with another.
There are inherent forensic issues with cloud computing. Every investigator will, at some point, face some of these hurdles. If any of these issues is not addressed legally and properly, evidence could be rendered inadmissible.
Applying current digital forensics law to cloud computing can be complex and complicated. This is especially so with regard to the search and seizure of data from cloud providers. Further solutions are needed to preserve cloud-based evidence and prevent the loss of forensic evidence released from the cloud.
Mobile Devices
Phone and Mobile Device Systems
Smartphones and tablet devices present the most challenges in digital forensic recovery efforts. Mobile device makers constantly change their operating systems, and these changes often break current forensic solutions. The implementation of file system encryption on mobile devices presents the greatest challenge to the digital forensic examiner.
The most common mobile device operating system in the world is by far the open-source Android OS, followed by Apple iOS.
File Structure of Mobile Devices
Today's popular mobile operating systems include Android and iOS; both share roots in UNIX. Forensic investigators need to choose a forensic tool that is compatible with the operating system on the mobile device they want to investigate. A basic knowledge of the different types of operating systems is necessary in order to choose the specialized software tool for the forensic investigation.
iPhone
The iPhone operating system comprises the (1) Core OS, (2) Core Services, (3) Media Services, and (4) Cocoa Touch layers. The two bottom layers contain the fundamental interfaces for the iPhone operating system, including those used for accessing files, network sockets, and low-level data types, and access to POSIX and UNIX sockets.
The media services layer contains the fundamental technologies to support 2D and 3D drawing, audio, and video, such as OpenGL, QuickTime, an audio and image viewer, and core audio and video. Cocoa Touch, the top layer, provides the fundamental infrastructure used by the iPhone and contains the Foundation framework and UIKit in the application frameworks division of that layer.
iPhone Operating System (iOS)
The UIKit framework provides the visual infrastructure for your application, including window classes, controls, views, and the controllers that manage those objects. Frameworks for the user's contact and photo information and for additional iPhone hardware features are also available at this level.
![iOS architecture layers](/CYBR-405/Visual%20Aids/Pasted%20image%2020260131133809.png)
Android
The Android operating system comprises a (1) Linux kernel, (2) libraries, (3) an application framework, and (4) applications. The sandbox is simple, auditable, and based on decades-old UNIX-style user separation of processes and file permissions. The Linux kernel acts as an abstraction layer between the hardware and the rest of the software stack. It provides access to core services such as security, process management, memory management, the driver model, and the network stack. It also provides support such as threading and low-level memory management.
The Android run-time libraries are written in Java and provide the functionality available to applications. When an Android application is launched, it runs as a separate process in its own instance of the virtual machine. The Android operating system can run multiple instances of the virtual machine efficiently.
Other parts of the Android OS use C/C++ libraries. These include System C library, media libraries, surface manager, LibWebCore, SGL, 3D libraries, FreeType, and SQLite.
The application framework layer builds upon the fact that Android is an open, open-source platform. The platform is designed to simplify the reuse of components, as developers are allowed full access to the same framework APIs used by core applications. Open development is supported for views that can be used to build applications: grids, text boxes, lists, buttons, and embedded web browsers.
Also provided is a content provider that enables an application to access data from another application or to share its own data. A Resource Manager gives access to localized strings, graphics, and layout files, and an Activity Manager provides a common navigation back stack. The top layer (Applications) provides the email client, SMS program, maps, browser, calendar, contacts, and other Java applications.
Mobile Operating System Vulnerabilities
Knowing the vulnerabilities of specific mobile operating systems can benefit a forensic investigator. Just like desktop and laptop computers, operating system vulnerabilities can be exploited by hackers and criminals to cause damage to systems or steal information.
If forensics investigators know the vulnerabilities of a specific operating system, they may choose to either exploit them to their advantage or look for digital evidence of a hacker or attacker attempting to exploit them.
Mobile Device Security Limitations
It is important for an investigator to know the limitations of a mobile device's security and add this information to the planning of the data acquisition. If the data stored is confidential, it is important to ensure physical security of the device. One might also consider using additional encryption software, strong passwords, and password vaults. While these add a layer of security to the mobile device, it means the investigator must use another layer of specialized investigative tools to extract the data needed for the case.
Being able to access ALL the data on a device allowed under the terms of the warrant or subpoena is critical to evidence admissibility. The available tools may limit the examiner to extracting only portions of the data rather than the entire amount the warrant allows.
Online Devices and Platforms
Technology is evolving rapidly, with newer and superior online systems and services being introduced to the market.
"For most teens, gaming is a social activity and a major component of their overall social experience." (Pew Research Center, Internet and Technology)
Because children are the main consumers of these systems, they give pedophiles and other child predators easy access to young children.
Popular online platforms include:
- Microsoft Xbox
- Reddit and other online social media sites
- Twitch
- Nintendo Switch
- Sony PlayStation
Forensic Challenges
The forensic analysis of these online systems can be challenging. The following forensic issues associated with various online platforms have been identified.
Proprietary encryption and the use of non-standard file systems on modern platforms make extraction and analysis of forensic artifacts difficult. Most relevant data can be obtained through the service provider (e.g., Microsoft, Sony, or Nintendo) using a court order or search warrant, as much of the user data is stored on the service provider's cloud services. For game consoles and other online devices, forensic examination is presently limited to determining what games and applications were downloaded and when the games were played. Data carving of encrypted disk drives from these game consoles is ineffective.
Therefore, the hard drive may not be the most important data source as it has been in previous generations of gaming systems. It is possible that user-generated content will not even appear on the hard drive at all. A court order or search warrant to the service provider will likely be the most productive solution to acquire user activity and communications through a particular online service.
Undercover Operations
These systems also give law enforcement the opportunity to interact with their subjects. Investigators may even participate in online undercover operations: law enforcement personnel can strike up chats with criminal suspects and record the conversations as evidence for offenses such as child pornography. In such undercover operations, investigators will need to capture the interactions with the suspects in both video and audio.
Fewer tools are available for analyzing these online devices and platforms. The only tools that function properly are home-made tools from the hacking communities. These tools are often not fully vetted; they frequently contain bugs and other flaws that cause evidence corruption or present unnecessary hurdles during the investigation.