Thursday, November 7, 2013

Emerging Cybersecurity Technologies

James E. Gilbert
UMUC
September 27, 2013

Abstract
The ever-increasing sophistication of cyberattacks represents a mounting and serious risk to private organizations, public agencies, and individual users alike.  To defend against these advanced threats, emerging cybersecurity technologies are necessary.  Although many safeguards are developed by the private sector, the federal government recognizes the global risk cyberattacks represent.  The following paper outlines three of these innovative approaches, including prioritized research and development, remote agent technologies, and real-time forensic analyses, as well as the government’s role in their formation.  This partnership between public and private sectors represents a profound understanding of the liability that exists should support for emerging cybersecurity technologies cease.

Introduction
The development of cyberspace and the Internet represents one of the most revolutionary advancements for mankind.  There are few sectors and fewer countries unaffected by this growing collection of technologies.  Although this phenomenon has influenced a host of areas, it also represents one of the most serious threats to our modern society.  As the developed world moves an increasing amount of critical data online, a myriad of nefarious individuals have adapted traditional criminal activities to the cyber realm.  This rise in the sophistication and frequency of cyberattacks signals the need for a similarly advanced set of defensive mechanisms.  Emerging technologies such as prioritized research and development (R&D), remote agent technologies, and real-time forensic analysis represent some of the most promising approaches to defending cyberspace.  These advancements, however, cannot be developed in a vacuum, as cyberattacks affect governments, corporations, and individuals alike.  As a result, a consortium of public and private organizations is necessary to develop the next generation of cyberdefense technologies, blending corporate expertise with the support and encouragement of the federal government.  For widespread acceptance, this arrangement should balance defense aspects with the various liability issues that comprise the diverse field of cybersecurity.

Emerging Cybersecurity Technologies
As our society’s reliance on cyberspace grows, providing secure and reliable access to this resource becomes increasingly important.  Advanced cyberattacks represent a serious risk to critical infrastructure and individual privacy alike.  Technology and policy solutions must be continuously developed to keep pace with emerging threats (Maughan, 2010).  Three of the most promising approaches include prioritized research, remote agent technologies, and real-time forensic analysis.

Prioritized Research and Development
Identifying future technologies remains one of the most complex issues in the field of cybersecurity.  This matter is worsened by the fact that the United States lacks a unified cybersecurity policy, with multiple agencies sharing responsibility for the field.  This translates into a competitive and often counterproductive effort to ensure the advancement of next-generation cybersecurity technologies.  In 2006 alone, an assessment of federal R&D identified over 50 cybersecurity projects in various states of funding, with many of these initiatives having been postponed for the last decade.  The underinvestment in these technologies was addressed in the 2009 White House Cyberspace Policy Review, in which the President’s advisors identified that prioritized R&D must play a key role in America’s cybersecurity (Maughan, 2010).

Although the White House’s Cyberspace Policy Review represents one of the most current calls for reform, this dilemma was recognized as early as 1991.  To address America’s need for emerging technologies, the Networking and Information Technology Research and Development (NITRD) program was formed.  Consisting of the Departments of Commerce, Defense, and Energy, along with a variety of other federal agencies, the NITRD program was established with the intent of aligning federal funding with priority areas in the field of cybersecurity (UMUC, 2013).  One of this working group’s most recent actions was the publication of the Comprehensive National Cybersecurity Initiative (CNCI).  Established by Presidential Directive, the CNCI was designed to help establish a comprehensive set of cybersecurity defenses.  Inherent in this initiative was the understanding that protection against cyberattacks required enhancing America’s R&D efforts through investment in “leap ahead” technologies (Maughan, 2010).  As one of the most well-known supporters of cutting-edge technologies, the Defense Advanced Research Projects Agency (DARPA) has been at the forefront of emerging cybersecurity solutions.  In just one example, the agency’s Cyber Fast Track program provided streamlined grants to over 100 individuals and groups to develop solutions such as cutting-edge forensics for Mac OS X (Sternstein, 2013).  Given that the vast majority of America’s critical infrastructure is privately owned and that innovations generally evolve from private sector initiatives, the federal government has a vested interest in guiding America’s cybersecurity future.

Remote Agent Technologies
As the manual auditing and enforcement of computer security compliance struggle to keep pace with cyberattacks, experts believe increased active monitoring methodologies are needed.  This approach involves using various technologies to conduct both remote tests of network security and forensic examinations of individual systems.  Utilizing consolidated safeguards in this manner has the potential to increase the efficiency and effectiveness of cybersecurity by centralizing auditing and patching functions (UMUC, 2013).

Experts no longer believe that comprehensive cybersecurity can be accomplished with a single product or approach.  Instead, it is becoming more commonplace for administrators to employ a variety of safeguards to secure networks.  For organizations with distributed or complex digital infrastructures, however, this approach involves significant expenditures of both technological and human resources.  One possible solution to this dilemma is the use of remote and automated cybersecurity technologies.  Fewer administrators can manage larger networks by utilizing consolidated security functions.  Common tasks that can be accomplished via remote agents include vulnerability scanning, intrusion detection, and cyclical service checking (Stefan & Jurian, 2012).  In addition to significantly reducing the resources needed to protect a geographically dispersed digital infrastructure, remote agents also provide the ability to handle cyberthreats on a more proactive basis.  Applications such as SysMon, OpenNMS, and Nagios represent flexible platforms that give administrators the tools needed to respond to rapidly evolving attacks (Stefan & Jurian, 2012).
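As a simplified illustration of the cyclical service checking these platforms automate, the following Python sketch (hostnames and ports are hypothetical placeholders, not drawn from the cited tools) polls a set of services on a fixed interval and reports any that stop responding:

```python
import socket
import time

# Hypothetical services a remote agent might watch; a real deployment
# would load these from a monitoring configuration.
SERVICES = [("mail.example.org", 25), ("www.example.org", 443)]

def check_service(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    for host, port in SERVICES:
        status = "UP" if check_service(host, port) else "DOWN"
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {host}:{port} {status}")
    time.sleep(60)  # cyclical check interval
```

Production platforms such as Nagios add scheduling, escalation, and alerting on top of this basic polling loop.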

The second major use of remote agents lies in the field of forensics.  Traditional digital forensics is often a static process that involves the onsite imaging of a system’s digital media, accomplished by shutting down the target system and physically removing the hard drive.  As with traditional network security, this methodology is heavily dependent on specially trained human resources.  In addition, as hard drive capacities continue to increase each year, examiners have been forced to triage digital examinations.  Given these time constraints, forensic investigators have begun moving away from the traditional approach to collecting digital evidence, instead relying on automated and remote technologies to streamline and consolidate forensic examinations.  Remote administration tools such as the GRR Rapid Response architecture offer this flexibility.  GRR is an open source platform that provides administrators with the ability to conduct remote forensics on a truly scalable level.  It is available for a number of systems and can be rapidly developed and deployed to enable remote, real-time forensic analysis of a network (Cohen, Bilby, & Caronni, 2011).
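GRR’s actual client API is not reproduced here; the sketch below merely illustrates, under simplified assumptions, the pull model such architectures rely on, in which lightweight agents periodically poll a central server for collection tasks and return results:

```python
import hashlib
import json
import time
import urllib.request

SERVER = "https://forensics.example.org/tasks"  # hypothetical collection server

def fetch_task(agent_id):
    """Poll the central server for a pending collection task (if any)."""
    with urllib.request.urlopen(f"{SERVER}?agent={agent_id}") as resp:
        return json.load(resp)  # e.g. {"action": "hash_file", "path": "/etc/passwd"}

def run_task(task):
    """Execute a supported task locally and return the result."""
    if task.get("action") == "hash_file":
        with open(task["path"], "rb") as f:
            return {"sha256": hashlib.sha256(f.read()).hexdigest()}
    return {"error": "unsupported action"}

while True:
    task = fetch_task("workstation-042")  # hypothetical agent identifier
    if task:
        print(run_task(task))
    time.sleep(300)  # agents poll outbound, which eases firewall traversal
```

Because each agent initiates the connection, one analyst can task thousands of endpoints without opening inbound ports on any of them.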

Real-Time Forensic Analyses
Similar to remote agents, another emerging technology in the defense against cyberattacks revolves around real-time forensic analysis.  Reliant upon triaging and evidence preservation, this technique has proven to be an invaluable tool in both the cybersecurity realm and in criminal proceedings (UMUC, 2013).  A forensic analysis conducted in real-time focuses on prioritizing data collection while recognizing the importance of volatile data sources commonly found throughout modern computer systems.

As described by Moore’s Law, computational resources have continued to double at a fairly consistent pace.  This has led to a similar growth in data storage capacities.  Although these rates of advancement represent significant potential for innovation, they have also left the forensic community struggling to keep up.  Increased computational power and hard drive space require enhanced analysis capabilities.  Unfortunately, the field of forensics has been unable to match the pace of this development and accordingly requires new tools to remain current.

Given the increasingly unsustainable model of traditional forensic examinations, security professionals are in need of additional tools.  One answer to this growing problem is the concept of real-time digital analysis.  By combining software-based platforms with triaging methods and technologies, analysts are able to identify emerging threats more quickly and accurately.  While investigators have employed triaging tools such as Carvey’s Forensic Scanner or EnCase Portable for a number of years, real-time analysis significantly enhances this technique.  The ability to run a continuous analysis in real time provides examiners with a known-good baseline to aid in the identification of emerging cyberattacks.  This technique also leverages the potential for collecting relevant and potentially volatile data.  Traditional forensics relies on a system being powered down, thus risking the loss of valuable data stored in RAM.  Real-time analysis, however, runs continuously on a system, thereby minimizing data loss while simultaneously providing a more complete picture of activities within a network (Roussev, Quates, & Martell, 2013).
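A minimal sketch of the known-good baseline idea, assuming a hypothetical directory of monitored files: hash the system while it is in a trusted state, then flag any deviation on each subsequent real-time pass:

```python
import hashlib
from pathlib import Path

MONITORED = Path("/opt/app")  # hypothetical directory under observation

def snapshot(root):
    """Map each file under root to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

baseline = snapshot(MONITORED)  # captured while the system is known good

# ...later, on each continuous-analysis pass...
current = snapshot(MONITORED)
for path in current.keys() | baseline.keys():
    if baseline.get(path) != current.get(path):
        print(f"Deviation from baseline: {path}")
```

Real-time platforms extend this idea to memory, processes, and network state rather than files alone.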

Federal Government’s Supporting Role
The majority of America’s critical infrastructure is maintained by the private sector.  Although corporations maintain a fiscal responsibility to secure these resources, the federal government also possesses an obligation to defend them.  As a result, it is incumbent upon the public sector to provide guidance and support in the development of various defensive technologies.  Historically, this assistance has ranged from sharing information and drafting policies to monetary investments.

Prioritized Research and Development
Of the many levels of government support, perhaps the most direct is funding R&D efforts in support of emerging cybersecurity technologies.  This not only maintains a partnership between public and private organizations, but also allows the government to direct a federal cybersecurity strategy in support of the nation’s infrastructure.  In 2011, the White House Office of Science and Technology Policy (OSTP) published the document “Trustworthy Cyberspace: Strategic Plan for the Federal Cybersecurity Research and Development Program” (Maughan, Newhouse, & Vagoun, 2012).  This report not only identified existing deficiencies in the national cybersecurity strategy, but also provided a framework for coordinating objectives for future R&D efforts.

Federal support for prioritized R&D efforts was further bolstered in 2008 with the Leap-Ahead Initiative.  As part of the CNCI, this approach was designed to manage R&D efforts and develop a comprehensive set of strategies to help solve the nation’s growing cybersecurity requirements (Maughan, Newhouse, & Vagoun, 2012).  Under this approach, the government’s Cyber Security and Information Assurance (CSIA) working group directed industry and academic institutions to identify emerging solutions to themes including moving target defense, cyber economic incentives, and tailored trustworthy spaces.  Based on input from the private sector and research institutions, these categories were then incorporated into the 2012 federal budget to foster the creation of emerging technologies in these fields.

Remote Agent Technologies
As public and private organizations further integrate their critical infrastructure into networked systems, increasing the efficiency of computer security has become a priority for the nation.  The federal government’s National Institute of Standards and Technology (NIST) recognized this need and responded by creating the National Cybersecurity Center of Excellence (NCCoE).  This public-private partnership represents a forum to develop “…open, standards-based, modular, end-to-end solutions that are broadly applicable, customizable to the needs of individual businesses” (McBride & Waltermire, 2013, p. 1).  In just one example, through collaboration the NCCoE aims to develop “building blocks” to assist in the challenge of continuous monitoring.  The intent is to develop a viable solution that can be applied to multiple industries and organizations.  Based on input from the private sector, the government’s NCCoE has already developed a number of these building blocks to enable “…accurate, timely data collection and secure exchange of software inventory data from computing devices” (McBride & Waltermire, 2013, p. 1).

Real-Time Forensic Analyses
A 2005 report published by the President’s Information Technology Advisory Committee (PITAC) entitled “Cyber Security: A Crisis of Prioritization” outlined the federal government’s role in investing in long-term R&D projects to identify and develop next-generation solutions to America’s emerging digital vulnerabilities (Interagency Working Group on Cyber Security and Information Assurance, 2006).  The document identified various responsibilities for the federal government including a primary leadership role in generating technological advancements in support of defending the nation’s IT assets.  This guidance can be used to identify serious cybersecurity threats to the country, prioritize the nation’s most critical assets, and then coordinate with the private sector on developing broad R&D solutions.

The Cyber Security Research and Development Act of 2002 solidified the national importance of areas such as forensics and intrusion detection.  This law called for significant increases in funding for cybersecurity R&D in various areas.  In February 2003, the federal government issued its National Strategy to Secure Cyberspace report.  In this document, the government identified a number of R&D topics that represented the most serious threats to the American IT infrastructure.  Solutions such as “…protection of systems, networks, and information critical to national security; indications and warnings; and protection against organized attacks capable of inflicting debilitating damage to the economy” were determined to represent the most critical areas for defense (Interagency Working Group on Cyber Security and Information Assurance, 2006, p. 14).  The first item mentioned in this report, however, was the development of forensics and attack attribution technologies.  Identifying the source of an attack and disseminating this information to other organizations provides one of the greatest strengths in preventing similar incidents.

Liability Recommendations
Although the concept of cybersecurity ranks as one of the nation’s most critical issues, a number of liability questions exist that have derailed any comprehensive strategy.  Topics of concern range from personal privacy to the precise level of responsibility corporate entities must assume.  To obtain a lasting partnership between corporations, individuals, and the federal government, these issues require thoughtful consideration.

Concerns over personal privacy rank among the highest reasons for opposition to any national cybersecurity initiative.  Technologies such as remote software management and real-time forensic analysis have the potential to compromise personally identifiable information.  Even though a number of laws are already in place to protect this data, privacy advocates worry about powerful and intrusive technologies in the hands of an overzealous government.  One possible solution to this dilemma is the increased automation of remote security tools (Cohen, Bilby, & Caronni, 2011).  This would result in a minimal number of individuals having access to vast amounts of personal information, thereby minimizing the liability stemming from accidental or intentional disclosures.

The second major hurdle to overcome in gathering support for a broad cybersecurity effort is corporate liability.  According to the SEC (2011), there are no current disclosure requirements for corporations experiencing cyberattacks.  There is, however, an obligation for publicly held companies to report any incident that may affect the operational or financial condition of a company.  In practice, this requirement falls far short of the federal government’s goal for information exchange.  Given the potential usefulness of this activity, corporations should feel safe in disclosing cyberattacks or data breaches without legal repercussion.

Conclusion
The federal government has a long history of supporting innovation in the private sector, especially where matters of national security are concerned.  This realization gained significant traction after incidents such as the September 11, 2001 terrorist attacks and the emergence of foreign-based advanced persistent cyberattacks.  Even though America’s critical infrastructure is maintained almost exclusively by the private sector, the federal government understands that the defense of these resources is directly linked to the safety and security of the United States as a whole.  Federal support for the development of next-generation technologies is necessary to guide the nation’s overall cybersecurity strategy.

References
Cohen, M. I., Bilby, D., & Caronni, G. (2011). Distributed forensics and incident response in the
enterprise. Digital Investigation, 8. doi:10.1016/j.diin.2011.05.012

Interagency Working Group on Cyber Security and Information Assurance. (2006). Federal plan
for cyber security and information assurance research and development. National Science and Technology Council. Retrieved from http://www.nitrd.gov/pubs/csia/csia_federal_plan.pdf

Maughan, D. (2010). The need for a national cybersecurity research and development agenda.
Communications of the ACM, 53(2), 29-31. Retrieved from http://cacm.acm.org/

Maughan, D., Newhouse, B., & Vagoun, T. (2012). Introducing the federal cybersecurity R&D
strategic plan. The Next Wave, 19(4). Retrieved from http://www.nsa.gov/research/tnw/tnw194/article3.shtml

McBride, T., & Waltermire, D. (2013). Software asset management: Continuous monitoring.

Roussev, V., Quates, C., & Martell, R. (2013). Real-time digital forensics and triage. Digital
Investigation, 10(2), 158-167. doi:10.1016/j.diin.2013.02.001

Stefan, C., & Jurian, M. (2012). Distributed communication systems monitoring and proactive
security. Analele Universitati Maritime Constanta,13(17), 185-192. Retrieved from http://www.cmu-edu.eu/anale.html

Sternstein, A. (2013). DARPA to turn off funding for hackers pursuing cybersecurity research.

University of Maryland University College (UMUC). (2013). Module 3: The future of
cybersecurity technology and policy. CSEC 670: Cybersecurity Capstone. Retrieved from http://tychousa1.umuc.edu

U.S. Securities and Exchange Commission (SEC). (2011). CF disclosure guidance: Topic No. 2.


IT Contingency Planning

James E. Gilbert
UMUC
August 2, 2013

Abstract
Modern organizations increasingly rely on information technology (IT) to conduct their daily activities.  As a result, ensuring the resiliency of this asset has become a critical component for most enterprises.  From hurricanes and power outages to cyberattacks, public agencies and private businesses alike face a myriad of threats from both manmade and natural causes.  To mitigate these risks, it is imperative that organizations devise an appropriate contingency plan that incorporates backups and safeguards for IT infrastructure.  The following paper outlines the planning steps, recovery operations, and testing requirements necessary to ensure a successful business continuity plan, along with a 24-month proposal to adequately test the preparations.  Although maintaining a comprehensive contingency plan requires a significant expenditure of personnel, equipment, and production costs, failing to develop one often proves far more costly.

Introduction
According to a study conducted by McGladrey and Pullen LLP, 43% of companies that experience a disruptive event lasting 10 days never reopen.  Fifty-one percent of firms continue to operate for up to two years following a major data outage, with only 6% of businesses surviving in the long term (Tittel & Korelc, 2013).  Given the necessity for business continuity as well as the increased dependence on IT systems and services, ensuring the availability of these resources has become paramount in the contingency planning cycle.  While business continuity planning should be tailored for each organization, a number of similarities exist throughout all plans, with the Disaster Recovery Institute International (DRII) identifying these common tasks (Vacca, 2009).  These steps include planning activities such as conducting a business impact analysis (BIA) and risk assessment.  Organizations must also determine recovery options by identifying relevant risks, selecting appropriate strategies, and developing a comprehensive contingency plan.  Finally, continuity operations must also incorporate a verification component.  This includes personnel training, periodic testing, and maintenance of the plan as changes in the organizational mission or structure occur.  Each step of the contingency planning process is important to the overall success of the enterprise.  Ensuring continuity throughout a disaster requires appropriate resources allocated to critical systems within an organization, which necessitates a strong commitment by a firm’s senior management.

Planning
The first step in designing a relevant business contingency arrangement is planning.  This stage requires an organization to weigh the returns from any proposed safeguards.  The frequency and severity of an outage should be assessed when determining the amount of resources that should be devoted to this process.  Applying these considerations to IT resources may be difficult for some firms.  It is often complicated to assess the exact level of impact an intrusion can have on a firm, with cyberattacks ranging from amateur denial-of-service (DoS) attacks to advanced persistent threats (APTs) perpetrated by nation states.  Moreover, the rate of occurrence of cyberattacks is difficult to estimate for organizations that have never experienced one.  In these instances, it is the responsibility of the cybersecurity professional to make a convincing case for the incorporation of IT resources into the business continuity plan.  This often requires computer security personnel to demonstrate the anticipated return on investment (ROI) that adequate planning will provide an organization (UMUC, 2013).

According to the DRII, this stage should include both a BIA and a risk assessment.  The BIA assesses the potential toll an outage can take on a critical business area.  Conducting this analysis requires stakeholders to identify key value drivers within the firm.  These are the elements within an organization deemed most critical to long-term operations.  Examples of value drivers include components such as intellectual property or data operations (Vacca, 2009).  The amount of resources a firm allocates to system restoration depends on the level of impact an outage is anticipated to have on daily operations.  The BIA assists managers in designing a hierarchy to determine which activities or areas should be reestablished first (Slater, 2012).

The second component in the planning stage is a risk assessment.  This step requires enterprises to perform an objective analysis of probable and possible risks that could affect daily operations.  It should account for the types of disasters or outages historically encountered, weighing the anticipated frequency of occurrence as well as the impact each incident is expected to have on the organization.  With this data, managers can then make an educated decision on how much investment is required to mitigate the impact of potential outages.
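One common way to quantify this trade-off, and to make the ROI case described in the planning stage, is annualized loss expectancy (ALE): the expected yearly frequency of an incident multiplied by the loss from a single occurrence.  The Python sketch below uses purely illustrative figures:

```python
# Illustrative risk-assessment arithmetic; all figures are hypothetical.
single_loss_expectancy = 250_000   # estimated cost of one outage (SLE), in dollars
annual_rate_of_occurrence = 0.2    # one such outage expected every five years (ARO)

ale_before = single_loss_expectancy * annual_rate_of_occurrence  # $50,000 per year

# Suppose a proposed safeguard costs $15,000 per year and halves the rate.
safeguard_cost = 15_000
ale_after = single_loss_expectancy * (annual_rate_of_occurrence / 2)  # $25,000 per year

rosi = (ale_before - ale_after - safeguard_cost) / safeguard_cost
print(f"Return on security investment: {rosi:.0%}")  # roughly 67%
```

Presenting the analysis in these terms lets security personnel argue for contingency spending in the same financial language managers already use.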

Recovery Operations
The second major component of the contingency planning process that DRII identified deals with recovery strategies.  This includes identifying continuity options based on various scenarios, selecting the strategy most applicable to an organization’s needs, and developing a continuity plan based on this data (Vacca, 2009).  Although the details vary greatly depending on the incident, the general theme should always focus on communication.  Contingency plans must include how organizations transmit information in the event of an emergency as well as how employees will talk to each other if normal communication channels are broken.  While some companies may place the greatest value on IT resources and other firms rely more heavily on supply chain logistics, every contingency arrangement should be planned and coordinated with business, security, and IT managers working in conjunction to ensure continuity of operations (Slater, 2012).

As IT resources become increasingly important in the modern business community, the number and types of disasters that an organization may encounter have risen significantly.  Where past disasters included natural occurrences such as hurricanes or floods, enterprises today must also consider outages to their networks caused by manmade sources.  This increase in the number of potential outages has led to the creation of a variety of third-party service providers.  Modern enterprises no longer have to create contingency plans from scratch.  A number of companies offer specialized continuity planning software, while others provide turnkey arrangements to facilitate backup operations.  From data centers to mobile recovery services, Gartner estimates this area represents a $3 billion to $4 billion industry (Collett, 2007).  Although organizations considering outsourcing have a number of options to weigh, the primary consideration for IT resources “…requires that the company install backup and recovery systems to override any type of crisis in support of physical and digital security” (UMUC, 2013, p. 8).

Physical
The physical security aspect of a contingency plan includes ensuring that an alternate offsite location is available in the event of an emergency.  This includes not only physical office space but also the IT resources necessary to continue operations during an outage.  From servers and networks to data backups, this component provides a means of ensuring parallel operations.  Backup sites can range from physical locations with minimal infrastructure to sites that fully imitate current operations.  From locations owned and operated by the enterprise to reciprocal agreements with similar firms, organizations have a number of recovery options available.  Like many aspects of business continuity, the level of physical preparedness is often dictated by financial considerations.

Physical backup locations generally fall into three main categories ranging from basic to advanced: cold sites, warm sites, and hot sites (Swanson, Bowen, Phillips, & Gallup, 2010).  Cold sites are facilities with the lowest level of preparation and accordingly are often the least expensive to maintain.  These locations usually have minimal infrastructure in place beyond electricity and environmental controls.  As a result, cold sites require the longest lead time to set up and become fully operational.  The next type of backup facility is a warm site.  These locations have more preparations in place than cold sites and as such are also more expensive to maintain.  Warm sites are usually partially furnished with some or all IT resources and telecommunication equipment already in place.  Accordingly, these facilities require less time to activate than cold sites.  The last category of physical backup location is the hot site.  “Hot sites are facilities appropriately sized to support system requirements and configured with the necessary system hardware, supporting infrastructure, and support personnel” (Swanson et al., 2010, p. 22).  These locations require the least amount of time to become active, with some maintaining a full-time staff.  As a result, hot sites represent the most expensive scenario for most organizations.

Digital
The second major component in assessing recovery options revolves around digital security considerations.  Although infrastructure and personnel are critical aspects in contingency planning, business continuity must also take into account data backups.  Inherent in this process is a multitude of questions and technologies.  Similar to physical security planning, this area is also heavily influenced by cost considerations (UMUC, 2013).

Depending on mission requirements, enterprises may choose any number of methods to back up digital media, databases, or proprietary data.  Decisions on how often data is backed up and to what extent should be guided by the critical nature of the information.  Organizational policy should be clear in dictating the frequency and scope of information archiving.  Additional considerations should include the location of media, the frequency of data rotation, and the data transmission method to an offsite location.  The National Institute of Standards and Technology (NIST) issues Federal Information Processing Standards Publication (FIPS) 199, entitled Standards for Security Categorization of Federal Information and Information Systems.  Keyed to the FIPS 199 impact levels, NIST outlines recommended recovery strategies depending on the level of impact an outage is anticipated to have on an organization.  NIST recommends tape backups and a cold site for low-impact events.  Outages anticipated to have a moderate effect on daily operations should be mitigated with optical backups and WAN/VLAN replication as well as a cold or warm site.  Finally, NIST recommends a backup strategy that includes mirrored systems and a hot site location for severe disruptions to an organization’s most mission-critical systems (Swanson et al., 2010).
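Those NIST pairings can be condensed into a simple lookup; the Python sketch below is only a restatement of the recommendations summarized above, not an official NIST artifact:

```python
# Condensed restatement of the NIST-recommended pairings described above.
RECOVERY_STRATEGIES = {
    "low":      {"backup": "tape backups", "site": "cold site"},
    "moderate": {"backup": "optical backups, WAN/VLAN replication", "site": "cold or warm site"},
    "high":     {"backup": "mirrored systems", "site": "hot site"},
}

def recommend(impact_level):
    """Return the recommended backup method and alternate site for an impact level."""
    return RECOVERY_STRATEGIES[impact_level.lower()]

print(recommend("moderate"))
```

Encoding the policy this way makes it easy to audit whether each system has been assigned a strategy commensurate with its impact level.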

As more organizations choose to back up their critical data, the number of companies providing data archiving services has increased in turn.  From data centers providing cloud storage to commercial vendors offering full-service transportation and restoration services, modern organizations have a number of alternatives to choose from.  Enterprises that retain third-party providers should weigh a variety of criteria.  Considerations such as geographic location could become an issue if the vendor is close enough to the customer to also be affected by an outage.  Other deciding factors should include the accessibility of the stored data, the security of the archived media, environmental considerations, and, of course, cost (Swanson et al., 2010).

Testing Requirements
The third major category the DRII associates with business continuity is the verification, maintenance, and personnel training associated with a disaster recovery plan.  Testing contingency preparations is an important component in this process.  Ensuring relevant personnel are adequately trained for their role during an outage helps guarantee a smooth operation during an actual event.  Additionally, a business continuity plan should be thought of as a living document.  Enterprises should periodically reassess and update contingency plans as mission requirements or organizational structure changes.  Finally, verifying the accuracy and capability of a plan also provides an additional measure of preparedness prior to an actual incident (Vacca, 2009). 

Tabletop and Functional Exercises
According to NIST, the two main evaluations are tabletop and functional exercises (Grance, Nolan, Burke, Dudley, White, & Good, 2006).  Tabletop exercises are discussion-based activities in which participants role-play their responsibilities during a simulated emergency.  These types of evaluations are usually conducted in an informal classroom setting, with personnel discussing their roles and actions during an outage.  A facilitator guides participants through one or more scenarios in an attempt to meet previously defined objectives.  Depending on the number of scenarios and the detail involved, tabletop exercises can last anywhere from two to eight hours.  This type of evaluation represents the most cost-effective means of testing the viability of a business continuity plan.  Tabletop tests provide a forum for team members to demonstrate their emergency knowledge as well as give managers the ability to review contingency plans for errors, missing information, or inconsistencies (Kirvan, 2009).

The other commonly utilized validation activity is the functional exercise.  This evaluation is also scenario-driven, but instead of being discussion-based, functional exercises employ a simulated operational environment.  These types of evaluations are designed to test various aspects of an IT plan, including personnel, procedures, and equipment.  Components to test can include recovery site operations, backup systems, and any third-party continuity services (Kirvan, 2009).  Functional or simulated exercises can vary in size and scope and can cover a single component or a full-scale evaluation of an enterprise.  As a result, these tests can last anywhere from several hours to several days and often represent the most costly and time-consuming of the continuity evaluation tools (Grance et al., 2006).  Although they require a significant expenditure of resources, functional exercises are also one of the most effective methods of testing a disaster recovery plan prior to an actual event.

Alternate Testing
Although tabletop and functional exercises are the two most commonly utilized methods of evaluation, the industry publication Search Disaster Recovery also recommends a variety of alternate tests, including plan reviews, orientation tests, and drills (Kirvan, 2009).  In a plan review, participants discuss the proposed business continuity plan in an informal setting.  This step is similar to a tabletop exercise, albeit without a scenario.  Orientation tests introduce participants to the contingency plan and help orient new staff to the disaster recovery policies and procedures of an organization.  Testing time for this evaluation can be as little as an hour, and it should be considered a component of the employee training curriculum.  Finally, drills provide an impromptu method of testing staff on established emergency procedures.  These types of evaluations provide training under realistic conditions and are routinely used for response to natural disasters.

24-Month Testing Plan
Testing the viability of a continuity plan encompasses a number of different exercises.  With a variety of activities available to an organization, the key is to incorporate annual testing into the overall disaster recovery process.  From drills to full-scale events, each activity possesses both merits in the form of preparation and drawbacks in the form of time and financial expenditures.  Finding a balance between an adequate amount of testing and a sufficient level of resource allocation is often the primary difficulty for organizations.  In addition to the actual amount of time needed to conduct an exercise, a far greater amount of time is necessary for “preparation and execution, funding, careful planning and a structured process from pre-test through test and post-test evaluation” (Kirvan, 2009).  Optimally, the financial considerations of any continuity plan should be based on organizational needs, including the “…maximum tolerable period of disruption and recovery time from which the specific measures will be based on” (Pinta, 2011, p. 57).  To determine the amount of money that should be spent on contingency planning and preparations, enterprises must consider factors such as the maximum tolerable downtime (MTD), recovery time objective (RTO), and recovery point objective (RPO).  For most organizations, the longer an outage lasts, the more costly it becomes.  As a result, firms must balance the costs necessary to recover from an emergency with the cost of disruption to daily operations.  Plotting these two cost curves on a graph allows managers to visualize the optimal cost balance point that should guide how much is allocated to business continuity planning (Swanson et al., 2010).  In its Special Publication 800-53, NIST requires federal agencies to test contingency plans on at least an annual basis (Grance et al., 2006).  This provides a solid starting point for the continuity planning cycle.
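A toy illustration of that cost-balance analysis, using hypothetical cost curves: the cost of disruption rises with downtime, the cost of a recovery capability falls the longer an organization can afford to wait, and the optimum sits where their sum is smallest:

```python
# Hypothetical cost curves; real figures would come from the BIA.
def disruption_cost(hours_down):
    return 5_000 * hours_down          # losses grow with downtime

def recovery_cost(hours_down):
    return 200_000 / (hours_down + 1)  # faster recovery capabilities cost more

candidate_rtos = range(1, 73)          # candidate recovery times, in hours
optimum = min(candidate_rtos,
              key=lambda h: disruption_cost(h) + recovery_cost(h))
print(f"Optimal cost balance near an RTO of {optimum} hours")
```

With these illustrative curves the balance point lands at roughly five hours; an organization would then fund a recovery capability sized to that RTO.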

Full-scale and Functional Testing
Full-scale tests, which represent the most comprehensive assessment available, also require the greatest amount of testing and planning time.  These exercises typically last anywhere from two to eight hours but require a minimum of four months to plan.  Full-scale tests are also expensive and may be disruptive to daily operational activities (Kirvan, 2009).  As a result, a comprehensive test of all IT systems should take place every one to two years.  The exercise should encompass all aspects of a business continuity plan, from evacuating the primary site to activating the backup location.  All IT and communication resources should be evaluated during this process, including “…settings of backup policy, data replication, high availability systems, active and passive devices, local mirror of systems and/or data and use of disk protection technology such as RAID technology” (Pinta, 2011, p. 61).  Due to the cost and time necessary to execute this type of plan, organizations should also consider smaller scale functional tests.  These events exercise only a portion of the continuity operation and as such may be planned in as little as three months.  The actual testing usually lasts two to four hours and causes less disruption to an organization’s daily activities (Kirvan, 2009).

Drills, Orientation and Tabletop Testing
In addition to full-scale and functional exercises, organizations should also consider limited training events that require less planning and can be executed frequently throughout the year.  Orientation tests should be given to all new personnel in order to provide a solid foundation in an organization’s continuity operations and often require only a month to plan and an hour to deliver.  Drills on the most likely emergency scenarios should be conducted quarterly.  This includes exercises such as tornado or earthquake tests, fire drills, and communication plans.  Testing time for these events can be as little as 10 minutes, with a planning cycle of one month.  Lastly, tabletop tests should be incorporated into an organization’s contingency preparations to refine the overall continuity plan.  These events should be conducted just prior to a functional or full-scale test every one to two years.  The planning cycle for these events ranges from two to three months, and they can be executed in approximately three hours depending on the size of the organization and the scope of the plan (Kirvan, 2009).  Integrating smaller scale exercises into an enterprise’s planning process allows for more frequent tests.  This in turn gives managers more opportunities to identify weaknesses in the continuity plan as well as provides employees more opportunities to practice their assigned duties in the event of an emergency.  A hypothetical schedule combining these cadences is sketched below.
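Pulling those cadences together, a 24-month calendar might look like the following sketch; the specific month assignments are illustrative choices, not prescriptions:

```python
# Illustrative 24-month exercise calendar based on the cadences described above.
schedule = {month: [] for month in range(1, 25)}

for month in schedule:
    if month % 3 == 0:
        schedule[month].append("drill (fire/tornado/communications)")  # quarterly
    if month % 12 == 1:
        schedule[month].append("orientation test for new personnel")   # plus as staff are hired

schedule[9].append("tabletop exercise")     # refine the plan...
schedule[12].append("functional exercise")  # ...then test a component (about 3 months' planning)
schedule[20].append("tabletop exercise")
schedule[24].append("full-scale exercise")  # comprehensive test (4+ months' planning)

for month, events in sorted(schedule.items()):
    if events:
        print(f"Month {month:2d}: {', '.join(events)}")
```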

Conclusion
As organizations increasingly rely on IT resources for daily operations, the number and variety of potential risks has risen significantly.  Modern enterprises must consider the impact a network outage would have on their business as well as the effects from traditional natural and manmade disasters.  Perhaps now more than ever, companies and agencies alike must ensure they have adequate disaster recovery and contingency plans in place prior to an actual emergency.  A business continuity plan should be tailored to meet an organization’s specific mission and requirements.  Threats and critical assets should be objectively identified utilizing tools such as business impact analysis and risk assessments.  These evaluations can then be used to develop a contingency plan and the necessary training and testing requirements to maintain the emergency preparations.  Finally, a business continuity plan will only succeed if adequate resources, personnel, and time are allocated to the practice.  This requires receiving support from senior management throughout the entire contingency planning process.

References
Collett, S. (2007). Evaluating business continuity services. CSO Security and Risk.

Grance, T., Nolan, T., Burke, K., Dudley, R., White, G., & Good, T. (2006). Guide to test,
training, and exercise programs for IT plans and capabilities. NIST. Retrieved from http://csrc.nist.gov/publications/nistpubs/800-84/SP800-84.pdf

Kirvan, P. (2009). Business continuity and disaster recovery testing templates. Search Disaster Recovery.

Pinta, J. J. (2011). Disaster recovery planning as part of business continuity management. Agris Online Papers in Economics & Informatics, 3(4), 55-61.

Slater, D. (2010). Business continuity and disaster recovery planning: The basics. CSO

Swanson, M., Bowen, P., Phillips, A. W., & Gallup, D. (2010). Contingency planning for federal information systems (NIST Special Publication 800-34, Rev. 1). Gaithersburg, MD: National Institute of Standards and Technology.

Tittel, E., & Korelc, J. (2013). Understanding the need for business continuity management and

University of Maryland University College (UMUC). (2013). Module 11: Service restoration and
business continuity. CSEC 650: Cybercrime Investigation and Digital Forensics. Retrieved from http://tychousa1.umuc.edu

Vacca, J. R. (2009). Computer and information security. Burlington, MA: Morgan Kaufmann Publishers.


Emerging Sources of Data in Digital Forensics

James E. Gilbert
UMUC
June 30, 2013

Abstract
As information technology continues to evolve, a growing number of software and hardware devices now have the ability to store digital evidence.  From personal computers and smart phones to virtual machines and cloud computing, these technologies are becoming commonplace for individuals and organizations alike.  Just as these tools are ubiquitous in the modern era, they have also become invaluable sources of evidence for digital investigators.  With any innovative technology, though, come new challenges for forensic examiners.  The following paper presents four sources of digital information (RAM, smart phones, cloud computing, and virtual machines) and outlines their usefulness to investigators in obtaining forensic evidence of network intrusions, malware installation, and insider-based attacks.

Introduction
Digital media and the devices that use them have become increasingly commonplace in the modern world.  From transportation and banking to personal smart phones and laptops, virtually every sector in the developed world has integrated some aspect of information technology.  For the legal system, these tools provide an effective means of reconstructing past events and, accordingly, have led to an increase in their inclusion as evidence in court proceedings.  This in turn has led to a rise in the demand for digital forensic analysis.  As computing advances, the techniques and methodologies to collect evidence from these devices must also evolve.

RAM
Arguably, the development of information technology has had one of the biggest impacts in the modern era.  From personal computers to automobiles and TVs, an increasing number of devices rely on this functionality in one form or another.  While comprised of a myriad of technologies, a critical component of any modern computer system is random access memory (RAM).  RAM speeds up data retrieval by allowing direct access to information, versus the slower access methods used for hard drives, CDs, and DVDs.  Unlike traditional storage media, however, RAM is volatile.  Any information written to this medium is lost once power is disconnected.  This feature presents a number of challenges for forensic investigators.

Collecting RAM from a system involves a “live acquisition” of the data.  This process is contrary to the approach digital investigators have historically practiced.  The traditional approach to digital media collection is a static method that involves first powering down the system.  Once the system is disconnected from power, the analyst makes a forensically sound image of the storage media (Hay & Nance, 2009).  Once powered down, though, any information stored in RAM is lost.  Types of data that can be collected from this area include currently running processes and files located in temporary storage.  Acquiring this information gives a more complete picture of the computer and its users.  This provides an accurate depiction of an information system’s active state by enabling the collection of “information not likely written to disk, such as open ports, active network connections, running programs, temporary data, user interaction, encryption keys, decrypted content, data pages pinned to RAM, and memory resident malware” (Hay & Nance, 2009, p. 31).  The other major challenge to investigators when collecting volatile media is the repeatability of the process.  Data presented as legal evidence must be collected using forensically sound practices.  As defined by the Daubert principle, this means that a forensic process should be replicable (Welch, 2006).  This allows for an independent analysis of collected evidence by third parties.  With the live acquisition of evidence from RAM, however, any action the investigator takes changes the state of the computer system and therefore cannot be repeated (Hay & Nance, 2009).  So although this process provides a more complete picture of a system’s history, it may not always be admissible in court without additional corroborating evidence.
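While a live RAM capture itself cannot be repeated, examiners can still demonstrate that the captured image has not changed since acquisition by recording a cryptographic hash at collection time.  A minimal sketch, assuming a hypothetical image path:

```python
import hashlib

def image_digest(path, algorithm="sha256", chunk_size=1 << 20):
    """Hash a potentially large forensic image without loading it all into memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Recorded at acquisition time, then re-verified before analysis or in court.
print(image_digest("/evidence/ram_capture.img"))  # hypothetical image path
```

Matching digests computed at acquisition and at analysis give third parties a verifiable artifact, even though the acquisition step itself is unrepeatable.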

As it pertains to identifying network intrusions, malware installation, and file deletion by insiders, collecting volatile data such as RAM is crucial to investigators in all three areas.  During a static collection, an investigator traditionally shuts down the system either through the OS-provided shutdown process or by disconnecting the power directly from the system.  This has the potential to destroy relevant evidence stored in data logs, temporary files, or cached data.  Additionally, paranoid or clever suspects may configure cleanup scripts or wiping applications to run during the shutdown process.  In either instance, valuable digital evidence may be lost to investigators if a live acquisition of RAM is not utilized.  Acquiring active media images prior to a shutdown has the potential to identify malware installation and network intrusions.  Collecting an “attacker’s post-compromise interaction with the system” requires capturing a complete picture of a system, including volatile data (Hay & Nance, 2009, p. 31).  Identifying the user’s interaction with the target system has the potential to recreate the steps taken by a hacker penetrating a system.  Similarly, collecting the temporary data written to RAM gives forensic analysts important clues as to what types of data were accessed by trusted insiders as well as which files may have been altered or deleted.

Virtual Machines
Like many technologies, the concept of virtualization has revolutionized the information technology field.  First appearing in the 1960s, virtual machines (VMs) perform the same functions as traditional computers but offer advantages in the areas of server consolidation, testing, and cost (Khangar & Dharaskar, 2012).  Organizations and individuals are no longer limited by physical hardware requirements, allowing data and applications to be processed in a logical realm.  Although VMs operate in ways similar to traditional systems, there are still some challenges digital investigators must address when collecting evidence from them.

According to Nelson, Phillips, and Steuart (2010), digital investigations involving VMs do not differ significantly from those focusing on traditional systems.  One of the biggest challenges in collecting data from this technology, however, is a lack of understanding.  For investigators analyzing any new technology, it is crucial to recognize how the device interacts with or compares to traditional technologies.  In the case of virtualization, comprehending the interaction between the VM software and the host system is vital to collecting evidentiary data.  Because virtual systems operate in much the same way as their hardware-based counterparts, digital investigators should acquire a forensic image of the target computer and then process the data using a traditional methodology.  This includes auditing the user logs for both the host system and the virtual machine running on it (Sungsu, Byeongyeong, Jungheum, Keunduck, & Sangjin, 2011).  Understanding the structure and organization specific to VMs is also a critical step in the investigatory process.  For instance, recognizing where data is located under VMware’s Virtual Machine File System (VMFS) can aid analysts in locating critical data in an efficient fashion (Khangar & Dharaskar, 2012).  Additional nuances of collecting data from virtual systems include ensuring information is not altered during acquisition, gathering volatile data prior to powering down the system, and overcoming the legal challenges of presenting forensically sound evidence from a new technology.  Because VMs operate in an active state, collecting volatile data from these systems is as important as it is with traditional computers.  As virtualization becomes more commonplace, the forensic capabilities for analyzing these platforms also increase.  This equates to more effective techniques for collecting virtual data as well as widespread acceptance of digital forensics throughout the legal process (Khangar & Dharaskar, 2012).

Investigators targeting a virtual machine for analysis have the potential to find evidence similar in amount and scope to what they would find on a traditional system.  This includes evidentiary data related to network intrusions, malware installation, and insider file deletions.  Although virtual systems operate in a logical environment, collecting data from this platform can be accomplished by mounting the VM and then assessing the contents of the digital image.  Just as suspects leave evidence behind on a traditional computer system, activities conducted on a VM also create a set of files that are written to the host computer (Khangar & Dharaskar, 2012).  Obtaining a forensic image of the host computer can provide investigators with network logs pertaining to both the host and the virtual system (Nelson et al., 2010).  One of the most common virtualization platforms, VMware, “…as the default generates each virtual machine image, memory dump, log and configuration file” (Sungsu et al., 2011, p. 151).  Similar to files found on a traditional computer system, these data repositories may contain evidence related to network intrusions, malware installations, and the deletion of files by trusted users.
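As a simple first triage step on a mounted host image, an examiner might inventory these artifact types.  The extensions below are standard VMware file types (virtual disks, memory and suspend state, configuration, and logs); the mount point is hypothetical:

```python
from pathlib import Path

# Standard VMware artifact extensions: virtual disks (.vmdk), memory dumps
# (.vmem), suspended state (.vmss), configuration (.vmx), and logs (.log).
VM_EXTENSIONS = {".vmdk", ".vmem", ".vmss", ".vmx", ".log"}

host_image = Path("/mnt/evidence")  # hypothetical mount point of the host image

for artifact in sorted(host_image.rglob("*")):
    if artifact.is_file() and artifact.suffix.lower() in VM_EXTENSIONS:
        print(f"{artifact}  ({artifact.stat().st_size} bytes)")
```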

Smart Phones
One of the most ubiquitous and innovative advancements within the information technology arena has been the invention of the smart phone.  The sheer computing power these devices possess, combined with the portability of this technology, has made them an invaluable tool.  Just as businesses and individuals have leveraged the growth of communication technology, so have a variety of criminal entities.  Aside from the intrinsic value these devices represent, modern smart phones possess the computing power to rival traditional computer systems.  Many cybercrimes that were historically facilitated with laptops or desktops can now be carried out with a smaller, more concealable smart phone.  As a result of this development, smart phones have become an important repository of evidence for law enforcement agencies throughout the world (Casey & Turnbull, 2011).  Because smart phones are both computers and digital communication devices, collecting data from these sources presents a number of challenges for investigators.

The single biggest issue to address in collecting evidence from any mobile device is staying current with the technology.  Every year, companies release a multitude of smart phones to the public.  Many of these models contain proprietary software and hardware features that forensic investigators must stay abreast of.  This involves not only a continuous cycle of education but also a significant financial commitment for forensic laboratories to purchase test models and software.  Although the sheer number of potential phones available may be daunting, there are a number of commercial tools made specifically for digital investigators.  Companies like MicroSystemation, Logicube, and Cellebrite manufacture products that are specially designed to acquire data from mobile devices (Casey & Turnbull, 2011).  Many of these companies also issue updates for software versions and hardware connectors, providing forensic analysts the ability to stay current with emerging technologies.

Additional forensic challenges associated with collecting data from smart phones stem from features inherent to mobile communication devices.  Modern smart phones integrate a multitude of communication paths, including cellular, Wi-Fi, and Bluetooth.  This means there are numerous ways for data on these devices to be overwritten or remotely destroyed.  Many smart phones have the capability to allow remote wiping of stored data on the device.  Although this was designed to protect user data in the event of theft, it also has the unintended consequence of providing criminals the ability to destroy evidence before law enforcement can obtain it.  The ability to alter or destroy data wirelessly means investigators must take added precautions when seizing smart phones.  Options to prevent these devices from receiving or sending signals include turning off the phone or removing the battery.  Although this eliminates the possibility of outside sources altering data on the phone, it may also activate security features such as encryption or lock codes (Casey & Turnbull, 2011).  Smart phones like RIM’s line of BlackBerrys include 256-bit encryption and an ECC public key, and later versions of the phones’ firmware do not allow for mobile password resetting.  This means that turning off the device will most likely render data recovery virtually impossible (Martin, 2008).  To avoid this situation and isolate the device from unintended signals, investigators can instead place the item in an RF-shielded container such as a Faraday bag.

Although smart phones and other mobile communication devices pose a number of challenges for forensic investigators, they also represent valuable sources of digital evidence.  This is in no small part due to the type of storage media that many smart phones possess: flash memory.  Although criminals have a number of potential methods to destroy or alter data on a mobile device, the use of flash memory chips means information can often be successfully recovered.  Due to proprietary algorithms on many smart phones, data is written and erased on flash memory in such a way that deleted information is not immediately wiped.  Flash memory “can only be erased block-by-block, and mobile devices generally wait until a block is full before erasing data” (Casey & Turnbull, 2011, p. 3).  In addition, this form of data storage is generally more durable against extreme conditions such as temperature, pressure, or impact, making physical destruction of the chips more difficult.  As a result of these features, investigators have the potential to recover data pertaining to malware installation and network intrusion.  Malware loaded onto mobile devices and later erased by perpetrators may still leave digital clues.  Similarly, network intrusions using or directed at smart phones may also leave a trail of evidence for investigators to find (Casey & Turnbull, 2011).  The deletion of files by an insider, however, may be more difficult to ascertain on a smart phone.  Although these devices often belong to a single individual, making ownership relatively straightforward to assign, a phone without security mechanisms such as lock codes activated can be used by anyone.  So even when data is deleted from a smart phone, or the device is used to destroy information on a network, actually identifying the culprit may require more evidence than a digital investigator can obtain.

Cloud Computing
IT experts estimate that cloud computing has the potential to transform information technology as significantly as personal computers, the World Wide Web, and smart phones have (Ruan, Carthy, Kechadi & Crosbie, 2011).  The model encompasses a host of IT concepts that generally describe distributed computing over a network, fundamentally changing the historic model of IT services.  Large data centers have replaced individual workstations to create a virtual environment for organizations and individuals alike.  Cloud computing employs VMs and a “combination of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and/or Software as a Service (SaaS)” (Barbara, 2009).  Individuals are able to utilize programs in a manner similar to that of traditional hardware-based computers, but at a fraction of the cost.  As a result of these savings, an increasing number of companies have incorporated cloud computing into their traditional approach to data processing.  According to Gartner, the worldwide cloud services market was forecast to grow 19.6% in 2012 to an estimated $109 billion (Gartner, 2012).  This represents a growth rate five times faster than that of on-premises IT equipment (Ruan et al., 2011).  Although cloud computing has revolutionized technology on a number of fronts, the model is not without unique challenges for customers and forensic investigators alike.

Storing and accessing data on remote servers raises a number of potential concerns for clients.  Utilizing Internet applications to retrieve sensitive data is inherently risky for organizations.  In addition, cloud users often do not know where their data physically resides or which other clients the provider may host.  This commingling of information and users creates the potential for malicious or unintentional data loss if adequate security features are not in place (Barbara, 2009).  These same considerations also raise a variety of unique concerns for digital investigators.

Locating, preserving, and analyzing digital information becomes a challenge when the data is stored in the cloud.  One forensic concern is the loss of valuable pieces of digital evidence.  Items historically acquired by investigators, such as registry entries, temporary files, and other similar artifacts, may be lost when a user exits a cloud application.  In addition, cloud customers and investigators often have limited access to log files and auditing information (Ruan et al., 2011).  The use of cloud computing also provides suspects with an additional layer of anonymity when carrying out malicious activity.  These factors call into question the validity of evidence in a court of law.  Establishing a chain of custody for evidence, and credibly explaining this process to a jury, is problematic for investigators.  Determining where information is stored, who had access to it, and whether other entities could have altered it are all serious considerations for law enforcement agencies (Barbara, 2009).  As a result, the emergence of cloud computing has forced the creation of an entirely new focus in digital forensics called cloud forensics (Ruan et al., 2011).  While data acquisition from traditional computers follows a number of established methodologies, retrieving data from cloud-based systems introduces additional technologies and challenges.
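
One widely used safeguard in this regard is to record a cryptographic hash of an evidence image at the moment of acquisition, so that its integrity can later be demonstrated in court.  The brief Python sketch below illustrates the general idea; the file name, examiner name, and record format are hypothetical.

import hashlib
import json
from datetime import datetime, timezone

def acquisition_record(image_path, examiner):
    """Hash an evidence image and record who acquired it, and when.

    Re-hashing the image later and matching this value demonstrates the
    evidence is unchanged; a mismatch signals a possible custody problem.
    """
    digest = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            digest.update(chunk)
    return {
        "image": image_path,
        "sha256": digest.hexdigest(),
        "examiner": examiner,
        "acquired_utc": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical evidence image and examiner, for illustration only:
print(json.dumps(acquisition_record("suspect_disk.img", "J. Examiner"), indent=2))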

Currently, many forensic examiners admit that "there is no foolproof, universal method for extracting evidence in an admissible fashion from cloud-based applications" (Barbara, 2009).  This consideration, along with chain of custody issues, makes cloud computing one of the least reliable technologies for investigators seeking information about network intrusions, malware installations, or the deletion of files by insiders.  Technical dimensions that make this technology difficult to analyze forensically include live forensics, evidence segregation, and virtualization.  Many of the same considerations for live acquisition of RAM also apply to investigators collecting evidence from cloud-based systems.  Complex configurations with multiple connected resources significantly increase the forensic workload.  Recreating a timeline of events that occurred solely within the cloud requires precise time synchronization, a feat made more difficult by the disparate locations of users and cloud-based data repositories.  Cloud computing is designed to provide a pool of resources to multiple users.  This aspect presents a challenge for forensic investigators not from a data acquisition standpoint, but rather in protecting the confidentiality of other clients.  Cloud providers achieve data segregation using software-based compartmentalization.  This configuration complicates collecting data from one individual who happens to be sharing resources with numerous other users.  Finally, there is the challenge of virtualization.  Although analyzing VMs on traditional systems is relatively straightforward, the concept takes on a completely new dimension in cloud-based systems.  Data mirroring across systems located in different states or even countries introduces a number of jurisdictional concerns for law enforcement agencies (Ruan et al., 2011).
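
To illustrate the time synchronization problem in particular, the short Python sketch below normalizes event timestamps recorded in different time zones to UTC before ordering them into a single timeline; the log entries, locations, and offsets are invented for the example.

from datetime import datetime, timezone, timedelta

# Hypothetical events logged by cloud resources in different time zones;
# each tuple is (local timestamp, UTC offset in hours, description).
raw_events = [
    ("2013-09-27 14:05:09", -5, "login from suspect account (Virginia data center)"),
    ("2013-09-27 20:03:58", +1, "file deletion logged (Dublin storage node)"),
    ("2013-09-27 11:04:30", -8, "VM snapshot created (California hypervisor)"),
]

def to_utc(local_str, offset_hours):
    """Interpret a naive local timestamp, then convert it to UTC."""
    tz = timezone(timedelta(hours=offset_hours))
    local = datetime.strptime(local_str, "%Y-%m-%d %H:%M:%S").replace(tzinfo=tz)
    return local.astimezone(timezone.utc)

# Sorting the raw local strings would give the wrong order of events;
# once normalized to UTC, the true sequence emerges.
timeline = sorted((to_utc(ts, off), desc) for ts, off, desc in raw_events)
for when, desc in timeline:
    print(when.isoformat(), desc)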

Conclusion
The continued evolution of information technology offers a host of potential benefits for mankind.  From personal computers to cloud computing, each new development has advanced our lives in various ways.  For digital investigators, however, the emergence of new technologies signifies both advantages and challenges.  New devices mean additional sources of data for investigators to leverage in the course of their analysis.  Conversely, each new scientific advancement introduces a myriad of technologies that investigators must master in order to collect the evidence these devices contain.  Public and private organizations seeking to stay current in this field must commit to a continuing investment in both money and education.

References
Barbara, J. J. (2009). Cloud computing: Another digital forensic challenge. Forensic Magazine.

Casey, E., & Turnbull, B. (2011). Digital evidence and computer crime (3rd ed.), 1-44. Waltham, MA: Academic Press.

Gartner. (2012). Gartner says worldwide cloud services market to surpass $109 billion in 2012.

Hay, B., & Nance, K. (2009). Live analysis: Progress and challenges. IEEE Security & Privacy, 30-37. Retrieved from http://nob.cs.ucdavis.edu/bishop/papers/2009-ieeesp-2/liveanal.pdf

Khangar, S. V., & Dharaskar, R. V. (2012). Digital forensic investigation for virtual machines.
International Journal of Modeling and Optimization, 2(6), 663-666. Retrieved from http://www.ijmo.org/papers/205-S4038.pdf

Martin, A. (2008). Mobile device forensics. SANS. Retrieved from http://www.sans.org/reading_room/whitepapers/forensics/mobile-device-forensics_32888

Nelson, B., Phillips, A., & Steuart, C. (2010). Guide to computer forensics and investigations.
Boston, MA: Course Technology.

Ruan, K., Carthy, J., Kechadi, T., & Crosbie, M. (2011). Cloud forensics. Advances in Digital Forensics VII. Berlin, Germany: Springer.

Sungsu, L., Byeongyeong, Y., Jungheum, O., Keunduck, B., & Sangjin, L. (2011). A research on the investigation method of digital forensics for a VMware Workstation’s virtual machine. Mathematical and Computer Modelling, 55, 151-160. Retrieved from http://www.sciencedirect.com.ezproxy.umuc.edu/science/article/pii/S0895717711001014

Welch, C. H. (2006). Flexible standards, deferential review: Daubert’s legacy of confusion. Harvard Journal of Law & Public Policy, 29(3), 1085-1105. Retrieved from http://www.harvard-jlpp.com/archive/#293