Spotless Sandboxes: Evading Malware Analysis Systems using Wear-and-Tear Artifacts

Najmeh Miramirkhani, Mahathi Priya Appini, Nick Nikiforakis, Michalis Polychronakis
Stony Brook University

    {nmiramirkhani, mappini, nick, mikepo}@cs.stonybrook.edu

Abstract: Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed on an analysis environment. Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts.

We observe that as the fidelity and transparency of dynamic malware analysis systems improves, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the wear and tear that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks, to how realistic its past use looks, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.

    I. INTRODUCTION

As the number and sophistication of the malware samples and malicious web pages that must be analyzed every day constantly increases, defenders rely on automated analysis systems for detection and forensic purposes. Malware scanning based on static code analysis faces significant challenges due to the prevalent use of packing, polymorphism, and other code obfuscation techniques. Consequently, malware analysts largely rely on dynamic analysis approaches to scan potentially malicious executables. Dynamic malicious code analysis systems operate by loading each sample into a heavily instrumented environment, known as a sandbox, and monitoring its operations at varying levels of granularity (e.g., I/O activity, system calls, machine instructions). Malware sandboxes are typically built using API hooking mechanisms [1], CPU

emulators [2]-[5], virtual machines [6], [7], or even dedicated bare-metal hosts [8]-[10].

Malware sandboxes are employed by a broad range of systems and services. Besides their extensive use by antivirus companies for analyzing malware samples gathered by deployed antivirus software or by voluntary submissions from users, sandboxes have also become critical in the mobile ecosystem, for scrutinizing developers' app submissions to online app marketplaces [11], [12]. In addition, a wide variety of industry security solutions, including intrusion detection systems, secure gateways, and other perimeter security appliances, employ sandboxes (also known as detonation boxes) for on-the-fly or off-line analysis of unknown binary downloads and email attachments [13]-[17].

As dynamic malware analysis systems become more widely used, online miscreants constantly seek new methods to hinder analysis and evade detection by altering the behavior of malicious code when it runs on a sandbox [8], [9], [11], [18]-[20]. For example, malicious software can simply crash or refrain from performing any malicious actions, or even exhibit benign behavior by masquerading as a legitimate application. Similarly, exploit kits hosted on malicious (or infected) web pages have started employing evasion and cloaking techniques to refrain from launching exploits against analysis systems [21], [22]. To effectively alter its behavior, a crucial capability for malicious code is to be able to recognize whether it is being run on a sandbox environment or on a real user system. Such environment-aware malware has been evolving for years, using sophisticated techniques against the increasing fidelity of malware analysis systems.

In this paper, we argue that as the fidelity and non-intrusiveness of dynamic malware analysis systems continues to improve, an anticipated next step for attackers is to rely on other environmental features and characteristics to distinguish between end-user systems and analysis environments. Specifically, we present an alternative, more potent approach to current VM-detection and evasion techniques that is effective even if we assume that no instrumentation or introspection artifacts of an analysis system can be identified by malware at run time. This can be achieved by focusing instead on the

wear and tear and aging that inevitably occur on real systems as a result of normal use. Instead of trying to identify artifacts characteristic to analysis environments, malware can determine the plausibility of its host being a real system used by an actual person based on system usage indicators.

The key intuition behind the proposed strategy is that, in contrast to real devices under active use, existing dynamic analysis systems are typically implemented using operating system images in almost pristine condition. Although some further configuration is often performed to make the system look more realistic, after each binary analysis, the system is rolled back to its initial state [23]. This means that any possible wear-and-tear side-effects are discarded, and the system is always brought back to a pristine condition. As an indicative example, in a real system, properties like the browsing history, the event log, and the DNS resolver cache are expected to be populated as a result of days' or weeks' worth of activity. In contrast, due to the rollback nature of existing dynamic analysis systems, malicious code can easily identify that the above aspects of a system may seem unrealistic for a device that supposedly has been in active use by an actual person.

Even though researchers have already discussed the possibility of using specific indicators, such as the absence of Recently Open Files [24] and the lack of a sufficient number of processes [25], as ways of identifying sandbox environments, we show that these individual techniques are part of a larger problem, and systematically assess their threat to dynamic analysis sandboxes. To assess the feasibility and effectiveness of this evasion strategy, we have identified a multitude of wear-and-tear artifacts that can potentially be used by malware as indicators of the extent to which a system has been actually used. We present a taxonomy of our findings, based on which we then proceeded to implement a probe tool that relies on a subset of those artifacts (ones that can be captured in a privacy-preserving way) to determine a system's level of wear and tear. Using that tool, we gathered a large set of measurements from 270 real user systems, and used it to determine the discriminatory capacity of each artifact for evasion purposes.

We then proceed to test our hypothesis that wear and tear can be a robust indicator for environment-aware malware, by gauging the extent to which malware sandboxes are vulnerable to evasion using this strategy. We submitted our probe executable to publicly available malware scanning services, receiving results from 16 vendors. Using half of the collected sandbox data, we trained a decision tree model that picks a few artifacts to reason whether a system is real or not, and evaluated it using the other half. Our findings are alarming: for all tested sandboxes, the employed model can determine that the system is an artificial environment and not a real user device with an accuracy of 92.86%.

We provide a detailed analysis of the robustness of this evasion approach using different sets of artifacts, and show that modifying existing sandboxes to withstand this type of evasion is not as simple as artificially adjusting the probed artifacts to plausible values. Besides the fact that malware can easily switch to the use of another set of artifacts, we have uncovered a deeper direct correlation between many of the evaluated wear-and-tear artifacts and the reported age of the system, which can allow malware to detect unrealistic configurations. As a step towards defending against wear-and-tear malware evasion, we have developed statistical models that capture the

age and degree of usage of a given system, which can be used to fine-tune sandboxes so that they look realistic.

In summary, our work makes the following main contributions:

We show that certain, seemingly independent, techniques for identifying sandbox environments are in fact all instances of a broader evasion strategy based on a system's wear-and-tear characteristics.

We present an indicative set of diverse wear-and-tear artifacts, and evaluate their discriminatory capacity by conducting a large-scale measurement study based on data collected from real user systems and malware sandboxes.

We demonstrate that using simple decision trees derived from the analyzed data, malware can robustly evade all tested sandboxes with an accuracy of 92.86%.

We present statistical models that can predict the age of a system based on its system usage indicators, which can be used to age existing artificial-looking sandboxes to the desired degree.

    II. BACKGROUND AND RELATED WORK

    A. Virtualization and Instrumentation Artifacts

Antivirus companies, search engines, mobile application marketplaces, and the security research community in general largely rely on dynamic code analysis systems that load potentially malicious content in a controlled environment for analysis purposes [1]-[6], [8]-[10], [21], [26]-[34]. Given that virtual machines and system emulators are very convenient platforms for building malware sandboxes, evasive malware often relies on various artifacts of such systems to alter its behavior [8], [9], [11], [18]-[20], [35]. Probably the simplest approach is to use static heuristics that check for certain system properties, such as VM-specific device drivers and hardware configurations, fixed identifiers including MAC addresses, IMEIs (for mobile malware analysis systems), user/host names, VM-specific loaded modules and processes (e.g., VMware Tools in the case of VMware), and registry entries [20], [36].

Even when dynamic analysis systems are carefully configured to avoid expected values and configurations, lower-level properties of the execution environment can be used to identify the presence of a virtual environment. These include emulator intricacies that can be observed at runtime using small code snippets, timing properties of certain virtualized instructions, cache side effects, and many others [8], [20], [37]-[43].

    B. Environmental and User Interaction Artifacts

The increasing sophistication of environment-aware malware has prompted many research efforts that focus on either detecting the split personality of malware by running each sample on multiple and diverse analysis systems [18], [44], [45], or increasing the fidelity of the analysis environment by avoiding the use of emulation or virtualization altogether, and opting for bare-metal systems [8]-[10], [46].

These recent developments, however, along with the proliferation of virtual environments in production deployments and end-user systems, have resulted in yet another change of

tactics from the side of malware authors. Given that a VM host may be equally valuable as a non-VM host, the number of malicious samples that rely on VM detection tricks for evasion has started to decrease [47].

At the same time, malware authors have started employing other heuristics that focus mostly on how realistic the state and configuration of a host is, rather than whether it is a real or a virtual one. Such heuristics include checking whether the mouse cursor remains motionless in the center of the screen [48], observing the absence of Recently Open Files [24] or an unusually low number of processes [25], testing whether unrestricted internet connectivity is available, and trying to resolve a known non-existent domain name. As some malware analysis systems blindly serve all DNS requests, a positive answer to a request for a non-existent domain is an indication of a malware analysis environment and not a real system [9]. Correspondingly, malware sandboxes have started emulating user behavior (e.g., by generating realistic-looking mouse movements and actions) and exposing a more realistic network environment. Unfortunately, as we show in this work, a neglected aspect of this effort towards making malware sandboxes as realistic as possible is the wear and tear that is expected to occur as a result of normal use.
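The non-existent-domain heuristic mentioned above is straightforward to implement. The following is a minimal C++ sketch written for illustration (not taken from any malware sample); the probed domain name is a placeholder, and an actual check would use a name known to be unregistered.

    // Minimal sketch of the "non-existent domain" heuristic described above.
    // The probed domain name is an arbitrary placeholder. Link: ws2_32.lib
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <cstdio>

    int main() {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

        addrinfo hints = {};
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        addrinfo* result = nullptr;
        // A name that should not resolve on a normal network. If the resolver
        // returns an address anyway, the environment is likely a sandbox that
        // answers all DNS queries.
        int rc = getaddrinfo("nonexistent-domain-example.invalid", "80",
                             &hints, &result);

        if (rc == 0 && result != nullptr) {
            printf("Suspicious: bogus domain resolved (possible sandbox)\n");
            freeaddrinfo(result);
        } else {
            printf("Bogus domain did not resolve (consistent with a real network)\n");
        }
        WSACleanup();
        return 0;
    }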

    C. Sandbox Fingerprinting

A few research efforts have focused on deriving configuration profiles from sandboxes used by malware scanning services and security appliances [23], [49]. By fingerprinting the sandboxes of different vendors, malware can then identify the distinguishing characteristics of a given sandbox, and alter its behavior accordingly. Maier et al. developed Sand-Finger [49], a sandbox fingerprinting tool for Android-based analysis systems. The extracted fingerprints capture properties that are mostly specific to mobile devices, such as saved WiFi networks, paired Bluetooth devices, or whether a device is connected to a host via a cable. The authors show that the derived fingerprints can be used by malicious apps to evade Google's Bouncer and other mobile app sandboxes.

Recently, Blackthorne et al. presented AVLeak, a tool that can fingerprint emulators running inside commercial antivirus (AV) software, which are used whenever AVs detect an unknown executable [50]. The authors developed a method that allows them to treat these emulators as black boxes and use side channels as a means of extracting fingerprints from each AV engine.

Yokoyama et al. developed SandPrint [23], a sandbox fingerprinting tool tailored for Windows malware analysis systems. The tool considers a set of 25 features that were identified to exhibit characteristic, high-entropy values on malware sandboxes used by different vendors. These include hardware configuration parameters, which tend to have unusually low values to facilitate the parallel operation of multiple sandbox images (e.g., by lowering the amount of RAM or disk space), or features specific to the malware sample invocation mechanism employed by each sandbox (e.g., Cuckoo Sandbox's agent.py launch script, or the fact that the file name of the

analyzed executable is changed according to its MD5 sum or using some other naming scheme). The authors then performed automated clustering to classify fingerprints collected after the submission of the tool to 20 malware analysis services, resulting in the observation of 76 unique sandboxes. They also used some of the features to build a classifier that malware can use to reliably evade the tested sandboxes.

As the authors of SandPrint note, most of the selected features are deterministic and their values discrete and reliable, and a stealthy sandbox could try to diversify the feature values. Inspired by this observation, our aim in this paper is to explore what the next step of attackers might be, once sandbox operators begin to diversify the values of the above features, and to configure their systems using more realistic settings, rendering sandbox fingerprinting ineffective. To that end, instead of focusing on features characteristic to sandboxes, we take a radically different view and focus on artifacts characteristic to real, used machines. Previous works have shown that sandboxes are configured differently than real systems, and can thus be easily fingerprinted. The programs used to analyze malware are designed to be as transparent as possible, to prevent malware from detecting that it is being analyzed. In this work, we show that even with completely transparent analysis programs, the environment outside the analysis program is enough for malware to determine that it is under analysis.

    III. WEAR AND TEAR ARTIFACTS

To determine the extent to which a system has been actually used, and consequently, the plausibility of it being a real user system and not a malware sandbox, malicious code can rely on a broad range of artifacts that are indicative of the wear and tear of the system. In this section, we discuss an indicative set of the main types of such artifacts that we have identified, which can be easily probed by malware.

Our strategy for selecting these artifacts was based on identifying what aspects of a system are affected as a result of normal use, and not on system features that may be affected by certain sandbox-specific implementation intricacies (a realistically configured sandbox may not exhibit any such features at all). We located these artifacts by using a method similar to the snowball method for conducting literature surveys. Specifically, upon identifying a potentially promising artifact, we would search in that same location of the system for more artifacts. Besides intuition, to identify additional sources of artifacts, we also studied various Windows cleaner and optimizer programs, as well as Windows forensics literature.

Our list of artifacts is clearly non-comprehensive and we are certain that there are many more good sources of artifacts that were not included in this study. At the same time, although it is expected that one can find a multitude of wear-and-tear artifacts, e.g., as a result of the customization, personalization, and use of different applications on an actual user system, our selection was constrained by an important additional requirement.

A. Probing for Artifacts while Preserving User Privacy

We can distinguish between two main types of wear-and-tear artifacts, depending on their source: artifacts that stem from direct user actions, versus artifacts caused by indirect system activity. For instance, a browser's history contains the URLs that a user has explicitly visited. Expecting to find a popular search engine, a social networking service, and a news website among the recently visited URLs could constitute a heuristic that captures a user's specific interests. In contrast, the number of application events in the system event log is an indirect artifact which, although it depends on user activity, does not capture any private user information.

Direct artifacts are qualitatively stronger indicators of a system's degree of use by a person. Conceivably, malware could rely on a broad and diverse range of such artifacts to decide whether a system has been under active use, e.g., by inspecting whether document folders contain plausible numbers of files with expected file extensions, checking for recently opened documents in popular document viewing applications, or looking for the most recently typed online search engine queries, system-wide (spotlight-like) search queries, instant messaging or email message contents, and so on.

For the purposes of our study, however, such artifacts are out of the question, as collecting them from real user systems would constitute a severe privacy violation. Consequently, we limit our study mainly to indirect, system-wide artifacts, which do not reveal any private user information. To get a glimpse of the evasion potential of direct artifacts, we do consider a limited set of browser-specific artifacts, which, however, are strictly limited to aggregate counts (e.g., number of cookies, number of downloaded files, and number of visited URLs) that can be collected without compromising user privacy and anonymity.

    B. Artifact Categories

Many aspects of a system are susceptible to wear and tear, from the operating system itself, to the various file, network, and other subsystems. Considering, for instance, a typical Microsoft Windows host, a multitude of different wear-and-tear artifacts are available for estimating the extent to which the system has been used. In the following, we provide a brief description of the main types of artifacts that we have identified. The complete set of artifacts that we used in our experiments is shown in Table I.

1) System: Generic system properties such as the number of running processes or installed updates are directly related to the history of a system, as more installed software typically accumulates throughout the years.
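As an illustration of how cheaply such properties can be probed, the following sketch counts the running processes (the totalProcesses artifact in Table I) using the Windows Toolhelp API. It is our own minimal example, not code from the paper's probe tool.

    #include <windows.h>
    #include <tlhelp32.h>
    #include <cstdio>

    // Counts running processes, corresponding to the totalProcesses artifact.
    int main() {
        HANDLE snapshot = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        if (snapshot == INVALID_HANDLE_VALUE) return 1;

        PROCESSENTRY32 entry;
        entry.dwSize = sizeof(entry);
        int count = 0;
        if (Process32First(snapshot, &entry)) {
            do {
                ++count;
            } while (Process32Next(snapshot, &entry));
        }
        CloseHandle(snapshot);
        printf("totalProcesses = %d\n", count);
        return 0;
    }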

The event log of Windows systems is another rich source of information about the current state and past activity of the system. Various types of events are recorded into the event log, including application, security, setup, and system events of different administrative ratings (critical, error, warning, information, audit success or failure). Of particular interest are events that denote abnormal conditions, such as warnings about

missing files, inaccessible network links, unexpected service terminations, and access permission errors. The more a system has been used, the higher the number of records that will have been accumulated in the event log. Besides the simple count of system events, we also consider additional aspects indicative of past use, such as the number of events from user applications, the number of different event sources, and the time difference between the first and last event.

The event log will be (almost) empty only if the operating system is freshly installed, or if the user deleted the recorded entries. The vast majority of users, however, are not even aware of the existence of this log, so it is unlikely that its contents will be affected (e.g., deleted) by explicit user actions.
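A minimal sketch of probing the event-log artifacts is shown below, using the classic event-log API (which is also available on Windows XP). It only retrieves record counts; computing the elapsed time between the first and last event (sysdiffdays, appdiffdays) would additionally require reading individual records with ReadEventLog and comparing their TimeGenerated fields. This is an illustrative example, not the authors' implementation. Link against advapi32.lib.

    #include <windows.h>
    #include <cstdio>

    // Counts the records in a Windows event log, approximating the
    // sysevt/appevt artifacts. GetOldestEventLogRecord returns the record
    // number of the oldest entry, not a timestamp.
    static void probeLog(const char* logName) {
        HANDLE h = OpenEventLogA(NULL, logName);
        if (h == NULL) return;

        DWORD records = 0, oldest = 0;
        GetNumberOfEventLogRecords(h, &records);
        GetOldestEventLogRecord(h, &oldest);
        printf("%s log: %lu records (oldest record number %lu)\n",
               logName, records, oldest);
        CloseEventLog(h);
    }

    int main() {
        probeLog("System");
        probeLog("Application");
        return 0;
    }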

2) Disk: User activity results in file system changes, exhibited in various forms, including the generation of temporary files, deleted files, actual user content, cached data, and so on. Given the sensitive nature of accessing user-generated content, even in terms of just counting files, we limit the collection of disk artifacts directly related to user activity only to the number of files on the desktop and in the recycle bin. The rest of the disk artifacts capture counts of files related to non-user-generated temporary or cached data (e.g., generated file-content thumbnails, or process crash minidump files).
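For instance, the recycle-bin artifacts can be probed with a single shell call, as in the following illustrative sketch (again, our own example rather than the paper's actual tool). Link against shell32.lib.

    #include <windows.h>
    #include <shellapi.h>
    #include <cstdio>

    // Queries the aggregate size and item count of the recycle bin across
    // all drives (recycleBinSize / recycleBinCount).
    int main() {
        SHQUERYRBINFO info;
        info.cbSize = sizeof(info);
        if (SHQueryRecycleBinA("", &info) == S_OK) {
            printf("recycleBinCount = %lld\n", (long long)info.i64NumItems);
            printf("recycleBinSize  = %lld bytes\n", (long long)info.i64Size);
        }
        return 0;
    }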

3) Network: The network activity of a system, especially from a historical perspective, is a strong indicator of actual use. Every time a host is about to send a packet to a remote destination, it consults the system's address resolution protocol (ARP) cache to find the physical (MAC) address of the gateway, or the address of another host in the local network. Similarly, every time a domain name is resolved to an IP address, the operating system's DNS stub resolver includes it in a cache of the most recent resolutions. The use of public key encryption also results in artifacts related to the certificates that have been encountered. For instance, once a certificate revocation list (CRL) is downloaded, it is cached locally. The list of the URLs of previously downloaded CRLs is kept as part of the cached information, so the number of entries in that list is directly related to past network activity (not only due to browsing activity, but also due to applications that perform automatic updates over the network). We also consider the number of cached wireless network SSIDs, which is a good indicator of past use for mobile devices, and the number of active TCP connections.
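The sketch below illustrates how two of these network artifacts can be probed with the IP Helper API; the DNS resolver cache is omitted here because it has no documented Win32 enumeration API. The code is an assumption-laden example, not the authors' implementation. Link against iphlpapi.lib.

    #include <winsock2.h>
    #include <windows.h>
    #include <iphlpapi.h>
    #include <cstdio>
    #include <vector>

    // Counts ARP cache entries and active TCP connections, corresponding to
    // the ARPCacheEntries and tcpConnections artifacts.
    int main() {
        // ARP (IP-to-MAC) cache entries.
        ULONG size = 0;
        GetIpNetTable(NULL, &size, FALSE);           // query required buffer size
        std::vector<BYTE> arpBuf(size);
        DWORD arpEntries = 0;
        if (size > 0 &&
            GetIpNetTable((PMIB_IPNETTABLE)arpBuf.data(), &size, FALSE) == NO_ERROR)
            arpEntries = ((PMIB_IPNETTABLE)arpBuf.data())->dwNumEntries;

        // Active TCP connections.
        size = 0;
        GetTcpTable(NULL, &size, FALSE);
        std::vector<BYTE> tcpBuf(size);
        DWORD tcpEntries = 0;
        if (size > 0 &&
            GetTcpTable((PMIB_TCPTABLE)tcpBuf.data(), &size, FALSE) == NO_ERROR)
            tcpEntries = ((PMIB_TCPTABLE)tcpBuf.data())->dwNumEntries;

        printf("ARPCacheEntries = %lu\n", arpEntries);
        printf("tcpConnections  = %lu\n", tcpEntries);
        return 0;
    }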

4) Registry: The Windows registry contains a vast amount of information about many aspects of a system, including many that would fit in the aforementioned categories. We chose, however, to consider the Registry separately both because it is privy to information that does not exist elsewhere, as well as because, in Section V, we investigate whether sandbox detection is possible if an attacker has to constrain himself to a single type of wear-and-tear artifacts.

TABLE I: Complete list of wear-and-tear artifacts.

Category  Name                  Description                                                              User    Sandbox  Baseline

System
          totalProcesses        # of processes                                                           94.4    35       41
          winupdt               # installed Windows updates                                              794     19       2
          sysevt                # of system events                                                       27K     8K       334
          appevt                # of application events                                                  18K     2.4K     184
          syssrc                # sources of system events                                               78      49       48
          appsrc                # sources of application events                                          40      26       23
          sysdiffdays           Elapsed time since the first system event (days)                         370     1.7K     0
          appdiffdays           Elapsed time since the first application event (days)                    298     943      0

Disk
          recycleBinSize        Total size of the recycle bin (bytes)                                    2.5G    50M      0
          recycleBinCount       # files in the recycle bin                                               109     3        0
          tempFilesSize         Total size of temporary system files (bytes)                             302     24       8.2
          tempFilesCount        # temporary system files                                                 411     60       10
          miniDumpSize          Total size of process crash minidump files (bytes)                       3M      409K     0
          miniDumpCount         # of process crash minidump files                                        9       4        0
          thumbsFolderSize      Total size of the system's thumbnails folder (bytes)                     63M     8M       2.6M
          desktopFileCount      # files on the desktop                                                   34      6        3

Network
          ARPCacheEntries       # entries in the ARP cache                                               19      4.5      5
          dnscacheEntries       # entries in the DNS resolver cache                                      151     4        3
          certUtilEntries       # URLs of previously downloaded CRLs                                     1.7K    210      6
          wirelessnetCount      # of cached wireless SSIDs                                               8       0        0
          tcpConnections        # of active TCP connections                                              77      27       16

Registry
          regSize               Size of the registry (in bytes)                                          144.8M  53M      35M
          uninstallCount        # registered software uninstallers                                       177     58       18
          autoRunCount          # programs that automatically run at system startup                      9       3        1
          totalSharedDlls       Legacy DLL reference count                                               242     176      26
          totalAppPaths         # registered application paths                                           54      35       23
          totalActiveSetup      # Active Setup application entries                                       24      32       25
          orphanedCount         # leftover registry entries                                              11      10       9
          totalMissingDlls      # registered DLLs that do not exist on disk                              21      23       13
          usrassistCount        # of entries in the UserAssist cache (frequently opened applications)    222     98       11
          shimCacheCount        # entries in the Application Compatibility Infrastructure (Shim) cache   191K    98K      38K
          MUICacheEntries       # Multi User Interface (MUI) cache entries                               66      83       163
          FireruleCount         # of rules in the Windows Firewall                                       511     332      358
          deviceClsCount        # previously connected USB devices (DeviceInstance IDs)                  81      29       60
          USBStorCount          # previously connected USB storage devices                               1.7     0        0

Browser
          browserNum            # installed browsers (Internet Explorer, Firefox, Chrome)                2.9     1.4      1
          uniqueURLs            # unique visited URLs                                                    28K     13.8     1.64
          totalTypedURLs        # URLs typed in the browser's navigation bar                             1.6K    4        1
          totalCookies          # of HTTP cookies                                                        3K      135      2
          uniqueCookieDomains   # unique HTTP cookie domains                                             1003    23       1
          totalBookmarks        # bookmarks                                                              274     20       1
          totalDownloadedFiles  # downloaded files                                                       340     3        0
          urlDiffDays           Time elapsed between the oldest and newest visited URL (days)            225     370      0
          cookieDiffDays        Time elapsed between the oldest and newest HTTP cookie (days)            303     256      0

Every time an application or driver is installed on a Windows system, it typically stores many key-value pairs in the registry. This effect is so extensive that Windows is known to suffer from registry bloat, since programs often add keys during installation, but neglect to remove them when uninstalled. This is exemplified by the multitude of registry cleaning tools which aim to clean the registry of stale entries [51], [52]. Many of the registry artifacts we consider are thus related to the footprints of the applications that have been installed on the system (e.g., number of installed applications, uninstall entries, shared DLLs, applications set to

automatically run upon boot), as well as those that have not been fully uninstalled (e.g., orphaned entries left from removed programs and listed DLLs that do not exist on disk). We also consider various other system-wide properties, including: the size of the registry (in bytes); the number of firewall rules, as new applications often install additional rules; the number of previously connected USB removable storage devices (a unique instance ID key is generated for each device and stored in a cache); the number of entries in the UserAssist cache, which contains information about the most frequently opened applications and paths; the number of entries in the

Application Compatibility Infrastructure (Shim) cache, which contains metadata of previously run executables that used this facility; and the number of entries in the MUICache, which are related to previously run executables and are generated by the Multi User Interface that provides support for different languages.
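Many of these registry artifacts reduce to counting the subkeys or values of well-known keys. The sketch below illustrates this for two of them; the chosen key paths are plausible sources for uninstallCount and USBStorCount, but the exact locations probed by the authors' tool are not specified here, so treat them as assumptions. Link against advapi32.lib.

    #include <windows.h>
    #include <cstdio>

    // Counts the subkeys of a registry key, which is how several registry
    // artifacts (e.g., the number of registered software uninstallers) can
    // be approximated.
    static DWORD countSubkeys(HKEY root, const char* path) {
        HKEY key;
        if (RegOpenKeyExA(root, path, 0, KEY_READ, &key) != ERROR_SUCCESS)
            return 0;
        DWORD subkeys = 0;
        RegQueryInfoKeyA(key, NULL, NULL, NULL, &subkeys,
                         NULL, NULL, NULL, NULL, NULL, NULL, NULL);
        RegCloseKey(key);
        return subkeys;
    }

    int main() {
        DWORD uninstallCount = countSubkeys(HKEY_LOCAL_MACHINE,
            "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall");
        DWORD usbStorCount = countSubkeys(HKEY_LOCAL_MACHINE,
            "SYSTEM\\CurrentControlSet\\Enum\\USBSTOR");
        printf("uninstallCount = %lu\n", uninstallCount);
        printf("USBStorCount   = %lu\n", usbStorCount);
        return 0;
    }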

5) Browser: For many users, an operating system does little more than housing their web browser and a few other applications. Web browsers have become an all-inclusive tool for accessing office suites, games, e-shopping, e-banking, and social networking services. Due to the indispensable use of a browser, it would be really uncommon to find a browser in a pristine, out-of-the-box condition on a real user system.

In fact, the older a system is, the longer the accumulated history of its main browser will likely be. This history is captured in various artifacts related to past user activity, including previously visited URLs, stored HTTP cookies, saved bookmarks, and downloaded files. As noted earlier, we consider only aggregate counts of such artifacts, instead of more detailed data, to respect users' privacy. Given that a person may use more than one browser, or a different one than the operating system's default browser, the browser artifacts that we consider correspond to combined values across all browsers found on the system (our tool currently supports Internet Explorer, Firefox, and Chrome). For example, the number of HTTP cookies corresponds to the total number of cookies extracted from all installed browsers. The number of installed browsers is also considered as a standalone artifact.
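As a simple illustration of the standalone browserNum artifact, the sketch below checks the "App Paths" registry entries of the three supported browsers. The set of keys checked is an assumption made for illustration; detection could equally rely on default install paths or other registrations, and this is not necessarily how the paper's tool does it.

    #include <windows.h>
    #include <cstdio>

    // Counts installed browsers (the browserNum artifact) by checking their
    // "App Paths" registrations under HKLM.
    static bool appPathExists(const char* exeName) {
        char path[MAX_PATH];
        snprintf(path, sizeof(path),
            "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\App Paths\\%s", exeName);
        HKEY key;
        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, path, 0, KEY_READ, &key) != ERROR_SUCCESS)
            return false;
        RegCloseKey(key);
        return true;
    }

    int main() {
        int browserNum = 0;
        if (appPathExists("iexplore.exe")) ++browserNum;
        if (appPathExists("firefox.exe"))  ++browserNum;
        if (appPathExists("chrome.exe"))   ++browserNum;
        printf("browserNum = %d\n", browserNum);
        return 0;
    }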

    IV. DATA COLLECTION

    A. Probe Tool Implementation

To gather a large and representative dataset of the identified wear-and-tear artifacts from real user systems and malware sandboxes, we implemented a probe tool that collects artifact information and transmits it to a server. The probe tool was implemented in C++ and consists of a single Windows 32-bit PE file that does not require any installation, and which does not have any dependencies besides already available system APIs. Although many artifacts are easy to collect by simply accessing the appropriate log files and directories, probably the most challenging aspect of our implementation was to maximize compatibility with as many Windows versions as possible, from Windows XP to Windows 10. Besides differences in the paths of various system files and directories across major versions, in many cases we also had to use different system APIs for Windows XP compared to more recent Windows versions for various system-related artifacts (e.g., event log statistics). Browser artifacts are collected separately for each of the three supported browsers (IE, Firefox, Chrome) that are found on a given system, and are synthesized into compound artifacts at the server side.

The collected information is transmitted to our server through HTTPS. The tool uses both GET and POST requests, in case either of the two is blocked by a sandbox. Besides the collected data, additional metadata for each submission include the IP

address of the host, OS version and installation date, and BIOS vendor. The latter is used as a simple VM-detection heuristic, to prevent poisoning of our real-user dataset with any results from virtual machines (we explicitly asked users to run the tool only on their real systems).
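The following sketch illustrates how such a report could be submitted over HTTPS with the WinHTTP API. The host name, path, and payload are placeholders, not the authors' actual endpoint or wire format. Link against winhttp.lib.

    #include <windows.h>
    #include <winhttp.h>
    #include <cstdio>

    // Minimal sketch of reporting collected artifact values over HTTPS,
    // in the spirit of the probe tool described above.
    int main() {
        const char body[] = "totalProcesses=94&dnscacheEntries=151";  // example payload

        HINTERNET session = WinHttpOpen(L"probe/1.0",
            WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
            WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
        HINTERNET connect = session ? WinHttpConnect(session,
            L"collector.example.org", INTERNET_DEFAULT_HTTPS_PORT, 0) : NULL;
        HINTERNET request = connect ? WinHttpOpenRequest(connect, L"POST",
            L"/submit", NULL, WINHTTP_NO_REFERER,
            WINHTTP_DEFAULT_ACCEPT_TYPES, WINHTTP_FLAG_SECURE) : NULL;

        BOOL ok = FALSE;
        if (request) {
            ok = WinHttpSendRequest(request,
                     L"Content-Type: application/x-www-form-urlencoded\r\n",
                     (DWORD)-1L, (LPVOID)body,
                     sizeof(body) - 1, sizeof(body) - 1, 0)
                 && WinHttpReceiveResponse(request, NULL);
        }
        printf(ok ? "report sent\n" : "report failed\n");

        if (request) WinHttpCloseHandle(request);
        if (connect) WinHttpCloseHandle(connect);
        if (session) WinHttpCloseHandle(session);
        return 0;
    }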

Each executable submitted to a public sandbox is generated with a unique embedded ID, which allows us to identify multiple submissions by the same vendor. Since a given unique instance of the tool may run on more than one sandbox (e.g., due to the use of multiple different sandboxes by the same vendor, or due to sample sharing among vendors), we additionally distinguish between different systems based on the combination of the following keys: reported IP address, OS installation date, Windows version, BIOS vendor, number of installed user applications, and elapsed time since the first system event.

    B. IRB Approval and User Involvement

Our experiments for the collection of artifact values from real user systems involved human subjects, and thus we had to obtain institutional review board (IRB) approval prior to conducting any such activity. Our IRB application included a detailed description of the information to be collected, the process a participant would follow, and all the measures that were taken to protect the participants' privacy and anonymity.

Our activities did not expose users to any risk. The collected data does not include any form of personally identifiable information (PII), nor can any such information be derived from it. Due to the nature of our experiment (collecting simple system statistics), our probe tool is small and simple enough that we can be confident about the absence of any flaws or bugs that would cause damage or mere inconvenience to a user's system and data. The tool does not get installed on the system, and thus users can simply delete the downloaded executable to remove all traces of the experiment. The collected data merely reflects the wear and tear of a user's system as a result of normal use, rather than some personal trait that could be considered PII. Even when considering the full information that is recorded in the transmitted data, no personal information about the user of the system can be inferred.

Based on the above, the IRB committee of our institution approved the research activity on April 11, 2016.

All participants (colleagues, friends, and Amazon's Mechanical Turk workers) were directed to a web page that includes an overview of the experiment, a detailed description of the collected information (including a full sample report from one of the authors' laptops), and instructions for downloading and executing our probe tool (as well as deleting it afterwards). The page was hosted using our institution's second-level domain and TLS certificate, and included information about the IRB approval, as well as the authors' full contact details.

Given that our email requests to friends and colleagues for downloading and executing a binary from a web page could be considered as spear phishing attempts, we pointed to proof of the legitimacy of the message, which was hosted under the authors' institutional home pages.

TABLE II: Distribution of BIOS vendors of users' machines.

BIOS vendor                         Frequency

American Megatrends Inc.            25.77%
Dell Inc.                           18.08%
Insyde                              15.38%
Lenovo                              15.38%
Hewlett-Packard                      6.92%
Award Software International Inc.    3.85%
Phoenix Technologies Ltd.            3.85%
Acer                                 1.92%
AMI                                  1.92%
Intel Corp.                          1.92%
Alienware                            1.54%
Toshiba                              1.54%
Other                                1.93%

    C. Data Collection

To quantify the difference of wear-and-tear artifacts between real users and sandbox environments, we use the aforementioned probing tool to compile three datasets. The first dataset contains wear-and-tear artifacts collected from 270 real user machines (Dreal). The majority of these observations (89.4%) originate from Amazon Mechanical Turk workers, who downloaded and executed our artifact-probing tool, and the remaining from systems operated by friends and colleagues. The users who participated in this study come from 35 different countries, with the top countries being: US (44%), India (18%), GB (10%), CA (8%), NL (1%), PK (1%), RU (1%), and 28 other countries with lower than 1% frequency. Therefore, we argue that our dataset is representative of users from both developed and developing countries, whose machines may have different wear-and-tear characteristics. The BIOS vendor distribution of the user systems is shown in Table II. As the table shows, the users' personal systems include different brands, such as Dell, HP, Lenovo, Toshiba, and Acer, as well as gaming systems such as Alienware.

As mentioned earlier, our tool does not collect any PII and does not affect the user's machine in any way, i.e., it does not modify or leave any data. After running the tool, the user can simply delete the executable.

While analyzing the collected data from Mechanical Turk to filter out those users who, despite our instructions, executed our tool inside a virtual machine (identified through the collected BIOS information), we observed that our server had collected artifacts that could not be traced back to Mechanical Turk workers. By analyzing the artifacts and the logs of our server, we realized that, because the public Human Intelligence Task (HIT) web page for our task on Amazon's website was pointing to our tool download page through a link, the crawlers of a popular search engine followed that link, downloaded our probing tool, and executed it. Based on the IP address space of the clients who reported these artifacts, as well as custom identifiers found in the reported BIOS versions, we are confident that these were sandbox environments that

are associated with a popular search engine that downloads and executes binaries as a way of proactively discovering malware. After removing duplicate entries, we marked the rest as belonging to crawlers and incorporated them in our sandbox artifact dataset (described in the next paragraph).

The second dataset (Dsand) consists of artifacts extracted by sandboxes belonging to various malware analysis services. We identified these sandboxes through prior work on sandbox evasion [23], as well as by querying popular search engines for keywords associated with malware analysis. Most sandboxes are not available for download, but do allow users to submit suspicious binaries which they then analyze and report on the activity of the analyzed binary, in varying levels of detail. Even though we were able to find 23 sandboxes that provide users the ability to upload suspicious files, our artifact-gathering server was able to collect artifacts from 15 different sandbox environments (not counting the aforementioned search-engine sandbox). We reason that the remaining analysis environments are either non-operational, rely on static analysis to scan executables, or completely block any network activity of each analyzed binary.

Table III lists the names of the sandboxes from which our server received the results of our probing tool. We uploaded our probing tool multiple times over a period of a month, and we also witnessed that some sandboxes executed our probing tool multiple times on different underlying environments (e.g., multiple versions of Microsoft Windows). As mentioned in Section IV-A, each uploaded executable had a unique ID embedded in its code which was sent to our artifact-gathering server, so as to perform accurate attribution.

The third and final dataset (Dbase) is our baseline dataset, which consists of artifacts extracted from fresh installations of multiple versions of Microsoft Windows. Next to setting up local VMs and installing Microsoft Windows XP, Windows Vista, Windows 7, and Windows 8, we also make use of all the Windows server images available in two large public cloud providers, Azure and AWS. Overall, Dbase contains the artifacts extracted from 22 baseline environments.

    D. Dataset Statistics

Table IV shows the number of different Microsoft Windows versions in our three collected datasets. There we see that the vast majority of users are utilizing Windows 7, 8, and 10, with only 4.1% of users still utilizing Windows XP and Windows Vista. Contrastingly, we see almost no sandboxes utilizing Windows 10, and the majority of them (83.7%) utilizing Windows 7. Note that because we extract the version of Windows programmatically, the same version number can correspond to more than one Windows version; for example, Windows Server 2012 R2 and Windows Server 2016 report the same version number as Windows 8.1 and Windows 10. Even though it is not the central focus of our paper, we find it troublesome that sandboxes do not attempt to follow the distribution of operating systems from real user environments.

Figure 1 shows boxplots that reveal the distribution of the values of each wear-and-tear artifact across all three datasets

  • 0.00

    0.25

    0.50

    0.75

    1.00n_

    arpc

    ache

    Ent

    ries

    n_dn

    scac

    heE

    ntrie

    s

    n_ce

    rtU

    tilE

    ntrie

    s

    n_tc

    pCon

    nect

    ions

    d_re

    cycl

    eBin

    Cou

    nt

    d_re

    cycl

    eBin

    Siz

    e

    d_te

    mpF

    ilesS

    ize

    d_te

    mpF

    ilesC

    ount

    d_m

    iniD

    umpS

    ize

    d_m

    iniD

    umpC

    ount

    d_th

    umbs

    Fol

    derS

    ize

    d_to

    talP

    roce

    sses

    d_de

    skto

    pFile

    Cou

    nt

    e_ap

    pevt

    e_ap

    pdiff

    days

    e_sy

    sevt

    e_sy

    sdiff

    days

    e_sy

    ssrc

    e_ap

    psrc

    e_w

    inup

    dt

    r_re

    gSiz

    e

    r_un

    inst

    allC

    ount

    r_or

    phan

    edC

    ount

    r_sh

    imC

    ache

    Cou

    nt

    r_de

    vice

    Cls

    Cou

    nt

    r_In

    stal

    ledA

    pps

    r_au

    toR

    unC

    ount

    r_to

    talA

    pps

    r_us

    rass

    istC

    ount

    r_M

    UIC

    ache

    Ent

    ries

    r_F

    ireru

    leC

    ount

    r_to

    talS

    hare

    dDlls

    r_to

    talM

    issi

    ngD

    lls

    tota

    lBoo

    kmar

    ks

    tota

    lCoo

    kies

    uniq

    ueC

    ooki

    eDom

    ains

    cook

    ieD

    iffD

    ays

    tota

    lTyp

    edU

    RLs

    tota

    lDow

    nloa

    dedF

    iles

    uniq

    ueU

    RLs

    urlD

    iffD

    ays

    b_nu

    m

    System artifacts

    Nor

    mal

    ized

    val

    ueUser Sandbox Baseline

    Fig. 1: Distribution of artifacts across user machines, sandboxes, and baselines (normalized values).

TABLE III: Sandboxes from which artifacts were successfully collected.

ID  Name            #Instances

1   Anubis          2
2   Avira           2
3   Drweb           3
4   Fortiguard      3
5   Fsecure         2
6   Kaspersky       15
7   McAfee          7
8   Microsoft       1
9   HybridAnalysis  1
10  Pctools         1
11  Crawlers        50
12  Malwr           2
13  Sophos          4
14  ThreatTrack     3
15  Bitdefender     1
16  Minotaur        1

TABLE IV: Distribution of operating systems in each dataset.

                    v5.1  v6.0  v6.1  v6.2  v6.3  Total

Users (Dreal)       8     3     105   7     147   270
Sandboxes (Dsand)   14    -     82    -     2     98
Baseline (Dbase)    1     1     2     1     17    22

v5.1: Windows XP
v6.0: Windows Vista / Win Server 2008
v6.1: Win 7 / Win Server 2008 R2
v6.2: Win 8 / Win Server 2012
v6.3: Win 8.1 / Win 10 / Win Server 2012 R2 / Win Server 2016

(Dreal, Dsand, and Dbase). There we can observe multiple patterns that will be of use later on in this paper. For example, we see that the vast majority of artifacts have larger values in

real user systems than they have in sandboxes and baselines. This observation, by itself, indicates that these artifacts can be used to differentiate between real machines and sandboxes and can thus be used for malware evasion purposes. Similarly, we also observe that the distributions of artifact values are wider for real systems than they are for systems that are not used by real users. This could indicate that these artifacts could also be used to predict the age of any given machine.

At the same time, we do see certain cases where the opposite behavior manifests. For instance, the appdiffdays and sysdiffdays artifacts are clearly more diverse in sandboxes than they are in real user systems. We argue that the explanation for this behavior is two-fold. As Table IV suggests, on average, sandboxes are older compared to real-user systems. As such, it is entirely within reason that the events in the Windows event log are further apart than those in real user systems. Moreover, assuming that the Windows log will only store events up to a maximum number, systems with more real-user activity are more likely to need to delete older events and hence reduce the time span observable in the Windows event log.

To verify, from a statistical point of view, the difference that we can visually identify between the distributions of artifacts, we need to use a metric that will allow us to differentiate between different distributions. Even though a normal distribution is usually assumed, by conducting a Shapiro-Wilk test we find that none of our extracted artifacts matches a normal distribution. For that reason, we use the Mann-Whitney U test to compare distributions, which is a non-parametric alternative to a t-test and does not assume that the data is normally distributed. We handle missing values by replacing them with dummy-coded values, because the fact that an artifact cannot be extracted can be useful information for differentiating between systems.

Figure 2 shows the effect size of the differences between the distribution of Dreal and Dsand as reported by the Mann-

Whitney U test. Using standard interpretations of effect sizes, an effect between 0.3 and 0.5 is considered to be characteristic of a medium effect, while one higher than 0.5 is considered to be a strong effect. This confirms that the vast majority of our identified wear-and-tear artifacts have different distributions in sandbox and real-user environments.

    V. EVADING MALWARE SANDBOXES

In the previous sections, we described the type of OS artifacts that we hypothesized would be indicative of wear and tear, and collected these artifacts from real user machines, sandboxes, and fresh installations of operating systems in virtualized environments. As our results showed, most artifacts have sufficiently different distributions, allowing them to be used to predict whether a given system is an artificial environment or a real user system, and evade the former while executing on the latter. In this section, we take advantage of these artifacts and treat them as features in a supervised machine-learning setting, allowing the implementation of simple decision trees that can be employed by malware for evasion purposes.

    A. Setup and Classifier

When we were considering which classifier to use, we came to the conclusion that explainability of the results is a desirable property for our setup. It is important for the security community to understand how these seemingly benign features are different between real user systems and sandbox environments. Moreover, attackers can also benefit from a classifier with explainable results, since they can, in real time, reason whether the result of their classifier makes intuitive sense given the execution environment. As such, even if an SVM-based classifier may perform slightly better than most explainable supervised learning algorithms, attackers have no way of evaluating the truthfulness of the result, especially in environments that are constantly evolving to bypass their evasion techniques.

For these reasons, we decided to use decision trees as our algorithm of choice. Next to their explainable results, a decision-tree model is also straightforward to implement as a series of if-else statements, which can be beneficial for attackers who wish to keep the overall footprint of their malware small. Before training our classifier, we inspected the correlation of pairs of features by calculating the Pearson correlation coefficient (r) for each possible pair. The results showed that four pairs of features have a strong correlation (r > 0.7). We decided, however, to include all features in our classifier, since from the point of view of defenders, they would have to address all correlated features in order to stop malware from successfully evading their analysis systems.

The training set of the classifier is composed as follows. For positive examples, we randomly sample 50% of our sandbox observations (49 instances) and add to them 100% (22 instances) of our fresh OS installations. We consider it important to augment the sandbox dataset (Dsand) with our baseline dataset (Dbase) to cover a wider range of possible

sandbox environments, starting from the most basic virtualization environments, to state-of-the-art sandboxes belonging to the security industry. To balance this data, we randomly sample 71 instances from real-user environments collected through Amazon Mechanical Turk (Dreal). Therefore, our training set consists of 142 observations, balanced between the two classes. Similarly, our testing dataset comprises the remaining 50% of sandbox observations (49 instances) and a random sample of 49 real user machines which were not part of our training set. We handle missing feature values by setting them equal to zero.

To improve the accuracy of our classifier, we utilize two orthogonal techniques. First, we use 10-trial adaptive boosting, where the algorithm creates multiple trees, with each tree focusing on improving the detection of the previous tree [53]. Specifically, while constructing tree i, the algorithm assigns larger weights to the observations misclassified by tree i-1, aiming to improve the detection rate for these observations. Through the construction of multiple sequential trees, the adaptive boosting decision tree algorithm can learn boundaries that a single decision tree could not. Note that this choice does not really complicate the implementation of this algorithm by malware, as it can still be implemented as multiple sets of if-else rules, and use a weighted sum of all predictions according to the weights specified by the trained model.
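To make the preceding point concrete, the sketch below shows how a boosted model can be embedded as plain if-else rules with a weighted vote. The features, thresholds, and weights are invented for illustration and are not the model trained in this paper.

    #include <cstdio>

    // Illustration of embedding a boosted decision-tree model as if-else
    // rules with weighted voting. All thresholds and weights are made up.
    struct Artifacts {
        int dnsCacheEntries;
        int totalProcesses;
        int uninstallCount;
        int tcpConnections;
    };

    // Each "tree" votes +1 for "real user system" and -1 for "sandbox".
    static int tree1(const Artifacts& a) {
        if (a.dnsCacheEntries > 20)
            return +1;
        return (a.totalProcesses > 60) ? +1 : -1;
    }

    static int tree2(const Artifacts& a) {
        if (a.uninstallCount > 40)
            return (a.tcpConnections > 20) ? +1 : -1;
        return -1;
    }

    // Weighted vote over all trees (weights would come from the boosting stage).
    static bool looksLikeRealSystem(const Artifacts& a) {
        double score = 0.7 * tree1(a) + 0.3 * tree2(a);
        return score > 0.0;
    }

    int main() {
        Artifacts probed = {151, 94, 177, 77};   // values probed at run time
        if (looksLikeRealSystem(probed))
            printf("environment looks like a real user system\n");
        else
            printf("environment looks like a sandbox: stay dormant\n");
        return 0;
    }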

In addition to adaptive boosting, we specify our own cost matrix where false negatives are ten times more expensive than false positives. The rationale for this decision is that we assume it is preferable for malware, from the point of view of the malware author, to miss an infection of a real user's machine (false positive), rather than to execute in a sandbox environment where its detection would mean the creation of signatures that would jeopardize its ability to infect systems protected with antivirus software.

    B. Evaluation

Using the aforementioned training dataset and parameters, we trained an adaptive boosting decision tree and evaluated it on the testing set. Figure 3 shows three of the ten trees that our classifier generated, as well as the fraction of misclassifications of each final decision which, as described earlier, drives the construction of the next tree.

Note that the testing set was not provided to our algorithm during training, in an attempt to quantify how well our model generalizes to previously unseen data. Our model achieved an accuracy of 92.86% on the testing set, with a false negative rate (FNR) of 4.08% and a false positive rate (FPR) of 10.20%. The ten constructed trees had an average tree size of 4.6 splits, with the shortest tree having 3 splits and the longest one 5 splits. Table V shows the features that were the most useful to the algorithm, and the percentage of trees using any given feature.

To assess the robustness of our algorithm against incremental changes in sandbox environments, we repeated our aforementioned experiment 30 times, each time removing an additional feature from the original datasets, compiling new, randomly

[Fig. 2: The difference of the distributions of a given artifact's values between real user systems and sandboxes, based on the Mann-Whitney U test. An effect size above 0.3 (highlighted with light yellow) can be interpreted as a medium difference, while an effect size above 0.5 (highlighted with red) corresponds to a high difference between the two distributions.]

[Fig. 3: Trees 1, 3, and 5, calculated by the C5.0 decision tree algorithm with adaptive boosting and a custom cost matrix that favors false positives over false negatives.]

sampled, training and testing datasets using the splits described earlier, and training and testing the new classifier. Specifically, we remove the feature that was the most used by the previous model, i.e., the most important one. For example, the first feature that we removed was the number of entries in a system's DNS cache, as it was the most commonly used feature of our first classifier according to Table V. Through this process we aim to quantify an attacker's ability to distinguish sandboxes from real users, in the face of dedicated sandbox operators who can try to mimic a real user system by populating our identified wear-and-tear artifacts with realistic values.

We argue that this setup captures the worst-case scenario for an attacker, because not only are there fewer and fewer features to train on, but also because the remaining features are of lower discriminatory power than the removed ones. Figure 4 shows how the accuracy, false positive rate, and false negative rate vary, as we keep on removing features from our dataset. As one can notice, the overall accuracy of the classifier remains

    high (greater than 90%) even if we remove as many as 20discriminatory features.

Similarly, we see that, guided by our custom cost matrix, the classifier constantly prefers to err on the side of false positives instead of false negatives. We even observe that for the first eight features, the accuracy increases and the FPR/FNR rates decrease. This shows that some pattern learned from the training data concerning the removed features did not appear in the testing data, and thus removing these features, in fact, made the classifier more accurate. The average accuracy of the first 30 classifiers (removing from 0 to 29 features) is 91.5%, with a FPR of 9.05% and a FNR of 7.96%. When the 30th feature is removed (out of 33 in total), the decision tree algorithm fails to train a model with accuracy greater than random chance, and quits.

To better understand how each type of artifact affects the accuracy of our classifier, we retrained our model using only one type of artifact at a time. Table VI shows the number of artifacts per category and the performance of each classifier on the unseen testing dataset.

Fig. 4: Variance of accuracy, false negative rate, and false positive rate of our boosted classifier as we keep removing the topmost features. Even after removing tens of the features with the highest discriminatory capacity, the accuracy of the classifier drops only slightly.

We can observe that, even though the classifier based on registry artifacts had the most features available to it, it performed worse than the Network-based and Disk-based classifiers. This indicates that certain classes of artifacts have higher discriminatory power than the rest. At the same time, all classifiers achieve fairly high accuracy (some even have a 0% FNR), which suggests that malware authors can choose which category of artifacts to focus on without compromising the evasion capabilities of their malware. As such, malware authors who, for example, know that their samples will be analyzed by a sandbox that considers registry-related activity suspicious, can use the other types of wear-and-tear artifacts for both detecting the sandbox and avoiding raising suspicion.
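A sketch of this per-category experiment, assuming the artifact measurements are kept in pandas DataFrames, that category_columns maps each category (Network, Disk, System, Registry, Browser) to its feature columns, and that X_tr/X_te/y_tr/y_te come from the same split as before; the classifier is again a stand-in for the paper's boosted C5.0 trees.

import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import confusion_matrix

def per_category_performance(X_tr, X_te, y_tr, y_te, category_columns):
    rows = []
    for category, cols in category_columns.items():
        clf = AdaBoostClassifier(n_estimators=50, random_state=0)
        clf.fit(X_tr[cols], y_tr)
        pred = clf.predict(X_te[cols])
        # Rows are true labels, columns are predictions (0 = user, 1 = sandbox).
        tn, fp, fn, tp = confusion_matrix(y_te, pred, labels=[0, 1]).ravel()
        rows.append({
            "category": category,
            "accuracy": (tp + tn) / len(y_te),
            "FNR": fn / (fn + tp) if (fn + tp) else 0.0,
            "FPR": fp / (fp + tn) if (fp + tn) else 0.0,
        })
    return pd.DataFrame(rows)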

Our results demonstrate that not only is it possible to differentiate between real systems and sandboxes based on non-intrusive, sandbox-agnostic wear-and-tear artifacts, but also that incremental patching approaches, where one or two artifacts are made to look similar to those in real systems, will not be sufficient to prevent sandbox evasion.

    VI. ESTIMATING ACTUAL SYSTEM AGE

In the previous section, we showed that wear-and-tear artifacts can be used to accurately differentiate between machines belonging to real users and sandbox environments. The underlying intuition is that sandbox environments either do not age at all, or clearly do not age in the same way as user environments. Based on this observation, in this section we explore the possibility of predicting the age of a system based on the values of its wear-and-tear artifacts. This would allow malware to assess whether a machine exhibits realistic wear and tear that matches its reported age. At the same time, it would also enable researchers to artificially age malware sandboxes to match a desired age.

TABLE V: Features that the adaptive boosting decision trees rely on, and the percentage of trees utilizing them.

Category  Feature              % of trees
Network   dnscacheEntries      100.00%
System    sysevt               100.00%
System    syssrc               100.00%
Registry  deviceClsCount       100.00%
Registry  autoRunCount         100.00%
Browser   uniqueURLs           100.00%
Disk      totalProcesses        66.90%
Browser   cookieDiffDays        59.86%
Registry  usrassistCount        54.93%
Network   certUtilEntries       54.23%
System    appevt                54.23%
Browser   uniqueCookieDomains   52.82%
Network   tcpConnections        50.00%
System    winupdt               16.90%
Registry  MUICacheEntries       11.27%
Network   ARPCacheEntries        4.93%

TABLE VI: Classifier performance when trained only on a single type of artifact (FNR = False Negative Rate, FPR = False Positive Rate).

Artifact Type  #Artifacts  Accuracy  FNR     FPR
Network         4          95.92%    2.04%    6.12%
Disk            5          95.92%    0.00%    8.16%
System          5          92.86%    0.00%   14.29%
Registry       11          93.88%    4.08%    8.16%
Browser         7          92.86%    2.04%   12.24%

    A. Correlation Between Age and Artifacts

We begin this process by first investigating the self-reported age of sandboxes and real user machines. Figure 5 shows the cumulative distribution function of the age of user systems and malware sandboxes. We can clearly see that, on average, user systems are much newer than sandboxes. Half of the users have systems that are less than a year old, while 50% of the sandboxes are less than two years old. This makes intuitive sense, since most users update their hardware on a regular basis, while many sandboxes were set up once and are just reused over and over for a long period of time. At the same time, we see that the two distributions meet at the top, meaning that there are some users whose systems are as old as (or even older than) the oldest available sandboxes.

Having established their age, we can now investigate the extent to which each individual wear-and-tear artifact can potentially be used to predict an environment's age. Figure 6 shows the Pearson correlation between the reported age and each wear-and-tear artifact for real user systems. We argue that changing the age reported by the OS is not only something that everyday users cannot do, but also something they have no interest in doing. As such, we assume that the reported ages collected from real user systems are accurate.
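The per-artifact analysis behind Figure 6 can be reproduced along the following lines, assuming a pandas DataFrame df of real-user measurements with one column per artifact plus a system_age column holding the reported age in years (column names are placeholders).

import pandas as pd
from scipy.stats import pearsonr

def age_correlations(df, age_col="system_age"):
    rows = []
    for col in df.columns:
        if col == age_col:
            continue
        sub = df[[col, age_col]].dropna()  # pearsonr cannot handle missing values
        r, p = pearsonr(sub[col], sub[age_col])
        rows.append({"artifact": col, "pearson_r": r, "p_value": p})
    # Order artifacts from most negative to most positive correlation, as in the figure.
    return pd.DataFrame(rows).sort_values("pearson_r").reset_index(drop=True)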

Fig. 5: CDF of the reported ages for sandboxes and real user machines.

We see that many artifacts that were very successful in differentiating real machines from sandboxes do not, in fact, correlate with a machine's age. By focusing on the artifacts that appear towards the middle of Figure 6 (having a correlation coefficient close to zero), one can notice that these artifacts may be characteristic of user activity, but they are also under a user's control. Users can delete their cookies, visit more or fewer URLs than other users, delete downloaded files, and, in general, affect the values of these artifacts in unpredictable ways that prevent them from being indicative of a system's age. Similarly, there are other artifacts, such as the number of cached wireless networks, TCP connections, and firewall rules, which are not under the direct control of users and clearly do not correlate with age, yet vary sufficiently between sandboxes and real machines to be of use for differentiation.

Towards the right end of Figure 6 we see the artifacts that appear to positively correlate with system age, such as the number of previously connected USB devices, the number of installed applications, and the elapsed time since the first user event. These artifacts are either proxies of age (e.g., the longer a machine is used, the more applications are installed on it), or are direct sources of age information which, unlike HTTP cookies, are buried deeper in an OS's internals, and are thus unlikely to be modified or deleted by users.

    B. Regression

Having established that some artifacts positively correlate with age, we now attempt to identify the appropriate way of combining their raw values to estimate a system's actual age. To that end, we first process our dataset to remove artifacts with missing values in many of our observations. Artifacts with a high rate of missing values (more than 80% of their values missing) are removed from our dataset and not utilized any further. For the remaining 37 artifacts, we used the rest of the dataset to predict what would have been the most appropriate value, had we been able to extract it. We use a random-forest-based imputation technique, which is a standard way of handling missing values without the need to remove entire observations, and without relying on overly generic statistics, like a feature's mean or mode, which could bias our results.
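One way to realize this preprocessing is sketched below. The paper does not name a specific implementation, so scikit-learn's IterativeImputer with a random-forest estimator stands in for a missForest-style imputer; df is an assumed pandas DataFrame of raw artifact measurements.

import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

def impute_artifacts(df, drop_threshold=0.8):
    # Drop artifacts for which more than 80% of the observations are missing.
    keep = df.columns[df.isna().mean() <= drop_threshold]
    kept = df[keep]
    imputer = IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=100, random_state=0),
        max_iter=10, random_state=0)
    return pd.DataFrame(imputer.fit_transform(kept), columns=keep, index=kept.index)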

TABLE VII: Significant coefficients of the linear regression for predicting the age of a system based on wear-and-tear artifacts.

Feature                         Coefficient  p-Value
diskartifacts:tempFilesCount        2.92     0.0041   **
diskartifacts:totalProcesses       -2.12     0.0364   *
evtartifacts:appevt                 2.39     0.0186   *
evtartifacts:sysevt                 1.71     0.0901   .
evtartifacts:syssrc                -3.09     0.0025   **
evtartifacts:appsrc                 2.57     0.0114   *
regartifacts:regSize                2.70     0.0080   **
regartifacts:uninstallCount        -2.07     0.0411   *
regartifacts:deviceClsCount         1.85     0.0664   .
regartifacts:InstalledApps          5.13     1.2e-06  ***
regartifacts:totalApps              1.99     0.0490   *
totalDownloadedFiles                2.11     0.0368   *
browser:num                        -2.56     0.0117   *

Significance codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1
Multiple R-squared: 0.68, Adjusted R-squared: 0.583

1) Linear Regression: We first attempt to use linear regression, since this is a common regression method that works well in many contexts. In essence, we start with the formula Y = β_0 + β_1 X_1 + β_2 X_2 + ... + ε, where X_i is the value of any given artifact, and try to calculate the weights β_i that would allow us to sum the artifacts in a way that will provide a prediction of a machine's age (Y).

To train a linear regression model based on the real users' dataset, we first split the user dataset into two sets, one for training (60%) and one for testing (40%). We keep the test dataset aside to use later for testing how well our model generalizes to previously unseen data. Our linear regression model is trained on the training set, and we use 10-fold cross validation to evaluate it, which results in a mean square error (MSE) of 1.23. The coefficients of the model and their p-values are reported in Table VII. We see that 13 wear-and-tear artifacts correlate with a machine's age in a statistically significant way. Even though our model is not perfect, it can still explain up to 68% of the variability in a system's age.
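A minimal sketch of this training procedure, assuming X_user holds the imputed artifact values of real user systems and y_user their reported ages in years (both placeholders); the paper's exact preprocessing may differ.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split, cross_val_score

X_tr, X_te, y_tr, y_te = train_test_split(X_user, y_user, test_size=0.4, random_state=0)

model = LinearRegression()
# 10-fold cross-validation on the training split, scored by mean squared error.
cv_mse = -cross_val_score(model, X_tr, y_tr, cv=10,
                          scoring="neg_mean_squared_error").mean()

model.fit(X_tr, y_tr)
test_mse_users = np.mean((model.predict(X_te) - np.asarray(y_te)) ** 2)
# Applying the same model to sandbox measurements (X_sand, y_sand, also assumed)
# would expose the mismatch between predicted and claimed sandbox age.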

Since the training error does not necessarily guarantee the performance of our model when dealing with new data, we evaluate the accuracy of the linear regression model against the previously unseen test dataset. We first apply our model to the unseen test dataset of systems belonging to real users to make sure that it can predict their age with sufficient accuracy, and then proceed to use it to predict the age of sandboxes and compare it with their claimed age. The MSE of the predicted ages for the regular systems is 1.88, while sandboxes have a very high MSE of 6.25. Our results indicate that a model trained on real user data can not only predict the age of other unseen systems based on the values of their wear-and-tear artifacts, but it can also identify that a system's artifacts are unrealistically high (or low) for its claimed age.

Figure 7 shows the CDF of the residuals when applying our linear regression model on systems from D_real and D_sand.

[Figure 6 plots one bar per artifact, ordered from the most negative correlation (r_deviceClsCount) to the most positive (r_InstalledApps).]

Fig. 6: Correlation between the reported age of user systems and each of the artifacts.

TABLE VIII: Lasso regression coefficients of the features selected as good predictors of system age by the model.

Feature                        Coefficient
netartifacts:dnscacheEntries      0.1383
diskartifacts:tempFilesCount      0.0883
evtartifacts:appevt               0.1345
evtartifacts:sysevt               0.1097
regartifacts:regSize              0.1160
regartifacts:InstalledApps        0.6432
regartifacts:totalApps            0.0559
browser:num                      -0.0138

We see that although the majority of real user systems have a residual error approximately equal to zero (meaning that the predicted age closely matches the system's actual age), the residuals when attempting to predict the age of sandboxes are much higher, i.e., the predicted age is much lower than the reported age, thereby shifting the distribution towards the right part of the graph.

2) Lasso Regression: To explore whether a more involved regression model would result in better prediction accuracy, we also report the results of Lasso regression. Lasso regression penalizes the absolute size of the regression coefficients and sets the coefficients of unnecessary features to zero, reducing in this way the size of our artifact set. One of the benefits of Lasso regression is that it typically utilizes fewer predictor variables than linear regression, which reduces the complexity of the overall model. In the case of malware, a smaller feature set means fewer API calls to the underlying OS, and thereby fewer chances of triggering suspicious activity monitors.

For our experiments, we split the datasets into three parts: training, evaluation, and testing. We first train a Lasso model on the training set and make use of cross validation to find the optimum λ value. We use the discovered value (λ = 0.145), as this minimizes the MSE of the model, and train our final model. Table VIII shows the eight features that the Lasso model selected as good predictors of system age and their

corresponding coefficients. The only artifact that has a small negative coefficient is the number of browsers installed on a system. One reasonable explanation for this is that owners of older systems may hesitate to install additional browsers, which has an effect on our trained model.

By applying the model on the unseen dataset that consists of artifacts from real user systems (D_real), the MSE is 0.749, which is better than that of linear regression. The MSE on the testing sandbox dataset is 4.45, which again shows how the wear and tear of sandboxes does not match their claimed age. Figure 7 shows the residual errors of the trained Lasso regression model, and how these compare to the errors of the linear regression model. We observe that although both models can predict the age of real user systems (residual errors approximately equal to zero), the Lasso model can better differentiate some sandboxes based on fewer artifacts compared to the linear regression model.
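A corresponding sketch for the Lasso variant, reusing the assumed X_tr/X_te/y_tr/y_te and feature_names from the linear-regression sketch, plus assumed sandbox measurements X_sand/y_sand. Note that scikit-learn parameterizes the penalty as alpha, so the value selected here need not equal the reported λ = 0.145.

import numpy as np
from sklearn.linear_model import Lasso, LassoCV

lasso_cv = LassoCV(cv=10, random_state=0).fit(X_tr, y_tr)   # pick alpha by cross validation
lasso = Lasso(alpha=lasso_cv.alpha_).fit(X_tr, y_tr)        # final model

# Features whose coefficients were not shrunk to zero (cf. Table VIII).
selected = [n for n, c in zip(feature_names, lasso.coef_) if abs(c) > 1e-9]

mse_users = np.mean((lasso.predict(X_te) - np.asarray(y_te)) ** 2)
mse_sandboxes = np.mean((lasso.predict(X_sand) - np.asarray(y_sand)) ** 2)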

Overall, we see that wear-and-tear artifacts can predict, with sufficient accuracy, the real age of a system and thus reveal the age mismatch present in current sandboxes. As we mentioned earlier in this section, other than just improving our understanding of the relationship between artifacts and system age, our trained models can be of use to sandbox developers to help them create environments with wear and tear that matches their stated age.

    VII. DISCUSSION

    A. Ethical Considerations and Coordinated Disclosure

Our research sheds light on the problem of next-generation environment-aware malware based on wear-and-tear artifacts. By demonstrating the effectiveness of this new evasion technique, we will hopefully bring more attention to this issue, and encourage further research on developing effective countermeasures. To that end, our preliminary investigation on predicting a system's age based on the evaluated artifact values can help sandbox operators fine-tune the wear-and-tear characteristics of their systems so that they look realistic. There have been very recent indications that malware authors have already started taking into consideration usage-related artifacts (e.g., whether any documents appear under the recently opened documents menu [54]).

Fig. 7: CDF of the difference between the reported and estimated age for sandboxes and real systems, using the linear and Lasso regression models.

Consequently, developing effective countermeasures before malware authors start taking advantage of wear and tear for evasion purposes more broadly is of utmost importance.

Although our results regarding the analyzed sandboxes used by public malware analysis services showcase their vulnerability to evasion based on wear and tear, adversaries may have already come to the same conclusion even before us, given that our probing technique was reasonably straightforward, and that similar techniques have also been used as part of previous research on public sandbox fingerprinting [23]. Hopefully, by publishing our findings and coordinating with the affected vendors, our research will eventually result in more robust malware analysis systems.

To that end, we contacted all vendors from which our probe tool managed to collect wear-and-tear artifact information, notifying them about our findings several months before the publication of this paper. Some of these vendors acknowledged the issue and requested additional information, as well as the code of our artifact-probing tool, so that they can more easily detect future artifact exfiltration attempts.

    B. Probing Stealthiness

Depending on the type of artifact, the actual operations that some malware needs to perform in order to retrieve the necessary information range from simple memory or file system read operations, to more complex parsing and querying operations that may involve system APIs and services. Instead of making sandboxes look realistic from a wear-and-tear perspective, an alternative (at least short-term) defense strategy may thus focus on detecting the probing activity itself. At the same time, many environmental aspects can be probed or inferred through non-suspicious system operations commonly exhibited during the startup or installation of benign software (e.g., retrieving or storing state information from the registry or configuration files, or checking for updates), while malware can always switch to a different set of artifacts that are not monitored for suspicious activity.
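For illustration only (this is not the authors' probe tool), a few of the identified artifacts, such as tempFilesCount, desktopFileCount, and autoRunCount, can be read through such low-profile operations using Python's standard library on Windows; the paths and registry key below are standard locations, but what is actually readable depends on the OS version and the privileges of the probing process.

import os
import tempfile
import winreg  # Windows-only module

def count_dir_entries(path):
    try:
        return len(os.listdir(path))
    except OSError:
        return 0  # missing directory or insufficient privileges

def count_autorun_entries():
    # Number of values under the machine-wide Run key.
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                            r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run") as key:
            return winreg.QueryInfoKey(key)[1]
    except OSError:
        return 0

artifacts = {
    "tempFilesCount": count_dir_entries(tempfile.gettempdir()),
    "desktopFileCount": count_dir_entries(
        os.path.join(os.path.expanduser("~"), "Desktop")),
    "autoRunCount": count_autorun_entries(),
}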

Another aspect that must be considered is that, as the principle of least privilege is better enforced in user accounts and system services, a malicious program may not have the necessary permissions to access all system resources and APIs. We did not encounter any such issues with our probe tool on the tested operating systems, but based on our findings, there are plenty of artifacts that can be probed even by under-privileged programs. This issue may become more important in other environments, such as malicious mobile apps, which are much more constrained in terms of the environmental features they can access.

    C. OS Dependability

We have focused on the Windows environment, given that it is one of the operating systems most severely plagued by malware, and that most malware analysis services employ Windows sandboxes. As a result, many of the identified artifacts are Windows-specific (e.g., the registry-related ones), and clearly not applicable to malware that targets other operating systems (e.g., Linux, Mac, Android). Even so, artifacts related to browser usage, the file system, and the various network caches are likely to be present regardless of the exact operating system.

Based on the degree to which an artifact depends on a given OS, sandbox operators may choose different defensive strategies. For instance, operators of sandboxes tailored to the detection of cross-platform malware (e.g., written in Java) may start by addressing OS-independent artifacts first, before focusing on OS-dependent ones. Finally, since end-user systems are the most popular targets of malware authors, our study focuses solely on them. We leave the analysis of wear-and-tear artifacts on other platforms, such as embedded devices, for future work.

    D. Defenses

We have demonstrated that removing system instrumentation and introspection artifacts from malware sandboxes is not enough to prevent malware evasion. Since the lack of wear and tear can allow attackers to differentiate between dynamic analysis systems and real user systems, introducing wear-and-tear artifacts to analysis systems becomes an additional necessary step. We outline two different potential strategies to achieve this.

First, a real user system can be cloned and used as a basis for a malware sandbox. In this scenario, the system will already have organic wear and tear which can be used to confuse attackers. Potential difficulties of adopting this approach are: i) privacy issues (how does one scrub all private information before or after cloning, while leaving the wear-and-tear artifacts intact?), and ii) eventual outdating of the artifacts (if the cloned system is used over a long period of time, an attacker can use our proposed statistical models to detect that the claimed age does not match the level of wear and tear).

An alternative approach is to start with a freshly installed image of an operating system and artificially age it by automatically simulating user actions. This would include installing and uninstalling different software while changing the system time to the past, browsing popular and less popular webpages, and, in general, taking actions which will affect the wear-and-tear artifacts on which an attacker could rely for detecting dynamic analysis systems. While this approach does not introduce any privacy concerns and can be repeated as often as desired, it is unclear to what extent this artificial aging will produce realistic artifacts that are similar enough to those of real systems.
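As a rough, heavily simplified illustration of the simulation-based direction, the sketch below replays benign network and file-system activity so that some wear-and-tear artifacts accumulate; software installs/uninstalls and system-clock manipulation, which realistic aging would also require, are deliberately omitted, and the URL list is a placeholder.

import os
import pathlib
import tempfile
import urllib.request

def age_image(urls, workdir=None, file_ops=200):
    workdir = pathlib.Path(workdir or tempfile.gettempdir())
    # Fetch pages so that DNS-cache and other network-related artifacts grow.
    for url in urls:
        try:
            urllib.request.urlopen(url, timeout=5).read(1024)
        except OSError:
            pass  # unreachable hosts are fine for aging purposes
    # Create, and partially delete, scratch files so disk-related artifacts grow.
    for i in range(file_ops):
        f = workdir / f"aging_{i}.tmp"
        f.write_bytes(os.urandom(1024))
        if i % 3 == 0:
            f.unlink()

# Example: age_image(["https://example.com/", "https://www.wikipedia.org/"])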

    We leave the implementation and comparison of these twoapproaches for future work.

    VIII. CONCLUSION AND FUTURE WORK

In this paper, we have systematically assessed the threat of a malware sandbox evasion strategy that leverages artifacts indicative of the wear and tear of a system to identify artificial environments, and demonstrated its effectiveness against existing sandboxes used by malware analysis services. As long as this aspect is not taken into consideration when implementing malware sandboxes, malicious code will continue to be able to effectively alter its behavior when being analyzed. Even approaches that move from emulated and virtualized environments to bare-metal systems [8]-[10] will be helpless if evasion tactics shift from how realistic a system looks, to how realistic its past use looks. The same also holds for existing techniques that identify split personalities in malware by comparing malware behavior on an emulated analysis environment and on an unmodified reference host [44] or across multiple sandboxes [18]. As long as all involved systems do not look realistic from the perspective of a real user's activity, they will be easily detectable and, therefore, evadable.

As a step towards mitigating this threat, we have presented statistical models that capture a system's age and degree of use, which can aid sandbox operators in fine-tuning their systems so that they exhibit more realistic wear-and-tear characteristics. Although addressing each and every identified artifact may be a viable short-term solution, more generic automated aging techniques will probably be needed to provide a more robust defense, as many more artifacts may be available.

As part of our future work, we plan to explore such approaches based on simulation-based generation, as well as privacy-preserving transformation of system images extracted from real user devices. We also plan to evaluate the effectiveness of sandbox evasion based on wear and tear in other environments, such as different desktop operating systems as well as mobile devices.

    ACKNOWLEDGMENTS

We thank our shepherd, Tudor Dumitras, and the anonymous reviewers for their valuable feedback. This work was supported in part by the Office of Naval Research (ONR) under grant N00014-16-1-2264, and by the National Science Foundation (NSF) under grants CNS-1617902 and CNS-1617593, with additional support by Qualcomm. Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors, and do not necessarily reflect those of the US Government, ONR, NSF, or Qualcomm.

REFERENCES

[1] C. Willems, T. Holz, and F. Freiling, "Toward automated dynamic malware analysis using CWSandbox," Security & Privacy, IEEE, vol. 5, no. 2, pp. 32-39, March 2007.
[2] U. Bayer, C. Kruegel, and E. Kirda, "TTAnalyze: A tool for analyzing malware," in Proceedings of the 15th European Institute for Computer Antivirus Research Annual Conference (EICAR), April 2006.
[3] X. Jiang and X. Wang, "Out-of-the-box monitoring of VM-based high-interaction honeypots," in Proceedings of the 10th International Conference on Recent Advances in Intrusion Detection (RAID), 2007, pp. 198-218.
[4] H. Yin, D. Song, M. Egele, C. Kruegel, and E. Kirda, "Panorama: Capturing system-wide information flow for malware detection and analysis," in Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS), 2007, pp. 116-127.
[5] "Anubis: Malware analysis for unknown binaries," https://anubis.iseclab.org/.
[6] A. Dinaburg, P. Royal, M. Sharif, and W. Lee, "Ether: Malware analysis via hardware virtualization extensions," in Proceedings of the 15th ACM Conference on Computer and Communications Security (CCS), 2008, pp. 51-62.
[7] "Automated malware analysis: Cuckoo sandbox," https://cuckoosandbox.org/.
[8] C. Willems, R. Hund, A. Fobian, D. Felsch, T. Holz, and A. Vasudevan, "Down to the bare metal: Using processor features for binary analysis," in Proceedings of the 28th Annual Computer Security Applications Conference (ACSAC), 2012, pp. 189-198.
[9] D. Kirat, G. Vigna, and C. Kruegel, "BareCloud: Bare-metal analysis-based evasive malware detection," in Proceedings of the 23rd USENIX Security Symposium, 2014, pp. 287-301.
[10] S. Mutti, Y. Fratantonio, A. Bianchi, L. Invernizzi, J. Corbetta, D. Kirat, C. Kruegel, and G. Vigna, "BareDroid: Large-scale analysis of Android apps on real devices," in Proceedings of the 31st Annual Computer Security Applications Conference (ACSAC), December 2015.
[11] J. Oberheide and C. Miller, "Dissecting the Android Bouncer," 2012, https://jon.oberheide.org/files/summercon12-bouncer.pdf.
[12] S. Neuner, V. van der Veen, M. Lindorfer, M. Huber, G. Merzdovnik, M. Mulazzani, and E. Weippl, "Enter sandbox: Android sandbox comparison," in Proceedings of the 3rd IEEE Mobile Security Technologies Workshop (MoST), 2014.
[13] "FireEye Malware Analysis AX Series," https://www.fireeye.com/products/malware-analysis.html.
[14] "Blue Coat Malware Analysis Appliance," https://www.bluecoat.com/products/malware-analysis-appliance.
[15] "Cisco Anti-Malware System," http://www.cisco.com/c/en/us/products/security/web-security-appliance/anti malware index.html.
[16] "FortiNet Advanced Threat Protection (ATP) - FortiSandbox," http://www.fortinet.com/products/fortisandbox/index.html.
[17] "McAfee Advanced Threat Defense," http://www.mcafee.com/us/products/advanced-threat-defense.aspx.
[18] M. Lindorfer, C. Kolbitsch, and P. Milani Comparetti, "Detecting environment-sensitive malware," in Proceedings of the 14th International Conference on Recent Advances in Intrusion Detection (RAID), 2011, pp. 338-357.
[19] C. Kolbitsch, "Analyzing environment-aware malware," 2014, http://labs.lastline.com/analyzing-environment-aware-malware-a-look-at-zeus-trojan-variant-called-citadel-evading-traditional-sandboxes.
[20] T. Petsas, G. Voyatzis, E. Athanasopoulos, M. Polychronakis, and S. Ioannidis, "Rage against the virtual machine: Hindering dynamic analysis of mobile malware," in Proceedings of the 7th European Workshop on System Security (EuroSec), April 2014.
[21] A. Kapravelos, Y. Shoshitaishvili, M. Cova, C. Kruegel, and G. Vigna, "Revolver: An automated approach to the detection of evasive web-based malware," in Proceedings of the 22nd USENIX Security Symposium, 2013, pp. 637-652.
[22] G. Stringhini, C. Kruegel, and G. Vigna, "Shady paths: Leveraging surfing crowds to detect malicious web pages," in Proceedings of the ACM SIGSAC Conference on Computer & Communications Security (CCS), 2013, pp. 133-144.
[23] A. Yokoyama, K. Ishii, R. Tanabe, Y. Papa, K. Yoshioka, T. Matsumoto, T. Kasama, D. Inoue, M. Brengel, M. Backes, and C. Rossow, "SandPrint: Fingerprinting malware sandboxes to provide intelligence for sandbox evasion," in Proceedings of the 19th International Symposium on Research in Attacks, Intrusions and Defenses (RAID), 2016, pp. 165-187.
[24] D. Desai, "Malicious Documents leveraging new Anti-VM & Anti-Sandbox techniques," https://www.zscaler.com/blogs/research/malicious-documents-leveraging-new-anti-vm-anti-sandbox-techniques, 2016.
[25] ProofPoint, "Ursnif Banking Trojan Campaign Ups the Ante with New Sandbox Evasion Techniques," https://www.proofpoint.com/us/threat-insight/post/ursnif-banking-trojan-campaign-sandbox-evasion-techniques, 2016.
[26] Y.-M. Wang, D. Beck, X. Jiang, R. Roussev, C. Verbowski, S. Chen, and S. King, "Automated web patrol with strider honeymonkeys: Finding web sites that exploit browser vulnerabilities," in Proceedings of the 13th Network and Distributed Systems Security Symposium (NDSS), 2006.
[27] N. Provos, P. Mavrommatis, M. A. Rajab, and F. Monrose, "All your iFRAMEs point to us," in Proceedings of the 17th USENIX Security Symposium, 2008.
[28] A. Moshchuk, T. Bragin, S. D. Gribble, and H. M. Levy, "A crawler-based study of spyware on the web," in Proceedings of the 13th Network and Distributed Systems Security Symposium (NDSS), 2006.
[29] A. Moshchuk, T. Bragin, D. Deville, S. D. Gribble, and H. M. Levy, "SpyProxy: Execution-based detection of malicious web content," in Proceedings of the 16th USENIX Security Symposium, 2007.
[30] M. Cova, C. Kruegel, and G. Vigna, "Detection and analysis of drive-by-download attacks and malicious JavaScript code," in Proceedings of the World Wide Web Conference (WWW), 2010.
[31] "Wepawet: Detection and Analysis of Web-based Threats," https://wepawet.iseclab.org/.
[32] K. Rieck, T. Krueger, and A. Dewald, "Cujo: Efficient detection and prevention of drive-by-download attacks," in Proceedings of the 26th Annual Computer Security Applications Conference (ACSAC), 2010, pp. 31-39.
[33] S. Ford, M. Cova, C. Kruegel, and G. Vigna, "Analyzing and detecting malicious flash advertisements," in Proceedings of the Annual Computer Security Applications Conference (ACSAC), Honolulu, HI, December 2009.
[34] M. Egele, T. Scholte, E. Kirda, and C. Kruegel, "A survey on automated dynamic malware-analysis techniques and tools," ACM Comput. Surv., vol. 44, no. 2, pp. 6:1-6:42, Mar. 2008.
[35] T. Vidas and N. Christin, "Evading Android runtime analysis via sandbox detection," in Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security (AsiaCCS), 2014, pp. 447-458.
[36] T. Klein, "ScoopyNG - The VMware detection tool," 2008, http://www.trapkit.de/tools/scoopyng/index.html.
[37] M. G. Kang, H. Yin, S. Hanna, S. McCamant, and D. Song, "Emulating emulation-resistant malware," in Proceedings of the 1st ACM Workshop on Virtual Machine Security (VMSec), 2009, pp. 11-22.
[38] P. Ferrie, "Attacks on Virtual Machine Emulators," http://www.symantec.com/avcenter/reference/Virtual Machine Threats.pdf.
[39] ——, "Attacks on More Virtual Machine Emulators," http://pferrie.tripod.com/papers/attacks2.pdf.
[40] Y. Bulygin, "CPU side-channels vs. virtualization rootkits: the good, the bad, or the ugly," ToorCon, 2008.
[41] J. Rutkowska, "Red Pill... or how to detect VMM using (almost) one CPU instruction," 2004, https://web.archive.org/web/20071112073957/http:/www.invisiblethings.org/papers/redpill.html.
[42] O. Bazhaniuk, Y. Bulygin, A. Furtak, M. Gorobets, J. Loucaides, and M. Shkatov, "Reaching the far corners of MATRIX: generic VMM fingerprinting," SOURCE Seattle, 2015.
[43] P. Jung, "Bypassing sandboxes for fun!" BotConf, 2014, https://www.botconf.eu/wp-content/uploads/2014/12/2014-2.7-Bypassing-Sandboxes-for-Fun.pdf.
[44] D. Balzarotti, M. Cova, C. Karlberger, C. Kruegel, E. Kirda, and G. Vigna, "Efficient detection of split personalities in malware," in Proceedings of the Symposium on Network and Distributed System Security (NDSS), 2010.
[45] M. K. Sun, M. J. Lin, M. Chang, C. S. Laih, and H. T. Lin, "Malware virtualization-resistant behavior detection," in Proceedings of the 17th IEEE International Conference on Parallel and Distributed Systems (ICPADS), 2011, pp. 912-917.
[46] D. Kirat, G. Vigna, and C. Kruegel, "BareBox: Efficient malware analysis on bare-metal," in Proceedings of the 27th Annual Computer Security Applications Conference (ACSAC), 2011, pp. 403-412.
[47] C. Wueest, "Does malware still detect virtual machines?" 2014, http://www.symantec.com/connect/blogs/does-malware-still-detect-virtual-machines.
[48] A. Singh, "Defeating Darkhotel Just-In-Time Decryption," 2015, http://labs.lastline.com/defeating-darkhotel-just-in-time-decryption.
[49] D. Maier, T. Muller, and M. Protsenko, "Divide-and-conquer: Why Android malware cannot be stopped," in Proceedings of the 9th International Conference on Availability, Reliability and Security (ARES), 2014, pp. 30-39.
[50] J. Blackthorne, A. Bulazel, A. Fasano, P. Biernat, and B. Yener, "AVLeak: Fingerprinting antivirus emulators through black-box testing," in Proceedings of the 10th USENIX Workshop on Offensive Technologies (WOOT), Austin, TX, 2016.
[51] "CleanMyPC Registry Cleaner," http://www.registry-cleaner.net/.
[52] "CCleaner - Registry Cleaner," https://www.piriform.com/ccleaner/registry-cleaner.
[53] R. E. Schapire and Y. Freund, Boosting: Foundations and Algorithms. MIT Press, 2012.
[54] M. Cova, "Evasive JScript," 2016, http://labs.lastline.com/evasive-jscript.
