Windows Incident Response

More Registry Fun

Once, on a blog far, far away, there was this post that discussed the use of the Unicode RLO control character to "hide" malware in the Registry, particularly from GUI viewers that processed Unicode.

Recently, Jamie shared this Symantec article with me; figure 1 in the article illustrates an interesting aspect of the malware when it comes to persistence...it apparently prepends a null character to the value name.  Interestingly, some seem to think that this makes tools like RegEdit "broken".

So, I wrote a RegRipper plugin called "null.pl" that runs through a hive file looking for key and value names that begin with a null character.  Jamie also shared a couple of sample hives, so I got to test the plugin out.  The following image illustrates the plugin output when run against one of the hives:

[Image: null.pl plugin output against one of the sample hives]

All in all, the turn-around time was pretty quick.  I started this morning, and had the plugin written, tested, and uploaded to Github before lunch.
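
If you're curious what the core of that check looks like, here's a minimal sketch of the same idea in Python, using Willi Ballenthin's python-registry module (this is not the plugin itself, just an illustration, and the hive path is hypothetical):

from Registry import Registry

def find_null_prefixed(key, path=""):
    # Recursively flag key and value names that begin with a null character.
    for value in key.values():
        if value.name().startswith("\x00"):
            print("Value: %s -> %s" % (path, repr(value.name())))
    for subkey in key.subkeys():
        name = subkey.name()
        if name.startswith("\x00"):
            print("Key  : %s -> %s" % (path, repr(name)))
        find_null_prefixed(subkey, path + "\\" + name)

reg = Registry.Registry("NTUSER.DAT")
find_null_prefixed(reg.root())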

Later in the day, Eric Zimmerman followed up by testing the hive that Jamie graciously shared with me against the Registry Explorer.  I should also note that MiTeC WRR has no issues with the value names; it displays them as follows:

[Image: MiTeC WRR display of the value names]

Resources, Link Mashup

Monitoring
MS's Sysmon was recently updated to version 3.2, with the addition of capturing opens for raw read access to disks and volumes.  If you're interested in monitoring your infrastructure and performing threat hunting at all, I'd highly recommend that you consider installing something like this on your systems.  While Sysmon is not nearly as fully-featured as something like Carbon Black, employing Sysmon along with centralized log collection and filtering will provide you with a level of visibility that you likely hadn't even imagined was possible previously.

This page talks about using Sysmon and NXLog.

The fine analysts of the Dell SecureWorks CTU-SO recently had an article posted  that describes what the bad guys like to do with Windows Event Logs, and both of the case studies could be "caught" with the right instrumentation in place.  You can also use process creation monitoring (via Sysmon, or some other means) to detect when an intruder is living off the land within your environment.

The key to effective monitoring and subsequent threat hunting is visibility, which is achieved through telemetry and instrumentation.  How are bad guys able to persist within an infrastructure for a year or more without being detected?  It's not that they aren't doing stuff, it's that they're doing stuff that isn't detected due to a lack of visibility.

MS KB article 3004375 outlines how to improve Windows command-line auditing, and this post from LogRhythm discusses how to enable PowerShell command line logging (another post discussing the same thing is here).  The MS KB article gives you some basic information regarding process creation, and Sysmon provides much more insight.  Regardless of which option you choose, however, all are useless unless you're doing some sort of centralized log collection and filtering, so be sure to incorporate the necessary and appropriate logs into your SIEM, and get those filters written.

Windows Event Logs
Speaking of Windows Event Logs, sometimes it can be very difficult to find information regarding various event source/ID pairs.  Microsoft has a great deal of information available regarding Windows Event Log records, and I very often can easily find the pages with a quick Google search.  For example, I recently found this page on Firewall Rule Processing events, based on a question I saw in an online forum.

From Deus Ex Machina, you can look up a wide range of Windows Event Log records here or here.  I've found both to be very useful.  I've used this site more than once to get information about *.evtx records that I couldn't find any place else.

Another source of information about Windows Event Log records and how they can be used can often be one of the TechNet blogs.  For example, here's a really good blog post from Jessica Payne regarding tracking lateral movement...

With respect to the Windows Event Logs, I've been looking at ways to increase instrumentation on Windows systems, and something I would recommend is putting triggers in place for various activities, and writing a record to the Windows Event Log.  I found this blog post recently that discusses using PowerShell to write to the Windows Event Log, so whatever you trap or trigger on a system can launch the appropriate command or run a batch file that contains the command.  Of course, in a networked environment, I'd highly recommend a SIEM be set up, as well.
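
The post I linked to covers PowerShell; the same idea can be scripted from Python via the pywin32 package.  A minimal sketch (the source name and event ID below are ones I made up for illustration):

import win32evtlog
import win32evtlogutil

# Write an informational record to the Application log.  The source name is
# hypothetical; if it isn't registered, the record is still written, just
# without a friendly message template.
win32evtlogutil.ReportEvent(
    "WFA-Trigger",                                   # hypothetical event source
    1000,                                            # event ID of your choosing
    eventType=win32evtlog.EVENTLOG_INFORMATION_TYPE,
    strings=["Trigger fired: suspicious command line observed"],
)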

One thought regarding filtering and analyzing Windows Event Log records sent to a SIEM...when looking at various Windows Event Log records, we have to look at them in the context of the system, rather than in isolation, as what they actually refer to can be very different.  A suspicious record related to WMI, for example, when viewed in isolation may end up being part of known and documented activity when viewed in the context of the system.

Analysis
PoorBillionaire recently released a Windows Prefetch Parser, which is reportedly capable of handling *.pf files from XP systems all the way up through Windows 10 systems.  On 19 Jan, Eric Zimmerman did the same, making his own Prefetch parser available.

Having tools available is great, but what we really need to do is talk about how those tools can be used most effectively as part of our analysis.  There's no single correct way to use the tool, but the issue becomes, how do you correctly interpret the data once you have it?

I recently encountered a "tale of two analysts", where both had access to the same data.  One analyst did not parse the ShimCache data at all as part of their analysis, while the other did and misinterpreted the information that the tool (whichever one that was) displayed for them.

So, my point is that having tools to parse data is great, but if the focus is tools and parsing data, but not analyzing and correctly interpreting the data, what have the tools really gotten us?

Creating a Timeline
I was browsing around recently and ran across an older blog post (yeah, I know it's like 18 months old...), and in the very beginning of that post, something caught my eye.  Specifically, a couple of quotes from the blog post:

...my reasons for carrying this out after the filesystem timeline is purely down to the time it takes to process.

...and...

The problem with it though is the sheer amount of information it can contain! It is very important when working with a super timeline to have a pivot point to allow you to narrow down the time frame you are interested in.

The post also states that timeline analysis is an extremely powerful tool, and I agree, 100%.  What I would offer to analysts is a more deliberate approach to timeline analysis, based on what Chris Pogue coined as Sniper Forensics.

Speaking of analysis, the folks at RSA released a really good look at analyzing carrier files used during a phish.  The post provides a pretty thorough walk-through of the tool and techniques used to parse through an old (or should I say, "OLE") style MS Word document to identify and analyze embedded macros.

Powershell
Not long ago, I ran across an interesting artifact...a folder with the following name:

C:\Users\user\AppData\Local\Microsoft\Windows\PowerShell\CommandAnalysis\

The folder contained an index file, and a bunch of files with names that follow the format "PowerShell_AnalysisCacheEntry_GUID".  Doing some research into this, I ran across this BoyWonder blog post, which seems to indicate that this is a cache (yeah, okay, that's in the name, I get it...), and possibly used for functionality similar to auto-complete.  It doesn't appear to illustrate what was run, though.  For that, you might want to see the LogRhythm link earlier in this post.

As it turned out, the folder path I listed above was part of legitimate activity performed by an administrator.


Analysis

A bit ago, I posted about doing analysis, and that post didn't really seem to get much traction at all.  What was I trying for?  To start a conversation about how we _do_ analysis.  When we make statements to a client or to another analyst, on what are we basing those findings?  Somewhere between the raw data and our findings is where we _do_ analysis; I know what that looks like for me, and I've shared it (in this blog, in my books, etc.), and what I've wanted to do for some time is go beyond the passivity of sitting in a classroom, and start a conversation where analysts engage and discuss analysis.

I have to wonder...is this even possible?  Will analysts talk about what they do?  For me, I'm more than happy to.  But will this spark a conversation?

I thought I'd try a different tack this time around.  In a recent blog post, I mentioned that two Prefetch parsers had recently been released.  While it is interesting to see these tools being made available, I have to ask...how are analysts using these tools?  How are analysts using these tools to conduct analysis, and achieve the results that they're sharing with their clients?

Don't get me wrong...I think having tools is a wonderful idea.  We all have our favorite tools that we tend to gravitate toward or reach for under different circumstances.  Whether it's commercial or free/open source tools, it doesn't really matter.  Whether you're using a dongle or a Linux distro...it doesn't matter.  What does matter is, how are you using it, and how are you interpreting the data?

Someone told me recently, "...I know you have an issue with EnCase...", and to be honest, that's simply not the case.  I don't have an issue with EnCase at all, nor with FTK.  I do have an issue with how those tools are used by analysts, and the issue extends to any other tool that is run auto-magically and expected to spit out true results with little to no analysis.

What do the tools really do for us?  Well, basically, most tools parse data of some sort, and display it.  It's then up to us, as analysts, to analyze that data...interpret it, either within the context of that and other data, or by providing additional context, by incorporating either additional data from the same source, or data from external sources.

RegRipper is a great example.  The idea behind RegRipper (as well as the other tools I've written) is to parse and display data for analysis...that's it.  RegRipper started as a bunch of scripts I had sitting around...every time I'd work on a system and have to dig through the Registry to find something, I'd write a script to do the actual work for me.  In some cases, a script was simply to follow a key path (or several key paths) that I didn't want to have to memorize. In other cases, I'd write a script to handle ROT-13 decoding or binary parsing; I figured, rather than having to do all of that again, I'd write  a script to automate it.
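
As an example of the kind of grunt work those scripts took off my plate: the UserAssist value names are ROT-13 encoded and the value data is binary, so a few lines of Python (a sketch of the idea, not the plugin code...and the offsets vary by Windows version) handle both:

import codecs
import struct

encoded_name = "HRZR_EHACNGU:P:\\jvaqbjf\\abgrcnq.rkr"   # example UserAssist value name
print(codecs.decode(encoded_name, "rot_13"))              # UEME_RUNPATH:C:\windows\notepad.exe

# The value data is a binary structure; e.g., pull a 32-bit run count from
# offset 4 (XP-era layout, assumed here for illustration).
data = b"\x00\x00\x00\x00\x0e\x00\x00\x00" + b"\x00" * 8
run_count = struct.unpack_from("<I", data, 4)[0]
print(run_count)                                          # 14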

For a while, that's all RegRipper did...parse and display data.  If you had key words you wanted to "pivot" on, you could do so with just about any text editor, but that's still a lot of data.  So then I started adding "alerts"; I'd have the script (or tool) do some basic searching to look for things that were known to be "bad", in particular, file paths in specific locations.  For example, an .exe file in the root of the user profile, or in the root of the Recycle Bin, is a very bad thing, so I wanted those to pop out and be put right in front of the analyst.  I found...and still find...this to be incredibly useful functionality, but to date, I've received very little feedback on it.
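
As a rough illustration of what those alerts key on (the patterns below are examples I put together for this post, not the plugin's actual list):

import re

# Paths that should make an analyst look twice: an .exe in the root of a user
# profile, in the root of the Recycle Bin, or under a Temp folder.
ALERT_PATTERNS = [
    re.compile(r"^c:\\users\\[^\\]+\\[^\\]+\.exe$", re.I),
    re.compile(r"^c:\\\$?recycle\.bin\\[^\\]+\.exe$", re.I),
    re.compile(r"\\temp\\", re.I),
]

def alert(path):
    return any(p.search(path) for p in ALERT_PATTERNS)

print(alert(r"C:\Users\bob\svchost.exe"))          # True
print(alert(r"C:\Windows\system32\svchost.exe"))   # False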

Here's an example of what I'm talking about with respect to analysis...I ran across this forensics challenge walk-through recently, and just for sh*ts and grins, I downloaded the Registry hive (NTUSER.DAT, Software, System) files.  I ran the appcompatcache.pl RegRipper plugin against the system hive, and found the following "interesting" entries within the AppCompatCache value:

C:\dllhot.exe  Tue Apr  3 18:08:50 2012 Z  Executed
C:\Windows\TEMP\a.exe  Tue Apr  3 23:54:46 2012 Z  Executed
c:\windows\system32\dllhost\svchost.exe  Tue Apr  3 22:40:25 2012 Z  Executed
C:\windows\system32\hydrakatz.exe  Wed Apr  4 01:00:45 2012 Z  Executed
C:\Windows\system32\icacls.exe  Tue Jul 14 01:14:21 2009 Z  Executed

Now, the question is, for each of those entries, what do they mean?  Do they mean that the .exe file was "executed" on the date and time listed?

No, that's not what the entries mean at all.  Check out Mandiant's white paper on the subject.  You can verify what they're saying in the whitepaper by creating a timeline from the shim cache data and file system metadata (just the $MFT will suffice); if the files that had been executed were not deleted from the system, you'll see that the time stamp included in the shim cache data is, in fact, the last modification time from the file system (specifically, the $STANDARD_INFORMATION attribute) metadata.
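
That verification is easy to script; here's a sketch of the comparison, assuming you've already parsed the shim cache into (path, timestamp) pairs as UTC datetime objects and have the image mounted read-only (the mount point is hypothetical):

import os
from datetime import datetime, timezone

MOUNT = r"E:\mounted_image"   # hypothetical mount point

def shim_matches_si_mtime(path, shim_ts):
    # Compare a shim cache time stamp against the file's last modification time.
    local = os.path.join(MOUNT, path.split(":", 1)[1].lstrip("\\"))
    if not os.path.exists(local):
        return None   # file no longer present; nothing to compare against
    fs_mtime = datetime.fromtimestamp(os.path.getmtime(local), tz=timezone.utc)
    return abs((fs_mtime - shim_ts).total_seconds()) < 2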

I use this as an example, simply because it's something that I see a great deal of; in fact, I recently experienced a "tale of two analysts", where I reviewed work that had previously been conducted, by two separate analysts.  The first analyst did not parse the Shim Cache data, and the second parsed it, but assumed that what the data meant was that the .exe files of interest had been executed at the time displayed alongside the entry.

Again, this is just an example, and not meant to focus the spotlight on anyone.  I've talked with a number of analysts, and in just about every conversation, they've either known someone who's made the same mistake misinterpreting the Shim Cache data, or they've admitted to misinterpreting it themselves.  I get it; no one's perfect, and we all make mistakes.  I chose this one as an example, because it's perhaps one of the most misinterpreted data sources.  A lot of analysts who have attended (or conducted) expensive training courses have made this mistake.

Pointing out mistakes isn't the point I'm trying to make...it's that we, as a community, need to engage in a community-wide conversation about analysis.  What resources do we have available now, and what do we need?  We can't all attend training courses, and when we do, what happens most often is that we learn something cool, and then don't see it again for 6 months or a year, and we forget the nuances of that particular analysis.  Dedicated resources are great, but they (forums, emails, documents) need to be searched.  What about just-in-time resources, like asking a question?  Would that help?


Training

On the heels of my Skills Dilemma blog post, I wanted to share some thoughts on training.  Throughout my career, I've been on both sides of that fence...in the military and in private sector consulting, I've received training, as well as developed and conducted it, at various levels.  I've attended presentations, and developed and conducted presentations, at a number of levels and at a variety of venues.

Corey's also had some pretty interesting thoughts with respect to training in his blog.

Purpose
There are a lot of great training options out there.  When you're looking at the various options, are you looking to use up training funds, or are you looking for specific skills development?  What is the purpose of the training?  What's your intent in either attending the training, or sending someone, a staff member, to that training?

If you're just looking to use up training funds so that you'll get that money included in your budget next year, well, pretty much anything will do.  However, if you're looking for specific skills development, whether it's basic or advanced skills, you may want to look closely at what's being taught in the course.

What would really help with this is a Yelp-like system for reviewing training courses.  Wait...what?  You think someone should actually evaluate the services they pay for and receive?  What are you, out of your mind?

So, here's my thought...as a manager, you sit down with one of your staff and develop performance indicators and goals for the coming year, as well as a plan for achieving those goals.  The two of you decide that in order to meet those goals, one step is to attend a specific training course.  Your staff member attends, and then you both write a review of the course, based on what you agreed you wanted to achieve by attending it; the staff member based on their experience attending the course, and you (as the manager) based on your observation of their use of the new skills.

Accountability
I'll say it again...there are a lot of great training options out there, but in my experience, what's missing is accountability for that training.  What I mean by that is, if you're a manager and you send someone off to training (whether they obtain a certification or not), do you hold them accountable for that training once they return?

Here's an example...when I was on active duty and was stationed overseas, there was an NBC (nuclear, biological, chemical) response course being conducted, and being the junior guy, I was sent.  After I'd passed the course and received my certificate, I returned to my unit, and did absolutely NOTHING with respect to NBC.  Basically, I was simply away for a week.  I was sent off to the training for no other reason than I was the low guy on the totem pole, and when I returned, I was neither asked about, nor required to use or implement that training in any way.  There was no accountability.

Later in my tenure in the military, I found an opening for some advanced training for a Sgt who worked for me.  I felt strongly that this Sgt should be promoted and advance in his career, and one way to shore up his chances was to ensure that he advanced in his military occupational specialty.  I got him a seat in the training course, got his travel set up, and while he was gone, I found a position in another unit where he would put his new-found skills to good use.  When he returned, I informed him of his transfer (which had other benefits for him, as well).  His new role required him to teach junior Marines about the device he'd been trained on, as well as train the new officers attending the Basic Communication Officers Course on how to use and deploy the device, as well.  He was held accountable for the training he'd received.

How often do we do this?  Be honest.  I've seen an analyst who had attended some pretty extensive training, only to return and, within the next couple of weeks, not know how to determine whether a file had been time stomped.  I know that the basics of how to conduct that type of analysis were covered in the training they'd attended.

Generalist vs. Specialist
What kind of training are you interested in?  Basic skills development, or advanced training in a very specific skill set?  What specific skills are you looking for?  Are they skills specific to your environment?

There's a lot of good generalist training out there, training that provides a broad range of skills.  You may want to start there, and then develop an in-house training program ("brown bag" lunch presentations, half- or full-day training, mentoring, etc.) that reinforces and extends those basic skills into something that's specific to your environment.

The Need for Instrumentation

Almost everyone likes spies, right?  Jason Bourne, James Bond, that sort of thing?  One of the things you don't see in the movies is the training these super spies go through, but you have to imagine that it's pretty extensive, if they can pop up in a city that they maybe haven't been to and transition seamlessly into the environment.

The same thing is true of targeted adversaries...they're able to seamlessly blend into your environment.  Like special operations forces, they learn how to use tools native to the environment in order to get the information that they're after, whether it's initial reconnaissance of the host or the infrastructure, locating items of interest, moving laterally within the infrastructure, or exfiltrating data.

I caught this post from JPCERT/CC that discusses Windows commands abused by attackers.  The author takes a different approach from previous posts and shares some of the command lines used, but also focuses on the frequency of use for each tool.  There's also a section in the post that recommends using GPOs to restrict the use of unnecessary commands.  An alternative approach might be to track attempts to use the tools, by creating a trigger to write a Windows Event Log record (discussed previously in this post).  When incorporated into an overall log management (SIEM, filtering, alerting, etc.) framework, this can be an extremely valuable detection mechanism.

If you're not familiar with some of the tools that you see listed in the JPCERT/CC blog post, try running them, starting by typing the command followed by "/?".

TradeCraft Tuesday - Episode #6 discusses how Powershell can be used and abused. The presenters (one of whom is Kyle Hanslovan) strongly encourage interaction (wow, does that sound familiar at all?) with the presentation via Twitter.  During the presentation, the guys talk about Powershell being used to push base64 encoded commands into the Registry for later use (often referred to as "fileless"), and it doesn't stop there.  Their discussion of the power of Powershell for post-exploitation activities really highlights the need for a suitable level of instrumentation in order to achieve visibility.

The use of native commands by an adversary or intruder is not new...it's been talked about before.  For example, the guys at SecureWorks talked about the same thing in the articles Linking Users to Systems and Living off the Land.  Rather than talking about what could be done, these articles show you data that illustrates what was actually done; not might or could, but did.

So, what do you do?  Well, I've posted previously about how you can go about monitoring for command line activity, which is usually manifest when access is achieved via RATs.

Not all abuse of native Windows commands and functionality is going to be as obvious as some of what's been discussed already.  Take this recent SecureWorks post for example...it illustrates how GPOs have been observed being abused by dedicated actors.  An intruder moving about your infrastructure via Terminal Services won't be as easy to detect using command line process creation monitoring, unless and until they resort to some form of non-GUI interaction.

Updated samparse.pl plugin

I received an email from randomaccess last night, and got a look at it this morning.  In the email, he pointed out that there had been some changes to the SAM Registry hive as of Windows 8/8.1, apparently due to the ability to log into the system using an MSDN Live account.  Several new values seem to have been added to the user RID key, specifically, GivenName, SurName, and InternetUserName.  He provided a sample SAM hive and an explanation of what he was looking for, and I was able to update the samparse.pl plugin, send him a copy, and update the GitHub repository, all in pretty short order.

This is a great example of what I've said time and again since I released RegRipper; if you need a plugin and don't feel that you can create or update one yourself, all you need to do is provide a concise description of what you're looking for, and some sample data.  It's that easy, and I've always been able to turn a new or updated plugin around pretty quickly.

Now, I know some folks are hesitant to share data/hive files with me, for fear of exposure.  I know people are afraid to share information for fear it will end up in my blog, and I have actually had someone tell me recently that they were hesitant to share something with me because they thought I would take the information and write a new book around it.  Folks, if you take a close look at the blog and books, I don't expose data in either one.  I've received hive files from two members of law enforcement, one of whom shared hive files from a Windows phone.  That's right...law enforcement.  And I haven't exposed, nor have I shared any of that data.  Just sayin'...

Interestingly enough, randomaccess also asked in his email if I'd "updated the samparse plugin for the latest book", which was kind of an interesting question.  The short answer is "no", I don't generally update plugins only when I'm releasing a new book.  If you've followed this blog, you're aware that plugins get created or updated all the time, without a new book being released.  The more extensive response is that I simply haven't seen a SAM hive myself that contains the information in question, nor has anyone provided a hive that I could use to update and test the plugin, until now.

And yes, the second edition of Windows Registry Forensics is due to hit the shelves in April, 2016.

Links

Presentations
I recently ran across this presentation from Eric Rowe of the Canadian Police College (presentation hosted by Bloomsburg University in PA).  The title of the presentation is Volume Shadow Copy and Registry Forensics, so it caught my eye.  Overall, it was a good presentation, and something I've done myself, more than once.

Education and Training
Now and again, I see posts to various lists and forums asking about how to get hands-on experience.  If you can't afford the courses that provide this, it's still not difficult to get it.  There are a number of sites available online that provide access to images, and tools are available just about...well...everywhere.

For images, Lance's first practical is still available online, having been originally posted about 8 years ago; the image is of a Windows XP system, and it includes System Restore Points.  Want to work with some Volume Shadow Copies (VSCs) in a Win7 image?  David has an image available with this blog post.  For other sorts of images, there's the Digital Corpora site,  the CFreds "hacking case" from NIST, and the InfoSec Short Takes competition scenario and image, to name a few.

From these images, you can select individual artifacts to extract and parse, in order to get familiar with the data and the tools, you can follow the scenario provided with each image (answer the questions, etc.), or you can conduct analysis under the tutelage of a mentor.  All of these provide great opportunities for education and training.

If I were in a position to hire for an opening, and an applicant stated that they'd downloaded and analyzed one of these images, I would ask to see their case notes and report, or even a blog post they'd written, something to show that they had a thought process.  I'd also ask questions about various investigative decisions they'd made throughout the process.

Tools
OLETools - this is a (Python) package of tools for analyzing MS structured storage files, the old style Office docs (more info/links available here).  Tools such as these would be most helpful when performing malware detection with documents that use this format.  To upgrade or install the tools, you can use 'pip' within your Python installation.
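
For reference, the package also exposes a Python API; a quick sketch of pulling macros out of an old-style Office document with olevba (the file name is made up, and the interface may shift between versions, so check the oletools documentation):

from oletools.olevba import VBA_Parser

parser = VBA_Parser("suspicious.doc")   # hypothetical sample
if parser.detect_vba_macros():
    for fname, stream_path, vba_filename, vba_code in parser.extract_macros():
        print("Stream: %s (%s)" % (stream_path, vba_filename))
        print(vba_code)
parser.close()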

Jon Glass has released an updated version of his WebCacheV01.dat parser, written in Python.  I've been using esedbexport and esedbviewer, but this will be a great addition to my toolkit, because with the code available, I was able to modify it so that the information parsed from the WebCacheV01.dat file can be added directly into my timeline analysis process (that is, I modified the script to output in TLN format).  Also, thanks to David Cowen's blog, if you need to install (or update) libraries for use with Python, here's a great place you can go to get the compiled binaries.
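
The modification itself is trivial, because TLN is just a pipe-delimited, five-field line: time (Unix epoch), source, host/system, user, description.  The output loop ends up looking something like this sketch (the field values are made up):

def tln(epoch, source, host, user, description):
    # Time|Source|Host|User|Description
    return "%d|%s|%s|%s|%s" % (epoch, source, host, user, description)

print(tln(1456326535, "WEBCACHE", "HOSTNAME", "jdoe", "URL visited - http://example.com"))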

For converting Python scripts into standalone Windows executable files, py2exe appears to be the solution; at least, that's what I'm finding in my searches.

Speaking of Python, if you're into (or new to) Python programming for DFIR work, you might want to check out (Mastering) Python Forensics; Jon's review can be found here.  I'm considering getting a copy; from what I can see so far, it's similar to Perl Scripting for Windows Security.

Books

Speaking of books, Windows Registry Forensics, 2/e is due out in April.  I'm looking forward to this one for a couple of reasons; first, a lot of the material is completely rewritten, and there's not only some new material with respect to the hives themselves, but I added a chapter on using RegRipper.  My hope is that analysts will read the chapter, and get a better understanding of how to use RegRipper to further their investigations, and go beyond simply downloading the distribution and running everything via the GUI.

Second, this book has entirely new cover art!  This is awesome!  When the first edition came out, I took two copies to a conference to give away at the end of my presentation...I had received the copies the day before leaving for the conference, they were that new.  When I went to give one of the books away, the recipient said, "I already have that one." But there was no way that they could have, because it was brand new.  The issue was that the publisher had decided to release several books (not just my titles) with the same color scheme!  Rather than reading the title of the books, most folks were simply looking at the color and thinking, "...yeah, I've seen/got that one...".  Right now, I have 8 Syngress books on my book shelf, in two color schemes...both of which are just slightly different shades of green!  Most folks don't know the difference between Digital Forensics with Open Source Tools and Windows Registry Forensics 1/e, because there is no difference in the color scheme.  It's great to see this change.

From the Trenches

I had an idea recently...there are a lot of really fascinating stories from the infosec industry that aren't being shared or documented in any way.  Most folks may not think of it this way, but these stories are sort of our "corporate history", they're what led to us being who and where we are today.

Some of my fondest memories from the military were sitting around with other folks, telling "war stories".  Whether it was at a bar after a long day (or week), or we were just sitting around a fire while we were still out in the field, it was a great way to bond and share a sort of "corporate history".  Even today, when I run into someone I knew "back in the day", the conversation invariably turns to one of us saying, "hey, do you remember when...?" I see a lot of value in sharing this sort of thing within our community, as well.

While I was still on active duty, I applied for and was assigned to the graduate program at the Naval Postgraduate School.  I showed up in June, 1994, and spent most of my time in Spanagel Hall (bldg 17 on this map).  At the time, I had no idea that every day (for about a month), I was walking by Gary Kildall's office.  It was several years later, while reading a book on some of the "history" behind MS-DOS and Silicon Valley, that I read about Digital Research and made the connection.  I've always found that kind of thing fascinating...getting an inside view of things from the people who were there (or, in Gary's case, allegedly not there...), attending the meetings, etc.  Maybe that's why I find the "Live To Tell" show on the History Channel so fascinating.

As a bit of a side note, after taking a class where I learned about "Hamming distance" while I was at NPS, I took a class from Richard Hamming.  That's like reading Marvel Comics and then talking to Stan Lee about developing Marvel Comics characters.

So, my idea is to share experiences I've had within the industry since I started doing this sort (infosec consulting) of work, in hopes that others will do the same.  My intention here is not to embarrass anyone, nor to be negative...rather, to just present humorous things that I've seen or experienced, as a kind of "behind the scenes" sort of thing.  I'm not sure at this point if I'm going to make these posts their own separate standalone posts, or include shorter stories along with other posts...I'll start by doing both and see what works.

War Dialing
One of the first civilian jobs I had after leaving active duty was with SAIC.  I was working with a small team...myself, a retired Viet Nam-era Army Colonel, and two other analysts...that was trying to establish itself in performing assessment services.  Anyone who's ever worked for a company like this has likely heard it described as "400 companies all competing with each other for the same work", and in our case, that was true.  We would sometimes lose work to another group within the company, and then be asked to assist them as their staffing requirements for the work grew.

This was back in 1998, when laptops generally came with a modem and a PCMCIA expansion slot, and your network interface card came in a small hard plastic case.  Also, most laptops still had 3.5" disk drives built in, although some came with an external unit that you connected to a port.

One particular engagement I was assigned to was to perform "war dialing" against a client located in one of the WTC towers.  So, we flew to New York, went to the main office and had our introductory meeting.  During the meeting, we went over our "concept of operations" (i.e., what we were planning to do) again, and again requested a separate area from which we could work, preferably something out of view of the employees, and away from the traffic patterns of the office (such as a conference room).  As is often the case, this wasn't something that had been set up for us ahead of time, so two of us ended up piling into an empty cubicle in the cube-farm...not ideal, but it would work for us.

At the time, the tools of choice for this work were Tone Loc and THC Scan.  I don't remember which one we were using, but we kicked off our scan using a range of phone numbers, without randomizing the list.  As such, two of us were hunkered down in this cubicle, with normal office traffic going on all around us.  We had turned on the speakers of the laptop we were using (being in a cubicle rather than a conference room meant we only had access to one phone line...), and leaned in really close so we could hear what was going on over the modem.  It was a game for us to listen to what was going on and try to guess if the system on the other end was a fax machine, someone's desk phone, or something else, assuming it picked up.

So, yeah...this was the early version of scanning for vulnerabilities.  This was only a few years after ISS had been formed, and the Internet Scanner product wasn't yet well known, nor heavily used.  While a scan was going on, there really wasn't a great deal to do, beyond monitoring the scan for problems, particularly something that might happen that we needed to tell the boss about; better that he hear it from us first, rather than from the client.

As we listened to the modem, every now and then we knew that we'd hit a desk phone (rather than a modem in a computer) because the phone would pick up and you'd hear someone saying "hello...hello..." on the other end.  After a while, we heard echoes...the sequence of numbers being dialed had gotten close enough to our cubicle that we could hear the person answering both through the laptop speakers and above the din of the office.  We knew that the numbers were getting closer, so we threw jackets over the laptop in an attempt to muffle the noise...we were concerned that the person who picked up the phone in the cubicles on either side of us would hear themselves.

Because of the lack of space and phone lines available for the work, it took us another day to finish up the scan.  After we finished, we had a check-out meeting with our client point of contact, who shared a funny story about our scan with us.  It seems that there was a corporate policy to report unusual events; there were posters all over the office, and apparently training for employees, telling them what an "unusual event" might look like, to whom to report it, etc.  So, after about a day and a half of the "war dialing", only one call had come in.  Our scan had apparently dialed two sequential numbers that terminated in the mainframe room, and the one person in the room felt that having to get up to answer one phone, then walk across the room to answer the other (both of which hung up), constituted an "unusual event"...that's how it was reported to the security staff.

About two years later, when I was working at another company, we used ISS's Internet Scanner, run from within the infrastructure, to perform vulnerability assessments.  This tool would tell us if the computer scanned had modems installed.  No more "war dialing" entire phone lists for me...it was deemed too disruptive or intrusive to the environment.

Tools, Links, From the Trenches, part deux

There's been considerable traffic online in various forums regarding tools lately, and I wanted to take a moment to not just talk about the tools, but the use of tools, in general.

Tools List
I ran across a link recently that pointed to this list of tools from Mary Ellen...it's a pretty long list of tools, but there's nothing about the list that talks about how the tools are used.

Take the TSK tools, for example.  I've used these tools for quite some time, but when I look at my RTFM-ish reference for the tools, it's clear that I don't use them to the fullest extent that they're capable.

LECmd
Eric Zimmerman recently released LECmd, a tool to parse all of the data out of an LNK file.

From the beginning of Eric's page for the tool:
In short, because existing tools didn't expose all available data structures and/or silently dropped data structures. 

In my opinion, the worst sin a forensics tool vendor can commit is to silently drop data without any warning to an end user. To arbitrarily choose to report or withhold information available from a file format takes the decision away from the end user and can lead to an embarrassing situation for an examiner.

I agree with Eric's statement...sort of.  As both an analyst and an open source tool developer, this is something that I've encountered from multiple perspectives.

As an analyst, I believe that it's the responsibility of the analyst to understand the data that they're looking at, or for.  If you're blindly running tools that do automated parsing, how do you know if the tool is missing some data structure, one that may or may not be of any significance?  Or, how do you correctly interpret the data that the tool is showing you?  Is it even possible to do correct analysis if data is missing and you don't know it, or the data's there but viewed or interpreted incorrectly?

Back when I was doing PFI work, our team began to suspect that the commercial forensics suite we were using was missing certain card numbers, something we were able to verify using test data.  I went into the user forum for the tool and asked about the code behind the IsValidCreditCard() function...specifically, "what is a valid credit card number?"  In response, I was directed to a wiki page on credit card numbers...but I knew from testing that this wasn't correct.  I persisted, and finally got word from someone a bit higher up within the company that we were correct; the function did not consider certain types of CCNs valid.  With some help, we wrote our own function that was capable of correctly locating and testing the full range of CCNs required for the work we were doing.  It was slower than the original built-in function, but it got the job done and was more accurate.  It was the knowledge of what we were looking for, and some testing, that led us to that result.
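
For anyone wondering what "valid" means beyond prefix and length rules, the baseline test is the Luhn (mod-10) check; a sketch of that piece (this is just the checksum, not the prefix/length logic we used, which I'm not reproducing here):

def luhn_ok(number):
    # Mod-10 (Luhn) check over a string of digits.
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_ok("4111111111111111"))   # True - a well-known test number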

As a tool developer, I've tried to keep up with the available data structures as much as possible.  For example, take a look at this post from June, 2013.  The point is that tool development evolves, in part due to what becomes available (i.e., new artifacts), as well as in part due to knowledge and awareness of the available structures.

With respect to RegRipper, it's difficult to keep up with new structures or changes to existing structures, particularly when I don't have access to those data types.  Fortunately, a very few folks (Eric, Mitch, Shafik, to name a few...) have been kind enough to share some sample data with me, so that I can update the plugins in question.

Something that LECmd is capable of is performing MAC address lookups for vendor names.  Wait...what?  Who knew that there were MAC addresses in LNK files/structures?  Apparently, it's been known for some time.  I think it's great that Eric's including this in his tool, but I have to wonder, how is this going to be used?  I'm not disagreeing with his tool getting this information, but I wonder, is "more data" going to give way to "less signal, more noise"?  Are incorrect conclusions going to be drawn by the analyst, as the newly available data is misinterpreted?

I've used the information that Eric mentions to illustrate that VMWare had been installed on the system at one time.  That's right...an LNK file had a MAC address associated with VMWare, and I was able to demonstrate that at one point, VMWare had been installed on the system I was examining.  In that case, it may have been possible that someone had used a VM to perform the actions that resulted in the incident alert.  As such, the information available can be useful, but it requires correct interpretation of the data.  While not the fault of the tool developer, there is a danger that having more information on the analyst's plate will have a detrimental effect on analysis.
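
For context on where that MAC address lives: the LNK tracker data block carries object identifier GUIDs, and when those GUIDs are version 1, the last six bytes are the MAC of the NIC on the host where the object identifier was generated.  A quick sketch of pulling it out of a GUID reported by a parser (the GUID below is one I made up):

import uuid

droid = uuid.UUID("2674a0c6-0ab2-11e6-9147-000c29a5b6c7")   # made-up example
if droid.version == 1:
    mac = "%012x" % droid.node
    print(":".join(mac[i:i + 2] for i in range(0, 12, 2)))   # 00:0c:29:a5:b6:c7
    # 00:0c:29 is one of the OUIs commonly associated with VMWare NICs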

My point is that I agree with Eric that information should be available to the analyst, but I also believe that it's the responsibility of the analyst to (a) recognize what information can be derived from various data sources, and (b) be able to correctly interpret and utilize that data.  After all, there are still folks out there, pretty senior folks, who do not correctly interpret ShimCache data, and believe that the time stamp associated with each entry is when that program was executed.

From the Trenches
I thought I'd continue with a couple of "war stories" from the early days of my commercial infosec consulting career...

In 1999, I was working on a commercial vulnerability assessment team with Trident Data Systems (TDS).  I say "commercial" because there were about a dozen of us on this team, with our offices on one side of the building, and the rest of the company was focused on fulfilling contracts for the federal government.  As such, we were somewhat 'apart' from the rest of the company, with respect to contracts, the tools we used, and our profit margin.  It was really good work, and I had an opportunity to use some of the skills I learned in the military while working with some really great folks, most of whom didn't have military experience themselves (some did).

One day, my boss (a retired Army LtCol) called me into his office to let me know that a client had signed a contract.  He laid out what was in the statement of work, and asked me who I wanted to take on the engagement.  Of course, that was a trick question, of sorts...instead of telling me who was available to go, I had the pick of everyone.  Fortunately, I got to take three people with me, two of whom were my first choice, and the third I picked when another person wasn't available.  I was also told that we'd be taking a new team member along to get them used to the whole "consulting" thing.  After all this was over with, I got my team together and let everyone know what we were doing, for whom, when we'd be leaving, etc.  Once that was done, I reached out to connect with my point of contact.

When the day came to leave, we met at the office and took a van to the airport.  Everyone was together, and we had all of our gear.  We made our way to the airport, flew to the city, got to our hotel and checked in, all without any incidents or delays.  So far, so good.  What made it really cool was that while I was getting settled in my room, there was a knock at the door...the hotel staff was bringing around a tray of freshly baked (still warm) cookies, and cartons of milk on ice, for all of the new check-ins.  Score!

The next morning, we met for breakfast and did a verbal walk-through of our 'concept of operations' for the day.  I liked to do this to make sure that everyone was on the same sheet of music regarding not just the overall task, but also their individual tasks that would help us complete the work for the client.  We wanted to start off looking and acting professional, and deliver the best service we could to the client.  But we also wanted to be sure that we weren't so willing to help that we got roped into doing things that weren't in the statement of work, to the point where we'd burned the hours but hadn't finished the work that we were there to do.

The day started off with an intro meeting with the CIO.  Our point of contact escorted us up to the CIO's office and we began our introductions, and a description of the work we'd be doing.  Our team was standing in the office (this wasn't going to take long), with our laptop bags at our feet.  The laptops were still in the bags, and hadn't been taken out, let alone even powered on.

Again, this was 1999...no 'bluetooth' (no one had any devices that communicated in that manner), and we were still using laptops that, in order to connect to a network, required you to plug a PCMCIA card into one of the slots.

About halfway through our chat with the CIO, his office door popped open, and an IT staff member stuck their head in, to share something urgent.  He said, "...the security guys, their scan took down the server." He never even looked at us or acknowledged our presence in the room...after all, we were "the security guys".  The CIO acknowledged the statement, and the IT guy left.  No one said a word about what had just occurred...there seemed to be an understanding, as if our team would say, "...we hear that all the time...", and the CIO would say, "...see what I have to work with?"

Links

Plugin Update
Thanks to input (and a couple of hives) from two co-workers yesterday, I was able to update the appcompatcache.pl RegRipper plugin to work correctly with Windows 10 systems.  In one case, the hive I was testing was reportedly from a Surface tablet.

Last year, Eric documented the changes he'd observed in the structure format for Windows 10; they appear to be similar to those for Windows 8.1.

Something interesting that I ran across was similar to the last two images in Eric's blog post; specifically, odd entries that appear in a format similar to the following (which will appear wrapped):

000000000004000300030000000a000028000000014c9E2F88E3.Twitterwgeqdkkx372wm

If you look closely at the entries in the images from Eric's blog, you'll see that the time stamp reads "12/31/1600 5:00:00pm -0700".  Looking at the raw data for one of the examples I had available indicated that the 64-bit time stamp was "00 09 00 00 00 00 00 00".  The entry at the offset should be a 64-bit FILETIME object, but for some reason with the oddly-formatted entries, what should be the time stamp field is...something else.  Eric's post is from April 2015 (almost a year ago) and as yet, there doesn't appear to have been any additional research conducted as to what these entries refer to.
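
For reference, the field is a 64-bit FILETIME (100-nanosecond intervals since 1 Jan 1601 UTC), so a raw value of "00 09 00 00 00 00 00 00" decodes to a fraction of a second past that epoch, which a viewer set to UTC-0700 renders as 12/31/1600 5:00:00pm.  A quick sketch of the conversion:

import struct
from datetime import datetime, timedelta

raw = bytes.fromhex("0009000000000000")
(filetime,) = struct.unpack("<Q", raw)    # little-endian 64-bit value: 2304
dt = datetime(1601, 1, 1) + timedelta(microseconds=filetime / 10)
print(filetime, dt)                       # 2304 1601-01-01 00:00:00.000230 (UTC)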

For the appcompatcache.pl plugin, the time stamp is not included in the output if it's essentially 0.  For the appcompatcache_tln.pl plugin, the "0" time stamp value is still included in TLN output, so you'll likely have a few entries clustered at 1 Jan 1970.

Hunting for Executable Code in Windows Environments
I ran across this interesting blog post this morning.  What struck me most about it is that it's another means for "hunting" in a Windows environment that looks to processes executing on the endpoint.

This tool, PECapture (runs as a GUI or a service), captures a copy of the executable, as well as the execution time stamp and a hash.

I have to say that as much as I think this is a great idea, it doesn't appear to capture the full command line, which I've found to be very valuable.  Let's say an adversary is staging the data that was found for exfil, and uses a tool like WinRAR; capturing the command line would also allow you to capture the password they use.  In a situation like that, I don't need a copy of rar.exe (or whatever it's been named to...), but I do need the full command line.

I think that for the time being, I'll continue using Sysmon, but I'll add that if you're doing malware testing, having both Sysmon and PECapture running on your test system might be a very good idea.  One of the things that some malware will do is run intermediate, non-native executables, which are then deleted after use, so having the ability to capture a copy of the executable would be very useful.

I do think that it's interesting that this tool is yet another that does part of what Carbon Black does...

Yet Another "From the Trenches"
I had to dig back further into the vault for one of my first "consulting" gigs...

Years and years ago (I should've started, "Once, in a galaxy far, far away...."), while I was still on active duty, I applied for and was able to attend the Naval Postgraduate School.  While preparing to conduct testing and data collection for my master's thesis, I set up a small network in an unused room; the network consisted of a 10-Base2 network (server, two workstations) connected to a 10-BaseT network (server, 2 workstations), connected to Cisco routers, and the entire thing was connected to the campus 10-Base5 backbone via a "vampire" tap.  The network servers were Windows NT 3.51, and the workstations were all Windows 95, running on older systems that I'd repurposed; I had spent considerable time searching the MS KnowledgeBase, just to get information on how to set up Win95 on most of the systems.

For me, the value of setting up this network was what I learned.  If you looked at the curriculum for the school at the time, you could find six classes on "networking", spread across three departments...none of which actually taught students to set up a network.  So for me, this was invaluable experience.

While I was processing out of the military, I spent eight months just hanging around the Marine Detachment at DLI.  I was just a "floater" officer, and spent most of my time just making the Marines nervous.  However, I did end up with a task...the Marine Commandant, Gen Krulak, had made the statement that Marines were authorized to play "Marine DOOM", which was essentially a Marine-specific WAD for DOOM.  So, in the spring of '97, the Marine Det had purchased six Gateway computer systems, and had them linked together via a 10BaseT network (the game ran on a network protocol called "IPX").  The systems were all set up on a circular credenza-type desk, with six individual stations separated by partitions.  I'd come back from exercising during lunch and see half a dozen Marines enthusiastically playing the game.

At one point, we had a Staff Sergeant in the detachment...I'm not sure why he was there, as he didn't seem to be assigned to a language class, but being a typical Marine SSgt, he began looking for an office to make his own.  He settled on the game room, and in order to make the space a bit more usable, decided to separate the credenza-desk in half, and then turn the flat of each half against the opposite wall.  So the SSgt got a bunch of Marines (what we call a "workin' party") and went about disassembling the small six-station LAN, separating the credenza and turning things around.  They were just about done when I happened to walk by the doorway, and I popped my head in just to see how things were going.  The SSgt caught my eye, and came over...they were trying to set the LAN back up again, and it wasn't working.  The SSgt was very enthusiastic, as apparently they were almost done, and getting the LAN working again was the final task.  So putting on my desktop support hat, I listened to the SSgt explain how they'd carefully disassembled and then re-assembled it EXACTLY as it had been before.  I didn't add the emphasis with the word "exactly"...the SSgt had become much more enthusiastic at that word.

So I began looking at the backs of the computer systems nearest to me, and sure enough all of the systems had been connected.  When I got to the system that was at the "end", I noticed that the coax cable had been run directly into the connector for the network card.  I knew enough about networking and Marines that I had an idea of what was going on...and sure enough, when I moved the keyboard aside, I saw the t-connector and 50 ohm terminator sitting there.  To verify the condition of the network, I asked the SSgt to try the command to test the network, and he verified that there was "no joy".  I was reaching down into one of the credenza stations, behind the computer, where no one could see what I was doing...I quickly connected the terminator to the t-connector, connected it to the jack on the NIC, and then reconnected the coax cable.  I told the SSgt to try again, and was almost immediately informed (by the Marines' shouts) that things were working again.  The SSgt came running over to ask me what I'd done.

To this day, I haven't told him.  ;-)

Updates

RegRipper Plugin Updates
I made some updates to a couple of plugins recently.  One was to the networklist.pl/networklist_tln.pl plugins; the update was to add collecting subkey names and LastWrite times from beneath the Nla\Cache\Intranet key.  At this point, I'm not 100% clear on what the dates refer to, but I'm hoping that will come as the data is observed and added to timelines.

I also updated the nic2.pl plugin based on input from Yogesh Khatri.  Specifically, he found that in some cases, there's a string (REG_SZ) value named "DhcpNetworkHint" that, if you reverse the individual nibbles of the string, will spell out the SSID.

This is an interesting finding in a couple of ways.  First, Yogesh found that by reading the string in hex and reversing the nibbles, you'd get the SSID.  That in itself is pretty cool.  However, what this also says is that if you're doing a keyword search for the SSID, that search will not return this value.
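
Here's a sketch of that decoding, under the assumption (based on Yogesh's description) that the hint is the SSID hex-encoded with the two nibbles of each byte swapped; the sample hint below is one I constructed for illustration, not data from his systems:

def ssid_from_hint(hint):
    # Swap the two hex digits within each pair, then decode the bytes as ASCII.
    swapped = "".join(hint[i + 1] + hint[i] for i in range(0, len(hint) - 1, 2))
    return bytes.fromhex(swapped).decode("ascii", errors="replace")

print(ssid_from_hint("47D6F62696C656"))   # "tmobile", under that assumption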

jIIr
Corey's most recent blog post, Go Against The Grain, is a pretty interesting read, and an interesting thought exercise.  As a consultant, I'm not usually "in" an infrastructure long enough to try to effect change in this manner, but it would be very interesting to hear how others may have utilized this approach.

"New" Tools
Eric Zimmerman recently released a tool for parsing the AmCache.hve file, which is a "new" file on Windows systems that follows the Registry hive file format.  So, the file follows the same format as the more "traditional" Registry hive files, but it's not part of the Registry that we see when we open RegEdit on a live system.

Yogesh Khatri blogged about the AmCache.hve file back in 2013 (here, and then again here).

While Eric's blog post focuses on Windows 10, it's important to point out that the AmCache.hve file was first seen on Windows 8 systems, and I started seeing it in images of Windows 7 systems around the beginning of 2015.  Volatility has a parser for AmCache.hve files found in memory, and RegRipper has had a plugin to parse the AmCache.hve file since Dec, 2013.

I applaud Eric for adding a winnowing capability to his tool; in an age where threat hunting is a popular topic for discussion, data reduction (or, "how do I find the needle in the haystack with no prior knowledge or experience?") is extremely important.  I've tried doing something similar with my own tools (including, to some extent, some RegRipper plugins) by including an alerting capability based on file paths found in various data sources (i.e., Prefetch file metadata, Registry value data, etc.).  The thought behind adding this capability was that items that would likely be of interest to the analyst would be pulled out.  However, to date, the one comment I've received about this capability has been, "...if it says 'temp was found in the path', what does that mean?"

Again, Eric's addition of the data reduction technique to his tool is really very interesting, and is very likely to be extremely valuable.

Shell Items
I had an interesting chat with David Cowen recently regarding stuff he'd found in LNK files; specifically, Windows shortcut/LNK files can contain shell item ID lists, which can contain extremely valuable information, depending upon the type of examination you're performing.  Specifically, some shell item ID lists (think shellbags) comprise paths to files, such as those found on network shares and USB devices.  In many cases, the individual shell items contain not only the name of a folder in the path, but also timestamp information.  Many of the shell items also contain the MFT reference number (record number and sequence number combined) for the folder.  Using this information, you can build a historical picture of what some portion of the file system looked like, at a point in the past.
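
As a side note on those MFT reference numbers: they're stored as a single 64-bit value, with the record number in the low 48 bits and the sequence number in the high 16 bits, so splitting one apart is straightforward (the sample value is made up):

def split_mft_reference(ref):
    # Low 48 bits: MFT record number; high 16 bits: sequence number.
    return ref & 0xFFFFFFFFFFFF, ref >> 48

record, sequence = split_mft_reference(0x0002000000A5C3)
print(record, sequence)   # 42435 2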

Conference Presentations And Questions
When I present at a conference and open up for questions, one question I hear over and over is, "What's new in Windows (insert next version number)?" I'm often sort of mystified by questions like this, as I don't tend to see the "newest hotness" as something that requires immediate attention if analysts aren't familiar with the "old hotness", such as ADSs, Registry analysis, etc.

As an example, I saw this tweet not long ago, which led to this SANS ISC Handler's Diary post.  Jump Lists are not necessarily "new hotness", and have been part of Windows systems since Windows 7.  As far as resources go, the ForensicsWiki Jump Lists page was originally created on 23 Aug 2011.  I get that the tweet was likely bringing attention back to something of value, rather than pointing out something that is "new".

As a bit of a side note, I tend to use my own tools for parsing files like Jump Lists, because they allow me to incorporate the data into the timelines I create, if that's something I need to do.

Links: Plugin Updates and Other Things

Plugin Updates
Mari has done some fascinating research into MS Office Trust Records and posted her findings here. Based on her sharing her findings and sample data, I was able to update the trustrecords.pl plugin.  Further, Mari's description of what she had done was so clear and concise that I was able to replicate what she did and generate some of my own sample data.

The last update to the trustrecords.pl plugin was from 16 July 2012; since then, no one's used it, or apparently had any issues with it or questions about what it does.  For this update, I added a check for the VBAWarnings value, and added parsing of the last 4 bytes of the TrustRecords value data, printing "Enable Content button clicked" if the data is in accordance with Mari's findings.  I also changed how the plugin determines which version of Office is installed, and updated the trustrecords_tln.pl plugin accordingly.
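For anyone who wants to replicate the checks outside of RegRipper, here's a minimal sketch in Python, assuming (per my reading of Mari's findings) that the last four bytes of the TrustRecords value data are set to 0x7FFFFFFF when the "Enable Content" button has been clicked, and assuming the commonly documented VBAWarnings mappings; treat both as assumptions to verify against your own test data:

import struct

# assumed mapping of VBAWarnings data to the Trust Center Macro Settings
VBA_WARNINGS = {
    1: "Enable all macros",
    2: "Disable all macros with notification",
    3: "Disable all macros except digitally signed macros",
    4: "Disable all macros without notification",
}

def check_trust_record(doc_path, value_data):
    # value_data is the raw binary data of the TrustRecords value; the
    # value name (doc_path) is the path to the trusted document
    last_dword, = struct.unpack("<L", value_data[-4:])
    clicked = " [Enable Content button clicked]" if last_dword == 0x7FFFFFFF else ""
    return doc_path + clicked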

So, from the sample data that Mari provided, the output of the trustrecords.pl plugin looks like this:

**Word**
----------
Security key LastWrite: Wed Feb 24 15:58:02 2016 Z
VBAWarnings = Enable all macros

Wed Feb 24 15:08:55 2016 Z : %USERPROFILE%/Downloads/test-document-domingo.doc
**Enable Content button clicked.

...and the output of the trustrecords_tln.pl plugin looks like this:

1456326535|REG|||TrustRecords - %USERPROFILE%/Downloads/test-document-domingo.doc [Enable Content button clicked]

Addendum, 25 Feb
Default Macro Settings (MSWord 2010)
After publishing this blog post yesterday, there was something that I ran across in my own testing that I felt was important to point out.  Specifically, when I first opened MSWord 2010 and went to the Trust Center, I saw the default Macro Settings, illustrated in the image to the right; this is with no VBAWarnings value in the Registry.  Once I started selecting other options, the VBAWarnings value was created.

What this seems to indicate is that if the VBAWarnings value exists in the Registry, even if the Macro Settings appear as seen in the image above (the data for the value would be "2"), then someone specifically changed the setting.  So, if the VBAWarnings value doesn't exist in the Registry, it appears (based on limited testing) that the default behavior is to disable macros with a notification.  If the setting is changed, the VBAWarnings value is created.  If the VBAWarnings value is set to "2", then it may be that the Macro Settings were set to something else, and then changed back.

For example, take a look at the plugin output I shared earlier in this post.  You'll notice that the LastWrite time of the Security key is 50 min later than the TrustRecords time stamp for the document.   In this case, this is because Mari produced the sample data (hive) for the document, and then later modified the Macro Settings after I'd reached back out to her and said that the hive didn't contain a VBAWarnings value.

Something else to think about...has anyone actually used the reading_locations.pl plugin?  If you read Jason's blog post on the topic, it seems like it could be pretty interesting in the right instance or case.  For example, if an employee was thought to have modified a document and claimed that they hadn't, this data might show otherwise.
**end addendum**

Also, I ran across a report of malware using a persistence mechanism I hadn't seen before, so I updated termserv.pl to address the "new" key.

Process Creation Monitoring
My recent look into and description of PECapture got me thinking about process creation monitoring again.

Speaking of process creation monitoring, Dell SecureWorks recently made information publicly available regarding the AdWind RAT.  If you read through the post, you'll see that the RAT infection process spawns a number of external commands, rather than using APIs to do the work.  As such, if you're recording process creation events on your endpoints, filters can be created to watch for these commands in order to detect this (and other) activity.

Malicious LNK
Wait, what?  Since when did those two words go together?  Well, as of the morning of 24 Feb, the ISC handlers have made it "a thing" with this blog post.  Pretty fascinating, and thanks to the handlers for walking through how they pulled things out of the LNK file; it looks as if their primary tool was a hex editor.

A couple of things...

First, process creation monitoring of what this "looks like" when executing would be very interesting to see.  One thing that I've found interesting of late is how DFIR folks can nod their heads knowingly at something like that, but when it comes to actual detection, that's another matter entirely.  Yes, the blog post lists the command line used, but the question is, how would you detect this if you had process creation monitoring in place?

Second, in this case, the handlers report that "the ACE file contains a .lnk file"; so, the ACE file doesn't contain code that creates the .lnk file, but instead contains the actual .lnk file itself.  Great...so, let's grab Eric Zimmerman's LECmd tool, or my own code, and see what the NetBIOS name is of the system on which the LNK file was created.  Or, just go here to get that (I see the machine name listed, but not the volume serial number...).  But I'd like to parse it myself, just to see what the shell items "look like" in the LNK file.
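As a quick-and-dirty alternative to a full parser (this is not LECmd or my own code, just a standalone sketch), the NetBIOS machine name can be pulled from the LNK file's TrackerDataBlock by scanning for the 0xA0000003 block signature; per the MS-SHLLINK documentation, the 16-byte MachineID field sits 12 bytes past the signature:

import struct, sys

def lnk_machine_id(path):
    data = open(path, "rb").read()
    idx = data.find(struct.pack("<L", 0xA0000003))   # TrackerDataBlock signature
    if idx == -1:
        return None
    machine = data[idx + 12:idx + 28]                # NULL-padded ANSI MachineID
    return machine.split(b"\x00")[0].decode("ascii", "replace")

if __name__ == "__main__":
    print(lnk_machine_id(sys.argv[1]))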

As a side note, it's always kind of fascinating to me how some within the "community" will have data in front of them, and for whatever reason, just keep it to themselves.  Just as an example (and I'm not disparaging the work the handlers did, but commenting on an observation...), the handlers have the LNK file, but they're not sharing the vol SN, NetBIOS name, or shell items included in the LNK file, just the command line and the embedded payload.  I'm sure that this is a case of "this is what we feel is important, and the other stuff isn't...", but what happens when others find something similar?  How do we start correlating, mapping and linking similar incidents if data that might reveal something useful about the author is deemed unnecessary by some?

Like I said, not disparaging the work that the handlers did, just thinking out loud a bit.

8Kb One-Liner
There was a fascinating post over at Decalage recently regarding a single command line that was 8Kb long.  They did a really good walk-through for determining what a macro was up to, even after the author took some pretty significant steps to make getting to a human-readable format tough.

I think it would be fascinating to get a copy of this sample and run it on a system with SysMon running, to see what the process tree looks like for something like this.  That way, anyone using process creation monitoring could write a filter rule or watchlist to monitor for this in their environment.

From the Trenches
The "From the Trenches" stuff I've been posting doesn't seem to have generated much interest, so I'm going to discontinue those posts and move on to other things.

Event Logs

I've discussed Windows Event Log analysis in this blog before (here, and here), but it's been a while, and I've recently been involved in some analysis that has led me to believe that it might be a good idea to bring up the topic again.

Formats
I've said time and again...to the point that many of you are very likely tired of hearing me say it...that the version of Windows that you're analyzing matters.  The available artifacts and their formats differ significantly between versions of Windows, and any discussion of (Windows) Event Logs is a great example of this fact.

Windows XP and 2003 use (I say "use" because I'm still seeing these systems in my analysis; in the past month alone I've analyzed a small handful of Windows 2003 server images) a logging format referred to as "Event Logs".  MS does a great job in documenting the structure of the Event Log/*.evt file format, header, records, and even the EOF record structure.  In short, these Event Logs are a "circular buffer" to which individual records are written.  The limiting factor for these Event Logs is the file size; as new records are written, older records will simply be overwritten.  These systems have three main Event Logs; Security (secevent.evt), System (sysevent.evt), and Application (appevent.evt).  There may be others but they are often application specific.
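As a quick illustration of how simple the documented format is, here's a minimal sketch that reads the 48-byte *.evt file header (twelve little-endian DWORDs, with the "LfLe" signature in the second field):

import struct

def read_evt_header(path):
    fields = ("HeaderSize", "Signature", "MajorVersion", "MinorVersion",
              "StartOffset", "EndOffset", "CurrentRecordNumber",
              "OldestRecordNumber", "MaxSize", "Flags", "Retention",
              "EndHeaderSize")
    with open(path, "rb") as fh:
        hdr = dict(zip(fields, struct.unpack("<12L", fh.read(48))))
    if hdr["Signature"] != 0x654C664C:               # "LfLe"
        raise ValueError("not an *.evt file")
    return hdr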

Windows Vista systems and beyond use a "binary XML" format for the Windows Event Log/*.evtx files.  Besides the different format structure for event records and the files themselves, perhaps one of the most notable aspects of Windows Event Logs is the number of log files available.  On a default installation of Windows 7, I counted 140+ *.evtx files; on a Windows 10 system, I counted 289 files.  Now, this does not mean that records are written to these logs all the time; in fact, some of the Windows Event Log files may never be written to, based on the configuration and use of the system.  However, if you're following clusters of indicators as part of your analysis process (i.e., looking for groups of indicators close together, rather than one single indicator or event, that indicate a particular action), it's likely that you'll find more indications of the event in question.

Tools
Of the tools I use (and provide along with the book materials) in my daily work, there are two specifically related to (Windows Event) Logs.

First, there is evtparse.exe.  This tool does not use the Windows API; instead, it parses Event Log/*.evt files on a binary basis, bypassing the header information and basically "carving" the *.evt files for valid records.

The ability to parse individual event records from *.evt files, regardless of what the file header says with respect to the number of event records, etc., is valuable.  I originally wrote this tool after I ran into a case where the Event Logs had been cleared.  When this occurred, the "current" *.evt files were deleted (sectors comprising the files became part of unallocated space) and "new" *.evt files were created from available sectors within unallocated space.  What happened was that one of the *.evt files contained header information indicating that there were no event records in the file, but there was clearly something there.  I was able to recover or "carve" valid event records from the file.  I've also used evtparse.pl as the basis for a tool that would carve unstructured data (pagefile, unallocated space, even a memory dump) for *.evt records.
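The general approach is straightforward; the sketch below (not evtparse itself, just an illustration of the technique) scans a blob of data for the "LfLe" record signature, reads the record length that precedes it, and pulls the record number and TimeGenerated fields from the fixed portion of the EVENTLOGRECORD structure.  The length bounds are just sanity checks:

import struct

def carve_evt_records(data):
    recs = []
    idx = data.find(b"LfLe")
    while idx != -1:
        start = idx - 4                              # record length DWORD precedes the signature
        if start >= 0:
            length, = struct.unpack("<L", data[start:start + 4])
            if 0x38 <= length <= 0x10000 and start + length <= len(data):
                rec_num, time_gen = struct.unpack("<2L", data[idx + 4:idx + 12])
                recs.append((rec_num, time_gen, start))
        idx = data.find(b"LfLe", idx + 1)
    return recs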

The other tool I use is evtxparse.exe.  Note the "x" in the name.  This is NOT the same thing as evtparse.exe.  Evtxparse.exe is part of a set of tools, used with wevtx.bat, LogParser (a free tool from MS), and eventmap.txt to parse either an individual *.evtx file or multiple *.evtx files into the timeline/TLN format I use for my analysis.  The wevtx.bat file launches LogParser to parse the file(s), writing the parsed records to a temporary file, which is then parsed by evtxparse.exe.  During that parsing, the eventmap.txt file is used to apply a modicum of "threat intel" (in short, stuff I've learned from previous engagements...) to the entries being included in the timeline events file, so that it's easier for me to identify pivot points for analysis.

A major caveat to this is that LogParser relies on the native DLLs/API of the system on which it's being run.  This means that you can't successfully run LogParser on Windows XP while trying to parse *.evtx files, nor can you successfully run LogParser on Windows 10 to parse Windows 2003 *.evt files (without first running wevtutil to change the format of the *.evt files to *.evtx).

Both tools are provided as Windows executables, along with the Perl source code.

When I run across *.evtx files that LogParser has difficulty parsing, my go-to tool is Willi Ballenthin's EVTXtract.  There have been several instances where this tool set has worked extremely well, particularly when the Windows Event Logs that I'm interested in are reported by other tools as being "corrupt".  In one particular instance, we'd found that the Windows Event Logs had been cleared, and we were able to not only retrieve a number of valid event records from unallocated space, but we were able to locate THE smoking gun record that we were looking for.

Gaps
Not long ago, I was asked a question about gaps in Windows Event Logs; specifically, is there something out there that allows someone to remove specific records from an active Windows Event Log on a live machine?  Actually, this question has come up twice since the beginning of this year alone, in two different contexts.

There has been talk that tools exist, or have existed, for removing specific records from Windows Event Logs on live systems, but all the talk comes back to the same thing...no one I've ever spoken to has any actual data showing that this actually happened.  There's been mention of a tool called "WinZapper" likely having been used, but when I've asked whether the records were parsed and sorted by record number to confirm this, no one has any explicit data to support the fact that the tool had been used; it all comes back to speculation, and "it could have been used".

As I mentioned, this is pretty trivial to check.  Wevtx.bat, for example, contains a LogParser command line that includes printing the record number for each event.  You can run this command line on a Windows 7 (or 10) system to parse *.evtx files, or on a Windows XP system to parse *.evt files, and get similar results.

Evtparse.exe (note that there is no "x" in the tool name...) includes a switch for listing event records sequentially, displaying only the record number and time generated value for each event record.  Using either tool, you can simply import the output into Excel, sort on the record numbers, and look for gaps manually/visually, or write a script that looks for gaps in the record numbers.
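The script itself is trivial; a minimal example of the gap check, given a list of record numbers pulled from either tool's output:

def find_gaps(record_numbers):
    nums = sorted(record_numbers)
    # report any place where consecutive record numbers differ by more than one
    return [(prev, cur) for prev, cur in zip(nums, nums[1:]) if cur - prev > 1]

print(find_gaps([1001, 1002, 1003, 1007, 1008]))     # -> [(1003, 1007)]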

So, when someone asks me if it's possible that specific event records were removed from a log, the first question I would ask in response would be, were records removed from the log?  After all, this is pretty trivial to check, and if there are no gaps, then the question itself becomes academic.

Creating Event Records
There are a number of ways to create event records on live systems, should you be inclined to do so.  For example, MS includes the eventcreate.exe tool, which allows you to create event records (with limitations; be sure to read the documentation).

Visual Basic can be used to write to the Event Log; for an example, see this StackOverflow post.  Note that the post also links to this MSDN page, but as is often the case on the InterWebs, the second response goes off-topic.

You can also use Powershell to create new Windows Event Logs, or create event records.

Links and Stuff

Sysmon
I've spent some time discussing MS's Sysmon in this blog, describing how useful it can be, not only in a malware testing environment, but also in a corporate environment.

Mark Russinovich gives a great use case for Sysmon, from RSA in this online PowerPoint presentation.  If you have any questions about Sysmon and how useful it can be, this presentation is definitely worth the time to browse through it.

Ransomware and Computer Speech
Ransomware has been in the "news" quite a bit lately; not just the opportunistic stuff like Locky, but also what appears to be more targeted stuff.  IntelSecurity has an interesting write-up on the Samsam targeted ransomware, although a great deal of the content of that PDF is spent on the code of the ransomware, and not so much the TTPs employed by the threat group.  This sort of thing is a great way to showcase your RE skillz, but may not be as relevant to folks who are trying to find this stuff within their infrastructure.

I ran across a write up regarding a new type of ransomware recently, this one called Cerber.  Apparently, this particular ransomware, as part of the infection, drops a *.vbs script on the system that makes the computer tell the user that it's infected.  Wait...what?

Well, I started looking into it and found several sites that discussed this, and provided examples of how to do it.  It turns out that it's really pretty simple, and depending upon the version of Windows you're using, you may have a greater range of options available.  For example, per this site, on Windows 10 you can select a different voice (a female "Anna" voice, rather than the default male "David" voice), as well as change the speed or volume of the speech.
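The artifact Cerber drops is a *.vbs script, but the same SAPI.SpVoice COM object can be driven from Python (via pywin32) if you want to see the technique in action without touching the malware; the message text here is obviously just an illustration:

import win32com.client

voice = win32com.client.Dispatch("SAPI.SpVoice")
voice.Rate = -2        # slow the speech down a bit
voice.Volume = 100
voice.Speak("Attention. Your documents have been encrypted.")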

...and then, much hilarity ensued...

Document Macros
Okay, back to the topic of ransomware, some of it (Locky, for example) ends up on systems as a result of the user opening a document that contains macros, and choosing to enable the content.

If you do find a system that's been infected (not just with ransomware, but anything else, really...), and you find a suspicious document, this presentation from Decalage provides a really good understanding of what macros can do.  Also, take a look at this blog post, as well as Mari's post, to help you determine if the user chose to enable the content of the MS Office document.

Why bother with this at all?  That's a great question, particularly in the face of ransomware attacks, where some organizations are paying tens or hundreds of thousands of dollars to get their documents back...how can they then justify paying for a DFIR investigation?  Well, my point is this...if you don't know how the stuff gets in, you're not going to stop it the next time it (or something else) gets in.

You need to do a root cause investigation.

Do not...I repeat, do NOT...base any decisions made after an infection, compromise, or breach on assumption or emotion.  Base them on actual data, and facts.  Base them on findings developed from all the data available, not just some of it, with the gaps filled in with speculation.

Jump Lists
One way to determine which files a user had accessed, and with which application, is by analyzing Jump Lists.  Jump Lists are a "new" artifact, as of Windows 7, and they persist through to Windows 10.  Eric Zimmerman recently posted on understanding Jump Lists in depth; as you would expect, his post is written from the perspective of a tool developer.

Eric noted that the format for the DestList stream on Windows 10 systems has changed slightly...that an offset changed.  It's important to know and understand this, as it does affect how tools will work.

Mysterious Records in the Index.dat File
I was conducting analysis on a Windows 2003 server recently, and I found that a user account created in Dec 2015 contained activity within the IE index.dat file dating back to 2013...and like any other analyst, I thought, "okay, that's weird".  I noted it in my case notes and continued on with my analysis, knowing that I'd get to the bottom of this issue.

First, parsing the index.dat.  Similar to the Event Logs, I've got a couple of tools that I use, one that parses the file based on the header information, and the other that bypasses the header information all together and parses the file on a binary basis.  These tools provide me with visibility into the records recorded within the files, as well as allowing me to add those records to a timeline as necessary.  I've also developed a modified version of Jon Glass's WebCacheV01.dat parsing script that I use to incorporate the contents of IE10+ web activity database files in timelines.

So, back to why the index.dat file for a user account (and profile) created in Dec 2015 contained activity from 2013.  Essentially, there was malware on the system in 2013 running with System privileges and utilizing the WinInet API, which resulted in web browser activity being recorded in the index.dat file within the "Default User" profile.  As such, when the new user account was created in Dec 2015, and the account was used to access the system, the profile was created by copying content from the "Default User" profile.  As IE wasn't being used/launched via the Windows Explorer shell (another program was using the WinInet API), the index.dat file was not subject to the cache clearance mechanisms we might usually expect to see (by default, using IE on a regular basis causes the cache to be cleared every 20 days).

Getting to the bottom of the analysis didn't take days or weeks of analysis...it just took a few minutes to finish up documenting (yes, I do that...) what I'd already found, and then circling back to confirm some findings, based on a targeted approach to analysis.

Links

RegRipper Plugin Update
Okay, this isn't so much an update as it is a new plugin.  Patrick Seagren sent me a plugin called cortana.pl, which he's been using to extract Cortana searches from the Registry hives.  Patrick sent the plugin and some test data, so I tested the plugin out and added it to the repository.

Process Creation Monitoring
When it comes to process creation monitoring, there appears to be a new kid on the block.  NoVirusThanks is offering their Process Logger Service free for personal use.

Looking at the web site, the service appears to record process creation event information in a flat text file, with the date and time, process ID, as well as the parent process ID.  While this does record some basic information about the processes, it doesn't look like it's the easiest to parse and include in current analysis techniques.

Other alternatives include native Windows auditing for Process Tracking (along with an update to improve the data collected), installing Sysmon, or opting for a commercial solution such as Carbon Black.  Note that incorporating the process creation information into the Windows Event Log (via either of the first two approaches) means that the data can be pulled from live systems via WMI or Powershell, forwarded to a central logging server (Splunk?), or extracted from an acquired image.
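As one example of pulling that data from a live system (wevtutil here, though WMI or PowerShell work just as well), the sketch below wraps wevtutil's XPath query support to grab Security event ID 4688 records; Sysmon's process creation events would be event ID 1 in the Microsoft-Windows-Sysmon/Operational channel:

import subprocess

def recent_process_creation_events(count=50):
    cmd = ["wevtutil", "qe", "Security",
           "/q:*[System[(EventID=4688)]]",
           "/f:text", "/c:%d" % count, "/rd:true"]
    return subprocess.check_output(cmd).decode("utf-8", "replace")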

Process creation monitoring can be extremely valuable for detecting and responding to things such as Powershell Malware, as well as providing critical information for responders to determine the root cause of a ransomware infection.

AirBusCyberSecurity recently published this post that walks through dynamic analysis of "fileless" malware; in this case, Kovter.  While it's interesting that they went with a pre-compiled platform, pre-stocked with monitoring tools, the results of their analysis did demonstrate how powerful and valuable this sort of technique (monitoring process creation) can be, particularly when it comes to detection of issues.

As a side note, while I greatly appreciate the work that was done to produce and publish that blog post, there are a couple of things that I don't necessarily agree with in the content that begin with this statement:

Kovter is able to conceal itself in the registry and maintain persistence through the use of several concealed run keys.

None of what's done is "concealed".  The Registry is essentially a "file system within a file", and none of what the malware does with respect to persistence is particularly "concealed".  "Run" keys have been used for persistence since the Registry was first used; if you're doing any form of DFIR work and not looking in the Run keys, well, that still doesn't make them "concealed".

Also, I'm not really sure I agree with this "fileless" thing.  Just because persistence is maintained via Registry value doesn't make something "fileless".

Ransomware and Attribution
Speaking of ransomware engagements, a couple of interesting articles have popped up recently with respect to ransomware attacks and attribution.  This recent Reuters article shares some thoughts from analysts regarding attribution for observed attacks.  Shortly after this article came out, Val Smith expanded upon information from the article in his blog post, and this ThreatPost article went on to suggest that what analysts are seeing is really "false flag" operations.

While there are clearly theories regarding attribution for the attacks, there doesn't appear to be any clear indicators or evidence...not that are shared, anyway...that tie the attacks to a particular group, or geographic location.

This article popped up recently, describing how another hospital was hit with ransomware.  What's interesting about the article is that there is NO information about how the bad guys gained access to the systems, but the author of the article refers to and quotes a TrustWave blog post; is the implication that this may be how the systems were infected?  Who knows?

Carving
David Cowen recently posted a very interesting article, in which he shared the results of tool testing, specifically of several file carving tools.  I've seen comments from others who've read this same post saying that David ranked one tool or another "near the bottom", but to be honest, that doesn't appear to be the case at all.  The key to this sort of testing is to understand the strengths and "weaknesses" of various tools.  For example, bulk extractor was listed as the fastest tool in the test, but David also included the statement that it would benefit from more filters, and BE was the only free option.

Testing such as this, as well as what folks like Mari have done, is extremely valuable in not only extending our knowledge as a community, but also for showing others how this sort of thing can be done, and then shared.

Malware Analysis and Threat Intel
I ran across this interesting post regarding Dridex analysis recently...what attracted my attention was this statement:

...detail how I go about analyzing various samples, instead of just presenting my findings...

While I do think that discussing not just the "what" but also the "how" is extremely beneficial, I'm going to jump off of the beaten path here for a bit and take a look at the following statement:

...got the loader binary off virustotal...

The author of the post is clearly stating where they got the copy of the malware that they're analyzing in the post, but this statement jumped out at me for an entirely different reason altogether.

When I read posts such as this, as well as what is shared as "threat intel", I look at it from the perspective of a DF analyst and an incident responder, asking myself, "...how can I use this on an engagement?" While I greatly appreciate the effort that goes into creating this sort of content, I also realize that very often, a good deal of "threat intel" is developed purely through open source collection, without the benefit of context from an active engagement.  Now, this is not a bad thing...not at all.  But it is something that needs to be kept in mind.

In examples such as this one, understanding that the analysis relies primarily on a malware sample collected from VT should tell us that any mention of the initial infection vector (IIV) is likely going to be speculation, or the result of open source collection, as well.  The corollary is that the IIV is not going to be the result of seeing this during an active incident.

I'll say it again...information such as this post, as well as other material shared as "threat intel"...is a valuable part of what we do.  However, at the same time, we do need to understand the source of this information.  Information shared as a result of open source collection and analysis can be used to create filters or triggers, which can then be used to detect these issues much earlier in the infection process, allowing responders to then get to affected systems sooner, and conduct analysis to determine the IIV.

Windows Registry Forensics, 2E

Okay, the book is out!  At last!  This is the second edition to Windows Registry Forensics, and this one comes with a good bit of new material.

Chapter 1 lays out what I see as the core concepts of analysis, in general, as well as providing a foundational understanding of the Registry itself, from a binary perspective.  I know that there are some who likely feel that they've seen all of this before, but I tend to use this information all the time.

Chapter 2 is again about tools.  I only cover available free and open-source tools that run on Windows systems, for the simple fact that I do not have access to the commercial tools.  Some of the old tools are still applicable, there are new tools available, and some tools are now under license, and in some cases, the strict terms of the license prevent me from including them in the book.  Hopefully, chapter 1 laid the foundation for analysts to be able to make educated decisions as to which tool(s) they prefer to use.

Chapters 3 and 4 remain the same in their focus as with the first edition, but the content of the chapters has changed, and in a lot of aspects, been updated.

Chapter 5 is my answer to anyone who has looked or is looking for a manual on how to use RegRipper.  I get that most folks download the tool and run it as is, but for my own use, I do not use the GUI.  At all.  Ever.  I use rip.exe from the command line, exclusively.  But I also want folks to know that there are more targeted (and perhaps efficient) ways to use RegRipper to your advantage.  I also include information regarding how you can write your own plugins, but as always, if you don't feel comfortable doing so, please consider reaching out to me, as I'm more than happy to help with a plugin.  It's pretty easy to write a plugin if you can (a) concisely describe what you're looking for, and (b) provide sample data.

Now, I know folks are going to ask about specific content, and that usually comes as the question, "do you talk about Windows 10?" My response to that is to ask specifically what they're referring to, and very often, there's no response to that question.  The purpose of this book is not to provide a list of all possible Registry keys and values of interest or value, for all possible investigations, and for all possible combinations of Windows versions and applications.  That's simply not something that can be achieved.  The purpose of this book is to provide an understanding of the value and context of the Windows Registry that can be applied to a number of investigations.

Thoughts on Writing Books
There's no doubt about it, writing a book is hard.  For the most part, actually writing the book is easy, once you get started.  Sometimes it's the "getting started" that can be hard.  I find that I'll go through phases where I'll be writing furiously, and when I really need to stop (for sleep, life, etc.), I'll take a few minutes to jot down some notes on where I wanted to go with a thought.

While I have done this enough to find ways to make the process easier, there are still difficulties associated with writing a book.  That's just the reality.  It's easier now than it was the first time, and even the second time.   I'm much better at the planning for writing a book, and can even provide advice to others on how to best go about it (and what to expect).

At this point, after having written the books that I have, I have to say that the single hardest part of writing books is the lack of feedback from the community.

Take the first edition of Windows Registry Forensics, for example.  I received questions such as, "...are you planning a second edition?", and when I asked for input on what that second edition should cover, I didn't get a response.

I think that from a 50,000 foot view, there's an expectation that things will be different in the next version of Windows, but the simple fact is that, when it comes to Registry forensics, the basic principles have remained the same through all available versions. Keys are still keys, deleted keys are still discovered the same way, values are still values, etc.  From an application layer perspective, it's inevitable that each new version of Windows will include something "new" with respect to the Registry.  New keys, new values, etc.  The same is true with new versions of applications, and that includes malware, as well.  While the basic principles remain constant, stuff at the application layer changes, and it's very difficult to keep up without some sort of assistance.

Writing a book like this would be significantly easier if those within the community were to provide feedback and input, rather than waiting for the book to be published, and ask, "...did you talk about X?" Even so, I hope that folks find the book useful, and that some who have received their copy of the book find the time to write a review.  Thanks.

Cool Stuff, re: WMI Persistence

In case you missed it, the blog post titled, "A Novel WMI Persistence Implementation" was posted to the Dell SecureWorks web site recently.  In short, this blog post presented the results of several SecureWorks team members working together and bringing technical expertise to bear in order to run an issue of an unusual persistence mechanism to ground.  The specifics of the issue are covered thoroughly in the blog post.

What was found was a novel WMI persistence mechanism that appeared to have been employed to avoid not just detection by those who administered the infected system, but also by forensic analysts.  In short, the persistence mechanism used was a variation on what was discussed during a MIRCon 2014 presentation; you'll see what I mean when you compare figure 1 from the blog post to slide 45 of the presentation.

After the blog post was published and SecureWorks marketing had tweeted about it, Matt Graeber tweeted a request for additional information.  The ensuing exchange included Matt providing a command line for parsing embedded text from a binary MOF file:

mofcomp.exe -MOF:recovered.mof -MFL:ms_409.mof -Amendment:MS_409 binarymof.tmp

What this command does is go into the binary MOF file (binarymof.tmp), and attempt to extract the text that it was created from, essentially "decompiling" it, and placing that text into the file "recovered.mof".

It was no accident that Matt was asking about this; here is Matt's BlackHat 2015 paper, and his presentation.


Links

Ramdo
I ran across this corporate blog post regarding the Ramdo click-fraud malware recently, and one particular statement caught my eye, namely:

Documented by Microsoft in 2014, Ramdo continues to evolve to evade host-based and network-based detection.

I thought, hold on a second...if this was documented in April 2014 (2 yrs ago), what about it makes host-based detection so difficult?  I decided to take a look at what some of the AV sites were saying about the malware.  After all, the MSRT link indicates that the malware writes its configuration information to a couple of Registry values, and the Win32/Ramdo.A write-up provides even more information along these lines.

I updated the RegRipper malware.pl plugin with checks for the various values identified by several sources, but because I have limited data for testing, I don't feel comfortable that this new version of the plugin is ready for release.

Book
Speaking of RegRipper plugins, just a reminder that the newly published Windows Registry Forensics 2e not only includes descriptions of a number of current plugins, but also includes an entire chapter devoted just to RegRipper, covering topics such as how to use it, and how to write your own plugins.

Timeline Analysis
The book also covers, starting on page 53 (of the softcover edition), tools that I use to incorporate Registry information into timeline analysis.  I've used this methodology to considerable effect over the years, including very recently to locate a novel WMI persistence technique, which another analyst was able to completely unravel.

Mimikatz
For those who may not be aware, mimikatz includes the capability to clear the Event Log, as well as reportedly stop the Event Service from generating new events.

Okay, so someone can apparently stop the Windows Event Log service from generating event records, and then steal your credentials. If nothing else, this really illustrates the need for process creation monitoring on endpoints.

Something else to keep in mind is that this isn't the only way that adversaries can be observed interacting with the Windows Event Log.  Not only are native tools (wevtutil.exe, PowerShell, WMI) available, but MS provides LogParser for free.

Ghost in the (Power)Shell
The folks at Carbon Black recently posted an article regarding the use of Powershell in attacks.  As I read through the article, I wasn't entirely clear on what was meant by the adversary attempting to "cloak" attacks by using PowerShell, but due in part to the statistics shared in the article, it does give a view into how PowerShell is being used in some environments.  I'm going to guess that because many organizations still aren't using any sort of process creation monitoring, nor are many logging the use of Powershell, this is how the use of Powershell would be considered "cloaked".

Be sure to take a look at the United Threat Research report described in the Cb article, as well.

Training Philosophy

I have always felt that everyone, including DFIR analysts, needs to take some level of responsibility for their own professional education.  What does this mean?  There are a couple of ways to go about this in any industry: professional reading, attending training courses, engaging with others within the community, etc.  Very often, it's beneficial to engage in more than one manner, particularly as people tend to take in information and learn new skills in different ways.

Specifically with respect to DFIR, there are training courses available that you can attend, and it doesn't take a great deal of effort to find many of these courses.  You attend the training, sit in a classroom, listen to lectures, and run through lab exercises.  All of this is great, and a great way to learn something that is perhaps completely new to you, or simply a new way of performing a task.  But what happens beyond that?  What happens beyond what's presented, beyond the classroom?  Do analysts take responsibility for their education, incorporating what they learned into their day-to-day job and then going beyond what's presented in a formal setting?  Do they explore new data sources, tools, and processes?  Or do they sign up again for the course the following year in order to get new information?

When I was on the IBM ISS ERS team, we had a team member tell us that they could only learn something if they were sitting in a classroom, and someone was teaching them the subject.  On the surface, we were like, "wait...what?" After all, do you really want an employee and a fellow team member who states that they can't learn anything new without being taught, in a classroom?  However, if you look beyond the visceral surface reaction to that statement, what they were saying was, they have too much going on operationally to take the time out to start from square 1 to learn something.  The business model behind their position requires them to be as billable as possible, which ends up meaning that out of their business day, they don't have a great deal of time available for things like non-billable professional development.  Taking them out of operational rotation, and putting them in a classroom environment where they weren't responsible for analysis, reporting, submitting travel claims, sending updates, and other billable commitments, would give them the opportunity to learn something new.  But what was important, following the training, is what they did with it.  Was that training away from the daily grind of analysis, expense reports and conference calls used as the basis for developing new skills, or was the end of the training the end of learning?

Learning New Skills
Back in 1982, I took a BASIC programming class on the Apple IIe, and the teacher's philosophy was to provide us with some basic (no pun intended) information, and then cut us loose to explore.  Those of us in the class would try different things, some (most) of which didn't work, or didn't work as intended.  If we found something that worked really well, we'd share it.  If we found something that didn't work, or didn't work quite right, we'd share that, as well, and someone would usually be able to figure out why we weren't seeing what we expected to see.

Jump ahead about 13 years, and my linear algebra professor during my graduate studies had the same philosophy.  Where most professors would give a project and the students would struggle for the rest of the week to "get" the core part of the project, this professor would provide us with the core bit of code (we were using MatLab) for the exercise or lab, and our "project" was to learn.  Of course, some did the minimum and moved on, and others would really push the boundaries of the subject.  I remember one such project where I spent a lot of time observing not just the effect of the code on different shaped matrices, but also the effect of running the output back into the code.

So now, in my professional life, I still seek to learn new things, and employ what I  learn in an exploratory manner.  What happens when I do this new thing?  Or, what happens if I take this one thing that I learned, and share it with someone else?  When I learn something new, I like to try it out and see how to best employ it as part of my analysis process, even if it means changing what I do, rather than simply adding to it.  As part of that, when someone mentions a tool, I don't wait for them to explain every possible use of the tool to me.  After all, particularly if we're talking about the use of native Windows tool, I can very often go look for myself.

So you wanna learn...
If you're interested in trying your skills out on some available data, Mari recently shared this MindMap of forensic challenges with me.  This one resource provides links to all sorts of challenges, and scenarios with data available for analysts to test their skills, try out new tools, or simply dust off some old techniques.  The available data covers disk, memory, pcap analysis, and more.

This means that if an analyst wants to learn more about a tool or process, there is data available that they can use to develop their knowledge base, and add to their skillz.  So, if someone talks about a tool or process, there's nothing to stop you from taking responsibility for your own education, downloading the data and employing the tool/process on your own.

Manager's Responsibility
When I was a 2ndLt, I learned that one of my responsibilities as a platoon commander was to ensure that my Marines were properly trained, and I learned that there were two aspects to that.  The first was to ensure that they received the necessary training, be it formal, schoolhouse instruction, via an MCI correspondence course, or some other method.  The second was to ensure that once trained, the Marine employed the training.  After all, what good is it to send someone off to learn something new, only to have them return to the operational cycle and simply go back to what they were doing before they left?  I mean, you could have achieved the same thing by letting them go on vacation for a week, and saved yourself the money spent on the training, right?

Now, admittedly, the military is great about training you to do something, and then ensuring that you then have opportunity to employ that new skill.  In the private sector, particularly with DFIR training, things are often...not that way.

The Point
So, the point of all this is simple...for me, learning is a matter of doing.  I'm sure that this is the case for others, as well.  Someone can point to a tool or process, and give general thoughts on how it can be used, or even provide examples of how they've used it.  However, for me to really learn more about the topic, I need to actually do something.

The exception to this is understanding the decision to use the tool or process.  For example, what led an analyst to decide to run, say, plaso against an image, rather than extract specific data sources, in order to create and analyze a timeline while running an AV scan?  What leads an analyst to decide to use a specific process or to look at specific data sources, while not looking at others?  That's something that you can only get by engaging with someone and asking questions...but asking those questions is also taking responsibility for your own education.

Thoughts on Books and Book Writing

The new book has been out for a couple of weeks now, and already there are two customer reviews (many thanks to Daniel Garcia and Amazon Customer for their reviews).  Daniel also wrote a more extensive review of the book on his blog, found here.  Daniel, thanks for the extensive work in reading and then writing about the book; I greatly appreciate it.

Here's my take on what the book covers...not a review, just a description of the book itself for those who may have questions.

Does it cover ... ?
One question I get every time a book is released is, "Does it cover changes to...?" I got that question with all of the Windows Forensic Analysis books, and I got it when the first edition of this book was released ("Does it cover changes in Windows 7?").  In fact, I got that question from someone at a conference I was speaking at recently.  I thought that was pretty odd, as most often these questions are posted to public forums, and I don't see them.  As such, I thought I'd try to address the question here, so that maybe people could see my reasoning, and ask questions that way.

What I try to do with the books is address an analysis process, and perhaps show different ways that Registry data can be incorporated into the overall analysis plan.  Here's a really good example of how incorporating Registry data into an analysis process worked out FTW.  But that's just one, and a recent one...the book is full of other examples of how I've incorporated Registry data into an examination, and how doing so has been extremely valuable.

One of the things I wanted to do with this book was not just talk about how I have used Registry data in my analysis, but illustrate how others have done so, as well.  As such, I set up a contest, asking people to send me short write-ups regarding how they've used Registry analysis in their case work.  I thought it would be great to get different perspectives, and illustrate how others across the industry were doing this sort of work.  I got a single submission.

My point is simply this...there really is no suitable forum (online, book, etc.) or means by which to address every change that can occur in the Registry.  I'm not just talking about between versions of Windows...sometimes, it's simply the passage of time that leads to some change creeping into the operating system.  For example, take this blog post that's less than a year old...Yogesh found a value beneath a Registry key that contains the SSID of a wireless network.  With the operating system alone, there will be changes along the way, possibly a great many.  Add to that applications, and you'll get a whole new level of expansion...so how would that be maintained?  As a list?  Where would it be maintained?

As such, what I've tried to do in the book is share some thoughts on artifact categories and the analysis process, in hopes that the analysis process itself would cast a wide enough net to pick up things that may have changed between versions of Windows, or simply not been discussed (or not discussed at great length) previously.

Book Writing
Sometimes, I think about why I write books; what's my reason or motivation for writing the books that I write?  I ask this question of myself, usually when starting a new book, or following a break after finishing a book.

I guess the biggest reason is that when I first started looking around for resources that covered DFIR work and topics specific to Windows systems, there really weren't any...at least, not any that I wanted to use/own.  Some of those that were available were very general, and with few exceptions, you could replace "Windows" with "Linux" and have the same book.  As such, I set out to write a book that I wanted to use, something I would refer to...and specifically with respect to the Windows Registry Forensics books, I still do.  In fact, almost everything that remained the same between the two editions did so because I still use it, and find it to be extremely valuable reference material.

So, while I wish that those interested in something particular in a book, like coverage of "changes to the Registry in...", would describe the changes that they're referring to before the book goes to the publisher, that simply hasn't been the case.  I have reached out to the community because I honestly believe that folks have good ideas, and that a book that includes something one person finds interesting will surely be of interest to someone else.  However, the result has been...well, you know where I'm going with this.  Regardless, as long as I have ideas and feel like writing, I will.
