Commit 30ea1c2

Update default image renderer with SVG aware template (#921)
scudette authored Sep 27, 2024
1 parent c40a344 commit 30ea1c2
Showing 4 changed files with 76 additions and 42 deletions.
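This commit moves the two blog posts from the Hugo `{{< figure >}}` shortcode to plain Markdown image syntax, and replaces the default `render-image.html` render hook with an SVG-aware template: SVG files are read from disk and inlined (with their embedded `<style class="style-fonts">` block stripped), every image is wrapped in a featherlight lightbox container, and each figure gains a download link styled by the new `.image-link` CSS rules. The first hunk below is typical of the Markdown change:

```md
<!-- before: Hugo figure shortcode -->
{{< figure caption="Journald parser" src="journald.png" >}}

<!-- after: plain Markdown image, handled by the new render hook -->
![Journald parser](journald.png)
```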
39 changes: 20 additions & 19 deletions content/blog/2024/2024-09-10-release-notes-0.73/_index.md
@@ -51,7 +51,7 @@ The new parser emulates the Windows event log format, with common
fields grouped under the `System` column and variable fields in
`EventData`.

{{< figure caption="Journald parser" src="journald.png" >}}
![Journald parser](journald.png)

This release also introduces a new VQL plugin `watch_journald()` which
follows journald logs and forwards events to the server.
@@ -67,8 +67,7 @@ the attacker on systems that ran the RDP client. This information is
now easily accessible using the new `Windows.Forensics.RDPCache`
artifact.

{{< figure caption="Viewing the RDP cache tiles" src="rdp_cache.png" >}}

![Viewing the RDP cache tiles](rdp_cache.png)

### Added the ability to dump clear text network traffic for debugging

@@ -161,7 +160,8 @@ waiting until they complete, before sending further requests. This
means that if the client reboots, only the currently executing queries are
lost, and further queries will continue once the client reconnects.

{{< figure caption="Collection status show finer granularity" src="collection_states.svg" >}}
![Collection status show finer granularity](collection_states.svg)


### Hunts can be tagged now.

@@ -172,7 +172,7 @@ Over time there can be many hunts active simultaneously, and they can
be used for multiple purposes. In this release, the GUI's hunt view is
streamlined by enabling hunts to contain labels.

{{< figure caption="Hunts can now have Tags" src="hunt_tags.svg" >}}
![Hunts can now have Tags](hunt_tags.svg)

Clicking on the hunt label in the table will automatically filter the
table for that label. Hunt Labels are a way to group large numbers of
@@ -184,8 +184,7 @@ The Velociraptor GUI presents most data in tabular form. It is
important that tables are easy to navigate. The navigation pager is
now placed at the top of the table.


{{< figure caption="Velociraptor tables have been revamped" src="table_widget.svg" >}}
![Velociraptor tables have been revamped](table_widget.svg)

If a filter term starts with `!` the matching rows will now be excluded
(i.e. a negative search term).
@@ -203,17 +202,17 @@ As in previous versions, the user can set a download password in their
preferences. However, previously the password only applied to hunt or
collection exports.

{{< figure caption="Setting password for downloads globally" src="setting_password.svg" >}}
![Setting password for downloads globally](setting_password.svg)

In this release, the password setting also applies to individual file
downloads such as the VFS


{{< figure caption="Downloads are password protected" src="encrypted_downloads.svg" >}}
![Downloads are password protected](encrypted_downloads.svg)

Or the uploads tab in specific collections.

{{< figure caption="Individual file downloads can be password protected" src="single_file_downloads.svg" >}}
![Individual file downloads can be password protected](single_file_downloads.svg)

### Post-processing preservation artifacts

@@ -244,17 +243,18 @@ Let's examine a typical workflow. I will begin by preparing an offline
collector with the `Windows.KapeFiles.Targets` artifact configured to
collect all event logs.

{{< figure caption="Building an offline collector" src="building_offline_collector.png" >}}
![Building an offline collector](building_offline_collector.png)


Once the collection is complete I receive a ZIP file containing all
the collected files. I will now import it into Velociraptor.

{{< figure caption="Importing the offline collection" src="importing_offline_collection.svg" >}}
![Importing the offline collection](importing_offline_collection.svg)

Since this is an offline client and not a real client, Velociraptor
will create a new client id to contain the collections.

{{< figure caption="The imported collection looks just like any other collection" src="kapefiles_collection.svg" >}}
![The imported collection looks just like any other collection](kapefiles_collection.svg)

Of course we cannot schedule new collections for the client because
it is not a real client, but once imported, the offline collection
@@ -274,21 +274,22 @@ Instead, the `Windows.KapeFiles.Targets` artifact now offers a VQL
snippet as a notebook suggestion to post process the collection. I
access this from the collection's notebook.

{{< figure caption="Post processing the KapeFiles collection with a notebook suggestion" src="post_process_kapefiles.svg" >}}
![Post processing the KapeFiles collection with a notebook suggestion](post_process_kapefiles.svg)


The new cell contains some template VQL. I can modify it to run other
artifacts. In this case I will collect the `Windows.Hayabusa.Rules`
artifact with all the rules (even noisy ones) and the `Windows.NTFS.MFT`
artifact.

{{< figure caption="Modifying VQL to run other artifacts" src="post_process_kapefiles_2.svg" >}}
![Modifying VQL to run other artifacts](post_process_kapefiles_2.svg)

The post processing steps added a new distinct collection to the
offline client, as if we had collected it directly from the
endpoint. However, the artifacts were collected from the triage files
directly imported from the offline bundle.

{{< figure caption="A new distinct collection is added" src="post_process_kapefiles_3.svg" >}}
![A new distinct collection is added](post_process_kapefiles_3.svg)

Although this new workflow makes it more convenient to post process
bulk file triage collections, note that this is not an ideal workflow
@@ -311,7 +312,7 @@ Velociraptor]({{% ref "/blog/2024/2024-09-12-timelines/" %}}) blog
post, but below is a screenshot to illustrate the final product - an
annotated timeline derived from analysis of multiple artifacts.

{{< figure caption="The complete timeline with annotations" src="../2024-09-12-timelines/supertimeline.svg" >}}
![The complete timeline with annotations](../2024-09-12-timelines/supertimeline.svg)

### Added Timesketch integration artifacts

@@ -321,7 +322,7 @@ source timelining tool. The details of the integration are also
discussed in the blog post above, but here is a view of Timesketch
with some Velociraptor timelines exported.

{{< figure caption="Viewing timelines in Timesketch" src="../2024-09-12-timelines/timesketch_view.svg" >}}
![Viewing timelines in Timesketch](../2024-09-12-timelines/timesketch_view.svg)

### Client metadata fields can now be indexed and searched.

@@ -351,7 +352,7 @@ department.
Indexed metadata fields exist on all clients. Additional non-indexed
fields can be added by the user.

{{< figure caption="Client metadata fields can be indexed or free form" src="client_metadata.svg" >}}
![Client metadata fields can be indexed or free form](client_metadata.svg)

### Enable a server artifact to specify an impersonation user.

37 changes: 19 additions & 18 deletions content/blog/2024/2024-09-12-timelines/_index.md
@@ -117,10 +117,9 @@ while cells can contain markdown text or VQL queries to evaluate.

I will start off by creating a global notebook to hold the timeline.

{{< figure src="new_notebook_timeline_1.svg" caption="Creating a notebook from a template" >}}

{{< figure caption="An empty timeline notebook" src="new_notebook_timeline_2.svg" >}}
![Creating a notebook from a template](new_notebook_timeline_1.svg)

![An empty timeline notebook](new_notebook_timeline_2.svg)

### Step 2: Collect some artifacts!

@@ -143,7 +142,7 @@ false positives with the probability of missing a detection. However,
in the triage context, I really want to see all rules - including ones
that are noisy and produce a lot of false positives.

{{< figure caption="Configuring the Sigma artifacts" src="hayabusa_parameters.svg" >}}
![Configuring the Sigma artifacts](hayabusa_parameters.svg)

Therefore in this case I will choose to evaluate **All** the rules on
the endpoint. The artifact will then evaluate all rules against each
@@ -164,14 +163,15 @@ This type of processing is called `Stacking`. Velociraptor has an
inbuilt stacking feature within the GUI - it is available on any
table!

{{< figure caption="Stacking hits by Title" src="hayabusa_stack_1.svg" >}}
![Stacking hits by Title](hayabusa_stack_1.svg)


First I sort by one of the table columns - this will select the column
I want to stack on. In this case, I will sort by the Rule Title. Once
the table is sorted, the GUI shows the stacking button. Clicking the
stacking button shows the stacking overview for this table.

{{< figure caption="Inspecting unique rules" src="hayabusa_stack_2.svg" >}}
![Inspecting unique rules](hayabusa_stack_2.svg)

Stacking is a common technique for quickly viewing aggregated data. It
allows us to see what **kind** of rules matched in this case, and how
@@ -186,7 +186,8 @@ strong signal so I want to drill down on it.
If I click the Link icon in the stacking table, I will be able to
explore the specific times this rule matched.

{{< figure caption="Specific instances when Defender was disabled" src="hayabusa_stack_defender_disabled.svg" >}}
![Specific instances when Defender was disabled](hayabusa_stack_defender_disabled.svg)


I see a match in 2023 and one in 2024. In practice a lot of false
positives will occur, or even evidence of previous compromise
@@ -206,7 +207,7 @@ produce too many false positives.
This reduces the number of events to consider from over 18,000 to
about 100 high confidence events that I can manually review.

{{< figure caption="Reducing data" src="hayabusa_reduced.svg" >}}
![Reducing data](hayabusa_reduced.svg)

### Adding to the timeline.

@@ -253,7 +254,7 @@ running commentary of what happened.
Let's add our Sigma analysis to the timeline. Within the Reduced Sigma
table, click `Add to Timeline`.

{{< figure caption="Adding a table to a super timeline timeline" src="add_timeline_1.svg" >}}
![Adding a table to a super timeline timeline](add_timeline_1.svg)

The `Add Timeline` dialog allows us to create a timeline, add it to a
supertimeline and configure how events are created from the current
@@ -282,7 +283,7 @@ table:
After the reduced Sigma timeline is added, I can see the timeline
notebook updated.

{{< figure caption="The Supertimeline UI" src="timeline_sigma.svg" >}}
![The Supertimeline UI](timeline_sigma.svg)

Following is a description of the UI:

@@ -320,12 +321,12 @@ When an event seems important, it can be annotated. Annotating an
event will copy it into a special time series within the
`Supertimeline` called `Annotation`.

{{< figure caption="Annotating an event" src="timeline_annotation.svg" >}}
![Annotating an event](timeline_annotation.svg)

The annotation should contain an explanation as to why this event is
relevant to the case.

{{< figure caption="The annotated event" src="timeline_annotation_2.svg" >}}
![The annotated event](timeline_annotation_2.svg)

The annotated event is added to a separate timeline, which may be
enabled or disabled in the same way as the other time series. This allows us
@@ -396,7 +397,7 @@ FROM source(artifact="Windows.System.TaskScheduler/Analysis")
WHERE Mtime > "2024-09-12"
```

{{< figure caption="The complete timeline with annotations" src="supertimeline.svg" >}}
![The complete timeline with annotations](supertimeline.svg)

### Exporting the annotations

@@ -405,7 +406,7 @@ table for reporting purposes. The `Timeline` notebook template
provides a second cell that, when recalculated, exports the `Annotation`
time series into a unique table.

{{< figure caption="Exporting the annotations" src="annotations_export.svg" >}}
![Exporting the annotations](annotations_export.svg)

I can now see what the attackers did. Once they logged in as
Administrator, they Disabled Windows Defender, Added a second admin
@@ -418,7 +419,7 @@ ran `whoami` and used ping to establish network connectivity.

To summarize, the general workflow is illustrated below:

{{< figure caption="The general timeline workflow" src="workflow.svg" >}}
![The general timeline workflow](workflow.svg)

As we collect artifacts from a group of hosts in a hunt, or
individually from specific clients, we post process the results in
@@ -505,7 +506,7 @@ automatically exports them to Timesketch in the background. This means
that the user does not need to think about it - all timelines created
within Velociraptor will automatically be added to Timesketch.

{{< figure caption="Configuring the Server.Monitoring.TimesketchUpload artifact" src="configure_timesketch_export.svg" >}}
![Configuring the Server.Monitoring.TimesketchUpload artifact](configure_timesketch_export.svg)

To install the `Server.Monitoring.TimesketchUpload` server monitoring
artifact, select `Server Events` in the sidebar, then click the
@@ -520,15 +521,15 @@ Finally, the path on the server to the Timesketch client library tool
is required - this is the external binary we call to upload the actual
data.

{{< figure caption="Automating Timesketch Import" src="automating_timesketch_import.svg" >}}
![Automating Timesketch Import](automating_timesketch_import.svg)

Once the server monitoring artifact is configured, it simply waits
until a user adds a timeline to a Supertimeline in Velociraptor, as
described above. When that happens the timeline is automatically added
to Timesketch into a sketch named the same as the Velociraptor
Supertimeline.

{{< figure caption="Viewing timelines in Timesketch" src="timesketch_view.svg" >}}
![Viewing timelines in Timesketch](timesketch_view.svg)

As can be seen in the screenshot above, the same targeted timelines
are exported to Timesketch. This is most useful for existing
33 changes: 28 additions & 5 deletions layouts/_default/_markup/render-image.html
@@ -1,8 +1,31 @@
{{ if .Text }}
<figure>
<img src="{{ .Destination | safeURL }}" alt="{{ .Text }}" class="captioned">
<figcaption>{{ .Text }}</figcaption>
{{ $src := .Destination }}
{{ $id := md5 $src }}
{{ $filename := path.Join ( path.Dir .Page.File ) $src }}
{{ $caption := .PlainText }}
{{ if ( hasSuffix $filename ".svg" ) }}
{{ $data := readFile $filename }}
{{ $data = strings.ReplaceRE `(?ms)<style class="style-fonts">.+?</style>` `` $data }}

<figure id="{{ $id }}">
<div data-featherlight="#{{ $id }}" class="figure">
{{ $data | safeHTML}}
</div>
<figcaption>
<a class="image-link" href="{{ $src }}"><i class="fa fa-download"></i></a>
{{ $caption }}
</figcaption>
</figure>

{{ else }}
<img src="{{ .Destination | safeURL }}" alt="{{ .Text }}">

<figure id="{{ $id }}">
<div data-featherlight="#{{ $id }}" class="figure">
<img src="{{ $src }}" alt="{{ $caption }}">
</div>
<figcaption>
<a class="image-link" href="{{ $src }}"><i class="fa fa-download"></i></a>
{{ $caption }}
</figcaption>
</figure>

{{ end }}
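
For reference, a sketch of the HTML this hook would emit for an SVG reference such as `![Hunts can now have Tags](hunt_tags.svg)`. This is illustrative output only, assuming the theme already loads featherlight and Font Awesome; the `id` is the md5 of the image path (shown as a placeholder) and the inline `<svg>` body is whatever `readFile` returned after the `<style class="style-fonts">` block was stripped:

```html
<figure id="a1b2c3...">  <!-- placeholder for md5("hunt_tags.svg") -->
  <div data-featherlight="#a1b2c3..." class="figure">
    <svg><!-- inlined SVG markup, style-fonts block removed --></svg>
  </div>
  <figcaption>
    <a class="image-link" href="hunt_tags.svg"><i class="fa fa-download"></i></a>
    Hunts can now have Tags
  </figcaption>
</figure>
```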
9 changes: 9 additions & 0 deletions static/css/theme-mine.css
@@ -796,3 +796,12 @@ figure div.figure img {
.featherlight .featherlight-content img {
margin: 0 auto !important;
}

.image-link {
padding-right: 20px;
color: var(--REFERENCE-value-color);
}

.image-link:hover {
color: var(--REFERENCE-value-color);
}
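
The new `.image-link` rules style the per-figure download anchor that the render hook above places inside each `<figcaption>`, keeping the icon in the theme's reference color even on hover. A minimal sketch of the markup these selectors target (assuming Font Awesome provides the `fa fa-download` icon and `--REFERENCE-value-color` is defined elsewhere in the theme):

```html
<figcaption>
  <a class="image-link" href="journald.png"><i class="fa fa-download"></i></a>
  Journald parser
</figcaption>
```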
