Cybersecurity – Securelist (https://securelist.com)

Selecting the right MSSP: Guidelines for making an objective decision
https://securelist.com/selecting-the-right-mssp/109321/ (Thu, 30 Mar 2023)

Managed Security Service Providers (MSSPs) have become an increasingly popular choice for organizations as more of them outsource security services. Meanwhile, with the growing number of MSSPs in the market, it can be difficult for organizations to determine which provider will fit them best. This article aims to guide organizations looking to select an MSSP and help them identify the benefits and drawbacks of using one.

To make a well-rounded choice, let’s try to answer the following questions:

  • What exact services do we need?
  • Why does my organization need an MSSP?
  • When does my organization need an MSSP?
  • Who should deliver the service?

MSSP Services

First, let’s start with what services we can expect. Here are some of the most common security services provided by MSSPs:

  • Security Monitoring

    24/7 monitoring of the organization’s network, systems, and applications to identify potential security threats and anomalies; can be provided as an on-premises solution (when data must not leave the customer infrastructure) or as a service.

  • Incident Response (IR)

    Responding to security incidents and breaches, investigating, and containing the incident. Incident Response can be provided in multiple forms, from recommendations for the customer IR team to pre-agreed response actions in the customer environment.

  • Managed Detection and Response (MDR)

    A combination of the previous two services. Usually, MDR is considered an evolution of classic monitoring and response services due to the utilization of advanced threat-detection techniques. Also, MDR supports embedded response capabilities within the platform, which are supplied and fully managed by the service provider.

  • Threat Intelligence (TI)

    Provision of intelligence on current and emerging threats to the organization’s security. The best-known and simplest form of TI is IoC feeds that indicate the presence in the customer environment of known signs of attacks. But there are other deliverables, too, focused on different maturity levels of TI consumers within the organization.

    Note that the use of TI requires an in-house security team, so it is not possible to fully outsource it. TI data has to be applied internally to bring value.

  • Managed Security Solutions

    Multiple services focused on administering security solutions that are deployed in customer environments. These services are commonly bundled if the customer wants on-premise deployment of MSSP technologies.
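The simplest TI deliverable mentioned above, an IoC feed, has to be applied by the in-house team to bring value. A minimal sketch of that application, matching feed indicators against log entries, is shown below; the feed contents, field names, and values are purely illustrative, not a real feed format.

```python
# Sketch of applying a TI IoC feed internally: scan log entries for
# known-bad indicators. All indicator and field values are illustrative.
ioc_feed = {
    "203.0.113.66",                        # example C2 IP (documentation range)
    "44d88612fea8a8f36de82e1278abb02f",    # example known-bad file hash
}

log_entries = [
    {"src_ip": "10.0.0.5", "file_md5": "44d88612fea8a8f36de82e1278abb02f"},
    {"src_ip": "10.0.0.7", "file_md5": "0" * 32},
]

def match_iocs(entries, feed):
    """Return entries containing at least one indicator from the feed."""
    return [e for e in entries if feed.intersection(e.values())]

hits = match_iocs(log_entries, ioc_feed)   # entries needing analyst attention
```

In practice this matching is done by a SIEM or TI platform at scale, but the principle is the same: feed data is only useful once someone correlates it with your own telemetry.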

There is an extended set of services not directly involved in day-to-day operations, but still valuable on a one-time or regular basis.

  • Digital Forensics and Incident Response (DFIR) or Emergency Incident Responder

    The ultimate form of incident response that provides a full-scale DFIR service in case of critical incidents in the customer environment.

  • Malware Analysis

    A narrow-focus service providing extended reporting and behavior analysis of submitted malware. The service requires an in-house team to properly utilize the analysis results.

  • Security Assessment

    A group of services focused on the identification of target infrastructure or application vulnerabilities, weaknesses, and potential attack vectors. Among the best-known services are penetration testing, application security assessment, red teaming, and vulnerability assessment.

  • Attack Surface Management (ASM)

    A service focused on the collection of information related to the organization’s public-facing assets.

  • Digital Footprint Intelligence (DFI)

    A service focused on searching for, collecting, and analyzing organization-related threats in external sources. Typical deliverables include information about leaked accounts, organization-related traces in malware logs, posts and advertisements offering infrastructure access for sale, and a list of actors who may target the organization. In many ways, DFI has taken over the TI tasks related to processing external sources.

As we can see, some services can replace the overall function of the in-house team (monitoring, MDR, assessment), while others can be considered additional support for the existing team (TI, malware analysis, DFIR). The overall scenario for MSSP usage is the same: a function is required but cannot be provided within the organization. So, a key task for the organization is to define its needs, priorities, and available resources.

Scenarios for MSSP involvement

Now let’s switch to the scenarios in which involving an MSSP can provide significant value.

Scenario 1

The typical one: you need to establish a specific function quickly. In such cases, an MSSP will save you time and money, and provide value in the short term. This case is applicable whenever you want to implement or test some additional services in your SOC.

Scenario 2

You have to build a security function from scratch. Even if, in the end, all security services should be built in-house, involving an MSSP is a good idea since, as in scenario 1, it helps get the service up and running. Later, you can transfer specific services to the in-house team, taking into account all the service aspects and expertise your team has obtained from the MSSP. At the same time, such an approach lets you implement security functions step by step, replacing MSSP services one by one, focusing on one topic at a time, and avoiding gaps in coverage due to missing services.

Scenario 3

Your business is growing extensively. When a business grows, cybersecurity cannot always keep up at the same speed. Especially in cases of company mergers and acquisitions, the IT landscape jumps to a new level that an in-house team cannot cover promptly. This case can transform into scenario 1 or 2, depending on the nature of the growth. If it’s a one-time event, you’ll probably want to transfer the function to the in-house team later, once it is ready to handle the new volume.

All the more reasons to engage an MSSP

Besides specific cases, there are common reasons to support engaging an MSSP over developing in-house capability. Here are some of them:

  • Lack of In-House Expertise

    Many organizations do not have the necessary in-house expertise to effectively manage and respond to security threats. Some roles and functions require deep knowledge and continuous growth of expertise, which cannot always be maintained within the organization. An MSSP, on the other hand, works with multiple customers simultaneously, so the intensity of incidents and investigations is much higher and generates more experience for its teams.

  • Resource Constraints

    Smaller organizations may not have the resources to build and manage a comprehensive security program. An MSSP can provide the necessary security services to help these organizations mitigate security risks without having to hire a full security team and, in more complex cases, to maintain this full-scale team in the long term.

  • Cost Savings

    It can be expensive to build and maintain an in-house security program. Outsourcing security services to an MSSP can be a more cost-effective solution, particularly for smaller organizations. Also, the MSSP approach allows you to spread the budget over time, since establishing a service in-house requires significant investments from day one.

  • Scalability

    Fast-growing organizations may find it difficult to scale their security program at the same pace. An MSSP can provide scalable security services able to grow with the organization.

  • Flexibility

    With outsourcing, it is much easier to manage the level of service, from adjusting the SLA options with a particular MSSP to changing providers whenever the preconditions change or a better offer emerges in the market.

Overall, an MSSP can help organizations improve their security posture, manage risk, and ensure compliance while optimizing costs and resources. Having considered the reasons for involving an MSSP, to complete the picture we must mention not only the pros but the cons as well. Possible stop factors to think twice about are:

  • Increasing risk

    Every new partner extends the potential attack surface of your organization. During the contract lifetime, you should consider the risks of MSSP compromise and supply-chain attacks on the service, especially if the MSSP has a high level of privileges (usually required under a contract for advanced Incident Response). The provider can mitigate this risk by demonstrating a comprehensive cybersecurity program applied to its own infrastructure, backed by independent assessments.

    Also, it’s important to off-board the MSSP correctly in case of contract termination. This off-boarding should include careful access revocation and rollback of all changes done in the network, configurations, etc.

  • Lack of understanding

    Do you know what is within your infrastructure, and how business processes are tied to the IT environment? What is typical and normal in your network? Do you have an asset DB? What about an account registry and a complete, regularly reviewed list of privileges? I guess not all the answers were positive. The bad news: the MSSP will have an even less clear understanding of what is within the protected environment, since its only trusted source of information is you.

  • Need to control the MSSP

    It is essential to conduct a thorough analysis and evaluation of every service contract, particularly when it comes to selecting an MSSP. To achieve this, an expert from within the organization should be assigned to handle the contract and carefully scrutinize all details, conditions, and limitations. Additionally, the service delivery should be closely observed and evaluated throughout the lifetime of the contract. Generally, this means that it is not possible to entirely outsource the security function without establishing at least a small security team in-house. Moreover, the output from the service should be processed by an internal team, especially in cases where incidents, anomalies, or misconfigurations are detected.

In-house or MSSP for SMB

The decision between using an MSSP and building an in-house SOC for a small or medium-sized business (SMB) can depend on various factors, including the organization’s budget, resources, and security needs. Here are some of the benefits an MSSP provides in SMB cases:

  • Expertise

    An MSSP can provide a level of security expertise that may not be available in-house, particularly for smaller organizations with limited security resources. Commonly, an SMB has no security team at all.

  • Cost

    Building an in-house SOC is an expensive path that includes the cost of hiring experienced security professionals, investing in security tools and technologies, and building a security infrastructure.

  • Scalability

    As an SMB grows, its security needs may also grow. An MSSP can provide scalable security services that can grow with the organization without investing in additional security resources.

Overall, for many SMBs, outsourcing security services to an MSSP can be a more cost-effective solution than building an in-house SOC.

For large enterprises, as in other complex cases, the answer will be “it depends.” There are a lot of factors to be considered.

Finding the balance

Considering the pros and cons of outsourcing security services, I suggest finding the right balance. One balanced way can be a hybrid approach in which the organization builds some services in-house and outsources others.

The first variation of the hybrid approach is to build core functions (like Security Monitoring, Incident Response, etc.) by yourself and outsource everything that would make no sense to build in-house. Such an approach lets you build strong core functions and not waste time and resources on functions that require narrow skills and tools. Any MSSP services we have mentioned in the extended category are good candidates for such an approach.

Another variant of the hybrid approach is to develop the expertise of incident responders, who know the environment and are able to respond to advanced attacks. Incident detection and initial analysis in this case can be outsourced, which gives better scalability and the ability to focus on serious matters.

The transition approach fits when you need to build a security function right here and now, but still plan to develop an in-house SOC later. You can start by outsourcing security services and gradually replace them one by one with in-house functions as the team, technologies, and resources become ready.

Choosing the right one

As a very first step, we have to define our needs, the services we are looking for, and the overall strategy we are going to follow in outsourcing security services, considering everything we have discussed before.

The next step is to choose the right provider. Here are the criteria to keep in mind during the screening procedure:

  • Look for expertise and experience

    Choose an MSSP with the necessary expertise. Pay attention to experience with clients in your region/industry as well as with well-known global players. Consider the number of years in the market: it is usually safer to pick a proven partner than take a chance on a disruptive new player.

    Threat detection and cyberthreat hunting are related to security research. Check if the MSSP has appropriate research capabilities that can be measured by number and depth of publications related to new APT groups, tools and techniques used, and methods of detection and investigation.

    Another significant point is the team. All services are provided by individual people, so make sure that the MSSP employs qualified personnel with the required level of education and world-recognized certification.

  • Consider the MSSP’s technology

    Ensure that the MSSP uses relevant tools and technologies to provide effective security solutions. A simple example: an MSSP focused on Windows protection will not fit an environment built on Unix. There are more nuances regarding MSSP technology platforms, which we will come back to later.

  • Check for compliance

    Ensure that the MSSP follows industry compliance regulations and standards if applicable to your business.

  • Evaluate customer experience and support

    Find references and success stories and collect feedback from other companies – clients of the potential service provider. Focus on customer support experience, including responsiveness, availability, and expertise.

  • Consider SLA

    Consider the SLA type, what metrics are used, and how these metrics are tracked and calculated. And, of course, the SLA target values the vendor can commit to.

  • Consider the cost

    Compare the cost of MSSP services from different providers and choose the one that, other things being equal, offers the best value for your business.

  • Security

    Does the vendor pay attention to its own security: cybersecurity hygiene, regular assessments by external experts? That is only a small part of what to check if you don’t want to lower your protection.

  • Ask for proof of concept (PoC)

    Mature players provide a test period, so you can have hands-on experience with most aspects of service provision and deliverables.

The technology question is a bit tricky. In most cases, we can split MSSPs into two big groups: the first uses enterprise solutions, while the second relies on self-developed tools or open source with customization.

The first group needs to share its revenue with the vendor providing the technology, but if you later decide to build an on-prem SOC platform, migration can be simplified if you choose the same vendor’s platform. Also, you won’t need to implement too many changes in your environment. Furthermore, if the organization intends to adopt a transition approach and establish an in-house SOC based on a specific technology, using an MSSP with the corresponding technical solution can serve as an “extended test drive” for the chosen platform.

The second group usually focuses on a highly custom solution that lets the MSSP fine-tune the technology platform for better results. In a lot of cases, this can be a conglomerate of multiple tools and platforms, integrated to provide more advanced detection and analysis methods. Commonly this platform cannot be adopted by customers for independent usage.

Another question we should mention: is it worth splitting services between multiple providers? On the one hand, such diversity can help to choose the best provider for specific services; on the other, you can feel the synergy of multiple services provided by the same vendor in a bundle. E.g., if you have monitoring from one provider, the provision of DFIR by the same company will create positive synergy due to the ability to exchange information about historical incidents and to continuously monitor DFIR IoCs.

When buying defensive services, don’t forget about offensive assessments: check the contract conditions for conducting red teaming, penetration tests, or cyber-range exercises. Any type of assessment will be valuable for proving the value of managed security services and training your team.

Summary

When selecting an MSSP, it is important for organizations to keep their security goals in mind and align them with the criteria used for evaluation. With a large number of players in the market, it is always possible to choose the one that best meets the organization’s specific requirements and strategy. Careful consideration of factors such as service offerings, reputation, technology, and cost will help organizations find the right MSSP to meet their security needs.

Understanding metrics to measure SOC effectiveness
https://securelist.com/understanding-metrics-to-measure-soc-effectiveness/109061/ (Fri, 24 Mar 2023)

The security operations center (SOC) plays a critical role in protecting an organization’s assets and reputation by identifying, analyzing, and responding to cyberthreats in a timely and effective manner. Additionally, SOCs help improve the overall security posture by providing add-on services such as vulnerability identification, inventory tracking, threat intelligence, threat hunting, log management, etc. With all these services running under the SOC umbrella, it largely bears the burden of making the organization resilient against cyberattacks, so it is essential for organizations to evaluate the effectiveness of the cybersecurity operations center. An effective and successful SOC unit should be able to justify and demonstrate the value of its existence to stakeholders.

Principles of success

Apart from revenue and profits, there are two key principles that drive business success:

  • Maintaining business operations to achieve the desired outcomes
  • Continually improving by bringing in new ideas or initiatives that support the overall goals of the business

The same principles are applied to any organization or entity that is running a SOC, acting as a CERT, or providing managed security services to customers. So, how do we ensure the services being provided by security operations centers are meeting expectations? How do we know continuous improvement is being incorporated in daily operations? The answer lies in the measurement of SOC internal processes and services. By measuring the effectiveness of processes and services, organizations can assess the value of their efforts, identify underlying issues that impact service outcomes, and give SOC leadership the opportunity to make informed decisions about how to enhance performance.

Measuring routine operations

Let’s take a closer look at how we can make sure routine security operations are providing services within the normal parameters to a business or subscribed customers. This is where metrics, service-level indicators (SLIs) and key performance indicators (KPIs) come into play; metrics provide a quantitative value to measure something, KPIs set an acceptable value on a key metric to evaluate the performance of any particular internal process, and SLIs provide value to measure service outcomes that are eventually linked with service SLAs. With regard to KPIs, if the metric value falls into the range of a defined KPI value, then the process is deemed to be working normally; otherwise, it provides an indication of reduced performance or possibly a problem.
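The distinction between the three metric kinds above can be sketched in a few lines of code. This is a simplified illustration, not a standard: function and variable names are my own, and the thresholds are example values.

```python
# Sketch of the three metric kinds: KPI (target on a process metric),
# SLI (feeds the SLA check), and monitoring metric (no target at all).
def evaluate_kpi(value, target, higher_is_worse=True):
    """A KPI sets an acceptable value on a key metric; True means normal."""
    return value <= target if higher_is_worse else value >= target

def check_sli(observed, sla_target):
    """An SLI is compared against the SLA commitment; False means a breach."""
    return observed <= sla_target

# A monitoring metric is simply recorded for information, with no target.
queue_depth_history = [12, 14, 18]          # alerts waiting to be triaged

kpi_ok = evaluate_kpi(4.2, target=5.0)      # wrong-verdict %, within target
sla_ok = check_sli(observed=45, sla_target=30)  # 45 min to detect vs 30 min SLA
```

Here `kpi_ok` comes out normal while `sla_ok` signals a breach, which is exactly the split between internal performance tracking and customer-facing commitments described above.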

The following figure clarifies the metric types generally used in SOCs and their objectives:

One important thing to understand here is that not all metrics need a KPI value. Some metrics, such as monitoring metrics, are measured purely for informational purposes. They provide valuable support for tracking functional pieces of SOC operations, and their main objective is to help SOC teams forecast problems that could degrade operational performance.

The following section provides some concrete examples to reinforce understanding:

Example 1: Measuring analysts’ wrong verdicts

Process:            Security monitoring process
Metric Name:        Wrong verdict
Type:               KPI (internal)
Metric Description: % of alerts wrongly triaged by the SOC analyst
Target:             5%

This example involves the evaluation of a specific aspect of the security monitoring process, namely the accuracy of SOC analyst verdicts. Measuring this metric can aid in identifying critical areas that may affect the outcome of the security monitoring process. It should be noted that this metric is an internal KPI, and the SOC manager has set a target of 5% (the target value is often set based on the existing levels of maturity). If the metric exceeds the established target, it suggests that the SOC analysts’ triage skills may require improvement, providing valuable insight to the SOC manager.
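The wrong-verdict percentage can be computed directly from triage records. A minimal sketch follows; the record format and field names (`analyst_verdict`, `final_verdict`) are assumptions for illustration, since every SOC platform stores this differently.

```python
# Sketch of computing the 'Wrong verdict' KPI from triage records.
# Field names and values are illustrative.
alerts = [
    {"id": 1, "analyst_verdict": "benign",    "final_verdict": "benign"},
    {"id": 2, "analyst_verdict": "malicious", "final_verdict": "malicious"},
    {"id": 3, "analyst_verdict": "benign",    "final_verdict": "malicious"},  # miss
    {"id": 4, "analyst_verdict": "benign",    "final_verdict": "benign"},
]

wrong = sum(a["analyst_verdict"] != a["final_verdict"] for a in alerts)
wrong_verdict_pct = 100.0 * wrong / len(alerts)   # percentage of wrong verdicts
kpi_breached = wrong_verdict_pct > 5.0            # 5% target from the table
```

With these toy numbers the metric sits at 25%, well over the target, so the KPI would flag a triage-quality problem to the SOC manager.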

Example 2: Measuring alert triage queue

Process:            Security monitoring process
Metric Name:        Alert triage queue
Type:               Monitoring metric
Metric Description: Number of alerts waiting to be triaged
Target:             Dynamic

This specific case involves assessment of a different element of the security monitoring process – the alert triage queue. Evaluating this metric can provide insights into the workload of SOC analysts. It is important to note that this is a monitoring metric, and there is no assigned target value; instead, it is classified as a dynamic value. If the queue of incoming alerts grows, it indicates that the analyst’s workload is increasing, and this information can be used by SOC management to make necessary arrangements.
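Because this metric has no fixed target, what matters is its trend. One simple, illustrative way to surface a growing backlog is to compare the mean queue depth of the most recent samples against the preceding window; the window size and sample values here are arbitrary.

```python
# Sketch of trending a monitoring metric with no fixed target: flag a
# sustained rise in triage queue depth for SOC management.
from statistics import mean

def queue_trend(samples, window=3):
    """Mean of the last `window` samples minus the mean of the window before."""
    recent, previous = samples[-window:], samples[-2 * window:-window]
    return mean(recent) - mean(previous)

hourly_queue_depth = [12, 10, 14, 18, 22, 27]   # alerts waiting to be triaged
growth = queue_trend(hourly_queue_depth)        # positive value: workload rising
```

A positive `growth` here would prompt management to rebalance shifts or tune noisy detection rules before the backlog turns into missed incidents.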

Example 3: Measuring time to detect incidents

Service:            Security monitoring service
Metric Name:        Time to detect
Type:               SLI
Metric Description: Time required to detect a critical incident
Target:             30 minutes

In this example, the effectiveness of the security monitoring service is evaluated by assessing the time required to detect a critical incident. Measuring this metric can provide insights into the efficiency of the security monitoring service for both internal and external stakeholders. It’s important to note that this metric is categorized as a service-level indicator (SLI), and the target value is set at 30 minutes. This target value represents a service-level agreement (SLA) established by the service consumer. If the time required for detection exceeds the target value, it signifies an SLA breach.
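Checking this SLI against the SLA reduces to simple timestamp arithmetic over incident records. The sketch below uses the 30-minute target from the example; the record fields and timestamps are illustrative.

```python
# Sketch of checking the 'Time to detect' SLI against a 30-minute SLA.
from datetime import datetime, timedelta

SLA_TARGET = timedelta(minutes=30)

incidents = [
    {"occurred": datetime(2023, 3, 1, 10, 0), "detected": datetime(2023, 3, 1, 10, 20)},
    {"occurred": datetime(2023, 3, 2, 14, 0), "detected": datetime(2023, 3, 2, 14, 45)},
]

# Any incident whose detection delay exceeds the target is an SLA breach.
breaches = [i for i in incidents if i["detected"] - i["occurred"] > SLA_TARGET]
```

In practice the hard part is not this calculation but agreeing on when an incident "occurred", which is why the measurement methodology should be fixed in the SLA itself.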

Evaluating the everyday operations of a practical SOC unit can be challenging due to the unavailability or inadequacy of data, and gathering metrics can also be a time-consuming process. Therefore, it is essential to select suitable metrics (which will be discussed later in the article) and to have the appropriate tools and technologies in place for collecting, automating, visualizing, and reporting metric data.

Measuring improvement

The other essential element in the overall success of security operations is ‘continuous improvement’. SOC leadership should devise a program where management and SOC employees get an opportunity to create and pitch ideas for improvement. Once ideas are collected from different units of security operations, they are typically evaluated by management and team leads to determine their feasibility and potential impact on the SOC goals. The selected ideas are then converted into initiatives along with the corresponding metrics and desired state, and lastly their progress is tracked and evaluated to measure their results over a period of time. The goal of creating initiatives through ideas management is to encourage employee engagement and continuously improve SOC processes and operations. Typically, it is the SOC manager and lead roles who undertake initiatives to fix technical and performance-related matters within the SOC.

A high-level flow is depicted in the figure below:

Whether it is for routine security operations or ongoing improvement efforts, metrics remain a common parameter for measuring performance and tracking progress.

Three common problems that we often observe in real-world scenarios are:

  • In the IT world, the principle of “if it’s not broken, don’t fix it” is well known, and this mentality extends to operational units as well. Many SOCs prioritize current operations and only implement changes in response to issues rather than adopting a continuous improvement approach. This reluctance to change acts as a bottleneck for continuous improvement.
  • Absence of a structured process to gather ideas for potential improvements results in only a fraction of these ideas being presented to the SOC management, and thus, only a fraction of them being implemented.
  • Absence of progress tracking for improvements – it’s not sufficient to simply generate and discuss ideas. Implementing ideas requires diligent monitoring of their progress and measuring their actual impact.

Example: Initiative to improve analyst triage verdicts

Revisiting ‘example 1’ presented in the ‘Measuring routine operations’ section, let us assume that the percentage of incorrect verdicts detected over the past month was 12%, indicating an issue that requires attention. Management has opted to provide additional training to the analysts with the goal of reducing this percentage to 5%. Consequently, the effectiveness of this initiative must be monitored for the specified duration to determine if the target value has been attained. It’s important to note that the metric, ‘Wrong verdicts’, remains unchanged, but the current value is now being used to evaluate progress towards the desired value of 5%. Once significant improvements are implemented, the target value can be adjusted to further enhance the analysts’ triage skills.
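Tracking such an initiative amounts to recording the same metric month over month and comparing it against the desired state. A minimal sketch, with illustrative monthly values moving from the 12% starting point toward the 5% goal:

```python
# Sketch of tracking an improvement initiative: the metric is unchanged,
# only its monthly value is compared against the desired state (5%).
monthly_wrong_verdicts = {"Jan": 12.0, "Feb": 9.5, "Mar": 6.1, "Apr": 4.8}
TARGET = 5.0

progress = {month: value <= TARGET for month, value in monthly_wrong_verdicts.items()}
target_reached = any(progress.values())
first_hit = next((m for m, ok in progress.items() if ok), None)  # first month on target
```

Once the target is reached and holds, management can tighten it further, turning the one-off training initiative into a continuous improvement loop.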

Metric identification and prioritization

SOCs generally do measure their routine operations and improvements using ‘metrics’. However, they often struggle to recognize if these metrics are supporting the decision-making process or showing any value to the stakeholders. Hunting for meaningful metrics is a daunting task. The common approach we have followed in SOC consulting services to derive meaningful metrics is to understand the specific goals and operational objectives of security operations. Another proven approach is the GQM (Goal-Question-Metric) system that involves a systematic, top-down methodology for creating metrics that are aligned with an organization’s goals. By starting with specific, measurable goals and working backwards to identify the questions and metrics needed to measure progress towards those goals, the GQM approach ensures that the resulting metrics are directly relevant to the SOC’s objectives.

Let’s illustrate our approach with an example. If a SOC acts as a financial CERT, it is likely to focus on responding to incidents related to the financial industry, tracking and recording financial threats, providing advisory services, etc. Once the principal goals of the CERT are established, the next step is to identify metrics that directly influence the outcomes of the CERT’s services.

Example: Metric identification

Goal:     Ensure participating financial institutions are informed about the latest threats
Question: How can we determine the amount of time the CERT takes to notify other financial institutions?
Metric:   Time it takes to notify participant banks after threat discovery

Similarly, for operational objectives, metrics are identified to track and measure the processes that support financial CERT operations. This also raises the issue of prioritizing metrics, as not all of them hold the same level of importance. When selecting metrics, it is crucial to prioritize quality over quantity, so it is recommended to limit the number of metrics collected in order to sharpen focus and increase efficiency. As for priorities, metrics that directly support CERT goals take precedence over metrics supporting operational objectives, because ultimately it is consumers and stakeholders who evaluate the services rendered.
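The GQM-derived metric above reduces to timestamp arithmetic over notification records. A minimal sketch with invented timestamps and participant names, purely for illustration:

```python
# Sketch of computing the GQM-derived CERT metric: time to notify
# participant banks after threat discovery. All values are illustrative.
from datetime import datetime

discovered = datetime(2023, 3, 1, 9, 0)
notified = {
    "bank_a": datetime(2023, 3, 1, 9, 40),
    "bank_b": datetime(2023, 3, 1, 11, 5),
}

# Per-bank notification delay in minutes, plus the worst case to report.
delays_min = {b: (t - discovered).total_seconds() / 60 for b, t in notified.items()}
worst_delay = max(delays_min.values())
```

Reporting the worst-case delay (rather than the average) keeps the metric honest about the slowest participant, which is usually what the goal actually cares about.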

To determine the appropriate metrics, several factors should be taken into account:

  • Metrics must be aligned with the primary goals and operational objectives
  • Metrics should assist in the decision-making process
  • Metrics must demonstrate their purpose and value to both internal operations and external stakeholders.
  • Metrics should be realistically achievable in terms of data collection, data accuracy, and reporting.
  • Metrics must also meet the criteria of the SMART (Specific, Measurable, Actionable, Realistic, Time-based) model.
  • Ideally, metrics should be automated to receive and analyze current values in order to visualize them as quickly as possible.
Developing an incident response playbook
https://securelist.com/developing-an-incident-response-playbook/109145/ (Thu, 23 Mar 2023)

An incident response playbook is a predefined set of actions for addressing a specific security incident, such as a malware infection, a violation of security policies, a DDoS attack, etc. Its main goal is to enable a large enterprise security team to respond to cyberattacks in a timely and effective manner. Such playbooks help optimize SOC processes and are a major step toward SOC maturity, but they can be challenging for a company to develop. In this article, I want to share some insights on how to create the (almost) perfect playbook.

Imagine your company is under a phishing attack, the most common attack type. How many and what exact actions should the incident response team take to curb the attack? The first steps would be to find out whether an adversary is present and how the infrastructure was penetrated (through an infected attachment or via an account compromised using a fake website). Next, we want to investigate what is going on within the incident (whether the adversary persists using scheduled tasks or startup scripts) and execute containment measures to mitigate risks and reduce the damage caused by the attack. All this has to be done in a prompt, calculated, and precise manner, with the precision of a chess grandmaster, because the stakes are high when it comes to technological interruptions, data leaks, and reputational or financial losses.

Why defining your workflow is a vital prestage of playbook development

Depending on the organization, the incident response process will comprise different phases. Here I will consider the NIST incident response life cycle, one of the most widespread, which is relevant to most large industries, from oil and gas to the automotive sector.

The scheme includes four phases:

  • preparation,
  • detection and analysis,
  • containment, eradication, and recovery,
  • post-incident activity.

All the NIST cycle phases (or any other incident response workflows) can be broken down into “action blocks”. In turn, the latter can be combined depending on the specific attack for a timely and efficient response. Every “action” is a simple instruction that an analyst or an automated script must follow in case of an attack. At Kaspersky, we describe an action as a building block of the form: <a subject> does <an action> on <an object> using <a tool>. This building block describes how a response team or an analyst (<a subject>) will perform a special action (<an action>) on a file, account, IP address, hash, registry key, etc. (<an object>) using systems with the functionality to perform that action (<a tool>).

Defining these actions at each phase of the company’s workflow helps to achieve consistency and create scalable and flexible scenarios, which can be promptly modified to accommodate changes in the infrastructure or any of the conditions.
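To make the building-block idea concrete, here is a minimal Python sketch that models an action as a small data structure and combines several actions into a phase. This is purely illustrative, not part of any real playbook tooling; all subjects, objects, and tool names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One playbook building block: <subject> does <action> on <object> using <tool>."""
    subject: str  # who performs the step (analyst, SOAR script)
    verb: str     # what is done (delete, isolate, block)
    obj: str      # what it is done to (file, host, account)
    tool: str     # system with the functionality to do it (EDR, firewall)

    def render(self) -> str:
        return f"{self.subject} does {self.verb} on {self.obj} using {self.tool}"

# A phase is an ordered list of such blocks; phases combine into a playbook.
containment = [
    Action("SOC analyst", "isolate", "infected host", "EDR"),
    Action("SOAR script", "disable", "compromised account", "Active Directory"),
]

for step in containment:
    print(step.render())
```

Because each block is independent, swapping a tool or an object (for example, when the infrastructure changes) only touches one field rather than the whole scenario.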

An example of a common response action

1. Be prepared to process incidents

The first phase of any incident response playbook is devoted to the Preparation phase of the NIST incident response life cycle. Usually the preparation phase includes many different steps such as incident prevention, vulnerability management, user awareness, malware prevention, etc. I will focus on the step involving playbooks and incident response. Within this phase it is vital to define the alert field set and its visual representation. For the response team’s convenience, it is a good idea to prepare different field sets for each incident type.

A good practice before starting is to define the roles specific to the type of incident, as well as the escalation scenarios, and to designate the communication tools that will be used to contact the stakeholders (email, phone, instant messenger, SMS, etc.). Additionally, the response team has to be provided with adequate access to security and IT systems, analysis software, and resources. For a timely response and to avoid human error, automations and integrations that can be launched by the security orchestration, automation and response (SOAR) system need to be developed and implemented.

2. Create a comfortable track for investigation

The next important phase is Detection, which involves collecting data from IT systems, security tools, public information, and people inside and outside the organization, and identifying precursors and indicators. The main thing to be done during this phase is configuring the monitoring system to detect specific incident types.

In the Analysis phase, I would like to highlight several blocks: documentation, triage, investigation, and notification. Documentation helps the team define the fields for analysis and how to fill them in once an incident is detected and registered in the incident management system. That done, the response team moves on to triage to perform incident prioritization, categorization, false positive checks, and searches for related incidents. The analyst must make sure that the collected incident data matches the rules configured to detect the specific suspicious behavior. If the incident data and the rule/policy logic do not match, the incident may be tagged as a false positive.

The main part of the analysis phase is investigation, which comprises logging, asset and artifact enrichment, and incident scope forming. When in research mode, the analyst should be able to collect all the data about the incident to identify patient zero and the entry point, that is, to learn how unauthorized access was obtained and which host/account was compromised first. This is important because it helps to properly contain the cyberattack and prevent similar ones in the future. By collecting incident data, one gets information about specific objects (assets and artifacts such as hostnames, IP addresses, file hashes, URLs, and so on) relating to the incident, and can extend the incident scope with them.
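The scope-extension logic can be sketched in a few lines of Python: each observed object either adds a new affected asset or a new artifact for later enrichment. The hosts, artifact types, and values below are purely illustrative.

```python
# Incident scope: affected assets (hosts) plus artifacts queued for enrichment.
incident_scope = {"hosts": {"WS-042"}, "artifacts": set()}

# Hypothetical observations collected during investigation:
# (host where the object was seen, artifact type, artifact value).
observations = [
    ("WS-042", "hash", "d41d8cd98f00b204e9800998ecf8427e"),
    ("WS-042", "ip", "203.0.113.77"),
    ("SRV-DB1", "url", "http://bad.example/payload"),
]

for host, kind, value in observations:
    incident_scope["hosts"].add(host)               # new affected asset
    incident_scope["artifacts"].add((kind, value))  # artifact to enrich later

print(sorted(incident_scope["hosts"]))
```

Each pass over new observations may surface more hosts, which in turn yield more artifacts; in practice this loop runs until the scope stops growing.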

Once the incident scope is extended, the analyst can enrich assets and artifacts using data from Threat Intelligence resources or a local system holding inventory information, such as Active Directory, IDM, or CMDB. Based on the information about the affected assets, the response team can measure the risk to make the right choice of further actions. Everything depends on how many hosts, users, systems, business processes, or customers have been affected, and there are several ways to escalate the incident. For a medium risk, only the SOC manager and certain administrators must be notified to contain the incident and resolve the issue. In a critical risk case, however, the response team must notify the crisis team, the HR department, or the regulatory authority.

The last component of the analysis phase is notification: every stakeholder must be notified of the incident in a timely manner, so the system owner can step in with effective containment and recovery measures.
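A minimal sketch of such risk-based notification, assuming a simple three-level escalation matrix; the stakeholder lists mirror the examples above but are otherwise hypothetical.

```python
# Hypothetical escalation matrix: who must be notified at each risk level.
ESCALATION = {
    "low": ["on-duty analyst"],
    "medium": ["SOC manager", "system administrators"],
    "critical": ["crisis team", "HR department", "regulatory authority"],
}

def notify_list(risk: str) -> list[str]:
    """Return the stakeholders to notify for a given incident risk level."""
    try:
        return ESCALATION[risk]
    except KeyError:
        raise ValueError(f"unknown risk level: {risk!r}")

print(notify_list("medium"))
```

Encoding the matrix as data rather than prose makes it easy for a SOAR system to trigger notifications automatically once the risk level is set.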

Detection and Analysis phase actions to analyze the incident

3. Containment is one of the most important phases to minimize incident consequences

The next big part consists of the Containment, Eradication, and Recovery phases. The main goal of containment is to keep the situation under control after an incident has occurred. Based on the incident severity and the possible damage, the response team should know the proper set of containment measures.

Following the prestage where workflows were defined, we now have a list of different object types and possible actions that can be completed using our tool stack. So, with a list of actions in hand, we just need to choose the proper measures based on impact. This stage mostly defines the final damage: the smoother and more precise the actions the playbook suggests for this phase, the prompter our response will be to block the destructive activity and minimize the consequences. During the containment process, the analyst performs a number of different actions: deletes malicious files, prevents their execution, isolates network hosts, disables accounts, scans disks with the help of security software, and more.

The eradication and recovery phases are similar and consist of procedures meant to put the system back into operation. The eradication procedures include cleaning up all traces of the attack, such as malicious files and created scheduled tasks and services, and depend on what traces were left following the intrusion. During the recovery process, the response team should simply return the system to a “business as usual” state. Like eradication, recovery is optional, because not every incident impacts the infrastructure. Within this phase, we perform certain health check procedures and revoke changes that were made during the attack.

Incident containment steps and recovery measures

4. Lessons learned, or required post-incident actions

The last playbook phase is Post-incident activity, or lessons learned. This phase focuses on how to improve the process. To simplify the task, we can define a set of questions to be answered by the incident response team. For example:

  • How well did the incident response team manage the incident?
  • What information was the first to be required?
  • Could the team have done a better job sharing the information with other organizations/departments?
  • What could the team do differently next time if the same incident occurred?
  • What additional tools or resources are needed to help prevent or mitigate similar incidents?
  • Were there any wrong actions that had caused damage or inhibited recovery?

Answering these questions will enable the response team to update the knowledge base, improve the detection and prevention mechanism, and adjust the next response plan.

Summary: components of a good playbook

To develop a cybersecurity incident response playbook, we need to figure out the incident management process with focus on phases. As we go deeper into the details, we look for tools/systems to help us with the detection, investigation, containment, eradication, and recovery phases. Once we know our set of tools, we can define the actions that can be performed:

  • logging;
  • enriching the inventory information or telemetry of affected assets or reputation of external resources;
  • incident containment through host isolation, preventing malicious file execution, URL blocking, termination of active sessions, or disabling of accounts;
  • cleaning up the traces of intrusion by deleting remote files, deleting suspicious services, or scheduled tasks;
  • recovering the system’s operational state by revoking changes;
  • formalizing lessons learned by creating a new article in the local knowledge base for later reference.

Additionally, we want to define responsibilities within the response team, for each team member must know what his or her mission-critical role is. Once the preparation is done, we can begin developing the procedures that will form the playbook. As a general rule, every procedure or playbook building block looks like “<a subject> does <an action> on <an object> using <a tool>”, and now that all subjects, actions, objects, and tools have been defined, it is fairly easy to combine them into procedures and design the playbook. And of course, keep your response plan and its phases in mind and stick to them.

Business on the dark web: deals and regulatory mechanisms https://securelist.com/dark-web-deals-and-regulations/109034/ https://securelist.com/dark-web-deals-and-regulations/109034/#respond Wed, 15 Mar 2023 10:00:35 +0000 https://kasperskycontenthub.com/securelist/?p=109034

Download the full version of the report (PDF)

Hundreds of deals are struck on the dark web every day: cybercriminals buy and sell data, provide illegal services to one another, hire other individuals to work as “employees” with their groups, and so on. Large sums of money are often on the table. To protect themselves from significant losses, cybercriminals use regulatory mechanisms, such as escrow services (aka middlemen, intermediaries, or guarantors), and arbitration. Escrow services control the fulfillment of agreements and reduce the risks of fraud in nearly every type of deal; arbiters act as a kind of court of law for cases where one of the parties of the deal tries to deceive the other(s). The administrators of the dark web sites, in turn, enforce arbiters’ decisions and apply penalties to punish cheaters. Most often, these measures consist of blocking, banning, or adding offenders to “fraudster” lists available to any member of the community.

Our research

We have studied publications on the dark web about deals involving escrow services for the period from January 2020 through December 2022. The sample includes messages from international forums and marketplaces on the dark web, as well as from publicly available Telegram channels used by cybercriminals. The total number of messages mentioning the use of an escrow agent in one way or another amounted to more than one million, of which almost 313,000 messages were published in 2022.

Dynamics of the number of messages on shadow sites mentioning escrow services in 2022. Source: Kaspersky Digital Footprint Intelligence (download)

We also found and analyzed the rules of operating escrow services on more than ten popular dark web sites. We found that the rules and procedures for conducting escrow-protected transactions were almost the same across the various shadow platforms, and that transactions involving escrow services followed a typical pattern.

Besides the posts relating to escrow services, we analyzed those relating to arbitration and dispute settlement. We found that the format for arbitration appeals was also standardized. It usually included information about the parties, the value of the deal, a brief description of the situation, and the claimant’s expectations. In addition, parties sent their evidence privately to the appointed arbiter.

What we learned about dark web deal regulation

  • About half of the messages that mention the use of an escrow agent in one way or another in 2022 were posted on a platform specializing in cashing out and associated services.
  • Cybercriminals resort to escrow services—provided by escrow agents, intermediaries who are not interested in the outcome of the deal—not just for one-time deals, but also when looking for long-term partners or hiring “employees”.
  • These days, dark web forums create automated escrow systems to speed up and simplify relatively typical deals between cybercriminals.
  • Any party may sabotage the deal: the seller, the buyer, the escrow agent, and even third parties using fake accounts to impersonate official representatives of popular dark web sites or escrow agents.
  • The main motivation for complying with an agreement and playing fair is the party’s reputation in the cybercriminal community.
  • A deal may involve up to five parties: the seller, the buyer, the escrow agent, the arbiter, and the administrators of the dark web site. Moreover, further arbiters may be involved if a party is not satisfied with the appointed arbiter’s decision and tries to appeal to another.

The reasons to learn how business works on the dark web

Understanding how the dark web community operates, how cybercriminals interact with one another, what kinds of deals there are, how they are made, and what roles exist in them, is important when searching for information on the dark web and subsequently analyzing the data to identify possible threats to companies, government agencies, or certain groups of people. It helps information security experts find information faster and more efficiently without revealing themselves.

Today, regular monitoring of the dark web for various cyberthreats (both attacks in the planning stages and incidents that have already occurred, such as compromise of corporate networks or leakage of confidential documents) is essential for countering threats in time and mitigating the consequences of fraudulent or malicious activities. As the saying goes, forewarned is forearmed.

Business on the dark web: deals and regulatory mechanisms — download the full version of the report (English, PDF)

IoC detection experiments with ChatGPT https://securelist.com/ioc-detection-experiments-with-chatgpt/108756/ https://securelist.com/ioc-detection-experiments-with-chatgpt/108756/#comments Wed, 15 Feb 2023 10:00:53 +0000 https://kasperskycontenthub.com/securelist/?p=108756

ChatGPT is a groundbreaking chatbot powered by the neural network-based language model text-davinci-003 and trained on a large dataset of text from the Internet. It is capable of generating human-like text in a wide range of styles and formats.

ChatGPT can be fine-tuned for specific tasks, such as answering questions, summarizing text, and even solving cybersecurity-related problems, such as generating incident reports or interpreting decompiled code. Apparently, attempts have been made to generate malicious objects, such as phishing emails, and even polymorphic malware.

It is common for security and threat researchers to publicly disclose the results of their investigations (adversary indicators, tactics, techniques, and procedures) in the form of reports, presentations, blog articles, tweets, and other types of content.

Therefore, we initially decided to check what ChatGPT already knows about threat research and whether it can help with identifying simple, well-known adversary tools like Mimikatz and Fast Reverse Proxy, and spotting the common renaming tactic. The output looked promising!

What about classic indicators of compromise, such as well-known malicious hashes and domains? Unfortunately, during our quick experiment, ChatGPT was not able to produce satisfying results: it failed to identify the well-known hash of WannaCry (hash: 5bef35496fcbdbe841c82f4d1ab8b7c2).

For various APT1 domains, ChatGPT produced a list of mostly the same legitimate domains (or is it that we may not know something about these domains?), even though it had provided descriptions of the APT actors1.

As for FIN71 domains, it correctly classified them as malicious, although the reason it gave was, “the domain name is likely an attempt to trick users into believing that it is a legitimate domain”, rather than there being well-known indicators of compromise.

While the last experiment, which targeted domains that mimicked a well-known website, gave an interesting result, more research is needed: it is hard to say why ChatGPT produces better results for host-based artifacts than for simple indicators like domain names and hashes. Certain filters might have been applied to the training dataset, or the questions themselves should be constructed in a different way (a problem well-defined is a problem half-solved!).

Anyway, since the responses for host-based artifacts looked more promising, we instructed ChatGPT to write some code to extract various metadata from a test Windows system and then to ask itself whether the metadata was an indicator of compromise:

Certain code snippets were handier than others, so we decided to continue developing this proof of concept manually: we filtered the output for events where the ChatGPT response contained a “yes” statement regarding the presence of an indicator of compromise, added exception handlers and CSV reports, fixed small bugs, and converted the snippets into individual cmdlets, which produced a simple IoC scanner, HuntWithChatGPT.psm1, capable of scanning a remote system via WinRM:

HuntWithChatGPT.psm1 cmdlets:

  • Get-ChatGPTAutorunsIoC: modules configured to run automatically (Autoruns/ASEP)
  • Get-ChatGPTRunningProcessesIoC: running processes and their command lines
  • Get-ChatGPTServiceIoC: service installation events (event ID 7045)
  • Get-ChatGPTProcessCreationIoC: process creation events (event ID 4688 from the Security log)
  • Get-ChatGPTSysmonProcessCreationIoC: process creation events (event ID 1 from the Sysmon log)
  • Get-ChatGPTPowerShellScriptBlockIoC: PowerShell script blocks (event ID 4104 from Microsoft-Windows-PowerShell/Operational)
  • Get-ChatGPTIoCScanResults: runs all functions one by one and generates reports
Get-ChatGPTIoCScanResults accepts the following parameters:

  • -apiKey <Object>: OpenAI API key (https://beta.openai.com/docs/api-reference/authentication)
  • -SkipWarning [<SwitchParameter>]
  • -Path <Object>
  • -IoCOnly [<SwitchParameter>]: export only indicators of compromise
  • -ComputerName <Object>: remote computer name
  • -Credential <Object>: remote computer credentials

We infected the target system with the Meterpreter and PowerShell Empire agents, and emulated a few typical adversary procedures. Upon executing the scanner against the target system, it produced a scan report enriched with ChatGPT conclusions:

Two malicious running processes were identified correctly out of 137 benign processes, without any false positives.

Note that ChatGPT provided a reason why it concluded that the metadata were indicators of compromise, such as “command line is attempting to download a file from an external server” or “it is using the “-ep bypass” flag which tells PowerShell to bypass security checks that are normally in place”.

For service installation events, we slightly modified the question to instruct ChatGPT to “think step by step”, so that it would slow down and avoid cognitive bias, as advised by multiple researchers on Twitter:

Is following Windows service name ‘$ServiceName’ with following Launch String ‘$Servicecmd’ – an indicator of compromise? Think about it step by step.
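To illustrate the approach, here is a rough Python sketch of how such a question can be templated and the model’s answers filtered for “yes” statements. The actual tool is a PowerShell module, so this is only an illustration; the service name and launch string below are made up.

```python
def build_service_prompt(service_name: str, launch_string: str) -> str:
    """Reproduce the question template used for service installation events."""
    return (
        f"Is following Windows service name '{service_name}' with following "
        f"Launch String '{launch_string}' - an indicator of compromise? "
        "Think about it step by step."
    )

def looks_like_ioc(model_answer: str) -> bool:
    """Crude filter: keep events where the model's answer contains a 'yes'."""
    return "yes" in model_answer.lower()

# Hypothetical event: a service whose launch string clears the Security log.
prompt = build_service_prompt("TlntSvr", 'cmd.exe /c "wevtutil cl Security"')
print(prompt)
print(looks_like_ioc("Yes, this launch string clears the Security log."))
```

A substring match on “yes” is obviously fragile; the point is only that the filtering step after the model call can be trivially simple.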

ChatGPT successfully identified suspicious service installations, without false positives. It produced a valid hypothesis that “the code is being used to disable logging or other security measures on a Windows system”. For the second service, it provided a conclusion about why the service should be classified as an indicator of compromise: “these two pieces of information indicate that the Windows service and launch string may be associated with some form of malware or other malicious activity and should therefore be considered an indicator of compromise”.

The process creation events in Sysmon and Security logs were analyzed with the help of the corresponding PowerShell cmdlets Get-ChatGPTSysmonProcessCreationIoC and Get-ChatGPTProcessCreationIoC. The final report highlighted some of the events as malicious:

  • ChatGPT identified the suspicious pattern in the ActiveX code: “The command line includes commands to launch a new process (svchost.exe) and terminate the current one (rundll32.exe)”.
  • It correctly characterized the lsass process dump attempt: “a.exe file is being run with elevated privileges and using lsass (which stands for Local Security Authority Subsystem Service) as its target; Finally, dbg.dmp indicates that a memory dump is being created while running a debug program”.
  • It detected the Sysmon driver unload: “the command line includes instructions to unload a system monitoring driver”.

While checking the PowerShell Script blocks, we modified the question to check not only for indicators, but also for presence of obfuscation techniques:

Is following PowerShell script obfuscated or contains indicators of compromise? ‘$ScriptBlockText’

ChatGPT not only detected obfuscation techniques, but also enumerated a few: XOR encryption, Base64 encoding, and variable substitution.

Okay, what about false positives or false negatives? Of course, this tool is not perfect and produces both.

In one example, ChatGPT did not detect credentials exfiltration through SAM registry dumping, and in another, described the lsass.exe process as possibly indicative of “malicious activity or a security risk such as malware running on the system”:

An interesting outcome of this experiment is the reduction of data in the dataset. After the adversary emulation on the test system, the number of events for the analyst to verify was reduced significantly:

System metadata                   Events in dataset   Highlighted by ChatGPT   False positives
Running Processes                 137                 2                        0
Service Installations             12                  7                        0
Process Creation Events           1033                36                       15
Sysmon Process Creation events    474                 24                       1
PowerShell ScriptBlocks           509                 1                        0
Autoruns                          1412                4                        1
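For illustration, the reduction rates implied by the table can be computed directly; the figures are taken from the table above, and the rounding is ours.

```python
# Figures from the table: events per source vs. events ChatGPT highlighted.
dataset = {
    "Running Processes": (137, 2),
    "Service Installations": (12, 7),
    "Process Creation Events": (1033, 36),
    "Sysmon Process Creation events": (474, 24),
    "PowerShell ScriptBlocks": (509, 1),
    "Autoruns": (1412, 4),
}

def reduction(total: int, highlighted: int) -> float:
    """Share of events the analyst no longer has to verify, in percent."""
    return round(100 * (1 - highlighted / total), 1)

for source, (total, flagged) in dataset.items():
    print(f"{source}: {reduction(total, flagged)}% fewer events to triage")
```

For most sources the triage workload shrinks by well over 90 percent, with service installations being the notable exception.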

Note that tests were performed on a fresh non-production system. Production will likely generate more false positives.

Conclusions

While the exact implementation of IoC scanning may not currently be a very cost-effective solution at $15-25 per host for the OpenAI API, it shows interesting interim results and reveals opportunities for future research and testing. Potential use cases that we noted during the research were as follows:

  • Systems check for indicators of compromise, especially if you still do not have an EDR full of detection rules and you need to do some digital forensics and incident response (DFIR);
  • Comparison of a current signature-based rule set with ChatGPT output to identify gaps: there could always be some technique or procedure that you as an analyst simply do not know about or have forgotten to create a signature for;
  • Detection of code obfuscation;
  • Similarity detection: feeding malware binaries to ChatGPT and asking it whether a new one is similar to the others.

A correctly asked question is half the answer—experimenting with various statements in the question and model parameters may produce more valuable results even for hashes and domain names.

Beware of false positives and false negatives that this can produce. At the end of the day, this is just another statistical neural network prone to producing unexpected results.

Download the scripts here to hunt for IoCs. ATTENTION! By using these scripts, you send data, including sensitive data, to OpenAI, so be careful and consult the system owner beforehand.

 

1 Kaspersky’s opinion on the attribution might be different from ChatGPT. We leave it for local and international law enforcement agencies to validate the attribution.

Good, Perfect, Best: how the analyst can enhance penetration testing results https://securelist.com/how-the-analyst-can-enhance-pentest/108652/ https://securelist.com/how-the-analyst-can-enhance-pentest/108652/#respond Fri, 10 Feb 2023 10:00:33 +0000 https://kasperskycontenthub.com/securelist/?p=108652

Penetration testing is something that many (of those who know what a pentest is) see as a search for weak spots and well-known vulnerabilities in clients’ infrastructure, and a bunch of copied-and-pasted recommendations on how to deal with the security holes thus discovered. In truth, it is not so simple, especially if you want a reliable test and useful results. While pentesters search for vulnerabilities and put a lot of effort into finding and demonstrating possible attack vectors, there is one more team member whose role remains unclear: the cybersecurity analyst. This professional takes a helicopter view of the target system to properly assess existing security holes and to offer the client a comprehensive picture of the penetration testing results combined with an action plan on how to mitigate the risks. In addition to that, the cybersecurity analyst formulates a plan in the business language that helps the management team, including the C-level, to understand what they are about to spend money on.

Drawing on Kaspersky’s expertise with dozens of security assessment projects, we want to reveal the details of the analyst’s role on these projects: who they are, what they do, and why projects carried out jointly by pentesters and an analyst are much more useful for clients.

Who is an analyst?

In general, an analyst is a professional who works on datasets. For example, we all know about financial analysts who evaluate the efficiency of financial management. There is more than one type of analyst in the field of information security:

  • Security Operation Center analysts work on incident response and develop detection signatures in a timely fashion;
  • Malware analysts examine malware samples;
  • Cyberthreat intelligence analysts study the behavior of various attackers, diving deeper into their tactics and techniques.

Speaking of the analyst on a security assessment project, their role is to link together the pentester, the manager, and the client. At Kaspersky, for example, an analyst contributes to nearly all security assessment projects, such as penetration testing, application security assessment, red teaming, and others. The goals and target scope of these projects can vary: from searching for vulnerabilities in an online banking application to identifying attack vectors against ICS systems and critical infrastructure assets. Team composition can change, so the analyst works with experts from many areas, enriching the project and sharing the expertise with the client.

The analyst’s role in the security assessment process

It is possible to distinguish the following stages of analyst work in the security assessment process:

  • Advanced reconnaissance,
  • Interpreting network scan results,
  • Threat identification,
  • Verification of threat modeling,
  • Visualization,
  • Vulnerability prioritization,
  • Recommendations,
  • Project follow-up.

Next, we will go over each of these steps in detail.

Advanced reconnaissance

The analyst begins by gathering information about the organization before the security testing conducted by pentesters can begin. The analyst studies public resources to learn about the business systems and external resources, and collects technical information about the target systems: whether the software was custom-built, provided by a vendor, or created from an open-source codebase, what programming language was used, and so on. They also look up known data leaks that may link to client employees and compromised corporate credentials. Often, this data is offered for sale on the dark web, and the analyst’s job is to detect these mentions and warn the client. All this information is collected to discover potential attack vectors and negotiate a project scope with the client. Potential attack vectors will then be explored by pentesters. For example, publicly available employee data can be used for social engineering attacks or to gain access to company resources, and information about the software can be used to find known vulnerabilities.

Project case: The benefits of OSINT

Here is an example from one of our security assessment projects where information-gathering by the analyst helped to identify a new attack vector. The analyst successfully used the Kaspersky Digital Footprint Intelligence service to find compromised employee credentials on the dark web. With the client’s approval, a pentester attempted to authenticate with one of the company’s external services using these credentials and found that they were valid. Given that the client’s network perimeter was highly secured and all public-facing services were properly protected with authentication, the case clearly demonstrated that even a well-hardened infrastructure can be vulnerable to OSINT and threat intelligence.

A diagram of the penetration testing project where the information gathering by the analyst helped to identify a new attack vector

Interpreting scan results

At this stage, we have agreed with the client on a project scope, and pentesters are running an instrumental network scan to identify the client’s public-facing services and open ports. In our experience, the state of network perimeter cybersecurity in most organizations is far from perfect, but due to project constraints, pentesters typically target only a small number of perimeter security flaws. The analyst examines the network scan outputs and highlights the key issues. Their goal is to gather detailed information about insufficient network traffic filtering, use of insecure network protocols, exposure of remote management and DBMS access interfaces, and other possible vulnerabilities. Otherwise, the client will not see the full picture.
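As an illustration of this kind of triage, the sketch below flags risky exposed services in raw open-port output. The port list and hosts are illustrative, not an exhaustive policy; a real analyst would work from far richer scan data.

```python
# Illustrative mapping of ports whose exposure on the perimeter an analyst
# would flag (remote management and DBMS interfaces, cleartext protocols).
RISKY_PORTS = {
    21: "FTP (cleartext protocol)",
    23: "Telnet (cleartext remote management)",
    1433: "MS SQL Server (exposed DBMS interface)",
    3306: "MySQL (exposed DBMS interface)",
    3389: "RDP (exposed remote management)",
}

def highlight(scan_results: dict[str, list[int]]) -> list[str]:
    """Turn raw open-port output into findings worth reporting."""
    findings = []
    for host, ports in scan_results.items():
        for port in ports:
            if port in RISKY_PORTS:
                findings.append(f"{host}:{port} - {RISKY_PORTS[port]}")
    return findings

print(highlight({"203.0.113.10": [443, 3389], "203.0.113.11": [80, 1433]}))
```

The value of the analyst lies in curating a mapping like this for the client’s context, so that routine scanner output becomes a prioritized list of perimeter issues.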

Threat identification

Information about all attack vectors that were successfully exploited during the test comes from the pentester; the analyst transforms it into a detailed report describing the vulnerabilities and security flaws. In the hands of the analyst, a description of each attack vector, enriched with evidence, screenshots, and collected data, turns into detailed answers to the following questions:

  • What vulnerabilities were found and how?
  • What are the conditions for exploitation?
  • Which component is vulnerable (IP address, port, script, parameter)?
  • What exploit/utility was used?
  • What was the result (data / access level / conditions for exploiting another vulnerability)?

Notably, multiple vulnerabilities pieced together may result in a single attack vector leading from zero-level access to the highest privileges.

Verification of threat modeling

After the previous stages are complete, the vulnerabilities should be grouped under categories. Below are a few examples:

  • Web application vulnerabilities (SQL injection, XSS, CSRF);
  • Access control misconfigurations (for example, excessive account privileges, use of the same local account on different hosts);
  • Patch management issues (use of software that has known vulnerabilities).
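
Grouping of this kind can be automated with a simple lookup. The sketch below is purely illustrative: the keyword-to-category mapping is an assumption, not the methodology actually used on our projects.

```python
# Illustrative grouping of findings under the categories listed above,
# using a naive keyword lookup on the finding title.
CATEGORY_KEYWORDS = {
    "Web application vulnerabilities": ["sql injection", "xss", "csrf"],
    "Access control misconfigurations": ["privilege", "local account"],
    "Patch management issues": ["outdated", "known vulnerabilities"],
}

def categorize(title):
    t = title.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in t for k in keywords):
            return category
    return "Other"

print(categorize("Outdated Apache with known vulnerabilities"))
# → Patch management issues
```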

Next, all vulnerabilities and security misconfigurations are mapped to the threats that could exploit them. Armed with knowledge of the client's business systems, the analyst can assess which critical resources a cybercriminal would gain access to in the event of an attack. Would the attacker be able to reach the client's data, gain access to employees' personal data, or perhaps to payment orders? For example, by gaining total control over the client's internal infrastructure, including critical IT systems, an attacker can disrupt the organization's business processes, while the ability to read arbitrary files on the client's servers can lead to sensitive documents being stolen.

Visualization

The analyst creates a diagram to visualize all of the pentester’s activities relating to the successful attack scenarios. This is an important time not just for the client, who clearly sees everything that happened (what vulnerabilities were exploited, which hosts were accessed, and what threats this led to), but also for the pentesters. This is because, looking at this level of detail, they can see vectors that previously went unnoticed, findings that the report missed, and attacks that can be launched in the future.

Project case: discovering an additional attack vector

During an internal penetration test, our experts found multiple vulnerabilities and obtained administrative privileges in the domain. However, while analyzing the output of the BloodHound tool, the analyst uncovered hidden relationships and stumbled on an additional attack path within the Active Directory environment: external contractors could escalate their privileges in the system because Active Directory was misconfigured. Constrained by the project scope, the pentesters did not go on to obtain contractor credentials, so they could not exploit the misconfiguration in practice. The client greatly appreciated the discovery of this new attack vector.

A project case where BloodHound tool output analysis revealed an additional attack vector

Vulnerability prioritization

When all vulnerabilities and threats have been identified, an analyst moves on to the prioritization stage. At this step, it should be decided which vulnerabilities need fixing first. The simplest solution would be to prioritize those with the highest severity level, but it is not the correct one. There are situations where a “critical” vulnerability identified in a test web application does not cause the same amount of damage as a “medium” vulnerability in a critical system. An example is a vulnerability in an online bank whose exploitation can trigger a whole chain of interconnected vulnerabilities, which would allow the attacker to steal the clients’ money.

So, the analyst looks at the overall business impact of the attack vector and the risk level of the vulnerabilities involved. Next, they prioritize the vulnerabilities, starting with the ones that pose the most severe threats but are easiest and fastest to fix, and following with those that require major changes to business processes and are the subject of strategic cybersecurity improvements. After that, the analyst arranges the list of vulnerabilities and recommendations in the order in which measures should be taken.
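
As a toy model of this logic, one could sort findings by business impact first, then severity, then effort to fix. The scoring scales and sample data below are invented for illustration only.

```python
# Hedged sketch of the prioritization described above: business impact
# outweighs raw severity, and among equal threats, quick fixes come first.
SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}
IMPACT = {"high": 3, "medium": 2, "low": 1}     # business impact of the vector
EFFORT = {"short": 1, "medium": 2, "long": 3}   # rough time to fix

vulns = [
    {"name": "Test web app XSS", "severity": "critical",
     "impact": "low", "effort": "short"},
    {"name": "Online bank vulnerability chain", "severity": "medium",
     "impact": "high", "effort": "medium"},
]

def priority(v):
    # Highest business impact first, then highest severity, then quickest fixes.
    return (-IMPACT[v["impact"]], -SEVERITY[v["severity"]], EFFORT[v["effort"]])

ordered = [v["name"] for v in sorted(vulns, key=priority)]
print(ordered)   # the "medium" bank chain outranks the "critical" test-app XSS
```

Note how the "medium" vulnerability in a critical system correctly ends up ahead of the "critical" one in a test application, matching the reasoning above.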

Vulnerability prioritization scheme

Recommendations

The analyst prepares recommendations, which are part of project deliverables and should consider the following:

  • A timeframe for implementation:
    • Short term (<1 month): easily implemented recommendations, such as installing security updates or changing compromised credentials;
    • Medium term (<1 year): the most important changes requiring significant effort such as, for example, enforcing strict network filtering rules, restricting access to network services and applications, or revising user privileges and revoking excessive ones;
    • Long term (>1 year): changing the architecture and business processes, implementing/improving security processes and controls, such as, for example, developing a robust procedure for periodical updates.
  • The client’s approach to certain systems and business processes;
  • Industry best practices and cybersecurity frameworks.

Project follow-up

At the final stage, the analyst provides three levels of project deliverables:

  • An executive overview for C-level managers and decision-makers who represent the client.
  • Technical details of the security assessment for the client’s IS team, SOC officers, IT specialists, and other employees.
  • Machine-readable results that can be processed automatically or used in the client’s cybersecurity products.
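
The third deliverable level could look something like the sketch below: a JSON export that the client's tooling can ingest. The schema is our own illustration, not a real deliverable format.

```python
# A minimal sketch of machine-readable project results.
import json

findings = [
    {"id": "VULN-001", "title": "SQL injection", "severity": "high",
     "component": "203.0.113.20:443",
     "recommendation": "Use parameterized queries"},
]

report = {"project": "External pentest", "findings": findings}
machine_readable = json.dumps(report, indent=2)
print(machine_readable)

# Round-trip check: the export can be parsed back by the client's tools.
assert json.loads(machine_readable)["findings"][0]["id"] == "VULN-001"
```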

Three levels of project deliverables

Conclusions

In summary of the above, the main advantages of having an analyst on a security assessment project, as evidenced by our experience, are as follows:

  • The ability to collect and analyze a large amount of data, whether found in external sources or received from pentesters;
  • Pentester time savings: the testers are freed from compiling the final results and reports;
  • Collaboration with the client’s SOC or blue team to check whether the attacks were noticed, and to capture the defenders’ actions and the timing of the attacks;
  • Visualization of the test results and translation of these into the business language.

Last but not least, the analyst provides a detailed mitigation plan including short-term and long-term recommendations, and actions required for mitigating the discovered threats. As part of our security assessment services, we give clients an overview of cybersecurity risks specific to their organizations (industry, infrastructure, and goals), and advice on hardening security and mitigating future threats.

Come to the dark side: hunting IT professionals on the dark web
https://securelist.com/darknet-it-headhunting/108526/ | Mon, 30 Jan 2023 10:00:30 +0000

The dark web is a collective name for a variety of websites and marketplaces that bring together individuals willing to engage in illicit or shady activities. Dark web forums contain ads for selling and buying stolen data, offers to code malware and hack websites, posts seeking like-minded individuals to participate in attacks on companies, and many more.

Just as any other business, cybercrime needs labor. New team members to participate in cyberattacks and other illegal activities are recruited right where the business is done – on the dark web. We reviewed job ads and resumes that were posted on 155 dark web forums from January 2020 through June 2022 and analyzed those containing information about a long-term engagement or a full-time job.

This post covers the peculiarities of this kind of employment, terms, candidate selection criteria, and compensation levels. Further information, along with an analysis of the most popular IT jobs on the dark web, can be found in the full version of the report.

Key outcomes

Our analysis of the dark web job market found:

  • The greatest number of ads were posted in March 2020, which was likely related to the outbreak of the COVID-19 pandemic and the ensuing changes in the structure of the job market.
  • The major dark web employers are hacker teams and APT groups looking for those capable of developing and spreading malware code, building and maintaining IT infrastructure, and so on.
  • Job ads seeking developers are the most frequent ones, at 61% of the total.
  • Developers also topped the list of the best-paid dark web IT jobs: the highest advertised monthly salary figure we saw in an ad for a developer was $20,000.
  • The median levels of pay offered to IT professionals varied between $1,300 and $4,000.
  • The highest median salary of $4,000 could be found in ads for reverse engineers.

The dark web job market

Most dark web employers offer semi-legal and illegal jobs, but there are ads with potentially legal job offers that comply with national laws. An example is creating IT learning courses.

Sketchy employment arrangements can border on the illegal and sometimes go against the law. An example of a dubious job is selling questionable drugs for profit on fraudulent websites.

Dirty jobs are illegal and often present a criminal offense. An individual engaged in these can be prosecuted and jailed if caught. Fraudulent schemes or hacking websites, social network accounts and corporate IT infrastructure all qualify as dirty jobs.

Offers like that come from hacker groups, among others. Cybercrooks need a staff of professionals with specific skills to penetrate the infrastructure of an organization, steal confidential data, or encrypt the system for subsequent extortion.

Attack team coordination diagram

People may have several reasons for going to a dark web site to look for a job. Many are drawn by expectations of easy money and large financial gain. Most times, this is only an illusion. Salaries offered on the dark web are seldom significantly higher than those you can earn legally. Moreover, the level of compensation depends on your experience, talent, and willingness to invest your energy into work. Nevertheless, unhappy with their pay, a substantial percentage of employees in the legitimate economy quit their jobs to find similar employment on the dark web market. Changes on the market, layoffs, and pay cuts, too, often prompt them to look for a job on cybercrime websites.

Other factors include the absence of the usual candidate requirements, such as higher education, a military service record, a clean criminal record, and so on. Legal age is the main requirement that many ads have in common. Dark web jobs look attractive to freelancers and remote workers because there is no office they have to show up in, and they can remain digital nomads. Candidates are also attracted by the large degree of freedom offered on the dark web: you can take as many days off as you want, there is no dress code, and you are free to choose your own schedule, tasks, and scope of work.

Another reason why people look for a job on the dark web is poor awareness of possible consequences or a flippant attitude to those. Working with underground teams, let alone cybercrime groups, poses serious risks: members can be deanonymized and prosecuted, and even getting paid is not a guarantee.

Example of a resume posting

Dark web job market statistics

To analyze the state of the dark web job market in January 2020 through June 2022, we gathered statistics on messages that mentioned employment, posted on 155 dark web forums. Messages were selected from forum sections on any jobs, not necessarily those in IT.

A total of roughly 200,000 employment-related ads were posted on the dark web forums during the period in question. The largest number of these, or 41% of the total, were posted in 2020. Posting activity peaked in March 2020, possibly caused by a pandemic-related income drop experienced by part of the population.
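
Aggregating posting activity by quarter, as in the chart below, amounts to bucketing timestamped ads. A minimal sketch (the sample dates are invented, not real data):

```python
# Illustrative aggregation of ad-posting dates into quarterly buckets.
from collections import Counter
from datetime import date

ads = [date(2020, 1, 15), date(2020, 3, 2), date(2020, 3, 30), date(2021, 6, 7)]

def quarter(d):
    """Map a date to a label like '2020-Q1'."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

by_quarter = Counter(quarter(d) for d in ads)
print(by_quarter)   # counts of ads per quarter, most frequent first
```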

Ad posting statistics by quarter, Q1 2020–Q2 2022 (download)

The impact of the pandemic was especially noticeable on the CIS markets.

The resume of a candidate who has found himself in a pinch (1)

See translation

Guy over 25, no addictions, into sports. Quarantined without cash, looking for rewarding job offers, ready to cooperate.

The resume of a candidate who has found himself in a pinch (2)

Some of those living in the region suffered a reduction in income, were put on mandatory furlough, or lost their jobs altogether, which subsequently resulted in rising unemployment (article in Russian).

Tags on an ad offering a job amid the crisis

See translation

how to earn money amid crisis
make some cash during pandemic
make money during coronavirus
coronavirus updates
pandemic jobs
jobs amid crisis

Some jobseekers lost all hope of finding steady, legitimate employment and began to search on dark web forums, spawning a surge of resumes there. As a result, March 2020 saw the highest number of ads from both prospective employers and jobseekers: 6% of the total.

Posting dynamics on dark web job forums in 2020–2022 (download)

Ads seeking jobs were significantly fewer than those offering them: jobseekers' postings accounted for just 17% of all the employment-related ads we found. The statistics suggest that jobseekers respond to job ads posted by prospective employers more frequently than they post resumes.

Resumes posted on dark web forums target diverse areas of expertise and job descriptions: from moderating Telegram channels to compromising corporate infrastructure. This study focused on IT jobs specifically. We analyzed 867 ads that contained specified keywords, 638 of the ads being vacancy postings and 229 being resumes.

The most in-demand professionals on the dark web were developers: this specialization accounted for 61% of total ads. Attackers (pentesters) were second, with 16%, and designers came third, with 10%.
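
Splitting ads into specializations by keyword, as described above, can be sketched as follows. The keyword lists and sample ads are illustrative assumptions, not the actual study methodology.

```python
# Hedged sketch of keyword-based classification of job ads by specialization.
SPECIALIZATIONS = {
    "developer": ["developer", "coder", "programmer"],
    "attacker": ["pentester", "pentest"],
    "designer": ["designer"],
}

def classify(ad_text):
    """Return the first specialization whose keywords match the ad text."""
    text = ad_text.lower()
    for spec, keywords in SPECIALIZATIONS.items():
        if any(k in text for k in keywords):
            return spec
    return "other"

ads = ["Seeking malware developer", "WIN pentester wanted", "Logo designer needed"]
print([classify(a) for a in ads])   # ['developer', 'attacker', 'designer']
```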

Distribution of dark web job ads across specializations (download)

Selection criteria

The methods of selecting IT professionals on the dark web market are much the same as those used by legitimate businesses. Employers similarly look for highly skilled workforce, so they seek to select the best candidates.

Selection criteria in dark web job postings. The percentages presented were calculated out of the total number of ads that clearly stated selection criteria (download)

Job postings often mention test assignments, including paid ones, as well as interviews, probation periods, and other selection methods.

Job posting that offers applicants a test assignment

See translation

PM us your resume if you’re interested. We’ll send the suitable candidates a paid test assignment (20,000 rub in BTC at current rate).

One job ad even contained a detailed description of the employee selection process. An applicant had to undergo several rounds of screening, test assignments involving encryption of malware executables and evasion of protective measures, and a probation period.

Example of a candidate selection flow

See translation

Candidate selection procedure:

  1. We give you a test DLL to encrypt. Must be a FUD scantime encrypt with max 3 minor AV runtime detects.
  2. If step 1 completed successfully, you get a live file to encrypt. Must be a FUD scantime encrypt, stay clean for 24 hours (no d/l)
  3. If step 2 completed successfully, we put you on a trial period of two weeks for $40/encrypt. We expect a functional FUD DLL/EXE by 1 PM Moscow time every Monday through Friday.
  4. If trial completed successfully, you were regularly online, doing cleanups, and you showed yourself to be a painstaking and competent professional, we hire you full-time for $800–$1500/week.

The absence of addictions, such as drugs and alcohol, is one of the requirements peculiar to the recruitment process on the dark web.

Job posting saying that only those free from addictions can be selected

See translation

Teamwork skills, stable connection, no alcohol or drug addictions

Employment terms

Employers on the dark web seek to attract applicants by offering favorable terms of employment, among other things. The most frequently mentioned advantages included remote work (45%), full-time employment (34%), and flextime (33%). That being said, remote work is a necessity rather than an attractive offer on the dark web, as anonymity is key in the world of cybercrime. You can also come across paid time off, paid sick leaves, and even a friendly team listed among the terms of employment.

Employment terms in dark web job postings. The percentages presented were calculated out of the total number of ads that clearly stated the terms of employment (download)

Cybercrime groups, who look for the most highly skilled professionals, offer the best terms, including prospects of promotion and incentive plans.

Employment terms in a dark web job posting

See translation

Terms:

  • Paychecks on time. Pay rate ($2000 and up) to be fixed after successful test assignment and interview
  • Fully REMOTE, 5 days/week, Sat and Sun off.
  • PTO
  • NO formal employment contract
  • We offer a continuous increase in pay: with each successful assignment, you get a raise and an instant bonus.

These groups may conduct performance reviews, as Conti did. The reviews may result in the employee receiving a bonus or being fined for low productivity. On top of that, some underground organizations run employee referral programs, offering bonuses to those who have successfully engaged new workers.

Similarly to the legitimate job market, dark web employers offer various work arrangements: full time, part time, traineeships, business relationships, partnerships, or team membership.

Job posting that suggests cooperation

The absence of a legally executed employment contract is the key differentiator between the dark web and the legitimate job market. This is not to say that you never come across perfectly legal job ads on the dark web. For instance, we discovered several ads seeking a developer for a well-known Russian bank and mentioning a legally executed contract and voluntary health insurance.

Legitimate job ad found on the dark web

See translation
  • Work for a top 50 Russian bank.
  • Formal employment contract
  • VHI starting from first month of employment
  • Work schedule: 5/2, remote work
  • Compensation levels: remote developer in Samara: ₽125,000 gross + 10% annual bonus; onsite developer in Penza: ₽115,000 gross + 10% annual bonus
  • Professional team, friendly environment
  • Challenging task and projects, chance to make a difference

Levels of compensation

We analyzed more than 160 IT job ads that explicitly stated a salary[1]. When reviewing the statistics, it is worth bearing in mind that dark web employers typically state rough salary figures. Many employers provide a pay range or a lower limit.

Job posting that indicates a ballpark level of compensation

Your level of compensation may grow with time depending on how much effort you invest, your contribution, and how successful the business is on the whole. Compensation is typically indicated in dollars, but in practice work is often paid for in cryptocurrency.

The diagram below shows the minimum and maximum levels of compensation for selected IT jobs.

IT pay ranges from dark web job ads (download)

The most highly paid job at the time of the study was coding, commanding a maximum of $20,000 per month. However, the lower limit there was the smallest: just $200.

Example of offer with the highest salary for developers

The median monthly salary of a reverse engineer was also notably high at $4,000.

Job Median monthly salary
Attacker $2,500
Developer $2,000
Reverse engineer $4,000
Analyst $1,750
IT administrator $1,500
Tester $1,500
Designer $1,300

Median monthly IT salaries on the dark web
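
Medians like those in the table above could be computed from per-ad salary figures in a couple of lines. The sample numbers below are invented for illustration and are not the study's underlying data.

```python
# Sketch of computing median monthly salaries per job from per-ad figures.
from statistics import median

salaries_by_job = {
    "Reverse engineer": [3000, 4000, 6000],
    "Developer": [200, 2000, 20000],
}

medians = {job: median(vals) for job, vals in salaries_by_job.items()}
print(medians)   # {'Reverse engineer': 4000, 'Developer': 2000}
```

The median is the natural choice here: unlike the mean, it is not skewed by a few outlier ads promising $20,000 a month.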

Some dark web job ads promised levels of compensation much higher than the figures quoted above, but these included bonuses and commissions from successful projects, such as extorting a ransom from a compromised organization.

Not every job posting made it into the compensation statistics, as some looked suspicious or openly fraudulent.

For example, one job ad on the dark web promised up to $100,000 per month to a successful pentesting candidate. Interestingly enough, the work was described as "legal."

Job posting on a dark web forum offering an inflated compensation figure

See translation

search, employee / Seeking website pentesters ХХЕ, XSS, SQL
Looking for a person who knows ХХЕ, XSS, SQL attacks inside and out to pentest our sites for vulnerabilities.
Fully legal
Compensation up to $100,000/mo.
PM for details

Besides the usual hourly, daily, weekly, and monthly rates, there are other forms of compensation that serve as the base pay or complement it. You could come across ads offering payment per completed task, such as hacking a website or creating a phishing web page.

Various performance-dependent commissions were often promised in addition to the salary. For example, a pentester could be promised a monthly salary of $10,000 along with a percentage of the profits received from selling access to a compromised organization's infrastructure or confidential data, extortion, and other ways of monetizing the hack.

Example of a job ad that offered a salary and a performance bonus

See translation

Seeking WIN pentester to join our team.

  1. Experience with Cobalt Strike, MSF, etc. required
  2. Commitment to work is a must
  3. No addictions

Compensation up to $10,000/mo. + bonus.
PM if interested.

Candidates were often offered commission only. In several cases, no compensation of any kind was provided: applicants were invited to work pro bono, for a promised commission, or for a share of future profits.

Example of an unpaid job ad

Takeaways

The dark web is a versatile platform that cybercriminals not only use for striking deals and spreading illegal information, but also for hiring members to their teams and groups.

The data provided in this report shows that demand for IT professionals is fairly high on cybercrime websites, with new team members often being salaried employees. It is interesting, too, that cybercrime communities use the same methods for recruiting new members as legitimate organizations, and job ads they post often resemble those published on regular recruitment sites.

The ads we analyzed also suggest that a substantial number of people are willing to engage in illicit or semilegal activities despite the accompanying risks. In particular, many turn to the shadow market for extra income in a crisis. Thus, the number of resumes on dark web sites surged as the pandemic broke out in March 2020. Although dark web jobs could be expected to pay higher than legitimate ones, we did not detect a significant difference between the median levels of IT professionals’ compensation in the cybercriminal ecosystem and the legitimate job market.

Software development proved to be the most sought-after skill, with 61% of all ads seeking developers. This could suggest that the complexity of cyberattacks is growing. The higher demand for developers could be explained by a need to create and configure new, more complex tools.

It is worth noting that the risks associated with working for a dark web employer still outweigh the benefits. The absence of a legally executed employment contract relieves employers of any responsibility. A worker could be left unpaid, framed or involved in a fraudulent scheme.

Nor should one forget the risk of being prosecuted, put on trial, and imprisoned for unlawful activities. The risks of cooperating with hacker groups are especially high, as deanonymizing their members is a priority for cybercrime investigation teams. The group may be exposed sooner or later, and its members may face jail time.

To inquire about threat monitoring services for your organization, please contact us at dfi@kaspersky.com.

To get the full version of the report, please fill in the form below. If you cannot see the form, try opening this post in Chrome with all script-blocking plugins off.


[1] Salary levels expressed in Russian rubles were converted using the effective rate at the time of the study: 75 rubles per dollar.

How much security is enough?
https://securelist.com/how-much-security-is-enough/108434/ | Mon, 09 Jan 2023 10:38:33 +0000

According to a prominent Soviet science fiction writer, beauty is a fine line, a razor’s edge between two opposites locked in a never-ending battle. Today, we would put it less poetically as an ideal compromise between contradictions. An elegant, or beautiful, design is one that allows reaching that compromise.

As an information security professional, I like elegant designs — all the more so because trade-off is a prerequisite for an information security manager’s success: in particular, trade-off between the level of security and its cost in the most practical, literal sense. A common perception in the infosec community is that there can never be too much security, but it is understood that “too much” security is expensive — and sometimes, prohibitively so — from a business perspective. So, where is that fine line that defines “just enough” security, how much is enough, and how does one prove this to decision-makers? This is what I want to talk about.

Mathematics and images

There is a certain language barrier between a chief information security officer (CISO) and the above-mentioned decision-makers — I will refer to them as “business” for brevity. While security professionals speak of “lateral movement” and “attack surface”, business views infosec and the IT department as a whole as costs to be minimized. While the costs of IT are visible as hardware and software, it is hard to do the same with IS, as this is a purely applied function deeply integrated with IT and hardly perceivable at a high level of abstraction. I like to describe IS as one of IT’s many properties, a criterion by which to measure the quality of a company’s information systems. Quality is commonly understood to come at a price. Theoretically, business understands that too, but it asks valid questions: why it should allocate the exact amount articulated by the CISO, and what the company would get for that money.

IS funding requests have historically been backed by all kinds of horror stories: business will hear tales of current security incidents, such as ransomware attacks or data leaks, and then be told that a certain solution can help against the aforementioned threats. These arguments are supported by stories from relevant — and occasionally, not so relevant — publications containing a description and rough estimate of the damage, along with the provider's pricing. This is only good for a start, and there is no guarantee that the approach will work again. What we are interested in is a continuously improving operational process that measures the threat landscape with a reasonable degree of objectivity, in a way that is understandable to business, and adapts the corporate system of security controls accordingly. Therefore, let us put the horror stories aside as an approach that seriously lacks in both efficiency and effectiveness, and arm ourselves with relevant parameters.

I will start by highlighting the fact that humans are not particularly good at understanding plain text. Tables work much better, and images, better yet. Therefore, I recommend that your conversation with business about the need to improve the IS management system be illustrated with colorful diagrams and images that reflect the current threat landscape and the capabilities of operational security. The way to succeed is to make sure that the slide deck shows the capabilities of operational security — or simply, the SOC — as being up to current threats.

To compare the threat landscape with SOC performance, the data must be expressed in the same units. The efficiency and effectiveness of the SOC or any other team — let alone one that has any sort of service level agreement (SLA) — are constantly measured, so it is only logical to reuse the SOC metrics for evaluating the sufficiency of security. Measuring the threat landscape is a little less straightforward. Threats should be evaluated by a large number of parameters: the more characteristics of potential attackers we evaluate, the better the chance to obtain an unbiased picture. I would like to delve into the two most obvious parameters, which are fairly easy to compute but also easy to explain without resorting to complex technical terms.

Mean time to detect an attack

Unfortunately, a complex attack is often noticed only when assessing impact, but our statistics include a fair number of mature companies that detected an attack at an earlier stage, which is favorable for our evaluation. Our analytics show that the mean detection time differs by attack scenario, but the planning of security controls should use the shortest time measured in hours.

As a consequence, the SOC is required to detect and localize the attack in time, which is normally expressed with two indicators: mean time to detect (MTTD) and mean time to respond (MTTR). Both must be less than the attacker’s mean time to reach the target, regardless of the attack type.
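
One conservative way to express this sufficiency condition in code: require detection plus response to complete before the attacker's mean time to reach the target. This is a toy sketch under that assumption; all figures are hypothetical, in hours.

```python
# Sufficiency check: can the SOC detect and contain the attack before the
# attacker reaches the target? (MTTD + MTTR vs. attacker time-to-target)
def soc_is_sufficient(mttd_hours, mttr_hours, attacker_time_to_target_hours):
    """True if mean detect + respond time beats the attacker's mean time."""
    return mttd_hours + mttr_hours < attacker_time_to_target_hours

print(soc_is_sufficient(2, 4, 8))    # True: 6 hours of SOC work vs an 8-hour attack
print(soc_is_sufficient(6, 4, 8))    # False: this SOC is too slow for the threat
```

In a real assessment, the attacker's time-to-target would come from incident statistics for the relevant attack scenarios, and the shortest observed time should drive the planning.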

Time to investigate

This is the second, equally important, attribute, which is obviously related to the duration of the attackers’ presence in the compromised infrastructure.

The SOC team must have access to this value and the resources to respond without affecting the quality of monitoring.

I believe that indicators that demonstrate our SOC’s (in)ability to detect the threat before it goes far enough to cause damage are much easier for business to understand. Combined with many other indicators, such as “our SOC’s ability to detect specific attacker techniques and tools” or “our SOC’s ability to monitor specific penetration vectors”, these help to form the most unbiased assessment of the SOC’s operational preparedness and provide better arguments for business in favor of investing in a security area.

Using sources

Once we have settled on indicators to demonstrate to business, the question arises of where to get data from. Members of operational security teams who have accumulated their own incident detection and investigation statistics will immediately respond that a review of past cases should serve as the source of indicators for assessment. The outcome of the investigation will show the attackers’ time expectations and their methods, while the SOC metrics will provide an unbiased assessment of the defenders’ efficiency and effectiveness. Both types of indicators will be directly linked to the company, rather than being abstract assessments.

As for those who have not yet accumulated statistics and experience of their own, I recommend using analytics from vendors and MSSPs. For instance, every year, we publish the DFIR team's incident analytics, which can be used as a source of a potential attacker profile, while the SOC team's analytical report will help to shape potential SOC targets. It goes without saying that the provider's statistics should be representative for the industry and country the customer operates in rather than contain all sorts of irrelevant data. External sources of data could benefit experienced employees who draw upon their own data, too. These may serve as a source of information about new threats, which are already relevant to the industry as a whole but have not yet caught the eye of the specific organization's SOC employees. In addition to that, external data will provide a basis for comparing the company's own performance with that of the service providers to reevaluate the company's ability to perform the work with in-house resources against the need for outsourcing.

Answering the question

The real cost of requisite security is the difference between attackers’ capabilities and the SOC team’s resources — provided that the former are assessed in terms of actual incidents and relevant statistics, and the latter, in terms of SOC metrics. The aforementioned MTTD and MTTR will work best, as they are easier for business to grasp than the SOC maturity model or other academic arguments. In my opinion, it is the combination of operational metrics based on both the company’s own teams’ past work and analytical reports by IS service providers that can help to achieve the right balance, resulting in the desired level of performance and efficiency at an acceptable cost in the long run, or, in a word, beauty.
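As a worked illustration of how little machinery these two metrics require, here is a minimal sketch that computes MTTD and MTTR from incident timestamps (the records and field names are invented for the example):

```python
from datetime import datetime

def mean_hours(deltas):
    """Average a list of timedeltas, expressed in hours."""
    return sum(d.total_seconds() for d in deltas) / 3600 / len(deltas)

# Hypothetical incident records: attack start, detection, and resolution times.
incidents = [
    {"start": datetime(2023, 1, 10, 9, 0),
     "detected": datetime(2023, 1, 10, 13, 0),
     "resolved": datetime(2023, 1, 11, 9, 0)},
    {"start": datetime(2023, 2, 2, 22, 0),
     "detected": datetime(2023, 2, 3, 0, 0),
     "resolved": datetime(2023, 2, 3, 10, 0)},
]

# Mean Time To Detect: from attack start to detection.
mttd = mean_hours([i["detected"] - i["start"] for i in incidents])
# Mean Time To Respond: from detection to resolution.
mttr = mean_hours([i["resolved"] - i["detected"] for i in incidents])
```

In a real SOC these timestamps would come from the incident management system, and the averages would be tracked per period and per incident severity rather than over a flat list.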

How to train your Ghidra (Securelist, Fri, 09 Dec 2022) https://securelist.com/how-to-train-your-ghidra/108272/

Getting started with Ghidra

For about two decades, being a reverse engineer meant that you had to master the ultimate disassembly tool, IDA Pro. Over the years, many other tools were created to complement or directly replace it, but only a few succeeded. Then came the era of decompilation, adding even more to the cost and raising the barrier to entry into the RE field.

Then, in 2019, Ghidra was published: a completely open-source and free tool, with a powerful disassembler and a built-in decompiler for each supported platform. However, the first release did not look anywhere close to what we reverse engineers were used to, so many of us tried and then abandoned it.

It may sound anecdotal, but the most popular answer to “Have you used Ghidra?” that I usually hear is “Yeah, tried it, but I’m used to IDA”, or “I don’t have the time to check it out; maybe later”. I was like that, too: I tried to reverse something, failed miserably, and went back to familiar tools. I would still download a newer version every now and then and try to do some work or play CTFs. Then one day, after making a few improvements to the setup and adding the missing databases, I did not go back.

So, here is my brief introduction to setting up Ghidra, and then configuring it with a familiar UI and shortcuts, so that you would not need to re-learn all the key sequences you have got used to over the years.

Disclaimer

Ghidra is a complex collection of source code with many third-party dependencies that are known to contain security vulnerabilities. There are no guarantees that the current code base is free from those or that it does not contain any backdoors. Proceed with caution, handle with care.

Building Ghidra

Of course, the easiest way to obtain Ghidra is to download the current release published on GitHub. However, the release code is months behind the master branch and will most likely be missing the latest features. So, although this is not the officially recommended approach, I suggest getting the bleeding-edge code from the master branch and building the binaries yourself. In the meantime, we are going to prepare our own build of the master branch and make it available for download.

So, let us begin. First, you need the following prerequisites:

All OSs:

Additionally, you need the platform-specific compiler for building the native binaries:

  • Windows: Microsoft Visual Studio (2017 or later; 2022 Community edition works well)
  • Linux: modern GNU Compiler Collection (9 and 12 work well)
  • macOS: Xcode build tools

Then, download the latest source code in a ZIP archive and extract it, or clone the official Git repository.

Windows build

Open the command line prompt (CMD.EXE). Set the directory containing the source code as the current one:
cd %directory_of_ghidra_source_code%

Run “bin\gradle.bat” from the Gradle directory to initialize the source tree and download all the dependencies:
gradle.bat -I .\gradle\support\fetchDependencies.gradle init

You need an active Internet connection, and it may take 5–10 minutes to download the dependencies.

In the end, the output should state “BUILD SUCCESSFUL” and clearly indicate that Visual Studio was located (this is required for the build that follows).

If there are problems, and Visual Studio, the JDK, and Gradle are properly installed, check your Internet connection. Once the initialization succeeds, issue the final command:
gradle.bat buildGhidra

It will take more time, and you may see lots of warnings printed out, but the final verdict should still be “BUILD SUCCESSFUL”.

The complete Ghidra package is written as a ZIP archive to the “build\dist” directory. To run Ghidra, extract the ZIP archive and start “ghidraRun.bat”.

At the time of writing this, Ghidra 10.3-DEV used the “Windows” UI configuration as the default one, so there was no need to reconfigure the “look and feel” option.

Linux build

Use an existing terminal window or open a new one.
Set the directory containing the source code as the current one:
cd %directory_of_ghidra_source_code%

Run “bin/gradle” from the Gradle directory to initialize the source tree and download all the dependencies:
gradle -I ./gradle/support/fetchDependencies.gradle init

You need an active Internet connection, and it may take 5–10 minutes to download the dependencies. Please note that the task may fail if your locale is different from “en_US” and GCC uses translated messages:

This may happen, for example, with the Russian locale, because the version string for GCC is translated:

As a mitigation, run gradle prefixed with “LANG=C”:
LANG=C gradle -I ./gradle/support/fetchDependencies.gradle init

In the end, the output should state, “BUILD SUCCESSFUL”.

Then, build Ghidra and all the dependencies:
gradle buildGhidra

Once the build succeeds, a ZIP archive can be located in the build/dist directory. Extract it from there.

To start Ghidra, use the “ghidraRun” shell script in the root directory of the extracted archive:

At the time of writing this, version 10.3-DEV used the “Nimbus” look and feel as the default:

To use a more familiar look and feel, switch to “Native” or “GTK+” using the “Edit -> Tool Options -> Tool” menu, and choose the relevant item from the “Swing Look And Feel” dropdown list. This is what the “Native” theme looks like on GNOME 3:

macOS build

For macOS, you need the Xcode command-line tools installed, which include substitutes for GCC and make. The simplest way to install them is to open a Terminal window and run gcc. If the tools are not installed, an installation dialog will appear; just confirm the installation and wait for the tools to download.

Once the tools are installed, the build process is identical to that in Linux.
Set the directory containing the source code as the current one:
cd %directory_of_ghidra_source_code%

Run “bin/gradle” from the Gradle directory to initialize the source tree and download all the dependencies:
gradle -I ./gradle/support/fetchDependencies.gradle init

Then, start the main build process:
gradle buildGhidra

Unzip the resulting Ghidra package from the build/dist directory and start the “ghidraRun” shell script:

On macOS, the native look and feel is the default:

Setting up the UI

To configure Ghidra, let us first create a temporary project. A project stores all the results of the analysis, like an IDB, but for several files (samples) and folders; it can also hold type databases that can be shared between different files. It can even be linked to a remote database for collaborative work, but that is far beyond our scope of just setting up.

So, let us use the “File -> New Project…” menu, type in a project name and filesystem location, and continue:

Now, we have an empty project. Let us start the CodeBrowser tool, which is the main UI for working with binaries:

This will open an empty listing window, with a few subwindows inside:

This is going to be the primary workspace in Ghidra, so let us reconfigure it to behave and look a bit closer to what we are used to.

Navigation bar
It is vertical and located on the right side of the listing by the scrollbar. To turn it on, use the “overview” button:

The rest of the options are set in the “Options” window by using the “Edit -> Tool Options…” menu.

Hex editor font
Select the following on the “Options -> ByteViewer” tab:

Hexadecimal offsets in structures
Set the following on the “Options -> Structure Editor” tab:

Shortcuts!
Default shortcuts, or “key bindings”, may be very confusing even for a seasoned reverse engineer and seem to be based on the ones used by Eclipse. You can search for shortcuts of interest and set or change them using a filter. To make the transition easier, we have prepared a prebuilt configuration with familiar shortcuts (C for code, D for data, X for xrefs, Esc for going back, etc.), which you can download from here and import:

Disassembly listing font
Choose the color and font on the “Options -> Listing Display” tab:

Compact listing of array items
These are called “elements per line” on the “Listing Fields -> Array Options” tab. Also, you may want to change the “Array Index Format” to “hex” to prevent any confusion in the listing.

One line per instruction
To achieve that, set the “maximum lines to display” to “1” in the “Options -> Listing Fields -> Bytes Field” menu. It also makes sense to check “Display in Upper Case”.

Highlight by Left Mouse Click
This is probably the most searched-for option, with the most frustrating defaults. To get values, registers, labels, and anything else highlighted with a left mouse click, set the option “Options -> Listing Fields -> Cursor Text Highlight”, “Mouse Button To Activate” to “LEFT” (the default is “MIDDLE”).

Function headers and footers
Tick the options “Flag Function Entry” and “Flag Function Exits” on the “Options -> Listing Fields -> Format Code” tab.

At the same time, uncheck the “Display Function Label” option on the “Options -> Listing Fields -> Labels Field” tab to remove an unnecessary line for each function header.

Turn off horizontal scrolling
Uncheck this option on the “Options -> Listing Fields -> Mouse” tab:

Show “normal” register names, long arrays, and strings
By default, Ghidra will display local variable names instead of registers. In some cases this may be useful, but it may also cause frustration when trying to read plain assembly.
To force Ghidra to show plain register names, uncheck the “Markup register variable references” option on the “Options -> Listing Fields -> Operands Field” tab.
It may also help to increase the “Maximum Lines to Display” to 200, and to check the “Enable word wrapping” option to make arrays and strings in the listing easier to read.

Increase the number of cross-references to display
This option can be configured on the “Options -> Listing Fields -> XREFs Field” tab:

Change the width of the address, bytes, and opcodes columns
Ghidra uses the concept of “fields” that can be moved around and reformatted via the UI. This UI feature can be activated with the button named “Edit the Listing fields” in the main listing window.

When this is activated, you can move the columns around and change their width. The configuration is saved if you choose “save the tool” when exiting CodeBrowser.

Opening a file for analysis

Now that we have set up the UI, let us start our first analysis. To analyze a file, you need to import it first. Importing copies the data into the project database, so the original file can be deleted from the file system, and the imported file can later be saved back to disk.

To import a file, use the “File -> Import File…” menu in the Ghidra project window. You will be presented with an import dialog.

To prevent any confusion, treat the “language” as the Ghidra name for a combination of the processor name, byte endianness, and compiler variety. You may need to choose it manually if the file format is not recognized or if there is no format at all.

After the file is imported, it will appear in the project window. From there, you can open it in the CodeBrowser tool to open the main listing window:

For new files, you will need to start analysis tasks manually or allow the autoanalysis process to do all the usual routine tasks of identifying code, data, functions, etc.:

The default configuration of the “analyzers”, which are separate analysis tasks, should be sufficient in most cases:

The analysis will start, and you will be presented with a CodeBrowser window gradually updating all the information:

Here are a few tips on working with CodeBrowser:

  • The “Symbol Tree” can be used to find all functions, exports including the entrypoint, and imports. Use it to find the starting points of your analysis.
  • The “Data Type Manager” contains all the types, structures, typedefs, pointers, and enums. External type libraries are loaded here, in the “arrow” menu.
  • Use the “Window” menu to discover most of the CodeBrowser functionality.
  • File segments are displayed in the “Memory Map” window. Open it with a button or via the “Window” menu.

Going further

Ghidra comes with a collection of helper scripts in Java and Python that can be located by using the “Script Manager”: use the button or open it via the “Window” menu. Also, it has a built-in interactive console for Python 2 (actually, Jython). Use the “Window -> Python” menu and explore the flat API using “dir()”:

A few more things

Currently, the vanilla Ghidra build is missing lots of Windows datatypes that are required for typical malware analysis tasks. We have prepared an extended type information database for Windows, and added FIDBs (runtime function signatures) for VS2013 and Delphi. These can be downloaded from here.

This is just the beginning

We hope that this manual will help with reconfiguring Ghidra into a more convenient and easier-to-use tool. With additional type and signature databases, it may become powerful enough for a primary RE tool, while being free and open source. Remember to come back for updates!

Indicators of compromise (IOCs): how we collect and use them (Securelist, Fri, 02 Dec 2022) https://securelist.com/how-to-collect-and-use-indicators-of-compromise/108184/

It would hardly be an exaggeration to say that the phrase “indicators of compromise” (or IOCs) can be found in every report published on Securelist. Usually, the phrase is followed by MD5 hashes[1], IP addresses and other technical data that should help information security specialists counter a specific threat. But how exactly can indicators of compromise help them in their everyday work? To find the answer, we asked three Kaspersky experts: Pierre Delcher, Senior Security Researcher in GReAT, Roman Nazarov, Head of SOC Consulting Services, and Konstantin Sapronov, Head of Global Emergency Response Team, to share their experience.

What is cyber threat intelligence, and how do we use it in GReAT?

We at GReAT are focused on identifying, analyzing and describing upcoming or ongoing, preferably unknown cyberthreats, to provide our customers with detailed reports, technical data feeds, and products. This is what we call cyber threat intelligence. It is a highly demanding activity, which requires time, multidisciplinary skills, efficient technology, innovation and dedication. It also requires a large and representative set of knowledge about cyberattacks, threat actors and associated tools over an extended timeframe. We have been doing so since 2008, benefiting from Kaspersky’s decades of cyberthreat data management, and unrivaled technologies. But why are we offering cyber threat intelligence at all?

The intelligence we provide on cyberthreats is composed of contextual information (such as targeted organization types; threat actor techniques, tactics and procedures, or TTPs; and attribution hints), detailed analysis of the malicious tools being used, as well as indicators of compromise and detection techniques. This in effect offers knowledge, enables anticipation, and supports three main global goals in an ever-growing threat landscape:

  • Strategic level: helping organizations decide how many of their resources they should invest in cybersecurity. This is made possible by the contextual information we provide, which in turn makes it possible to anticipate how likely an organization is to be targeted, and what the capabilities are of the adversaries.
  • Operational level: helping organizations decide where to focus their existing cybersecurity efforts and capabilities. No organization in the world can claim to have limitless cybersecurity resources and be able to prevent or stop every kind of threat! Detailed knowledge on threat actors and their tactics makes it possible for a given sector of activity to focus prevention, protection, detection and response capabilities on what is currently being (or will be) targeted.
  • Tactical level: helping organizations decide how relevant threats should be technically detected, sorted, and hunted for, so they can be prevented or stopped in a timely fashion. This is made possible by the descriptions of standardized attack techniques and the detailed indicators of compromise we provide, which are strongly backed by our own widely recognized analysis capabilities.

What are indicators of compromise?

To keep it practical, indicators of compromise (IOCs) are technical data that can be used to reliably identify malicious activities or tools, such as malicious infrastructure (hostnames, domain names, IP addresses), communications (URLs, network communication patterns, etc.) or implants (hashes that designate files, file paths, Windows registry keys, artifacts that are written in memory by malicious processes, etc.). While most of them will in practice be file hashes – designating samples of malicious implants – and domain names – identifying malicious infrastructure, such as command and control (C&C) servers – their nature, format and representation are not limited. Some IOC sharing standards exist, such as STIX.
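Since most file IOCs are cryptographic hashes, producing or verifying them takes only the standard library. A minimal sketch; the sample bytes are purely illustrative:

```python
import hashlib

def file_iocs(data: bytes) -> dict:
    """Compute the hash-based IOCs typically published for a sample."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

# A fake sample: the first bytes of a PE file header, for illustration only.
sample = b"MZ\x90\x00\x03\x00\x00\x00"
iocs = file_iocs(sample)
```

In practice the same hashes would be computed over the whole file, then matched against published IOC lists or submitted to services such as VirusTotal.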

As mentioned before, IOCs are one result of cyber threat intelligence activities. They are useful at operational and tactical levels to identify malicious items and help associate them with known threats. IOCs are provided to customers in intelligence reports and in technical data feeds (which can be consumed by automatic systems), as well as further integrated in Kaspersky products or services (such as sandbox analysis products, Kaspersky Security Network, endpoint and network detection products, and the Open Threat Intelligence Portal to some extent).

How does GReAT identify IOCs?

As part of the threat-hunting process, which is one facet of cyber threat intelligence activity (see the picture below), GReAT aims at gathering as many IOCs as possible about a given threat or threat actor, so that customers can in turn reliably identify or hunt for them, while benefiting from maximum flexibility, depending on their capabilities and tools. But how are those IOCs identified and gathered? The rule of thumb on threat hunting and malicious artifacts collection is that there is no such thing as a fixed magic recipe: several sources, techniques and practices will be combined, on a case-by-case basis. The more comprehensively those distinct sources are researched, the more thoroughly analysis practices are executed, and the more IOCs will be gathered. Those research activities and practices can only be efficiently orchestrated by knowledgeable analysts, who are backed by extensive data sources and solid technologies.

General overview of GReAT cyber threat intelligence activities

GReAT analysts will leverage the following sources of collection and analysis practices to gather intelligence, including IOCs – while these are the most common, actual practices vary and are only limited by creativity or practical availability:

  • In-house technical data: this includes various detection statistics (often designated as “telemetry”[2]), proprietary files, logs and data collections that have been built across time, as well as the custom systems and tools that make it possible to query them. This is the most precious source of intelligence as it provides unique and reliable data from trusted systems and technologies. By searching for any previously known IOC (e.g., a file hash or an IP address) in such proprietary data, analysts can find associated malicious tactics, behaviors, files and details of communications that directly lead to additional IOCs, or will produce them through analysis. Kaspersky’s private Threat Intelligence Portal (TIP), which is available to customers as a service, offers limited access to such in-house technical data.
  • Open and commercially available data: this includes various services, file data collections that are publicly available, or sold by third parties, such as online file scanning services (e.g., VirusTotal), network system search engines (e.g., Onyphe), passive DNS databases, public sandbox reports, etc. Analysts can search those sources the same way as proprietary data. While some of these data are conveniently available to anyone, information about the collection or execution context, the primary source of information, or data processing details are often not provided. As a result, such sources cannot be trusted by GReAT analysts as much as in-house technical data.
  • Cooperation: by sharing intelligence and developing relationships with trusted partners such as peers, digital service providers, computer security incident response teams (CSIRTs), non-profit organizations, governmental cybersecurity agencies, or even some Kaspersky customers, GReAT analysts can sometimes acquire additional knowledge in return. This vital part of cyber threat intelligence activity enables a global response to cyberthreats, broader knowledge of threat actors, and additional research that would not otherwise be possible.
  • Analysis: this includes automated and human-driven activities that consist of thoroughly dissecting gathered malicious artifacts, such as malicious implants, or memory snapshots, in order to extract additional intelligence from them. Such activities include reverse-engineering or live malicious implant execution in controlled environments. By doing so, analysts will often be able to discover concealed (obfuscated, encrypted) indicators, such as command and control infrastructure for malware, unique development practices from malware authors, or additional malicious tools that are delivered by a threat actor as an additional stage of an attack.
  • Active research: this includes specific threat research operations, tools, or systems (sometimes called “robots”) that are built by analysts with the specific intent to continuously look for live malicious activities, using generic and heuristic approaches. Those operations, tools or systems include, but are not limited to, honeypots[3], sinkholing[4], internet scanning and some specific behavioral detection methods from endpoint and network detection products.

How does GReAT consume IOCs?

Sure, GReAT provides IOCs to customers, or even to the public, as part of its cyber threat intelligence activities. However, before providing them, the IOCs are also a cornerstone of the intelligence collection practices described here. IOCs enable GReAT analysts to pivot from an analyzed malicious implant to additional file detection, from a search in one intelligence source to another. IOCs are thus the common technical interface to all research processes. As an example, one of our active research heuristics might identify previously unknown malware being tentatively executed on a customer system. Looking for the file hash in our telemetry might help identify additional execution attempts, as well as malicious tools that were leveraged by a threat actor, just before the first execution attempt. Reverse engineering of the identified malicious tools might in turn produce additional network IOCs, such as malicious communication patterns or command and control server IP addresses. Looking for the latter in our telemetry will help identify additional files, which in turn will enable further file hash research. IOCs enable a continuous research cycle to spin, but it only begins at GReAT; by providing IOCs as part of an intelligence service, GReAT expects the cycle to keep spinning at the customer’s side.

Apart from their direct use as research tokens, IOCs are also carefully categorized, attached to malicious campaigns or threat actors, and utilized by GReAT when leveraging internal tools. This IOC management process allows for two major and closely related follow-up activities:

  • Threat tracking: by associating known IOCs (as well as advanced detection signatures and techniques) to malicious campaigns or threat actors, GReAT analysts can automatically monitor and sort detected malicious activities. This simplifies any subsequent threat hunting or investigation, by providing a research baseline of existing associated IOCs for most new detections.
  • Threat attribution: by comparing newly found or extracted IOCs to utilized IOCs, GReAT analysts can quickly establish links from unknown activities to defined malicious campaigns or threat actors. While a common IOC between a known campaign and newly detected activity is never enough to draw any attribution conclusion, it’s always a helpful lead to follow.

IOC usage scenarios in SOCs

From our perspective, every security operation center (SOC) uses known indicators of compromise in its operations, one way or another. But before we talk about IOC usage scenarios, let’s go with the simple idea that an IOC is a sign of a known attack.

At Kaspersky, we advise and design SOC operations for different industries and in different formats, but IOC usage and management are always part of the SOC framework we suggest our customers implement. Let’s highlight some usage scenarios of IOCs in an SOC.

Prevention

In general, today’s SOC best practices tell us to defend against attacks by blocking potential threats at the earliest possible stage. If we know the exact indicators of an attack, then we can block it, and it is better to do so on multiple levels (both network and endpoint) of our protected environment; in other words, defense in depth. Blocking any attempt to connect to a malicious IP, resolve a C2 FQDN, or run malware with a known hash should prevent an attack, or at least not give the attacker an easy target. It also saves time for SOC analysts and reduces the noise of SOC false positive alerts. Unfortunately, an excellent level of IOC confidence is vital for the prevention usage scenario; otherwise, a huge number of false positive blocking rules will affect the business functions of the protected environment.
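Conceptually, prevention-style blocking reduces to a set-membership check performed before an action is allowed. A toy sketch; the IP addresses are from documentation ranges, and the hash is the well-known EICAR test file MD5:

```python
# High-confidence known-bad indicators, suitable for blocking.
blocked_ips = {"203.0.113.5", "198.51.100.77"}
blocked_hashes = {"44d88612fea8a8f36de82e1278abb02f"}  # EICAR test file MD5

def allow_connection(dst_ip: str) -> bool:
    """Network-level control: deny connections to known C2 IPs."""
    return dst_ip not in blocked_ips

def allow_execution(file_md5: str) -> bool:
    """Endpoint-level control: deny execution of known-bad files."""
    return file_md5.lower() not in blocked_hashes
```

Real controls live in firewalls, DNS filters and endpoint agents rather than application code, but the defense-in-depth idea is the same: the identical indicator set is enforced at more than one layer.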

Detection

The most popular scenario for IOC usage in an SOC is the automatic matching of our infrastructure telemetry against a huge collection of all possible IOCs. The most common place to do it is the SIEM. This scenario is popular for a number of reasons:

  • SIEM records multiple types of logs, meaning we can match the same IOC against multiple log types, e.g., a domain name against both DNS requests received from corporate DNS servers and requested URLs obtained from the proxy.
  • Matching in SIEM helps us to provide extended context for analyzed events: in the alert’s additional information an analyst can see why the IOC-based alert was triggered, the type of threat, confidence level, etc.
  • Having all the data on one platform can reduce the workload for the SOC team to maintain infrastructure and prevent the need to organize additional event routing, as described in the following case.
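The matching itself is conceptually a join between event fields and the IOC collection, with the IOC’s context attached to the resulting alert. A simplified sketch; the event and context fields are invented, and real SIEM/TIP schemas differ:

```python
# IOC collection with the context a TIP or feed would provide.
iocs = {
    "evil.example.com": {"type": "C2 domain", "confidence": "high"},
    "203.0.113.5": {"type": "C2 IP", "confidence": "medium"},
}

# Normalized log events from different sources (DNS, proxy, ...).
events = [
    {"source": "dns", "host": "ws-042", "query": "evil.example.com"},
    {"source": "proxy", "host": "ws-017", "url": "http://intranet.local/"},
]

def match(events, iocs):
    """Emit an enriched alert for every event field that matches an IOC."""
    alerts = []
    for ev in events:
        for value in ev.values():
            ctx = iocs.get(value)
            if ctx:
                # Attach IOC context so the analyst sees why the alert fired.
                alerts.append({"event": ev, "ioc": value, **ctx})
    return alerts

alerts = match(events, iocs)
```

A production SIEM does the same join with indexed lookups and normalization rules per log type, which is what makes matching millions of events per second feasible.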

Another popular solution for matching is the Threat Intelligence Platform (TIP). It usually supports richer matching scenarios (such as the use of wildcards) and offloads some of the performance impact generated by correlation in the SIEM. Another big advantage of the TIP is that this type of solution was designed from the start to work with IOCs: to support their data schema and manage them, with more flexibility and features to set up IOC-based detection logic.

When it comes to detection, we usually have a lower requirement for IOC confidence because, although unwanted, false positives during detection are a common occurrence.

Investigation

Another routine where we work with IOCs is in the investigation phase of any incident. In this case, we are usually limited to a specific subset of IOCs – those that were revealed within the particular incident. These IOCs are needed to identify additional affected targets in our environment to define the real scope of the incident. Basically, the SOC team has a loop of IOC re-usage:

  1. Identify incident-related IOCs
  2. Search for those IOCs on additional hosts
  3. Identify additional IOCs on the revealed targets; repeat step 2.
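This loop is effectively a breadth-first expansion over hosts and indicators. A hedged sketch, with telemetry represented as a plain dictionary rather than a real EDR query:

```python
from collections import deque

# Hypothetical telemetry: which hosts exhibit which indicators.
telemetry = {
    "host-a": {"bad.dll", "203.0.113.5"},
    "host-b": {"203.0.113.5", "persist.exe"},
    "host-c": {"persist.exe"},
    "host-d": {"notepad.exe"},
}

def scope_incident(initial_iocs):
    """Expand the incident scope: each newly found host may reveal new IOCs."""
    seen_iocs, affected = set(), set()
    queue = deque(initial_iocs)
    while queue:
        ioc = queue.popleft()
        if ioc in seen_iocs:
            continue
        seen_iocs.add(ioc)                      # step 1/3: record the IOC
        for host, indicators in telemetry.items():
            if ioc in indicators and host not in affected:
                affected.add(host)              # step 2: new affected host
                queue.extend(indicators - seen_iocs)  # repeat with its IOCs
    return affected

hosts = scope_incident({"bad.dll"})
```

Starting from a single file name, the expansion pulls in the shared C2 address and the persistence binary, revealing three affected hosts while leaving unrelated machines out of scope.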

Containment, Eradication and Recovery

The next steps of incident handling also apply IOCs. In these stages the SOC team focuses on IOCs obtained from the incident, and uses them in the following way:

  • Containment – limit attacker abilities to act by blocking identified IOCs
  • Eradication and Recovery – verify the absence of IOC-related behavior to confirm that the eradication and recovery phases were completed successfully and the attacker’s presence in the environment has been fully eliminated.

Threat Hunting

By threat hunting we mean activity aimed at revealing threats, namely those that have bypassed SOC prevention and detection capabilities. In other words, the Assume Breach paradigm: despite all our prevention and detection capabilities, it is possible that we have missed a successful attack, so we have to analyze our infrastructure as though it were compromised and find traces of the breach.

This brings us to IOC-based threat hunting. The SOC team analyzes information related to an attack and evaluates whether the threat is applicable to the protected environment. If it is, the hunter tries to find the IOC in past events (such as DNS queries, IP connection attempts, and process executions), or in the infrastructure itself: the presence of a specific file in the system, a specific registry key value, etc. The typical solutions supporting the SOC team in this activity are SIEM, EDR and TIP. For this type of scenario, the most suitable IOCs are those extracted from APT reports or received from your TI peers.

In IOC usage scenarios we have touched on different types of IOCs multiple times. Let’s summarize this information by breaking down IOC types according to their origin:

  • Provider feeds – a subscription to IOCs provided by security vendors in the form of a feed. Usually contains a huge number of IOCs observed in different attacks. The level of confidence varies from vendor to vendor, and the SOC team should consider vendor specialization and geo profile to utilize relevant IOCs. Using feeds for prevention and threat hunting is questionable due to the potentially high level of false positives.
  • Incident IOCs – IOC generated by the SOC team during analysis of security incidents. Usually, the most trusted type of IOC.
  • Threat intelligence IOCs – a huge family of IOCs generated by the TI team. Their quality depends directly on the level of expertise of your TI analysts. Using TI IOCs for prevention depends heavily on the TI data quality and can trigger too many false positives, thereby impacting business operations.
  • Peer IOCs – IOCs provided by peer organizations, government entities, and professional communities. Can usually be considered as a subset of TI IOCs or incident IOCs, depending on the nature of your peers.

If we summarize the reviewed scenarios, the IOC origins and their applicability, then map this information to the NIST incident handling stages[5], we can create the following table.

IOC scenario usage in SOC

All these scenarios have different requirements for IOC quality. In our SOC we usually have few issues with incident IOCs, but for the rest we must track quality and actively manage it. For better quality management, the metrics should be aggregated per IOC origin, so that we evaluate the IOC source as a whole rather than individual IOCs. Some basic metrics, identified by our SOC consultancy team, that can be implemented to measure IOC quality are:

  • Conversion to incident – the proportion of IOCs that triggered a real incident. Applies to detection scenarios
  • FP rate – the false-positive rate generated by an IOC. Works for detection and prevention
  • Uniqueness – applies to an IOC source and tells the SOC team how unique the provided set of IOCs is compared to other providers
  • Aging – whether an IOC source provides up-to-date IOCs
  • Amount – the number of IOCs provided by a source
  • Context information – the usability and completeness of the context provided with IOCs
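Several of these metrics can be aggregated per source from a log of IOC usage results. The sketch below assumes a hypothetical record format (`source`, `matched`, `true_positive`, `age_days`); real SOC tooling would pull these fields from a SIEM or TIP, and the exact schema will differ.

```python
from collections import defaultdict

def source_metrics(ioc_records):
    """Aggregate per-source quality metrics from IOC usage records.

    Each record is a dict with keys:
      source        -- origin of the IOC (feed, incident, TI, peer)
      matched       -- whether the IOC ever matched an event
      true_positive -- whether a match led to a real incident
      age_days      -- age of the indicator when received
    """
    stats = defaultdict(lambda: {"total": 0, "matches": 0, "tp": 0, "fp": 0, "age_sum": 0})
    for r in ioc_records:
        s = stats[r["source"]]
        s["total"] += 1
        s["age_sum"] += r["age_days"]
        if r["matched"]:
            s["matches"] += 1
            s["tp" if r["true_positive"] else "fp"] += 1
    report = {}
    for src, s in stats.items():
        report[src] = {
            "amount": s["total"],                              # Amount
            "conversion_to_incident": s["tp"] / s["total"],    # Conversion to incident
            "fp_rate": s["fp"] / s["matches"] if s["matches"] else 0.0,  # FP rate
            "avg_age_days": s["age_sum"] / s["total"],         # Aging
        }
    return report
```

Uniqueness and context quality require comparing sources against each other and manual review respectively, so they are not derivable from this per-record log alone.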

To collect these metrics, the SOC team should carefully track every IOC's origin, its usage scenario and the result of its use.

How does the GERT team use IOCs in its work?

In GERT we specialize in incident investigation, and the main sources of information in our work are digital artifacts. By analyzing them, experts discover data that identifies activity directly related to the incident under investigation. Thus, indicators of compromise allow experts to establish a link between the investigated object and the incident.

Throughout the entire incident response cycle, we use different IOCs at different stages. As a rule, when an incident occurs and the victim contacts us, we receive indicators of compromise that serve to confirm the incident, attribute it to an attacker and decide on the initial response steps. For example, in one of the most common incident types, a ransomware attack, the initial artifacts are the encrypted files. The IOCs in this case are the file names or extensions, as well as the file hashes. Such initial indicators make it possible to determine the ransomware family, point to a group of attackers and their characteristic tactics, techniques and procedures, and define recommendations for the initial response.

The next set of IOCs comes from the data collected during triage. As a rule, these indicators show the attackers' progress through the network and allow additional systems involved in the incident to be identified. Mostly these are the names of compromised accounts, hashes of malicious files, IP addresses and URLs. One difficulty is worth noting here: attackers often use legitimate software that is already installed on the targeted systems (LOLBins), and in this case it is harder to distinguish malicious launches of such software from legitimate ones. For example, the mere fact that the PowerShell interpreter is running cannot be assessed without context and payload. In such cases it is necessary to use other indicators, such as timestamps, the user name and event correlation.
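The context-based triage of LOLBin launches can be illustrated with a toy heuristic. Everything here is an assumption for the sake of the example: the field names, the list of "suspicious" parent processes and the off-hours rule are illustrative choices, not a production detection rule.

```python
# Hypothetical list of parent processes from which a PowerShell launch is unusual.
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "mshta.exe"}

def is_suspicious_powershell(event):
    """Flag a powershell.exe launch only when the surrounding context is anomalous."""
    cmd = event.get("cmdline", "").lower()
    if "-enc" in cmd or "-encodedcommand" in cmd:
        return True  # encoded payload is rarely used legitimately
    if event.get("parent", "").lower() in SUSPICIOUS_PARENTS:
        return True  # e.g. an Office document spawning a shell
    if event.get("hour", 12) < 6:
        return True  # off-hours execution (hypothetical organizational policy)
    return False
```

The point is that the process name alone carries no verdict; only the combination of command line, parent process, timing and user context turns a legitimate binary into an indicator.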

Ultimately, all identified IOCs are used to identify compromised network resources and to block the attackers' actions. In addition, indicators of attack are built on the basis of the indicators of compromise and used for proactive detection. In the final stage of the response, the discovered indicators are used to verify that no traces of the attackers' presence remain in the network.

Each completed investigation results in a report that collects all the indicators of compromise and the indicators of attack derived from them. Monitoring teams should add these indicators to their monitoring systems and use them to proactively detect threats.

 
[1] A “hash” is a relatively short, fixed-length, sufficiently unique and non-reversible representation of arbitrary data, which is the result of a “hashing algorithm” (a mathematical function, such as MD5 or SHA-256). Contents of two identical files that are processed by the same algorithm result in the same hash. However, processing two files which only differ slightly will result in two completely different hashes. As a hash is a short representation, it is more convenient to designate (or look for) a given file using its hash than using its whole content.
[2] Telemetry designates the detection statistics and malicious files that are sent from detection products to the Kaspersky Security Network when customers agree to participate.
[3] Vulnerable, weak, and/or attractive systems that are deliberately exposed to the Internet and continuously monitored in a controlled fashion, with the expectation that they will be attacked by threat actors. When that happens, monitoring enables the detection of new threats, exploitation methods or attacker tools.
[4] Hijacking of known malicious command-and-control servers, in cooperation with Internet hosting providers or by leveraging threat actors' errors and abandoned infrastructure, in order to neutralize their malicious behavior, monitor malicious communications, and identify and notify targets.
[5] NIST. Computer Security Incident Handling Guide. Special Publication 800-61 Revision 2

]]>