Dmitry Momotov – Securelist
https://securelist.com

Privacy predictions 2023
https://securelist.com/privacy-predictions-2023/108068/
Mon, 28 Nov 2022

Our last edition of privacy predictions focused on a few important trends where business and government interests intersect, with regulators becoming more active in a wide array of privacy issues. Indeed, we saw regulatory activity around the globe. In the US, for example, the FTC has requested public comments on the “prevalence of commercial surveillance and data security practices that harm consumers” to inform future legislation. In the EU, lawmakers are working on the Data Act, meant to further protect sensitive data, as well as a comprehensive AI legal strategy that might put a curb on a range of invasive machine-learning technologies and require greater accountability and transparency.

On the other hand, we saw the repeal of Roe v. Wade and the subsequent controversy surrounding female reproductive health data in the US, as well as investigations into companies selling fine-grained commercial data and facial recognition services to law enforcement. This showed how consumer data collection can directly impact the relationship between citizens and governments.

We think the geopolitical and economic events of 2022, as well as new technological trends, will be the major factors influencing the privacy landscape in 2023. Here we take a look at the most important developments that, in our opinion, will affect online privacy in 2023.

  1. Internet balkanization will lead to a more diverse (and localized) behavioral tracking market and more checks on cross-border data transfers.

    As we know, most web pages are crawling with invisible trackers that collect behavioral data, which is then aggregated and used primarily for targeted advertising. While there are many companies in the behavioral ad business, Meta, Amazon, and Google are the unquestionable leaders. However, these are all US companies, and in many regions, authorities are becoming increasingly wary of sharing data with foreign companies. This may be due to incompatible legal frameworks: for example, in July 2022, European authorities issued multiple rulings stating that the use of Google Analytics may violate GDPR.

    Moreover, the use of commercial data by law enforcement (and potentially intelligence bodies) makes governments suspicious of foreign data-driven enterprises. Some countries, such as Turkey, already have strict data localization legislation.

    These factors will probably lead to a more diverse and fragmented data market, with the emergence and re-emergence of local web tracking and mobile app tracking companies, especially on government and educational websites. While some countries, such as France, Russia, or South Korea, already have a developed web tracking ecosystem with strong players, more countries may follow suit and show a preference for local players.

    This might have various implications for privacy. While big tech companies may spend more on security than smaller players, even they have their share of data breaches. A smaller entity might be less interesting for hackers, but also faces less scrutiny from regulatory bodies.

  2. Smartphones will replace more paper documents.

    Using smartphones or other smart devices to pay via NFC (e.g., Apple Pay, Samsung Pay) or QR code (e.g., Swish in Sweden, SBPay in Russia or WeChat in China) is rapidly growing and will probably render the classic plastic debit and credit card obsolete, especially where cashless payments already dominate. COVID-19, however, showed that smartphones can also be used as proof of vaccination or current COVID-negative health status, as many countries used dedicated apps or QR codes, for example, to provide access to public facilities for vaccinated citizens.

    Why stop there? Smartphones can also be used as IDs. A digitized version of an ID card, passport or driver's license can be used instead of old-fashioned plastic and paper. In fact, several US states already use or plan to use digital IDs and driver's licenses stored in Apple Wallet.

    Having your ID stored on a phone brings both convenience and risks. On the one hand, a properly implemented system would, for example, allow you to verify at a store that you are of legal age to buy alcohol without showing the cashier the whole document, with details such as your name or street address. Digitized IDs can also significantly speed up KYC procedures, for example, when applying for a loan online from a smartphone.

    On the other hand, using a smartphone to store an increasing amount of personal data creates a single point of failure, raising serious security concerns. This places serious demands on security of mobile devices and privacy-preserving ways of storing the data.
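    As a toy sketch of how such selective disclosure could work, the code below has a hypothetical issuer sign a minimal "over 18" claim that a verifier can check without ever seeing the holder's name or address. All names and keys here are illustrative; real mobile-ID schemes (such as ISO/IEC 18013-5 mobile driver's licenses) use public-key signatures and richer selective-disclosure mechanisms, not a shared HMAC key.

```python
import hashlib
import hmac
import json
import time

# Hypothetical demo key; a real issuer would use asymmetric keys so that
# verifiers cannot forge claims.
ISSUER_KEY = b"demo-issuer-secret"

def issue_age_claim(over_18: bool, valid_until: int) -> dict:
    """Issuer side: sign a claim containing only the attribute being disclosed."""
    claim = {"over_18": over_18, "valid_until": valid_until}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": tag}

def verify_age_claim(token: dict, now: int) -> bool:
    """Verifier side: check the signature and expiry; learn nothing else."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return token["claim"]["over_18"] and now < token["claim"]["valid_until"]

token = issue_age_claim(over_18=True, valid_until=2_000_000_000)
print(verify_age_claim(token, now=int(time.time())))  # True while the claim is valid
```

    The key point of the design is data minimization: the verifier receives a yes/no attribute and a signature, never the underlying document.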

  3. Companies will fight the human factor in cybersecurity, curbing insider threats and social engineering to protect user data.

    As companies deploy increasingly comprehensive cybersecurity measures, moving from endpoint protection to XDR (eXtended Detection & Response) and even proactive threat hunting, people remain the weakest link. According to estimates, 91% of all cyberattacks begin with a phishing email, and phishing techniques are involved in 32% of all successful data breaches. A lot of damage can also be done by a disgruntled employee or a person who joined the company for nefarious purposes. The FBI has even warned recently that deepfakes can be used by those seeking remote jobs to deceive employers, probably with the goal of gaining access to internal IT systems.

    We expect fewer data leaks caused by misconfigured S3 buckets or Elasticsearch instances, and more breaches caused by exploiting the human factor. To mitigate these threats, companies might invest in data leak prevention solutions as well as more thorough user education to raise cybersecurity awareness.

  4. We will hear more concerns about metaverse privacy – but with smartphones and IoT, aren’t we already in a metaverse?

    While skeptics and enthusiasts keep fighting over whether a metaverse is a gamechanger or just a fad, tech companies and content creators continue to polish the technology. Meta has recently announced Meta Quest Pro, and an Apple headset is rumored to appear in 2023. Some, however, raise concerns over metaverse privacy. While smartphones with their multiple sensors, from accelerometers to cameras, can feel quite intrusive, a VR headset is in a league of its own. For example, one of the latest VR headsets features four front-facing cameras, three cameras on each controller and several cameras to track eyes and facial expressions. This means that in a nightmare scenario such devices would not only have very deep insight into your activity in the metaverse services provided by the platform; they could also be very effective at reading your emotional reactions to ads and at making inferences about you from the interior of your home, from what colors you like to how many pets and children you have.

    While this sounds scary (which is why Meta addresses these concerns in a separate blog post), the fears might actually be exaggerated. The amount of data we generate just by using cashless payments and carrying a mobile phone around during the day is enough to make the most sensitive inferences. Smart home devices, smart cities with ubiquitous video surveillance, cars equipped with multiple cameras and further adoption of IoT, as well as continuous digitalization of services will make personal privacy, at least in cities, a thing of the past. So, while a metaverse promises to bring offline experiences to the online world, the online world is already taking hold of the physical realm.

  5. Desperate to stop data leaks, people will insure against them.

    Privacy experts are eagerly giving advice on how to secure your accounts and minimize your digital footprint. However, living a convenient modern life comes with a cost to privacy, whether you like it or not: for example, ordering food deliveries or using a ride-hailing service will generate, at the very least, sensitive geodata. And as the data leaves your device, you have little control over it, and it is up to the company to store it securely. However, we see that due to misconfigurations, hacker attacks and malicious insiders, data might leak and appear for sale on the dark web or even on the open web for everyone to see.

    Companies take measures to protect the data, as breaches cause reputation damage, regulatory scrutiny and, depending on local legislation, heavy fines. In countries like the US, people use class action lawsuits to receive compensation for damages. However, privacy awareness is growing, and people might start to take preventive measures. One way to do that might be to insure yourself against data breaches. While there are already services that recoup losses in case of identity theft, we could expect a larger range of insurance offers in the future.

We have looked at several factors that, in our opinion, will most prominently affect the way data flows, and possibly leaks, between countries, businesses and individuals. As the digital world continues to permeate the physical realm, we expect even more interesting developments in the future.

Privacy predictions 2022
https://securelist.com/privacy-predictions-2022/104912/
Tue, 23 Nov 2021

We no longer rely on the Internet just for entertainment or chatting with friends. Global connectivity underpins the most basic functions of our society, such as logistics, government services and banking. Consumers connect to businesses via instant messengers and order food delivery instead of going to brick-and-mortar shops, scientific conferences take place on virtual conferencing platforms, and remote work is the new normal in an increasing number of industries.

All these processes have consequences for privacy. Businesses want better visibility into the online activity of their clients to improve their services, as well as more rigorous know-your-customer procedures to prevent fraud. Governments in many countries push for easier identification of Internet users to fight cybercrime, as well as “traditional” crime coordinated online. Citizens, for their part, are increasingly concerned with surveillance capitalism, a lack of anonymity and dependence on online services.

Reflecting on the previous installment of the privacy predictions, we see that most of them indeed have been big trends this year. Most of all, privacy-preserving technologies were among the most discussed tech topics, even if opinions on some of the implementations, e.g. NeuralHash or Federated Learning of Cohorts, were mixed. Nevertheless, things like on-device sound processing for Siri and Private Compute Core in Android are big steps towards user privacy. We have also seen many new private services, with many privacy-focused companies taking their first steps towards monetization, as well as a bigger push for privacy – both in technology and in marketing – on both iOS and Android. Facebook (now Meta) moved towards more privacy for its users as well, providing end-to-end encrypted backups in WhatsApp and removing the facial recognition system in its entirety from Facebook.

While we hope 2022 will be the last pandemic year, we do not think the privacy trends will reverse. What will be the consequences of these processes? Here, we present some of our ideas about what key forces will shape the privacy landscape in 2022.

  1. BigTech will give people more tools to control their privacy – to an extent.

    As companies have to comply with stricter and more diverse privacy regulations worldwide, they are giving users more tools to control their privacy when using their services. With more knobs and buttons, experienced users might be able to configure their privacy to the extent that suits their needs. As for less computer-savvy folk, do not expect privacy by default: even when legally obliged to provide privacy by default, enterprises whose bottom line depends on data collection will continue to find loopholes to trick people into choosing less private settings.

  2. Governments are wary of the growing big tech power and data hoarding, which will lead to conflicts – and compromises.

    Governments are building their own digital infrastructures to allow simpler and wider access to government services and, hopefully, more transparency and accountability, but also to gain deeper insights into the population and more control over it. It is not surprising, then, that they will show more interest in the data about their citizens that flows through big commercial ecosystems. This will lead to more regulation, such as privacy laws, data localization laws, and rules on what data is accessible to law enforcement, and when. The Apple CSAM-scanning privacy conundrum shows exactly how difficult it can be to balance encryption and user privacy on the one side against pinpointing criminal behavior on the other.

  3. Machine learning is great, but we are going to hear more about machine unlearning.

    Modern machine learning often entails training huge neural networks with astounding numbers of parameters (while this is not entirely correct, one can think of these parameters as neurons in the brain), sometimes on the order of billions. Thanks to this, neural networks not only learn simple relationships, but also memorize entire chunks of data, which can lead to leaks of private data and copyrighted materials, or recitations of social biases. Moreover, this leads to an interesting legal question: if a machine learning model was trained using my data, can I, for example, under GDPR, demand to remove all influence that my data had on the model? If the answer is yes, what does it mean for data-driven industries? A simple answer is that a company would have to retrain the model from scratch, which can sometimes be costly. This is why we expect more interesting developments, both in technologies that prevent memorization (such as differentially private training) and those that enable researchers to remove data from already trained systems (machine unlearning).
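    One practical direction, sharded training in the spirit of the SISA approach ("Sharded, Isolated, Sliced, Aggregated"), can be sketched in a few lines: if each sub-model sees only one shard of the data, deleting a record means retraining only that shard rather than the whole model. The "model" below is just a per-shard mean, purely for illustration.

```python
# Toy sketch of SISA-style unlearning: one sub-model per data shard,
# predictions aggregated across sub-models. Removing a record triggers
# retraining of a single shard, not the full pipeline.

def train_shard(shard: list) -> float:
    """Stand-in for real training: a shard 'model' is just the mean of its data."""
    return sum(shard) / len(shard)

def train(shards: list) -> list:
    return [train_shard(s) for s in shards]

def predict(models: list) -> float:
    """Aggregate the sub-models (here: average them)."""
    return sum(models) / len(models)

def unlearn(shards: list, models: list, shard_idx: int, value: float) -> list:
    """Remove one record and retrain only the shard that held it."""
    shards[shard_idx].remove(value)
    models[shard_idx] = train_shard(shards[shard_idx])
    return models

shards = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
models = train(shards)                                  # [2.0, 5.0]
models = unlearn(shards, models, shard_idx=0, value=3.0)
print(predict(models))                                  # 3.25: only shard 0 retrained
```

    The trade-off is a (usually small) accuracy cost from splitting the data, in exchange for deletion requests that touch a fraction of the training work.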

  4. People and regulators will demand more algorithmic transparency.

    Complicated algorithms, such as machine learning, are increasingly used to make decisions about us in various situations, from credit scoring to face recognition to advertising. While some might enjoy the personalization, for others, it may lead to frustrating experiences and discrimination. Imagine an online store that divides its users into more and less valuable based on some obscure LTV (lifetime value) prediction algorithm and provides its more valued customers with live customer support chats while leaving less lucky shoppers to a far-from-perfect chatbot. If you are deemed by a computer to be an inferior customer, would you want to know why? Or, if you are denied a credit card? A mortgage? A kidney transplant? As more industries are touched by algorithms, we expect more discussion and regulations about explaining, contesting and amending decisions made by automated systems, as well as more research into machine learning explainability techniques.
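    One simple, model-agnostic technique from the explainability toolbox is permutation importance: shuffle one feature's values and measure how much the model's accuracy drops; features whose shuffling hurts most mattered most to the decision. Below is a self-contained sketch with an entirely hypothetical credit model; production systems use richer tools, but the question, "which inputs drove this decision?", is the same.

```python
import random

def model(row):
    # Hypothetical credit model: approves when income is high; ignores zip code.
    income, zip_code = row
    return 1 if income > 50 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Accuracy drop when one feature's column is randomly shuffled."""
    rng = random.Random(seed)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, column):
        r[feature_idx] = v
    return accuracy(rows, labels) - accuracy(permuted, labels)

rows = [(30, 111), (80, 222), (20, 333), (90, 444)]
labels = [0, 1, 0, 1]
print(permutation_importance(rows, labels, 0))  # income: shuffling can hurt accuracy
print(permutation_importance(rows, labels, 1))  # zip code: exactly 0.0, it is ignored
```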

  5. Thanks to work from home, many people will become more privacy-aware – with the help of their employers.

    If you have been working from home due to the pandemic, odds are you have learned lots of new IT slang: virtual desktop infrastructure, one-time password, two-factor security keys and so on – even if you work in banking or online retail. Even when the pandemic is over, the work-from-home culture might persist. With people using the same devices both for work and personal needs, corporate security services would need more security-minded users to protect this bigger perimeter from attacks and leaks. This means more security and privacy trainings – and more people translating these work skills, such as using 2FA, into their personal lives.
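    As an illustration of one of those work skills, the algorithm behind the one-time codes in most authenticator apps is short enough to sketch in full. This follows TOTP as specified in RFC 6238: the code is an HMAC of the current 30-second time step, truncated to a few digits. The secret below is the RFC's own test value; real apps exchange secrets base32-encoded via QR codes.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 of a counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP with the counter derived from Unix time."""
    t = time.time() if at_time is None else at_time
    return hotp(secret, int(t // step), digits)

# RFC 6238 Appendix B test vector: time 59 with this secret yields "94287082".
print(totp(b"12345678901234567890", at_time=59, digits=8))  # 94287082
```

    Note how little secret material is involved: the security of 2FA rests entirely on keeping that shared secret off attackers' hands, which is why phishing pages increasingly try to relay the codes in real time.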

To conclude, privacy is no longer a topic just for geeks and cypherpunks; it has become a mainstream topic in the public debate, touching on personal and human rights, safety and security, and business ethics. We hope that this debate, involving society, business and governments, will lead to more transparency, accountability, and fair and balanced use of personal data, and that legal, social and technological solutions to the most pressing privacy issues will be found.

Privacy predictions for 2021
https://securelist.com/privacy-predictions-for-2021/100311/
Thu, 28 Jan 2021

2020 saw an unprecedented increase in the importance and value of digital services and infrastructure. From the rise of remote working and the global shift in consumer habits to huge profits booked by internet entertainers, we are witnessing how overwhelmingly important the connected infrastructure has become for the daily functioning of society.

What does all this mean for privacy? With privacy more often than not being traded for convenience, we believe that for many 2020 has fundamentally changed how much privacy people are willing to sacrifice in exchange for security (especially from the COVID-19 threat) and access to digital services. How are governments and enterprises going to react to this in 2021? Here are some of our thoughts on what the coming year may look like from the privacy perspective, and which diverse and sometimes contrary forces are going to shape it.

  1. Smart health device vendors are going to collect increasingly diverse data – and use it in increasingly diverse ways.
    Heart rate monitors and step counters are already a standard in even the cheapest smart fitness band models. More wearables, however, now come with an oximeter and even an ECG, allowing you to detect possible heart rate issues before they can even cause you any trouble. We think more sensors are on the way, with body temperature among the most likely candidates. And with your body temperature being an actual public health concern nowadays, how long before health officials want to tap into this pool of data? Remember, heart rate and activity tracker data – as well as consumer gene sequencing – has already been used as evidence in a court of law. Add in more smart health devices, such as smart body scales, glucose level monitors, blood pressure monitors and even toothbrushes and you have huge amounts of data that is invaluable for marketers and insurers.
  2. Consumer privacy is going to be a value proposition, and in most cases cost money.
    Public awareness of the perils of unfettered data collection is growing, and the free market is taking notice. Apple has publicly clashed with Facebook claiming it has to protect its users’ privacy, while the latter is wrestling with regulators to implement end-to-end encryption in its messaging apps. People are more and more willing to choose services that have at least a promise of privacy, and even pay for them. Security vendors are promoting privacy awareness, backing it with privacy-oriented products; incumbent privacy-oriented services like DuckDuckGo show they can have a sustainable business model while leaving you in control of your data; and startups like You.com claim you can have a Google-like experience without the Google-like tracking.
  3. Governments are going to be increasingly jealous of big-tech data hoarding – and increasingly active in regulation.
    The data that the big tech companies have on people is a gold mine for governments, democratic and oppressive alike. It can be used in a variety of ways, from using geodata to build more efficient transportation to sifting through cloud photos to fight child abuse and peeking into private conversations to silence dissent. However, private companies are not really keen on sharing it. We have already seen governments around the world oppose companies' plans to end-to-end encrypt messaging and cloud backups, pass legislation forcing developers to plant backdoors in their software, voice concerns about DNS-over-HTTPS, and enact ever more laws regulating cryptocurrency. But big tech is called big for a reason, and it will be interesting to see how this confrontation develops.
  4. Data companies are going to find ever more creative, and sometimes more intrusive, sources of data to fuel the behavioral analytics machine.
    Some sources of behavioral analytics data are so common we can call them conventional, such as using your recent purchases to recommend new goods or using your income and spending data to calculate credit default risk. But what about using data from your web camera to track your engagement in work meetings and decide on your yearly bonus? Using online tests that you take on social media to determine what kind of ad will make you buy a coffee brewer? The mood of your music playlist to choose the goods to market to you? How often you charge your phone to determine your credit score? We have already seen these scenarios in the wild, but we expect marketers to get even more creative with what some data experts call AI snake oil. The main implication of this is the chilling effect of people having to weigh every move before acting. Imagine knowing that choosing your Cyberpunk 2077 hero's gender, romance line and play style (stealth or open assault) will somehow influence some unknown factor in your real life down the line. And would it change how you play the game?
  5. Multi-party computations, differential privacy and federated learning are going to become more widely adopted – as well as edge computing.
    It is not all bad news. As companies become more conscious of what data they actually need and consumers push back against unchecked data collection, more advanced privacy tools are emerging and becoming more widely adopted. From the hardware perspective, we will see more powerful smartphones and more specialized data processing hardware, such as Google Coral, Nvidia Jetson and Intel NCS, entering the market at affordable prices. This will allow developers to create tools that are capable of doing fancy data processing, such as running neural networks, on-device instead of in the cloud, dramatically limiting the amount of data that is transferred from you to the company. From the software standpoint, more companies like Apple, Google and Microsoft are adopting differential privacy techniques to give people strict (in the mathematical sense) privacy guarantees while continuing to make use of data. Federated learning is going to become the go-to method for dealing with data deemed too private for users to share and for companies to store. With more educational and non-commercial initiatives, such as OpenMined, surrounding them, these methods might lead to groundbreaking collaborations and new results in privacy-heavy areas such as healthcare.
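    The aggregation step at the heart of federated learning, known as federated averaging (FedAvg), can be sketched in a few lines: each client trains locally and sends only model weights, never raw data, and the server averages those weights, weighted by each client's sample count. This is a bare-bones illustration; real deployments layer secure aggregation and differential-privacy noise on top.

```python
# Minimal FedAvg server step: weighted average of client-trained model weights.
# Clients' raw data never leaves their devices; only the weight vectors do.

def fed_avg(client_weights, sample_counts):
    total = sum(sample_counts)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, count in zip(client_weights, sample_counts):
        for i, w in enumerate(weights):
            global_weights[i] += w * count / total
    return global_weights

# Two clients with locally trained weights; client 2 has twice the data,
# so it contributes twice as much to the global model.
print(fed_avg([[1.0, 2.0], [4.0, 5.0]], sample_counts=[10, 20]))  # ≈ [3.0, 4.0]
```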

We have seen over the last decade, and the last few years in particular, how privacy has become a hot-button issue at the intersection of governmental, corporate and personal interests, and how it has given rise to such different and sometimes even conflicting trends. In more general terms, we hope this year helps us, as a society, to move closer to a balance where the use of data by governments and companies is based on privacy guarantees and respect of individual rights.
