745 results for patron privacy
Abstract:
Mass surveillance is increasingly used for counter-terrorism purposes and necessarily entails a limitation of personal data protection and an intrusion into the private sphere. We obviously do not presume to be able to find a solution to the problem of terrorism, or to the ongoing conflict between counter-terrorism and privacy. The aim of this thesis is mainly to analyse what has been done in recent years (more precisely, from 2001 to today), to understand why governments have so far been "inefficient", and to consider what could happen in the near future.
Abstract:
In recent years, there has been enormous growth in location-aware devices, such as GPS-embedded cell phones, mobile sensors and radio-frequency identification tags. The age of combining sensing, processing and communication in one device gives rise to a vast number of applications, leading to endless possibilities and the realization of mobile Wireless Sensor Network (mWSN) applications. As computing, sensing and communication become more ubiquitous, trajectory privacy becomes a critical piece of information and an important factor for commercial success. While on the move, sensor nodes continuously transmit data streams of sensed values and spatiotemporal information, known as "trajectory information". If adversaries can intercept this information, they can monitor the trajectory path and capture the location of the source node. This research stems from the recognition that the wide applicability of mWSNs will remain elusive unless a trajectory privacy preservation mechanism is developed. The outcome seeks to lay a firm foundation in the field of trajectory privacy preservation in mWSNs against external and internal trajectory privacy attacks. First, to prevent external attacks, we investigated a context-based trajectory privacy-aware routing protocol to prevent eavesdropping attacks. Traditional shortest-path-oriented routing algorithms give adversaries the possibility of locating the target node in a certain area. We designed a novel privacy-aware routing phase and utilized the trajectory dissimilarity between mobile nodes to mislead adversaries about the location where the message started its journey. Second, to detect internal attacks, we developed a software-based attestation solution to detect compromised nodes. We created a dynamic attestation node chain among neighboring nodes to examine the memory checksum of suspicious nodes. The computation time for memory traversal was improved compared with previous work.
Finally, we revisited the trust issue in trajectory privacy preservation mechanism designs. We used Bayesian game theory to model and analyze cooperative, selfish and malicious nodes' behaviors in trajectory privacy preservation activities.
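The software-based attestation idea described above can be illustrated with a minimal sketch. This is not the thesis's actual protocol: the firmware bytes, the challenge format, and the `AttestationChain` class are hypothetical stand-ins, and the core idea shown is only the challenge-response structure, in which a fresh nonce forces a compromised node to recompute a checksum it cannot precompute.

```python
import hashlib
import os

def attest(memory: bytes, nonce: bytes) -> str:
    """Compute a challenge-dependent checksum over a node's memory image."""
    return hashlib.sha256(nonce + memory).hexdigest()

class AttestationChain:
    """Verifier side: neighboring nodes challenge a suspicious node."""

    def __init__(self, expected_memory: bytes):
        # The verifier holds a reference copy of the legitimate firmware.
        self.expected_memory = expected_memory

    def verify(self, respond, rounds: int = 3) -> bool:
        # A fresh random nonce per round prevents replaying old checksums.
        for _ in range(rounds):
            nonce = os.urandom(16)
            if respond(nonce) != attest(self.expected_memory, nonce):
                return False  # checksum mismatch: node may be compromised
        return True

# Hypothetical firmware images for an honest and a tampered node.
firmware = b"sensor-node-firmware-v1"
chain = AttestationChain(firmware)
honest = lambda nonce: attest(firmware, nonce)
tampered = lambda nonce: attest(firmware + b"-MALWARE", nonce)
```

In the real setting the response would come over the radio link and the verifier would also bound the response time, since a compromised node could otherwise compute the checksum over a hidden clean copy of the firmware.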
Abstract:
We propose a model, based on the work of Brock and Durlauf, which looks at how agents make choices between competing technologies, as a framework for exploring aspects of the economics of the adoption of privacy-enhancing technologies. In order to formulate a model of decision-making among choices of technologies by these agents, we consider the following: context, the setting in which and the purpose for which a given technology is used; requirement, the level of privacy that the technology must provide for an agent to be willing to use the technology in a given context; belief, an agent's perception of the level of privacy provided by a given technology in a given context; and the relative value of privacy, how much an agent cares about privacy in this context and how willing an agent is to trade off privacy for other attributes. We introduce these concepts into the model, admitting heterogeneity among agents in order to capture variations in requirement, belief, and relative value in the population. We illustrate the model with two examples: the possible effects of the recent Apple–FBI case on the adoption of iOS devices; and the effects of the recent revelations about the non-deletion of images on the adoption of Snapchat.
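Brock-Durlauf-style models of this kind typically reduce to a logit choice probability in which an agent's utility combines a private term (here, relative value times perceived privacy) and a social-interaction term tied to the current adoption share. The sketch below is an illustrative toy version under those assumptions, not the paper's actual specification; the parameter names and functional form are ours.

```python
import math

def adoption_probability(belief_a: float, belief_b: float,
                         value: float, social: float = 0.0,
                         share_a: float = 0.5, beta: float = 1.0) -> float:
    """Logit probability that an agent adopts technology A over B.

    belief_a, belief_b: perceived privacy level of each technology
                        in the current context.
    value:  the agent's relative value of privacy in this context.
    social: strength of conformity to the current adoption share of A.
    beta:   choice intensity (higher = more deterministic choices).
    """
    u_a = value * belief_a + social * share_a
    u_b = value * belief_b + social * (1 - share_a)
    return 1 / (1 + math.exp(-beta * (u_a - u_b)))
```

Heterogeneity in requirement, belief, and relative value would then be captured by drawing `belief_*` and `value` from population distributions and averaging the resulting probabilities.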
Abstract:
Healthcare systems have assimilated information and communication technologies in order to improve the quality of healthcare and patients' experience at reduced costs. The increasing digitalization of people's health information, however, raises new threats regarding information security and privacy. Accidental or deliberate breaches of health data may lead to societal pressure, embarrassment and discrimination. Information security and privacy are paramount to achieving high-quality healthcare services and, further, to not harming individuals when providing care. With that in mind, we give special attention to the category of Mobile Health (mHealth) systems, that is, the use of mobile devices (e.g., mobile phones, sensors, PDAs) to support medical and public health practice. Such systems have been particularly successful in developing countries, taking advantage of the flourishing mobile market and the need to expand the coverage of primary healthcare programs. Many mHealth initiatives, however, fail to address security and privacy issues. This, coupled with the lack of specific legislation for privacy and data protection in these countries, increases the risk of harm to individuals. The overall objective of this thesis is to enhance knowledge regarding the design of security and privacy technologies for mHealth systems. In particular, we deal with mHealth Data Collection Systems (MDCSs), which consist of mobile devices for collecting and reporting health-related data, replacing paper-based approaches for health surveys and surveillance. This thesis consists of publications contributing to mHealth security and privacy in various ways: a comprehensive literature review about mHealth in Brazil; the design of a security framework for MDCSs (SecourHealth); the design of an MDCS (GeoHealth); the design of a Privacy Impact Assessment template for MDCSs; and the study of ontology-based obfuscation and anonymisation functions for health data.
Abstract:
Situational awareness provides a user-centric approach to security and privacy. The human factor is often recognised as the weakest link in security; therefore, situational perception and risk awareness play a leading role in the adoption and implementation of security mechanisms. In this study we assess the understanding of security and privacy among users in possession of wearable devices. The findings demonstrate privacy complacency, as the majority of users trust the application and the wearable device manufacturer. Moreover, the survey findings demonstrate a lack of understanding of security and privacy in the sample population. Finally, the theoretical implications of the findings are discussed.
Abstract:
As mechatronic devices and components become increasingly integrated with and within wider systems concepts such as Cyber-Physical Systems and the Internet of Things, design engineers are faced with new sets of challenges in areas such as privacy. The paper looks at the current, and potential future, state of privacy legislation, regulations and standards and considers how these are likely to impact the way in which mechatronics is perceived and viewed. The emphasis is therefore not on technical issues, though these are brought into consideration where relevant, but on the soft, or human-centred, issues associated with achieving user privacy.
Abstract:
Data sharing between organizations through interoperability initiatives involving multiple information systems is fundamental to promoting the collaboration and integration of services. However, the considerable increase in data exposure to additional risks requires special attention to issues related to the privacy of these data. For the Portuguese healthcare sector, where the sharing of health data is now a reality at the national level, data privacy is a central issue, which needs solutions in line with the agreed level of interoperability between organizations. This context led the authors to study the factors that influence data privacy in a context of interoperability, through qualitative and interpretative research based on the case study method. This article presents the final results of the research, which successfully identifies 10 subdomains of factors influencing data privacy; these should form the basis for the development of a joint protection program targeted at issues associated with data privacy.
Abstract:
In digital markets personal information is pervasively collected by firms. In the first chapter I study data ownership and product customization when there is exclusive access to non-rival but excludable data about consumer preferences. I show that an incumbent firm does not have an incentive to sell an exclusively held dataset to a rival firm, but instead has an incentive to trade a customizing technology with the other firm. In the second chapter I investigate the effects of consumer information on the intensity of competition. In a two-dimensional model of product differentiation, firms use information on preferences to practice price discrimination. I contrast a full-privacy and a no-privacy benchmark with a regime in which firms are able to target consumers only partially. When data is partially informative, firms are always better off with price discrimination, and exclusive access to user data is not necessarily a competition policy concern. From a consumer protection perspective, the policy recommendation is that the regulator should promote either no privacy or full privacy. In the third chapter I introduce a data broker that observes either only one or both dimensions of consumer information and sells this data to competing firms for price discrimination purposes. When the seller exogenously holds a partially informative dataset, an exclusive allocation arises. Instead, when the dataset held is fully informative, the data broker trades information non-exclusively, but each competitor acquires consumer data on a different dimension. When data collection is made endogenous, non-exclusivity is robust if collection costs are not too high. The competition policy suggestion is that exclusivity should not be banned per se; rather, it is data differentiation in equilibrium that raises market power in competitive markets. Upstream competition is sufficient to ensure that both firms get access to consumer information.
Abstract:
The chapters of the thesis focus on a limited variety of selected themes in EU privacy and data protection law. Chapter 1 sets out the general introduction to the research topic. Chapter 2 touches upon the methodology used in the research. Chapter 3 conceptualises the basic notions from a legal standpoint. Chapter 4 examines the current regulatory regime applicable to digital health technologies, healthcare emergencies, privacy, and data protection. Chapter 5 provides case studies on applications deployed in the Covid-19 scenario, from the perspective of privacy and data protection. Chapter 6 addresses the post-Covid European regulatory initiatives on the subject matter and their potential effects on privacy and data protection. Chapter 7 is the outcome of a six-month internship with a company in Italy and focuses on the protection of fundamental rights through common standardisation and certification, demonstrating that such standards can serve as supporting tools to guarantee the right to privacy and data protection in digital health technologies. The thesis concludes with the observation that finding and transposing European privacy and data protection standards into scenarios such as public healthcare emergencies, where digital health technologies are deployed, requires rapid coordination between the European Data Protection Authorities and the Member States to guarantee that individual privacy and data protection rights are ensured.
Abstract:
In recent years, there has been exponential growth in the use of virtual spaces, including dialogue systems, that handle personal information. The concept of personal privacy is debated and controversial in the literature, whereas in the technological field it directly influences the degree of reliability perceived in the information system (privacy 'as trust'). This work aims to protect the right to privacy over personal data (GDPR, 2018) and to avoid the loss of sensitive content by exploring the sensitive information detection (SID) task. It is grounded on the following research questions: (RQ1) What does sensitive data mean? How can a personal sensitive information domain be defined? (RQ2) How can a state-of-the-art model for SID be created? (RQ3) How can the model be evaluated? RQ1 theoretically investigates the concepts of privacy and the ontological state-of-the-art representation of personal information. The Data Privacy Vocabulary (DPV) is the taxonomic resource taken as an authoritative reference for the definition of the knowledge domain. Concerning RQ2, we investigate two approaches to classifying sensitive data: the first, bottom-up, explores automatic learning methods based on transformer networks; the second, top-down, proposes logical-symbolic methods with the construction of privaframe, a knowledge graph of compositional frames representing personal data categories. Both approaches are tested. For the evaluation (RQ3) we create SPeDaC, a sentence-level labeled resource. This can be used as a benchmark or for training in the SID task, filling the gap of a shared resource in this field. While the approach based on artificial neural networks confirms the validity of the direction adopted in the most recent studies on SID, the logical-symbolic approach emerges as the preferred way to classify fine-grained personal data categories, thanks to the semantically grounded, tailored modeling it allows. At the same time, the results highlight the strong potential of hybrid architectures in solving automatic tasks.
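The top-down, frame-based direction can be pictured with a deliberately tiny sketch. The category labels and lexical cues below are invented for illustration (they only loosely echo DPV-style categories and are not privaframe's actual frames); the point is just the mechanism of matching sentences against frames of cues that represent personal data categories.

```python
# Toy frames: each personal-data category (labels are hypothetical,
# loosely DPV-inspired) is triggered by a set of lexical cues.
FRAMES: dict[str, set[str]] = {
    "Health": {"diagnosis", "therapy", "medication"},
    "FinancialAccount": {"iban", "salary", "credit card"},
    "Contact": {"email", "phone number", "home address"},
}

def detect_sensitive(sentence: str) -> set[str]:
    """Return the personal-data categories whose cues appear in the sentence."""
    text = sentence.lower()
    return {category for category, cues in FRAMES.items()
            if any(cue in text for cue in cues)}
```

A real system would replace the literal substring match with frame-semantic parsing over the knowledge graph, which is what allows the fine-grained category distinctions the abstract mentions.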
Abstract:
The thesis represents the conclusive outcome of the European Joint Doctorate programme in Law, Science & Technology funded by the European Commission through the Marie Skłodowska-Curie Innovative Training Networks actions within H2020, grant agreement n. 814177. The tension between data protection and privacy on the one side, and the need to grant further uses of processed personal data on the other, is investigated, tracing the technological development of the de-anonymization/re-identification risk with an explorative survey. After acknowledging its span, it is questioned whether a certain degree of anonymity can still be granted, focusing on a double perspective: an objective and a subjective one. The objective perspective focuses on the data processing models per se, while the subjective perspective investigates whether the distribution of roles and responsibilities among stakeholders can ensure data anonymity.
Abstract:
The purpose of this research study is to discuss privacy and data protection-related regulatory and compliance challenges posed by digital transformation in healthcare in the wake of the COVID-19 pandemic. The public health crisis accelerated the development of patient-centred remote/hybrid healthcare delivery models that make increased use of telehealth services and related digital solutions. The large-scale uptake of IoT-enabled medical devices and wellness applications, and the offering of healthcare services via healthcare platforms (online doctor marketplaces), have catalysed these developments. However, the use of new enabling technologies (IoT, AI) and the platformisation of healthcare pose complex challenges to the protection of patients' privacy and personal data. This happens at a time when the EU is drawing up a new regulatory landscape for the use of data and digital technologies. Against this background, the study presents an interdisciplinary (normative and technology-oriented) critical assessment of how the new regulatory framework may affect privacy and data protection requirements regarding the deployment and use of Internet of Health Things (hardware) devices and interconnected software (AI systems). The study also assesses key privacy and data protection challenges that affect healthcare platforms (online doctor marketplaces) in their offering of video API-enabled teleconsultation services and their (anticipated) integration into the European Health Data Space. The overall conclusion of the study is that regulatory deficiencies may create integrity risks for the protection of privacy and personal data in telehealth due to uncertainties about the proper interplay, legal effects and effectiveness of (existing and proposed) EU legislation. The proliferation of normative measures may increase compliance costs, hinder innovation and ultimately deprive European patients of state-of-the-art digital health technologies, which is, paradoxically, the opposite of what the EU plans to achieve.
Abstract:
The thesis aims to present a comprehensive and holistic overview of cybersecurity and privacy & data protection aspects related to IoT resource-constrained devices. Chapter 1 introduces the current technical landscape by providing a working definition and architecture taxonomy of 'Internet of Things' and 'resource-constrained devices', coupled with a threat landscape where each specific attack is linked to a layer of the taxonomy. Chapter 2 lays down the theoretical foundations for an interdisciplinary approach and a unified, holistic vision of cybersecurity, safety and privacy justified by the 'IoT revolution' through the so-called infraethical perspective. Chapter 3 investigates whether and to what extent the fast-evolving European cybersecurity regulatory framework addresses the security challenges brought about by the IoT by allocating legal responsibilities to the right parties. Chapters 4 and 5 focus, on the other hand, on 'privacy', understood by proxy as including EU data protection. In particular, Chapter 4 addresses three legal challenges brought about by ubiquitous IoT data and metadata processing to EU privacy and data protection legal frameworks, i.e., the ePrivacy Directive and the GDPR. Chapter 5 casts light on the risk management tool enshrined in EU data protection law, that is, the Data Protection Impact Assessment (DPIA), and proposes an original DPIA methodology for connected devices, building on the model of the CNIL (the French data protection authority).
Abstract:
This thesis project studies the agent identity privacy problem in the scalar linear quadratic Gaussian (LQG) control system. For this problem, privacy models and privacy measures have to be established first; they depend on a trajectory of correlated data rather than a single observation. I propose privacy models and the corresponding privacy measures that take these two characteristics into account. The agent identity is a binary hypothesis: Agent A or Agent B. An eavesdropper is assumed to perform hypothesis testing on the agent identity based on the intercepted environment state sequence. The privacy risk is measured by the Kullback-Leibler divergence between the probability distributions of the state sequences under the two hypotheses. Taking into account both the accumulated control reward and the privacy risk, an optimization problem over the policy of Agent B is formulated; this formulation addresses the two design objectives of maximizing the control reward and minimizing the privacy risk. The optimal deterministic privacy-preserving LQG policy of Agent B is a linear mapping, and a sufficient condition is given to guarantee that this policy is time-invariant in the asymptotic regime. Adding an independent Gaussian random variable cannot improve the performance of Agent B. I conduct a theoretical analysis of the LQG control policy in the agent identity privacy problem and of the trade-off between the control reward and the privacy risk. Finally, the theoretical results are justified by numerical experiments, which illustrate the reward-privacy trade-off; the observations and insights drawn from the numerical results are discussed in the last chapter.
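The Kullback-Leibler privacy measure above can be illustrated with a simplified sketch. For two scalar Gaussians with equal variance, KL(N(a, s²) ‖ N(b, s²)) = (a − b)²/(2s²); summing this per step gives a trajectory-level risk proxy. Note the hedge: the sketch treats time steps as independent, whereas the thesis works with correlated state sequences, and the function names are ours.

```python
import math

def kl_gaussian(mu_a: float, mu_b: float, sigma: float) -> float:
    """KL divergence between two scalar Gaussians with equal variance sigma^2."""
    return (mu_a - mu_b) ** 2 / (2 * sigma ** 2)

def sequence_privacy_risk(states_a: list, states_b: list, sigma: float) -> float:
    """Privacy risk proxy: summed per-step KL between the state distributions
    an eavesdropper would observe under hypotheses 'Agent A' and 'Agent B'.
    Steps are treated as independent -- a simplification of the correlated
    trajectory setting studied in the thesis."""
    return sum(kl_gaussian(a, b, sigma) for a, b in zip(states_a, states_b))
```

Under this measure, the more Agent B's expected state trajectory diverges from Agent A's, the easier the eavesdropper's hypothesis test becomes; the policy optimization trades this risk against the control reward. A quick check: identical mean trajectories give zero risk (the hypotheses are indistinguishable), and any divergence gives a strictly positive risk.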
Abstract:
By using a location-based service, millions of users consent every day to the use and storage of their personal data by the companies providing it. Current legislation gives users of these services a fair degree of protection through data anonymisation. There are, however, situations in which this information is at risk: if an attacker successfully breaks into the server where the data are stored, they may still be able to access a user's sensitive data. Through certain techniques it is in fact possible to trace who the information refers to via quasi-identifiers. One solution is to approximate a user's location data so as not to offer too precise a view to a potential adversary who manages to retrieve it. In order to understand the parameters with which to obfuscate the user, a script was written to simulate the activity of several users moving around New York City. These users issue requests to a hypothetical location-based service at regular intervals, simulating the automatic refresh that a smartphone performs. From the data of these requests it is possible to determine which users are in proximity to one another, so that their respective information can be mixed up. This system reduces an adversary's chances of tracing the data back to the user. As the query interval decreases, the path becomes more defined, but a larger amount of data is collected. As the radius increases, the position becomes more uncertain, which in turn reduces the value the data carries for a service provider. Balancing the economic value of the data against the protection afforded to a user is therefore essential to understand which obfuscation values can be used.
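The core obfuscation step described above (replacing an exact position with an imprecise one within a chosen radius) can be sketched as follows. This is an illustrative stand-in, not the thesis's script: the function name, the uniform-in-a-disc sampling choice, and the rough metres-to-degrees conversion are all our assumptions.

```python
import math
import random

def obfuscate(lat: float, lon: float, radius_m: float, rng=random):
    """Replace an exact position with a uniform random point within radius_m.

    Uses a rough conversion (1 degree of latitude ~ 111,320 m), which is
    adequate for the small radii of location cloaking, not for geodesy.
    """
    # Uniform sampling in a disc: take sqrt of the radial coordinate so
    # points do not cluster near the centre.
    r = radius_m * math.sqrt(rng.random())
    theta = rng.uniform(0.0, 2.0 * math.pi)
    dlat = (r * math.cos(theta)) / 111_320
    dlon = (r * math.sin(theta)) / (111_320 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

The radius parameter directly encodes the trade-off the abstract describes: a larger `radius_m` gives the adversary a coarser view of the user's position but also degrades the utility of the data for the service provider.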