841 results for Personal data protection
Abstract:
"Mémoire présenté à la Faculté des études supérieures en vue de l'obtention du grade de maîtrise en droit (LL.M.) option Nouvelles technologies de l'information"
Abstract:
Article published in the Journal of Information Security Research, March 2012.
Abstract:
Personal data protection in Switzerland is grounded in the Constitution and given concrete form above all in a federal statute adopted before the advent of the Internet and the widespread transmission of personal information over digital networks. This legislation is supplemented by Switzerland's international commitments, notably the Council of Europe's European Convention on Human Rights. The article first delineates the scope of the legislation, which governs the processing of personal data by private parties as well as by federal administrative authorities. A brief analysis follows of the fundamental principles (lawfulness, good faith, proportionality, purpose limitation, accuracy, cross-border disclosure, security, right of access) and of their application to the Internet. Finally, the protection of the content of private electronic messages is briefly addressed from the standpoint of telecommunications secrecy and in light of recent case law of the Federal Supreme Court.
Abstract:
Italy was the second-to-last European country, followed only by Greece, to adopt a statute on the protection of privacy (Law of 31 December 1996). Paradoxically, some of the best works on the subject were written in Italy, notably those of Professor Rodotà. Despite the Italian legislature's delay, it should be noted that the 1996 law, following the EC Directive on the protection of personal data, introduces a modern concept of privacy, one not limited to a "right to be let alone" in the famous late-nineteenth-century sense, but directed instead at the protection of the human person. The concept of privacy, understood as a prohibition on access to personal information, becomes a matter of control over information relating to the person. In this way, a conception of privacy develops whose foundations are the rights to control, correct and erase information about the person. In this regard, it is important to highlight the dual authorisation system for the lawful processing of information. The data subject's consent is required for personal data. For so-called "sensitive" data, by contrast, authorisation from the Garante is required in addition to the data subject's consent. No authorisation, however, is required for the processing of data for exclusively personal purposes, or of so-called "anonymous" data, provided they do not allow the data subject to be identified.
The form of civil liability provided for by the 1996 law is particularly interesting: Article 18 provides for the application of Article 2050 of the Italian Civil Code (the conduct of dangerous activities), while Article 29 provides for the award of damages for non-pecuniary harm (a mandatory provision, in accordance with Article 2059 of the Italian Civil Code). This article examines the application of the above rules to the Internet.
Abstract:
The exponential growth of computer networks has greatly increased the volume of personal information available and replaced outdated methods of information collection with faster, more efficient ones. Privacy and control over personal information, as we knew them a few decades ago, are difficult to reconcile with an open, commercial society such as ours. Faced with this new reality, which threatens human rights and freedoms, it is essential to provide a stable technical and legal framework guaranteeing adequate protection of personal data. To stay in the market and earn the trust of individuals, businesses and governments must have an effective security infrastructure. This is tending to become more than a rule of competitiveness: it is turning into a genuine legal obligation to protect personal data through adequate and sufficient security measures. This thesis addresses precisely these two points: the development of a legal obligation of security, and the legal framework for implementing a personal data security programme whose measures meet the minimum standards imposed by national and international legislation.
Abstract:
The notion of privacy, and more specifically the right to the protection of personal information, is recognised in provincial, regional, national and international instruments as well as in the policies adopted by websites. It is accepted that any information identifying, or allowing the identification of, a person may infringe their privacy: surname, first name, telephone, bank card or social security number, or e-mail and Internet addresses. This protection, accepted in the physical world, must also exist on the information highways, it being understood that "information technology (…) shall infringe neither human identity, nor human rights, nor privacy, nor individual or public liberties" (Article 1 of the French "Informatique et Libertés" Act of 6 January 1978). That principle being accepted, it is worth asking by what means it is to be achieved. Should we rely on state regulation, self-regulation or co-regulation? The latter "is not, strictly speaking, a new form of regulation", but it calls for collaboration between public- and private-sector actors. The idea of partnership seems to have caught the attention of the French government in its task of adapting the legislative framework to the information society, as shown by the report Du droit et des libertés sur l'Internet recently submitted to the Prime Minister. This article therefore aims to survey the French legislation, and its many reports, applicable to the protection of privacy and, more particularly, of personal data on the network of networks.
Taking into account the state and non-state solutions adopted over the past two decades, we will then examine the Government's draft bill transposing into domestic law the European Directive of 24 October 1995 on the protection of personal data.
Abstract:
Speaker: Dr Kieron O'Hara. Time: 04/02/2015, 11:00-11:45. Location: B32/3077. Abstract: In order to reap the potential societal benefits of big and broad data, it is essential to share and link personal data. However, privacy and data protection considerations mean that, to be shared, personal data must be anonymised, so that the data subject cannot be identified from the data. Anonymisation is therefore a vital tool for data sharing, but deanonymisation, or reidentification, is always possible given sufficient auxiliary information (and as the amount of data grows, both in terms of creation and in terms of availability in the public domain, the probability of finding such auxiliary information grows). This creates issues for the management of anonymisation, which are exacerbated not only by uncertainties about the future, but also by misunderstandings about the process(es) of anonymisation. This talk discusses these issues in relation to privacy, risk management and security, reports on recent theoretical tools created by the UKAN network of statistics professionals (on which the author is one of the leads), and asks how long anonymisation can remain a useful tool, and what might replace it.
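The reidentification risk described in the abstract above can be illustrated with a minimal sketch: an attacker joins an "anonymised" dataset with auxiliary public data on shared quasi-identifiers. All records, names and field choices below are hypothetical and are not drawn from the talk.

```python
# Hypothetical linkage attack: direct identifiers were removed from the
# health records, but postcode + birth year survive as quasi-identifiers
# that also appear in a public auxiliary dataset.

anonymised = [  # sensitive attribute kept, names removed
    {"postcode": "SO17", "birth_year": 1975, "diagnosis": "asthma"},
    {"postcode": "SO16", "birth_year": 1982, "diagnosis": "diabetes"},
]

auxiliary = [  # e.g. a public register that includes names
    {"name": "A. Smith", "postcode": "SO17", "birth_year": 1975},
    {"name": "B. Jones", "postcode": "SO15", "birth_year": 1990},
]

def reidentify(anonymised, auxiliary):
    """Link records that share all quasi-identifier values."""
    hits = []
    for a in anonymised:
        for b in auxiliary:
            if (a["postcode"], a["birth_year"]) == (b["postcode"], b["birth_year"]):
                hits.append((b["name"], a["diagnosis"]))
    return hits

print(reidentify(anonymised, auxiliary))  # links A. Smith to "asthma"
```

The more auxiliary data is available, the more quasi-identifier combinations become unique, which is exactly why the talk argues the probability of reidentification grows with the volume of public data.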
Abstract:
Since the advent of the internet in everyday life in the 1990s, the barriers to producing, distributing and consuming multimedia data such as videos, music, ebooks, etc. have steadily been lowered for most computer users, so that almost everyone with internet access can join the online communities that produce, consume and, of course, share media artefacts. Along with this trend, violations of personal data privacy and copyright have increased, with illegal file sharing rampant across many online communities, particularly for certain music genres and among younger age groups. This has had a devastating effect on the traditional media distribution market, in most cases leaving the distribution companies and the content owners with huge financial losses. To prove that a copyright violation has occurred, one can deploy fingerprinting mechanisms to uniquely identify the property; however, these are currently based only on uni-modal approaches. In this paper we describe some of the design challenges and architectural approaches to multi-modal fingerprinting currently being examined for evaluation studies within a PhD research programme on the optimisation of multi-modal fingerprinting architectures. Accordingly, we outline the available modalities being integrated through this research programme, which aims to establish the optimal architecture for multi-modal media security protection over the internet as the online distribution environment for both legal and illegal distribution of media products.
Abstract:
This article analyses the results of an empirical study of the 200 most popular UK-based websites in various sectors of e-commerce services. The study provides empirical evidence of unlawful processing of personal data. It comprises a survey of the methods used to seek and obtain consent to process personal data for direct marketing and advertising, and a test of the frequency of unsolicited commercial emails (UCE) received by customers as a consequence of registering and submitting personal information to a website. Part One of the article presents a conceptual and normative account of data protection, with a discussion of the ethical values on which EU data protection law is grounded and an outline of the elements that must be in place to seek and obtain valid consent to process personal data. Part Two discusses the outcomes of the empirical study, which reveals a significant gap between EU legal theory and practice in data protection. Although a wide majority of the websites in the sample (69%) have a system in place to ask for separate consent to marketing activities, only 16.2% of them obtain consent that is valid under the standards set by EU law. The UCE test shows that only one in three websites (30.5%) respects the data subject's wish not to receive commercial communications. It also shows that, when submitting personal data in online transactions, there is a high probability (50%) of encountering a website that will ignore the refusal of consent and send UCE. The article concludes that there is a severe lack of compliance by UK online service providers with essential requirements of data protection law. In this respect, it suggests that the standards of implementation, information and supervision by the UK authorities are inadequate, especially in light of the clarifications provided at EU level.
Abstract:
Large-scale data collection and storage, combined with the capacity to process data that need not be related to one another so as to generate new data and information, is a technology in wide use today, generally known as Big Data. While it enables the creation of innovative new products and services that meet demands and solve problems across many sectors of society, Big Data raises a series of questions concerning the rights to privacy and to the protection of personal data. This article aims to foster a debate on the reach of current legal protection of the rights to privacy and to personal data in this context, and thereby to encourage further study of how to reconcile them with the freedom to innovate. To that end, it first addresses the positive and negative aspects of Big Data, identifying how it affects society and the economy broadly, including, but not limited to, questions of consumption, health, social organisation, public administration, etc. It then identifies the effects of this technology on the rights to privacy and to the protection of personal data, given that Big Data brings major changes to the storage and processing of data. Finally, it maps the current Brazilian regulatory framework protecting these rights, asking whether it truly answers the current challenge of reconciling innovation and privacy.
Abstract:
Linked Data assets (RDF triples, graphs, datasets, mappings...) can be protected by intellectual property law or database law, or their access or publication may be restricted for other legal reasons (personal data protection, security reasons, etc.). Publishing a rights expression along with the digital asset allows the rightsholder to waive some or all of the IP and database rights (leaving the work in the public domain), to permit some operations if certain conditions are satisfied (such as giving attribution to the author), or simply to remind the audience that some rights are reserved.
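As a minimal sketch of publishing a rights expression alongside a Linked Data asset, the snippet below emits an RDF statement in Turtle linking a dataset to a Creative Commons license via the Dublin Core dcterms:license property. The vocabulary and license URIs are real; the dataset URI and the plain string construction are illustrative assumptions, not a method described in the abstract.

```python
# Illustrative: attach a rights expression to a (hypothetical) dataset URI
# by emitting one Turtle triple using the real dcterms:license property.
dataset_uri = "http://example.org/dataset/1"                  # hypothetical asset
license_uri = "http://creativecommons.org/licenses/by/4.0/"   # CC BY 4.0

turtle = (
    "@prefix dcterms: <http://purl.org/dc/terms/> .\n"
    "\n"
    f"<{dataset_uri}> dcterms:license <{license_uri}> .\n"
)
print(turtle)
```

A consumer dereferencing the dataset can then discover, in machine-readable form, which operations (here, reuse with attribution) the rightsholder permits.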
Abstract:
In its recent Schrems judgment the Luxembourg Court annulled Commission Decision 2000/520 according to which US data protection rules are sufficient to satisfy EU privacy rules regarding EU-US transfers of personal data, otherwise known as the ‘Safe Harbour’ framework. What does this judgment mean and what are its main implications for EU-US data transfers? In this paper the authors find that this landmark judgment sends a strong message to EU and US policy-makers about the need to ensure clear rules governing data transfers, so that people whose personal data is transferred to third countries have sufficient legal guarantees. Without such rules there is legal uncertainty and mistrust. Any future arrangement for the transatlantic transfer of data will therefore need to be firmly anchored in a framework of protection commensurate to the EU Charter of Fundamental Rights and the EU data protection architecture.
Abstract:
Traditional classrooms have often been regarded as closed spaces within which experimentation, discussion and exploration of ideas occur. Professors have been used to expressing ideas frankly, and occasionally rashly, while discussions are ephemeral and conventional student work is submitted, graded and often shredded. However, digital tools have transformed the nature of privacy. As we move towards the creation of life-long archives of our personal learning, we collect material created in various 'classrooms'. Some of these are public and open, but others were created within 'circles of trust', with learners expecting privacy and anonymity. Taking the Creative Commons license as a starting point, this paper asks what rights and expectations of privacy exist in learning environments. What methods might we use to define a 'privacy license' for learning? How should the privacy rights of learners be balanced against the need to encourage open learning and the creation of eportfolios as evidence of learning? How might we define different learning spaces and the privacy rights associated with them? Which class activities are 'private' and closed to the class, which are open, and what lies between? A limited set of metrics or zones is proposed, along the axes private-public, anonymous-attributable and non-commercial-commercial, to define learning spaces and the digital footprints created within them. The application of these not only to the artefacts that reflect learning, but to the learning spaces themselves, and indeed to digital media more broadly, is explored. The possibility that these might inform not only teaching practice but also grading rubrics in disciplines where public engagement is required is also explored, along with the need for educational institutions to consider the data rights of students.
Abstract:
Healthcare systems have assimilated information and communication technologies in order to improve the quality of healthcare and patients' experience at reduced cost. The increasing digitalization of people's health information, however, raises new threats to information security and privacy. Accidental or deliberate breaches of health data may lead to societal pressure, embarrassment and discrimination. Information security and privacy are paramount to achieving high-quality healthcare services and, further, to not harming individuals when providing care. With that in mind, we give special attention to the category of Mobile Health (mHealth) systems, that is, the use of mobile devices (e.g., mobile phones, sensors, PDAs) to support medical and public health practice. Such systems have been particularly successful in developing countries, taking advantage of the flourishing mobile market and the need to expand the coverage of primary healthcare programs. Many mHealth initiatives, however, fail to address security and privacy issues. This, coupled with the lack of specific privacy and data protection legislation in these countries, increases the risk of harm to individuals. The overall objective of this thesis is to enhance knowledge regarding the design of security and privacy technologies for mHealth systems. In particular, we deal with mHealth Data Collection Systems (MDCSs), which consist of mobile devices for collecting and reporting health-related data, replacing paper-based approaches to health surveys and surveillance. This thesis consists of publications contributing to mHealth security and privacy in various ways: a comprehensive literature review of mHealth in Brazil; the design of a security framework for MDCSs (SecourHealth); the design of an MDCS (GeoHealth); the design of a Privacy Impact Assessment template for MDCSs; and the study of ontology-based obfuscation and anonymisation functions for health data.
Abstract:
This paper draws on the work of the ‘EU Kids Online’ network funded by the EC (DG Information Society) Safer Internet plus Programme (project code SIP-KEP-321803); see www.eukidsonline.net, and addresses Australian children’s online activities in terms of risk, harm and opportunity. In particular, it draws upon data that indicates that Australian children are more likely to encounter online risks — especially around seeing sexual images, bullying, misuse of personal data and exposure to potentially harmful user-generated content — than is the case with their EU counterparts. Rather than only comparing Australian children with their European equivalents, this paper places the risks experienced by Australian children in the context of the mediation and online protection practices adopted by their parents, and asks about the possible ways in which we might understand data that seems to indicate that Australian children’s experiences of online risk and harm differ significantly from the experiences of their Europe-based peers. In particular, and as an example, this paper sets out to investigate the apparent conundrum through which Australian children appear twice as likely as most European children to have seen sexual images in the past 12 months, but parents are more likely to filter their access to the internet than is the case with most children in the wider EU Kids Online study. Even so, one in four Australian children (25%) believes that what their parents do helps ‘a lot’ to improve their internet experience, and Australian children and their parents are a little less likely to agree about the mediation practices taking place in the family home than is the case in the EU. The AU Kids Online study was carried out as a result of the ARC Centre of Excellence for Creative Industries and Innovation’s funding of a small scale randomised sample (N = 400) of Australian families with at least one child, aged 9–16, who goes online. 
The report on Risks and safety for Australian children on the internet follows the same format, and uses much of the same contextual statement around these issues, as the 'country level' reports produced by the 25 EU nations involved in EU Kids Online, first drafted by Livingstone et al. (2010). The entirely new material is the data itself, along with the analysis of that data.