801 results for EU Data Protection Framework
Abstract:
"US 84-10/8."
Abstract:
This paper is devoted to the creation of cryptographic data security and the realization of the packet mode in a distributed information measurement and control system that implements methods of optical spectroscopy for research in plasma physics and atomic collisions. The system provides remote access to information and hardware resources for the natural sciences within Intranet/Internet networks. Access to physical equipment is realized through the standard interface servers (PXI, CAMAC, and GPIB), a server providing access to Ethernet devices, and a communication server, which integrates the equipment servers into a uniform information system. The system is used to carry out research tasks in optical spectroscopy and to support teaching at the Department of Physics and Engineering of Petrozavodsk State University.
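As an illustration of what a "packet mode" with cryptographic protection can look like, here is a minimal sketch of authenticated packet framing; the frame layout, pre-shared key, and command string are our assumptions, not the system's actual protocol.

```python
# Minimal sketch of authenticated packet framing (assumed, not the paper's protocol).
import hmac, hashlib, struct

KEY = b"shared-secret"  # assumed pre-shared key between client and server

def make_packet(seq: int, payload: bytes) -> bytes:
    """Frame: 4-byte sequence number + 4-byte length + payload + 32-byte HMAC."""
    header = struct.pack(">II", seq, len(payload))
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def parse_packet(frame: bytes) -> tuple[int, bytes]:
    """Verify integrity, then return (sequence number, payload)."""
    header, payload, tag = frame[:8], frame[8:-32], frame[-32:]
    expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    seq, length = struct.unpack(">II", header)
    assert length == len(payload)
    return seq, payload

pkt = make_packet(1, b"GET SPECTRUM")   # hypothetical equipment-server command
print(parse_packet(pkt))                # -> (1, b'GET SPECTRUM')
```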
Abstract:
After years of deliberation, the EU Commission sped up the reform of a common EU digital policy considerably in 2015 by launching the EU Digital Single Market strategy. In particular, two core initiatives of the strategy were agreed upon: the texts of the General Data Protection Regulation and the Network and Information Security (NIS) Directive. A further initiative was launched addressing the role of online platforms. This paper focuses on the platform privacy rationale behind the data protection legislation, primarily based on the proposal for a new EU-wide General Data Protection Regulation. We analyse the rationale of the legislation from an information systems perspective to understand the role user data plays in creating platforms that we identify as “processing silos”. Generative digital infrastructure theories are used to explain the innovative mechanisms thought to govern the notion of digitalization and the successful business models affected by it. We foresee continued judicial data protection challenges with the proposed Regulation as adoption of the “Internet of Things” continues. The findings of this paper illustrate that many of the existing issues can be addressed through legislation from a platform perspective. We conclude by proposing three modifications to the governing rationale, which would improve not only platform privacy for the data subject but also entrepreneurial efforts in developing intelligent service platforms. The first modification aims to improve service differentiation on platforms by lessening the ability of incumbent global actors to lock the user base in to their service or platform. The second posits limiting the current unwanted tracking ability of syndicates by separating authentication and data-store services from any processing entity. Thirdly, we propose a change in how security and data protection policies are reviewed, suggesting a third-party auditing procedure.
Abstract:
Public agencies are increasingly required to collaborate with each other in order to provide high-quality e-government services. This collaboration is usually based on the service-oriented approach and supported by interoperability platforms. Such platforms are specialized middleware-based infrastructures enabling the provision, discovery and invocation of interoperable software services. In turn, given that personal data handled by governments are often very sensitive, most governments have developed some sort of legislation focusing on data protection. This paper proposes solutions for monitoring and enforcing data protection laws within an E-government Interoperability Platform. In particular, the proposal addresses requirements posed by the Uruguayan Data Protection Law and the Uruguayan E-government Platform, although it can also be applied in similar scenarios. The solutions are based on well-known integration mechanisms (e.g. Enterprise Service Bus) as well as recognized security standards (e.g. eXtensible Access Control Markup Language) and were completely prototyped leveraging the SwitchYard ESB product.
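The enforcement described above rests on XACML-style attribute-based access decisions; the following toy policy decision point is our illustration of that pattern, not the platform's actual policies or code.

```python
# Toy policy decision point in the spirit of XACML (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class Request:
    subject_role: str       # who is asking
    resource_category: str  # e.g. "personal-data"
    action: str             # e.g. "read"
    purpose: str            # declared purpose of processing

def decide(req: Request) -> str:
    """Return an XACML-style decision: Permit, Deny, or NotApplicable."""
    if req.resource_category != "personal-data":
        return "NotApplicable"
    # Assumed rule: only agency officers may read personal data, and only for
    # a purpose compatible with the data protection law.
    if (req.subject_role == "agency-officer"
            and req.action == "read"
            and req.purpose in {"service-provision", "legal-obligation"}):
        return "Permit"
    return "Deny"

print(decide(Request("agency-officer", "personal-data", "read", "service-provision")))  # Permit
print(decide(Request("contractor", "personal-data", "read", "marketing")))              # Deny
```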
Abstract:
The purpose of this thesis is to explain why and how the European Union's data protection instruments – the current Data Protection Directive and the forthcoming Data Protection Regulation – place restrictions on transfers of EU citizens' personal data to third countries for commercial purposes. Particular attention is paid to the Safe Harbor scheme, which enabled transfers of personal data from the EU to the United States until the Court of Justice of the European Union declared it invalid in case C-362/14 Maximillian Schrems v Data Protection Commissioner. Since the research topic, cross-border transfers of personal data, lies at the intersection of international law and data protection law, the study draws on scholarship from both fields. Brownlie's Principles of Public International Law (6th edition) serves as the basic source on international law, against which literature dealing more specifically with the research topic is read. Worth highlighting in particular are Bygrave's Data Privacy Law: An International Perspective, which treats data protection law in an international context, and Kuner's Transborder Data Flows and Data Privacy Law, which deals specifically with international transfers of personal data. Because the research phenomenon and the field of law evolve rapidly with new technologies, the study also draws extensively on articles in respected journals and, as official sources, on reports by EU data protection authorities and the UN. The central findings identify the interests of the EU and its Member States in personal data transfers and the effects of the EU's transfer rules on third countries. Reaching global consensus on rules for international transfers of personal data was assessed to be unlikely, at least in the near future. Among existing regional regulatory solutions, Council of Europe Convention No. 108 was found to show the greatest potential for worldwide implementation. Finally, within a framework of legal pluralism, appropriate means were assessed for strengthening EU citizens' privacy and personal data protection, both safeguarded as fundamental rights. The analysis shows an informational and power imbalance between EU citizens and the companies processing and transferring their personal data, manifested in the weakening of individuals' informational self-determination and of the significance of consent, although the EU's Data Protection Regulation, entering into force in 2018, seeks to remedy this problem by emphasizing the accountability of organisations.
Abstract:
The thesis is the conclusive outcome of the European Joint Doctorate programme in Law, Science & Technology funded by the European Commission through the Marie Skłodowska-Curie Innovative Training Networks actions within H2020, grant agreement n. 814177. The tension between data protection and privacy on one side, and the need to grant further uses of processed personal data on the other, is investigated, tracing the technological development of the de-anonymization/re-identification risk with an explorative survey. After acknowledging its scope, it is asked whether a certain degree of anonymity can still be granted, from a double perspective: an objective and a subjective one. The objective perspective focuses on the data processing models per se, while the subjective perspective investigates whether the distribution of roles and responsibilities among stakeholders can ensure data anonymity.
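To make the notion of a measurable degree of anonymity concrete, here is a minimal k-anonymity check over quasi-identifiers; this illustrates the objective (data-model) perspective and is our example, not a method taken from the thesis.

```python
# Minimal k-anonymity check: the smallest group of records sharing the same
# quasi-identifier values bounds how well any one record "hides in the crowd".
from collections import Counter

def k_anonymity(records: list[dict], quasi_identifiers: list[str]) -> int:
    """Smallest equivalence-class size over the quasi-identifier columns."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(classes.values())

data = [
    {"zip": "537**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "537**", "age": "30-39", "diagnosis": "cold"},
    {"zip": "537**", "age": "40-49", "diagnosis": "flu"},
]
print(k_anonymity(data, ["zip", "age"]))  # -> 1: one class holds a single record
```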
Abstract:
Big data and AI are paving the way to promising scenarios in clinical practice and research. However, the use of such technologies might clash with GDPR requirements. Today, two forces drive EU policy in this domain. The first is the necessity to protect individuals’ safety and fundamental rights; the second is to incentivize the deployment of innovative technologies. The first objective is pursued by legislative acts such as the GDPR or the AIA; the second is supported by the new data strategy recently launched by the European Commission. Against this background, the thesis analyses the issue of GDPR compliance when big data and AI systems are implemented in the health domain, focusing on the use of co-regulatory tools for compliance with the GDPR. This work argues that there are two levels of co-regulation in the EU legal system. The first, more general, is the approach pursued by the EU legislator when shaping legislative measures that deal with fast-evolving technologies. The GDPR can be deemed a co-regulatory solution since it mainly introduces general requirements, whose implementation must then be interpreted by the addressees of the law following a risk-based approach. This approach, although useful, is costly and sometimes burdensome for organisations. The second co-regulatory level is represented by specific co-regulatory tools, such as codes of conduct and certification mechanisms, which are meant to guide and support the interpretive effort of the addressees of the law. The thesis argues that the lack of co-regulatory tools implementing data protection law in specific situations could be an obstacle to the deployment of innovative solutions in complex scenarios such as the health ecosystem, and it advances hypotheses, at a theoretical level, about the reasons for this lack of co-regulatory solutions.
Abstract:
This article analyses the results of an empirical study on the 200 most popular UK-based websites in various sectors of e-commerce services. The study provides empirical evidence on unlawful processing of personal data. It comprises a survey of the methods used to seek and obtain consent to process personal data for direct marketing and advertising, and a test of the frequency of unsolicited commercial emails (UCE) received by customers as a consequence of registering and submitting personal information to a website. Part One of the article presents a conceptual and normative account of data protection, with a discussion of the ethical values on which EU data protection law is grounded and an outline of the elements that must be in place to seek and obtain valid consent to process personal data. Part Two discusses the outcomes of the empirical study, which unveils a significant gap between EU legal theory and practice in data protection. Although a wide majority of the websites in the sample (69%) have a system in place to ask separate consent for engaging in marketing activities, only 16.2% of them obtain consent that is valid under the standards set by EU law. The test with UCE shows that only one in three websites (30.5%) respects the will of the data subject not to receive commercial communications. It also shows that, when submitting personal data in online transactions, there is a high probability (50%) of encountering a website that will ignore the refusal of consent and send UCE. The article concludes that there is a severe lack of compliance of UK online service providers with essential requirements of data protection law, and it suggests that the standard of implementation, information, and supervision by the UK authorities is inadequate, especially in light of the clarifications provided at EU level.
Abstract:
The General Data Protection Regulation (GDPR) has been designed to promote the interests of individuals over those of large corporations. However, dedicated technologies are needed that can help companies comply with the GDPR while enabling people to exercise their rights. We argue that such a dedicated solution must address two main issues: the need for more transparency towards individuals regarding the management of their personal information, and their often hindered ability to access and make their personal data interoperable, so that exercising one's rights becomes straightforward. We aim to provide a system that pushes personal data management towards the individual's control, i.e., a personal information management system (PIMS). By using distributed storage and decentralized computing networks to control online services, users' personal information can be shifted towards those directly concerned, i.e., the data subjects. The use of Distributed Ledger Technologies (DLTs) and Decentralized File Storage (DFS) as implementations of decentralized systems is of paramount importance in this case. The structure of this dissertation follows an incremental approach to describing a set of decentralized systems and models that revolve around personal data and their subjects. Each chapter builds on the previous one and discusses the technical implementation of a system and its relation to the corresponding regulations. We refer to the EU regulatory framework, including the GDPR, eIDAS, and the Data Governance Act, to derive the functional and non-functional drivers of our final system architecture. In our PIMS design, personal data are kept in a Personal Data Space (PDS) consisting of encrypted personal data referring to the subject, stored in a DFS. On top of that, a network of authorization servers acts as a data intermediary, providing access to potential data recipients through smart contracts.
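A minimal sketch of the PDS idea, assuming an in-memory dictionary standing in for the DFS and a simple grant set standing in for the smart-contract authorization layer; the names and grant logic are ours, not the dissertation's implementation.

```python
# Sketch: encrypted personal data in a content-addressed store, released only
# through an authorization check. The store and grants are toy stand-ins.
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

dfs: dict[str, bytes] = {}            # toy content-addressed store (stands in for a DFS)
grants: set[tuple[str, str]] = set()  # (recipient, content_id) pairs, "smart contract" state

def store_personal_data(plaintext: bytes, key: bytes) -> str:
    """Encrypt, store, and return the content identifier of the ciphertext."""
    ciphertext = Fernet(key).encrypt(plaintext)
    cid = hashlib.sha256(ciphertext).hexdigest()
    dfs[cid] = ciphertext
    return cid

def fetch(recipient: str, cid: str, key: bytes) -> bytes:
    """Release plaintext only if the data subject has granted access."""
    if (recipient, cid) not in grants:
        raise PermissionError("no grant for this recipient")
    return Fernet(key).decrypt(dfs[cid])

key = Fernet.generate_key()
cid = store_personal_data(b'{"name": "Alice"}', key)
grants.add(("hospital-x", cid))   # the data subject grants access
print(fetch("hospital-x", cid, key))
```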
Abstract:
Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to verify theoretical results or to test developing hardware or products. Although this is an attractive solution, being a low-cost, easy, and fast way to carry out some coursework, it has major disadvantages. As everything is currently done with or in a computer, students are losing the “feel” of the real values of physical magnitudes. For instance, in engineering studies, mainly in the first years, students need to learn electronics, algorithmics, mathematics, and physics. All of these areas can use numerical analysis software, simulation software, or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, but real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Also, when using spreadsheets to build graphics, instead of a random table, students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that can be used by different courses in which computers are central to the teaching/learning process, giving students a more realistic feeling by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes, e.g. temperature, light, and wind speed, which are either connected to a central server that students access over Ethernet or connected directly to the student's computer/laptop. These sensors use the available communication ports, such as serial ports, parallel ports, Ethernet, or Universal Serial Bus (USB). Since a central server is used, students are encouraged to use the sensor values in their different courses and consequently in different types of software, such as numerical analysis tools, spreadsheets, or simply inside any programming language when a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using a different type of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between different schools; sensors that are not available at one school can be used by getting the values from other places that share them. Moreover, students in the more advanced years, with (theoretically) more know-how, can use courses related to electronic development to build new sensor modules and expand the framework further. The final solution is very attractive: low cost, simple to develop, and flexible, allowing the same materials to be used in several courses and bringing real-world data into students' computer work.
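As an illustration of how a serial-attached sensor node in such a framework might be read, here is a minimal Python sketch; the port name, baud rate, and "temp=<value>" line format are assumptions about hardware we do not have, not the framework's actual protocol.

```python
# Sketch: read one temperature sample from a serial-attached sensor node.
import serial  # pip install pyserial

def read_temperature(port: str = "/dev/ttyUSB0", baud: int = 9600) -> float:
    """Read one 'temp=<value>' line from the sensor and return the value."""
    with serial.Serial(port, baud, timeout=2.0) as link:
        line = link.readline().decode("ascii", errors="replace").strip()
    name, _, value = line.partition("=")
    if name != "temp":
        raise ValueError(f"unexpected reading: {line!r}")
    return float(value)

if __name__ == "__main__":
    print(f"room temperature: {read_temperature():.1f} °C")
```

The same function could back a tiny HTTP endpoint on the central server, so spreadsheet or numerical-analysis coursework can pull live values instead of random tables.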
Abstract:
The document should be read as supplementary to existing requirements as set out both in statute – particularly legislation specific to your organisation, the Health Acts 1947-2004, Ombudsman Act 1980, Data Protection Acts 1988 & 2003, Freedom of Information Acts 1997-2003, Ethics in Public Office Acts 1995 & 2001, Ombudsman for Children Act 2002, and the Comptroller and Auditor General (Amendment) Act 1993 – and in Government-approved guidelines, including the Code of Practice for the Governance of State Bodies (2001), Public Financial Procedures, The Role and Responsibilities of Accounting Officers (2003), and Risk Management Guidance for Government Departments and Offices (2004). Read the report (PDF, 1.4mb)
Abstract:
Medical research on minors entails both risks and benefits. Under Swiss law, clinical trials on children, including nontherapeutic drug trials, are permissible. However, ethics committees must systematically verify that all clinical studies have a favorable risk-benefit profile. Additional safeguards are designed to ensure that children are not unnecessarily involved in research and that proper consent is always obtained. Federal Swiss law is undergoing revision to extend these protections beyond clinical trials to a broad array of health research. The Swiss drug agency also seeks to improve the incentives for pharmaceutical firms to develop new paediatric drugs and relevant paediatric drug labels.
Abstract:
Background: The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make it feasible, we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is to promote the integration of clinical, socio-demographic, and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information. Results: We have implemented an extension of Chado – the Clinical Module – to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: the data level, to store the data; the semantic level, to integrate and standardize the data through the use of ontologies; the application level, to manage clinical databases, ontologies, and the data integration process; and the web interface level, to allow interaction between the user and the system. The Clinical Module was built on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system, the Clinical Module was implemented, and the framework was loaded using data from a factual clinical research database; clinical and demographic data as well as biomaterial data were obtained from patients with tumors of the head and neck. We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of an ontology-based model to describe the legacy clinical data; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications. Conclusions: Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different “omics” technologies with patients' clinical and socio-demographic data. Such a framework should present certain features: flexibility, compression, and robustness. The experiments carried out on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
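To illustrate the Entity-Attribute-Value layout the Clinical Module is based on, here is a minimal SQLite sketch; the table and attribute names are ours, not the actual Chado schema.

```python
# Sketch: EAV storage of clinical facts, pivoted back into per-patient rows.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE clinical_eav (
    patient_id INTEGER, attribute TEXT, value TEXT)""")
con.executemany(
    "INSERT INTO clinical_eav VALUES (?, ?, ?)",
    [(1, "tumor_site", "larynx"),
     (1, "age", "58"),
     (2, "tumor_site", "oral cavity")],
)
# EAV rows let each patient carry a different attribute set; a pivot query
# reassembles a conventional record at read time.
rows = con.execute("""
    SELECT patient_id,
           MAX(CASE WHEN attribute = 'tumor_site' THEN value END) AS tumor_site,
           MAX(CASE WHEN attribute = 'age' THEN value END) AS age
    FROM clinical_eav GROUP BY patient_id
""").fetchall()
print(rows)  # [(1, 'larynx', '58'), (2, 'oral cavity', None)]
```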
Abstract:
The continuous evolution of learning needs toward greater effectiveness and personalization has fostered the emergence of new tools and dimensions whose objective is to make learning accessible to everyone and adapted to technological and social contexts. This evolution has given rise to what is called online social learning, which emphasizes interaction between learners. Taking interaction into account has brought many benefits to the learner, namely establishing connections, exchanging personal experiences, and receiving assistance that improves learning. However, the amount of personal information that learners sometimes disclose during these interactions leads to consequences that are often disastrous for privacy, such as cyberbullying, identity theft, etc. Despite the concerns raised, privacy as an individual right represents an ideal situation that is difficult to recognize in today's social context. Indeed, we have moved from a conceptualization of privacy as a core of sensitive data to be protected from outside intrusion to a new vision centred on negotiating the disclosure of those data. The challenge for social learning environments is therefore to guarantee a maximal level of interaction for learners while preserving their privacy. To the best of our knowledge, most innovations in these environments have focused on developing interaction techniques, without any consideration for privacy, an element that is nevertheless necessary to create an environment conducive to learning. In this work, we propose a privacy framework that we call a "privacy manager". More precisely, this manager is responsible for protecting the learner's personal data and privacy during interactions with co-learners. Building on the idea that interaction provides access to online help, we analyse interaction as a cognitive activity involving contextual factors, other learners, and socio-emotional aspects. The main objective of this thesis is therefore to rethink the mutual-aid processes between learners by implementing the tools needed to find a compromise between interaction and privacy protection. This is done at three levels: the first considers contextual and social aspects of the interaction, such as trust between learners and the emotions that triggered the need to interact. The second level of protection estimates the risks of disclosure and facilitates the privacy-protection decision. The third level of protection detects any disclosure of personal data using machine learning and semantic analysis techniques.
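As a toy illustration of the third protection level, the sketch below flags personal-data disclosures in a learner's message with a few lexical patterns; a real privacy manager would combine semantic analysis with a trained classifier, so these regexes are only illustrative.

```python
# Sketch: flag likely personal-data disclosures in a chat message.
import re

PATTERNS = {
    "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone":   re.compile(r"\b(?:\+?\d[\d\s.-]{7,}\d)\b"),
    "self_id": re.compile(r"\bmy (?:name|address|password) is\b", re.IGNORECASE),
}

def detect_disclosures(message: str) -> list[str]:
    """Return the categories of personal data found in the message."""
    return [label for label, rx in PATTERNS.items() if rx.search(message)]

msg = "Sure! My name is Lea, write me at lea@example.org or +41 79 555 01 23."
print(detect_disclosures(msg))  # -> ['email', 'phone', 'self_id']
```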