856 results for Privacy By Design, Data Protection Officer, Privacy Officer, trattamento, dati personali, PETs
Abstract:
Postgraduate programme in Law - FCHS
Abstract:
The miniaturization race in the hardware industry, aimed at continuously increasing transistor density on a die, no longer brings corresponding application performance improvements. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes critical. Two primary aspects introduce a sophisticated trade-off. On the one hand, a memory subsystem should provide a well-organized distributed data structure and guarantee the required data bandwidth. On the other hand, it should hide the heterogeneous hardware structure from the end user in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to cope with the problem of memory organization and data structure. Using the MORPHEUS heterogeneous platform as an example, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on methods to support high system performance, meet application requirements, and provide a user-friendly programmer interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which copes with its task by separating computation from communication, providing the reconfigurable engines with computation and configuration data, and unifying the heterogeneous computational devices through local storage buffers. It is distinguished from related solutions by its distributed data-flow organization, specifically engineered mechanisms for operating on data in local domains, a particular communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel technique to accelerate memory access was developed and implemented.
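To make the "separating computation from communication" idea concrete, here is a minimal sketch (my own illustration, not the MORPHEUS implementation) of double-buffered local storage: while a DMA-style transfer fills one local buffer over the interconnect, the compute engine works on the other, so neither side stalls.

```python
# Minimal sketch (not the MORPHEUS code): ping-pong local buffers decouple
# communication (DMA-like transfers) from computation (an accelerator kernel).
import threading
import queue

def dma_producer(tiles, free_buffers, filled_buffers):
    """Communication side: fills local buffers from main memory."""
    for tile in tiles:
        buf = free_buffers.get()          # wait for an empty local buffer
        buf[:] = tile                     # "DMA transfer" into local storage
        filled_buffers.put(buf)           # hand it to the compute engine
    filled_buffers.put(None)              # end-of-stream marker

def engine_consumer(free_buffers, filled_buffers, results):
    """Computation side: processes whichever buffer is ready."""
    while True:
        buf = filled_buffers.get()
        if buf is None:
            break
        results.append(sum(buf))          # stand-in for the accelerator kernel
        free_buffers.put(buf)             # return the buffer for the next transfer

tiles = [[i, i + 1, i + 2] for i in range(0, 12, 3)]
free_buffers, filled_buffers = queue.Queue(), queue.Queue()
for _ in range(2):                        # two buffers -> ping-pong operation
    free_buffers.put([0, 0, 0])

results = []
t1 = threading.Thread(target=dma_producer, args=(tiles, free_buffers, filled_buffers))
t2 = threading.Thread(target=engine_consumer, args=(free_buffers, filled_buffers, results))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)                            # [3, 12, 21, 30]
```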
Abstract:
This thesis aims to analyse some critical aspects of security in the cloud, in particular the problems related to privacy, from the terms of use to the security of personal data of varying sensitivity. The exponential growth of data stored in cloud storage systems (e.g. Dropbox, Amazon S3) makes the sensitivity of such data a far from trivial issue, also owing to unclear data-use policies, both in terms of handing data over to third-party companies and with regard to legal responsibilities. This thesis seeks to examine in depth the most worrying shortcomings of these services. Besides analysing the main weaknesses and critical points of cloud services, the goal of this work is to clarify the steps and infrastructures that some companies (e.g. Amazon) have implemented to move closer to the idea of 'safeness' in the cloud. Finally, the last objective is to identify criteria for evaluating and measuring the degree of trust that users can place in this context, distinguishing different criteria for different classes of users. The thesis is structured in four chapters. The first provides a taxonomy of the problems present in cloud systems and also presents some recent events in which these issues have surfaced. The second chapter discusses the 'safeness' strategies adopted by some companies in the cloud domain; it also presents some possible solutions from an architectural point of view and shows that the role of the user is of utmost importance. The third chapter focuses on the search for tools and evaluation methods that a user, or group of users, can apply to these systems. Finally, the fourth chapter contains some concluding remarks on the work carried out and on possible future developments of this thesis.
Abstract:
Power transformers are key components of the power grid and are also among those most exposed to a variety of power system transients. The failure of a large transformer can cause severe monetary losses to a utility, so adequate protection schemes are of great importance to avoid transformer damage and maximize the continuity of service. Computer modeling can be used as an efficient tool to improve the reliability of a transformer protective relay application. Unfortunately, transformer models presently available in commercial software lack completeness in the representation of several aspects such as internal winding faults, which are a common cause of transformer failure. It is also important to adequately represent the transformer at frequencies higher than the power frequency for a more accurate simulation of switching transients, since these are a well-known cause of unwanted tripping of protective relays. This work develops new capabilities for the Hybrid Transformer Model (XFMR) implemented in ATPDraw to allow the representation of internal winding faults and slow-front transients up to 10 kHz. The new model can be built from either of two sources of information: 1) test report data or 2) design data. When only test-report data is available, a higher-order leakage inductance matrix is created from standard measurements. If design information is available, a Finite Element Model is created to calculate the leakage parameters for the higher-order model. An analytical model is also implemented as an alternative to FEM modeling. Measurements on 15-kVA 240Δ/208Y V and 500-kVA 11430Y/235Y V distribution transformers were performed to validate the model. A transformer model valid for simulations at frequencies above the power frequency was developed by further dividing the windings into multiple sections and including a higher-order capacitance matrix. Frequency-scan laboratory measurements were used to benchmark the simulations. Finally, a stability analysis of the higher-order model was performed by analyzing the trapezoidal rule for numerical integration as used in ATP. Numerical damping was also added to suppress oscillations locally when discontinuities occurred in the solution. A maximum error magnitude of 7.84% was encountered in the simulated currents for different turn-to-ground and turn-to-turn faults. The FEM approach provided the most accurate means of determining the leakage parameters for the ATP model. The higher-order model was found to reproduce the short-circuit impedance acceptably up to about 10 kHz, and its behavior at the first anti-resonant frequency was better matched with the measurements.
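As a hedged illustration of the stability issue mentioned above (my own toy example, not ATP's implementation): the trapezoidal rule turns a current discontinuity across an inductance into a sustained numerical oscillation of the voltage, and blending in numerical damping makes that oscillation decay.

```python
# Hedged illustration: v = L di/dt discretised with
#   pure trapezoidal (a = 1):    v_n = (2L/dt)(i_n - i_{n-1}) - v_{n-1}
#   damped blend with backward
#   Euler (0 <= a < 1):          v_n = ((1+a)L/dt)(i_n - i_{n-1}) - a*v_{n-1}
# Values of L and dt are illustrative only.
L_H, dt = 0.1, 50e-6                       # inductance [H] and time step [s]

def inductor_voltage(currents, alpha):
    v, out = 0.0, []
    for i_prev, i_now in zip(currents, currents[1:]):
        v = (1 + alpha) * L_H / dt * (i_now - i_prev) - alpha * v
        out.append(v)
    return out

# current chopped from 10 A to 0 A in one step, then held at zero
i = [10.0, 0.0] + [0.0] * 8

print(inductor_voltage(i, alpha=1.0))      # pure trapezoidal: +/- 40 kV forever
print(inductor_voltage(i, alpha=0.8))      # damped: oscillation decays each step
```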
Abstract:
Recently, political voices have stressed the need to introduce a right to be forgotten as a new human right. Individuals should have the right to make potentially damaging information disappear after a certain time has elapsed. Such a new right, however, can come into conflict with the principle of free speech. Therefore, its scope needs to be evaluated in the light of appropriate data protection rules. In this respect, a more user-centered approach needs to be realized. "Delete" cannot be a value as such, but must be balanced within a new legal framework.
Abstract:
In light of the recent European Court of Justice ruling (ECJ C-131/12, Google Spain v Spanish Data Protection Agency), the "right to be forgotten" has once again gained worldwide media attention. Already in 2012, when the European Commission proposed a right to be forgotten, this proposal received broad public interest and was debated intensively. Under certain conditions, individuals should thereby be able to delete personal data concerning them. More recently – in light of the European Parliament's approval of the LIBE Committee's amendments on March 14, 2014 – the concept seems to be close to its final form. Although it remains, for the most part, unchanged from the previously circulated drafts, it has been re-labelled as a "right of erasure". This article argues that, despite its catchy terminology, the right to be forgotten can be understood as a generic term, bringing together existing legal provisions: the substantial right of oblivion and the rather procedural right to erasure derived from data protection. Hereinafter, the article presents an analysis of selected national legal frameworks and corresponding case law, accounting for data protection, privacy, and general tort law as well as defamation law. This comparative analysis grasps the practical challenges which the attempt to strengthen individual control and informational self-determination faces. Consequently, it is argued that narrowing the focus on the data protection law amendments neglects the elaborate balancing of conflicting interests in European legal tradition. It is shown that the attempt to implement oblivion, erasure and forgetting in the digital age is a complex undertaking.
Abstract:
Background: Heart valve diseases are common, with an estimated prevalence of 2.5% in the Western world, and the number is rising due to an ageing population. Once symptomatic, heart valve diseases are potentially lethal and heavily influence daily living and quality of life. Surgical treatment, either valve replacement or repair, remains the treatment of choice. However, after surgery the transition to daily living may become a physical, mental and social challenge. We hypothesise that a comprehensive cardiac rehabilitation programme can improve physical capacity and self-assessed mental health and reduce hospitalisation and healthcare costs after heart valve surgery. Methods: A randomised clinical trial, CopenHeartVR, aims to investigate whether cardiac rehabilitation in addition to usual care is superior to treatment as usual after heart valve surgery. The trial will randomly allocate 210 patients 1:1 to intervention or control, using central randomisation and blinded outcome assessment and statistical analyses. The intervention consists of 12 weeks of physical exercise and a psycho-educational intervention comprising five consultations. The primary outcome is peak oxygen uptake (VO2 peak) measured by cardiopulmonary exercise testing with ventilatory gas analysis. The secondary outcome is self-assessed mental health measured by the standardised questionnaire Short Form 36. Long-term healthcare utilisation and mortality, as well as biochemistry, echocardiography and cost-benefit, will also be assessed. A mixed-methods design is used to evaluate qualitative and quantitative findings, encompassing a survey-based study before the trial and a qualitative pre- and post-intervention study. Discussion: The study is approved by the local regional Research Ethics Committee (H-1-2011-157) and the Danish Data Protection Agency (j.nr. 2007-58-0015).
Abstract:
A life table methodology was developed which estimates the expected remaining Army service time and the expected remaining Army sick time by years of service for the United States Army population. A measure of illness impact was defined as the ratio of expected remaining Army sick time to expected remaining Army service time. The variances of the resulting estimators were developed on the basis of current data. The theory of partial and complete competing risks was considered for each type of decrement (death, administrative separation, and medical separation) and for the causes of sick time. The methodology was applied to worldwide U.S. Army data for calendar year 1978. A total of 669,493 enlisted personnel and 97,704 officers were reported on active duty as of 30 September 1978. During calendar year 1978, the Army Medical Department reported 114,647 inpatient discharges and 1,767,146 sick days. Although the methodology is completely general with respect to the definition of sick time, only sick time associated with an inpatient episode was considered in this study. Since the temporal measure was years of Army service, an age-adjusting process was applied to the life tables for comparative purposes. Analyses were conducted by rank (enlisted and officer), race and sex, and were based on the ratio of expected remaining Army sick time to expected remaining Army service time. Seventeen major diagnostic groups, classified by the Eighth Revision, International Classification of Diseases, Adapted for Use in the United States, were ranked according to their cumulative (across years of service) contribution to expected remaining sick time. The study results indicated that enlisted personnel tend to have more expected hospital-associated sick time relative to their expected Army service time than officers. Non-white officers generally have more expected sick time relative to their expected Army service time than white officers; this racial differential was not supported within the enlisted population. Females tend to have more expected sick time relative to their expected Army service time than males. This tendency remained after diagnostic groups 580-629 (Genitourinary System) and 630-678 (Pregnancy and Childbirth) were removed. Problems associated with the circulatory system, digestive system and musculoskeletal system were among the three leading causes of cumulative sick time across years of service.
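The study's central quantity is the ratio of two life-table expectations; a hedged restatement in conventional life-table notation (the symbols below are mine, not the author's) may make the construction clearer.

```latex
% Hedged restatement in conventional life-table notation; symbols are
% illustrative, not the author's own.
% For a soldier with y completed years of service, let
%   e_s(y) = expected remaining Army service time,
%   e_k(y) = expected remaining Army sick time.
% The illness-impact measure is their ratio,
\[
  R(y) \;=\; \frac{e_k(y)}{e_s(y)},
  \qquad
  e_s(y) \;=\; \frac{\sum_{t \ge y} L_t}{l_y},
\]
% where l_y is the number still in service after y years of service, L_t is
% the person-years of service lived in interval t, and e_k(y) is obtained
% analogously from the sick days accumulated in each interval.
```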
Abstract:
With most clinical trials, missing data presents a statistical problem in evaluating a treatment's efficacy. There are many methods commonly used to handle missing data; however, these methods leave room for bias to enter the study. This thesis was a secondary analysis of data taken from TIME, a phase 2 randomized clinical trial conducted to evaluate the safety and effect of the administration timing of bone marrow mononuclear cells (BMMNC) for subjects with acute myocardial infarction (AMI). We evaluated the effect of missing data by comparing the variance inflation factor (VIF) of the effect of therapy between all subjects and only subjects with complete data. Through the general linear model, an unbiased solution was derived for the VIF of the treatment's efficacy using the weighted least squares method to incorporate missing data. Two groups were identified from the TIME data: 1) all subjects and 2) subjects with complete data (baseline and follow-up measurements). After the general solution was found for the VIF, it was migrated to Excel 2010 to evaluate the data from TIME. The resulting numerical values from the two groups were compared to assess the effect of missing data. The VIF values from the TIME study were considerably lower in the group with missing data. By design, we varied the correlation factor in order to evaluate the VIFs of both groups. As the correlation factor increased, the VIF values increased at a faster rate in the group with only complete data. Furthermore, while varying the correlation factor, the number of subjects with missing data was also varied to see how missing data affects the VIF. When the number of subjects with only baseline data was increased, we saw a significant rate increase in VIF values in the group with only complete data, while the group with missing data saw a steady and consistent increase in the VIF. The same was seen when we varied the number of subjects with follow-up-only data. This essentially showed that the VIFs increase steadily when missing data are not ignored; when missing data are ignored, as with our comparison group, the VIF values increase sharply as correlation increases.
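A hedged sketch of the underlying variance argument (my own construction, not the thesis code): under weighted least squares the covariance of the coefficient estimates is proportional to (X'WX)^-1, so retaining partially observed subjects with a reduced weight can only shrink the variance of the treatment-effect estimate relative to discarding them. The 0.5 weight and the 30% missingness rate below are purely illustrative.

```python
# Hedged sketch: compare the (unscaled) variance of the treatment coefficient
# when incomplete subjects are dropped versus kept with a reduced weight.
import numpy as np

rng = np.random.default_rng(1)
n = 120
X = np.column_stack([np.ones(n),
                     rng.integers(0, 2, n),      # treatment arm (1:1 allocation)
                     rng.normal(size=n)])        # baseline covariate
complete = rng.random(n) > 0.3                   # ~30% lack follow-up data

def var_treatment(weights):
    """Unscaled variance of the treatment coefficient: [(X'WX)^-1]_11."""
    W = np.diag(weights)
    return np.linalg.inv(X.T @ W @ X)[1, 1]

w_complete_only = complete.astype(float)         # complete-case analysis
w_all = np.where(complete, 1.0, 0.5)             # keep incomplete rows, down-weighted

# keeping partial information never increases the variance of the estimate
print(var_treatment(w_complete_only) > var_treatment(w_all))   # True
```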
Abstract:
A previous study in our lab has shown that the transforming neu oncogene (neu*) was able to initiate signals that lead to repression of neu promoter activity. Further deletion mapping of the neu promoter identified that the GTG element (GGTGGGGGGG), located between −243 and −234 relative to the translation initiation codon, mediates this repression effect. I have characterized the four major protein complexes that interact with this GTG element. In situ UV-crosslinking indicated that each complex contains proteins of different molecular weights. The slowest migrating complex (S) contains Sp1 or Sp1-related proteins, as indicated by the data that both have similar molecular weights, behave similarly in two affinity chromatographies, and are antigenically related in gel shift analysis. Methylation protection and interference experiments demonstrated that these complexes bind to overlapping regions of the GTG element. Mutations within the GTG element that either abrogate or enhance complex S binding conferred lower activity on the neu promoter, indicating that positive factors other than Sp1 family proteins also contribute to neu promoter activity. A mutated version (mutant 4) of the GTG element, which binds mainly the fastest migrating complex containing a very small protein of 26 kDa, can repress transcription when fused to a heterologous promoter. Further deletion and mutation studies suggested that this GTG mutant and its binding protein(s) may cooperate with some DNA element within a heterologous promoter to lock the basal transcription machinery; such a repressor might also repress neu transcription by interfering with the DNA binding of other transactivators. Our results suggest that both positive and negative trans-acting factors converge their binding sites on the GTG element and confer combinatorial control on neu gene expression.
Abstract:
In 2005, the International Ocean Colour Coordinating Group (IOCCG) convened a working group to examine the state of the art in ocean colour data merging, which showed that the research techniques had matured sufficiently for creating long multi-sensor datasets (IOCCG, 2007). As a result, ESA initiated and funded the DUE GlobColour project (http://www.globcolour.info/) to develop a satellite-based ocean colour data set to support global carbon-cycle research. It aims to satisfy the scientific requirement for a long (10+ year) time series of consistently calibrated global ocean colour information with the best possible spatial coverage. This has been achieved by merging data from the three most capable sensors: SeaWiFS on GeoEye's Orbview-2 mission, MODIS on NASA's Aqua mission and MERIS on ESA's ENVISAT mission. In setting up the GlobColour project, three user organisations were invited to help. Their roles are to specify the detailed user requirements, act as a channel to the broader end-user community, and provide feedback and assessment of the results. The International Ocean Carbon Coordination Project (IOCCP), based at UNESCO in Paris, provides direct access to the carbon-cycle modelling community's requirements and to the modellers themselves who will use the final products. The UK Met Office's National Centre for Ocean Forecasting (NCOF) in Exeter, UK, provides an understanding of the requirements of oceanography users, and the IOCCG brings its understanding of global user needs and valuable advice on best practice within the ocean colour science community. The three-year project kicked off in November 2005 under the leadership of ACRI-ST (France). The first year was a feasibility demonstration phase that was successfully concluded at a user consultation workshop organised by the Laboratoire d'Océanographie de Villefranche, France, in December 2006. Error statistics and inter-sensor biases were quantified by comparison with in situ measurements from moored optical buoys and ship-based campaigns, and used as an input to the merging. The second year was dedicated to the production of the time series. In total, more than 25 Tb of input (level 2) data have been ingested and 14 Tb of intermediate and output products created, with 4 Tb of data distributed to the user community. Quality control (QC) is provided through the Diagnostic Data Sets (DDS), which are extracted sub-areas covering locations of in situ data collection or interesting oceanographic phenomena. This Full Product Set (FPS) covers global daily merged ocean colour products for the period 1997-2006 and is freely available to the worldwide science community at http://www.globcolour.info/data_access_full_prod_set.html. The GlobColour service distributes global daily, 8-day and monthly data sets at 4.6 km resolution for chlorophyll-a concentration, normalised water-leaving radiances (412, 443, 490, 510, 531, 555, 620, 670, 681 and 709 nm), diffuse attenuation coefficient, coloured dissolved and detrital organic materials, total suspended matter or particulate backscattering coefficient, turbidity index, cloud fraction and quality indicators. Error statistics from the initial sensor characterisation are used as an input to the merging methods and propagate through the merging process to provide error estimates for the output merged products.
These error estimates are a key component of GlobColour as they are invaluable to the users; particularly the modellers who need them in order to assimilate the ocean colour data into ocean simulations. An intensive phase of validation has been undertaken to assess the quality of the data set. In addition, inter-comparisons between the different merged datasets will help in further refining the techniques used. Both the final products and the quality assessment were presented at a second user consultation in Oslo on 20-22 November 2007 organised by the Norwegian Institute for Water Research (NIVA); presentations are available on the GlobColour WWW site. On request of the ESA Technical Officer for the GlobColour project, the FPS data set was mirrored in the PANGAEA data library.
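As a hedged illustration of how characterised error statistics can propagate into a merged product: a generic inverse-variance weighting scheme is shown below, not the actual GlobColour merging algorithms, and the sensor values are invented.

```python
# Hedged sketch: merge per-sensor retrievals for one pixel/day with an
# inverse-variance weighted mean and propagate the characterised errors.
import math

def merge(estimates):
    """estimates: list of (value, standard_error) pairs from different sensors."""
    weights = [1.0 / se**2 for _, se in estimates]
    merged = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    merged_se = math.sqrt(1.0 / sum(weights))          # propagated uncertainty
    return merged, merged_se

# chlorophyll-a retrievals (mg m^-3) over the same pixel; values are invented
pixel = [(0.21, 0.05),   # "SeaWiFS"
         (0.25, 0.04),   # "MODIS"
         (0.19, 0.06)]   # "MERIS"
print(merge(pixel))      # merged value with a smaller propagated error
```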
Abstract:
Publishing Linked Data is a process that involves several design decisions and technologies. Although some initial guidelines have already been provided by Linked Data publishers, these are still far from covering all the steps that are necessary (from data source selection to publication) or giving enough details about all these steps, technologies, intermediate products, etc. Furthermore, given the variety of data sources from which Linked Data can be generated, we believe that it is possible to have a single and unified method for publishing Linked Data, but we should rely on different techniques, technologies and tools for particular datasets of a given domain. In this paper we present a general method for publishing Linked Data and the application of the method to cover different sources from different domains.
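As a hedged illustration of the final publication step of such a method (RDF generation and serialisation with the rdflib library; the URIs, vocabulary choice and input record are invented for the example):

```python
# Minimal sketch of turning one record from a source dataset into RDF ready
# for publication as Linked Data.  Requires `pip install rdflib`.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/resource/")      # illustrative base URI

row = {"id": "author-42", "name": "Ada Lovelace"}    # one record from a source dataset

g = Graph()
g.bind("foaf", FOAF)
subject = EX[row["id"]]
g.add((subject, RDF.type, FOAF.Person))
g.add((subject, FOAF.name, Literal(row["name"])))

print(g.serialize(format="turtle"))                  # Turtle document to publish
```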
Abstract:
Abstract Due to recent scientific and technological advances in information systems, it is now possible to perform almost every application on a mobile device. The need to make such devices more intelligent opens an opportunity to design data mining algorithms that are able to execute autonomously on local devices to provide the device with knowledge. The problem behind autonomous mining is the proper configuration of the algorithm to produce the most appropriate results. Contextual information, together with resource information about the device, has a strong impact both on the feasibility of a particular execution and on the production of the proper patterns. On the other hand, the performance of the algorithm, expressed in terms of efficacy and efficiency, depends highly on the features of the dataset to be analyzed together with the values of the parameters of a particular implementation of an algorithm. However, few existing approaches deal with the autonomous configuration of data mining algorithms, and in any case they do not deal with contextual or resource information. Both issues are of particular significance for social network applications. In fact, the widespread use of social networks, and consequently the amount of information shared, have made modeling context in social applications a priority. Resource consumption also plays a crucial role in such platforms, as users access social networks mainly from their mobile devices. This PhD thesis addresses the aforementioned open issues, focusing on i) analyzing the behavior of algorithms, ii) mapping contextual and resource information to find the most appropriate configuration, and iii) applying the model to the case of a social recommender. Four main contributions are presented: - The EE-Model: able to predict the behavior of a data mining algorithm in terms of resources consumed and the accuracy of the mining model it will obtain. - The SC-Mapper: maps a situation, defined by the context and resource state, to a data mining configuration. - SOMAR: a social activity (event and informal ongoings) recommender for mobile devices. - D-SOMAR: an evolution of SOMAR which incorporates the configurator in order to provide updated recommendations. Finally, the experimental validation of the proposed contributions using synthetic and real datasets allows us to achieve the objectives and answer the research questions proposed for this dissertation.
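A hedged sketch of the situation-to-configuration idea (in the spirit of the SC-Mapper, but not its actual implementation; the thresholds, algorithm choice and parameter names are invented):

```python
# Hedged sketch: pick data mining parameters from the device's resource state
# and usage context.  All thresholds and settings are illustrative only.
def map_situation(battery_pct, free_mem_mb, on_wifi):
    """Return a mining configuration suited to the current situation."""
    if battery_pct < 20 or free_mem_mb < 64:
        # scarce resources: coarse, cheap configuration on a small sample
        return {"algorithm": "k-means", "k": 5, "max_iter": 20, "sample_frac": 0.1}
    if not on_wifi:
        # moderate resources / metered connection: intermediate configuration
        return {"algorithm": "k-means", "k": 10, "max_iter": 50, "sample_frac": 0.5}
    # plugged-in, well-connected device: full-accuracy configuration
    return {"algorithm": "k-means", "k": 20, "max_iter": 200, "sample_frac": 1.0}

print(map_situation(battery_pct=15, free_mem_mb=512, on_wifi=True))
print(map_situation(battery_pct=80, free_mem_mb=512, on_wifi=True))
```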
Abstract:
Recently, the cloud computing paradigm has received much interest from both industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, a company may wish to keep its data and services on its own premises, or may have to comply with data protection laws. These circumstances make private cloud infrastructures highly desirable, whether to complement public ones or to replace them entirely. Unfortunately, the lack of standards has prevented management solutions for private infrastructures from developing adequately, and the multitude of available options has instilled in customers the fear of depending on one particular technology (technology lock-in). One cause of this problem is the misalignment between academic research and commercial products: the former is focused on studying idealized scenarios with no correspondence to the real world, while the latter consists of solutions developed without considering how they will fit with the most common standards or without making their results public. To address this problem, I propose a modular management system for private cloud infrastructures focused on dealing with applications rather than only with hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with the most common standards. This model divides the environment into two views, which serve to separate what should concern each actor involved from the rest of the information, while at the same time relating the physical environment to the virtual machines deployed on top of it. In this model, cloud applications are divided into three generic types (Services, Big Data Jobs and Instance Reservations), so that the management system can exploit the characteristics of each type. The information model is complemented by a set of atomic, reversible and independent management actions, which determine the operations that can be carried out on the environment and which are used to enable scalability in the environment. I also describe a management engine in charge of resource placement, based on the state of the environment and using the aforementioned set of actions. It is divided into two levels: the Application Managers layer, which deals only with applications, and the Infrastructure Manager layer, responsible for the physical resources. This management engine follows a life cycle with two phases, in order to better model the behaviour of a real infrastructure. The resource placement problem is tackled during one of the phases (consolidation) by an integer programming solver, and during the other (the online phase) by a purpose-built heuristic. Several tests have shown that this combined approach is superior to other strategies. Finally, the management system is coupled to monitoring and actuator architectures: the former is in charge of collecting information from the environment, while the latter is modular in design and capable of connecting to several technologies and offering several access modes.
ABSTRACT The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even fully substitute public offerings. Unfortunately, a lack of standardization has prevented private infrastructure management solutions from being developed to a mature level, and a myriad of different options has induced the fear of lock-in in customers. One of the causes of this problem is the misalignment between academic research and industry offerings, with the former focusing on studying idealized scenarios dissimilar from real-world situations, and the latter developing solutions without taking care of how they fit with common standards, or even not disseminating their results. With the aim of solving this problem, I propose a modular management system for private cloud infrastructures that is focused on the applications instead of just the hardware resources. This management system follows the autonomic system paradigm and is designed around a simple information model developed to be compatible with common standards. This model splits the environment into two views that serve to separate the concerns of the stakeholders while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can take advantage of each type's features. The information model is paired with a set of atomic, reversible and independent management actions, which determine the operations that can be performed over the environment and are used to realize the cloud environment's scalability. From the environment's state and using the aforementioned set of actions, I also describe a management engine tasked with resource placement. It is divided into two tiers: the Application Managers layer, concerned only with applications, and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure. The placement problem is tackled during one phase (consolidation) with an integer programming solver, and during the other (online) with a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures, the former able to collect the necessary information from the environment, and the latter modular in design and capable of interfacing with several technologies and offering several access interfaces.
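As a hedged illustration of the online placement phase only (the consolidation phase would call an integer-programming solver instead; the host names, capacities and the first-fit rule below are my own simplification, not the dissertation's heuristic):

```python
# Hedged sketch: an online first-fit heuristic that places a new virtual
# machine on the first host with enough spare CPU and memory.
def first_fit(hosts, vm):
    """hosts: {name: {"cpu": free_cores, "mem": free_gb}}; vm: same keys."""
    for name, free in hosts.items():
        if free["cpu"] >= vm["cpu"] and free["mem"] >= vm["mem"]:
            free["cpu"] -= vm["cpu"]        # reserve the resources on that host
            free["mem"] -= vm["mem"]
            return name                     # placement decision
    return None                             # no capacity: queue or scale out

hosts = {"host-1": {"cpu": 2, "mem": 4}, "host-2": {"cpu": 8, "mem": 32}}
print(first_fit(hosts, {"cpu": 4, "mem": 8}))    # -> host-2
print(first_fit(hosts, {"cpu": 16, "mem": 64}))  # -> None
```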
Abstract:
Funding The International Primary Care Respiratory Group (IPCRG) provided funding for this research project as an UNLOCK group study for which the funding was obtained through an unrestricted grant by Novartis AG, Basel, Switzerland. The latter funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. Database access for the OPCRD was provided by the Respiratory Effectiveness Group (REG) and Research in Real Life; the OPCRD statistical analysis was funded by REG. The Bocholtz Study was funded by PICASSO for COPD, an initiative of Boehringer Ingelheim, Pfizer and the Caphri Research Institute, Maastricht University, The Netherlands.