866 results for Compositional data analysis-roots in geosciences
Abstract:
Encryption of personal data is widely regarded as a privacy-preserving technology that could play a key role in bringing innovative IT into compliance with the European data protection law framework. In this paper, we therefore examine the relevant provisions of the new EU General Data Protection Regulation regarding encryption, such as those on anonymisation and pseudonymisation, and assess whether encryption can serve as an anonymisation technique, which would render the GDPR inapplicable. However, the GDPR's provisions on the Regulation's material scope still leave room for legal uncertainty when determining whether a data subject is identifiable. We therefore assess, inter alia, the Opinion of the Advocate General of the European Court of Justice (ECJ) in a preliminary-ruling procedure on whether a dynamic IP address can be considered personal data, which may settle the dispute over whether an absolute or a relative approach should be used to assess the identifiability of data subjects. Furthermore, we outline the question of whether the anonymisation process itself constitutes further processing of personal data that requires a legal basis under the GDPR. Finally, we give an overview of relevant encryption techniques and examine their impact on the GDPR's material scope.
Abstract:
As IT infrastructures grow in complexity and Internet of Things (IoT) scenarios become pervasive, a need emerges for new computational models based on autonomous entities capable of accomplishing high-level goals by interacting with one another, supported by infrastructures such as Fog Computing, for proximity to the data sources, and Cloud Computing, for complex back-end analytics services able to deliver results to millions of users. These new scenarios call for rethinking how software is designed and developed from an agile perspective. The activities of developer teams (Dev) should be tightly coupled with those of the teams supporting the Cloud (Ops), following new methodologies now known as DevOps. However, given the lack of adequate abstractions at the programming-language level, IoT developers are often led to follow bottom-up development approaches that frequently prove inadequate for tackling the complexity of applications in this field and the heterogeneity of the software components that make them up. Since the monolithic applications of the past appear hard to scale and manage in a multi-tenant Cloud environment, many consider it necessary to adopt a new architectural style, in which an application is viewed as a composition of microservices, each dedicated to a specific application capability and each under the responsibility of a small developer team, from problem analysis through deployment and management.
Since no single, shared definition has yet been reached for microservices and for other concepts emerging from the IoT and the Cloud, nor have specialized languages for this field been defined, defining custom metamodels coupled with automatic generation of the glue code toward the infrastructures could help a development team raise the level of abstraction, encapsulating the implementation details in a company software factory. Thanks to software production systems based on Model Driven Software Development (MDSD), the currently missing top-down approach can be recovered, allowing attention to focus on the business logic of the applications. The thesis presents an example of this possible approach, starting from the idea that an IoT application is first and foremost a distributed software system in which the interaction among active components (modelled as actors) plays a fundamental role.
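As an illustration of the actor-based view of distributed components described above, here is a minimal, self-contained actor in Python: a mailbox drained by a dedicated thread. All names (`Actor`, `send`, the sensor example) are illustrative sketches, not taken from the thesis or from any actor framework.

```python
import queue
import threading

# Minimal actor: processes messages from its mailbox on a dedicated thread.
class Actor:
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self.results = []
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, message):
        # Asynchronous: enqueue and return immediately.
        self.mailbox.put(message)

    def stop(self):
        # Sentinel value ends the message loop, then wait for the thread.
        self.mailbox.put(None)
        self.thread.join()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:
                break
            self.results.append(self.handler(msg))

# A hypothetical sensor-reading actor converting raw readings to Celsius.
sensor = Actor(lambda raw: raw / 10.0)
for reading in [215, 220, 218]:
    sensor.send(reading)
sensor.stop()
print(sensor.results)  # [21.5, 22.0, 21.8]
```

Each actor owns its state and communicates only by messages, which is what makes the style amenable to packaging components as independently deployable microservices.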
Abstract:
The object of this report is to present the data and conclusions drawn from the analysis of the origin and destination information. Comments on the advisability and correctness of the approach used by Iowa are encouraged.
Abstract:
Magnetically induced forces on the inertial masses on board LISA Pathfinder are expected to be one of the dominant contributions to the mission noise budget, accounting for up to 40%. The origin of this disturbance is the coupling of the residual magnetization and susceptibility of the test masses with the environmental magnetic field. In order to fully understand this important part of the noise model, a set of coils and magnetometers is integrated as part of the diagnostics subsystem. During operations, a sequence of magnetic excitations will be applied to precisely determine the coupling of the magnetic environment to the test mass displacement using the on-board magnetometers. Since no direct measurement of the magnetic field at the test mass position will be available, an extrapolation of the magnetic measurements to the test mass position will be carried out as part of the data analysis activities. In this paper we show the first results of the magnetic experiments during an end-to-end LISA Pathfinder simulation, and we describe the methods under development to map the magnetic field on board.
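The extrapolation step described above can be illustrated with a simple least-squares sketch: fit a uniform-plus-gradient field model to a handful of magnetometer readings and evaluate it at the test mass position. The magnetometer positions and field values below are invented for illustration; this is not the LISA Pathfinder flight algorithm.

```python
import numpy as np

def fit_linear_field(positions, values):
    # Model one field component as B(r) = b0 + g . r and solve
    # [1, x, y, z] @ [b0, gx, gy, gz] = B in the least-squares sense.
    A = np.hstack([np.ones((len(positions), 1)), positions])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs

def extrapolate(coeffs, point):
    return coeffs[0] + coeffs[1:] @ point

# Four hypothetical magnetometers around the test mass (metres), field in nT.
pos = np.array([[0.3, 0.0, 0.0], [-0.3, 0.0, 0.0],
                [0.0, 0.3, 0.0], [0.0, 0.0, 0.3]])
true_b0, true_g = 100.0, np.array([5.0, -2.0, 1.0])
vals = true_b0 + pos @ true_g  # synthetic readings from a known field

c = fit_linear_field(pos, vals)
b_tm = extrapolate(c, np.array([0.0, 0.0, 0.0]))  # test mass at origin
print(round(b_tm, 6))  # recovers 100.0 at the test-mass position
```

A first-order model like this is only adequate when the field varies slowly over the sensor-to-test-mass baseline; higher-order (e.g. multipole) models are the natural refinement.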
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements; numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter-inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate into the 30,171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program sends and receives information between the model and the DA procedure using files.
The advantage of this method is that the changes needed in the model code are minimal, only a few lines which handle input and output. Apart from being simple to couple, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all processes have finished before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009; the effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, the two could not be well matched. The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem, and with DA this will help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance.
The successful implementation of the non-intrusive VEnKF, together with the ensemble-size limit on performance, leads to the emerging area of Reduced Order Modeling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is combined with the non-intrusive DA approach, the result may be a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
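For readers unfamiliar with ensemble assimilation, the analysis step can be sketched with a textbook stochastic EnKF update (not the VEnKF scheme itself, which re-estimates the covariance variationally and resamples the ensemble): each member is nudged toward a perturbed observation using a Kalman gain estimated from ensemble statistics. The state size, observation operator, and numbers below are synthetic.

```python
import numpy as np

def enkf_update(ensemble, H, y, obs_var, rng):
    # ensemble: (n_state, n_ens); H: (n_obs, n_state); y: (n_obs,)
    n_ens = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)      # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)                # obs-space anomalies
    Pxy = X @ HXp.T / (n_ens - 1)                            # cross covariance
    Pyy = HXp @ HXp.T / (n_ens - 1) + obs_var * np.eye(len(y))
    K = Pxy @ np.linalg.inv(Pyy)                             # Kalman gain
    # Perturbed observations keep the analysis spread statistically consistent.
    y_pert = y[:, None] + rng.normal(0.0, np.sqrt(obs_var), (len(y), n_ens))
    return ensemble + K @ (y_pert - HX)

rng = np.random.default_rng(0)
ens = rng.normal(0.0, 2.0, size=(3, 200))   # 3 state variables, 200 members
H = np.array([[1.0, 0.0, 0.0]])             # observe the first component only
y = np.array([1.5])
updated = enkf_update(ens, H, y, obs_var=0.1, rng=rng)
print(round(updated[0].mean(), 2))  # ensemble mean pulled toward the observation
```

The inbreeding problem mentioned above appears when the anomalies `X` collapse; VEnKF's resampling at every measurement time is one remedy, covariance inflation is another.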
Abstract:
Part 15: Performance Management Frameworks
Abstract:
Active regeneration experiments were carried out on a production 2007 Cummins 8.9L ISL engine and its associated DOC and CPF aftertreatment system. The effects of SME biodiesel blends were investigated in order to determine the PM oxidation kinetics associated with active regeneration and the effect of biodiesel on them. The experimental data from this study will also be used to calibrate the MTU-1D CPF model: accurately predicting the PM mass retained in the CPF and its oxidation characteristics will provide the basis for ECU computations that minimize the fuel penalty associated with active regeneration. An active regeneration test procedure was developed based on previous experimentation at MTU. During each experiment, the PM mass in the CPF was determined by weighing the filter at various phases; in addition, DOC and CPF pressure drop, particle size distribution, gaseous emissions, temperature, and PM concentration data were collected and recorded throughout. The experiments covered a range of CPF inlet temperatures using ULSD, B10, and B20 blends of biodiesel. The majority of the tests were performed at a CPF PM loading of 2.2 g/L with in-cylinder dosing, although a 4.1 g/L loading and a post-turbo dosing injector were also used. The PM oxidation characteristics at the different test conditions were studied in order to determine the effects of biodiesel on PM oxidation during active regeneration. A PM reaction rate calculation method was developed to determine the global activation energy and the corresponding pre-exponential factor for all test fuels. The changing sum of the total flow resistance of the wall, cake, and channels was also determined as part of the data analysis, in order to check the integrity of the data and to correct input data to be consistent with the expected trends in resistance based on the engine conditions used in the test procedure.
It was determined that increasing the percent biodiesel content in the test fuel tends to increase the PM reaction rate and the regeneration efficiency of fuel dosing, i.e., at a constant CPF inlet temperature, B20 test fuel resulted in the highest PM reaction rate and regeneration efficiency of fuel dosing. Increasing the CPF inlet temperature also increases PM reaction rate and regeneration efficiency of fuel dosing. Performing active regeneration with B20 as opposed to ULSD allows for a lower CPF temperature to be used to reach the same level of regeneration efficiency, or it allows for a shorter regeneration time at a constant CPF temperature, resulting in decreased fuel consumption for the engine during active regeneration in either scenario.
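A global activation energy and pre-exponential factor of the kind described above are conventionally extracted with an Arrhenius fit: k = A·exp(-Ea/(R·T)) linearizes to ln k = ln A - (Ea/R)·(1/T), so both parameters follow from a straight-line regression. The temperatures and rates below are synthetic, not the MTU test data.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def fit_arrhenius(T, k):
    # Linear fit of ln(k) against 1/T: slope = -Ea/R, intercept = ln(A).
    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    Ea = -slope * R          # global activation energy, J/mol
    A = np.exp(intercept)    # pre-exponential factor
    return Ea, A

# Synthetic rates generated from known parameters over a 450-600 C range.
T = np.array([723.0, 773.0, 823.0, 873.0])   # K
Ea_true, A_true = 120e3, 5.0e6
k = A_true * np.exp(-Ea_true / (R * T))

Ea, A = fit_arrhenius(T, k)
print(round(Ea / 1000.0, 1))  # recovers 120.0 kJ/mol
```

With real regeneration data the rates are noisy and the fit quality (and the assumption of a single global mechanism) should be checked from the linearity of the ln(k) versus 1/T plot.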
Abstract:
Purpose: To evaluate the psychometric properties of Quinn’s leadership questionnaire (CVF questionnaire; 1988) in Portuguese health services. Design: Cross-sectional study, using Quinn’s leadership questionnaire, administered to registered nurses and physicians in Portuguese health care services (N = 687). Method: Self-administered survey applied to two samples. In the first (a convenience sample; N = 249 Portuguese health professionals), exploratory factor and reliability analyses of the CVF questionnaire were performed. In the second sample (stratified; N = 50 surgical units of 33 Portuguese hospitals), confirmatory factor analysis was performed using LISREL 8.80. Findings: The first sample supported an eight-factor solution accounting for 65.46% of the variance, in an interpretable factorial structure (loadings > .50), with Cronbach’s α greater than .79. This factorial structure, replicated with the second sample, showed reasonable fit for each of the 8 leadership roles, the quadrants, and the global model. The models generally evidenced nomological validity, with scores between good and acceptable (.235 < χ²/df < 2.055 and .00 < RMSEA < .077). Conclusions: Quinn’s leadership questionnaire presented good reliability and validity for the eight leadership roles, showing it to be suitable for use in the hospital health care context. Keywords: Leadership; Quinn’s CVF questionnaire; health services; Quinn’s competing values.
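The reliability figure reported above, Cronbach’s α, can be computed directly from an item-response matrix as α = k/(k-1) · (1 - Σ item variances / variance of total score). The toy response matrix below is illustrative, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scores.
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Five hypothetical respondents answering three Likert-type items.
responses = np.array([[4, 5, 4],
                      [3, 3, 3],
                      [5, 5, 4],
                      [2, 2, 3],
                      [4, 4, 5]], dtype=float)
alpha = cronbach_alpha(responses)
print(round(alpha, 2))  # high internal consistency for this toy scale
```

Values above roughly .70 are conventionally read as acceptable internal consistency, which is why the study’s α > .79 supports the questionnaire’s reliability.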
Abstract:
Online learning algorithms have recently risen to prominence due to their strong theoretical guarantees and an increasing number of practical applications for large-scale data analysis problems. In this paper, we analyze a class of online learning algorithms based on fixed potentials and nonlinearized losses, which yields algorithms with implicit update rules. We show how to efficiently compute these updates, and we prove regret bounds for the algorithms. We apply our formulation to several special cases where our approach has benefits over existing online learning methods. In particular, we provide improved algorithms and bounds for the online metric learning problem, and show improved robustness for online linear prediction problems. Results over a variety of data sets demonstrate the advantages of our framework.
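A well-known special case of an implicit update is online least squares, where evaluating the gradient at the new iterate (rather than the current one) admits a closed form and remains stable under large step sizes. The sketch below is a generic illustration of implicit updates, not the specific algorithms of this paper.

```python
import numpy as np

def implicit_step(w, x, y, eta):
    # Implicit update for squared loss:
    #   w_new = argmin_w 0.5*||w - w_old||^2 + 0.5*eta*(x.w - y)^2
    #         = w_old - eta*(x.w_old - y) / (1 + eta*||x||^2) * x
    # The 1/(1 + eta*||x||^2) factor is what the implicit rule buys:
    # the effective step shrinks automatically, so eta can be large.
    residual = x @ w - y
    return w - eta * residual / (1.0 + eta * (x @ x)) * x

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
w = np.zeros(2)
for _ in range(500):
    x = rng.normal(size=2)
    y = x @ w_true                      # noiseless stream of examples
    w = implicit_step(w, x, y, eta=10.0)  # a huge step size stays stable
print(np.round(w, 2))  # converges to approximately [2.0, -1.0]
```

An explicit gradient step with the same eta would diverge here, which illustrates the robustness claim for implicit updates.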
Abstract:
In defining the contemporary role of the specialist nurse it is necessary to challenge the concept of nursing as merely a combination of skills and knowledge. Nursing must be demonstrated and defined in the context of client care and include the broader notions of professional development and competence. This qualitative study sought to identify the competency standards for nurse specialists in critical care and to articulate the differences between entry-to-practice standards and the advanced practice of specialist nurses. Over 800 hours of specialist critical care nursing practice were observed, and the observations were grouped into 'domains', or major themes of specialist practice, using a constant-comparison qualitative technique. These domains were further refined to describe the attributes of registered nurses which resulted in effective and/or superior performance (competency standards) and to provide examples of performance (performance criteria) which met the defined standard. Constant comparison of the emerging domains, competency standards and performance criteria against observations of specialist critical care practice ensured the results provided a true reflection of the specialist nursing role. Data analysis resulted in 20 competency standards grouped into six domains: professional practice, reflective practice, enabling, clinical problem solving, teamwork, and leadership. Each of these domains comprises between two and seven competency standards. Each standard is further divided into component parts or 'elements', and the elements are illustrated with performance criteria. The competency standards are currently being used in several Australian critical care educational programmes and are the foundation for an emerging critical care credentialling process. They have been viewed with interest by a variety of non-critical-care specialty groups and may form a common precursor from which further specialist nursing practice assessment will evolve.