956 results for "Production engineering Data processing"


Relevance:

100.00%

Publisher:

Abstract:

Applying location-focused data protection law in the context of a location-agnostic cloud computing framework is fraught with difficulty. While the Proposed EU Data Protection Regulation introduces many changes to the current data protection framework, the complexities of data processing in the cloud involve layers and intermediary actors that have not been properly addressed, leaving gaps in the regulation when it is analyzed in cloud scenarios. This paper gives a brief overview of the provisions of the regulation that will affect cloud transactions and addresses the missing links. It is hoped that these loopholes will be reconsidered before the final version of the law is passed, in order to avoid unintended consequences.

Relevance:

100.00%

Publisher:

Abstract:

In cattle, at least 39 variants of the 4 casein proteins (α(S1)-, β-, α(S2)- and κ-casein) have been described to date. Many of these variants are known to affect milk-production traits, cheese-processing properties, and the nutritive value of milk. They also provide valuable information for phylogenetic studies. So far, the majority of studies exploring the genetic variability of bovine caseins considered European taurine cattle breeds and were carried out at the protein level by electrophoretic techniques. This only allows the identification of variants that, due to amino acid exchanges, differ in their electric charge, molecular weight, or isoelectric point. In this study, the open reading frames of the casein genes CSN1S1, CSN2, CSN1S2, and CSN3 of 356 animals belonging to 14 taurine and 3 indicine cattle breeds were sequenced. With this approach, we identified 23 alleles, including 5 new DNA sequence variants, with a predicted effect on the protein sequence. The new variants were only found in indicine breeds and in one local Iranian breed, which has been phenotypically classified as a taurine breed. A multidimensional scaling approach based on available SNP chip data, however, revealed an admixture of taurine and indicine populations in this breed as well as in the local Iranian breed Golpayegani. Specific indicine casein alleles were also identified in a few European taurine breeds, indicating the introgression of indicine breeds into these populations. This study shows the existence of substantial undiscovered genetic variability of bovine casein loci, especially in indicine cattle breeds. The identification of new variants is a valuable tool for phylogenetic studies and investigations into the evolution of the milk protein genes.

Relevance:

100.00%

Publisher:

Abstract:

Medical instrumentation used in diagnosis and treatment relies on the accurate detection and processing of various physiological events and signals. While signal detection technology has improved greatly in recent years, there remain inherent delays in signal detection and processing, and these delays may have significant negative clinical consequences during various pathophysiological events. Reducing or eliminating such delays would increase the ability to provide successful early intervention in certain disorders, thereby increasing the efficacy of treatment. In recent years, a physical phenomenon referred to as negative group delay (NGD), demonstrated in simple electronic circuits, has been shown to temporally advance the detection of analog waveforms. Specifically, the output is temporally advanced relative to the input, as the time delay through the circuit is negative: the circuit output precedes the complete detection of the input signal. This process is referred to as signal advance (SA) detection. An SA circuit model incorporating NGD was designed, developed, and tested. It imparts a constant temporal signal advance over a pre-specified spectral range in which the output is almost identical to the input signal (i.e., it has minimal distortion). Certain human patho-electrophysiological events are good candidates for the application of temporally advanced waveform detection. SA technology has potential in early arrhythmia and epileptic seizure detection and intervention. Demonstrating reliable and consistent temporally advanced detection of electrophysiological waveforms may enable intervention in a pathological event (much) earlier than previously possible. SA detection could also be used to improve the performance of neural-computer interfaces, neurotherapy applications, radiation therapy, and imaging. In this study, the performance of a single-stage SA circuit model on a variety of constructed input signals and human ECGs is investigated.
The data obtained are used to quantify and characterize the temporal advances and circuit gain, as well as distortions in the output waveforms relative to their inputs. This project combines elements of physics, engineering, signal processing, statistics, and electrophysiology. Its success has important consequences for the development of novel interventional methodologies in cardiology and neurophysiology, as well as significant potential in a broader range of both biomedical and non-biomedical applications.
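As a rough numerical illustration of how negative group delay yields a temporally advanced output (this is a minimal sketch, not the study's SA circuit), consider the first-order NGD transfer function H(s) = 1 + tau*s, whose time-domain output y(t) = x(t) + tau*dx/dt approximates x(t + tau) for signals band-limited well below 1/tau. The sample rate, advance tau, and test pulse below are assumed values:

```python
import numpy as np

# First-order NGD sketch: y(t) = x(t) + tau * dx/dt ~ x(t + tau)
# for slowly varying inputs. All parameters are illustrative.
fs = 10_000.0            # sample rate, Hz (assumed)
tau = 5e-3               # nominal signal advance, 5 ms (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)

# Slow Gaussian test pulse centred at t = 0.5 s (width 50 ms)
x = np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2))

# NGD output: the derivative term pushes the peak earlier in time
y = x + tau * np.gradient(x, t)

advance = t[np.argmax(x)] - t[np.argmax(y)]
print(f"output peak leads input peak by {advance * 1e3:.1f} ms")
```

For this pulse the measured peak advance comes out close to tau, consistent with the first-order approximation; faster input features (relative to 1/tau) would show the distortion the abstract's "pre-specified spectral range" guards against.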

Relevance:

100.00%

Publisher:

Abstract:

The article proposes granular computing as a theoretical, formal, and methodological basis for the newly emerging research field of human–data interaction (HDI). We argue that the ability to represent and reason with information granules is a prerequisite for data legibility. As such, it allows the research agenda of HDI to be extended to encompass the topic of collective intelligence amplification, which is seen as an opportunity offered by today's increasingly pervasive computing environments. As an example of collective intelligence amplification in HDI, we introduce a collaborative urban planning use case in a cognitive city environment and show how an iterative process of user input and human-oriented automated data processing can support collective decision making. As a basis for automated human-oriented data processing, we use the spatial granular calculus of granular geometry.

Relevance:

100.00%

Publisher:

Abstract:

Navigation of deep space probes is most commonly performed using the spacecraft Doppler tracking technique: orbital parameters are determined from a series of repeated measurements of the frequency shift of a microwave carrier over a given integration time. Currently, both ESA and NASA operate antennas at several sites around the world to ensure the tracking of deep space probes, yet only a small number of software packages are used to process Doppler observations. The Astronomical Institute of the University of Bern (AIUB) has recently started the development of Doppler data processing capabilities within the Bernese GNSS Software. This software has been extensively used for precise orbit determination of Earth-orbiting satellites using GPS data collected by on-board receivers and for subsequent determination of the Earth gravity field. In this paper, we present the current status of the Doppler data modeling and orbit determination capabilities in the Bernese GNSS Software using GRAIL data. In particular, we focus on the implemented orbit determination procedure used for the combined analysis of Doppler and intersatellite Ka-band data. We show that even at this early stage of development we can achieve an accuracy of a few mHz on two-way S-band Doppler observations and of 2 µm/s on Ka-band range-rate (KBRR) data from the GRAIL primary mission phase.
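The relationship between Doppler accuracy and range-rate accuracy quoted above follows from the first-order two-way Doppler observable, f_obs = f0 * (1 - 2*v_r/c), where v_r is the line-of-sight range rate (positive for a receding probe). A back-of-envelope sketch, with an assumed nominal S-band carrier of 2.2 GHz and illustrative helper names:

```python
# First-order two-way Doppler arithmetic (relativistic and media terms
# neglected). Carrier frequency is an assumed nominal S-band value.
C = 299_792_458.0        # speed of light, m/s
F0 = 2.2e9               # nominal S-band carrier, Hz (assumed)

def two_way_doppler_shift(v_r: float, f0: float = F0) -> float:
    """Two-way Doppler shift in Hz for line-of-sight range rate v_r (m/s)."""
    return -2.0 * v_r / C * f0

def range_rate_resolution(df: float, f0: float = F0) -> float:
    """Range-rate accuracy (m/s) corresponding to Doppler accuracy df (Hz)."""
    return C * df / (2.0 * f0)

# A probe receding at 1 km/s shifts the carrier by roughly -14.7 kHz;
# a 5 mHz Doppler accuracy corresponds to sub-mm/s range-rate accuracy.
print(f"{two_way_doppler_shift(1000.0):.1f} Hz")
print(f"{range_rate_resolution(5e-3) * 1e3:.2f} mm/s")
```

This illustrates why a few-mHz Doppler accuracy at S-band translates to range-rate knowledge at the level of a few tenths of a mm/s, i.e. the same order as the 2 µm/s KBRR figure quoted for GRAIL.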

Relevance:

100.00%

Publisher:

Abstract:

A wide variety of spatial data collection efforts are ongoing throughout local, state, and federal agencies, private firms, and non-profit organizations. Each effort is established for a different purpose, but organizations and individuals often collect and maintain the same or similar information. The United States federal government has undertaken many initiatives, such as the National Spatial Data Infrastructure, the National Map, and Geospatial One-Stop, to reduce duplicative spatial data collection and promote the coordinated use, sharing, and dissemination of spatial data nationwide. A key premise in most of these initiatives is that no national government will be able to gather and maintain more than a small percentage of the geographic data that users need. Thus, national initiatives typically depend on the cooperation of those already gathering spatial data, and those using GIS to meet specific needs, to help construct and maintain these spatial data infrastructures and geo-libraries for their nations (Onsrud 2001). Some of the impediments to widespread spatial data sharing are well known from directly asking GIS data producers why they are not currently involved in creating datasets in common or compatible formats, documenting their datasets in a standardized metadata format, or making their datasets more readily available to others through data clearinghouses or geo-libraries. The research described in this thesis addresses the impediments to wide-scale spatial data sharing faced by GIS data producers and explores a new conceptual data-sharing approach, the Public Commons for Geospatial Data, that supports user-friendly metadata creation, open access licenses, archival services, and documentation of the parent lineage of the contributors and value-adders of digital spatial data sets.

Relevance:

100.00%

Publisher:

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis

We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods used. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 to 5,019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70–5,019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers, and error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis

Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality that builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data

Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses, yet the factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess their perceptions of these factors. The Delphi process identified 9 factors not found in the literature and differed from the literature on 5 factors in the top 25%; the Delphi results also refuted 7 factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms

Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates, but distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and extremely so for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working-memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping, or calculation. The representational analysis used here can be applied to identify data elements with high cognitive demands.
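The pooled error rates quoted in the review are normalized ratios, conventionally expressed as errors per 10,000 fields. The sketch below shows that arithmetic on hypothetical study counts (the numbers are illustrative, not data from the reviewed papers):

```python
# Pooled error-rate arithmetic, normalised to errors per 10,000 fields.
# The (errors, fields) pairs below are hypothetical examples only.
studies = [
    (12, 45_000),    # e.g. double data entry (hypothetical)
    (310, 8_200),    # e.g. medical record abstraction (hypothetical)
    (57, 30_000),    # e.g. single entry with on-screen checks (hypothetical)
]

def rate_per_10k(errors: int, fields: int) -> float:
    """Error rate normalised to errors per 10,000 fields."""
    return errors / fields * 10_000

per_study = [rate_per_10k(e, f) for e, f in studies]
# Pooling: sum counts first, then normalise, so larger studies weigh more
pooled = rate_per_10k(sum(e for e, _ in studies), sum(f for _, f in studies))
print([round(r, 1) for r in per_study], round(pooled, 1))
```

Note that pooling the raw counts (rather than averaging the per-study rates) weights each study by its denominator, which is why a single large, low-error study can dominate the pooled figure.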

Relevance:

100.00%

Publisher:

Abstract:

In recent years, profiling floats, which form the basis of the successful international Argo observatory, have also been considered as platforms for marine biogeochemical research. This study showcases the utility of floats as a novel tool for combined gas measurements of CO2 partial pressure (pCO2) and O2. The float prototypes were equipped with a small-sized, submersible pCO2 sensor and an O2 optode for high-resolution measurements in the surface ocean layer. Four consecutive deployments were carried out between November 2010 and June 2011 near the Cape Verde Ocean Observatory (CVOO) in the eastern tropical North Atlantic. The profiling float performed upcasts every 31 h while measuring pCO2, O2, salinity, temperature, and hydrostatic pressure in the upper 200 m of the water column. To maintain accuracy, regular pCO2 sensor zeroings at depth and at the surface, as well as optode measurements in air, were performed on each profile. Through the application of data processing procedures (e.g., time-lag correction), the accuracy of float-borne pCO2 measurements was greatly improved (to 10–15 µatm for the water column and 5 µatm for surface measurements), and O2 measurements yielded an accuracy of 2 µmol/kg. The first results of this pilot study show the feasibility of using profiling floats as a platform for detailed and unattended observation of marine carbon and oxygen cycle dynamics.
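The time-lag correction mentioned as a data processing step can be illustrated under the common assumption that a slow gas sensor behaves as a first-order system with response time tau: tau * dm/dt = x - m, so the ambient signal can be recovered as x(t) ~ m(t) + tau * dm/dt. The response time and synthetic pCO2 profile below are illustrative values, not the study's:

```python
import numpy as np

# First-order time-lag correction sketch. tau and the profile are assumed.
tau = 60.0                          # sensor response time, s (assumed)
t = np.arange(0.0, 3600.0, 10.0)    # 1-h record sampled every 10 s

# Synthetic "true" pCO2 profile: a smooth 40 uatm step (illustrative)
true_pco2 = 380.0 + 20.0 * np.tanh((t - 1800.0) / 300.0)

# Simulate the lagged sensor reading: tau * dm/dt = x - m (Euler step)
measured = np.empty_like(true_pco2)
measured[0] = true_pco2[0]
dt = t[1] - t[0]
for i in range(1, len(t)):
    measured[i] = measured[i - 1] + dt / tau * (true_pco2[i - 1] - measured[i - 1])

# Lag correction: invert the sensor model, x ~ m + tau * dm/dt
corrected = measured + tau * np.gradient(measured, t)

print(f"max error before: {np.max(np.abs(measured - true_pco2)):.2f} uatm")
print(f"max error after:  {np.max(np.abs(corrected - true_pco2)):.2f} uatm")
```

In practice the derivative term amplifies sensor noise, so real processing chains typically smooth the measured series before (or while) applying the correction; this sketch omits that step for clarity.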