945 results for Lead Analysis Data processing


Relevance:

100.00%

Publisher:

Abstract:

This work is divided into three volumes: Volume I: Strain-Based Damage Detection; Volume II: Acceleration-Based Damage Detection; Volume III: Wireless Bridge Monitoring Hardware.

Volume I: In this work, a previously developed structural health monitoring (SHM) system was advanced toward a ready-for-implementation system. Improvements were made with respect to automated data reduction/analysis, data acquisition hardware, sensor types, and communication network architecture. The statistical damage-detection tool, based on control-chart damage-detection methodologies, was further investigated and advanced. To validate the damage-detection approaches, strain data were obtained from a sacrificial specimen with simulated damage attached to the previously utilized US 30 Bridge over the South Skunk River (in Ames, Iowa). To provide an enhanced ability to detect changes in the behavior of the structural system, various control chart rules were evaluated. False indications and true indications were studied to compare the damage-detection ability of each methodology and each control chart rule. An autonomous software program called Bridge Engineering Center Assessment Software (BECAS) was developed to control all aspects of the damage-detection process. BECAS requires no user intervention after initial configuration and training.

Volume II: In this work, a previously developed structural health monitoring (SHM) system was advanced toward a ready-for-implementation system. Improvements were made with respect to automated data reduction/analysis, data acquisition hardware, sensor types, and communication network architecture. The objective of this part of the project was to validate and integrate a vibration-based damage-detection algorithm with the strain-based methodology formulated by the Iowa State University Bridge Engineering Center. This report volume (Volume II) presents the use of vibration-based damage-detection approaches as local methods to quantify damage at critical areas in structures. Acceleration data were collected and analyzed to evaluate the relationships between sensors and with changes in environmental conditions. A sacrificial specimen was investigated to verify the damage-detection capabilities, and this volume presents a transmissibility concept and damage-detection algorithm that show potential to sense local changes in the dynamic stiffness between points across a joint of a real structure. The validation and integration of the vibration-based and strain-based damage-detection methodologies will add significant value to Iowa's current and future bridge maintenance, planning, and management.

Volume III: In this work, a previously developed structural health monitoring (SHM) system was advanced toward a ready-for-implementation system. Improvements were made with respect to automated data reduction/analysis, data acquisition hardware, sensor types, and communication network architecture. This report volume (Volume III) summarizes the energy harvesting techniques and prototype development for a bridge monitoring system that uses wireless sensors. The wireless sensor nodes are used to collect strain measurements at critical locations on a bridge. The bridge monitoring hardware system consists of a base station and multiple self-powered wireless sensor nodes. The base station is responsible for the synchronization of data sampling on all nodes and for data aggregation. Each wireless sensor node includes a sensing element, a processing and wireless communication module, and an energy harvesting module. The hardware prototype for a wireless bridge monitoring system was developed and tested on the US 30 Bridge over the South Skunk River in Ames, Iowa. The functions and performance of the developed system, including strain data, energy harvesting capacity, and wireless transmission quality, were studied and are covered in this volume.
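
The control-chart idea from Volume I can be sketched in a few lines. The snippet below is not BECAS; it is a minimal illustration assuming hypothetical arrays of baseline and newly collected strain readings, and it applies a simple Shewhart-style individuals chart that flags points outside three-sigma limits learned from the training data.

```python
import numpy as np

def control_chart_flags(training_strain, new_strain, k=3.0):
    """Flag strain readings that fall outside k-sigma Shewhart control limits.

    Limits are learned from baseline (undamaged) data; exceedances in newly
    collected data are treated as potential damage indications.
    """
    center = np.mean(training_strain)
    sigma = np.std(training_strain, ddof=1)
    upper, lower = center + k * sigma, center - k * sigma
    return (new_strain > upper) | (new_strain < lower)

# Hypothetical example: baseline microstrain vs. readings after a simulated cut
rng = np.random.default_rng(0)
baseline = rng.normal(100.0, 5.0, size=1000)    # training period
monitoring = rng.normal(118.0, 5.0, size=50)    # shifted response
print(control_chart_flags(baseline, monitoring).sum(), "of 50 readings flagged")
```

The report evaluates several control chart rules; a production system would substitute those rules for the single three-sigma test used here.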

Relevance:

100.00%

Publisher:

Abstract:

Multi-centre data repositories like the Alzheimer's Disease Neuroimaging Initiative (ADNI) offer a unique research platform, but pose questions concerning the comparability of results when a range of imaging protocols and data processing algorithms is used. The variability is mainly due to the non-quantitative character of the widely used structural T1-weighted magnetic resonance (MR) images. Although the stability of the main effect of Alzheimer's disease (AD) on brain structure across platforms and field strengths has been addressed in previous studies using multi-site MR images, there are only sparse empirically based recommendations for the processing and analysis of pooled multi-centre structural MR data acquired at different magnetic field strengths (MFS). Aiming to minimise potential systematic bias when using ADNI data, we investigate the specific contributions of spatial registration strategies and the impact of MFS on voxel-based morphometry (VBM) in AD. We perform a whole-brain analysis within the framework of Statistical Parametric Mapping, testing for main effects of various diffeomorphic spatial registration strategies and of MFS, and for their interaction with disease status. Beyond the confirmation of medial temporal lobe volume loss in AD, we detect a significant impact of spatial registration strategy on the estimation of AD-related atrophy. Additionally, we report a significant effect of MFS on the assessment of brain anatomy in (i) the cerebellum, (ii) the precentral gyrus, and (iii) the thalamus bilaterally, showing no interaction with disease status. We provide empirical evidence in support of pooling data in multi-centre VBM studies irrespective of disease status or MFS.
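
As a rough worked example of the kind of voxel-wise test described above, the sketch below fits a linear model for grey-matter volume at a single voxel with main effects of field strength and diagnosis plus their interaction. The data, effect sizes, and variable names are hypothetical; the actual study uses Statistical Parametric Mapping on whole-brain images, not this simplified regression.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-subject grey-matter values at a single voxel
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "mfs": rng.choice(["1.5T", "3T"], size=n),       # magnetic field strength
    "group": rng.choice(["AD", "control"], size=n),  # disease status
})
df["gm"] = (0.60
            - 0.05 * (df["group"] == "AD")           # simulated atrophy effect
            + 0.02 * (df["mfs"] == "3T")             # simulated scanner effect
            + rng.normal(0, 0.03, size=n))

# Main effects of field strength and diagnosis, plus their interaction;
# a non-significant interaction term supports pooling data across field strengths.
model = smf.ols("gm ~ C(mfs) * C(group)", data=df).fit()
print(model.summary().tables[1])
```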

Relevance:

100.00%

Publisher:

Abstract:

This report is divided into two volumes. This volume (Volume I) summarizes a structural health monitoring (SHM) system that was developed for the Iowa DOT to remotely and continuously monitor fatigue-critical bridges (FCB) to aid in the detection of crack formation. The developed FCB SHM system enables bridge owners to remotely monitor FCB for gradual or sudden damage formation. The SHM system utilizes fiber Bragg grating (FBG) fiber optic sensors (FOSs) to measure strains at critical locations. The strain-based SHM system is trained with measured performance data to identify typical bridge response when subjected to ambient traffic loads, and that knowledge is used to evaluate newly collected data. At specified intervals, the SHM system autonomously generates evaluation reports that summarize the current behavior of the bridge. The evaluation reports are collected and distributed to the bridge owner for interpretation and decision making.

Volume II summarizes the development and demonstration of an autonomous, continuous SHM system that can be used to monitor typical girder bridges. The developed SHM system can be grouped into two main categories: an office component and a field component. The office component is a structural analysis software program that can be used to generate thresholds, which are used for identifying isolated events. The field component includes hardware and field monitoring software that performs data processing and evaluation. The hardware system consists of sensors, data acquisition equipment, and a communication system backbone. The field monitoring software has been developed such that, once started, it operates autonomously with minimal user interaction. In general, the SHM system features two key uses. First, the system can be integrated into an active bridge management system that tracks usage and structural changes. Second, the system helps owners identify damage and deterioration.
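
A minimal sketch of the field-component step of checking measured strain against thresholds generated by the office component; the sensor name and threshold value are hypothetical, and the deployed software performs this continuously on streaming data.

```python
from dataclasses import dataclass

@dataclass
class ThresholdCheck:
    sensor: str
    limit_microstrain: float  # threshold from the office-component structural analysis

    def isolated_events(self, readings):
        """Return (index, value) pairs where measured strain exceeds the threshold."""
        return [(i, v) for i, v in enumerate(readings) if abs(v) > self.limit_microstrain]

# Hypothetical check for one gauge location
check = ThresholdCheck(sensor="girder_3_midspan", limit_microstrain=250.0)
print(check.isolated_events([120.0, 180.0, 310.0, 95.0]))  # -> [(2, 310.0)]
```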

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: PCR has the potential to detect and precisely quantify specific DNA sequences, but it is not yet often used as a fully quantitative method. A number of data collection and processing strategies have been described for the implementation of quantitative PCR. However, they can be experimentally cumbersome, their relative performances have not been evaluated systematically, and they often remain poorly validated statistically and/or experimentally. In this study, we evaluated the performance of known methods and compared them with newly developed data processing strategies in terms of resolution, precision and robustness. RESULTS: Our results indicate that simple methods that do not rely on the estimation of the efficiency of the PCR amplification may provide reproducible and sensitive data, but that they do not quantify DNA with precision. Other evaluated methods based on sigmoidal or exponential curve fitting were generally of both poor resolution and precision. A statistical analysis of the parameters that influence efficiency indicated that efficiency depends mostly on the selected amplicon and, to a lesser extent, on the particular biological sample analyzed. Thus, we devised various strategies based on individual or averaged efficiency values, which were used to assess the regulated expression of several genes in response to a growth factor. CONCLUSION: Overall, qPCR data analysis methods differ significantly in their performance, and this analysis identifies methods that provide DNA quantification estimates of high precision, robustness and reliability. These methods allow reliable estimation of relative expression ratios of two-fold or higher, and our analysis provides an estimation of the number of biological samples that have to be analyzed to achieve a given precision.
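
One common efficiency-based strategy of the kind evaluated here is a Pfaffl-style ratio, in which amplicon-specific amplification efficiencies correct the raw Cq differences between treated and control samples. The sketch below is a generic illustration with hypothetical Cq values and efficiencies, not the specific strategies devised in the study.

```python
def relative_expression(e_target, e_ref, cq_target_ctrl, cq_target_treat,
                        cq_ref_ctrl, cq_ref_treat):
    """Efficiency-corrected relative expression ratio (Pfaffl-style).

    e_target / e_ref : amplification efficiencies (2.0 = perfect doubling),
    typically estimated per amplicon from a dilution series or averaged
    across reactions, as discussed in the text.
    """
    delta_target = cq_target_ctrl - cq_target_treat  # positive = up-regulated
    delta_ref = cq_ref_ctrl - cq_ref_treat
    return (e_target ** delta_target) / (e_ref ** delta_ref)

# Hypothetical Cq values for a growth-factor-induced gene vs. a reference gene
print(relative_expression(1.95, 2.00, 24.8, 22.7, 18.1, 18.0))  # ~3.8-fold induction
```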

Relevance:

100.00%

Publisher:

Abstract:

Background: Nowadays, combining the different sources of information to improve the available biological knowledge is a challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: first, an appropriate kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot a representation of the input variables that belong to each dataset. In particular, for each input variable or linear combination of input variables, we can represent the local direction of maximum growth, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give a better understanding of the available biological knowledge.
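
A minimal sketch of the two-step recipe described above, assuming scikit-learn and two hypothetical data sources: an RBF kernel is computed for each source, the kernels are summed into a single combined kernel, and kernel PCA is run on the precomputed result. It illustrates the general approach rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
n_samples = 60
expression = rng.normal(size=(n_samples, 200))   # hypothetical data source 1
methylation = rng.normal(size=(n_samples, 50))   # hypothetical data source 2

# Step 1: one kernel per data set
K1 = rbf_kernel(expression, gamma=1.0 / expression.shape[1])
K2 = rbf_kernel(methylation, gamma=1.0 / methylation.shape[1])

# Step 2: combine the kernels (unweighted sum) and run kernel PCA on the result
K = K1 + K2
embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)
print(embedding.shape)  # (60, 2): low-dimensional representation of both sources
```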

Relevance:

100.00%

Publisher:

Abstract:

Motivation: The comparative analysis of gene gain and loss rates is critical for understanding the role of natural selection and adaptation in shaping gene family sizes. Studying complete genome data from closely related species allows accurate estimation of gene family turnover rates. Current methods and software tools, however, are not well designed for dealing with certain kinds of functional elements, such as microRNAs or transcription factor binding sites. Results: Here, we describe BadiRate, a new software tool to estimate family turnover rates, as well as the number of elements at internal phylogenetic nodes, by likelihood-based and parsimony methods. It implements two stochastic population models, which provide the appropriate statistical framework for testing hypotheses such as lineage-specific gene family expansions or contractions. We have assessed the accuracy of BadiRate by computer simulations, and have also illustrated its functionality by analyzing a representative empirical dataset.
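
To give a feel for the stochastic gain/loss (birth-death) models underlying such turnover estimates, here is a toy simulation of a gene family evolving along a single branch under per-copy gain and loss rates. It is purely illustrative and is not BadiRate's likelihood machinery; the rates and branch length are hypothetical.

```python
import random

def simulate_family_size(n0, gain_rate, loss_rate, branch_length, rng):
    """Gillespie simulation of gene family size under per-copy gain/loss rates."""
    n, t = n0, 0.0
    while n > 0:
        total_rate = n * (gain_rate + loss_rate)
        t += rng.expovariate(total_rate)
        if t > branch_length:
            break
        n += 1 if rng.random() < gain_rate / (gain_rate + loss_rate) else -1
    return n

rng = random.Random(42)
sizes = [simulate_family_size(n0=5, gain_rate=0.02, loss_rate=0.03,
                              branch_length=10.0, rng=rng) for _ in range(10000)]
print(sum(sizes) / len(sizes))  # mean close to 5 * exp((0.02 - 0.03) * 10) ≈ 4.52
```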


Relevance:

100.00%

Publisher:

Abstract:

Gaia is the most ambitious space astrometry mission currently envisaged and is a technological challenge in all its aspects. We describe a proposal for the payload data handling system of Gaia, as an example of a high-performance, real-time, concurrent, and pipelined data system. This proposal includes the front-end systems for the instrumentation, the data acquisition and management modules, the star data processing modules, and the payload data handling unit. We also review other payload and service module elements, and we illustrate a proposed data flow.
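
As a loose illustration of a concurrent, pipelined data handling chain of this kind, the sketch below wires three stages (acquisition, star data processing, packing for downlink) together with queues and threads. The stage names and packet contents are hypothetical and vastly simplified relative to the actual Gaia design.

```python
import queue
import threading

def acquire(out_q, n_frames):
    """Front-end stage: push raw frames (here, just integers) into the pipeline."""
    for frame in range(n_frames):
        out_q.put(frame)
    out_q.put(None)                      # sentinel: no more data

def process(in_q, out_q):
    """Star data processing stage: transform each frame into a 'star packet'."""
    while (frame := in_q.get()) is not None:
        out_q.put({"frame": frame, "stars_detected": frame % 5})
    out_q.put(None)

def pack(in_q, results):
    """Payload data handling stage: aggregate packets for downlink."""
    while (packet := in_q.get()) is not None:
        results.append(packet)

raw_q, star_q, downlink = queue.Queue(maxsize=8), queue.Queue(maxsize=8), []
stages = [threading.Thread(target=acquire, args=(raw_q, 20)),
          threading.Thread(target=process, args=(raw_q, star_q)),
          threading.Thread(target=pack, args=(star_q, downlink))]
for t in stages:
    t.start()
for t in stages:
    t.join()
print(len(downlink), "packets packed for downlink")
```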

Relevance:

100.00%

Publisher:

Abstract:

Statistics has become an indispensable tool in biomedical research. Thanks in particular to computer science, the researcher has easy access to elementary "classical" procedures. These are often of a "confirmatory" nature: their aim is to test hypotheses (for example, the efficacy of a treatment) formulated prior to experimentation. However, doctors often use them in situations more complex than foreseen, to discover interesting data structures and to formulate hypotheses. This inverse process may lead to misuse, which increases the number of "statistically proven" results in medical publications. The help of a professional statistician thus becomes necessary. Moreover, good, simple "exploratory" techniques are now available. In addition, medical data contain quite a high percentage of outliers (data that deviate from the majority). With classical methods it is often very difficult (even for a statistician!) to detect them, and the reliability of the results becomes questionable. New, reliable ("robust") procedures have been the subject of research for the past two decades. Their practical introduction is one of the activities of the Statistics and Data Processing Department of the University of Social and Preventive Medicine, Lausanne.
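
The contrast between classical and robust procedures can be made concrete with a small example: flagging outliers with the mean and standard deviation versus the median and MAD. The data values are hypothetical, and the MAD rule is one standard robust choice, not necessarily the specific procedures developed in Lausanne.

```python
import numpy as np

values = np.array([5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 23.0])  # one gross outlier

# Classical rule: the outlier inflates both the mean and the SD, masking itself
z = (values - values.mean()) / values.std(ddof=1)
print("classical |z| > 2.5:", values[np.abs(z) > 2.5])         # finds nothing here

# Robust rule: median and MAD are barely affected by the outlier
med = np.median(values)
mad = 1.4826 * np.median(np.abs(values - med))                 # scaled to ~SD under normality
robust_z = (values - med) / mad
print("robust |z| > 2.5:", values[np.abs(robust_z) > 2.5])     # flags 23.0
```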

Relevance:

100.00%

Publisher:

Abstract:

DnaSP, DNA Sequence Polymorphism, is a software package for the analysis of nucleotide polymorphism from aligned DNA sequence data. DnaSP can estimate several measures of DNA sequence variation within and between populations (in noncoding, synonymous or nonsynonymous sites, or in various sorts of codon positions), as well as linkage disequilibrium, recombination, gene flow and gene conversion parameters. DnaSP can also carry out several tests of neutrality: the Hudson, Kreitman and Aguadé (1987), Tajima (1989), McDonald and Kreitman (1991), Fu and Li (1993), and Fu (1997) tests. Additionally, DnaSP can estimate confidence intervals for some test statistics by means of the coalescent. The results of the analyses are displayed in tabular and graphic form.
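
As a flavour of the summary statistics behind such analyses, the sketch below computes the average number of pairwise differences, the number of segregating sites, and Tajima's (1989) D from a toy alignment. It follows the standard textbook formulas and is not DnaSP code; the alignment is hypothetical.

```python
from itertools import combinations
from math import sqrt

def tajimas_d(seqs):
    """Tajima's D from aligned sequences (gaps/missing data not handled)."""
    n = len(seqs)
    # Average number of pairwise differences (pi, per sequence pair, not per site)
    pairs = list(combinations(seqs, 2))
    pi = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs) / len(pairs)
    # Number of segregating sites
    S = sum(len(set(col)) > 1 for col in zip(*seqs))
    if S == 0:
        return 0.0
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1, e2 = c1 / a1, c2 / (a1 ** 2 + a2)
    return (pi - S / a1) / sqrt(e1 * S + e2 * S * (S - 1))

# Hypothetical toy alignment of four haplotypes
alignment = ["ATGCATGCAT", "ATGCATGCAA", "ATGCTTGCAT", "ATGCATGGAT"]
print(round(tajimas_d(alignment), 3))
```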

Relevance:

100.00%

Publisher:

Abstract:

In this work we analyze the behavior of complex information in the Fresnel domain, taking into account the limited capability of current liquid crystal devices to display complex transmittance values when used as holographic displays. To carry out this analysis, we compute the reconstruction of Fresnel holograms at several distances using the different parts of the complex distribution (real and imaginary parts, amplitude and phase), as well as using the full complex information adjusted with a method that combines two configurations of the devices in an adding architecture. The RMS error between the amplitude of these reconstructions and the original amplitude is used to evaluate the quality of the information displayed. The results of the error analysis show different behavior for the reconstructions using the different parts of the complex distribution and using the combined method of two devices. Better reconstructions are obtained when using two devices whose configurations, when added, densely cover the complex plane. Simulated and experimental results are also presented.
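
A simplified numerical sketch of the comparison described above: an object field is propagated to the hologram plane with an angular-spectrum propagator (standing in for Fresnel propagation), either the full complex field or only its real part is kept, the field is propagated back, and the reconstruction amplitudes are compared via RMS error. The wavelength, pixel pitch, and distance are hypothetical, and this is a textbook propagator rather than the authors' exact procedure.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, distance):
    """Propagate a sampled complex field by `distance` using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fx, fy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fx) ** 2 - (wavelength * fy) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

# Hypothetical object: a square amplitude aperture, propagated 0.2 m at 633 nm
n, pitch, wl, z = 256, 8e-6, 633e-9, 0.2
obj = np.zeros((n, n), dtype=complex)
obj[96:160, 96:160] = 1.0
hologram = angular_spectrum(obj, wl, pitch, z)

for label, field in [("full complex", hologram), ("real part only", hologram.real + 0j)]:
    recon = angular_spectrum(field, wl, pitch, -z)       # propagate back to object plane
    rms = np.sqrt(np.mean((np.abs(recon) - np.abs(obj)) ** 2))
    print(f"{label:15s} RMS amplitude error: {rms:.4f}")
```

The full complex field reconstructs the aperture almost exactly, while keeping only the real part introduces a twin-image term and a correspondingly larger RMS error, mirroring the qualitative behavior reported in the abstract.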


Relevance:

100.00%

Publisher:

Abstract:

This thesis examines the views of the case company's personnel on personnel development and on the development of the case company. The aim of the thesis was to determine how the personnel of Oy Gust. Ranin could be developed so that the company would be able to improve its operations, increase employee satisfaction and professional skills, and respond to competition. The study is a qualitative case study. The research method was thematic interviews, which were conducted with representatives of the case company's different personnel groups. The study was carried out for Oy Gust. Ranin. In the personnel's view, the best ways to develop competence are courses closely related to their work tasks, given by a professional from outside the company, from which participants receive course material to take with them. Employees want to choose for themselves which courses they attend. Another popular development method is job or task rotation. The biggest problem area was felt to be inadequate information flow, both from supervisors to subordinates and among employees. Other development targets include IT skills and language skills. The personnel's attitude toward development measures of all kinds is positive, and the personnel are willing to develop their work tasks.