919 results for data analysis software
Abstract:
Nanoindentation is a valuable tool for characterization of biomaterials due to its ability to measure local properties in heterogeneous, small, or irregularly shaped samples. However, applying nanoindentation to compliant, hydrated biomaterials poses many challenges, including adhesion between the nanoindenter tip and the sample. Although adhesion leads to overestimation of the modulus of compliant samples when nanoindentation data are analyzed using traditional techniques, most studies of biomaterials have ignored its effects. This paper demonstrates two methods for managing adhesion in nanoindentation analysis, the nano-JKR force curve method and the surfactant method, through application to two biomedically relevant compliant materials, poly(dimethyl siloxane) (PDMS) elastomers and poly(ethylene glycol) (PEG) hydrogels. The nano-JKR force curve method accounts for adhesion during data analysis using equations based on the Johnson-Kendall-Roberts (JKR) adhesion model, while the surfactant method eliminates adhesion during data collection, allowing data analysis using traditional techniques. In this study, indents performed in air or water resulted in adhesion between the tip and the sample, while testing the same materials submerged in Optifree Express® contact lens solution eliminated tip-sample adhesion in most samples. Modulus values from the two methods were within 7% of each other, despite different hydration conditions and evidence of adhesion. Using surfactant also did not significantly alter the properties of the tested material, allowed accurate modulus measurements using commercial software, and facilitated nanoindentation testing in fluids. This technique shows promise for more accurate and faster determination of modulus values from nanoindentation of compliant, hydrated biological samples.
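For context on the nano-JKR force curve approach, the sketch below fits the classical JKR load-displacement relations (P = Ka³/R − √(6πwKa³), δ = a²/R − (2/3)√(6πwa/K), with K = (4/3)E*) to an unloading curve. The tip radius, starting values, bounds, and use of scipy are illustrative assumptions, not the authors' analysis code.

```python
# Hypothetical sketch: fitting the classical JKR load-displacement relations to an
# unloading force curve to extract a reduced modulus in the presence of adhesion.
# Symbols: R = tip radius, K = (4/3)*E_reduced, w = work of adhesion, a = contact radius.
import numpy as np
from scipy.optimize import least_squares

R = 5e-6  # tip radius in meters (assumed value for illustration)

def jkr_curve(K, w, a):
    """Parametric JKR relations: load P(a) and depth delta(a) for contact radius a."""
    P = K * a**3 / R - np.sqrt(6 * np.pi * w * K * a**3)
    delta = a**2 / R - (2.0 / 3.0) * np.sqrt(6 * np.pi * w * a / K)
    return delta, P

def residuals(params, delta_meas, P_meas):
    """Interpolate the parametric JKR curve at the measured depths and compare loads."""
    K, w = params
    a = np.linspace(1e-9, 5e-6, 2000)            # sweep of contact radii
    delta, P = jkr_curve(K, w, a)
    order = np.argsort(delta)                    # crude handling of the non-monotonic branch
    P_model = np.interp(delta_meas, delta[order], P[order])
    return P_model - P_meas

# delta_meas, P_meas would come from the instrument's unloading segment, e.g.:
# fit = least_squares(residuals, x0=[1e6, 0.05], args=(delta_meas, P_meas),
#                     bounds=([1e3, 1e-4], [1e10, 1.0]))
# K_fit, w_fit = fit.x; E_reduced = 0.75 * K_fit
```

Traditional Oliver-Pharr analysis omits the adhesive term, which is why it overestimates the modulus of compliant samples.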
Abstract:
In this article, we will link neuroimaging, data analysis, and intervention methods in an important psychiatric condition: auditory verbal hallucinations (AVH). The clinical and phenomenological background as well as neurophysiological findings will be covered and discussed with respect to noninvasive brain stimulation. Additionally, methods of noninvasive brain stimulation will be presented as ways to intervene with AVH. Finally, preliminary conclusions and possible future perspectives will be proposed.
Abstract:
The Simulation Automation Framework for Experiments (SAFE) streamlines the design and execution of experiments with the ns-3 network simulator. SAFE ensures that best practices are followed throughout the workflow of a network simulation study, guaranteeing that results are both credible and reproducible by third parties. Data analysis is a crucial part of this workflow, and one where mistakes are often made. Even in highly regarded venues, scientific graphics in numerous network simulation publications fail to include graphic titles, units, legends, and confidence intervals. After studying the literature on network simulation methodology and information graphics visualization, I developed a visualization component for SAFE to help users avoid these errors in their scientific workflow. The functionality of this new component includes support for interactive visualization through a web-based interface and for the generation of high-quality static plots that can be included in publications. The overarching goal of my contribution is to help users create graphics that follow best practices in visualization and thereby succeed in conveying the right information about simulation results.
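As a generic illustration of the plot elements the abstract says are often missing (title, units, legend, confidence intervals), here is a hedged matplotlib sketch; it is not SAFE's actual API, and the metric names and replication count are invented for the example.

```python
# Hypothetical sketch: a static plot carrying the elements the text says are often
# missing (title, axis units, legend, confidence interval), in the spirit of SAFE's
# plotting component. All data and labels are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

offered_load = np.linspace(0.1, 1.0, 10)        # assumed x-axis: offered load (Mb/s)
mean_delay = 5 + 40 * offered_load**3            # assumed mean end-to-end delay (ms)
ci_halfwidth = 1.96 * 2.0 / np.sqrt(30)          # 95% CI half-width from 30 replications

fig, ax = plt.subplots()
ax.errorbar(offered_load, mean_delay, yerr=ci_halfwidth,
            capsize=3, label="mean of 30 replications (95% CI)")
ax.set_title("End-to-end delay vs. offered load (ns-3 simulation)")
ax.set_xlabel("Offered load (Mb/s)")
ax.set_ylabel("Mean delay (ms)")
ax.legend()
fig.savefig("delay_vs_load.pdf")                 # publication-quality static output
```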
Abstract:
The analysis of Komendant's design of the Kimbell Art Museum was carried out to determine the effectiveness of the ring beams, edge beams, and prestressing in the shells of the roof system. Finite element analysis was not available to Komendant or other engineers of the time to aid them in design and analysis; thus, the use of this tool helped form a new perspective on the Kimbell Art Museum and made it possible to analyze the engineer's work. To carry out the finite element analysis of the Kimbell Art Museum, the ADINA finite element analysis software was utilized. Eight finite element models (FEM-1 through FEM-8) of increasing complexity were created. The results of the most realistic model, FEM-8, which included ring beams, edge beams, and prestressing, were compared to Komendant's calculations. The maximum deflection at the crown of the mid-span surface of -0.1739 in. in FEM-8 was found to be larger than Komendant's deflection in the design documents before the loss in prestressing force (-0.152 in.) but smaller than his prediction after the loss in prestressing force (-0.3814 in.). Komendant predicted a larger longitudinal stress of -903 psi at the crown (vs. -797 psi in FEM-8) and 37 psi at the edge (vs. -347 psi in FEM-8). Considering the concrete strength of 5000 psi, the difference in results is not significant. From the analysis it was determined that both FEM-5, which included prestressing and fixed rings, and FEM-8 can be successfully and effectively implemented in practice. Prestressing was used in both models and thus served as the main contribution to efficiency. FEM-5 showed that ring and edge beams can be avoided; however, an architect might find them more aesthetically appropriate than rigid walls.
Abstract:
The purpose of this research project is to study an innovative method for the stability assessment of structural steel systems, namely the Modified Direct Analysis Method (MDM). This method is intended to simplify an existing design method, the Direct Analysis Method (DM), by assuming that a sophisticated second-order elastic structural analysis capable of accounting for member and system instability will be employed, thereby allowing the design process to be reduced to confirming the capacity of member cross-sections. This last check can be easily completed by substituting an effective length of KL = 0 into existing member design equations. This simplification will be particularly useful for structural systems in which it is not clear how to define the member slenderness L/r because the laterally unbraced length L is not apparent, such as arches and the compression chord of an unbraced truss. To study the feasibility and accuracy of this new method, a set of 12 benchmark steel structural systems previously designed and analyzed by former Bucknell graduate student Jose Martinez-Garcia, together with a single column, was modeled and analyzed using the nonlinear structural analysis software MASTAN2. A series of MATLAB-based programs was prepared by the author to provide the code-checking requirements for investigating the MDM. By comparing MDM and DM results against the more advanced distributed plasticity analysis results, it is concluded that the stability of structural systems can be adequately assessed in most cases using MDM, and that MDM often appears to be a more accurate but less conservative method for assessing stability.
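To make the KL = 0 cross-section check concrete, here is a hedged sketch written in the form of the AISC H1-1 beam-column interaction equation; the function, demands, and capacities are illustrative assumptions, not the author's MATLAB code-checking programs.

```python
# Hypothetical sketch: cross-section capacity check of the kind MDM reduces design to.
# With KL = 0 the axial capacity is the cross-section (squash) strength rather than a
# buckling strength; the check below uses the AISC H1-1 interaction form.
def interaction_ratio(Pr, Mr, Pc, Mc):
    """Return the H1-1 interaction value (<= 1.0 passes).

    Pr, Mr: required axial force and bending moment from the second-order analysis.
    Pc, Mc: available axial and flexural strengths; with KL = 0, Pc is governed by
            yielding of the gross section rather than flexural buckling.
    """
    if Pr / Pc >= 0.2:
        return Pr / Pc + (8.0 / 9.0) * (Mr / Mc)
    return Pr / (2.0 * Pc) + Mr / Mc

# Example with assumed demands and capacities (kips, kip-ft):
print(interaction_ratio(Pr=300.0, Mr=150.0, Pc=900.0, Mc=400.0))  # ~0.667 -> passes
```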
Abstract:
OBJECTIVE: Computed tomography (CT) and magnetic resonance imaging (MRI) have been introduced as an alternative to traditional autopsy. The purpose of this study was to investigate their accuracy in estimating the mass of the liver and spleen. METHODS: In 44 cases, the weights of the spleen and liver were estimated from MRI and CT data using volume-analysis software and a postmortem tissue-specific density factor. In a blinded approach, the results were compared with the weights recorded at autopsy. RESULTS: Excellent correlation between estimated and real weights (r = 0.997 for MRI, r = 0.997 for CT) was found. Putrefaction gas and venous air embolism led to overestimation. Venous congestion and drowning resulted in higher estimated weights. CONCLUSION: Postmortem weights of the liver and spleen can be accurately assessed by nondestructive imaging. Multislice CT overcomes the limitations of putrefaction and venous air embolism by making it possible to exclude gas. Congestion appears to be assessed even more accurately.
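The estimation step described (segmented organ volume multiplied by a postmortem tissue-specific density factor) can be sketched as below; the default density is an assumed placeholder, not the factor used in the study.

```python
# Hypothetical sketch: organ mass estimation from a segmented CT/MRI volume.
def estimate_organ_weight(volume_ml: float, density_g_per_ml: float = 1.05) -> float:
    """Mass (g) = segmented volume (mL) x tissue-specific density factor (g/mL).

    The default of 1.05 g/mL is only an assumed soft-tissue placeholder; the study
    applies postmortem tissue-specific factors for liver and spleen.
    """
    return volume_ml * density_g_per_ml

# e.g., a 1500 mL liver segmentation:
print(estimate_organ_weight(1500.0))  # ~1575 g
```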
Abstract:
Software repositories have received a great deal of attention from researchers in recent years. In order to analyze software repositories, it is necessary first to extract raw data from the version control and problem tracking systems. This poses two challenges: (1) extraction requires a non-trivial effort, and (2) the results depend on the heuristics used during extraction. These challenges burden researchers who are new to the community and make it difficult to benchmark software repository mining, since it is almost impossible to reproduce experiments done by another team. In this paper we present the TA-RE corpus. TA-RE collects extracted data from software repositories in order to build a collection of projects that will simplify the extraction process. Additionally, the collection can be used for benchmarking. As a first step we propose an exchange language capable of making the sharing and reuse of data as simple as possible.
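The abstract does not specify the exchange language itself, so the following is a purely hypothetical sketch of the kind of record such a format might standardize: commit metadata together with the extraction heuristics that produced it. All field names are invented for illustration and are not TA-RE's actual schema.

```python
# Purely hypothetical sketch of an interchange record for extracted repository data;
# field names are invented and do not reflect TA-RE's actual exchange language.
import json

record = {
    "project": "example-project",
    "commit": {
        "id": "abc123",
        "author": "dev@example.org",
        "timestamp": "2006-05-01T12:00:00Z",
        "files": ["src/Main.java"],
        "linked_issues": ["BUG-42"],
    },
    "extraction": {
        "tool": "example-extractor",
        "heuristic": "issue id matched by regex in commit message",
    },
}

print(json.dumps(record, indent=2))
```

Recording the heuristic alongside the data is what would let other teams reproduce or benchmark against the same extraction.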
Abstract:
Estimation for bivariate right-censored data is a problem that has been studied extensively over the past 15 years. In this paper we propose a new class of estimators for the bivariate survival function based on locally efficient estimation. We introduce the locally efficient estimator for bivariate right-censored data, present an asymptotic theorem, report the results of simulation studies, and perform a brief data analysis illustrating the use of the locally efficient estimator.
Abstract:
We propose robust and efficient tests and estimators for gene-environment/gene-drug interactions in family-based association studies. The methodology is designed for studies in which haplotypes, quantitative phenotypes and complex exposure/treatment variables are analyzed. Using causal inference methodology, we derive family-based association tests and estimators for the genetic main effects and the interactions. The tests and estimators are robust against population admixture and stratification without requiring adjustment for confounding variables. We illustrate the practical relevance of our approach by an application to a COPD study. The data analysis suggests a gene-environment interaction between a SNP in the Serpine gene and smoking status/pack years of smoking that reduces the FEV1 volume by about 0.02 liter per pack year of smoking. Simulation studies show that the proposed methodology is sufficiently powered for realistic sample sizes and that it provides valid tests and effect size estimators in the presence of admixture and stratification.
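As a plain-regression analogue of the reported effect (not the robust family-based, causal-inference estimators the paper derives), the sketch below fits a SNP-by-pack-years interaction on FEV1; the data are simulated and all variable names are hypothetical, with the -0.02 L per pack-year coefficient taken from the abstract only to make the example concrete.

```python
# Hypothetical sketch: an ordinary least-squares analogue of a gene-environment
# interaction on FEV1. This is NOT the family-based estimator in the paper; it only
# illustrates the SNP x pack-years interaction term being estimated, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "snp": rng.integers(0, 3, n),            # minor-allele count coded 0/1/2
    "packyears": rng.uniform(0, 60, n),      # pack-years of smoking
})
# Simulate FEV1 (liters) with a -0.02 L per pack-year interaction, mirroring the
# effect size reported in the abstract.
df["fev1"] = (3.5 - 0.01 * df["packyears"]
              - 0.02 * df["snp"] * df["packyears"]
              + rng.normal(0, 0.4, n))

fit = smf.ols("fev1 ~ snp * packyears", data=df).fit()
print(fit.params["snp:packyears"])           # recovers roughly -0.02 L per pack-year
```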
Abstract:
Background: The recent development of semi-automated techniques for staining and analyzing flow cytometry samples has presented new challenges. Quality control and quality assessment are critical when developing new high-throughput technologies and their associated information services. Our experience suggests that significant bottlenecks remain in the development of high-throughput flow cytometry methods for data analysis and display. In particular, data quality control and quality assessment are crucial steps in processing and analyzing high-throughput flow cytometry data. Methods: We propose a variety of graphical exploratory data analytic tools for exploring ungated flow cytometry data. We have implemented a number of specialized functions and methods in the Bioconductor package rflowcyt. We demonstrate the use of these approaches by investigating two independent sets of high-throughput flow cytometry data. Results: We found that graphical representations can reveal substantial non-biological differences in samples. Empirical cumulative distribution function (ECDF) plots and summary scatterplots were especially useful in the rapid identification of problems not identified by manual review. Conclusions: Graphical exploratory data analytic tools are a quick and useful means of assessing data quality. We propose that the described visualizations be used as quality assessment tools and, where possible, for quality control.
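rflowcyt is an R/Bioconductor package; to stay consistent with the other sketches in this listing, the following is a generic Python illustration of the ECDF-overlay style of graphical QC described, using simulated fluorescence values with hypothetical sample names.

```python
# Hypothetical sketch: overlaying per-sample ECDFs of one fluorescence channel to spot
# non-biological differences between samples (generic Python; rflowcyt itself is R).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
samples = {
    "plate1_A01": rng.lognormal(3.0, 0.5, 5000),
    "plate1_A02": rng.lognormal(3.0, 0.5, 5000),
    "plate1_A03": rng.lognormal(3.6, 0.5, 5000),  # shifted sample stands out in the ECDF
}

fig, ax = plt.subplots()
for name, values in samples.items():
    x = np.sort(values)
    y = np.arange(1, len(x) + 1) / len(x)         # empirical CDF
    ax.step(x, y, where="post", label=name)
ax.set_xscale("log")
ax.set_xlabel("Fluorescence intensity (arbitrary units)")
ax.set_ylabel("Empirical CDF")
ax.set_title("Per-sample ECDFs for quality assessment")
ax.legend()
plt.show()
```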
Abstract:
Genomic alterations have been linked to the development and progression of cancer. The technique of Comparative Genomic Hybridization (CGH) yields data consisting of fluorescence intensity ratios of test and reference DNA samples. The intensity ratios provide information about DNA copy number. Practical issues such as the contamination of tumor cells in tissue specimens and normalization errors necessitate the use of statistics for learning about genomic alterations from array-CGH data. As increasing amounts of array CGH data become available, there is a growing need for automated algorithms for characterizing genomic profiles. Specifically, there is a need for algorithms that can identify gains and losses in copy number based on statistical considerations, rather than merely detect trends in the data. We adopt a Bayesian approach, relying on the hidden Markov model to account for the inherent dependence in the intensity ratios. Posterior inferences are made about gains and losses in copy number. Localized amplifications (associated with oncogene mutations) and deletions (associated with mutations of tumor suppressors) are identified using posterior probabilities. Global trends such as extended regions of altered copy number are detected. Since the posterior distribution is analytically intractable, we implement a Metropolis-within-Gibbs algorithm for efficient simulation-based inference. Publicly available data on pancreatic adenocarcinoma, glioblastoma multiforme and breast cancer are analyzed, and comparisons are made with some widely used algorithms to illustrate the reliability and success of the technique.
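As a heavily simplified stand-in for the Bayesian HMM described (fixed, assumed parameters and Viterbi decoding rather than Metropolis-within-Gibbs posterior inference), the sketch below labels each clone's log2 ratio as loss, neutral, or gain along the genome ordering; none of the parameter values come from the paper.

```python
# Simplified, hypothetical stand-in for the paper's Bayesian HMM: a three-state
# Gaussian HMM (loss / neutral / gain) decoded with Viterbi under fixed, assumed
# parameters, instead of Metropolis-within-Gibbs posterior inference.
import numpy as np

means = np.array([-0.5, 0.0, 0.5])        # assumed state means for log2 ratios
sd = 0.2                                   # assumed shared emission standard deviation
log_pi = np.log([1 / 3, 1 / 3, 1 / 3])     # uniform initial state distribution
A = np.full((3, 3), 0.05) + np.diag([0.85, 0.85, 0.85])  # sticky transition matrix
log_A = np.log(A)

def viterbi(log2_ratios):
    """Return the most likely loss/neutral/gain state sequence (0=loss, 1=neutral, 2=gain)."""
    obs = np.asarray(log2_ratios)
    # Gaussian log-likelihood of each observation under each state.
    ll = -0.5 * ((obs[:, None] - means[None, :]) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))
    n, k = ll.shape
    score = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    score[0] = log_pi + ll[0]
    for t in range(1, n):
        cand = score[t - 1][:, None] + log_A   # k x k scores for (previous, current) states
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + ll[t]
    path = [int(score[-1].argmax())]
    for t in range(n - 1, 0, -1):              # backtrace the best state sequence
        path.append(int(back[t][path[-1]]))
    return np.array(path[::-1])

print(viterbi([0.02, -0.01, 0.48, 0.55, 0.51, -0.03, -0.47]))
```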