938 results for PM3 semi-empirical method


Relevance:

30.00%

Publisher:

Abstract:

The Scilla rock avalanche occurred on 6 February 1783 along the coast of the Calabria region (southern Italy), close to the Messina Strait. It was triggered by a mainshock of the Terremoto delle Calabrie seismic sequence, and it induced a tsunami wave responsible for more than 1500 casualties along the neighboring Marina Grande beach. The main goal of this work is the application of semi-analytical and numerical models to simulate this event. The first is a MATLAB code written expressly for this work that solves the equations of motion for particles sliding on a two-dimensional surface with a fourth-order Runge-Kutta method. The second is a code developed by the Tsunami Research Team of the Department of Physics and Astronomy (DIFA) of the University of Bologna that describes a slide as a chain of blocks able to interact while sliding down a slope, adopting a Lagrangian point of view. A broad description of landslide phenomena, and in particular of earthquake-induced landslides with tsunamigenic potential, is given in the first part of the work. Subsequently, the physical and mathematical background is presented; in particular, a detailed study of the discretization of derivatives is provided. A description of the dynamics of a point mass sliding on a surface then follows, together with several applications of numerical and analytical models to idealized topographies. The last part treats the dynamics of points that slide on a surface while interacting with each other, again with applications to an idealized topography. Finally, the applications to the 1783 Scilla event are shown and discussed.
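The core of such a point-mass model is the numerical integration of the equations of motion along the slope. The sketch below (Python, not the authors' MATLAB code; the slope angle and friction coefficient are assumed illustrative values) shows a classical fourth-order Runge-Kutta integration of a particle sliding down a planar slope with Coulomb friction.

```python
import numpy as np

# Minimal sketch with assumed values: a point mass sliding down a planar slope
# with Coulomb friction, integrated with the classical fourth-order Runge-Kutta scheme.
g = 9.81                     # gravity, m/s^2
theta = np.deg2rad(30.0)     # slope angle (hypothetical)
mu = 0.2                     # friction coefficient (hypothetical)

def rhs(state):
    """d/dt of (position, velocity) along the slope, downslope motion assumed."""
    s, v = state
    a = g * (np.sin(theta) - mu * np.cos(theta))
    return np.array([v, a])

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

state = np.array([0.0, 0.0])          # start at rest at the top of the slope
dt = 0.01
for _ in range(int(10.0 / dt)):       # integrate 10 s of motion
    state = rk4_step(state, dt)
print(f"distance {state[0]:.1f} m, speed {state[1]:.1f} m/s")
```

On a curved two-dimensional surface the acceleration term would additionally depend on the local slope at the particle's position, but the integration scheme itself is unchanged.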

Relevance:

30.00%

Publisher:

Abstract:

By measuring the total crack lengths (TCL) along a gunshot wound channel simulated in ordnance gelatine, one can calculate the energy transferred by a projectile to the surrounding tissue along its course. Visual quantitative TCL analysis of cut slices of ordnance gelatine blocks is unreliable owing to the poor visibility of cracks and the likely introduction of secondary cracks during slicing. Furthermore, gelatine TCL patterns are difficult to preserve because the internal structures of gelatine deteriorate with age and the gelatine itself tends to decompose. By contrast, using computed tomography (CT) software for TCL analysis in gelatine, cracks on 1-cm-thick slices can be easily detected, measured and preserved. In this experiment, CT TCL analyses were applied to gunshots fired into gelatine blocks with three different ammunition types (9-mm Luger full metal jacket, .44 Remington Magnum semi-jacketed hollow point and 7.62 × 51 RWS Cone-Point). The resulting TCL curves reflected the three projectiles' capacity to transfer energy to the surrounding tissue very accurately and clearly showed the typical differences in energy transfer. We believe that CT is a useful tool for evaluating gunshot wound profiles with the TCL method and is indeed superior to conventional methods based on physical slicing of the gelatine.
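As a hedged illustration of the TCL workflow (invented crack measurements, not the study's data), the cracks traced on each 1-cm CT slice are summed to give one TCL value per slice, and the TCL-versus-depth curve then serves as a proxy for the energy transferred along the wound channel.

```python
# Hypothetical per-slice crack measurements (cm) for one wound channel in gelatine;
# each inner list holds the crack lengths traced on one 1-cm-thick CT slice.
cracks_per_slice = [
    [0.5, 0.3],            # slice 1 (entry)
    [1.2, 0.8, 0.6],
    [2.4, 1.9, 1.1, 0.7],  # region of maximum energy transfer
    [1.0, 0.9],
    [0.4],                 # slice 5 (near the end of the channel)
]

tcl_curve = [sum(slice_cracks) for slice_cracks in cracks_per_slice]
for depth_cm, tcl in enumerate(tcl_curve, start=1):
    print(f"depth {depth_cm:2d} cm: TCL = {tcl:.1f} cm")
```

Converting TCL to absolute energy per unit path length would additionally require an empirical calibration factor for the gelatine formulation used.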

Relevance:

30.00%

Publisher:

Abstract:

A novel microfluidic method is proposed for studying diffusion of small molecules in a hydrogel. Microfluidic devices were prepared with semi-permeable microchannels defined by crosslinked poly(ethylene glycol) (PEG). Uptake of dye molecules from aqueous solutions flowing through the microchannels was observed optically, and diffusion of the dye into the hydrogel was quantified. To complement the diffusion measurements from the microfluidic studies, nuclear magnetic resonance (NMR) characterization of the diffusion of dye in the PEG hydrogels was performed. The diffusion of small molecules in a hydrogel is relevant to applications such as drug delivery and to modeling transport for tissue engineering, and it depends on the extent of crosslinking within the gel, the gel structure, and interactions between the diffusing species and the hydrogel network. These effects were studied in a model environment (a semi-infinite slab) at the hydrogel-fluid boundary in a microfluidic device. The microfluidic devices containing PEG microchannels were fabricated using photolithography. The unsteady diffusion of small molecules (dyes) within the microfluidic device was monitored and recorded with a digital microscope, and the images were analyzed with digital-microscopy and image-analysis techniques to obtain concentration profiles over time. Fitting a diffusion model to these concentration-versus-position data yielded a diffusion coefficient, which was compared with that from complementary NMR analysis: a pulsed field gradient (PFG) method was used to investigate and quantify small-molecule diffusion in the hydrogels. There is good agreement between the diffusion coefficients obtained from the microfluidic methods and those found in the NMR studies. The microfluidic approach used in this research enables the study of diffusion at length scales approaching those of vasculature, facilitating models for studying drug elution from hydrogels in blood-contacting applications.
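For the semi-infinite-slab geometry used here, diffusion from a boundary held at constant concentration follows C(x, t) = C0 erfc(x / (2 sqrt(D t))), so D can be estimated by fitting this profile to the measured concentration-versus-position data. A minimal sketch with synthetic data standing in for the calibrated microscope intensities:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def profile(x, D, c0, t=600.0):
    """Concentration in a semi-infinite slab with a constant boundary concentration c0,
    evaluated at depth x (m) and time t (s)."""
    return c0 * erfc(x / (2.0 * np.sqrt(D * t)))

# Synthetic "measured" profile at t = 600 s with D_true = 5e-10 m^2/s plus noise,
# standing in for the intensity profiles extracted from the digital microscope images.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 400e-6, 50)                  # depth into the PEG gel, m
c_meas = profile(x, 5e-10, 1.0) + rng.normal(0.0, 0.02, x.size)

(D_fit, c0_fit), _ = curve_fit(profile, x, c_meas, p0=[1e-10, 1.0])
print(f"fitted D = {D_fit:.2e} m^2/s, boundary concentration = {c0_fit:.2f}")
```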

Relevance:

30.00%

Publisher:

Abstract:

Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: multivariate regression, neural networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a 'Simple Committee' technique that averaged predictions from a set of 10 input spaces pre-selected using the training data, and a 'Minimum Variance Committee' technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements obtained from using a single best transformed input space (the Best Combination technique), the Simple Committee technique and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input-space transformation were unaffected by changes in the hardware or in the calibration of the underlying GT-Power model.
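A hedged sketch of the 'Simple Committee' idea with generic synthetic data (the transformations below are simple algebraic stand-ins for the GT-Power-derived input spaces used in the actual work): a k-Nearest Neighbor model is trained in each transformed input space and the predictions are averaged.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
X_train = rng.uniform(size=(200, 4))          # synthetic stand-in for raw engine inputs
y_train = np.exp(X_train[:, 0]) * X_train[:, 1] + 0.1 * rng.normal(size=200)  # "smoke" response
X_test = rng.uniform(size=(20, 4))

# Stand-ins for physics-derived input-space transformations; in the study these
# came from GT-Power simulation, here they are simple algebraic surrogates.
transforms = [
    lambda X: X,                                           # untransformed space
    lambda X: np.column_stack([X[:, 0] * X[:, 1], X[:, 2:]]),
    lambda X: np.log1p(X),
]

# Simple Committee: average the k-NN predictions made in each input space.
preds = []
for tf in transforms:
    knn = KNeighborsRegressor(n_neighbors=5).fit(tf(X_train), y_train)
    preds.append(knn.predict(tf(X_test)))
committee_prediction = np.mean(preds, axis=0)
print(committee_prediction[:5])
```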

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: Meta-analysis of studies of the accuracy of diagnostic tests currently uses a variety of methods. Statistically rigorous hierarchical models require expertise and sophisticated software. We assessed whether any of the simpler methods can in practice give adequately accurate and reliable results. STUDY DESIGN AND SETTING: We reviewed six methods for meta-analysis of diagnostic accuracy: four simple commonly used methods (simple pooling, separate random-effects meta-analyses of sensitivity and specificity, separate meta-analyses of positive and negative likelihood ratios, and the Littenberg-Moses summary receiver operating characteristic [ROC] curve) and two more statistically rigorous approaches using hierarchical models (bivariate random-effects meta-analysis and hierarchical summary ROC curve analysis). We applied the methods to data from a sample of eight systematic reviews chosen to illustrate a variety of patterns of results. RESULTS: In each meta-analysis, there was substantial heterogeneity between the results of different studies. Simple pooling of results gave misleading summary estimates of sensitivity and specificity in some meta-analyses, and the Littenberg-Moses method produced summary ROC curves that diverged from those produced by more rigorous methods in some situations. CONCLUSION: The closely related hierarchical summary ROC curve or bivariate models should be used as the standard method for meta-analysis of diagnostic accuracy.
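One of the simple methods compared, separate random-effects meta-analysis of sensitivity (and analogously of specificity), can be sketched with invented 2x2 counts; the DerSimonian-Laird estimator pools logit sensitivities across studies.

```python
import numpy as np

# Hypothetical true-positive / false-negative counts from five primary studies.
tp = np.array([45, 30, 60, 22, 80])
fn = np.array([ 5, 10,  8,  6, 15])

y = np.log(tp / fn)            # logit sensitivity per study
v = 1.0 / tp + 1.0 / fn        # approximate within-study variance

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

# Random-effects pooled logit sensitivity, back-transformed to a proportion.
w_star = 1.0 / (v + tau2)
y_pooled = np.sum(w_star * y) / np.sum(w_star)
print(f"tau^2 = {tau2:.3f}, pooled sensitivity = {1 / (1 + np.exp(-y_pooled)):.3f}")
```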

Relevance:

30.00%

Publisher:

Abstract:

The central question of this paper is how to improve the production process by closing the gap between the industrial designers and software engineers of television (TV)-based User Interfaces (UIs) in an industrial environment. Software engineers are highly interested in whether one UI design can be converted into several fully functional UIs for TV products with different screen properties. The aim of the software engineers is to apply automatic layout and scaling in order to speed up and improve the production process. However, the question is whether a UI design lends itself to such automatic layout and scaling. This is investigated by analysing a prototype UI design produced by industrial designers. In a first requirements study, industrial designers created meta-annotations on top of their UI design in order to disclose their design rationale for discussions with software engineers. In a second study, five (out of ten) industrial designers assessed the potential of four different meta-annotation approaches. The questions were which annotation method the industrial designers would prefer and whether it could satisfy the technical requirements of the software engineering process. One main result is that the industrial designers preferred the method they were already familiar with, which therefore seems to be the most effective one, although the main objective of automatic layout and scaling could still not be achieved.

Relevance:

30.00%

Publisher:

Abstract:

Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands particularly to gain from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O'Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008, among many others). Although there is great value in computationally assisted stemmatology, providing as it does a reproducible result and allowing access to the relevant methodological process in related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state of the art effectively forces scholars to choose between making a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) and making no judgment at all (the unweighted phylogenetic approach). Some basis for judging the significance of variation is sorely needed for medieval text criticism in particular. By this, we mean that there is a need for a statistical, empirical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have been different from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analysing one or more stemma hypotheses against the variation model. We apply this method to three 'artificial traditions' (i.e. texts copied under laboratory conditions by scholars to study the properties of text variation) and four genuine medieval traditions whose transmission history is known or deduced in varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate some of the wide variety of calculations that can be made using our model. Certain of our results call sharply into question the utility of excluding 'trivial' variation such as orthographic and spelling changes from stemmatic analysis.
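The evaluation of a stemma hypothesis against observed variation can be illustrated, in a much-simplified form that is not the model presented in the article, by counting the minimum number of changes each variant location requires on the hypothesised tree (Fitch parsimony); locations that require many changes conflict with the hypothesis. The witnesses and readings below are invented.

```python
# Much-simplified sketch: score a hypothesised copying tree (stemma) by the
# minimum number of state changes each variant location requires (Fitch parsimony).
stemma = ("arch", [("a", [("A", []), ("B", [])]),
                   ("b", [("C", []), ("D", [])])])   # hypothetical stemma of witnesses A-D

# Readings of two variant locations in the four surviving witnesses.
variants = [
    {"A": "sayde", "B": "sayde", "C": "seide", "D": "seide"},   # fits the tree
    {"A": "honde", "B": "hand",  "C": "honde", "D": "hand"},    # conflicts with it
]

def fitch(node, readings):
    """Return (possible state set, change count) for the subtree rooted at node."""
    name, children = node
    if not children:                       # leaf: a surviving witness
        return {readings[name]}, 0
    sets, changes = zip(*(fitch(child, readings) for child in children))
    inter = set.intersection(*sets)
    if inter:
        return inter, sum(changes)
    return set.union(*sets), sum(changes) + 1

for readings in variants:
    _, cost = fitch(stemma, readings)
    print(readings, "-> minimum changes on this stemma:", cost)
```

Weighting or excluding particular classes of variation (e.g. spelling variants) then amounts to changing how such per-location counts are aggregated.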

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Prospective memory (PM), the ability to remember to perform intended activities in the future (Kliegel & Jäger, 2007), is crucial for success in everyday life. PM seems to improve gradually over the childhood years (Zimmermann & Meier, 2006), but little is yet known about PM competences in young school children in general, and even less about the factors influencing its development. A number of studies suggest that executive functions (EF) are potentially influencing processes (Ford, Driscoll, Shum & Macaulay, 2012; Mahy & Moses, 2011). Additionally, metacognitive processes (MC: monitoring and control) are assumed to be involved in optimizing one's performance (Krebs & Roebers, 2010; 2012; Roebers, Schmid, & Roderer, 2009). Yet the relations between PM, EF and MC remain relatively unspecified. We intend to examine empirically the structural relations between these constructs. Method: A cross-sectional study including 119 second graders (M_age = 95.03 months, SD = 4.82) will be presented. Participants (n = 68 girls) completed three EF tasks (Stroop, updating, shifting), a computerised event-based PM task and a MC spelling task. The latent variables PM, EF and MC, represented by manifest variables derived from these tasks, were interrelated by structural equation modelling. Results: Analyses revealed clear associations between the three cognitive constructs PM, EF and MC (r_PM-EF = .45, r_PM-MC = .23, r_EF-MC = .20). A three-factor model, as opposed to one- or two-factor models, fitted the data excellently (χ²(17, N = 119) = 18.86, p = .34, RMSEA = .030, CFI = .990, TLI = .978). Discussion: The results indicate that already in young elementary school children, PM, EF and MC are empirically well distinguishable but nevertheless substantially interrelated. PM and EF seem to share a substantial amount of variance, while for MC more unique processes may be assumed.

Relevance:

30.00%

Publisher:

Abstract:

Identifying drivers of species diversity is a major challenge in understanding and predicting the dynamics of species-rich semi-natural grasslands. In temperate grasslands in particular, changes in land use and their consequences (increasing fragmentation, the ongoing loss of habitat and the declining importance of regional processes such as seed dispersal by livestock) are considered key drivers of the diversity loss witnessed over the last decades. It is a largely unresolved question to what degree current temperate grassland communities already reflect a decline of regional processes such as longer-distance seed dispersal. Answering this question is challenging, since it requires both a mechanistic approach to community dynamics and a data base sufficient to identify general patterns. Here, we present results of a local individual- and trait-based community model that was initialized with plant functional types (PFTs) derived from an extensive empirical data set of species-rich grasslands within the 'Biodiversity Exploratories' in Germany. The model's driving processes included above- and belowground competition, dynamic resource allocation to shoots and roots, clonal growth, grazing, and local seed dispersal. To test for the impact of regional processes we also simulated seed input from a regional species pool. Model output, with and without regional seed input, was compared with empirical community response patterns along a grazing gradient. Simulated response patterns of changes in PFT richness, Shannon diversity, and biomass production matched the observed grazing response patterns surprisingly well when only local processes were considered. Even low levels of additional regional seed input led to stronger deviations from the empirical community patterns. While these findings cannot rule out that regional processes other than those considered in the modeling study play a role in shaping the local grassland communities, our comparison indicates that European grasslands are largely isolated, i.e. local mechanisms explain the observed community patterns to a large extent.
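The local-versus-regional contrast explored in these simulations can be caricatured with a toy lottery model (a sketch under strong simplifying assumptions, not the trait-based model used in the study): grid cells hold plant functional types, dying individuals are replaced from the local neighbourhood, and a regional immigration rate optionally adds seed input from a larger pool.

```python
import numpy as np

def simulate(regional_rate, n_pft_local=6, n_pft_regional=20,
             size=40, steps=500, mortality=0.1, seed=0):
    """Toy lottery model: grid cells hold PFT identities; dead cells are recolonised
    from a random neighbour, or from a regional pool with probability regional_rate."""
    rng = np.random.default_rng(seed)
    grid = rng.integers(0, n_pft_local, size=(size, size))
    for _ in range(steps):
        dead = rng.random((size, size)) < mortality
        for i, j in zip(*np.where(dead)):
            if rng.random() < regional_rate:
                grid[i, j] = rng.integers(0, n_pft_regional)   # regional seed input
            else:                                              # local dispersal only
                di, dj = rng.choice([-1, 0, 1]), rng.choice([-1, 0, 1])
                grid[i, j] = grid[(i + di) % size, (j + dj) % size]
    return len(np.unique(grid))        # realised PFT richness

for rate in (0.0, 0.01, 0.1):
    print(f"regional seed input {rate:>4}: richness = {simulate(rate)}")
```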

Relevance:

30.00%

Publisher:

Abstract:

In contrast to preoperative brain tumor segmentation, the problem of postoperative brain tumor segmentation has rarely been approached so far. We present a fully automatic segmentation method using multimodal magnetic resonance image data and patient-specific semi-supervised learning. The idea behind our semi-supervised approach is to effectively fuse information from the pre- and postoperative image data of the same patient to improve segmentation of the postoperative image. We pose image segmentation as a classification problem and solve it by adopting a semi-supervised decision forest. The method is evaluated on a cohort of 10 high-grade glioma patients, with segmentation performance and computation time comparable or superior to a state-of-the-art brain tumor segmentation method. Moreover, our results confirm that the inclusion of preoperative MR images leads to better performance in postoperative brain tumor segmentation.
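The patient-specific semi-supervised idea can be sketched generically (scikit-learn self-training around a random forest on synthetic voxel features; the paper uses a dedicated semi-supervised decision forest rather than this wrapper): labelled voxels from the preoperative scan are pooled with unlabelled voxels from the postoperative scan, and the forest iteratively assigns labels to the latter.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows are voxels, columns are multimodal MR intensities.
X_pre = rng.normal(size=(500, 4))                        # preoperative voxels
y_pre = (X_pre[:, 0] + X_pre[:, 1] > 1.0).astype(int)    # known tumour labels
X_post = rng.normal(size=(500, 4))                       # postoperative voxels

# Pool labelled (preoperative) and unlabelled (postoperative) voxels;
# unlabelled samples are marked with -1, as scikit-learn expects.
X_all = np.vstack([X_pre, X_post])
y_all = np.concatenate([y_pre, np.full(len(X_post), -1)])

model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X_all, y_all)
post_labels = model.predict(X_post)
print("voxels labelled tumour in the postoperative scan:", int(post_labels.sum()))
```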

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Record linkage of existing individual health care data is an efficient way to answer important epidemiological research questions. Reuse of individual health-related data faces several problems: either a unique personal identifier, like a social security number, is not available, or non-unique person-identifiable information, like names, is privacy protected and cannot be accessed. A solution to protect privacy in probabilistic record linkage is to encrypt this sensitive information. Unfortunately, the encrypted hash codes of two names differ completely if the plain names differ by only a single character, so standard encryption methods cannot be applied. To overcome these challenges, we developed the Privacy Preserving Probabilistic Record Linkage (P3RL) method. METHODS In this Privacy Preserving Probabilistic Record Linkage method we apply a three-party protocol, with two sites collecting individual data and an independent trusted linkage center as the third partner. Our method consists of three main steps: pre-processing, encryption and probabilistic record linkage. Data pre-processing and encryption are done at the sites by local personnel. To guarantee similar quality and format of variables and an identical encryption procedure at each site, the linkage center generates semi-automated pre-processing and encryption templates. To retrieve the information (i.e. data structure) needed to create the templates without ever accessing plain person-identifiable information, we introduced a novel method of data masking. Sensitive string variables are encrypted using Bloom filters, which enables the calculation of similarity coefficients. For date variables, we developed special encryption procedures to handle the most common date errors. The linkage center performs probabilistic record linkage with encrypted person-identifiable information and plain non-sensitive variables. RESULTS In this paper we describe step by step how to link existing health-related data using encryption methods to preserve the privacy of persons in the study. CONCLUSION Privacy Preserving Probabilistic Record Linkage expands record linkage facilities in settings where a unique identifier is unavailable and/or regulations restrict access to the non-unique person-identifiable information needed to link existing health-related data sets. Automated pre-processing and encryption fully protect sensitive information, ensuring participant confidentiality. This method is suitable not just for epidemiological research but also for any setting with similar challenges.
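The Bloom-filter step can be sketched generically (bigram encoding with illustrative parameters; the P3RL specification, including its keyed hashing scheme and filter length, may differ): each name is decomposed into character bigrams, the bigrams are hashed into a bit set, and two encodings are compared with the Dice coefficient without revealing the plain names.

```python
import hashlib

M = 128   # Bloom filter length in bits (illustrative choice)
K = 4     # number of hash functions per bigram (illustrative choice)

def bloom(name, m=M, k=K):
    """Encode a name as the set of bit positions set by its padded character bigrams.
    Real deployments use secret-keyed HMACs; salted SHA-256 is used here for brevity."""
    padded = f"_{name.lower()}_"
    bits = set()
    for i in range(len(padded) - 1):
        bigram = padded[i:i + 2]
        for j in range(k):
            digest = hashlib.sha256(f"{j}:{bigram}".encode()).hexdigest()
            bits.add(int(digest, 16) % m)
    return bits

def dice(a, b):
    """Dice similarity of two Bloom filters represented as sets of set bit positions."""
    return 2 * len(a & b) / (len(a) + len(b))

# Similar names keep a high similarity even though their exact hash codes differ.
print(f"Meier vs Meyer: {dice(bloom('Meier'), bloom('Meyer')):.2f}")
print(f"Meier vs Smith: {dice(bloom('Meier'), bloom('Smith')):.2f}")
```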

Relevance:

30.00%

Publisher:

Abstract:

The growth rate of atmospheric carbon dioxide (CO2) concentrations since industrialization is characterized by large interannual variability, mostly resulting from variability in CO2 uptake by terrestrial ecosystems (typically termed carbon sink). However, the contributions of regional ecosystems to that variability are not well known. Using an ensemble of ecosystem and land-surface models and an empirical observation-based product of global gross primary production, we show that the mean sink, trend, and interannual variability in CO2 uptake by terrestrial ecosystems are dominated by distinct biogeographic regions. Whereas the mean sink is dominated by highly productive lands (mainly tropical forests), the trend and interannual variability of the sink are dominated by semi-arid ecosystems whose carbon balance is strongly associated with circulation-driven variations in both precipitation and temperature.

Relevance:

30.00%

Publisher:

Abstract:

Genetic anticipation is defined as a decrease in age of onset, or an increase in severity, as a disorder is transmitted through subsequent generations. Anticipation has been noted in the literature for over a century. Recently, anticipation in several diseases, including Huntington's Disease, Myotonic Dystrophy and Fragile X Syndrome, was shown to be caused by expansion of triplet repeats. Anticipation effects have also been observed in numerous mental disorders (e.g. Schizophrenia, Bipolar Disorder), cancers (Li-Fraumeni Syndrome, Leukemia) and other complex diseases. Several statistical methods have been applied to determine whether anticipation is a true phenomenon in a particular disorder, including standard statistical tests and newly developed affected parent/affected child pair methods. These methods have been shown to be inappropriate for assessing anticipation for a variety of reasons, including familial correlation and low power. Therefore, we have developed family-based likelihood modeling approaches to model the underlying transmission of the disease gene and penetrance function and hence detect anticipation. These methods can be applied in extended families, thus improving the power to detect anticipation compared with existing methods based only upon parents and children. The first method we propose is based on the regressive logistic hazard model; this approach models anticipation by a generational covariate. The second method allows alleles to mutate as they are transmitted from parents to offspring and is appropriate for modeling the known triplet-repeat diseases, in which the disease alleles can become more deleterious as they are transmitted across generations. To evaluate the new methods, we performed extensive simulation studies on data simulated under different conditions to assess the effectiveness of the algorithms in detecting genetic anticipation. Analysis by the first method yielded empirical power greater than 87%, based on the 5% type I error critical value identified in each simulation, depending on the method of data generation and the current-age criteria. Analysis by the second method was not possible due to the current formulation of the software. The application of this method to Huntington's Disease and Li-Fraumeni Syndrome data sets revealed evidence for a generation effect in both cases.
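The notion of modelling anticipation through a generational covariate can be illustrated with a standard survival model on simulated ages of onset (a Cox proportional-hazards stand-in, not the regressive logistic hazard model developed here, and one that, like the simple methods criticised above, ignores familial correlation):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300

# Simulated pedigree members: generation covariate (0 = grandparents, 1 = parents,
# 2 = children); anticipation is built in as earlier onset in later generations.
generation = rng.integers(0, 3, size=n)
onset_age = rng.exponential(scale=60.0 * np.exp(-0.3 * generation))
censoring_age = rng.uniform(20.0, 90.0, size=n)

df = pd.DataFrame({
    "generation": generation,
    "age": np.minimum(onset_age, censoring_age),
    "affected": (onset_age <= censoring_age).astype(int),
})

# A positive coefficient on 'generation' (hazard increasing down the pedigree)
# is consistent with anticipation.
cph = CoxPHFitter().fit(df, duration_col="age", event_col="affected")
print(cph.summary[["coef", "p"]])
```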

Relevance:

30.00%

Publisher:

Abstract:

Random Forests™ is reported to be one of the most accurate classification algorithms for complex data analysis. It shows excellent performance even when most predictors are noisy and the number of variables is much larger than the number of observations. In this thesis, Random Forests was applied to a large-scale lung cancer case-control study. A novel way of automatically selecting prognostic factors was proposed, and a synthetic positive control was used to validate the Random Forests method. Throughout this study we showed that Random Forests can deal with a large number of weak input variables without overfitting and can account for non-additive interactions between these input variables. Random Forests can also be used for variable selection without being adversely affected by collinearities. Random Forests can handle large-scale data sets without rigorous data preprocessing and has a robust variable-importance ranking measure. We propose a novel variable selection method, in the context of Random Forests, that uses the data noise level as the cut-off value to determine the subset of important predictors. This new approach enhances the ability of the Random Forests algorithm to automatically identify important predictors in complex data. The cut-off value can also be adjusted based on the results of the synthetic positive control experiments. When the data set had a high variables-to-observations ratio, Random Forests complemented the established logistic regression. This study suggests that Random Forests is recommended for such high-dimensional data: one can use Random Forests to select the important variables and then use logistic regression, or Random Forests itself, to estimate the effect sizes of the predictors and to classify new observations. We also found that the mean decrease in accuracy is a more reliable variable-ranking measure than the mean decrease in Gini.
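One way to read the noise-level cut-off idea (a sketch with synthetic data and scikit-learn rather than the original Random Forests implementation; the exact cut-off rule here is an assumption): append an explicitly synthetic noise predictor and retain only the variables whose permutation importance (mean decrease in accuracy) exceeds that of the noise column.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 400

# Synthetic case-control data: two informative predictors, three pure-noise ones.
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

# Append an explicit synthetic noise variable to serve as the importance cut-off.
X_aug = np.column_stack([X, rng.normal(size=n)])
names = ["x1", "x2", "x3", "x4", "x5", "noise"]

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_aug, y)
imp = permutation_importance(rf, X_aug, y, n_repeats=20, random_state=0)

cutoff = imp.importances_mean[-1]          # importance of the synthetic noise column
selected = [nm for nm, v in zip(names, imp.importances_mean) if v > cutoff]
print("importances:", dict(zip(names, imp.importances_mean.round(3))))
print("selected predictors:", selected)
```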

Relevance:

30.00%

Publisher:

Abstract:

In the epidemiology literature, it is often necessary to investigate relationships between means where the levels of the experiment are monotone sets forming a partition of the range of sampling values. The analysis of these group means is generally performed using classical analysis of variance (ANOVA); however, this method has never been challenged. In this dissertation, we formulate and present an examination of its validity. First, the classical assumptions of normality and constant variance are not always true. Second, under the null hypothesis of equal means, the test statistic of the classical ANOVA technique is still valid. Third, when the hypothesis of equal means is rejected, the classical analysis techniques for hypotheses about contrasts are not valid. Fourth, under the alternative hypothesis, we can show that the monotone property of the levels leads to the conclusion that the means are monotone. Fifth, we propose an appropriate method for handling the data in this situation.
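The classical analysis under scrutiny, in its simplest form (invented group data): a one-way ANOVA comparing means across groups defined by a monotone partition of the sampling range. Note that the unequal group variances below already violate one of the classical assumptions questioned above.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Invented exposure data partitioned into three monotone intervals of the
# sampling range; the classical approach compares the group means with ANOVA.
low    = rng.normal(loc=1.0, scale=0.5, size=30)
medium = rng.normal(loc=1.4, scale=0.8, size=30)   # note the unequal variances
high   = rng.normal(loc=1.9, scale=1.2, size=30)

f_stat, p_value = f_oneway(low, medium, high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```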