967 results for Modeling methods
Abstract:
This paper presents a new approach to improving the effectiveness of autonomous systems that deal with dynamic environments. The basis of the approach is to find repeating patterns of behavior in the dynamic elements of the system, and then to use predictions of the repeating elements to better plan goal-directed behavior. It is a layered approach involving classifying, modeling, predicting, and exploiting. Classifying involves using observations to place the moving elements into previously defined classes. Modeling involves recording features of the behavior on a coarse-grained grid. Exploitation is achieved by integrating predictions from the model into the behavior selection module to improve the utility of the robot's actions. This is in contrast to typical approaches that use the model to select between different strategies or plays. Three methods of adaptation to the dynamic features of the environment are explored. The effectiveness of each method is determined using statistical tests over a number of repeated experiments. The work is presented in the context of predicting opponent behavior in the highly dynamic, multi-agent robot soccer domain (RoboCup).
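The abstract does not include code, but the "modeling" layer it describes — counting observed behavior on a coarse-grained grid and predicting the most likely next cell — can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation; the class name, cell size, and transition-count representation are all assumptions.

```python
from collections import defaultdict

class CoarseGridModel:
    """Records observed movements of a dynamic element on a coarse-grained
    grid and predicts its most likely next cell (illustrative sketch only)."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        # transitions[from_cell][to_cell] -> observation count
        self.transitions = defaultdict(lambda: defaultdict(int))

    def _cell(self, x, y):
        # Map a continuous position onto a coarse grid cell.
        return (int(x // self.cell_size), int(y // self.cell_size))

    def observe(self, prev_pos, next_pos):
        # Count transitions between coarse grid cells.
        self.transitions[self._cell(*prev_pos)][self._cell(*next_pos)] += 1

    def predict(self, pos):
        # Return the most frequently observed successor cell, or None.
        counts = self.transitions.get(self._cell(*pos))
        if not counts:
            return None
        return max(counts, key=counts.get)

model = CoarseGridModel(cell_size=1.0)
for _ in range(10):
    model.observe((0.2, 0.3), (1.4, 0.3))   # opponent repeatedly moves right
print(model.predict((0.5, 0.5)))             # → (1, 0)
```

A behavior selection module could then weight candidate actions by the predicted cell, which is the "exploitation" step the abstract describes.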
Abstract:
Many variables of interest in social science research are nominal variables with two or more categories, such as employment status, occupation, political preference, or self-reported health status. With longitudinal survey data it is possible to analyse the transitions of individuals between different employment states or occupations (for example). In the statistical literature, models for analysing categorical dependent variables with repeated observations belong to the family of models known as generalized linear mixed models (GLMMs). The specific GLMM for a dependent variable with three or more categories is the multinomial logit random effects model. For these models, the marginal distribution of the response does not have a closed form and hence numerical integration must be used to obtain maximum likelihood estimates of the model parameters. Techniques for implementing the numerical integration are available, but they are computationally intensive, requiring an amount of computer processing time that increases with the number of clusters (or individuals) in the data, and they are not always readily accessible to the practitioner in standard software. For the purposes of analysing categorical response data from a longitudinal social survey, there is clearly a need to evaluate the existing procedures for estimating the multinomial logit random effects model in terms of accuracy, efficiency, and computing time. The computing time has significant implications for which approach researchers will prefer. In this paper we evaluate statistical software procedures that utilise adaptive Gaussian quadrature and MCMC methods, with specific application to modelling the employment status of women using a GLMM over three waves of the HILDA survey.
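The numerical integration the abstract refers to can be made concrete with a minimal sketch: the marginal likelihood of one cluster's repeated multinomial responses under a normally distributed random intercept, approximated by ordinary (not adaptive) Gauss-Hermite quadrature. The function name and data layout are assumptions for illustration; real software adapts the quadrature nodes per cluster, which this sketch omits.

```python
import numpy as np

def cluster_marginal_likelihood(y, eta, sigma, n_nodes=20):
    """Marginal likelihood of one cluster's repeated multinomial responses
    under a shared random intercept u ~ N(0, sigma^2), approximated by
    Gauss-Hermite quadrature.

    y   : (T,) observed category indices (0 = reference category)
    eta : (T, J) fixed-effect linear predictors for J categories,
          with eta[:, 0] = 0 for the reference category
    """
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    total = 0.0
    for x, w in zip(nodes, weights):
        u = np.sqrt(2.0) * sigma * x            # change of variables for N(0, sigma^2)
        lin = eta.astype(float)
        lin = lin.copy()
        lin[:, 1:] += u                          # random intercept on non-reference categories
        p = np.exp(lin)
        p /= p.sum(axis=1, keepdims=True)        # multinomial logit probabilities
        lik = np.prod(p[np.arange(len(y)), y])   # product over the repeated observations
        total += w * lik
    return total / np.sqrt(np.pi)
```

With sigma = 0 the quadrature reduces exactly to the product of ordinary multinomial logit probabilities, which gives a quick sanity check; the full log-likelihood is the sum of the log of this quantity over clusters, maximized numerically.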
Abstract:
The ability to grow microscopic spherical birefringent crystals of vaterite, a calcium carbonate mineral, has allowed the development of an optical microrheometer based on optical tweezers. However, since these crystals are birefringent, and worse, are expected to have non-uniform birefringence, computational modelling of the microrheometer is a highly challenging task. Modelling the microrheometer, and optical tweezers in general, typically requires large numbers of repeated calculations for the same trapped particle. This places strong demands on the efficiency of the computational methods used. While our usual method of choice for computational modelling of optical tweezers, the T-matrix method, meets this requirement of efficiency, it is restricted to homogeneous isotropic particles. General methods that can model complex structures such as the vaterite particles, for example the finite-difference time-domain (FDTD) or finite-difference frequency-domain (FDFD) methods, are inefficient. Therefore, we have developed a hybrid FDFD/T-matrix method that combines the generality of volume-discretisation methods such as FDFD with the efficiency of the T-matrix method. We have used this hybrid method to calculate optical forces and torques on model vaterite spheres in optical traps. We present and compare the results of computational modelling and experimental measurements.
Abstract:
The development of models in Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from being finalized. Therefore there is a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. After being verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as typically large-scale meshes have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme or the Crank-Nicolson scheme), or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class.
Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the actual representation, best suited to the particular context, is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
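To make the layering concrete without assuming escript's own API, here is a minimal numpy sketch of the kind of high-level solution algorithm the mathematical layer would express: one Crank-Nicolson time step for the 1-D heat equation, with the linear solve playing the role of the numerical algorithm layer's generic solver. All names are illustrative, not escript code.

```python
import numpy as np

def crank_nicolson_heat(u0, kappa, dx, dt, steps):
    """Advance the 1-D heat equation u_t = kappa * u_xx with zero Dirichlet
    boundaries using the Crank-Nicolson scheme. The tridiagonal linear
    solve each step is the 'numerical algorithm layer' workload."""
    n = len(u0)
    r = kappa * dt / (2.0 * dx * dx)
    # L u approximates (dt/2) * kappa * u_xx on interior points.
    L = np.zeros((n, n))
    for i in range(1, n - 1):
        L[i, i - 1] = L[i, i + 1] = r
        L[i, i] = -2.0 * r
    A = np.eye(n) - L          # implicit half-step operator
    B = np.eye(n) + L          # explicit half-step operator
    u = u0.astype(float).copy()
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
        u[0] = u[-1] = 0.0     # enforce Dirichlet boundary conditions
    return u
```

In a layered system like the one described, only the dense `np.linalg.solve` call would change when swapping in a scalable sparse or finite element backend such as Finley; the scheme itself stays at the mathematical layer.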
Abstract:
Receptor activity modifying proteins (RAMPs) are a family of single-pass transmembrane proteins that dimerize with G-protein-coupled receptors. They may alter the ligand recognition properties of the receptors (particularly for the calcitonin receptor-like receptor, CLR). Very little structural information is available about RAMPs. Here, an ab initio model has been generated for the extracellular domain of RAMP1. The disulfide bond arrangement (Cys27-Cys82, Cys40-Cys72, and Cys57-Cys104) was determined by site-directed mutagenesis. The secondary structure (α-helices from residues 29-51, 60-80, and 87-100) was established from a consensus of predictive routines. Using these constraints, an assemblage of 25,000 structures was constructed and these were ranked using an all-atom statistical potential. The best 1000 conformations were energy minimized. The lowest scoring model was refined by molecular dynamics simulation. To validate our strategy, the same methods were applied to three proteins of known structure: PDB:1HP8, PDB:1V54 chain H (residues 21-85), and PDB:1T0P. When compared to the crystal structures, the models had root mean-square deviations of 3.8 Å, 4.1 Å, and 4.0 Å, respectively. The model of RAMP1 suggested that Phe93, Tyr100, and Phe101 form a binding interface for CLR, whereas Trp74 and Phe92 may interact with ligands that bind to the CLR/RAMP1 heterodimer. © 2006 by the Biophysical Society.
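The validation step in the abstract compares models to crystal structures by root mean-square deviation after superposition. A standard way to compute this, not necessarily the exact procedure the authors used, is the Kabsch algorithm; the sketch below is a self-contained illustration.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal rigid-body
    superposition (Kabsch algorithm), as used when comparing a predicted
    model against a crystal structure."""
    P = P - P.mean(axis=0)                  # remove translation
    Q = Q - Q.mean(axis=0)
    # Optimal rotation from the SVD of the covariance matrix P^T Q.
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))      # guard against improper rotation (reflection)
    D = np.diag([1.0, 1.0, d])
    R = V @ D @ Wt
    diff = P @ R - Q
    return np.sqrt((diff ** 2).sum() / len(P))
```

Applied to corresponding Cα coordinates of a model and its reference structure, this yields RMSD values directly comparable to the 3.8-4.1 Å figures quoted above.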
Abstract:
A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
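The simplest member of the family the book covers, the naive mean field method, can be shown in a few lines: for an Ising-type model, replace intractable expectations by the self-consistency equations m_i = tanh(h_i + Σ_j J_ij m_j) and iterate. This is a generic textbook sketch, not an excerpt from the book.

```python
import numpy as np

def mean_field_magnetizations(J, h, n_iter=200, damping=0.5):
    """Naive mean field approximation for an Ising-type model with
    symmetric couplings J (zero diagonal) and external fields h:
    iterate m_i = tanh(h_i + sum_j J_ij m_j) to a fixed point."""
    m = np.zeros(len(h))
    for _ in range(n_iter):
        m_new = np.tanh(h + J @ m)
        m = damping * m + (1.0 - damping) * m_new  # damped update for stability
    return m
```

The TAP approach mentioned above adds an Onsager reaction term to the argument of tanh; the variational view interprets the same fixed point as the optimum over fully factorized trial distributions.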
Abstract:
Today, the data available to tackle many scientific challenges is vast in quantity and diverse in nature. The exploration of heterogeneous information spaces requires suitable mining algorithms as well as effective visual interfaces. miniDVMS v1.8 provides a flexible visual data mining framework which combines advanced projection algorithms developed in the machine learning domain with visual techniques developed in the information visualisation domain. The advantage of this interface is that the user is directly involved in the data mining process. Principled projection methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), are integrated with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, and user interaction facilities, to provide this integrated visual data mining framework. The software also supports conventional visualisation techniques such as principal component analysis (PCA), Neuroscale, and PhiVis. This user manual gives an overview of the purpose of the software tool, highlights some of the issues to be taken care of while creating a new model, and provides information about how to install and use the tool. The user manual does not require readers to be familiar with the algorithms it implements. Basic computing skills are enough to operate the software.
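Of the projection methods listed, PCA is the simplest to illustrate: project data rows onto the leading principal axes obtained from the SVD of the centred data matrix. This generic sketch shows the kind of linear projection such tools offer alongside GTM; it is not miniDVMS code.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project data rows onto their leading principal components,
    the conventional linear counterpart to GTM-style projections."""
    Xc = X - X.mean(axis=0)                       # centre the data
    # Principal axes are the right singular vectors of the centred matrix.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

GTM and HGTM replace this linear map with a nonlinear latent-variable mapping, which is why the manual pairs them with diagnostics such as magnification factors and directional curvatures.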
Abstract:
PURPOSE. A methodology for noninvasively characterizing the three-dimensional (3-D) shape of the complete human eye is not currently available for research into ocular diseases that have a structural substrate, such as myopia. A novel application of a magnetic resonance imaging (MRI) acquisition and analysis technique is presented that, for the first time, allows the 3-D shape of the eye to be investigated fully. METHODS. The technique involves the acquisition of a T2-weighted MRI, which is optimized to reveal the fluid-filled chambers of the eye. Automatic segmentation and meshing algorithms generate a 3-D surface model, which can be shaded with morphologic parameters such as distance from the posterior corneal pole and deviation from sphericity. Full details of the method are illustrated with data from 14 eyes of seven individuals. The spatial accuracy of the calculated models is demonstrated by comparing the MRI-derived axial lengths with values measured in the same eyes using interferometry. RESULTS. The color-coded eye models showed substantial variation in the absolute size of the 14 eyes. Variations in the sphericity of the eyes were also evident, with some appearing approximately spherical, others clearly oblate, and one slightly prolate. Nasal-temporal asymmetries were noted in some subjects. CONCLUSIONS. The MRI acquisition and analysis technique allows a novel way of examining 3-D ocular shape. The ability to stratify and analyze eye shape, ocular volume, and sphericity will further extend the understanding of which specific biometric parameters predispose emmetropic children subsequently to develop myopia. Copyright © Association for Research in Vision and Ophthalmology.
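The "deviation from sphericity" shading described in METHODS implies fitting a reference sphere to the segmented surface. One standard way to do this, offered here only as an illustrative sketch of the idea rather than the authors' pipeline, is a linear least-squares sphere fit followed by per-vertex signed distances.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to (N, 3) surface points: the constraint
    ||x - c||^2 = r^2 linearises to 2 c.x + (r^2 - c.c) = x.x, which is
    an ordinary linear system in (c, r^2 - c.c)."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    r = np.sqrt(sol[3] + c @ c)
    return c, r

def sphericity_deviation(points):
    """Signed distance of each surface point from the best-fit sphere;
    positive values bulge outward, negative values lie inside."""
    c, r = fit_sphere(points)
    return np.linalg.norm(points - c, axis=1) - r
```

Shading a 3-D mesh with these signed distances makes oblate or prolate globes immediately visible, which is how the color-coded models in RESULTS distinguish the eye shapes.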
Abstract:
Gene expression is frequently regulated by multiple transcription factors (TFs). Thermostatistical methods allow for a quantitative description of interactions between TFs, RNA polymerase and DNA, and their impact on the transcription rates. We illustrate three different scales of the thermostatistical approach: the microscale of TF molecules, the mesoscale of promoter energy levels and the macroscale of transcriptionally active and inactive cells in a cell population. We demonstrate the versatility of combinatorial transcriptional activation by exemplifying logic functions, such as AND and OR gates. We discuss a metric for cell-to-cell transcriptional activation variability known as Fermi entropy. The suitability of thermostatistical modeling is illustrated by describing the experimental data on transcriptional induction of NF-κB and the c-Fos protein.
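The AND and OR logic the abstract mentions follows directly from summing Boltzmann weights over promoter binding states. The sketch below is a generic two-TF thermodynamic occupancy model, with hypothetical names, meant only to show how the gate type is a choice of which states count as transcriptionally active.

```python
def promoter_activity(w1, w2, gate="AND"):
    """Thermostatistical promoter model with two transcription factors.
    w1 and w2 are Boltzmann weights (concentration times binding affinity)
    for each TF; activity is the equilibrium probability of the state(s)
    that recruit polymerase."""
    states = {
        "empty": 1.0,        # bare promoter (reference weight)
        "tf1": w1,           # only TF1 bound
        "tf2": w2,           # only TF2 bound
        "both": w1 * w2,     # both bound (independent binding assumed)
    }
    Z = sum(states.values())             # partition function over states
    if gate == "AND":
        active = states["both"]          # transcription needs both TFs
    else:  # OR gate: either TF alone suffices
        active = states["tf1"] + states["tf2"] + states["both"]
    return active / Z
```

Adding a cooperativity factor to the "both" state, or promoter energy levels at the mesoscale, refines the same partition-function bookkeeping; the population-level (macroscale) picture then treats this activity as the fraction of transcriptionally active cells.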
Abstract:
Purpose - To evaluate adherence to prescribed antiepileptic drugs (AEDs) in children with epilepsy using a combination of adherence-assessment methods. Methods - A total of 100 children with epilepsy (≤17 years old) were recruited. Medication adherence was determined via parental and child self-reporting (≥9 years old), medication refill data from general practitioner (GP) prescribing records, and via AED concentrations in dried blood spot (DBS) samples obtained from children at the clinic and via self- or parental-led sampling in children's own homes. The latter were assessed using population pharmacokinetic modeling. Patients were deemed nonadherent if any of these measures were indicative of nonadherence with the prescribed treatment. In addition, beliefs about medicines, parental confidence in seizure management, and the presence of depressed mood in parents were evaluated to examine their association with nonadherence in the participating children. Key Findings - The overall rate of nonadherence in children with epilepsy was 33%. Logistic regression analysis indicated that children with generalized epilepsy (vs. focal epilepsy) were more likely (odds ratio [OR] 4.7, 95% confidence interval [CI] 1.37–15.81) to be classified as nonadherent as were children whose parents have depressed mood (OR 3.6, 95% CI 1.16–11.41). Significance - This is the first study to apply the novel methodology of determining adherence via AED concentrations in clinic and home DBS samples. The present findings show that the latter, with further development, could be a useful approach to adherence assessment when combined with other measures including parent and child self-reporting. Seizure type and parental depressed mood were strongly predictive of nonadherence.
Abstract:
The evolution of cognitive neuroscience has been spurred by the development of increasingly sophisticated investigative techniques to study human cognition. In Methods in Mind, experts examine the wide variety of tools available to cognitive neuroscientists, paying particular attention to the ways in which different methods can be integrated to strengthen empirical findings and how innovative uses for established techniques can be developed. The book will be a uniquely valuable resource for the researcher seeking to expand his or her repertoire of investigative techniques. Each chapter explores a different approach. These include transcranial magnetic stimulation, cognitive neuropsychiatry, lesion studies in nonhuman primates, computational modeling, psychophysiology, single neurons and primate behavior, grid computing, eye movements, fMRI, electroencephalography, imaging genetics, magnetoencephalography, neuropharmacology, and neuroendocrinology. As mandated, authors focus on convergence and innovation in their fields; chapters highlight such cross-method innovations as the use of the fMRI signal to constrain magnetoencephalography, the use of electroencephalography (EEG) to guide rapid transcranial magnetic stimulation at a specific frequency, and the successful integration of neuroimaging and genetic analysis. Computational approaches depend on increased computing power, and one chapter describes the use of distributed or grid computing to analyze massive datasets in cyberspace. Each chapter author is a leading authority in the technique discussed.
Abstract:
This work is supported by the Hungarian Scientific Research Fund (OTKA), grant T042706.
Abstract:
This paper reviews the state of the art in measuring, modeling, and managing clogging in subsurface-flow treatment wetlands. Methods for measuring in situ hydraulic conductivity in treatment wetlands are now available, which provide valuable insight into assessing and evaluating the extent of clogging. These results, paired with the information from more traditional approaches (e.g., tracer testing and composition of the clog matter) are being incorporated into the latest treatment wetland models. Recent finite element analysis models can now simulate clogging development in subsurface-flow treatment wetlands with reasonable accuracy. Various management strategies have been developed to extend the life of clogged treatment wetlands, including gravel excavation and/or washing, chemical treatment, and application of earthworms. These strategies are compared and available cost information is reported. © 2012 Elsevier Ltd.