13 results for Modeling methods

at University of Queensland eSpace - Australia


Relevance:

60.00%

Publisher:

Abstract:

Current physiologically based pharmacokinetic (PBPK) models are inductive. We present an additional, different approach that is based on the synthetic rather than the inductive approach to modeling and simulation. It relies on object-oriented programming. A model of the referent system in its experimental context is synthesized by assembling objects that represent components such as molecules, cells, aspects of tissue architecture, catheters, etc. The single-pass perfused rat liver has been well described in evaluating hepatic drug pharmacokinetics (PK) and is the system on which we focus. In silico experiments begin with administration of objects representing actual compounds. Data are collected in a manner analogous to that in the referent PK experiments. The synthetic modeling method allows for recognition and representation of discrete event and discrete time processes, as well as heterogeneity in organization, function, and spatial effects. An application is developed for sucrose and antipyrine, administered separately and together. PBPK modeling has made extensive progress in characterizing abstracted PK properties, but this has also been its limitation. Now, other important questions and possible extensions emerge. How are these PK properties and the observed behaviors generated? The inherent heuristic limitations of traditional models have hindered getting meaningful, detailed answers to such questions. Synthetic models of the type described here are specifically intended to help answer such questions. Analogous to wet-lab experimental models, they retain their applicability even when broken apart into sub-components. Having and applying this new class of models along with traditional PK modeling methods is expected to increase the productivity of pharmaceutical research at all levels that make use of modeling and simulation.
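The synthetic, object-oriented style described above can be sketched in toy form. Everything below (the class names, the per-segment extraction probability, the segment count) is an invented illustration of assembling component objects and running a discrete-event in silico dose; it is not the authors' implementation:

```python
import random

class Compound:
    """An object representing an administered compound (illustrative only)."""
    def __init__(self, name, extraction_prob):
        self.name = name
        self.extraction_prob = extraction_prob  # chance of uptake per segment

class SinusoidSegment:
    """One discrete piece of liver tissue architecture."""
    def traverse(self, compound, rng):
        # Discrete-event step: the compound is either extracted here or passes.
        return rng.random() >= compound.extraction_prob  # True = survives

def single_pass_fraction(compound, n_segments=10, n_doses=10000, seed=1):
    """Fraction of dosed objects appearing in the outflow after one pass."""
    rng = random.Random(seed)
    segments = [SinusoidSegment() for _ in range(n_segments)]
    survived = 0
    for _ in range(n_doses):
        if all(seg.traverse(compound, rng) for seg in segments):
            survived += 1
    return survived / n_doses

# Hypothetical parameter choices: an extracellular marker vs. a drug that
# is partly extracted by each segment.
sucrose = Compound("sucrose", extraction_prob=0.0)
antipyrine = Compound("antipyrine", extraction_prob=0.02)
print(single_pass_fraction(sucrose), single_pass_fraction(antipyrine))
```

Data are "collected" at the outflow exactly as in the referent perfusion experiment, which is the point of the synthetic approach: the model is interrogated, not fitted.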

Relevance:

40.00%

Publisher:

Abstract:

In a recent study, severe distortions in the proton images of an excised, fixed, human brain in an 11.1 Tesla/40 cm MR instrument were observed, and the effect was modeled on phantom images using a finite difference time domain (FDTD) model. In the present study, we extend these simulations to a complete human head, employing a hybrid FDTD and method of moments (MoM) approach, which provides a validated method for simulating biological samples in coil structures. The effect of fixative on the image distortions is explored. Importantly, temperature distributions within the head are also simulated using a bioheat method based on parameters derived from the electromagnetic simulations. The MoM/FDTD simulations confirm that the transverse magnetic field (B1) from a ReCav resonator exhibits good homogeneity in air but strong inhomogeneity when loaded with the head, with or without fixative. The fixative serves to increase the distortions, but they are still significant for the in vivo simulations. The simulated signal intensity (SI) distribution within the sample confirms that the distortions in the experimental images are caused by the complex interactions of the incident electromagnetic fields with tissue, which is heterogeneous in terms of conductivity and permittivity. The temperature distribution is likewise heterogeneous, raising concerns regarding hot-spot generation in the sample that may exceed acceptable levels in future in vivo studies. As human imaging at 11.1 T is some time away, simulations are important in terms of predicting potential safety issues as well as evaluating practical concerns about the quality of images. Simulation on a whole human head at 11.1 T implies that the wave behavior presents significant engineering challenges for ultra-high-field (UHF) MRI.
Novel strategies will have to be employed in imaging technique and resonator design for UHF MRI to achieve the theoretical signal-to-noise ratio (SNR) improvements it offers over lower-field systems. (C) 2005 Wiley Periodicals, Inc.
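As background to the FDTD side of the hybrid approach, the leapfrog field update underlying such solvers can be sketched in one dimension. This is a generic textbook vacuum scheme with invented grid parameters, not the MoM/FDTD head model from the study:

```python
import math

# Minimal 1D FDTD sketch in normalized units (Courant number 0.5, PEC ends).
# E and H live on staggered grid points and are updated in leapfrog fashion.
N, STEPS, COURANT = 200, 300, 0.5
ez = [0.0] * N          # electric field nodes
hy = [0.0] * (N - 1)    # magnetic field nodes (staggered half-cells)

for t in range(STEPS):
    for i in range(N - 1):                      # update H from the curl of E
        hy[i] += COURANT * (ez[i + 1] - ez[i])
    for i in range(1, N - 1):                   # update E from the curl of H
        ez[i] += COURANT * (hy[i] - hy[i - 1])
    ez[N // 2] += math.exp(-((t - 40) ** 2) / 100.0)  # soft Gaussian source

peak = max(abs(v) for v in ez)
print(peak)
```

The full-head simulations extend this idea to three dimensions with heterogeneous tissue conductivity and permittivity, which is precisely what produces the simulated SI distortions.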

Relevance:

30.00%

Publisher:

Abstract:

A review is given of the fundamental studies of gas-carbon reactions using electronic structure methods over the last several decades. The three types of electronic structure methods (semi-empirical, ab initio, and density functional theory) are briefly introduced first, followed by the studies on carbon reactions with hydrogen and oxygen-containing gases (non-catalysed and catalysed). The problems yet to be solved and possible promising directions are discussed. (C) 2002 Elsevier Science Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The advantages of antennas that can resemble the shape of the body to which they are attached are obvious. However, electromagnetic modeling of such unusually shaped antennas can be difficult. In this paper, the commercially available software SolidWorks(TM) is used for accurately drawing complex shapes in conjunction with the electromagnetic software FEKO(TM) to model the EM behavior of conformal antennas. The application of SolidWorks and custom-written software allows all the required information that forms the analyzed structure to be automatically inserted into FEKO, and gives the user complete control over the antenna being modeled. This approach is illustrated by a number of simulation examples of single, wideband, multi-band planar and curved patch antennas.

Relevance:

30.00%

Publisher:

Abstract:

Much research has been devoted over the years to investigating and advancing the techniques and tools used by analysts when they model. In contrast to what academics, software providers, and their resellers promote as what should be happening, the aim of this research was to determine whether practitioners still embrace conceptual modeling seriously. In addition, what are the most popular techniques and tools used for conceptual modeling? What are the major purposes for which conceptual modeling is used? The study found that the top six most frequently used modeling techniques and methods were ER diagramming, data flow diagramming, systems flowcharting, workflow modeling, UML, and structured charts. Modeling technique use was found to decrease significantly from smaller to medium-sized organizations, but then to increase significantly in larger organizations (proxying for large, complex projects). Technique use was thus found to significantly follow an inverted U-shaped curve, contrary to some prior explanations. Additionally, an important contribution of this study was the identification of the factors that uniquely influence the decision of analysts to continue to use modeling, viz., communication (using diagrams) to/from stakeholders, internal knowledge (lack of) of techniques, user expectations management, understanding models' integration into the business, and tool/software deficiencies. The highest-ranked purposes for which modeling was undertaken were database design and management, business process documentation, business process improvement, and software development. (c) 2005 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Due to complex field/tissue interactions, high-field magnetic resonance (MR) images suffer significant image distortions that result in compromised diagnostic quality. A new method that attempts to remove these distortions is proposed in this paper and is based on the use of transceive phased arrays. The proposed system uses, in the examples presented herein, a shielded four-element transceive phased-array head coil and involves performing two separate scans of the same slice, with each scan using different excitations during transmission. By optimizing the amplitudes and phases for each scan, antipodal signal profiles can be obtained, and by combining the two images, the image distortion can be reduced severalfold. A combined hybrid method of moments (MoM)/finite element method (FEM) and finite-difference time-domain (FDTD) technique is proposed and used to elucidate the concept of the new method and to accurately evaluate the electromagnetic field (EMF) in a human head model. In addition, the proposed method is used in conjunction with the generalized auto-calibrating partially parallel acquisitions (GRAPPA) reconstruction technique to enable rapid imaging of the two scans. Simulation results reported herein for 11-T (470-MHz) brain imaging applications show that the new method with GRAPPA reconstruction theoretically results in improved image quality and that the proposed combined hybrid MoM/FEM and FDTD technique is suitable for high-field magnetic resonance imaging (MRI) numerical analysis.
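The two-scan idea can be caricatured in one dimension: two transmit excitations give complementary ("antipodal") intensity profiles across a slice, and combining the two images flattens the response. The profiles below are invented for illustration, not simulated head data:

```python
import math

def profiles(n=64):
    """Two hypothetical antipodal signal profiles across a 1D slice."""
    scan_a, scan_b = [], []
    for i in range(n):
        x = i / (n - 1)
        scan_a.append(math.sin(math.pi * x / 2) ** 2)  # bright on one side
        scan_b.append(math.cos(math.pi * x / 2) ** 2)  # bright on the other
    return scan_a, scan_b

def nonuniformity(img):
    """Peak-to-trough intensity variation, relative to the maximum."""
    return (max(img) - min(img)) / max(img)

a, b = profiles()
# Root-sum-of-squares combination of the two scans:
combined = [math.sqrt(pa ** 2 + pb ** 2) for pa, pb in zip(a, b)]
print(nonuniformity(a), nonuniformity(combined))
```

The single-scan profile here varies from zero to full intensity, while the combined profile retains only a modest residual variation, which is the qualitative effect the paper exploits.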

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper was to evaluate the psychometric properties of a stage-specific self-efficacy scale for physical activity with classical test theory (CTT), confirmatory factor analysis (CFA) and item response modeling (IRM). Women who enrolled in the Women On The Move study completed a 20-item stage-specific self-efficacy scale developed for this study [n = 226, 51.1% African-American and 48.9% Hispanic women, mean age = 49.2 (67.0) years, mean body mass index = 29.7 (66.4)]. Three analyses were conducted: (i) a CTT item analysis, (ii) a CFA to validate the factor structure and (iii) an IRM analysis. The CTT item analysis and the CFA results showed that the scale had high internal consistency (ranging from 0.76 to 0.93) and a strong factor structure. Results also showed that the scale could be improved by modifying or eliminating some of the existing items without significantly altering the content of the scale. The IRM results also showed that the scale had few items that targeted high self-efficacy, and the stage-specific assumption underlying the scale was rejected. In addition, the IRM analyses found that the five-point response format functioned more like a four-point response format. Overall, employing multiple methods to assess the psychometric properties of the stage-specific self-efficacy scale demonstrated the complementary nature of these methods and highlighted the strengths and weaknesses of this scale.
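The CTT item-analysis step can be illustrated with Cronbach's alpha, the standard internal-consistency statistic of the kind behind the 0.76-0.93 range reported above. The response data below are made up for illustration, not drawn from the study:

```python
def cronbach_alpha(items):
    """Cronbach's alpha. items: list of per-item response lists of equal length."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents
    def variance(xs):                    # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_var = sum(variance(it) for it in items)
    totals = [sum(it[j] for it in items) for j in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Three hypothetical 5-point items answered by six hypothetical respondents:
items = [
    [4, 5, 3, 2, 4, 5],
    [4, 4, 3, 2, 5, 5],
    [5, 5, 2, 2, 4, 4],
]
print(round(cronbach_alpha(items), 3))
```

Alpha rises when items covary strongly relative to their individual variances, which is why a high value supports treating the items as one scale.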

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new approach to improving the effectiveness of autonomous systems that deal with dynamic environments. The basis of the approach is to find repeating patterns of behavior in the dynamic elements of the system, and then to use predictions of the repeating elements to better plan goal-directed behavior. It is a layered approach involving classifying, modeling, predicting and exploiting. Classifying involves using observations to place the moving elements into previously defined classes. Modeling involves recording features of the behavior on a coarse-grained grid. Exploitation is achieved by integrating predictions from the model into the behavior selection module to improve the utility of the robot's actions. This is in contrast to typical approaches that use the model to select between different strategies or plays. Three methods of adaptation to the dynamic features of the environment are explored. The effectiveness of each method is determined using statistical tests over a number of repeated experiments. The work is presented in the context of predicting opponent behavior in the highly dynamic and multi-agent robot soccer domain (RoboCup).
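The modeling and prediction layers can be sketched as a frequency count on a coarse-grained grid: record which cell an opponent occupies at each observation, then predict the most frequently occupied cell. The cell size and observations below are invented for illustration:

```python
from collections import Counter

CELL = 0.5  # metres per grid cell (the coarse graining; value is illustrative)

def to_cell(x, y):
    """Map a continuous field position to a coarse grid cell."""
    return (int(x // CELL), int(y // CELL))

def build_model(observations):
    """Modeling layer: frequency of opponent positions per grid cell."""
    return Counter(to_cell(x, y) for x, y in observations)

def predict(model):
    """Prediction layer: most frequently occupied cell, fed to action selection."""
    return model.most_common(1)[0][0]

# Hypothetical opponent positions from repeated observations:
obs = [(1.1, 0.2), (1.3, 0.4), (1.2, 0.3), (3.0, 2.0), (1.4, 0.1)]
model = build_model(obs)
print(predict(model))
```

The exploitation layer would then bias the behavior selection module toward actions that account for the predicted cell, rather than switching between whole strategies.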

Relevance:

30.00%

Publisher:

Abstract:

Many variables that are of interest in social science research are nominal variables with two or more categories, such as employment status, occupation, political preference, or self-reported health status. With longitudinal survey data it is possible to analyse the transitions of individuals between different employment states or occupations (for example). In the statistical literature, models for analysing categorical dependent variables with repeated observations belong to the family of models known as generalized linear mixed models (GLMMs). The specific GLMM for a dependent variable with three or more categories is the multinomial logit random effects model. For these models, the marginal distribution of the response does not have a closed-form solution, and hence numerical integration must be used to obtain maximum likelihood estimates for the model parameters. Techniques for implementing the numerical integration are available, but they are computationally intensive, requiring processing time that increases with the number of clusters (or individuals) in the data, and they are not always readily accessible to the practitioner in standard software. For the purposes of analysing categorical response data from a longitudinal social survey, there is clearly a need to evaluate the existing procedures for estimating the multinomial logit random effects model in terms of accuracy, efficiency and computing time. The computational time has significant implications for which approach researchers prefer. In this paper we evaluate statistical software procedures that utilise adaptive Gaussian quadrature and MCMC methods, with specific application to modeling employment status of women using a GLMM, over three waves of the HILDA survey.
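The numerical-integration step can be sketched with (non-adaptive) Gauss-Hermite quadrature for a random-intercept logit. A binary outcome is used here for brevity, whereas the multinomial case in the paper integrates a softmax; all parameter values are illustrative:

```python
import math

# 3-point Gauss-Hermite nodes/weights for integrals against exp(-x^2):
GH = [(-math.sqrt(1.5), math.sqrt(math.pi) / 6),
      (0.0,             2 * math.sqrt(math.pi) / 3),
      (math.sqrt(1.5),  math.sqrt(math.pi) / 6)]

def marginal_likelihood(responses, beta0, sigma):
    """P(y_i) = integral of prod_t p(y_it | b) over b ~ N(0, sigma^2),
    approximated by Gauss-Hermite quadrature with a change of variables."""
    total = 0.0
    for node, weight in GH:
        b = math.sqrt(2.0) * sigma * node      # transform node to N(0, sigma^2)
        p = 1.0 / (1.0 + math.exp(-(beta0 + b)))
        lik = 1.0
        for y in responses:                    # repeated waves for one individual
            lik *= p if y == 1 else (1.0 - p)
        total += weight * lik
    return total / math.sqrt(math.pi)          # weights integrate exp(-x^2)

print(marginal_likelihood([1, 1, 0], beta0=0.0, sigma=1.0))
```

This per-individual integral must be evaluated at every likelihood evaluation for every cluster, which is exactly why computing time grows with the number of individuals; adaptive quadrature recentres the nodes per cluster to get by with fewer of them.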

Relevance:

30.00%

Publisher:

Abstract:

The ability to grow microscopic spherical birefringent crystals of vaterite, a calcium carbonate mineral, has allowed the development of an optical microrheometer based on optical tweezers. However, since these crystals are birefringent, and worse, are expected to have non-uniform birefringence, computational modeling of the microrheometer is a highly challenging task. Modeling the microrheometer - and optical tweezers in general - typically requires large numbers of repeated calculations for the same trapped particle. This places strong demands on the efficiency of the computational methods used. While our usual method of choice for computational modelling of optical tweezers - the T-matrix method - meets this requirement of efficiency, it is restricted to homogeneous isotropic particles. General methods that can model complex structures such as the vaterite particles, for example finite-difference time-domain (FDTD) or finite-difference frequency-domain (FDFD) methods, are inefficient. Therefore, we have developed a hybrid FDFD/T-matrix method that combines the generality of volume-discretisation methods such as FDFD with the efficiency of the T-matrix method. We have used this hybrid method to calculate optical forces and torques on model vaterite spheres in optical traps. We present and compare the results of computational modelling and experimental measurements.
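The efficiency argument for the T-matrix method rests on the fact that the matrix T depends only on the particle: once computed, the scattered-field coefficients for any new incident field with expansion coefficients a are just p = T a. The tiny 2x2 matrix below is a made-up placeholder, not a real vaterite T-matrix:

```python
def matvec(T, a):
    """Scattered-field coefficients p = T a."""
    return [sum(T[i][j] * a[j] for j in range(len(a))) for i in range(len(T))]

# Computed once per particle (here: an invented 2x2 stand-in).
T = [[0.5, 0.25],
     [0.0, 0.25]]

# Repeated calculations for the same trapped particle, e.g. the trap beam
# expanded about successive particle positions, reuse T unchanged:
incident_fields = [[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]]

scattered = [matvec(T, a) for a in incident_fields]
print(scattered)
```

A volume-discretisation method would instead re-solve the full scattering problem for every incident field, which is what makes the hybrid approach (use FDFD once to build T, then reuse T) attractive.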

Relevance:

30.00%

Publisher:

Abstract:

The development of models in the Earth sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. Therefore there is a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. After being verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as typically large-scale meshes have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. Newton-Raphson scheme, Crank-Nicolson scheme) or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class.
Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the actual representation, the one best suited to the particular context, is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
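The three-layer separation can be sketched generically: a "mathematical layer" assembles a discretised 1D Poisson problem and hands it to a generic tridiagonal solver standing in for the "numerical algorithm layer". This illustrates the layering idea only; it is not the escript/Finley API:

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    """Numerical algorithm layer: generic Thomas-algorithm solver."""
    n = len(diag)
    d, r = diag[:], rhs[:]
    for i in range(1, n):                 # forward elimination
        w = lower[i - 1] / d[i - 1]
        d[i] -= w * upper[i - 1]
        r[i] -= w * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (r[i] - upper[i] * x[i + 1]) / d[i]
    return x

def poisson_1d(f, n=5):
    """Mathematical layer: -u'' = f on (0,1), u(0)=u(1)=0, central differences.
    The model developer works here and never touches the solver internals."""
    h = 1.0 / (n + 1)
    lower = [-1.0] * (n - 1)
    diag = [2.0] * n
    upper = [-1.0] * (n - 1)
    rhs = [h * h * f((i + 1) * h) for i in range(n)]
    return solve_tridiagonal(lower, diag, upper, rhs)

print(poisson_1d(lambda x: 1.0))          # approximates u(x) = x(1 - x) / 2
```

In escript the same division of labour holds at scale: the PDE class captures the continuous problem, while Finley (or another linked library) provides the generic discretised solve.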