27 results for Data Modeling

at University of Queensland eSpace - Australia


Relevance: 40.00%

Publisher:

Abstract:

Comprehensive published radiocarbon data from selected atmospheric records, tree rings, and recent organic matter were analyzed and grouped into 4 different zones (three for the Northern Hemisphere and one for the whole Southern Hemisphere). These C-14 data for the summer season of each hemisphere were employed to construct zonal, hemispheric, and global data sets for use in regional and global carbon model calculations including calibrating and comparing carbon cycle models. In addition, extended monthly atmospheric C-14 data sets for 4 different zones were compiled for age calibration purposes. This is the first time these data sets were constructed to facilitate the dating of recent organic material using the bomb C-14 curves. The distribution of bomb C-14 reflects the major zones of atmospheric circulation.
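A minimal sketch of how such a compiled zonal bomb C-14 curve can be used to date recent organic material, as the abstract describes. The curve values below are invented placeholders, not numbers from the actual data sets; note that because bomb curves rise and then fall, one measured F14C value can match two calendar dates.

```python
# Sketch: dating a recent sample against a zonal bomb C-14 curve.
# Curve values are illustrative placeholders, not real compiled data.

def interp_crossings(years, f14c, sample_f14c):
    """Return the calendar years at which a sample's F14C value crosses
    the tabulated curve (non-monotonic curves can yield two dates)."""
    hits = []
    for (y0, v0), (y1, v1) in zip(zip(years, f14c), zip(years[1:], f14c[1:])):
        lo, hi = sorted((v0, v1))
        if lo <= sample_f14c <= hi and v0 != v1:
            # linear interpolation between tabulated points
            t = (sample_f14c - v0) / (v1 - v0)
            hits.append(y0 + t * (y1 - y0))
    return hits

# Illustrative Northern Hemisphere-style curve: pre-bomb level, bomb
# spike around the mid-1960s, then gradual decay.
years = [1955, 1960, 1964, 1970, 1980, 1990, 2000]
f14c = [1.00, 1.20, 1.90, 1.55, 1.25, 1.15, 1.08]
matches = interp_crossings(years, f14c, 1.30)  # two candidate dates
```

The ambiguity between the rising and falling limbs is why the zonal curves matter: independent evidence (or a second measurement) is needed to pick the correct limb.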

Relevance: 40.00%

Publisher:

Abstract:

We model nongraphitized carbon black surfaces and investigate adsorption of argon on these surfaces using grand canonical Monte Carlo simulation. In this model, the nongraphitized surface is represented as a stack of graphene layers with some carbon atoms of the top graphene layer randomly removed. The percentage of surface carbon atoms removed and the effective size of the defect (created by the removal) are the key parameters characterizing the nongraphitized surface. The patterns of the adsorption isotherm and isosteric heat are studied as functions of these surface parameters as well as pressure and temperature. It is shown that the adsorption isotherm exhibits steplike behavior on a perfect graphite surface and becomes smoother on nongraphitized surfaces. Regarding the isosteric heat versus loading, for graphitized thermal carbon black we observe an increase in heat over the submonolayer coverage, then a sharp decline as the second layer starts to form, beyond which the heat increases slightly. On the other hand, the isosteric heat versus loading for a highly nongraphitized surface shows a general decline with loading, due to the energetic heterogeneity of the surface. Only when the fluid-fluid interaction is greater than the surface energetic factor do we see a minimum-maximum in the isosteric heat versus loading. These simulation results for isosteric heat agree well with the experimental results on graphitization of Spheron 6 (Polley, M. H.; Schaeffer, W. D.; Smith, W. R. J. Phys. Chem. 1953, 57, 469; Beebe, R. A.; Young, D. M. J. Phys. Chem. 1954, 58, 93). Adsorption isotherms and isosteric heats in pores whose walls have defects are also studied by simulation, and the patterns of the isotherm and isosteric heat could be used to identify the fingerprint of the surface.
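In GCMC studies like this one, the isosteric heat is typically computed from fluctuation averages of the particle number N and configurational energy U, via q_st = kT - (&lt;UN&gt; - &lt;U&gt;&lt;N&gt;) / (&lt;N^2&gt; - &lt;N&gt;^2). A minimal sketch of that post-processing step, with made-up (N, U) samples standing in for a real simulation trajectory:

```python
# Sketch: isosteric heat of adsorption from GCMC fluctuation averages,
#   q_st = kT - (<UN> - <U><N>) / (<N^2> - <N>^2).
# The (N, U) samples are invented stand-ins for a real GCMC run.

def isosteric_heat(samples, kT):
    """samples: iterable of (N, U) pairs from equilibrated GCMC snapshots."""
    n = len(samples)
    mN = sum(N for N, U in samples) / n
    mU = sum(U for N, U in samples) / n
    mUN = sum(N * U for N, U in samples) / n
    mNN = sum(N * N for N, U in samples) / n
    return kT - (mUN - mU * mN) / (mNN - mN * mN)

# U in reduced energy units; N is the number of adsorbed argon atoms.
samples = [(100, -950.0), (102, -972.0), (98, -930.0), (101, -961.0)]
q = isosteric_heat(samples, kT=0.69)  # kT in the same units as U
```

Because U decreases as N increases in these samples, the covariance term is negative and the heat comes out positive, as expected for adsorption.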

Relevance: 30.00%

Publisher:

Abstract:

Remotely sensed data have been used extensively for environmental monitoring and modeling at a number of spatial scales; however, a limited range of satellite imaging systems often constrained the scales of these analyses. A wider variety of data sets is now available, allowing image data to be selected to match the scale of the environmental structure(s) or process(es) being examined. A framework is presented for use by environmental scientists and managers, enabling their spatial data collection needs to be linked to a suitable form of remotely sensed data. A six-step approach is used, combining image spatial analysis and scaling tools within the context of hierarchy theory. The main steps are: (1) identification of information requirements for the monitoring or management problem; (2) development of ideal image dimensions (scene model); (3) exploratory analysis of existing remotely sensed data using scaling techniques; (4) selection and evaluation of suitable remotely sensed data based on the scene model; (5) selection of suitable spatial analytic techniques to meet information requirements; and (6) cost-benefit analysis. Results from a case study show that the framework provided an objective mechanism to identify relevant aspects of the monitoring problem and environmental characteristics for selecting remotely sensed data and analysis techniques.
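One common scaling technique for step (3) is mean local variance: variance in a moving window, computed at several window sizes, peaks when the window size approaches the size of the dominant scene objects. A minimal sketch on a tiny invented grid (a stand-in for a real image band, not data from the case study):

```python
# Sketch: mean local variance of an image at a given window size, a
# simple scaling tool for exploratory analysis of remotely sensed data.
# The 4x4 "image" is an invented checkerboard of 2x2 patches.

def mean_local_variance(img, w):
    """Mean variance over all w x w windows of a 2-D list of pixel values."""
    rows, cols = len(img), len(img[0])
    vals = []
    for r in range(rows - w + 1):
        for c in range(cols - w + 1):
            win = [img[r + i][c + j] for i in range(w) for j in range(w)]
            m = sum(win) / len(win)
            vals.append(sum((v - m) ** 2 for v in win) / len(win))
    return sum(vals) / len(vals)

img = [[10, 10, 60, 60],
       [10, 10, 60, 60],
       [60, 60, 10, 10],
       [60, 60, 10, 10]]
lv2 = mean_local_variance(img, 2)  # windows matching the 2x2 patch size
```

Plotting this statistic against window size for candidate sensors is one way to match image resolution to the environmental structure of interest.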

Relevance: 30.00%

Publisher:

Abstract:

The relative stability and magnitude of genetic and environmental effects underlying major dimensions of adolescent personality across time were investigated. The Junior Eysenck Personality Questionnaire was administered to over 540 twin pairs at ages 12, 14 and 16 years. Their personality scores were analyzed using genetic simplex modeling, which explicitly took into account the longitudinal nature of the data. With the exception of the Lie dimension, multivariate model fitting results revealed that familial aggregation was entirely explained by additive genetic effects. Results from simplex model fitting suggest that large proportions of the additive genetic variance observed at ages 14 and 16 years could be explained by genetic effects present at the age of 12 years. There was also evidence for smaller but significant genetic innovations at 14 and 16 years of age for male and female neuroticism, at 14 years for male extraversion, at 14 and 16 years for female psychoticism, and at 14 years for male psychoticism.

Relevance: 30.00%

Publisher:

Abstract:

Stratum corneum (SC) desorption experiments have yielded higher calculated steady-state fluxes than those obtained by epidermal penetration studies. A possible explanation of this result is a variable diffusion or partition coefficient across the SC. We therefore developed a diffusion model for percutaneous penetration and desorption to study the effects of either a variable diffusion coefficient or a variable partition coefficient in the SC over the diffusion path length. Steady-state flux, lag time, and mean desorption time were obtained from Laplace domain solutions. Numerical inversion of the Laplace domain solutions was used to simulate solute concentration-distance and amount penetrated (desorbed)-time profiles. Heterogeneity in the diffusion and partition coefficients was examined using six different models. The effect of heterogeneity on predicted flux from desorption studies was compared with that obtained in permeation studies. Partition coefficient heterogeneity had a more profound effect on predicted fluxes than diffusion coefficient heterogeneity. Concentration-distance profiles show an even larger dependence on heterogeneity, which is consistent with experimental tape-stripping data reported for clobetasol propionate and other solutes. The clobetasol propionate tape-stripping data were most consistent with a partition coefficient that decreases exponentially over the first half of the SC and then remains constant for the remaining SC. (C) 2004 Wiley-Liss, Inc.
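The abstract does not name its inversion method, but the Stehfest algorithm is a standard choice for numerically inverting Laplace domain solutions of this kind. A self-contained sketch, verified against a transform whose inverse is known in closed form:

```python
from math import exp, factorial, log

# Sketch: Stehfest numerical inversion of a Laplace-domain function F(s),
# the kind of step used to turn Laplace-domain flux solutions into
# concentration-distance and amount-penetrated time profiles.

def stehfest_invert(F, t, N=12):
    """Approximate f(t) given its Laplace transform F(s); N must be even."""
    ln2 = log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            Vk += (j ** (N // 2) * factorial(2 * j)
                   / (factorial(N // 2 - j) * factorial(j)
                      * factorial(j - 1) * factorial(k - j)
                      * factorial(2 * j - k)))
        Vk *= (-1) ** (k + N // 2)
        total += Vk * F(k * ln2 / t)
    return ln2 / t * total

# Sanity check against a known pair: F(s) = 1/(s+1)  <->  f(t) = e^{-t}.
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Stehfest works well for smooth, non-oscillatory inverses such as diffusion profiles; sharply varying solutions generally need a different inversion scheme.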

Relevance: 30.00%

Publisher:

Abstract:

A finite difference method for simulating voltammograms of electrochemically driven enzyme catalysis is presented. The method enables any enzyme mechanism to be simulated. The finite difference equations can be represented as a matrix equation containing a nonlinear sparse matrix. This equation has been solved using the software package Mathematica. Our focus is on cyclic voltammetry, since this is the electrochemical method most commonly employed to elucidate mechanisms. The use of cyclic voltammetry to obtain data from systems obeying Michaelis-Menten kinetics is discussed, and we then verify our observations on the Michaelis-Menten system using the finite difference simulation. Finally, we demonstrate how the method can be used to obtain mechanistic information on a real redox enzyme system, the complex bacterial molybdoenzyme xanthine dehydrogenase.
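To illustrate the kind of discretization involved (though not the paper's implicit sparse-matrix formulation), here is an explicit finite-difference step for a substrate diffusing in a thin layer while being consumed by Michaelis-Menten kinetics, dc/dt = D d2c/dx2 - Vmax c/(Km + c). Grid, parameters, and boundary conditions are all illustrative:

```python
# Sketch: one explicit finite-difference time step for reaction-diffusion
# with Michaelis-Menten consumption. Endpoints are fixed (Dirichlet):
# c[0] = 0 mimics fast consumption at the electrode, c[-1] is bulk.

def step(c, D, Vmax, Km, dx, dt):
    new = c[:]
    for i in range(1, len(c) - 1):
        diff = D * (c[i - 1] - 2 * c[i] + c[i + 1]) / dx ** 2
        rate = Vmax * c[i] / (Km + c[i])  # Michaelis-Menten sink
        new[i] = c[i] + dt * (diff - rate)
    return new

c = [1.0] * 21      # uniform initial substrate concentration (reduced units)
c[0] = 0.0          # electrode boundary
for _ in range(200):
    c = step(c, D=1e-3, Vmax=0.5, Km=0.1, dx=0.05, dt=1e-3)
```

The explicit scheme is only stable for D*dt/dx^2 below 1/2 (here it is 4e-4), which is one reason production codes, like the matrix formulation described above, prefer implicit schemes.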

Relevance: 30.00%

Publisher:

Abstract:

We present a new approach accounting for the nonadditivity of the attractive parts of the solid-fluid and fluid-fluid potentials to improve the description of nitrogen and argon adsorption isotherms on graphitized carbon black in the framework of non-local density functional theory. We show that the strong solid-fluid interaction in the first monolayer decreases the fluid-fluid interaction, which prevents the two-dimensional phase transition from occurring. This results in a smoother isotherm, which agrees much better with experimental data. In the region of multi-layer coverage, the conventional non-local density functional theory and grand canonical Monte Carlo simulations are known to over-predict the amount adsorbed relative to experimental isotherms. Accounting for the non-additivity factor decreases the solid-fluid interaction as intermolecular interactions in the dense adsorbed fluid increase, preventing the over-prediction of loading in the region of multi-layer adsorption. This improvement of the non-local density functional theory allows us to describe experimental nitrogen and argon isotherms on carbon black quite accurately, with a mean error of 2.5 to 5.8% instead of 17 to 26% for the conventional technique. With this approach, the local isotherms of model pores can be derived, and consequently a more reliable pore size distribution (PSD) can be obtained. We illustrate this by applying our theory to nitrogen and argon isotherms on a number of activated carbons. The fit between our model and the data is much better than with the conventional NLDFT, suggesting that the PSD obtained with our approach is more reliable.

Relevance: 30.00%

Publisher:

Abstract:

Much research has been devoted over the years to investigating and advancing the techniques and tools used by analysts when they model. The aim of this research was to determine whether practitioners actually embrace conceptual modeling as seriously as academics, software providers, and their resellers suggest they should. In addition, what are the most popular techniques and tools used for conceptual modeling, and what are the major purposes for which conceptual modeling is used? The study found that the six most frequently used modeling techniques and methods were ER diagramming, data flow diagramming, systems flowcharting, workflow modeling, UML, and structured charts. Modeling technique use was found to decrease significantly from smaller to medium-sized organizations, but then to increase significantly in larger organizations (proxying for large, complex projects). Technique use was also found to follow an inverted U-shaped curve, contrary to some prior explanations. Additionally, an important contribution of this study was the identification of the factors that uniquely influence analysts' decisions to continue using modeling, viz., communication (using diagrams) to and from stakeholders, (lack of) internal knowledge of techniques, user expectations management, understanding how models integrate into the business, and tool/software deficiencies. The highest ranked purposes for which modeling was undertaken were database design and management, business process documentation, business process improvement, and software development. (c) 2005 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

The purpose of this work was to model lung cancer mortality as a function of past exposure to tobacco and to forecast age-sex-specific lung cancer mortality rates. A 3-factor age-period-cohort (APC) model, in which the period variable is replaced by the product of average tar content and adult tobacco consumption per capita, was estimated for the US, UK, Canada and Australia by the maximum likelihood method. Age- and sex-specific tobacco consumption was estimated from historical data on smoking prevalence and total tobacco consumption. Lung cancer mortality was derived from vital registration records. Future tobacco consumption, tar content and the cohort parameter were projected by autoregressive moving average (ARIMA) estimation. The optimal exposure variable was found to be the product of average tar content and adult cigarette consumption per capita, lagged 25-30 years, for both males and females in all 4 countries. The coefficient of the product of average tar content and tobacco consumption per capita differs by age and sex. In all models, there was a statistically significant difference in the coefficient of the period variable by sex. In all countries, male age-standardized lung cancer mortality rates peaked in the 1980s and declined thereafter. Female mortality rates are projected to peak in the first decade of this century. The multiplicative models of age, tobacco exposure and cohort fit the observed data between 1950 and 1999 reasonably well, and time-series models yield plausible past trends of relevant variables. Despite a significant reduction in tobacco consumption and the average tar content of cigarettes sold over the past few decades, the effect on lung cancer mortality is delayed by the time lag between exposure and established disease. As a result, the burden of lung cancer among females is only just reaching, or soon will reach, its peak, but has been declining for 1 to 2 decades in men.
Future sex differences in lung cancer mortality are likely to be greater in North America than Australia and the UK due to differences in exposure patterns between the sexes. (c) 2005 Wiley-Liss, Inc.
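The key data-preparation step in such a model is aligning each mortality year with the exposure (tar content times per-capita consumption) observed decades earlier. A minimal sketch of that lagging step; all numbers are invented placeholders, not the study's data:

```python
# Sketch: building the lagged exposure covariate that replaces the period
# term in the APC model: tar content x adult cigarette consumption per
# capita, shifted forward by the exposure-to-disease lag.

def lagged_exposure(tar, consumption, years, lag):
    """Map each mortality year to tar*consumption observed `lag` years earlier."""
    series = {y: tar[y] * consumption[y] for y in years}
    return {y + lag: series[y] for y in years}

years = range(1950, 1954)
tar = {1950: 38.0, 1951: 37.5, 1952: 37.0, 1953: 36.0}          # mg/cigarette
consumption = {1950: 3500, 1951: 3550, 1952: 3600, 1953: 3650}  # cigs/adult/yr
exposure = lagged_exposure(tar, consumption, years, lag=27)
# exposure[1977] now reflects 1950 smoking conditions
```

This alignment is exactly why mortality keeps rising for decades after consumption falls: the covariate driving current rates was fixed a generation ago.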

Relevance: 30.00%

Publisher:

Abstract:

We explore both the rheology and complex flow behavior of monodisperse polymer melts. Adequate quantities of monodisperse polymer were synthesized in order that both the material's rheology and microprocessing behavior could be established. In parallel, we employ a molecular theory for the polymer rheology that is suitable for comparison with experimental rheometric data and numerical simulation for microprocessing flows. The model is capable of matching both shear and extensional data with minimal parameter fitting. Experimental data for the processing behavior of monodisperse polymers are presented for the first time as flow birefringence and pressure difference data obtained using a Multipass Rheometer with an 11:1 constriction entry and exit flow. Matching of experimental processing data was obtained using the constitutive equation with the Lagrangian numerical solver, FLOWSOLVE. The results show the direct coupling between molecular constitutive response and macroscopic processing behavior, and differentiate flow effects that arise separately from orientation and stretch. (c) 2005 The Society of Rheology.

Relevance: 30.00%

Publisher:

Abstract:

The reconstructed cellular metabolic network of Mus musculus, based on annotated genomic data, pathway databases, and currently available biochemical and physiological information, is presented. Although incomplete, it represents the first attempt to collect and characterize the metabolic network of a mammalian cell on the basis of genomic data. The reaction network is generic in nature and attempts to capture the carbon, energy, and nitrogen metabolism of the cell. The metabolic reactions were compartmentalized between the cytosol and the mitochondria, including transport reactions between the compartments and the extracellular medium. The reaction list consists of 872 internal metabolites involved in a total of 1220 reactions, of which 473 are associated with known open reading frames. Initial in silico analysis of the reconstructed model is presented.
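In silico analysis of a reconstructed network of this kind typically rests on the steady-state mass balance S v = 0, where S is the stoichiometric matrix over internal metabolites and v is a flux vector. A minimal sketch on a toy 3-metabolite, 4-reaction network (illustrative only, not part of the mouse reconstruction):

```python
# Sketch: the steady-state constraint S . v = 0 behind flux balance
# analysis of a reconstructed metabolic network.

def is_steady_state(S, v, tol=1e-9):
    """True if flux vector v balances every internal metabolite row of S."""
    return all(abs(sum(row[j] * v[j] for j in range(len(v)))) <= tol
               for row in S)

# Rows: metabolites A, B, C.
# Columns: uptake of A, A -> B, B -> C, secretion of C.
S = [[ 1, -1,  0,  0],
     [ 0,  1, -1,  0],
     [ 0,  0,  1, -1]]
v = [2.0, 2.0, 2.0, 2.0]  # a consistent linear pathway flux
```

For a genome-scale network such as the one described (872 metabolites, 1220 reactions), S is large and sparse, and feasible flux vectors are found by optimizing an objective subject to this constraint rather than by inspection.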

Relevance: 30.00%

Publisher:

Abstract:

Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series of integrated order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm in tests of purchasing power parity and a three-variable system involving the stock market.
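The error-correction term at the heart of a ZNZ patterned VECM is alpha (beta' y_{t-1}): zeros in the loading matrix alpha encode variables that do not adjust to the cointegrating relation, which is how the patterning expresses Granger non-causality. A minimal sketch with illustrative matrices (not the paper's purchasing-power-parity or stock-market systems):

```python
# Sketch: the error-correction part of a zero-non-zero (ZNZ) patterned
# VECM, dy_t = alpha @ (beta' @ y_{t-1}) + ... . A zero row in alpha
# means that variable does not respond to the disequilibrium.

def vecm_step(alpha, beta, y_prev):
    """Return alpha @ (beta^T @ y_prev) for list-of-lists matrices."""
    ect = [sum(beta[i][k] * y_prev[i] for i in range(len(y_prev)))
           for k in range(len(beta[0]))]
    return [sum(alpha[j][k] * ect[k] for k in range(len(ect)))
            for j in range(len(alpha))]

beta = [[1.0], [-1.0], [0.0]]   # cointegrating vector with a zero entry
alpha = [[-0.5], [0.0], [0.1]]  # zero loading: variable 2 never adjusts
dy = vecm_step(alpha, beta, y_prev=[10.0, 9.0, 4.0])
```

Here the disequilibrium y1 - y2 = 1 pulls variable 1 down and variable 3 up, while variable 2's zero loading leaves it unmoved; testing whether such zeros are supported by the data is what the paper's algorithm automates.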

Relevance: 30.00%

Publisher:

Abstract:

Adsorption of argon at its boiling point in finite cylindrical pores is considered by means of the non-local density functional theory (NLDFT) with reference to MCM-41 silica. The NLDFT was adjusted to amorphous solids, which allowed us to quantitatively describe the argon adsorption isotherm on a nonporous reference silica over the entire bulk pressure range. In contrast to the conventional NLDFT technique, application of the model to cylindrical pores does not show any layering before the phase transition, in conformity with experimental data. The finite pore is modeled as a cylindrical cavity bounded at its mouth by an infinite flat surface perpendicular to the pore axis. The adsorption of argon in pores of 4 and 5 nm diameter is analyzed in canonical and grand canonical ensembles using a two-dimensional version of NLDFT, which accounts for the radial and longitudinal fluid density distributions. The simulation results did not show any unusual features associated with accounting for the outer surface, and they support the conclusions obtained from the classical analysis of capillary condensation and evaporation. That is, spontaneous condensation occurs at the vapor-like spinodal point, which is the upper limit of mechanical stability of the liquid-like film wetting the pore wall, while evaporation occurs via recession of the hemispherical meniscus from the pore mouth, with complete evaporation of the core occurring at the equilibrium transition pressure. Visualization of the pore filling and emptying in the form of contour lines is presented.

Relevance: 30.00%

Publisher:

Abstract:

Long-term forecasts of pest pressure are central to the effective management of many agricultural insect pests. In the eastern cropping regions of Australia, serious infestations of Helicoverpa punctigera (Wallengren) and H. armigera (Hübner) (Lepidoptera: Noctuidae) are experienced annually. Regression analyses of a long series of light-trap catches of adult moths were used to describe the seasonal dynamics of both species. The size of the spring generation in eastern cropping zones could be related to rainfall in putative source areas in inland Australia. Subsequent generations could be related to the abundance of various crops in agricultural areas, rainfall and the magnitude of the spring population peak. As rainfall figured prominently as a predictor variable, and can itself be predicted using the Southern Oscillation Index (SOI), trap catches were also related to this variable. The geographic distribution of each species was modelled in relation to climate, and CLIMEX was used to predict temporal variation in abundance at given putative source sites in inland Australia using historical meteorological data. These predictions were then correlated with subsequent pest abundance data in a major cropping region. The regression-based and bioclimatic-based approaches to predicting pest abundance are compared, and their utility in predicting and interpreting pest dynamics is discussed.
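The regression-based side of such a forecast reduces, at its simplest, to fitting trap catch against an earlier rainfall predictor and reading off a prediction for the coming season. A minimal ordinary-least-squares sketch; the rainfall/catch pairs are invented placeholders, not the published light-trap series:

```python
# Sketch: simple least-squares regression of spring light-trap catch on
# earlier rainfall in a putative inland source area. Data are invented.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

rain = [50, 80, 120, 200, 260]   # winter rainfall, mm (hypothetical)
catch = [12, 30, 55, 90, 118]    # spring moths per trap-night (hypothetical)
a, b = fit_line(rain, catch)
forecast = a + b * 150           # predicted catch for a 150 mm winter
```

In practice, as the abstract notes, rainfall itself can be forecast from the SOI, so the same chain extends one step further back for genuinely long lead times.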