898 results for Analysis Model


Relevance: 70.00%

Abstract:

A Lagrangian model of photochemistry and mixing is described (CiTTyCAT, stemming from the Cambridge Tropospheric Trajectory model of Chemistry And Transport), which is suitable for transport and chemistry studies throughout the troposphere. Over the last five years, the model has been developed in parallel at several different institutions and here those developments have been incorporated into one "community" model and documented for the first time. The key photochemical developments include a new scheme for biogenic volatile organic compounds and updated emissions schemes. The key physical development is to evolve composition following an ensemble of trajectories within neighbouring air-masses, including a simple scheme for mixing between them via an evolving "background profile", both within the boundary layer and free troposphere. The model runs along trajectories pre-calculated using winds and temperature from meteorological analyses. In addition, boundary layer height and precipitation rates, output from the analysis model, are interpolated to trajectory points and used as inputs to the mixing and wet deposition schemes. The model is most suitable in regimes when the effects of small-scale turbulent mixing are slow relative to advection by the resolved winds so that coherent air-masses form with distinct composition and strong gradients between them. Such air-masses can persist for many days while stretching, folding and thinning. Lagrangian models offer a useful framework for picking apart the processes of air-mass evolution over inter-continental distances, without being hindered by the numerical diffusion inherent to global Eulerian models. The model, including different box and trajectory modes, is described and some output for each of the modes is presented for evaluation. The model is available for download from a Subversion-controlled repository by contacting the corresponding authors.
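The mixing scheme described above, relaxing air-mass composition toward an evolving background profile, can be illustrated with a minimal relaxation sketch. The time constant, concentrations, and forward-Euler integration below are illustrative assumptions, not CiTTyCAT's actual implementation:

```python
import numpy as np

def mix_to_background(c, c_bg, tau, dt):
    """Relax a species concentration toward an evolving background profile.

    Integrates dC/dt = -(C - C_bg)/tau with forward Euler, as a stand-in
    for the model's air-mass mixing scheme (illustrative only).
    """
    return c + dt * (c_bg - c) / tau

# Toy run: an ozone-like tracer in a polluted air-mass relaxing toward
# a hypothetical 40 ppb background over 48 hourly trajectory steps
c, c_bg, tau, dt = 80.0, 40.0, 2 * 86400.0, 3600.0  # ppb, ppb, s, s
history = []
for _ in range(48):
    c = mix_to_background(c, c_bg, tau, dt)
    history.append(c)
```

With a two-day mixing time constant, the excess over background decays by roughly a factor of e over the 48-hour run, consistent with coherent air-masses persisting for many days.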

Relevance: 70.00%

Abstract:

Purpose – The purpose of this article is to present an empirical analysis of complex sample data with regard to the biasing effect of non-independence of observations on standard error parameter estimates. Using field data structured in the form of repeated measurements, it is shown, for a two-factor confirmatory factor analysis model, how bias arises in the standard errors (SE) when the non-independence is ignored.

Design/methodology/approach – Three estimation procedures are compared: normal asymptotic theory (maximum likelihood); non-parametric standard error estimation (naïve bootstrap); and sandwich (robust covariance matrix) estimation (pseudo-maximum likelihood).

Findings – The study reveals that, when using either normal asymptotic theory or non-parametric standard error estimation, the SE bias produced by the non-independence of observations can be noteworthy.

Research limitations/implications –
Considering the methodological constraints in employing field data, the three analyses examined must be interpreted independently, and as a result taxonomic generalisations are limited. However, the study still provides “case study” evidence suggesting a relationship between non-independence of observations and bias in standard error estimates.

Originality/value – Given the increasing popularity of structural equation models in the social sciences and in particular in the marketing discipline, the paper provides a theoretical and practical insight into how to treat repeated measures and clustered data in general, adding to previous methodological research. Some conclusions and suggestions for researchers who make use of partial least squares modelling are also drawn.
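The SE bias at issue above can be demonstrated with a small simulation. The sketch below (all data and parameters invented for illustration, using ordinary least squares rather than the paper's confirmatory factor model) contrasts naive iid standard errors with cluster-robust sandwich standard errors on clustered data; the robust intercept SE comes out markedly larger, reflecting the bias incurred by ignoring non-independence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate repeated-measures data: a shared cluster effect induces
# non-independence of observations within clusters (illustrative setup).
n_clusters, n_per = 50, 10
cluster = np.repeat(np.arange(n_clusters), n_per)
x = rng.normal(size=n_clusters * n_per)
u = rng.normal(size=n_clusters)[cluster]        # cluster random effect
y = 1.0 + 0.5 * x + u + rng.normal(size=x.size)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Naive (iid) standard errors: clustering is ignored
sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
se_naive = np.sqrt(np.diag(sigma2 * XtX_inv))

# Cluster-robust sandwich (pseudo-maximum likelihood style) standard errors
meat = np.zeros((2, 2))
for g in range(n_clusters):
    m = cluster == g
    s = X[m].T @ resid[m]
    meat += np.outer(s, s)
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
```

Comparing `se_naive` and `se_robust` for the intercept shows the understatement of uncertainty produced by treating clustered observations as independent.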

Relevance: 70.00%

Abstract:

This paper continues the prior research undertaken by Warren and Leitch (2009), in which a series of initial research findings were presented. These findings identified that in Australia, Supply Chain Management (SCM) systems were the weak link of Australian critical infrastructure. This paper focuses upon the security and risk issues associated with SCM systems and puts forward a new SCM Security Risk Management method, continuing the research presented at the European Conference on Information Warfare in 2009. The paper proposes a new Security Risk Analysis model that deals with the complexity of protecting SCM critical infrastructure systems and introduces a new approach that organisations can apply to protect their SCM systems. It describes the importance of SCM systems from a critical infrastructure protection perspective, discusses their role in supporting centres of population, and gives examples of the impact of failure. The paper then proposes a new SCM security risk analysis method that addresses both the security issues specific to SCM and those associated with Information Security more generally, and discusses a risk framework that can be used with this method to protect against both high-level and low-level security risks.

Relevance: 70.00%

Abstract:

Goal-directed problem solving as originally advocated by Herbert Simon’s means-ends analysis model has primarily shaped the course of design research on artificially intelligent systems for problem-solving. We contend that there is a definite disregard of a key phase within the overall design process that in fact logically precedes the actual problem solving phase. While systems designers have traditionally been obsessed with goal-directed problem solving, the basic determinants of the ultimate desired goal state still remain to be fully understood or categorically defined. We propose a rational framework built on a set of logically interconnected conjectures to specifically recognize this neglected phase in the overall design process of intelligent systems for practical problem-solving applications.

Relevance: 70.00%

Abstract:

Within the increasing body of research that examines students' reasoning on socioscientific issues, we consider in particular student reasoning concerning acute, open-ended questions that bring out the complexities and uncertainties embedded in ill-structured problems. In this paper, we propose a socioscientific sustainability reasoning (S3R) model to analyze students' reasoning exchanges on environmental socially acute questions (ESAQs). The paper describes the development of an epistemological analysis of how sustainability perspectives can be integrated into socioscientific reasoning, which emphasizes the need for S3R to be both grounded in context and collective. We argue the complexity of ESAQs requires a consideration of multiple dimensions that form the basis of our S3R analysis model: problematization, interactions, knowledge, uncertainties, values, and governance. For each dimension, in the model we have identified indicators of four levels of complexity. We investigated the usefulness of the model in identifying improvements in reasoning that flow from cross-national web-based exchanges between groups of French and Australian students, concerning a local and a global ESAQ. The S3R model successfully captured the nature of reasoning about socioscientific sustainability issues, with the collective negotiation of multiple forms of knowledge as a key characteristic in improving reasoning levels. The paper provides examples of collaborative argumentation in collective texts (wikis) to illustrate the various levels of reasoning in each dimension, and diagrammatic representation of the evolution of collective reflections. We observe that a staged process of construction and confrontation, involving groups representing to some extent different cultural and contextual stances, is powerful in eliciting reasoned argument of enhanced quality. © 2014 Wiley Periodicals, Inc.

Relevance: 70.00%

Abstract:

The search for better performance in structural systems has led to more refined models, involving the analysis of a growing number of details, which should be correctly formulated in order to define a representative model of the real system. Representative models demand great detailing of the project and the search for new techniques of evaluation and analysis. Model updating is one of these technologies; it can be used to improve the predictive capabilities of computer-based models. This paper presents an FRF-based finite element model updating procedure in which the updating variables are physical parameters of the model. It includes damping effects in the updating procedure, assuming both proportional and non-proportional damping mechanisms. The updating parameters are defined at the element level or over macro regions of the model, so the parameters are adjusted locally, facilitating the physical interpretation of the model adjustment. Different tests with simulated and experimental data are discussed, aiming at evaluating the characteristics and potential of the methodology.
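The FRF-based updating idea can be sketched for a single-degree-of-freedom system: generate a "measured" FRF from assumed true parameters, then recover the stiffness by minimising the FRF residual. The grid search below stands in for the paper's updating of physical parameters, and all values are illustrative:

```python
import numpy as np

def frf(omega, m, c, k):
    """Receptance FRF of a single-DOF system: H(w) = 1 / (k - m w^2 + i c w)."""
    return 1.0 / (k - m * omega**2 + 1j * c * omega)

# Synthetic "measured" FRF from an assumed true system (stand-in for test data)
m, c_true, k_true = 1.0, 0.5, 100.0
omega = np.linspace(1.0, 20.0, 200)
H_meas = frf(omega, m, c_true, k_true)

# FRF-based updating: choose the stiffness that minimises the FRF residual.
# A coarse grid search replaces the paper's iterative sensitivity approach.
k_grid = np.linspace(50.0, 150.0, 1001)
residual = [np.sum(np.abs(frf(omega, m, c_true, k) - H_meas) ** 2) for k in k_grid]
k_updated = k_grid[int(np.argmin(residual))]
```

Because the residual is built from complex FRF values, the same machinery would pick up damping parameters as well, which is why FRF-based updating can handle non-proportional damping.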


Relevance: 70.00%

Abstract:

We investigate the Heston model with stochastic volatility and exponential tails as a model for the typical price fluctuations of the Brazilian São Paulo Stock Exchange Index (IBOVESPA). Raw prices are first corrected for inflation and a period spanning 15 years characterized by memoryless returns is chosen for the analysis. Model parameters are estimated by observing volatility scaling and correlation properties. We show that the Heston model with at least two time scales for the volatility mean reverting dynamics satisfactorily describes price fluctuations ranging from time scales larger than 20 min to 160 days. At time scales shorter than 20 min we observe autocorrelated returns and power law tails incompatible with the Heston model. Despite major regulatory changes, hyperinflation and currency crises experienced by the Brazilian market in the period studied, the general success of the description provided may be regarded as evidence of a general underlying dynamics of price fluctuations at intermediate mesoeconomic time scales well approximated by the Heston model. We also notice that the connection between the Heston model and Ehrenfest urn models could be exploited to bring new insights into the microeconomic market mechanics. (c) 2005 Elsevier B.V. All rights reserved.
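A basic Euler-Maruyama simulation of the Heston dynamics illustrates the kind of process fitted above; the parameter values below are generic placeholders, not the IBOVESPA estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Euler-Maruyama simulation of the Heston model with full truncation.
# mu: drift; kappa: mean-reversion rate; theta: long-run variance;
# xi: vol-of-vol; rho: price/variance correlation (all illustrative).
mu, kappa, theta, xi, rho = 0.05, 2.0, 0.04, 0.3, -0.5
dt, n_steps, n_paths = 1 / 252, 252, 2000

s = np.full(n_paths, 100.0)   # price paths
v = np.full(n_paths, theta)   # variance paths
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    v_pos = np.maximum(v, 0.0)  # full truncation keeps sqrt(v) well defined
    s *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

log_returns = np.log(s / 100.0)  # one-year log returns across paths
```

The exponential tails and volatility autocorrelation emphasised in the paper emerge from the stochastic variance `v`; a two-time-scale variant would add a second mean-reverting factor to the variance equation.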

Relevance: 70.00%

Abstract:

An adaptive scheme for continuous-time model reference adaptive systems (MRAS), in which relays replace the usual multipliers of existing MRAS, was presented by the authors of the above paper (ibid., vol. 71, no. 2, pp. 275-276, Feb. 1983). The commenter shows an error in the analysis of the hyperstability of the scheme, such that the validity of this configuration becomes an open question.

Relevance: 70.00%

Abstract:

We consider the contributions to neutrinoless double beta decays in an SU(3)L⊗U(1)N electroweak model. We show that, for a range of parameters in the model, there are diagrams involving vector-vector-scalar and trilinear scalar couplings which can potentially contribute as much as the light massive Majorana neutrino exchange. We use these contributions to obtain constraints upon some mass scales of the model, such as the masses of the new charged vector and scalar bosons. We also consider briefly the decay in which, in addition to the two electrons, a Majoron-like boson is emitted. ©2001 The American Physical Society.

Relevance: 70.00%

Abstract:

Heat-transfer studies were carried out in a packed bed of glass beads, cooled at the wall, through which air percolated. Tube-to-particle diameter ratios (D/dp) ranged from 1.8 to 55, while the air mass flux ranged from 0.204 to 2.422 kg/m²·s. The outlet bed temperature (TL) was measured by a brass ring-shaped sensor and by aligned thermocouples. The resulting radial temperature profiles differed statistically. Angular temperature fluctuations were observed through measurements made at 72 angular positions. These fluctuations do not follow a normal distribution around the mean for low D/dp ratios. The presence of a restraining screen, as well as increasing distance between the temperature measuring device and the bed surface, distorts TL. The radial temperature profile at the bed entrance (T0) was measured by a ring-shaped sensor, and T0 was shown to be a function of the radial position, the particle diameter, and the fluid flow rate.

Relevance: 70.00%

Abstract:

Objective: To conduct a cost-effectiveness analysis of a universal childhood hepatitis A vaccination program in Brazil. Methods: An age- and time-dependent dynamic model was developed to estimate the incidence of hepatitis A over 24 years. The analysis was run separately according to the pattern of regional endemicity: one for the South + Southeast (low endemicity) and one for the North + Northeast + Midwest (intermediate endemicity). The decision analysis model compared universal childhood vaccination with the current program of vaccinating high-risk individuals. Epidemiologic and cost estimates were based on data from a nationwide seroprevalence survey of viral hepatitis, primary data collection, National Health Information Systems, and the literature. The analysis was conducted from both the health system and societal perspectives. Costs are expressed in 2008 Brazilian currency (Real). Results: A universal immunization program would have a significant impact on disease epidemiology in all regions, resulting, from a national perspective, in a 64% reduction in the number of cases of icteric hepatitis, a 59% reduction in deaths from the disease, and a 62% decrease in life years lost. With a vaccine price of R$16.89 (US$7.23) per dose, vaccination against hepatitis A was a cost-saving strategy in the low and intermediate endemicity regions and in Brazil as a whole, from both the health system and societal perspectives. Results were most sensitive to the frequency of icteric hepatitis, ambulatory care costs, and vaccine costs. Conclusions: A universal childhood vaccination program against hepatitis A could be a cost-saving strategy in all regions of Brazil. These results are useful for the Brazilian government for vaccine-related decisions and for monitoring population impact if the vaccine is included in the National Immunization Program. (C) 2012 Elsevier Ltd. All rights reserved.

Relevance: 70.00%

Abstract:

This work evaluates the spatial distribution of normalised rates of droplet breakage and droplet coalescence in liquid-liquid dispersions maintained in agitated tanks at operating conditions normally used to perform suspension polymerisation reactions. In particular, simulations are performed with multiphase computational fluid dynamics (CFD) models to represent, for the first time, the flow field in liquid-liquid styrene suspension polymerisation reactors. CFD tools are used first to compute the spatial distribution of the turbulent energy dissipation rate (ε) inside the reaction vessel; afterwards, normalised rates of droplet breakage and particle coalescence are computed as functions of ε. Surprisingly, the multiphase simulations showed that the rates of energy dissipation can be very high near the free vortex surfaces, which had been completely neglected in previous works. The obtained results indicate the existence of extremely large energy dissipation gradients inside the vessel, so that particle breakage occurs primarily in very small regions that surround the impeller and the free vortex surface, while particle coalescence takes place in the liquid bulk. As a consequence, particle breakage should be regarded as an independent source term or a boundary phenomenon. Based on the obtained results, it can be very difficult to justify the use of isotropic assumptions to formulate particle population balances in similar systems, even when multiple-compartment models are used to describe the fluid dynamic behaviour of the agitated vessel. (C) 2011 Canadian Society for Chemical Engineering
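The dependence of breakage rates on the local dissipation rate ε can be sketched with the classical Coulaloglou-Tavlarides breakage frequency (not necessarily the closure used in the paper; the constants and fluid properties below are illustrative placeholders):

```python
import numpy as np

def breakage_rate(eps, d, sigma=0.03, rho_d=900.0, phi=0.2, c1=0.0487, c2=0.0552):
    """Coulaloglou-Tavlarides breakage frequency [1/s] of a droplet of
    diameter d [m] as a function of the local turbulent energy dissipation
    rate eps [W/kg]. sigma: interfacial tension; rho_d: dispersed-phase
    density; phi: dispersed-phase holdup (all values illustrative)."""
    pre = c1 * eps ** (1 / 3) / ((1 + phi) * d ** (2 / 3))
    arg = c2 * sigma * (1 + phi) ** 2 / (rho_d * eps ** (2 / 3) * d ** (5 / 3))
    return pre * np.exp(-arg)

# Toy "CFD field": eps spanning several decades, as between the liquid bulk
# and the impeller / free-vortex-surface regions discussed in the paper
eps_field = np.logspace(-3, 2, 6)            # W/kg
rates = breakage_rate(eps_field, d=1e-4)     # 100-micron droplets
normalized = rates / rates.max()
```

The steep growth of the normalized rate with ε shows why, under large dissipation gradients, breakage is confined to the small high-ε regions while coalescence dominates in the bulk.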