927 results for "Poisson model with common shocks"


Relevance: 100.00%

Abstract:

Context: Learning can be regarded as knowledge construction, in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision that meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing individual users' requirements and transforming them into personalised information provision specifications. Hence the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on the qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Result: The research work has produced an ontology model with a set of techniques which support the functions of profiling users' requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance RE approaches for modelling individual users' needs and discovering users' requirements.

Relevance: 100.00%

Abstract:

A low-resolution coupled ocean-atmosphere general circulation model (OAGCM) is used to study the characteristics of the large-scale ocean circulation and its climatic impacts in a series of global coupled aquaplanet experiments. Three configurations, designed to produce fundamentally different ocean circulation regimes, are considered. The first has no obstruction to zonal flow; the second contains a low barrier that blocks zonal flow in the ocean at all latitudes, creating a single enclosed basin; the third contains a gap in the barrier to allow circumglobal flow at high southern latitudes. Warm greenhouse climates with a global average surface air temperature of around 27°C result in all cases. Equator-to-pole temperature gradients are shallower than that of a current climate simulation. Whilst changes in the land configuration cause regional changes in temperature, winds and rainfall, heat transports within the system are little affected. Inhibition of all ocean transport on the aquaplanet leads to a reduction in global mean surface temperature of 8°C, along with a sharpening of the meridional temperature gradient. This results from a reduction in global atmospheric water vapour content and an increase in tropical albedo, both of which act to reduce global surface temperatures. Fitting a simple radiative model to the atmospheric characteristics of the OAGCM solutions suggests that a simpler atmosphere model, with radiative parameters chosen a priori based on the changing surface configuration, would have produced qualitatively different results. This implies that studies with reduced-complexity atmospheres need to be guided by more complex OAGCM results on a case-by-case basis.
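
For illustration, here is a generic zero-dimensional energy-balance fit of the kind alluded to above. The paper's actual radiative model is not specified here, so the one-layer grey-atmosphere form, the parameter names, and the stand-in (albedo, temperature) pairs are all assumptions for this sketch, not the authors' method.

```python
import numpy as np
from scipy.optimize import curve_fit

SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2

def surface_temp(albedo, emissivity):
    """Equilibrium surface temperature (K) of a one-layer grey atmosphere."""
    return ((S0 / 4.0) * (1.0 - albedo) / (SIGMA * (1.0 - emissivity / 2.0))) ** 0.25

# Invented stand-ins for (planetary albedo, global-mean surface temperature)
# pairs diagnosed from the aquaplanet runs.
albedo_gcm = np.array([0.28, 0.30, 0.33])
temp_gcm = np.array([300.0, 297.0, 292.0])

# Fit the effective longwave emissivity to the "GCM" diagnostics.
(eps_fit,), _ = curve_fit(surface_temp, albedo_gcm, temp_gcm, p0=[0.8])
print("fitted effective emissivity:", round(eps_fit, 3))
```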

Relevance: 100.00%

Abstract:

Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, where the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux due to Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model, with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can be measured directly. The total amount of Agulhas leakage can then be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought such that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. It appears that when integration is limited within the model to the upper 300 m of the water column and to within 900 km of the African coast, the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv, but the 90% confidence band of the estimate is 20 Sv. It is concluded that the optimum thermohaline threshold method leads to more accurate estimates, even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
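
A minimal Python sketch of the thermohaline-threshold calibration described above. The field arrays, their shapes, the threshold search grid, and the float-derived calibration series `leakage_float` are invented stand-ins for the model output, not the paper's code or data.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nz, nx = 120, 20, 50                        # months, depth cells, along-section cells
temp = rng.uniform(2.0, 22.0, (nt, nz, nx))      # degC (stand-in model fields)
salt = rng.uniform(34.0, 35.6, (nt, nz, nx))     # psu
v = rng.normal(0.0, 0.1, (nt, nz, nx))           # m/s, velocity normal to the section
area = np.full((nz, nx), 5e7)                    # m^2, section cell areas
leakage_float = rng.normal(15.0, 3.0, nt)        # Sv, float-derived calibration series

def threshold_transport(t_thresh, s_thresh):
    """Integrate v only where water is warmer and saltier than the thresholds (Sv)."""
    mask = (temp > t_thresh) & (salt > s_thresh)
    return np.sum(v * area * mask, axis=(1, 2)) / 1e6

# Pick the thresholds that maximise correlation with the float time series ...
grid = [(t, s) for t in np.arange(8.0, 20.0, 1.0) for s in np.arange(34.6, 35.4, 0.1)]
t_best, s_best = max(grid, key=lambda ts: np.corrcoef(threshold_transport(*ts),
                                                      leakage_float)[0, 1])

# ... then regress the float leakage on the directly measured flux to
# calibrate an estimate of the total leakage.
direct = threshold_transport(t_best, s_best)
slope, intercept = np.polyfit(direct, leakage_float, 1)
total_estimate = slope * direct + intercept      # calibrated leakage (Sv)
print(t_best, s_best, total_estimate.mean())
```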

Relevance: 100.00%

Abstract:

This investigation determines the accuracy of estimation of methanogenesis by a dynamic mechanistic model against real data determined in a respiration trial, in which cows were fed a wide range of different carbohydrates included in the concentrates. The model was able to predict ECM (energy-corrected milk) very well, while the NDF digestibility of fibrous feed was less well predicted. Methane emissions were predicted quite well, with the exception of one diet containing wheat. The mechanistic model is therefore a helpful tool for estimating methanogenesis based on chemical analysis and dry matter intake, but the prediction can still be improved.

Relevance: 100.00%

Abstract:

In the past decade, a number of mechanistic, dynamic simulation models of several components of the dairy production system have become available. However, their use has been limited due to the detailed technical knowledge and special software required to run them, and the lack of compatibility between models in predicting various metabolic processes in the animal. The first objective of the current study was to integrate the dynamic models of [Brit. J. Nutr. 72 (1994) 679] on rumen function, [J. Anim. Sci. 79 (2001) 1584] on methane production, and [J. Anim. Sci. 80 (2002) 2481] on N partition, together with a new model of P partition. The second objective was to construct a decision support system to analyse nutrient partition between animal and environment. The integrated model combines key environmental pollutants such as N, P and methane within a nutrient-based feed evaluation system. The model was run under different scenarios and the sensitivity of various parameters analysed. A comparison of predictions from the integrated model with the original simulation models showed an improvement in N excretion, since the integrated model uses the dynamic model of [Brit. J. Nutr. 72 (1994) 679] to predict microbial N, which was not represented in detail in the original model. The integrated model can be used to investigate the degree to which production and environmental objectives are antagonistic, and it may help to explain and understand the complex mechanisms involved at the ruminal and metabolic levels. Among the integrated model outputs were the forms of N and P in excreta and methane, which can be used as indices of environmental pollution. © 2004 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

Background: The present paper investigates the question of a suitable basic model for the number of scrapie cases in a holding, and applications of this knowledge to the estimation of the size of the scrapie-affected holding population and the adequacy of control measures within a holding. Is the number of scrapie cases proportional to the size of the holding, in which case holding size should be incorporated into the parameter of the error distribution for the scrapie counts? Or is there a different, potentially more complex, relationship between case count and holding size, in which case the information about the size of the holding would be better incorporated as a covariate in the modelling? Methods: We show that this question can be appropriately addressed via a simple zero-truncated Poisson model, in which the hypothesis of proportionality enters as a special offset model. Model comparisons can be achieved by means of likelihood ratio testing. The procedure is illustrated by means of surveillance data on classical scrapie in Great Britain. Furthermore, the model with the best fit is used to estimate the size of the scrapie-affected holding population in Great Britain by means of two capture-recapture estimators: the Poisson estimator and the generalized Zelterman estimator. Results: No evidence could be found for the hypothesis of proportionality. In fact, there is some evidence that this relationship follows a curved line, which increases for small holdings up to a maximum, after which it declines again. Furthermore, it is pointed out how crucial the correct model choice is when applied to capture-recapture estimation on the basis of zero-truncated Poisson models, as well as on the basis of the generalized Zelterman estimator. Estimators based on the proportionality model return very different and unreasonable estimates for the population sizes. Conclusion: Our results stress the importance of an adequate modelling approach to the association between holding size and the number of cases of classical scrapie within a holding. Reporting artefacts and speculative biological effects are hypothesized as the underlying causes of the observed curved relationship. The lack of adjustment for these artefacts might well render ineffective the current strategies for the control of the disease.
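
A minimal sketch of the offset-versus-covariate comparison described above, assuming hypothetical arrays `cases` (scrapie cases per affected holding, all >= 1) and `size` (holding size); the example data and starting values are stand-ins, not the paper's surveillance data or code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def ztp_negloglik(beta, X, y, offset=0.0):
    """Negative log-likelihood of a zero-truncated Poisson regression
    (the constant log(y!) term is omitted; it cancels in the likelihood ratio)."""
    mu = np.exp(X @ beta + offset)
    return -np.sum(y * np.log(mu) - mu - np.log(1.0 - np.exp(-mu)))

cases = np.array([1, 2, 1, 4, 3, 1, 2, 6, 1, 2])              # invented counts
size = np.array([40, 120, 60, 500, 300, 25, 90, 800, 35, 150.0])
n, log_size = len(cases), np.log(size)

# H0 (proportionality): log(mu) = b0 + log(size), i.e. log(size) as an offset.
X0 = np.ones((n, 1))
fit0 = minimize(ztp_negloglik, x0=np.zeros(1), args=(X0, cases, log_size))

# H1: log(mu) = b0 + b1*log(size), with the coefficient estimated freely.
X1 = np.column_stack([np.ones(n), log_size])
fit1 = minimize(ztp_negloglik, x0=np.array([0.0, 1.0]), args=(X1, cases))

# Likelihood ratio test of the offset model against the free-slope model.
lr = 2.0 * (fit0.fun - fit1.fun)
print("LRT statistic:", lr, "p-value:", chi2.sf(lr, df=1))

# Horvitz-Thompson-type capture-recapture estimate of the total number of
# affected holdings, inflating each observed holding by 1/P(observed).
mu1 = np.exp(X1 @ fit1.x)
print("estimated affected holdings:", np.sum(1.0 / (1.0 - np.exp(-mu1))))
```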

Relevance: 100.00%

Abstract:

Estimation of population size with a missing zero-class is an important problem encountered in epidemiological assessment studies. Fitting a Poisson model to the observed data by the method of maximum likelihood and estimating the population size based on this fit is an approach that has been widely used for this purpose. In practice, however, the Poisson assumption is seldom satisfied. Zelterman (1988) has proposed a robust estimator for unclustered data that works well in a wide class of distributions applicable to count data. In the work presented here, we extend this estimator to clustered data. The estimator requires fitting a zero-truncated homogeneous Poisson model by maximum likelihood and then using a Horvitz-Thompson estimator of population size. This was found to work well when the data follow the hypothesized homogeneous Poisson model. However, when the true distribution deviates from the hypothesized model, the population size was found to be underestimated. In the search for a more robust estimator, we focused on three models that use all clusters with exactly one case, those with exactly two cases, and those with exactly three cases to estimate the probability of the zero-class, and thereby use data collected on all the clusters in the Horvitz-Thompson estimator of population size. The loss in efficiency associated with the gain in robustness was examined in a simulation study. As a trade-off between gain in robustness and loss in efficiency, the model that uses data collected on clusters with at most three cases to estimate the probability of the zero-class was found to be preferred in general. In applications, we recommend obtaining estimates from all three models and making a choice by considering the estimates from the three models, their robustness, and the loss in efficiency. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
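
For concreteness, a minimal sketch of the classic unclustered Zelterman (1988) estimator that this work extends: the Poisson rate is estimated robustly from the frequencies of ones and twos only, and the observed count is then inflated Horvitz-Thompson style. The `counts` array is an invented example; the paper's clustered extensions are not reproduced here.

```python
import numpy as np

def zelterman_estimate(counts):
    """Zelterman (1988) population-size estimate for count data with a
    missing zero-class; robust because it uses only the ones and twos."""
    n = len(counts)                    # number of observed units
    f1 = np.sum(counts == 1)           # units with exactly one case
    f2 = np.sum(counts == 2)           # units with exactly two cases
    lam = 2.0 * f2 / f1                # robust local Poisson-rate estimate
    p0 = np.exp(-lam)                  # estimated zero-class probability
    return n / (1.0 - p0)              # Horvitz-Thompson-style inflation

counts = np.array([1, 1, 2, 1, 1, 3, 1, 2, 1, 1])   # invented cluster counts
print(round(zelterman_estimate(counts), 1))
```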

Relevance: 100.00%

Abstract:

Objectives: To assess the potential source of variation that the surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, that is, a sparse binary outcome variable. A linear mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted by the method of maximum likelihood in SAS. Results: There were many convergence problems. These were resolved using a variety of approaches, including treating all effects as fixed for the initial model building, modelling the variance of a parameter on a logarithmic scale, and centring continuous covariates. The initial model-building process indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons. The statistical test may have lacked sufficient power; the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
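
A minimal sketch of the hierarchical structure this model assumes, written as a data-generating simulation in Python; every numerical value (effect sizes, random-effect SD, patients per surgeon) is invented for illustration and is not taken from the trials.

```python
import numpy as np

rng = np.random.default_rng(0)
n_surgeons, per_surgeon = 43, 32          # ~1380 patients, as in the trials
sigma_surgeon = 0.5                        # SD of random surgeon effect (invented)
beta0, beta_op = -2.4, 0.4                 # intercept and operation effect (invented)

surgeon_effect = rng.normal(0.0, sigma_surgeon, n_surgeons)
surgeon = np.repeat(np.arange(n_surgeons), per_surgeon)   # patients nested in surgeons
laparoscopic = rng.integers(0, 2, surgeon.size)           # randomised operation type

# Mixed logistic model: logit P(complication) = beta0 + beta_op*x + u_surgeon
logit = beta0 + beta_op * laparoscopic + surgeon_effect[surgeon]
p = 1.0 / (1.0 + np.exp(-logit))
complication = rng.random(surgeon.size) < p

print("overall complication rate:", complication.mean())   # ~0.10, a sparse outcome
```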

Relevance: 100.00%

Abstract:

Time-resolved kinetic studies of the reaction of silylene, SiH2, generated by laser flash photolysis of phenylsilane, have been carried out to obtain rate constants for its bimolecular reaction with NO. The reaction was studied in the gas phase over the pressure range 1-100 Torr in SF6 bath gas at five temperatures in the range 299-592 K. The second-order rate constants at 10 Torr fitted the Arrhenius equation log(k / cm^3 molecule^-1 s^-1) = (-11.66 ± 0.01) + (6.20 ± 0.10 kJ mol^-1)/(RT ln 10). The rate constants showed a variation with pressure of a factor of ca. 2 over the available range, almost independent of temperature. The data could not be fitted by RRKM calculations to a simple third-body-assisted association reaction alone. However, a mechanistic model with an additional (pressure-independent) side channel gave a reasonable fit to the data. Ab initio calculations at the G3 level supported a mechanism in which the initial adduct, bent H2SiNO, can ring-close to form cyclo-H2SiNO, which is partially collisionally stabilized. In addition, bent H2SiNO can undergo a low-barrier isomerization reaction leading, via a sequence of steps, ultimately to dissociation products, of which the lowest-energy pair are NH2 + SiO. The rate-controlling barrier for this latter pathway is only 16 kJ mol^-1 below the energy of SiH2 + NO. This is consistent with the kinetic findings. A particular outcome of this work is that, despite the pressure dependence and the effects of the secondary barrier (in the side reaction), the initial encounter of SiH2 with NO occurs at the collision rate. Thus, silylene can be as reactive with odd-electron molecules as with many even-electron species. Some comparisons are drawn with the reactions of CH2 + NO and SiCl2 + NO.
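
A minimal sketch evaluating the fitted Arrhenius expression above at a few temperatures within the studied range; the function name and the choice of temperatures are just for illustration.

```python
import numpy as np

R = 8.314e-3   # gas constant, kJ mol^-1 K^-1

def k_sih2_no(T):
    """Second-order rate constant (cm^3 molecule^-1 s^-1) at 10 Torr,
    evaluated from the Arrhenius fit quoted above."""
    log10_k = -11.66 + 6.20 / (R * T * np.log(10.0))
    return 10.0 ** log10_k

for T in (299.0, 420.0, 592.0):
    print(f"T = {T:.0f} K: k = {k_sih2_no(T):.2e}")
```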

Relevance: 100.00%

Abstract:

The emergent requirements for effective e-learning call for a paradigm shift in instructional design. Constructivist theory and semiotics offer a sound underpinning to enable such revolutionary change by employing the concept of Learning Objects. E-learning guidelines adopted by industry have led successfully to the development of training materials. The inadequacy and deficiency of those methods for Higher Education are identified in this paper. Based on best practice in industry and our empirical research, we present an instructional design model with practical templates for constructivist learning.

Relevance: 100.00%

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored via the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
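
A minimal sketch of the classical building block behind the approach above: a modified Gram-Schmidt orthogonal decomposition used for least-squares parameter estimation in a linear-in-the-parameters model. The paper's rule-base-subspace extension is not reproduced; the regression matrix and coefficients below are invented for illustration.

```python
import numpy as np

def gram_schmidt_ls(P, y):
    """Least-squares estimate of theta in y = P @ theta + e via a
    modified Gram-Schmidt orthogonal decomposition P = Q R."""
    P = P.astype(float).copy()
    n, m = P.shape
    Q = np.empty((n, m))
    R = np.zeros((m, m))
    for k in range(m):
        R[k, k] = np.linalg.norm(P[:, k])
        Q[:, k] = P[:, k] / R[k, k]
        for j in range(k + 1, m):          # orthogonalise the remaining columns
            R[k, j] = Q[:, k] @ P[:, j]
            P[:, j] -= R[k, j] * Q[:, k]
    g = Q.T @ y                            # projections onto the orthogonal basis
    return np.linalg.solve(R, g)           # back-substitution (R is upper triangular)

rng = np.random.default_rng(1)
P = rng.normal(size=(200, 4))              # invented regression matrix
y = P @ np.array([1.0, -0.5, 0.0, 2.0]) + 0.01 * rng.normal(size=200)
print(gram_schmidt_ls(P, y))               # close to [1, -0.5, 0, 2]
```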

Relevance: 100.00%

Abstract:

The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best model generalisation performance from observational data only. The important concepts in achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
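
As one concrete instance of the cross-validation criteria mentioned above, here is a minimal sketch of leave-one-out cross-validation for a linear-in-the-parameter model via the standard PRESS shortcut (for linear smoothers the leave-one-out residual is e_i / (1 - h_ii), with h_ii the leverage, so no refitting is needed); the polynomial candidate structures and toy data are assumptions for illustration, not drawn from the review.

```python
import numpy as np

def press(P, y):
    """PRESS statistic: the sum of squared leave-one-out residuals."""
    theta, *_ = np.linalg.lstsq(P, y, rcond=None)
    resid = y - P @ theta
    H = P @ np.linalg.solve(P.T @ P, P.T)   # hat matrix of the linear smoother
    loo = resid / (1.0 - np.diag(H))        # leave-one-out residuals, no refitting
    return np.sum(loo ** 2)

# Compare candidate model structures by their PRESS score (smaller is better).
rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 100)
y = np.sin(3.0 * x) + 0.1 * rng.normal(size=100)
for degree in (1, 3, 5, 7):
    P = np.vander(x, degree + 1)            # polynomial regressors as candidates
    print(degree, round(press(P, y), 3))
```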

Relevance: 100.00%

Abstract:

The effects of meson fluctuations are studied in a nonlocal generalization of the Nambu–Jona-Lasinio model by including terms of next-to-leading order (NLO) in 1/Nc. In the model with only scalar and pseudoscalar interactions, NLO contributions to the quark condensate are found to be very small. This is a result of cancellation between virtual mesons and Fock terms, which occurs for the parameter sets of most interest. In the quark self-energy, similar cancellations arise in the tadpole diagrams, although not in other NLO pieces, which contribute at the 25% level. The effects on pion properties are also found to be small. NLO contributions from real pi-pi intermediate states increase the sigma meson mass by 30%. In an extended model with vector and axial interactions, there are indications that NLO effects could be larger.

Relevance: 100.00%

Abstract:

Despite a number of earlier studies which seemed to confirm molecular adsorption of water on close-packed surfaces of late transition metals, new controversy has arisen over a recent theoretical work by Feibelman, according to which partial dissociation occurs on the Ru{0001} surface, leading to a mixed (H2O + OH + H) superstructure. Here, we present a refined LEED-IV analysis of the (√3 × √3)R30°-D2O-Ru{0001} structure, explicitly testing this new model by Feibelman. Our results favour the model proposed earlier by Held and Menzel, assuming intact water molecules with almost coplanar oxygen atoms and out-of-plane hydrogen atoms atop the slightly higher oxygen atoms. The partially dissociated model with an almost identical arrangement of oxygen atoms cannot, however, be unambiguously excluded, especially when the single hydrogen atoms are not present in the surface unit cell. In contrast to the earlier LEED-IV analysis, we can clearly exclude a buckled geometry of the oxygen atoms.

Relevance: 100.00%

Abstract:

This article presents a case study comparing an Eulerian chemical transport model (CTM) and a Lagrangian chemical model with measurements taken by aircraft. High-resolution Eulerian integrations produce better point-by-point agreement between model results and measurements than low-resolution integrations. The Lagrangian model requires mixing to be introduced in order to reproduce the measurements.