169 results for reference models
Abstract:
Three kinds of integrable Kondo impurity additions to one-dimensional q-deformed extended Hubbard models are studied by means of the boundary Z(2)-graded quantum inverse scattering method. The boundary K matrices depending on the local magnetic moments of the impurities are presented as nontrivial realisations of the reflection equation algebras in an impurity Hilbert space. The models are solved by using the algebraic Bethe ansatz method, and the Bethe ansatz equations are obtained.
Abstract:
Numerical optimisation methods are increasingly being applied to agricultural systems models to identify the most profitable management strategies. The available optimisation algorithms are reviewed and compared, with the literature and our studies identifying evolutionary algorithms (including genetic algorithms) as superior in this regard to simulated annealing, tabu search, hill-climbing, and direct-search methods. Results of a complex beef property optimisation, using a real-value genetic algorithm, are presented. The relative contributions of the range of operational options and parameters of this method are discussed, and general recommendations are listed to assist practitioners applying evolutionary algorithms to agricultural systems problems. (C) 2001 Elsevier Science Ltd. All rights reserved.
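A real-value genetic algorithm of the kind used for the beef property optimisation can be sketched as follows. This is a minimal illustration, not the authors' implementation: the operator choices (truncation selection with elitism, blend crossover, Gaussian mutation), parameter values, and the toy "gross margin" fitness surface are all assumptions.

```python
import random

def real_valued_ga(fitness, bounds, pop_size=30, generations=100,
                   mutation_rate=0.1, seed=0):
    """Minimal real-valued GA: truncation selection, blend crossover,
    Gaussian mutation, with the top half carried over unchanged (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half as parents (and survivors)
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:  # Gaussian mutation, clipped to bounds
                    child[i] = min(max(child[i] + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Hypothetical smooth "gross margin" surface peaking at management settings (2, 3)
best = real_valued_ga(lambda v: -(v[0] - 2) ** 2 - (v[1] - 3) ** 2,
                      bounds=[(0, 10), (0, 10)])
```

Elitism guarantees the best solution found never degrades between generations, one of the operational choices such recommendations typically cover.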
Abstract:
Background: The measured values of specific traits of occlusion may be subject to significant change due to growth and maturation of the dentofacial structures. Some traits may show improvement while others may show deterioration. Rarely is there an opportunity to examine a sample of occlusions 25 years after the acquisition of the original set of records. This study examines the changes in traits of occlusion in a sample of 46 subjects who were originally examined between 1971 and 1973 and for whom records were again obtained in 1998. Methods: The 46 patients were a sub-group of a previously selected randomised school-based sample for whom study models obtained in 1971-1973 were still available. New models for each patient were obtained in 1998. Of the 46 subjects, only eight had received orthodontic treatment. Results: Assessments of the changes in specific traits were made using the methods proposed in the Harry L Draker, California Modification (HLD Cal Mod) index. This simple index was chosen because its main component traits are well defined and, when analysed separately, reflect changes with time. The total index score gives a broad indication of the global changes in an individual's occlusion. The five basic traits of the HLD index are overjet, overbite, openbite, mandibular protrusion and labio-lingual spread. Three additional traits (ectopic eruption, anterior crowding and posterior crossbite) are used in the HLD Cal Mod index. These traits provided a useful reflection of occlusal changes with time. Measurements were made with reference to the specifications and details outlined in the HLD Cal Mod protocol. The results revealed an increase in total index scores over time, with a significant increase in lower labio-lingual spread associated with an increased score in anterior crowding. Overjet and overbite, however, displayed a significant decrease with time.
Conclusions: These findings are in keeping with previous studies and highlight the importance of time as a significant issue in the assessment of occlusion.
Abstract:
This paper assesses the capacity of local communities and sub-national governments to influence patterns of tourism development, within the context of a globalizing economy. Through a comparison of the contrasting examples of Hawaii and Queensland, the paper indicates the consequences of different approaches to land use regulation. It points to the importance of planning and policy processes that integrate community interests, in order to achieve long-term, sustainable tourism development. Effective regulation of development can minimize the social and environmental impacts of tourism. The paper illustrates how community organizations and sub-national governments can articulate local interests, despite the global demands of investors for more deregulated markets in land.
Abstract:
Impulsivity based on Gray's [Gray, J. A. (1982). The neuropsychology of anxiety: an enquiry into the functions of the septo-hippocampal system. New York: Oxford University Press; Gray, J. A. (1991). The neurophysiology of temperament. In J. Strelau & A. Angleitner (Eds.), Explorations in temperament: international perspectives on theory and measurement. London: Plenum Press] physiological model of personality was hypothesised to be more predictive of goal-oriented criteria within the workplace than scales derived from Eysenck's [Eysenck, H. J. (1967). The biological basis of personality. Springfield, IL: Charles C. Thomas] physiological model of personality. Results confirmed the hypothesis and also showed that Gray's scale of Impulsivity was generally a better predictor than attributional style and interest in money. Results were interpreted as providing support for Gray's Behavioural Activation System, which moderates response to reward. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The Eysenck Personality Questionnaire-Revised (EPQ-R), the Eysenck Personality Profiler Short Version (EPP-S), and the Big Five Inventory (BFI-V4a) were administered to 135 postgraduate students of business in Pakistan. Whilst the Extraversion and Neuroticism scales from the three questionnaires were highly correlated, it was found that Agreeableness was most highly correlated with Psychoticism in the EPQ-R and Conscientiousness was most highly correlated with Psychoticism in the EPP-S. Principal component analyses with varimax rotation were carried out. The analyses generally suggested that the five-factor model rather than the three-factor model was more robust and better for interpretation of all the higher-order scales of the EPQ-R, EPP-S, and BFI-V4a in the Pakistani data. Results show that the superiority of the five-factor solution stems from the inclusion of a broader variety of personality scales in the input data, whereas Eysenck's three-factor solution seems to be best when a less complete but possibly more important set of variables is input. (C) 2001 Elsevier Science Ltd. All rights reserved.
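The varimax step mentioned above can be sketched with Kaiser's classic iterative algorithm: find an orthogonal rotation of the loading matrix that maximises the variance of the squared loadings, so each variable loads strongly on as few components as possible. This is a generic implementation, not the study's analysis, and the loading matrix below is randomly generated for illustration.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=50, tol=1e-6):
    """Kaiser's varimax rotation: returns the orthogonally rotated
    loading matrix (p variables x k components)."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # SVD of the gradient of the varimax criterion gives the update for R
        u, s, vh = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0))))
        R = u @ vh
        d_new = s.sum()
        if d_new < d * (1 + tol):   # criterion stopped improving
            break
        d = d_new
    return loadings @ R

# Illustrative 6-variable, 2-component loading matrix (random, not study data)
Phi = np.random.default_rng(0).standard_normal((6, 2))
rotated = varimax(Phi)
```

Because the rotation is orthogonal, communalities (row sums of squared loadings) and the total loading norm are preserved; only the distribution of loading across components changes.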
Abstract:
In this paper, we look at three models (mixture, competing risk and multiplicative) involving two inverse Weibull distributions. We study the shapes of the density and failure-rate functions and discuss graphical methods to determine if a given data set can be modelled by one of these models. (C) 2001 Elsevier Science Ltd. All rights reserved.
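The three constructions can be written down directly from the inverse Weibull distribution function, taken here in the common form F(x) = exp(-(alpha/x)^beta) for x > 0. The sketch below shows the mixture, competing-risk, and multiplicative combinations of two such distributions, plus the failure rate of the mixture; the parameterisation is a standard one assumed for illustration, not necessarily the paper's notation.

```python
import math

def inv_weibull_cdf(x, alpha, beta):
    """Inverse Weibull distribution function F(x) = exp(-(alpha/x)^beta), x > 0."""
    return math.exp(-(alpha / x) ** beta)

def inv_weibull_pdf(x, alpha, beta):
    """Density obtained by differentiating F."""
    return beta * alpha**beta * x ** (-beta - 1) * math.exp(-(alpha / x) ** beta)

def mixture_pdf(x, p, a1, b1, a2, b2):
    """Mixture model: weighted sum of the two densities."""
    return p * inv_weibull_pdf(x, a1, b1) + (1 - p) * inv_weibull_pdf(x, a2, b2)

def competing_risk_sf(x, a1, b1, a2, b2):
    """Competing-risk model: lifetime is the minimum of the two components,
    so the survival function is the product of the two survivals."""
    return (1 - inv_weibull_cdf(x, a1, b1)) * (1 - inv_weibull_cdf(x, a2, b2))

def multiplicative_cdf(x, a1, b1, a2, b2):
    """Multiplicative model: the distribution function is the product
    of the two component CDFs (the maximum of two lifetimes)."""
    return inv_weibull_cdf(x, a1, b1) * inv_weibull_cdf(x, a2, b2)

def mixture_failure_rate(x, p, a1, b1, a2, b2):
    """Failure rate h(x) = f(x) / (1 - F(x)) for the mixture model."""
    F = p * inv_weibull_cdf(x, a1, b1) + (1 - p) * inv_weibull_cdf(x, a2, b2)
    return mixture_pdf(x, p, a1, b1, a2, b2) / (1 - F)
```

Plotting these density and failure-rate functions over a grid of x values is the kind of graphical check the paper discusses for judging whether a data set fits one of the three models.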
Abstract:
We solve the Sp(N) Heisenberg and SU(N) Hubbard-Heisenberg models on the anisotropic triangular lattice in the large-N limit. These two models may describe, respectively, the magnetic and electronic properties of the family of layered organic materials κ-(BEDT-TTF)(2)X. The Heisenberg model is also relevant to the frustrated antiferromagnet Cs2CuCl4. We find rich phase diagrams for each model. The Sp(N) antiferromagnet is shown to have five different phases as a function of the size of the spin and the degree of anisotropy of the triangular lattice. The effects of fluctuations at finite N are also discussed. For parameters relevant to Cs2CuCl4 the ground state either exhibits incommensurate spin order, or is in a quantum-disordered phase with deconfined spin-1/2 excitations and topological order. The SU(N) Hubbard-Heisenberg model exhibits an insulating dimer phase, an insulating box phase, a semi-metallic staggered flux phase (SFP), and a metallic uniform phase. The uniform and SFP phases exhibit a pseudogap. A metal-insulator transition occurs at intermediate values of the interaction strength.
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, there are many people who need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, the comparison of simulation data with that of commercial models leads only to the detection, not the isolation, of errors. Identifying errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
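The final step, localising a residual in one of the feature-matrix subspaces, can be sketched with a least-squares projection test: project the residual onto each candidate subspace and pick the class leaving the smallest projection error. The feature matrices and residual below are toy stand-ins, not ASM 1 quantities.

```python
import numpy as np

def isolate_error_class(residual, feature_matrices):
    """Return the index of the error class whose feature subspace is
    closest to the residual vector, by least-squares projection."""
    dists = []
    for F in feature_matrices:
        # Best approximation of the residual within span(F)
        coeffs, *_ = np.linalg.lstsq(F, residual, rcond=None)
        dists.append(np.linalg.norm(residual - F @ coeffs))
    return int(np.argmin(dists))

# Two hypothetical error classes with distinct residual signatures
F0 = np.array([[1.0], [0.0], [0.0]])   # class 0: error excites channel 1 only
F1 = np.array([[0.0], [1.0], [1.0]])   # class 1: error excites channels 2 and 3
r = np.array([0.05, 2.0, 2.1])         # observed residual (noisy class-1 signature)
cls = isolate_error_class(r, [F0, F1])
```

With well-separated feature matrices the classification is robust to moderate noise, which is what makes the approach practical for isolating single and simultaneous coding errors.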
Abstract:
Despite their limitations, linear filter models continue to be used to simulate the receptive field properties of cortical simple cells. For theoreticians interested in large scale models of visual cortex, a family of self-similar filters represents a convenient way in which to characterise simple cells in one basic model. This paper reviews research on the suitability of such models, and goes on to advance biologically motivated reasons for adopting a particular group of models in preference to all others. In particular, the paper describes why the Gabor model, so often used in network simulations, should be dropped in favour of a Cauchy model, both on the grounds of frequency response and mutual filter orthogonality.
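One commonly cited frequency-response objection to the Gabor model can be made concrete numerically: an even-symmetric Gabor filter integrates to a nonzero value, i.e. it has a nonzero DC response, so it responds to uniform input. A minimal 1-D check, with illustrative parameter values:

```python
import math

def gabor_even(x, sigma, freq):
    """Even-symmetric 1D Gabor: Gaussian envelope times a cosine carrier."""
    return math.exp(-x * x / (2.0 * sigma * sigma)) * math.cos(2.0 * math.pi * freq * x)

# Numerical DC response (integral over x). Analytically this equals
# sqrt(2*pi)*sigma*exp(-2*(pi*freq*sigma)**2): small but strictly positive.
dx = 0.01
dc = sum(gabor_even(i * dx, sigma=1.0, freq=0.5) * dx for i in range(-1000, 1001))
```

A simple cell should not respond to uniform illumination, so a filter family whose members can be made exactly DC-free is preferable on frequency-response grounds.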
Abstract:
Selection in the thymus restricted by MHC and self-peptide shapes the diverse reactivities of the T-cell population which subsequently seeds into the peripheral tissues, in anticipation of the universe of pathogen antigens to which the organism may be exposed. A necessary corollary is the potential for T-cell self-reactivity (autoimmunity) in the periphery. Transgenic mouse models in which transgene expression in the thymus is prevented or excluded have been particularly useful for determining the immunological outcome when T-cells encounter transgene-encoded 'self' antigen in peripheral tissues. Data suggest that non-mutually exclusive mechanisms of T-cells 'ignoring' self-antigen, T-cell deletion, T-cell anergy and T-cell immunoregulation have evolved to prevent self-reactivity while maintaining T-cell diversity. The peripheral T-cell repertoire, far from being static following maturation through the thymus, is in a dynamic state determined by these peripheral selective and immunoregulatory influences. This article reviews the evidence with particular reference to CD8-positive T-cells.
Abstract:
Five kinetic models for the adsorption of hydrocarbons on activated carbon are compared and investigated in this study. These models assume different mass-transfer mechanisms within the porous carbon particle. They are: (a) dual pore and surface diffusion (MSD), (b) macropore, surface, and micropore diffusion (MSMD), (c) macropore, surface, and finite mass exchange (FK), (d) finite mass exchange (LK), and (e) macropore and micropore diffusion (BM) models. These models are discriminated using the single-component kinetic data of ethane and propane, as well as the multicomponent kinetic data of their binary mixtures, measured on two commercial activated carbon samples (Ajax and Norit) under various conditions. Adsorption energetic heterogeneity is included in all models to account for the heterogeneity of the system. It is found that, in general, the models assuming a diffusion flux of the adsorbed phase along the particle scale give a better description of the kinetic data.
Abstract:
Objective-To determine reference values and test variability for glucose tolerance tests (GTT), insulin tolerance tests (ITT), and insulin sensitivity tests (IST) in cats. Animals-32 clinically normal cats. Procedure-GTT, ITT, and IST were performed on consecutive days. Tolerance intervals (ie, reference values) were calculated as means +/- 2.397 SD for plasma glucose and insulin concentrations, half-life of glucose (T-1/2glucose), rate constants for glucose disappearance (K-glucose and K-itt), and insulin sensitivity index (S-I). Tests were repeated after 6 weeks in 8 cats to determine test variability. Results-Reference values for T-1/2glucose, K-glucose, and fasting plasma glucose and insulin concentrations during GTT were 45 to 74 minutes, 0.93 to 1.54 %/min, 37 to 104 mg/dl, and 2.8 to 20.6 µU/ml, respectively. Mean values did not differ between the 2 tests. Coefficients of variation for T-1/2glucose, K-glucose, and fasting plasma glucose and insulin concentrations were 20, 20, 11, and 23%, respectively. Reference values for K-itt were 1.14 to 7.3 %/min, and for S-I were 0.57 to 10.99 x 10(-4) min/µU/ml. Mean values did not differ between the 2 tests performed 6 weeks apart. Coefficients of variation for K-itt and S-I were 60 and 47%, respectively. Conclusions and Clinical Relevance-GTT, ITT, and IST can be performed in cats using standard protocols. Knowledge of reference values and test variability will enable researchers to better interpret test results for assessment of glucose tolerance, pancreatic beta-cell function, and insulin sensitivity in cats.
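The quoted half-lives and disappearance rate constants are linked by the usual first-order relation K = 100 · ln 2 / T1/2 (in %/min), assuming a mono-exponential decline of plasma glucose. Plugging in the half-life reference interval endpoints from the abstract reproduces the quoted K-glucose interval to rounding:

```python
import math

def k_glucose(t_half_min):
    """Rate constant for glucose disappearance (%/min), assuming a
    mono-exponential (first-order) decline: K = 100 * ln 2 / T1/2."""
    return 100.0 * math.log(2.0) / t_half_min

# Endpoints of the T-1/2glucose reference interval quoted above (45 to 74 min)
# give roughly 1.54 and 0.94 %/min, matching the quoted 0.93-1.54 %/min interval.
k_slow, k_fast = k_glucose(74.0), k_glucose(45.0)
```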
Abstract:
Performance indicators in the public sector have often been criticised for being inadequate and not conducive to analysing efficiency. The main objective of this study is to use data envelopment analysis (DEA) to examine the relative efficiency of Australian universities. Three performance models are developed, namely, overall performance, performance on delivery of educational services, and performance on fee-paying enrolments. The findings based on 1995 data show that the university sector was performing well on technical and scale efficiency but there was room for improving performance on fee-paying enrolments. There were also small slacks in input utilisation. More universities were operating at decreasing returns to scale, indicating a potential to downsize. DEA helps in identifying the reference sets for inefficient institutions and objectively determines productivity improvements. As such, it can be a valuable benchmarking tool for educational administrators and assist in more efficient allocation of scarce resources. In the absence of market mechanisms to price educational outputs, which renders traditional production or cost functions inappropriate, universities are particularly obliged to seek alternative efficiency analysis methods such as DEA.
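DEA efficiency scores of the kind discussed above are obtained by solving one linear program per institution. A minimal sketch of the input-oriented, constant-returns (CCR) envelopment model using scipy; the three-unit, one-input, one-output data set is purely illustrative, not the 1995 university data, and the study's own model specification may differ:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, unit):
    """Input-oriented CCR efficiency of one decision-making unit (DMU).
    X: inputs (m x n), Y: outputs (s x n); columns are DMUs.
    Solves: min theta  s.t.  X@lam <= theta*x0,  Y@lam >= y0,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # variables: [theta, lam_1..lam_n]
    A_ub = np.vstack([
        np.c_[-X[:, [unit]], X],                # X@lam - theta*x0 <= 0
        np.c_[np.zeros((s, 1)), -Y],            # -Y@lam <= -y0
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, unit]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Toy data: three hypothetical institutions, one input, one output
X = np.array([[2.0, 4.0, 4.0]])
Y = np.array([[1.0, 2.0, 1.0]])
scores = [dea_ccr_input(X, Y, j) for j in range(3)]
```

A score of 1 marks an efficient unit; a score below 1 gives the proportional input reduction needed to reach the frontier, and the optimal lambda weights identify the reference set of peer institutions for each inefficient one.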