13 results for Unified Model Reference

in University of Queensland eSpace - Australia


Relevance:

30.00%

Publisher:

Abstract:

The structure and function of the pharyngeal jaw apparatus (PJA) and postpharyngeal alimentary tract of Arrhamphus sclerolepis krefftii, an herbivorous hemiramphid, were investigated by dissection, light and scanning electron microscopy, and X-ray analysis of live specimens. A simple model of PJA operation is proposed, consisting of an adductive power stroke of the third pharyngobranchial that draws it posteriorly while the fifth ceratobranchial is adducted, and a return stroke in which the third pharyngobranchial bone is drawn anteriorly during abduction of the fifth ceratobranchial. Teeth in the posteromedial region of the PJA are eroded into an occlusion zone where the teeth of the third pharyngobranchial are spatulate incisiform and face posteriorly in opposition to the rostrally oriented spatulate incisiform teeth in the wear zone of the fifth ceratobranchial. The shape of the teeth and their pedestals (bone of attachment) is consistent with the model and with the forces likely to operate on the elements of the PJA during mastication. The role of pharyngeal tooth replacement in maintaining the occlusal surfaces in the PJA during growth is described. The postpharyngeal alimentary tract of A. sclerolepis krefftii comprises a stomachless cylinder that attenuates gradually as it passes straight to the anus, interrupted only by a rectal valve. The ratio of gut length to standard length is about 0.5. Despite superficial similarities to the cichlid PJA (Stiassny and Jensen [1987] Bull Mus Comp Zool 151: 269-319), the hemiramphid PJA differs in the fusion of the third pharyngobranchial bones, the anterior orientation of the teeth in the second pharyngobranchials and the fifth ceratobranchial, the presence of a slide-like diarthrosis between the heads of the fourth epibranchials and the third pharyngobranchial, the occlusion zone of constantly wearing teeth, and the unusual form of the muscularis craniopharyngobranchialis. The functional relationship between these structures is explained and the consequence for the fish of a complex PJA and a simple gut is discussed. (C) 2002 Wiley-Liss, Inc.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of this article is to critically review the literature to examine factors that are most consistently related to employment outcome following traumatic brain injury (TBI), with a particular focus on metacognitive skills. It also aims to develop a conceptual model of factors related to employment outcome. Method: The first stage of the review considered 85 studies published between 1980 and December 2003 which investigated factors associated with employment outcome following TBI. English-language studies were identified through searches of Medline and PsycINFO, as well as manual searches of journals and reference lists. The studies were evaluated and rated by two independent raters (Kappa = 0.835) according to the quality of their methodology based upon nine criteria. Fifty studies met the criteria for inclusion in the second stage of the review, which examined the relationship between a broad range of variables and employment outcome. Results: The factors most consistently associated with employment outcome included pre-injury occupational status, functional status at discharge, global cognitive functioning, perceptual ability, executive functioning, involvement in vocational rehabilitation services and emotional status. Conclusions: A conceptual model is presented which emphasises the importance of metacognitive, emotional and social environment factors for improving employment outcome.
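
The reported inter-rater reliability (Kappa = 0.835) is Cohen's kappa: observed agreement corrected for the agreement expected by chance from each rater's marginals. A minimal sketch of the calculation; the ten inclusion ratings below are invented for illustration, not the review's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(count_a[c] * count_b[c] for c in categories) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude quality ratings by two independent raters:
a = ["inc", "inc", "exc", "inc", "exc", "inc", "inc", "exc", "inc", "inc"]
b = ["inc", "inc", "exc", "inc", "inc", "inc", "inc", "exc", "inc", "inc"]
print(cohens_kappa(a, b))  # ~0.737 for this toy sample
```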

Relevance:

30.00%

Publisher:

Abstract:

Smallholder farmers in Africa practice traditional cropping techniques such as intercropping. Intercropping is thought to offer higher productivity and resource utilisation than sole cropping. In this study, risk associated with maize-bean intercropping was evaluated by quantifying long-term yield in both intercropping and sole cropping in a semi-arid region of South Africa (Bloemfontein, Free State) with reference to rainfall variability. The crop simulation model was run with different cultural practices (planting date and plant density) for 52 summer crop growing seasons (1950/1951-2001/2002). Eighty-one scenarios, consisting of three levels each of initial soil water, planting date, maize population, and bean population, were simulated. From the simulation outputs, the total land equivalent ratio (LER) was greater than one. The intercrop (equivalent to sole maize) had greater energy value (EV) than sole beans, and the intercrop (equivalent to sole beans) had greater monetary value (MV) than sole maize. From these results, it can be concluded that maize-bean intercropping is advantageous for this semi-arid region. Soil water at planting was the most important of all scenario factors, followed by planting date. Irrigation application at planting, November/December planting, and high plant density of maize for EV and of beans for MV can be among the most effective cultural practices in the study region. With regard to rainfall variability, seasonal (October-April) rainfall positively affected EV and MV, but not LER. There was more intercrop production in La Niña years than in El Niño years. Thus, better cultural practices may be selected to maximize maize-bean intercrop yields for specific seasons in the semi-arid region based on the global seasonal outlook. (c) 2004 Elsevier B.V. All rights reserved.
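
The advantage claim rests on the total land equivalent ratio, LER = (intercrop maize yield / sole maize yield) + (intercrop bean yield / sole bean yield), with yields also convertible to energy and monetary value. A minimal sketch of the three indices; the yields, energy contents, and prices below are invented for illustration, not values from the study:

```python
def land_equivalent_ratio(y_maize_ic, y_bean_ic, y_maize_sole, y_bean_sole):
    """Total LER: summed ratios of intercrop to sole-crop yields.
    LER > 1 means intercropping needs less land for the same output."""
    return y_maize_ic / y_maize_sole + y_bean_ic / y_bean_sole

# Hypothetical season yields in kg/ha (illustrative only):
y_maize_ic, y_bean_ic = 2400.0, 600.0
y_maize_sole, y_bean_sole = 3000.0, 1200.0

ler = land_equivalent_ratio(y_maize_ic, y_bean_ic, y_maize_sole, y_bean_sole)

# Energy value (MJ/ha) and monetary value ($/ha) of the intercrop, with
# assumed energy contents and farm-gate prices per kg:
ENERGY = {"maize": 15.2, "bean": 14.0}   # MJ/kg, assumed
PRICE  = {"maize": 0.15, "bean": 0.60}   # $/kg, assumed
ev = y_maize_ic * ENERGY["maize"] + y_bean_ic * ENERGY["bean"]
mv = y_maize_ic * PRICE["maize"] + y_bean_ic * PRICE["bean"]
print(f"LER={ler:.2f}  EV={ev:.0f} MJ/ha  MV=${mv:.0f}/ha")  # LER=1.30 here
```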

Relevance:

30.00%

Publisher:

Abstract:

We present unified, systematic derivations of schemes in the two known measurement-based models of quantum computation. The first model (introduced by Raussendorf and Briegel, [Phys. Rev. Lett. 86, 5188 (2001)]) uses a fixed entangled state, adaptive measurements on single qubits, and feedforward of the measurement results. The second model (proposed by Nielsen, [Phys. Lett. A 308, 96 (2003)] and further simplified by Leung, [Int. J. Quant. Inf. 2, 33 (2004)]) uses adaptive two-qubit measurements that can be applied to arbitrary pairs of qubits, and feedforward of the measurement results. The underlying principle of our derivations is a variant of teleportation introduced by Zhou, Leung, and Chuang, [Phys. Rev. A 62, 052316 (2000)]. Our derivations unify these two measurement-based models of quantum computation and provide significantly simpler schemes.
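
The teleportation variant of Zhou, Leung, and Chuang can be checked directly in a few lines: entangle the input with a |+> ancilla via CZ, measure the input qubit in the X basis, and feed the outcome forward as a Pauli correction, leaving H|psi> on the ancilla. A minimal numpy sketch of this one-bit teleportation primitive; it is a generic illustration of the principle, not the paper's full derivations:

```python
import numpy as np

# One-bit teleportation: the primitive behind both measurement-based models.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1])

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                      # arbitrary input state

plus = np.array([1, 1]) / np.sqrt(2)
state = CZ @ np.kron(psi, plus)                 # entangle input with |+> ancilla

for m in (0, 1):                                # both X-basis outcomes on qubit 1
    bra = H @ np.eye(2)[:, m]                   # <+| if m = 0, <-| if m = 1
    out = np.kron(bra.conj(), np.eye(2)) @ state
    out /= np.linalg.norm(out)
    out = np.linalg.matrix_power(X, m) @ out    # feedforward correction X^m
    overlap = abs(np.vdot(H @ psi, out))
    print(f"outcome {m}: |<H psi|out>| = {overlap:.6f}")  # 1.0 for both
```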

Relevance:

30.00%

Publisher:

Abstract:

The application of nonlocal density functional theory (NLDFT) to determine pore size distribution (PSD) of activated carbons using a nongraphitized carbon black, instead of graphitized thermal carbon black, as a reference system is explored. We show that in this case nitrogen and argon adsorption isotherms in activated carbons are precisely correlated by the theory, and such an excellent correlation would never be possible if the pore wall surface were assumed to be identical to that of graphitized carbon black. It suggests that pore wall surfaces of activated carbon are closer to those of amorphous solids because of defects of the crystalline lattice, finite pore length, the presence of active centers, etc. Application of the NLDFT adapted to amorphous solids resulted in quantitative description of N-2 and Ar adsorption isotherms on nongraphitized carbon black BP280 at their respective boiling points. In the present paper we determined solid-fluid potentials from experimental adsorption isotherms on nongraphitized carbon black and subsequently used those potentials to model adsorption in slit pores and generate a corresponding set of local isotherms, which we used to determine the PSD functions of different activated carbons. (c) 2005 Elsevier Ltd. All rights reserved.
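
Once local isotherms have been generated for a grid of pore widths, the PSD step is an inversion of the adsorption integral equation N_exp(P) = sum_i f(w_i) N_local(P, w_i) with f >= 0. A minimal sketch of that inversion by non-negative least squares; the Langmuir-like kernel below is a toy stand-in for real NLDFT local isotherms:

```python
import numpy as np
from scipy.optimize import nnls

pressures = np.logspace(-5, 0, 60)          # relative pressures P/P0
widths = np.linspace(0.4, 4.0, 30)          # pore widths in nm (assumed grid)

def local_isotherm(p, w):
    # Toy local isotherm: narrower pores fill at lower relative pressure.
    p_fill = 10.0 ** (-4.0 / w)
    return p / (p + p_fill)

kernel = np.array([[local_isotherm(p, w) for w in widths] for p in pressures])

# Synthetic "experimental" isotherm from a known bimodal PSD, plus noise:
f_true = (np.exp(-((widths - 0.8) / 0.15) ** 2)
          + 0.5 * np.exp(-((widths - 2.0) / 0.3) ** 2))
n_exp = kernel @ f_true + np.random.default_rng(1).normal(0, 1e-3, len(pressures))

f_psd, residual = nnls(kernel, n_exp)       # pore size distribution estimate
print(f"residual = {residual:.4f}; PSD peak near {widths[f_psd.argmax()]:.2f} nm")
```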

Relevance:

30.00%

Publisher:

Abstract:

Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model to account for the correlations between the gene profiles and to enable covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication, for example, time-course experiments by using time as a covariate, and to cross-sectional experiments by using categorical covariates to represent the different experimental classes. Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm, for which the E (expectation) and M (maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets, representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data. In these examples, relevant clusters of the genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is also considered.
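
The closed-form E and M steps are those of the standard normal mixture model, which the paper extends with random effects. A minimal sketch of that baseline EM fit on toy profiles; the plain diagonal-covariance mixture here is a simplification, not the paper's full random-effects model:

```python
import numpy as np

def em_gaussian_mixture(X, g, n_iter=100, seed=0):
    """EM for a g-component normal mixture with diagonal covariances."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pi = np.full(g, 1.0 / g)
    mu = X[rng.choice(n, g, replace=False)]          # initialise at random profiles
    var = np.tile(X.var(axis=0), (g, 1))
    for _ in range(n_iter):
        # E-step: posterior probability tau[i, k] that gene i is in cluster k.
        log_dens = -0.5 * (((X[:, None, :] - mu) ** 2 / var).sum(-1)
                           + np.log(2 * np.pi * var).sum(-1)) + np.log(pi)
        log_dens -= log_dens.max(axis=1, keepdims=True)
        tau = np.exp(log_dens)
        tau /= tau.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights, means and variances.
        nk = tau.sum(axis=0)
        pi = nk / n
        mu = (tau.T @ X) / nk[:, None]
        var = (tau.T @ X**2) / nk[:, None] - mu**2 + 1e-8
    return pi, mu, tau

# Toy "gene profiles": 210 genes, 6 conditions, 3 underlying clusters.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(m, 0.4, size=(70, 6)) for m in (-1.0, 0.0, 1.5)])
pi, mu, tau = em_gaussian_mixture(X, g=3)
print(pi, tau.argmax(axis=1)[:10])
```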

Relevance:

30.00%

Publisher:

Abstract:

Background: Published birthweight references in Australia do not fully take into account constitutional factors that influence birthweight and therefore may not provide an accurate reference to identify the infant with abnormal growth. Furthermore, studies in other regions that have derived adjusted (customised) birthweight references have applied untested assumptions in the statistical modelling. Aims: To validate the customised birthweight model and to produce a reference set of coefficients for estimating a customised birthweight that may be useful for maternity care in Australia and for future research. Methods: De-identified data were extracted from the clinical database for all births at the Mater Mother's Hospital, Brisbane, Australia, between January 1997 and June 2005. Births with missing data for the variables under study were excluded. In addition the following were excluded: multiple pregnancies, births at less than 37 completed weeks' gestation, stillbirths, and major congenital abnormalities. Multivariate analysis was undertaken. A double cross-validation procedure was used to validate the model. Results: The study of 42 206 births demonstrated that, for statistical purposes, birthweight is normally distributed. Coefficients for the derivation of customised birthweight in an Australian population were developed and the statistical model is demonstrably robust. Conclusions: This study provides empirical data as to the robustness of the model to determine customised birthweight. Further research is required to define where normal physiology ends and pathology begins, and which segments of the population should be included in the construction of a customised birthweight standard.
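
The coefficient estimation and the double cross-validation check can be illustrated in a few lines: fit the regression on one half of the sample, predict the other half, then swap. A minimal sketch on simulated data; the covariates, coefficients, and sample below are invented for illustration and are not the Mater Hospital data or the paper's actual model terms:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
gest = rng.normal(280, 8, n)                 # gestation (days)
m_height = rng.normal(165, 6, n)             # maternal height (cm)
m_weight = rng.normal(65, 10, n)             # maternal weight (kg)
sex = rng.integers(0, 2, n)                  # 1 = male
X = np.column_stack([np.ones(n), gest, m_height, m_weight, sex])
true_beta = np.array([-3000.0, 20.0, 5.0, 8.0, 120.0])  # assumed effects
bw = X @ true_beta + rng.normal(0, 350, n)   # birthweight (g)

half = rng.permutation(n)
a, b = half[: n // 2], half[n // 2:]
for fit_idx, test_idx in [(a, b), (b, a)]:   # double cross-validation
    beta, *_ = np.linalg.lstsq(X[fit_idx], bw[fit_idx], rcond=None)
    resid = bw[test_idx] - X[test_idx] @ beta
    print(f"out-of-sample RMSE = {resid.std():.0f} g")
```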

Relevance:

30.00%

Publisher:

Abstract:

Objectives: In this paper, we present a unified electrodynamic heart model that permits simulations of the body surface potentials generated by the heart in motion. The inclusion of motion in the heart model significantly improves the accuracy of the simulated body surface potentials and therefore also the 12-lead ECG. Methods: The key step is to construct an electromechanical heart model. The cardiac excitation propagation is simulated by an electrical heart model, and the resulting cardiac active forces are used to calculate the ventricular wall motion based on a mechanical model. The relative positions of the source and field points change during systole and diastole; these changes can be obtained and then used to calculate the body surface ECG based on the electrical heart-torso model. Results: An electromechanical biventricular heart model is constructed and a standard 12-lead ECG is simulated. Compared with a simulated ECG based on the static electrical heart model, the simulated ECG based on the dynamic heart model agrees more closely with a clinically recorded ECG, especially for the ST segment and T wave of the V1-V6 leads. For simulation of mild myocardial ischemia, ST segment and T wave changes can be observed in the ECG simulated with the dynamic heart model, while the ST segment and T wave of the ECG simulated with a static heart model are almost unchanged compared with a normal ECG. Conclusions: This study confirms the importance of the mechanical factor in ECG simulation. The dynamic heart model could provide more accurate ECG simulation, especially for myocardial ischemia or infarction, since the main ECG changes occur at the ST segment and T wave, which correspond to the cardiac systole and diastole phases.
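
Why motion matters can be seen already in the simplest forward model: the potential of a current dipole in an unbounded homogeneous conductor, phi(r) = p . (r - r0) / (4 pi sigma |r - r0|^3), depends on the source position r0, so a wall that carries the source with it changes the surface potential. The toy sketch below compares a static and a moving source at one fixed electrode; the conductivity, geometry, and dipole waveform are invented and stand in for the full heart-torso model:

```python
import numpy as np

SIGMA = 0.2                                   # tissue conductivity (S/m), assumed

def dipole_potential(r, r0, p):
    """Field of a current dipole p at r0 in an infinite homogeneous medium."""
    d = r - r0
    return p @ d / (4 * np.pi * SIGMA * np.linalg.norm(d) ** 3)

electrode = np.array([0.10, 0.0, 0.0])        # electrode 10 cm from heart centre
t = np.linspace(0, 1, 200)                    # one cardiac cycle (normalised)
p = np.outer(np.sin(np.pi * t) ** 2, [1e-6, 0, 0])   # toy dipole moment (A*m)

static = [dipole_potential(electrode, np.zeros(3), p_i) for p_i in p]
moving = [dipole_potential(electrode,
                           np.array([0.005 * np.sin(2 * np.pi * ti), 0, 0]),
                           p_i)               # wall displacement of ~5 mm
          for ti, p_i in zip(t, p)]
print(f"peak potential static: {max(static):.2e} V, moving: {max(moving):.2e} V")
```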

Relevance:

30.00%

Publisher:

Abstract:

In the absence of an external frame of reference (i.e., in background-independent theories such as general relativity), physical degrees of freedom must describe relations between systems. Using a simple model, we investigate how such a relational quantum theory naturally arises by promoting reference systems to the status of dynamical entities. Our goal is twofold. First, we demonstrate using elementary quantum theory how any quantum mechanical experiment admits a purely relational description at a fundamental level. Second, we describe how the original non-relational theory approximately emerges from the fully relational theory when reference systems become semi-classical. Our technique is motivated by a Bayesian approach to quantum mechanics, and relies on the noiseless subsystem method of quantum information science used to protect quantum states against undesired noise. The relational theory naturally predicts a fundamental decoherence mechanism, so an arrow of time emerges from a time-symmetric theory. Moreover, our model circumvents the problem of the collapse of the wave packet as the probability interpretation is only ever applied to diagonal density operators. Finally, the physical states of the relational theory can be described in terms of spin networks introduced by Penrose as a combinatorial description of geometry, and widely studied in the loop formulation of quantum gravity. Thus, our simple bottom-up approach (starting from the semiclassical limit to derive the fully relational quantum theory) may offer interesting insights on the low energy limit of quantum gravity.

Relevance:

30.00%

Publisher:

Abstract:

This article first summarizes some available experimental results on the frictional behaviour of contact interfaces and briefly recalls typical frictional experiments and relationships applicable to rock mechanics; a unified description covering the entire frictional behaviour is then formulated from these experimental results. Applied with a stick and slip decomposition algorithm, the description captures the stick-slip instability phenomena observed in rock experiments without using the so-called state variable, thus avoiding the related numerical difficulties. It has been implemented in our finite element code, which uses the node-to-point contact element strategy proposed by the authors to handle frictional contact between multiple finite-deformation bodies with stick and finite frictional slip, and is applied here to simulate the frictional behaviour of rocks to show its usefulness and efficiency.
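
The stick and slip decomposition can be illustrated at a single contact point under Coulomb friction: an elastic (stick) predictor for the tangential traction is computed first, and if it leaves the friction cone the point is declared slipping and the traction is projected back onto the cone. A minimal sketch under these standard assumptions; the stiffness, pressure, and increments are invented, and this is not the paper's node-to-point contact element:

```python
import numpy as np

def friction_update(t_tang_old, k_t, du_tang, p_n, mu):
    """Return (new tangential traction, slip increment, state) at one point."""
    t_trial = t_tang_old + k_t * du_tang          # elastic (stick) predictor
    if np.linalg.norm(t_trial) <= mu * p_n:       # inside the cone: stick
        return t_trial, np.zeros_like(du_tang), "stick"
    direction = t_trial / np.linalg.norm(t_trial)
    t_new = mu * p_n * direction                  # project onto the cone: slip
    slip = (np.linalg.norm(t_trial) - mu * p_n) / k_t * direction
    return t_new, slip, "slip"

# Tangential traction over a loading path with constant normal pressure:
t, p_n, mu, k_t = np.zeros(2), 1.0e6, 0.6, 1.0e9   # Pa, -, Pa/m (assumed)
for du in [np.array([2e-4, 0.0])] * 5:             # repeated tangential increments
    t, slip, state = friction_update(t, k_t, du, p_n, mu)
    print(state, t, np.linalg.norm(slip))          # sticks, then begins to slip
```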

Relevance:

30.00%

Publisher:

Abstract:

The ontological analysis of conceptual modelling techniques has become increasingly popular. Related research has explored the ontological deficiencies not only of classical techniques such as ER or UML, but also of business process modelling techniques such as ARIS, and even of Web services standards such as BPEL4WS. While the selected ontologies are reasonably mature, it is the actual process of an ontological analysis that still lacks rigor. The current procedure leaves significant room for individual interpretation and is one reason for criticism of the entire ontological analysis. This paper proposes a procedural model for ontological analysis based on the use of meta models, the involvement of more than one coder, and metrics. The model is explained with examples from various ontological analyses.
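
The combination of multiple coders and metrics can be made concrete: given each coder's mapping from ontology constructs to modelling-grammar constructs, one can compute inter-coder agreement and the usual deficiency counts (construct deficit, redundancy, overload, and excess, in the Wand and Weber style). A hypothetical sketch; the ontology, grammar, and mappings below are invented, and the metric definitions are the common ones rather than necessarily those proposed in the paper:

```python
# Each coder maps ontology constructs to sets of grammar constructs.
ontology = {"thing", "property", "state", "transformation"}
grammar = {"entity", "attribute", "relationship", "weak_entity"}

coder_1 = {"thing": {"entity"}, "property": {"attribute", "relationship"},
           "state": set(), "transformation": set()}
coder_2 = {"thing": {"entity", "weak_entity"}, "property": {"attribute"},
           "state": set(), "transformation": set()}

def metrics(mapping):
    mapped_targets = [t for ts in mapping.values() for t in ts]
    deficit = sum(1 for c in ontology if not mapping[c])          # unrepresented concept
    redundancy = sum(1 for c in ontology if len(mapping[c]) > 1)  # concept, many constructs
    overload = sum(1 for t in set(mapped_targets)
                   if mapped_targets.count(t) > 1)                # construct, many concepts
    excess = len(grammar - set(mapped_targets))                   # construct, no concept
    return deficit, redundancy, overload, excess

# Per-construct agreement between coders, as the procedural model requires:
agree = sum(coder_1[c] == coder_2[c] for c in ontology) / len(ontology)
print("coder 1:", metrics(coder_1), "coder 2:", metrics(coder_2), "agreement:", agree)
```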

Relevance:

30.00%

Publisher:

Abstract:

Our extensive research has indicated that high-school teachers are reluctant to make use of existing instructional educational software (Pollard, 2005). Even software developed in a partnership between a teacher and a software engineer is unlikely to be adopted by teachers outside the partnership (Pollard, 2005). In this paper we address these issues directly by adopting a reusable architectural design for instructional educational software which allows easy customisation of software to meet the specific needs of individual teachers. By doing this we will enable more teachers to use instructional technology regularly within their classrooms. Our domain-specific software architecture, Interface-Activities-Model, was designed specifically to facilitate individual customisation by redefining and restructuring what constitutes an object, so that objects can be readily reused or extended as required. The key to this architecture is the way in which the software is broken into small generic encapsulated components with minimal domain-specific behaviour. The domain-specific behaviour is decoupled from the interface and encapsulated in objects which relate to the instructional material through tasks and activities. The domain model is also broken into two distinct models: the Application State Model and the Domain-specific Data Model. This decoupling and distribution of control gives the software designer enormous flexibility in modifying components without affecting other sections of the design. This paper sets the context of this architecture, describes it in detail, and applies it to an actual application developed to teach high-school mathematical concepts.
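
The decoupling just described can be sketched in miniature: a generic interface component with no domain knowledge, an activity object that encapsulates the domain-specific behaviour, and the two separated models. All class and method names below are hypothetical illustrations of the pattern, not the paper's actual design:

```python
class DomainDataModel:
    """Domain-specific data, e.g. the maths problems for one lesson."""
    def __init__(self, problems):
        self.problems = problems

class ApplicationStateModel:
    """Session state, kept apart from the domain data."""
    def __init__(self):
        self.current, self.score = 0, 0

class FractionActivity:
    """Domain behaviour lives here, decoupled from any widget."""
    def __init__(self, data: DomainDataModel, state: ApplicationStateModel):
        self.data, self.state = data, state
    def check(self, answer):
        problem = self.data.problems[self.state.current]
        correct = abs(answer - problem["answer"]) < 1e-9
        self.state.score += correct
        self.state.current += 1
        return correct

class AnswerBox:
    """Generic interface component: it forwards input to whatever activity
    it is wired to, so activities can be swapped without touching the UI."""
    def __init__(self, activity):
        self.activity = activity
    def submit(self, text):
        print("correct!" if self.activity.check(float(text)) else "try again")

data = DomainDataModel([{"question": "1/2 + 1/4", "answer": 0.75}])
AnswerBox(FractionActivity(data, ApplicationStateModel())).submit("0.75")
```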