14 results for Tessellation-based model
in University of Queensland eSpace - Australia
Abstract:
In this paper, we present a framework for pattern-based model evolution approaches in the MDA context. In the framework, users define patterns using a pattern modeling language designed to describe software design patterns, and they can use the patterns as rules to evolve their models. Design model evolution takes place in two steps. The first step is a binding process of selecting a pattern and defining where and how to apply the pattern in the model. The second step is an automatic model transformation that actually evolves the model according to the binding information and the pattern rule. The pattern modeling language is defined in terms of a MOF-based role metamodel, implemented using an existing modeling framework, EMF, and incorporated as a plugin to the Eclipse modeling environment. The model evolution process is also implemented as an Eclipse plugin. With these two plugins, we provide an integrated framework in which defining and validating patterns, and evolving models based on those patterns, can take place in a single modeling environment.
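For orientation, a minimal Python sketch of the two-step evolution described above: a binding step that assigns pattern roles to model elements, followed by a transformation step. The `Pattern`, `bind`, and `apply` names are purely illustrative and are not taken from the paper's EMF/Eclipse implementation.

```python
# Hypothetical sketch of two-step, pattern-based model evolution:
# (1) bind a pattern's roles to model elements, (2) transform the model.
from dataclasses import dataclass

@dataclass
class Pattern:
    name: str
    roles: list  # role names the pattern expects, e.g. ["Subject", "Observer"]

    def apply(self, model, binding):
        # Step 2: evolve the model according to the binding and the pattern rule.
        for role, element in binding.items():
            model.setdefault(element, []).append(f"plays role '{role}' in {self.name}")
        return model

def bind(pattern, model, assignments):
    # Step 1: select where/how the pattern applies; require every role to be bound.
    missing = [r for r in pattern.roles if r not in assignments]
    if missing:
        raise ValueError(f"unbound roles: {missing}")
    return assignments

observer = Pattern("Observer", ["Subject", "Observer"])
model = {"WeatherStation": [], "Display": []}
binding = bind(observer, model, {"Subject": "WeatherStation", "Observer": "Display"})
print(observer.apply(model, binding))
```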
Abstract:
Previous research shows that correlations tend to increase in magnitude when individuals are aggregated across groups. This suggests that uncorrelated constellations of personality variables (such as the primary scales of Extraversion and Neuroticism) may display much higher correlations in aggregate factor analysis. We hypothesize and report that individual-level factor analysis can be explained in terms of Giant Three (or Big Five) descriptions of personality, whereas aggregate-level factor analysis can be explained in terms of Gray's physiologically based model. Although alternative interpretations exist, aggregate-level factor analysis may correctly identify the basis of an individual's personality as a result of the better reliability of measures due to aggregation. We discuss the implications of this form of analysis in terms of construct validity, personality theory, and its applicability in general. Copyright (C) 2003 John Wiley & Sons, Ltd.
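A small simulated illustration, not from the paper, of the statistical point that a weak individual-level correlation can become much stronger once individuals are aggregated into group means (the group sizes, noise levels, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per_group = 50, 40

# A group-level latent factor drives both variables; within-group noise is large,
# so the individual-level correlation is weak but the group-mean correlation is strong.
group_effect = rng.normal(size=n_groups)
x = np.repeat(group_effect, n_per_group) + rng.normal(scale=3.0, size=n_groups * n_per_group)
y = np.repeat(group_effect, n_per_group) + rng.normal(scale=3.0, size=n_groups * n_per_group)

r_individual = np.corrcoef(x, y)[0, 1]

groups = np.repeat(np.arange(n_groups), n_per_group)
x_bar = np.array([x[groups == g].mean() for g in range(n_groups)])
y_bar = np.array([y[groups == g].mean() for g in range(n_groups)])
r_aggregate = np.corrcoef(x_bar, y_bar)[0, 1]

print(f"individual-level r = {r_individual:.2f}, aggregate-level r = {r_aggregate:.2f}")
```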
Abstract:
Queensland fruit fly, Bactrocera (Dacus) tryoni (QFF), is arguably the most costly horticultural insect pest in Australia. Despite this, no model is available to describe its population dynamics and aid in its management. This paper describes a cohort-based model of the population dynamics of the Queensland fruit fly. The model is primarily driven by weather variables, and so can be used at any location where appropriate meteorological data are available. In the model, the life cycle is divided into a number of discrete stages to allow physiological processes to be defined as accurately as possible. Eggs develop and hatch into larvae, which develop into pupae, which emerge as either teneral females or males. Both females and males can enter reproductive and over-wintering life stages, and there is a trapped male life stage to allow model predictions to be compared with trap catch data. All development rates are temperature-dependent. Daily mortality rates are temperature-dependent, but may also be influenced by moisture, density of larvae in fruit, fruit suitability, and age. Eggs, larvae and pupae all have constant establishment mortalities, causing a defined proportion of individuals to die upon entering that life stage. Transfer from one immature stage to the next is based on physiological age. In the adult life stages, transfer between stages may require additional and/or alternative functions. Maximum fecundity is 1400 eggs per female, and the maximum daily oviposition rate is 80 eggs/female per day. The actual number of eggs laid by a female on any given day is restricted by temperature, density of larvae in fruit, suitability of fruit for oviposition, and female activity. Activity of reproductive females and males, which affects reproduction and trapping, decreases with rainfall. Trapping of reproductive males is determined by activity, temperature and the proportion of males in the active population. Limitations of the model are discussed. Despite these, the model provides a useful agreement with trap catch data, and allows key areas for future research to be identified. These critical gaps in the current state of knowledge exist despite over 50 years of research on this key pest. By explicitly attempting to model the population dynamics of this pest we have clearly identified the research areas that must be addressed before progress can be made in developing the model into an operational tool for the management of Queensland fruit fly. (C) 2003 Published by Elsevier B.V.
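A toy Python sketch, not the published model, of the general idea that immature stages accumulate temperature-dependent development and a cohort transfers to the next stage (paying a constant establishment mortality) once its physiological age reaches 1. All thresholds and rates below are invented for illustration:

```python
# Toy cohort stepping with temperature-driven physiological age.
# Base temperature, degree-day requirement and mortalities are illustrative only.

def daily_development(temp_c, base_temp=10.0, degree_days_needed=100.0):
    """Fraction of a stage completed in one day at a given mean temperature."""
    return max(temp_c - base_temp, 0.0) / degree_days_needed

stages = ["egg", "larva", "pupa"]
establishment_mortality = {"egg": 0.1, "larva": 0.2, "pupa": 0.1}
cohort = {"stage": 0, "phys_age": 0.0, "n": 1000.0}

for day, temp in enumerate([22, 25, 19, 28, 30, 27, 24, 26, 29, 31] * 6):
    cohort["phys_age"] += daily_development(temp)
    if cohort["phys_age"] >= 1.0 and cohort["stage"] < len(stages) - 1:
        cohort["stage"] += 1
        cohort["phys_age"] = 0.0
        # constant establishment mortality on entering the new stage
        cohort["n"] *= 1.0 - establishment_mortality[stages[cohort["stage"]]]
        print(f"day {day}: entered {stages[cohort['stage']]} with {cohort['n']:.0f} individuals")
```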
Abstract:
The particle-based lattice solid model developed to study the physics of rocks and the nonlinear dynamics of earthquakes is refined by incorporating intrinsic friction between particles. The model provides a means for studying the causes of seismic wave attenuation, as well as frictional heat generation, fault zone evolution, and localisation phenomena. A modified velocity-Verlet scheme that allows friction to be precisely modelled is developed. This is a difficult computational problem given that a discontinuity must be accurately simulated by the numerical approach (i.e., the transition from static to dynamic frictional behaviour). This is achieved using a half time step integration scheme. At each half time step, a nonlinear system is solved to compute the static frictional forces and states of touching particle pairs. Improved efficiency is achieved by adaptively adjusting the time step increment, depending on the particle velocities in the system. The total energy is calculated and verified to remain constant to a high precision during simulations. Numerical experiments show that the model can be applied to the study of earthquake dynamics, the stick-slip instability, heat generation, and fault zone evolution. Such experiments may lead to a conclusive resolution of the heat flow paradox and improved understanding of earthquake precursory phenomena and dynamics. (C) 1999 Academic Press.
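A minimal sketch of a velocity-Verlet step split into two half-step velocity updates, the structure into which a static-friction solve for touching particle pairs could be inserted. This is a generic single-particle version for orientation, not the lattice solid model's scheme; the harmonic force and parameters are invented:

```python
import numpy as np

def velocity_verlet_step(x, v, force, mass, dt):
    """One velocity-Verlet step with two half-step velocity updates."""
    a = force(x) / mass
    v_half = v + 0.5 * dt * a          # first half-step velocity update
    x_new = x + dt * v_half            # full position update
    # (in a frictional model, the static frictional forces and states of
    #  touching particle pairs would be solved here, at the half step)
    a_new = force(x_new) / mass
    v_new = v_half + 0.5 * dt * a_new  # second half-step velocity update
    return x_new, v_new

# Simple harmonic oscillator check: total energy should stay nearly constant.
k, m, dt = 1.0, 1.0, 0.01
x, v = np.array([1.0]), np.array([0.0])
for _ in range(10000):
    x, v = velocity_verlet_step(x, v, lambda x: -k * x, m, dt)
print("energy:", 0.5 * m * v[0] ** 2 + 0.5 * k * x[0] ** 2)
```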
Abstract:
Domain-specific information retrieval is increasingly in demand. Not only domain experts, but also average non-expert users are interested in searching for domain-specific (e.g., medical and health) information from online resources. However, a typical problem for average users is that the search results are always a mixture of documents with different levels of readability. Non-expert users may want to see documents with higher readability at the top of the list, so the search results need to be re-ranked in descending order of readability. It is often not practical for domain experts to manually label the readability of documents in large databases, so computational models of readability need to be investigated. However, traditional readability formulas are designed for general-purpose text and are insufficient for the technical materials involved in domain-specific information retrieval. More advanced algorithms, such as the textual coherence model, are computationally expensive for re-ranking a large number of retrieved documents. In this paper, we propose an effective and computationally tractable concept-based model of text readability. In addition to textual genres of a document, our model also takes into account domain-specific knowledge, i.e., how the domain-specific concepts contained in the document affect the document's readability. Three major readability formulas are proposed and applied to health and medical information retrieval. Experimental results show that our proposed readability formulas lead to remarkable improvements in terms of correlation with users' readability ratings over four traditional readability measures.
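A hypothetical Python sketch of what a concept-based readability score might look like: a surface measure combined with the density of domain-specific concepts. The concept list, the surface measure, and the weights are all invented here and are not the paper's three formulas:

```python
import re

# Tiny illustrative set of domain-specific (medical) concepts; in practice this
# would come from a medical thesaurus or ontology.
DOMAIN_CONCEPTS = {"myocardial infarction", "hypertension", "angioplasty"}

def surface_readability(text):
    """Crude surface score: average sentence length in words (higher = harder)."""
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    return len(text.split()) / max(len(sentences), 1)

def concept_density(text):
    """Domain-concept mentions per word, as a proxy for conceptual difficulty."""
    lowered = text.lower()
    hits = sum(1 for c in DOMAIN_CONCEPTS if c in lowered)
    return hits / max(len(text.split()), 1)

def concept_based_readability(text, alpha=1.0, beta=100.0):
    # Higher score = harder to read; alpha and beta are invented weights.
    return alpha * surface_readability(text) + beta * concept_density(text)

doc = "The patient suffered a myocardial infarction. Angioplasty was performed."
print(round(concept_based_readability(doc), 2))
```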
Abstract:
There is a wealth of literature documenting a directional change of body size in heavily harvested populations. Most of this work concentrates on aquatic systems, but terrestrial populations are equally at risk. This paper explores the capacity of harvest refuges to counteract potential effects of size-selective harvesting on the allele frequencies of populations. We constructed a stochastic, individual-based model parameterized with data on red kangaroos. Because we do not know which part of individual growth would change in the course of natural selection, we explored the effects of two alternative models of individual growth in which alleles affect either the growth rate or the maximum size. The model results show that size-selective harvesting can result in significantly smaller kangaroos for a given age when the entire population is subject to harvesting. In contrast, in scenarios that include dispersal from harvest refuges, the initial allele frequency remains virtually unchanged.
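A toy Python sketch contrasting the two alternative growth models mentioned above, using a von Bertalanffy curve whose growth rate or asymptotic size is scaled by a hypothetical allele value. The baseline rate and maximum size are invented, not the red kangaroo parameterisation:

```python
import math

def von_bertalanffy(age, k, size_max):
    """Body size at a given age under von Bertalanffy growth."""
    return size_max * (1.0 - math.exp(-k * age))

def size_at_age(age, allele, variant):
    # Two alternative genetic models: the allele scales either the growth
    # rate k or the maximum size; baseline values are illustrative only.
    if variant == "growth-rate":
        return von_bertalanffy(age, k=0.3 * allele, size_max=85.0)
    else:  # "max-size"
        return von_bertalanffy(age, k=0.3, size_max=85.0 * allele)

for variant in ("growth-rate", "max-size"):
    sizes = [size_at_age(5.0, allele, variant) for allele in (0.8, 1.0, 1.2)]
    print(variant, [round(s, 1) for s in sizes])
```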
Abstract:
We use the consumption-based asset pricing model with habit formation to study the predictability and cross-section of returns from the international equity markets. We find that the predictability of returns from many developed countries' equity markets is explained in part by changing prices of risks associated with consumption relative to habit at the world as well as local levels. We also provide an exploratory investigation of the cross-sectional implications of the model under the complete world market integration hypothesis and find that the model performs mildly better than the traditional consumption-based model, the unconditional and conditional world CAPMs, and a three-factor international asset pricing model. (C) 2004 Elsevier B.V. All rights reserved.
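For orientation, the standard external-habit (Campbell-Cochrane type) pricing kernel in which risk prices depend on consumption relative to habit; the paper's exact specification may differ from this textbook form:

```latex
% External-habit pricing kernel (textbook form, shown for orientation only).
% S_t = (C_t - X_t)/C_t is the surplus consumption ratio, i.e. consumption
% relative to the habit level X_t; beta is time preference, gamma risk aversion.
M_{t+1} \;=\; \beta \left( \frac{S_{t+1}\,C_{t+1}}{S_t\,C_t} \right)^{-\gamma},
\qquad
E_t\!\left[ M_{t+1}\, R_{i,t+1} \right] = 1 .
```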
Abstract:
The recent deregulation in electricity markets worldwide has heightened the importance of risk management in energy markets. Assessing Value-at-Risk (VaR) in electricity markets is arguably more difficult than in traditional financial markets because the distinctive features of the former result in a highly unusual distribution of returns: electricity returns are highly volatile, display seasonalities in both their mean and volatility, exhibit leverage effects and clustering in volatility, and feature extreme levels of skewness and kurtosis. With electricity applications in mind, this paper proposes a model that accommodates autoregression and weekly seasonals in both the conditional mean and conditional volatility of returns, as well as leverage effects via an EGARCH specification. In addition, extreme value theory (EVT) is adopted to explicitly model the tails of the return distribution. Compared to a number of other parametric models and simple historical-simulation-based approaches, the proposed EVT-based model performs well in forecasting out-of-sample VaR. In addition, statistical tests show that the proposed model provides appropriate interval coverage in both unconditional and, more importantly, conditional contexts. Overall, the results are encouraging in suggesting that the proposed EVT-based model is a useful technique in forecasting VaR in electricity markets. (c) 2005 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.
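A minimal Python sketch of the generic peaks-over-threshold EVT step: fit a generalized Pareto distribution to tail exceedances and read off a VaR quantile. The simulated Student-t losses stand in for the standardized residuals of an electricity-return model; the threshold and data are illustrative, not the paper's:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=5000)          # heavy-tailed stand-in for losses

u = np.quantile(losses, 0.95)                     # tail threshold
exceedances = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceedances, floc=0)  # GPD shape xi and scale beta

def evt_var(q, n=len(losses), n_u=len(exceedances)):
    """POT estimator: VaR_q = u + beta/xi * (((n/n_u)*(1-q))**(-xi) - 1)."""
    return u + beta / xi * (((n / n_u) * (1 - q)) ** (-xi) - 1)

print("99% VaR (EVT tail):", round(evt_var(0.99), 2))
print("99% VaR (empirical):", round(float(np.quantile(losses, 0.99)), 2))
```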
Abstract:
Simulations of a complete reflected shock tunnel facility have been performed with the aim of providing a better understanding of the flow through these facilities. In particular, the analysis is focused on the premature contamination of the test flow with the driver gas. The axisymmetric simulations model the full geometry of the shock tunnel, incorporate an iris-based model of the primary diaphragm rupture mechanics and an ideal secondary diaphragm, and account for turbulence in the shock tube boundary layer with the Baldwin-Lomax eddy viscosity model. Two operating conditions were examined: one resulting in an over-tailored mode of operation and the other resulting in approximately tailored operation. The accuracy of the simulations is assessed through comparison with experimental measurements of static pressure, pitot pressure and stagnation temperature. It is shown that the widely accepted driver gas contamination mechanism, in which driver gas 'jets' along the walls through the action of the bifurcated foot of the reflected shock, does not directly transport the driver gas to the nozzle at these conditions. Instead, driver-gas-laden vortices are generated by the bifurcated reflected shock. These vortices prevent jetting of the driver gas along the walls and convect driver gas away from the shock tube wall and downstream into the nozzle. Additional vorticity generated by the interaction of the reflected shock and the contact surface enhances the process in the over-tailored case. However, the basic mechanism appears to operate in a similar way for both the over-tailored and the approximately tailored conditions.
Abstract:
High-fidelity eye tracking is combined with a perceptual grouping task to provide insight into the likely mechanisms underlying the compensation of retinal image motion caused by movement of the eyes. The experiments describe the covert detection of minute temporal and spatial offsets incorporated into a test stimulus. Analysis of eye motion on individual trials indicates that the temporal offset sensitivity is actually due to motion of the eye inducing artificial spatial offsets in the briefly presented stimuli. The results have strong implications for two popular models of compensation for fixational eye movements, namely efference copy and image-based models. If an efference copy model is assumed, the results place constraints on the spatial accuracy and source of compensation. If an image-based model is assumed then limitations are placed on the integration time window over which motion estimates are calculated. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
As field determinations take much effort, it would be useful to be able to predict easily the coefficients describing the functional response of free-living predators, the function relating food intake rate to the abundance of food organisms in the environment. As a means of easily parameterising an individual-based model of shorebird Charadriiformes populations, we attempted this for shorebirds eating macro-invertebrates. Intake rate is measured as the ash-free dry mass (AFDM) per second of active foraging; i.e. excluding time spent on digestive pauses and other activities, such as preening. The present and previous studies show that the general shape of the functional response in shorebirds eating approximately the same size of prey across the full range of prey density is a decelerating rise to a plateau, thus approximating the Holling type II ('disc equation') formulation. But field studies confirmed that the asymptote was not set by handling time, as assumed by the disc equation, because only about half the foraging time was spent in successfully or unsuccessfully attacking and handling prey, the rest being devoted to searching. A review of 30 functional responses showed that intake rate in free-living shorebirds varied independently of prey density over a wide range, with the asymptote being reached at very low prey densities (< 150 m⁻²). Accordingly, most of the many studies of shorebird intake rate have probably been conducted at or near the asymptote of the functional response, suggesting that equations that predict intake rate should also predict the asymptote. A multivariate analysis of 468 'spot' estimates of intake rates from 26 shorebirds identified ten variables, representing prey and shorebird characteristics, that accounted for 81% of the variance in logarithm-transformed intake rate. But four variables accounted for almost as much (77.3%), these being bird size, prey size, whether the bird was an oystercatcher Haematopus ostralegus eating mussels Mytilus edulis, and whether it was breeding. The four-variable equation under-predicted, on average, the observed 30 estimates of the asymptote by 11.6%, but this discrepancy was reduced to 0.2% when two suspect estimates from one early study in the 1960s were removed. The equation therefore predicted the observed asymptote very successfully in 93% of cases. We conclude that the asymptote can be reliably predicted from just four easily measured variables. Indeed, if the birds are not breeding and are not oystercatchers eating mussels, reliable predictions can be obtained using just two variables, bird and prey sizes. A multivariate analysis of 23 estimates of the half-asymptote constant suggested they were smaller when prey were small but greater when the birds were large, especially in oystercatchers. The resulting equation could be used to predict the half-asymptote constant, but its predictive power has yet to be tested. As well as predicting the asymptote of the functional response, the equations will enable research workers engaged in many areas of shorebird ecology and behaviour to estimate intake rate without the need for conventional time-consuming field studies, including for species in which it has not yet proved possible to measure intake rate in the field.
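For reference, the Holling type II ('disc equation') functional response mentioned above, together with its asymptote and half-asymptote density; note that the review argues the asymptote in shorebirds is not in fact set by handling time:

```latex
% Holling type II ('disc equation') functional response.
% I(D) = intake rate at prey density D; a = searching/attack rate; h = handling time.
I(D) \;=\; \frac{a\,D}{1 + a\,h\,D},
\qquad
\text{asymptote } I_{\max} = \frac{1}{h},
\qquad
\text{half-asymptote density } D_{1/2} = \frac{1}{a\,h}.
```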
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population, because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency.
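A short Python sketch of the standard (cost-negligent) sensitivity and elasticity calculation that the paper extends: sensitivities of the growth rate lambda come from the left and right eigenvectors of the projection matrix, and elasticities are their proportional counterparts. The 2x2 matrix below is illustrative, not the Helmeted Honeyeater or koala parameterisation:

```python
import numpy as np

# Illustrative 2-stage projection matrix: top row fecundities, bottom row survivals.
A = np.array([[0.0, 1.5],
              [0.5, 0.8]])

eigvals, W = np.linalg.eig(A)
i = np.argmax(eigvals.real)
lam = eigvals.real[i]
w = W[:, i].real                          # right eigenvector: stable stage structure

eigvalsT, V = np.linalg.eig(A.T)
v = V[:, np.argmax(eigvalsT.real)].real   # left eigenvector: reproductive values

sensitivity = np.outer(v, w) / (v @ w)    # d(lambda)/d(a_ij)
elasticity = (A / lam) * sensitivity      # proportional sensitivities; sum to 1

print("lambda =", round(lam, 3))
print("elasticities:\n", np.round(elasticity, 3))
```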
Abstract:
Three important goals in describing software design patterns are: generality, precision, and understandability. To address these goals, this paper presents an integrated approach to specifying patterns using Object-Z and UML. To achieve the generality goal, we adopt a role-based metamodeling approach to define patterns. With this approach, each pattern is defined as a pattern role model. To achieve precision, we formalize role concepts using Object-Z (a role metamodel) and use these concepts to define patterns (pattern role models). To achieve understandability, we represent the role metamodel and pattern role models visually using UML. Our pattern role models provide a precise basis for pattern-based model transformations or refactoring approaches.