894 results for Data Modelling
Abstract:
In this paper, we consider how refinements between state-based specifications (e.g., written in Z) can be checked by use of a model checker. Specifically, we are interested in the verification of downward and upward simulations, which are the standard approach to verifying refinements in state-based notations. We show how downward and upward simulations can be checked using existing temporal logic model checkers. In particular, we show how the branching time temporal logic CTL can be used to encode the standard simulation conditions. We do this for both a blocking, or guarded, interpretation of operations (often used when specifying reactive systems) as well as the more common non-blocking interpretation of operations used in many state-based specification languages (for modelling sequential systems). The approach is general enough to use with any state-based specification language, and we illustrate, with a small example, how refinements between Z specifications can be checked using the SAL CTL model checker.
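The paper's approach encodes the simulation conditions in CTL for a model checker; purely as an illustration of what a downward simulation requires, the core initialization and correctness conditions can also be checked directly on small finite transition systems. Everything below (state names, the dict-based encoding) is a hypothetical sketch, not the SAL encoding used in the paper.

```python
# Hypothetical sketch: checking the downward-simulation conditions directly on
# small finite transition systems, rather than via a CTL encoding.

def downward_simulation(abs_trans, con_trans, retrieve, abs_init, con_init):
    """Check initialization and correctness conditions of a downward simulation.

    abs_trans/con_trans: dict op -> set of (state, state') transition pairs
    retrieve: set of (abstract_state, concrete_state) pairs
    """
    # Initialization: every concrete initial state must retrieve to
    # some abstract initial state.
    if not all(any((a, c) in retrieve for a in abs_init) for c in con_init):
        return False
    # Correctness: every concrete step from a linked state must be matched
    # by an abstract step that re-establishes the retrieve relation.
    for op, steps in con_trans.items():
        for (a, c) in retrieve:
            for (c0, c1) in steps:
                if c0 != c:
                    continue
                if not any(a0 == a and (a1, c1) in retrieve
                           for (a0, a1) in abs_trans.get(op, set())):
                    return False
    return True

# Toy example: concrete states refine abstract states one-to-one.
A = {"inc": {("a0", "a1")}}
C = {"inc": {("c0", "c1")}}
R = {("a0", "c0"), ("a1", "c1")}
print(downward_simulation(A, C, R, {"a0"}, {"c0"}))  # True
```

Dropping the pair ("a1", "c1") from the retrieve relation makes the check fail, since the concrete step can no longer be matched.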
Abstract:
In biologically mega-diverse countries that are undergoing rapid human landscape transformation, it is important to understand and model the patterns of land cover change. This problem is particularly acute in Colombia, where lowland forests are being rapidly cleared for cropping and ranching. We apply a conceptual model with a nested set of a priori predictions to analyse the spatial and temporal patterns of land cover change for six 50–100 km² case study areas in lowland ecosystems of Colombia. Our analysis included soil fertility, a cost-distance function, and neighbourhood of forest and secondary vegetation cover as independent variables. Deforestation and forest regrowth are tested using logistic regression analysis and an information criterion approach to rank the models and predictor variables. The results show that: (a) overall, the process of deforestation is better predicted by the full model containing all variables, while for regrowth the model containing only the auto-correlated neighbourhood terms is a better predictor; (b) overall, consistent patterns emerge, although there are variations across regions and time; and (c) during the transformation process, both the order of importance and the significance of the drivers change. Forest cover follows a consistent logistic decline pattern across regions, with introduced pastures being the major replacement land cover type. Forest stabilizes at 2–10% of the original cover, with an average patch size of 15.4 (±9.2) ha. We discuss the implications of the observed patterns and rates of land cover change for conservation planning in countries with high rates of deforestation. © 2005 Elsevier Ltd. All rights reserved.
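The information-criterion ranking step can be sketched as follows; the candidate models, log-likelihoods and parameter counts below are invented for illustration, not taken from the study.

```python
# Hypothetical sketch of ranking candidate land-cover-change models with AIC
# (one common information-criterion approach); all numbers are invented.
def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

candidates = {
    "full":          {"ll": -412.3, "k": 4},  # soil + cost-distance + neighbourhoods
    "neighbourhood": {"ll": -430.8, "k": 2},  # autocorrelated neighbourhood terms only
    "soil_only":     {"ll": -455.1, "k": 1},
}

ranked = sorted(candidates, key=lambda m: aic(candidates[m]["ll"], candidates[m]["k"]))
best = ranked[0]
# Delta-AIC against the best model, the usual way such rankings are reported.
deltas = {m: aic(candidates[m]["ll"], candidates[m]["k"])
             - aic(candidates[best]["ll"], candidates[best]["k"])
          for m in candidates}
print(best, deltas)
```

With these invented numbers the full model wins, echoing the abstract's finding for deforestation; with a different data set the neighbourhood-only model could rank first, as it did for regrowth.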
Abstract:
An extended refraction-diffraction equation [Massel, S.R., 1993. Extended refraction-diffraction equation for surface waves. Coastal Eng. 19, 97-126] has been applied to predict wave transformation and breaking, as well as wave-induced set-up, on two-dimensional reef profiles of various shapes. A free empirical coefficient α in the formula for the average rate of energy dissipation, ⟨ε_b⟩ = (αρgω/8π)(√(gh)/C)(H³/h), in the modified periodic bore model was found to be a function of the dimensionless parameter F_c0 = g^1.25 H_0^0.5 T^2.5 / h_r^1.75, proposed by Gourlay [Gourlay, M.R., 1994. Wave transformation on a coral reef. Coastal Eng. 23, 17-42]. The applicability of the developed model has been demonstrated for reefs of various shapes subjected to various incident wave conditions. Assuming the proposed relationship between the coefficient α and F_c0, the model provides results on wave height attenuation and set-up elevation which compare well with experimental data. © 2000 Elsevier Science B.V. All rights reserved.
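The dimensionless Gourlay parameter quoted above is straightforward to evaluate; a minimal sketch, with illustrative (not measured) input values:

```python
# Sketch of Gourlay's (1994) dimensionless reef parameter as quoted in the
# abstract: F_c0 = g^1.25 * H0^0.5 * T^2.5 / hr^1.75.
# The input values below are illustrative only.
def gourlay_parameter(H0, T, hr, g=9.81):
    """H0: deep-water wave height (m), T: wave period (s), hr: reef-top depth (m)."""
    return (g ** 1.25) * (H0 ** 0.5) * (T ** 2.5) / (hr ** 1.75)

F_c0 = gourlay_parameter(H0=2.0, T=10.0, hr=1.5)
print(round(F_c0, 1))
```

Note that F_c0 grows quickly with wave period (the T^2.5 term) and falls as the reef-top depth h_r increases.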
Abstract:
Stochastic models based on Markov birth processes are constructed to describe the process of invasion of a fly larva by entomopathogenic nematodes. Various forms for the birth (invasion) rates are proposed. These models are then fitted to data sets describing the observed numbers of nematodes that have invaded a fly larva after a fixed period of time. Non-linear birth rates are required to achieve good fits to these data, with their precise form leading to different patterns of invasion being identified for the three populations of nematodes considered. One of these (Nemasys) showed the greatest propensity for invasion. This form of modelling may be useful more generally for analysing data whose variation differs from that expected under a binomial distribution.
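A minimal sketch of such a birth (invasion) process, simulated with a Gillespie-style step; the rate function and all parameter values below are hypothetical, not the forms fitted in the paper.

```python
import random

# Hypothetical sketch of a Markov birth process for nematode invasion: each of
# N free nematodes invades at a per-capita rate that depends non-linearly on
# the number n already inside the larva (cf. the abstract's non-linear rates).
def simulate_invasion(N, t_end, rate, seed=1):
    rng = random.Random(seed)
    t, n = 0.0, 0
    while n < N:
        total = (N - n) * rate(n)      # total invasion rate with n inside
        if total <= 0:
            break
        t += rng.expovariate(total)    # exponential waiting time to next invasion
        if t > t_end:
            break
        n += 1
    return n

# Example non-linear rate: invasion slows as the larva fills up.
rate = lambda n: 0.5 / (1 + n)
invaded = simulate_invasion(N=30, t_end=2.0, rate=rate)
print(invaded)  # between 0 and 30
```

Because only the per-capita rate changes, other non-linear forms (e.g. threshold or facilitation effects) drop in by swapping the `rate` function.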
Abstract:
This paper outlines the methodology of blast fragmentation modeling undertaken for a greenfield feasibility study at the Riska gold deposit in Indonesia. The favoured milling process for the feasibility study was dump leaching, with no crushing of the ore material extracted from the pit. For this reason, blast fragmentation was a critical issue to be addressed by the study. A range of blast designs was considered, with bench heights and blasthole diameters ranging from 4 m to 7 m and 76 mm to 102 mm respectively. Rock mass data were obtained from 19 diamond drill cores across the deposit (total drill length approximately 2200 m). Intact rock strength was estimated from qualitative strength descriptors, while the in situ block size distribution of the rock mass was estimated from the Rock Quality Designation (RQD) of the core.
Abstract:
This paper considers a model-based approach to the clustering of tissue samples on the basis of a very large number of genes from microarray experiments. It is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. Frequently in practice, there are also clinical data available for the cases from which the tissue samples have been obtained. Here we investigate how to use the clinical data in conjunction with the microarray gene expression data to cluster the tissue samples. We propose two mixture model-based approaches in which the number of components in the mixture model corresponds to the number of clusters to be imposed on the tissue samples. One approach specifies the components of the mixture model to be the conditional distributions of the microarray data given the clinical data, with the mixing proportions also conditioned on the latter data. The other takes the components of the mixture model to represent the joint distributions of the clinical and microarray data. The approaches are demonstrated on some breast cancer data, as studied recently in van't Veer et al. (2002).
Abstract:
Broccoli is a vegetable crop of increasing importance in Australia, particularly in south-east Queensland, and farmers need to maintain a regular supply of good-quality broccoli to meet the expanding market. A predictive model of ontogeny, incorporating climatic data including frost risk, would enable farmers to predict harvest maturity date and select appropriate cultivar–sowing date combinations. To develop procedures for predicting ontogeny, yield and quality, field studies using three cultivars, ‘Fiesta’, ‘Greenbelt’ and ‘Marathon’, were sown on eight dates from 11 March to 22 May 1997, and grown under natural and extended (16 h) photoperiods at the University of Queensland, Gatton Campus. Cultivar, rather than the environment, mainly determined the head quality attributes of head shape and branching angle. Yield and quality were not influenced by photoperiod. A better understanding of genotype and environment interactions will help farmers optimise yield and quality by matching cultivars with time of sowing. The estimated base and optimum temperatures for broccoli development were 0 °C and 20 °C, respectively, and were consistent across cultivars, but thermal time requirements for phenological intervals were cultivar specific. Differences in thermal time requirement from floral initiation to harvest maturity between cultivars were small and of little importance, but differences in thermal time requirement from emergence to floral initiation were large. Sensitivity to photoperiod and solar radiation was low in the three cultivars used. This research has produced models to assist broccoli farmers in crop scheduling and cultivar selection in south-east Queensland.
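The thermal-time bookkeeping implied by the abstract's cardinal temperatures (base 0 °C, optimum 20 °C) can be sketched as follows; the daily mean temperatures and the degree-day target are invented for illustration, not cultivar-specific values from the study.

```python
# Sketch of thermal-time (degree-day) accumulation using the abstract's
# estimated cardinal temperatures: base 0 degC, optimum 20 degC.
# Daily means and the phenological target below are hypothetical.
T_BASE, T_OPT = 0.0, 20.0

def daily_thermal_time(t_mean):
    """Effective degree-days for one day, capped at the optimum temperature."""
    return max(0.0, min(t_mean, T_OPT) - T_BASE)

def days_to_target(daily_means, target):
    """Days until accumulated thermal time reaches the target (None if never)."""
    total = 0.0
    for day, t in enumerate(daily_means, start=1):
        total += daily_thermal_time(t)
        if total >= target:
            return day
    return None

temps = [18, 22, 15, 25, 19, 21, 16] * 20   # hypothetical daily mean temps (degC)
print(days_to_target(temps, target=600))    # e.g. emergence -> floral initiation
```

In a crop-scheduling tool, each cultivar would carry its own thermal-time target per phenological interval, since the abstract reports those requirements are cultivar specific.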
Abstract:
New laboratory scale experimental data are presented on the forcing of beach groundwater levels by wave run-up. The experimental setup simulates a coastal barrier dividing the ocean from a relatively constant back beach water level, conditions approximating a closed off lagoon system or beach aquifer. The data are critically compared to an advanced numerical model for simulating wave and beach groundwater interaction in the coastal zone, and provide the first experimental verification of such a model. Overall model-data comparisons are good, but some systematic discrepancies are apparent, and reasons for these are discussed.
Abstract:
Land-surface processes include a broad class of models that operate at a landscape scale. Current modelling approaches tend to be specialised towards one type of process, yet it is the interaction of processes that is increasingly seen as important to obtain a more integrated approach to land management. This paper presents a technique and a tool that may be applied generically to landscape processes. The technique tracks moving interfaces across landscapes for processes such as water flow, biochemical diffusion, and plant dispersal. Its theoretical development applies a Lagrangian approach to motion over an Eulerian grid space by tracking quantities across a landscape as an evolving front. An algorithm for this technique, called the level set method, is implemented in a geographical information system (GIS). It fits with a field data model in GIS and is implemented as operators in map algebra. The paper describes an implementation of the level set method in a map algebra programming language, called MapScript, and gives example program scripts for applications in ecology and hydrology.
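As a crude, hypothetical stand-in for the level set operators described above (not the paper's MapScript implementation), the first-arrival time of a front moving over a raster with a per-cell speed field can be computed Dijkstra-style:

```python
import heapq

# Hypothetical sketch of front tracking on a raster: first-arrival times of a
# moving interface (e.g. water or dispersing plants) given a per-cell speed
# field, computed Dijkstra-style over 4-neighbours. Cells with speed 0 block
# the front entirely.
def front_arrival(speed, sources):
    rows, cols = len(speed), len(speed[0])
    INF = float("inf")
    arrival = [[INF] * cols for _ in range(rows)]
    heap = [(0.0, r, c) for (r, c) in sources]
    for (r, c) in sources:
        arrival[r][c] = 0.0
    while heap:
        t, r, c = heapq.heappop(heap)
        if t > arrival[r][c]:
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and speed[nr][nc] > 0:
                nt = t + 1.0 / speed[nr][nc]   # time = cell size / local speed
                if nt < arrival[nr][nc]:
                    arrival[nr][nc] = nt
                    heapq.heappush(heap, (nt, nr, nc))
    return arrival

# Uniform speed 1, except an impassable barrier (speed 0) in the middle column.
speed = [[1, 0, 1],
         [1, 1, 1],
         [1, 0, 1]]
times = front_arrival(speed, [(0, 0)])
print(times[0][2])  # the front must detour around the barrier
```

A true level set or fast marching scheme would solve the Eikonal equation on the grid rather than restrict motion to 4-neighbour hops, but the arrival-time structure is the same.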
Abstract:
Conceptual modelling forms an important part of systems analysis. If it is done incorrectly or incompletely, there can be serious implications for the resultant system, specifically in terms of rework and usability. One approach to improving the conceptual modelling process is to evaluate how well the model represents reality. The emergence of the Bunge-Wand-Weber (BWW) ontological model introduced a platform to classify and compare the grammars of conceptual modelling languages. This work applies the BWW theory to a real-world example in the health arena. The General Practice Computing Group (GPCG) data model was developed using the Barker Entity Relationship Modelling technique. We describe an experiment, grounded in ontological theory, which evaluates how well the GPCG data model is understood by domain experts. The results show that, with the exception of the use of entities to represent events, the raw model is well understood by domain experts.
Abstract:
Scanning capacitance microscopy (SCM) is a proposed tool for dopant profile extraction in semiconductor materials. The influence of interface traps on SCM dC/dV data is still unclear. In this paper we report on simulation work used to study the nature of SCM dC/dV data in the presence of interface traps. A technique to correctly simulate the dC/dV of an SCM measurement is then presented on the basis of this analysis. We also analyse how the charge of interface traps surrounding the SCM probe would affect SCM dC/dV, owing to the small dimensions of the SCM probe.
Abstract:
Context-aware applications rely on implicit forms of input, such as sensor-derived data, in order to reduce the need for explicit input from users. They are especially relevant for mobile and pervasive computing environments, in which user attention is at a premium. To support the development of context-aware applications, techniques for modelling context information are required. These must address a unique combination of requirements, including the ability to model information supplied by both sensors and people, to represent imperfect information, and to capture context histories. As the field of context-aware computing is relatively new, mature solutions for context modelling do not exist, and researchers rely on information modelling solutions developed for other purposes. In our research, we have been using a variant of Object-Role Modeling (ORM) to model context. In this paper, we reflect on our experiences and outline some research challenges in this area.
Abstract:
Time-course experiments with microarrays are often used to study dynamic biological systems and genetic regulatory networks (GRNs) that model how genes influence each other in cell-level development of organisms. The inference of GRNs provides important insights into fundamental biological processes such as growth and is useful in disease diagnosis and genomic drug design. Due to the experimental design, multilevel data hierarchies are often present in time-course gene expression data. Most existing methods, however, ignore the dependency of the expression measurements over time and the correlation among gene expression profiles. Such independence assumptions violate regulatory interactions and can result in overlooking certain important subject effects and lead to spurious inference for regulatory networks or mechanisms. In this paper, a multilevel mixed-effects model is adopted to incorporate data hierarchies in the analysis of time-course data, where temporal and subject effects are both assumed to be random. The method starts with the clustering of genes by fitting the mixture model within the multilevel random-effects model framework using the expectation-maximization (EM) algorithm. The network of regulatory interactions is then determined by searching for regulatory control elements (activators and inhibitors) shared by the clusters of co-expressed genes, based on a time-lagged correlation coefficient measure. The method is applied to two real time-course datasets from the budding yeast (Saccharomyces cerevisiae) genome. It is shown that the proposed method provides clusters of cell-cycle regulated genes that are supported by existing gene function annotations, and hence enables inference on regulatory interactions for the genetic network.
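The time-lagged correlation step can be sketched as follows; the expression profiles are invented, and this is only one plausible form of the measurement, not necessarily the paper's exact definition.

```python
import math

# Sketch of a time-lagged correlation for linking a candidate regulator to a
# cluster of co-expressed genes: Pearson correlation between the regulator's
# profile at time t and the target's profile at time t + lag.
# The profiles below are invented for illustration.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def lagged_correlation(regulator, target, lag):
    """Correlate regulator[t] with target[t + lag]."""
    return pearson(regulator[:len(regulator) - lag], target[lag:])

# A regulator whose profile the target follows one time point later.
reg = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7]
tgt = [0.0, 0.1, 0.9, 0.2, 0.8, 0.3]
print(round(lagged_correlation(reg, tgt, lag=1), 3))  # 1.0
```

A strong positive value at a positive lag suggests an activator; a strong negative value would suggest an inhibitor.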