860 results for conceptual data modelling


Relevance:

40.00%

Publisher:

Abstract:

Society as we know it today depends completely on computer networks, the Internet and distributed systems, which place at our disposal the services we need to perform our daily tasks. Largely unnoticed, all of these services and distributed systems in turn require network management systems, which maintain, manage, configure, scale, adapt, modify, protect and improve the main distributed systems. Their role is secondary, invisible and transparent to users, yet they provide the support that keeps running the distributed systems whose services we use every day. If network management is not considered during the development of a distributed system, the consequences can be serious, up to the total failure of the development effort. It is therefore necessary to bring system management into the design of distributed systems and to systematize its conception, minimizing the impact that network management has on the overall project. In this paper, we present a method for formalizing the conceptual modelling involved in designing a network management system, using formal modelling tools, so that from the definition of the management processes we can identify those responsible for them. Finally, we propose a use case: the design of a conceptual model for a network intrusion detection system.
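
A minimal sketch of the mapping this abstract describes, from defined management processes to the roles responsible for them; all process and role names below are illustrative assumptions (the paper works with formal modelling tools, not code):

from dataclasses import dataclass

@dataclass
class ManagementProcess:
    name: str
    responsible: str          # role accountable for this process (assumed names)

ids_conceptual_model = [
    ManagementProcess("collect network events", "sensor agent"),
    ManagementProcess("classify suspicious traffic", "analysis engine"),
    ManagementProcess("raise and track alerts", "security operator"),
]

for p in ids_conceptual_model:
    print(f"{p.name} -> {p.responsible}")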

Relevance:

40.00%

Publisher:

Abstract:

Numerical modelling methodologies are important in engineering and scientific applications because many processes cannot be modelled by analytical mathematical expressions. When the only available information is a set of experimental values of the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology, based on the Galerkin formulation of the finite element method, for obtaining representations of relationships defined a priori between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space, with derivatives defined in the generalized sense of that space. The approach requires inverting a linear system whose structure admits a fast solver algorithm, making the method a multidisciplinary tool applicable in a variety of fields. The validity of the methodology is studied in two real applications: a problem in hydrodynamics, and an engineering problem involving fluids, heat and transport in an energy generation plant. The predictive capacity of the methodology is also tested by cross-validation.
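
A minimal one-dimensional sketch of the idea, assuming piecewise-linear "hat" basis functions on a uniform mesh and toy data (the paper's actual formulation, dimensionality and solver are not reproduced here): a Galerkin/least-squares fit of scattered data yields a tridiagonal normal matrix, exactly the kind of structure that admits a fast solver.

import numpy as np

def design_matrix(x, nodes):
    """Column i is the i-th piecewise-linear 'hat' basis function,
    built by interpolating the unit vector e_i over the mesh nodes."""
    B = np.empty((len(x), len(nodes)))
    for i in range(len(nodes)):
        e = np.zeros(len(nodes))
        e[i] = 1.0
        B[:, i] = np.interp(x, nodes, e)
    return B

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)                   # scattered "experimental" data
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.normal(size=200)
nodes = np.linspace(0.0, 1.0, 12)                # finite-element mesh

B = design_matrix(x, nodes)
A = B.T @ B + 1e-9 * np.eye(len(nodes))          # tridiagonal: hats overlap only with
b = B.T @ y                                      # neighbours (tiny ridge for stability)
coeff = np.linalg.solve(A, b)                    # nodal values of the fitted curve
print(design_matrix(np.array([0.25, 0.75]), nodes) @ coeff)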

Relevance:

40.00%

Publisher:

Abstract:

This paper reviews the key features of an environment to support domain users in spatial information system (SIS) development. It presents a full design and prototype implementation of a repository system for the storage and management of metadata, focusing on a subset of spatial data integrity constraint classes. The system is designed to support spatial system development and customization by users within the domain in which the system will operate.
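
A sketch of the kind of metadata record such a repository might hold for one spatial integrity constraint class (topological constraints); the field names and example rules are assumptions, not the paper's schema:

from dataclasses import dataclass

@dataclass
class TopologicalConstraint:
    subject: str              # constrained feature class, e.g. "LandParcel"
    predicate: str            # topological rule, e.g. "must_not_overlap"
    target: str               # feature class the rule is checked against

repository = [
    TopologicalConstraint("LandParcel", "must_not_overlap", "LandParcel"),
    TopologicalConstraint("Road", "must_be_within", "RoadReserve"),
]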

Relevance:

40.00%

Publisher:

Abstract:

Much research over the years has been devoted to investigating and advancing the techniques and tools that analysts use when they model. In contrast to what academics, software providers and their resellers promote as what should be happening, the aim of this research was to determine whether practitioners still take conceptual modeling seriously, which techniques and tools are most popular for it, and the major purposes for which it is used. The study found that the six most frequently used modeling techniques and methods were ER diagramming, data flow diagramming, systems flowcharting, workflow modeling, UML, and structure charts. Use of modeling techniques was found to decrease significantly from smaller to medium-sized organizations, but then to increase significantly in larger organizations (a proxy for large, complex projects). Technique use was also found to significantly follow an inverted U-shaped curve, contrary to some prior explanations. A further contribution of this study is the identification of the factors that uniquely influence analysts' decisions to continue modeling: communication to and from stakeholders using diagrams, lack of internal knowledge of the techniques, management of user expectations, understanding of how models integrate into the business, and tool/software deficiencies. The highest-ranked purposes for which modeling was undertaken were database design and management, business process documentation, business process improvement, and software development. (c) 2005 Elsevier B.V. All rights reserved.

Relevance:

40.00%

Publisher:

Abstract:

We demonstrate a portable process for developing a triple bottom line model to measure the knowledge production performance of individual research centres. For the first time, this study also empirically illustrates how a fully units-invariant model of Data Envelopment Analysis (DEA) can be used to measure the relative efficiency of research centres by capturing the interaction amongst a common set of multiple inputs and outputs. This study is particularly timely given the increasing transparency required by the governments and industries that fund research activities. The process highlights the links between organisational objectives, desired outcomes and outputs, while the emerging performance model represents an executive managerial view. The study brings consistency to current measures, which often rely on ratios and univariate analyses that are not conducive to relative performance analysis.
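
A hedged sketch of a standard input-oriented CCR DEA envelopment model, solved as a linear program; the paper develops a fully units-invariant DEA variant, so this generic formulation and the toy data are assumptions for illustration only:

import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o: min theta subject to
    X @ lam <= theta * x_o, Y @ lam >= y_o, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # minimise theta
    A_in = np.hstack([-X[:, [o]], X])          # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # -Y @ lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]])
    return res.fun                             # efficiency score in (0, 1]

# Toy data: rows are 2 inputs / 2 outputs, columns are 5 research centres.
X = np.array([[5.0, 8.0, 6.0, 9.0, 4.0], [3.0, 5.0, 4.0, 6.0, 2.0]])
Y = np.array([[10.0, 12.0, 9.0, 14.0, 8.0], [4.0, 6.0, 3.0, 7.0, 5.0]])
print([round(dea_efficiency(X, Y, j), 3) for j in range(5)])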

Relevance:

40.00%

Publisher:

Abstract:

As for other complex diseases, linkage analyses of schizophrenia (SZ) have produced evidence for numerous chromosomal regions, with inconsistent results reported across studies. The presence of locus heterogeneity appears likely and may reduce the power of linkage analyses if homogeneity is assumed. In addition, when multiple heterogeneous datasets are pooled, intersample variation in the proportion of linked families (α) may diminish the power of the pooled sample to detect susceptibility loci, in spite of the larger sample size obtained. We compare the significance of linkage findings obtained using allele-sharing LOD scores (LODexp), which assume homogeneity, and heterogeneity LOD scores (HLOD) in European American and African American NIMH SZ families. We also pool these two samples and evaluate the relative power of LODexp and two different heterogeneity statistics. The first (HLOD-P) estimates the heterogeneity parameter α only in the aggregate data, while the second (HLOD-S) determines α separately for each sample. In both separate and combined data, we show consistently improved performance of HLOD scores over LODexp. Notably, genome-wide significant evidence for linkage is obtained at chromosome 10p in the European American sample using a recessive HLOD score. When the two samples are combined, linkage at the 10p locus also achieves genome-wide significance under HLOD-S, but not under HLOD-P. Using HLOD-S, improved evidence for linkage was also obtained for a previously reported region on chromosome 15q. In linkage analyses of complex disease, power may be maximised by routinely modelling locus heterogeneity within individual datasets, even when multiple datasets are combined to form larger samples.
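
A minimal sketch of the admixture heterogeneity LOD underlying these statistics: given per-family LOD scores at a fixed recombination fraction, HLOD maximises the sum of log10(α·10^LODi + (1 − α)) over the proportion of linked families α. The per-family scores below are invented for illustration, not NIMH data.

import numpy as np

def hlod(family_lods, grid=np.linspace(0.0, 1.0, 1001)):
    """Maximise the admixture HLOD over the linked proportion alpha."""
    lr = 10.0 ** np.asarray(family_lods)      # per-family likelihood ratios
    scores = [np.sum(np.log10(a * lr + (1.0 - a))) for a in grid]
    k = int(np.argmax(scores))
    return scores[k], grid[k]                 # (HLOD, estimated alpha)

ea = [1.2, 0.8, -0.4, 0.9, -0.1]              # toy European American family LODs
aa = [0.3, -0.6, 1.1, 0.2, -0.2]              # toy African American family LODs

hlod_p, alpha_p = hlod(ea + aa)               # HLOD-P: one alpha for the pooled data
hlod_s = hlod(ea)[0] + hlod(aa)[0]            # HLOD-S: alpha fitted per sample, summed
print(hlod_p, hlod_s)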

Relevance:

40.00%

Publisher:

Abstract:

Count data with excess zeros relative to a Poisson distribution are common in many biomedical applications. A popular approach to the analysis of such data is the zero-inflated Poisson (ZIP) regression model. Often, because of the hierarchical study design or the data collection procedure, zero-inflation and lack of independence occur simultaneously, which renders the standard ZIP model inadequate. To account for the preponderance of zero counts and the inherent correlation of observations, a class of multilevel ZIP regression models with random effects is presented. Model fitting is facilitated by an expectation-maximization algorithm, while variance components are estimated via residual maximum likelihood estimating equations. A score test for zero-inflation is also presented. The multilevel ZIP model is then generalized to cope with a more complex correlation structure. Application to correlated count data from a longitudinal infant feeding study illustrates the usefulness of the approach.
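
A stripped-down sketch of the ZIP mixture itself, without covariates or random effects (so a toy version of the model, not the paper's multilevel estimator): P(Y = 0) = π + (1 − π)e^(−λ) and P(Y = k) = (1 − π)·Poisson(k; λ), fitted here by a simple EM iteration.

import numpy as np

def fit_zip(y, iters=200):
    """EM for a zero-inflated Poisson without covariates."""
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, max(y.mean(), 1e-6)        # crude starting values
    for _ in range(iters):
        # E-step: probability that each observed zero is a structural zero.
        p0 = pi + (1.0 - pi) * np.exp(-lam)
        z = np.where(y == 0, pi / p0, 0.0)
        # M-step: update the mixing weight and the Poisson mean.
        pi = z.mean()
        lam = ((1.0 - z) * y).sum() / (1.0 - z).sum()
    return pi, lam

rng = np.random.default_rng(1)
structural = rng.random(500) < 0.3            # 30% structural zeros
y = np.where(structural, 0, rng.poisson(2.5, size=500))
print(fit_zip(y))                             # approx. (0.3, 2.5)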

Relevance:

40.00%

Publisher:

Abstract:

Traditional vegetation mapping methods use high-cost, labour-intensive aerial photograph interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation and the differing scale and quality of aerial photography over time. An alternative approach is proposed that integrates a data model, a statistical model and an ecological model, using sophisticated Geographic Information System (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. The approach rests on a more realistic representation of vegetation patterns, with transitional gradients from one vegetation community to another; arbitrary, though often unrealistic, sharp boundaries can still be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of northeastern Australia. The paper presents the full cycle of the modelling approach: site sampling, variable selection, model selection, model implementation, internal model assessment, assessment of model predictions, integration of the discrete vegetation community models to generate a composite pre-clearing vegetation map, model validation against an independent data set, and scale assessment of the model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including the provision of vital information for conservation planning and management; a scientific basis for the rehabilitation of disturbed and cleared areas; and a viable method for producing adequate vegetation maps of poorly studied areas for conservation and forestry planning. (c) 2006 Elsevier B.V. All rights reserved.
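
A schematic sketch of the final integration step only, assuming per-community probability surfaces are already available: each grid cell is assigned the community with the highest predicted probability. The toy rasters below stand in for the paper's 28 GIS-integrated statistical models.

import numpy as np

rng = np.random.default_rng(2)
n_communities, n_rows, n_cols = 3, 4, 5
prob = rng.random((n_communities, n_rows, n_cols))  # one probability raster per model
prob /= prob.sum(axis=0)                            # normalise per grid cell
composite = prob.argmax(axis=0)                     # composite vegetation map
print(composite)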

Relevance:

40.00%

Publisher:

Abstract:

In biologically mega-diverse countries that are undergoing rapid human landscape transformation, it is important to understand and model the patterns of land cover change. This problem is particularly acute in Colombia, where lowland forests are being rapidly cleared for cropping and ranching. We apply a conceptual model with a nested set of a priori predictions to analyse the spatial and temporal patterns of land cover change for six 50–100 km² case study areas in lowland ecosystems of Colombia. Our analysis included soil fertility, a cost-distance function, and neighbourhood of forest and secondary vegetation cover as independent variables. Deforestation and forest regrowth are tested using logistic regression analysis and an information criterion approach to rank the models and predictor variables. The results show that: (a) overall, the process of deforestation is better predicted by the full model containing all variables, while for regrowth the model containing only the auto-correlated neighbourhood terms is a better predictor; (b) overall, consistent patterns emerge, although there are variations across regions and time; and (c) during the transformation process, both the order of importance and significance of the drivers change. Forest cover follows a consistent logistic decline pattern across regions, with introduced pastures being the major replacement land cover type. Forest stabilizes at 2–10% of the original cover, with an average patch size of 15.4 (±9.2) ha. We discuss the implications of the observed patterns and rates of land cover change for conservation planning in countries with high rates of deforestation. (c) 2005 Elsevier Ltd. All rights reserved.
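
A hedged sketch of the model-ranking step, assuming simulated toy data in place of the Colombian case-study datasets: nested logistic regressions for deforestation are fitted and candidate predictor sets ranked by AIC.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
soil = rng.random(n)                       # soil fertility (toy)
cost = rng.random(n)                       # cost-distance function (toy)
neigh = rng.random(n)                      # neighbourhood forest cover (toy)
eta = 1.5 * soil - 2.0 * cost - 1.0 * neigh
deforested = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

candidates = {
    "neighbourhood only": [neigh],
    "full model": [soil, cost, neigh],
}
for name, cols in candidates.items():
    X = sm.add_constant(np.column_stack(cols))
    aic = sm.Logit(deforested, X).fit(disp=0).aic
    print(f"{name}: AIC = {aic:.1f}")       # lower AIC = better-supported model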

Relevance:

40.00%

Publisher:

Abstract:

Conceptual modelling forms an important part of systems analysis. If it is done incorrectly or incompletely, there can be serious implications for the resulting system, specifically in terms of rework and usability. One approach to improving the conceptual modelling process is to evaluate how well the model represents reality. The emergence of the Bunge-Wand-Weber (BWW) ontological model introduced a platform for classifying and comparing the grammars of conceptual modelling languages. This work applies BWW theory to a real-world example in the health arena: the General Practice Computing Group (GPCG) data model, which was developed using the Barker entity-relationship modelling technique. We describe an experiment, grounded in ontological theory, that evaluates how well the GPCG data model is understood by domain experts. The results show that, with the exception of the use of entities to represent events, the raw model is better understood by domain experts.