127 results for Computer Science (all)

Relevance: 90.00%

Abstract:

This paper deals with an n-fold Weibull competing risk model. A characterisation of the Weibull probability paper (WPP) plot is given, along with the estimation of model parameters when fitting the model to a given data set; both are illustrated through two examples. A study of the different possible shapes of the density and failure rate functions is also presented.
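
A minimal numerical sketch of the model's ingredients, with invented parameters rather than the paper's data: in an n-fold Weibull competing risk model the overall survivor function is the product of the component Weibull survivor functions, the failure rate is the sum of the component hazards, and the WPP plot graphs ln(-ln R(t)) against ln t.

```python
# 2-fold Weibull competing risk model with assumed parameters (illustration
# only): survival is the product of component survivor functions, the
# failure rate is the sum of component hazards.
import numpy as np

shapes = np.array([0.8, 2.5])   # beta_i (assumed)
scales = np.array([1.0, 5.0])   # eta_i (assumed)

def survival(t):
    # R(t) = prod_i exp(-(t/eta_i)^beta_i)
    return np.exp(-np.sum((t[:, None] / scales) ** shapes, axis=1))

def failure_rate(t):
    # h(t) = sum_i (beta_i/eta_i) * (t/eta_i)^(beta_i - 1)
    return np.sum((shapes / scales) * (t[:, None] / scales) ** (shapes - 1), axis=1)

t = np.linspace(0.01, 10.0, 200)
R = survival(t)
density = failure_rate(t) * R
x, y = np.log(t), np.log(-np.log(R))            # WPP coordinates
print(f"WPP slope near the origin ~ {np.polyfit(x[:50], y[:50], 1)[0]:.2f}")
```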

Relevance: 90.00%

Abstract:

The rise of component-based software development has created an urgent need for effective application program interface (API) documentation. Experience has shown that it is hard to create precise and readable documentation. Prose documentation can provide a good overview but lacks precision. Formal methods offer precision, but the resulting documentation is expensive to develop and, worse, few developers have the skill or inclination to read it. We present a pragmatic solution to the problem of API documentation. We augment the prose documentation with executable test cases, including expected outputs, and use the prose plus the test cases as the documentation. With appropriate tool support, the test cases are easy to develop and read. Such test cases constitute a completely formal, albeit partial, specification of input/output behavior. Equally important, consistency between code and documentation is demonstrated by running the test cases. This approach provides an attractive bridge between formal and informal documentation. We also present a tool that supports compact, readable test cases and generates test drivers and documentation, and we illustrate the approach with detailed case studies.
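
The paper's own tool is not reproduced here, but Python's built-in doctest module gives a minimal sketch of the idea: prose documentation augmented with executable test cases whose expected outputs are verified by running them. The function below is invented for illustration.

```python
# Minimal sketch using Python's built-in doctest (not the paper's tool): the
# docstring is prose documentation augmented with executable test cases, and
# running them checks the code against the documentation.
def parse_version(s):
    """Split a dotted version string into integer components.

    >>> parse_version("1.2.3")
    (1, 2, 3)
    >>> parse_version("10.0")
    (10, 0)
    """
    return tuple(int(part) for part in s.split("."))

if __name__ == "__main__":
    import doctest
    doctest.testmod()      # fails loudly if docs and code disagree
```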

Relevance: 90.00%

Abstract:

The Timed Interval Calculus, a timed-trace formalism based on set theory, is introduced. It is extended with an induction law and a unit for concatenation, which facilitates the proof of properties over trace histories. The effectiveness of the extended Timed Interval Calculus is demonstrated via a benchmark case study, the mine pump. Specifically, a safety property relating to the operation of a mine shaft is proved, based on an implementation of the mine pump and assumptions about the environment of the mine.
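
As a very loose executable analogy only (the calculus itself is a set-theoretic formalism over timed traces, not code): the sketch below models half-open intervals with a concatenation operation defined for contiguous intervals, where an empty interval plays the role the extension assigns to a unit for concatenation.

```python
# Loose executable analogy, not the calculus: half-open [start, end)
# intervals, concatenation defined for contiguous intervals, and the empty
# interval acting as a unit for concatenation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float
    end: float

    def concat(self, other):
        if self.start == self.end:      # empty interval: left unit
            return other
        if other.start == other.end:    # empty interval: right unit
            return self
        if self.end != other.start:
            raise ValueError("intervals must be contiguous")
        return Interval(self.start, other.end)

unit = Interval(0.0, 0.0)
assert unit.concat(Interval(0.0, 3.0)) == Interval(0.0, 3.0)
assert Interval(0.0, 3.0).concat(Interval(3.0, 5.0)) == Interval(0.0, 5.0)
```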

Relevance: 90.00%

Abstract:

A specialised reconfigurable architecture targeted at wireless base-band processing is presented. It is built to cater for multiple wireless standards, has lower power consumption than processor-based solutions, and can be scaled to process multiple channels in parallel. Test resources and testing strategies are embedded in the architecture. The architecture is functionally partitioned according to the common operations found in wireless standards, such as CRC error detection, convolution and interleaving. These modules are linked via Virtual Wire hardware modules and route-through switch matrices, and data can be processed in any order through this interconnect structure. Virtual Wire provides the same flexibility as conventional interconnects, but reduces both the area occupied and the number of switches needed. The testing algorithm exhaustively scans all possible paths within the interconnection network and searches for faults in the processing modules. It starts by scanning the externally addressable memory space and testing the master controller. The controller then tests every switch in the route-through switch matrix by making loops from the shared memory to each of the switches; the local switch matrix is tested in the same way. Next, the local memory is scanned. Finally, pre-defined test vectors are loaded into local memory to check the processing modules. This paper compares various base-band processing solutions, describes the proposed platform and its implementation, outlines the test resources and algorithm, and concludes with the mapping of the Bluetooth and GSM base-bands onto the platform.
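
The staged test sequence described above can be summarised as an ordered driver. The sketch below is purely illustrative: the stage names mirror the description, but the platform hooks are invented placeholders, not the paper's implementation.

```python
# Illustrative driver for the staged test sequence described above; only the
# ordering follows the text, the platform hooks are invented placeholders.
def run_selftest(platform):
    stages = [
        ("scan externally addressable memory space", platform.scan_external_memory),
        ("test master controller", platform.test_master_controller),
        ("loop-test route-through switch matrix", platform.test_route_through_switches),
        ("loop-test local switch matrix", platform.test_local_switch_matrix),
        ("scan local memory", platform.scan_local_memory),
        ("apply test vectors to processing modules", platform.test_processing_modules),
    ]
    for name, stage in stages:
        if not stage():                     # each stage reports pass/fail
            return f"FAIL at: {name}"
    return "PASS"

class StubPlatform:
    """Stand-in whose stages always pass, for demonstration only."""
    def __getattr__(self, name):
        return lambda: True

print(run_selftest(StubPlatform()))         # -> PASS
```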

Relevance: 90.00%

Abstract:

A two-component mixture regression model that allows simultaneously for heterogeneity and dependency among observations is proposed. By specifying random effects explicitly in the linear predictor of the mixture probability and the mixture components, parameter estimation is achieved by maximising the corresponding best linear unbiased prediction (BLUP)-type log-likelihood. Approximate residual maximum likelihood estimates are obtained via an EM algorithm, in the manner of generalised linear mixed models (GLMMs). The method can be extended to a g-component mixture regression model with component densities from the exponential family, leading to the development of the class of finite mixture GLMMs. For illustration, the method is applied to analyse neonatal length of stay (LOS). It is shown that identifying pertinent factors that influence hospital LOS can provide important information for health care planning and resource allocation.
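
A full GLMM with random effects in both linear predictors is beyond a short sketch, but the EM structure can be illustrated on a plain two-component mixture of linear regressions with simulated data. Everything below is a simplified stand-in, not the paper's REML procedure: the E-step computes component responsibilities and the M-step performs weighted least squares per component.

```python
# Simplified sketch: EM for a two-component mixture of linear regressions
# (no random effects, unlike the paper's formulation; data are simulated).
import numpy as np

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, n)])
z = rng.random(n) < 0.4                      # latent component labels
beta_true = np.array([[0.0, 1.0], [3.0, -1.0]])
y = np.where(z, X @ beta_true[1], X @ beta_true[0]) + rng.normal(0.0, 0.3, n)

pi, betas, sigma = 0.5, np.array([[0.0, 0.5], [1.0, 0.5]]), 1.0
for _ in range(100):
    # E-step: responsibility of component 1 for each observation
    e0, e1 = y - X @ betas[0], y - X @ betas[1]
    d0 = (1 - pi) * np.exp(-0.5 * (e0 / sigma) ** 2)
    d1 = pi * np.exp(-0.5 * (e1 / sigma) ** 2)
    r = d1 / (d0 + d1)
    # M-step: weighted least squares per component, then update pi and sigma
    for k, w in enumerate([1 - r, r]):
        sw = np.sqrt(w)
        betas[k] = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    e0, e1 = y - X @ betas[0], y - X @ betas[1]
    sigma = np.sqrt((np.sum((1 - r) * e0**2) + np.sum(r * e1**2)) / n)
    pi = r.mean()

print(f"estimated mixing proportion: {pi:.2f}")
```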

Relevance: 90.00%

Abstract:

An important aspect of manufacturing design is the distribution of geometrical tolerances so that an assembly functions with a given probability, while minimising the manufacturing cost. This requires a complex search over a multidimensional domain, much of which leads to infeasible solutions and which can have many local minima. In addition, Monte Carlo methods are often required to determine the probability that the assembly functions as designed. This paper describes a genetic algorithm for carrying out this search and successfully applies it to two specific mechanical designs, enabling comparison of a new statistical tolerancing design method with existing methods.
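
A hedged sketch of this kind of search, in which the cost model, functional limit and GA settings are assumptions for illustration, not the paper's: each candidate is a vector of tolerances, a Monte Carlo simulation estimates the probability that the assembly functions, and the GA minimises cost subject to a yield penalty.

```python
# Generic GA sketch for tolerance allocation (all models and settings are
# invented for illustration): minimise cost while the Monte Carlo estimate
# of the assembly yield stays above a target.
import numpy as np

rng = np.random.default_rng(1)
n_dims, pop_size, gens = 3, 40, 60
gap_limit, target_yield = 0.5, 0.99

def fitness(tols):
    # dimensions ~ N(nominal, (tol/3)^2); assembly works if the stack-up
    # deviation stays within the functional limit
    devs = rng.normal(0.0, tols / 3.0, size=(2000, n_dims))
    yld = np.mean(np.abs(devs.sum(axis=1)) < gap_limit)
    cost = np.sum(1.0 / tols)               # tighter tolerance -> higher cost
    return cost + 1e4 * max(0.0, target_yield - yld)   # infeasibility penalty

pop = rng.uniform(0.01, 0.3, size=(pop_size, n_dims))
for _ in range(gens):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][: pop_size // 2]     # truncation selection
    idx = rng.integers(0, len(parents), size=(pop_size - len(parents), 2))
    children = 0.5 * (parents[idx[:, 0]] + parents[idx[:, 1]])   # blend crossover
    children *= rng.lognormal(0.0, 0.1, size=children.shape)     # mutation
    pop = np.vstack([parents, np.clip(children, 0.005, 0.5)])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("best tolerances:", np.round(best, 3))
```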

Relevance: 90.00%

Abstract:

The modelling of inpatient length of stay (LOS) has important implications in health care studies. Finite mixture distributions are usually used to model the heterogeneous LOS distribution, because a certain proportion of patients sustain a longer stay. However, because morbidity data are collected from hospitals, observations clustered within the same hospital are often correlated. The generalized linear mixed model approach is adopted to accommodate this inherent correlation via unobservable random effects. An EM algorithm is developed to obtain residual maximum quasi-likelihood estimates. The proposed hierarchical mixture regression approach enables the identification and assessment of factors influencing both the long-stay proportion and the LOS of the long-stay patient subgroup. A neonatal LOS data set is used for illustration.
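
Ignoring the hierarchical random effects, the long-stay idea can be sketched with a flat two-component exponential mixture fitted by EM on simulated LOS data; this is a simplified stand-in, not the paper's residual maximum quasi-likelihood procedure.

```python
# Flat two-component exponential mixture fitted by EM on simulated LOS data;
# the paper's hierarchical random effects are deliberately omitted.
import numpy as np

rng = np.random.default_rng(2)
los = np.concatenate([rng.exponential(3.0, 800),    # short-stay patients
                      rng.exponential(15.0, 200)])  # long-stay patients

p, mu = 0.5, np.array([2.0, 10.0])                  # initial guesses
for _ in range(200):
    d_short = (1 - p) / mu[0] * np.exp(-los / mu[0])
    d_long = p / mu[1] * np.exp(-los / mu[1])
    r = d_long / (d_long + d_short)                 # P(long-stay | LOS)
    p = r.mean()
    mu = np.array([np.sum((1 - r) * los) / np.sum(1 - r),
                   np.sum(r * los) / np.sum(r)])

print(f"long-stay proportion ~ {p:.2f}, component means ~ {mu.round(1)}")
```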

Relevance: 90.00%

Abstract:

The notion of compensation is widely used in advanced transaction models as a means of recovery from failure. Similar concepts are adopted to provide transaction-like behaviour for long business processes supported by workflow technology. In general, it is not trivial to design compensating tasks for the tasks in a workflow. Indeed, a task in a workflow process need not be compensatable, in the sense that the application semantics do not always guarantee that the task's effects can be reversed. In addition, isolation requirements on data resources may make a task difficult to compensate. In this paper, we first examine the requirements that a compensating task has to satisfy. We then introduce a new concept called confirmation. With the help of confirmation, most non-compensatable tasks can be modified so that they become compensatable. This can substantially increase the availability of shared resources and greatly improve backward recovery for workflow applications in case of failure. To effectively incorporate confirmation and compensation into a workflow management environment, a three-level bottom-up workflow design method is introduced, and the implementation issues of this design are discussed.
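
A toy sketch of the confirmation idea, with task names and logic invented: each task executes tentatively, is confirmed only once the whole workflow succeeds, and is compensated in reverse order if a later task fails. Deferring a task's irreversible effect to the confirmation step is what makes it compensatable in this sketch.

```python
# Toy illustration (task names invented): tentative execution, confirmation
# on success, compensation in reverse order on failure.
class Task:
    def __init__(self, name, do, confirm=None, compensate=None):
        self.name, self.do, self.confirm, self.compensate = name, do, confirm, compensate

def run_workflow(tasks):
    done = []
    try:
        for t in tasks:
            t.do()
            done.append(t)
        for t in done:
            if t.confirm:
                t.confirm()             # make tentative effects permanent
        return "committed"
    except Exception:
        for t in reversed(done):
            if t.compensate:
                t.compensate()          # undo in reverse order
        return "compensated"

log = []

def charge_card():
    raise RuntimeError("card declined")     # simulated failure

tasks = [
    Task("reserve stock", lambda: log.append("stock reserved"),
         confirm=lambda: log.append("stock deducted"),
         compensate=lambda: log.append("reservation released")),
    Task("charge card", charge_card),
]
print(run_workflow(tasks), "|", log)
# -> compensated | ['stock reserved', 'reservation released']
```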

Relevance: 90.00%

Abstract:

The paper presents a computational system, based upon formal principles, for running spatial models of environmental processes. The simulator is named SimuMap because it is typically used to simulate spatial processes over a mapped representation of terrain. A model is formally represented in SimuMap as a set of coupled sub-models. The paper considers the situation where spatial processes operate at different time levels but are still integrated. Such a situation commonly occurs in watershed hydrology, where overland flow and stream channel flow have very different flow rates but are highly related, as they are subject to the same terrain runoff processes. SimuMap is able to run a network of sub-models that express different time-space derivatives for water flow processes. Sub-models may be coded generically in a map algebra programming language that uses a surface data model. To address the problem of differing time levels in simulation, the paper: (i) reviews general approaches to numerical solvers; (ii) considers the constraints that must be enforced to use more adaptive time steps in discrete-time simulations; and (iii) examines the scaling of transfer rates in equations that use different time bases for their time-space derivatives. A multistep scheme is proposed for SimuMap and is presented along with a description of its visual programming interface, its modelling formalisms and future plans.
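
A minimal multirate sketch, not SimuMap's map algebra: a fast channel-flow store is sub-stepped inside each slow overland-flow step, with the transfer scaled to the time base of the step actually taken. All rates and units below are invented; taking the full slow step for the fast process would be numerically unstable in this example.

```python
# Minimal multirate sketch (not SimuMap itself): the fast process is
# sub-stepped within each slow step, and transfer rates are scaled to the
# time base of the step actually taken.
overland, channel = 100.0, 0.0      # water storages (arbitrary units)
dt_slow, substeps = 1.0, 10         # channel flow uses dt_slow / substeps
k_runoff, k_channel = 0.2, 1.5      # per-unit-time rate constants

for step in range(5):
    runoff = k_runoff * overland * dt_slow     # slow process: one big step
    overland -= runoff
    dt_fast = dt_slow / substeps
    for _ in range(substeps):                  # fast process: small steps
        channel += runoff / substeps - k_channel * channel * dt_fast
    print(f"t={step + 1}: overland={overland:7.2f} channel={channel:6.2f}")
```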

Relevance: 90.00%

Abstract:

Formal specifications can precisely and unambiguously define the required behavior of a software system or component. However, formal specifications are complex artifacts that need to be verified to ensure that they are consistent, complete, and validated against the requirements. Specification testing or animation tools exist to assist with this by allowing the specifier to interpret or execute the specification. However, currently little is known about how to do this effectively. This article presents a framework and tool support for the systematic testing of formal, model-based specifications. Several important generic properties that should be satisfied by model-based specifications are first identified. Following the idea of mutation analysis, we then use variants or mutants of the specification to check that these properties are satisfied. The framework also allows the specifier to test application-specific properties. All properties are tested for a range of states that are defined by the tester in the form of a testgraph, which is a directed graph that partially models the states and transitions of the specification being tested. Tool support is provided for the generation of the mutants, for automatically traversing the testgraph and executing the test cases, and for reporting any errors. The framework is demonstrated on a small specification and its application to three larger specifications is discussed. Experience indicates that the framework can be used effectively to test small to medium-sized specifications and that it can reveal a significant number of problems in these specifications.
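
A toy sketch of the framework's ingredients, with all names invented: a tiny executable model standing in for a specification, a mutant of it, and a walk derived from a testgraph that checks a generic property (an invariant) in every state reached.

```python
# Toy illustration (all names invented): an executable model, one mutant,
# and a testgraph-derived walk that checks an invariant in each state, in
# the spirit of mutation analysis of specifications.
class Counter:
    """Tiny executable 'specification' of a non-negative counter."""
    def __init__(self): self.n = 0
    def inc(self): self.n += 1
    def reset(self): self.n = 0
    def invariant(self): return self.n >= 0

class MutantCounter(Counter):
    """Mutant: reset is perturbed, so the invariant can be violated."""
    def reset(self): self.n = -1

# testgraph: states with operation-labelled transitions between them
testgraph = {"start": [("inc", "one")],
             "one": [("inc", "two"), ("reset", "start")],
             "two": [("reset", "start")]}
path = ["inc", "inc", "reset", "inc", "reset"]   # one traversal of the graph

def run(model_cls):
    model, errors = model_cls(), 0
    for op in path:
        getattr(model, op)()
        errors += not model.invariant()
    return errors

print("spec violations:", run(Counter), "| mutant violations:", run(MutantCounter))
```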

Relevance: 90.00%

Abstract:

Minimum/maximum autocorrelation factor (MAF) is a suitable algorithm for the orthogonalization of a vector random field. Orthogonalization avoids the use of multivariate geostatistics during the joint stochastic modeling of geological attributes. This manuscript demonstrates in a practical way that the computation of MAF is the same as a discriminant analysis of the nested structures. Mathematica software is used to illustrate MAF calculations from a linear model of coregionalization (LMC). The limitation to two nested structures in the LMC for MAF is also discussed and linked to the effects of anisotropy and support. The analysis elucidates the matrix properties behind the approach and clarifies relationships that may be useful for model-based approaches.
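
Sketched here in Python rather than the paper's Mathematica, and under one common formulation with invented coregionalization matrices: for a two-structure LMC, the MAF loadings can be obtained from the generalized symmetric eigenproblem B1 a = λ (B1 + B2) a, which has the same form as a discriminant analysis of the two nested structures.

```python
# MAF from a two-structure LMC via a generalized symmetric eigenproblem
# (one common formulation; the matrices are invented for illustration).
import numpy as np
from scipy.linalg import eigh

B1 = np.array([[2.0, 0.8], [0.8, 1.0]])    # short-range structure (assumed)
B2 = np.array([[1.0, -0.4], [-0.4, 1.5]])  # long-range structure (assumed)

lam, A = eigh(B1, B1 + B2)     # columns of A are the MAF loadings
print("generalized eigenvalues:", lam.round(3))
# A simultaneously diagonalises both structures:
print("A' B1 A:\n", (A.T @ B1 @ A).round(3))
print("A' B2 A:\n", (A.T @ B2 @ A).round(3))
```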

Relevance: 90.00%

Abstract:

The integration of geo-information from multiple sources and of diverse nature in developing mineral favourability indexes (MFIs) is a well-known problem in mineral exploration and mineral resource assessment. Fuzzy set theory provides a convenient framework for combining and analysing qualitative and quantitative data independently of their source or characteristics. A novel, data-driven formulation for calculating MFIs based on fuzzy analysis is developed in this paper. Different geo-variables are considered fuzzy sets and their appropriate membership functions are defined and modelled. A new weighted average-type aggregation operator is then introduced to generate a new fuzzy set representing mineral favourability. The membership grades of the new fuzzy set are taken as the MFI. The weights for the aggregation operation combine the individual membership functions of the geo-variables and are derived using information from training areas and L1 regression. The technique is demonstrated in a case study of skarn tin deposits, where it is used to integrate geological, geochemical and magnetic data. The study area covers a total of 22.5 km² and is divided into 349 cells, including nine control cells. Nine geo-variables are considered in this study. Depending on the nature of the various geo-variables, four different types of membership functions are used to model their fuzzy membership.
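
A hedged sketch of the aggregation step; the memberships, weights and the ramp membership function below are invented, not the paper's fitted values. Each geo-variable is mapped to a membership grade and the grades are combined with a weighted average to give the MFI per cell.

```python
# Illustrative fuzzy-MFI sketch (all values invented): geo-variables are
# mapped to membership grades, then combined by a weighted average.
import numpy as np

def ramp(x, lo, hi):
    # piecewise-linear membership: 0 below lo, 1 above hi
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# membership grades of three geo-variables for four cells (rows = cells)
memberships = np.column_stack([
    ramp(np.array([120.0, 15.0, 95.0, 60.0]), 10.0, 100.0),   # geochemical
    np.array([1.0, 0.0, 1.0, 0.0]),                           # favourable host rock
    ramp(np.array([0.8, 0.1, 0.5, 0.9]), 0.0, 1.0),           # magnetic anomaly
])
weights = np.array([0.5, 0.3, 0.2])     # would be fitted on the control cells

mfi = memberships @ weights             # weighted-average aggregation
print("MFI per cell:", mfi.round(2))    # rank cells by favourability
```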

Relevance: 90.00%

Abstract:

Cyclic peptides containing oxazole and thiazole heterocycles have been examined for their capacity to be used as scaffolds in larger, more complex, protein-like structures. Both the macrocyclic scaffolds and the supramolecular structures derived from them have been visualised by molecular modelling techniques, as these molecules are too symmetrical to examine structurally by NMR spectroscopy. The cyclic hexapeptide ([Aaa-Thz]₃, [Aaa-Oxz]₃) and cyclic octapeptide ([Aaa-Thz]₄, [Aaa-Oxz]₄) analogues are composed of dipeptide surrogates (Aaa: amino acid; Thz: thiazole; Oxz: oxazole) derived from intramolecular condensation of cysteine or serine/threonine side chains in dipeptides such as Aaa-Cys, Aaa-Ser and Aaa-Thr. The five-membered heterocyclic rings, such as thiazole, oxazole and reduced analogues like thiazoline, thiazolidine and oxazoline, have profound influences on the structures and bioactivities of the cyclic peptides derived from them. This work suggests that such constrained cyclic peptides can be used as scaffolds to create a range of novel protein-like supramolecular structures (e.g. cylinders, troughs, cones, multi-loop structures, helix bundles) that are comparable in size, shape and composition to the bioactive surfaces of proteins. They may therefore represent interesting starting points for the design of novel artificial proteins and artificial enzymes.

Relevance: 90.00%

Abstract:

This combined PET and ERP study was designed to identify the brain regions activated when switching and dividing attention between different features of a single object, using matched sensory stimuli and motor responses. The ERP data have previously been reported in this journal [64]; we now present the corresponding PET data. We identified partially overlapping neural networks with paradigms requiring the switching or dividing of attention between the elements of complex visual stimuli. Regions of activation were found in the prefrontal and temporal cortices and the cerebellum. Each task activated different prefrontal cortical regions, lending support to a functional subspecialisation of the prefrontal and temporal cortices based on the cognitive operations required rather than on the stimuli themselves.

Relevance: 90.00%

Abstract:

Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition, such as image segmentation. However, the EM algorithm requires considerable computational time when applied to huge data sets, such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm can improve its rate of convergence. In this paper, we show how this modified EM algorithm can be sped up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain.
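
A simplified one-dimensional sketch of the speed-up idea, not the paper's algorithm (which prunes a multiresolution kd-tree adaptively): points are grouped by a recursive median-split tree and the E-step evaluates responsibilities once per leaf, at the leaf mean, instead of once per voxel. The component variance is held fixed for brevity.

```python
# Simplified 1-D sketch of the multiresolution idea: responsibilities are
# computed once per tree leaf (at the leaf mean) instead of once per point.
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(np.concatenate([rng.normal(0.0, 1.0, 5000),
                            rng.normal(4.0, 1.0, 5000)]))

def leaves(data, depth):
    if depth == 0:
        return [data]
    mid = len(data) // 2                    # median split on sorted data
    return leaves(data[:mid], depth - 1) + leaves(data[mid:], depth - 1)

stats = np.array([(len(d), d.mean()) for d in leaves(x, depth=7)])
n, m = stats[:, 0], stats[:, 1]             # per-leaf counts and means

pi, mu, sigma = 0.5, np.array([-1.0, 1.0]), 2.0
for _ in range(50):
    d0 = (1 - pi) * np.exp(-0.5 * ((m - mu[0]) / sigma) ** 2)
    d1 = pi * np.exp(-0.5 * ((m - mu[1]) / sigma) ** 2)
    r = d1 / (d0 + d1)                      # one responsibility per leaf
    pi = np.sum(r * n) / np.sum(n)
    mu = np.array([np.sum((1 - r) * n * m) / np.sum((1 - r) * n),
                   np.sum(r * n * m) / np.sum(r * n)])

print(f"estimated means ~ {mu.round(2)}, mixing weight ~ {pi:.2f}")
```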