Abstract:
This paper identifies research priorities in evaluating the ways in which "genomic medicine" (the use of genetic information to prevent and treat disease) may reduce tobacco-related harm by: (1) assisting more smokers to quit; (2) preventing non-smokers from beginning to smoke tobacco; and (3) reducing the harm caused by tobacco smoking. The method proposed to achieve the first aim is "pharmacogenetics", the use of genetic information to optimise the selection of smoking-cessation programmes by screening smokers for polymorphisms that predict responses to different methods of smoking cessation. This method competes with the development of more effective forms of smoking cessation that involve vaccinating smokers against the effects of nicotine and using new pharmaceuticals (such as cannabinoid antagonists and nicotine agonists). The second and third aims are more speculative. They include: screening the population for genetic susceptibility to nicotine dependence and intervening (eg, by vaccinating children and adolescents against the effects of nicotine) to prevent smoking uptake, and screening the population for genetic susceptibility to tobacco-related diseases. A framework is described for future research on these policy options. This includes: epidemiological modelling and economic evaluation to specify the conditions under which these strategies are cost-effective; and social psychological research into the effect of providing genetic information on smokers' preparedness to quit, and the general views of the public on tobacco smoking.
Abstract:
Granule impact deformation has long been recognised as important in determining whether or not two colliding granules will coalesce. Work in the last 10 years has highlighted the fact that viscous effects are significant in granulation. The relative strengths of different formulations can vary with strain rate. Therefore, traditional strength measurements made at pseudo-static conditions give no indication, even qualitatively, of how materials will behave at high strain rates, and hence are actually misleading when used to model granule coalescence. This means that new standard methods need to be developed for determining the strain rates encountered by granules inside industrial equipment and also for measuring the mechanical properties of granules at these strain rates. The constitutive equations used in theoretical models of granule coalescence also need to be extended to include strain-rate dependent components.
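The call for strain-rate dependent constitutive components can be made concrete with a hedged example; the functional form below is an illustrative assumption, not one taken from the paper. A granule flow stress might combine a static yield term with a power-law viscous term:

```latex
% Illustrative only: static yield stress sigma_0 plus a power-law viscous
% contribution; k and n are assumed rate-sensitivity parameters.
\sigma(\dot{\varepsilon}) = \sigma_0 + k \, \dot{\varepsilon}^{\,n}
```

Under a form like this, two formulations with similar pseudo-static strengths (similar values of the static term) can rank very differently at the high strain rates relevant to granule collisions, which is exactly why pseudo-static measurements can mislead.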
Abstract:
This technical appendix details the methods used in an assessment of the potential of snus for tobacco harm reduction using simulation modelling.
Abstract:
This paper critically assesses several loss allocation methods based on the type of competition each method promotes. This understanding assists in determining which method will promote more efficient network operations when implemented in deregulated electricity industries. The methods addressed in this paper include the pro rata [1], proportional sharing [2], loss formula [3], incremental [4], and a new method proposed by the authors of this paper, which is loop-based [5]. These methods are tested on a modified Nordic 32-bus network, where different case studies of different operating points are investigated. The varying results obtained for each allocation method at different operating points make it possible to distinguish methods that promote unhealthy competition from those that encourage better system operation.
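As a minimal sketch of the simplest scheme discussed, pro rata allocation splits total network losses among participants in proportion to their power injections; the data and names below are illustrative assumptions, not the paper's Nordic test system:

```python
# Illustrative pro-rata loss allocation: losses are split in proportion
# to each generator's share of total injected power (assumed example data).

def pro_rata_allocation(total_losses_mw, injections_mw):
    """Allocate total_losses_mw across participants proportionally."""
    total_injection = sum(injections_mw.values())
    return {bus: total_losses_mw * p / total_injection
            for bus, p in injections_mw.items()}

# Hypothetical 3-generator system with 12 MW of network losses.
injections = {"G1": 300.0, "G2": 150.0, "G3": 50.0}
print(pro_rata_allocation(12.0, injections))
# -> G1 bears 7.2 MW, G2 3.6 MW, G3 1.2 MW, regardless of location.
```

The location-independence visible here is the kind of property the paper's comparison probes: a remote generator that causes heavy losses pays no more per MW than a well-sited one, which can distort competition.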
Abstract:
A large number of models have been derived from the two-parameter Weibull distribution and are referred to as Weibull models. They exhibit a wide range of shapes for the density and hazard functions, which makes them suitable for modelling complex failure data sets. The Weibull probability paper (WPP) and inverse Weibull probability paper (IWPP) plots allow one to determine in a systematic manner whether one or more of these models are suitable for modelling a given data set. This paper deals with this topic.
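A hedged sketch of the standard WPP construction (not code from the paper): under a two-parameter Weibull distribution, F(t) = 1 - exp(-(t/alpha)^beta), so y = ln(-ln(1 - F)) is linear in x = ln t, and departures from linearity point toward one of the derived models. The sample and plotting positions below are assumptions for illustration:

```python
import numpy as np

# WPP transform: for Weibull data, y = ln(-ln(1 - F)) vs x = ln(t) is a
# straight line with slope beta (shape) and intercept -beta * ln(alpha).
rng = np.random.default_rng(1)
t = np.sort(rng.weibull(1.5, size=200) * 100.0)        # assumed beta=1.5, alpha=100
F = (np.arange(1, t.size + 1) - 0.3) / (t.size + 0.4)  # median-rank positions

x = np.log(t)
y = np.log(-np.log(1.0 - F))

beta, intercept = np.polyfit(x, y, 1)
alpha = np.exp(-intercept / beta)
print(f"shape ~ {beta:.2f}, scale ~ {alpha:.1f}")      # near 1.5 and 100
# Systematic curvature in (x, y) would instead suggest one of the
# derived Weibull models rather than the basic two-parameter form.
```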
Abstract:
We propose quadrature rules for the approximation of line integrals possessing logarithmic singularities and show their convergence. In some instances a superconvergence rate is demonstrated.
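The paper's specific rules are not reproduced here, but a standard device for such integrals is singularity subtraction (a sketch, with an assumed integrand f): for smooth f, the singular part f(0) ln(x) of f(x) ln(x) integrates exactly to -f(0) over [0, 1], leaving a bounded integrand that an ordinary Gauss-Legendre rule handles far better than the raw integral:

```python
import numpy as np

def gauss_legendre_01(f, n):
    """n-point Gauss-Legendre rule mapped from [-1, 1] onto [0, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (nodes + 1.0)
    return 0.5 * np.sum(weights * f(x))

def log_singular_quad(f, n):
    """Approximate integral_0^1 f(x) * ln(x) dx by singularity subtraction:
    f(0) * ln(x) integrates exactly to -f(0), and the remainder
    (f(x) - f(0)) * ln(x) is bounded near 0."""
    g = lambda x: (f(x) - f(0.0)) * np.log(x)
    return gauss_legendre_01(g, n) - f(0.0)

# Check with f = cos: integral_0^1 cos(x) ln(x) dx = -Si(1) ~ -0.946083.
print(log_singular_quad(np.cos, 20))
```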
Abstract:
This paper proposes some variants of Temporal Defeasible Logic (TDL) to reason about normative modifications. These variants make it possible to differentiate cases in which, for example, a modification at some time changes legal rules but their conclusions persist afterwards, from cases where the conclusions are blocked as well.
Abstract:
Nearest-neighbour balance is considered a desirable property for an experiment to possess in situations where experimental units are influenced by their neighbours. This paper introduces a measure of the degree of nearest-neighbour balance of a design. The measure is used in an algorithm which generates nearest-neighbour balanced designs and is readily modified to obtain designs with various types of nearest-neighbour balance. Nearest-neighbour balanced designs are produced for a wide class of parameter settings, and in particular for those settings for which such designs cannot be found by existing direct combinatorial methods. In addition, designs with unequal row and column sizes, and designs with border plots, are constructed using the approach presented here.
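The paper's exact measure is not reproduced here, but one plausible measure of this kind (an assumed, illustrative choice) counts how often each unordered pair of treatments occupies adjacent plots and scores the design by how far those counts deviate from equality:

```python
import itertools
from collections import Counter

def nn_imbalance(rows, n_treatments):
    """An illustrative (assumed, not the paper's) nearest-neighbour balance
    score: the sum of squared deviations of neighbour-pair counts from their
    mean, so 0 means every treatment pair is equally often adjacent.
    `rows` is a list of lists of treatment labels in left-to-right order."""
    counts = Counter()
    for row in rows:
        for a, b in zip(row, row[1:]):     # horizontally adjacent plots
            if a != b:                     # ignore self-neighbour pairs
                counts[frozenset((a, b))] += 1
    pairs = [frozenset(p) for p in itertools.combinations(range(n_treatments), 2)]
    mean = sum(counts.values()) / len(pairs)
    return sum((counts.get(p, 0) - mean) ** 2 for p in pairs)

# A made-up 3x3 row-column design on treatments 0, 1, 2:
design = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
print(nn_imbalance(design, 3))   # 0.0: each pair is adjacent exactly twice
```

A search algorithm of the kind the paper describes would then swap treatments between plots so as to drive such a score toward zero.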
Abstract:
The generalized Gibbs sampler (GGS) is a recently developed Markov chain Monte Carlo (MCMC) technique that enables Gibbs-like sampling of state spaces that lack a convenient representation in terms of a fixed coordinate system. This paper describes a new sampler, called the tree sampler, which uses the GGS to sample from a state space consisting of phylogenetic trees. The tree sampler is useful for a wide range of phylogenetic applications, including Bayesian, maximum likelihood, and maximum parsimony methods. A fast new algorithm to search for a maximum parsimony phylogeny is presented, using the tree sampler in the context of simulated annealing. The mathematics underlying the algorithm is explained and its time complexity is analyzed. The method is tested on two large data sets consisting of 123 sequences and 500 sequences, respectively. The new algorithm is shown to compare very favorably in terms of speed and accuracy to the program DNAPARS from the PHYLIP package.
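The tree sampler itself is not reproduced here, but the simulated-annealing loop it plugs into follows the standard Metropolis pattern. The sketch below is generic, assumed code: `propose` stands in for the tree sampler's move (here a toy Gaussian step) and `score` for the parsimony score (here a toy quadratic):

```python
import math
import random

def simulated_annealing(state, propose, score, t0=1.0, cooling=0.999, steps=10000):
    """Generic annealing driver: `propose` generates a neighbouring state
    (the tree sampler's role in the paper); `score` is minimised
    (e.g. a parsimony score). Placeholders are assumptions."""
    best, best_s = state, score(state)
    current_s, temp = best_s, t0
    for _ in range(steps):
        candidate = propose(state)
        cand_s = score(candidate)
        # Metropolis rule: always accept improvements; accept worse states
        # with probability exp(-delta / temperature), which shrinks as the
        # temperature is cooled.
        if cand_s <= current_s or random.random() < math.exp((current_s - cand_s) / temp):
            state, current_s = candidate, cand_s
            if cand_s < best_s:
                best, best_s = candidate, cand_s
        temp *= cooling
    return best, best_s

# Toy usage: minimise a 1-D function in place of a parsimony score.
best, s = simulated_annealing(
    0.0,
    propose=lambda x: x + random.gauss(0.0, 0.5),
    score=lambda x: (x - 3.0) ** 2,
)
print(best, s)
```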
Abstract:
Over the last three decades, interest in and the range of uses of stepped chutes have grown, driven by new techniques and materials (roller-compacted concrete RCC, gabions, etc.) that allow fast and economical construction. Stepped chutes are currently used as spillways and/or fish ladders at dams and weirs, as energy dissipators in channels and rivers, and as aerators in treatment plants and polluted streams. Many researchers have studied flow on stepped spillways, focusing on steep structures (~45°), so the behaviour of flow over spillways with moderate slopes (~15 to 30°) has not yet been fully understood. This article presents an experimental study of the physical properties of air-water flow on stepped chutes with moderate slopes, typical of embankment dams. A wide range of discharges under skimming flow conditions was investigated in two large-scale experimental models (Le = 3 to 6): a chute with a 3.5H:1V slope (~16°) and two different step heights (h = 0.1 and 0.05 m), and a chute with a 2.5H:1V slope (~22°) and a step height of h = 0.1 m. The results include a detailed analysis of the flow properties on stepped spillways with moderate slopes and a new hydraulic design criterion based on the experimental results obtained. English abstract: Stepped chutes have been used as hydraulic structures since antiquity; they can be found acting as spillways and fish ladders in dams and weirs, as energy dissipators in artificial channels, gutters and rivers, and as aeration enhancers in water treatment plants and polluted streams. In recent years, new construction techniques and materials (Roller Compacted Concrete RCC, rip-rap gabions, etc.) together with the development of the abovementioned new applications have allowed cheaper construction methods, increasing the interest in stepped chute design. During the last three decades, research in stepped spillways has been very active. However, studies prior to 1993 neglected the effect of free-surface aeration. Since then, a number of studies have focused on steep stepped chutes (~45°), but the hydraulic performance of moderate-slope stepped channels is not yet totally understood. This study details an experimental investigation of physical air-water flow properties down moderate-slope stepped spillways conducted in two laboratory models: the first model was a 3.15 m long stepped chute with a 15.9° slope comprising two interchangeable step heights (h = 0.1 m and h = 0.05 m); the second model was a 3.3 m long stepped channel with a 21.8° slope (h = 0.1 m). A broad range of discharges within transition and skimming flow regimes was investigated. Measurements were conducted using a double tip conductivity probe. The study provides new, original insights into air-water stepped chute flows not foreseen in prior studies and presents a new design criterion for chutes with moderate slopes based on the experimental results.
Abstract:
The reconstruction of a complex scene from multiple images is a fundamental problem in the field of computer vision. Volumetric methods have proven to be a strong alternative to traditional correspondence-based methods due to their flexible visibility models. In this paper we analyse existing methods for volumetric reconstruction and identify three key properties of voxel colouring algorithms: a water-tight surface model, a monotonic carving order, and causality. We present a new Voxel Colouring algorithm which embeds all reconstructions of a scene into a single output. While modelling exact visibility for arbitrary camera locations, Embedded Voxel Colouring removes the need for a priori threshold selection present in previous work. An efficient implementation is given along with results demonstrating the advantages of a posteriori threshold selection.
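A hedged sketch of the embedding idea (assumed names and an assumed consistency statistic, not the authors' code; the carving order and visibility bookkeeping of the full algorithm are omitted): rather than testing each voxel's photo-consistency against a fixed threshold during carving, record the score at which each voxel would be carved, then threshold the stored scores afterwards, so one pass supports any threshold chosen a posteriori:

```python
import numpy as np

rng = np.random.default_rng(0)

def consistency_score(pixel_colors):
    """Photo-consistency statistic for one voxel: the largest per-channel
    standard deviation of the RGB pixels it projects to (a simple,
    assumed choice of statistic)."""
    return float(np.max(np.std(pixel_colors, axis=0)))

# Made-up projected colours for two voxels (rows = observing cameras).
voxel_pixels = {
    "surface_voxel": np.array([[200, 10, 10], [198, 12, 9], [201, 11, 10]]),
    "empty_voxel":   rng.integers(0, 255, size=(3, 3)),
}

# Embedded-style pass: record every voxel's score once...
scores = {v: consistency_score(c) for v, c in voxel_pixels.items()}

# ...then any threshold can be applied a posteriori, without re-carving.
def reconstruction(scores, threshold):
    return {v for v, s in scores.items() if s <= threshold}

print(reconstruction(scores, threshold=5.0))   # likely just the surface voxel
```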
Abstract:
Many images consist of two or more 'phases', where a phase is a collection of homogeneous zones. For example, the phases may represent the presence of different sulphides in an ore sample. Frequently, these phases exhibit very little structure, though all connected components of a given phase may be similar in some sense. As a consequence, random set models are commonly used to model such images; the Boolean model and models derived from it are often chosen. An alternative approach is to use the excursion sets of random fields to model each phase. In this paper, the properties of excursion sets will first be discussed in terms of modelling binary images. Ways of extending these models to multi-phase images will then be explored. A desirable feature of any model is that it can be fitted to data reasonably well. Different methods for fitting random set models based on excursion sets will be presented, and some of the difficulties with these methods will be discussed.
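A minimal sketch of the modelling idea (field construction and parameters are assumptions for illustration): simulate a stationary Gaussian random field as smoothed white noise, and take its excursion set above a level u as the binary phase:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

# Stationary Gaussian random field: white noise smoothed with a Gaussian
# kernel (the correlation length is set by sigma; values are assumed).
field = gaussian_filter(rng.standard_normal((256, 256)), sigma=5.0)
field /= field.std()   # normalise to unit variance

# The excursion set {x : Z(x) >= u} is the binary "phase".
u = 0.5
phase = field >= u

print(f"volume fraction ~ {phase.mean():.2f}")  # roughly P(Z >= u) for N(0, 1)
# A second, nested phase could use a higher level u2 > u, or an
# independent field, giving a multi-phase image.
```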
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful, and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and values can be text, categorical, or numerical. One of the important characteristics of data mining is its ability to handle data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rules mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
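As a brief illustration of one of the example-based methods listed above, the sketch below runs scikit-learn's k-nearest-neighbours classifier on the bundled Iris data; the dataset and parameter choices are illustrative, not taken from the chapter:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# k-nearest neighbours (Duda & Hart, 1973): classify each test point by a
# majority vote among its k closest training points.
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```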