26 results for Grid-based clustering approach
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
We developed a stochastic simulation model incorporating most processes likely to be important in the spread of Phytophthora ramorum and similar diseases across the British landscape (covering Rhododendron ponticum in woodland and nurseries, and Vaccinium myrtillus in heathland). The simulation allows for movements of diseased plants within a realistically modelled trade network and long-distance natural dispersal. A series of simulation experiments was run with the model, varying the epidemic pressure and linkage between natural vegetation and horticultural trade, with or without disease spread in commercial trade, and with or without inspections-with-eradication, giving a 2 × 2 × 2 × 2 factorial design with epidemics started at 10 arbitrary locations spread across England. Fifty replicate simulations were made at each set of parameter values. Individual epidemics varied dramatically in size due to stochastic effects throughout the model. Across a range of epidemic pressures, the size of the epidemic was 5-13 times larger when commercial movement of plants was included. A key unknown factor in the system is the area of susceptible habitat outside the nursery system. Inspections, with a probability of detection and efficiency of infected-plant removal of 80% and made at 90-day intervals, reduced the size of epidemics by about 60% across the three sectors with a density of 1% susceptible plants in broadleaf woodland and heathland. Reducing this density to 0.1% largely isolated the trade network, so that inspections reduced the final epidemic size by over 90%, and most epidemics ended without escape into nature. Even in this case, however, major wild epidemics developed in a few percent of cases. Provided the number of new introductions remains low, the current inspection policy will control most epidemics. However, as the rate of introduction increases, it can overwhelm any reasonable inspection regime, largely due to spread prior to detection.
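As an illustration of the experimental design described in this abstract, the sketch below runs replicate stochastic epidemics on a toy grid, toggling trade-mediated spread and periodic inspections. All rates, the grid size and the dispersal rules are invented placeholders; only the 80% detection probability, 90-day inspection interval, 10 seed foci and 50 replicates are taken from the abstract.

```python
# Toy stochastic spread-and-inspection experiment; parameters are illustrative,
# not the paper's calibrated values.
import random

def run_epidemic(trade=True, inspect=True, days=1000, n_sites=400,
                 p_local=0.02, p_trade=0.005, detect=0.8, interval=90):
    side = int(n_sites ** 0.5)
    infected = set(random.sample(range(n_sites), 10))   # 10 seed foci
    for day in range(days):
        new = set()
        for s in infected:
            # short-range natural dispersal to a grid neighbour
            if random.random() < p_local:
                new.add(max(0, min(n_sites - 1,
                                   s + random.choice([-1, 1, -side, side]))))
            # long-distance jump via the trade network
            if trade and random.random() < p_trade:
                new.add(random.randrange(n_sites))
        infected |= new
        # periodic inspection with imperfect detection/eradication
        if inspect and day % interval == 0:
            infected = {s for s in infected if random.random() > detect}
    return len(infected)

sizes = [run_epidemic() for _ in range(50)]   # 50 replicates, as in the design
print(min(sizes), sorted(sizes)[25], max(sizes))
```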
Abstract:
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where blood oxygen-level-dependent signals are recorded, understanding and accurately modeling the hemodynamic relationship between CBF and CBV become increasingly important. This study presents an empirical, data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve this errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, combining RTLS with a filtering method leads to a parsimonious but very effective model that characterizes the relationship between the changes in CBF and CBV.
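A minimal sketch of the flavour of this identification problem: fit a low-order ARX model between synthetic CBF and CBV changes using total least squares computed via the SVD, with a simple Tikhonov-style damping term standing in for the paper's more elaborate RTLS formulation. The model orders, noise levels and coefficients are assumptions for illustration.

```python
# Fit v(t) = a1*v(t-1) + b0*f(t) + b1*f(t-1) between CBF changes f and
# CBV changes v, both observed with noise (errors-in-variables).
import numpy as np

def tls(A, b, lam=0.0):
    if lam > 0:  # crude Tikhonov-style damping of the augmented problem
        A = np.vstack([A, np.sqrt(lam) * np.eye(A.shape[1])])
        b = np.concatenate([b, np.zeros(A.shape[1])])
    Z = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]              # right singular vector for the smallest singular value
    return -v[:-1] / v[-1]  # classical TLS parameter estimate

rng = np.random.default_rng(0)
f = rng.standard_normal(200)                     # synthetic CBF changes
v = np.zeros(200)
for t in range(1, 200):                          # simulate a "true" ARX system
    v[t] = 0.8 * v[t-1] + 0.3 * f[t] + 0.1 * f[t-1]
fn = f + 0.05 * rng.standard_normal(200)         # noisy regressor signal
vn = v + 0.05 * rng.standard_normal(200)         # noisy output signal
A = np.column_stack([vn[:-1], fn[1:], fn[:-1]])  # regressors v(t-1), f(t), f(t-1)
print(tls(A, vn[1:], lam=0.1))                   # compare against (0.8, 0.3, 0.1)
```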
Abstract:
With the fast development of wireless communications, ZigBee and semiconductor devices, home automation networks have recently become very popular. Since typical consumer products deployed in home automation networks are often powered by tiny, limited batteries, one of the most challenging research issues concerns reducing and balancing energy consumption across the network in order to prolong the network lifetime of consumer devices. The introduction of clustering and sink mobility techniques into home automation networks has been shown to be an efficient way to improve network performance and has received significant research attention. Taking inspiration from nature, this paper proposes an Ant Colony Optimization (ACO) based clustering algorithm with mobile sink support specifically for home automation networks. In this work, the network is divided into several clusters and a cluster head is selected within each cluster. A mobile sink then communicates with each cluster head to collect data directly through short-range communications, with the ACO algorithm used to find an optimal mobility trajectory for the mobile sink. Extensive simulation results show that, by using mobile sinks, the proposed algorithm significantly improves home network performance in terms of energy consumption and network lifetime compared to other routing algorithms currently deployed for home automation networks.
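The trajectory-optimization step can be read as a travelling-salesman problem over the cluster heads, a classic ACO application. The sketch below is a generic ant-colony tour search under that reading; the pheromone rule, parameter values and random cluster-head positions are illustrative assumptions, not the paper's algorithm.

```python
# Generic ACO tour search over cluster-head positions (treated as TSP cities).
import math, random

heads = [(random.random() * 100, random.random() * 100) for _ in range(8)]
n = len(heads)
dist = [[math.dist(heads[i], heads[j]) or 1e-9 for j in range(n)] for i in range(n)]
tau = [[1.0] * n for _ in range(n)]                 # pheromone levels
best, best_len = None, float("inf")

for _ in range(100):                                # iterations
    for _ in range(20):                             # ants
        tour, todo = [0], set(range(1, n))
        while todo:
            i = tour[-1]
            weights = [tau[i][j] * (1.0 / dist[i][j]) ** 2 for j in todo]
            tour.append(random.choices(list(todo), weights)[0])
            todo.remove(tour[-1])
        length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
        if length < best_len:
            best, best_len = tour, length
        for k in range(n):                          # deposit pheromone on this tour
            a, b = tour[k], tour[(k + 1) % n]
            tau[a][b] += 1.0 / length
            tau[b][a] = tau[a][b]
    tau = [[0.9 * t for t in row] for row in tau]   # evaporation
print(best, round(best_len, 1))
```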
Abstract:
Background: The amount and structure of genetic diversity in dessert apple germplasm conserved at a European level is mostly unknown, since all diversity studies conducted in Europe until now have been performed on regional or national collections. Here, we applied a common set of 16 SSR markers to genotype more than 2,400 accessions across 14 collections representing three broad European geographic regions (North+East, West and South), with the aim of analyzing the extent, distribution and structure of variation in the apple genetic resources in Europe. Results: A Bayesian model-based clustering approach showed that diversity was organized in three groups, although these were only moderately differentiated (F_ST = 0.031). A nested Bayesian clustering approach allowed identification of subgroups which revealed internal patterns of substructure within the groups, allowing a finer delineation of the variation into eight subgroups (F_ST = 0.044). The first level of stratification revealed an asymmetric division of the germplasm among the three groups, and a clear association was found with the geographical regions of origin of the cultivars. The substructure revealed clear partitioning of genetic groups among countries, but also interesting associations between subgroups and the breeding purposes of recent cultivars or particular usages such as cider production. Additional parentage analyses allowed us to identify both putative parents of more than 40 old and/or local cultivars, giving interesting insights into the pedigrees of some emblematic cultivars. Conclusions: The variation found at group and subgroup levels may reflect a combination of historical processes of migration/selection and adaptation to diverse agricultural environments that, together with genetic drift, have resulted in extensive genetic variation but limited population structure. The European dessert apple germplasm represents an important source of genetic diversity with a strong historical and patrimonial value. The present work thus constitutes a decisive step in the field of conservation genetics. Moreover, the obtained data can be used to define a European apple core collection useful for further identification of genomic regions associated with commercially important horticultural traits through genome-wide association studies.
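For readers unfamiliar with the differentiation statistic quoted above, the sketch below computes an F_ST-style measure (Nei's G_ST, (H_T − H_S) / H_T) from per-group allele frequencies at a single SSR locus. The input frequencies are invented, and the paper's nested Bayesian clustering itself is far more involved and not reproduced here.

```python
# F_ST-style differentiation from allele frequencies at one locus.
import numpy as np

def fst(pop_freqs):
    """pop_freqs: array (n_pops, n_alleles); each row sums to 1."""
    p = np.asarray(pop_freqs, dtype=float)
    hs = np.mean(1.0 - np.sum(p ** 2, axis=1))  # mean within-group heterozygosity
    pbar = p.mean(axis=0)                        # pooled allele frequencies
    ht = 1.0 - np.sum(pbar ** 2)                 # total expected heterozygosity
    return (ht - hs) / ht

# three hypothetical groups, four alleles at one SSR locus
print(round(fst([[0.60, 0.20, 0.10, 0.10],
                 [0.50, 0.30, 0.10, 0.10],
                 [0.40, 0.30, 0.20, 0.10]]), 3))
```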
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and each core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
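A sketch of the modelling approach as described: benchmark the two work types (array updates and halo exchanges) at a few local problem sizes, then interpolate to predict the per-step cost of a given decomposition. The timings and the simple bulk-synchronous cost formula below are placeholders, not measurements from the Cray XE6 study.

```python
# Benchmark-driven performance prediction via interpolation.
import numpy as np

# measured per-step times (seconds) from application-specific benchmarks
sizes     = np.array([128, 256, 512, 1024])           # local subdomain edge length
compute_t = np.array([0.8e-3, 3.1e-3, 12.9e-3, 55.0e-3])
halo_t    = np.array([0.10e-3, 0.18e-3, 0.35e-3, 0.70e-3])

def predict(global_n, px, py, steps=1000):
    """Predict runtime for a global_n x global_n grid on a px x py task layout."""
    local = global_n / max(px, py)             # edge of each task's subdomain
    comp = np.interp(local, sizes, compute_t)  # interpolate between benchmarks
    halo = np.interp(local, sizes, halo_t)
    return steps * (comp + halo)               # bulk-synchronous cost model

print(predict(2048, 4, 4))   # e.g. 16 tasks in a 4x4 decomposition
```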
Abstract:
Clustering methods are increasingly being applied to residential smart meter data, providing a number of important opportunities for distribution network operators (DNOs) to manage and plan the low voltage networks. Clustering has a number of potential advantages for DNOs, including identifying suitable candidates for demand response and improving energy profile modelling. However, due to the high stochasticity and irregularity of household-level demand, detailed analytics are required to define appropriate attributes to cluster on. In this paper we present an in-depth analysis of customer smart meter data to better understand peak demand and the major sources of variability in customer behaviour. We find four key time periods in which the data should be analysed and use these to form relevant attributes for our clustering. We present a finite mixture model based clustering in which we discover 10 distinct behaviour groups describing customers based on their demand and their variability. Finally, using an existing bootstrapping technique, we show that the clustering is reliable. To the authors' knowledge, this is the first time in the power systems literature that the sample robustness of such a clustering has been tested.
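A hedged sketch of the pipeline described: fit a finite mixture model (here scikit-learn's GaussianMixture with 10 components, matching the 10 behaviour groups) to per-customer attributes, then check stability by refitting on bootstrap resamples. The synthetic attributes and the crude label-overlap score stand in for the paper's attributes and its specific bootstrapping technique.

```python
# Finite-mixture clustering with a simple bootstrap stability check.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 8))   # 500 customers x 8 demand attributes (synthetic)

gmm = GaussianMixture(n_components=10, random_state=0).fit(X)
labels = gmm.predict(X)

# bootstrap: refit on resampled customers, compare cluster assignments
agreements = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))
    boot = GaussianMixture(n_components=10, random_state=0).fit(X[idx])
    agreements.append(np.mean(boot.predict(X) == labels))  # crude label overlap
print(np.mean(agreements))
```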
Abstract:
This paper uses an entropy-based information approach to determine whether farmland values are more closely associated with urban pressure or farm income. The basic question is: how much information on changes in farm real estate values is contained in changes in population versus changes in returns to production agriculture? Results suggest population is informative, but changes in farmland values are more strongly associated with changes in the distribution of returns. However, this relationship does not hold for every region or over time; for some regions and time periods, changes in population are more informative. The results have policy implications for both equity and efficiency.
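One simple reading of this kind of comparison is a mutual-information estimate: how much information do changes in population versus changes in returns carry about changes in farmland values? The sketch below uses a histogram estimator on synthetic series; the paper's actual entropy-based information approach may differ.

```python
# Histogram-based mutual information (in nats) on synthetic series.
import numpy as np

def mutual_info(x, y, bins=8):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(2)
returns = rng.standard_normal(300)       # synthetic changes in farm returns
population = rng.standard_normal(300)    # synthetic changes in population
values = 0.8 * returns + 0.2 * population + 0.3 * rng.standard_normal(300)
print(mutual_info(values, returns), mutual_info(values, population))
```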
Abstract:
The Complex Adaptive Systems, Cognitive Agents and Distributed Energy (CASCADE) project is developing a framework based on Agent Based Modelling (ABM). The CASCADE Framework can be used both to gain policy and industry relevant insights into the smart grid concept itself and as a platform to design and test distributed ICT solutions for smart grid based business entities. ABM is used to capture the behaviors of different social, economic and technical actors, which may be defined at various levels of abstraction. It is applied to understanding their interactions and can be adapted to include learning processes and emergent patterns. CASCADE models ‘prosumer’ agents (i.e., producers and/or consumers of energy) and ‘aggregator’ agents (e.g., traders of energy in both wholesale and retail markets) at various scales, from large generators and Energy Service Companies down to individual people and devices. The CASCADE Framework is formed of three main subdivisions that link models of electricity supply and demand, the electricity market and power flow. It can also model the variability of renewable energy generation caused by the weather, which is an important issue for grid balancing and the profitability of energy suppliers. The development of CASCADE has already yielded some interesting early findings, demonstrating that it is possible for a mediating agent (aggregator) to achieve stable demand flattening across groups of domestic households fitted with smart energy control and communication devices, where direct wholesale price signals had previously been found to produce characteristic complex system instability. In another example, it has demonstrated how large changes in supply mix can be caused even by small changes in demand profile. Ongoing and planned refinements to the Framework will support investigation of demand response at various scales, the integration of the power sector with transport and heat sectors, novel technology adoption and diffusion work, evolution of new smart grid business models, and complex power grid engineering and market interactions.
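As a toy illustration of the demand-flattening result mentioned above (and nothing more), the sketch below has household agents shift load away from periods an aggregator signals as peaks, iterated over days. The agent behaviour, numbers and signal rule are all invented; the CASCADE Framework itself is a far richer multi-model platform.

```python
# Toy prosumer/aggregator loop: households respond to a next-day peak signal.
import numpy as np

rng = np.random.default_rng(3)
base = 0.5 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, 48))   # half-hourly demand shape
households = base + 0.1 * rng.standard_normal((100, 48))   # 100 prosumer agents

signal = np.zeros(48)
for day in range(30):
    shift = 0.3 * (signal - signal.mean())      # move load away from signalled peaks
    demand = np.clip(households - shift, 0, None)
    aggregate = demand.sum(axis=0)
    signal = aggregate / aggregate.max()        # aggregator's next-day signal

print(aggregate.max() / aggregate.mean())       # peak-to-average ratio after adaptation
```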
Abstract:
This paper focuses on successful reform strategies invoked in parts of the Muslim world to address issues of gender inequality in the context of Islamic personal law. It traces the development of personal status laws in Tunisia and Morocco, exploring the models they offer in initiating equality-enhancing reforms in Bangladesh, where a secular and equality-based reform approach conflicts with Islamic-based conservatism. Recent landmark family law reforms in Morocco show the possibility of achieving ‘women-friendly’ reforms within an Islamic legal framework. Moreover, the Tunisian Personal Status Code, with its successive reforms, shows that a gender equality-based model of personal law can be successfully integrated into the Muslim way of life. This study examines the response of Muslim societies to equality-based reforms and differences in approach in initiating them. The paper maps these sometimes competing approaches, locating them within contemporary feminist debates related to gender equality in the East and West.
Abstract:
Cedrus atlantica (Pinaceae) is a large and exceptionally long-lived conifer native to the Rif and Atlas Mountains of North Africa. To assess levels and patterns of genetic diversity of this species, samples were obtained throughout the natural range in Morocco and from a forest plantation in Arbucies, Girona (Spain) and analyzed using RAPD markers. Within-population genetic diversity was high and comparable to that revealed by isozymes. Managed populations harbored levels of genetic variation similar to those found in their natural counterparts. Genotypic analyses of molecular variance (AMOVA) found that most variation was within populations, but significant differentiation was also found between populations, particularly in Morocco. Bayesian estimates of F_ST corroborated the AMOVA partitioning and provided evidence for population differentiation in C. atlantica. Both distance- and Bayesian-based clustering methods revealed that Moroccan populations comprise two genetically distinct groups. Within each group, estimates of population differentiation were close to those previously reported in other gymnosperms. These results are interpreted in the context of the postglacial history of the species and human impact. The high degree of among-group differentiation recorded here highlights the need for additional conservation measures for some Moroccan populations of C. atlantica.
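A hedged sketch of the distance-based side of such an analysis: dominant RAPD data can be coded as 0/1 band-presence vectors, compared with Jaccard distances and grouped by average-linkage (UPGMA-style) clustering. The data below are synthetic, and the paper's AMOVA and Bayesian analyses are not reproduced.

```python
# Distance-based clustering of binary RAPD band profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
group_a = rng.random((10, 40)) < 0.7      # ten individuals, 40 RAPD bands
group_b = rng.random((10, 40)) < 0.3
bands = np.vstack([group_a, group_b])     # boolean band-presence matrix

d = pdist(bands, metric="jaccard")        # pairwise Jaccard distances
tree = linkage(d, method="average")       # UPGMA-style dendrogram
print(fcluster(tree, t=2, criterion="maxclust"))  # two recovered clusters
```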
Abstract:
Ubiquitous healthcare is an emerging area of technology that uses a large number of environmental and patient sensors and actuators to monitor and improve patients’ physical and mental condition. Tiny sensors gather data on almost any physiological characteristic that can be used to diagnose health problems. This technology faces some challenging ethical questions, ranging from the small-scale individual issues of trust and efficacy to the societal issues of health and longevity gaps related to economic status. It presents particular problems in combining developing computer/information/media ethics with established medical ethics. This article describes a practice-based ethics approach, considering in particular the areas of privacy, agency, equity and liability. It raises questions that ubiquitous healthcare will force practitioners to face as they develop ubiquitous healthcare systems. Medicine is a controlled profession whose practice is commonly restricted by government-appointed authorities, whereas computer software and hardware development is notoriously lacking in such regimes.
Abstract:
In financial decision-making processes, the adopted weights of the objective functions have significant impacts on the final decision outcome. However, conventional rating and weighting methods exhibit difficulty in deriving appropriate weights for complex decision-making problems with imprecise information. Entropy is a quantitative measure of uncertainty and has been useful in exploring the weights of attributes in decision making. Here, a fuzzy and entropy-based mathematical approach is employed to solve the weighting problem of the objective functions in an overall cash-flow model. A multiproject portfolio undertaken by a medium-size construction firm in Hong Kong was used as a real case study to demonstrate the application of entropy in multiproject cash-flow situations. The results indicate that the overall before-tax profit was HK$0.11 million lower after the introduction of appropriate weights. In addition, the best time to invest in new projects arising from positive cash flow was identified to be two working months earlier than under the unweighted system.
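The entropy weighting idea referred to above is commonly implemented as follows: criteria whose scores are spread evenly across alternatives have high entropy, carry little discriminating information, and receive low weight. The sketch below shows that standard calculation on a made-up decision matrix; it is not the paper's fuzzy extension or the Hong Kong case data.

```python
# Standard entropy weight method for a decision matrix.
import numpy as np

def entropy_weights(X):
    """X: (alternatives x criteria) matrix of non-negative scores."""
    P = X / X.sum(axis=0)                          # normalise each criterion column
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -k * np.nansum(P * np.log(P), axis=0)  # Shannon entropy per criterion
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()                             # entropy-based weights

X = np.array([[0.8, 0.6, 0.9],                     # e.g. three cash-flow objectives
              [0.7, 0.9, 0.4],                     # scored over three alternatives
              [0.9, 0.5, 0.6]])
print(entropy_weights(X))
```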
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via a concept of fuzzification, using fuzzy membership functions usually based on B-splines, together with algebraic operators for inference. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions and unity of support, but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots acting as vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as the Bezier-Bernstein polynomial function of the barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates that form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples that demonstrate the effectiveness of this new data-based modelling approach.
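A sketch of the two geometric ingredients named in this abstract: computing the barycentric co-ordinates of an input point with respect to a triangle of knots, and evaluating a triangular Bezier-Bernstein form over those co-ordinates by de Casteljau steps. The knot triangle and control net are invented, and the paper's inverse de Casteljau procedure with backpropagation is not reproduced.

```python
# Barycentric co-ordinates plus triangular de Casteljau evaluation.
import numpy as np

def barycentric(p, a, b, c):
    """Co-ordinates (u, v, w) with p = u*a + v*b + w*c and u + v + w = 1."""
    a, b, c, p = map(np.asarray, (a, b, c, p))
    T = np.column_stack([a - c, b - c])
    u, v = np.linalg.solve(T, p - c)
    return u, v, 1.0 - u - v

def de_casteljau(coords, ctrl, degree):
    """Evaluate a triangular Bezier-Bernstein form; ctrl maps (i, j, k),
    i + j + k = degree, to control values."""
    u, v, w = coords
    for d in range(degree, 0, -1):
        ctrl = {(i, j, d - 1 - i - j): u * ctrl[(i + 1, j, d - 1 - i - j)]
                                     + v * ctrl[(i, j + 1, d - 1 - i - j)]
                                     + w * ctrl[(i, j, d - i - j)]
                for i in range(d) for j in range(d - i)}
    return ctrl[(0, 0, 0)]

tri = ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])        # knot triangle over the input space
coords = barycentric([0.25, 0.25], *tri)
net = {(2, 0, 0): 1.0, (0, 2, 0): 2.0, (0, 0, 2): 0.5,   # degree-2 control net
       (1, 1, 0): 1.5, (1, 0, 1): 0.8, (0, 1, 1): 1.2}
print(de_casteljau(coords, net, degree=2))
```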
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The construction is generalized in that it copes with n-dimensional inputs, utilising an additive decomposition to overcome the curse of dimensionality associated with high n. The algorithm also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as non-negativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and Delaunay input space partitioning, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on this additive decomposition approach together with two separate basis function formation approaches, for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least-squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
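The final weight-learning step is plain linear least squares once the basis functions are fixed. The sketch below illustrates that step with generic stand-in basis functions (Gaussian bumps rather than Bezier-Bernstein polynomials) on synthetic data.

```python
# Least-squares weight learning over a fixed set of basis functions.
import numpy as np

rng = np.random.default_rng(5)
x = rng.random((200, 1))
y = np.sin(2 * np.pi * x[:, 0]) + 0.05 * rng.standard_normal(200)

centres = np.linspace(0, 1, 9)
Phi = np.exp(-((x - centres) ** 2) / 0.02)     # stand-in basis functions
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # conventional least squares
print(np.max(np.abs(Phi @ w - y)))             # training fit residual
```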
Abstract:
This study analyzes the issue of American option valuation when the underlying exhibits a GARCH-type volatility process. We propose the use of Rubinstein's Edgeworth binomial tree (EBT), in contrast to the simulation-based methods considered in previous studies. The EBT-based valuation approach makes an implied calibration of the pricing model feasible. By empirically analyzing the pricing performance for American index and equity options, we illustrate the superiority of the proposed approach.
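For orientation, the sketch below gives the generic backward-induction skeleton for American option valuation on a binomial tree, with standard CRR parameters. Rubinstein's EBT instead adjusts the node probabilities for skewness and kurtosis (here, as implied by the GARCH dynamics); that adjustment, and the implied calibration, are not reproduced.

```python
# Standard CRR binomial tree for an American put (generic skeleton only).
import numpy as np

def american_put_crr(S0, K, r, sigma, T, steps=200):
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)           # risk-neutral up probability
    disc = np.exp(-r * dt)
    S = S0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    V = np.maximum(K - S, 0.0)                   # payoff at maturity
    for n in range(steps - 1, -1, -1):
        S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
        V = np.maximum(K - S,                    # early-exercise check
                       disc * (p * V[:-1] + (1 - p) * V[1:]))
    return V[0]

print(american_put_crr(S0=100, K=100, r=0.03, sigma=0.2, T=1.0))
```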