869 results for Rotors -- Balancing
Abstract:
Clustering is defined as the grouping of similar items in a set, and is an important process within the field of data mining. As the amount of data for various applications continues to increase, in terms of its size and dimensionality, it is necessary to have efficient clustering methods. A popular clustering algorithm is K-Means, which adopts a greedy approach to produce a set of K clusters with associated centres of mass, and uses a squared-error distortion measure to determine convergence. Methods for improving the efficiency of K-Means have largely been explored in two main directions. The amount of computation can be significantly reduced by adopting a more efficient data structure, notably a multi-dimensional binary search tree (KD-Tree), to store either centroids or data points. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance, which has so far limited their adoption in parallel computational environments. In this work, we provide a parallel formulation for the KD-Tree based K-Means algorithm and address its load-balancing issues.
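As a reference point for the sequential baseline this abstract describes (greedy assignment, centre-of-mass update, squared-error distortion for convergence), a minimal K-Means sketch in Python follows; the KD-Tree acceleration and parallel formulation are the paper's contribution and are not shown here.

```python
import random

def kmeans(points, k, max_iter=100, tol=1e-9, seed=0):
    """Plain K-Means: greedy assignment plus centroid update, with
    convergence judged by the squared-error distortion measure."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)          # initial centres from the data
    prev = float("inf")
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        distortion = 0.0
        for p in points:
            # greedy step: assign each point to its nearest centre
            d, i = min((sum((a - b) ** 2 for a, b in zip(p, c)), i)
                       for i, c in enumerate(centres))
            clusters[i].append(p)
            distortion += d
        # recompute each centre as the mean (centre of mass) of its cluster
        centres = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centres[i]
                   for i, cl in enumerate(clusters)]
        if prev - distortion < tol:          # distortion stopped improving
            break
        prev = distortion
    return centres, distortion
```

Every point is compared against every centre on each pass, which is exactly the cost the KD-Tree-based variants avoid.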
Abstract:
This paper investigates the impact of aerosol forcing uncertainty on the robustness of estimates of the twentieth-century warming attributable to anthropogenic greenhouse gas emissions. Attribution analyses on three coupled climate models with very different sensitivities and aerosol forcing are carried out. The Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3), Parallel Climate Model (PCM), and GFDL R30 models all provide good simulations of twentieth-century global mean temperature changes when they include both anthropogenic and natural forcings. Such good agreement could result from a fortuitous cancellation of errors, for example, by balancing too much (or too little) greenhouse warming by too much (or too little) aerosol cooling. Despite a very large uncertainty for estimates of the possible range of sulfate aerosol forcing obtained from measurement campaigns, results show that the spatial and temporal nature of observed twentieth-century temperature change constrains the component of past warming attributable to anthropogenic greenhouse gases to be significantly greater (at the 5% level) than the observed warming over the twentieth century. The cooling effects of aerosols are detected in all three models. Both spatial and temporal aspects of observed temperature change are responsible for constraining the relative roles of greenhouse warming and sulfate cooling over the twentieth century. This is because there are distinctive temporal structures in differential warming rates between the hemispheres, between land and ocean, and between mid- and low latitudes. As a result, consistent estimates of warming attributable to greenhouse gas emissions are obtained from all three models, and predictions are relatively robust to the use of more or less sensitive models. The transient climate response following a 1% yr^-1 increase in CO2 is estimated to lie between 2.2 and 4 K century^-1 (5th-95th percentiles).
Abstract:
Managing ecosystems to ensure the provision of multiple ecosystem services is a key challenge for applied ecology. Functional traits are receiving increasing attention as the main ecological attributes by which different organisms and biological communities influence ecosystem services through their effects on underlying ecosystem processes. Here we synthesize concepts and empirical evidence on linkages between functional traits and ecosystem services across different trophic levels. Most of the 247 studies reviewed considered plants and soil invertebrates, but quantitative trait–service associations have been documented for a range of organisms and ecosystems, illustrating the wide applicability of the trait approach. Within each trophic level, specific processes are affected by a combination of traits while particular key traits are simultaneously involved in the control of multiple processes. These multiple associations between traits and ecosystem processes can help to identify predictable trait–service clusters that depend on several trophic levels, such as clusters of traits of plants and soil organisms that underlie nutrient cycling, herbivory, and fodder and fibre production. We propose that the assessment of trait–service clusters will represent a crucial step in ecosystem service monitoring and in balancing the delivery of multiple, and sometimes conflicting, services in ecosystem management.
Abstract:
Recently, two approaches have been introduced that distribute the molecular fragment mining problem. The first applies a master/worker topology; the second, a completely distributed peer-to-peer system, solves the scalability problem caused by the bottleneck at the master node. However, in many real-world scenarios the participating computing nodes cannot communicate directly due to administrative policies such as security restrictions. Thus, potential computing power is not accessible to accelerate the mining run. To overcome this shortcoming, this work introduces a hierarchical topology of computing resources, which distributes the management over several levels and adapts to the natural structure of such multi-domain architectures. The most important aspect is the load-balancing scheme, which has been designed and optimized for the hierarchical structure. The approach allows dynamic aggregation of heterogeneous computing resources and is applied to wide-area network scenarios.
Abstract:
Structured data represented in the form of graphs arises in several fields of science, and the growing amount of available data makes distributed graph mining techniques particularly relevant. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. The problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load-balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute's HIV-screening dataset, where the approach attains close-to-linear speedup in a network of workstations.
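To illustrate what "receiver-initiated" load balancing means for an irregular search tree, here is a toy single-process simulation; the worker count, donation policy (half the frontier from the most loaded peer), and the random tree are illustrative assumptions, not the paper's actual peer-to-peer protocol.

```python
import random

def run_workers(n_workers=4, seed=1):
    """Receiver-initiated balancing sketch: an idle worker asks the most
    loaded peer for work, and the donor gives away half of its frontier.
    Tasks are tuples standing in for search-tree nodes; expanding a task
    spawns 0-3 children, giving an irregular, unpredictable workload."""
    rng = random.Random(seed)
    frontiers = [[(0,)] if w == 0 else [] for w in range(n_workers)]
    expanded = [0] * n_workers
    while any(frontiers):
        for w in range(n_workers):
            if not frontiers[w]:                      # idle: request work
                donor = max(range(n_workers), key=lambda d: len(frontiers[d]))
                half = len(frontiers[donor]) // 2     # donor keeps the rest
                frontiers[w], frontiers[donor] = (frontiers[donor][:half],
                                                  frontiers[donor][half:])
                continue
            task = frontiers[w].pop()                 # expand one node
            expanded[w] += 1
            if len(task) < 8:                         # bounded depth
                frontiers[w].extend(task + (i,)
                                    for i in range(rng.randint(0, 3)))
    return expanded
```

The receiver (not a central master) initiates every transfer, which is what lets the scheme cope with search trees whose shape cannot be predicted in advance.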
Abstract:
Recent observations from the Argo dataset of temperature and salinity profiles are used to evaluate a series of 3-year data assimilation experiments in a global ice–ocean general circulation model. The experiments are designed to evaluate a new data assimilation system whereby salinity is assimilated along isotherms, S(T). In addition, the role of a balancing salinity increment to maintain water mass properties is investigated. This balancing increment is found to effectively prevent spurious mixing in tropical regions induced by univariate temperature assimilation, allowing the correction of isotherm geometries without adversely influencing temperature–salinity relationships. In addition, the balancing increment is able to correct a fresh bias associated with a weak subtropical gyre in the North Atlantic using only temperature observations. The S(T) assimilation method is found to provide an important improvement over conventional depth level assimilation, with lower root-mean-squared forecast errors over the upper 500 m in the tropical Atlantic and Pacific Oceans. An additional set of experiments is performed whereby Argo data are withheld and used for independent evaluation. The most significant improvements from Argo assimilation are found in less well-observed regions (Indian, South Atlantic and South Pacific Oceans). When Argo salinity data are assimilated in addition to temperature, improvements to modelled temperature fields are obtained due to corrections to model density gradients and the resulting circulation. It is found that observations from the Argo array provide an invaluable tool for both correcting modelled water mass properties through data assimilation and for evaluating the assimilation methods themselves.
Abstract:
Weeds are major constraints on crop production, yet as part of the primary producers within farming systems, they may be important components of the agroecosystem. Using published literature, the role of weeds in arable systems for other above-ground trophic levels is examined. In the UK, there is evidence that weed flora have changed over the past century, with some species declining in abundance, whereas others have increased. There is also some evidence for a decline in the size of arable weed seedbanks. Some of these changes reflect improved agricultural efficiency, changes to more winter-sown crops in arable rotations and the use of more broad-spectrum herbicide combinations. Interrogation of a database of records of phytophagous insects associated with plant species in the UK reveals that many arable weed species support a high diversity of insect species. Reductions in abundances of host plants may affect associated insects and other taxa. A number of insect groups and farmland birds have shown marked population declines over the past 30 years. Correlational studies indicate that many of these declines are associated with changes in agricultural practices. Certainly, reductions in food availability in winter and for nestling birds in spring are implicated in the declines of several bird species, notably the grey partridge, Perdix perdix. Thus weeds have a role within agroecosystems in supporting biodiversity more generally. An understanding of weed competitiveness and the importance of weeds for insects and birds may allow the identification of the most important weed species. This may form the first step in balancing the needs for weed control with the requirements for biodiversity and more sustainable production methods.
Abstract:
Over the last decade, there has been increasing circumstantial evidence for the action of natural selection in the genome, arising largely from molecular genetic surveys of large numbers of markers. In nonmodel organisms without densely mapped markers, a frequently used method is to identify loci that have unusually high or low levels of genetic differentiation, or low genetic diversity relative to other populations. The paper by Makinen et al. (2008a) in this issue of Molecular Ecology reports the results of a survey of microsatellite allele frequencies at more than 100 loci in seven populations of the three-spined stickleback (Gasterosteus aculeatus). They show that a microsatellite locus and two indel markers located within the intron of the Eda gene, known to control the number of lateral plates in the stickleback (Fig. 1), tend to be much more highly genetically differentiated than other loci, a finding that is consistent with the action of local selection. They identify a further two independent candidates for local selection, and, most intriguingly, they further suggest that up to 15% of their loci may provide evidence of balancing selection.
Abstract:
The identification of signatures of natural selection in genomic surveys has become an area of intense research, stimulated by the increasing ease with which genetic markers can be typed. Loci identified as subject to selection may be functionally important, and hence (weak) candidates for involvement in disease causation. They can also be useful in determining the adaptive differentiation of populations, and exploring hypotheses about speciation. Adaptive differentiation has traditionally been identified from differences in allele frequencies among different populations, summarised by an estimate of F-ST. Low outliers relative to an appropriate neutral population-genetics model indicate loci subject to balancing selection, whereas high outliers suggest adaptive (directional) selection. However, the problem of identifying statistically significant departures from neutrality is complicated by confounding effects on the distribution of F-ST estimates, and current methods have not yet been tested in large-scale simulation experiments. Here, we simulate data from a structured population at many unlinked, diallelic loci that are predominantly neutral but with some loci subject to adaptive or balancing selection. We develop a hierarchical-Bayesian method, implemented via Markov chain Monte Carlo (MCMC), and assess its performance in distinguishing the loci simulated under selection from the neutral loci. We also compare this performance with that of a frequentist method, based on moment-based estimates of F-ST. We find that both methods can identify loci subject to adaptive selection when the selection coefficient is at least five times the migration rate. Neither method could reliably distinguish loci under balancing selection in our simulations, even when the selection coefficient is twenty times the migration rate.
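For intuition about the statistic this abstract scans for outliers, the simplest moment-style estimate of per-locus differentiation is Wright's F_ST computed from subpopulation allele frequencies. The sketch below assumes equal subpopulation sizes and is much cruder than the paper's hierarchical-Bayesian MCMC method or the moment-based estimator it compares against.

```python
def fst(freqs):
    """Wright's F_ST for one diallelic locus, from the frequency of one
    allele in each subpopulation (equal subpopulation sizes assumed):
    F_ST = (H_T - H_S) / H_T, with H = 2p(1 - p) expected heterozygosity."""
    p_bar = sum(freqs) / len(freqs)                         # pooled frequency
    h_t = 2 * p_bar * (1 - p_bar)                           # total heterozygosity
    h_s = sum(2 * p * (1 - p) for p in freqs) / len(freqs)  # mean within-subpop
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t
```

Identical frequencies across subpopulations give F_ST = 0 (no differentiation, the pattern expected under strong balancing selection), while fixed differences give F_ST = 1; in an outlier scan, loci with unusually high values relative to the genome-wide distribution are the candidates for adaptive (directional) selection.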
Abstract:
Two new antimony sulphides have been prepared solvothermally and characterised by single-crystal X-ray diffraction. [Co(en)(3)][Sb4S7] (1) was prepared at 140 °C from CoS, Sb2S3 and S in the presence of ethylenediamine, whilst heating a mixture of Sb2S3, Co and S in tris(2-aminoethyl)amine, N(CH2CH2NH2)(3), at 180 °C results in the formation of [C6H20N4][Sb4S7] (2). Both materials contain [Sb4S7](2-) chains formed from linkage of cyclic Sb3S6(3-) units by SbS3(3-) pyramids. In (1), the [Sb4S7] chains are linked by secondary Sb-S interactions to form sheets, between which the charge-balancing [Co(en)(3)](2+) cations reside. The structure of (2) involves interconnection of pairs of [Sb4S7](2-) chains through Sb2S2 rings to form isolated [Sb4S7](2-) double chains which are interleaved by protonated template molecules. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
The tides of globalization and the unsteady surges and distortions in the evolution of the European Union are causing identities and cultures to be in a state of flux. Education is used by politicians as a major lever for political and social change through micro-management, but it is a crude tool. There can, however, be opportunities within educational experience for individual learners to gain strong, reflexive, multiple identities and multiple citizenship through the engagement of their creative energies. It has been argued that the twenty-first century needs a new kind of creativity characterized by unselfishness, caring and compassion, still involving monetary wealth, but resulting in a healthy planet and healthy people. Creativity and its economically derived relation, innovation, have become 'buzz words' of our times. They are often misconstrued, misunderstood and plainly misused within educational conversations. The small-scale pan-European research study upon which this article is founded discovered that more emphasis needs to be placed on creative leadership, empowering teachers and learners, reducing pupils' fear of school, balancing teaching approaches, and ensuring that the curriculum and assessment are responsive to the needs of individual learners. These factors are key to building strong educational provision that harnesses the creative potential of learners, teachers and other stakeholders, values what it is to be human and creates a foundation upon which to build strong, morally based, consistent, participative democracies.
Abstract:
This paper develops fuzzy methods for control of the rotary inverted pendulum, an underactuated mechanical system. Two control laws are presented, one for swing-up and another for stabilization. The pendulum is swung up from the stable downward position to the unstable upright position in a controlled trajectory. The rules for the swing-up are heuristically written such that each swing results in greater energy build-up. Stabilization is achieved by mapping a stabilizing LQR control law to two fuzzy inference engines, which reduces the computational load compared with using a single fuzzy inference engine. The robustness of the balancing control is tested by attaching a bottle of water at the tip of the pendulum.
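The energy build-up idea behind such swing-up rules can be illustrated with a generic Astrom-style energy-pumping law for a torque-limited simple pendulum (theta = 0 upright). This is an assumed stand-in for intuition only, not the paper's fuzzy rule base or its rotary (Furuta) pendulum dynamics.

```python
import math

def swing_up_torque(theta, theta_dot, m=1.0, l=1.0, g=9.81, k=1.0, u_max=2.0):
    """Energy-pumping swing-up for a torque-limited simple pendulum with
    theta = 0 at the upright position (illustrative, not the paper's law)."""
    # total energy, chosen to be zero at the upright equilibrium
    energy = 0.5 * m * l**2 * theta_dot**2 + m * g * l * (math.cos(theta) - 1.0)
    # push energy toward zero: dE/dt = u * theta_dot >= 0 while energy < 0,
    # so each swing builds up energy, as in the abstract's heuristic rules
    u = k * (0.0 - energy) * (1.0 if theta_dot >= 0.0 else -1.0)
    return max(-u_max, min(u_max, u))

def simulate(t_end=20.0, dt=1e-3, m=1.0, l=1.0, g=9.81):
    """Semi-implicit Euler roll-out from near the hanging position;
    returns the highest cos(theta) reached (1.0 would be exactly upright)."""
    theta, theta_dot = math.pi - 0.1, 0.0
    best = math.cos(theta)
    for _ in range(int(t_end / dt)):
        u = swing_up_torque(theta, theta_dot, m=m, l=l, g=g)
        theta_dot += dt * ((g / l) * math.sin(theta) + u / (m * l**2))
        theta += dt * theta_dot
        best = max(best, math.cos(theta))
    return best
```

With the torque capped well below m·g·l the controller cannot lift the pendulum directly and must rock it up over several swings; near the top a separate stabilizing law (LQR in the paper) would take over.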
Abstract:
This paper presents a parallelized Two-Pass Hexagonal (TPA) algorithm constituted by the Linear Hashtable Motion Estimation Algorithm (LHMEA) and Hexagonal Search (HEXBS) for motion estimation. In the TPA, Motion Vectors (MVs) are generated from the first-pass LHMEA and are used as predictors for the second-pass HEXBS motion estimation, which only searches a small number of Macroblocks (MBs). We introduce hashtables into video processing and complete a parallel implementation. We propose and evaluate parallel implementations of the LHMEA of TPA on clusters of workstations for real-time video compression, and discuss how parallel video coding on load-balanced multiprocessor systems can help, especially with motion estimation. The effect of load balancing on performance is discussed. The performance of the algorithm is evaluated using standard video sequences and the results are compared to current algorithms.