16 results for experimental analysis

in CentAUR: Central Archive University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

The relationship between repeated body checking and its impact on body size estimation and body dissatisfaction is of interest for two reasons. First, it has importance in theoretical accounts of the maintenance of eating disorders and, second, body checking is targeted in cognitive-behavioural treatment. The aim of this study was to determine the impact of manipulating body checking on body size estimation and body dissatisfaction. Sixty women were randomly assigned either to repeatedly scrutinize their bodies in a critical way in the mirror ("high body checking") or to refrain from body checking but to examine the whole of their bodies in a neutral way ("low body checking"). Body dissatisfaction, feelings of fatness and the strength of a particular self-critical thought increased immediately after the manipulation among those in the high body checking condition. Feelings of fatness decreased among those in the low body checking condition. These changes were short-lived. The manipulation did not affect estimations of body size or the discrepancy between estimations of body size and desired body size. The implications of these findings for understanding the influence of body checking on the maintenance of body dissatisfaction are considered. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

The classic Reynolds flocking model is formally analysed, with results presented and discussed. Flocking behaviour was investigated through the development of two measurements of flocking, flock area and polarisation, with a view to applying the findings to robotic applications. Experiments varying the flocking simulation parameters individually and simultaneously provide new insight into the control of flock behaviour.
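
As an illustration of the two measurements named in this abstract, the following sketch computes flock polarisation (mean alignment of the agents' headings) and a simple flock area estimate. The bounding-box area and all variable names are assumptions, since the abstract does not give the exact definitions.

import numpy as np

def polarisation(velocities):
    # Mean of the agents' unit heading vectors; the magnitude is 1 for a
    # perfectly aligned flock and close to 0 for random headings.
    headings = velocities / np.linalg.norm(velocities, axis=1, keepdims=True)
    return np.linalg.norm(headings.mean(axis=0))

def flock_area(positions):
    # Simple bounding-box estimate of the area occupied by the flock
    # (assumed here; the abstract does not specify the area definition).
    mins, maxs = positions.min(axis=0), positions.max(axis=0)
    return np.prod(maxs - mins)

# Example: 50 agents with random 2-D positions and velocities.
rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, size=(50, 2))
vel = rng.normal(size=(50, 2))
print(polarisation(vel), flock_area(pos))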

Relevance:

100.00%

Publisher:

Abstract:

We define and experimentally test a public provision mechanism that meets three basic ethical requirements and allows community members to influence, via monetary bids, which of several projects is implemented. For each project, participants are assigned personal values, which can be positive or negative. We provide either public or private information about personal values. This produces two distinct public provision games, which are experimentally implemented and analyzed for various projects. In spite of the complex experimental task, participants do not rely on bidding their own personal values as an obvious simple heuristic whose general acceptance would result in fair and efficient outcomes. Rather, they rely on strategic underbidding. Although underbidding is affected by projects’ characteristics, the provision mechanism mostly leads to the implementation of the most efficient project.

Relevance:

60.00%

Publisher:

Abstract:

Visual exploration of scientific data in the life sciences is a growing research field due to the large amount of available data. Kohonen's Self-Organizing Map (SOM) is a widely used tool for visualization of multidimensional data. In this paper we present a fast learning algorithm for SOMs that uses a simulated annealing method to adapt the learning parameters. The algorithm has been adopted in a data analysis framework for the generation of similarity maps. Such maps provide an effective tool for the visual exploration of large and multi-dimensional input spaces. The approach has been applied to data generated during the High Throughput Screening of molecular compounds; the generated maps allow a visual exploration of molecules with similar topological properties. The experimental analysis on real world data from the National Cancer Institute shows the speed-up of the proposed SOM training process in comparison to a traditional approach. The resulting visual landscape groups molecules with similar chemical properties in densely connected regions.
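
The abstract describes a SOM whose learning parameters are adapted by simulated annealing, but does not give the adaptation rule. The sketch below therefore shows only a standard SOM training loop with an exponentially annealed learning rate and neighbourhood radius; the schedule and all parameter names are assumptions, not the paper's algorithm.

import numpy as np

def train_som(data, grid=(10, 10), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.uniform(data.min(), data.max(), size=grid + (data.shape[1],))
    coords = np.stack(np.meshgrid(*map(np.arange, grid), indexing='ij'), axis=-1)
    for epoch in range(epochs):
        # Annealed (exponentially decaying) learning rate and radius;
        # the paper adapts these via simulated annealing instead.
        lr = lr0 * np.exp(-epoch / epochs)
        sigma = sigma0 * np.exp(-epoch / epochs)
        for x in rng.permutation(data):
            # Best matching unit (BMU) on the map.
            bmu = np.unravel_index(
                np.argmin(((weights - x) ** 2).sum(axis=-1)), grid)
            # Gaussian neighbourhood around the BMU on the grid.
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights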

Relevance:

60.00%

Publisher:

Abstract:

In this work a new method for clustering and building a topographic representation of a bacteria taxonomy is presented. The method is based on the analysis of stable parts of the genome, the so-called “housekeeping genes”. The proposed method generates topographic maps of the bacteria taxonomy, where relations among different type strains can be visually inspected and verified. Two well-known DNA alignment algorithms are applied to the genomic sequences. Topographic maps are optimized to represent the similarity among the sequences according to their evolutionary distances. The experimental analysis is carried out on 147 type strains of the Gammaproteobacteria class by means of the 16S rRNA housekeeping gene. Complete sequences of the gene have been retrieved from the NCBI public database. In the experimental tests the maps show clusters of homologous type strains and present some singular cases potentially due to incorrect classification or erroneous annotations in the database.
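
The pipeline summarised above aligns 16S rRNA sequences and projects their evolutionary distances onto a topographic map. As a hedged sketch of one ingredient only, the code below computes a plain pairwise p-distance matrix from already-aligned sequences; the dedicated alignment algorithms and evolutionary distance corrections used in the paper are not reproduced here, and the toy sequences are not real 16S data.

import numpy as np

def p_distance_matrix(aligned_seqs):
    # Fraction of mismatching positions between each pair of aligned
    # sequences, ignoring alignment gaps ('-') in either sequence.
    n = len(aligned_seqs)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            pairs = [(a, b) for a, b in zip(aligned_seqs[i], aligned_seqs[j])
                     if a != '-' and b != '-']
            mismatches = sum(a != b for a, b in pairs)
            dist[i, j] = dist[j, i] = mismatches / len(pairs) if pairs else 0.0
    return dist

# Toy example with three short aligned fragments.
print(p_distance_matrix(["ACGT-ACGT", "ACGTTACGT", "ACGA-ACGA"]))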

Relevance:

60.00%

Publisher:

Abstract:

A new heuristic for the Steiner Minimal Tree problem is presented here. The method described is based on the detection of particular sets of nodes in networks, the “Hot Spot” sets, which are used to obtain better approximations of the optimal solutions. An algorithm is also proposed which is capable of improving the solutions obtained by classical heuristics, by means of a stirring process of the nodes in solution trees. Classical heuristics and an enumerative method are used as comparison terms in the experimental analysis, which demonstrates the effectiveness of the heuristic discussed in this paper.
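
The abstract contrasts the new “hot spot” heuristic with classical heuristics but does not describe the hot-spot detection itself. The sketch below implements only one such classical baseline, a shortest-path (Takahashi-Matsuyama style) heuristic that grows a tree by repeatedly attaching the nearest remaining terminal via a shortest path; the graph representation and function names are assumptions, and a connected graph is assumed.

import heapq

def dijkstra(graph, sources):
    # graph: {node: {neighbour: weight}}; returns distances and predecessors
    # from the nearest node in 'sources' (multi-source Dijkstra).
    dist = {s: 0 for s in sources}
    prev = {}
    heap = [(0, s) for s in sources]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def shortest_path_steiner(graph, terminals):
    # Classical baseline: repeatedly connect the closest remaining terminal
    # to the current tree via a shortest path.
    tree_nodes, edges = {terminals[0]}, set()
    remaining = set(terminals[1:])
    while remaining:
        dist, prev = dijkstra(graph, tree_nodes)
        t = min(remaining, key=lambda x: dist.get(x, float('inf')))
        node = t
        while node not in tree_nodes:          # walk the path back to the tree
            edges.add((prev[node], node))
            tree_nodes.add(node)
            node = prev[node]
        remaining.discard(t)
    return edges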

Relevance:

60.00%

Publisher:

Abstract:

A new heuristic for the Steiner minimal tree problem is presented. The method described is based on the detection of particular sets of nodes in networks, the “hot spot” sets, which are used to obtain better approximations of the optimal solutions. An algorithm is also proposed which is capable of improving the solutions obtained by classical heuristics, by means of a stirring process of the nodes in solution trees. Classical heuristics and an enumerative method are used as comparison terms in the experimental analysis, which demonstrates the capability of the heuristic discussed.

Relevance:

60.00%

Publisher:

Abstract:

K-Means is a popular clustering algorithm which adopts an iterative refinement procedure to determine data partitions and to compute their associated centres of mass, called centroids. The straightforward implementation of the algorithm is often referred to as 'brute force' since it computes a proximity measure from each data point to each centroid at every iteration of the K-Means process. Efficient implementations of the K-Means algorithm have been predominantly based on multi-dimensional binary search trees (KD-Trees). A combination of an efficient data structure and geometrical constraints makes it possible to reduce the number of distance computations required at each iteration. In this work we present a general space partitioning approach for improving the efficiency and the scalability of the K-Means algorithm. We propose to adopt approximate hierarchical clustering methods to generate binary space partitioning trees, in contrast to KD-Trees. In the experimental analysis, we have tested the performance of the proposed Binary Space Partitioning K-Means (BSP-KM) when a divisive clustering algorithm is used. We have carried out extensive experimental tests to compare the proposed approach to the one based on KD-Trees (KD-KM) in a wide range of the parameter space. BSP-KM is more scalable than KD-KM, while keeping the deterministic nature of the 'brute force' algorithm. In particular, the proposed space partitioning approach has been shown to overcome the well-known limitation of KD-Trees in high-dimensional spaces and can also be adopted to improve the efficiency of other algorithms in which KD-Trees have been used.
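
The tree-based variants discussed above all aim to avoid the point-to-centroid distance computations that dominate the standard 'brute force' iteration. The sketch below shows only that baseline iteration for reference; the BSP tree construction via divisive clustering described in the paper is not reproduced here, and the function names are assumptions.

import numpy as np

def kmeans_brute_force(data, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # 'Brute force': distance from every point to every centroid.
        d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new_centroids = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels

# A KD-Tree or BSP-tree variant would replace the full distance matrix above
# with geometric pruning of candidate centroids for whole groups of points.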

Relevance:

60.00%

Publisher:

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against the state of the art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.

Relevance:

60.00%

Publisher:

Abstract:

Gossip (or Epidemic) protocols have emerged as a communication and computation paradigm for large-scale networked systems. These protocols are based on randomised communication, which provides probabilistic guarantees on convergence speed and accuracy. They also provide robustness, scalability, computational and communication efficiency and high stability under disruption. This work presents a novel Gossip protocol named Symmetric Push-Sum Protocol for the computation of global aggregates (e.g., average) in decentralised and asynchronous systems. The proposed approach combines the simplicity of the push-based approach and the efficiency of the push-pull schemes. The push-pull schemes cannot be directly employed in asynchronous systems as they require synchronous paired communication operations to guarantee their accuracy. Although push schemes guarantee accuracy even with asynchronous communication, they suffer from a slower and unstable convergence. Symmetric Push-Sum Protocol does not require synchronous communication and achieves a convergence speed similar to the push-pull schemes, while keeping the accuracy stability of the push scheme. In the experimental analysis, we focus on computing the global average as an important class of node aggregation problems. The results have confirmed that the proposed method inherits the advantages of both other schemes and outperforms well-known state of the art protocols for decentralised Gossip-based aggregation.
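
For context, the plain push-sum scheme that the protocol builds on maintains a (sum, weight) pair per node and estimates the global average as their ratio. The sketch below simulates synchronous push-sum rounds for the averaging problem; it does not implement the symmetric, asynchronous message exchange that is the paper's contribution, and all names are assumptions.

import random

def push_sum_average(values, rounds=50, seed=0):
    rng = random.Random(seed)
    n = len(values)
    s = list(values)          # running sums
    w = [1.0] * n             # running weights
    for _ in range(rounds):
        new_s, new_w = [0.0] * n, [0.0] * n
        for i in range(n):
            # Keep half of the (sum, weight) pair locally and push the
            # other half to a uniformly random peer.
            target = rng.randrange(n)
            for j in (i, target):
                new_s[j] += s[i] / 2
                new_w[j] += w[i] / 2
        s, w = new_s, new_w
    # Every node's ratio s/w converges to the global average.
    return [si / wi for si, wi in zip(s, w)]

print(push_sum_average([1.0, 2.0, 3.0, 10.0])[:2])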

Relevance:

60.00%

Publisher:

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. The lack of scalable and fault tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against the state of the art sampling methods and shows that the proposed method overcomes the limitations of the sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
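
The core idea described in the abstract can be pictured as replacing the global reduction step of parallel K-Means with a gossip-based average of each node's local cluster statistics. The sketch below shows one K-Means iteration under that assumption, reusing a gossip averaging routine such as the push-sum sketch above; it illustrates the principle only and is not the Epidemic K-Means protocol itself, and all names are assumptions.

import numpy as np

def local_stats(points, centroids):
    # Per-cluster sums and counts computed from this node's local data only.
    labels = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(2).argmin(1)
    k, d = centroids.shape
    sums = np.zeros((k, d))
    counts = np.zeros(k)
    for j in range(k):
        sums[j] = points[labels == j].sum(axis=0)
        counts[j] = (labels == j).sum()
    return sums, counts

def gossip_kmeans_step(node_data, centroids, gossip_average):
    # Each node computes local sums/counts; a gossip protocol (e.g. push-sum)
    # then estimates their network-wide averages without global communication.
    stats = [local_stats(p, centroids) for p in node_data]
    avg_sums = gossip_average([s for s, _ in stats])
    avg_counts = gossip_average([c for _, c in stats])
    # Average of sums / average of counts equals global sum / global count.
    return avg_sums / np.maximum(avg_counts[:, None], 1e-12)

def exact_average(xs):
    # Crude centralised stand-in for the gossip layer, for testing only.
    return sum(xs) / len(xs)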

Relevance:

60.00%

Publisher:

Abstract:

Power delivery for biomedical implants is a major consideration in their design for both measurement and stimulation. When performed by a wireless technique, transmission efficiency is critically important not only because of the costs associated with any losses but also because of the nature of those losses, for example, excessive heat can be uncomfortable for the individual involved. In this study, a method and means of wireless power transmission suitable for biomedical implants are both discussed and experimentally evaluated. The procedure initiated is comparable in size and simplicity to those methods already employed; however, some of Tesla’s fundamental ideas have been incorporated in order to obtain a significant improvement in efficiency. This study contains a theoretical basis for the approach taken; however, the emphasis here is on practical experimental analysis.
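
The abstract emphasises transmission efficiency but gives no formulas. As a hedged reference point only, a commonly used figure of merit for a two-coil resonant inductive link (not necessarily the expression used in this study) bounds the achievable link efficiency in terms of the coupling coefficient k and the coil quality factors Q_1 and Q_2:

\eta_{\max} = \frac{k^{2} Q_{1} Q_{2}}{\left(1 + \sqrt{1 + k^{2} Q_{1} Q_{2}}\right)^{2}}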

Relevance:

60.00%

Publisher:

Abstract:

Power delivery for biomedical implants is a major consideration in their design for both measurement and stimulation. When performed by a wireless technique, transmission efficiency is critically important not only because of the costs associated with any losses but also because of the nature of those losses, for example, excessive heat can be uncomfortable for the individual involved. In this study, a method and means of wireless power transmission suitable for biomedical implants are both discussed and experimentally evaluated. The procedure initiated is comparable in size and simplicity to those methods already employed; however, some of Tesla’s fundamental ideas have been incorporated in order to obtain a significant improvement in efficiency. This study contains a theoretical basis for the approach taken; however, the emphasis here is on practical experimental analysis.

Relevance:

40.00%

Publisher:

Abstract:

Despite a number of earlier studies which seemed to confirm molecular adsorption of water on close-packed surfaces of late transition metals, new controversy has arisen over a recent theoretical work by Feibelman, according to which partial dissociation occurs on the Ru{0001} surface leading to a mixed (H2O + OH + H) superstructure. Here, we present a refined LEED-IV analysis of the (√3 × √3)R30°-D2O-Ru{0001} structure, testing explicitly this new model by Feibelman. Our results favour the model proposed earlier by Held and Menzel, assuming intact water molecules with almost coplanar oxygen atoms and out-of-plane hydrogen atoms atop the slightly higher oxygen atoms. The partially dissociated model with an almost identical arrangement of oxygen atoms cannot, however, be unambiguously excluded, especially when the single hydrogen atoms are not present in the surface unit cell. In contrast to the earlier LEED-IV analysis, we can, however, clearly exclude a buckled geometry of oxygen atoms.