913 results for Nearest Neighbour


Relevance:

60.00%

Publisher:

Abstract:

The length and time scales accessible to optical tweezers make them an ideal tool for the examination of colloidal systems. Embedded high-refractive-index tracer particles in an index-matched hard-sphere suspension provide 'handles' within the system to investigate the mechanical behaviour. Passive observations of the motion of a single probe particle give information about the linear response behaviour of the system, which can be linked to the macroscopic frequency-dependent viscous and elastic moduli of the suspension. Separate 'dragging' experiments allow observation of a sample's nonlinear response to an applied stress on a particle-by-particle basis. Optical force measurements have given new data about the dynamics of phase transitions and particle interactions; an example in this study is the transition from liquid-like to solid-like behaviour, and the emergence of a yield stress and other effects attributable to nearest-neighbour caging. The forces needed to break such cages and the frequency of these cage-breaking events are investigated in detail for systems close to the glass transition.
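The passive-observation step starts from a recorded probe trajectory. As a minimal sketch (using simulated trajectories, not data from the study), the mean-squared displacement distinguishes a freely diffusing probe from one confined by a nearest-neighbour cage:

```python
import numpy as np

def mean_squared_displacement(x, max_lag):
    """MSD of a 1-D probe trajectory for lag times 1..max_lag."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(0)

# Freely diffusing probe (liquid-like): MSD grows linearly with lag time.
free = np.cumsum(rng.normal(0.0, 1.0, 10_000))
msd_free = mean_squared_displacement(free, 100)

# Harmonically 'caged' probe (solid-like): MSD plateaus at the cage size.
caged = np.zeros(10_000)
for i in range(1, 10_000):
    caged[i] = 0.9 * caged[i - 1] + rng.normal(0.0, 1.0)
msd_caged = mean_squared_displacement(caged, 100)
```

In microrheology the frequency-dependent moduli are then extracted from such MSD curves; the plateau of the caged trajectory is the signature of solid-like response.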

Relevance:

60.00%

Publisher:

Abstract:

Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to the landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated to its topological locality. A popular dimensionality-reduction technique is the space-filling Hilbert curve, as it possesses good locality-preserving properties. However, little comparison exists between Hilbert's curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated: Hilbert's curve, Sammon's mapping and Principal Component Analysis have been used to generate a 1-d space with locality-preserving properties. This work provides empirical evidence to support the use of Hilbert's curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2-d network model and with a realistic network topology model with the typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms Hilbert's curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques to preserve locality information are required.
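As a rough sketch of the landmark-vector-to-identifier step, the code below implements the standard 2-d Hilbert index and quantizes a two-landmark latency vector onto the curve. The grid order and the 200 ms latency cap are illustrative assumptions, not values from the paper:

```python
def hilbert_index(order, x, y):
    """Map a point (x, y) on a 2**order x 2**order grid to its 1-D index
    along the Hilbert space-filling curve."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

def peer_id(latencies, order=8, lat_max=200.0):
    """Quantize a 2-landmark latency vector (ms) onto the grid and return
    its Hilbert index as a scalar peer identifier (illustrative parameters)."""
    n = 2 ** order
    x, y = (min(int(v / lat_max * n), n - 1) for v in latencies)
    return hilbert_index(order, x, y)
```

Nodes with similar latency vectors fall in nearby grid cells, and the curve's locality-preserving property tends to keep their scalar identifiers close.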

Relevance:

60.00%

Publisher:

Abstract:

We compare a number of models of post-war US output growth in terms of the degree and pattern of non-linearity they impart to the conditional mean, where we condition on either the previous period's growth rate or the previous two periods' growth rates. The conditional means are estimated non-parametrically using a nearest-neighbour technique on data simulated from the models. In this way, we condense the complex, dynamic responses that may be present into graphical displays of the implied conditional mean.
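A minimal sketch of the nearest-neighbour conditional-mean estimator on simulated data (the AR(1) "growth rate" series and the choice of k are illustrative, not the paper's models):

```python
import numpy as np

def knn_conditional_mean(cond, resp, grid, k):
    """Nearest-neighbour estimate of E[resp | cond = g] at each point g of grid."""
    cond, resp = np.asarray(cond), np.asarray(resp)
    return np.array([resp[np.argsort(np.abs(cond - g))[:k]].mean()
                     for g in grid])

# Simulated AR(1) 'growth rate' series; the estimator should recover the
# (here linear) conditional mean E[y_t | y_{t-1}] = 0.5 * y_{t-1}.
rng = np.random.default_rng(1)
y = np.zeros(2_000)
for t in range(1, 2_000):
    y[t] = 0.5 * y[t - 1] + rng.normal(0.0, 0.1)

grid = np.linspace(-0.2, 0.2, 5)
cond_mean = knn_conditional_mean(y[:-1], y[1:], grid, k=50)
```

Plotting `cond_mean` against `grid` gives the kind of graphical display of the implied conditional mean the abstract describes; non-linear models would produce a visibly curved profile.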

Relevance:

60.00%

Publisher:

Abstract:

Information on the breeding biology of the White-headed Vulture Trigonoceps occipitalis is limited and published data are few. Within the Kruger National Park in north-east South Africa there is a regionally important population of about 60 White-headed Vulture pairs, of which 22 pairs were monitored for five years between 2008 and 2012 to determine key aspects of their breeding biology. Across 73 pair-years the mean productivity of 55 breeding attempts was 0.69 chicks per pair. Median egg-laying date across all of the Kruger National Park was 27 June, but northern nests were approximately 30 d later than southern nests. Mean (SD) nearest-neighbour distance was 9 976 ± 7 965 m and inter-nest distances ranged from 1 400 m to more than 20 km, but neither nearest-neighbour distance nor breeding productivity differed significantly between habitat types. The results presented here are the first for this species in Kruger National Park and provide details against which future comparisons can be made.
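Nearest-neighbour distances of the kind reported above can be computed directly from nest coordinates; a small sketch with hypothetical coordinates (illustration only, not the Kruger data):

```python
import math

def nearest_neighbour_distances(points):
    """Distance from each point to its closest other point."""
    return [min(math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(points) if j != i)
            for i, (xi, yi) in enumerate(points)]

# Hypothetical nest coordinates in metres (illustration only).
nests = [(0.0, 0.0), (3.0, 4.0), (10.0, 0.0)]
nn = nearest_neighbour_distances(nests)
mean_nn = sum(nn) / len(nn)
```

The mean and standard deviation of such per-nest distances are exactly the summary statistics quoted in the abstract.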

Relevance:

60.00%

Publisher:

Abstract:

The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow-water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and in some cases floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and the core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
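A toy version of the benchmark-driven approach, with made-up benchmark timings standing in for real Cray XE6 measurements, shows the interpolation step:

```python
import numpy as np

# Hypothetical benchmark results: runtime (s) of the array-update kernel
# for several per-core problem sizes (stand-ins for real measurements).
bench_sizes = np.array([64, 128, 256, 512, 1024])
bench_times = np.array([0.01, 0.04, 0.17, 0.70, 2.90])

def predicted_compute_time(local_size):
    """Interpolate the compute time for a per-core size not benchmarked."""
    return float(np.interp(local_size, bench_sizes, bench_times))

def predicted_total_time(global_size, n_cores, halo_time):
    """Crude model: interpolated compute cost plus a fixed halo-exchange cost."""
    return predicted_compute_time(global_size / n_cores) + halo_time
```

Evaluating `predicted_total_time` over candidate decompositions ranks deployment scenarios without running them all, which is the point of the model.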

Relevance:

60.00%

Publisher:

Abstract:

An important application of Big Data analytics is the real-time analysis of streaming data. Streaming data imposes unique challenges on data mining algorithms: concept drifts, the need to analyse the data on the fly owing to unbounded data streams, and the need for scalable algorithms owing to the potentially high throughput of data. Fast real-time classification algorithms that are adaptive to concept drift exist; however, most approaches are not naturally parallel and are thus limited in their scalability. This paper presents work on the Micro-Cluster Nearest Neighbour (MC-NN) classifier. MC-NN is based on an adaptive statistical summary of the data stream built from Micro-Clusters. MC-NN is very fast and adaptive to concept drift whilst maintaining the parallel properties of the base KNN classifier. MC-NN is also competitive with existing data stream classifiers in terms of accuracy and speed.
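A heavily simplified sketch of the micro-cluster idea: each cluster keeps an incremental centroid plus an error counter, classification is by the nearest centroid, and a misclassification spawns a new cluster. The spawn rule here is a crude stand-in for MC-NN's actual split/adaptation mechanism:

```python
import numpy as np

class MicroCluster:
    """Statistical summary: a running sum and count give an incremental centroid."""
    def __init__(self, x, label):
        self.sum = np.array(x, dtype=float)
        self.n = 1
        self.label = label
        self.errors = 0

    @property
    def centroid(self):
        return self.sum / self.n

class MCNN:
    """Simplified sketch of a micro-cluster nearest-neighbour stream classifier."""
    def __init__(self):
        self.clusters = []

    def _nearest(self, x):
        return min(self.clusters, key=lambda c: np.linalg.norm(c.centroid - x))

    def predict(self, x):
        return self._nearest(x).label

    def learn(self, x, label):
        x = np.asarray(x, dtype=float)
        if not self.clusters:
            self.clusters.append(MicroCluster(x, label))
            return
        nearest = self._nearest(x)
        if nearest.label == label:
            nearest.sum += x          # absorb the example into the summary
            nearest.n += 1
        else:
            nearest.errors += 1       # track drift / misclassification
            self.clusters.append(MicroCluster(x, label))

# Toy stream: two well-separated classes.
clf = MCNN()
for x, y in [([0, 0], 0), ([0.1, 0], 0), ([5, 5], 1), ([5, 5.1], 1)]:
    clf.learn(x, y)
```

Because each example touches only compact per-cluster summaries, the update is constant-time per instance, which is what makes the approach amenable to parallel, high-throughput streams.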

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we study the variations of group galaxy properties according to the assembly history in Sloan Digital Sky Survey Data Release 6 (SDSS-DR6) selected groups. Using mock SDSS group catalogues, we find two suitable indicators of group formation time: (i) the isolation of the group, defined as the distance to the nearest neighbour in terms of its virial radius, and (ii) the concentration, measured as the group's inner density calculated using the fifth nearest bright galaxy to the group's centre. Groups within narrow ranges of mass in the mock catalogue show increasing group age with isolation and concentration. However, in the observational data the stellar age, as indicated by the spectral type, only shows a correlation with concentration. We study groups of similar mass and different assembly history, finding important differences in their galaxy population. Particularly, in high-mass SDSS groups, the number of members, mass-to-light ratios, red galaxy fractions and the magnitude difference between the brightest and second-brightest group galaxies show different trends as a function of isolation and concentration, even when it is expected that the latter two quantities correlate with group age. Conversely, low-mass SDSS groups appear to be less sensitive to their assembly history. The correlations detected in the SDSS are not consistent with the trends measured in the mock catalogues. However, discrepancies can be explained in terms of the disagreement found in the age-isolation trends, suggesting that the model might be overestimating the effects of environment. We discuss how the modelling of the cold gas in satellite galaxies could be responsible for this problem. These results can be used to improve our understanding of the evolution of galaxies in high-density environments.

Relevance:

60.00%

Publisher:

Abstract:

In this paper we propose a scheme for quasi-perfect state transfer in a network of dissipative harmonic oscillators. We consider ideal sender and receiver oscillators connected by a chain of nonideal transmitter oscillators coupled by nearest-neighbour resonances. From the algebraic properties of the dynamical quantities describing the evolution of the network state, we derive a criterion, fixing the coupling strengths between all the oscillators, apart from their natural frequencies, enabling perfect state transfer in the particular case of ideal transmitter oscillators. Our criterion provides an easily manipulated formula enabling perfect state transfer in the special case where the network nonidealities are disregarded. We also extend such a criterion to dissipative networks where the fidelity of the transferred state decreases due to the loss mechanisms. To circumvent almost completely the adverse effect of decoherence, we propose a protocol to achieve quasi-perfect state transfer in nonideal networks. By adjusting the common frequency of the sender and the receiver oscillators to be out of resonance with that of the transmitters, we demonstrate that the sender's state tunnels to the receiver oscillator by virtually exciting the nonideal transmitter chain. This virtual process makes negligible the decay rate associated with the transmitter line at the expense of delaying the time interval for the state transfer process. Apart from our analytical results, numerical computations are presented to illustrate our protocol.
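In the lossless, single-excitation limit, nearest-neighbour chain dynamics of this kind can be checked numerically. The sketch below simulates a uniform three-oscillator chain (an illustration, not the paper's general criterion or its dissipative protocol) and verifies perfect transfer from sender to receiver:

```python
import numpy as np

def evolve(H, psi0, t):
    """Evolve |psi0> under Hamiltonian H for time t (hbar = 1),
    via the eigendecomposition of H."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))

# Single-excitation sector of a three-oscillator chain with equal natural
# frequencies and uniform nearest-neighbour coupling g; losses are neglected.
g = 1.0
H = g * np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])

psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)   # excitation on the sender
t_transfer = np.pi / (np.sqrt(2.0) * g)           # perfect-transfer time
psi = evolve(H, psi0, t_transfer)
fidelity = abs(psi[2]) ** 2                       # population on the receiver
```

For this uniform chain the eigenvalues are 0 and ±√2·g, so the sender amplitude reappears entirely on the receiver at t = π/(√2·g); longer or lossy chains are where the paper's coupling criterion becomes necessary.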

Relevance:

60.00%

Publisher:

Abstract:

This study analyses the effects of firm relocation on firm profits, using longitudinal data on Swedish limited liability firms and employing a difference-in-difference propensity score method in the empirical analysis. Using propensity score matching, the pre-relocation differences between relocating and non-relocating firms are balanced. In addition, a difference-in-difference estimator is employed in order to control for all time-invariant unobserved heterogeneity among firms. For matching, nearest-neighbour matching using the one, two and three nearest neighbours is employed. The balancing results indicate that matching achieves a good balance and that similar relocating and non-relocating firms are being compared. The estimated average treatment effects on the treated indicate that relocation has a significant effect on the profits of the relocating firms. In other words, firms that relocate increase their profits significantly, in comparison to what the profits would have been had the firms not relocated. This effect is estimated to vary between 3 and 11 percentage points, depending on the length of the analysed period after relocation.
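A minimal sketch of one-to-k nearest-neighbour matching on propensity scores and the resulting average treatment effect on the treated (toy numbers, not the Swedish firm data; the difference-in-difference step would apply the same estimator to pre/post outcome changes):

```python
def nearest_neighbour_match(treated_scores, control_scores, k=1):
    """For each treated unit, the indices of the k controls with the
    closest propensity scores (matching with replacement)."""
    matches = []
    for ps in treated_scores:
        ranked = sorted(range(len(control_scores)),
                        key=lambda j: abs(control_scores[j] - ps))
        matches.append(ranked[:k])
    return matches

def att_estimate(y_treated, y_control, matches):
    """ATT: each treated outcome minus the mean outcome of its matched
    controls, averaged over treated units."""
    gaps = [y_t - sum(y_control[j] for j in m) / len(m)
            for y_t, m in zip(y_treated, matches)]
    return sum(gaps) / len(gaps)

matches = nearest_neighbour_match([0.8, 0.3], [0.1, 0.35, 0.75])
att = att_estimate([10.0, 8.0], [4.0, 6.0, 7.0], matches)
```

Varying `k` between 1 and 3 mirrors the one-, two- and three-nearest-neighbour specifications used in the study.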

Relevance:

60.00%

Publisher:

Abstract:

Data mining can be used in the healthcare industry to "mine" clinical data and discover hidden information for intelligent and effective decision making. Hidden patterns and relationships often go undiscovered, yet advanced data mining techniques can remedy this. This thesis mainly deals with Intelligent Prediction of Chronic Renal Disease (IPCRD). The data cover blood tests, urine tests and external symptoms used to predict chronic renal disease. Data from the database are initially transferred to Weka (3.6), and the Chi-Square method is used for feature selection. After normalizing the data, three classifiers were applied and the efficiency of the output evaluated: Decision Tree, Naïve Bayes and the K-Nearest Neighbour algorithm. Results show that each technique has its unique strength in realizing the objectives of the defined mining goals. The efficiency of the Decision Tree and KNN was almost the same, but Naïve Bayes showed a comparative edge over the others. Further, sensitivity and specificity tests are used as statistical measures to examine the performance of the binary classification. Sensitivity (also called recall in some fields) measures the proportion of actual positives which are correctly identified, while specificity measures the proportion of negatives which are correctly identified. The CRISP-DM methodology is applied to build the mining models. It consists of six major phases: business understanding, data understanding, data preparation, modelling, evaluation and deployment.
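Sensitivity and specificity as used above reduce to simple counts over the confusion matrix; a small self-contained sketch:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (recall on the positive class) and specificity
    (recall on the negative class) for binary labels in {0, 1}."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Toy predictions: 3 actual positives, 2 actual negatives.
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

For a screening task such as renal-disease prediction, high sensitivity (few missed patients) is usually weighted more heavily than specificity.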

Relevance:

60.00%

Publisher:

Abstract:

The flowering, sex ratio, and spatial distribution of four dioecious species of Trichilia (Meliaceae) were studied in a semi-deciduous forest in southeastern Brazil. All reproductive trees (T. clausseni, T. pallida and T. catigua) with dbh greater than or equal to 5 cm within a 1-ha plot were collected, sexed, mapped and, for individuals of each species, the distances to the nearest neighbour of the same and opposite sex were measured. For the shrub species T. elegans (dbh < 5 cm), all reproductive individuals were sampled randomly in 10 samples of 10 x 10 m. The reproductive phenology was observed at weekly to monthly intervals from May 1988 to January 1990. The species are strictly dioecious, did not present any sex-mixed trees or sex switching during the study, and the sex ratio did not differ significantly from 1:1. The size distributions and the relative size variation were not significantly different between sexes. There was no significant segregation or clumping between individuals of either sex and no fruit production without pollination. Onset of flowering and flowering peak were synchronous between male and female plants for all species studied. Flower synchrony was related to outcrossing and pollinator attraction rather than climatic factors.

Relevance:

60.00%

Publisher:

Abstract:

Breeding success and nest-site characteristics were studied during the 1996-1997 breeding season in a colony of Scarlet Ibises Eudocimus ruber in south-eastern Brazil to test the hypothesis that nest-site characteristics and clutch size affect nest success. Two nesting pulses produced young, the earlier being more successful. Predation accounted for most failures during the first pulse, wind destruction during the second. A third pulse with few nests produced no young. Adult Ibises abandoned nests when they lost sight of other incubating birds. Logistic regression analysis indicated that nest success during the first pulse was positively related to clutch size, number of nests in the nest tree and in the nearest tree, and negatively to the distance to the nearest neighbour. During the second pulse there were significant negative associations between success, nest height and distance to the fourth nearest nest, and a positive association between success and nest cover. The results agree with the 'selfish herd' hypothesis, indicating that nest aggregation may increase breeding success, but the nest-site characteristics affecting success can differ over the course of one breeding season.

Relevance:

60.00%

Publisher:

Abstract:

A parallel technique, for a distributed-memory machine, based on domain decomposition for solving the Navier-Stokes equations in cartesian and cylindrical coordinates in two dimensions with free surfaces is described. It is based on the code by Tome and McKee (J. Comp. Phys. 110 (1994) 171-186) and Tome (Ph.D. Thesis, University of Strathclyde, Glasgow, 1993), which in turn is based on the SMAC method by Amsden and Harlow (Report LA-4370, Los Alamos Scientific Laboratory, 1971), which solves the Navier-Stokes equations in three steps: the momentum and Poisson equations and particle movement. These equations are discretized by explicit, 5-point finite differences. The parallelization is performed by splitting the computation domain into vertical panels and assigning each of these panels to a processor. All the computation can then be performed using nearest-neighbour communication. Test runs comparing the performance of the parallel code with the serial code, and a discussion of the load-balancing question, are presented. PVM is used for communication between processes. (C) 1999 Elsevier B.V. All rights reserved.
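The vertical-panel decomposition with nearest-neighbour halo copies can be sketched in a few lines; here serial array copies stand in for the PVM send/receive pairs:

```python
import numpy as np

def split_panels(grid, n):
    """Split a 2-D grid into n vertical panels, each padded with
    one-column halos on its left and right edges."""
    return [np.pad(p, ((0, 0), (1, 1)))
            for p in np.array_split(grid, n, axis=1)]

def exchange_halos(panels):
    """Nearest-neighbour 'communication': copy each panel's edge column
    into its neighbour's halo (in PVM this is a send/receive per edge)."""
    for i in range(len(panels) - 1):
        panels[i][:, -1] = panels[i + 1][:, 1]     # halo from right neighbour
        panels[i + 1][:, 0] = panels[i][:, -2]     # halo from left neighbour

grid = np.arange(16.0).reshape(4, 4)
panels = split_panels(grid, 2)
exchange_halos(panels)
```

After the exchange each processor can apply the explicit 5-point stencil to its interior columns using only local data, which is why the whole computation needs nothing beyond nearest-neighbour messages.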

Relevance:

60.00%

Publisher:

Abstract:

Machine learning comprises a series of techniques for automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among all the learning techniques for dealing with structured data, kernel methods are recognized to have a strong theoretical background and to be effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data. In fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernel, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required both in the learning and classification phases. Such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions.
Specifically, we propose to encode the input trees with an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped to similar points. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
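As an illustration of the convolution-kernel idea, a minimal subtree-style kernel that counts pairs of identical subtrees (a simplification of the subtree/subset-tree kernels mentioned above; trees are nested tuples of the form `(label, *children)`):

```python
from collections import Counter

def subtrees(tree, bag):
    """Recursively record a canonical string for every subtree of `tree`."""
    label, children = tree[0], tree[1:]
    sig = "(" + label + "".join(subtrees(c, bag) for c in children) + ")"
    bag[sig] += 1
    return sig

def subtree_kernel(t1, t2):
    """Convolution kernel: number of pairs of identical subtrees in t1, t2."""
    b1, b2 = Counter(), Counter()
    subtrees(t1, b1)
    subtrees(t2, b2)
    return sum(b1[s] * b2[s] for s in b1.keys() & b2.keys())

# Two toy parse-like trees sharing an NP subtree.
t1 = ("S", ("NP", ("D",), ("N",)), ("VP", ("V",)))
t2 = ("S", ("NP", ("D",), ("N",)), ("VP", ("V",), ("NP", ("D",), ("N",))))
k12 = subtree_kernel(t1, t2)
```

With large label domains most signatures match nowhere else in the dataset, so off-diagonal kernel values collapse towards zero; that is exactly the sparsity problem the thesis addresses.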