681 results for cluster computing
Abstract:
We present a detailed description of the Voronoi Tessellation (VT) cluster finder algorithm in 2+1 dimensions, which improves on past implementations of this technique. The need for cluster finder algorithms able to produce reliable cluster catalogs up to redshift 1 or beyond and down to 10^13.5 solar masses is paramount, especially in light of upcoming surveys aiming at cosmological constraints from galaxy cluster number counts. We build the VT in photometric redshift shells and use the two-point correlation function of the galaxies in the field both to determine the density threshold for detection of cluster candidates and to establish their significance. This allows us to detect clusters in a self-consistent way without any assumptions about their astrophysical properties. We apply the VT to mock catalogs which extend to redshift 1.4, reproducing the ΛCDM cosmology and the clustering properties observed in the Sloan Digital Sky Survey data. An objective estimate of the cluster selection function in terms of completeness and purity as a function of mass and redshift is as important as having a reliable cluster finder. We measure these quantities by matching the VT cluster catalog with the mock truth table. We show that the VT can produce a cluster catalog with completeness and purity > 80% for redshifts up to ~1 and masses down to ~10^13.5 solar masses.
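The abstract does not include an implementation; as a rough illustration of the core idea only (local density estimated from inverse Voronoi cell areas inside one photometric-redshift shell, compared against a density threshold), the following Python sketch uses scipy. The toy field and the threshold value are placeholders: in the paper the threshold is derived from the two-point correlation function.

    # Illustrative sketch only: estimate each galaxy's local density as the inverse
    # of its Voronoi cell area within one photo-z shell and keep galaxies above a
    # density threshold (a placeholder here; the paper sets it from the two-point
    # correlation function).
    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    def voronoi_densities(ra, dec):
        """Inverse Voronoi-cell area as a local density estimate for each galaxy."""
        points = np.column_stack([ra, dec])
        vor = Voronoi(points)
        dens = np.full(len(points), np.nan)
        for i, region_idx in enumerate(vor.point_region):
            region = vor.regions[region_idx]
            if -1 in region or len(region) == 0:
                continue  # open cell at the field edge: no finite area
            area = ConvexHull(vor.vertices[region]).volume  # for 2-D hulls, .volume is the area
            dens[i] = 1.0 / area
        return dens

    # Toy field: uniform background plus one compact overdensity.
    rng = np.random.default_rng(0)
    ra = np.concatenate([rng.uniform(0, 1, 500), rng.normal(0.5, 0.01, 50)])
    dec = np.concatenate([rng.uniform(0, 1, 500), rng.normal(0.5, 0.01, 50)])
    dens = voronoi_densities(ra, dec)
    members = np.where(dens > 5.0 * 550)[0]  # e.g. 5x the mean density of 550 points per unit area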
Abstract:
Background and Objectives: There are some indications that low-level laser therapy (LLLT) may delay the development of skeletal muscle fatigue during high-intensity exercise. There have also been claims that LED cluster probes may be effective for this application; however, there are differences between LED and laser sources, such as spot size, spectral width and power output. In this study we wanted to test whether light emitting diode therapy (LEDT) can alter muscle performance, fatigue development and biochemical markers for skeletal muscle recovery in an experimental model of biceps humeri muscle contractions. Study Design/Materials and Methods: Ten male professional volleyball players (23.6 [SD +/- 5.6] years old) entered a randomized double-blinded placebo-controlled crossover trial. Active cluster LEDT (69 LEDs with wavelengths 660/850 nm, 10/30 mW, 30 seconds total irradiation time, 41.7 J of total energy irradiated) or an identical placebo LEDT was delivered under double-blinded conditions to the middle of the biceps humeri muscle immediately before exercise. All subjects performed voluntary biceps humeri contractions with a workload of 75% of their maximal voluntary contraction force (MVC) until exhaustion. Results: Active LEDT increased the number of biceps humeri contractions by 12.9% (38.60 [SD +/- 9.03] vs. 34.20 [SD +/- 8.68], P = 0.021) and extended the elapsed time to perform contractions by 11.6% (P = 0.036) versus placebo. In addition, post-exercise levels of biochemical markers decreased significantly with active LEDT: blood lactate (P = 0.042), creatine kinase (P = 0.035), and C-reactive protein (P = 0.030), when compared to placebo LEDT. Conclusion: We conclude that this particular procedure and dose of LEDT immediately before exhaustive biceps humeri contractions causes a slight delay in the development of skeletal muscle fatigue, decreases post-exercise blood lactate levels and inhibits the release of creatine kinase and C-reactive protein. Lasers Surg. Med. 41:572-577, 2009. (C) 2009 Wiley-Liss, Inc.
Abstract:
One of the top ten most influential data mining algorithms, k-means, is known for being simple and scalable. However, it is sensitive to initialization of prototypes and requires that the number of clusters be specified in advance. This paper shows that evolutionary techniques conceived to guide the application of k-means can be more computationally efficient than systematic (i.e., repetitive) approaches that try to get around the above-mentioned drawbacks by repeatedly running the algorithm from different configurations for the number of clusters and initial positions of prototypes. To do so, a modified version of a (k-means based) fast evolutionary algorithm for clustering is employed. Theoretical complexity analyses for the systematic and evolutionary algorithms under interest are provided. Computational experiments and statistical analyses of the results are presented for artificial and text mining data sets. (C) 2010 Elsevier B.V. All rights reserved.
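For context, the "systematic" baseline the abstract argues against can be sketched as follows in Python (using scikit-learn, which the paper does not necessarily use): run k-means repeatedly over a grid of k values and random initializations and keep the partition with the best silhouette. The paper's evolutionary algorithm, which evolves the number of clusters and the prototypes instead of enumerating them, is not reproduced here.

    # Systematic (repetitive) baseline: exhaustively restart k-means over a grid of
    # k values and keep the best partition by silhouette. This is the approach the
    # evolutionary algorithm in the paper is shown to outperform computationally.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def systematic_kmeans(X, k_range=range(2, 11), n_restarts=10, seed=0):
        rng = np.random.RandomState(seed)
        best_score, best_model = -1.0, None
        for k in k_range:
            for _ in range(n_restarts):
                km = KMeans(n_clusters=k, n_init=1,
                            random_state=rng.randint(2**31 - 1)).fit(X)
                score = silhouette_score(X, km.labels_)
                if score > best_score:
                    best_score, best_model = score, km
        return best_model

    # Usage: model = systematic_kmeans(X); labels = model.labels_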
Abstract:
The evolution of commodity computing led to the possibility of efficient usage of interconnected machines to solve computationally intensive tasks, which were previously solvable only by using expensive supercomputers. This, however, required new methods for process scheduling and distribution, considering the network latency, communication cost, heterogeneous environments and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential to perform effective scheduling. In this paper, we overview the evolution of scheduling approaches, focusing on distributed environments. We also evaluate the current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction, considering chaotic properties of such behavior and the automatic detection of critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online predictions due to its low computational cost and good precision. (C) 2009 Elsevier B.V. All rights reserved.
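The abstract does not describe the prediction model itself (it is chaos-based, with automatic detection of critical execution points); purely as an illustration of how predicted behavior can drive placement decisions, the sketch below uses a simple exponentially weighted moving average and a least-loaded-node policy, both of which are stand-ins rather than the paper's method.

    # Minimal illustration of prediction-driven scheduling (not the paper's model):
    # predict each process's next resource demand with an exponentially weighted
    # moving average and place it on the node with the smallest predicted load.
    from collections import defaultdict

    class PredictiveScheduler:
        def __init__(self, nodes, alpha=0.5):
            self.alpha = alpha                      # smoothing factor for the EWMA
            self.predicted = defaultdict(float)     # per-process predicted demand
            self.load = {node: 0.0 for node in nodes}

        def observe(self, process, measured_demand):
            """Update the per-process prediction from a measured execution phase."""
            old = self.predicted[process]
            self.predicted[process] = self.alpha * measured_demand + (1 - self.alpha) * old

        def place(self, process):
            """Assign the process to the node with the least accumulated predicted load."""
            node = min(self.load, key=self.load.get)
            self.load[node] += self.predicted[process] or 1.0  # default cost if unseen
            return node

    # Usage:
    # sched = PredictiveScheduler(nodes=["n0", "n1", "n2"])
    # sched.observe("p1", measured_demand=4.2); node = sched.place("p1")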
Abstract:
This paper proposes a filter-based algorithm for feature selection. The filter is based on partitioning the set of features into clusters. The number of clusters, and consequently the cardinality of the subset of selected features, is automatically estimated from the data. The computational complexity of the proposed algorithm is also investigated. A variant of this filter that considers feature-class correlations is also proposed for classification problems. Empirical results involving ten datasets illustrate the performance of the developed algorithm, which in general has obtained competitive results in terms of classification accuracy when compared to state-of-the-art algorithms that find clusters of features. We show that, if computational efficiency is an important issue, then the proposed filter may be preferred over its counterparts, thus becoming eligible to join a pool of feature selection algorithms to be used in practice. As an additional contribution of this work, a theoretical framework is used to formally analyze some properties of feature selection methods that rely on finding clusters of features. (C) 2011 Elsevier Inc. All rights reserved.
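The abstract does not spell out the clustering criterion; a minimal Python sketch of the general idea (group correlated features and keep one representative per group, so the number of selected features equals the number of feature clusters found) could look like this. The hierarchical clustering, the correlation-based distance and the cut threshold are assumptions for illustration, not the paper's algorithm.

    # Illustrative feature-clustering filter (not the paper's exact method): cluster
    # features by |correlation| and keep one representative per cluster.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    def correlation_filter(X, distance_threshold=0.3):
        """X: (n_samples, n_features). Returns indices of the selected features."""
        corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature |correlation|
        dist = 1.0 - corr                              # dissimilarity between features
        np.fill_diagonal(dist, 0.0)
        condensed = squareform(dist, checks=False)
        labels = fcluster(linkage(condensed, method="average"),
                          t=distance_threshold, criterion="distance")
        selected = []
        for c in np.unique(labels):
            members = np.where(labels == c)[0]
            # representative: the feature most correlated with the rest of its cluster
            rep = members[np.argmax(corr[np.ix_(members, members)].sum(axis=1))]
            selected.append(rep)
        return np.array(sorted(selected))

    # Usage: idx = correlation_filter(X); X_reduced = X[:, idx]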
Abstract:
Models of dynamical dark energy unavoidably possess fluctuations in the energy density and pressure of that new component. In this paper we estimate the impact of dark energy fluctuations on the number of galaxy clusters in the Universe using a generalization of the spherical collapse model and the Press-Schechter formalism. The observations we consider are several hypothetical Sunyaev-Zel'dovich and weak lensing (shear maps) cluster surveys, with limiting masses similar to ongoing (SPT, DES) as well as future (LSST, Euclid) surveys. Our statistical analysis is performed in a 7-dimensional cosmological parameter space using the Fisher matrix method. We find that, in some scenarios, the impact of these fluctuations is large enough that their effect could already be detected by existing instruments such as the South Pole Telescope, when priors from other standard cosmological probes are included. We also show how dark energy fluctuations can be a nuisance for constraining cosmological parameters with cluster counts, and point to a degeneracy between the parameter that describes dark energy pressure on small scales (the effective sound speed) and the parameters describing its equation of state.
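For reference, a standard Fisher-matrix forecast for Poisson-distributed cluster counts N_k in bins of mass and redshift (a common choice for such analyses; the paper's exact likelihood may differ) reads, in LaTeX:

    F_{ij} = \sum_{k} \frac{1}{N_k}
             \frac{\partial N_k}{\partial \theta_i}
             \frac{\partial N_k}{\partial \theta_j},
    \qquad
    \sigma(\theta_i) \ge \sqrt{\left(F^{-1}\right)_{ii}},

where the \theta_i are the seven cosmological parameters and the marginalized error on each parameter follows from the Cramér-Rao bound.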
Abstract:
An experimental overview of reactions induced by the stable but weakly bound nuclei ^6Li, ^7Li and ^9Be, and by the exotic halo nuclei ^6He, ^8B, ^11Be and ^17F, on medium-mass targets such as ^58Ni, ^59Co or ^64Zn, is presented. Existing data on elastic scattering, total reaction cross sections, fusion, breakup and transfer channels are discussed in the framework of a CDCC approach taking into account the breakup degree of freedom.
Abstract:
The InteGrade middleware intends to exploit the idle time of computing resources in computer laboratories. In this work we investigate the performance of running parallel applications with communication among processors on the InteGrade grid. As costly communication on a grid can be prohibitive, we explore the so-called systolic or wavefront paradigm to design the parallel algorithms in which no global communication is used. To evaluate the InteGrade middleware we considered three parallel algorithms that solve the matrix chain product problem, the 0-1 Knapsack Problem, and the local sequence alignment problem, respectively. We show that these three applications running under the InteGrade middleware and MPI take slightly more time than the same applications running on a cluster with only LAM-MPI support. The results can be considered promising and the time difference between the two is not substantial. The overhead of the InteGrade middleware is acceptable, in view of the benefits obtained to facilitate the use of grid computing by the user. These benefits include job submission, checkpointing, security, job migration, etc. Copyright (C) 2009 John Wiley & Sons, Ltd.
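As a minimal serial illustration of the systolic/wavefront paradigm mentioned above, the dynamic-programming table of the local sequence alignment problem can be evaluated along anti-diagonals: every cell on a wavefront depends only on cells from earlier wavefronts, so on a grid the cells of one anti-diagonal can be computed in parallel with neighbour-to-neighbour communication only. The Python sketch below is a toy version, not the InteGrade/MPI implementation used in the paper.

    # Wavefront (anti-diagonal) evaluation order for Smith-Waterman local alignment:
    # all cells with i + j = d are independent once earlier diagonals are done.
    def smith_waterman_wavefront(a, b, match=2, mismatch=-1, gap=-1):
        n, m = len(a), len(b)
        H = [[0] * (m + 1) for _ in range(n + 1)]
        best = 0
        for d in range(2, n + m + 1):                # anti-diagonal index d = i + j
            for i in range(max(1, d - m), min(n, d - 1) + 1):
                j = d - i                            # all (i, j) on this wavefront
                s = match if a[i - 1] == b[j - 1] else mismatch
                H[i][j] = max(0, H[i - 1][j - 1] + s,
                                 H[i - 1][j] + gap, H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best

    # Usage: smith_waterman_wavefront("GATTACA", "GCATGCU") -> best local alignment score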
Abstract:
Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptual infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model, with the additional assumptions that all variance components are known and that within-cluster variances are equal, have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method-of-moments estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best, and when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (c) 2007 Elsevier B.V. All rights reserved.
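For reference, the infinite-population ME baseline mentioned in the abstract can be written, for unit j in cluster i, as

    Y_{ij} = \mu + b_i + e_{ij}, \qquad b_i \sim N(0, \sigma_b^2), \qquad e_{ij} \sim N(0, \sigma_e^2),

with the usual shrinkage (BLUP-type) predictor of the cluster effect

    \hat{b}_i = \frac{\sigma_b^2}{\sigma_b^2 + \sigma_e^2 / n_i} \left( \bar{Y}_i - \hat{\mu} \right),

where n_i is the cluster sample size and \bar{Y}_i the cluster mean. The finite-population predictors of Scott and Smith and of Stanek and Singer modify this form; their exact expressions are not reproduced here.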
Abstract:
A systematic and comprehensive study of the interaction of citrate-stabilized gold nanoparticles with triruthenium cluster complexes of general formula [Ru3O(CH3COO)6(L)3]+ [L = 4-cyanopyridine (4-CNpy), 4,4'-bipyridine (4,4'-bpy) or 4,4'-bis(pyridyl)ethylene (bpe)] has been carried out. The cluster-nanoparticle interaction in solution and the construction of thin films of the hybrid materials were investigated in detail by electronic and surface plasmon resonance (SPR) spectroscopy, Raman scattering spectroscopy and scanning electron microscopy (SEM). Citrate-stabilized gold nanoparticles readily interacted with [Ru3O(CH3COO)6(L)3]+ complexes to generate functionalized nanoparticles that tend to aggregate at rates and to extents that depend on the bond strength defined by the characteristics of the cluster L ligands, following the sequence bpe > 4,4'-bpy >> 4-CNpy. The formation of compact thin films of hybrid AuNP/[Ru3O(CH3COO)6(L)3]+ derivatives with L = bpe and 4,4'-bpy indicated that the stability/lability of AuNP-cluster bonds as well as their solubility are important parameters that influence the film construction process. Fluorine-doped tin oxide electrodes modified with thin films of these nanomaterials exhibited similar electrocatalytic activity but much higher sensitivity than a conventional gold electrode in the oxidation of nitrite ion to nitrate, depending on the bridging cluster complex, demonstrating their high potential for the development of amperometric sensors.
Abstract:
Parkinson's disease (PD) is the second most common neurodegenerative disorder (after Alzheimer's disease) and directly affects up to 5 million people worldwide. The stages of the disease (Hoehn and Yahr) have been predicted by many methods, which can help doctors choose the dosage accordingly. These methods were developed based on a data set that includes about seventy patients at nine clinics in Sweden. The purpose of this work is to compare an unsupervised technique with supervised neural network techniques in order to verify that the collected data sets are reliable for decision making. The available data were preprocessed before their features were calculated. A complex and efficient feature, based on wavelets, was calculated to present the data set to the network. The dimension of the final feature set was reduced using principal component analysis. For unsupervised learning, k-means gives a result of around 76%, close to that of the supervised techniques. Back propagation and J4 were used as supervised models to classify the stages of Parkinson's disease, with back propagation giving a variance percentage of 76-82%. The results of both models were analyzed, showing that the collected data are reliable for predicting the stages of Parkinson's disease.
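The abstract outlines a pipeline of wavelet features, PCA and either k-means or supervised classifiers; a hedged Python sketch of the unsupervised branch (using scikit-learn, which the paper does not necessarily use) is given below. The number of principal components and the mapping from clusters to Hoehn and Yahr stages are illustrative assumptions.

    # Illustrative sketch only: standardize precomputed wavelet features, reduce
    # them with PCA, cluster with k-means, and measure agreement with the
    # clinician-rated stages by mapping each cluster to its majority stage.
    import numpy as np
    from collections import Counter
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.metrics import accuracy_score

    def stage_clustering_agreement(features, stages, n_components=10, n_stages=5, seed=0):
        """features: (n_samples, n_features) wavelet features; stages: array of stage labels."""
        X = StandardScaler().fit_transform(features)
        X = PCA(n_components=n_components, random_state=seed).fit_transform(X)
        labels = KMeans(n_clusters=n_stages, random_state=seed).fit_predict(X)
        stages = np.asarray(stages)
        mapping = {c: Counter(stages[labels == c]).most_common(1)[0][0]
                   for c in np.unique(labels)}
        predicted = np.array([mapping[c] for c in labels])
        return accuracy_score(stages, predicted)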
Abstract:
Cloud computing refers to the use of computing resources that are available over a network, usually the Internet, and is an area that has grown rapidly in recent years. More and more companies are migrating all or parts of their operations to the cloud. Sogeti in Borlänge needs to migrate its development environments to a cloud service, since operating and maintaining them is costly and time-consuming. As a Microsoft partner, Sogeti wants to use Microsoft's cloud computing service, Windows Azure, for this purpose. Migration to the cloud is a new area for Sogeti, and the company has no descriptions of how such a process is carried out. Our assignment was to develop an approach for migrating an IT solution to the cloud. Part of the assignment was therefore to map out cloud computing, its components, and its advantages and disadvantages, which has given us fundamental knowledge of the subject. To develop a migration approach, we performed several migrations of virtual machines to Windows Azure and, based on these migrations, literature studies and interviews, drew conclusions that resulted in a general approach for migration to the cloud. The results have shown that it is difficult to produce a general yet detailed description of a migration approach, since the scenario differs depending on what is to be migrated and which type of cloud service is used. However, based on our experience from the migrations, together with literature studies, document studies and interviews, we have raised our knowledge to a general level. From this knowledge we have compiled a general approach with a stronger focus on the preparatory activities an organization should carry out before migration. Our studies have also resulted in a more in-depth description of cloud computing. In our study we have not seen any previous description of critical success factors in connection with cloud computing. In our empirical work, however, we have identified three critical success factors for cloud computing, thereby filling part of that knowledge gap.
Abstract:
Learning from anywhere at any time is a contemporary phenomenon in the field of education that is thought to be flexible, time saving and cost saving. The phenomenon is evident in the way computer technology mediates knowledge processes among learners. Computer technology is, however, faulted in some instances. There are studies that highlight drawbacks of computer technology use in learning. In this study we aimed at conducting a SWOT analysis on ubiquitous computing and computer-mediated social interaction and their effect on education. Students and teachers were interviewed on the mentioned concepts using focus group interviews. Our contribution in this study is identifying what teachers and students perceive to be the strengths, weaknesses, opportunities and threats of ubiquitous computing and computer-mediated social interaction in education. We also relate the findings to the literature and present a common understanding of the SWOT of these concepts. Results show positive perceptions. Respondents revealed that ubiquitous computing and computer-mediated social interaction are important in their education due to advantages such as flexibility, efficiency in terms of cost and time, and the ability to acquire computer skills. Nevertheless, disadvantages were also mentioned, for example health effects, privacy and security issues, and noise in the learning environment, to mention but a few. This paper gives suggestions on how to overcome the threats mentioned.
Abstract:
The ever-increasing spurt in digital crimes such as image manipulation, image tampering, signature forgery, image forgery and illegal transactions has intensified the demand to combat these forms of criminal activity. In this direction, biometrics - the computer-based validation of a person's identity - is becoming more and more essential, particularly for high-security systems. The essence of biometrics is the measurement of a person's physiological or behavioral characteristics, which enables authentication of that person's identity. Biometric-based authentication is also becoming increasingly important in computer-based applications because the amount of sensitive data stored in such systems is growing. The new demands on biometric systems are robustness, high recognition rates, the capability to handle imprecision and uncertainties of a non-statistical kind, and great flexibility. It is exactly here that soft computing techniques come into play. The main aim of this write-up is to present a pragmatic view on applications of soft computing techniques in biometrics and to analyze their impact. It is found that soft computing has already made inroads, either as individual methods or in combination. Applications of varieties of neural networks top the list, followed by fuzzy logic and evolutionary algorithms. In a nutshell, soft computing paradigms are used for biometric tasks such as feature extraction, dimensionality reduction, pattern identification, pattern mapping and the like.