681 results for computing cluster
Abstract:
The evolution of commodity computing led to the possibility of efficiently using interconnected machines to solve computationally intensive tasks, which were previously solvable only by expensive supercomputers. This, however, required new methods for process scheduling and distribution that account for network latency, communication cost, heterogeneous environments and distributed-computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge of application behavior, and the ability to predict it, is essential for effective scheduling. In this paper, we overview the evolution of scheduling approaches, focusing on distributed environments. We also evaluate current approaches for process behavior extraction and prediction, aiming to select an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction that considers the chaotic properties of such behavior and automatically detects critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The results demonstrate that predicting process behavior is essential for efficient scheduling in large-scale, heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves efficient for online prediction due to its low computational cost and good precision. (C) 2009 Elsevier B.V. All rights reserved.
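The abstract does not specify the prediction model in detail. As a purely illustrative sketch, one standard technique for online prediction of a series with chaotic properties is nearest-neighbour prediction in a delay-embedded (Takens) phase space; the function names, embedding dimension and neighbour count below are assumptions, not the authors' actual model:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Build delay-coordinate vectors (Takens embedding) from a 1-D series."""
    n = len(series) - (dim - 1) * tau
    return np.array([series[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def knn_predict(series, dim=3, tau=1, k=4):
    """Predict the next value of `series` from its k nearest embedded neighbours."""
    vecs = delay_embed(series, dim, tau)
    query, history = vecs[-1], vecs[:-1]
    # Targets: the value that followed each historical embedded state.
    targets = series[(dim - 1) * tau + 1: len(history) + (dim - 1) * tau + 1]
    dists = np.linalg.norm(history - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return targets[nearest].mean()

# Toy usage: predict the next CPU-usage sample of a (synthetic) process trace.
trace = np.sin(np.linspace(0, 20, 200)) + 0.05 * np.random.randn(200)
print(knn_predict(trace))
```

Such a predictor is cheap to update online, which is the property the paper emphasises for scheduling decisions.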
Abstract:
This paper proposes a filter-based algorithm for feature selection. The filter is based on partitioning the set of features into clusters. The number of clusters, and consequently the cardinality of the subset of selected features, is automatically estimated from data. The computational complexity of the proposed algorithm is also investigated. A variant of this filter that considers feature-class correlations is also proposed for classification problems. Empirical results involving ten datasets illustrate the performance of the developed algorithm, which has generally obtained competitive results in terms of classification accuracy when compared to state-of-the-art algorithms that find clusters of features. We show that, if computational efficiency is an important issue, the proposed filter may be preferred over its counterparts, making it eligible to join a pool of feature selection algorithms used in practice. As an additional contribution of this work, a theoretical framework is used to formally analyze some properties of feature selection methods that rely on finding clusters of features. (C) 2011 Elsevier Inc. All rights reserved.
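As a rough, hypothetical illustration of the general idea of filtering by feature clusters (not the authors' algorithm, which estimates the number of clusters from data), one can cluster features by correlation and keep one representative per cluster; the supervised variant scores representatives by feature-class correlation:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_filter(X, y=None, n_clusters=5):
    """Select one representative feature per correlation-based cluster."""
    corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-feature |correlation|
    dist = squareform(1.0 - corr, checks=False)   # condensed distance matrix
    labels = fcluster(linkage(dist, method="average"), n_clusters, criterion="maxclust")
    selected = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        if y is not None:  # supervised variant: strongest feature-class correlation
            scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in members]
        else:              # unsupervised: feature most correlated with its cluster
            scores = [corr[j, members].mean() for j in members]
        selected.append(members[int(np.argmax(scores))])
    return sorted(selected)

# Toy usage on synthetic data with 20 features.
X = np.random.randn(100, 20)
y = (X[:, 0] + X[:, 3] > 0).astype(int)
print(cluster_filter(X, y, n_clusters=5))
```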
Abstract:
Models of dynamical dark energy unavoidably possess fluctuations in the energy density and pressure of that new component. In this paper we estimate the impact of dark energy fluctuations on the number of galaxy clusters in the Universe using a generalization of the spherical collapse model and the Press-Schechter formalism. The observations we consider are several hypothetical Sunyaev-Zel'dovich and weak lensing (shear maps) cluster surveys, with limiting masses similar to ongoing (SPT, DES) as well as future (LSST, Euclid) surveys. Our statistical analysis is performed in a 7-dimensional cosmological parameter space using the Fisher matrix method. We find that, in some scenarios, the impact of these fluctuations is large enough that their effect could already be detected by existing instruments such as the South Pole Telescope, when priors from other standard cosmological probes are included. We also show how dark energy fluctuations can be a nuisance for constraining cosmological parameters with cluster counts, and point to a degeneracy between the parameter that describes dark energy pressure on small scales (the effective sound speed) and the parameters describing its equation of state.
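To make the forecasting machinery concrete: for Poisson-limited cluster counts N_b in bins b, the Fisher matrix is F_ij = sum_b (dN_b/dp_i)(dN_b/dp_j)/N_b, and marginalised parameter errors follow from the diagonal of F^-1. A generic numerical sketch with a placeholder two-parameter count model, not the paper's 7-dimensional setup:

```python
import numpy as np

def fisher_matrix(counts_model, p_fid, step=1e-4):
    """F_ij = sum_b (dN_b/dp_i)(dN_b/dp_j)/N_b for Poisson-limited counts."""
    p_fid = np.asarray(p_fid, dtype=float)
    N_fid = counts_model(p_fid)
    derivs = []
    for i in range(len(p_fid)):
        dp = np.zeros_like(p_fid)
        dp[i] = step * max(abs(p_fid[i]), 1.0)
        # Central finite difference of the binned counts w.r.t. parameter i.
        derivs.append((counts_model(p_fid + dp) - counts_model(p_fid - dp)) / (2 * dp[i]))
    D = np.array(derivs)                       # shape: (n_params, n_bins)
    return D @ np.diag(1.0 / N_fid) @ D.T

# Placeholder model: counts in 3 mass bins depending on two toy parameters.
model = lambda p: np.array([1000.0, 300.0, 50.0]) * np.exp(p[0]) * (1 + p[1] * np.array([0.1, 0.3, 0.5]))
F = fisher_matrix(model, p_fid=[0.0, 0.0])
print(np.sqrt(np.diag(np.linalg.inv(F))))     # forecast 1-sigma marginalised errors
```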
Abstract:
An experimental overview of reactions induced by the stable but weakly bound nuclei ⁶Li, ⁷Li and ⁹Be, and by the exotic halo nuclei ⁶He, ⁸B, ¹¹Be and ¹⁷F, on medium-mass targets such as ⁵⁸Ni, ⁵⁹Co or ⁶⁴Zn is presented. Existing data on elastic scattering, total reaction cross sections, fusion, breakup and transfer channels are discussed in the framework of a CDCC approach taking into account the breakup degree of freedom.
Abstract:
The InteGrade middleware intends to exploit the idle time of computing resources in computer laboratories. In this work we investigate the performance of running parallel applications with inter-process communication on the InteGrade grid. As costly communication on a grid can be prohibitive, we explore the so-called systolic, or wavefront, paradigm to design parallel algorithms that use no global communication; see the sketch below. To evaluate the InteGrade middleware we considered three parallel algorithms that solve the matrix chain product problem, the 0-1 knapsack problem, and the local sequence alignment problem, respectively. We show that these three applications running under the InteGrade middleware and MPI take only slightly more time than the same applications running on a cluster with LAM-MPI support alone. The results can be considered promising, and the time difference between the two is not substantial. The overhead of the InteGrade middleware is acceptable in view of the benefits obtained in facilitating the use of grid computing, including job submission, checkpointing, security, and job migration. Copyright (C) 2009 John Wiley & Sons, Ltd.
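The wavefront pattern shared by these three dynamic-programming algorithms is that every cell on an anti-diagonal i+j = d depends only on earlier anti-diagonals, so the cells of one anti-diagonal can be computed in parallel without global communication. A minimal sequential sketch for the local sequence alignment case (Smith-Waterman with an illustrative linear gap penalty; a grid version would distribute each anti-diagonal across nodes):

```python
def smith_waterman_wavefront(a, b, match=2, mismatch=-1, gap=-1):
    """Fill the Smith-Waterman table anti-diagonal by anti-diagonal.

    Cells on the same anti-diagonal i+j = d are mutually independent,
    which is what lets a systolic/wavefront implementation compute them
    in parallel."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for d in range(2, n + m + 1):                  # sweep anti-diagonals
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i-1][j-1] + s, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman_wavefront("ACACACTA", "AGCACACA"))
```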
Abstract:
Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptual infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model with the additional assumptions that all variance components are known and that within-cluster variances are equal have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method of moment estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best and when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (C) 2007 Elsevier B.V. All rights reserved.
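For readers unfamiliar with the moment estimators involved, the balanced one-way ANOVA decomposition yields the classical method-of-moments estimates of the between-cluster and within-cluster variance components. The sketch below assumes balanced clusters for simplicity, unlike the general setting of the paper:

```python
import numpy as np

def anova_variance_components(y):
    """Method-of-moments (ANOVA) estimates for a balanced one-way random effects model.

    y has shape (k clusters, n units per cluster).
    Returns (sigma2_between, sigma2_within), using E[MSB] = sigma2_w + n*sigma2_b."""
    k, n = y.shape
    cluster_means = y.mean(axis=1)
    msw = ((y - cluster_means[:, None]) ** 2).sum() / (k * (n - 1))   # within mean square
    msb = n * ((cluster_means - y.mean()) ** 2).sum() / (k - 1)       # between mean square
    # Truncate at zero: moment estimates of sigma2_b can come out negative.
    return max((msb - msw) / n, 0.0), msw

# Toy usage: 10 clusters of 25 units, true between-sd 2 and within-sd 1.
rng = np.random.default_rng(0)
effects = rng.normal(0, 2.0, size=(10, 1))
y = effects + rng.normal(0, 1.0, size=(10, 25))
print(anova_variance_components(y))
```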
Abstract:
A systematic and comprehensive study of the interaction of citrate-stabilized gold nanoparticles with triruthenium cluster complexes of general formula [Ru₃O(CH₃COO)₆(L)₃]⁺ [L = 4-cyanopyridine (4-CNpy), 4,4′-bipyridine (4,4′-bpy) or 4,4′-bis(pyridyl)ethylene (bpe)] has been carried out. The cluster-nanoparticle interaction in solution and the construction of thin films of the hybrid materials were investigated in detail by electronic and surface plasmon resonance (SPR) spectroscopy, Raman scattering spectroscopy and scanning electron microscopy (SEM). Citrate-stabilized gold nanoparticles readily interacted with [Ru₃O(CH₃COO)₆(L)₃]⁺ complexes to generate functionalized nanoparticles that tend to aggregate at rates and to extents that depend on the bond strength defined by the characteristics of the cluster L ligands, following the sequence bpe > 4,4′-bpy >> 4-CNpy. The formation of compact thin films of hybrid AuNP/[Ru₃O(CH₃COO)₆(L)₃]⁺ derivatives with L = bpe and 4,4′-bpy indicated that the stability/lability of AuNP-cluster bonds, as well as their solubility, are important parameters that influence the film construction process. Fluorine-doped tin oxide electrodes modified with thin films of these nanomaterials exhibited similar electrocatalytic activity but much higher sensitivity than a conventional gold electrode in the oxidation of nitrite ion to nitrate, depending on the bridging cluster complex, demonstrating their high potential for the development of amperometric sensors.
Abstract:
Parkinson's disease (PD) is the second most common neurodegenerative disorder (after Alzheimer's disease) and directly affects up to 5 million people worldwide. The stages of the disease (Hoehn and Yahr) have been predicted by many methods, which helps doctors set the dosage accordingly. These methods were developed on a data set covering about seventy patients at nine clinics in Sweden. The purpose of this work is to compare an unsupervised technique with supervised neural network techniques in order to verify that the collected data sets are reliable for making decisions. The available data were preprocessed before their features were calculated. Wavelets, a complex but efficient feature, were calculated to present the data set to the network. The dimension of the final feature set was reduced using principal component analysis. For unsupervised learning, k-means gives a result of around 76%, close to the supervised techniques. Backpropagation and J4 were used as supervised models to classify the stages of Parkinson's disease, with backpropagation giving a variance percentage of 76-82%. The results of both models have been analyzed, indicating that the collected data are reliable for predicting the disease stages in Parkinson's disease.
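The processing chain described (wavelet features, then PCA, then k-means or a backpropagation network) can be sketched in a few lines; the data shapes, wavelet choice and all hyperparameters below are placeholders rather than the study's actual settings:

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def wavelet_features(signal, wavelet="db4", level=3):
    """Summarise each wavelet decomposition band by its energy."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Placeholder data: 70 recordings of 256 samples each, 3 disease-stage labels.
rng = np.random.default_rng(1)
signals = rng.standard_normal((70, 256))
stages = rng.integers(0, 3, size=70)

X = np.array([wavelet_features(s) for s in signals])
X = PCA(n_components=3).fit_transform(X)                    # dimensionality reduction

clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)   # unsupervised route
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, stages)  # backprop route
print(clusters[:10], clf.score(X, stages))
```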
Abstract:
Cloud computing means using computing resources that are available over a network, usually the Internet, and is an area that has grown rapidly in recent years. More and more companies are migrating all or part of their operations to the cloud. Sogeti in Borlänge needs to migrate its development environments to a cloud service, since operating and maintaining them is costly and time-consuming. As a Microsoft partner, Sogeti wants to use Microsoft's cloud computing service, Windows Azure, for this purpose. Migration to the cloud is a new area for Sogeti, and the company has no descriptions of how such a process is carried out. Our assignment was to develop an approach for migrating an IT solution to the cloud. Part of the assignment was therefore to survey cloud computing, its components, and its advantages and disadvantages, which gave us basic knowledge of the subject. To develop a migration approach, we performed several migrations of virtual machines to Windows Azure and, based on these migrations, literature studies and interviews, drew conclusions that resulted in a general approach for migration to the cloud. The results show that it is difficult to produce a general yet detailed description of a migration approach, since the scenario differs depending on what is to be migrated and what type of cloud service is used. However, based on the experience from our migrations, together with literature studies, document studies and interviews, we have raised our knowledge to a general level. From this knowledge we have compiled a general approach with greater focus on the preparatory activities an organization should carry out before migration. Our studies have also resulted in a deeper description of cloud computing. In our study we have not seen any previous description of critical success factors in connection with cloud computing. In our empirical work, however, we have identified three critical success factors for cloud computing, thereby covering part of that knowledge gap.
Abstract:
Learning from anywhere at any time is a contemporary phenomenon in the field of education that is thought to be flexible and to save time and cost. The phenomenon is evident in the way computer technology mediates knowledge processes among learners. Computer technology is, however, in some instances faulted. There are studies that highlight drawbacks of computer technology use in learning. In this study we aimed to conduct a SWOT analysis of ubiquitous computing and computer-mediated social interaction and their effect on education. Students and teachers were interviewed on the mentioned concepts using focus group interviews. Our contribution in this study is identifying what teachers and students perceive to be the strengths, weaknesses, opportunities and threats of ubiquitous computing and computer-mediated social interaction in education. We also relate the findings to the literature and present a common understanding of the SWOT of these concepts. Results show positive perceptions. Respondents revealed that ubiquitous computing and computer-mediated social interaction are important in their education owing to advantages such as flexibility, efficiency in terms of cost and time, and the ability to acquire computer skills. Nevertheless, disadvantages were also mentioned, for example health effects, privacy and security issues, and noise in the learning environment, to mention but a few. This paper gives suggestions on how to overcome the threats mentioned.
Abstract:
The ever-increasing spurt in digital crimes such as image manipulation, image tampering, signature forgery, image forgery and illegal transactions has intensified the demand to combat these forms of criminal activity. In this direction, biometrics, the computer-based validation of a person's identity, is becoming more and more essential, particularly for high-security systems. The essence of biometrics is the measurement of a person's physiological or behavioral characteristics, which enables authentication of that person's identity. Biometric-based authentication is also becoming increasingly important in computer-based applications because the amount of sensitive data stored in such systems is growing. The new demands on biometric systems are robustness, high recognition rates, the capability to handle imprecision and uncertainties of a non-statistical kind, and great flexibility. It is exactly here that the role of soft computing techniques comes into play. The main aim of this write-up is to present a pragmatic view of applications of soft computing techniques in biometrics and to analyze their impact. It is found that soft computing has already made inroads, in terms of individual methods or in combination. Applications of varieties of neural networks top the list, followed by fuzzy logic and evolutionary algorithms. In a nutshell, soft computing paradigms are used for biometric tasks such as feature extraction, dimensionality reduction, pattern identification, pattern mapping and the like.
Abstract:
BACKGROUND: Facilitation of local women's groups has reportedly reduced neonatal mortality. It is not known whether facilitation of groups composed of local health care staff and politicians can improve perinatal outcomes. We hypothesised that facilitation of local stakeholder groups would reduce neonatal mortality (primary outcome) and improve maternal, delivery, and newborn care indicators (secondary outcomes) in Quang Ninh province, Vietnam. METHODS AND FINDINGS: In a cluster-randomized design, 44 communes were allocated to intervention and 46 to control. Laywomen facilitated monthly meetings for 3 years in groups composed of health care staff and key persons in the communes. A problem-solving approach was employed. Births and neonatal deaths were monitored, and interviews were performed in households of neonatal deaths and of randomly selected surviving infants. A latent period before effect is expected in this type of intervention, but this timeframe was not pre-specified. The neonatal mortality rate (NMR) from July 2008 to June 2011 was 16.5/1,000 (195 deaths per 11,818 live births) in the intervention communes and 18.4/1,000 (194 per 10,559 live births) in the control communes (adjusted odds ratio [OR] 0.96 [95% CI 0.73-1.25]). There was a significant downward time trend of NMR in intervention communes (p = 0.003) but not in control communes (p = 0.184). No significant difference in NMR was observed during the first two years (July 2008 to June 2010), while the third year (July 2010 to June 2011) had a significantly lower NMR in the intervention arm: adjusted OR 0.51 (95% CI 0.30-0.89). Women in intervention communes attended antenatal care more frequently (adjusted OR 2.27 [95% CI 1.07-4.8]). CONCLUSIONS: A randomized facilitation intervention with local stakeholder groups composed of primary care staff and local politicians, working for three years with a perinatal problem-solving approach, resulted in increased attendance at antenatal care and reduced neonatal mortality after a latent period. TRIAL REGISTRATION: Current Controlled Trials ISRCTN44599712.
Abstract:
BACKGROUND: Reminder systems in electronic patient records (EPR) have been shown to affect both health care professionals' behaviour and patient outcomes. The aim of this cluster-randomised trial was to investigate the effects of implementing a clinical practice guideline (CPG) for peripheral venous catheters (PVCs) in paediatric care, in the format of reminders integrated in the EPRs, on PVC-related complications and on registered nurses' (RNs') self-reported adherence to the guideline. An additional aim was to study the relationship between contextual factors and the outcomes of the intervention. METHODS: The study involved 12 inpatient units at a paediatric university hospital. The reminders covered choice of PVC, hygiene, maintenance, and daily inspection of the PVC site. The primary outcome was documented signs and symptoms of PVC-related complications at removal, retrieved from the EPR. The secondary outcome was RNs' adherence to a PVC guideline, collected through a questionnaire that also included RNs' perceived work context, as measured by the Alberta Context Tool. Units were allocated into two strata, based on occurrence of PVCs. A blinded simple draw of lots from each stratum randomised six units to the control and intervention groups, respectively. Units were not blinded. The intervention group included 626 PVCs at baseline and 618 post-intervention, and the control group 724 PVCs at baseline and 674 post-intervention. RNs included at baseline were 212 (65.4 %) and 208 (71.5 %) post-intervention. RESULTS: No significant effect of the computer reminders was found on PVC-related complications or on RNs' adherence to the guideline recommendations. The complication rate at baseline and post-intervention was 40.6 % (95 % confidence interval (CI) 36.7-44.5) and 41.9 % (95 % CI 38.0-45.8) for the intervention group, and 40.3 % (95 % CI 36.8-44.0) and 46.9 % (95 % CI 43.1-50.7) for the control group. In general, RNs' self-rated work context varied from moderately low to moderately high, indicating that conditions for a successful implementation were less than optimal. CONCLUSIONS: The reminders might have benefitted from being accompanied by a tailored intervention targeting specific barriers, such as the low frequency of recorded reasons for removal, the low adherence to daily inspection of PVC sites, and the lack of regular feedback to the RNs. TRIAL REGISTRATION: Current Controlled Trials ISRCTN44819426.
Abstract:
MyGrid is an e-Science Grid project that aims to help biologists and bioinformaticians perform workflow-based in silico experiments, and to automate the management of such workflows through personalisation, notification of change and publication of experiments. In this paper, we describe the architecture of myGrid and how it will be used by the scientist. We then show how myGrid can benefit from agent technologies. We have identified three key uses of agent technologies in myGrid: user agents, able to customise and personalise data; agent communication languages, offering a generic and portable communication medium; and negotiation, allowing multiple distributed entities to reach service-level agreements.