681 results for computing cluster
The impact of office productivity cloud computing on energy consumption and greenhouse gas emissions
Abstract:
Cloud computing is usually regarded as being energy efficient and thus emitting fewer greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft’s cloud-based Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the network and user device stages were measured directly. A new measurement and software apportionment approach was defined and used, allowing the power consumption of cloud services to be measured directly at the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts. The power consumption of the cloud-based Outlook and Excel was 8% and 17% lower, respectively, than that of their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third, mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG emissions. Direct conversion from a standalone package to a cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stages using the methods described in this research.
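The three-stage accounting described above lends itself to a simple illustration. The Python sketch below sums per-session energy across the data center, network, and end-user device stages and converts the total to GHG emissions; every number (stage energies, emission factor) is a hypothetical placeholder, not data from the study.

```python
# Minimal sketch of a three-stage energy/GHG accounting model (data center,
# network, end-user device). All figures are hypothetical placeholders and
# are NOT the confidential measurements used in the study.

EMISSION_FACTOR_KG_PER_KWH = 0.5  # assumed grid carbon intensity (kg CO2e/kWh)

def session_footprint(datacenter_kwh, network_kwh, device_kwh):
    """Return (total energy in kWh, GHG in kg CO2e) for one usage session."""
    total_kwh = datacenter_kwh + network_kwh + device_kwh
    return total_kwh, total_kwh * EMISSION_FACTOR_KG_PER_KWH

# A cloud session incurs data-center and network shares; a standalone
# session draws only on the local device.
cloud = session_footprint(datacenter_kwh=0.004, network_kwh=0.002, device_kwh=0.010)
standalone = session_footprint(datacenter_kwh=0.0, network_kwh=0.0, device_kwh=0.018)
print(f"cloud: {cloud[0]:.4f} kWh / {cloud[1]:.4f} kg CO2e")
print(f"standalone: {standalone[0]:.4f} kWh / {standalone[1]:.4f} kg CO2e")
```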
Abstract:
Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can be either found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
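The non-uniform distributions mentioned above can be induced by a multidimensional binary search tree. The sketch below shows one way such a spatially coherent partition might be built with a simple k-d-style recursive split over synthetic data; it is an illustration under our own assumptions, not the paper's group-communication protocol.

```python
# Illustrative k-d-style recursive split that induces a spatially coherent,
# non-uniform partition of a dataset across processes. A sketch under our
# own assumptions, not the paper's protocol.
import numpy as np

def kd_partition(points, n_parts, depth=0):
    """Recursively split points along alternating axes into n_parts blocks."""
    if n_parts == 1:
        return [points]
    axis = depth % points.shape[1]
    order = np.argsort(points[:, axis])
    mid = len(points) // 2
    left, right = points[order[:mid]], points[order[mid:]]
    return (kd_partition(left, n_parts // 2, depth + 1)
            + kd_partition(right, n_parts - n_parts // 2, depth + 1))

rng = np.random.default_rng(0)
blocks = kd_partition(rng.normal(size=(1000, 2)), n_parts=8)
print([len(b) for b in blocks])  # one spatially compact block per process
```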
Abstract:
In this paper we propose methods for computing Fresnel integrals based on truncated trapezium rule approximations to integrals on the real line, with the trapezium rules modified to take into account poles of the integrand near the real axis. Our starting point is a method for computing the error function of complex argument due to Matta and Reichel (J Math Phys 34:298–307, 1956) and Hunter and Regan (Math Comp 26:539–541, 1972). We construct approximations which we prove are exponentially convergent as a function of N, the number of quadrature points, obtaining explicit error bounds which show that accuracies of 10^{-15} uniformly on the real line are achieved with N = 12, as confirmed by computations. The approximations we obtain are additionally attractive in that they maintain small relative errors for small and large arguments, are analytic on the real axis (echoing the analyticity of the Fresnel integrals), and are straightforward to implement.
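For orientation, the sketch below evaluates the defining integrals C(x) and S(x) with a plain composite trapezium rule and checks the result against scipy.special.fresnel. This is only a naive baseline, not the authors' method: their modified rule accounts for poles of the integrand and converges exponentially in N, which the plain rule does not.

```python
# Naive baseline for the Fresnel integrals C(x), S(x): a plain composite
# trapezium rule on the defining integrals, checked against SciPy. The
# paper's method is a *modified* trapezium rule with exponential convergence
# in N; this plain rule has neither property.
import numpy as np
from scipy.special import fresnel

def fresnel_plain_trapezium(x, n=2000):
    """Composite trapezium rule on [0, x] with n equally spaced points."""
    t, h = np.linspace(0.0, x, n, retstep=True)
    fc = np.cos(np.pi * t**2 / 2)
    fs = np.sin(np.pi * t**2 / 2)
    c = h * (fc.sum() - 0.5 * (fc[0] + fc[-1]))
    s = h * (fs.sum() - 0.5 * (fs[0] + fs[-1]))
    return c, s

x = 1.5
s_ref, c_ref = fresnel(x)  # SciPy returns (S(x), C(x))
c, s = fresnel_plain_trapezium(x)
print(f"C({x}) = {c:.12f}  (reference {c_ref:.12f})")
print(f"S({x}) = {s:.12f}  (reference {s_ref:.12f})")
```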
Abstract:
Global communication requirements and the load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of the k-means algorithm requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation of the algorithm in which the requirement of global communication can be relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multidimensional binary search trees. The approach can also be extended to accommodate an approximation error, which allows a further reduction of the communication costs.
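The per-iteration global reduction referred to above can be made concrete. The mpi4py sketch below is our own illustration of the straightforward baseline formulation, not the paper's relaxed protocol: each iteration assigns local points, accumulates per-cluster sums and counts, then performs the allreduce whose cost the paper seeks to remove.

```python
# Baseline parallel k-means with a global reduction at every iteration
# (the formulation the paper improves on). Our own illustration, not the
# paper's protocol. Run with e.g.: mpiexec -n 4 python kmeans_baseline.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rng = np.random.default_rng(comm.Get_rank())
local_data = rng.normal(size=(10000, 2))          # this process's partition
centroids = np.array([[-1.0, -1.0], [1.0, 1.0]])  # identical seeds everywhere

for _ in range(10):
    # Assign each local point to its nearest centroid.
    dist = np.linalg.norm(local_data[:, None, :] - centroids[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    # Local partial sums and counts per cluster.
    sums = np.zeros_like(centroids)
    counts = np.zeros(len(centroids))
    for k in range(len(centroids)):
        mask = labels == k
        sums[k] = local_data[mask].sum(axis=0)
        counts[k] = mask.sum()
    # Global reduction at every iteration: the scalability bottleneck.
    sums = comm.allreduce(sums, op=MPI.SUM)
    counts = comm.allreduce(counts, op=MPI.SUM)
    centroids = sums / np.maximum(counts, 1)[:, None]

if comm.Get_rank() == 0:
    print(centroids)
```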
Abstract:
Smart healthcare is a complex domain for systems integration, owing to the human and technical factors and heterogeneous data sources involved. As part of a smart city, it is an area in which clinical functions depend on intelligent collaboration across multiple systems for effective communication among departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many integration and interoperability challenges, such as information collisions, heterogeneous data sources, policy obstacles, and procedure mismanagement. The purpose of this study is to analyse the data, semantic, and pragmatic interoperability of systems integration in a radiology department, and to develop a pragmatic interoperability framework for guiding the integration. We selected an ongoing project at a local hospital as our case study. The project aims to achieve data sharing and interoperability among Radiology Information Systems (RIS), Electronic Patient Records (EPR), and Picture Archiving and Communication Systems (PACS). Qualitative data collection and analysis methods were used. The data sources consisted of documentation, including publications and internal working papers, one year of non-participant observation, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists, and secretaries. We identified four primary phases of the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also develop a pragmatic interoperability framework that summarises the empirical findings and proposes recommendations for guiding integration in the radiology context.
Abstract:
In this study, the authors discuss the effective use of technology to solve the problem of deciding on journey start times under recurrent traffic conditions. The developed algorithm guides vehicles onto more reliable routes that are not easily prone to congestion or travel delays, ensures that the start time is as late as possible so that the traveller does not wait too long at the destination, and attempts to minimise the travel time. Experiments show that, in order to be more certain of reaching their destination on time, a traveller has to leave early and correspondingly arrives early, resulting in a long waiting time. The application developed here asks the user to set this certainty factor according to the task in hand, and computes the best start time and route.
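A minimal sketch of that decision rule, under our own assumptions: treat each route's recurrent travel time as a set of historical samples, take the quantile matching the user's certainty factor as the travel-time budget, and depart as late as that budget allows. Route names and data below are hypothetical.

```python
# Hypothetical sketch of the start-time/route decision: pick the route and
# latest departure that still meet the arrival deadline with the user's
# chosen certainty. Routes and travel-time samples are made up.
import numpy as np

def best_departure(deadline_min, routes, certainty=0.95):
    """Return (route name, latest start in minutes past midnight)."""
    best = None
    for name, samples in routes.items():
        budget = np.quantile(samples, certainty)  # travel-time budget met
        start = deadline_min - budget             # with probability `certainty`
        if best is None or start > best[1]:
            best = (name, start)
    return best

routes = {  # observed travel times (minutes) under recurrent conditions
    "motorway": np.array([34, 36, 41, 55, 70, 38, 36, 90, 35, 37]),
    "back_roads": np.array([48, 50, 49, 52, 51, 50, 53, 49, 50, 51]),
}
route, start = best_departure(deadline_min=540, routes=routes, certainty=0.9)
print(f"take {route}, leave at minute {start:.0f} for a 09:00 arrival")
```

In this made-up example, the steadier back_roads route permits a later departure at a high certainty factor than the faster-on-average motorway, which mirrors the reliability trade-off the abstract describes.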
Abstract:
SOA (Service Oriented Architecture), workflow, the Semantic Web, and Grid computing are key enabling information technologies in the development of increasingly sophisticated e-Science infrastructures and application platforms. While the emergence of Cloud computing as a new computing paradigm has provided new directions and opportunities for e-Science infrastructure development, it also presents challenges. Scientific research increasingly finds it difficult to handle "big data" using traditional data processing techniques. Such challenges demonstrate the need for a comprehensive analysis of how the above-mentioned informatics techniques can be used to develop appropriate e-Science infrastructures and platforms in the context of Cloud computing. This survey paper describes recent research advances in applying informatics techniques to facilitate scientific research, particularly from the Cloud computing perspective. Our contributions include identifying the associated research challenges and opportunities, presenting lessons learned, and describing our future vision for applying Cloud computing to e-Science. We believe our findings can help indicate the future direction of e-Science and can inform funding and research decisions on how to employ computing technologies in scientific research more appropriately. We point out open research issues in the hope of sparking new development and innovation in the e-Science field.
Abstract:
The use of virtualization in high-performance computing (HPC) has been suggested as a means to provide the tailored services and added functionality that many users expect from full-featured Linux cluster environments. The use of virtual machines in HPC can offer several benefits, but maintaining performance is a crucial factor. In some instances the performance criteria are placed above the isolation properties. This selective relaxation of isolation for performance is an important characteristic when considering resilience for HPC environments that employ virtualization. In this paper we consider some of the factors associated with balancing performance and isolation in configurations that employ virtual machines. In this context, we propose a classification of errors based on the concept of "error zones", as well as a detailed analysis of the trade-offs between resilience and performance based on the level of isolation provided by virtualization solutions. Finally, a set of experiments is performed using different virtualization solutions to elucidate the discussion.
Abstract:
The mammalian lignan enterolactone has been shown to reduce the proliferation of the earlier stages of prostate cancer at physiological concentrations in vitro. However, efficacy in the later stages of the disease occurs at concentrations difficult to achieve through dietary modification. We have therefore investigated what concentration(s) of enterolactone can restrict proliferation at multiple stages of prostate cancer, using an in vitro model system of prostate disease. We determined that enterolactone at 20 μM significantly restricted the proliferation of mid- and late-stage models of prostate disease. These effects were strongly associated with changes in the expression of the DNA licensing genes (GMNN, CDT1, MCM2 and MCM7), with reduced expression of the miR-106b cluster (miR-106b, miR-93, and miR-25), and with increased expression of the PTEN tumour suppressor gene. We have shown anti-proliferative effects of enterolactone at earlier stages of prostate disease than previously reported, and that these effects are mediated, in part, by microRNA-mediated regulation.
Abstract:
The aim of the present study was to investigate whether the saliency effect for word beginnings reported in children with dyslexia (Marshall & van der Lely, 2009) can also be found in typically developing (TD) children. Thirty-four TD Italian children aged 8–10 completed two specifically designed tasks: a production task and a perception task. Both tasks used nonwords containing clusters consisting of a plosive plus a liquid (e.g. /pl/). Clusters could be either in a stressed or in an unstressed syllable, and either in initial position (first syllable) or in medial position (second syllable). In the production task, children were asked to repeat the nonwords. In the perception task, children were asked to discriminate between two nonwords differing in one phoneme belonging to a cluster, by reporting whether two repetitions were the same or different. Results from the production task showed that children are more accurate in repeating stressed than unstressed syllables, but there was no difference with respect to the position of the cluster. Results from the perception task showed that children performed more accurately when discriminating word-initial contrasts than word-medial contrasts, especially if the cluster was unstressed. Implications of these findings for clinical assessment are discussed.
Abstract:
We present an overview of the MELODIES project, which is developing new data-intensive environmental services based on data from Earth Observation satellites, government databases, national and European agencies and more. We focus here on the capabilities and benefits of the project’s “technical platform”, which applies cloud computing and Linked Data technologies to enable the development of these services, providing flexibility and scalability.
Abstract:
The launch of the Double Star mission has provided the opportunity to monitor events at distinct locations on the dayside magnetopause, in coordination with the quartet of Cluster spacecraft. We present results of two such coordinated studies. In the first, on 6 April 2004, both Cluster and the Double Star TC-1 spacecraft were on outbound transits through the dawn-side magnetosphere. Cluster observed northward-moving FTEs with +/− polarity, whereas TC-1 saw −/+ polarity FTEs. The strength, motion, and occurrence of the FTE signatures change somewhat according to changes in the IMF clock angle. These observations are consistent with ongoing reconnection on the dayside magnetopause, resulting in a series of flux transfer events (FTEs) seen both at Cluster and at TC-1. The observed polarity and motion of each FTE signature advocate the existence of an active reconnection region consistently located between the positions of Cluster and TC-1, lying north and south of the reconnection line, respectively. This scenario is supported by the application of a model, designed to track flux tube motion, to the prevailing interplanetary conditions. The results from the model confirm the observational evidence that the low-latitude FTE dynamics are sensitive to changes in convected upstream conditions. In particular, changing the interplanetary magnetic field (IMF) clock angle in the model predicts that TC-1 should miss the resulting FTEs more often than Cluster, as is observed. For the second conjunction, on 4 January 2005, the Cluster and TC-1 spacecraft all exited the dusk-side magnetosphere almost simultaneously, with TC-1 lying almost equatorial and Cluster at northern latitudes about 4 RE from TC-1. The spacecraft traversed the magnetopause during a strong reversal of the IMF from northward to southward, and a number of magnetosheath FTE signatures were subsequently observed. One coordinated FTE, studied in detail by Pu et al. [this issue], carries an inflowing energetic electron population and shows a motion and orientation which are similar at all spacecraft and consistent with the predictions of the model for the flux tube dynamics, given a near sub-solar reconnection line. This event can be interpreted either as the passage of two parallel flux tubes arising from adjacent X-line positions, or as a crossing of a single flux tube at different positions.
Abstract:
The recent launch of the equatorial spacecraft of the Double Star mission, TC-1, has provided an unprecedented opportunity to monitor the southern hemisphere dayside magnetopause boundary layer in conjunction with northern hemisphere observations by the quartet of Cluster spacecraft. We present first results of one such situation where, on 6 April 2004, both Cluster and the Double Star TC-1 spacecraft were on outbound transits through the dawn-side magnetosphere. The observations are consistent with ongoing reconnection on the dayside magnetopause, resulting in a series of flux transfer events (FTEs) seen both at Cluster and TC-1, which appear to lie north and south of the reconnection line, respectively. In fact, the observed polarity and motion of each FTE signature advocate the existence of an active reconnection region consistently located between the positions of Cluster and TC-1, with Cluster observing northward-moving FTEs with +/− polarity, whereas TC-1 sees −/+ polarity FTEs. This assertion is further supported by the application of a model designed to track flux tube motion for the prevailing interplanetary conditions. The results from this model show, in addition, that the low-latitude FTE dynamics are sensitive to changes in convected upstream conditions. In particular, changing the interplanetary magnetic field (IMF) clock angle in the model suggests that TC-1 should miss the resulting FTEs more often than Cluster, and this is borne out by the observations.
Abstract:
A realistic representation of North Atlantic tropical cyclone tracks is crucial, as it allows, for example, explaining potential changes in US landfalling systems. Here we present a tentative study which examines the ability of recent climate models to represent North Atlantic tropical cyclone tracks. Tracks from two types of climate models are evaluated: explicit tracks are obtained from tropical cyclones simulated in regional or global climate models with moderate to high horizontal resolution (1° to 0.25°), and downscaled tracks are obtained using a downscaling technique with large-scale environmental fields from a subset of these models. For both configurations, tracks are objectively separated into four groups using a clustering technique, leading to a zonal and a meridional separation of the tracks. The meridional separation largely captures the distinction between deep tropical and sub-tropical, hybrid or baroclinic cyclones, while the zonal separation segregates Gulf of Mexico and Cape Verde storms. The seasonality, intensity and power dissipation index of the tracks in each cluster are documented for both configurations. Our results show that, except for the seasonality, the downscaled tracks better capture the observed characteristics of the clusters. We also use three idealized scenarios to examine possible future changes in tropical cyclone tracks under 1) warming sea surface temperatures, 2) increasing carbon dioxide, and 3) a combination of the two. The response to each scenario varies considerably depending on the simulation considered. Finally, we examine the role of each cluster in these future changes and find no preponderant contribution of any single cluster over the others.
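As a small illustration of the track-grouping step, the sketch below applies a generic k-means clustering to simple per-track summary features. The abstract does not specify the study's clustering method or feature set, so the features and synthetic data here are assumptions made purely for illustration.

```python
# Illustrative grouping of cyclone tracks into four clusters with generic
# k-means on per-track summary features. The study's own clustering method
# and features are not specified in the abstract; everything here is a
# stand-in assumption on synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_tracks = 200
features = np.column_stack([
    rng.uniform(-100, -20, n_tracks),  # genesis longitude (deg E)
    rng.uniform(8, 35, n_tracks),      # genesis latitude (deg N)
    rng.normal(-2.0, 1.0, n_tracks),   # mean zonal displacement per step
    rng.normal(1.0, 0.5, n_tracks),    # mean meridional displacement per step
])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
for k in range(4):
    print(f"cluster {k}: {(labels == k).sum()} tracks")
```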