790 results for Computing clouds
Abstract:
Pocket Data Mining (PDM) is our new term describing collaborative mining of streaming data in mobile and distributed computing environments. With sheer amounts of data streams now available for subscription on smart mobile phones, the potential of using these data for decision making with data stream mining techniques has become achievable owing to the increasing power of handheld devices. Wireless communication among these devices using Bluetooth and WiFi technologies has opened the door wide for collaborative mining among mobile devices within the same range that run data mining techniques targeting the same application. This paper proposes a new architecture, which we have prototyped, for realizing significant applications in this area. We propose using mobile software agents in this application for several reasons; most importantly, the autonomic, intelligent behaviour of agent technology has been the driving force for adopting it here. Further efficiency reasons are discussed in detail in this paper. Experimental results showing the feasibility of the proposed architecture are presented and discussed.
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data and a data warehouse. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular we look at two aspects: first, how grid data management technologies can be used to access the distributed data warehouses; and second, how the grid can be used to transfer analysis programs to the primary repositories --- this is an important and challenging aspect of P-found because the data volumes involved are too large to be centralised. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling new scientific discoveries.
Abstract:
Southern Hemisphere (SH) polar mesospheric clouds (PMCs), also known as noctilucent clouds, have been observed to be more variable and, in general, dimmer than their Northern Hemisphere (NH) counterparts. The precise cause of these hemispheric differences is not well understood. This paper focuses on one aspect of the hemispheric differences: the timing of the PMC season onset. Observations from the Aeronomy of Ice in the Mesosphere satellite indicate that in recent years the date on which the PMC season begins varies much more in the SH than in the NH. Using the Canadian Middle Atmosphere Model, we show that the generation of sufficiently low temperatures necessary for cloud formation in the SH summer polar mesosphere is perturbed by year‐to‐year variations in the timing of the late‐spring breakdown of the SH stratospheric polar vortex. These stratospheric variations, which persist until the end of December, influence the propagation of gravity waves up to the mesosphere. This adds a stratospheric control to the temperatures in the polar mesopause region during early summer, which causes the onset of PMCs to vary from one year to another. This effect is much stronger in the SH than in the NH because the breakdown of the polar vortex occurs much later in the SH, closer in time to the PMC season.
Abstract:
Purpose: This paper aims to design an evaluation method that enables an organization to assess its current IT landscape and provide a readiness assessment prior to Software as a Service (SaaS) adoption. Design/methodology/approach: The research employs a mix of quantitative and qualitative approaches to conduct an IT application assessment. Quantitative data, such as end users' feedback on the IT applications, contribute to the technical impact on efficiency and productivity. Qualitative data, such as business domain, business services, and IT application cost drivers, are used to determine the business value of the IT applications in an organization. Findings: The assessment of IT applications leads to decisions on the suitability of each IT application for migration to a cloud environment. Research limitations/implications: The evaluation of how a particular IT application impacts a business service is based on logical interpretation. A data mining method is suggested in order to derive patterns of the IT application capabilities. Practical implications: This method has been applied in a local council in the UK, helping the council decide the future status of its IT applications for cost-saving purposes.
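The kind of assessment described above combines a quantitative efficiency score with a qualitative business-value score to decide per-application migration suitability. The sketch below illustrates that idea only; all weights, thresholds, field names, and example applications are hypothetical, not the authors' actual evaluation method.

```python
# Hypothetical readiness-scoring sketch: combine a quantitative end-user
# efficiency score with a qualitative business-value score (both 0-1)
# into a single suitability score, then map it to a migration decision.

def migration_suitability(efficiency_score, business_value, business_weight=0.5):
    """Weighted combination of technical and business assessments (0-1)."""
    return (1 - business_weight) * efficiency_score + business_weight * business_value

def classify(score, threshold=0.6):
    """Map a suitability score to a migration decision (threshold is made up)."""
    return "migrate to SaaS" if score >= threshold else "retain on-premises"

# Hypothetical portfolio: name -> (efficiency score, business value).
applications = {
    "payroll":    (0.8, 0.7),
    "legacy GIS": (0.4, 0.3),
}
decisions = {name: classify(migration_suitability(e, b))
             for name, (e, b) in applications.items()}
```

With these made-up scores, "payroll" (0.75) clears the threshold and "legacy GIS" (0.35) does not; a real deployment would calibrate the weights and threshold against the council's cost drivers.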
Abstract:
From geostationary satellite observations of equatorial Africa and the equatorial east Atlantic during May and June 2000 we explore the radiative forcing by deep convective cloud systems in these regions. Deep convective clouds (DCCs) are associated with a mean radiative forcing relative to non–deep convective areas of −39 W m−2 over the Atlantic Ocean and of +13 W m−2 over equatorial Africa (±10 W m−2 in both cases). We show that over land the timing of the daily cycle of convection relative to the daily cycle in solar illumination and surface temperature significantly affects the mean radiative forcing by DCCs. Displacement of the daily cycle of DCC coverage by 2 hours changes their overall radiative effect by ∼10 W m−2, with implications for the simulation of the radiative balance in this region. The timing of the minimum DCC cover over land, close to noon local time, means that the mean radiative forcing is nearly maximized.
Abstract:
During the VOCALS campaign, spaceborne satellite observations showed that travelling gravity wave packets, generated by geostrophic adjustment, resulted in perturbations to marine boundary layer (MBL) clouds over the south-east Pacific Ocean (SEP). Often, these perturbations were reversible: passage of the wave caused the clouds to become brighter (in the wave crest), then darker (in the wave trough), and subsequently to recover their properties. Occasionally, however, the wave packets triggered irreversible changes to the clouds, which transformed from closed mesoscale cellular convection to open form. In this paper we use large eddy simulation (LES) to examine the physical mechanisms that cause this transition. Specifically, we examine whether the clearing of the cloud is due to (i) the wave causing additional cloud-top entrainment of warm, dry air or (ii) the additional condensation of liquid water onto the existing drops and the subsequent formation of drizzle. We find that, although the wave does cause additional drizzle formation, this is not the reason for the persistent clearing of the cloud; rather, it is the additional entrainment of warm, dry air into the cloud followed by a reduction in longwave cooling, although this only has a significant effect when the cloud is starting to decouple from the boundary layer. The result in this case is a change from stratocumulus to a more patchy cloud regime. For the simulations presented here, cloud condensation nuclei (CCN) scavenging did not play an important role in the clearing of the cloud. The results have implications for understanding transitions between the different cellular regimes in MBL clouds.
Abstract:
EVENT has been used to examine the effects of 3D cloud structure, distribution, and inhomogeneity on the scattering of visible solar radiation and the resulting 3D radiation field. Large eddy simulation and aircraft measurements are used to create realistic cloud fields which are continuous or broken with smooth or uneven tops. The values, patterns and variance in the resulting downwelling and upwelling radiation from incident visible solar radiation at different angles are then examined and compared to measurements. The results from EVENT confirm that 3D cloud structure is important in determining the visible radiation field, and that these results are strongly influenced by the solar zenith angle. The results match those from other models using visible solar radiation, and are supported by aircraft measurements of visible radiation, providing confidence in the new model.
Abstract:
We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations. The modified algorithm runs more than 50 times faster on the CELL’s Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60% of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
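The scheduling idea described above, replacing a fixed MPI domain decomposition with a task queue and thread pool over independent air columns, can be sketched as follows. The per-column radiation computation is replaced by a stand-in function, and the four-column SIMD packing is not reproduced; this is an illustration of the scheduling pattern, not the FAMOUS code.

```python
# Sketch of column scheduling via a task queue and thread pool: each air
# column is an independent task consumed by a pool of workers, rather than
# being bound to a fixed domain decomposition.
from concurrent.futures import ThreadPoolExecutor

def radiation_column(column):
    # Stand-in for the per-column radiative transfer computation.
    return sum(level * 0.5 for level in column)

def compute_columns(columns, workers=4):
    # The executor's internal queue plays the role of the task queue;
    # map() preserves the input order of the results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(radiation_column, columns))

# A toy grid of 100 columns with 10 vertical levels each.
grid = [[i + j for j in range(10)] for i in range(100)]
results = compute_columns(grid)
```

Because each column is independent, the same pattern extends naturally to packing several columns per task for SIMD, as the abstract describes.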
The impact of office productivity cloud computing on energy consumption and greenhouse gas emissions
Abstract:
Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gas (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud-based Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the network and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be directly measured at the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts: the power consumption of cloud-based Outlook was 8% lower, and of cloud-based Excel 17% lower, than their traditional equivalents. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third, mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package to the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage using the methods described in this research.
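The three-stage accounting described above (data center, network, end-user device) can be sketched as a toy model. Only the three-stage structure comes from the abstract; all numeric values below are made up for illustration and are not the study's measurements.

```python
# Toy version of three-stage energy accounting: the total energy for one
# activity is the sum of data-centre, network, and end-user-device
# consumption. All figures below are hypothetical.

def total_energy(data_centre_wh, network_wh, device_wh):
    """Total energy (Wh) for one activity across the three stages."""
    return data_centre_wh + network_wh + device_wh

def percent_difference(cloud_wh, standalone_wh):
    """Relative difference of the cloud activity vs the standalone one, in %."""
    return 100.0 * (cloud_wh - standalone_wh) / standalone_wh

cloud = total_energy(1.2, 0.8, 3.0)       # hypothetical cloud activity
standalone = total_energy(0.0, 0.0, 6.0)  # hypothetical standalone activity
diff = percent_difference(cloud, standalone)
```

In this made-up case the cloud activity shifts load off the device but still comes out lower overall; as the abstract shows for Word, the balance can equally tip the other way.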
Abstract:
We have extensively evaluated the response of cloud-base drizzle rate (Rcb; mm day–1) in warm clouds to liquid water path (LWP; g m–2) and to cloud condensation nuclei (CCN) number concentration (NCCN; cm–3), an aerosol proxy. This evaluation is based on a 19-month-long dataset of Doppler radar, lidar, microwave radiometer, and aerosol observing systems from the Atmospheric Radiation Measurement (ARM) Mobile Facility deployments at the Azores and in Germany. Assuming 0.55% supersaturation to calculate NCCN, we found a power-law relationship between Rcb and NCCN, indicating that Rcb decreases by a factor of 2–3 as NCCN increases from 200 to 1000 cm–3 for fixed LWP. Additionally, the precipitation susceptibility to NCCN ranges between 0.5 and 0.9, in agreement with values from simulations and aircraft measurements. Surprisingly, the susceptibility of the probability of precipitation from our analysis is much higher than that from CloudSat estimates, but agrees well with simulations from a multi-scale high-resolution aerosol-climate model. Although scale issues are not completely resolved in the intercomparisons, our results are encouraging, suggesting that it is possible for multi-scale models to accurately simulate the response of LWP to aerosol perturbations.
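The quoted factor-of-2-3 decrease over a fivefold NCCN increase pins down the power-law exponent implied by the abstract; the quick check below derives it (the exponent values are computed here, not stated in the abstract).

```python
import math

# If Rcb is proportional to NCCN**(-s) at fixed LWP, then a decrease by a
# factor f as NCCN grows from 200 to 1000 cm^-3 (a factor of 5) implies
# s = ln(f) / ln(5).
def implied_exponent(decrease_factor, nccn_ratio=5.0):
    return math.log(decrease_factor) / math.log(nccn_ratio)

s_low = implied_exponent(2.0)   # factor-of-2 decrease
s_high = implied_exponent(3.0)  # factor-of-3 decrease
```

This gives exponents of roughly 0.43 to 0.68, of the same order as the 0.5-0.9 precipitation susceptibility range the abstract reports.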
Abstract:
In this paper we propose methods for computing Fresnel integrals based on truncated trapezium rule approximations to integrals on the real line, with the trapezium rules modified to take into account poles of the integrand near the real axis. Our starting point is a method for computation of the error function of complex argument due to Matta and Reichel (J Math Phys 34:298–307, 1956) and Hunter and Regan (Math Comp 26:539–541, 1972). We construct approximations which we prove are exponentially convergent as a function of N, the number of quadrature points, obtaining explicit error bounds which show that accuracies of 10^-15 uniformly on the real line are achieved with N = 12, as confirmed by computations. The approximations we obtain are attractive, additionally, in that they maintain small relative errors for small and large arguments, are analytic on the real axis (echoing the analyticity of the Fresnel integrals), and are straightforward to implement.
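For orientation, the quadrature setting can be illustrated with a much cruder baseline than the paper's scheme: a naive composite trapezium rule applied directly to the Fresnel integrands. This converges only at the usual O(n^-2) rate and involves no pole correction; it is emphatically not the authors' exponentially convergent method.

```python
import math

def fresnel_trapezium(x, n=2000):
    """Naive composite trapezium approximation to the Fresnel integrals
    C(x) = int_0^x cos(pi t^2 / 2) dt and S(x) = int_0^x sin(pi t^2 / 2) dt.
    Plain O(n^-2) quadrature -- a baseline, not the paper's scheme."""
    h = x / n
    # Half-weight the two endpoints, full-weight the interior nodes.
    c = 0.5 * (math.cos(0.0) + math.cos(math.pi * x * x / 2))
    s = 0.5 * (math.sin(0.0) + math.sin(math.pi * x * x / 2))
    for k in range(1, n):
        t = k * h
        c += math.cos(math.pi * t * t / 2)
        s += math.sin(math.pi * t * t / 2)
    return h * c, h * s

C1, S1 = fresnel_trapezium(1.0)  # C(1) ~ 0.77989, S(1) ~ 0.43826
```

Reaching 10^-15 this way would require enormous n, which is precisely why the paper's modified trapezium rule on the real line, with its exponential convergence at N = 12, is attractive.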
Abstract:
In this study, the authors discuss the effective use of technology to solve the problem of deciding on journey start times for recurrent traffic conditions. The developed algorithm guides vehicles to travel on more reliable routes that are not easily prone to congestion or travel delays, ensures that the start time is as late as possible so that the traveller does not wait too long at their destination, and attempts to minimise the travel time. Experiments show that, in order to be more certain of reaching their destination on time, a traveller has to leave early and correspondingly arrive early, resulting in a long waiting time. The application developed here asks the user to set this certainty factor to suit the task at hand, and computes the best start time and route.
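The start-time decision described above can be sketched as follows: for each candidate start time, estimate the probability of on-time arrival from historical travel times for the route, then pick the latest start meeting the user's certainty factor. The data and helper names below are hypothetical illustrations, not the paper's algorithm.

```python
# Hypothetical sketch: choose the latest start time whose on-time
# probability (estimated from recurrent travel-time history) meets the
# user's certainty factor. Times are in minutes past a reference hour.

def on_time_probability(start, travel_times, deadline):
    """Fraction of historical trips begun at `start` arriving by `deadline`."""
    arrivals = [start + t for t in travel_times]
    return sum(a <= deadline for a in arrivals) / len(arrivals)

def best_start_time(candidates, history, deadline, certainty=0.9):
    """Latest candidate start meeting the certainty factor (else earliest)."""
    feasible = [s for s in candidates
                if on_time_probability(s, history[s], deadline) >= certainty]
    return max(feasible) if feasible else min(candidates)

# Made-up recurrent travel times (minutes) per start time; deadline at 60.
history = {
    0:  [40, 42, 45, 50],   # leave very early: always on time
    10: [40, 43, 46, 49],   # still on time in every recorded case
    20: [38, 41, 44, 47],   # 20 + 47 = 67 > 60: one late trip
}
start = best_start_time(sorted(history), history, deadline=60, certainty=0.9)
```

Lowering the certainty factor makes later starts feasible at the cost of occasional lateness, which mirrors the trade-off the experiments in the paper report.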
Abstract:
SOA (Service Oriented Architecture), workflow, the Semantic Web, and Grid computing are key enabling information technologies in the development of increasingly sophisticated e-Science infrastructures and application platforms. While the emergence of Cloud computing as a new computing paradigm has provided new directions and opportunities for e-Science infrastructure development, it also presents some challenges. Scientific research increasingly finds it difficult to handle “big data” using traditional data processing techniques. Such challenges demonstrate the need for a comprehensive analysis of how the above-mentioned informatics techniques can be used to develop appropriate e-Science infrastructures and platforms in the context of Cloud computing. This survey paper describes recent research advances in applying informatics techniques to facilitate scientific research, particularly from the Cloud computing perspective. Our particular contributions include identifying associated research challenges and opportunities, presenting lessons learned, and describing our future vision for applying Cloud computing to e-Science. We believe our research findings can help indicate the future trend of e-Science, and can inform funding and research directions on how to more appropriately employ computing technologies in scientific research. We point out open research issues in the hope of sparking new development and innovation in the e-Science field.
Abstract:
The use of virtualization in high-performance computing (HPC) has been suggested as a means to provide tailored services and the added functionality that many users expect from full-featured Linux cluster environments. The use of virtual machines in HPC can offer several benefits, but maintaining performance is a crucial factor. In some instances the performance criteria are placed above the isolation properties. This selective relaxation of isolation for performance is an important characteristic when considering resilience for HPC environments that employ virtualization. In this paper we consider some of the factors associated with balancing performance and isolation in configurations that employ virtual machines. In this context, we propose a classification of errors based on the concept of “error zones”, as well as a detailed analysis of the trade-offs between resilience and performance based on the level of isolation provided by virtualization solutions. Finally, a set of experiments is performed using different virtualization solutions to elucidate the discussion.