127 results for cloud computing
Abstract:
Liquid clouds play a profound role in the global radiation budget but it is difficult to remotely retrieve their vertical profile. Ordinary narrow field-of-view (FOV) lidars receive a strong return from such clouds but the information is limited to the first few optical depths. Wide-angle multiple-FOV lidars can isolate radiation scattered multiple times before returning to the instrument, often penetrating much deeper into the cloud than the singly-scattered signal. These returns potentially contain information on the vertical profile of the extinction coefficient, but are challenging to interpret due to the lack of a fast radiative transfer model for simulating them. This paper describes a variational algorithm that incorporates a fast forward model based on the time-dependent two-stream approximation, and its adjoint. Application of the algorithm to simulated data from a hypothetical airborne three-FOV lidar with a maximum footprint width of 600 m suggests that this approach should be able to retrieve the extinction structure down to an optical depth of around 6, and total optical depth up to at least 35, depending on the maximum lidar FOV. The convergence behavior of Gauss-Newton and quasi-Newton optimization schemes is compared. We then present results from an application of the algorithm to observations of stratocumulus by the 8-FOV airborne “THOR” lidar. It is demonstrated how the averaging kernel can be used to diagnose the effective vertical resolution of the retrieved profile, and therefore the depth to which information on the vertical structure can be recovered. This work enables returns from spaceborne lidar and radar that are subject to multiple scattering to be exploited more rigorously than previously possible.
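For context, retrievals of this kind are typically solved with Gauss-Newton (optimal estimation) iterations. The Python sketch below is a generic illustration of such a scheme rather than the paper's algorithm: forward_model, jacobian, the prior x_prior and the covariance matrices S_a and S_y are placeholders for the quantities a real retrieval would supply, and the averaging kernel at the end corresponds to the diagnostic mentioned in the abstract.

    # Generic Gauss-Newton loop for a variational (optimal estimation) retrieval.
    # forward_model, jacobian and the covariances are placeholders, not the
    # paper's time-dependent two-stream model and adjoint.
    import numpy as np

    def gauss_newton_retrieval(forward_model, jacobian, y_obs, x_prior,
                               S_a, S_y, n_iter=10, tol=1e-4):
        S_a_inv = np.linalg.inv(S_a)
        S_y_inv = np.linalg.inv(S_y)
        x = x_prior.copy()
        for _ in range(n_iter):
            y = forward_model(x)          # simulated multiple-FOV lidar signal
            J = jacobian(x)               # Jacobian dy/dx (from the adjoint)
            hessian = J.T @ S_y_inv @ J + S_a_inv
            gradient = J.T @ S_y_inv @ (y_obs - y) - S_a_inv @ (x - x_prior)
            dx = np.linalg.solve(hessian, gradient)
            x = x + dx
            if np.max(np.abs(dx)) < tol:
                break
        # Averaging kernel: its rows indicate the effective vertical resolution
        A = np.linalg.solve(hessian, J.T @ S_y_inv @ J)
        return x, A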
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have a sufficient scientific workforce to develop and maintain the software and data analysis infrastructure. Such facilities will enable investigation of what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level; current limits on computing power have placed severe constraints on such an investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions based on our best scientific knowledge and the most advanced technology.
Abstract:
Pocket Data Mining (PDM) is our new term describing collaborative mining of streaming data in mobile and distributed computing environments. With sheer amounts of streaming data now available for subscription on our smart mobile phones, the potential of using these data for decision making with data stream mining techniques has become achievable owing to the increasing power of these handheld devices. Wireless communication among these devices using Bluetooth and WiFi technologies has opened the door wide for collaborative mining among mobile devices within the same range that are running data mining techniques targeting the same application. This paper proposes a new architecture that we have prototyped for realizing significant applications in this area. We propose using mobile software agents in this application for several reasons. Most importantly, the autonomic, intelligent behaviour of agent technology has been the driving force for using it in this application. Other efficiency reasons are discussed in detail in this paper. Experimental results showing the feasibility of the proposed architecture are presented and discussed.
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data and a data warehouse. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular we look at two aspects: first, how grid data management technologies can be used to access the distributed data warehouses; and second, how the grid can be used to transfer analysis programs to the primary repositories. The latter is an important and challenging aspect of P-found because the data volumes involved are too large to be centralised. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling new scientific discoveries.
Abstract:
Southern Hemisphere (SH) polar mesospheric clouds (PMCs), also known as noctilucent clouds, have been observed to be more variable and, in general, dimmer than their Northern Hemisphere (NH) counterparts. The precise cause of these hemispheric differences is not well understood. This paper focuses on one aspect of the hemispheric differences: the timing of the PMC season onset. Observations from the Aeronomy of Ice in the Mesosphere satellite indicate that in recent years the date on which the PMC season begins varies much more in the SH than in the NH. Using the Canadian Middle Atmosphere Model, we show that the generation of sufficiently low temperatures necessary for cloud formation in the SH summer polar mesosphere is perturbed by year‐to‐year variations in the timing of the late‐spring breakdown of the SH stratospheric polar vortex. These stratospheric variations, which persist until the end of December, influence the propagation of gravity waves up to the mesosphere. This adds a stratospheric control to the temperatures in the polar mesopause region during early summer, which causes the onset of PMCs to vary from one year to another. This effect is much stronger in the SH than in the NH because the breakdown of the polar vortex occurs much later in the SH, closer in time to the PMC season.
Abstract:
Purpose: This paper aims to design an evaluation method that enables an organization to assess its current IT landscape and its readiness prior to Software as a Service (SaaS) adoption. Design/methodology/approach: The research employs a mix of quantitative and qualitative approaches for conducting an IT application assessment. Quantitative data, such as end users' feedback on the IT applications, contribute to assessing the technical impact on efficiency and productivity. Qualitative data, such as business domain, business services and IT application cost drivers, are used to determine the business value of the IT applications in an organization. Findings: The assessment of IT applications leads to decisions on the suitability of each IT application for migration to a cloud environment. Research limitations/implications: The evaluation of how a particular IT application impacts a business service is based on logical interpretation. A data mining method is suggested in order to derive patterns of IT application capabilities. Practical implications: This method has been applied in a local council in the UK, helping the council to decide the future status of its IT applications for cost-saving purposes.
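As a rough, purely hypothetical illustration of how such an assessment can be turned into a migration decision (the paper does not publish a scoring formula, and all weights, thresholds and attribute names below are invented for the sketch):

    # Hypothetical SaaS-readiness decision matrix; weights, thresholds and
    # attributes are illustrative only and not taken from the paper.
    from dataclasses import dataclass

    @dataclass
    class ITApplication:
        name: str
        user_satisfaction: float    # quantitative end-user feedback, 0-1
        productivity_impact: float  # quantitative efficiency impact, 0-1
        business_value: float       # qualitative assessment mapped to 0-1

    def recommendation(app, tech_weight=0.5, value_weight=0.5, threshold=0.6):
        technical = 0.5 * app.user_satisfaction + 0.5 * app.productivity_impact
        score = tech_weight * technical + value_weight * app.business_value
        return "migrate to SaaS" if score >= threshold else "retain on-premises"

    for app in [ITApplication("payroll", 0.8, 0.7, 0.9),
                ITApplication("legacy GIS", 0.4, 0.5, 0.3)]:
        print(app.name, "->", recommendation(app))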
Abstract:
From geostationary satellite observations of equatorial Africa and the equatorial east Atlantic during May and June 2000 we explore the radiative forcing by deep convective cloud systems in these regions. Deep convective clouds (DCCs) are associated with a mean radiative forcing relative to non–deep convective areas of −39 W m−2 over the Atlantic Ocean and of +13 W m−2 over equatorial Africa (±10 W m−2 in both cases). We show that over land the timing of the daily cycle of convection relative to the daily cycle in solar illumination and surface temperature significantly affects the mean radiative forcing by DCCs. Displacement of the daily cycle of DCC coverage by 2 hours changes their overall radiative effect by ∼10 W m−2, with implications for the simulation of the radiative balance in this region. The timing of the minimum DCC cover over land, close to noon local time, means that the mean radiative forcing is nearly maximized.
Abstract:
During the VOCALS campaign, satellite observations showed that travelling gravity wave packets, generated by geostrophic adjustment, resulted in perturbations to marine boundary layer (MBL) clouds over the south-east Pacific Ocean (SEP). Often, these perturbations were reversible, in that passage of the wave resulted in the clouds becoming brighter (in the wave crest), then darker (in the wave trough), and subsequently recovering their properties after the passage of the wave. However, occasionally the wave packets triggered irreversible changes to the clouds, which transformed from closed mesoscale cellular convection to open form. In this paper we use large eddy simulation (LES) to examine the physical mechanisms that cause this transition. Specifically, we examine whether the clearing of the cloud is due to (i) the wave causing additional cloud-top entrainment of warm, dry air or (ii) the additional condensation of liquid water onto the existing drops and the subsequent formation of drizzle. We find that, although the wave does cause additional drizzle formation, this is not the reason for the persistent clearing of the cloud; rather, it is the additional entrainment of warm, dry air into the cloud followed by a reduction in longwave cooling, although this only has a significant effect when the cloud is starting to decouple from the boundary layer. The result in this case is a change from a stratocumulus to a more patchy cloud regime. For the simulations presented here, cloud condensation nuclei (CCN) scavenging did not play an important role in the clearing of the cloud. The results have implications for understanding transitions between the different cellular regimes in MBL clouds.
Abstract:
EVENT has been used to examine the effects of 3D cloud structure, distribution, and inhomogeneity on the scattering of visible solar radiation and the resulting 3D radiation field. Large eddy simulation and aircraft measurements are used to create realistic cloud fields which are continuous or broken with smooth or uneven tops. The values, patterns and variance in the resulting downwelling and upwelling radiation from incident visible solar radiation at different angles are then examined and compared to measurements. The results from EVENT confirm that 3D cloud structure is important in determining the visible radiation field, and that these results are strongly influenced by the solar zenith angle. The results match those from other models using visible solar radiation, and are supported by aircraft measurements of visible radiation, providing confidence in the new model.
Abstract:
We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations. The modified algorithm runs more than 50 times faster on the CELL’s Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60% of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
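As a schematic illustration of the scheduling pattern described (a shared task queue feeding a pool of worker threads, each processing one air column at a time), the Python sketch below captures the idea; compute_column and the column data are placeholders and this is not the FAMOUS radiation code itself.

    # Task-queue / thread-pool scheduling of per-column work (illustrative only).
    import queue
    import threading

    def compute_column(column):
        # Stand-in for the per-column radiation computation.
        return sum(column) / len(column)

    def worker(tasks, results):
        while True:
            item = tasks.get()
            if item is None:           # sentinel: no more columns
                tasks.task_done()
                break
            index, column = item
            results[index] = compute_column(column)
            tasks.task_done()

    def run_pool(columns, n_threads=4):
        tasks = queue.Queue()
        results = [None] * len(columns)
        threads = [threading.Thread(target=worker, args=(tasks, results))
                   for _ in range(n_threads)]
        for t in threads:
            t.start()
        for i, col in enumerate(columns):
            tasks.put((i, col))
        for _ in threads:
            tasks.put(None)            # one sentinel per worker
        tasks.join()
        for t in threads:
            t.join()
        return results

    print(run_pool([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))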
The impact of office productivity cloud computing on energy consumption and greenhouse gas emissions
Abstract:
Cloud computing is usually regarded as being energy efficient and thus emitting less greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud-based Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected, and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be directly measured for the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts. The power consumption of the cloud-based Outlook and Excel was 8% and 17% lower, respectively, than that of their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than that of its traditional equivalent. A third, mixed access method was also measured for Word; it emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG emissions. Direct conversion from the standalone package to the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage, using the methods described in this research.
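To make the three-stage accounting concrete, the minimal Python sketch below shows the form of the bookkeeping; the energy figures and the grid emission factor are illustrative placeholders, not the study's measurements.

    # Illustrative three-stage energy and GHG bookkeeping for one activity.
    # All figures and the emission factor are placeholders, not measured values.
    EMISSION_FACTOR_KG_PER_KWH = 0.5    # hypothetical grid carbon intensity

    def activity_footprint(datacenter_kwh, network_kwh, device_kwh):
        total_kwh = datacenter_kwh + network_kwh + device_kwh
        return total_kwh, total_kwh * EMISSION_FACTOR_KG_PER_KWH

    cloud_kwh, cloud_ghg = activity_footprint(0.010, 0.004, 0.020)
    local_kwh, local_ghg = activity_footprint(0.000, 0.000, 0.040)
    print(f"cloud:      {cloud_kwh:.3f} kWh, {cloud_ghg:.3f} kg CO2e")
    print(f"standalone: {local_kwh:.3f} kWh, {local_ghg:.3f} kg CO2e")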
Abstract:
We have extensively evaluated the response of the cloud-base drizzle rate (Rcb; mm day–1) in warm clouds to liquid water path (LWP; g m–2) and to cloud condensation nuclei (CCN) number concentration (NCCN; cm–3), an aerosol proxy. This evaluation is based on a 19-month dataset of Doppler radar, lidar, microwave radiometer and aerosol observing system measurements from the Atmospheric Radiation Measurement (ARM) Mobile Facility deployments at the Azores and in Germany. Assuming 0.55% supersaturation to calculate NCCN, we found a power-law dependence of Rcb on LWP and NCCN, indicating that Rcb decreases by a factor of 2–3 as NCCN increases from 200 to 1000 cm–3 for fixed LWP. Additionally, the precipitation susceptibility to NCCN ranges between 0.5 and 0.9, in agreement with values from simulations and aircraft measurements. Surprisingly, the susceptibility of the probability of precipitation from our analysis is much higher than that from CloudSat estimates, but agrees well with simulations from a multi-scale high-resolution aerosol-climate model. Although scale issues are not completely resolved in the intercomparisons, our results are encouraging, suggesting that it is possible for multi-scale models to accurately simulate the response of LWP to aerosol perturbations.
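For reference, precipitation susceptibility is commonly defined as S = −dlnR/dlnNCCN at fixed LWP, and a power-law fit of this kind can be recovered by least squares in log-log space. The Python sketch below uses synthetic data purely to show the form of the calculation; the exponents are invented, not the paper's values.

    # Fit ln(Rcb) = b0 + b_lwp*ln(LWP) + b_nccn*ln(NCCN) by least squares.
    # The synthetic data and exponents are placeholders for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    lwp = rng.uniform(50.0, 300.0, 500)              # g m-2
    nccn = rng.uniform(100.0, 1000.0, 500)           # cm-3
    true_S = 0.7                                     # hypothetical susceptibility
    rcb = 1e-3 * lwp**1.5 * nccn**(-true_S) * rng.lognormal(0.0, 0.2, 500)

    X = np.column_stack([np.ones_like(lwp), np.log(lwp), np.log(nccn)])
    (b0, b_lwp, b_nccn), *_ = np.linalg.lstsq(X, np.log(rcb), rcond=None)
    print(f"LWP exponent: {b_lwp:.2f}, susceptibility S = {-b_nccn:.2f}")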
Abstract:
In this paper we propose methods for computing Fresnel integrals based on truncated trapezium rule approximations to integrals on the real line, with the trapezium rules modified to take account of poles of the integrand near the real axis. Our starting point is a method for the computation of the error function of complex argument due to Matta and Reichel (J Math Phys 34:298–307, 1956) and Hunter and Regan (Math Comp 26:539–541, 1972). We construct approximations which we prove are exponentially convergent as a function of N, the number of quadrature points, obtaining explicit error bounds which show that accuracies of 10⁻¹⁵ uniformly on the real line are achieved with N = 12, as confirmed by computations. The approximations we obtain are attractive, additionally, in that they maintain small relative errors for small and large arguments, are analytic on the real axis (echoing the analyticity of the Fresnel integrals), and are straightforward to implement.
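The method builds on the computation of the error function of complex argument; as a quick numerical check of that underlying connection (not an implementation of the paper's modified trapezium rule), the identity C(x) + iS(x) = ((1+i)/2)·erf((√π/2)(1−i)x) can be verified against a library routine, here assumed to be SciPy's.

    # Check C(x) + i*S(x) = (1+i)/2 * erf(sqrt(pi)/2 * (1-i) * x) against
    # scipy's Fresnel routine (convention: integrals of cos/sin(pi*t^2/2)).
    import numpy as np
    from scipy.special import erf, fresnel

    x = np.linspace(-5.0, 5.0, 101)
    S, C = fresnel(x)                                  # scipy returns (S, C)
    rhs = (1 + 1j) / 2 * erf(np.sqrt(np.pi) / 2 * (1 - 1j) * x)
    print(np.max(np.abs(C - rhs.real)))                # ~1e-16
    print(np.max(np.abs(S - rhs.imag)))                # ~1e-16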
Abstract:
In this study, the authors discuss the effective use of technology to solve the problem of deciding on journey start times for recurrent traffic conditions. The developed algorithm guides vehicles to travel on more reliable routes that are not easily prone to congestion or travel delays, ensures that the start time is as late as possible to avoid the traveller waiting too long at their destination, and attempts to minimise the travel time. Experiments show that, in order to be more certain of reaching their destination on time, a traveller has to leave early and correspondingly arrive early, resulting in a large waiting time. The application developed here asks the user to set this certainty factor according to the task in hand, and computes the best start time and route.
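One natural way to formalise the trade-off described is to choose, for each candidate route, the latest departure that still meets the arrival deadline with the requested certainty, estimated from historical travel times. The Python sketch below is a generic illustration of that idea, not the paper's algorithm; the routes and travel-time samples are hypothetical.

    # Choose route and departure time so the deadline is met with the requested
    # certainty while leaving as late as possible. Sample data are hypothetical.
    import numpy as np

    def latest_departure(routes, deadline_min, certainty=0.9):
        best = None
        for name, samples in routes.items():
            # Travel time to budget so we arrive on time with this certainty
            budget = np.quantile(np.asarray(samples), certainty)
            departure = deadline_min - budget
            if best is None or departure > best[1]:
                best = (name, departure, budget)
        return best

    routes = {
        "motorway":   [35, 38, 36, 70, 40, 37, 65, 39],  # fast but unreliable
        "back roads": [48, 50, 49, 52, 51, 47, 50, 49],  # slower but consistent
    }
    name, depart, budget = latest_departure(routes, deadline_min=540)
    print(f"take {name}, leave at minute {depart:.0f} (budget {budget:.0f} min)")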