969 results for Point cloud
Abstract:
The purpose of this paper is to empirically examine the state of cloud computing adoption in Australia. I specifically focus on the drivers, risks, and benefits of cloud computing from the perspective of IT experts and forensic accountants. I use thematic analysis of interview data to answer the research questions of the study. The findings suggest that cloud computing is increasingly gaining a foothold in many sectors due to advantages such as flexibility and speed of deployment. However, security remains an issue, and therefore its adoption is likely to be selective and phased. Of particular concern is the involvement of third parties and foreign jurisdictions, which in the event of damage may complicate litigation and forensic investigations. This is one of the first empirical studies that reports on cloud computing adoption and experiences in Australia.
Abstract:
The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it by comparing it with several other heuristics in terms of solution quality and computation time on a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics. Also, we verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement generated by our heuristic algorithm for a benchmark problem with a conventional mapper/reducer placement. The comparison results show that the computation using our mapper/reducer placement is much cheaper while still satisfying the computation deadline.
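The abstract does not spell out the proposed heuristic, but the bin-packing framing it uses can be illustrated with a minimal first-fit-decreasing sketch in Python. All names (task_demands, machine_capacity) are hypothetical, and the code is only a sketch of the problem framing, not the authors' algorithm.

```python
# Minimal first-fit-decreasing sketch of mapper/reducer placement viewed as bin
# packing: each task has a resource demand, each machine a fixed capacity.
# Illustrative only; this is not the heuristic proposed in the paper.

def place_tasks(task_demands, machine_capacity):
    """Assign tasks (mapper/reducer demands) to machines using first-fit decreasing."""
    machines = []    # remaining capacity of each opened machine
    placement = {}   # task index -> machine index
    # Largest demands first tends to use fewer machines.
    for task in sorted(range(len(task_demands)), key=lambda i: -task_demands[i]):
        demand = task_demands[task]
        for m, free in enumerate(machines):
            if free >= demand:                       # first machine with enough room
                machines[m] -= demand
                placement[task] = m
                break
        else:                                        # no machine fits: open a new one
            machines.append(machine_capacity - demand)
            placement[task] = len(machines) - 1
    return placement, len(machines)

# Example: six map/reduce tasks placed on machines with 8 units of capacity each.
assignment, used = place_tasks([5, 4, 3, 3, 2, 1], machine_capacity=8)
print(assignment, "machines used:", used)
```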
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines, in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation. From the computational point of view, the mappers/reducers placement problem is a generalization of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new grouping genetic algorithm for the mappers/reducers placement problem in cloud computing. Compared with the original one, our grouping genetic algorithm uses an innovative coding scheme and also eliminates the inversion operator, which is an essential operator in the original grouping genetic algorithm. The new grouping genetic algorithm is evaluated by experiments and the experimental results show that it is much more efficient than four popular algorithms for the problem, including the original grouping genetic algorithm.
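As a hedged illustration of what a group-based (grouping-GA style) representation for this placement problem can look like, the sketch below encodes a chromosome as the list of task sets assigned to each machine and scores it with Falkenauer's classic bin-packing fitness. The paper's actual coding scheme and operators are not given in the abstract; all names and values here are illustrative.

```python
import random

# Hedged sketch of a group-based (grouping-GA style) representation: a chromosome
# is a list of groups, each group being the set of tasks placed on one machine.
# The fitness is Falkenauer's bin-packing fitness; operators and parameters are
# illustrative, not those of the paper.

def fitness(groups, demands, capacity, k=2.0):
    """Reward chromosomes whose machines are well filled while using few machines."""
    fills = [sum(demands[t] for t in g) for g in groups]
    return sum((f / capacity) ** k for f in fills) / len(groups)

def mutate(groups, demands, capacity):
    """Dissolve one random group and reinsert its tasks first-fit into the others."""
    groups = [set(g) for g in groups]                # work on a copy
    victim = groups.pop(random.randrange(len(groups)))
    for task in victim:
        for g in groups:
            if sum(demands[t] for t in g) + demands[task] <= capacity:
                g.add(task)
                break
        else:
            groups.append({task})
    return groups

demands = [5, 4, 3, 3, 2, 1]
chromosome = [{0, 2}, {1, 3}, {4, 5}]                # three machines of capacity 8
print(fitness(chromosome, demands, 8))
print(mutate(chromosome, demands, 8))
```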
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher as well as the level of confidence in the model being analyzed is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL, it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferable. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as verify the conversion tools, a series of comparisons were done between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties such as elastic stiffness and damping through a free vibration analysis as well as more complex structural properties such as overall structural capacity through a pushover analysis. These analyses showed a very strong agreement between the two software packages on every aspect of each analysis. However, these analyses also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a software package more capable of conducting highly nonlinear analysis, called Perform. These analyses again showed a very strong agreement between the two software packages in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, two-bay chevron brace frame, and twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
Following this, a final study was done on Hall’s U20 structure [1] where the structure was analyzed in all three software packages and their results compared. The pushover curves from each software package were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
Abstract:
The increasing penetration rate of feature-rich mobile devices such as smartphones and tablets in the global population has resulted in a large number of applications and services being created or modified to support mobile devices. Mobile cloud computing is a proposed paradigm to address the resource scarcity of mobile devices in the face of demand for more computing-intensive tasks. Several approaches have been proposed to confront the challenges of mobile cloud computing, but none has used the user experience as the primary focus point. In this paper we evaluate these approaches with respect to the user experience, propose what future research in this area requires in order to address this crucial aspect, and introduce our own solution.
On the modelling of the thermal interactions between a spray curtain and an impinging cold gas cloud
Abstract:
A mixed Lagrangian-Eulerian model of a Water Curtain barrier is presented. The heat, mass and momentum processes are modelled in a Lagrangian framework for the dispersed phase and in an Eulerian framework for the carrier phase. The derivation of the coupling source terms is illustrated with reference to a given carrier phase cell. The turbulent character of the flow is treated with a single-equation model, modified to directly account for the influence of the particles on the flow. The model is implemented in the form of a 2-D incompressible Navier-Stokes solver, coupled to an adaptive Runge-Kutta method for the Lagrangian sub-system. Simulations of a free-standing full-cone water spray show satisfactory agreement with experiment. Predictions of a Water Curtain barrier impacted by a cold gas cloud point to markedly different flow fields for the upward and downward configurations, which could influence the effectiveness of chemical absorption in the liquid phase.
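As a hedged sketch of the Lagrangian sub-system idea only (not the paper's model), a single droplet's velocity can be advanced with an adaptive Runge-Kutta integrator such as SciPy's RK45, here with a simple Stokes-drag response time and gravity; the drag closure and all parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch of a Lagrangian droplet update with an adaptive Runge-Kutta
# integrator (SciPy's RK45): dv/dt = (u_gas - v)/tau - g, i.e. Stokes drag plus
# gravity for the vertical velocity of one droplet. All values are illustrative.

def droplet_rhs(t, v, u_gas, tau, g=9.81):
    return (u_gas - v) / tau - g

tau = 0.01      # droplet response time [s], illustrative
u_gas = -2.0    # local gas velocity seen by the droplet [m/s], illustrative
sol = solve_ivp(droplet_rhs, (0.0, 0.2), [0.0], args=(u_gas, tau),
                method="RK45", rtol=1e-6)
print(sol.t[-1], sol.y[0, -1])   # velocity relaxes towards u_gas - g * tau
```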
Abstract:
We present Westerbork Synthesis Radio Telescope H I images, Lovell telescope multibeam H I wide-field mapping, William Herschel Telescope long-slit echelle Ca II observations, Wisconsin Hα Mapper (WHAM) facility images, and IRAS ISSA 60- and 100-μm co-added images towards the intermediate-velocity cloud (IVC) at +70 km s^-1, located in the general direction of the M15 globular cluster. When combined with previously published Arecibo data, the H I gas in the IVC is found to be clumpy, with a peak H I column density of ~1.5 × 10^20 cm^-2, an inferred volume density (assuming spherical symmetry) of ~24 cm^-3 / D(kpc), and a maximum brightness temperature at a resolution of 81 × 14 arcsec^2 of 14 K. The major axis of this part of the IVC lies approximately parallel to the Galactic plane, as does the low-velocity H I gas and IRAS emission. The H I gas in the cloud is warm, with a minimum full width at half-maximum velocity width of 5 km s^-1 corresponding to a kinetic temperature, in the absence of turbulence, of ~540 K. From the H I data, there are indications of two-component velocity structure. Similarly, the Ca II spectra, of resolution 7 km s^-1, also show tentative evidence of velocity structure, perhaps indicative of cloudlets. Assuming that there are no unresolved narrow-velocity components, the mean values of log10[N(Ca II K) cm^2] ~ 12.0 and Ca II/H I ~ 2.5 × 10^-8 are typical of observations of high Galactic latitude clouds. This compares with a value of Ca II/H I > 10^-6 for IVC absorption towards HD 203664, a halo star at a distance of 3 kpc, some 3.1° from the main M15 IVC condensation. The main IVC condensation is detected by WHAM in Hα with central local-standard-of-rest velocities of ~60-70 km s^-1, and intensities uncorrected for Galactic extinction of up to 1.3 R, indicating that the gas is partially ionized. The FWHM values of the Hα IVC component, at a resolution of 1°, exceed 30 km s^-1. This is some 10 km s^-1 larger than the corresponding H I value at a similar resolution, and indicates that the two components may not be mixed. However, the spatial and velocity coincidence of the Hα and H I peaks in emission towards the main IVC component is qualitatively good. If the Hα emission is caused solely by photoionization, the Lyman continuum flux towards the main IVC condensation is ~2.7 × 10^6 photon cm^-2 s^-1. There is no corresponding IVC Hα detection towards the halo star HD 203664 at velocities exceeding ~60 km s^-1. Finally, both the 60- and 100-μm IRAS images show spatial coincidence, over a 0.675 × 0.625 deg^2 field, with both low- and intermediate-velocity H I gas (previously observed with the Arecibo telescope), indicating that the IVC may contain dust. Both the Hα and tentative IRAS detections discriminate this IVC from high-velocity clouds, although the H I properties do not. When combined with the H I and optical results, these data point to a Galactic origin for at least parts of this IVC.
Abstract:
Increasingly, infrastructure providers are supplying the cloud marketplace with storage and on-demand compute resources to host cloud applications. From an application user's point of view, it is desirable to identify the most appropriate set of available resources on which to execute an application. Resource choice can be complex and may involve comparing available hardware specifications, operating systems, value-added services, such as network configuration or data replication, and operating costs, such as hosting cost and data throughput. Providers' cost models often change and new commodity cost models, such as spot pricing, have been introduced to offer significant savings. In this paper, a software abstraction layer is used to discover infrastructure resources for a particular application, across multiple providers, by using a two-phase constraints-based approach. In the first phase, a set of possible infrastructure resources is identified for a given application. In the second phase, a heuristic is used to select the most appropriate resources from the initial set. For some applications a cost-based heuristic is most appropriate; for others a performance-based heuristic may be used. A financial services application and a high performance computing application are used to illustrate the execution of the proposed resource discovery mechanism. The experimental results show that the proposed model can dynamically select an appropriate set of resources that matches the application's requirements.
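The two-phase idea described above can be sketched as follows: phase one keeps only the resources that satisfy the application's hard constraints, and phase two ranks the survivors with either a cost-based or a performance-based heuristic. The resource fields (cores, memory_gb, price_per_hour, benchmark_score) are hypothetical, not a real provider API.

```python
# Hedged sketch of the two-phase resource discovery described above. Phase 1:
# constraint filtering; phase 2: heuristic ranking (cost or performance).
# Field names are hypothetical, not a real provider API.

def discover(resources, min_cores, min_memory_gb, heuristic="cost"):
    feasible = [r for r in resources
                if r["cores"] >= min_cores and r["memory_gb"] >= min_memory_gb]
    if not feasible:
        return None
    if heuristic == "cost":
        return min(feasible, key=lambda r: r["price_per_hour"])     # cheapest feasible
    return max(feasible, key=lambda r: r["benchmark_score"])        # fastest feasible

catalogue = [
    {"name": "provider-a.small", "cores": 2, "memory_gb": 4,  "price_per_hour": 0.10, "benchmark_score": 30},
    {"name": "provider-b.large", "cores": 8, "memory_gb": 32, "price_per_hour": 0.80, "benchmark_score": 95},
    {"name": "provider-a.spot",  "cores": 8, "memory_gb": 16, "price_per_hour": 0.25, "benchmark_score": 90},
]
print(discover(catalogue, min_cores=4, min_memory_gb=16, heuristic="cost")["name"])
print(discover(catalogue, min_cores=4, min_memory_gb=16, heuristic="performance")["name"])
```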
Abstract:
The identification, tracking, and statistical analysis of tropical convective complexes using satellite imagery are explored in the context of identifying feature points suitable for tracking. The feature points are determined based on the shape of the complexes using the distance transform technique. This approach has been applied to the determination of feature points for tropical convective complexes identified in a time series of global cloud imagery. The feature points are used to track the complexes, and from the tracks statistical diagnostic fields are computed. This approach allows the nature and distribution of organized deep convection in the Tropics to be explored.
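A hedged sketch of the distance-transform idea, assuming a binary cloud mask: each labelled complex contributes one feature point, taken where the Euclidean distance to the complex boundary is largest. The thresholds and array names are illustrative, not those of the study.

```python
import numpy as np
from scipy import ndimage

# Hedged sketch of distance-transform feature points: label connected complexes in
# a binary cloud mask, then take each complex's feature point where the Euclidean
# distance to the background is largest (its most interior pixel). Illustrative only.

def feature_points(cloud_mask):
    labels, n = ndimage.label(cloud_mask)              # identify connected complexes
    dist = ndimage.distance_transform_edt(cloud_mask)  # distance to nearest background pixel
    points = []
    for lab in range(1, n + 1):
        masked = np.where(labels == lab, dist, -np.inf)
        points.append(np.unravel_index(np.argmax(masked), masked.shape))
    return points

# Toy example: two rectangular "complexes" in a small binary image.
mask = np.zeros((10, 10), dtype=bool)
mask[1:4, 1:4] = True
mask[6:9, 5:9] = True
print(feature_points(mask))
```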
Abstract:
Magnetic clouds (MCs) are a subset of interplanetary coronal mass ejections (ICMEs) which exhibit signatures consistent with a magnetic flux rope structure. Techniques for reconstructing flux rope orientation from single-point in situ observations typically assume the flux rope is locally cylindrical, e.g., minimum variance analysis (MVA) and force-free flux rope (FFFR) fitting. In this study, we outline a non-cylindrical magnetic flux rope model, in which the flux rope radius and axial curvature can both vary along the length of the axis. This model is not necessarily intended to represent the global structure of MCs, but it can be used to quantify the error in MC reconstruction resulting from the cylindrical approximation. When the local flux rope axis is approximately perpendicular to the heliocentric radial direction, which is also the effective spacecraft trajectory through a magnetic cloud, the error in using cylindrical reconstruction methods is relatively small (≈ 10°). However, as the local axis orientation becomes increasingly aligned with the radial direction, the spacecraft trajectory may pass close to the axis at two separate locations. This results in a magnetic field time series which deviates significantly from encounters with a force-free flux rope, and consequently the error in the axis orientation derived from cylindrical reconstructions can be as much as 90°. Such two-axis encounters can result in an apparent ‘double flux rope’ signature in the magnetic field time series, sometimes observed in spacecraft data. Analysing each axis encounter independently produces reasonably accurate axis orientations with MVA, but larger errors with FFFR fitting.
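As a reference point for the cylindrical reconstruction methods mentioned above, the sketch below implements textbook minimum variance analysis with NumPy: build the magnetic variance matrix from an in situ field time series and eigen-decompose it, with the intermediate-variance eigenvector commonly taken as the local flux rope axis. This is the standard procedure, not the non-cylindrical model introduced in the paper, and the toy time series is illustrative.

```python
import numpy as np

# Hedged sketch of textbook minimum variance analysis (MVA) for an in situ
# magnetic field time series B of shape (N, 3). For a magnetic cloud the
# intermediate-variance eigenvector is commonly taken as the local flux rope axis.
# This is the standard cylindrical-style procedure, not the paper's model.

def mva(B):
    B = np.asarray(B, dtype=float)
    mean = B.mean(axis=0)
    M = (B.T @ B) / len(B) - np.outer(mean, mean)   # variance matrix <B_i B_j> - <B_i><B_j>
    eigvals, eigvecs = np.linalg.eigh(M)            # eigenvalues in ascending order
    return {"minimum": eigvecs[:, 0],
            "intermediate": eigvecs[:, 1],          # ~ flux rope axis
            "maximum": eigvecs[:, 2],
            "eigenvalues": eigvals}

# Toy flux-rope-like crossing: rotating azimuthal component (y), axial component (z).
theta = np.linspace(-np.pi / 2, np.pi / 2, 200)
B = np.column_stack([np.full_like(theta, 0.1), np.sin(theta), np.cos(theta)])
print(mva(B)["intermediate"])   # expected to lie close to the z axis
```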
Abstract:
Galactic Cosmic Rays are one of the major sources of ion production in the troposphere and stratosphere. Recent studies have shown that ions form electrically charged clusters which may grow to become cloud droplets. Aerosol particles become charged by the attachment of ions and electrons. The collision efficiency between a particle and a water droplet increases if the particle is electrically charged, and thus aerosol-cloud interactions can be enhanced. Because these microphysical processes may change the radiative properties of clouds and impact Earth's climate, it is important to evaluate their quantitative effects. Five different models, developed independently, have been coupled to investigate this. The first model estimates cloud height from the dew point temperature and the temperature profile. The second model simulates cloud droplet growth from aerosol particles using the cloud parcel concept. In the third model, the scavenging rate of the aerosol particles is calculated using the collision efficiency between charged particles and droplets. The fourth model calculates the electric field and the charge distribution on water droplets and aerosols within the cloud. The fifth model simulates the global electric circuit (GEC), computing the conductivity and ionic concentration in the atmosphere over the altitude range 0–45 km. The first four models are first coupled to calculate the cloud height and cloud boundary conditions, followed by droplet growth, the charge distribution on aerosols and cloud droplets, and finally scavenging. These models are then incorporated with the GEC model. The simulations are verified against experimental data on charged aerosols at various altitudes. Our calculations showed an effect of aerosol charging on the CCN concentration within the cloud, because charging of aerosols increases the scavenging of particles in the size range 0.1 µm to 1 µm.
Abstract:
SOA (Service Oriented Architecture), workflow, the Semantic Web, and Grid computing are key enabling information technologies in the development of increasingly sophisticated e-Science infrastructures and application platforms. While the emergence of Cloud computing as a new computing paradigm has provided new directions and opportunities for e-Science infrastructure development, it also presents some challenges. Scientific research is increasingly finding it difficult to handle “big data” using traditional data processing techniques. Such challenges demonstrate the need for a comprehensive analysis of how the above-mentioned informatics techniques can be used to develop appropriate e-Science infrastructures and platforms in the context of Cloud computing. This survey paper describes recent research advances in applying informatics techniques to facilitate scientific research, particularly from the Cloud computing perspective. Our particular contributions include identifying associated research challenges and opportunities, presenting lessons learned, and describing our future vision for applying Cloud computing to e-Science. We believe our research findings can help indicate the future trend of e-Science, and can inform funding and research directions on how to more appropriately employ computing technologies in scientific research. We point out open research issues in the hope of sparking new development and innovation in the e-Science field.
Abstract:
The Large Magellanic Cloud (LMC) has a rich star cluster system spanning a wide range of ages and masses. One striking feature of the LMC cluster system is the existence of an age gap between 3 and 10 Gyr, but this feature is not clearly seen among field stars. Three LMC fields containing relatively poor and sparse clusters whose integrated colours are consistent with those of intermediate-age simple stellar populations have been imaged in BVI with the Optical Imager (SOI) at the Southern Telescope for Astrophysical Research (SOAR). A total of six clusters, five of them with estimated initial masses M < 10^4 M☉, were studied in these fields. Photometry was performed and colour-magnitude diagrams (CMDs) were built using standard point spread function fitting methods. The faintest stars measured reach V ~ 23. The CMDs were cleaned of field contamination by making use of the three-dimensional colour and magnitude space available in order to select stars in excess relative to the field. A statistical CMD comparison method was developed for this purpose. The subtraction method has proven successful, yielding cleaned CMDs consistent with a simple stellar population. The intermediate-age candidates were found to be the oldest in our sample, with ages between 1 and 2 Gyr. The remaining clusters found in the SOAR/SOI fields have ages ranging from 100 to 200 Myr. Our analysis has conclusively shown that none of the relatively low-mass clusters studied by us belongs to the LMC age gap.
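A hedged sketch of a simple field-star decontamination in colour-magnitude space (the paper's three-dimensional statistical method is more elaborate): for every star in the field region, remove the most similar not-yet-removed star in the cluster region within a colour/magnitude tolerance box. Tolerances and array layouts are illustrative only.

```python
import numpy as np

# Hedged sketch of field-star decontamination in colour-magnitude space: for each
# star in the field (control) region, remove the most similar surviving star in the
# cluster region within a colour/magnitude tolerance box. Illustrative only; the
# paper's three-dimensional statistical method is more elaborate.

def clean_cmd(cluster, field, dcol=0.1, dmag=0.25):
    """cluster, field: (N, 2) arrays of (colour, magnitude). Returns the cleaned cluster CMD."""
    keep = np.ones(len(cluster), dtype=bool)
    for col, mag in field:
        close = np.where(keep &
                         (np.abs(cluster[:, 0] - col) < dcol) &
                         (np.abs(cluster[:, 1] - mag) < dmag))[0]
        if close.size:
            d = ((cluster[close, 0] - col) / dcol) ** 2 + ((cluster[close, 1] - mag) / dmag) ** 2
            keep[close[np.argmin(d)]] = False        # remove the most similar star
    return cluster[keep]

rng = np.random.default_rng(0)
cluster_region = rng.normal([1.0, 20.0], [0.3, 1.5], size=(300, 2))
field_region = rng.normal([1.0, 20.0], [0.3, 1.5], size=(120, 2))
print(len(clean_cmd(cluster_region, field_region)))  # roughly 300 - 120 stars survive
```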
Abstract:
Simulations of overshooting tropical deep convection using a Cloud Resolving Model with bulk microphysics are presented in order to examine the effect on the water content of the TTL (Tropical Tropopause Layer) and lower stratosphere. This case study is a subproject of the HIBISCUS (Impact of tropical convection on the upper troposphere and lower stratosphere at global scale) campaign, which took place in Bauru, Brazil (22° S, 49° W), from the end of January to early March 2004. Comparisons between 2-D and 3-D simulations suggest that the use of 3-D dynamics is vital in order to capture the mixing between the overshoot and the stratospheric air, which caused evaporation of ice and resulted in an overall moistening of the lower stratosphere. In contrast, a dehydrating effect was predicted by the 2-D simulation due to the extra time, allowed by the lack of mixing, for the ice transported to the region to precipitate out of the overshoot air. Three different strengths of convection are simulated in 3-D by applying successively lower heating rates (used to initiate the convection) in the boundary layer. Moistening is produced in all cases, indicating that convective vigour is not a factor in whether moistening or dehydration is produced by clouds that penetrate the tropopause, since the weakest case only just did so. An estimate of the moistening effect of these clouds on an air parcel traversing a convective region is made based on the domain-mean simulated moistening and the frequency of convective events observed by the IPMet (Instituto de Pesquisas Meteorológicas, Universidade Estadual Paulista) radar (S-band type at 2.8 GHz) to have the same 10 dBZ echo top height as those simulated. These suggest a fairly significant mean moistening of 0.26, 0.13 and 0.05 ppmv in the strongest, medium and weakest cases, respectively, for heights between 16 and 17 km. Since the cold point and WMO (World Meteorological Organization) tropopause in this region lie at ∼15.9 km, this is likely to represent direct stratospheric moistening. Much more moistening is predicted for the 15-16 km height range, with increases of 0.85-2.8 ppmv predicted. However, this air would need to be lofted through the tropopause via the Brewer-Dobson circulation in order for it to have a stratospheric effect. Whether this is likely is uncertain and, in addition, the dehydration of air as it passes through the cold trap and the number of times that trajectories sample convective regions need to be taken into account to gauge the overall stratospheric effect. Nevertheless, the results suggest a potentially significant role for convection in determining the stratospheric water content. Sensitivity tests exploring the impact of increased aerosol numbers in the boundary layer suggest that a corresponding rise in cloud droplet numbers at cloud base would increase the number concentrations of the ice crystals transported to the TTL, which had the effect of reducing the fall speeds of the ice and causing a ∼13% rise in the mean vapour increase in both the 15-16 and 16-17 km height ranges when compared to the control case. Increases in the total water were much larger, being 34% and 132% higher for the same height ranges, but it is unclear whether the extra ice will be able to evaporate before precipitating from the region. These results suggest a possible impact of natural and anthropogenic aerosols on how convective clouds affect stratospheric moisture levels.