23 results for Cloud Storage Google
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
The proliferation of mobile devices in society accessing data via the 'cloud' is imposing a dramatic increase in the amount of information to be stored on hard disk drives (HDD) used in servers. Forecasts are that areal densities will need to increase by as much as 35% compound per annum, and that by 2020 cloud storage capacity will be around 7 zettabytes, corresponding to areal densities of 2 Tb/in². This requires increased performance from the magnetic pole of the electromagnetic writer in the read/write head in the HDD. Current state-of-the-art writing is undertaken by a morphologically complex magnetic pole of sub-100 nm dimensions, in an environment of engineered magnetic shields, and it needs to deliver a strong directional magnetic field to areas on the recording media of around 50 nm × 13 nm. This points to the need for a method to perform direct quantitative measurements of the magnetic field generated by the write pole at the nanometer scale. Here we report on the complete in situ quantitative mapping of the magnetic field generated by a functioning write pole in operation using electron holography. Opportunistically, it points the way towards a new nanoscale magnetic field source to further develop in situ Transmission Electron Microscopy.
Abstract:
We propose simple models to predict the performance degradation of disk requests due to storage device contention in consolidated virtualized environments. Model parameters can be deduced from measurements obtained inside Virtual Machines (VMs) from a system where a single VM accesses a remote storage server. The parameterized model can then be used to predict the effect of storage contention when multiple VMs are consolidated on the same server. We first propose a trace-driven approach that evaluates a queueing network with fair share scheduling using simulation. The model parameters consider Virtual Machine Monitor level disk access optimizations and rely on a calibration technique. We further present a measurement-based approach that allows a distinct characterization of read/write performance attributes. In particular, we define simple linear prediction models for I/O request mean response times, throughputs and read/write mixes, as well as a simulation model for predicting response time distributions. We found our models to be effective in predicting such quantities across a range of synthetic and emulated application workloads.
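The linear prediction idea above can be illustrated with a minimal sketch (not the paper's actual model or parameters): a least-squares line is fitted to hypothetical single-server calibration measurements and then used to extrapolate mean I/O response time as more VMs are consolidated.

```python
# Minimal sketch of a linear contention model, assuming mean response
# time grows roughly linearly with the number of consolidated VMs.
# The calibration data below is hypothetical, for illustration only.
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a + b*x in plain Python."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical calibration: (number of co-located VMs, mean I/O response time in ms)
vms = [1, 2, 3, 4]
resp_ms = [2.1, 3.9, 6.2, 8.0]

a, b = fit_linear(vms, resp_ms)
predicted_5 = a + b * 5  # extrapolated mean response time with 5 VMs
```

A real parameterization would, as the abstract notes, also account for VMM-level disk access optimizations and distinguish read from write requests.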
Abstract:
At the heavy ion storage ring CRYRING in Stockholm, Sweden, we have investigated the dissociative recombination of DCOOD₂⁺ at low relative kinetic energies, from ~1 meV to 1 eV. The thermal rate coefficient has been found to follow the expression k(T) = 8.43 × 10⁻⁷ (T/300)⁻⁰·⁷⁸ cm³ s⁻¹ for electron temperatures, T, ranging from ~10 to ~1000 K. The branching fractions of the reaction have been studied at ~2 meV relative kinetic energy. It has been found that ~87% of the reactions involve breaking a bond between heavy atoms. In only 13% of the reactions do the heavy atoms remain in the same product fragment. This puts limits on the gas-phase production of formic acid, observed in both molecular clouds and cometary comae. Using the experimental results in chemical models of the dark cloud, TMC-1, and using the latest release of the UMIST Database for Astrochemistry improves the agreement with observations for the abundance of formic acid. Our results also strengthen the assumption that formic acid is a component of cometary ices.
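The fitted rate-coefficient expression quoted above is straightforward to evaluate; the sketch below encodes it directly, guarding against use outside the quoted temperature range.

```python
# The thermal rate coefficient fitted in the abstract:
#   k(T) = 8.43e-7 * (T/300)^-0.78  cm^3 s^-1,  valid ~10 K to ~1000 K.
def rate_coefficient(T):
    """Dissociative recombination rate coefficient of DCOOD2+ (cm^3 s^-1)."""
    if not (10.0 <= T <= 1000.0):
        raise ValueError("fit is only quoted for ~10 K to ~1000 K")
    return 8.43e-7 * (T / 300.0) ** -0.78

k_300 = rate_coefficient(300.0)  # equals the 8.43e-7 prefactor at T = 300 K
k_10 = rate_coefficient(10.0)    # the negative exponent makes k larger at low T
```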
Abstract:
Increasingly, infrastructure providers are supplying the cloud marketplace with storage and on-demand compute resources to host cloud applications. From an application user's point of view, it is desirable to identify the most appropriate set of available resources on which to execute an application. Resource choice can be complex and may involve comparing available hardware specifications, operating systems, value-added services, such as network configuration or data replication, and operating costs, such as hosting cost and data throughput. Providers' cost models often change and new commodity cost models, such as spot pricing, have been introduced to offer significant savings. In this paper, a software abstraction layer is used to discover infrastructure resources for a particular application, across multiple providers, by using a two-phase constraints-based approach. In the first phase, a set of possible infrastructure resources are identified for a given application. In the second phase, a heuristic is used to select the most appropriate resources from the initial set. For some applications a cost-based heuristic is most appropriate; for others a performance-based heuristic may be used. A financial services application and a high performance computing application are used to illustrate the execution of the proposed resource discovery mechanism. The experimental results show the proposed model could dynamically select an appropriate set of resources that match the application's requirements.
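The two-phase approach described above can be sketched as follows (all resource names, attributes, and thresholds here are hypothetical, not taken from the paper): phase one filters the catalogue against hard constraints, and phase two applies a pluggable cost- or performance-based heuristic.

```python
# Illustrative sketch of two-phase constraints-based resource selection.
# The resource catalogue and constraint values below are hypothetical.
resources = [
    {"name": "A", "cores": 4, "mem_gb": 16, "cost_per_hr": 0.20, "perf_score": 55},
    {"name": "B", "cores": 8, "mem_gb": 32, "cost_per_hr": 0.45, "perf_score": 90},
    {"name": "C", "cores": 2, "mem_gb": 8,  "cost_per_hr": 0.10, "perf_score": 30},
]

def phase1_filter(resources, min_cores, min_mem_gb):
    """Phase 1: keep only resources satisfying the application's constraints."""
    return [r for r in resources
            if r["cores"] >= min_cores and r["mem_gb"] >= min_mem_gb]

def phase2_select(candidates, heuristic):
    """Phase 2: pick the most appropriate candidate with a chosen heuristic."""
    if heuristic == "cost":
        return min(candidates, key=lambda r: r["cost_per_hr"])
    return max(candidates, key=lambda r: r["perf_score"])

candidates = phase1_filter(resources, min_cores=4, min_mem_gb=16)
cheapest = phase2_select(candidates, "cost")         # cost-based heuristic
fastest = phase2_select(candidates, "performance")   # performance-based heuristic
```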
Abstract:
This special issue provides the latest research and development on wireless mobile wearable communications. According to a report by Juniper Research, the market value of connected wearable devices is expected to reach $1.5 billion by 2014, and the shipment of wearable devices may reach 70 million by 2017. Good examples of wearable devices are the prominent Google Glass and Microsoft HoloLens. As wearable technology is rapidly penetrating our daily life, mobile wearable communication is becoming a new communication paradigm. Mobile wearable device communications create new challenges compared to ordinary sensor networks and short-range communication. In mobile wearable communications, devices communicate with each other in a peer-to-peer fashion or client-server fashion and also communicate with aggregation points (e.g., smartphones, tablets, and gateway nodes). Wearable devices are expected to integrate multiple radio technologies for various applications' needs with small power consumption and low transmission delays. These devices can hence collect, interpret, transmit, and exchange data among supporting components, other wearable devices, and the Internet. Such data are not limited to people's personal biomedical information but also include human-centric social and contextual data. The success of mobile wearable technology depends on communication and networking architectures that support efficient and secure end-to-end information flows. A key design consideration of future wearable devices is the ability to ubiquitously connect to smartphones or the Internet with very low energy consumption. Radio propagation and, accordingly, channel models are also different from those in other existing wireless technologies. A huge number of connected wearable devices require novel big data processing algorithms, efficient storage solutions, cloud-assisted infrastructures, and spectrum-efficient communications technologies.
Abstract:
How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. The above question is addressed by proposing a six-step benchmarking methodology in which a user provides a set of four weights that indicate how important each of the following groups (memory, processor, computation and storage) is to the application that needs to be executed on the cloud. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise performance of the application. The rankings are validated through an empirical analysis using two case study applications, the first a financial risk application and the second a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
Abstract:
How can applications be deployed on the cloud to achieve maximum performance? This question is challenging to address with the availability of a wide variety of cloud Virtual Machines (VMs) with different performance capabilities. The research reported in this paper addresses the above question by proposing a six-step benchmarking methodology in which a user provides a set of weights that indicate how important memory, local communication, computation and storage related operations are to an application. The user can provide either a set of four abstract weights or eight fine-grain weights, based on knowledge of the application. The weights, along with benchmarking data collected from the cloud, are used to generate two rankings: one based only on the performance of the VMs, and the other taking both performance and cost into account. The rankings are validated on three case study applications using two validation techniques. The case studies on a set of experimental VMs highlight that maximum performance can be achieved by the three top-ranked VMs, and that maximum performance in a cost-effective manner is achieved by at least one of the top three ranked VMs produced by the methodology.
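The weighted ranking idea in the two abstracts above can be sketched briefly (the weights, VM names, and benchmark figures below are hypothetical, not the papers' data): each VM's score is the weighted sum of its normalised benchmark results for the four abstract groups, and a second ranking divides that score by hourly cost.

```python
# Minimal sketch of weight-based VM ranking from benchmark data.
# All numbers below are hypothetical, for illustration only.
weights = {"memory": 0.4, "local_comm": 0.1, "computation": 0.4, "storage": 0.1}

vms = {
    "vm.small": {"memory": 0.5, "local_comm": 0.6, "computation": 0.4,
                 "storage": 0.7, "cost_per_hr": 0.10},
    "vm.large": {"memory": 0.9, "local_comm": 0.8, "computation": 0.95,
                 "storage": 0.8, "cost_per_hr": 0.40},
}

def perf_score(bench):
    """Weighted sum of normalised benchmark scores for the four groups."""
    return sum(weights[g] * bench[g] for g in weights)

# Ranking 1: performance only; ranking 2: performance per unit cost.
perf_rank = sorted(vms, key=lambda v: perf_score(vms[v]), reverse=True)
value_rank = sorted(vms, key=lambda v: perf_score(vms[v]) / vms[v]["cost_per_hr"],
                    reverse=True)
```

Note how the two rankings can disagree: the larger VM wins on raw performance, while the smaller one can win once cost is factored in.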
Abstract:
There is considerable disagreement in the literature on available oxygen storage capacity, and on the reaction rates associated with the storage process, for three-way automotive catalysts. This paper seeks to address the issue of oxygen storage capacity in a clear and precise manner. The work described involved a detailed investigation of oxygen storage capacity in typical samples of automotive catalysts. The capacity has also been precisely defined and estimates have been made of the specific capacity based on catalyst dimensions. A purpose-built miniature catalyst test rig has been assembled to allow measurement of the capacity and the experimental procedure has been developed to ensure accurate measurement. The measurements from the first series of experiments have been compared with the theoretical calculations and good agreement is seen. A second series of experiments allowed the effect of temperature on oxygen storage capacity to be investigated. This work shows very clearly the large variation of the capacity with temperature.
Abstract:
Poly(methyl vinyl ether-co-maleic anhydride) formed films from aqueous formulations with characteristics that are ideal as a basis for producing a drug-containing bioadhesive delivery system when plasticized with a monohydroxyl functionalized plasticizer. Hence, films containing a novel plasticizer, tripropylene glycol methyl ether (TPME), maintained their adhesive strength and tensile properties when packaged in aluminized foil for extended periods of time. Films plasticized with commonly used polyhydric alcohols, such as the glycerol in this study, underwent an esterification reaction that led to polymer crosslinking, as shown in NMR studies. These revealed the presence of peaks in the ester/carbonyl region, suggesting that glyceride residue formation had been initiated. Given the polyfunctional nature of glycerol, progressive esterification would result in a polyester network and an accompanying profound alteration in the physical characteristics. Indeed, films became brittle over time with a loss of both aqueous solubility and bioadhesion to porcine skin. In addition, a swelling index was measurable after 7 days, a property not seen with films containing TPME. This change in bioadhesive strength and pliability was independent of the packaging conditions, rendering films that contain glycerol unsuitable as a basis for topical bioadhesive delivery of drug substances. Consequently, films containing TPME have potential as an alternative formulation strategy.
Abstract:
We present intermediate-resolution HST/STIS spectra of a high-velocity interstellar cloud (v_LSR = +80 km s⁻¹) towards DI1388, a young star in the Magellanic Bridge located between the Small and Large Magellanic Clouds. The STIS data have a signal-to-noise ratio (S/N) of 20-45 and a spectral resolution of about 6.5 km s⁻¹ (FWHM). The high-velocity cloud absorption is observed in the lines of C II, O I, Si II, Si III, Si IV and S III. Limits can be placed on the amount of S II and Fe II absorption that is present. An analysis of the relative abundances derived from the observed species, particularly C II and O I, suggests that this high-velocity gas is warm (T_k ~ 10³-10⁴ K) and predominantly ionized. This hypothesis is supported by the presence of absorption produced by highly ionized species, such as Si IV. This sightline also intercepts two other high-velocity clouds that produce weak absorption features at v_LSR = +113 and +130 km s⁻¹ in the STIS spectra.
Abstract:
We present Westerbork Synthesis Radio Telescope H I images, Lovell telescope multibeam H I wide-field mapping, William Herschel Telescope long-slit echelle Ca II observations, Wisconsin Hα Mapper (WHAM) facility images, and IRAS ISSA 60- and 100-μm co-added images towards the intermediate-velocity cloud (IVC) at +70 km s⁻¹, located in the general direction of the M15 globular cluster. When combined with previously published Arecibo data, the H I gas in the IVC is found to be clumpy, with a peak H I column density of ~1.5 × 10²⁰ cm⁻², inferred volume density (assuming spherical symmetry) of ~24 cm⁻³/D(kpc) and a maximum brightness temperature at a resolution of 81 × 14 arcsec² of 14 K. The major axis of this part of the IVC lies approximately parallel to the Galactic plane, as does the low-velocity H I gas and IRAS emission. The H I gas in the cloud is warm, with a minimum full width at half-maximum velocity width of 5 km s⁻¹ corresponding to a kinetic temperature, in the absence of turbulence, of ~540 K. From the H I data, there are indications of two-component velocity structure. Similarly, the Ca II spectra, of resolution 7 km s⁻¹, also show tentative evidence of velocity structure, perhaps indicative of cloudlets. Assuming that there are no unresolved narrow-velocity components, the mean values of log₁₀[N(Ca II K)/cm⁻²] ~ 12.0 and Ca II/H I ~ 2.5 × 10⁻⁸ are typical of observations of high Galactic latitude clouds. This compares with a value of Ca II/H I > 10⁻⁶ for IVC absorption towards HD 203664, a halo star of distance 3 kpc, some 3.1° from the main M15 IVC condensation. The main IVC condensation is detected by WHAM in Hα with central local-standard-of-rest velocities of ~60-70 km s⁻¹, and intensities uncorrected for Galactic extinction of up to 1.3 R, indicating that the gas is partially ionized.
The FWHM values of the Hα IVC component, at a resolution of 1°, exceed 30 km s⁻¹. This is some 10 km s⁻¹ larger than the corresponding H I value at a similar resolution, and indicates that the two components may not be mixed. However, the spatial and velocity coincidence of the Hα and H I peaks in emission towards the main IVC component is qualitatively good. If the Hα emission is caused solely by photoionization, the Lyman continuum flux towards the main IVC condensation is ~2.7 × 10⁶ photon cm⁻² s⁻¹. There is no corresponding IVC Hα detection towards the halo star HD 203664 at velocities exceeding ~60 km s⁻¹. Finally, both the 60- and 100-μm IRAS images show spatial coincidence, over a 0.675 × 0.625 deg² field, with both low- and intermediate-velocity H I gas (previously observed with the Arecibo telescope), indicating that the IVC may contain dust. Both the Hα and tentative IRAS detections discriminate this IVC from high-velocity clouds, although the H I properties do not. When combined with the H I and optical results, these data point to a Galactic origin for at least parts of this IVC.