35 results for cloud-based applications
Abstract:
The major current commercial applications of semiconductor photochemistry promoted on the world wide web are reviewed. The basic principles behind the different applications are discussed, including the use of semiconductor photochemistry to photo-mineralise organics, photo-sterilise and photo-demist. The range of companies, and their products, that utilise semiconductor photochemistry is examined and typical examples are listed. An analysis of the geographical distribution of current commercial activity in this area is made. The results indicate that commercial activity in this area is growing world-wide, but is especially strong in Japan. The number and geographical distribution of patents in semiconductor photocatalysis are also noted. The trends in the numbers of US and Japanese patents over the last six years are discussed. © 2002 Elsevier Science B.V. All rights reserved.
Abstract:
The application of high-intensity laser-produced gamma rays is discussed with regard to picosecond-resolution deep-penetration radiography. The spectrum and angular distribution of these gamma rays are measured using an array of thermoluminescent detectors for both an underdense (gas) target and an overdense (solid) target. It is found that the use of an underdense target in a laser plasma accelerator configuration produces a much more intense and directional source. The peak dose is also increased significantly. Radiography is demonstrated in these experiments and the source size is also estimated. © 2002 American Institute of Physics.
Abstract:
A Digital Video Broadcast Terrestrial (DVB-T) based passive radar requires the development of an antenna array that performs satisfactorily over the entire DVB-T band. The array should require no mechanical adjustment of the inter-element spacing to match the DVB-T carrier frequency used for any particular measurement. This paper describes the challenges involved in designing an antenna array with a bandwidth of 450 MHz. It discusses the design procedure and demonstrates a number of simulated array configurations. The final configuration of the array is shown, as well as simulations of the expected performance over the desired frequency span.
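A quick back-of-the-envelope calculation shows why a single fixed element spacing is hard over such a span; the band edges below are assumed for illustration only (the abstract gives only the 450 MHz bandwidth):

```python
# Back-of-the-envelope check of why one fixed inter-element spacing is
# awkward across a wide DVB-T span. Band edges are assumed for illustration.
C = 299_792_458.0  # speed of light, m/s

def half_wavelength_m(freq_hz):
    """Half-wavelength element spacing, in metres, at a given frequency."""
    return C / freq_hz / 2.0

f_low, f_high = 470e6, 920e6  # assumed edges giving a 450 MHz span

for f in (f_low, f_high):
    print(f"{f / 1e6:.0f} MHz: lambda/2 = {half_wavelength_m(f) * 100:.1f} cm")

# A spacing chosen as lambda/2 at the low edge (~32 cm) is close to a full
# wavelength at the high edge, inviting grating lobes once the beam is
# steered -- hence the wideband design challenge the paper addresses.
```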
Abstract:
The scheduling problem in distributed data-intensive computing environments has become an active research topic due to the tremendous growth in grid and cloud computing environments. As an innovative distributed intelligent paradigm, swarm intelligence provides a novel approach to solving these potentially intractable problems. In this paper, we formulate the scheduling problem for workflow applications with security constraints in distributed data-intensive computing environments and present a novel security constraint model. Several meta-heuristic adaptations of the particle swarm optimization algorithm are introduced to generate efficient schedules. A variable neighborhood particle swarm optimization algorithm is compared with a multi-start particle swarm optimization and a multi-start genetic algorithm. Experimental results illustrate that population-based meta-heuristic approaches usually provide a good balance between global exploration and local exploitation, and demonstrate their feasibility and effectiveness for scheduling workflow applications. © 2010 Elsevier Inc. All rights reserved.
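As an illustration of the underlying technique, here is a minimal particle swarm optimization sketch for assigning workflow tasks to nodes under a toy security constraint. The execution times, security levels, weights and violation penalty are invented for illustration, and the paper's variable neighborhood variant is considerably more elaborate:

```python
# Minimal PSO sketch (not the paper's algorithm): continuous particle
# positions are rounded to node indices; security violations add a penalty.
import random

N_TASKS, N_NODES = 8, 4
EXEC_TIME = [[random.uniform(1, 10) for _ in range(N_NODES)] for _ in range(N_TASKS)]
TASK_SEC = [random.randint(0, 2) for _ in range(N_TASKS)]   # required security level
NODE_SEC = [random.randint(0, 2) for _ in range(N_NODES)]   # offered security level

def to_node(x):
    return min(N_NODES - 1, max(0, int(round(x))))

def fitness(position):
    """Makespan plus a penalty for tasks placed on under-secured nodes."""
    loads = [0.0] * N_NODES
    penalty = 0.0
    for task, x in enumerate(position):
        node = to_node(x)
        loads[node] += EXEC_TIME[task][node]
        if NODE_SEC[node] < TASK_SEC[task]:
            penalty += 100.0  # assumed constraint-violation penalty
    return max(loads) + penalty

def pso(n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    swarm = [[random.uniform(0, N_NODES - 1) for _ in range(N_TASKS)]
             for _ in range(n_particles)]
    vel = [[0.0] * N_TASKS for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=fitness)[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(N_TASKS):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - p[d])
                             + c2 * r2 * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
            if fitness(p) < fitness(gbest):
                gbest = p[:]
    return gbest, fitness(gbest)

best, cost = pso()
print("best assignment:", [to_node(x) for x in best], "cost:", round(cost, 2))
```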
Abstract:
Recent advances in hardware development, coupled with the rapid adoption and broad applicability of cloud computing, have introduced widespread heterogeneity in data centers, significantly complicating the management of cloud applications and data center resources. This paper presents the CACTOS approach to cloud infrastructure automation and optimization, which addresses heterogeneity by combining in-depth analysis of application behavior with insights from commercial cloud providers. The aim of the approach is threefold: to model applications and data center resources, to simulate applications and resources for planning and operation, and to optimize application deployment and resource use in an autonomic manner. The approach is based on case studies from the areas of business analytics, enterprise applications, and scientific computing.
Abstract:
A PSS/E 32 model of a real section of the Northern Ireland electrical grid was dynamically controlled with Python 2.5. In this manner, data from a proposed wide-area monitoring system were simulated. The area is of interest as it is a weakly coupled distribution grid with significant distributed generation. The data were used to create an optimization and protection metric that reflected reactive power flow, voltage profile, thermal overload and voltage excursions. Step changes in the metric were introduced upon the operation of special protection systems and upon voltage excursions. A wide variety of grid conditions were simulated while tap changer positions and switched capacitor banks were iterated through, with the most desirable state returning the lowest optimization and protection metric. The optimized metric was compared against the metric generated from the standard system state returned by PSS/E. Various grid scenarios were explored involving an intact network and compromised networks (line loss) under summer maximum, summer minimum and winter maximum conditions. In each instance the output from the installed distributed generation was varied between 0 MW and 80 MW (120% of installed capacity). It is shown that in grid models the triggering of special protection systems is delayed by between 1 MW and 6 MW (1.5% to 9% of capacity), with 3.5 MW being the average. The optimization and protection metric gives a quantitative value for system health and demonstrates the potential efficacy of wide-area monitoring for protection and control.
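The structure of such a composite metric can be sketched as follows; the weights, thresholds and step-change penalty below are assumed stand-ins, not the paper's calibrated values:

```python
# Illustrative composite "optimization and protection" metric. Everything
# here (weights, overload threshold, penalty) is an assumed stand-in that
# only shows the shape of a metric combining the four quantities named
# in the abstract. Lower scores indicate a healthier grid state.

def opt_protect_metric(buses, lines, w_q=1.0, w_v=10.0, w_t=5.0,
                       step_penalty=1000.0):
    """buses: dicts with 'v_pu' (voltage, per unit) plus optional
    'excursion'/'sps_tripped' flags; lines: dicts with 'q_mvar' and
    'loading_pct'."""
    score = 0.0
    for line in lines:
        score += w_q * abs(line["q_mvar"])                  # reactive power flow
        if line["loading_pct"] > 100.0:                     # thermal overload
            score += w_t * (line["loading_pct"] - 100.0)
    for bus in buses:
        score += w_v * abs(bus["v_pu"] - 1.0)               # voltage profile
        if bus.get("excursion") or bus.get("sps_tripped"):  # step change
            score += step_penalty
    return score

# Each candidate state (tap positions, capacitor bank settings) would be
# simulated, scored, and the lowest-scoring state selected.
score = opt_protect_metric(
    buses=[{"v_pu": 1.02}, {"v_pu": 0.93, "excursion": True}],
    lines=[{"q_mvar": 12.0, "loading_pct": 104.0}],
)
print(f"metric = {score:.1f}")
```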
Abstract:
Fully Homomorphic Encryption (FHE) is a recently developed cryptographic technique which allows computations on encrypted data. There are many interesting applications for this encryption method, especially within cloud computing. However, the computational complexity is such that it is not yet practical for real-time applications. This work proposes optimised hardware architectures for the encryption step of an integer-based FHE scheme with the aim of improving its practicality. A low-area design and a high-speed parallel design are proposed and implemented on a Xilinx Virtex-7 FPGA, targeting the available DSP slices, which offer high-speed multiplication and accumulation. Both use the Comba multiplication scheduling method to manage the large multiplications required with unevenly sized multiplicands and to minimise the number of read and write operations to RAM. Results show that speed-up factors of 3.6 and 10.4 can be achieved for the encryption step with medium-sized security parameters for the low-area and parallel designs respectively, compared to the benchmark software implementation on an Intel Core2 Duo E8400 platform running at 3 GHz.
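The Comba scheduling idea can be illustrated in software: partial products are accumulated column by column so each output word is written exactly once, which is what allows a hardware design to map the column sums onto DSP slices and minimise RAM traffic. A minimal sketch, assuming 32-bit limbs:

```python
# Minimal software sketch of Comba (column-wise) multiplication on 32-bit
# limbs. This only shows the access pattern; the paper's FPGA designs
# schedule the same column products onto DSP48 slices.
LIMB_BITS = 32
MASK = (1 << LIMB_BITS) - 1

def comba_mul(a, b):
    """Multiply little-endian limb arrays a and b, returning limbs of a*b.

    Each output column j accumulates all products a[i]*b[j-i] before a
    single write-out, unlike schoolbook multiplication, which rewrites
    partial sums repeatedly."""
    result = [0] * (len(a) + len(b))
    acc = 0  # wide accumulator for the current column
    for j in range(len(a) + len(b) - 1):
        for i in range(max(0, j - len(b) + 1), min(j + 1, len(a))):
            acc += a[i] * b[j - i]
        result[j] = acc & MASK   # one write per column
        acc >>= LIMB_BITS        # carry into the next column
    result[len(a) + len(b) - 1] = acc & MASK
    return result

def to_limbs(n, count):
    return [(n >> (LIMB_BITS * i)) & MASK for i in range(count)]

def from_limbs(limbs):
    return sum(l << (LIMB_BITS * i) for i, l in enumerate(limbs))

x, y = 0xDEADBEEFCAFEBABE, 0x0123456789ABCDEF
assert from_limbs(comba_mul(to_limbs(x, 2), to_limbs(y, 2))) == x * y
```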
Abstract:
As part of any drill cuttings pile removal process, monitoring the release of contaminants into the marine environment will be critical. Traditional methods for such monitoring involve taking samples for laboratory analysis. This process is time-consuming and only provides data on spot samples taken from a limited number of locations and time frames. Such processes therefore offer very restricted information, and the need for improved marine sensors for monitoring contaminants is established. We report here the development and application of a multi-capability optical sensor for real-time in situ monitoring of three key marine environmental and offshore/oil parameters: hydrocarbons, synthetic-based fluids and heavy metal concentrations. These sensors will be a useful tool for real-time in situ environmental monitoring during the decommissioning of offshore structures. Multi-capability array sensors could also provide information on the dispersion of contamination from drill cuttings piles either while they are in situ or during their removal.
Abstract:
Cloud data centres are critical business infrastructures and among the fastest-growing service providers. Detecting anomalies in Cloud data centre operation is vital. Given the vast complexity of the data centre system software stack, applications and workloads, anomaly detection is a challenging endeavour. Current tools for detecting anomalies often use machine learning techniques, application instance behaviours or system metrics distribution, which are complex to implement in Cloud computing environments as they require training, access to application-level data and complex processing. This paper presents LADT, a lightweight anomaly detection tool for Cloud data centres that uses rigorous correlation of system metrics, implemented by an efficient correlation algorithm without the need for training or complex infrastructure set-up. LADT is based on the hypothesis that, in an anomaly-free system, metrics from data centre host nodes and virtual machines (VMs) are strongly correlated. An anomaly is detected whenever the correlation drops below a threshold value. We demonstrate and evaluate LADT in a Cloud environment, showing that the hosting node's I/O operations per second (IOPS) are strongly correlated with the aggregated virtual machine IOPS, but that this correlation vanishes when an application stresses the disk, indicating a node-level anomaly.
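The core hypothesis lends itself to a short sketch: compute the correlation between host IOPS and summed VM IOPS over a sliding window, and flag windows where it falls below a threshold. The window size, threshold and synthetic data below are illustrative assumptions, not LADT's actual parameters:

```python
# Sketch of correlation-threshold anomaly detection in the spirit of LADT.
import math
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def detect_anomalies(host_iops, vm_iops_per_vm, window=30, threshold=0.8):
    """Yield (window start, r) where host/aggregated-VM IOPS correlation drops.

    host_iops: per-interval host IOPS samples.
    vm_iops_per_vm: one sample list per VM, aggregated by summing."""
    agg = [sum(samples) for samples in zip(*vm_iops_per_vm)]
    for start in range(len(host_iops) - window + 1):
        r = pearson(host_iops[start:start + window], agg[start:start + window])
        if r < threshold:
            yield start, r

# Example: a disk-stressing process on the host breaks the correlation.
vms = [[random.uniform(50, 100) for _ in range(120)] for _ in range(3)]
host = [sum(col) + random.uniform(-5, 5) for col in zip(*vms)]
host[90:120] = [x + random.uniform(0, 400) for x in host[90:120]]  # injected stress
for start, r in detect_anomalies(host, vms):
    print(f"possible anomaly at window starting t={start}, r={r:.2f}")
```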
Abstract:
Uncertainty profiles are used to study the effects of contention within cloud and service-based environments. An uncertainty profile provides a qualitative description of an environment whose quality of service (QoS) may fluctuate unpredictably. Uncertain environments are modelled by strategic games with two agents: a daemon is used to represent overload and high resource contention, while an angel is used to represent an idealised resource allocation situation with no underlying contention. Assessments of uncertainty profiles are useful in two ways: firstly, they provide a broad understanding of how environmental stress can affect an application's performance (and reliability); secondly, they allow the effects of introducing redundancy into a computation to be assessed.
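A toy maximin assessment conveys the angel/daemon framing, though the payoffs and strategy sets below are invented and the paper's uncertainty profiles are richer than this:

```python
# Toy angel-daemon game (all numbers are illustrative assumptions).
# Rows: angel's resource-allocation choices. Columns: daemon's contention
# attacks. Entries: application QoS (higher is better).
PAYOFF = [
    [0.95, 0.60],   # angel allocates redundant replicas
    [0.90, 0.35],   # angel allocates a single fast instance
]

# The daemon minimises QoS for each angel choice; the angel then picks the
# row with the best worst case. Here redundancy guarantees a higher QoS
# floor, illustrating how the effect of redundancy can be assessed.
worst_case = [min(row) for row in PAYOFF]
best_row = max(range(len(PAYOFF)), key=lambda r: worst_case[r])
print(f"angel's maximin choice: row {best_row}, guaranteed QoS {worst_case[best_row]:.2f}")
```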
Abstract:
Current data-intensive image processing applications push traditional embedded architectures to their limits. FPGA-based hardware acceleration is a potential solution, but the programmability gap and the time-consuming HDL design flow are significant obstacles. The proposed research approach, which develops an FPGA-based programmable hardware acceleration platform built from a large number of Streaming Image processing Processors (SIPPro), potentially addresses these issues. SIPPro is a pipelined, in-order soft-core processor architecture with specific optimisations for image processing applications. Each SIPPro core uses 1 DSP48, 2 Block RAMs and 370 slice registers, making the processor as compact as possible whilst maintaining flexibility and programmability. It is an area-efficient, scalable and high-performance soft-core architecture capable of delivering 530 MIPS per core using a Xilinx Zynq SoC (ZC7Z020-3). To evaluate the feasibility of the proposed architecture, a Traffic Sign Recognition (TSR) algorithm has been prototyped on a Zedboard, with the color and morphology operations accelerated using multiple SIPPros. Simulation and experimental results demonstrate that the processing platform is able to achieve speed-ups of 15 and 33 times for color filtering and morphology operations respectively, with significantly reduced design effort and time.
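For readers unfamiliar with the two accelerated kernels, a plain-Python sketch of color filtering and binary morphology follows; the thresholds and the 3x3 structuring element are illustrative, and the actual SIPPro firmware is not reproduced here:

```python
# Plain-Python sketch of the two kernels the paper accelerates on SIPPro
# cores. Thresholds and the structuring element are invented examples.

def red_filter(rgb_image, r_min=120, g_max=80, b_max=80):
    """Binarise an HxWx3 image, keeping 'red enough' pixels (e.g. road signs)."""
    return [[1 if (px[0] >= r_min and px[1] <= g_max and px[2] <= b_max) else 0
             for px in row] for row in rgb_image]

def erode3x3(binary):
    """Binary erosion with a 3x3 box: a pixel survives only if its whole
    neighbourhood is set. Dilation replaces all() with any()."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = 1 if all(binary[y + dy][x + dx]
                                 for dy in (-1, 0, 1)
                                 for dx in (-1, 0, 1)) else 0
    return out

# Each SIPPro core would run per-pixel loops like these over an image tile,
# so adding cores processes more tiles in parallel.
img = [[(200, 30, 40), (10, 10, 10)] * 4 for _ in range(8)]
mask = erode3x3(red_filter(img))
```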