877 results for Real applications
Abstract:
Long-running multi-physics coupled parallel applications have gained prominence in recent years. The high computational requirements and long durations of simulations of these applications necessitate the use of multiple systems of a Grid for execution. In this paper, we have built an adaptive middleware framework for executing long-running multi-physics coupled applications across multiple batch systems of a Grid. Apart from coordinating the executions of the component jobs of an application on different batch systems, our framework also automatically resubmits the jobs to the batch queues multiple times to sustain long-running executions. As the set of active batch systems available for execution changes, our framework migrates and reschedules components using a robust rescheduling decision algorithm. We have used our framework to improve the application throughput of a foremost long-running multi-component climate modeling application, the Community Climate System Model (CCSM). Our real multi-site experiments with CCSM indicate that Grid executions can lead to improved application throughput for climate models.
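As an illustration of the resubmission-and-migration behavior described above, the following Python sketch mimics the control flow only; the `BatchSystem` class, its `submit` method, and the job-placement loop are hypothetical placeholders and do not represent the actual middleware API.

```python
import random

class BatchSystem:
    """Hypothetical stand-in for one batch system of the Grid."""
    def __init__(self, name):
        self.name = name
        self.active = True

    def submit(self, component):
        # In the real middleware this would enqueue the job on the batch scheduler.
        print(f"submitting {component} to queue of {self.name}")
        return random.random() > 0.1   # pretend the queued run eventually succeeds or fails

def sustain_execution(components, systems, walltime_chunks):
    """Keep long-running components alive by repeated resubmission, migrating a
    component to another active batch system when its current system drops out."""
    placement = dict(zip(components, systems))          # initial component -> system map
    for _ in range(walltime_chunks):
        for comp, sys_ in list(placement.items()):
            if not sys_.active:                         # system left the active set
                sys_ = placement[comp] = next(s for s in systems if s.active)
            sys_.submit(comp)                           # resubmit for the next walltime chunk

if __name__ == "__main__":
    systems = [BatchSystem("clusterA"), BatchSystem("clusterB")]
    sustain_execution(["atmosphere", "ocean"], systems, walltime_chunks=3)
```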
Abstract:
Computational grids with multiple batch systems (batch grids) can be powerful infrastructures for executing long-running multi-component parallel applications. In this paper, we evaluate the potential improvements in throughput of long-running multi-component applications when the different components of the applications are executed on multiple batch systems of batch grids. We compare these multiple-batch executions with executions of the components on a single batch system, without increasing the number of processors used. We perform our analysis with a foremost long-running multi-component application for climate modeling, the Community Climate System Model (CCSM). We have built a robust simulator that models the characteristics of both the multi-component application and the batch systems. By conducting a large number of simulations with different workload characteristics and queuing policies of the systems, processor allocations to components of the application, distributions of the components to the batch systems, and inter-cluster bandwidths, we show that multiple-batch executions lead to a 55% average increase in throughput over single-batch executions for long-running CCSM. We also conducted real experiments with a practical middleware infrastructure and showed that multi-site executions lead to effective utilization of batch systems for executions of CCSM and give higher simulation throughput than single-site executions. Copyright (c) 2011 John Wiley & Sons, Ltd.
Abstract:
Critical applications like cyclone tracking and earthquake modeling require simultaneous high-performance simulations and online visualization for timely analysis. Faster simulations and simultaneous visualization enable scientists to provide real-time guidance to decision makers. In this work, we have developed an integrated user-driven and automated steering framework that simultaneously performs numerical simulations and efficient online remote visualization of critical weather applications in resource-constrained environments. It considers application dynamics, like the criticality of the application, and resource dynamics, like storage space, network bandwidth and the available number of processors, to adapt various application and resource parameters such as simulation resolution, simulation rate and the frequency of visualization. We formulate the problem of finding an optimal set of simulation parameters as a linear programming problem, which leads to a 30% higher simulation rate and 25-50% lower storage consumption than a naive greedy approach. The framework also gives the user control over various application parameters like the region of interest and simulation resolution. We have also devised an adaptive algorithm to reduce the lag between the simulation and visualization times. Using experiments with different network bandwidths, we find that our adaptive algorithm is able to reduce this lag as well as visualize the most representative frames.
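The linear programming formulation can be sketched as follows; the decision variables, cost coefficients and resource budgets below are hypothetical placeholders chosen only to show how such a problem could be posed with an off-the-shelf LP solver, not the paper's actual model.

```python
from scipy.optimize import linprog

# Decision variables (hypothetical units): x = [simulation_rate, visualization_frequency].
# We maximize a weighted combination of the two, i.e. minimize its negative.
c = [-1.0, -0.5]

# Hypothetical resource constraints of the form A_ub @ x <= b_ub:
#   storage:   simulated steps and visualized frames both consume disk space
#   bandwidth: each visualized frame must be shipped to the remote visualization site
A_ub = [[0.2, 1.5],     # GB consumed per unit simulation rate / per visualized frame
        [0.0, 2.0]]     # Mbps needed per visualized frame
b_ub = [100.0,          # available storage budget (GB)
        40.0]           # available network bandwidth (Mbps)

bounds = [(1.0, 50.0),  # simulation rate limited by the available processors
          (0.0, 20.0)]  # visualization frequency limited by the viewer

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("simulation rate, visualization frequency:", res.x)
```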
Abstract:
With the introduction of Earth-observing satellites, remote sensing has become an important tool for analyzing the Earth's surface characteristics, and hence for supplying valuable information for hydrologic analysis. Owing to their ability to capture the spatial variations in hydro-meteorological variables, with a temporal resolution frequent enough to represent the dynamics of hydrologic processes, remote sensing techniques have significantly changed water resources assessment and management methodologies. Remote sensing techniques have been widely used to delineate surface water bodies, estimate meteorological variables like temperature and precipitation, estimate hydrological state variables like soil moisture and land surface characteristics, and estimate fluxes such as evapotranspiration. Today, near-real-time monitoring of floods and droughts, as well as irrigation management, is possible with the help of high resolution satellite data. This paper gives a brief overview of the potential applications of remote sensing in water resources.
Abstract:
Real-time object tracking is a critical task in many computer vision applications. Achieving rapid and robust tracking while handling changes in object pose and size, varying illumination and partial occlusion is a challenging task given the limited amount of computational resources. In this paper we propose a real-time object tracker in an l1 framework that addresses these issues. In the proposed approach, dictionaries containing templates of overlapping object fragments are created. The candidate fragments are sparsely represented in the dictionary fragment space by solving the l1-regularized least squares problem. The nonzero coefficients indicate the relative motion between the target and candidate fragments along with a fidelity measure. The final object motion is obtained by fusing the reliable motion information. The dictionary is updated based on the object likelihood map. The proposed tracking algorithm is tested on various challenging videos and found to outperform earlier approaches.
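A minimal sketch of the sparse-coding step is given below, assuming a dictionary whose columns are vectorized, overlapping fragment templates; the sizes, the regularization weight and the use of scikit-learn's Lasso solver are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy dictionary of vectorized fragment templates (sizes are made up here).
rng = np.random.default_rng(0)
n_pixels, n_templates = 256, 40
D = rng.standard_normal((n_pixels, n_templates))            # fragment dictionary
D /= np.linalg.norm(D, axis=0)                              # unit-norm atoms

candidate = D[:, 7] + 0.05 * rng.standard_normal(n_pixels)  # a candidate fragment

# l1-regularized least squares: the few nonzero coefficients point to the dictionary
# fragments (and hence relative motions) that best explain the candidate.
coder = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)
coder.fit(D, candidate)
support = np.flatnonzero(coder.coef_)
fidelity = np.linalg.norm(candidate - D @ coder.coef_)      # reconstruction residual
print("active fragments:", support, "reconstruction error:", round(fidelity, 3))
```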
Abstract:
Transaction processing is a key constituent of the IT workload of commercial enterprises (e.g., banks, insurance companies). Even today, in many large enterprises, transaction processing is done by legacy "batch" applications, which run offline and process accumulated transactions. Developers acknowledge the presence of multiple loosely coupled pieces of functionality within individual applications. Identifying such pieces of functionality (which we call "services") is desirable for the maintenance and evolution of these legacy applications. This is a hard problem, which enterprises grapple with, and one without satisfactory automated solutions. In this paper, we propose a novel static-analysis-based solution to the problem of identifying services within transaction-processing programs. We provide a formal characterization of services in terms of control-flow and data-flow properties, which is well-suited to the idioms commonly exhibited by business applications. Our technique combines program slicing with the detection of conditional code regions to identify services in accordance with our characterization. A preliminary evaluation, based on a manual analysis of three real business programs, indicates that our approach can be effective in identifying useful services from batch applications.
Abstract:
Building-integrated photovoltaic (BIPV) applications are gaining widespread popularity. The performance of any given BIPV system depends on the prevalent meteorological factors, site conditions and system characteristics. Investigations pertaining to the performance assessment of photovoltaic (PV) systems are generally confined to either controlled-environment chambers or computer-based simulation studies. Such investigations fall short of providing a realistic insight into how a PV system actually performs in real time. Solar radiation and the PV cell temperature are among the most crucial parameters affecting PV output. The current paper deals with the real-time performance assessment of a recently commissioned 5.25 kW BIPV system installed at the Center for Sustainable Technologies, Indian Institute of Science, Bangalore. The overall average system efficiency was found to be 6% for the period May 2011-April 2012. This paper provides a critical appraisal of PV system performance based on ground realities, particularly characteristic of tropical (moderate) regions such as Bangalore, India. (C) 2013 International Energy Initiative. Published by Elsevier Inc. All rights reserved.
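For context, the sketch below shows the standard monthly performance figures used in such assessments; the array area, energy yield and in-plane irradiation values are made up, so only the relations, not the reported 6% efficiency, are being reproduced.

```python
# Standard monthly PV performance figures; all numeric inputs below are hypothetical
# except the 5.25 kW rated capacity quoted in the abstract.
rated_power_kw    = 5.25     # installed BIPV capacity
array_area_m2     = 40.0     # hypothetical module area
energy_out_kwh    = 550.0    # hypothetical AC energy delivered in a month
insolation_kwh_m2 = 160.0    # hypothetical in-plane solar irradiation for the month

system_efficiency = energy_out_kwh / (insolation_kwh_m2 * array_area_m2)
yield_kwh_per_kwp = energy_out_kwh / rated_power_kw            # specific yield
performance_ratio = yield_kwh_per_kwp / insolation_kwh_m2      # PR, dimensionless
print(f"efficiency = {system_efficiency:.1%}, PR = {performance_ratio:.2f}")
```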
Abstract:
This work reports the processing-microstructure-property correlation of novel HA-BaTiO3-based piezobiocomposites, which demonstrate bone-mimicking functional properties. A series of composites of hydroxyapatite (HA) with varying amounts of piezoelectric BaTiO3 (BT) was optimally processed using a uniquely designed multistage spark plasma sintering (SPS) route. Transmission electron microscopy imaging during in situ heating provides complementary, real-time observation of the sintering behavior. Ultrafine grains (0.5 μm) of the HA and BT phases were predominantly retained in the SPSed samples. The experimental results revealed that the dielectric constant, AC conductivity, piezoelectric strain coefficient, compressive strength, and modulus values of HA-40 wt% BT closely resemble those of natural bone. The addition of 40 wt% BT enhances the long-crack fracture toughness, compressive strength, and modulus by 132%, 200%, and 165%, respectively, with respect to HA. This exceptional combination of functional properties potentially establishes the HA-40 wt% BT piezocomposite as a new-generation composite for orthopedic implant applications.
Abstract:
Recently, it has been shown that fusion of the estimates of a set of sparse recovery algorithms results in an estimate better than the best estimate in the set, especially when the number of measurements is very limited. Though these schemes provide better sparse signal recovery performance, their higher computational requirement makes them less attractive for low latency applications. To alleviate this drawback, in this paper we develop a progressive fusion based scheme for low latency applications in compressed sensing. In progressive fusion, the estimates of the participating algorithms are fused progressively according to the availability of the estimates. The availability of the estimates depends on the computational complexity of the participating algorithms and, in turn, on their latency requirement. Unlike other fusion algorithms, the proposed progressive fusion algorithm provides quick interim results and successive refinements during the fusion process, which is highly desirable in low latency applications. We analyse the developed scheme by providing sufficient conditions for improvement of the CS reconstruction quality and show its practical efficacy through numerical experiments using synthetic and real-world data. (C) 2013 Elsevier B.V. All rights reserved.
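The progressive fusion idea can be illustrated with the toy sketch below, which fuses support estimates as they arrive by refitting a least-squares solution on the growing union of supports; the interface and the toy problem are assumptions for illustration, not the scheme analysed in the paper.

```python
import numpy as np

def fuse(A, y, support):
    """Least-squares refit of y on the measurement columns in `support`
    (the usual fusion step over a union of estimated supports)."""
    x = np.zeros(A.shape[1])
    x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x

def progressive_fusion(A, y, estimate_stream, k):
    """Fuse sparse estimates as they become available, emitting an interim
    result after each one (hypothetical interface; estimates are support sets)."""
    union = np.array([], dtype=int)
    for supp in estimate_stream:                 # faster algorithms arrive first
        union = np.union1d(union, supp)
        x = fuse(A, y, union)
        keep = np.argsort(np.abs(x))[-k:]        # keep the k largest entries
        yield fuse(A, y, np.sort(keep))          # interim k-sparse estimate

# toy example: two participating algorithms with partially correct supports
rng = np.random.default_rng(1)
n, m, k = 50, 20, 3
x_true = np.zeros(n); x_true[[3, 17, 40]] = [1.0, -2.0, 1.5]
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
for i, x_hat in enumerate(progressive_fusion(A, y, [[3, 17, 5], [17, 40, 8]], k)):
    print(f"after estimate {i + 1}: error = {np.linalg.norm(x_hat - x_true):.3f}")
```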
Abstract:
Purpose: To extend the previously developed temporally constrained reconstruction (TCR) algorithm to allow real-time availability of three-dimensional (3D) temperature maps capable of monitoring MR-guided high intensity focused ultrasound (HIFU) applications. Methods: A real-time TCR (RT-TCR) algorithm is developed that only uses current and previously acquired undersampled k-space data from a 3D segmented EPI pulse sequence, with the image reconstruction done in a graphics processing unit implementation to overcome the computation burden. Simulated and experimental data sets of HIFU heating are used to evaluate the performance of the RT-TCR algorithm. Results: The simulation studies demonstrate that the RT-TCR algorithm has subsecond reconstruction time and can accurately measure HIFU-induced temperature rises of 20 °C in 15 s for 3D volumes of 16 slices (RMSE = 0.1 °C), 24 slices (RMSE = 0.2 °C), and 32 slices (RMSE = 0.3 °C). Experimental results in ex vivo porcine muscle demonstrate that the RT-TCR approach can reconstruct temperature maps with 192 x 162 x 66 mm 3D volume coverage, 1.5 x 1.5 x 3.0 mm resolution, and 1.2-s scan time with an accuracy of 0.5 °C. Conclusion: The RT-TCR algorithm offers an approach to obtaining large-coverage 3D temperature maps in real time for monitoring MR-guided high intensity focused ultrasound treatments. Magn Reson Med 71:1394-1404, 2014. (c) 2013 Wiley Periodicals, Inc.
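A highly simplified, single-coil 2D sketch of a temporally constrained reconstruction step is shown below; the gradient-descent solver, regularization weight and toy phantom are assumptions, and the actual RT-TCR implementation (3D, segmented EPI, GPU-based) is not reproduced here.

```python
import numpy as np

def tcr_frame(d, mask, x_prev, lam=0.1, n_iter=50, step=1.0):
    """One frame of a temporally constrained reconstruction, minimizing
    ||M F x - d||^2 + lam ||x - x_prev||^2 by gradient descent, where M is the
    k-space sampling mask and F the (orthonormal) 2D Fourier transform."""
    x = x_prev.copy()
    for _ in range(n_iter):
        residual = mask * np.fft.fft2(x, norm="ortho") - d
        grad = np.fft.ifft2(mask * residual, norm="ortho") + lam * (x - x_prev)
        x = x - step * grad
    return x

# toy usage: previous frame plus undersampled k-space of a slightly "heated" frame
rng = np.random.default_rng(2)
x_prev = np.ones((64, 64), dtype=complex)
x_true = x_prev.copy(); x_true[28:36, 28:36] += 0.3      # localized change
mask = (rng.random((64, 64)) < 0.3).astype(float)        # 30% of k-space sampled
d = mask * np.fft.fft2(x_true, norm="ortho")
x_rec = tcr_frame(d, mask, x_prev)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```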
Abstract:
Earth-abundant alternative chalcopyrite Cu2CoSnS4 (CCTS) thin films were deposited by a facile sol-gel process onto large substrates. The temperature dependence of the deposition process and of the desired phase formation was studied in detail. X-ray diffraction showed that the films transform completely from amorphous to polycrystalline, with textured structures for the stannite phase, and EDAX analysis showed nearly stoichiometric compositions of Cu:Co:Sn:S = 2.0:1.0:1.0:4.0. Morphological investigations revealed that CCTS films with larger grains, on the order of the film thickness, were synthesized at the higher temperature of 500 °C. The band gap was estimated to be 1.4 eV, optimal for application in photovoltaics. Devices with SLG/CCTS/Al geometry were fabricated for real-time demonstration of photoconductivity under AM 1.5G solar and 1064 nm infrared laser illumination. The photodetector showed a one-order current amplification, from ~1.9 x 10⁻⁶ A in the dark to 2.2 x 10⁻⁵ A and 9.8 x 10⁻⁶ A under AM 1.5G illumination and a 50 mW cm⁻² IR laser, respectively. The detector sensitivity, responsivity, external quantum efficiency, and gain were estimated as 4.2, 0.12 A/W, 14.74% and 14.77%, respectively, at 50 mW cm⁻² laser illumination. An on/off ratio of 2.5 showed that CCTS can be considered a potential absorber for low-cost photovoltaic applications.
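For reference, the snippet below applies the standard relations between the quoted photodetector figures of merit; small differences from the reported EQE arise from rounding of the 0.12 A/W responsivity value.

```python
# Standard photodetector figures of merit computed from the quantities quoted above.
h, c, q = 6.626e-34, 2.998e8, 1.602e-19    # Planck constant, speed of light, electron charge

i_light, i_dark = 9.8e-6, 1.9e-6           # IR photocurrent and dark current (A)
responsivity    = 0.12                     # reported responsivity (A/W)
wavelength_m    = 1064e-9                  # IR laser wavelength

sensitivity = (i_light - i_dark) / i_dark                 # ~4.2, as reported
eqe = responsivity * h * c / (q * wavelength_m)           # ~0.14, i.e. ~14%
print(f"sensitivity ~ {sensitivity:.1f}, EQE ~ {eqe * 100:.1f}%")
```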
Abstract:
In this paper, for the first time, we report the novel synthesis of a reduced graphene oxide (r-GO) dendrite-like nanomaterial. The proposed r-GO dendrite possesses multifunctional properties useful in various fields of sensing and separation. The dendrite was synthesized by chemical reactions in several steps. Initially, an r-GO sheet was conjugated with silane-group-modified magnetic nanoparticles, resulting in nanoparticle-decorated r-GO. This r-GO sheet was further reacted with a fresh r-GO sheet, resulting in the formation of a dendrite-type r-GO structure. The multifunctional behavior of this r-GO dendrite structure was studied by different methods. First, its magnetic properties were studied using a vibrating sample magnetometer (VSM), and the dendrite structure was found to show good magnetic susceptibility (180.2 emu/g). The proposed r-GO dendrite also shows very good antibacterial behavior against Escherichia coli and excellent electrochemical behavior towards the ferrocyanide probe molecule. In addition, it acts as a substrate for the synthesis of a molecularly imprinted polymer for the europium metal ion, a lanthanide. The proposed imprinted sensor shows very high selectivity and sensitivity for the europium metal ion (limit of detection = 0.019 μg L⁻¹) in aqueous as well as real samples. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
The correctness of a hard real-time system depends on its ability to meet all its deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a real-time scheduling class in the scheduler of an operating system (OS). Existing scheduler implementations in multicore systems that support both real-time and non-real-time tasks permit the execution of non-real-time tasks in all the cores with priorities lower than those of real-time tasks, but interrupts and softirqs associated with these non-real-time tasks can execute in any core with priorities higher than those of real-time tasks. As a result, the execution overhead of real-time tasks is quite large in these systems, which, in turn, affects their runtime. So that hard real-time tasks can be executed in such systems with minimal interference from other Linux tasks, we propose, in this paper, an integrated scheduler architecture, called SchedISA, which aims to considerably reduce the execution overhead of real-time tasks in these systems. To test the efficacy of the proposed scheduler, we implemented the partitioned earliest deadline first (P-EDF) scheduling algorithm in SchedISA on Linux kernel version 3.8 and conducted experiments on an Intel Core i7 processor with eight logical cores. We compared the execution overhead of real-time tasks in this implementation of SchedISA with that in SCHED_DEADLINE's P-EDF implementation, which executes real-time and non-real-time tasks concurrently on all the cores of the Linux OS. The experimental results show that the execution overhead of real-time tasks in SchedISA is considerably less than that in SCHED_DEADLINE. We believe that, with further refinement of SchedISA, the execution overhead of real-time tasks can be reduced to a predictable maximum, making it suitable for scheduling hard real-time tasks without affecting the CPU share of Linux tasks.
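As a sketch of the partitioning step that P-EDF relies on, the following first-fit assignment packs tasks onto cores by utilization; this is a textbook illustration under an implicit-deadline task model, not the SchedISA code.

```python
from math import isclose

def partition_edf_first_fit(tasks, n_cores):
    """First-fit-decreasing partitioning for P-EDF: assign each task (wcet, period)
    to the first core whose total utilization stays <= 1, since EDF is optimal
    on each core for implicit-deadline tasks."""
    cores = [[] for _ in range(n_cores)]
    util = [0.0] * n_cores
    for wcet, period in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        u = wcet / period
        for c in range(n_cores):
            if util[c] + u <= 1.0 or isclose(util[c] + u, 1.0):
                cores[c].append((wcet, period))
                util[c] += u
                break
        else:
            raise ValueError("task set is not schedulable by first-fit P-EDF")
    return cores, util

# toy task set: (worst-case execution time, period = deadline) pairs in milliseconds
tasks = [(2, 10), (3, 15), (5, 20), (1, 4), (4, 16)]
cores, util = partition_edf_first_fit(tasks, n_cores=2)
print("per-core utilization:", [round(u, 2) for u in util])
```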
Abstract:
Block-structured adaptive mesh refinement (AMR) techniques have been used to obtain numerical solutions for many scientific applications. Some block-structured AMR approaches focus on forming patches of non-uniform sizes, where the size of a patch can be tuned to the geometry of a region of interest. In this paper, we develop strategies for adaptive execution of block-structured AMR applications on GPUs, for hyperbolic directionally split solvers. While effective hybrid execution strategies exist for applications with uniform patches, our work considers efficient execution of non-uniform patches with different workloads. Our techniques include bin-packing work units to load balance GPU computations, adaptive asynchronism between CPU and GPU executions using a knapsack formulation, and scheduling communications for multi-GPU executions. Our experiments with synthetic and real data, for single-GPU and multi-GPU executions on Tesla S1070 and Fermi C2070 clusters, show that our strategies yield up to a 3.23x speedup over existing strategies.
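The bin-packing of non-uniform patch workloads can be illustrated with a first-fit-decreasing sketch; the per-patch cost model and the capacity of a work unit are placeholders, and the paper's actual packing heuristic may differ.

```python
def pack_work_units(patch_costs, capacity):
    """First-fit-decreasing bin packing of non-uniform AMR patch workloads into
    GPU work units of roughly equal cost (a sketch of the load-balancing idea)."""
    bins = []                                    # each bin: [remaining_capacity, [patch ids]]
    for idx, cost in sorted(enumerate(patch_costs), key=lambda p: p[1], reverse=True):
        for b in bins:
            if b[0] >= cost:                     # fits in an existing work unit
                b[0] -= cost
                b[1].append(idx)
                break
        else:                                    # open a new work unit (kernel launch)
            bins.append([capacity - cost, [idx]])
    return [b[1] for b in bins]

# hypothetical per-patch costs, e.g. proportional to the number of cells in each patch
patch_costs = [120, 35, 80, 60, 15, 100, 45]
print(pack_work_units(patch_costs, capacity=128))
```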
Abstract:
We present an analysis of the rate of sign changes in the discrete Fourier spectrum of a sequence. The sign changes of either the real or imaginary parts of the spectrum are considered, and the rate of sign changes is termed the spectral zero-crossing rate (SZCR). We show that the SZCR carries information pertaining to the locations of transients within the temporal observation window. We show duality with temporal zero-crossing rate analysis by expressing the spectrum of a signal as a sum of sinusoids with random phases. This extension leads to spectral-domain iterative filtering approaches that stabilize the spectral zero-crossing rate and improve the location estimates. The localization properties are compared with group-delay-based localization metrics in a stylized signal setting well known in the speech processing literature. We show applications to epoch estimation in voiced speech signals using the SZCR on the integrated linear prediction residue. The performance of the SZCR-based epoch localization technique is competitive with state-of-the-art epoch estimation techniques that are based on the average pitch period.
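A minimal computation of the SZCR is sketched below (single window, real part only, no iterative spectral filtering); the toy check at the end only illustrates the localization property stated above, with made-up signal parameters.

```python
import numpy as np

def spectral_zero_crossing_rate(x, part="real"):
    """Rate of sign changes along frequency in the DFT of x: a minimal form of
    the SZCR described above."""
    X = np.fft.rfft(x)
    s = X.real if part == "real" else X.imag
    signs = np.sign(s[np.abs(s) > 1e-12])        # ignore near-zero spectral samples
    return np.mean(signs[1:] != signs[:-1])      # fraction of adjacent sign flips

# toy check of the localization property: an impulse later in the observation
# window produces proportionally more spectral sign changes.
n = 512
early, late = np.zeros(n), np.zeros(n)
early[20], late[200] = 1.0, 1.0
print(spectral_zero_crossing_rate(early), spectral_zero_crossing_rate(late))
```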