964 results for Support operations
Abstract:
Very Long Instruction Word (VLIW) architectures exploit instruction-level parallelism (ILP) with the help of the compiler to achieve higher instruction throughput with minimal hardware. However, control and data dependencies between operations limit the available ILP, which not only hinders the scalability of VLIW architectures but also results in code size expansion. Although speculation and predicated execution mitigate ILP limitations due to control dependencies to a certain extent, they increase hardware cost and exacerbate code size expansion. Simultaneous multistreaming (SMS) can significantly improve operation throughput by allowing interleaved execution of operations from multiple instruction streams. In this paper we study SMS for VLIW architectures and quantify its benefits using a case study of the MPEG-2 video decoder. We also propose the notion of virtual resources for VLIW architectures, which decouple architectural resources (resources exposed to the compiler) from microarchitectural resources, to limit code size expansion. Our results for a VLIW architecture demonstrate that: (1) SMS delivers much higher throughput than that achieved by speculation and predicated execution; (2) adding speculation and predicated execution support on top of SMS increases performance by only about 12% on average, a minor gain that might not warrant the additional hardware complexity involved; and (3) the notion of virtual resources is very effective in reducing no-operations (NOPs), and consequently code size, with little or no impact on performance.
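To make the code-size effect concrete, here is a toy Python sketch of NOP padding in a fixed-width VLIW instruction word; the 4-slot word and the sample schedule are invented for illustration and do not reflect the paper's actual architecture. Exposing fewer (virtual) slots to the compiler than the microarchitecture implements would shrink the padded encoding without changing the useful work.

```python
# Toy illustration of VLIW code-size expansion caused by NOP padding.
# The 4-slot instruction word and the schedule below are hypothetical.

SLOTS = 4  # architectural issue slots exposed to the compiler

# Operations the compiler managed to schedule in each cycle;
# dependencies frequently leave most slots unfilled.
schedule = [
    ["add", "mul"],
    ["load"],
    ["add", "sub", "store"],
    ["branch"],
]

def pack_words(schedule, slots):
    """Pad every cycle to a full instruction word with NOPs."""
    return [ops + ["nop"] * (slots - len(ops)) for ops in schedule]

words = pack_words(schedule, SLOTS)
useful = sum(len(ops) for ops in schedule)
total = len(words) * SLOTS
print(f"useful ops: {useful}/{total} encoded slots "
      f"({100 * (total - useful) / total:.0f}% NOP overhead)")
```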
Abstract:
In this work, we evaluate the performance of a real-world image processing application that uses a cross-correlation algorithm to compare a given image with a reference one. The algorithm processes individual images represented as 2-dimensional matrices of single-precision floating-point values using O(n^4) operations involving dot products and additions. We implement this algorithm on an nVidia GTX 285 GPU using CUDA, and also parallelize it for the Intel Xeon (Nehalem) and IBM Power7 processors, using both manual and automatic techniques. Pthreads and OpenMP with SSE and VSX vector intrinsics are used for the manually parallelized version, while a state-of-the-art optimization framework based on the polyhedral model is used for automatic compiler parallelization and optimization. The performance of this algorithm on the nVidia GPU suffers from: (1) a smaller shared memory, (2) unaligned device memory access patterns, (3) expensive atomic operations, and (4) weaker single-thread performance. On commodity multi-core processors, the application dataset is small enough to fit in caches, and when parallelized using a combination of task and short-vector data parallelism (via SSE/VSX) or through fully automatic optimization from the compiler, the application matches or beats the performance of the GPU version. The primary reasons for better multi-core performance include larger and faster caches, higher clock frequency, higher on-chip memory bandwidth, and better compiler optimization and support for parallelization. The best performing versions on the Power7, Nehalem, and GTX 285 run in 1.02 s, 1.82 s, and 1.75 s, respectively. These results conclusively demonstrate that, under certain conditions, it is possible for a FLOP-intensive structured application running on a multi-core processor to match or even beat the performance of an equivalent GPU version.
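As a reference point for the computation being benchmarked, the following is a minimal NumPy sketch of the naive O(n^4) dense cross-correlation pattern; the array sizes and the absence of normalization are assumptions, not the authors' exact formulation.

```python
import numpy as np

def cross_correlate(image, reference):
    """Slide `reference` over `image` and accumulate a dot product
    at every offset -- the O(n^4) dense pattern from the abstract."""
    n, m = image.shape
    rn, rm = reference.shape
    out = np.zeros((n - rn + 1, m - rm + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = image[i:i + rn, j:j + rm]
            out[i, j] = np.sum(window * reference)  # dot product + add
    return out

# Example with arbitrary sizes: compare a small patch to an image.
img = np.random.rand(64, 64).astype(np.float32)
ref = np.random.rand(16, 16).astype(np.float32)
scores = cross_correlate(img, ref)
print(scores.shape)  # (49, 49)
```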
Abstract:
In this paper, an approach to enhance Extra High Voltage (EHV) transmission system distance protection is presented. The scheme depends on the apparent impedance seen by the distance relay during the disturbance. In a distance relay, the impedance seen at the relay location is calculated from the fundamental frequency component of the voltage and current signals. Support Vector Machines (SVMs), a learning-by-example technique, are employed to discriminate among zone settings (Zone-1, Zone-2, and Zone-3) using the signals available to the relay. Studies on a 265-bus system, an equivalent of the practical Indian Western grid, are presented to illustrate the proposed scheme.
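A sketch of the underlying idea: compute the apparent impedance from fundamental-frequency voltage and current phasors, then let an SVM discriminate among the zones on the R-X plane. The training data, zone boundaries, and feature choice below are hypothetical stand-ins for the paper's 265-bus study.

```python
import numpy as np
from sklearn.svm import SVC

# Apparent impedance seen by a distance relay: Z = V / I using
# fundamental-frequency phasors (represented as complex numbers).
def apparent_impedance(v_phasor, i_phasor):
    z = v_phasor / i_phasor
    return z.real, z.imag  # (R, X) on the impedance plane

# Hypothetical training set: (R, X) points labelled with the zone
# (0 = Zone-1, 1 = Zone-2, 2 = Zone-3) they should fall into.
rng = np.random.default_rng(0)
R = rng.uniform(0, 30, 300)
X = rng.uniform(0, 60, 300)
zones = np.digitize(X, bins=[20, 40])  # toy zone boundaries
clf = SVC(kernel="rbf", C=10.0).fit(np.column_stack([R, X]), zones)

# Classify the impedance seen during a disturbance.
r, x = apparent_impedance(66e3 + 0j, 1200 - 800j)
print("predicted zone:", clf.predict([[r, x]])[0] + 1)
```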
Abstract:
In this paper, the reduced level of rock at Bangalore, India, is derived from data for 652 boreholes in an area covering 220 sq. km. To predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of the rock depth, ordinary kriging and Support Vector Machine (SVM) models have been developed. In ordinary kriging, the knowledge of the semivariogram of the reduced level of rock from 652 points in Bangalore is used to predict the reduced level of rock at any point in the subsurface of Bangalore where field measurements are not available. A cross-validation (Q1 and Q2) analysis is also done for the developed ordinary kriging model. The SVM, a type of learning machine based on statistical learning theory, performs regression by introducing an ε-insensitive loss function and has been used to predict the reduced level of rock from a large set of data. A comparison between the ordinary kriging and SVM models demonstrates that the SVM is superior to ordinary kriging in predicting rock depth.
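The ε-insensitive regression described above is what scikit-learn's SVR implements; the following minimal sketch uses synthetic coordinates and depths as stand-ins for the 652 borehole records, which are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for borehole records: easting/northing (m)
# and the measured reduced level of rock at each borehole.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 15000, size=(652, 2))
depth = 900 + 0.002 * coords[:, 0] + rng.normal(0, 3, 652)

# epsilon sets the width of the insensitive tube: residuals smaller
# than epsilon incur no loss, which gives SVR its sparse solution.
model = make_pipeline(StandardScaler(),
                      SVR(kernel="rbf", C=100.0, epsilon=0.5))
model.fit(coords, depth)

# Predict the reduced level of rock where no borehole exists.
print(model.predict([[7500.0, 4200.0]]))
```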
Abstract:
Energy plays a prominent role in human society. As a result of technological and industrial development, the demand for energy is rapidly increasing. Existing power sources, which are mainly fossil fuel based, are leaving an unacceptable legacy of waste and pollution, apart from a diminishing stock of fuels. Hence, the focus has now shifted to large-scale propagation of renewable energy. Renewable energy technologies are clean sources of energy that have a much lower environmental impact than conventional energy technologies. Solar energy is one such renewable energy; most renewable energy comes either directly or indirectly from the sun. Estimation of the solar energy potential of a region requires detailed solar radiation climatology, and it is necessary to collect extensive radiation data of high accuracy covering all climatic zones of the region. In this regard, a decision support system (DSS) would help in estimating solar energy potential considering the region's energy requirement. This article explains the design and implementation of a DSS for assessment of solar energy. The DSS, with executive information systems and reporting tools, helps to tap vast data resources and deliver information. The main hypothesis is that this tool can form the core of a practical methodology that will remain resilient over time and can be used by decision-making bodies to assess various scenarios. It also offers means of entering, accessing, and interpreting the information for the purpose of sound decision making.
Abstract:
Fully structured and mature open source spatial and temporal analysis technology seems to be the official carrier of the future for planning of natural resources, especially in developing nations. This technology has gained enormous momentum because of its technical superiority, affordability, and ability to join expertise from all sections of society. Sustainable development of a region depends on the integrated planning approaches adopted in decision making, which require timely and accurate spatial data. With increased developmental programmes, the need for an appropriate decision support system has grown in order to analyse and visualise decisions associated with the spatial and temporal aspects of natural resources. In this regard, Geographic Information Systems (GIS) along with remote sensing data support applications that involve spatial and temporal analysis on digital thematic maps and remotely sensed images. Open source GIS would enable wide-scale applications involving decisions at various hierarchical levels (for example, from village panchayat to planning commission) on economic viability and social acceptance, apart from technical feasibility. GRASS (Geographic Resources Analysis Support System, http://wgbis.ces.iisc.ernet.in/grass) is an open source GIS that works on the Linux platform (freeware), but most of its operations are driven by command-line arguments, necessitating a user-friendly and cost-effective graphical user interface (GUI). Keeping these aspects in mind, the Geographic Resources Decision Support System (GRDSS) has been developed with functionality such as raster, topological vector, image processing, statistical analysis, geographical analysis, and graphics production. It operates through a GUI developed in Tcl/Tk (Tool Command Language / Toolkit) under Linux, as well as through a shell in X-Windows. GRDSS includes options such as import/export of different data formats, display, digital image processing, map editing, raster analysis, vector analysis, point analysis, and spatial query, which are required for regional planning tasks such as watershed analysis and landscape analysis. It is customised to the Indian context with an option to extract individual bands from IRS (Indian Remote Sensing Satellites) data, which is in BIL (Band Interleaved by Lines) format, as sketched below. The integration of PostgreSQL (a freeware) into GRDSS provides an efficient database management system.
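Band extraction from a BIL raster is a simple strided read. The sketch below assumes 8-bit samples and placeholder scene dimensions; the real values must come from the IRS image header.

```python
import numpy as np

def extract_band(path, rows, cols, bands, band_index, dtype=np.uint8):
    """Pull a single band out of a Band-Interleaved-by-Line (BIL)
    raster: each image line stores one row for band 0, then band 1,
    and so on.  Dimensions and sample type come from the header."""
    data = np.fromfile(path, dtype=dtype)
    data = data.reshape(rows, bands, cols)  # line-major, band-interleaved
    return data[:, band_index, :]

# Usage (placeholder dimensions for an IRS scene):
# band3 = extract_band("scene.bil", rows=2500, cols=2500,
#                      bands=4, band_index=2)
```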
Abstract:
A Spatial Decision Support System (SDSS) assists in strategic decision-making activities involving spatial and temporal variables, which helps in regional planning. WEPA is an SDSS designed for the spatial assessment of wind potential. A wind energy system transforms the kinetic energy of the wind into mechanical or electrical energy that can be harnessed for practical use. Wind energy can diversify the economies of rural communities, adding to the tax base and providing new types of income. Wind turbines can add a new source of property value in rural areas that have a hard time attracting new industry. Wind speed is an extremely important parameter for assessing the amount of energy a wind turbine can convert to electricity: the energy content of the wind varies with the cube (the third power) of the average wind speed. Estimation of the wind power potential of a site is the most important requirement for selecting a site for the installation of a wind electric generator and for evaluating projects in economic terms. It is based on data on the wind frequency distribution at the site, collected from a meteorological mast consisting of a wind anemometer and a wind vane, together with spatial parameters (such as the area available for setting up a wind farm, the landscape, etc.). The wind resource is governed by the climatology of the region concerned and has large variability with respect to space (spatial expanse) and time (season) at any fixed location. Hence, wind resource surveys and spatial analysis constitute vital components of programmes for exploiting wind energy. The SDSS for assessing the wind potential of a region/location is designed with user-friendly GUIs (Graphical User Interfaces) using VB as the front end and an MS Access database as the back end. Validation and pilot testing of the WEPA SDSS have been done with data collected for 45 locations in Karnataka, based on primary data at selected locations and data collected from the meteorological observatories of the India Meteorological Department (IMD). Wind energy and its characteristics have been analysed for these locations to generate user-friendly reports and spatial maps. The Energy Pattern Factor (EPF) and power densities are computed for sites with hourly wind data; with the knowledge of the EPF and the mean wind speed, the mean power density is computed for locations with only monthly data (see the sketch below). Wind energy conversion systems would be most effective in these locations from May to August. The analyses show that the coastal and dry arid zones in Karnataka have good wind potential, which, if exploited, would help local industries, coconut and areca plantations, and agriculture. Pre-monsoon availability of wind energy would help in irrigating these orchards, making wind energy a desirable alternative.
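The quantities named above follow standard wind-resource relations: power density scales with the cube of wind speed, and the EPF links the mean of the cubes to the cube of the mean. A sketch, with the air density and the sample series assumed:

```python
import numpy as np

RHO = 1.225  # air density in kg/m^3 at sea level (assumed)

def power_density(speeds):
    """Mean wind power density (W/m^2) from a speed series (m/s):
    P = 0.5 * rho * mean(v^3), reflecting the cubic dependence."""
    return 0.5 * RHO * np.mean(speeds ** 3)

def energy_pattern_factor(speeds):
    """EPF = mean(v^3) / mean(v)^3; given only a mean speed, the
    mean power density is 0.5 * rho * EPF * v_mean^3."""
    return np.mean(speeds ** 3) / np.mean(speeds) ** 3

# Toy hourly series (Weibull-shaped, as wind speeds typically are).
hourly = np.random.default_rng(2).weibull(2.0, 720) * 6.0
epf = energy_pattern_factor(hourly)
print(f"EPF = {epf:.2f}, "
      f"power density = {power_density(hourly):.1f} W/m^2")

# From monthly-mean data alone (v_mean plus an EPF estimate), the
# same relation recovers the mean power density:
v_mean = hourly.mean()
print(0.5 * RHO * epf * v_mean ** 3)  # equals power_density(hourly)
```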
Abstract:
We propose a randomized algorithm for large-scale SVM learning which solves the problem by iterating over random subsets of the data. Crucial to the algorithm's scalability is the size of the subsets chosen. In the context of text classification we show that, by using ideas from random projections, a sample size of O(log n) can be used to obtain a solution which is close to the optimal with high probability. Experiments on synthetic and real-life data sets demonstrate that the algorithm scales up SVM learners without loss in accuracy.
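The paper's exact sampling and convergence criteria are not reproduced here, but the shape of the loop, training on O(log n)-sized random subsets instead of the full data, can be sketched as follows; the subset scoring and keep-the-best rule are simplifications of this sketch's own making.

```python
import numpy as np
from sklearn.svm import LinearSVC

def randomized_svm(X, y, sample_size, iterations=20, seed=0):
    """Train on random subsets and keep the best-scoring model --
    a simplified stand-in for iterating over O(log n) samples."""
    rng = np.random.default_rng(seed)
    best, best_acc = None, -1.0
    for _ in range(iterations):
        idx = rng.choice(len(X), size=sample_size, replace=False)
        clf = LinearSVC().fit(X[idx], y[idx])
        acc = clf.score(X, y)  # cheap proxy for validation accuracy
        if acc > best_acc:
            best, best_acc = clf, acc
    return best, best_acc

# Toy usage: the subset size grows like log n rather than n.
n = 20000
X = np.random.default_rng(1).normal(size=(n, 50))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
m = int(200 * np.log(n))  # ~2000 points instead of 20000
model, acc = randomized_svm(X, y, sample_size=m)
print(m, acc)
```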
Abstract:
This paper presents an approach for identifying the faulted line section and fault location on transmission systems using support vector machines (SVMs) for diagnosis/post-fault analysis purposes. Power system disturbances are often caused by faults on transmission lines. When a fault occurs on a transmission system, the protective relay detects the fault and initiates the tripping operation, which isolates the affected part from the rest of the power system. Based on the fault section identified, rapid and corrective restoration procedures can then be taken to minimize the power interruption and limit the impact of the outage on the system. The approach is particularly important for post-fault diagnosis of any mal-operation of relays following a disturbance in a neighboring line connected to the same substation. This may help in improving the fault monitoring/diagnosis process, thus assuring secure operation of the power system. In this paper we compare SVMs with radial basis function neural networks (RBFNNs) on data sets corresponding to different faults on a transmission system. Classification and regression accuracy is reported for both strategies. Studies on a practical 24-bus equivalent EHV transmission system of the Indian Southern region are presented, indicating the improved generalization achieved with large-margin classifiers and the efficacy of the chosen model.
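A minimal sketch of the two tasks paired in the paper: an SVM classifier for the faulted section and an SVM regressor for the fault distance. The features and targets below are fabricated placeholders; the actual study uses fault data from the 24-bus Indian Southern region system.

```python
import numpy as np
from sklearn.svm import SVC, SVR

# Hypothetical post-fault features (e.g. phasor magnitudes) with a
# faulted-section label and a fault-distance target per record.
rng = np.random.default_rng(3)
features = rng.normal(size=(500, 6))
section = rng.integers(0, 4, 500)    # which line section faulted
distance = rng.uniform(0, 100, 500)  # fault distance along the line (km)

# Classification for the faulted section, regression for the location.
section_clf = SVC(kernel="rbf", C=10.0).fit(features, section)
location_reg = SVR(kernel="rbf", C=10.0, epsilon=1.0).fit(features, distance)

x = features[:1]  # a new disturbance record
print("section:", section_clf.predict(x)[0],
      "distance (km):", round(float(location_reg.predict(x)[0]), 1))
```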