915 results for software creation methodology


Relevance: 20.00%

Publisher:

Abstract:

This paper addresses the problem of selecting the optimal number of sensors and determining their placement in a given monitored area for multimedia surveillance systems. We propose to solve this problem by deriving a novel performance metric, a probability measure of accomplishing the surveillance task as a function of the set of sensors and their placement, and then using this measure to find the optimal set. The same measure can also be used to analyze the degradation in the system's performance when various sensors fail. We further build a surveillance system using the optimal set of sensors obtained with the proposed design methodology. Experimental results show the effectiveness of the proposed methodology in selecting the optimal set of sensors and their placement.
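
The abstract does not give the exact form of the probability measure; the sketch below is only a minimal illustration, assuming a hypothetical per-sensor coverage probability p[i][j] and independent misses, of how such a task-accomplishment metric could drive an exhaustive search over candidate placements.

# Illustrative sketch only: the paper's metric is task-specific and not given here.
# p[i][j] is an assumed probability that a sensor at candidate position i covers point j.
from itertools import combinations

def task_probability(subset, p, points):
    # Probability that every monitored point is covered by at least one sensor,
    # assuming independent misses (a simplifying assumption for illustration).
    prob = 1.0
    for j in points:
        miss = 1.0
        for i in subset:
            miss *= 1.0 - p[i][j]
        prob *= 1.0 - miss
    return prob

def best_placement(candidates, p, points, k):
    # Exhaustive search over all k-sensor subsets of the candidate positions.
    return max(combinations(candidates, k),
               key=lambda s: task_probability(s, p, points))

# Example with 4 candidate positions, 2 monitored points and a budget of 2 sensors.
p = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.6], [0.5, 0.4]]
print(best_placement(range(4), p, range(2), 2))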

Relevance: 20.00%

Publisher:

Abstract:

Today 80% of the content on the Web is in English, a language spoken by only 8% of the world's population and 5% of the Indian population. There is a wealth of useful content in the world's languages other than English that could be made available on the Internet but, for various reasons, most of it is not yet there. India alone has 18 officially recognized languages and scores of dialects. Although the medium of instruction for most higher education and research in India is English, a substantial amount of literature, including novels, textbooks and scholarly information, is being generated in the other languages of the country, and many e-governance initiatives are in the respective state languages. In the past, support for these languages in operating systems and software packages was not very encouraging. However, with the advent of Unicode, operating systems and software packages now support almost all the major scripted languages of the world. In the work reported in this paper, we explain the configuration changes needed for the Eprints.org software to store multilingual content and to create a multilingual user interface.
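
The specific EPrints configuration changes are described in the paper itself, not in this abstract; the tiny sketch below merely illustrates the point about Unicode, namely that text in different Indian scripts round-trips uniformly as UTF-8.

# Minimal illustration only (not the EPrints configuration): Kannada, Tamil and
# Hindi strings stored and recovered as UTF-8 without any script-specific handling.
titles = ["ಕನ್ನಡ ಶೀರ್ಷಿಕೆ", "தமிழ் தலைப்பு", "हिन्दी शीर्षक"]
for t in titles:
    encoded = t.encode("utf-8")
    assert encoded.decode("utf-8") == t
    print(len(t), "characters ->", len(encoded), "bytes")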

Relevance: 20.00%

Publisher:

Abstract:

This paper describes the efforts at the MILE lab, IISc, to create a 100,000-word database each in Kannada and Tamil for the design and development of online handwriting recognition. The data has been collected from over 600 users in order to capture variations in writing style. We describe features of the scripts and how the number of symbols was reduced so that the data can be trained effectively for recognition. The list of words includes all the characters, Kannada and Indo-Arabic numerals, punctuation and other symbols. A semi-automated tool is used for annotating the data from the stroke level to the word level; it segments each word into stroke groups and also acts as a validation mechanism for the segmentation. The tool displays the strokes, stroke groups and aksharas of a word, and can therefore be used to study different writing styles and delayed strokes and to assign quality tags to the words. It is currently being used for annotating the Tamil and Kannada data, and its output is stored in a standard XML format.
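
The abstract does not reproduce the XML schema used by the annotation tool; the hypothetical layout below (element and attribute names assumed) only illustrates how word, stroke-group and stroke annotations could be nested.

# Hypothetical annotation layout, for illustration only.
import xml.etree.ElementTree as ET

word = ET.Element("word", transcription="ಕನ್ನಡ", quality="good")
group = ET.SubElement(word, "strokeGroup", akshara="ಕ")
stroke = ET.SubElement(group, "stroke", id="0", delayed="false")
stroke.text = "12,34 15,40 19,47"   # sampled (x, y) pen coordinates
print(ET.tostring(word, encoding="unicode"))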

Relevance: 20.00%

Publisher:

Abstract:

Software transactional memory (STM) has been proposed as a promising programming paradigm for shared-memory multi-threaded programs, as an alternative to conventional lock-based synchronization primitives. Typical STM implementations employ a conflict detection scheme that works with a uniform access granularity, tracking shared data accesses either at the word/cache-line level or at the object level. It is well known that a single fixed access-tracking granularity cannot meet the conflicting goals of reducing false conflicts without adversely impacting concurrency. A fine granularity, while improving concurrency, can hurt performance due to lock aliasing, lock validation overheads and additional cache pressure; a coarse granularity can hurt performance through reduced concurrency. Thus, in general, a fixed or uniform-granularity access tracking (UGAT) scheme is application-unaware and rarely matches the access patterns of an individual application or of parts of an application, leading to sub-optimal performance for different parts of the application(s). To mitigate the disadvantages of the UGAT scheme, this paper proposes a Variable Granularity Access Tracking (VGAT) scheme: a compiler-based approach in which inter-procedural whole-program static analysis selects the access-tracking granularity for different shared data structures of the application, based on the application's data access pattern. We describe our prototype VGAT scheme, using TL2 as the underlying STM implementation. Our experimental results show that the VGAT-STM scheme can improve the performance of STAMP benchmark applications by between 1.87% and 21.2%.
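
The actual VGAT selection is done by inter-procedural static analysis inside the compiler; the toy sketch below, with an assumed access-pattern summary and an arbitrary threshold, only illustrates the kind of per-data-structure decision the scheme makes.

# Conceptual sketch only: choose an access-tracking granularity for a shared
# data structure from a summary of how transactions access it.  The profile
# fields and the threshold are assumptions for illustration, not the paper's.

def choose_granularity(profile, field_fraction_threshold=0.1):
    # profile: {"fields_accessed_per_txn": float, "total_fields": int}
    fraction = profile["fields_accessed_per_txn"] / profile["total_fields"]
    if fraction < field_fraction_threshold:
        return "word"     # fine-grained: transactions touch mostly disjoint fields
    return "object"       # coarse-grained: most of the object is touched anyway

print(choose_granularity({"fields_accessed_per_txn": 2, "total_fields": 64}))
print(choose_granularity({"fields_accessed_per_txn": 30, "total_fields": 64}))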

Relevance: 20.00%

Publisher:

Abstract:

A reliable method for estimating the service life of a structural element is a prerequisite for service life design. A new methodology is proposed for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of the reinforcement. The methodology takes into consideration both the fuzzy and the random uncertainties associated with the variables involved, using a hybrid method that combines the vertex method of fuzzy set theory with Monte Carlo simulation. It is also shown how to determine, with minimal computational effort, bounds on the characteristic value of the failure probability from the resulting fuzzy set for failure probability. Using the methodology, these bounds have been determined for a reinforced concrete T-beam bridge girder. The service life of the structural element is then determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and for making decisions regarding in-service inspections.
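
The paper's limit state, distributions and membership functions are not given in the abstract; the sketch below only illustrates the hybrid idea at a single alpha-cut: fuzzy variables are treated as intervals whose vertex combinations are enumerated, random variables are sampled by Monte Carlo, and the extreme results bound the failure probability. All numbers and the simplified corrosion-initiation criterion are assumptions.

# Minimal sketch of the vertex-method/Monte-Carlo hybrid, not the paper's model.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def failure_probability(cover_mm, threshold, n=100_000):
    # Random variables (assumed distributions): surface chloride level and the
    # fraction of it reaching the rebar depth.
    surface = rng.normal(3.0, 0.6, n)        # % by weight of binder
    attenuation = rng.normal(0.5, 0.1, n)    # fraction reaching the rebar
    chloride_at_rebar = surface * attenuation * (50.0 / cover_mm)
    return np.mean(chloride_at_rebar > threshold)   # corrosion initiation "failure"

# Fuzzy variables at one alpha-cut, represented as [low, high] intervals.
cover_interval = (40.0, 60.0)       # concrete cover, mm
threshold_interval = (0.9, 1.2)     # critical chloride content

pf = [failure_probability(c, t)
      for c, t in itertools.product(cover_interval, threshold_interval)]
print(f"failure probability bounds: [{min(pf):.4f}, {max(pf):.4f}]")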

Relevance: 20.00%

Publisher:

Abstract:

Many areas of scientific research now demand very large amounts of compute power, driven by requirements of precision and sophistication as well as by economic factors, so advanced research in high-performance computing has become inevitable. The basic principle of sharing and collaborative work by geographically separated computers has been known by several names, such as metacomputing, scalable computing, cluster computing and Internet computing, and has today metamorphosed into what is known as grid computing. This paper gives an overview of grid computing and compares various grid architectures. We show the role that patterns can play in architecting complex systems and provide a pragmatic reference to a set of well-engineered patterns that the practicing developer can apply when crafting his or her own applications. We are not aware of a pattern-oriented approach having been applied to develop and deploy a grid. Many grid frameworks have been built or are in the process of becoming functional; they differ in one functionality or another, although the basic principle on which they are built is the same. Despite this, there are no standard requirements listed for building a grid. Since a grid is a very complex system, a standard Software Architecture Specification (SAS) is essential, and we attempt to develop one for use by any grid user or developer. Specifically, we analyze the grid using an object-oriented approach and present the architecture using UML. The paper proposes the use of patterns at all levels (analysis, design and architecture) of grid development.

Relevance: 20.00%

Publisher:

Abstract:

This paper presents a method for minimizing the sum of squared voltage deviations by a least-squares minimization technique, thereby improving the voltage profile in a given system by adjusting control variables such as transformer tap positions, reactive power injections of VAR sources and generator excitations. The control variables and the dependent variables are related by a sensitivity matrix J. Linear programming is used to calculate voltage increments that minimize transmission losses. The active and reactive power optimization sub-problems are solved separately, taking advantage of the loose coupling between the two problems. The proposed algorithm is applied to the IEEE 14- and 30-bus systems and numerical results are presented. The method is computationally fast and promises to be suitable for implementation in real-time dispatch centres.
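
The least-squares step can be pictured as solving dV = J du for the control adjustments du that best cancel the observed deviations. The sketch below uses an arbitrary 3x3 sensitivity matrix and deviation vector, not data from the IEEE test systems.

# Illustrative least-squares correction step (numbers assumed, not from the paper).
import numpy as np

J = np.array([[0.8, 0.1, 0.05],
              [0.2, 0.9, 0.10],
              [0.1, 0.3, 0.70]])            # dV = J @ du (assumed sensitivities)
deviation = np.array([0.04, -0.03, 0.02])   # bus voltage deviations from nominal (p.u.)

du, *_ = np.linalg.lstsq(J, -deviation, rcond=None)
print("control adjustments:", du)
print("residual deviations:", deviation + J @ du)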

Relevance: 20.00%

Publisher:

Abstract:

We studied the feasibility of measuring Higgs pair creation at a photon linear collider. From the sensitivity to the anomalous self-coupling of the Higgs boson, the optimum γγ collision energy was found to be around 270 GeV for a Higgs mass of 120 GeV/c². We found that large backgrounds such as γγ → W⁺W⁻, ZZ and bb̄bb̄ can be suppressed if the tracks are correctly assigned to their parent partons, and that Higgs pair events can then be observed with a statistical significance of about 5σ by operating the photon linear collider for 5 years.

Relevance: 20.00%

Publisher:

Abstract:

A methodology using sensitivity analysis is proposed to measure the effective permeability, which includes the interaction of the resin and the reinforcement. Initially, mold-filling experiments were performed under isothermal conditions on the test specimen, and the positions of the flow front were tracked over time using a flow visualization method. The mold-filling experiments were then simulated using commercial software to obtain the flow-front positions over time at the process conditions used in the experiments. Several iterations were performed with different trial values of the permeability until the experimentally tracked and the simulated flow-front positions matched. Finally, the permeability value thus obtained was validated by comparing flow-front positions from experiments at different process conditions with those obtained by simulating the same experiments. In this study, woven roving and chopped strand mats of E-glass fiber and unsaturated polyester resin were used for the experiments. The results showed that the measured permeabilities were consistent across varying process conditions. POLYM. COMPOS., 2012. (c) 2012 Society of Plastics Engineers
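
The paper matches flow fronts from a full mold-filling simulation; the toy sketch below conveys the same iterative idea with a closed-form 1D Darcy flow-front model, x_f(t) = sqrt(2*K*dP*t/(phi*mu)), sweeping trial permeabilities until the mismatch with (assumed) measured positions is minimized. All numbers are illustrative assumptions.

# Toy illustration of fitting an effective permeability by matching flow fronts.
import numpy as np

dP, mu, phi = 1.0e5, 0.3, 0.5                # injection pressure (Pa), viscosity (Pa.s), porosity
times = np.array([5.0, 10.0, 20.0, 40.0])    # s
measured_front = np.array([0.026, 0.037, 0.052, 0.074])   # m (assumed "experiment")

def simulated_front(K, t):
    # 1D Darcy flow under constant injection pressure.
    return np.sqrt(2.0 * K * dP * t / (phi * mu))

# Sweep candidate permeabilities and keep the one minimizing the mismatch.
candidates = np.logspace(-12, -9, 400)       # m^2
errors = [np.sum((simulated_front(K, times) - measured_front) ** 2) for K in candidates]
K_eff = candidates[int(np.argmin(errors))]
print(f"effective permeability ~ {K_eff:.2e} m^2")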

Relevance: 20.00%

Publisher:

Abstract:

A three-dimensional digital model of a representative human kidney is needed for a surgical simulator capable of simulating laparoscopic surgery involving the kidney. Buying a three-dimensional computer model of a representative human kidney, or reconstructing one from an image sequence using commercial software, both involve (sometimes significant amounts of) money. In this paper, the author shows that a three-dimensional surface model of a human kidney can be obtained by making use of images from the Visible Human Data Set and a few free software packages (ImageJ, ITK-SNAP and MeshLab in particular). The images from the Visible Human Data Set and the software packages used here cost nothing. Hence, extracting the geometry of a representative human kidney in this way, as illustrated in the present work, could be a free alternative to the use of expensive commercial software or to the purchase of a digital model.

Relevance: 20.00%

Publisher:

Abstract:

The experimental implementation of a quantum algorithm requires the decomposition of unitary operators. Here we treat unitary-operator decomposition as an optimization problem and use a genetic algorithm, a global-optimization method inspired by nature's evolutionary process, for the decomposition. We apply this method to NMR quantum information processing and find a probabilistic way of performing universal quantum computation using global hard pulses. We also demonstrate the efficient creation of the singlet state (a special type of Bell state) directly from thermal equilibrium, using an optimum sequence of pulses. © 2012 American Physical Society.
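
The paper optimizes multi-qubit NMR pulse sequences; the sketch below is only a toy single-qubit analogue of the idea, using a genetic algorithm to search for three Euler angles (Rz-Rx-Rz) whose product approximates a target unitary (a Hadamard gate here). Population size, mutation strength and the fitness measure are illustrative assumptions.

# Toy genetic-algorithm search for a unitary decomposition (illustration only).
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
target = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

def rot(axis, theta):
    # Rotation exp(-i*theta/2 * axis) about a Pauli axis.
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

def fidelity(angles):
    U = rot(Z, angles[2]) @ rot(X, angles[1]) @ rot(Z, angles[0])
    return abs(np.trace(target.conj().T @ U)) / 2   # |Tr(V^dag U)|/2, global phase ignored

pop = rng.uniform(0, 2 * np.pi, (60, 3))
for _ in range(200):
    fit = np.array([fidelity(p) for p in pop])
    parents = pop[np.argsort(fit)[-20:]]                     # selection of the fittest
    a = parents[rng.integers(0, 20, 40)]
    b = parents[rng.integers(0, 20, 40)]
    children = np.where(rng.random((40, 3)) < 0.5, a, b)     # uniform crossover
    children += rng.normal(0, 0.1, children.shape)           # mutation
    pop = np.vstack([parents, children])

best = max(pop, key=fidelity)
print("best fidelity:", round(float(fidelity(best)), 4))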