969 results for level set method
Abstract:
A system for temporal data mining includes a computer readable medium having an application configured to receive at an input module a temporal data series having events with start times and end times, a set of allowed dwelling times and a threshold frequency. The system is further configured to identify, using a candidate identification and tracking module, one or more occurrences in the temporal data series of a candidate episode and increment a count for each identified occurrence. The system is also configured to produce at an output module an output for those episodes whose count of occurrences results in a frequency exceeding the threshold frequency.
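The counting step described above can be pictured with a small sketch: scan the time-ordered events, drop those whose dwelling time falls outside the allowed range, and count non-overlapped serial occurrences of a candidate episode. Everything below (the tuple event format, the names count_episode/frequent, the span argument) is a hypothetical illustration, not the patented system's actual modules.

```python
def count_episode(events, episode, allowed_dwell):
    """Count non-overlapped serial occurrences of `episode` (a sequence of
    event labels) in a time-ordered series of (label, start, end) tuples,
    ignoring events whose dwelling time (end - start) is outside the
    allowed range for their label."""
    count, pos = 0, 0
    for label, start, end in events:
        lo, hi = allowed_dwell.get(label, (0.0, float("inf")))
        if not (lo <= end - start <= hi):
            continue  # dwelling time not allowed for this event type
        if label == episode[pos]:
            pos += 1
            if pos == len(episode):  # completed one occurrence
                count += 1
                pos = 0
    return count

def frequent(events, episode, allowed_dwell, threshold_freq, span):
    """Report the episode only if its occurrence frequency over the
    observation span exceeds the threshold frequency."""
    return count_episode(events, episode, allowed_dwell) / span > threshold_freq
```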
Abstract:
This paper reports the performance of discharge plasma in filtered engine exhaust treatment. Electric discharge plasma offers advantages over its counterparts as it is cost effective, has low capital and operation costs, yields salable by-products, and can be integrated with existing systems. The paper deals with the removal of NOX emissions from diesel exhaust by electric discharge plasma. For the treatment of diesel exhaust, a new reactor geometry referred to as a cross-flow dielectric barrier discharge (DBD) reactor has been used, in which the exhaust gas flow is perpendicular to the corona electrode of the wire-cylinder reaction chamber. This reactor is used to treat the actual exhaust of a 3.75 kW diesel-generator set, with the main emphasis laid on the NOX treatment of the diesel engine exhaust. Experiments were conducted at different flow rates ranging from 2 l/min to 10 l/min. The cross-flow DBD reactor has shown promising results in NOX removal at high flow rates.
Abstract:
In this paper, we investigate a numerical method for the solution of an inverse problem of recovering missing data on one part of the boundary of a domain from the Cauchy data on the other part, for a variable-coefficient elliptic Cauchy problem. In the process, the Cauchy problem is transformed into the problem of solving a compact linear operator equation. As a remedy to the ill-posedness of the problem, we use a projection method which allows regularization solely by discretization; the discretization level plays the role of the regularization parameter. The balancing principle is used for the choice of an appropriate discretization level. Several numerical examples show that the method produces a stable, good approximate solution.
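As an illustration of "regularization solely by discretization", the sketch below projects a discretized compact operator equation onto its first n singular vectors and picks n with a Lepskii-type balancing rule. The truncated-SVD discretization, the constant c = 4 and the rule as coded are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def projection_solutions(A, y, levels):
    """Solve A x = y by projecting onto the first n singular vectors;
    the discretization level n acts as the regularization parameter."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = U.T @ y
    sols = {n: Vt[:n].T @ (coeffs[:n] / s[:n]) for n in levels}
    return sols, s

def balancing_choice(sols, s, delta, c=4.0):
    """Lepskii-type balancing rule (illustrative): keep the largest level n
    whose solution stays within c*delta/s_m of every coarser solution m,
    where delta is the noise level and s_m the smallest retained singular
    value at level m."""
    levels = sorted(sols)
    best = levels[0]
    for n in levels:
        if all(np.linalg.norm(sols[n] - sols[m]) <= c * delta / s[m - 1]
               for m in levels if m <= n):
            best = n
    return best
```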
Abstract:
Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using the characteristics of the model-based data-resolution matrix. Methods: The data-resolution matrix is computed based on the sensitivity matrix and the regularization scheme used in the reconstruction procedure, by matching the predicted data with the actual data. The diagonal values of the data-resolution matrix show the importance of a particular measurement, and the magnitude of the off-diagonal entries shows the dependence among measurements. The choice of independent measurements is made based on the closeness of the diagonal value magnitude to the off-diagonal entries. The reconstruction results obtained using all measurements were compared to the ones obtained using only independent measurements in both numerical and experimental phantom cases. A traditional singular value analysis was also performed for comparison with the results obtained using the proposed method. Results: The results indicate that choosing only independent measurements based on data-resolution matrix characteristics for the image reconstruction does not compromise the reconstructed image quality significantly and in turn reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) are chosen at random, the reconstruction results have poor quality with major boundary artifacts. The number of independent measurements obtained using the data-resolution matrix analysis is much higher than that obtained using the singular value analysis. Conclusions: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics in the data, resulting in a universal framework to characterize and optimize a given data-collection strategy. (C) 2012 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4736820]
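For a linearized model d = J m with Tikhonov regularization, the predicted data are J(JᵀJ + λI)⁻¹Jᵀ d, so the model-based data-resolution matrix can be written as N = J(JᵀJ + λI)⁻¹Jᵀ. The numpy sketch below computes N and flags measurements whose diagonal entry dominates the off-diagonal entries in their row; the dominance ratio is a simplified stand-in for the paper's selection criterion.

```python
import numpy as np

def data_resolution_matrix(J, lam):
    """N = J (J^T J + lam*I)^(-1) J^T maps observed data to predicted data."""
    JtJ = J.T @ J
    return J @ np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T)

def independent_measurements(J, lam, ratio=2.0):
    """Flag measurement i as 'independent' when its diagonal entry dominates
    the largest off-diagonal entry in its row by the given ratio (simplified
    stand-in for comparing diagonal magnitude with off-diagonal entries)."""
    N = data_resolution_matrix(J, lam)
    off = np.abs(N - np.diag(np.diag(N)))
    return np.where(np.diag(N) >= ratio * off.max(axis=1))[0]
```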
Abstract:
In space applications, precision level measurement of cryogenic liquids in storage tanks is done using triple-redundant capacitance level sensors, from a control and safety point of view. The linearity of each sensor element depends upon the cylindricity and concentricity of the internal and external electrodes. The complexity of calibrating all sensors together has been addressed by a two-step calibration methodology, which has been developed and used for the calibration of six capacitance sensors. All calibrations are done using Liquid Nitrogen (LN2) as the cryogenic fluid. In the first step of calibration, one of the elements of the Liquid Hydrogen (LH2) level sensor is calibrated using a 700 mm eleven-point discrete diode array; a four-wire method is used for the diode array. Thus a linearity curve for a single element of the LH2 sensor is obtained. In the second step, the equation obtained for this sensor is used as a reference for calibrating the remaining elements of the same LH2 sensor and the other level sensors (either Liquid Oxygen (LOX) or LH2). The elimination of stray capacitance for the capacitance level probes has also been attempted. Automatic data logging of the capacitance values through GPIB is done using LabVIEW 8.5.
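The two calibration steps amount to fitting a level-versus-capacitance curve for one reference element against the eleven diode-array points, and then reusing that curve for the remaining elements. A minimal numpy sketch of that least-squares idea is given below; the linear (deg = 1) model and the function names are assumptions for illustration.

```python
import numpy as np

def fit_reference(capacitance_pf, levels_mm, deg=1):
    """Step 1: least-squares fit of level vs. capacitance for the reference
    LH2 element, using the eleven discrete diode-array level points
    (deg=1 assumes the element is linear)."""
    return np.polyfit(capacitance_pf, levels_mm, deg)

def level_from_capacitance(ref_coeffs, capacitance_pf):
    """Step 2: apply the reference curve to readings from the remaining
    elements while they are being calibrated against it."""
    return np.polyval(ref_coeffs, capacitance_pf)
```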
Abstract:
This study proposes an inverter circuit topology capable of generating multilevel dodecagonal (12-sided polygon) voltage space vectors by the cascaded connection of two-level and three-level inverters. By the proper selection of DC-link voltages and the resultant switching states for the inverters, voltage space vectors whose tips lie on three concentric dodecagons are obtained. A rectifier circuit for the inverter is also proposed, which significantly improves the power factor. The topology offers advantages such as the complete elimination of the fifth and seventh harmonics in the phase voltages and an extension of the linear modulation range. A simple method for the calculation of pulse width modulation timings is presented, along with extensive simulation and experimental results that validate the proposed concept.
Abstract:
The Reeb graph of a scalar function tracks the evolution of the topology of its level sets. This paper describes a fast algorithm to compute the Reeb graph of a piecewise-linear (PL) function defined over manifolds and non-manifolds. The key idea in the proposed approach is to maximally leverage the efficient contour tree algorithm to compute the Reeb graph. The algorithm proceeds by dividing the input into a set of subvolumes that have loop-free Reeb graphs using the join tree of the scalar function and computes the Reeb graph by combining the contour trees of all the subvolumes. Since the key ingredient of this method is a series of union-find operations, the algorithm is fast in practice. Experimental results demonstrate that it outperforms current generic algorithms by a factor of up to two orders of magnitude, and has a performance on par with algorithms that are catered to restricted classes of input. The algorithm also extends to handle large data that do not fit in memory.
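Since the abstract singles out union-find as the key ingredient, a generic disjoint-set forest with path compression and union by rank is sketched below; it illustrates the primitive, not the paper's specific merging of contour trees.

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point visited nodes closer to the root.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return ra
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra  # attach the shorter tree under the taller one
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return ra
```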
Abstract:
In many real-world prediction problems the output is a structured object like a sequence, a tree or a graph. Such problems range from natural language processing to computational biology or computer vision and have been tackled using algorithms referred to as structured output learning algorithms. We consider the problem of structured classification. In the last few years, large margin classifiers like support vector machines (SVMs) have shown much promise for structured output learning. The related optimization problem is a convex quadratic program (QP) with a large number of constraints, which makes the problem intractable for large data sets. This paper proposes a fast sequential dual method (SDM) for structural SVMs. The method makes repeated passes over the training set and optimizes the dual variables associated with one example at a time. The use of additional heuristics makes the proposed method more efficient. We present an extensive empirical evaluation of the proposed method on several sequence learning problems. Our experiments on large data sets demonstrate that the proposed method is an order of magnitude faster than state-of-the-art methods like the cutting-plane method and the stochastic gradient descent (SGD) method. Further, SDM reaches steady-state generalization performance faster than the SGD method. The proposed SDM is thus a useful alternative for large-scale structured output learning.
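As a rough analogue of optimizing the dual variables of one example at a time, the sketch below applies sequential dual coordinate updates to a plain linear SVM. The structural-SVM case additionally needs loss-augmented inference per example, which is omitted here, so this is only an illustration of the update pattern, not the paper's SDM.

```python
import numpy as np

def dual_coordinate_descent_svm(X, y, C=1.0, epochs=10):
    """Sequential dual updates for a linear L1-SVM: repeatedly sweep the
    training set (X: n x d array, y: labels in {-1, +1}) and optimize one
    dual variable alpha_i at a time, keeping w = sum_i alpha_i y_i x_i."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    sq_norms = (X * X).sum(axis=1)
    for _ in range(epochs):
        for i in np.random.permutation(n):
            g = y[i] * (X[i] @ w) - 1.0          # dual gradient w.r.t. alpha_i
            new_alpha = min(max(alpha[i] - g / sq_norms[i], 0.0), C)
            w += (new_alpha - alpha[i]) * y[i] * X[i]
            alpha[i] = new_alpha
    return w
```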
Abstract:
This paper describes a new method of color text localization from generic scene images containing text of different scripts and with arbitrary orientations. A representative set of colors is first identified using the edge information to initiate an unsupervised clustering algorithm. Text components are identified from each color layer using a combination of a support vector machine and a neural network classifier trained on a set of low-level features derived from the geometric, boundary, stroke and gradient information. Experiments on camera-captured images that contain variable fonts, sizes, colors, irregular layouts, non-uniform illumination and multiple scripts illustrate the robustness of the method. The proposed method yields precision and recall of 0.8 and 0.86, respectively, on a database of 100 images. The method is also compared with others in the literature using the ICDAR 2003 robust reading competition dataset.
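A minimal sketch of the first stage only, identifying representative colors from edge pixels and splitting the image into color layers, is given below. OpenCV's Canny detector and scikit-learn's KMeans stand in for the unsupervised clustering initiated from edge information; the SVM/neural-network component classification stage is not shown.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def color_layers(image_bgr, n_colors=6):
    """Cluster the colors found on edge pixels, then assign every pixel of
    the image to its nearest color cluster, yielding one binary layer per
    representative color."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    edge_colors = image_bgr[edges > 0].reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_colors, n_init=10).fit(edge_colors)
    labels = km.predict(image_bgr.reshape(-1, 3).astype(np.float32))
    labels = labels.reshape(image_bgr.shape[:2])
    return [labels == k for k in range(n_colors)]
```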
Abstract:
This study presents an overview of seismic microzonation and existing methodologies, along with a newly proposed methodology covering all aspects. Earlier seismic microzonation methods focused on parameters that affect structure- or foundation-related problems, but seismic microzonation has generally been recognized as an important component of urban planning and disaster management, so it should evaluate all possible earthquake-related hazards and represent them by their spatial distribution. This paper presents a new methodology for seismic microzonation that is based on the location of the study area and the possible associated hazards. The new method consists of seven important steps, linked with each other, with a defined output for each step; addressing a single step and its result, as is widely practiced, may not constitute seismic microzonation. This paper also presents the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated considering the seismotectonic parameters of the region using deterministic and probabilistic seismic hazard analysis. Surface-level hazard values are estimated considering a site-specific study and local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP) and used to estimate a hazard index. The hazard index is arrived at by following a multi-criteria evaluation technique, AHP, in which each theme and its features are assigned weights and then ranked according to a consensus opinion about their relative significance to the seismic hazard. The hazard values are integrated through spatial union to obtain the deterministic microzonation map and the probabilistic microzonation map for a specific return period. Seismological parameters are widely used for microzonation rather than geotechnical parameters, but studies show that the hazard index values are based on site-specific geotechnical parameters.
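The AHP integration step classically derives theme weights from the principal eigenvector of a pairwise comparison matrix and then forms the hazard index as a weighted overlay of ranked theme layers. The numpy sketch below shows that computation with a made-up three-theme comparison matrix; the actual themes, ranks and weights used in the paper differ.

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights from an AHP pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def hazard_index(theme_ranks, weights):
    """Weighted overlay: each theme is a 2-D array of normalized ranks."""
    return sum(w * r for w, r in zip(weights, theme_ranks))

# Illustrative 3-theme comparison matrix (e.g. ground shaking, site class,
# liquefaction) -- the Saaty-scale judgements here are hypothetical.
P = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])
w = ahp_weights(P)  # roughly [0.63, 0.26, 0.11] for this example matrix
```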
Abstract:
Multilevel inverters with hexagonal and dodecagonal voltage space vector structures have an improved harmonic profile compared to two-level inverters. Further improvement in the quality of the waveform is possible using multilevel octadecagonal (18-sided polygon) voltage space vectors. This paper proposes an inverter circuit topology capable of generating multilevel octadecagonal voltage space vectors by cascading two asymmetric three-level inverters. By proper selection of the DC-link voltages and the resultant switching states for the inverters, voltage space vectors whose tips lie on three concentric octadecagons are obtained. The advantages of octadecagonal voltage space vector based PWM techniques are the complete elimination of the fifth, seventh, eleventh and thirteenth harmonics in the phase voltages and the extension of the linear modulation range. A simple PWM timing calculation method is also proposed. Matlab simulation results and experimental results are presented to validate the proposed concept.
Abstract:
The name 'Seven Pagodas' has served as a nickname for the south Indian port of Mahabalipuram since the early European explorers used it as a landmark for navigation, as they could see the summits of seven temples from the sea. There are many theories concerning the name Seven Pagodas. The present study compares the coastline and the adjacent seven monuments illustrated in a 17th century Portolan chart (maritime map) with recent remote sensing data. This analysis throws new light on the name 'Seven Pagodas' for the city. The study uses a DEM of the site to simulate a coastline similar to the one depicted in the old Portolan chart. Through this, the sea level of that period, the corresponding flooding extent according to the topography of the area, and their effect on the monuments could be analysed. Most importantly, this work has in the process identified the seven monuments that possibly constituted the name Seven Pagodas, providing an alternative explanation to one of the mysteries of history. The work demonstrates a unique method of studying coastal archaeological sites. As large numbers of heritage sites around the world are on coastlines, this methodology has the potential to be very useful for coastal heritage preservation and management.
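The coastline simulation amounts to a bathtub-style inundation of the DEM at an assumed sea level. A short numpy sketch of that idea follows, with hypothetical array names; a more careful version would also require hydraulic connectivity to the open sea before marking a cell as flooded, and the study's actual treatment may differ.

```python
import numpy as np

def flooded_extent(dem, sea_level):
    """Bathtub model: every cell at or below the assumed sea level is flooded.
    `dem` is a 2-D array of elevations on the same vertical datum as sea_level."""
    return dem <= sea_level

def monuments_affected(dem, sea_level, monument_cells):
    """Return the monument locations (list of (row, col) cells) that fall
    inside the simulated flooding extent."""
    mask = flooded_extent(dem, sea_level)
    return [rc for rc in monument_cells if mask[rc]]
```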
Abstract:
Before installation, a voltage source converter is usually subjected to a heat-run test to verify its thermal design and performance under load. For the heat-run test, the converter needs to be operated at rated voltage and rated current for a substantial length of time. Hence, such tests consume a huge amount of energy in the case of high-power converters. Also, the capacities of the source and loads available in the research and development (R&D) centre or the production facility could be inadequate to conduct such tests. This paper proposes a method to conduct heat-run tests on high-power, pulse width modulated (PWM) converters with low energy consumption. The experimental set-up consists of the converter under test and another converter (of similar or higher rating), both connected in parallel on the ac side and open on the dc side. Vector control or synchronous reference frame control is employed to control the converters such that one draws a certain amount of reactive power and the other supplies the same; only the system losses are drawn from the mains. The performance of the controller is validated through simulation and experiments. Experimental results pertaining to heat-run tests on a high-power PWM converter are presented at power levels of 25 kVA to 150 kVA.
Abstract:
Spatial information at the landscape scale is extremely important for conservation planning, especially in the case of long-ranging vertebrates. The biodiversity-rich Anamalai hill ranges in the Western Ghats of southern India hold a viable population for the long-term conservation of the Asian elephant. Through rapid but extensive field surveys we mapped elephant habitat, corridors, vegetation and land-use patterns, estimated the elephant population density and structure, and assessed elephant-human conflict across this landscape. GIS and remote sensing analyses indicate that elephants are distributed among three blocks over a total area of about 4600 km². Approximately 92% remains contiguous because of four corridors; however, under 4000 km² of this area may be effectively used by elephants. Nine landscape elements were identified, including five natural vegetation types, of which tropical moist deciduous forest is dominant. Population density assessed through the dung count method, using line transects covering 275 km of walk across the effective elephant habitat of the landscape, yielded a mean density of 1.1 (95% CI = 0.99-1.2) elephants/km². Population structure from direct sightings of elephants showed that adult males constitute just 2.9% and adult females 42.3% of the population, with the rest being subadults (27.4%), juveniles (16%) and calves (11.4%). Sex ratios show an increasing skew toward females from juvenile (1:1.8) to sub-adult (1:2.4) and adult (1:14.7), indicating higher mortality of sub-adult and adult males, most likely due to historical poaching for ivory. A rapid questionnaire survey and secondary data on elephant-human conflict from forest department records reveal that villages in and around the forest divisions on the eastern side of the landscape experience higher levels of elephant-human conflict than those on the western side; this seems to relate to a greater degree of habitat fragmentation and a higher percentage of farmers cultivating annual crops in the east. We provide several recommendations that could help maintain population viability and reduce elephant-human conflict in the Anamalai elephant landscape. (C) 2013 Deutsche Gesellschaft für Säugetierkunde. Published by Elsevier GmbH. All rights reserved.
Abstract:
The increasing number of available protein structures requires efficient tools for multiple structure comparison. Indeed, multiple structural alignments are essential for the analysis of the function, evolution and architecture of protein structures. For this purpose, we propose a new web server called multiple Protein Block Alignment (mulPBA). This server implements a method based on a structural alphabet to describe the backbone conformation of a protein chain in terms of dihedral angles. This 'sequence-like' representation enables the use of powerful sequence alignment methods for primary structure comparison, followed by an iterative refinement of the structural superposition. This approach yields alignments superior to most rigid-body alignment methods and highly comparable with flexible structure comparison approaches. We implement this method in a web server designed to perform multiple structure superimpositions from a set of structures given by the user. Outputs are given as both a sequence alignment and superposed 3D structures, visualized directly by static images generated by PyMol or through a Jmol applet allowing dynamic interaction. Multiple global quality measures are given. Relatedness between structures is indicated by a distance dendrogram. Superimposed structures in PDB format can also be downloaded, and the results are obtained quickly. The mulPBA server can be accessed at www.dsimb.inserm.fr/dsimb_tools/mulpba/.