870 results for foreground object removal
Abstract:
This thesis presents studies on the removal of heavy metals and arsenic from contaminated water by adsorption onto low-cost materials. In all cases, these materials are by-products of the agri-food or metallurgical industries. The thesis consists of several chapters organized into three sections: (i) removal of hexavalent and trivalent chromium; (ii) removal of divalent heavy-metal cations in the presence of complexing agents and in multi-metal mixtures; and (iii) removal of arsenic using a by-product of the metal chrome-plating industry as adsorbent. The results show that certain industrial residues can be used as adsorbents for the detoxification of effluents contaminated with heavy metals. The proposed technology represents a sustainable, low-cost alternative to current treatments, which are more expensive and often dependent on petroleum-derived products.
Abstract:
In this thesis, a control system has been developed that is capable of optimizing the operation of Sequencing Batch Reactors (SBRs) for the removal of organic matter and nitrogen from wastewater. The control system adjusts the duration of the reaction stages on-line, based on direct or indirect probe measurements. In a first stage of the thesis, the calibration of mathematical models was studied, which makes it straightforward to test different control strategies. From the analysis of historical data, several options for controlling the SBR were proposed, and the most suitable ones were tested through simulation. Once the success of the control strategy had been confirmed by simulation, it was implemented in a semi-industrial plant. Finally, the structure of a supervisory system is proposed, responsible for controlling the operation of the SBR not only at the phase level but also at the cycle level.
Abstract:
The growth of databases containing increasingly difficult images with an ever larger number of categories is driving the development of image representations that are discriminative in multi-class settings, and of algorithms that are efficient in learning and classification. This thesis explores the problem of classifying images by the object they contain when a large number of categories is available. First, we investigate how a hybrid system combining a generative model and a discriminative model can benefit image classification when the level of human annotation is minimal. For this task we introduce a new vocabulary using a dense representation of colour-SIFT descriptors, and then investigate how the different parameters affect the final classification. Next, a method is proposed to incorporate spatial information into the hybrid system, showing that context information is of great help for image classification. We then introduce a new shape descriptor that represents the image by its local and its spatial shape, together with a kernel that incorporates this spatial information in a pyramidal form. Shape is represented by a compact vector, yielding a descriptor well suited to kernel-based learning algorithms. The experiments show that this shape information achieves results similar to (and sometimes better than) appearance-based descriptors. We also investigate how different features can be combined for image classification, and show that the proposed shape descriptor together with an appearance descriptor substantially improves classification. Finally, we describe an algorithm that detects regions of interest automatically during training and classification.
This provides a method for suppressing the image background and adds invariance to the position of objects within the images. We show that using shape and appearance over this region of interest, together with random forest classifiers, improves both classification accuracy and computational time. We compare our results with results from the literature, using the same databases and the same training and classification protocols as the original authors. All the innovations introduced are shown to improve the final image classification.
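The pyramidal incorporation of spatial information mentioned in this abstract can be sketched as a spatial pyramid histogram over quantized local descriptors. The following is an illustrative Python reconstruction, not the authors' code; the number of levels and the L1 normalization are assumptions:

```python
import numpy as np

def spatial_pyramid(words, positions, n_words, levels=2):
    """Build a spatial-pyramid histogram from quantized local descriptors.

    words:     (N,) visual-word index per local descriptor
    positions: (N, 2) x, y in [0, 1), normalized image coordinates
    """
    feats = []
    for level in range(levels + 1):
        cells = 2 ** level  # grid cells per axis at this pyramid level
        # assign each descriptor to a grid cell
        cx = np.minimum((positions[:, 0] * cells).astype(int), cells - 1)
        cy = np.minimum((positions[:, 1] * cells).astype(int), cells - 1)
        for i in range(cells):
            for j in range(cells):
                mask = (cx == i) & (cy == j)
                # histogram of visual words falling in this cell
                feats.append(np.bincount(words[mask], minlength=n_words))
    v = np.concatenate(feats).astype(float)
    return v / max(v.sum(), 1.0)  # L1-normalize the stacked histograms
```

The resulting vector is compact and can be fed directly to a kernel-based learner or a random forest, in the spirit of the descriptors described above.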
Abstract:
Currently, environmental legislation has become more restrictive regarding the discharge of wastewater containing nutrients, especially in so-called sensitive areas or vulnerable zones. This has stimulated the study, development and improvement of nutrient removal processes. The Sequencing Batch Reactor (SBR) is an activated sludge treatment system that operates on a fill-and-draw basis. In this type of reactor, the wastewater is added to a single reactor that works in batches, repeating a cycle (sequence) over time. One of the characteristics of SBRs is that all the different operations (fill, reaction, settling and draw) take place in the same reactor. SBR technology is not new; in fact, it appeared before the continuous activated sludge treatment system. The precursor of the SBR was a fill-and-draw system operated in batch mode. Between 1914 and 1920, these reactors ran into difficulties, many of them operational (valves, switching the flow from one reactor to another, the large amount of operator attention required, and so on). It was not until the late 1950s and early 1960s, with the development of new equipment and new technologies, that interest in SBRs revived. Major improvements in air supply (motorized or pneumatically actuated valves) and in control (level probes, flow meters, automatic timers, microprocessors) now allow SBRs to compete with conventional activated sludge systems. The aim of this thesis is to identify suitable operating conditions for a cycle according to the type of influent wastewater, the treatment requirements and the desired effluent quality, using SBR technology.
These three characteristics (the water to be treated, the treatment requirements and the desired final quality) largely determine the treatment to be carried out. Thus, in order to adapt the treatment to each type of wastewater and its requirements, different feeding strategies have been studied. The process is monitored by on-line measurements of pH, dissolved oxygen (DO) and redox potential (ORP), whose changes provide information on the state of the process. In addition, another parameter that can be calculated from dissolved oxygen is the oxygen uptake rate (OUR), which complements the parameters above. The operating conditions for removing nitrogen from a synthetic wastewater using a step-feed strategy were evaluated, through the study of the effect of the number of feeds, the definition of the length and number of phases per cycle, and the identification of critical points by following the pH, DO and ORP probes. The step-feed strategy was applied to two different wastewaters: one from a textile industry and the other from landfill leachate. For both wastewaters, the efficiency of the process was studied on the basis of the operating conditions and the oxygen uptake rate. While in the textile wastewater the main objective was to remove organic matter, in the landfill leachate it was to remove both organic matter and nitrogen. The operating conditions for removing nitrogen and phosphorus from an urban wastewater using a step-feed strategy were also evaluated, through the definition of the number and length of the phases per cycle and the identification of critical points with the pH, DO and ORP probes. Finally, the influence of pH and of the carbon source on phosphorus removal from a synthetic wastewater was analysed, based on the study of the pH increase in two reactors with different carbon sources and of the effect of changing the carbon source.
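One common way to identify a critical point from an on-line pH probe, as described in this abstract, is to detect a local minimum in the pH trajectory (often taken as an indirect sign that nitrification has finished, so the aerobic phase can end). The sketch below is an illustrative assumption, not the thesis's actual control logic; the smoothing window and valley criterion are invented for the example:

```python
import numpy as np

def ph_valley_index(ph, smooth=5):
    """Return the index of the first local minimum ('valley') in a pH
    series after moving-average smoothing, or None if no valley is found."""
    kernel = np.ones(smooth) / smooth
    s = np.convolve(ph, kernel, mode='valid')  # moving-average smoothing
    d = np.diff(s)
    # first index where the slope turns from negative to non-negative
    for i in range(1, len(d)):
        if d[i - 1] < 0 <= d[i]:
            return i + smooth // 2  # shift back toward the raw-signal index
    return None
```

A supervisory controller could poll such a detector each cycle and terminate the reaction phase once the valley is seen, rather than running phases for a fixed duration.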
As can be seen throughout the thesis, in which different wastewaters were treated for different requirements, one of the most important advantages of an SBR is its flexibility.
Abstract:
Focusing on Fluxus, a loosely knit association of artists from America, Europe and Asia whose work centers around intermediality, this article explores the notion of relationality without relata. Intermediality refers to works that fall conceptually between media – such as visual poetry or action music – as well as between the general area of art media and those of life media (Higgins). Departing from two Fluxus intermedia – the event score, a performative score in the form of words, and the Fluxkit, a performative score in the form of objects – I investigate the logic of co-constitutivity within which every element is both subject and object, both constitutive and constituted. To be more precise, I trace the cross-categorial interplay of differences that explodes the logico-linguistic structure of binary oppositions, such as those between foreground and background, word and action, sound and silence, identity and alterity. Aided by Jacques Derrida's concept of 'de-centered play' and Shigenori Nagatomo's concept of 'interfusion', this article seeks to articulate the ways in which the Fluxus works mobilise the 'silent background' to dismantle the dualistic logic of definite differences.
Abstract:
Exhibiting is, or should be, working against ignorance, especially against the most refractory ignorance of all: the preconceived idea of stereotyped culture. To exhibit is to take a calculated risk of disorientation, in the etymological sense (to lose one's bearings): it disturbs the harmony, the self-evident and the consensus that constitute the commonplace (the banal). Needless to say, an exhibition that deliberately tries to scandalise will create an inverted perversion resulting in an obscurantist pseudo-luxury culture... between demagogy and provocation, one has to find visual communication's subtle itinerary, even though an intermediary route is not so stimulating: as Gaston Bachelard said, "All the roads lead to Rome, except the roads of compromise." It is becoming ever more evident that museums have undergone changes that are noticeable in numerous areas. As well as fulfilling the traditional functions of collecting, conserving and exhibiting objects, museums have tried to become a means of communication, open and aware of the concerns of modern society. To do this, they have started to use the modern technology now available, led by the hand of "marketing" and modern business management.
Abstract:
This study evaluates patients' short- and long-term balance function after microsurgical tumor removal and gamma knife radiosurgery, using an unvalidated qualitative questionnaire and the Dizziness Handicap Inventory.
Abstract:
For the tracking of extrema associated with weather systems to be applied to a broad range of fields, it is necessary to remove a background field that represents the slowly varying, large spatial scales. The sensitivity of the tracking analysis to the form of background field removed is explored for the Northern Hemisphere winter storm tracks, for three contrasting fields from an integration of the U.K. Met Office's (UKMO) Hadley Centre Climate Model (HadAM3). Several methods are explored for the removal of a background field, from the simple subtraction of the climatology to the more sophisticated removal of the planetary scales. Two temporal filters are also considered, in the form of a 2-6-day Lanczos filter and a 20-day high-pass Fourier filter. The analysis indicates that the simple subtraction of the climatology tends to change the nature of the systems, to the extent that there is a redistribution of the systems relative to the climatological background, resulting in very similar statistical distributions for both positive and negative anomalies. The optimal planetary wave filter removes total wavenumbers less than or equal to a number in the range 5-7, resulting in distributions more easily related to particular types of weather system. For the temporal filters, the 2-6-day bandpass filter is found to have a detrimental impact on the individual weather systems, resulting in the storm tracks having a weak waveguide type of behavior. The 20-day high-pass temporal filter is less aggressive than the 2-6-day filter and produces results falling between those of the climatological and 2-6-day filters.
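A 20-day high-pass temporal filter of the kind mentioned above can be sketched by zeroing the slow Fourier modes of a time series. This is an illustrative Python version assuming evenly spaced daily samples, not the filter implementation used in the study:

```python
import numpy as np

def highpass_fourier(x, cutoff_days=20.0, dt_days=1.0):
    """Remove the slowly varying background from a time series by
    zeroing Fourier components with periods longer than cutoff_days
    (including the mean), then transforming back."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=dt_days)  # cycles per day
    X[freqs < 1.0 / cutoff_days] = 0.0          # drop slow background modes
    return np.fft.irfft(X, n=len(x))
```

Applied to a field time series at each grid point, this retains the synoptic-scale variability while discarding the climatological background and slow planetary-scale variations.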
Abstract:
This workshop paper reports recent developments to a vision system for traffic interpretation which relies extensively on the use of geometrical and scene context. Firstly, a new approach to pose refinement is reported, based on forces derived from prominent image derivatives found close to an initial hypothesis. Secondly, a parameterised vehicle model is reported, able to represent different vehicle classes. This general vehicle model has been fitted to sample data, and subjected to a Principal Component Analysis to create a deformable model of common car types having 6 parameters. We show that the new pose recovery technique is also able to operate on the PCA model, to allow the structure of an initial vehicle hypothesis to be adapted to fit the prevailing context. We report initial experiments with the model, which demonstrate significant improvements to pose recovery.
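The PCA-based deformable vehicle model described above can be sketched in a few lines: sample shape vectors are decomposed into a mean plus a small number of principal directions, and a new shape is synthesized from a low-dimensional parameter vector. This is a generic illustration of the technique, not the paper's model; the data layout and parameter count are assumptions:

```python
import numpy as np

def fit_pca_model(shapes, n_params=6):
    """Fit a linear deformable model: shape ~ mean + params @ components.
    shapes: (n_samples, n_dims) flattened vertex-coordinate vectors."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD row-vectors are the principal directions, sorted by variance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_params]  # (n_dims,), (n_params, n_dims)

def synthesize(mean, components, params):
    """Instantiate a shape hypothesis from the model parameters."""
    return mean + params @ components
```

During pose/structure recovery, the 6 parameters would be adjusted alongside the pose so that the synthesized model best fits the image evidence.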
Abstract:
Flood modelling of urban areas is still at an early stage, partly because until recently topographic data of sufficiently high resolution and accuracy have been lacking in urban areas. However, Digital Surface Models (DSMs) generated from airborne scanning laser altimetry (LiDAR) having sub-metre spatial resolution have now become available, and these are able to represent the complexities of urban topography. The paper describes the development of a LiDAR post-processor for urban flood modelling based on the fusion of LiDAR and digital map data. The map data are used in conjunction with LiDAR data to identify different object types in urban areas, though pattern recognition techniques are also employed. Post-processing produces a Digital Terrain Model (DTM) for use as model bathymetry, and also a friction parameter map for use in estimating spatially-distributed friction coefficients. In vegetated areas, friction is estimated from LiDAR-derived vegetation height, and (unlike most vegetation removal software) the method copes with short vegetation less than ~1m high, which may occupy a substantial fraction of even an urban floodplain. The DTM and friction parameter map may also be used to help to generate an unstructured mesh of a vegetated urban floodplain for use by a 2D finite element model. The mesh is decomposed to reflect floodplain features having different frictional properties to their surroundings, including urban features such as buildings and roads as well as taller vegetation features such as trees and hedges. This allows a more accurate estimation of local friction. The method produces a substantial node density due to the small dimensions of many urban features.
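The friction parameter map described above, in which friction is estimated from LiDAR-derived vegetation height, can be sketched as follows. The linear height-to-Manning's-n mapping and all the constants are illustrative assumptions, not values from the paper:

```python
import numpy as np

def friction_map(dsm, dtm, n_bare=0.03, n_veg_max=0.10, h_max=2.0):
    """Estimate a spatially distributed Manning friction coefficient
    from LiDAR vegetation height (DSM minus DTM).

    Friction rises linearly from n_bare (no vegetation) to n_veg_max,
    saturating for vegetation taller than h_max metres."""
    veg_height = np.clip(dsm - dtm, 0.0, None)       # negative heights -> 0
    frac = np.clip(veg_height / h_max, 0.0, 1.0)     # saturate above h_max
    return n_bare + frac * (n_veg_max - n_bare)
```

The resulting raster can be sampled onto the model mesh so that each element carries a locally appropriate friction coefficient, avoiding a single calibrated global floodplain value.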
Abstract:
Classical computer vision methods can only weakly emulate some of the multi-level parallelism in signal processing and information sharing that takes place in different parts of the primate visual system, parallelism that enables it to accomplish many diverse functions of visual perception. One of the main functions of primate vision is to detect and recognise objects in natural scenes despite all the linear and non-linear variations of the objects and their environment. The superior performance of the primate visual system compared with what machine vision systems have been able to achieve to date motivates scientists and researchers to further explore this area in pursuit of more efficient vision systems inspired by natural models. In this paper, building blocks for a hierarchical, efficient object recognition model are proposed. Incorporating attention-based processing would lead to a system that processes the visual data in a non-linear way, focusing only on the regions of interest and hence reducing the time needed to achieve real-time performance. Further, it is suggested to modify the visual cortex model for recognising objects by adding non-linearities in the ventral path, consistent with earlier discoveries reported by researchers in the neurophysiology of vision.
Abstract:
Recent work has suggested that for some tasks, graphical displays which visually integrate information from more than one source offer an advantage over more traditional displays which present the same information in a separated format. Three experiments are described which investigate this claim using a task which requires subjects to control a dynamic system. In the first experiment, the integrated display is compared to two separated displays, one an animated mimic diagram, the other an alphanumeric display. The integrated display is shown to support better performance in a control task, but experiment 2 shows that part of this advantage may be due to its analogue nature. Experiment 3 considers performance on a fault detection task, and shows no difference between the integrated and separated displays. The paper concludes that previous claims made for integrated displays may not generalize from monitoring to control tasks.
Abstract:
Ten mothers were observed prospectively, interacting with their infants aged 0;10 in two contexts (picture description and noun description). Maternal communicative behaviours were coded for volubility, gestural production and labelling style. Verbal labelling events were categorized into three exclusive categories: label only; label plus deictic gesture; label plus iconic gesture. We evaluated the predictive relations between maternal communicative style and children's subsequent acquisition of ten target nouns. Strong relations were observed between maternal communicative style and children's acquisition of the target nouns. Further, even controlling for maternal volubility and maternal labelling, maternal use of iconic gestures predicted the timing of acquisition of nouns in comprehension. These results support the proposition that maternal gestural input facilitates linguistic development, and suggest that such facilitation may be a function of gesture type.
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art is the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data into a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation less than about 1 m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture; typically, most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It is not yet clear whether the method is useful, but it is worth testing further. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types, such as LiDAR intensity or multispectral CASI data.
We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem, how best to merge historic river cross-section data with a LiDAR DTM, will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc., as well as trees and hedges. A dominant-points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful in allowing a high-resolution FE model to act as a benchmark for a more practical lower-resolution model. A further problem discussed will be how best to exploit the data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: e.g. for a 5 m wide embankment within a raster grid model with a 15 m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment. But how could a 5 m wide ditch be represented? This redundancy has also been exploited to improve wetting/drying algorithms, using the sub-grid-scale LiDAR heights within finite elements at the waterline.
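The local-minima step of DTM extraction mentioned above can be sketched as a moving-minimum filter over the DSM grid. This is an illustrative reconstruction under assumed parameters (window size, edge padding), not the EA's in-house software; it also exhibits exactly the failure mode the abstract describes, since embankments and walls narrower than the window are flattened along with the vegetation:

```python
import numpy as np

def ground_surface(dsm, window=3):
    """Approximate a DTM from a gridded LiDAR DSM by taking the local
    minimum height in a (2*window+1)^2 neighbourhood of each cell,
    a crude low-pass step that strips vegetation and small objects."""
    padded = np.pad(dsm, window, mode='edge')
    out = np.empty_like(dsm, dtype=float)
    rows, cols = dsm.shape
    for i in range(rows):
        for j in range(cols):
            # minimum over the neighbourhood centred on (i, j)
            out[i, j] = padded[i:i + 2 * window + 1,
                               j:j + 2 * window + 1].min()
    return out
```

In practice the local minima would be interpolated rather than taken per cell, and the fusion with map data discussed above is what lets genuine structures (buildings, embankments) be put back into the surface.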