12 results for parallel implementation
at Universitat de Girona, Spain
Abstract:
This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low level, where the algorithms deal with large amounts of data. In a motion estimation algorithm, correspondences between two images have to be solved at the low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Due to its regular processing scheme, a parallel implementation of the correspondence problem is an adequate approach to reduce the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach based on the parallel organisation of every processor in the architecture is proposed.
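As a point of reference for the criterion mentioned above, zero-mean normalised cross-correlation between two image patches can be sketched as follows. This is a minimal illustration, not the paper's parallel implementation; the function name and NumPy usage are assumptions:

```python
import numpy as np

def normalised_correlation(patch_a, patch_b):
    """Zero-mean normalised cross-correlation between two equally sized
    image patches. The score lies in [-1, 1] and is invariant to affine
    illumination changes (gain and offset) within each patch."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:        # flat patch: correlation is undefined
        return 0.0
    return (a * b).sum() / denom
```

Subtracting the patch means and dividing by the patch norms is what makes the score tolerant of the non-uniform lighting typical of underwater scenes, at the cost of extra arithmetic per candidate match, which motivates the parallel architecture.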
Abstract:
The work developed in this thesis delves into, and contributes innovative solutions to, the correspondence problem in underwater images. In these environments, what really complicates the processing tasks is the lack of well-defined contours caused by blurred images, a fact mainly due to deficient illumination or to the lack of uniformity of artificial lighting systems. The objectives achieved in this thesis can be highlighted in two main directions. To improve the motion estimation algorithm, a new method was proposed that introduces texture parameters to reject false correspondences between pairs of images. A series of tests on real underwater images was carried out to select the most suitable strategies. In order to achieve real-time results, a novel VLSI architecture is proposed for the implementation of some computationally expensive parts of the motion estimation algorithm.
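To illustrate the idea of rejecting false correspondences via texture, a very simple texture gate might look like the following sketch. The variance measure, thresholds and names are illustrative assumptions, not the texture parameters actually selected in the thesis:

```python
import numpy as np

def texture_score(patch):
    """Crude texture measure: local intensity variance. Windows with low
    texture (blurred, contourless regions) match almost anything, so they
    are poor candidates for correlation-based correspondence."""
    return patch.astype(float).var()

def filter_matches(matches, image_a, window, min_texture):
    """Discard candidate correspondences whose source window lacks texture.
    `matches` holds (x, y, x2, y2) tuples; all names are hypothetical."""
    kept = []
    for (x, y, x2, y2) in matches:
        patch = image_a[y:y + window, x:x + window]
        if texture_score(patch) >= min_texture:
            kept.append((x, y, x2, y2))
    return kept
```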
Abstract:
The theory of compositional data analysis is often focused on the composition alone. In practical applications, however, we often treat a composition together with covariables on some other scale. This contribution systematically gathers and develops statistical tools for this situation. For instance, for the graphical display of the dependence of a composition on a categorical variable, a colored set of ternary diagrams might be a good idea for a first look at the data, but it quickly hides important aspects if the composition has many parts or takes extreme values. On the other hand, colored scatterplots of ilr components may not be very instructive for the analyst if the conventional, black-box ilr is used. Thinking in terms of the Euclidean structure of the simplex, we suggest setting up appropriate projections which, on the one hand, show the compositional geometry and, on the other hand, remain comprehensible to a non-expert analyst and readable for all locations and scales of the data. This is done, for example, by defining special balance displays with carefully selected axes. Following this idea, we need to ask systematically how to display, explore, describe, and test the relation to complementary or explanatory data of categorical, real, ratio or again compositional scales. This contribution shows that it is sufficient to use some basic concepts and very few advanced tools from multivariate statistics (principal covariances, multivariate linear models, trellis or parallel plots, etc.) to build appropriate procedures for all these combinations of scales. This has fundamental implications for their software implementation and for how they might be taught to analysts who are not already experts in multivariate analysis.
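As an illustration of the balance displays mentioned above, a single balance coordinate between two groups of parts of a composition can be computed as below. This is a minimal sketch of the standard balance formula; the function and argument names are assumptions:

```python
import numpy as np

def balance(x, num, den):
    """Isometric log-ratio balance between two groups of parts of a
    composition x: b = sqrt(r*s/(r+s)) * ln(g(x_num) / g(x_den)),
    where g() is the geometric mean and r, s are the group sizes."""
    x = np.asarray(x, dtype=float)
    r, s = len(num), len(den)
    g_num = np.exp(np.log(x[num]).mean())   # geometric mean of numerator parts
    g_den = np.exp(np.log(x[den]).mean())   # geometric mean of denominator parts
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)
```

For example, `balance([0.2, 0.3, 0.5], num=[0, 1], den=[2])` contrasts the first two parts against the third; plotting such carefully chosen balances against a covariable is readable for all locations and scales of the data, unlike a raw ternary diagram.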
Abstract:
This paper presents the implementation details of a coded structured light system for rapid shape acquisition of unknown surfaces. Such techniques are based on projecting patterns onto a measuring surface and grabbing images of every projection with a camera. By analyzing the pattern deformations that appear in the images, 3D information about the surface can be calculated. The implemented technique projects a unique pattern so that it can be used to measure moving surfaces. The structure of the pattern is a grid in which the color of the slits is selected using a De Bruijn sequence. Moreover, since both axes of the pattern are coded, the cross points of the grid have two codewords (which permits their very precise reconstruction), while pixels belonging to horizontal and vertical slits also have a codeword. Different sets of colors are used for horizontal and vertical slits, so the resulting pattern is invariant to rotation. Therefore, the alignment constraint between camera and projector assumed by many authors is not necessary.
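For reference, a De Bruijn sequence such as the one used to color the slits can be generated with the standard recursive (Lyndon-word, FKM) construction. The sketch below is a generic generator, not the authors' code:

```python
def de_bruijn(k: int, n: int) -> list:
    """Generate the De Bruijn sequence B(k, n): a cyclic sequence over an
    alphabet of k symbols in which every length-n string occurs exactly
    once. Symbols are integers 0..k-1 to be mapped onto slit colors."""
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])   # emit one Lyndon word
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence
```

For example, `de_bruijn(3, 3)` yields 27 symbols over three colors in which every window of three consecutive slits is unique, which is what makes each slit locally identifiable from a single image.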
Abstract:
In the finite field (FF) treatment of vibrational polarizabilities and hyperpolarizabilities, the field-free Eckart conditions must be enforced in order to prevent molecular reorientation during geometry optimization. These conditions are implemented here for the first time. Our procedure facilitates the identification of the field-induced internal coordinates that make the major contribution to the vibrational properties. Using only two of these coordinates, quantitative accuracy for nuclear relaxation polarizabilities and hyperpolarizabilities is achieved in π-conjugated systems. From these two coordinates a single most efficient natural conjugation coordinate (NCC) can be extracted. The limitations of this one-coordinate approach are discussed. It is shown that the Eckart conditions can lead to an isotope effect that is comparable to the isotope effect on zero-point vibrational averaging, but with a different mass dependence.
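For reference, the field-free Eckart conditions enforced during the geometry optimization are the standard translational and rotational constraints on the nuclear displacements (textbook form, with $m_a$ the nuclear masses, $\mathbf{r}_a^{0}$ the reference geometry and $\Delta\mathbf{r}_a = \mathbf{r}_a - \mathbf{r}_a^{0}$):

```latex
% Displacements carry no overall translation ...
\sum_a m_a \,\Delta\mathbf{r}_a = \mathbf{0},
% ... and no overall (infinitesimal) rotation with respect
% to the field-free reference frame:
\qquad \sum_a m_a \,\mathbf{r}_a^{0} \times \Delta\mathbf{r}_a = \mathbf{0}
```

The mass factors $m_a$ in these constraints are what allow the Eckart conditions to introduce the isotope effect discussed in the abstract.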
Abstract:
This thesis gathers the experience gained in developing an intelligent supervisory system to improve the management of wastewater treatment plants, implementing it in a real plant (the Granollers WWTP) and evaluating its day-to-day operation under typical plant situations. This supervisory system combines and integrates classical control tools for treatment plants (an automatic controller of the dissolved oxygen level in the biological reactor, the use of descriptive process models, ...) with tools from the field of artificial intelligence (knowledge-based systems, specifically expert systems and case-based systems, and neural networks). The document is structured in 9 chapters. A first introductory part reviews the current state of WWTP control and explains why the management of these processes is so complex (chapter 1). This introductory chapter, together with chapter 2, which presents the background of this thesis, serves to establish the objectives of this work (chapter 3). Chapter 4 then describes the peculiarities and specificities of the plant chosen to implement the supervisory system. Chapters 5 and 6 of this document present the work done to develop the rule-based or expert system (chapter 6) and the case-based system (chapter 7). Chapter 8 describes the integration of these two reasoning tools into a distributed multi-level architecture. Finally, a last chapter covers the evaluation (verification and validation), first of each tool separately and then of the global system against real situations arising at the treatment plant.
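As a toy illustration of how a knowledge-based supervisory layer can sit on top of a classical dissolved-oxygen control loop, a single expert-system rule might adjust the setpoint passed to the low-level controller. The thresholds and variable names below are invented for illustration and are not taken from the EDAR Granollers system:

```python
def supervise_do_setpoint(do_level, setpoint, influent_load):
    """Hypothetical supervisory rule: nudge the dissolved-oxygen (DO)
    setpoint of the low-level controller according to plant state.
    `influent_load` is a normalised 0..1 load indicator (assumption)."""
    if influent_load > 0.8 and do_level < setpoint:
        return setpoint + 0.5                 # raise aeration under high load
    if influent_load < 0.3:
        return max(setpoint - 0.5, 1.0)       # save energy under low load
    return setpoint                           # otherwise leave the loop alone
```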
Abstract:
The implementation of Decision Support Systems (DSS) in urban wastewater treatment plants (WWTPs) facilitates the application of more efficient knowledge-based techniques for process management, ensuring effluent water quality while minimising the environmental cost of operation. Knowledge-based systems are characterised by their ability to work with poorly structured domains in which much of the relevant information is qualitative and/or uncertain. These are precisely the traits found in biological treatment systems, and consequently in a WWTP. However, the high complexity of DSSs makes their design, development and application in a real plant very costly, so it is crucial to produce a protocol that facilitates their export to WWTPs of similar technology. The objective of this thesis is precisely the development of a protocol that facilitates the systematic export of DSSs and the reuse of previously acquired process knowledge. The work builds on the case study resulting from exporting to the Montornès WWTP the original DSS prototype implemented at the Granollers WWTP. This DSS integrates two types of knowledge-based systems: rule-based systems (computer programs that emulate human reasoning and its problem-solving ability using the same sources of information) and case-based reasoning systems (knowledge-based programs that aim to solve the abnormal situations the plant is currently facing by recalling the action taken in a similar past situation). The work is structured in several chapters. The first introduces the reader to the world of decision support systems and to the wastewater treatment domain. The objectives are then set out and the materials and methods described. Next, the DSS prototype developed for the Granollers WWTP is presented. Once the prototype has been presented, the first protocol, proposed by the author of this thesis in his prior research work, is described. The results obtained from the practical application of the protocol to generate a new DSS for a different treatment plant, starting from the prototype, are then presented. This practical application allows the protocol to evolve towards a better export plan. Finally, it can be concluded that the new protocol reduces the time needed to carry out the export process, even though the number of required steps has increased, which means the new protocol is more systematic.
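The retrieval step of a case-based reasoning system like the one described is, at its simplest, a weighted nearest-neighbour search over stored situations. The following sketch assumes a flat case library of (descriptor, action) pairs; all names are hypothetical:

```python
import numpy as np

def retrieve_case(case_library, current, weights):
    """Return the stored case whose situation descriptor is closest
    (weighted Euclidean distance) to the current plant situation.
    The recalled action is then adapted and reused by the CBR cycle."""
    current = np.asarray(current, dtype=float)
    weights = np.asarray(weights, dtype=float)
    best, best_dist = None, float("inf")
    for descriptor, action in case_library:
        diff = np.asarray(descriptor, dtype=float) - current
        dist = np.sqrt((weights * diff ** 2).sum())
        if dist < best_dist:
            best, best_dist = (descriptor, action), dist
    return best
```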
Abstract:
Between 1895 and 1910 Barcelona saw a whole range of social, political and cultural changes due to the increasingly important emergence of the working masses. At the same time, cinema arrived in Catalonia, quickly becoming one of the favorite entertainments of the urban laboring population, which was in the process of creating a new culture opposed to that of the modernist and nineteenth-century elite. This is, broadly speaking, the context that serves as a starting point for a study of the role of cinema in shaping a mass audience in Barcelona, an analysis centered on the new urban spaces intended for the leisure of the lower classes that emerged with the birth of modern Barcelona, especially the Paral·lel avenue, whose opening in 1894 made even more apparent the great social tensions and inequalities existing in Barcelona's society at the end of the century.
Abstract:
Virtual tools are commonly used nowadays to optimize the product design and manufacturing process of fibre-reinforced composite materials. The present work focuses on two areas of interest: forecasting part performance and the particularities of the production process. The first part proposes a multi-physical optimization tool to support the concept stage of a composite part. The strategy is based on the strategic handling of information and, through a single control parameter, is able to evaluate the effects of design variations throughout the design and manufacturing steps in parallel. The second part targets the resin infusion process and the impact of thermal effects. The numerical and experimental approach allowed the identification of improvement opportunities regarding the implementation of algorithms in commercially available simulation software.
Abstract:
In the quantum mechanics literature it is common to find descriptors based on the pair density or the electron density, with varying success depending on the applications they target. To be chemically meaningful, a descriptor must provide the definition of an atom in a molecule, or be able to identify regions of molecular space associated with some chemical concept (such as a lone pair or a bonding region, among others). Along these lines, several partition schemes have been proposed: the theory of Atoms in Molecules (AIM), the Electron Localization Function (ELF), Voronoi cells, Hirshfeld atoms, fuzzy atoms, etc. The objective of this thesis is to explore density descriptors based on partitions of molecular space of the AIM, ELF or fuzzy-atom type, to analyse existing descriptors at different levels of theory, to propose new aromaticity descriptors, and to study the ability of all these tools to discriminate between different reaction mechanisms.
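Of the partition schemes listed above, ELF has a compact closed form worth recalling (the standard textbook definition, not a result of this thesis): it compares the Pauli kinetic energy density $D(\mathbf{r})$ of the system with its homogeneous electron gas value $D_h(\mathbf{r})$, so that ELF approaches 1 in well-localized regions such as lone pairs and bonds:

```latex
\mathrm{ELF}(\mathbf{r})
  = \frac{1}{1 + \left[ D(\mathbf{r}) / D_h(\mathbf{r}) \right]^{2}},
\qquad
D = \tfrac{1}{2}\sum_i |\nabla\psi_i|^2 - \frac{|\nabla\rho|^2}{8\rho},
\qquad
D_h = \tfrac{3}{10}\,(3\pi^2)^{2/3}\,\rho^{5/3}
```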
Abstract:
This thesis deals with the so-called Basis Set Superposition Error (BSSE) from both a methodological and a practical point of view. The purpose of the thesis is twofold: (a) to take a step forward in the correct characterization of weakly bound complexes and (b) to shed light on the actual implications of basis set extension effects in ab initio calculations and thereby contribute to the BSSE debate. The existing BSSE-correction procedures are deeply analyzed, compared, validated and, where necessary, improved. A new interpretation of the counterpoise (CP) method is used in order to define counterpoise-corrected descriptions of molecular complexes. This novel point of view allows for a study of the BSSE effects not only on the interaction energy but also on the potential energy surface and, in general, on any property derived from the molecular energy and its derivatives. A program has been developed for the calculation of CP-corrected geometry optimizations and vibrational frequencies, also using several counterpoise schemes for the case of molecular clusters. The method has also been implemented in the Gaussian98 revA10 package. The Chemical Hamiltonian Approach (CHA) methodology has also been implemented at the RHF and UHF levels of theory for an arbitrary number of interacting systems, using an algorithm based on block-diagonal matrices. Along with the methodological development, the effects of the BSSE on the properties of molecular complexes are discussed in detail. The CP and CHA methodologies are used for the determination of BSSE-corrected molecular complex properties related to the potential energy surface and the molecular wavefunction, respectively. First, the behaviour of both BSSE-correction schemes is systematically compared at different levels of theory and basis sets for a number of hydrogen-bonded complexes. The Complete Basis Set (CBS) limit of both uncorrected and CP-corrected molecular properties, such as stabilization energies and intermolecular distances, has also been determined, showing the capital importance of the BSSE correction. Several controversial topics of the BSSE correction are addressed as well. The counterpoise method is applied to internal rotational barriers, and the importance of the nuclear relaxation term is pointed out. The viability of the CP method for dealing with charged complexes and the BSSE effects on the double-well PES of blue-shifted hydrogen bonds are also studied in detail. In the case of molecular clusters, the effect of the high-order BSSE terms introduced with the hierarchical counterpoise scheme is also determined. The effect of the BSSE on electron density-related properties is addressed as well. The first-order electron density obtained with the CHA/F and CHA/DFT methodologies was used to assess, both graphically and numerically, the redistribution of the charge density upon BSSE correction. Several tools, such as Atoms in Molecules topological analysis, density difference maps, Quantum Molecular Similarity, and Chemical Energy Component Analysis, were used to analyze in depth, for the first time, the BSSE effects on the electron density of several hydrogen-bonded complexes of increasing size. The indirect effect of the BSSE on intermolecular perturbation theory results is also pointed out. It is shown that for a BSSE-free SAPT study of hydrogen fluoride clusters, the use of a counterpoise-corrected PES is essential in order to determine the proper molecular geometry for the SAPT analysis.
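For reference, the counterpoise method evaluates every fragment in the full complex basis. In the usual notation (subscript = system, superscript = basis set, parentheses = geometry), the CP-corrected interaction energy of a complex AB reads:

```latex
\Delta E_{\mathrm{int}}^{\mathrm{CP}}(AB)
  = E_{AB}^{AB}(AB) - E_{A}^{AB}(AB) - E_{B}^{AB}(AB)
```

Because each monomer borrows the partner's basis functions, the artificial stabilization from basis set extension cancels between the terms; extending this bookkeeping from the energy to its geometric derivatives is what yields the CP-corrected potential energy surfaces used throughout the thesis.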
Abstract:
The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of network resources and the desired quality of service for higher-layer applications. The window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using the PC is compared with QoS parameters in buffer-less environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for CAC in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the retention of the network performance guarantees. Several experiments have been carried out and investigated to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to confine the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay is a limit that cannot be dismissed for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under the previous premises, the convolution approach is the most accurate method used in bandwidth allocation. This method gives enough accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations. To overcome these drawbacks, a new evaluation method is analysed: the Enhanced Convolution Approach (ECA). In the ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each traffic class is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage requirements, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj); an expression for the CLRj evaluation is also presented. We can conclude that, by combining the ECA method with cut-off mechanisms, the utilisation of the ECA in real-time CAC environments as a single-level scheme is always possible.
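To make the convolution approach concrete, the following sketch convolves the rate distributions of independent on/off sources and reads the congestion probability off the tail of the aggregate distribution. This is a minimal buffer-less model with invented names, not the thesis' ECA implementation:

```python
import numpy as np

def aggregate_distribution(sources):
    """Convolution approach for CAC: each source is an on/off stream that
    emits `rate` cells per slot (a non-negative integer) with probability
    `p_on`. Convolving the per-source rate distributions gives the
    distribution of the instantaneous aggregate rate."""
    dist = np.array([1.0])                       # P(aggregate rate = 0) before any source
    for p_on, rate in sources:
        new = np.zeros(len(dist) + rate)
        new[:len(dist)] += dist * (1.0 - p_on)   # this source is silent
        new[rate:rate + len(dist)] += dist * p_on  # this source is active
        dist = new
    return dist

def congestion_probability(sources, capacity):
    """Probability that the aggregate rate exceeds the link capacity:
    the buffer-less congestion measure used as a CAC decision parameter."""
    return aggregate_distribution(sources)[capacity + 1:].sum()
```

For instance, `congestion_probability([(0.1, 2)] * 50, capacity=20)` estimates the probability that 50 identical sources jointly exceed a 20-cell-per-slot link. Grouping identical sources into classes and replacing the repeated per-source convolution with the multinomial distribution is precisely the saving the ECA exploits.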