10 results for Text analytic approach
at Universitat de Girona, Spain
Abstract:
A review article in The New England Journal of Medicine notes that almost a century ago Abraham Flexner, a research scholar at the Carnegie Foundation for the Advancement of Teaching, undertook an assessment of medical education in the 155 medical schools then in operation in the United States and Canada. Flexner's report emphasized the nonscientific approach of American medical schools to preparation for the profession, which contrasted with the university-based system of medical education in Germany. At the core of Flexner's view was the notion that formal analytic reasoning, the kind of thinking integral to the natural sciences, should hold pride of place in the intellectual training of physicians. This idea was pioneered at Harvard University, the University of Michigan, and the University of Pennsylvania in the 1880s, but was most fully expressed in the educational program at Johns Hopkins University, which Flexner regarded as the ideal for medical education. (...)
Abstract:
We investigate whether dimensionality reduction using a latent generative model is beneficial for the task of weakly supervised scene classification. Specifically, we are given a set of labelled images of scenes (for example, coast, forest, city, river, etc.), and our objective is to classify a new image into one of these categories. Our approach consists of first discovering latent "topics" using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature, here applied to a bag-of-visual-words representation of each image, and subsequently training a multiway classifier on the topic distribution vector of each image. We compare this approach to that of representing each image directly by a bag-of-visual-words vector and training a multiway classifier on those vectors. To this end, we introduce a novel vocabulary using dense color SIFT descriptors and then investigate the classification performance under changes in the size of the visual vocabulary, the number of latent topics learned, and the type of discriminative classifier used (k-nearest neighbour or SVM). We achieve superior classification performance to recent publications that have used a bag-of-visual-words representation, in all cases using the authors' own data sets and testing protocols. We also investigate the gain from adding spatial information. We show applications to image retrieval with relevance feedback and to scene classification in videos.
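The topic-discovery step described above can be sketched with a minimal EM implementation of pLSA on a toy documents-by-words count matrix; the matrix, topic count and iteration budget below are illustrative, not the paper's actual data or settings.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit pLSA by EM on a documents-by-words count matrix.

    Returns P(topic|doc) rows (the reduced topic-distribution vector
    per image/document) and P(word|topic)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random initialisation of the two conditional distributions.
    p_z_d = rng.random((n_docs, n_topics))          # P(z|d)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))         # P(w|z)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) for every (doc, word) pair.
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]      # shape (d, z, w)
        resp = joint / joint.sum(axis=1, keepdims=True).clip(1e-12)
        # M-step: re-estimate both distributions from expected counts.
        expected = counts[:, None, :] * resp               # n(d,w) P(z|d,w)
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True).clip(1e-12)
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True).clip(1e-12)
    return p_z_d, p_w_z

# Toy bag-of-visual-words counts: 4 "images", 6 "visual words".
counts = np.array([[5, 4, 0, 0, 1, 0],
                   [4, 5, 1, 0, 0, 0],
                   [0, 0, 5, 4, 0, 1],
                   [0, 1, 4, 5, 0, 0]], dtype=float)
topics, _ = plsa(counts, n_topics=2)
print(topics.shape)   # one topic-distribution vector per image
```

The rows of `topics` would then feed a discriminative classifier (k-NN or SVM) in place of the raw word-count vectors.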
Abstract:
Introspection about agent dynamics has an important impact on individual and cooperative decisions in multi-agent environments. Introspection, a cognitive ability derived from the "agent" metaphor, allows agents to be aware of their capabilities to correctly perform tasks. This introspection, mainly over capabilities related to the dynamics, provides agents with adequate reasoning for reaching safe commitments in cooperative systems. To this end, capabilities guarantee an adequate and explicit representation of such dynamics. This approach changes and improves the way agents can coordinate to carry out tasks, and how they manage their interactions and commitments in cooperative environments. The approach has been tested in scenarios where coordination is important, beneficial and necessary. The results and conclusions are presented, highlighting the advantages of introspection in improving the performance of multi-agent systems in coordinated tasks and task allocation.
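The core idea, that an agent inspects an explicit representation of its own capabilities before committing to a task, can be sketched as follows; the class and capability names are illustrative stand-ins, not the thesis's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    available: bool = True   # dynamics: a capability may degrade over time

@dataclass
class IntrospectiveAgent:
    name: str
    capabilities: dict = field(default_factory=dict)

    def add_capability(self, cap):
        self.capabilities[cap.name] = cap

    def can_perform(self, task):
        """Introspection step: consult the explicit capability model."""
        cap = self.capabilities.get(task)
        return cap is not None and cap.available

    def commit(self, task):
        """Commit only when introspection says the task is feasible,
        keeping commitments in the cooperative system safe."""
        return self.can_perform(task)

agent = IntrospectiveAgent("a1")
agent.add_capability(Capability("transport"))
agent.add_capability(Capability("weld", available=False))
print(agent.commit("transport"))  # True
print(agent.commit("weld"))       # False: degraded capability, no unsafe commitment
```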
Abstract:
The main objective pursued in this thesis is the development and systematization of a methodology for addressing management problems in the dynamic operation of Urban Wastewater Systems. The proposed methodology suggests operational strategies that can improve the overall performance of the system under certain problematic situations through a model-based approach. The methodology has three main steps. The first step includes the characterization and modelling of the case study, the definition of scenarios, the evaluation criteria and the operational settings that can be manipulated to improve the system's performance. In the second step, Monte Carlo simulations are launched to evaluate how the system performs over a wide range of combinations of operational settings, and a global sensitivity analysis is conducted to rank the most influential operational settings. Finally, the third step consists of a screening methodology that applies a multi-criteria analysis to select the best combinations of operational settings.
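Steps two and three can be sketched as follows. The performance model, setting names and ranges are illustrative stand-ins (not the thesis's wastewater model), and the |Pearson r| ranking is a cheap surrogate for a full global sensitivity analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs = 2000

# Step 2a: Monte Carlo sampling of operational-setting combinations.
settings = {
    "pump_rate":     rng.uniform(0.2, 1.0, n_runs),
    "aeration":      rng.uniform(0.0, 1.0, n_runs),
    "recycle_ratio": rng.uniform(0.1, 0.9, n_runs),
}

# Toy performance criterion (lower = better), e.g. an overflow volume.
perf = (2.0 * settings["aeration"]
        - 1.0 * settings["pump_rate"]
        + 0.1 * settings["recycle_ratio"]
        + rng.normal(0, 0.05, n_runs))

# Step 2b: rank settings by influence on the criterion.
ranking = sorted(settings,
                 key=lambda k: abs(np.corrcoef(settings[k], perf)[0, 1]),
                 reverse=True)
print(ranking)   # most influential operational setting first

# Step 3: screen the best-performing combinations (a single criterion
# here; the thesis applies a multi-criteria analysis at this point).
best = np.argsort(perf)[:5]
```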
Abstract:
The use of cryopreserved sperm in the artificial insemination (AI) of livestock species allows greater health control and the creation of germplasm banks of high genetic value, among other advantages. In the pig market, most inseminations are still performed with chilled semen, owing to the success of long-term extenders and also to the sensitivity of boar sperm to cryopreservation. Although this sensitivity stems from particular features of sperm physiology in this species, some ejaculates maintain their sperm quality parameters after cryopreservation (good "freezability" ejaculates, GFEs), whereas others do not survive the process (poor "freezability" ejaculates, PFEs). The first objective of this study was to compare both groups in terms of in vivo fertility. The second objective was to test the efficiency of post-cervical artificial insemination (post-CAI) with cryopreserved sperm. The third objective was to search for predictors of ejaculate freezability, both in GFEs and PFEs and at three steps of the cryopreservation process (at 17°C, at 5°C and at 240 min post-thawing). This objective was pursued by evaluating conventional sperm quality parameters and by studying, under the microscope, the localisation and reactivity of three proteins (GLUT3, HSP90AA1 and Cu/ZnSOD) related to sperm physiology and with possible roles in freezability. The fourth objective was to quantify the expression of the three proteins by western blot, in sperm from both GFEs and PFEs and at the three steps mentioned above, in order to determine their potential as freezability predictors.
For the first and second objectives, 86 sows were inseminated by post-CAI with 26 ejaculates from Piétrain boars, each split into a portion chilled at 17°C (control treatment) and a cryopreserved portion, both portions classified in turn as GFEs or PFEs. The most relevant results showed that the odds of pregnancy were half as high in inseminations with cryopreserved sperm from PFEs (P < 0.05) as in inseminations with cryopreserved sperm from GFEs, indicating that ejaculates with high percentages of progressively motile sperm and of membrane integrity (above 40% in GFEs) are more likely to produce pregnancies than ejaculates with poor in vitro sperm function (PFEs). Neither the number of farrowing sows, nor the litter size, nor the risk of semen backflow differed significantly between inseminations with cryopreserved sperm from GFEs and control inseminations with chilled semen, which demonstrates the good applicability of post-CAI with cryopreserved sperm. Finally, for the third and fourth objectives, 29 and 11 ejaculates from Piétrain boars, respectively, were cryopreserved. Two sperm kinetic parameters, linearity (LIN) and straightness (STR), showed greater motility hyperactivation in PFEs than in GFEs after 30 min at 5°C during cryopreservation. Moreover, the combination of both parameters predicted the freezability of boar ejaculates with a reliability close to 72%. Although it was not possible to predict freezability through the microscopic evaluation of the three proteins, the western blot results revealed differences in HSP90AA1 expression in sperm at 17°C, most likely related to the better survival of sperm from GFEs through cryopreservation.
These results suggest that promoting the cryopreservation of boar sperm for AI requires the development of tests that predict freezability in chilled semen.
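The kind of two-parameter freezability predictor described above (GFE vs. PFE from LIN and STR) can be sketched with a plain logistic regression; the numbers below are synthetic illustrations, not the thesis's measurements, and the direction of the group differences is invented for the example (the abstract reports ~72% prediction reliability for LIN + STR).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-descent logistic regression (no regularisation)."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        grad = Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
# Synthetic LIN / STR values (%): two groups with shifted means.
lin = np.r_[rng.normal(40, 5, 30), rng.normal(55, 5, 30)]
str_ = np.r_[rng.normal(70, 5, 30), rng.normal(85, 5, 30)]
y = np.r_[np.ones(30), np.zeros(30)]   # 1 = GFE, 0 = PFE
X = np.c_[lin, str_]

Xs = (X - X.mean(0)) / X.std(0)        # standardise both parameters
w = fit_logistic(Xs, y)
pred = sigmoid(np.hstack([np.ones((60, 1)), Xs]) @ w) > 0.5
print((pred == y).mean())              # training accuracy on synthetic data
```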
Abstract:
The zooplankton community structure (composition, diversity, dynamics and trophic relationships) of Mediterranean marshes has been analysed by means of a size-based approach. In temporary basins, the shape of the biomass-size spectrum is related to the hydrological cycle. Linear spectra are more frequent in flooding situations, when nutrient input causes population growth of small-sized organisms, more than compensating for the effect of competitive interactions. During confinement conditions, the scarcity of food decreases zooplankton growth and increases intra- and interspecific interactions between zooplankton organisms, which favour the largest sizes, thus leading to the appearance of curved spectra. Temporary and permanent basins have a similar taxonomic composition, but the latter have higher species diversity, a more simplified temporal pattern and a size distribution dominated mainly by smaller sizes. In permanent basins, zooplankton growth is conditioned not only by the availability of resources but also by the variable predation of planktivorous fish, so the temporal variability of the spectra may also be a result of temporal differences in fish predation. Size diversity seems to be a better indicator of this community structure than species diversity. The tendency of size diversity to increase during succession makes it useful for discriminating between succession stages, something not achieved by analysing species diversity alone, since species diversity is low both under large and frequent disturbances and under small and rare ones. Differences in amino acid composition found among stages of copepod species indicate a gradual change in diet during the life cycle of these copepods, providing evidence of food niche partitioning during ontogeny, whereas Daphnia species show a relatively constant amino acid composition. There is a relationship between the degree of trophic niche overlap among stages of the different species and nutrient concentration.
Copepods, which have low trophic niche overlap among stages, are dominant in food-limited environments, probably because trophic niche partitioning during development allows them to reduce intraspecific competition between adults, juveniles and nauplii. Daphnia species are dominant only in water bodies or periods of high productivity, probably owing to the high trophic niche overlap between juveniles and adults. These findings suggest that, in addition to the effects of interspecific competition, predation and abiotic factors, intraspecific competition may also play an important role in structuring zooplankton assemblages.
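The size-based approach can be sketched as follows: bin individual body masses into logarithmic size classes to build a biomass-size spectrum, then compute a Shannon-type size diversity on the class proportions. The sizes below are synthetic illustrations, not the thesis's data.

```python
import numpy as np

def shannon(p):
    """Shannon diversity index of a proportion vector."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def size_spectrum(biomass, n_bins=8):
    """Total biomass per logarithmic size class (biomass-size spectrum)."""
    edges = np.logspace(np.log2(biomass.min()), np.log2(biomass.max()),
                        n_bins + 1, base=2)
    idx = np.clip(np.digitize(biomass, edges) - 1, 0, n_bins - 1)
    return np.bincount(idx, weights=biomass, minlength=n_bins)

rng = np.random.default_rng(2)
biomass = rng.lognormal(mean=0.0, sigma=1.0, size=500)  # individual masses
spectrum = size_spectrum(biomass)

# Size diversity: Shannon index on the biomass proportions per size class.
size_div = shannon(spectrum / spectrum.sum())
print(round(size_div, 3))
```

The same `shannon` function applied to species abundances gives the species diversity the abstract compares against.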
Abstract:
The basic idea of vibration-based damage detection in Structural Health Monitoring (SHM) is that damage alters the stiffness, mass or energy-dissipation properties of a system, which in turn alters its dynamic response. Within the context of pattern recognition, this thesis presents a hybrid reasoning methodology for assessing damage in structures, combining the use of a model of the structure and/or previous experiments with a knowledge-based reasoning scheme to assess whether damage is present, along with its severity and location. The methodology involves elements related to vibration analysis, mathematics (wavelets, statistical process control), signal and pattern analysis and processing (case-based reasoning, self-organising maps), smart structures and damage detection. The techniques are validated numerically and experimentally, considering corrosion, mass loss, mass accumulation and impacts. The structures used in this work are: a cantilever truss structure, an aluminium beam, two pipe sections and part of the wing of a commercial aircraft.
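The basic premise can be illustrated numerically: a stiffness loss shifts the natural frequency, and a statistical-process-control threshold on baseline variability flags the change. This is a bare-bones stand-in for the thesis's hybrid scheme (wavelets, SPC, case-based reasoning, self-organising maps); all numbers are illustrative.

```python
import numpy as np

def natural_freq(k, m):
    """Natural frequency (Hz) of a 1-DOF spring-mass system."""
    return np.sqrt(k / m) / (2 * np.pi)

rng = np.random.default_rng(3)
m = 2.0                      # mass, kg
k_healthy = 8.0e4            # stiffness, N/m

# Baseline: repeated measurements with small identification noise.
baseline = natural_freq(k_healthy, m) * (1 + rng.normal(0, 0.002, 50))
mean, sd = baseline.mean(), baseline.std()

# Shewhart-style 3-sigma control limit (statistical process control).
def damaged(f_measured):
    return abs(f_measured - mean) > 3 * sd

# Damage scenario: 5% stiffness loss (e.g. corrosion-type degradation).
f_damaged = natural_freq(0.95 * k_healthy, m)
print(damaged(natural_freq(k_healthy, m)))  # False: healthy reading
print(damaged(f_damaged))                   # True: frequency shift detected
```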
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the learning of the most common objects that we acquire through living. Nowadays, modelling the behaviour of our brain is still a fiction, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling the stereopsis of humans by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done on it in the future. This fact allows us to affirm that it is one of the most interesting topics in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. The epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, epipolar geometry does not solve the problem completely, as many considerations have to be taken into account; for example, points may have no correspondence due to a surface occlusion, or simply because they project outside the field of view of one camera.
The interest of the thesis is focused on structured light, which is considered one of the most frequently used techniques for reducing the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern, its projection onto the scene and an image sensor. The deformations between the pattern projected onto the scene and the one captured by the camera make it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation and quality control. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces the use of computationally expensive algorithms to search for the correct matches. In recent years, another structured light technique has grown in importance. This technique is based on codifying the light projected onto the scene so that it can be used to obtain a unique match: as each token of light is imaged by the camera, we only have to read its label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey of coded structured light, are presented and discussed. The work carried out in the frame of this thesis has permitted the presentation of a new coded structured light pattern that solves the correspondence problem uniquely and robustly. Uniquely, because each token of light is coded by a different word, which removes the problem of multiple matching. Robustly, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of the 3D measurement of static objects, and of the more complicated measurement of moving objects.
The technique can be used in both cases, as the pattern is coded in a single projection shot, so it can be used in several applications of robot vision. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we start from the assumption that the correspondence points can be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from (a) image acquisition; (b) image enhancement, filtering and processing; and (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
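The stereo principle summarised above, recovering a 3D point from its two projections once both camera models (3x4 projection matrices obtained by calibration) are known, can be sketched with linear (DLT) triangulation. The camera matrices and point below are illustrative, not the thesis's actual setup.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one pair of corresponding points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null-space of A = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenise

# Two simple calibrated cameras: same intrinsics, 0.2 m baseline in x.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

X_true = np.array([0.1, -0.05, 2.0])            # ground-truth 3D point
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]   # projection in camera 1
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]   # projection in camera 2

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))   # True: exact in the noise-free case
```

With a structured-light system, the projector simply takes the place of one camera, and the decoded pattern supplies the correspondence directly.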
Abstract:
The service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. The window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the utilisation of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using the PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for connection admission control (CAC) in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the network performance guarantees to be retained. Several experiments have been carried out and investigated to explain the deviation between the proposed method and simulation. Time parameters for the burst length and different buffer sizes have been considered. Experiments to establish the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay cannot be neglected in long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under the previous premises, the convolution approach is the most accurate method for bandwidth allocation. This method gives enough accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations.
To overcome these drawbacks, a new evaluation method is analysed: the Enhanced Convolution Approach (ECA). In ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each traffic class is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj); an expression for evaluating CLRj is also presented. We conclude that, by combining the ECA method with cut-off mechanisms, the use of ECA in real-time CAC environments as a single-level scheme is always possible.
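The basic convolution approach can be sketched for a bufferless model: each on/off source contributes a two-point rate distribution, the aggregate rate distribution is the convolution of all per-source distributions, and the probability of congestion is the probability mass above the link capacity. All parameters below are illustrative, not from the thesis's experiments.

```python
import numpy as np

def source_dist(peak, activity, unit=1):
    """Two-point rate distribution of one on/off source (in rate units)."""
    dist = np.zeros(peak // unit + 1)
    dist[0] = 1 - activity          # silent
    dist[peak // unit] = activity   # transmitting at peak rate
    return dist

def aggregate(sources):
    """Convolve all per-source distributions into the aggregate rate pmf."""
    agg = np.array([1.0])
    for d in sources:
        agg = np.convolve(agg, d)
    return agg

# 20 identical sources: peak 2 Mbit/s, activity factor 0.3; 20 Mbit/s link.
sources = [source_dist(peak=2, activity=0.3) for _ in range(20)]
agg = aggregate(sources)
capacity = 20

# Probability of congestion: aggregate rate exceeds the link capacity.
pc = agg[capacity + 1:].sum()
print(round(float(pc), 6))
```

Grouping identical sources into classes and replacing the repeated convolution with a multinomial evaluation per class is precisely the optimisation ECA introduces.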