957 results for region-based algorithms


Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a field application of a high-level reinforcement learning (RL) control system for solving the action selection problem of an autonomous robot in a cable tracking task. The learning system is characterized by the use of a direct policy search method for learning the internal state/action mapping. Policy-only algorithms may suffer from long convergence times when applied to real robots. In order to speed up the process, the learning phase has been carried out in a simulated environment and, in a second step, the policy has been transferred to and tested successfully on a real robot. Future work will continue the learning process online on the real robot while it performs the task. We demonstrate the feasibility of the approach with real experiments on the underwater robot ICTINEU AUV.
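The direct policy search approach mentioned above can be illustrated with a minimal REINFORCE-style update for a linear-Gaussian policy; the feature vectors, scalar action and learning rate below are hypothetical stand-ins for illustration, not the actual ICTINEU controller.

```python
import numpy as np

def reinforce_update(theta, episode, alpha=0.01):
    """One REINFORCE-style policy-gradient update from a single episode.

    episode is a list of (state_features, action, reward) tuples collected
    while the (simulated) robot follows the cable; theta parameterises a
    linear-Gaussian policy over a single steering action.  All names and
    settings here are illustrative, not the ICTINEU controller itself.
    """
    rewards = np.array([r for _, _, r in episode], dtype=float)
    returns = np.cumsum(rewards[::-1])[::-1]          # undiscounted reward-to-go
    for (phi, a, _), g in zip(episode, returns):
        mu = theta @ phi                              # mean of the Gaussian policy
        grad_log_pi = (a - mu) * phi                  # grad of log N(a | mu, 1) wrt theta
        theta = theta + alpha * g * grad_log_pi       # gradient ascent on expected return
    return theta
```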

Relevance:

30.00%

Publisher:

Abstract:

When unmanned underwater vehicles (UUVs) perform missions near the ocean floor, optical sensors can be used to improve local navigation. Video mosaics make it possible to process the images acquired by the vehicle efficiently and to obtain position estimates. In this paper we discuss the role of lens distortion in this context, proving that degenerate mosaics have their origin not only in the selected motion model or in registration errors, but also in the cumulative effect of radial distortion residuals. Additionally, we present results on the accuracy of different feature-based approaches for the self-correction of lens distortion, which may guide the choice of appropriate correction techniques.
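As a point of reference, the cumulative effect of radial distortion residuals can be reasoned about with the common polynomial distortion model; the two-coefficient form below is a standard choice and not necessarily the exact model analysed in the paper.

```python
def apply_radial_distortion(xu, yu, k1, k2, cx=0.0, cy=0.0):
    """Map undistorted image coordinates to distorted ones with the common
    polynomial model x_d = cx + (x_u - cx) * (1 + k1*r^2 + k2*r^4), where r
    is the distance to the distortion centre (cx, cy).  Two coefficients
    are an illustrative choice; residual distortion left by an inaccurate
    (k1, k2) is what accumulates over the mosaic."""
    dx, dy = xu - cx, yu - cy
    r2 = dx * dx + dy * dy
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * factor, cy + dy * factor
```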

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a multicast implementation based on adaptive routing with anticipated calculation. Three different cost measures can be considered for a point-to-multipoint connection: bandwidth cost, connection establishment cost and switching cost. Applying the method, which is based on pre-evaluated routing tables, makes it possible to reduce the bandwidth cost and the connection establishment cost individually.
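The three cost measures can be made concrete with a toy evaluation of a candidate point-to-multipoint tree; the tree representation, per-link establishment cost and per-branching-node switching cost below are assumptions made for illustration, not the paper's exact definitions.

```python
def multicast_tree_costs(tree_links, demand_bw, branching_nodes,
                         c_establish_per_link=1.0, c_switch_per_node=1.0):
    """Toy evaluation of the three cost measures of a point-to-multipoint
    connection routed over the links in tree_links:
      - bandwidth cost: bandwidth reserved on every tree link,
      - connection establishment cost: signalling cost, here per link,
      - switching cost: cost of the nodes that replicate traffic.
    A routing scheme based on pre-evaluated tables would compare these
    values over the candidate trees stored for each destination set."""
    bandwidth_cost = demand_bw * len(tree_links)
    establishment_cost = c_establish_per_link * len(tree_links)
    switching_cost = c_switch_per_node * len(branching_nodes)
    return bandwidth_cost, establishment_cost, switching_cost
```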

Relevance:

30.00%

Publisher:

Abstract:

In this paper a colour texture segmentation method that unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of each region is modelled by combining non-parametric kernel density estimation techniques (which capture the colour behaviour) with classical co-occurrence-matrix-based texture features. In this way, region information is defined and accurate boundary information can be extracted to guide the segmentation process. The regions then concurrently compete for the image pixels in order to segment the whole image taking both information sources into account. Finally, experimental results are presented which demonstrate the performance of the proposed method.
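A minimal sketch of such a combined region model, assuming scipy is available, could estimate the colour density with a Gaussian kernel and derive one co-occurrence statistic from a grey-level co-occurrence matrix; the bandwidth, quantisation and neighbour offset are illustrative choices rather than the paper's settings.

```python
import numpy as np
from scipy.stats import gaussian_kde

def region_descriptor(pixels_rgb, grey_patch, levels=8):
    """pixels_rgb: (N, 3) colour samples of a region; grey_patch: 2-D grey
    levels (0-255) of the same region.  Returns a colour density estimate
    and a GLCM contrast feature for horizontally adjacent pixels."""
    colour_kde = gaussian_kde(pixels_rgb.T)            # non-parametric colour model
    q = (grey_patch.astype(int) * levels // 256).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                                # count horizontal co-occurrences
    glcm /= max(glcm.sum(), 1.0)
    idx = np.arange(levels)
    contrast = float(((idx[:, None] - idx[None, :]) ** 2 * glcm).sum())
    return colour_kde, contrast

# colour_kde(pixel.reshape(3, 1)) then gives the colour likelihood of a pixel,
# which is the kind of region information the active regions compete with.
```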

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a study of connection availability in GMPLS over optical transport networks (OTN), taking into account different network topologies. Two basic path protection schemes are considered and compared with the unprotected case. The selected topologies are heterogeneous in geographic coverage, network diameter, link lengths and average node degree. Connection availability is also computed considering the reliability data of the physical components and a well-known network availability model. The results show several correspondences between suitable path protection schemes and network topology characteristics.
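For reference, the series/parallel availability calculation underlying such comparisons is standard; the component availabilities in the usage comment are illustrative numbers, not values from the paper.

```python
def path_availability(component_availabilities):
    """Unprotected path: every component (fibre spans, nodes, ...) must be
    up, so availabilities multiply (series model, independence assumed)."""
    a = 1.0
    for ai in component_availabilities:
        a *= ai
    return a

def protected_connection_availability(working, backup):
    """Dedicated path protection: the connection fails only when both the
    working and the (assumed independent, disjoint) backup path are down."""
    a_w = path_availability(working)
    a_b = path_availability(backup)
    return 1.0 - (1.0 - a_w) * (1.0 - a_b)

# Illustrative values only:
# path_availability([0.9999, 0.9995, 0.9999])                            # no protection
# protected_connection_availability([0.9999, 0.9995], [0.9998, 0.9996])  # 1+1 protection
```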

Relevance:

30.00%

Publisher:

Abstract:

Abstract: Big data is nowadays a fashionable topic, independently of what people mean when they use the term. But being big is just a matter of volume, although there is no clear agreement on the size threshold. On the other hand, it is easy to capture large amounts of data using a brute-force approach. So the real goal should not be big data but to ask ourselves, for a given problem, what the right data is and how much of it is needed. For some problems this implies big data, but for the majority of problems much less data is needed. In this talk we explore the trade-offs involved and the main problems that come with big data, using the Web as a case study: scalability, redundancy, bias, noise, spam, and privacy.

Speaker Biography: Ricardo Baeza-Yates is VP of Research for Yahoo Labs, leading teams in the United States, Europe and Latin America since 2006, and based in Sunnyvale, California, since August 2014. During this time he has led the labs in Barcelona and Santiago de Chile, and between 2008 and 2012 he also oversaw the Haifa lab. He is also a part-time Professor at the Dept. of Information and Communication Technologies of the Universitat Pompeu Fabra in Barcelona, Spain. During 2005 he was an ICREA research professor at the same university. Until 2004 he was a Professor at the Dept. of Computing Science of the University of Chile, where he founded and directed the Center for Web Research (on leave of absence until today). He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989. Before that he obtained two master's degrees (M.Sc. CS & M.Eng. EE) and the electronics engineer degree from the University of Chile in Santiago. He is co-author of the best-selling textbook Modern Information Retrieval, published in 1999 by Addison-Wesley, with a second enlarged edition in 2011 that won the ASIST 2012 Book of the Year award. He is also co-author of the 2nd edition of the Handbook of Algorithms and Data Structures, Addison-Wesley, 1991, and co-editor of Information Retrieval: Algorithms and Data Structures, Prentice-Hall, 1992, among more than 500 other publications. From 2002 to 2004 he was elected to the Board of Governors of the IEEE Computer Society, and in 2012 he was elected to the ACM Council. He has received the Organization of American States award for young researchers in exact sciences (1993), the Graham Medal for innovation in computing given by the University of Waterloo to distinguished alumni (2007), the CLEI Latin American distinction for contributions to CS in the region (2009), and the National Award of the Chilean Association of Engineers (2010), among other distinctions. In 2003 he was the first computer scientist to be elected to the Chilean Academy of Sciences, and since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named an ACM Fellow and in 2011 an IEEE Fellow.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Celiac disease (CD) is an intestinal autoimmune disease triggered by the ingestion of gluten. Given the lack of information on the presence of CD in Latin America (LA), we investigated the prevalence of the disease in this region by means of a systematic literature review and a meta-analysis. Methods and results: This work was carried out in two phases. The first was a cross-sectional study of 300 Colombian individuals. The second was a systematic review and meta-regression following the PRISMA guidelines. Our results reveal an absence of anti-tissue transglutaminase (tTG) and IgA anti-endomysial (EMA) antibodies in the Colombian population. In the systematic review, 72 articles met the selection criteria; the estimated prevalence of CD in LA was 0.46% to 0.64%, while the prevalence in first-degree relatives was 5.5% to 5.6%, and in patients with type 1 diabetes mellitus it was 4.6% to 8.7%. Conclusion: Our study shows that the prevalence of CD in healthy individuals in LA is similar to that reported for the European population.

Relevance:

30.00%

Publisher:

Abstract:

The research we present here forms part of a two-phase project, one quantitative and the other qualitative, assessing the use of primary health care services. This paper presents the qualitative phase of that research, which is aimed at ascertaining the needs, beliefs, barriers to access and health practices of the immigrant population in comparison with the native population, as well as the perceptions of healthcare professionals. The qualitative phase was specifically addressed to Moroccan and sub-Saharan immigrants. The aims of this paper are as follows: to analyse any possible implications of family organisation for the health practices of the immigrant population; to ascertain social practices relating to illness; to understand the meanings of sexual and reproductive health practices; and to ascertain the ideas and perceptions of immigrants, local people and professionals regarding health and the health system. Methods: qualitative research based on discourse analysis. Data gathering techniques consisted of discussion groups with health system users and semi-structured individual interviews with healthcare professionals. The sample was taken from the Basic Healthcare Areas of Salt and Banyoles (belonging to the Girona Healthcare Region), the discussion groups comprising (a) 6 immigrant Moroccan women, (b) 7 immigrant sub-Saharan African women and (c) 6 immigrant and native men (2 native men, 2 Moroccan men and 2 sub-Saharan men), and the semi-structured interviews being conducted with the following healthcare professionals: 3 gynaecologists, 3 nurses and 1 administrative staff member. Results: use of the healthcare system is linked to the perception of not being well, knowledge of the healthcare system, length of residence in Spain and internalisation of traditional Western medicine as a cure mechanism. The divergences found among the groups of immigrants, local people and healthcare professionals with regard to health education, use of the healthcare service, sexual and reproductive healthcare, and reticence about being attended by healthcare personnel of the opposite sex demonstrate the need to work with the immigrant population as a heterogeneous group. Conclusions: the results we have obtained support the idea that feeling unwell is a psycho-social process, as it takes place within a specific socio-cultural situation and spans a range of beliefs, perceptions and ideas regarding symptoms and how to treat them.

Relevance:

30.00%

Publisher:

Abstract:

Interest in developing applications with autonomous underwater vehicles (AUVs) has grown considerably in recent years. AUVs are attractive because of their size and the fact that they do not need a human operator to pilot them. Even so, it is impossible to compare, in terms of efficiency and flexibility, the ability of a human pilot with the limited operational capabilities offered by current AUVs. Using AUVs to cover large areas involves solving complex problems, especially if the robot is expected to react in real time to sudden changes in the working conditions. For these reasons, the development of autonomous control systems aimed at improving these capabilities has become a priority. This thesis deals with the problem of decision making with AUVs. The work presented focuses on the study, design and application of behaviours for AUVs using reinforcement learning (RL) techniques. The main contribution of this thesis is the application of several RL techniques to improve the autonomy of underwater robots, with the final goal of demonstrating the feasibility of these algorithms for learning autonomous underwater tasks in real time. In RL, the robot tries to maximise a scalar reward obtained as a consequence of its interaction with the environment. The goal is to find an optimal policy that maps every possible state to the action to execute in that state so as to maximise the sum of total rewards. This thesis mainly investigates two families of RL algorithms: value function (VF) methods and policy gradient (PG) methods. The final experimental results show the underwater robot Ictineu performing a real autonomous underwater cable tracking task. To carry it out, an Actor-Critic (AC) algorithm was designed as the result of fusing VF methods with PG techniques.
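A minimal sketch of the Actor-Critic idea (a critic learning a value function while an actor follows the policy gradient scaled by the critic's TD error) is given below; the linear features, Gaussian policy and learning rates are assumptions for illustration, not the thesis' exact algorithm.

```python
import numpy as np

def actor_critic_step(theta, w, phi_s, phi_s_next, action, reward,
                      alpha_actor=0.01, alpha_critic=0.1, gamma=0.99):
    """One Actor-Critic update with linear function approximation.

    w parameterises the critic's value estimate V(s) = w . phi(s); theta
    parameterises a Gaussian policy with mean theta . phi(s) over a scalar
    action (e.g. a steering command).  The critic's TD error plays the role
    of the reinforcement signal for the actor."""
    td_error = reward + gamma * (w @ phi_s_next) - (w @ phi_s)
    w = w + alpha_critic * td_error * phi_s                 # critic (value function) update
    grad_log_pi = (action - theta @ phi_s) * phi_s          # Gaussian policy, unit variance
    theta = theta + alpha_actor * td_error * grad_log_pi    # actor (policy gradient) update
    return theta, w
```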

Relevance:

30.00%

Publisher:

Abstract:

The performance of a model-based diagnosis system can be affected by several sources of uncertainty, such as model errors, measurement uncertainty and disturbances. This uncertainty can be handled by means of interval models. The aim of this thesis is to propose a methodology for fault detection, isolation and identification based on interval models. The methodology includes algorithms to obtain automatically the symbolic expressions of the residual generators, enhancing the structural isolability of the faults, in order to design the fault detection tests. These algorithms are based on the structural model of the system. The stages of fault detection, isolation and identification are stated as constraint satisfaction problems over continuous domains and solved by means of interval-based consistency techniques. Qualitative fault isolation is enhanced by reasoning in which the signs of the symptoms are derived from analytical redundancy relations or bond graph models of the system. An initial, empirical analysis of the differences between interval-based and statistical techniques is also presented in this thesis. The performance and efficiency of the contributions are illustrated through several application examples covering different levels of complexity.
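As a minimal illustration of an interval-based fault detection test (assuming the model's output interval has already been computed by propagating the parameter intervals), a measurement is flagged only when it is inconsistent with that interval enlarged by the noise bound; the names below are hypothetical.

```python
def interval_fault_test(y_measured, y_model_interval, noise_bound):
    """Interval-based consistency test: no fault is indicated while the
    measurement stays inside the interval predicted by the uncertain model,
    widened by the measurement noise bound.  y_model_interval = (lo, hi)
    would come from propagating interval parameters through the model."""
    lo, hi = y_model_interval
    consistent = (lo - noise_bound) <= y_measured <= (hi + noise_bound)
    return not consistent      # True -> inconsistency detected -> fault indicated
```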

Relevance:

30.00%

Publisher:

Abstract:

Implicit surfaces are useful in many areas of computer graphics. One of their main advantages is that they can easily be used as modelling primitives. Even so, they are not widely used because rendering them takes considerable time. When an accurate visualisation is required, the best option is ray tracing. However, small parts of the surfaces disappear during rendering. This happens because of the truncation present in the floating-point representation used by computers: some bits are lost during the mathematical operations in the intersection algorithms. This thesis presents algorithms to solve these problems. The research is based on Modal Interval Analysis, which provides tools for solving problems with quantified uncertainty. The thesis also provides the mathematical foundations needed to develop these algorithms.
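The motivation can be illustrated with plain (non-modal) interval arithmetic: evaluating the implicit function over an interval of the ray parameter yields an enclosure of its values, so a ray segment can be discarded only when that enclosure provably excludes zero. The unit sphere and naive interval operations below are simplifying assumptions, not the Modal Interval Analysis developed in the thesis.

```python
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def sphere_over_ray_interval(t, origin, direction, radius=1.0):
    """Enclosure of f(o + t*d) = |o + t*d|^2 - r^2 for the interval t = (lo, hi).
    If the returned interval does not contain 0, no intersection can exist in
    that segment of the ray (with outward rounding, which this sketch omits,
    the test would also be safe against floating-point truncation)."""
    acc = (-radius * radius, -radius * radius)
    for o, d in zip(origin, direction):
        comp = iadd((o, o), imul(t, (d, d)))   # interval for o + t*d on this axis
        acc = iadd(acc, imul(comp, comp))      # accumulate (o + t*d)^2
    return acc

# Example: sphere_over_ray_interval((0.0, 0.5), (0.0, 0.0, 3.0), (0.0, 0.0, -1.0))
# returns an interval that excludes 0, so this ray segment can be skipped.
```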

Relevance:

30.00%

Publisher:

Abstract:

In IP/MPLS-over-WDM networks, which carry large amounts of traffic, the ability to guarantee that traffic reaches its destination node has become an important problem, since the failure of a single network element can result in a large amount of lost information. To guarantee that the traffic affected by a failure reaches its destination node, new routing algorithms have been defined that incorporate knowledge of the protection available at both layers: the optical layer (WDM) and the packet-based layer (IP/MPLS). This avoids reserving resources to protect the traffic at both layers. The new algorithms achieve better use of network resources, offer fast recovery times, avoid resource duplication and reduce the number of optical-to-electrical signal conversions.
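A generic single-layer illustration of path protection, assuming the networkx library, is to route a working path and then a link-disjoint backup path; the multilayer IP/MPLS-over-WDM algorithms of the thesis go beyond this scheme, precisely to avoid reserving protection resources at both layers.

```python
import networkx as nx

def working_and_backup(g, src, dst):
    """Textbook dedicated path protection on a single layer: compute a
    shortest working path, remove its links and compute a backup path on
    the remaining graph, so no single link failure can cut both paths.
    Raises nx.NetworkXNoPath if the topology cannot offer a disjoint backup."""
    working = nx.shortest_path(g, src, dst, weight="cost")
    residual = g.copy()
    residual.remove_edges_from(zip(working, working[1:]))
    backup = nx.shortest_path(residual, src, dst, weight="cost")
    return working, backup
```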

Relevance:

30.00%

Publisher:

Abstract:

This thesis focuses on Computer Vision and, more specifically, on image segmentation, one of the basic stages of image analysis, which consists of dividing the image into a set of visually distinct and uniform regions according to their intensity, colour or texture. A strategy is proposed based on the complementary use of region and boundary information during the segmentation process, an integration that alleviates some of the basic problems of traditional segmentation. The boundary information first allows the number of regions present in the image to be identified and a seed to be placed inside each one, in order to model the characteristics of the regions statistically and thereby define the region information. This information, together with the boundary information, is used to define an energy function that expresses the properties required of the desired segmentation: uniformity inside the regions and contrast with the neighbouring regions at their boundaries. A set of active regions then begin to grow, competing for the image pixels in order to optimise the energy function or, in other words, to find the segmentation that best fits the requirements expressed in that function. Finally, the whole process has been embedded in a pyramidal structure, which allows the segmentation result to be refined progressively and reduces its computational cost. The strategy has been extended to the texture segmentation problem, which involves some basic considerations such as modelling the regions from a set of texture features and extracting the boundary information when texture is present in the image. Finally, the approach has been extended to image segmentation taking both colour and texture properties into account. In this respect, the joint use of non-parametric density estimation techniques to describe colour, and of texture features based on the co-occurrence matrix, is proposed in order to model the image regions adequately and completely. The proposal has been evaluated objectively and compared with different integration techniques using synthetic images. In addition, experiments with real images have been included, with very positive results.
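A minimal sketch of the kind of energy such competing active regions optimise is a weighted combination of a region fit term and a boundary contrast term; the particular terms and the weight are illustrative assumptions, not the exact functional defined in the thesis.

```python
import numpy as np

def segmentation_energy(region_log_likelihood, boundary_strength_on_borders, alpha=0.5):
    """region_log_likelihood: per-pixel log-likelihood of each pixel under the
    statistical model of the region it is currently assigned to (higher is
    better).  boundary_strength_on_borders: edge strength sampled along the
    current region borders (higher is better).  Lower energy is better, so
    both terms enter with a negative sign and alpha balances them."""
    region_term = -float(np.sum(region_log_likelihood))
    boundary_term = -float(np.sum(boundary_strength_on_borders))
    return alpha * region_term + (1.0 - alpha) * boundary_term
```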

Relevance:

30.00%

Publisher:

Abstract:

The proposal presented in this thesis is to provide designers of knowledge-based supervisory systems for dynamic systems with a framework that facilitates their tasks, avoiding interface problems between tools, data flow and management. The approach is intended to assist both control and process engineers in their work. The use of AI technologies to diagnose and perform control loops and, of course, to assist process supervisory tasks such as fault detection and diagnosis is within the scope of this work. Special effort has been put into the integration of tools for assisting the design of expert supervisory systems. With this aim, the experience of Computer Aided Control Systems Design (CACSD) frameworks has been analysed and used to design a Computer Aided Supervisory Systems (CASSD) framework. In this sense, some basic facilities are required to be available in the proposed framework: ·

Relevance:

30.00%

Publisher:

Abstract:

Traditionally, reproductions of the real world have been presented to us as flat images. These images used to be produced as paintings on canvas or as drawings. Today, fortunately, we can still see hand-made paintings, although most images are acquired with cameras and either shown directly to an audience, as in cinema, television or photographic exhibitions, or processed by a computerised system to obtain a particular result. Such processing is applied in fields ranging from industrial quality control to cutting-edge research in artificial intelligence. By applying mid-level processing algorithms, 3D images can be obtained from 2D images, using well-known techniques called Shape From X, where X denotes the method used to obtain the third dimension and varies according to the technique employed. Although the evolution towards the 3D camera began in the 1990s, the techniques for obtaining three-dimensional shape need to become more and more accurate. The applications of 3D scanners have grown considerably in recent years, especially in fields such as leisure, computer-assisted diagnosis and surgery, robotics, etc. One of the most widely used techniques for obtaining 3D information from a scene is triangulation and, more specifically, the use of three-dimensional laser scanners. Since their formal appearance in scientific publications in 1971 [SS71], there have been contributions addressing inherent problems such as reducing occlusions and improving accuracy, acquisition speed, shape description, etc. Each and every method for obtaining 3D points from a scene has an associated calibration procedure, and this procedure plays a decisive role in the performance of a three-dimensional acquisition device. The aim of this thesis is to address the problem of 3D shape acquisition from a comprehensive point of view, reporting a state of the art on triangulation-based laser scanners, testing the operation and performance of different systems, and making contributions to improve the accuracy of laser peak detection, especially under adverse conditions, as well as solving the calibration problem using projective geometric methods.
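As background, the depth recovered by a triangulation-based laser scanner follows from elementary geometry once the camera and the laser plane are calibrated; the arrangement below (camera and laser separated by a baseline, angles measured from the camera's optical axis) is the textbook configuration, not the specific calibrated setups evaluated in the thesis.

```python
import math

def triangulation_depth(peak_offset_px, focal_length_px, baseline, laser_angle):
    """Depth of a laser-lit point for the classic arrangement: the camera is
    at the origin looking along +Z, the laser source sits at distance
    `baseline` along +X and its beam is tilted by `laser_angle` (radians)
    from the Z axis towards the camera.  `peak_offset_px` is the detected
    laser peak position in the image, measured from the principal point
    towards the laser side."""
    camera_angle = math.atan2(peak_offset_px, focal_length_px)  # viewing-ray angle from Z
    z = baseline / (math.tan(laser_angle) + math.tan(camera_angle))
    return z
```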