913 results for Computer Vision and Robotics (Autonomous Systems)


Relevance:

100.00%

Publisher:

Abstract:

One of the open questions in current physics is the understanding of systems out of equilibrium. In contrast to equilibrium physics, no formalism is currently known in this field that allows a systematic description of the different systems. To deepen the understanding of such systems, this thesis studies two different systems that show strongly nonlinear behaviour under an external field: on the one hand, the behaviour of particles under the influence of an externally applied force, and on the other hand, the behaviour of a system near the critical point under shear. The model system in the first part of the thesis is a binary Yukawa mixture that undergoes a glass transition at low temperatures. This leads to a strongly increasing relaxation time of the system, so that nonlinear behaviour is observed relatively quickly even for small forces. Depending on the applied constant force, three regimes with markedly different particle behaviour are identified in this work. In the second part of the thesis, the Ising model under shear is considered. Near the critical point, the applied shear field affects the fluctuations in the system. As a consequence, the system becomes strongly anisotropic and exhibits two different correlation lengths that diverge with different exponents. The usual isotropic finite-size scaling formalism can therefore no longer be applied to this system. This work shows how it can be generalized to the anisotropic case and how the critical points, as well as the corresponding critical exponents, can then be computed.
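A minimal sketch of what such an anisotropic finite-size scaling ansatz can look like is given below in LaTeX; the generic two-exponent form is standard for strongly anisotropic critical points, but the specific observables, exponents and scaling functions used in the thesis are not reproduced here.

```latex
% Near the critical point the two correlation lengths diverge with
% different exponents, defining an anisotropy exponent \theta:
\begin{align}
  \xi_\parallel &\sim |t|^{-\nu_\parallel}, &
  \xi_\perp &\sim |t|^{-\nu_\perp}, &
  \theta &= \frac{\nu_\parallel}{\nu_\perp}.
\end{align}
% A generic anisotropic finite-size scaling ansatz for an observable O
% measured on an L_\parallel x L_\perp lattice then reads
\begin{equation}
  O(t, L_\parallel, L_\perp)
    = L_\perp^{\,x_O/\nu_\perp}\,
      \tilde{O}\!\left(t\,L_\perp^{1/\nu_\perp},\,
                       \frac{L_\parallel}{L_\perp^{\theta}}\right),
\end{equation}
% so data for different system sizes collapse only when the generalized
% aspect ratio L_\parallel / L_\perp^\theta is held fixed, which is what
% allows critical points and exponents to be extracted in the anisotropic case.
```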

Relevance:

100.00%

Publisher:

Abstract:

Systems Biology is an innovative way of doing biology that has recently arisen in bioinformatics contexts, characterised by the study of biological systems as complex systems, with a strong focus on the system level and on the interaction dimension. In other words, the objective is to understand biological systems as a whole, putting in the foreground not only the study of the individual parts as standalone parts, but also their interactions and the global properties that emerge at the system level through the interaction among the parts. This thesis focuses on the adoption of multi-agent systems (MAS) as a suitable paradigm for Systems Biology, for developing models and simulations of complex biological systems. Multi-agent systems have recently been introduced in informatics contexts as a suitable paradigm for modelling and engineering complex systems. Roughly speaking, a MAS can be conceived as a set of autonomous and interacting entities, called agents, situated in some kind of environment, where they fruitfully interact and coordinate so as to obtain a coherent global system behaviour. The claim of this work is that the general properties of MAS make them an effective approach for modelling and building simulations of complex biological systems, following the methodological principles identified by Systems Biology. In particular, the thesis focuses on cell populations as biological systems. To support this claim, the thesis introduces and describes (i) a MAS-based model conceived for modelling the dynamics of systems of cells interacting inside cell environments called niches, and (ii) a computational tool developed for implementing the models and executing the simulations. The tool is meant to work as a kind of virtual laboratory, on top of which virtual experiments can be performed, characterised by the definition and execution of specific models implemented as MASs, so as to support the validation, falsification and improvement of the models through the observation and analysis of the simulations. A hematopoietic stem cell system is taken as the reference case study for formulating a specific model and executing virtual experiments.
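Purely as an illustrative sketch of this MAS style of modelling (not the thesis's actual model or tool; the agent rules, niche capacity and division probability below are invented placeholders):

```python
import random

class CellAgent:
    """A minimal autonomous cell agent (hypothetical behaviour rules)."""
    def __init__(self, divide_prob=0.1):
        self.divide_prob = divide_prob

    def step(self, niche):
        # Interaction with the environment: divide only if the niche has room.
        if niche.has_room() and random.random() < self.divide_prob:
            niche.add(CellAgent(self.divide_prob))

class Niche:
    """The environment (niche) in which agents are situated and interact."""
    def __init__(self, capacity=50):
        self.capacity = capacity
        self.cells = []

    def has_room(self):
        return len(self.cells) < self.capacity

    def add(self, cell):
        self.cells.append(cell)

# Toy simulation loop: global population dynamics emerge from local agent rules.
niche = Niche(capacity=50)
niche.add(CellAgent())
for t in range(20):
    for cell in list(niche.cells):
        cell.step(niche)
    print(t, len(niche.cells))
```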

Relevance:

100.00%

Publisher:

Abstract:

Background: Individuals with type 1 diabetes (T1D) have to count the carbohydrates (CHOs) of their meal to estimate the prandial insulin dose needed to compensate for the meal's effect on blood glucose levels. CHO counting is very challenging but also crucial, since an error of 20 grams can substantially impair postprandial control. Method: The GoCARB system is a smartphone application designed to support T1D patients with CHO counting of nonpacked foods. In a typical scenario, the user places a reference card next to the dish and acquires 2 images with his/her smartphone. From these images, the plate is detected and the different food items on the plate are automatically segmented and recognized, while their 3D shape is reconstructed. Finally, the food volumes are calculated and the CHO content is estimated by combining the previous results and using the USDA nutritional database. Results: To evaluate the proposed system, a set of 24 multi-food dishes was used. For each dish, 3 pairs of images were taken, and for each pair the system was applied 4 times. The mean absolute percentage error in CHO estimation was 10 ± 12%, which led to a mean absolute error of 6 ± 8 CHO grams for normal-sized dishes. Conclusion: The laboratory experiments demonstrated the feasibility of the GoCARB prototype system, since the error was below the initial goal of 20 grams. However, further improvements and evaluation are needed prior to launching a system able to accommodate inter- and intracultural eating habits.
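As a hedged illustration of the final estimation step only (not the actual GoCARB implementation), the CHO content can be obtained by combining each recognized item's reconstructed volume with a nutrient table; the food labels, densities and carbohydrate values below are placeholder assumptions, not USDA data.

```python
# Hypothetical per-food constants (density in g/ml, CHO grams per 100 g);
# in the real system these values would come from the USDA nutritional database.
NUTRIENTS = {
    "rice":     {"density_g_per_ml": 0.75, "cho_per_100g": 28.0},
    "potatoes": {"density_g_per_ml": 0.65, "cho_per_100g": 17.0},
}

def estimate_cho(segmented_items):
    """segmented_items: list of (food_label, volume_ml) from the vision pipeline."""
    total_cho = 0.0
    for label, volume_ml in segmented_items:
        info = NUTRIENTS[label]
        weight_g = volume_ml * info["density_g_per_ml"]        # volume -> weight
        total_cho += weight_g * info["cho_per_100g"] / 100.0   # weight -> CHO grams
    return total_cho

# Example: two recognized food items with reconstructed 3D volumes.
print(round(estimate_cho([("rice", 180.0), ("potatoes", 120.0)]), 1), "g CHO")
```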

Relevance:

100.00%

Publisher:

Abstract:

Rho guanosine triphosphatases (GTPases) control the cytoskeletal dynamics that power neurite outgrowth. This process consists of dynamic neurite initiation, elongation, retraction, and branching cycles that are likely to be regulated by specific spatiotemporal signaling networks, which cannot be resolved with static, steady-state assays. We present NeuriteTracker, a computer-vision approach to automatically segment and track neuronal morphodynamics in time-lapse datasets. Feature extraction then quantifies dynamic neurite outgrowth phenotypes. We identify a set of stereotypic neurite outgrowth morphodynamic behaviors in a cultured neuronal cell system. Systematic RNA interference perturbation of a Rho GTPase interactome consisting of 219 proteins reveals a limited set of morphodynamic phenotypes. As proof of concept, we show that loss of function of two distinct RhoA-specific GTPase-activating proteins (GAPs) leads to opposite neurite outgrowth phenotypes. Imaging of RhoA activation dynamics indicates that both GAPs regulate different spatiotemporal Rho GTPase pools, with distinct functions. Our results provide a starting point to dissect spatiotemporal Rho GTPase signaling networks that regulate neurite outgrowth.
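As an illustrative sketch only (not the actual NeuriteTracker feature set), dynamic outgrowth phenotypes can be quantified from tracked per-frame neurite lengths; the feature names, frame interval and event threshold below are assumptions.

```python
import numpy as np

def morphodynamic_features(lengths_um, dt_min=5.0, event_threshold_um=1.0):
    """Summarize one tracked neurite from its per-frame length (micrometers).

    lengths_um: 1D sequence of neurite length per time-lapse frame.
    dt_min:     assumed frame interval in minutes.
    """
    lengths = np.asarray(lengths_um, dtype=float)
    dlen = np.diff(lengths)  # per-frame length change
    return {
        "net_outgrowth_um": lengths[-1] - lengths[0],
        "elongation_rate_um_per_min":
            np.mean(dlen[dlen > 0]) / dt_min if np.any(dlen > 0) else 0.0,
        "retraction_rate_um_per_min":
            np.mean(-dlen[dlen < 0]) / dt_min if np.any(dlen < 0) else 0.0,
        "n_retraction_events": int(np.sum(dlen < -event_threshold_um)),
    }

# Example: a neurite that extends, partially retracts, then extends again.
print(morphodynamic_features([2.0, 4.5, 7.0, 5.5, 8.0, 10.5]))
```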

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the development of an ontology for autonomous systems, as the initial stage of a research programme on autonomous systems engineering within a model-based control approach. The ontology aims to provide a unified conceptual framework for autonomous systems' stakeholders, from developers to software engineers. The modular ontology contains both generic and domain-specific concepts for the description and engineering of autonomous systems. The ontology serves as the basis of a methodology to obtain the autonomous system's conceptual models. The objective is to obtain and use these models as the main input for the autonomous system's model-based control system.
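A toy, purely illustrative encoding of such a modular ontology is sketched below: a generic module of concepts reused by a domain-specific module. The concept names and properties are hypothetical and are not those of the ontology in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A node in an is-a hierarchy of ontology concepts."""
    name: str
    parent: "Concept | None" = None
    properties: dict = field(default_factory=dict)

# Generic module: concepts common to any autonomous system.
system     = Concept("System")
component  = Concept("Component", parent=system)
controller = Concept("Controller", parent=component)

# Domain-specific module (e.g., a mobile-robot domain) reusing the generic concepts.
rover           = Concept("Rover", parent=system, properties={"mobility": "wheeled"})
path_controller = Concept("PathController", parent=controller)

def ancestors(c: Concept):
    """Walk the is-a hierarchy, as a query over the conceptual model would."""
    while c.parent is not None:
        c = c.parent
        yield c.name

print(list(ancestors(path_controller)))  # ['Controller', 'Component', 'System']
```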

Relevance:

100.00%

Publisher:

Abstract:

This paper outlines an automatic computer vision system for the identification of Avena sterilis, a weed that grows in cereal crops. The final goal is to reduce the quantity of herbicide to be sprayed, an important and necessary step for precision agriculture: only areas where the presence of weeds is significant should be sprayed. The main problems in identifying this kind of weed are its spectral signature, similar to that of the crop, and its irregular distribution in the field. A new strategy has been designed involving two processes: image segmentation and decision making. The image segmentation combines basic, suitable image processing techniques in order to extract cells from the image as the low-level units. Each cell is described by two area-based attributes measuring the relations between crop and weeds. The decision making is based on Support Vector Machines and determines whether a cell must be sprayed. The main findings of this paper lie in the combination of the segmentation and Support Vector Machines decision processes. Another important contribution of this approach is the minimal memory and computing power required by the system compared with previous works. The performance of the method is illustrated by a comparative analysis against some existing strategies.
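As a hedged sketch of the decision-making stage only (the paper's actual attributes and training data are not reproduced), each cell can be described by two area-based attributes and classified with a support vector machine; the feature values and labels below are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Each image cell -> two hypothetical area-based attributes:
# (crop cover ratio, weed cover ratio), both in [0, 1].
X_train = np.array([
    [0.70, 0.05], [0.65, 0.02], [0.80, 0.10],   # cells not worth spraying
    [0.40, 0.35], [0.30, 0.50], [0.45, 0.40],   # cells with significant weed presence
])
y_train = np.array([0, 0, 0, 1, 1, 1])          # 1 = spray this cell

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

# Decide, cell by cell, whether herbicide should be applied.
new_cells = np.array([[0.75, 0.04], [0.35, 0.45]])
print(clf.predict(new_cells))  # e.g. [0 1]
```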

Relevance:

100.00%

Publisher:

Abstract:

First, this paper describes a future layered Air Traffic Management (ATM) system centred on the execution phase of flights. The layered ATM model is based on the work currently performed by SESAR [1] and takes into account the availability of accurate and updated flight information "seen by all" across the European airspace. This shared information on each flight will be referred to as the Reference Business Trajectory (RBT). In the layered ATM system, exchanges of information will involve several actors (human or automatic) with varying time horizons, areas of responsibility and tasks. Second, the paper identifies the need to define the negotiation processes required to agree on revisions to the RBT in the layered ATM system. Third, the final objective of the paper is to bring to the attention of researchers and engineers the commonalities between multi-player games and Collaborative Decision Making (CDM) processes in a layered ATM system.

Relevance:

100.00%

Publisher:

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has somewhat stagnated in the past few years. Consequently, new computer vision algorithms will need to be parallel, so that they can run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPU). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them interesting for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually used in 3D TV. Using a backward mapping approach with a depth inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. In addition, we show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have not been widely used because of their large computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective updating of the background model, updating of the position of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce the computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for approximating arbitrarily complex functions using continuous piecewise-linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, which are normally unused for numerical computation. Our proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function that minimizes the approximation error.
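As a hedged illustration of the last contribution only (uniform sampling plus linear interpolation, the operation that GPU texture filtering units evaluate in hardware), and not the thesis's actual method or its quasi-optimal domain partitioning:

```python
import numpy as np

def build_lut(f, lo, hi, n_samples):
    """Sample f uniformly on [lo, hi]; linear interpolation between samples
    mimics what a GPU texture filtering unit computes in hardware."""
    xs = np.linspace(lo, hi, n_samples)
    return xs, f(xs)

def approx(x, xs, ys):
    """Continuous piecewise-linear evaluation of the sampled function."""
    return np.interp(x, xs, ys)

# Example: approximate an arbitrary (here, transcendental) function and
# measure how the maximum approximation error shrinks as samples increase.
f = lambda x: np.exp(-x) * np.sin(4 * x)
x_dense = np.linspace(0.0, np.pi, 10_000)
for n in (8, 32, 128):
    xs, ys = build_lut(f, 0.0, np.pi, n)
    err = np.max(np.abs(f(x_dense) - approx(x_dense, xs, ys)))
    print(f"{n:4d} samples -> max error {err:.2e}")
```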

Relevance:

100.00%

Publisher:

Abstract:

Background: For a comprehensive health sector response to intimate partner violence (IPV), interventions should target the individual and health facility levels, along with the broader health systems level, which includes issues of governance, financing, planning, service delivery, monitoring and evaluation, and demand generation. This study aims to map and explore the integration of the IPV response in the Spanish national health system. Methods: Information was collected on five key areas based on WHO recommendations: policy environment, protocols, training, monitoring and prevention. A systematic review of public documents was conducted to assess 39 indicators in each of Spain's 17 regional health systems. In addition, we performed qualitative content analysis of 26 individual interviews with key informants responsible for coordinating the health sector response to IPV in Spain. Results: In 88% of the 17 autonomous regions, the laws concerning IPV included the health sector response, but IPV was integrated into the regional health plan in only 41% of them. Despite the existence of a supportive national structure, responding to IPV still relies strongly on the will of health professionals. All seventeen regions had published comprehensive protocols to guide the health sector response to IPV, but participants recognized that responding to IPV was more complex than merely following the steps of a protocol. Published training plans existed in 43% of the regional health systems, but none had institutionalized IPV training in medical and nursing schools. Only 12% of regional health systems collected information on the quality of the IPV response, and there were many limitations to collecting information on IPV within health services, for example underreporting, fears about confidentiality, and underuse of data for monitoring purposes. Finally, preventive activities that were considered essential were not institutionalized anywhere. Conclusions: Within the Spanish health system, differences exist in the achievements both between regions and between the areas assessed. Progress towards the integration of IPV has been notable at the policy level, more modest regarding health service delivery, and very limited in terms of preventive actions.

Relevance:

100.00%

Publisher:

Abstract:

Computer science studies have a strong multidisciplinary character, since most graduates do their professional work outside a computing environment, in close collaboration with professionals from many different areas. However, the training offered in computer science studies lacks that multidisciplinary component, focusing more on purely technical aspects. In this paper we present a novel experience in which computer science studies and educational psychology find common ground and a realistic way of working together through laboratory practices. Specifically, the work has computer science students develop diagnosis support systems, based on artificial intelligence techniques, that could later be used by future educational psychologists. The applications developed by the computer science students build a model for the diagnosis of pervasive developmental disorders (PDD), also commonly known as autism spectrum disorders (ASD). The complexity of this diagnosis, owing not only to the unique characteristics of each person affected but also to the large number of variables involved, requires very strong and close interdisciplinary participation. This work demonstrates that it is possible to intervene from a curricular perspective, within the university, to promote the development of interpersonal skills. What we show, in this way, is a methodology for the design of interdisciplinary practices and a guide for monitoring and evaluation. The results are very encouraging, since we obtained significant differences in academic achievement between students who attended a course using the new methodology and those who did not.

Relevance:

100.00%

Publisher:

Abstract:

In this article we present JavaVis, a new framework oriented to teaching Computer Vision-related subjects. It is a computer vision library divided into three main areas: the 2D package covers classical computer vision processing; the 3D package, which includes a complete 3D geometric toolset, is used for 3D vision computing; and the Desktop package comprises a tool for graphically designing and testing new algorithms. JavaVis is designed to be easy to use, both for launching and testing existing algorithms and for developing new ones.

Relevance:

100.00%

Publisher:

Abstract:

Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library oriented to teaching computer vision. We have designed and built the library from scratch with an emphasis on readability and understanding rather than efficiency; however, the library can also be used for research purposes. JavaVis is an open source Java library oriented to the teaching of Computer Vision. It consists of a framework with several features that meet the demands of such teaching. It has been designed to be easy to use: the user does not have to deal with internal structures or the graphical interface, and should a student need to add a new algorithm, this can be done simply enough. After sketching the library, we focus on the experience students gain from using it in several computer vision courses. Our main goal is to find out whether the students understand what they are doing, that is, how much the library helps them grasp the basic concepts of computer vision. Over the last four years we have conducted surveys to assess how much the students have improved their skills by using this library.

Relevance:

100.00%

Publisher:

Abstract:

"Period of performance: September, l968- December, 1973."