954 results for I.3.8 [Computing Methodologies]: Computer Graphics-Applications
Abstract:
Weekly bulletin for healthcare professionals from the Secretaría General de Salud Pública y Participación Social of the Consejería de Salud
Abstract:
The concept of the Library of the Health Sciences has changed noticeably during the last decade. The embedded librarian is a recently emerged figure who works as a member of multidisciplinary groups with the mission of providing them with relevant literature as well as media for the acquisition, exchange and dissemination of information. This figure has gradually been introduced in some committees of the ASEMA. The objective of the present work is to describe the functions of the embedded librarian and the results obtained in our area.
Abstract:
In 2006 the Library of the Andalusian Public Health System (BVSSPA) was constituted as a Virtual Library providing resources and services accessed through the inter-hospital local area network (corporate intranet) and the Internet. At that time the Hospital de la Axarquia still had no institutional presence of its own on the Internet, and the librarian identified the need to create a space for communication with the "digital users" of the area Library through a website.
MATERIALS AND METHODS. The reasons why we opted for a blog were: it required no financial outlay for its establishment; it allowed great versatility, both in its administration and in its management by users; and it made it possible to compile different Web 2.0 communication tools on the same platform. Among the options available we chose Blogger (Google Inc.). The blog allowed the library to enter the world of 2.0 or Social Web services. The benefits were many, especially the visibility of the service and communication with the user. The 2.0 tools that have been incorporated into the library are: content syndication (RSS), which allows users to stay informed about updates to the blog; sharing of documents and other multimedia, such as presentations through SlideShare, images through Flickr or Picasa, and videos through YouTube; and presence on social networks such as Facebook and Twitter.
RESULTS. Activity has been tracked with the Google Analytics tool, which helps determine the number of blog visits. From its establishment on November 17th, 2006, until November 29th, 2010, the blog received 15,787 visitors and 38,422 page views; on average 2.4 pages were consulted per visit, and each visit had an average stay on the site of 4'31''.
DISCUSSION. The blog has served as a communication and information tool with the user. Since its creation we have incorporated technologies and tools to interact with the user. With all the tools used we have applied the concept of "open source", and the contents were generated from the activities organized in the Knowledge Management Unit: the anatomo-clinical sessions, the training activities, dissemination events, etc. The result has been the customization of library services, contextualized in the Knowledge Management Unit - Axarquia. On social networks we have shared information and files with professionals and the community.
CONCLUSIONS. The blog has allowed us to explore technologies that let us communicate with the user and the community, disseminate information and documents with the participation of users, and become the "Interactive Library" we aspire to be.
Abstract:
Introduction: Our goal was to examine the web content and the technical information that pest control services make available to users through their webpages. Method: A total of 70 webpages of biocide services in the province of Málaga (Spain) were analyzed. We used 15 evaluation indicators grouped into 5 parameters relating to the service provider's data; the reliability of the information and services; the accuracy of content and writing style; technical resources; and interaction with users. Sectoral legislation, official records of products and deliveries, standards and technical guides were used as test instruments. Results: Companies showed a remarkable degree of awareness regarding the implementation and use of new technologies. Negative aspects that can have an impact on users' confidence were identified, relating to the reliability of the information and to deficiencies in the description of the companies' service portfolios and credentials. The integration and use of collaborative 2.0 platforms was poorly developed and underexploited. Discussion: It is possible to improve users' trust by intervening in those aspects that affect the reliability of the information provided on the web.
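The study aggregates 15 evaluation indicators grouped into 5 parameters. A minimal sketch of how such a checklist-based assessment can be aggregated per parameter; the indicator names, groupings and boolean scoring below are illustrative assumptions, not the study's actual instrument:

```python
# Hypothetical checklist aggregation for website evaluation.
# Indicator names and groupings are illustrative, not the study's actual instrument.
from collections import defaultdict

# Each observation: (parameter_group, indicator_name, indicator_met)
observations = [
    ("provider_data", "company_registration_shown", True),
    ("provider_data", "physical_address_shown", False),
    ("reliability", "official_product_registry_cited", True),
    ("content", "services_portfolio_described", False),
    ("technical", "responsive_design", True),
    ("interaction", "web_2_0_platforms_used", False),
]

def score_by_parameter(obs):
    """Return the fraction of indicators met within each parameter group."""
    met, total = defaultdict(int), defaultdict(int)
    for group, _, ok in obs:
        total[group] += 1
        met[group] += int(ok)
    return {group: met[group] / total[group] for group in total}

print(score_by_parameter(observations))
```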
Abstract:
The evolution of smartphones, all of them equipped with digital cameras, is driving a growing demand for ever more complex applications that rely on real-time computer vision algorithms. However, video signals only keep increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms need to be parallel, so that they can run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays is found in graphics processing units (GPUs): devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques, with colour and depth information, to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel configuration usual in 3D TV. Using a backward-mapping approach with a depth-inpainting scheme based on modified median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is severely detrimental and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes with multimodal backgrounds, but have seen little use because of their large computational and memory cost. The proposed system, implemented in real time on a GPU, features dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the positions of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of the proposal. Finally, we propose a general method for approximating arbitrarily complex functions with continuous piecewise linear functions, specifically formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. The proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a near-optimal partition of the function's domain that minimizes this error.
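The background-subtraction subsystem described above builds on per-pixel kernel density estimation. A minimal CPU sketch of the core classifier only, assuming Gaussian kernels with a fixed bandwidth; the thesis estimates bandwidths dynamically, updates the model selectively and runs everything on a GPU:

```python
# Minimal per-pixel KDE background model (Gaussian kernel, fixed bandwidth).
# Illustrative sketch: the thesis adds dynamic bandwidths, selective model
# updates, a multi-region particle filter and a real-time GPU implementation.
import numpy as np

def background_probability(frame, samples, bandwidth=10.0):
    """frame: (H, W) grayscale image; samples: (N, H, W) past background samples."""
    diff = frame[None, :, :] - samples                      # (N, H, W)
    kernels = np.exp(-0.5 * (diff / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels.mean(axis=0)                             # per-pixel density estimate

def foreground_mask(frame, samples, threshold=1e-3):
    """Pixels whose estimated background density is low are labelled foreground."""
    return background_probability(frame, samples) < threshold

# Usage: collect N frames without moving objects as `samples`,
# then call foreground_mask(new_frame, samples) on incoming frames.
```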
Abstract:
Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous interconnection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which borrow principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process in which the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process selects, step by step, the most promising solution candidates and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and in most cases was superior to the other representations.
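The synthesis pipeline described above (specify global behaviour, derive objective functions, evolve local programs) follows the standard evolutionary loop. A schematic sketch under stated assumptions: program representation, variation operators and the randomized network simulation are passed in as stubs, and all names are illustrative rather than taken from the thesis:

```python
# Schematic evolutionary loop for synthesizing distributed programs.
# `random_program`, `mutate`, `crossover` and `simulate_network` stand in for the
# thesis's program representations (SGP, LGP, RBGP, ...) and its randomized
# network simulations; higher fitness means closer to the specified behaviour.
import random

def evolve(random_program, mutate, crossover, simulate_network,
           population_size=100, generations=50, n_simulations=5):
    population = [random_program() for _ in range(population_size)]
    for _ in range(generations):
        # Objective: how closely the locally executed program approximates the
        # specified global behaviour, averaged over several randomized networks.
        scored = [(sum(simulate_network(p) for _ in range(n_simulations)) / n_simulations, p)
                  for p in population]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        parents = [p for _, p in scored[:population_size // 2]]
        offspring = []
        while len(offspring) < population_size - len(parents):
            a, b = random.sample(parents, 2)
            offspring.append(mutate(crossover(a, b)))     # variation operators
        population = parents + offspring
    return max(population, key=simulate_network)
```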
Abstract:
Self-adaptive software provides a profound solution for adapting applications to changing contexts in dynamic and heterogeneous environments. Having emerged from Autonomic Computing, it incorporates fully autonomous decision making based on predefined structural and behavioural models. The most common approach to architectural run-time adaptation is the MAPE-K adaptation loop, implementing an external adaptation manager without manual user control. However, it has turned out that adaptation behaviour lacks acceptance if it does not correspond to a user's expectations – particularly in Ubiquitous Computing scenarios with user interaction. Adaptations can be irritating and distracting if they are not appropriate for a given situation. In general, uncertainty during development and at run-time causes problems when users are outside the adaptation loop. In a literature study, we analyse publications on self-adaptive software research. The results show a discrepancy between the motivated application domains, the maturity of examples, and the quality of evaluations on the one hand and the provided solutions on the other. Only a few publications analysed the impact of their work on the user, but many employ user-oriented examples for motivation and demonstration. To incorporate the user within the adaptation loop and to deal with uncertainty, our proposed solutions enable user participation for interactive self-adaptive software while at the same time maintaining the benefits of intelligent autonomous behaviour. We define three dimensions of user participation, namely temporal, behavioural, and structural user participation. This dissertation contributes solutions for user participation in the temporal and behavioural dimensions. The temporal dimension addresses the moment of adaptation, which is classically determined by the self-adaptive system. We provide mechanisms allowing users to influence or to define the moment of adaptation. With our solution, users can have full control over the moment of adaptation, or the self-adaptive software can consider the user's situation more appropriately. The behavioural dimension addresses the actual adaptation logic and the resulting run-time behaviour. Application behaviour is established during development and does not necessarily match run-time expectations. Our contributions are three distinct solutions which allow users to make changes to the application's run-time behaviour: dynamic utility functions, fuzzy-based reasoning, and learning-based reasoning. The foundation of our work is a notification and feedback solution that improves the intelligibility and controllability of self-adaptive applications by implementing bi-directional communication between the self-adaptive software and the user. The different mechanisms of the temporal and behavioural participation dimensions require the notification and feedback solution to inform users about adaptation actions and to provide a means to influence adaptations. Case studies show the feasibility of the developed solutions. Moreover, an extensive user study with 62 participants was conducted to evaluate the impact of notifications before and after adaptations. Although the study revealed that there is no preference for a particular notification design, participants clearly appreciated intelligibility and controllability over autonomous adaptations.
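One of the behavioural-participation mechanisms named above is dynamic utility functions. A minimal sketch of the general idea, assuming a weighted-sum utility whose weights are nudged by explicit user feedback; the attribute names and the update rule are illustrative assumptions, not the dissertation's actual design:

```python
# Illustrative dynamic utility function: the adaptation manager scores candidate
# configurations, and explicit user feedback shifts the attribute weights over time.

def utility(config, weights):
    """Weighted sum of normalized quality attributes of a candidate configuration."""
    return sum(weights[attr] * value for attr, value in config.items())

def apply_feedback(weights, endorsed_attr, learning_rate=0.1):
    """Increase the weight of an attribute the user endorsed, then re-normalize."""
    weights = dict(weights)
    weights[endorsed_attr] += learning_rate
    total = sum(weights.values())
    return {attr: w / total for attr, w in weights.items()}

weights = {"battery_life": 0.4, "responsiveness": 0.4, "brightness": 0.2}
candidates = [
    {"battery_life": 0.9, "responsiveness": 0.3, "brightness": 0.5},
    {"battery_life": 0.4, "responsiveness": 0.8, "brightness": 0.7},
]
best = max(candidates, key=lambda c: utility(c, weights))
weights = apply_feedback(weights, "responsiveness")  # user preferred snappier behaviour
```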
Abstract:
Many computer vision and human-computer interaction applications developed in recent years need to evaluate complex and continuous mathematical functions as an essential step toward proper operation. However, rigorous evaluation of this kind of function often implies a very high computational cost, unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It improves upon previous error estimates and allows the user to control the trade-off between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation on modern Graphics Processing Units (GPUs), where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique is a perfect match for any application requiring the evaluation of continuous functions. We have measured its quality and efficiency in detail on several functions, and in particular on the Gaussian function, because it is extensively used in many areas of computer vision and cybernetics and is expensive to evaluate.
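A minimal sketch of the kind of piecewise linear approximation discussed above, applied to the Gaussian on a uniform partition; the paper derives asymptotically tight error bounds and a nearly optimal partition, whereas here the error of the simplest (uniform) choice is just measured numerically:

```python
# Uniform piecewise linear approximation of the Gaussian and its maximum error.
# Illustrative only: the paper additionally provides tight analytical bounds and
# a near-optimal partition; this sketch measures the uniform-grid error directly.
import numpy as np

def gaussian(x, sigma=1.0):
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def piecewise_linear_max_error(f, a, b, n_subintervals, n_probe=100_000):
    knots = np.linspace(a, b, n_subintervals + 1)
    x = np.linspace(a, b, n_probe)
    approx = np.interp(x, knots, f(knots))   # linear interpolation between knots
    return np.max(np.abs(f(x) - approx))

for n in (16, 64, 256):
    print(n, piecewise_linear_max_error(gaussian, -4.0, 4.0, n))
```

For a twice-differentiable function, the maximum error of linear interpolation on a uniform grid decays quadratically with the number of subintervals, which is the trade-off between accuracy and subinterval budget that the abstract refers to.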
Abstract:
"Supported in part by grant U.S. AEC AT(11-1) 1469."
Abstract:
"COO-1469-0152. File no. 818."
Abstract:
Augmented reality is among the latest information technologies in the modern electronics industry. Its essence lies in adding advanced computer graphics to real and/or digitized images. This paper gives a brief analysis of the concept and of the approaches to implementing augmented reality for an expanded presentation of a digitized object of national cultural and/or scientific heritage. ACM Computing Classification System (1998): H.5.1, H.5.3, I.3.7.
Abstract:
We have developed the computer programme NUTRISOL, a nutritional programme intended for the analysis of dietary intake by transforming foods into nutrients. It was developed for the Windows operating system using Visual Basic 6.0 and is distributed on a CD-ROM. We used the Spanish CSIC Food Composition Table and household food measures commonly used in Spain, both of which can be modified and updated. Diverse kinds of diets and reference anthropometric data are also included. The results may be processed with various statistical programmes. The programme contains three modules: 1) Nutritional epidemiology, which allows the user to create or open a database, manage samples, analyse food intake, consult nutrient contents and export data to statistical programmes. 2) Analysis of diets and recipes, and creation or modification of new ones. 3) Consultation of different diets for prevalent pathologies. Independent tools for modifying the original tables and for calculating energy requirements, recommended nutrient intakes and anthropometric indexes are also offered. In conclusion, the NUTRISOL programme is an application that runs on PC computers with minimal equipment, offers a friendly interface, is easy to use and free, may be adapted to each country, and has demonstrated its usefulness and reliability in different epidemiologic studies. Furthermore, it may become an efficient instrument for clinical nutrition and health promotion.
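The core operation described above, transforming recorded food intakes into nutrients via a composition table, can be sketched as follows. The food items and per-100 g values below are made up purely for illustration; NUTRISOL itself uses the Spanish CSIC Food Composition Table:

```python
# Toy food-to-nutrient conversion: composition values are per 100 g of food.
# The figures below are illustrative only, not taken from the CSIC table.
composition_table = {
    "white bread": {"energy_kcal": 261, "protein_g": 8.5, "fat_g": 1.6},
    "whole milk":  {"energy_kcal": 65,  "protein_g": 3.3, "fat_g": 3.6},
}

def analyse_intake(intake_grams):
    """intake_grams: mapping of food name -> grams eaten; returns total nutrients."""
    totals = {}
    for food, grams in intake_grams.items():
        for nutrient, per_100g in composition_table[food].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + per_100g * grams / 100.0
    return totals

print(analyse_intake({"white bread": 60, "whole milk": 250}))
```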