959 results for Information integration
Abstract:
Abstract taken from the publication
Abstract:
The thesis focuses on Computer Vision and, more specifically, on image segmentation, one of the basic stages of image analysis, which consists of dividing the image into a set of visually distinct and uniform regions according to their intensity, colour or texture. A strategy is proposed based on the complementary use of region information and boundary information during the segmentation process, an integration that alleviates some of the basic problems of traditional segmentation. The boundary information is first used to identify the number of regions present in the image and to place a seed inside each of them, in order to model the characteristics of the regions statistically and thereby define the region information. This information, together with the boundary information, is used to define an energy function that expresses the properties required of the desired segmentation: uniformity inside the regions and contrast with neighbouring regions at the boundaries. A set of active regions then begins to grow, competing for the pixels of the image, in order to optimize the energy function or, in other words, to find the segmentation that best fits the requirements expressed in that function. Finally, the whole process has been embedded in a pyramidal structure, which allows the segmentation result to be refined progressively and its computational cost to be reduced. The strategy has been extended to the texture segmentation problem, which entails some basic considerations such as modelling the regions from a set of texture features and extracting the boundary information when texture is present in the image. Finally, the approach has been extended to image segmentation taking both colour and texture properties into account. To this end, the joint use of non-parametric density estimation techniques for the description of colour, and of texture features based on the co-occurrence matrix, is proposed to model the image regions adequately and completely. The proposal has been evaluated objectively and compared with several integration techniques using synthetic images. Experiments with real images have also been included, with very positive results.
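The abstract does not state the energy function explicitly; the following is a minimal sketch of a region-plus-boundary energy of the kind described, assuming a Bayesian region term and a boundary-contrast term (the weight \alpha and the exact form of each term are illustrative, not taken from the thesis):

    E(\{R_i\}) = \alpha \sum_i \sum_{p \in R_i} -\log P(x_p \mid \theta_i)
               + (1 - \alpha) \sum_i \oint_{\partial R_i} \bigl(1 - B(s)\bigr)\, ds

Here x_p is the intensity, colour or texture feature at pixel p, \theta_i is the statistical model of region R_i estimated from its seed, and B(s) \in [0, 1] is the boundary strength along the contour \partial R_i. The first term rewards uniformity inside the regions, the second rewards placing region limits on strong boundaries, and the active regions grow by reassigning contested pixels whenever the move lowers E.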
Abstract:
The proposal presented in this thesis is to provide designers of knowledge-based supervisory systems for dynamic systems with a framework that facilitates their tasks by avoiding interface problems among tools, data flow and management. The approach is intended to assist both control engineers and process engineers in their tasks. The use of AI technologies to diagnose and operate control loops and, of course, to assist process supervisory tasks such as fault detection and diagnosis is within the scope of this work. Special effort has been put into the integration of tools for assisting expert supervisory system design. To this end, the experience of Computer Aided Control Systems Design (CACSD) frameworks has been analysed and used to design a Computer Aided Supervisory Systems Design (CASSD) framework, for which some basic facilities are required to be available.
Abstract:
This dissertation investigates discrimination between pure tones. Three questions were addressed: whether listeners can integrate frequency and duration information in the discrimination of pure tones; how the discriminability of duration-frequency compounds relates to the discriminability of changes in the individual dimensions; and how the integration of these two dimensions is affected by the parameters of the stimuli in which the changes in duration and frequency are introduced.
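A textbook benchmark for the second question comes from signal detection theory: if the duration change and the frequency change are detected through statistically independent observations, an optimal integrator predicts that sensitivity to the compound is the Euclidean sum of the single-dimension sensitivities (a standard result, not a finding of this dissertation):

    d'_{compound} = \sqrt{(d'_{frequency})^2 + (d'_{duration})^2}

Compound discriminability at or above this bound is usually read as evidence of integration, whereas performance no better than the more discriminable single dimension suggests the two cues are not being combined.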
Abstract:
The Konstanz Information Miner is a modular environment that enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform in which new algorithms and data manipulation or visualization methods are easily integrated as new modules or nodes. In this paper we describe some design aspects of the underlying architecture and briefly sketch how new nodes can be incorporated.
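To make the pipeline idea concrete, the following is a minimal sketch in Python of a node-based pipeline in the spirit described above; the Node and Pipeline classes are illustrative inventions, not the Konstanz Information Miner's actual (Java-based) API.

    # Minimal node-based data pipeline (illustrative; not KNIME's API).
    from typing import Any, Callable, List

    class Node:
        """One modular step: an algorithm, data manipulation or visualization."""
        def __init__(self, name: str, func: Callable[[Any], Any]):
            self.name = name
            self.func = func

        def execute(self, data: Any) -> Any:
            return self.func(data)

    class Pipeline:
        """Runs nodes in order, feeding each node's output to the next."""
        def __init__(self, nodes: List[Node]):
            self.nodes = nodes

        def run(self, data: Any) -> Any:
            for node in self.nodes:
                data = node.execute(data)
            return data

    # Assemble and execute a two-node pipeline.
    pipeline = Pipeline([
        Node("filter_positive", lambda rows: [r for r in rows if r > 0]),
        Node("scale_by_ten", lambda rows: [10 * r for r in rows]),
    ])
    print(pipeline.run([-1, 2, 3]))  # prints [20, 30]

Adding a new node type reduces to supplying one more function, which is the kind of modularity the paper emphasises.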
Abstract:
Construction materials and equipment are essential building blocks of every construction project and may account for 50-60 per cent of the total cost of construction. The rate of their utilization, on the other hand, is the element that most directly relates to project progress. A growing concern in the industry that inadequate efficiency hinders its success could thus be addressed by turning construction into a logistic process. Although mostly limited, recent attempts and studies show that Radio Frequency IDentification (RFID) applications have significant potential in construction. The aim of this research, however, is to show that the technology should not only be used for automation and tracking to overcome supply chain complexity but also as a tool to generate, record and exchange process-related knowledge among the supply chain stakeholders. This would enable all involved parties to identify and understand the consequences of any forthcoming difficulties and react accordingly before they cause major disruptions in the construction process. To achieve this aim the study draws on a number of methods. First, it develops a generic understanding of how RFID technology has been used in logistic processes in industrial supply chain management. Second, it investigates recent applications of RFID as an information and communication technology support facility in construction logistics for the management of the construction supply chain. Based on these, the study develops an improved concept of a construction logistics architecture that explicitly relies on integrating RFID with the Global Positioning System (GPS). The resulting conceptual architecture shows that the categorisation provided by RFID, and the traceability that results from RFID/GPS integration, could be used as tools to identify, record and share potential problems and thus vastly improve knowledge management processes within the entire supply chain. The findings thus clearly show a need for future research in this area.
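As a sketch of the kind of record such an RFID/GPS architecture might generate and exchange, the following Python dataclass joins an RFID read with a GPS fix; every field name here is hypothetical, chosen only to illustrate the combination of categorisation and traceability the abstract describes.

    # Illustrative traceability record joining an RFID read with a GPS fix.
    # All field names are hypothetical, not taken from the study.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class TraceEvent:
        tag_id: str        # identifier read from the RFID tag
        category: str      # classification carried on the tag
        latitude: float    # GPS fix at the time of the read
        longitude: float
        read_at: datetime  # timestamp of the RFID read
        note: str = ""     # process knowledge to share with stakeholders

    event = TraceEvent("E200-3412", "precast panel", 51.5074, -0.1278,
                       datetime(2011, 3, 4, 9, 30), "delivery held at site gate")

A shared log of such events is what would let the involved parties spot forthcoming difficulties before they disrupt the construction process.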
Abstract:
Do we view the world differently if it is described to us in figurative rather than literal terms? An answer to this question would reveal something about both the conceptual representation of figurative language and the scope of top-down influences on scene perception. Previous work has shown that participants will look longer at a path region of a picture when it is described with a type of figurative language called fictive motion (The road goes through the desert) rather than without it (The road is in the desert). The current experiment provided evidence that such fictive motion descriptions affect eye movements by evoking mental representations of motion. If participants heard contextual information that would hinder actual motion, it influenced how they viewed a picture when it was described with fictive motion. Inspection times and eye movements scanning along the path increased during fictive motion descriptions when the terrain was first described as difficult (The desert is hilly) as compared to easy (The desert is flat); there were no such effects for descriptions without fictive motion. It is argued that fictive motion evokes a mental simulation of motion that is immediately integrated with visual processing, and hence figurative language can have a distinct effect on perception.
Abstract:
We investigated whether it is possible to control the temporal window of attention used to rapidly integrate visual information. To study the underlying neural mechanisms, we recorded ERPs in an attentional blink task, known to elicit Lag-1 sparing. Lag-1 sparing fosters joint integration of the two targets, evidenced by increased order errors. Short versus long integration windows were induced by showing participants mostly fast or slow stimuli. Participants expecting slow speed used a longer integration window, increasing joint integration. Difference waves showed an early negative modulation (200 ms post-T2) and a late positive modulation (390 ms) in the fast group, but not in the slow group. The modulations suggest the creation of a separate event for T2, which is not needed in the slow group, where targets were often jointly integrated. This suggests that attention can be guided by global expectations of presentation speed within tens of milliseconds.
Abstract:
E-Learning is an emerging tool that uses advanced technology to provide training and development in higher education and within industry. Its rapid growth has been facilitated by the Internet and the massive opportunities in global education. The aim of this study is to consider how effective and efficient e-learning is when integrated with traditional learning in a blended learning environment. The study provides a comparison between a purist e-learning environment and a blended learning environment. The paper also provides directions for the blended learning environment which can be used by the three main stakeholders (students, tutors and the institution) to make strategic decisions about learning and teaching initiatives. The paper concludes that blended learning approaches offer the most flexible and scalable route to e-learning.
Abstract:
In the decade since OceanObs '99, great advances have been made in the field of ocean data dissemination. The use of Internet technologies has transformed the landscape: users can now find, evaluate and access data rapidly and securely using only a web browser. This paper describes the current state of the art in dissemination methods for ocean data, focussing particularly on ocean observations from in situ and remote sensing platforms. We discuss current efforts being made to improve the consistency of delivered data and to increase the potential for automated integration of diverse datasets. An important recent development is the adoption of open standards from the Geographic Information Systems community; we discuss the current impact of these new technologies and their future potential. We conclude that new approaches will indeed be necessary to exchange data more effectively and forge links between communities, but these approaches must be evaluated critically through practical tests, and existing ocean data exchange technologies must be used to their best advantage. Investment in key technology components, cross-community pilot projects and the enhancement of end-user software tools will be required in order to assess and demonstrate the value of any new technology.
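As a concrete instance of those open standards, an OGC Web Map Service is queried with simple key-value HTTP requests; a capabilities request looks like the following (the host name is hypothetical):

    http://example.org/wms?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetCapabilities

Because the request and response formats are standardised, a client can discover and combine layers from any compliant ocean-data server without server-specific code, which is the kind of automated integration of diverse datasets the paper refers to.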
Abstract:
Although the use of climate scenarios for impact assessment has grown steadily since the 1990s, uptake of such information for adaptation is lagging by nearly a decade in terms of scientific output. Nonetheless, integration of climate risk information in development planning is now a priority for donor agencies because of the need to prepare for climate change impacts across different sectors and countries. This urgency stems from concerns that progress made against Millennium Development Goals (MDGs) could be threatened by anthropogenic climate change beyond 2015. Up to this time the human signal, though detectable and growing, will be a relatively small component of climate variability and change. This implies the need for a twin-track approach: on the one hand, vulnerability assessments of social and economic strategies for coping with present climate extremes and variability, and, on the other hand, development of climate forecast tools and scenarios to evaluate sector-specific, incremental changes in risk over the next few decades. This review starts by describing the climate outlook for the next couple of decades and the implications for adaptation assessments. We then review ways in which climate risk information is already being used in adaptation assessments and evaluate the strengths and weaknesses of three groups of techniques. Next we identify knowledge gaps and opportunities for improving the production and uptake of climate risk information for the 2020s. We assert that climate change scenarios can meet some, but not all, of the needs of adaptation planning. Even then, the choice of scenario technique must be matched to the intended application, taking into account local constraints of time, resources, human capacity and supporting infrastructure. We also show that much greater attention should be given to improving and critiquing models used for climate impact assessment, as standard practice. Finally, we highlight the over-arching need for the scientific community to provide more information and guidance on adapting to the risks of climate variability and change over nearer time horizons (i.e. the 2020s). Although the focus of the review is on information provision and uptake in developing regions, it is clear that many developed countries are facing the same challenges.
Abstract:
This paper presents a new approach to modelling flash floods in dryland catchments by integrating remote sensing and digital elevation model (DEM) data in a geographical information system (GIS). The spectral reflectance of channels affected by recent flash floods exhibits a marked increase, due to the deposition of fine sediments in these channels as the flood recedes. This allows the parts of a catchment that have been affected by a recent flood event to be discriminated from unaffected parts, using a time series of Landsat images. Using images of the Wadi Hudain catchment in southern Egypt, the hillslope areas contributing flow were inferred for different flood events. The SRTM3 DEM was used to derive flow direction, flow length, active channel cross-sectional areas and slope. The Manning Equation was used to estimate the channel flow velocities, and hence the time-area zones of the catchment. A channel reach that was active during a 1985 runoff event, and which receives no tributary flow, was used to estimate a transmission loss rate of 7.5 mm h−1, given the maximum peak discharge estimate. Runoff patterns resulting from different flood events are quite variable; however, the southern part of the catchment appears to have experienced more floods during the period of study (1984-2000), perhaps because the bedrock hillslopes in this area are more effective at runoff production than other parts of the catchment, which are underlain by unconsolidated Quaternary sands and gravels. Due to high transmission loss, runoff generated within the upper reaches is rarely delivered to the alluvial fan and Shalateen city situated at the catchment outlet. The synthetic GIS-based time-area zones cannot, on their own, be relied upon to model the hydrographs accurately; physical parameters such as rainfall intensity, rainfall distribution and transmission loss must also be considered.
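For reference, the Manning equation used for the velocity estimates, in its standard SI form:

    v = \frac{1}{n} R^{2/3} S^{1/2}

where v is the flow velocity (m s−1), n is the Manning roughness coefficient, R is the hydraulic radius (active channel cross-sectional area divided by wetted perimeter, in m) and S is the channel slope. The travel time of each cell to the outlet, t = L / v with L the flow length, is what defines the time-area zones described above.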
Abstract:
Background: Intrusions are common symptoms of both posttraumatic stress disorder (PTSD) and schizophrenia. Steel et al. (2005) suggest that an information processing style characterised by weak trait contextual integration renders psychotic individuals vulnerable to intrusive experiences. This ‘contextual integration hypothesis’ was tested in individuals reporting anomalous experiences in the absence of a need-for-care. Methods: Twenty-six low schizotypes and twenty-three individuals reporting anomalous experiences were shown a traumatic film with and without a concurrent visuo-spatial task. Participants rated post-traumatic intrusions for frequency and form, and completed self-report measures of information processing style. It was predicted that, due to their weaker trait contextual integration, the anomalous experiences group would (1) exhibit more intrusions following exposure to the trauma-film; (2) display intrusions characterised by more PTSD qualities; and (3) show a greater reduction of intrusions with the concurrent visuo-spatial task. Results: As predicted, the anomalous experiences group reported a lower level of trait contextual integration and more intrusions than the low schizotypes, both immediately after watching the film, and during the following seven days. Their post-traumatic intrusive memories were more PTSD-like (more intrusive, vivid and associated with emotion). The visuo-spatial task had no effect on number of intrusions in either group. Conclusions: These findings provide some support for the proposal that weak trait contextual integration underlies the development of intrusions within both PTSD and psychosis.
Abstract:
Platelets in the circulation are triggered by vascular damage to activate, aggregate and form a thrombus that prevents excessive blood loss. Platelet activation is stringently regulated by intracellular signalling cascades, which when activated inappropriately lead to myocardial infarction and stroke. Strategies to address platelet dysfunction have included proteomics approaches, which have led to the discovery of a number of novel regulatory proteins of potential therapeutic value. Global analysis of platelet proteomes may enhance the outcome of these studies by arranging this information in a contextual manner that recapitulates established signalling complexes and predicts novel regulatory processes. Platelet signalling networks have already begun to be exploited through interrogation of protein datasets using in silico methodologies that locate functionally feasible protein clusters for subsequent biochemical validation. Characterization of these biological systems through analysis of the spatial and temporal organization of component proteins is developing alongside advances in the proteomics field. This focused review highlights advances in platelet proteomics data mining approaches that complement the emerging systems biology field. We have also highlighted nucleated cell types as key examples that can inform platelet research. Therapeutic translation of these modern approaches to understanding platelet regulatory mechanisms will enable the development of novel anti-thrombotic strategies.
Abstract:
Information systems integration becomes critical in enhancing organisational competitiveness through effective use of the information resources provided by the whole host of information systems. Information systems integration is by nature a process of bringing about the capability of communication and information exchange between systems, while interoperability, often the result of systems integration, is such a capability. However, there is currently a lack of a theoretical foundation for representing and measuring interoperability in organisations. Organisational semiotics provides such a theoretical foundation for systems interoperability. A notion of ‘semiotic interoperability’ is proposed in this paper as a paradigm guiding systems integration and measuring the degree of interoperability, covering aspects from physical properties and the transmission structure of signs, through the communication of meaning and intention, to the social consequences of information.