974 results for Event Detection


Relevance: 60.00%

Abstract:

Even simple hybrid systems like the classic bouncing ball can exhibit Zeno behaviors. The existence of this type of behavior has so far forced simulators to either ignore some events or risk looping indefinitely. This in turn forces modelers to either insert ad hoc restrictions to circumvent Zeno behavior or to abandon hybrid modeling. To address this problem, we take a fresh look at event detection and localization. A key insight that emerges from this investigation is that an enclosure for a given time interval can be valid independently of the occurrence of a given event. Such an event can then even occur an unbounded number of times, thus making it possible to handle certain types of Zeno behavior.
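The Zeno behavior of the bouncing ball mentioned in the abstract can be made concrete with a short sketch (an illustration of the phenomenon, not of the paper's enclosure-based method; all parameter values below are assumptions for the example): the impact times form a geometric series and accumulate at a finite "Zeno point", which is why step-by-step event localization can loop indefinitely.

```python
import math

def bounce_times(h0=1.0, restitution=0.5, g=9.81, n_events=20):
    """Return the first n_events impact times of a ball dropped from height h0."""
    t = math.sqrt(2 * h0 / g)                 # time of the first impact
    times = [t]
    v = math.sqrt(2 * g * h0) * restitution   # rebound speed after first impact
    for _ in range(n_events - 1):
        t += 2 * v / g                        # flight time between impacts
        times.append(t)
        v *= restitution                      # each bounce loses energy
    return times

times = bounce_times()
# Consecutive gaps shrink geometrically (ratio = restitution), so the
# impact times converge to a finite Zeno point instead of growing unboundedly.
gaps = [b - a for a, b in zip(times, times[1:])]
```

A simulator that must localize every impact separately performs one step per gap and therefore never passes the accumulation point; an interval enclosure that remains valid regardless of how often the event fires sidesteps this.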

Relevance: 60.00%

Abstract:

The Semantic Web has come a long way since its inception in 2001, especially in terms of technical development and research progress. However, adoption by non-technical practitioners is still an ongoing process, and in some areas this process is only now starting. Emergency response is an area where the reliability and timeliness of information and technologies are of the essence. It is therefore natural that widespread adoption in this area has not been seen until now, when Semantic Web technologies are mature enough to support the high requirements of the application area. Nevertheless, to leverage the full potential of Semantic Web research results for this application area, there is a need for an arena where practitioners and researchers can meet to exchange ideas and results. Our intention is for this workshop, and hopefully coming workshops in the same series, to be such an arena for discussion. The Extended Semantic Web Conference (ESWC, formerly the European Semantic Web Conference) is one of the major research conferences in the Semantic Web field, which makes it a suitable venue for discussing the application of Semantic Web technology to our specific area. Hence, we chose to arrange our first SMILE workshop at ESWC 2013. However, this workshop does not focus solely on semantic technologies for emergency response, but rather on Semantic Web technologies in combination with technologies and principles for what is sometimes called the "social web". Social media has already been used successfully in many cases as a tool for supporting emergency response. The aim of this workshop is therefore to take this to the next level and answer questions like: "How can we make sense of, and furthermore make use of, all the data that is produced by different kinds of social media platforms in an emergency situation?"
For the first edition of this workshop the chairs collected the following main topics of interest:
• Semantic Annotation for understanding the content and context of social media streams.
• Integration of Social Media with Linked Data.
• Interactive Interfaces and visual analytics methodologies for managing multiple large-scale, dynamic, evolving datasets.
• Stream reasoning and event detection.
• Social Data Mining.
• Collaborative tools and services for Citizens, Organisations, Communities.
• Privacy, ethics, trustworthiness and legal issues in the Social Semantic Web.
• Use case analysis, with specific interest in use cases that involve the application of Social Media and Linked Data methodologies in real-life scenarios.
All of these, applied in the context of:
• Crisis and Disaster Management
• Emergency Response
• Security and Citizen Journalism
The workshop received 6 high-quality paper submissions, and after a thorough review process, thanks to our program committee, four of these papers were accepted for the workshop (67% acceptance rate). These four papers can be found later in this proceedings volume. Three of the four papers discuss the integration and analysis of social media data using Semantic Web technologies, e.g. for detecting complex events in social media streams, for visualizing and analysing sentiments with respect to certain topics in social media, or for detecting small-scale incidents entirely through the use of social media information. The fourth paper presents an architecture for using Semantic Web technologies in resource management during a disaster. Additionally, the workshop featured an invited keynote speech by Dr. Tomi Kauppinen from Aalto University. Dr. Kauppinen shared experiences from his work on applying Semantic Web technologies to application fields such as geoinformatics and scientific research, i.e. so-called Linked Science, as well as recent ideas and applications in the emergency response field. His input was also highly valuable for the roadmapping discussion, which was held at the end of the workshop. A separate summary of the roadmapping session can be found at the end of these proceedings. Finally, we would like to thank our invited speaker Dr. Tomi Kauppinen, all our program committee members, and the workshop chair of ESWC 2013, Johanna Völker (University of Mannheim), for helping us make this first SMILE workshop a highly interesting and successful event!

Relevance: 60.00%

Abstract:

Even simple hybrid automata like the classic bouncing ball can exhibit Zeno behavior. The existence of this type of behavior has so far forced a large class of simulators to either ignore some events or risk looping indefinitely. This in turn forces modelers either to insert ad hoc restrictions to circumvent Zeno behavior or to abandon hybrid automata. To address this problem, we take a fresh look at event detection and localization. A key insight that emerges from this investigation is that an enclosure for a given time interval can be valid independently of the occurrence of a given event. Such an event can then even occur an unbounded number of times. This insight makes it possible to handle some types of Zeno behavior. If the post-Zeno state is defined explicitly in the given model of the hybrid automaton, the computed enclosure covers the trajectory that restarts its evolution from the Zeno point.

Relevance: 60.00%

Abstract:

The main challenges of multimedia data retrieval lie in the effective mapping between low-level features and high-level concepts, and in individual users' subjective perceptions of multimedia content. The objective of this dissertation is to develop an integrated multimedia indexing and retrieval framework that bridges the gap between semantic concepts and low-level features. To achieve this goal, a set of core techniques has been developed, including image segmentation, content-based image retrieval, object tracking, video indexing, and video event detection. These core techniques are integrated in a systematic way to enable semantic search for images and videos, and can be tailored to solve problems in other multimedia-related domains. In image retrieval, two new methods of bridging the semantic gap are proposed: (1) for general content-based image retrieval, a stochastic mechanism is utilized to enable the long-term learning of high-level concepts from a set of training data, such as user access frequencies and access patterns of images; (2) in addition to whole-image retrieval, a novel multiple instance learning framework is proposed for object-based image retrieval, by which a user is allowed to more effectively search for images that contain multiple objects of interest. An enhanced image segmentation algorithm is developed to extract the object information from images. This segmentation algorithm is further used in video indexing and retrieval, where a robust video shot/scene segmentation method is developed based on low-level visual feature comparison, object tracking, and audio analysis. Based on shot boundaries, a novel data mining framework is further proposed to detect events in soccer videos, while fully utilizing the multi-modality features and object information obtained through video shot/scene detection.
Another contribution of this dissertation is the potential of the above techniques to be tailored and applied to other multimedia applications. This is demonstrated by their utilization in traffic video surveillance applications. The enhanced image segmentation algorithm, coupled with an adaptive background learning algorithm, improves the performance of vehicle identification. A sophisticated object tracking algorithm is proposed to track individual vehicles, while the spatial and temporal relationships of vehicle objects are modeled by an abstract semantic model.

Relevance: 60.00%

Abstract:

This work presents a low-cost architecture for the development of synchronized phasor measurement units (PMUs). The device is intended to be connected to the low-voltage grid, which allows the monitoring of transmission and distribution networks. The project comprises a complete PMU, with an instrumentation module for use in the low-voltage network, a GPS module to provide the sync signal and time stamp for the measurements, a processing unit with the acquisition system, phasor estimation, and data formatting according to the standard, and finally a communication module for data transmission. For the development and performance evaluation of this PMU, a set of applications with specific features was developed in the LabVIEW environment to analyze the behavior of the measurements and identify the PMU's sources of error, as well as to apply all the tests proposed by the standard. The first application, useful for the development of the instrumentation, consists of a function generator integrated with an oscilloscope, which allows the synchronous generation and acquisition of signals, in addition to the handling of samples. The second and main application is the test platform, capable of generating all tests specified by the synchronized phasor measurement standard IEEE C37.118.1 and allowing data to be stored or the measurements to be analyzed in real time. Finally, a third application was developed to evaluate the test results and generate calibration curves to adjust the PMU. The results include all the tests proposed by the synchrophasor standard and an additional test that evaluates the impact of noise. Moreover, through two prototypes connected to the electrical installations of consumers in the same distribution circuit, monitoring records were obtained that allowed the identification of consumer loads and power quality analysis, as well as event detection at the distribution and transmission levels.
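The phasor estimation step mentioned above can be illustrated with a minimal single-bin DFT over one nominal cycle, the textbook basis of IEEE C37.118.1-style measurement (this is a generic sketch, not the project's LabVIEW implementation; the signal parameters and sample rate below are assumptions for the example).

```python
import cmath
import math

def estimate_phasor(samples, samples_per_cycle):
    """Single-bin DFT over one nominal cycle -> complex phasor (RMS-scaled)."""
    n = samples_per_cycle
    acc = sum(samples[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n))
    return (math.sqrt(2) / n) * acc   # scale so |phasor| equals the RMS magnitude

# Assumed test signal: 60 Hz, sampled at 960 Hz (16 samples/cycle),
# 100 V peak amplitude, +30 degree phase.
fs, f0, n = 960, 60, 16
x = [100 * math.cos(2 * math.pi * f0 * k / fs + math.radians(30)) for k in range(n)]
ph = estimate_phasor(x, n)
# |ph| is close to 100/sqrt(2) (the RMS value) and its angle close to +30 degrees.
```

A real PMU additionally aligns the window to the GPS time reference and compensates for off-nominal frequency; this sketch only shows the core magnitude/angle extraction.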

Relevance: 60.00%

Abstract:

Thanks to the advanced technologies and social networks that allow data to be widely shared across the Internet, there is an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data in various areas. Toward such demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results. Then, the TMCA algorithm is proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets. In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis.
In this framework, an affinity propagation-based summarization method is also proposed to transform the unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
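The sampling-based ensemble idea for imbalanced data mentioned in the abstract can be sketched generically (this is a standard undersampling-ensemble pattern, not the dissertation's specific mechanism; the data and helper names are hypothetical): each base model sees all rare-event positives plus a fresh random sample of negatives, so no single model is swamped by the majority class.

```python
import random

def undersample_ensemble(pos, neg, n_models=5, seed=0):
    """Build n_models balanced training sets: all positives + sampled negatives."""
    rng = random.Random(seed)
    return [pos + rng.sample(neg, len(pos)) for _ in range(n_models)]

# Toy imbalanced data: 3 rare "event" examples vs. 30 "background" examples.
pos = [("event", i) for i in range(3)]
neg = [("background", i) for i in range(30)]
subsets = undersample_ensemble(pos, neg)
# Each subset is balanced (3 positives, 3 negatives); in practice one
# classifier is trained per subset and their predictions are averaged.
```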

Relevance: 40.00%

Abstract:

In the present work, the development of a genosensor for the event-specific detection of MON810 transgenic maize is proposed. Taking advantage of nanostructuration, a cost-effective three-dimensional electrode was fabricated, and a ternary monolayer containing a dithiol, a monothiol, and the thiolated capture probe was optimized to minimize nonspecific signals. A sandwich assay format was selected as a way of precluding the inefficient hybridization associated with stable secondary target structures. A comparison between the analytical performance of the Au nanostructured electrodes and commercially available screen-printed electrodes highlighted the superior performance of the nanostructured ones. Finally, the genosensor was effectively applied to detect the transgenic sequence in real samples, showing its potential for future quantitative analysis.

Relevance: 40.00%

Abstract:

Terrestrial laser scanning (TLS) is one of the most promising surveying techniques for rockslope characterization and monitoring. Landslide and rockfall movements can be detected by means of comparison of sequential scans. One of the most pressing challenges of natural hazards is combined temporal and spatial prediction of rockfall. An outdoor experiment was performed to ascertain whether the TLS instrumental error is small enough to enable detection of precursory displacements of millimetric magnitude. This consists of a known displacement of three objects relative to a stable surface. Results show that millimetric changes cannot be detected by the analysis of the unprocessed datasets. Displacement measurements are improved considerably by applying Nearest Neighbour (NN) averaging, which reduces the error (1σ) up to a factor of 6. This technique was applied to displacements prior to the April 2007 rockfall event at Castellfollit de la Roca, Spain. The maximum precursory displacement measured was 45 mm, approximately 2.5 times the standard deviation of the model comparison, hampering the distinction between actual displacement and instrumental error using conventional methodologies. Encouragingly, the precursory displacement was clearly detected by applying the NN averaging method. These results show that millimetric displacements prior to failure can be detected using TLS.

Relevance: 40.00%

Abstract:

In this work we present the results of experimental work on the development of lexical class-based lexica by automatic means. Our purpose is to assess the use of linguistic lexical-class information as a feature selection methodology for classifiers in quick lexical development. The results show that the approach can significantly reduce the human effort required in the development of language resources.

Relevance: 40.00%

Abstract:

Terrestrial laser scanning (TLS) is one of the most promising surveying techniques for rockslope characterization and monitoring. Landslide and rockfall movements can be detected by means of comparison of sequential scans. One of the most pressing challenges of natural hazards is combined temporal and spatial prediction of rockfall. An outdoor experiment was performed to ascertain whether the TLS instrumental error is small enough to enable detection of precursory displacements of millimetric magnitude. This consists of a known displacement of three objects relative to a stable surface. Results show that millimetric changes cannot be detected by the analysis of the unprocessed datasets. Displacement measurements are improved considerably by applying Nearest Neighbour (NN) averaging, which reduces the error (1σ) up to a factor of 6. This technique was applied to displacements prior to the April 2007 rockfall event at Castellfollit de la Roca, Spain. The maximum precursory displacement measured was 45 mm, approximately 2.5 times the standard deviation of the model comparison, hampering the distinction between actual displacement and instrumental error using conventional methodologies. Encouragingly, the precursory displacement was clearly detected by applying the NN averaging method. These results show that millimetric displacements prior to failure can be detected using TLS.
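The noise-suppression effect of neighbour averaging described above can be sketched in a few lines (a toy illustration under assumptions, not the paper's processing chain: a regular grid stands in for the point cloud, and the window size, noise level, and displacement are invented for the example). Averaging each scan-to-scan difference with its neighbours suppresses uncorrelated instrumental noise roughly by the square root of the window size.

```python
import random

random.seed(1)
true_disp = 0.005   # assumed 5 mm real displacement
sigma = 0.01        # assumed 1 cm instrumental noise (1-sigma)
# 20 x 20 grid of per-point scan differences: common signal plus random noise.
grid = [[true_disp + random.gauss(0, sigma) for _ in range(20)] for _ in range(20)]

def nn_average(grid, radius=2):
    """Replace each cell by the mean of a square neighbourhood around it."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            vals = [grid[x][y]
                    for x in range(max(0, i - radius), min(rows, i + radius + 1))
                    for y in range(max(0, j - radius), min(cols, j + radius + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

smoothed = nn_average(grid)
# The averaged field scatters far less around the true displacement,
# letting a millimetric signal emerge from centimetric noise.
```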

Relevance: 40.00%

Abstract:

Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and the application of practical software tools and efficient algorithms from the field of computer science, aimed at enabling atmospheric scientists to analyse and to gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight. Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining at runtime data from different sources, a variety of different data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of a sequence of images for the visualization of cyclone simulations.
The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and the results of the segmentation of upper-tropospheric jet streams and cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and handle complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
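The kind of threshold-plus-connected-components segmentation described above can be sketched in miniature (a hedged 2D illustration of the general technique, not a reproduction of the Insight algorithm; the field values and threshold are invented): cells exceeding a threshold are grouped into features by flood fill, and per-timestep feature counts would then support tracking and genesis/lysis detection.

```python
def segment(field, threshold):
    """Label 4-connected regions of cells whose value exceeds threshold."""
    rows, cols = len(field), len(field[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for i in range(rows):
        for j in range(cols):
            if field[i][j] > threshold and labels[i][j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:                      # iterative flood fill
                    x, y = stack.pop()
                    if (0 <= x < rows and 0 <= y < cols
                            and field[x][y] > threshold and labels[x][y] == 0):
                        labels[x][y] = current
                        stack += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return labels, current

# Toy scalar field with two separate above-threshold regions.
wind = [[0, 0, 5, 5],
        [0, 0, 5, 0],
        [7, 0, 0, 0],
        [7, 7, 0, 0]]
labels, n_features = segment(wind, threshold=3)
```

Extending the neighbourhood to 3D (six neighbours) and comparing labels across timesteps gives the merging/splitting events the thesis localizes.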

Relevance: 40.00%

Abstract:

We have developed a technique, methylation-specific PCR in situ hybridization (MSP-ISH), which allows the methylation status of specific DNA sequences to be visualized in individual cells. We use MSP-ISH to monitor the timing and consequences of aberrant hypermethylation of the p16 tumor suppressor gene during the progression of cancers of the lung and cervix. Hypermethylation of p16 was localized only to the neoplastic cells in both in situ lesions and invasive cancers, and was associated with loss of p16 protein expression. MSP-ISH allowed us to dissect the surprising finding that p16 hypermethylation occurs in cervical carcinoma. This tumor is associated with infection by the oncogenic human papillomavirus, which expresses a protein, E7, that inactivates the retinoblastoma (Rb) protein. Thus, simultaneous Rb and p16 inactivation would not be needed to abrogate the critical cyclin D–Rb pathway. MSP-ISH reveals that p16 hypermethylation occurs heterogeneously within early cervical tumor cell populations that are separate from those expressing viral E7 transcripts. In advanced cervical cancers, the majority of cells have a hypermethylated p16, lack p16 protein, but no longer express E7. These data suggest that p16 inactivation is selected as the most effective mechanism of blocking the cyclin D–Rb pathway during the evolution of an invasive cancer from precursor lesions. These studies demonstrate that MSP-ISH is a powerful approach for studying the dynamics of aberrant methylation of critical tumor suppressor genes during tumor evolution.

Relevance: 40.00%

Abstract:

There is considerable interest, from a forensic point of view, in finding methods that help determine when a person is lying and when they are telling the truth. Currently, one line of research leans toward the use of event-related potentials. This paper reviews the articles that study these procedures, examining their properties, reliability, validity, and limitations. The results indicate hit rates in the discrimination of guilty subjects ranging from 7 to 100 percent, and from 31 to 100 percent for innocent subjects. The large variability and the possibility of "faking" the responses call into question the inaccurate claims made in some media circles about the qualities and purposes of this test. We conclude that the possibility of using this test for forensic purposes needs further study.

Relevance: 30.00%

Abstract:

Context. To study the evolution of Li in the Galaxy it is necessary to observe dwarf or subgiant stars. These are the only long-lived stars whose present-day atmospheric chemical composition reflects their natal Li abundances according to standard models of stellar evolution. Although Li has been extensively studied in the Galactic disk and halo, to date there has only been one uncertain detection of Li in an unevolved bulge star. Aims. Our aim with this study is to provide the first clear detection of Li in the Galactic bulge, based on an analysis of a dwarf star that has largely retained its initial Li abundance. Methods. We performed a detailed elemental abundance analysis of the bulge dwarf star MOA-2010-BLG-285S using a high-resolution and high signal-to-noise spectrum obtained with the UVES spectrograph at the VLT when the object was optically magnified during a gravitational microlensing event (visual magnification A ≈ 550 during observation). The Li abundance was determined through synthetic line profile fitting of the 7Li resonance doublet line at 670.8 nm. The results have been corrected for departures from LTE. Results. MOA-2010-BLG-285S is, at [Fe/H] = -1.23, the most metal-poor dwarf star detected so far in the Galactic bulge. Its old age (12.5 Gyr) and enhanced [α/Fe] ratios agree well with stars in the thick disk at similar metallicities. This star represents the first unambiguous detection of Li in a metal-poor dwarf star in the Galactic bulge. We find an NLTE-corrected Li abundance of log ε(Li) = 2.16, which is consistent with values derived for Galactic disk and halo dwarf stars at similar metallicities and temperatures. Conclusions. Our results show that there are no signs of Li enrichment or production in the Galactic bulge during its earliest phases. Observations of Li in other galaxies (ω Cen) and other components of the Galaxy suggest further that the Spite plateau is universal.