965 results for Underwater bio-acoustic event detection
Abstract:
IEEE 802.15.4 is the most widely used protocol for Wireless Sensor Networks (WSNs) and serves as a baseline for several higher-layer protocols such as ZigBee, 6LoWPAN and WirelessHART. When operating in beacon-enabled mode, its MAC (Medium Access Control) supports both contention-free access (the CFP, based on the reservation of guaranteed time slots, GTS) and contention-based access (the CAP, governed by CSMA/CA), and thus enables differentiation between real-time and best-effort traffic. However, some WSN applications and higher-layer protocols may strongly benefit from the possibility of supporting more traffic classes. This is the case, for instance, for dense WSNs used in time-sensitive industrial applications. In this context, we propose to differentiate traffic classes within the CAP, enabling lower transmission delays and a higher success probability for time-critical messages, such as those for event detection, GTS reservation and network management. Building upon a previously proposed methodology (TRADIF), in this paper we outline its implementation and experimental validation over a real-time operating system. Importantly, TRADIF is fully backward compatible with the IEEE 802.15.4 standard, creating different traffic classes simply by tuning some MAC parameters.
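As a rough illustration of the kind of CAP differentiation described above (a minimal sketch, not the authors' actual TRADIF implementation), the code below gives two hypothetical traffic classes different slotted CSMA/CA parameters; macMinBE, macMaxBE and macMaxCSMABackoffs are standard IEEE 802.15.4 MAC attributes, but the per-class values chosen here are illustrative assumptions.

import random

# Standard IEEE 802.15.4 slotted CSMA/CA attributes; the per-class values
# below are illustrative assumptions, not the parameters used by TRADIF.
TRAFFIC_CLASSES = {
    "time_critical": {"macMinBE": 0, "macMaxBE": 3, "macMaxCSMABackoffs": 5},
    "best_effort":   {"macMinBE": 3, "macMaxBE": 5, "macMaxCSMABackoffs": 4},
}

def csma_backoff_periods(traffic_class, rng=random):
    """Return the random backoff (in backoff periods) drawn before each
    clear-channel assessment attempt for a frame of the given class."""
    p = TRAFFIC_CLASSES[traffic_class]
    be = p["macMinBE"]
    delays = []
    for _ in range(p["macMaxCSMABackoffs"] + 1):
        delays.append(rng.randint(0, 2 ** be - 1))
        be = min(be + 1, p["macMaxBE"])
    return delays

# A time-critical frame starts with BE = 0, so its first CCA can happen
# immediately, while a best-effort frame first waits up to 2^3 - 1 periods.
print(csma_backoff_periods("time_critical"))
print(csma_backoff_periods("best_effort"))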
Abstract:
SOUND OBJECTS IN TIME, SPACE AND ACTION. The term "sound object" describes an auditory experience that is associated with an acoustic event produced by a sound source. At the cortical level, sound objects are represented by temporo-spatial activity patterns within distributed neural networks. This investigation concerns temporal, spatial and action aspects, as assessed in normal subjects using electrical imaging or measurement of motor activity induced by transcranial magnetic stimulation (TMS). Hearing the same sound again has been shown to facilitate behavioral responses (repetition priming) and to modulate neural activity (repetition suppression). In natural settings the same source is often heard again and again, with variations in spectro-temporal and spatial characteristics. I have investigated how such repeats influence response times in a living vs. non-living categorization task and the associated spatio-temporal patterns of brain activity in humans. Dynamic analysis of distributed source estimations revealed differential sound object representations within the auditory cortex as a function of the temporal history of exposure to these objects. Often-heard sounds are coded by a modulation of a bilateral network. Recently heard sounds, independently of the number of previous exposures, are coded by a modulation of a left-sided network. With sound objects that carry spatial information, I have investigated how spatial aspects of the repeats influence neural representations. Dynamic analyses of distributed source estimations revealed an ultra-rapid discrimination of sound objects characterized by spatial cues. This discrimination involved two temporo-spatially distinct cortical representations, one associated with position-independent and the other with position-linked representations within the auditory ventral/"what" stream. Action-related sounds were shown to increase the excitability of motoneurons within the primary motor cortex, possibly via an input from the mirror-neuron system. The role of motor representations remains unclear. I have investigated repetition priming-induced plasticity of the motor representations of action sounds by measuring motor activity induced by TMS pulses applied to the hand motor cortex. TMS delivered to the hand area within the primary motor cortex yielded larger motor evoked potentials (MEPs) while the subject was listening to sounds associated with manual rather than non-manual actions. Repetition suppression was observed at the motoneuron level, since during repeated exposure to the same manual-action sound the MEPs were smaller. I discuss these results in terms of a specialized neural network involved in sound processing that is characterized by repetition-induced plasticity. Thus, the neural networks that underlie sound object representations are characterized by modulations that keep track of the temporal and spatial history of the sound and, in the case of action-related sounds, also of the way in which the sound is produced.
Abstract:
The term "sound object" describes an auditory experience that is associated with an acoustic event produced by a sound source. In natural settings, a sound produced by a living being or an object provides information about the identity and the location of the sound source. Sound's identity is orocessed alono the ventral "What" pathway which consists of regions within the superior and middle temporal cortices as well as the inferior frontal gyrus. This work concerns the creation of individual auditory object representations in narrow semantic categories and their plasticity using electrical imaging. Discrimination of sounds from broad category has been shown to occur along a temporal hierarchy and in different brain regions along the ventral "What" pathway. However, sounds belonging to the same semantic category, such as faces or voices, were shown to be discriminated in specific brain areas and are thought to represent a special class of stimuli. I have investigated how cortical representations of a narrow category, here birdsongs, is modulated by training novices to recognized songs of individual bird species. Dynamic analysis of distributed source estimations revealed differential sound object representations within the auditory ventral "What" pathway as a function of the level of expertise newly acquired. Correct recognition of trained items induces a sharpening within a left-lateralized semantic network starting around 200ms, whereas untrained items' processing occurs later in lower-level and memory-related regions. With another category of sounds belonging to the same category, here heartbeats, I investigated the cortical representations of correct and incorrect recognition of sounds. Source estimations revealed differential representations partially overlapping with regions involved in the semantic network that is activated when participants became experts in the task. Incorrect recognition also induces a higher activation when compared to correct recognition in regions processing lower-level features. The discrimination of heartbeat sounds is a difficult task and requires a continuous listening. I investigated whether the repetition effects are modulated by participants' behavioral performance. Dynamic source estimations revealed repetition suppression in areas located outside of the semantic network. Therefore, individual environmental sounds become meaningful with training. Their representations mainly involve a left-lateralized network of brain regions that are tuned with expertise, as well as other brain areas, not related to semantic processing, and occurring in early stages of semantic processing. -- Le terme objet sonore" décrit une expérience auditive associée à un événement acoustique produit par une source sonore. Dans l'environnement, un son produit par un être vivant ou un objet fournit des informations concernant l'identité et la localisation de la source sonore. Les informations concernant l'identité d'un son sont traitée le long de la voie ventrale di "Quoi". Cette voie est composée de regions situées dans le cortex temporal et frontal. L'objet de ce travail est d'étudier quels sont les neuro-mecanismes impliqués dans la représentation de nouveaux objets sonores appartenant à une meme catégorie sémantique ainsi que les phénomènes de plasticité à l'aide de l'imagerie électrique. 
Abstract:
The high sensitivity and excellent timing accuracy of Geiger-mode avalanche photodiodes make them ideal sensors as pixel detectors for particle tracking in high-energy physics experiments to be performed at future linear colliders. Nevertheless, it is well known that these sensors suffer from dark counts and afterpulsing noise, which induce false hits (indistinguishable from event detection) and increase the necessary area of the readout system. In this work, we present a comparison between APDs fabricated in a high-voltage 0.35 µm and a high-integration 0.13 µm commercially available CMOS technology, performed to determine which of them best fits the particle-collider requirements. In addition, a readout circuit that allows low-noise operation is introduced. An experimental characterization of the proposed pixel is also presented.
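As a back-of-the-envelope illustration of why dark counts matter (not taken from the paper), the sketch below estimates the probability that a Geiger-mode APD registers at least one false hit within a detection gate, assuming dark counts follow a Poisson process; the dark count rate and gate length are made-up example values.

import math

def false_hit_probability(dark_count_rate_hz, gate_s):
    """Probability of >= 1 dark count (false hit) in one detection gate,
    assuming dark counts are Poisson-distributed in time."""
    expected_counts = dark_count_rate_hz * gate_s
    return 1.0 - math.exp(-expected_counts)

# Illustrative numbers only: a 100 kHz dark count rate and a 1 microsecond gate.
print(false_hit_probability(100e3, 1e-6))  # ~0.095, i.e. ~9.5% of gates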
Abstract:
Speaker(s): Jon Hare Time: 25/06/2014 11:00-11:50 Location: B32/3077 Abstract The aggregation of items from social media streams, such as Flickr photos and Twitter tweets, into meaningful groups can help users contextualise and effectively consume the torrents of information on the social web. This task is challenging due to the scale of the streams and the inherently multimodal nature of the information being contextualised. In this talk I'll describe some of our recent work on trend and event detection in multimedia data streams. We focus on scalable streaming algorithms that can be applied to multimedia data streams from the web and the social web. The talk will cover two particular aspects of our work: mining Twitter for trending images by detecting near duplicates, and detecting social events in multimedia data with streaming clustering algorithms. I'll describe our techniques in detail, and explore open questions and areas of potential future work in both of these tasks.
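To make the streaming-clustering idea concrete (a generic illustration, not the specific algorithm presented in the talk), the sketch below groups items arriving one at a time into events: each item joins the nearest existing cluster if it is close enough, otherwise it starts a new cluster. The feature vectors and the distance threshold are assumptions.

import numpy as np

class StreamingClusterer:
    """Single-pass 'leader' clustering: a simplified stand-in for the
    streaming social-event clustering described in the talk."""

    def __init__(self, threshold):
        self.threshold = threshold   # max distance to join an existing cluster
        self.centroids = []          # running mean feature vector per cluster
        self.counts = []             # number of items per cluster

    def add(self, features):
        x = np.asarray(features, dtype=float)
        if self.centroids:
            dists = [np.linalg.norm(x - c) for c in self.centroids]
            best = int(np.argmin(dists))
            if dists[best] <= self.threshold:
                # Update the running mean of the chosen cluster.
                self.counts[best] += 1
                self.centroids[best] += (x - self.centroids[best]) / self.counts[best]
                return best
        self.centroids.append(x)
        self.counts.append(1)
        return len(self.centroids) - 1

# Toy usage with made-up 2-D "features" (e.g. time + location of a post).
clusterer = StreamingClusterer(threshold=1.0)
for item in [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]:
    print(clusterer.add(item))   # prints 0, 0, 1, 1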
Abstract:
Two vertical cosmic ray telescopes for atmospheric cosmic ray ionization event detection are compared. Counter A, designed for low-power remote use, was deployed in the Welsh mountains; its event rate increased with altitude, as expected from atmospheric cosmic ray absorption. Independently, Counter B’s event rate was found to vary with the incoming particle acceptance angle. Simultaneous colocated comparison of both telescopes exposed to atmospheric ionization showed a linear relationship between their event rates.
Abstract:
This paper presents the two datasets (ARENA and P5) and the challenge that form part of the PETS 2015 workshop. The datasets consist of scenarios recorded using multiple visual and thermal sensors. The scenarios in the ARENA dataset involve different staged activities around a parked vehicle in a parking lot in the UK, and those in the P5 dataset involve different staged activities around the perimeter of a nuclear power plant in Sweden. The scenarios of each dataset are grouped into ‘Normal’, ‘Warning’ and ‘Alarm’ categories. The Challenge specifically includes tasks that account for different steps in a video understanding system: Low-Level Video Analysis (object detection and tracking), Mid-Level Video Analysis (‘atomic’ event detection) and High-Level Video Analysis (‘complex’ event detection). The evaluation methodology used for the Challenge includes well-established measures.
Abstract:
This paper describes the dataset and vision challenges that form part of the PETS 2014 workshop. The datasets are multisensor sequences containing different activities around a parked vehicle in a parking lot. The dataset scenarios were filmed from multiple cameras mounted on the vehicle itself and involve multiple actors. For the PETS 2014 workshop, 22 acted scenarios of abnormal behaviour around the parked vehicle are provided. The aim of PETS 2014 is to provide a standard benchmark that indicates how detection, tracking, abnormality and behaviour analysis systems perform against a common database. The dataset specifically addresses several vision challenges corresponding to different steps in a video understanding system: Low-Level Video Analysis (object detection and tracking), Mid-Level Video Analysis (‘simple’ event detection: the behaviour recognition of a single actor) and High-Level Video Analysis (‘complex’ event detection: the behaviour and interaction recognition of several actors).
Abstract:
Synoptic wind events in the equatorial Pacific strongly influence the evolution of the El Niño/Southern Oscillation (ENSO). This paper characterizes the spatio-temporal distribution of Easterly Wind Events (EWEs) and Westerly Wind Events (WWEs) and quantifies their relationship with intraseasonal and interannual large-scale climate variability. We unambiguously demonstrate that the Madden–Julian Oscillation (MJO) and Convectively Coupled Rossby Waves (CRWs) modulate the occurrence probability of both WWEs and EWEs. We find that 86 % of WWEs occur within convective MJO and/or CRW phases and 83 % of EWEs occur within the suppressed phase of the MJO and/or CRW. In particular, 41 % of WWEs and 26 % of EWEs are associated with the combined occurrence of a CRW and the MJO, far more than would be expected from a random distribution (3 %). Wind events embedded within MJO phases also have a stronger impact on the ocean, owing to their tendency toward larger amplitude, greater zonal extent and longer duration. These findings are robust irrespective of the wind-event and MJO/CRW detection methods. While WWEs and EWEs behave rather symmetrically with respect to MJO/CRW activity, the impact of ENSO on wind events is asymmetrical. The WWE occurrence probability indeed increases when the warm pool is displaced eastward during El Niño events, an increase that can partly be related to the interannual modulation of MJO/CRW activity in the western Pacific. On the other hand, the modulation of EWEs by ENSO is less robust and depends strongly on the wind-event detection method. The consequences of these results for ENSO predictability are discussed.
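The abstract stresses that its conclusions depend on the wind-event detection method. A common family of such methods (a generic illustration, not necessarily the criteria used in this paper) flags a westerly wind event wherever the zonal wind anomaly exceeds a threshold for a minimum number of consecutive days; the threshold and duration below are assumptions.

import numpy as np

def detect_wwes(u_anom, threshold=5.0, min_days=3):
    """Return (start, end) index pairs where the daily zonal wind anomaly
    u_anom (m/s) stays above `threshold` for at least `min_days` days.
    Threshold and duration are illustrative, not the paper's criteria."""
    above = np.asarray(u_anom) > threshold
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_days:
                events.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_days:
        events.append((start, len(above) - 1))
    return events

# Toy daily zonal wind anomalies (m/s): one 4-day burst qualifies as a WWE.
u = [0, 2, 6, 7, 8, 6, 1, 0, 6, 0]
print(detect_wwes(u))   # [(2, 5)]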
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence.
Abstract:
This book will serve as a foundation for a variety of useful applications of graph theory to computer vision, pattern recognition, and related areas. It covers a representative set of novel graph-theoretic methods for complex computer vision and pattern recognition tasks. The first part of the book presents the application of graph theory to low-level processing of digital images, such as a new method for partitioning a given image into a hierarchy of homogeneous areas using graph pyramids, or a study of the relationship between graph theory and digital topology. Part II presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, including a survey of graph-based methodologies for pattern recognition and computer vision, a presentation of a series of computationally efficient algorithms for testing graph isomorphism and related graph matching tasks in pattern recognition, and a new graph distance measure to be used for solving graph matching problems. Finally, Part III provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks. It includes a critical review of the main graph-based and structural methods for fingerprint classification, a new method to visualize time series of graphs, and potential applications in computer network monitoring and abnormal event detection.
Abstract:
Various applications for event detection, localization, and monitoring can benefit from the use of wireless sensor networks (WSNs). Wireless sensor networks are generally easy to deploy, have a flexible topology, and can support a diversity of tasks thanks to the large variety of sensors that can be attached to the wireless sensor nodes. To guarantee the efficient operation of such a heterogeneous wireless sensor network during its lifetime, appropriate management is necessary. Typically, there are three management tasks, namely monitoring, (re)configuration, and code updating. On the one hand, status information, such as battery state and node connectivity, of both the wireless sensor network and the sensor nodes has to be monitored. On the other hand, sensor nodes have to be (re)configured, e.g., by setting the sensing interval. Most importantly, new applications have to be deployed and bug fixes applied during the network lifetime. All management tasks have to be performed in a reliable, time- and energy-efficient manner. The ability to disseminate data from one sender to multiple receivers in a reliable, time- and energy-efficient manner is critical for the execution of the management tasks, especially for code updating. Using multicast communication in wireless sensor networks is an efficient way to handle such a traffic pattern. Due to the nature of code updates, a multicast protocol has to support bulky traffic and end-to-end reliability. Further, the limited resources of wireless sensor nodes demand an energy-efficient operation of the multicast protocol. Current data dissemination schemes do not fulfil all of the above requirements. In order to close this gap, we designed the Sensor Node Overlay Multicast (SNOMC) protocol to support reliable, time-efficient and energy-efficient dissemination of data from one sender node to multiple receivers. In contrast to other multicast transport protocols, which do not support reliability mechanisms, SNOMC supports end-to-end reliability using a NACK-based reliability mechanism. The mechanism is simple and easy to implement and can significantly reduce the number of transmissions. It is complemented by a data acknowledgement after successful reception of all data fragments by the receiver nodes. SNOMC integrates three different caching strategies for efficient handling of the necessary retransmissions, namely caching on each intermediate node, caching on branching nodes, or caching only on the sender node. Moreover, an option was included to proactively request missing fragments. SNOMC was evaluated both in the OMNeT++ simulator and in our in-house real-world testbed and compared to a number of common data dissemination protocols, such as Flooding, MPR, TinyCubus, PSFQ, and both UDP and TCP. The results showed that SNOMC outperforms the selected protocols in terms of transmission time, number of transmitted packets, and energy consumption. Moreover, we showed that SNOMC performs well with different underlying MAC protocols, which support different levels of reliability and energy efficiency. Thus, SNOMC can offer a robust, high-performing solution for the efficient distribution of code updates and management information in a wireless sensor network. To address the three management tasks, in this thesis we developed the Management Architecture for Wireless Sensor Networks (MARWIS). MARWIS is specifically designed for the management of heterogeneous wireless sensor networks.
A distinguishing feature of its design is the use of wireless mesh nodes as a backbone, which enables diverse communication platforms and offloads functionality from the sensor nodes to the mesh nodes. This hierarchical architecture allows for efficient operation of the management tasks, due to the organisation of the sensor nodes into small sub-networks, each managed by a mesh node. Furthermore, we developed an intuitive graphical user interface, which allows non-expert users to easily perform management tasks in the network. In contrast to other management frameworks, such as Mate, MANNA, and TinyCubus, or code dissemination protocols, such as Impala, Trickle, and Deluge, MARWIS offers an integrated solution for monitoring, configuration and code updating of sensor nodes. Integration of SNOMC into MARWIS further increases the efficiency of the management tasks. To our knowledge, our approach is the first one that combines a management architecture with an efficient overlay multicast transport protocol. This combination of SNOMC and MARWIS supports reliable, time- and energy-efficient operation of a heterogeneous wireless sensor network.
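To illustrate the NACK-based reliability idea mentioned above (a minimal sketch, not the actual SNOMC implementation; fragment sizes, message names and the caching policy are assumptions), the receiver below tracks which fragments of a code image it has received, requests only the missing ones with a negative acknowledgement, and emits a final data acknowledgement once the image is complete.

class Receiver:
    """Minimal sketch of NACK-based end-to-end reliability for a fragmented
    data object (e.g. a code update); not the actual SNOMC protocol."""

    def __init__(self, total_fragments):
        self.total = total_fragments
        self.received = {}           # fragment number -> payload bytes

    def on_fragment(self, seq, payload):
        self.received[seq] = payload

    def missing(self):
        return [i for i in range(self.total) if i not in self.received]

    def control_message(self):
        # NACK listing missing fragments, or a final ACK when complete.
        gaps = self.missing()
        return ("NACK", gaps) if gaps else ("DATA_ACK", None)


# The sender keeps its fragments cached so it can retransmit on a NACK.
fragments = {i: bytes([i]) * 8 for i in range(5)}   # toy 5-fragment "image"
rx = Receiver(total_fragments=5)

for seq in (0, 1, 3):                 # fragments 2 and 4 are lost in transit
    rx.on_fragment(seq, fragments[seq])

msg, gaps = rx.control_message()
print(msg, gaps)                      # NACK [2, 4] -> sender retransmits these
for seq in gaps:
    rx.on_fragment(seq, fragments[seq])
print(rx.control_message())           # ('DATA_ACK', None)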
Abstract:
Phycobiliproteins are a family of water-soluble pigment proteins that play an important role as accessory or antenna pigments, absorbing in the green part of the light spectrum, which is poorly used by chlorophyll a. The phycoerythrins (PEs) are one of four types of phycobiliproteins, generally distinguished on the basis of their absorption properties. Because PEs are water soluble, they are generally not captured with conventional pigment analysis. Here we present a statistical model, based on in situ measurements from three transatlantic cruises, which allows us to derive relative PE concentration from standardized hyperspectral underwater radiance measurements (Lu). The model relies on an Empirical Orthogonal Function (EOF) analysis of the Lu spectra and, subsequently, a Generalized Linear Model with the measured PE concentrations as the response variable and the EOF loadings as predictor variables. The method is used to predict relative PE concentrations throughout the water column and to calculate integrated PE estimates from those profiles.
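A minimal sketch of the modelling chain described above (the variable names, synthetic spectra and the choice of a Gaussian GLM are assumptions; the paper's actual preprocessing and model family may differ): compute EOFs of the standardized Lu spectra via an SVD, then regress the measured PE concentrations on the leading EOF loadings.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data: 50 stations x 120 wavelengths of standardized Lu spectra,
# plus a measured relative PE concentration per station.
lu = rng.normal(size=(50, 120))
pe = rng.gamma(shape=2.0, scale=1.0, size=50)

# EOF analysis = SVD of the column-centred spectra; rows of vt are the EOF
# modes, and the "loadings" are the projections of each spectrum onto them.
lu_centred = lu - lu.mean(axis=0)
u, s, vt = np.linalg.svd(lu_centred, full_matrices=False)
n_modes = 4                               # number of EOF modes kept (assumed)
loadings = lu_centred @ vt[:n_modes].T    # shape: (stations, n_modes)

# GLM with measured PE as the response and EOF loadings as predictors.
# A Gaussian family with identity link is used here purely for illustration.
X = sm.add_constant(loadings)
model = sm.GLM(pe, X, family=sm.families.Gaussian()).fit()
print(model.summary())

# Predicted relative PE for the same stations (in practice: new Lu profiles).
pe_hat = model.predict(X)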