842 results for Mega-event


Relevance: 20.00%

Abstract:

A significantly increased water regime can lead to inundation of rivers, creeks and surrounding floodplains, and thus affect the temporal dynamics of both the extant vegetation and the dormant but viable soil seed bank of riparian corridors. This study documented changes in the soil seed bank along riparian corridors before and after a major flood event in January 2011 in southeast Queensland, Australia. The study site was a major watercourse (Mooleyember Creek) near Roma, Central Queensland, that was impacted by the extreme flood event and for which baseline ecological data on riparian seed-bank populations had previously been collected in 2007, 2008 and 2009. After the major flood event, we collected further soil samples from the same locations in spring/summer (November–December 2011) and in early autumn (March 2012). The soils were then exposed to adequate warmth and moisture under glasshouse conditions, and emerged seedlings were identified taxonomically. Flooding increased seed-bank abundance but decreased its species richness and diversity. However, the flood's impact was smaller than year-to-year variation, though greater than seasonal variation. Seeds of trees and shrubs were few in the soil and were negatively affected by the flood; those of herbaceous species and graminoids were numerous and proliferated after the flood. Seed banks of weedy and/or exotic species were no more affected by the flood than those of native and/or non-invasive species. Overall, the studied riparian zone showed evidence of a quick recovery of its seed bank over time, and can be considered resilient to an extreme flood event.

Relevance: 20.00%

Abstract:

The idea of extracting knowledge in process mining is a descendant of data mining. Both mining disciplines emphasise data flow and relations among elements in the data. However, challenges arise when working with these data flows and relations. One challenge is that the representation of the data flow between a pair of elements or tasks is oversimplified, as it considers only one-to-one data flow relations. In this paper, we discuss how the effectiveness of knowledge representation can be extended in both disciplines. To this end, we introduce a new representation of data flow and dependency formulation using a flow graph. The flow graph addresses the inability to represent other relation types, such as many-to-one and one-to-many relations. As an experiment, a new evaluation framework is applied to the Teleclaim process to show how this method can provide more precise results than other representations.
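As a rough illustration of the idea, the sketch below shows one possible way to encode a flow graph whose edges connect sets of tasks, so that many-to-one and one-to-many data flows can be captured alongside one-to-one flows. This is not the paper's formal definition; the task names, counts and the Teleclaim-style split/join are hypothetical.

```python
from collections import defaultdict

# Minimal sketch of a flow graph in which an edge may connect a *set* of
# source tasks to a *set* of target tasks, so many-to-one and one-to-many
# data-flow relations can be represented alongside one-to-one relations.
class FlowGraph:
    def __init__(self):
        # Each hyperedge is keyed by (frozenset of sources, frozenset of targets).
        self.edges = defaultdict(int)

    def add_flow(self, sources, targets, count=1):
        self.edges[(frozenset(sources), frozenset(targets))] += count

    def relation_type(self, sources, targets):
        key = (frozenset(sources), frozenset(targets))
        if key not in self.edges:
            return None
        if len(key[0]) == 1 and len(key[1]) == 1:
            return "one-to-one"
        if len(key[0]) == 1:
            return "one-to-many"
        if len(key[1]) == 1:
            return "many-to-one"
        return "many-to-many"

g = FlowGraph()
g.add_flow({"register claim"}, {"check policy"}, count=40)          # one-to-one
g.add_flow({"check policy"}, {"assess low", "assess high"}, 40)     # one-to-many (split)
g.add_flow({"assess low", "assess high"}, {"notify customer"}, 40)  # many-to-one (join)

for (src, dst), n in g.edges.items():
    print(sorted(src), "->", sorted(dst), n, g.relation_type(src, dst))
```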

Relevance: 20.00%

Abstract:

Objective: We aimed to assess the impact of task demands and individual characteristics on threat detection in baggage screeners. Background: Airport security staff work under time constraints to ensure optimal threat detection. Understanding the impact of individual characteristics and task demands on performance is vital to ensure accurate threat detection. Method: We examined threat detection in baggage screeners as a function of event rate (i.e., number of bags per minute) and time on task across four months. We measured performance in terms of the accuracy of detection of Fictitious Threat Items (FTIs) randomly superimposed on X-ray images of real passenger bags. Results: Analyses of the percentage of correct FTI identifications (hits) show that longer shifts with high baggage throughput result in worse threat detection. Importantly, these significant performance decrements emerge within the first 10 min of screening, and only during these busy shifts. Conclusion: Longer shift lengths, especially when combined with high baggage throughput, increase the likelihood that threats go undetected. Application: Shorter shift rotations, although perhaps difficult to implement during busy screening periods, would ensure more consistently high vigilance in baggage screeners and, therefore, optimal threat detection and passenger safety.
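For readers who want to run this kind of analysis on their own screening data, the following sketch shows one way to compute FTI hit rate by throughput and time-on-task bins. The column names, bin edges and example records are hypothetical, not the study's data or analysis pipeline.

```python
import pandas as pd

# Hypothetical per-bag records: throughput, minutes into the shift, and whether
# a superimposed Fictitious Threat Item (FTI) was present and detected.
screening = pd.DataFrame({
    "bags_per_minute": [4, 4, 12, 12, 12, 4, 12, 12],
    "minutes_on_task": [5, 45, 5, 45, 90, 90, 5, 45],
    "fti_present":     [1, 1, 1, 1, 1, 1, 1, 1],
    "fti_detected":    [1, 1, 1, 0, 0, 1, 1, 0],
})

# Hit rate is only defined on trials where an FTI was present.
screening = screening[screening["fti_present"] == 1].copy()

screening["throughput"] = pd.cut(screening["bags_per_minute"],
                                 bins=[0, 8, 100], labels=["low", "high"])
screening["time_bin"] = pd.cut(screening["minutes_on_task"],
                               bins=[0, 10, 60, 240], labels=["0-10", "10-60", "60+"])

hit_rate = (screening.groupby(["throughput", "time_bin"], observed=True)["fti_detected"]
            .mean().rename("hit_rate"))
print(hit_rate)
```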

Relevance: 20.00%

Abstract:

Aerosol particles in the atmosphere are known to significantly influence ecosystems, to change air quality and to exert negative health effects. Atmospheric aerosols influence climate by cooling the atmosphere and the underlying surface through scattering of sunlight, by warming the atmosphere through absorption of sunlight and of thermal radiation emitted by the Earth's surface, and by acting as cloud condensation nuclei. Aerosols are emitted from both natural and anthropogenic sources. Depending on their size, they can be transported over significant distances while undergoing considerable changes in their composition and physical properties. Their lifetime in the atmosphere varies from a few hours to a week. New particle formation is a result of gas-to-particle conversion. Once formed, atmospheric aerosol particles may grow by condensation or coagulation, or be removed by deposition processes. In this thesis we describe analyses of air masses, meteorological parameters and synoptic situations to reveal conditions favourable for new particle formation in the atmosphere. We studied the concentration of ultrafine particles in different types of air masses, and the role of atmospheric fronts and cloudiness in the formation of atmospheric aerosol particles. The dominant role of Arctic and Polar air masses in causing new particle formation was clearly observed at Hyytiälä, Southern Finland, during all seasons, as well as at other measurement stations in Scandinavia. In all seasons, and on a multi-year average, the Arctic and North Atlantic areas were the sources of nucleation mode particles. In contrast, concentrations of accumulation mode particles and condensation sink values at Hyytiälä were highest in continental air masses arriving from Eastern Europe and Central Russia. The most favourable situation for new particle formation during all seasons was cold air advection after cold-front passages; such a period could last a few days, until the next front reached Hyytiälä. The frequency of aerosol particle formation is related to the frequency of days with low cloud amount at Hyytiälä. Cloudiness of less than 5 octas is one of the factors favouring new particle formation, whereas cloudiness above 4 octas appears to be an important factor preventing particle growth, because it decreases solar radiation, one of the important meteorological parameters in atmospheric particle formation and growth.

Keywords: atmospheric aerosols, particle formation, air mass, atmospheric front, cloudiness

Relevance: 20.00%

Abstract:

Existing business process drift detection methods do not work with event streams. As such, they are designed to detect inter-trace drifts only, i.e. drifts that occur between complete process executions (traces), as recorded in event logs. However, process drift may also occur during the execution of a process, and may impact ongoing executions. Existing methods either do not detect such intra-trace drifts, or detect them with a long delay. Moreover, they do not perform well with unpredictable processes, i.e. processes whose logs exhibit a high ratio of distinct executions to total executions. We address these two issues by proposing a fully automated and scalable method for online detection of process drift from event streams. We perform statistical tests over distributions of behavioral relations between events, as observed in two adjacent windows of adaptive size that slide along with the stream. An extensive evaluation on synthetic and real-life logs shows that our method is fast and accurate in detecting typical change patterns, and performs significantly better than the state of the art.
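A minimal sketch of the core idea is given below: compare the distributions of behavioral relations observed in two adjacent windows with a statistical test, and flag a drift when the test rejects. It assumes the relation stream has already been extracted from the event stream, and it uses a fixed window size and a chi-squared test, whereas the paper's method uses windows of adaptive size; all parameter values are illustrative.

```python
from collections import Counter
from scipy.stats import chi2_contingency

def detect_drift(relation_stream, window=200, alpha=0.005):
    """Return candidate change points in a stream of behavioral relations.

    relation_stream: sequence of hashable relation labels (e.g. directly-follows
    pairs) observed one per event arrival.
    """
    drifts = []
    for i in range(window, len(relation_stream) - window):
        ref = Counter(relation_stream[i - window:i])   # reference window
        det = Counter(relation_stream[i:i + window])   # detection window
        labels = sorted(set(ref) | set(det))
        table = [[ref[l] for l in labels], [det[l] for l in labels]]
        _, p_value, _, _ = chi2_contingency(table)     # compare the two distributions
        if p_value < alpha:
            drifts.append(i)                           # candidate drift at position i
    return drifts
```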

Relevance: 20.00%

Abstract:

The overlapping sound pressure waves that enter our brain via the ears and auditory nerves must be organized into a coherent percept. Modelling the regularities of the auditory environment and detecting unexpected changes in these regularities, even in the absence of attention, is a necessary prerequisite for orienting towards significant information, as well as for speech perception and communication. The processing of auditory information, in particular the detection of changes in the regularities of the auditory input, gives rise to neural activity in the brain that is seen as a mismatch negativity (MMN) response of the event-related potential (ERP) recorded by electroencephalography (EEG).

As the recording of MMN requires neither a subject's behavioural response nor attention towards the sounds, it can be obtained even from subjects who have difficulty communicating or performing a discrimination task, for example aphasic and comatose patients, newborns, and even fetuses. Thus, with MMN one can follow the evolution of central auditory processing from the very early, often critical stages of development, and also in subjects who cannot be examined with the more traditional behavioural measures of auditory discrimination. Indeed, recent studies show that central auditory processing, as indicated by MMN, is affected in different clinical populations, such as schizophrenic patients, as well as during normal aging and abnormal childhood development. Moreover, the processing of auditory information can be selectively impaired for certain auditory attributes (e.g., sound duration, frequency) and can also depend on the context of the sound changes (e.g., speech or non-speech).

Although its advantages over behavioural measures are undeniable, a major obstacle to larger-scale routine use of the MMN method, especially in clinical settings, is the relatively long duration of its measurement. Typically, approximately 15 minutes of recording time is needed to measure the MMN for a single auditory attribute, so recording a complete central auditory processing profile covering several auditory attributes would require one to several hours. In this research, I have contributed to the development of new fast multi-attribute MMN recording paradigms in which several types and magnitudes of sound changes are presented in both speech and non-speech contexts, in order to obtain a comprehensive profile of auditory sensory memory and discrimination accuracy in a short measurement time (altogether approximately 15 minutes for five auditory attributes). The speed of the paradigms makes them highly attractive for clinical research, their reliability brings fidelity to longitudinal studies, and the language context is especially suitable for studies on language impairments such as dyslexia and aphasia. In addition, I have presented an even more ecological paradigm and, more importantly for the theory of MMN, a result in which the MMN responses are recorded entirely without a repetitive standard tone. All in all, these paradigms contribute to the development of the theory of auditory perception and increase the feasibility of MMN recordings in both basic and clinical research. They have already proven useful in studying, for instance, dyslexia, Asperger syndrome and schizophrenia.
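To make the MMN measure concrete, the sketch below shows how a mismatch negativity waveform is conventionally derived: average the EEG epochs time-locked to standard and deviant sounds and subtract the standard ERP from the deviant ERP. The data here are synthetic and the paradigm is the classical oddball, not the multi-attribute paradigms developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, epoch_len = 250, 150                 # 250 Hz sampling, 600 ms epochs
t = np.arange(epoch_len) / fs

def simulate_epochs(n, mmn_amplitude):
    # Noise plus, for deviants, a negative deflection peaking around 175 ms.
    noise = rng.normal(0, 5, size=(n, epoch_len))
    mmn = -mmn_amplitude * np.exp(-((t - 0.175) ** 2) / (2 * 0.03 ** 2))
    return noise + mmn

standard_epochs = simulate_epochs(800, mmn_amplitude=0.0)   # frequent standards
deviant_epochs = simulate_epochs(120, mmn_amplitude=3.0)    # rare deviants

standard_erp = standard_epochs.mean(axis=0)
deviant_erp = deviant_epochs.mean(axis=0)
mmn_wave = deviant_erp - standard_erp    # the mismatch negativity difference wave

peak = t[np.argmin(mmn_wave)]
print(f"MMN peak latency ~{peak * 1000:.0f} ms, amplitude {mmn_wave.min():.1f} uV")
```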

Relevance: 20.00%

Abstract:

The performance of the Advanced Regional Prediction System (ARPS) in simulating an extreme rainfall event is evaluated, and subsequently the physical mechanisms leading to its initiation and sustenance are explored. As a case study, we selected the heavy precipitation event of 26 July 2005 that produced 65 cm of rainfall accumulation in a span of around 6 h (1430 LT to 2030 LT) over Santacruz (Mumbai, India). Three sets of numerical experiments were conducted. The first set (EXP1) consisted of a four-member ensemble and was carried out in an idealized mode with a model grid spacing of 1 km. In spite of the idealized framework, signatures of heavy rainfall were seen in two of the ensemble members. The second set (EXP2) consisted of a five-member ensemble with four-level one-way nested integration and grid spacings of 54, 18, 6 and 1 km. The model was able to simulate a realistic spatial structure on the 54, 18 and 6 km grids; however, on the 1 km grid, the simulations were dominated by the prescribed boundary conditions. The third and final set (EXP3) consisted of a five-member ensemble with four-level one-way nesting and grid spacings of 54, 18, 6 and 2 km. The Scaled Lagged Average Forecasting (SLAF) methodology was employed to construct the ensemble members. The model simulations in this case were closer to observations than in EXP2. Specifically, the timing of maximum rainfall, the abrupt increase in rainfall intensity (a major feature of this event), and the rainfall intensities simulated in EXP3 (at 6 km resolution) were the closest to observations among all experiments. Analysis of the physical mechanisms causing the initiation and sustenance of the event reveals some interesting aspects. Deep convection was initiated by mid-tropospheric convergence that extended to lower levels during the later stage. In addition, there was a strong negative vertical gradient of equivalent potential temperature, suggesting strong atmospheric instability prior to and during the event. Finally, the presence of conducive vertical wind shear in the lower and mid-troposphere is thought to be one of the major factors influencing the longevity of the event.
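The instability diagnostic mentioned above can be made concrete with a short calculation: a negative vertical gradient of equivalent potential temperature indicates a convectively unstable layer. The sketch below uses a common approximate formula for theta_e and a hypothetical sounding profile; it is not the ARPS diagnostic output.

```python
import numpy as np

Lv, cp, Rd, p0 = 2.5e6, 1004.0, 287.0, 1000.0   # J/kg, J/(kg K), J/(kg K), hPa

def theta_e(T, p, r):
    """Approximate equivalent potential temperature [K].

    T: temperature [K], p: pressure [hPa], r: water vapour mixing ratio [kg/kg].
    Uses theta_e ~ theta * exp(Lv * r / (cp * T)).
    """
    theta = T * (p0 / p) ** (Rd / cp)
    return theta * np.exp(Lv * r / (cp * T))

# Hypothetical lower/mid-tropospheric profile (heights z in m).
z = np.array([0, 1000, 2000, 3000, 4000])
T = np.array([303.0, 296.0, 290.0, 284.0, 278.0])
p = np.array([1005.0, 900.0, 800.0, 710.0, 630.0])
r = np.array([0.020, 0.016, 0.011, 0.007, 0.004])

te = theta_e(T, p, r)
dte_dz = np.gradient(te, z)        # K per metre
unstable = dte_dz < 0              # layers where theta_e decreases with height
print(np.round(te, 1), unstable)
```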

Relevance: 20.00%

Abstract:

Frequent episode discovery is a popular framework for mining data available as a long sequence of events. An episode is essentially a short, ordered sequence of event types, and the frequency of an episode is some suitable measure of how often the episode occurs in the data sequence. Recently, we proposed a new frequency measure for episodes based on the notion of non-overlapped occurrences of episodes in the event sequence, and showed that such a definition, in addition to yielding computationally efficient algorithms, has important theoretical properties connecting frequent episode discovery with HMM learning. This paper presents new algorithms for frequent episode discovery under this non-overlapped-occurrences-based frequency definition. The algorithms presented here are better, by a factor of N (where N denotes the size of the episodes being discovered), in terms of both time and space complexities when compared to existing methods for frequent episode discovery. We show through simulation experiments that our algorithms are very efficient. The new algorithms presented here have arguably the least possible orders of space and time complexities for the task of frequent episode discovery.
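For a serial episode, the non-overlapped occurrence count can be obtained with a simple greedy one-pass scan, as sketched below; this illustrates the frequency definition rather than the paper's automaton-based algorithms, and the example sequence is made up.

```python
def non_overlapped_count(sequence, episode):
    """Count non-overlapped occurrences of a serial episode (e.g. A -> B -> C)."""
    count, j = 0, 0                  # j = next episode position we are waiting for
    for event in sequence:
        if event == episode[j]:
            j += 1
            if j == len(episode):    # one complete occurrence found
                count += 1
                j = 0                # restart so counted occurrences share no events
    return count

seq = list("ABXCABCXXABC")
print(non_overlapped_count(seq, list("ABC")))   # -> 3
```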

Relevance: 20.00%

Abstract:

In this paper we consider the process of discovering frequent episodes in event sequences. The most computationally intensive part of this process is counting the frequencies of a set of candidate episodes. We present two new frequency counting algorithms for speeding up this part. These, referred to as non-overlapping and non-interleaved frequency counts, are based on directly counting suitable subsets of the occurrences of an episode. Hence they differ from the frequency counts of Mannila et al. [1], which count the number of windows in which the episode occurs. Our new frequency counts offer a speed-up factor of 7 or more on real and synthetic datasets. We also show how the new frequency counts can be used when the events in episodes have time durations as well.
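For contrast with the occurrence-based counts introduced here, the sketch below illustrates the windows-based frequency of Mannila et al. [1]: count the sliding windows that contain at least one occurrence of the episode. The per-window check is deliberately naive, and the window width and example sequence are arbitrary.

```python
def occurs_in(window, episode):
    # True if the serial episode occurs (in order) within this window of events.
    j = 0
    for event in window:
        if event == episode[j]:
            j += 1
            if j == len(episode):
                return True
    return False

def windows_frequency(sequence, episode, win):
    # Slide a window of `win` consecutive events along the sequence and count
    # how many windows contain at least one occurrence of the episode.
    return sum(occurs_in(sequence[i:i + win], episode)
               for i in range(len(sequence) - win + 1))

seq = list("ABXCABCXXABC")
print(windows_frequency(seq, list("ABC"), win=4))
```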

Relevance: 20.00%

Abstract:

Discovering patterns in temporal data is an important task in data mining. A successful method for this was proposed by Mannila et al. [1] in 1997. In their framework, mining for temporal patterns in a database of sequences of events is done by discovering so-called frequent episodes. These episodes characterize interesting collections of events occurring relatively close to each other in some partial order. However, in this framework (and in many others for finding patterns in event sequences), the ordering of events in an event sequence is the only temporal information allowed. But there are many applications where events are not instantaneous; they have time durations, and interesting episodes that we want to discover may need to contain information regarding event durations. In this paper we extend Mannila et al.'s framework to tackle such issues. In our generalized formulation, episodes are defined so that much more temporal information about events can be incorporated into the structure of an episode. This significantly enhances the expressive capability of the rules that can be discovered in the frequent episode framework. We also present algorithms for discovering such generalized frequent episodes.
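One way to picture the generalization is sketched below: events carry start and end times, and an episode constrains each event's duration as well as the serial order of event types. The data model and the occurrence check are illustrative simplifications, not the paper's formal definitions.

```python
from dataclasses import dataclass

@dataclass
class Event:
    etype: str
    start: float
    end: float

    @property
    def duration(self):
        return self.end - self.start

def occurs(events, episode):
    """episode: list of (event_type, min_duration, max_duration), in serial order."""
    j = 0
    last_end = float("-inf")
    for ev in sorted(events, key=lambda e: e.start):
        etype, dmin, dmax = episode[j]
        # Match the next episode node: right type, duration in range, no overlap
        # with the previously matched event.
        if ev.etype == etype and dmin <= ev.duration <= dmax and ev.start >= last_end:
            last_end = ev.end
            j += 1
            if j == len(episode):
                return True
    return False

events = [Event("A", 0, 2), Event("B", 3, 10), Event("C", 11, 12)]
# "An A of any duration, then a long B (>= 5), then a short C (<= 2)".
print(occurs(events, [("A", 0, float("inf")), ("B", 5, float("inf")), ("C", 0, 2)]))
```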

Relevance: 20.00%

Abstract:

We consider a small extent sensor network for event detection, in which nodes periodically take samples and then contend over a random access network to transmit their measurement packets to the fusion center. We consider two procedures at the fusion center for processing the measurements. The Bayesian setting is assumed; that is, the fusion center has a prior distribution on the change time. In the first procedure, the decision algorithm at the fusion center is network-oblivious and makes a decision only when a complete vector of measurements taken at a sampling instant is available. In the second procedure, the decision algorithm at the fusion center is network-aware and processes measurements as they arrive, but in a time-causal order. In this case, the decision statistic depends on the network delays, whereas in the network-oblivious case it does not. This yields a Bayesian change-detection problem with a trade-off between the random network delay and the decision delay: a higher sampling rate reduces the decision delay but increases the random access delay. Under periodic sampling, in the network-oblivious case, the structure of the optimal stopping rule is the same as that without the network, and the optimal change-detection delay decouples into the network delay and the optimal decision delay without the network. In the network-aware case, the optimal stopping problem is analyzed as a partially observable Markov decision process, in which the states of the queues and the delays in the network need to be maintained. A sufficient decision statistic is the network state together with the posterior probability of change having occurred, given the measurements received and the state of the network. The optimal regimes are studied using simulation.
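As background for the posterior decision statistic mentioned above, the sketch below implements the classical Shiryaev posterior recursion for a single measurement stream with a geometric change-time prior and Gaussian pre- and post-change densities, together with a threshold stopping rule. The paper's network-aware statistic additionally tracks the queue and delay state of the network, which is not modelled here; all parameters are illustrative.

```python
import math
import random

def gauss_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def shiryaev_stopping_time(xs, rho=0.01, mu0=0.0, mu1=1.0, threshold=0.99):
    """Stop when the posterior probability of change crosses the threshold.

    rho: parameter of the geometric prior on the change time.
    mu0, mu1: pre- and post-change means of the Gaussian observations.
    """
    p = 0.0                                   # posterior P(change has occurred)
    for k, x in enumerate(xs, start=1):
        prior = p + (1.0 - p) * rho           # predict: change may occur at this step
        num = prior * gauss_pdf(x, mu1)
        den = num + (1.0 - prior) * gauss_pdf(x, mu0)
        p = num / den                         # update with the new measurement
        if p >= threshold:
            return k, p                       # declare the change at sample k
    return None, p

random.seed(1)
change_point = 60
xs = [random.gauss(0.0, 1.0) if k < change_point else random.gauss(1.0, 1.0)
      for k in range(120)]
print(shiryaev_stopping_time(xs))
```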