864 results for Precondition event
Abstract:
Computer networks produce tremendous amounts of event-based data that can be collected and managed to support a growing number of new classes of pervasive applications, such as network monitoring and crisis management. Although the problem of distributed event-based management has been addressed in non-pervasive settings such as the Internet, the domain of pervasive networks has characteristics of its own that render those results inapplicable. Many of these applications are based on time-series data in the form of time-ordered series of events. They must also handle large volumes of unexpected events, often modified on the fly and containing conflicting information, and cope with rapidly changing contexts while producing results with low latency. Correlating events across contextual dimensions holds the key to expanding the capabilities and improving the performance of these applications. This dissertation addresses that critical challenge. It establishes an effective scheme for complex-event semantic correlation. The scheme examines epistemic uncertainty in computer networks by fusing event synchronization concepts with belief theory. Because event detection is distributed, time delays are considered: events are no longer instantaneous but have a duration associated with them. Existing time-synchronization algorithms are split into two classes, one of which is argued to converge faster and is hence better suited to pervasive network management. Beyond the temporal dimension, the scheme considers the imprecision and uncertainty inherent in event detection. A belief value is therefore associated with the semantics and the detection of composite events. This belief value is generated by a consensus among participating entities in a computer network. The scheme taps into the in-network processing capabilities of pervasive computer networks and can withstand missing or conflicting information gathered from multiple participating entities. The dissertation thus advances knowledge in the field of network management by enabling full use of the characteristics offered by pervasive, distributed and wireless technologies in contemporary and future computer networks.
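A minimal sketch of the general idea described above, not the dissertation's actual scheme: time-stamped event reports from several network entities are grouped within a time-tolerance window and fused into a composite event with a consensus belief. The EventReport structure, the window and quorum parameters, and the simple averaging rule (a stand-in for the belief-theoretic consensus) are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class EventReport:
    """One entity's observation of an event (illustrative structure)."""
    name: str        # event type, e.g. "link_down"
    start: float     # detection start time in seconds; events have a duration
    end: float       # detection end time
    belief: float    # the entity's confidence in the detection, in [0, 1]

def correlate(reports, window=2.0, quorum=0.5):
    """Group reports of the same event type whose intervals overlap within
    `window` seconds, and fuse their beliefs by simple averaging.
    Returns composite events whose consensus belief reaches `quorum`."""
    by_type = {}
    for r in reports:
        by_type.setdefault(r.name, []).append(r)

    composites = []
    for name, group in by_type.items():
        group.sort(key=lambda r: r.start)
        cluster = [group[0]]
        for r in group[1:]:
            if r.start <= cluster[-1].end + window:   # close enough in time
                cluster.append(r)
            else:
                composites.append((name, cluster))
                cluster = [r]
        composites.append((name, cluster))

    results = []
    for name, cluster in composites:
        belief = sum(r.belief for r in cluster) / len(cluster)  # naive consensus
        if belief >= quorum:
            results.append({"event": name, "belief": belief,
                            "start": min(r.start for r in cluster),
                            "end": max(r.end for r in cluster)})
    return results
```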
Abstract:
In fire-dependent forests, managers are interested in predicting the consequences of prescribed burning on post-fire tree mortality. We examined the effects of prescribed fire on tree mortality in Florida Keys pine forests, using a factorial design with understory type, season, and year of burn as factors. We also used logistic regression to model the effects of burn season, fire severity, and tree dimensions on individual tree mortality. Despite limited statistical power due to problems in carrying out the full suite of planned experimental burns, associations with tree and fire variables were observed. Post-fire pine tree mortality was negatively correlated with tree size and positively correlated with char height and percent crown scorch. Unlike post-fire mortality, tree mortality associated with storm surge from Hurricane Wilma was greater in the large size classes. Because of their influence on population structure and fuel dynamics, these size-selective mortality patterns following fire and storm surge have practical importance for the future use of fire as a management tool in Florida Keys pinelands, particularly as the threats to their continued existence from tropical storms and sea level rise are expected to increase.
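A hedged sketch of the kind of logistic regression the abstract describes (individual tree mortality modeled on tree size and fire-severity measures). The column names, coefficients, and simulated data below are illustrative assumptions, not the study's data set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical tree-level data: one row per tree, died = 1 if the tree died post-fire.
rng = np.random.default_rng(1)
n = 200
trees = pd.DataFrame({
    "dbh_cm": rng.uniform(5, 30, n),             # tree size
    "char_height_m": rng.uniform(0, 4, n),       # fire-severity proxy
    "crown_scorch_pct": rng.uniform(0, 100, n),  # fire-severity proxy
})
# Simulate mortality with the signs reported in the abstract: larger trees die less,
# higher char height and crown scorch increase mortality (coefficients are invented).
logit = -1.5 - 0.12 * trees.dbh_cm + 0.6 * trees.char_height_m + 0.03 * trees.crown_scorch_pct
trees["died"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(trees[["dbh_cm", "char_height_m", "crown_scorch_pct"]])
model = sm.Logit(trees["died"], X).fit(disp=0)
print(model.summary())
```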
Abstract:
The frequency of extreme environmental events is predicted to increase in the future. Understanding the short- and long-term impacts of these extreme events on large-bodied predators will provide insight into the spatial and temporal scales at which acute environmental disturbances in top-down processes may persist within and across ecosystems. Here, we use long-term studies of movements and age structure of an estuarine top predator—juvenile bull sharks Carcharhinus leucas—to identify the effects of an extreme ‘cold snap’ from 2 to 13 January 2010 over short (weeks) to intermediate (months) time scales. Juvenile bull sharks are typically year-round residents of the Shark River Estuary until they reach 3 to 5 yr of age. However, acoustic telemetry revealed that almost all sharks either permanently left the system or died during the cold snap. For 116 d after the cold snap, no sharks were detected in the system with telemetry or captured during longline sampling. Once sharks returned, both the size structure and abundance of the individuals present in the nursery had changed considerably. During 2010, individual longlines were 70% less likely to capture any sharks, and catch rates on successful longlines were 40% lower than during 2006−2009. Also, all sharks caught after the cold snap were young-of-the-year or neonates, suggesting that the majority of sharks in the estuary were new recruits and several cohorts had been largely lost from the nursery. The longer-term impacts of this change in bull shark abundance to the trophic dynamics of the estuary and the importance of episodic disturbances to bull shark population dynamics will require continued monitoring, but are of considerable interest because of the ecological roles of bull sharks within coastal estuaries and oceans.
Abstract:
Modern IT infrastructures are built from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient, so service providers seek automatic or semi-automatic methods for detecting and resolving system issues in order to improve service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied fall into three aspects of the service workflow: 1) preprocessing raw textual system logs into structured events; 2) refining monitoring configurations to eliminate false positives and false negatives; 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a great deal of domain knowledge about the particular computing systems. The approaches investigated here are built on event-mining algorithms, which can automatically derive part of that knowledge from historical system logs, events and tickets. In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, the dissertation presents an efficient algorithm for discovering temporal dependencies between system events, with their corresponding time lags, which can help administrators determine redundancies among deployed monitoring situations and dependencies among system components. To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets, with their resolutions, for incoming tickets are investigated. Finally, the dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which helps administrators locate similar system behaviors in the logs. Extensive empirical evaluation on system logs, events and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
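One component named above is KNN-based recommendation of historical tickets, with their resolutions, for incoming tickets. Below is a minimal sketch of that idea (not the dissertation's algorithms), using TF-IDF vectors and cosine-distance nearest neighbors; the ticket texts and resolutions are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Hypothetical historical tickets paired with their recorded resolutions.
history = [
    ("disk usage above 95% on /var", "extended logical volume and rotated logs"),
    ("CPU utilization alert on web node", "restarted runaway worker process"),
    ("database connection timeout", "increased connection pool size"),
]
texts = [t for t, _ in history]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
knn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

def recommend(incoming_ticket):
    """Return resolutions of the k most similar historical tickets."""
    q = vectorizer.transform([incoming_ticket])
    _, idx = knn.kneighbors(q)
    return [history[i][1] for i in idx[0]]

print(recommend("filesystem /var almost full"))
```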
Abstract:
Florida International University hosts an information session for the Hispanic Scholarship Fund's Steps for Success program. The program assists students and parents by providing workshops on financial aid and the college application process. January 21, 2012, at the Graham Center Ballroom, Modesto Maidique Campus, Florida International University.
Abstract:
In the last decade, large numbers of social media services have emerged and been widely used in people's daily lives as important tools for sharing and acquiring information. With a substantial amount of user-contributed text data on social media, it has become necessary to develop methods and tools for analyzing this emerging type of text in order to deliver meaningful information to users. Previous work on text analytics over the last several decades has mainly focused on traditional types of text such as emails, news and academic literature, and several issues critical to social media text have not been well explored: 1) how to detect sentiment in text on social media; 2) how to make use of social media's real-time nature; 3) how to address information overload for flexible information needs. This dissertation focuses on these three problems. First, to detect the sentiment of text on social media, we propose a non-negative matrix tri-factorization (tri-NMF) based dual active supervision method that minimizes human labeling effort for this new type of data. Second, to exploit social media's real-time nature, we propose approaches for detecting events from text streams on social media. Third, to address information overload for flexible information needs, we propose two summarization frameworks: a dominating-set based framework and a learning-to-rank based framework. The dominating-set based framework can be applied to different types of summarization problems, while the learning-to-rank based framework uses existing training data to guide new summarization tasks. In addition, we integrate these techniques in an application study of event summarization for sports games as an example of how to better utilize social media data.
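A rough sketch of the dominating-set idea mentioned above (not the dissertation's exact framework): build a sentence-similarity graph and greedily select sentences that "cover" their neighbors until every sentence is covered. The Jaccard similarity and threshold are illustrative choices.

```python
import itertools

def jaccard(a, b):
    """Word-overlap similarity between two sentences (illustrative choice)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def dominating_set_summary(sentences, threshold=0.2):
    """Greedy approximation of a minimum dominating set on the sentence graph:
    repeatedly pick the sentence that covers the most uncovered sentences."""
    adj = {i: {i} for i in range(len(sentences))}
    for i, j in itertools.combinations(range(len(sentences)), 2):
        if jaccard(sentences[i], sentences[j]) >= threshold:
            adj[i].add(j)
            adj[j].add(i)
    uncovered = set(range(len(sentences)))
    summary = []
    while uncovered:
        best = max(adj, key=lambda i: len(adj[i] & uncovered))
        summary.append(sentences[best])
        uncovered -= adj[best]
    return summary

print(dominating_set_summary([
    "the team scored early in the first half",
    "a goal was scored in the first half",
    "the goalkeeper made a late save",
]))
```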
Abstract:
To improve our knowledge of the influence of land use on solute behaviour and export rates in neotropical montane catchments, we investigated total organic carbon (TOC), Ca, Mg, Na, K, NO3 and SO4 concentrations from April 2007 to May 2008, at different flow conditions and over time, in six forested and pasture-dominated headwaters (0.7-76 km2) in Ecuador. NO3 and SO4 concentrations decreased during the study period, with a continual decrease in NO3 and an abrupt decrease in February 2008 for SO4. We attribute this to changing weather regimes connected with a weakening La Niña event. Stream Na concentrations decreased in all catchments, and Mg and Ca concentrations decreased in all but the forested catchments during storm flow. Under all land uses, TOC increased at high flows. The differences in solute behaviour during storm flow may be attributed to largely shallow subsurface and surface flow paths in the pasture streams on the one hand, and to storm flow originating predominantly from the organic layer in the forested streams on the other. Nutrient export rates in the forested streams were comparable to values reported in the literature for tropical streams. They amounted to 6-8 kg/ha/y for Ca, 7-8 kg/ha/y for K, 4-5 kg/ha/y for Mg, 11-14 kg/ha/y for Na, 19-22 kg/ha/y for NO3 (i.e. 4.3-5.0 kg/ha/y NO3-N) and 17 kg/ha/y for SO4. Our data contradict the assumption that nutrient export increases with the loss of forest cover; for NO3 we observed a positive correlation between export and percentage forest cover.
Abstract:
Event-B is a formal method for modeling and verifying discrete transition systems. Event-B development yields proof obligations that must be verified (i.e. proved valid) in order to keep the produced models consistent. Satisfiability Modulo Theories (SMT) solvers are automated theorem provers used to verify the satisfiability of logic formulas with respect to a background theory (or combination of theories). SMT solvers not only handle large first-order formulas, but can also generate models and proofs, as well as identify unsatisfiable subsets of hypotheses (unsat cores). Tool support for Event-B is provided by the Rodin platform, an extensible Eclipse-based IDE that combines modeling and proving features. An SMT plug-in for Rodin has been developed to integrate alternative, efficient verification techniques into the platform. We implemented a series of complements to the SMT solver plug-in for Rodin, namely improvements to the user interface for cases in which proof obligations are reported as invalid by the plug-in. Additionally, we modified some of the plug-in's features, such as support for proof generation and unsat-core extraction, to comply with the SMT-LIB standard for SMT solvers. We undertook tests using applicable proof obligations to demonstrate the new features. The contributions described can potentially improve productivity.
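As a generic illustration of what unsat-core extraction means (using the Z3 SMT solver's Python API, not the Rodin plug-in or its SMT-LIB pipeline): each hypothesis is tracked under a name, and when the solver reports unsatisfiability it returns the subset of named hypotheses that is jointly inconsistent.

```python
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
s.set(unsat_core=True)

# Track each hypothesis under a name so the core can refer back to it.
s.assert_and_track(x > 5, "h1")
s.assert_and_track(y > x, "h2")
s.assert_and_track(y < 3, "h3")

if s.check() != sat:
    # The reported core is a jointly inconsistent subset of the tracked hypotheses.
    print("unsat core:", s.unsat_core())
```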
Abstract:
This research explores Bayesian updating as a tool for estimating parameters probabilistically through dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident of 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top-event probability at each Bayesian updating step via Monte Carlo sampling from the posterior failure rate distributions. PEWMA modeling is demonstrated to be advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models; specifically, the technique is applied to the hydrocarbon material balance equation. To test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC sampling based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence. Error in the measured data significantly affects the accuracy of the posterior parameter distributions. Increasing the likelihood variance mitigates random measurement errors, but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques because it allows prior belief to be incorporated and uncertainty to be modeled fully over the parameter ranges. As such, the Bayesian approach to estimating parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
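The PEWMA approach above is benchmarked against conventional conjugate Poisson-Gamma updating. A minimal sketch of that conjugate baseline (not the PEWMA model itself) follows; the prior parameters and observed failure counts are invented. Note that, unlike PEWMA, this baseline weights all historical data equally, which is the limitation the abstract points to for long time spans.

```python
# Conjugate Poisson-Gamma updating of a failure rate lambda (failures per year).
# Prior: lambda ~ Gamma(alpha, beta). Observing k failures over t years of
# exposure yields the posterior Gamma(alpha + k, beta + t).

alpha, beta = 1.0, 2.0                                   # illustrative prior, mean 0.5/yr
observations = [(2, 1.0), (0, 1.0), (1, 1.0), (3, 1.0)]  # (failures, exposure years)

for k, t in observations:
    alpha += k
    beta += t
    print(f"posterior mean rate = {alpha / beta:.3f} "
          f"(Gamma(alpha={alpha:.1f}, beta={beta:.1f}))")
```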
Abstract:
Event layers in lake sediments are indicators of past extreme events, mostly the result of floods or earthquakes. Detailed characterisation of the layers allows discrimination of the sedimentation processes involved, such as surface runoff, landslides or subaqueous slope failures. These processes can then be interpreted in terms of their triggering mechanisms. Here we present a 40 kyr event-layer chronology from Lake Suigetsu, Japan. The event layers were characterised using a multi-proxy approach, employing light microscopy and µXRF for microfacies analysis. The vast majority of event layers in Lake Suigetsu were produced by flood events (362 out of 369), allowing the construction of the first long-term, quantitative (with respect to recurrence) and well dated flood chronology from the region. The flood layer frequency shows high variability over the last 40 kyr, and it appears that extreme precipitation events were decoupled from average long-term precipitation. For instance, the flood layer frequency is highest in the Glacial at around 25 kyr BP, when Japan was experiencing a generally cold and dry climate. Other cold episodes, such as Heinrich Event 1 or the Late Glacial stadial, show a low flood layer frequency. Together, these observations exclude a simple, straightforward relationship with average precipitation and temperature. We argue that, especially during Glacial times, changes in typhoon genesis and typhoon tracks are the most likely control on the flood layer frequency, rather than changes in the monsoon front or snowmelt. Spectral analysis of the flood chronology revealed periodic variations on centennial and millennial time scales, with cyclicities of 220 yr, 450 yr and 2000 yr being the most pronounced. However, the flood layer frequency appears to have been influenced not only by climate changes, but also by changes in erosion rates due to, for instance, earthquakes.
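The spectral-analysis step can be illustrated generically with a periodogram of flood-layer counts per time bin. This is only a sketch of the technique, not the authors' method or data: the 50-year binning and the synthetic Poisson counts are placeholder assumptions.

```python
import numpy as np

# Placeholder flood-layer counts in consecutive 50-year bins spanning 40 kyr.
rng = np.random.default_rng(0)
bin_width_yr = 50
counts = rng.poisson(1.0, size=800).astype(float)

# Ordinary FFT periodogram of the detrended count series.
detrended = counts - counts.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(len(detrended), d=bin_width_yr)  # cycles per year

# Report the strongest periods, skipping the zero-frequency term.
top = np.argsort(power[1:])[::-1][:3] + 1
for i in top:
    print(f"period ~ {1.0 / freqs[i]:.0f} yr, power = {power[i]:.1f}")
```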
Abstract:
This paper presents a methodology to emulate Single Event Upsets (SEUs) in FPGA flip-flops (FFs). Since the content of an FF is not modifiable through the FPGA configuration memory bits, a dedicated design is required for fault injection in FFs. The method proposed in this paper is a hybrid approach that combines FPGA partial reconfiguration with extra logic added to the circuit under test, without modifying its operation. This approach has been integrated into a fault-injection platform named NESSY (Non-intrusive ErrorS injection SYstem), developed by our research group. Finally, the paper includes results on a Virtex-5 FPGA demonstrating the validity of the method on the ITC'99 benchmark set and a Feed-Forward Equalization (FFE) filter. In comparison with other approaches in the literature, this methodology reduces the resource consumption introduced to carry out the fault injection in FFs, at the cost of adding a very small time overhead (1.6 μs per fault).
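As a software-level analogue only (not the paper's partial-reconfiguration flow on the FPGA), the sketch below shows what injecting an SEU into flip-flop state amounts to: one bit of a simulated register is flipped at a chosen cycle, and the faulty run is compared against a golden run. The toy shift-register circuit and the injection parameters are illustrative assumptions.

```python
import random

def circuit_step(state, inp):
    """Toy sequential circuit: an 8-bit shift register standing in for the DUT."""
    return ((state << 1) | (inp & 1)) & 0xFF

def run(inputs, flip_at=None, flip_bit=0):
    """Run the circuit; optionally flip one FF bit at cycle `flip_at` (the SEU)."""
    state, trace = 0, []
    for cycle, inp in enumerate(inputs):
        state = circuit_step(state, inp)
        if cycle == flip_at:
            state ^= 1 << flip_bit          # emulate the single event upset
        trace.append(state)
    return trace

inputs = [random.randint(0, 1) for _ in range(20)]
golden = run(inputs)
faulty = run(inputs, flip_at=5, flip_bit=random.randint(0, 7))
first_diff = next((c for c, (g, f) in enumerate(zip(golden, faulty)) if g != f), None)
print("fault first observable at cycle:", first_diff)
```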
Abstract:
Acknowledgments The authors gratefully acknowledge the support of the German Research Foundation (DFG) through the Cluster of Excellence ‘Engineering of Advanced Materials’ at the University of Erlangen-Nuremberg and through Grant Po 472/25.
Abstract:
General note: Title and date provided by Bettye Lane.