980 results for event-driven
Abstract:
A disastrous storm surge hit the coast of the Netherlands on 31 January and 1 February 1953. We examine the meteorological situation during this event using the Twentieth Century Reanalysis (20CR) data set. We find a strong pressure gradient between Ireland and northern Germany, accompanied by strong north-westerly winds over the North Sea. Storm-driven sea-level rise combined with spring tide contributed to this extreme event. The state of the atmosphere in 20CR during this extreme event is in good agreement with historical observational data.
Abstract:
Heinrich layers of the glacial North Atlantic record abrupt, widespread iceberg rafting of detrital carbonate and other lithic material at the extreme-cold culminations of Bond climate cycles. Both internal (glaciologic) and external (climate) forcings have been proposed. Here we suggest an explanation for the iceberg release that encompasses external climate forcing, on the basis of a new glaciological process recently witnessed along the Antarctic Peninsula: rapid disintegrations of fringing ice shelves induced by climate-controlled meltwater infilling of surface crevasses. We postulate that peripheral ice shelves, formed along the eastern Canadian seaboard during extreme cold conditions, would be vulnerable to sudden climate-driven disintegration during any climate amelioration. Ice shelf disintegration would then be the source of Heinrich event icebergs.
Abstract:
While most healthy elderly people are able to manage their everyday activities, studies have shown that there are both stable and declining abilities during healthy aging. For example, there is evidence that semantic memory processes that involve controlled retrieval mechanisms decline, whereas the automatic functioning of the semantic network remains intact. In contrast, patients with Alzheimer's disease (AD) suffer from episodic and semantic memory impairments that aggravate their daily functioning. In AD, severe episodic as well as semantic memory deficits are observable. While the hallmark symptom of episodic memory decline in AD is well investigated, the underlying mechanisms of semantic memory deterioration remain unclear. By disentangling the semantic memory impairments in AD, the present thesis aimed to improve early diagnosis and to find a biomarker for dementia. To this end, a study on healthy aging and a study with dementia patients were conducted, investigating automatic and controlled semantic word retrieval. Besides the AD patients, a group of participants diagnosed with semantic dementia (SD) – showing isolated semantic memory loss – was assessed. Automatic and controlled semantic word retrieval was measured with standard neuropsychological tests and by means of event-related potentials (ERPs) recorded during the performance of a semantic priming (SP) paradigm. Special focus was directed to the N400, or N400-LPC (late positive component) complex, an ERP that is sensitive to semantic word retrieval. In both studies, data-driven topographical analyses were applied. Furthermore, in the patient study, the individual baseline cerebral blood flow (CBF) was combined with the N400 topography of each participant in order to relate altered functional electrophysiology to the pathophysiology of dementia.
Results of the aging study revealed that automatic semantic word retrieval remains stable during healthy aging: the N400-LPC complex showed a topography comparable to that of the young participants. Both patient groups showed automatic SP to some extent, but strikingly, the ERP topographies were altered compared to healthy controls. Most importantly, the N400 was identified as a putative marker for dementia. In particular, the degree of topographical N400 similarity was demonstrated to separate healthy elderly people from dementia patients. Furthermore, the marker was significantly related to baseline CBF reduction in brain areas relevant for semantic word retrieval. Summing up, the first major finding of the present thesis was that all groups showed semantic priming, but that the N400 topography differed significantly between healthy and demented elderly participants. The second major contribution was the identification of N400 similarity as a putative marker for dementia. To conclude, the present thesis added evidence of preserved automatic processing during healthy aging. Moreover, it presented a possible marker which might contribute to improved diagnosis and consequently lead to more effective treatment of dementia, and which has to be developed further.
Abstract:
The Late Permian mass extinction event about 252 million years ago was the most severe biotic crisis of the past 500 million years and occurred during an episode of global warming. The loss of around two-thirds of marine genera is thought to have had substantial ecological effects, but the overall impacts on the functioning of marine ecosystems and the pattern of marine recovery are uncertain. Here we analyse the fossil occurrences of all known benthic marine invertebrate genera from the Permian and Triassic periods, and assign each to a functional group based on their inferred lifestyle. We show that despite the selective extinction of 62-74% of these genera, all but one functional group persisted through the crisis, indicating that there was no significant loss of functional diversity at the global scale. In addition, only one new mode of life originated in the extinction aftermath. We suggest that Early Triassic marine ecosystems were not as ecologically depauperate as widely assumed. Functional diversity was, however, reduced in particular regions and habitats, such as tropical reefs; at these smaller scales, recovery varied spatially and temporally, probably driven by migration of surviving groups. We find that marine ecosystems did not return to their pre-extinction state, and by the Middle Triassic greater functional evenness is recorded, resulting from the radiation of previously subordinate groups such as motile, epifaunal grazers.
Abstract:
The mid-Cretaceous is thought to have been a greenhouse world with significantly higher atmospheric pCO2 and sea-surface temperatures, as well as a much flatter latitudinal thermal gradient, compared to the present. This time interval was punctuated by the Cenomanian/Turonian Oceanic Anoxic Event (OAE-2, ~93.5 Myr ago), an episode of global, massive organic carbon burial that likely resulted in a large and abrupt pCO2 decline. However, the climatic consequences of this pCO2 drop remain poorly constrained. We determined the first high-resolution sea-surface temperature (SST) record across OAE-2 from a deep-marine sedimentary sequence at Ocean Drilling Program (ODP) Site 1276 in the mid-latitude Newfoundland Basin, NW Atlantic. By employing the organic palaeothermometer TEX86, we found that SSTs across the OAE-2 interval were extremely high, but were punctuated by a remarkably large cooling (5-11 °C), which is synchronous with the 2.5-5.5 °C cooling in SST records from equatorial Atlantic sites and with the "Plenus Cold Event". Because this global cooling event is concurrent with increased organic carbon burial, it likely occurred in response to the associated pCO2 drop. Our findings imply a substantial increase in the latitudinal SST gradient in the proto-North Atlantic during this period of global cooling and reduced atmospheric pCO2, suggesting a strong coupling between pCO2 and latitudinal thermal gradients under greenhouse climate conditions.
Abstract:
Oxygen minimum zones are expanding globally, and at present account for around 20-40% of oceanic nitrogen loss. Heterotrophic denitrification and anammox-anaerobic ammonium oxidation with nitrite-are responsible for most nitrogen loss in these low-oxygen waters. Anammox is particularly significant in the eastern tropical South Pacific, one of the largest oxygen minimum zones globally. However, the factors that regulate anammox-driven nitrogen loss have remained unclear. Here, we present a comprehensive nitrogen budget for the eastern tropical South Pacific oxygen minimum zone, using measurements of nutrient concentrations, experimentally determined rates of nitrogen transformation and a numerical model of export production. Anammox was the dominant mode of nitrogen loss at the time of sampling. Rates of anammox, and related nitrogen transformations, were greatest in the productive shelf waters, and tailed off with distance from the coast. Within the shelf region, anammox activity peaked in both upper and bottom waters. Overall, rates of nitrogen transformation, including anammox, were strongly correlated with the export of organic matter. We suggest that the sinking of organic matter, and thus the release of ammonium into the water column, together with benthic ammonium release, fuel nitrogen loss from oxygen minimum zones.
Abstract:
Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, supporting a robot controller when other sensor systems, such as tactile and force sensors, are not able to obtain data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when neither force nor pressure data is available. This approach is also used to measure changes in the shape of an object's surfaces, allowing us to find deformations caused by inappropriate pressure applied by the hand's fingers. Tests were carried out on grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognising a pattern located on the robot forearm. The presented experiments demonstrate that the proposed method achieves good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
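The core idea of supervising a grasp from range data alone can be sketched as follows: compare a reference depth map of the object's surface against the current one and raise an event when enough pixels have moved. This is a minimal illustrative sketch, not the paper's actual pipeline; the function name, threshold, and pixel count are assumptions.

```python
def detect_deformation(depth_ref, depth_now, threshold_m=0.005, min_pixels=50):
    """Flag a deformation event when at least `min_pixels` depth readings
    differ from the reference surface by more than `threshold_m` metres.
    Parameter values are illustrative, not taken from the paper."""
    changed = sum(
        1
        for row_ref, row_now in zip(depth_ref, depth_now)
        for d_ref, d_now in zip(row_ref, row_now)
        if abs(d_now - d_ref) > threshold_m
    )
    return changed >= min_pixels  # True -> send event message to controller

# Toy usage: a flat 100x100 surface at 30 cm, then a 1 cm dent over a 10x10 patch.
ref = [[0.30] * 100 for _ in range(100)]
cur = [row[:] for row in ref]
for r in range(40, 50):
    for c in range(40, 50):
        cur[r][c] -= 0.01
print(detect_deformation(ref, cur))  # True: deformation event raised
print(detect_deformation(ref, ref))  # False: surface unchanged
```

In practice the comparison would run on registered depth frames expressed in the forearm-pattern reference frame the abstract describes, so that hand motion does not masquerade as deformation.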
Abstract:
Integrity assurance of configuration data has a significant impact on the reliability of microcontroller-based systems. This is especially true for event-driven applications whose behavior is tightly coupled to this kind of data. This work proposes a new hybrid technique that combines hardware and software resources for detecting and recovering from soft errors in system configuration data. Our approach is based on the use of a common built-in microcontroller resource (a timer) that works jointly with a software-based technique responsible for periodically refreshing the configuration data. The experiments demonstrate that non-destructive single-event effects can be effectively mitigated with reduced overheads. Results show a significant increase in fault coverage for SEUs and SETs, of about one order of magnitude.
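The periodic-refresh idea can be simulated in a few lines: keep a golden copy of the configuration words plus a checksum, and on every (simulated) timer tick rewrite the live registers if the checksum no longer matches. This is a software analogue of the technique under stated assumptions; the class, method names, and CRC choice are illustrative, not the authors' implementation.

```python
import zlib

class ConfigGuard:
    """Simulated timer-driven scrubber: a golden copy of the configuration
    registers is kept in protected storage, and each tick detects and
    repairs single-event upsets in the live copy. Names are illustrative."""

    def __init__(self, config_words):
        self.golden = list(config_words)   # protected master copy
        self.live = list(config_words)     # registers exposed to upsets
        self.crc = zlib.crc32(bytes(self.golden))

    def tick(self):
        """Called periodically (in hardware, from the timer ISR)."""
        if zlib.crc32(bytes(self.live)) != self.crc:
            self.live = list(self.golden)  # refresh corrupted data
            return True                    # an upset was scrubbed
        return False

guard = ConfigGuard([0x1F, 0x00, 0xA5, 0x3C])
guard.live[2] ^= 0x40                      # simulate a single-event upset
print(guard.tick())                        # True: upset detected and repaired
print(guard.live[2] == 0xA5)               # True: value restored
```

On a real microcontroller the golden copy would live in flash or an ECC-protected region and the tick would be the timer interrupt the abstract mentions; the scrub interval trades detection latency against overhead.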
Abstract:
The sharing of near real-time traceability knowledge in supply chains plays a central role in coordinating business operations and is a key driver of their success. However, before traceability datasets received from external partners can be integrated with datasets generated internally within an organisation, they need to be validated against information recorded for the physical goods received, as well as against bespoke rules defined to ensure uniformity, consistency and completeness within the supply chain. In this paper, we present a knowledge-driven framework for the runtime validation of critical constraints on incoming traceability datasets encapsulated as EPCIS event-based linked pedigrees. Our constraints are defined using SPARQL queries and SPIN rules. We present a novel validation architecture based on the integration of the Apache Storm framework for real-time, distributed computation with popular Semantic Web/Linked Data libraries, and exemplify our methodology on an abstraction of the pharmaceutical supply chain.
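The paper's constraints are expressed in SPARQL and SPIN over RDF pedigrees; the flavour of such a completeness-and-consistency check can be conveyed with a plain-Python analogue. Everything here is an assumption for illustration: the field names, the required set, and the EPC URN check do not come from the paper.

```python
# Hypothetical required fields for an incoming EPCIS-style event record.
REQUIRED_FIELDS = {"event_id", "epc", "event_time", "biz_step", "location"}

def validate_event(event):
    """Plain-Python stand-in for the SPARQL/SPIN constraint checks: verify
    that an incoming traceability event is complete and well-formed before
    it is integrated with internal datasets. Returns a list of violations."""
    errors = []
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if "epc" in event and not str(event["epc"]).startswith("urn:epc:"):
        errors.append("epc is not an EPC URN")
    return errors

good = {"event_id": "e1", "epc": "urn:epc:id:sgtin:0614141.107346.2018",
        "event_time": "2015-03-09T12:00:00Z", "biz_step": "shipping",
        "location": "urn:epc:id:sgln:0614141.00777.0"}
print(validate_event(good))              # []: passes validation
print(validate_event({"epc": "12345"}))  # two violations reported
```

In the paper's architecture a check like this would run inside a Storm bolt, so that each pedigree arriving on the stream is validated before integration.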
Abstract:
The paper examines the motivational drivers behind the participation of Hungarian consumers in a special shopping event known as Glamour Days. The study encompasses a variety of related conceptualizations, such as hedonic/utilitarian shopping values, self-gifting and impulsive buying practices. After introducing the relevant consumer behaviour concepts and theoretical frameworks, the paper presents qualitative research on adult and adolescent female consumers' shopping experiences during Glamour Days. Building on phenomenological methodology, the study also portrays the ways this shopping event has changed consumer society within an originally strongly utilitarian-attitude-driven Hungarian culture. The phenomenological interview results highlight differences in the motivational drivers of pleasure-oriented shopping between the two age groups. For teenagers, the main motivation was related to the utilitarian aspect, due to their financial dependence and the special opportunity to stand out from their peer group by joining an event held exclusively for adult women. Adult women, on the other hand, are motivated by combined hedonic and utilitarian values, manifested in self-gifting and impulse buying within an effectively planned and managed shopping trip. Based on the results, retail-specific strategies are provided, along with future research directions.
Abstract:
Modern IT infrastructures are constructed from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient. Service providers therefore often seek automatic or semi-automatic methodologies for detecting and resolving system issues in order to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied by these approaches fall into three aspects of the service workflow: 1) preprocessing raw textual system logs into structured events; 2) refining monitoring configurations to eliminate false positives and false negatives; 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a huge amount of domain knowledge about the particular computing systems. The approaches investigated in this dissertation are built on event mining algorithms, which can automatically derive part of that knowledge from historical system logs, events and tickets. In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, this dissertation presents an efficient algorithm for discovering the temporal dependencies between system events, with corresponding time lags, which can help administrators determine the redundancies of deployed monitoring situations and the dependencies between system components.
To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets, with resolutions, for incoming tickets are investigated. Finally, this dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which helps administrators locate similar system behaviors in the logs. Extensive empirical evaluation on system logs, events and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
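The first step of that workflow, turning raw textual logs into structured events, is commonly done by masking the variable tokens of each line so that lines sharing a template collapse into one event type. The sketch below is a much-simplified stand-in for the dissertation's textual clustering algorithms; the regex patterns and sample logs are illustrative assumptions.

```python
import re
from collections import defaultdict

def signature(line):
    """Reduce a raw log line to its constant template by masking variable
    tokens (hex ids, IPv4 addresses, numbers)."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
    line = re.sub(r"\d+\.\d+\.\d+\.\d+", "<IP>", line)
    line = re.sub(r"\d+", "<NUM>", line)
    return line

logs = [
    "connection from 10.0.0.1 port 5050 closed",
    "connection from 10.0.0.9 port 6617 closed",
    "disk /dev/sda1 usage at 91%",
]

# Group raw lines under their shared template: each key is one event type.
events = defaultdict(list)
for line in logs:
    events[signature(line)].append(line)
print(len(events))  # 2 event types discovered from 3 lines
```

Once lines are grouped into event types like this, the downstream steps the abstract lists (temporal-dependency mining, alert prediction, ticket recommendation) can operate on discrete event streams instead of free text.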
Abstract:
Following the workshop on new developments in daily licensing practice in November 2011, fourteen representatives from national consortia (from Denmark, Germany, the Netherlands and the UK) and publishers (Elsevier, SAGE and Springer) met in Copenhagen on 9 March 2012 to discuss provisions in licences to accommodate new developments. The one-day workshop aimed to: present background and ideas regarding the provisions the KE Licensing Expert Group developed; introduce and explain the provisions the invited publishers currently use; ascertain agreement on the wording for long-term preservation, continuous access and course packs; give insight and more clarity about the use of open access provisions in licences; discuss a roadmap for inclusion of the provisions in the publishers' licences; and result in a report disseminating the outcome of the meeting. Participants of the workshop were: United Kingdom: Lorraine Estelle (Jisc Collections); Denmark: Lotte Eivor Jørgensen (DEFF), Lone Madsen (Southern University of Denmark), Anne Sandfær (DEFF/Knowledge Exchange); Germany: Hildegard Schaeffler (Bavarian State Library), Markus Brammer (TIB); The Netherlands: Wilma Mossink (SURF), Nol Verhagen (University of Amsterdam), Marc Dupuis (SURF/Knowledge Exchange); Publishers: Alicia Wise (Elsevier), Yvonne Campfens (Springer), Bettina Goerner (Springer), Leo Walford (Sage); Knowledge Exchange: Keith Russell. The main outcome of the workshop was that it would be valuable to have a standard set of clauses which could be used in negotiations; this would make concluding licences much easier and more efficient. The comments on the model provisions the Licensing Expert Group had drafted will be taken into account, and the provisions will be reformulated. Data and text mining is a new development, and demand for access to allow for it is growing. It would be easier if there were a simpler way to access materials so that they could be mined more easily.
However, there are still outstanding questions about how the authors of articles that have been mined can be properly attributed.
Abstract:
Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to the electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases, and determining which results are significant, becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize a purely statistical or a purely visual approach to comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies that they were not expecting, but are limited by uncertainty in their findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery.
Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and in the backend (e.g., scalability challenges in running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis, to guide users to significant, distinguishing features between cohorts.
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
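One concrete instance of the per-metric tests such a framework would run automatically is comparing how often a given event occurs in each cohort. The sketch below uses a standard two-proportion z-test for that single metric; it is a minimal illustration of the idea, not the dissertation's HVHT implementation, and the cohort data are invented.

```python
from math import sqrt
from statistics import NormalDist

def prevalence_test(cohort_a, cohort_b, event):
    """Two-sided p-value for the difference in prevalence of `event`
    between two cohorts of event sequences (two-proportion z-test).
    One of many per-metric tests an HVHT-style framework would batch."""
    na, nb = len(cohort_a), len(cohort_b)
    pa = sum(event in seq for seq in cohort_a) / na
    pb = sum(event in seq for seq in cohort_b) / nb
    p = (pa * na + pb * nb) / (na + nb)           # pooled proportion
    se = sqrt(p * (1 - p) * (1 / na + 1 / nb))    # pooled standard error
    z = (pa - pb) / se if se else 0.0
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Invented cohorts: "drugA" appears in 40/50 treated vs 15/50 control records.
treated = [["admit", "drugA", "discharge"]] * 40 + [["admit", "discharge"]] * 10
control = [["admit", "drugA", "discharge"]] * 15 + [["admit", "discharge"]] * 35
print(prevalence_test(treated, control, "drugA") < 0.05)  # True: significant
```

Running tests like this across every event, attribute, and timing metric is exactly what makes multiple-comparison handling and concise result display the central challenges the dissertation addresses.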
Abstract:
This thesis explores organisations' attitudes towards the business processes that sustain them: from a near-total absence of structure, to the functional organisation, up to the advent of Business Process Reengineering and of Business Process Management, which emerged to overcome the limits and problems of the preceding model. Within the BPM life cycle sits the methodology of process mining, which enables process analysis starting from event data logs, i.e. the records of events relating to all activities supported by an enterprise information system. Process mining can be seen as a natural bridge connecting the process-based (but not data-driven) management disciplines with the new developments in business intelligence, which can manage and manipulate the enormous volumes of data available to companies (but which are not process-driven). The thesis describes the requirements and technologies that enable the discipline, as well as the three techniques it supports: process discovery, conformance checking and process enhancement. Process mining was used as the main tool in a consulting project carried out by HSPI S.p.A. on behalf of a major Italian client, a provider of IT platforms and solutions. The project I took part in, described in this thesis, aims to support the organisation in its plan to improve internal performance, and made it possible to verify the applicability and limits of process mining techniques. Finally, the appendix contains a paper I wrote that collects the applications of the discipline in real business contexts, drawing data and information from working papers, business cases and direct channels. For its validity and completeness, this document has been published on the site of the IEEE Task Force on Process Mining.
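The process discovery technique mentioned above typically starts by extracting the directly-follows relation from the event log: how often activity b immediately follows activity a within a case. The sketch below shows that first step on a toy log; it is a minimal sketch of the general technique, with invented activity names, not the tooling used in the HSPI project.

```python
from collections import Counter

def directly_follows(log):
    """Count the directly-follows pairs in an event log (a list of traces,
    each trace an ordered list of activity names). This relation is the
    starting point of classic discovery algorithms such as the Alpha miner."""
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

# Toy event log: three cases of a hypothetical approval process.
log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject", "archive"],
    ["register", "check", "approve", "archive"],
]
df = directly_follows(log)
print(df[("register", "check")])  # 3: every case starts this way
print(df[("check", "approve")])   # 2: approve follows check in two cases
```

From these counts a discovery algorithm induces a process model, which conformance checking then compares against the log and process enhancement annotates with frequencies and durations, i.e. the three techniques the thesis describes.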