636 results for Event Log Comparison


Relevance: 80.00%

Abstract:

This paper proposes the Clinical Pathway Analysis Method (CPAM), an approach that enables the extraction of valuable organisational and medical information on past clinical pathway executions from the event logs of healthcare information systems. The method deals with the complexity of real-world clinical pathways by introducing a perspective-based segmentation of the date-stamped event log. CPAM enables the clinical pathway analyst to acquire profound insight into clinical pathways effectively and efficiently. By comparing the specific medical conditions of patients with the factors used to characterise the different clinical pathway variants, the medical expert can identify the best therapeutic option. Process-mining-based analytics enables the acquisition of valuable insights into clinical pathways, based on the complete audit traces of previous clinical pathway instances. Additionally, the methodology is suited to assessing guideline compliance and analysing adverse events. Finally, it provides support for eliciting tacit knowledge and assisting treatment selection.
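
The abstract describes CPAM's perspective-based segmentation only at a high level. As a rough illustration (not the paper's algorithm), the following Python sketch splits a date-stamped event log into per-perspective sub-logs; the field names (case, activity, date, dept) are hypothetical.

```python
from collections import defaultdict

# Hypothetical date-stamped event records from a healthcare information system.
events = [
    {"case": "p1", "activity": "admission", "date": "2015-03-01", "dept": "ER"},
    {"case": "p1", "activity": "surgery",   "date": "2015-03-02", "dept": "surgical"},
    {"case": "p2", "activity": "admission", "date": "2015-03-05", "dept": "ER"},
]

def segment_by_perspective(events, perspective="dept"):
    """Split the event log into one sub-log per value of the chosen
    perspective attribute, each of which can then be analysed separately."""
    sublogs = defaultdict(list)
    for e in sorted(events, key=lambda e: (e["case"], e["date"])):
        sublogs[e[perspective]].append(e)
    return dict(sublogs)

for value, sublog in segment_by_perspective(events).items():
    print(value, "->", len(sublog), "events")
```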

Relevance: 80.00%

Abstract:

The care processes of healthcare providers are typically considered human-centric, flexible, evolving, complex and multi-disciplinary. Consequently, acquiring insight into the dynamics of these care processes can be an arduous task. This study presents a novel event-log-based approach for extracting valuable medical and organizational information on past executions of care processes. The care processes are analyzed with a preferential set of process mining techniques in order to discover recurring patterns, analyze and characterize process variants, and identify adverse medical events.

Relevance: 80.00%

Abstract:

This article presents a method for checking the conformance between an event log capturing the actual execution of a business process, and a model capturing its expected or normative execution. Given a business process model and an event log, the method returns a set of statements in natural language describing the behavior allowed by the process model but not observed in the log and vice versa. The method relies on a unified representation of process models and event logs based on a well-known model of concurrency, namely event structures. Specifically, the problem of conformance checking is approached by folding the input event log into an event structure, unfolding the process model into another event structure, and comparing the two event structures via an error-correcting synchronized product. Each behavioral difference detected in the synchronized product is then verbalized as a natural language statement. An empirical evaluation shows that the proposed method scales up to real-life datasets while producing more concise and higher-level difference descriptions than state-of-the-art conformance checking methods.
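
The paper's method folds the log and unfolds the model into event structures and compares them via an error-correcting synchronized product. As a far simpler stand-in that conveys the flavour of verbalized behavioural differences, the sketch below compares directly-follows relations (not event structures) between log and model traces; all trace data are invented.

```python
from itertools import pairwise  # requires Python 3.10+

def directly_follows(traces):
    """Collect the set of directly-follows activity pairs over a set of traces."""
    return {(a, b) for trace in traces for a, b in pairwise(trace)}

# Hypothetical toy inputs: each trace is a sequence of activity labels.
log_traces = [("register", "triage", "treat"), ("register", "treat")]
model_traces = [("register", "triage", "treat")]

log_df = directly_follows(log_traces)
model_df = directly_follows(model_traces)

# Verbalize behaviour observed in the log but not allowed by the model, and vice versa.
for a, b in sorted(log_df - model_df):
    print(f"In the log, '{b}' can directly follow '{a}', but not in the model.")
for a, b in sorted(model_df - log_df):
    print(f"In the model, '{b}' can directly follow '{a}', but this was never observed in the log.")
```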

Relevance: 40.00%

Abstract:

Standard Monte Carlo (sMC) simulation models have been widely used in architecture, engineering and construction (AEC) industry research to address system uncertainties. Although the benefits of probabilistic simulation analyses over deterministic methods are well documented, the sMC simulation technique is quite sensitive to the probability distributions of the input variables. This sensitivity becomes highly pronounced when the region of interest within the joint probability distribution (a function of the input variables) is small: in such cases, the standard Monte Carlo approach is often impractical from a computational standpoint. In this paper, a comparative analysis of standard Monte Carlo simulation and Markov chain Monte Carlo with subset simulation (MCMC/ss) is presented. MCMC/ss is a more complex simulation method (relative to sMC) in which a structured sampling algorithm is employed in place of completely randomized sampling, allowing gains in computational efficiency. The two simulation methods are compared via theoretical case studies.
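
The abstract does not spell out the algorithm, but subset simulation is well documented in the literature: a rare-event probability is written as a product of larger conditional probabilities, each estimated at an adaptively chosen intermediate threshold, with MCMC regenerating samples at each level. A minimal NumPy sketch under assumed settings (standard normal inputs and an invented limit-state function g) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Hypothetical limit-state function; "failure" when g(x) > threshold.
    return x.sum(axis=-1)

def subset_simulation(dim=10, n=1000, p0=0.1, threshold=12.0, sigma=1.0):
    """Estimate P[g(X) > threshold] for X ~ N(0, I) via subset simulation."""
    x = rng.standard_normal((n, dim))
    y = g(x)
    prob = 1.0
    while True:
        # Intermediate level: the (1 - p0) quantile of the current responses.
        level = np.quantile(y, 1.0 - p0)
        if level >= threshold:
            # Final level: count exceedances of the true threshold directly.
            return prob * np.mean(y > threshold)
        prob *= p0
        seeds = x[y > level]
        # Random-walk Metropolis on the standard normal, restricted to g > level.
        chains = []
        per_chain = int(np.ceil(n / len(seeds)))
        for s in seeds:
            cur = s
            for _ in range(per_chain):
                cand = cur + sigma * rng.standard_normal(dim)
                # Accept with min(1, phi(cand)/phi(cur)) AND only if still above level.
                if (rng.random() < np.exp(0.5 * (cur @ cur - cand @ cand))
                        and g(cand[None])[0] > level):
                    cur = cand
                chains.append(cur.copy())
        x = np.array(chains[:n])
        y = g(x)

print(subset_simulation())
```

A standard Monte Carlo estimate of the same probability would need on the order of 1/p samples, which is where the computational gains arise.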

Relevance: 40.00%

Abstract:

PURPOSE: To compare diffusion-weighted functional magnetic resonance imaging (DfMRI), a novel alternative to the blood oxygenation level-dependent (BOLD) contrast, in a functional MRI experiment. MATERIALS AND METHODS: Nine participants viewed contrast-reversing (7.5 Hz) black-and-white checkerboard stimuli using block and event-related paradigms. DfMRI (b = 1800 s/mm²) and BOLD sequences were acquired. Four parameters describing the observed signal were assessed: percent signal change, spatial extent of the activation, the Euclidean distance between peak voxel locations, and the time-to-peak (TTP) of the best-fitting impulse response for the different paradigms and sequences. RESULTS: The BOLD conditions showed a higher percent signal change relative to DfMRI; however, event-related DfMRI showed the strongest group activation (t = 21.23, P < 0.0005). Activation was more diffuse and spatially closer to the BOLD response for DfMRI when the block design was used. The event-related DfMRI condition showed the shortest TTP (4.4 ± 0.88 s). CONCLUSION: The hemodynamic contribution to DfMRI may increase with the use of block designs.

Relevance: 30.00%

Abstract:

Searching for multimedia is an important activity for users of Web search engines. Studying users' interactions with Web search engine multimedia buttons, including image, audio, and video, is important for the development of multimedia Web search systems. This article provides results from a Weblog analysis study of multimedia Web searching by Dogpile users in 2006. The study analyzes the (a) duration, size, and structure of Web search queries and sessions; (b) user demographics; (c) most popular multimedia Web search terms; and (d) use of advanced Web search techniques, including Boolean and natural language. The current findings are compared with results from previous multimedia Web searching studies. The key findings are: (a) since 1997, image search has consistently been the dominant media type, followed by audio and video; (b) multimedia search duration is still short (more than 50% of searching episodes are under 1 min) and uses few search terms; (c) many multimedia searches are for information about people, especially in audio search; and (d) multimedia search has begun to shift from entertainment to other categories such as medical, sports, and technology (based on the most repeated terms). Implications for the design of Web multimedia search engines are discussed.
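
As an illustration of the kind of session-level computation such a Weblog analysis involves, the sketch below splits per-user query records into sessions using a 30-minute inactivity cutoff (a common convention, assumed here rather than taken from the paper); the records are invented.

```python
from datetime import datetime, timedelta
from itertools import groupby

# Hypothetical log records: (user_id, timestamp, query, media_tab).
records = [
    ("u1", datetime(2006, 5, 1, 10, 0), "beach photos", "image"),
    ("u1", datetime(2006, 5, 1, 10, 2), "beach sunset", "image"),
    ("u2", datetime(2006, 5, 1, 11, 0), "free music", "audio"),
]

SESSION_GAP = timedelta(minutes=30)  # assumed inactivity cutoff

def sessions(user_records):
    """Split one user's time-ordered records into sessions on the gap cutoff."""
    out, cur = [], []
    for rec in user_records:
        if cur and rec[1] - cur[-1][1] > SESSION_GAP:
            out.append(cur)
            cur = []
        cur.append(rec)
    if cur:
        out.append(cur)
    return out

records.sort(key=lambda r: (r[0], r[1]))
for user, recs in groupby(records, key=lambda r: r[0]):
    for s in sessions(list(recs)):
        duration = s[-1][1] - s[0][1]
        print(user, len(s), "queries,", duration)
```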

Relevance: 30.00%

Abstract:

Numerous expert elicitation methods have been suggested for generalised linear models (GLMs). This paper compares three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression. These methods were trialled on two experts in order to model the habitat suitability of the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata). The first elicitation approach is a geographically assisted indirect predictive method with a geographic information system (GIS) interface. The second is an indirect predictive method that uses an interactive graphical tool. The third uses a questionnaire to elicit expert knowledge directly about the impact of a habitat variable on the response. Two variables (slope and aspect) are used to examine the prior and posterior distributions obtained from the three methods. The results indicate both similarities and dissimilarities between the expert-informed priors of the two experts as formulated by the different approaches. The choice of elicitation method depends on the statistical knowledge of the expert, their mapping skills, time constraints, access to experts, and the funding available. This trial reveals that expert knowledge can be important when modelling rare-event data, such as threatened species, because experts can provide additional information that may not be represented in the dataset. However, care must be taken with the way in which this information is elicited and formulated.
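
To make the role of elicited priors concrete, here is a minimal sketch (not any of the paper's three elicitation methods themselves) in which an expert's knowledge enters a Bayesian logistic regression as independent normal priors on the coefficients, with the posterior mode found by optimisation; the data and prior numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical survey data: slope (degrees) and presence/absence of the species.
X = np.column_stack([np.ones(40), rng.uniform(0, 45, 40)])  # intercept, slope
y = (rng.random(40) < 1 / (1 + np.exp(-(-2 + 0.08 * X[:, 1])))).astype(float)

# Elicited prior: the expert's beliefs about each coefficient, as independent normals.
prior_mean = np.array([-1.0, 0.05])
prior_sd = np.array([2.0, 0.05])

def neg_log_posterior(beta):
    eta = X @ beta
    loglik = y @ eta - np.logaddexp(0, eta).sum()   # Bernoulli-logit likelihood
    logprior = -0.5 * (((beta - prior_mean) / prior_sd) ** 2).sum()
    return -(loglik + logprior)

fit = minimize(neg_log_posterior, prior_mean)
print("Posterior mode:", fit.x)
```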

Relevance: 30.00%

Abstract:

Major Web search engines, such as AltaVista, are essential tools in the quest to locate online information. This article reports research that used transaction log analysis to examine the characteristics of, and changes in, AltaVista Web searching between 1998 and 2002. The research questions we examined are: (1) What are the changes in AltaVista Web searching from 1998 to 2002? (2) What are the current characteristics of AltaVista searching, including the duration and frequency of search sessions? (3) What changes in the information needs of AltaVista users occurred between 1998 and 2002? The results show: (1) a move toward more interactivity, with increases in session and query length; (2) more frequent but faster interaction, with 70% of session durations at 5 minutes or less; and (3) a broadening range of Web searchers' information needs, with the most frequent terms accounting for less than 1% of total term usage. We discuss the implications of these findings for the development of Web search engines.

Relevance: 30.00%

Abstract:

Background: In-depth investigations of crash risks inform prevention and safety promotion programmes. Traditionally, they are conducted using exposure-controlled or case-control methodology. However, these studies need either observational data for control cases or exogenous exposure data, such as vehicle-kilometres travelled, entry flow, or the product of conflicting flows for a particular traffic location or site. These data are not readily available and often require extensive data collection effort on a system-wide basis. Aim: The objective of this research is to propose an alternative methodology for investigating the crash risks of a road user group in different circumstances using readily available traffic police crash data. Methods: This study employs a combination of a log-linear model and the quasi-induced exposure technique to estimate the crash risks of a road user group. While the log-linear model reveals the significant interactions, and thus the prevalence of crashes of a road user group under various sets of traffic, environmental and roadway factors, the quasi-induced exposure technique estimates the relative exposure of that road user group under the same set of explanatory variables. The combination of the two techniques therefore provides relative measures of crash risk under various roadway, environmental and traffic conditions. The proposed methodology is illustrated using five years of Brisbane motorcycle crash data. Results: Interpretation of the results for different combinations of interacting factors shows that the poor conspicuity of motorcycles is a predominant cause of motorcycle crashes. The inability of other drivers to correctly judge the speed and distance of an oncoming motorcyclist is also evident in right-of-way-violation motorcycle crashes at intersections. Discussion and Conclusions: The combination of a log-linear model and the induced exposure technique is a promising methodology that can be applied to better estimate the crash risks of other road users. This study also highlights the importance of considering interaction effects to better understand hazardous situations. A further comparison between the proposed methodology and the case-control method would be useful.
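
A hedged sketch of the log-linear half of the methodology: fitting a Poisson log-linear model with an interaction term to contingency-table crash counts using statsmodels. The counts and factor levels are invented, and the quasi-induced-exposure step is omitted.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical contingency-table counts of crashes by road user group and light condition.
df = pd.DataFrame({
    "road_user": ["motorcycle", "motorcycle", "car", "car"],
    "light":     ["day", "night", "day", "night"],
    "count":     [120, 90, 400, 150],
})

# Poisson log-linear model with an interaction term; a significant interaction
# indicates over-representation of a road user group under that condition.
model = sm.GLM.from_formula("count ~ road_user * light", data=df,
                            family=sm.families.Poisson()).fit()
print(model.summary())
```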

Relevance: 30.00%

Abstract:

Traffic safety studies demand more than current micro-simulation models can provide, as these models presume that all drivers of motor vehicles exhibit safe behaviours. Several car-following models are used in various micro-simulation packages. This research compares the mainstream car-following models' capabilities of emulating precise driver behaviour parameters such as headway and time to collision (TTC). The comparison firstly illustrates which model is more robust in reproducing these metrics. Secondly, the study conducted a series of sensitivity tests to further explore the behaviour of each model. Based on the outcome of these two exploration steps, a modified structure and parameter adjustment is proposed for each car-following model to simulate more realistic vehicle movements, particularly headways and TTCs below a certain critical threshold. NGSIM vehicle trajectory data are used to evaluate the modified models' performance in assessing critical safety events within traffic flow. The simulation test outcomes indicate that the proposed modified models reproduce the frequency of critical TTC events better than the generic models, while the improvement in headway is not significant. The outcome of this paper facilitates traffic safety assessment using microscopic simulation.
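
The two safety metrics at the centre of this comparison are straightforward to compute from trajectories: time headway is the bumper-to-bumper gap divided by the follower's speed, and TTC is the gap divided by the closing speed (defined only while closing). A small sketch with invented trajectory values:

```python
import numpy as np

def headway_and_ttc(lead_pos, follow_pos, follow_speed, lead_speed, lead_len=4.5):
    """Time headway and time-to-collision for a follower-leader pair.

    Positions in metres along the lane, speeds in m/s; all arrays have one
    entry per time step. lead_len is a nominal vehicle length (assumed)."""
    gap = lead_pos - follow_pos - lead_len            # bumper-to-bumper gap
    headway = np.where(follow_speed > 0, gap / follow_speed, np.inf)
    closing = follow_speed - lead_speed               # positive when closing in
    ttc = np.where(closing > 0, gap / closing, np.inf)
    return headway, ttc

# Hypothetical trajectory slice (e.g., resampled from data such as NGSIM).
lead = np.array([50.0, 52.0, 54.0])
follow = np.array([30.0, 33.0, 36.0])
h, ttc = headway_and_ttc(lead, follow, follow_speed=np.full(3, 3.0),
                         lead_speed=np.full(3, 2.0))
print("critical TTC events (< 4 s):", np.sum(ttc < 4.0))
```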

Relevance: 30.00%

Abstract:

Extracting and aggregating the relevant event records relating to an identified security incident from the multitude of heterogeneous logs in an enterprise network is a difficult challenge. Presenting the information in a meaningful way is an additional challenge. This paper looks at solutions to this problem by first identifying three main transforms: log collection, correlation, and visual transformation. Having identified that the Common Event Expression (CEE) project will address the first transform, this paper focuses on the second, while the third is left for future work. To aggregate by correlating event records, we demonstrate the use of two correlation methods: simple and composite. These make use of a defined mapping schema and confidence values to dynamically query the normalised dataset and to constrain result events to within a time window. Doing so improves the quality of the results, as required for the iterative re-querying process being undertaken. The final results of the process are output as nodes and edges suitable for presentation as a network graph.
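
As a rough illustration of the "simple" correlation method described (the composite method and confidence weighting are omitted), the sketch below pairs normalised event records that share a mapped field value within a time window and emits them as graph edges; the records and field names are invented.

```python
from datetime import datetime, timedelta

# Hypothetical normalised event records from heterogeneous logs.
events = [
    {"time": datetime(2024, 1, 1, 9, 0, 0),  "host": "fw1",  "src_ip": "10.0.0.5", "type": "deny"},
    {"time": datetime(2024, 1, 1, 9, 0, 20), "host": "web1", "src_ip": "10.0.0.5", "type": "login_fail"},
    {"time": datetime(2024, 1, 1, 13, 0, 0), "host": "web1", "src_ip": "10.0.0.9", "type": "login_ok"},
]

WINDOW = timedelta(minutes=5)

def simple_correlate(events, field="src_ip"):
    """Pair time-ordered events that share one mapped field value within the window."""
    edges = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if b["time"] - a["time"] > WINDOW:
                break  # events are time-ordered, so no later match is possible
            if a[field] == b[field]:
                edges.append((a["host"], b["host"], a[field]))
    return edges

nodes = {e["host"] for e in events}
print("nodes:", nodes)
print("edges:", simple_correlate(sorted(events, key=lambda e: e["time"])))
```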

Relevance: 30.00%

Abstract:

Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to the species' biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3] - for detection of insects over a broad range of insect densities. Although the double logarithmic and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
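
The zero-probability of the assumed count distribution drives such sample-size results: under a Poisson model a sample is empty with probability e^(-mu), while under a negative binomial with dispersion k it is (k/(k+mu))^k, which is larger for clumped (small-k) populations and so demands more samples. A small sketch, assuming independent samples:

```python
import numpy as np

def p_zero_poisson(mu):
    """Probability a single sample contains no insects under a Poisson model."""
    return np.exp(-mu)

def p_zero_negbin(mu, k):
    """Zero probability under a negative binomial with mean mu and dispersion k;
    small k means strongly clumped insects."""
    return (k / (k + mu)) ** k

def samples_needed(p_zero, confidence=0.95):
    """Samples needed to detect at least one insect with the given confidence,
    assuming independent samples."""
    return int(np.ceil(np.log(1 - confidence) / np.log(p_zero)))

mu = 0.1  # hypothetical mean insects per sample
for name, p0 in [("Poisson", p_zero_poisson(mu)),
                 ("NegBin k=0.5", p_zero_negbin(mu, 0.5))]:
    print(name, "- samples for 95% detection:", samples_needed(p0))
```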

Relevance: 30.00%

Abstract:

Background: Heat-related impacts may have greater public health implications as climate change continues, so it is important to appropriately characterise the relationship between heatwaves and health outcomes. However, it is unclear whether a case-crossover design can be effectively used to assess event- or episode-related health effects. This study examined the association between exposure to heatwaves and mortality and emergency hospital admissions (EHAs) from non-external causes in Brisbane, Australia, using both case-crossover and time series analysis approaches. Methods: A Poisson generalised additive model (GAM) and time-stratified case-crossover analyses were used to assess the short-term impact of heatwaves on mortality and EHAs, adjusting for air pollution, day of the week, and season. Results: Heatwaves exhibited a significant impact on both mortality and EHAs. In the time-stratified case-crossover analysis, odds ratios of mortality and EHAs during heatwaves were 1.62 (95% confidence interval (CI): 1.36–1.94) and 1.22 (95% CI: 1.14–1.30) at lag 1, respectively. Time series GAM models gave similar results: relative risks of mortality and EHAs ranged from 1.72 (95% CI: 1.40–2.11) to 1.81 (95% CI: 1.56–2.10) and from 1.14 (95% CI: 1.06–1.23) to 1.28 (95% CI: 1.21–1.36) at lag 1, respectively. The risk estimates gradually attenuated after a lag of one day in both the case-crossover and time series analyses. Conclusions: The risk estimates from the case-crossover and time series models were consistent and comparable. This finding may have implications for future research on the assessment of event- or episode-related (e.g., heatwave) health effects.
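
For readers unfamiliar with the design, a time-stratified case-crossover compares exposure on the case day with exposure on referent days in the same stratum, typically the same weekday within the same calendar month. A minimal sketch of that referent selection (the stratum definition is the conventional one, not necessarily the paper's exact choice):

```python
from datetime import date, timedelta

def time_stratified_referents(event_day):
    """Control days for a case day: same year, month and weekday, with the
    case day itself excluded, as in a time-stratified case-crossover design."""
    refs = []
    d = date(event_day.year, event_day.month, 1)
    while d.month == event_day.month:
        if d.weekday() == event_day.weekday() and d != event_day:
            refs.append(d)
        d += timedelta(days=1)
    return refs

print(time_stratified_referents(date(2004, 2, 10)))  # the other Tuesdays of Feb 2004
```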

Relevance: 30.00%

Abstract:

Spatial data are now prevalent in a wide range of fields, including environmental and health science. This has led to the development of a range of approaches for analysing patterns in these data. In this paper, we compare several Bayesian hierarchical models for analysing point-based data based on the discretization of the study region, resulting in grid-based spatial data. The approaches considered include two parametric models and a semiparametric model. We highlight the methodology and computation for each approach. Two simulation studies are undertaken to compare the performance of these models for various structures of simulated point-based data resembling environmental data. A case study of a real dataset is also conducted to demonstrate a practical application of the modelling approaches. Goodness-of-fit statistics are computed to compare estimates of the intensity functions. The deviance information criterion is also considered as an alternative model evaluation criterion. The results suggest that the adaptive Gaussian Markov random field model performs well for highly sparse point-based data with large variations or clustering across the space, whereas the discretized log Gaussian Cox process produces a good fit for dense and clustered point-based data. One should generally consider the nature and structure of the point-based data in order to choose the appropriate model for discretized spatial point-based data.
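
The grid-based data that the compared models take as input come from discretizing the point pattern, i.e., counting points per grid cell. A minimal sketch over the unit square with invented points:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical point-based observations in the unit square.
points = rng.random((500, 2))

def discretize(points, nx=20, ny=20):
    """Count points per grid cell over the unit square, yielding the grid-based
    counts used as input to the spatial models."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                  bins=[nx, ny], range=[[0, 1], [0, 1]])
    return counts

grid = discretize(points)
print(grid.shape, grid.sum())  # (20, 20) 500.0
```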

Relevance: 30.00%

Abstract:

This study was designed to identify the neural networks underlying automatic auditory deviance detection in 10 healthy subjects using functional magnetic resonance imaging. We measured blood oxygenation level-dependent contrasts derived from the comparison of blocks of stimuli presented as a series of standard tones (50 ms duration) alone versus blocks that contained rare duration-deviant tones (100 ms) that were interspersed among a series of frequent standard tones while subjects were watching a silent movie. Possible effects of scanner noise were assessed by a “no tone” condition. In line with previous positron emission tomography and EEG source modeling studies, we found temporal lobe and prefrontal cortical activation that was associated with auditory duration mismatch processing. Data were also analyzed employing an event-related hemodynamic response model, which confirmed activation in response to duration-deviant tones bilaterally in the superior temporal gyrus and prefrontally in the right inferior and middle frontal gyri. In line with previous electrophysiological reports, mismatch activation of these brain regions was significantly correlated with age. These findings suggest a close relationship of the event-related hemodynamic response pattern with the corresponding electrophysiological activity underlying the event-related “mismatch negativity” potential, a putative measure of auditory sensory memory.