978 results for Event Log Comparison
Abstract:
This paper proposes the Clinical Pathway Analysis Method (CPAM), an approach that enables the extraction of valuable organisational and medical information on past clinical pathway executions from the event logs of healthcare information systems. The method deals with the complexity of real-world clinical pathways by introducing a perspective-based segmentation of the date-stamped event log. CPAM enables the clinical pathway analyst to effectively and efficiently acquire a profound insight into the clinical pathways. By comparing the specific medical conditions of patients with the factors used for characterising the different clinical pathway variants, the medical expert can identify the best therapeutic option. Process mining-based analytics enables the acquisition of valuable insights into clinical pathways, based on the complete audit traces of previous clinical pathway instances. Additionally, the methodology is suited to assessing guideline compliance and analysing adverse events. Finally, the methodology provides support for eliciting tacit knowledge and providing treatment selection assistance.
Abstract:
The care processes of healthcare providers are typically considered human-centric, flexible, evolving, complex and multi-disciplinary. Consequently, acquiring insight into the dynamics of these care processes can be an arduous task. This study presents a novel event-log-based approach for extracting valuable medical and organizational information on past executions of care processes. Care processes are analyzed with a preferential set of process mining techniques in order to discover recurring patterns, analyze and characterize process variants and identify adverse medical events.
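Process mining analyses of this kind typically begin by grouping the log into variants, i.e. the distinct activity sequences that cases follow. The sketch below shows that first step in plain Python/pandas; the file name and the column names (case_id, activity, timestamp) are hypothetical, not taken from the study.

```python
from collections import Counter
import pandas as pd

# Hypothetical export of a care-process event log.
log = pd.read_csv("care_process_log.csv", parse_dates=["timestamp"])

# Order each case's events chronologically and collapse them into a trace.
traces = (
    log.sort_values("timestamp")
       .groupby("case_id")["activity"]
       .apply(tuple)
)

# A "variant" is a distinct activity sequence; count how many cases follow each.
variant_counts = Counter(traces)
for variant, count in variant_counts.most_common(5):
    print(f"{count:4d} cases: {' -> '.join(variant)}")
```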
Abstract:
This article presents a method for checking the conformance between an event log capturing the actual execution of a business process, and a model capturing its expected or normative execution. Given a business process model and an event log, the method returns a set of statements in natural language describing the behavior allowed by the process model but not observed in the log and vice versa. The method relies on a unified representation of process models and event logs based on a well-known model of concurrency, namely event structures. Specifically, the problem of conformance checking is approached by folding the input event log into an event structure, unfolding the process model into another event structure, and comparing the two event structures via an error-correcting synchronized product. Each behavioral difference detected in the synchronized product is then verbalized as a natural language statement. An empirical evaluation shows that the proposed method scales up to real-life datasets while producing more concise and higher-level difference descriptions than state-of-the-art conformance checking methods.
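The verbalization idea can be illustrated with a much simpler stand-in for the paper's event-structure machinery: compare the directly-follows relations of the log and of the model's traces, and phrase each discrepancy as a sentence. The sketch below is only that simplified illustration with made-up traces, not the error-correcting synchronized product the article describes.

```python
# Simplified illustration of conformance-difference verbalization.
def directly_follows(traces):
    """Return the set of (a, b) pairs where b directly follows a in some trace."""
    pairs = set()
    for trace in traces:
        pairs.update(zip(trace, trace[1:]))
    return pairs

log_traces   = [("register", "triage", "treat", "discharge"),
                ("register", "treat", "discharge")]
model_traces = [("register", "triage", "treat", "discharge")]

log_rel, model_rel = directly_follows(log_traces), directly_follows(model_traces)

for a, b in sorted(log_rel - model_rel):
    print(f"In the log, '{b}' can directly follow '{a}', but the model does not allow this.")
for a, b in sorted(model_rel - log_rel):
    print(f"The model allows '{b}' directly after '{a}', but this was never observed in the log.")
```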
Abstract:
The contemporary directions of art galleries worldwide are changing as social patterns and demands, as well as visitor expectations of their experiences at art galleries, change. New programs and strategies are being developed in galleries to make these institutions more appealing to people who would not normally visit them, and one such strategy is the staging of special events. However, because galleries are staging an increasing number of special events, the factors motivating visitors to attend these institutions are changing. Visitors hope to have different experiences and encounters in the gallery during special events. This paper presents the findings from a study in Australia about visitors’ motivations to attend special events in galleries. It highlights the different factors that motivate visitors to attend the gallery specifically for a special event in comparison to visiting the gallery's permanent collections.
Abstract:
During summer 2014 (mid-July to mid-September 2014), early life-stage Fucus vesiculosus were exposed to combined ocean acidification and warming (OAW) in the presence and absence of enhanced nutrient levels (OAW × N experiment). Subsequently, F. vesiculosus germlings were exposed to a final upwelling disturbance for 3 days (mid-September 2014). Experiments were performed in the near-natural scenario "Kiel Outdoor Benthocosms", including natural fluctuations, in the southwestern Baltic Sea, Kiel Fjord, Germany (54°27'N, 10°11'E). Genetically different sibling groups and different levels of genetic diversity were employed to test to what extent genetic variation would result in response variation. The data presented here show the phenotypic response (growth and survival) of the different experimental populations of F. vesiculosus under OAW, nutrient enrichment and the upwelling event. Log effect ratios demonstrate the responses to enhanced OAW and nutrient concentrations relative to ambient conditions. Carbon and nitrogen content (% DW) and C:N ratios were measured after exposure to ambient and high nutrient levels. Abiotic conditions during the OAW × nutrient experiment and the upwelling event are also shown.
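For readers unfamiliar with the metric, a log effect (response) ratio is simply the natural logarithm of a treatment mean divided by the corresponding ambient-control mean. The toy numbers below are hypothetical and only illustrate the computation.

```python
import numpy as np

# Hypothetical growth measurements (not from the dataset described above).
growth_treatment = np.array([0.8, 1.1, 0.9, 1.0])   # e.g., OAW + nutrients
growth_ambient   = np.array([1.4, 1.6, 1.5, 1.3])   # ambient controls

# Log effect ratio: ln(treatment mean / ambient mean).
lrr = np.log(growth_treatment.mean() / growth_ambient.mean())
print(f"log effect ratio = {lrr:.2f}  (negative = reduced growth under treatment)")
```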
Abstract:
Standard Monte Carlo (sMC) simulation models have been widely used in architecture, engineering and construction (AEC) industry research to address system uncertainties. Although the benefits of probabilistic simulation analyses over deterministic methods are well documented, the sMC simulation technique is quite sensitive to the probability distributions of the input variables. This sensitivity becomes highly pronounced when the region of interest within the joint probability distribution (a function of the input variables) is small. In such cases, the standard Monte Carlo approach is often impractical from a computational standpoint. In this paper, a comparative analysis of standard Monte Carlo simulation and Markov Chain Monte Carlo with subset simulation (MCMC/ss) is presented. The MCMC/ss technique constitutes a more complex simulation method (relative to sMC), wherein a structured sampling algorithm is employed in place of completely randomized sampling. Consequently, gains in computational efficiency can be made. The two simulation methods are compared via theoretical case studies.
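The computational burden that motivates MCMC/ss is easy to see in a toy standard Monte Carlo run: when the failure region is small, a very large number of random samples is needed before the estimate stabilizes. The limit-state function, input distributions, and sample size below are hypothetical, chosen only to illustrate that point.

```python
import numpy as np

rng = np.random.default_rng(42)

def g(x1, x2):
    # Toy limit-state function: failure when the combined demand exceeds a high threshold.
    return 6.0 - (x1 + x2)

n = 1_000_000
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.normal(0.0, 1.0, n)
failures = np.count_nonzero(g(x1, x2) < 0)

p_f = failures / n
cov = np.sqrt((1 - p_f) / (n * p_f)) if failures else float("inf")
print(f"P_f ~ {p_f:.2e}, coefficient of variation ~ {cov:.2f}")
# For P_f around 1e-5, keeping the coefficient of variation below ~10% needs on the
# order of 1e7 plain random samples; subset simulation reaches comparable accuracy by
# estimating a sequence of larger conditional probabilities with structured sampling.
```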
Abstract:
PURPOSE To compare diffusion-weighted functional magnetic resonance imaging (DfMRI), a novel alternative to the blood oxygenation level-dependent (BOLD) contrast, with BOLD imaging in a functional MRI experiment. MATERIALS AND METHODS Nine participants viewed contrast-reversing (7.5 Hz) black-and-white checkerboard stimuli using block and event-related paradigms. DfMRI (b = 1800 s/mm²) and BOLD sequences were acquired. Four parameters describing the observed signal were assessed: percent signal change, spatial extent of the activation, the Euclidean distance between peak voxel locations, and the time-to-peak (TTP) of the best-fitting impulse response for the different paradigms and sequences. RESULTS The BOLD conditions showed a higher percent signal change relative to DfMRI; however, event-related DfMRI showed the strongest group activation (t = 21.23, P < 0.0005). Activation was more diffuse and spatially closer to the BOLD response for DfMRI when the block design was used. Event-related DfMRI showed the shortest TTP (4.4 ± 0.88 s). CONCLUSION The hemodynamic contribution to DfMRI may increase with the use of block designs.
Abstract:
In this article, we explore whether cross-linguistic differences in grammatical aspect encoding may give rise to differences in memory and cognition. We compared native speakers of two languages that encode aspect differently (English and Swedish) in four tasks that examined verbal descriptions of stimuli, online triads matching, and memory-based triads matching with and without verbal interference. Results showed between-group differences in verbal descriptions and in memory-based triads matching. However, no differences were found in online triads matching and in memory-based triads matching with verbal interference. These findings need to be interpreted in the context of the overall pattern of performance, which indicated that both groups based their similarity judgments on common perceptual characteristics of motion events. These results show for the first time a cross-linguistic difference in memory as a function of differences in grammatical aspect encoding, but they also contribute to the emerging view that language fine-tunes rather than shapes perceptual processes that are likely to be universal and unchanging.
Abstract:
A measurement of the underlying activity in events with a jet of transverse momentum in the several-GeV region is performed in proton-proton collisions at √s = 0.9 and 7 TeV, using data collected by the CMS experiment at the LHC. The production of charged particles with pseudorapidity |η| < 2 and transverse momentum pT > 0.5 GeV/c is studied in the azimuthal region transverse to that of the leading set of charged particles forming a track-jet. A significant growth of the average multiplicity and scalar-pT sum of the particles in the transverse region is observed with increasing pT of the leading track-jet, followed by a much slower rise above a few GeV/c. For track-jet pT larger than a few GeV/c, the activity in the transverse region is approximately doubled with a centre-of-mass energy increase from 0.9 to 7 TeV. Predictions of several QCD-inspired models as implemented in PYTHIA are compared to the data.
Abstract:
The occupant impact velocity (OIV) and acceleration severity index (ASI) are competing measures of crash severity used to assess occupant injury risk in full-scale crash tests involving roadside safety hardware, e.g. guardrail. Delta-V, or the maximum change in vehicle velocity, is the traditional metric of crash severity for real-world crashes. This study compares the ability of the OIV, ASI, and delta-V to discriminate between serious and non-serious occupant injury in real-world frontal collisions. Vehicle kinematics data from event data recorders (EDRs) were matched with detailed occupant injury information for 180 real-world crashes. Cumulative probability of injury risk curves were generated using binary logistic regression for belted and unbelted data subsets. Based on a comparison of the available fit statistics and a separate ROC curve analysis, the more computationally intensive OIV and ASI were found to offer no significant predictive advantage over the simpler delta-V.
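The analysis pattern (binary logistic regression of injury outcome on a severity metric, followed by a discrimination comparison via ROC) could be sketched as follows on simulated data; the data, coefficients, and variable names are synthetic, not the EDR-matched cases from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 180
delta_v = rng.uniform(5, 60, n)          # synthetic severity metric (km/h)
oiv = delta_v + rng.normal(0, 6, n)      # noisy correlate standing in for OIV

# Simulate serious-injury outcomes whose risk rises with delta-V.
p_injury = 1 / (1 + np.exp(-(0.12 * delta_v - 4.5)))
injured = rng.binomial(1, p_injury)

# Fit an injury-risk curve per metric and compare discrimination via ROC AUC.
for name, metric in [("delta-V", delta_v), ("OIV", oiv)]:
    model = LogisticRegression().fit(metric.reshape(-1, 1), injured)
    risk = model.predict_proba(metric.reshape(-1, 1))[:, 1]
    print(f"{name}: ROC AUC = {roc_auc_score(injured, risk):.3f}")
```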
Abstract:
BACKGROUND Biodegradable polymers for release of antiproliferative drugs from drug-eluting stents aim to improve vascular healing. We assessed noninferiority of a novel ultrathin-strut drug-eluting stent releasing sirolimus from a biodegradable polymer (Orsiro, O-SES) compared with the durable polymer Xience Prime everolimus-eluting stent (X-EES) in terms of the primary end point, in-stent late lumen loss at 9 months. METHODS AND RESULTS A total of 452 patients were randomly assigned 2:1 to treatment with O-SES (298 patients, 332 lesions) or X-EES (154 patients, 173 lesions) in a multicenter, noninferiority trial. The primary end point was in-stent late loss at 9 months. O-SES was noninferior to X-EES for the primary end point (0.10±0.32 versus 0.11±0.29 mm; difference=0.00063 mm; 95% confidence interval, -0.06 to 0.07; P for noninferiority <0.0001). Clinical outcome showed similar rates of target-lesion failure at 1 year (O-SES 6.5% versus X-EES 8.0%; hazard ratio=0.82; 95% confidence interval, 0.40-1.68; log-rank test: P=0.58) without cases of stent thrombosis. A subgroup of patients (n=55) underwent serial optical coherence tomography at 9 months, which demonstrated similar neointimal thickness among lesions allocated to O-SES and X-EES (0.10±0.04 mm versus 0.11±0.04 mm; -0.01 [-0.04, -0.01]; P=0.37). Another subgroup of patients (n=56) underwent serial intravascular ultrasound at baseline and 9 months, indicating a potential difference in neointimal area at follow-up (O-SES, 0.16±0.33 mm² versus X-EES, 0.43±0.56 mm²; P=0.04). CONCLUSIONS Compared with the durable polymer X-EES, the novel biodegradable polymer-based O-SES was found noninferior for the primary end point of in-stent late lumen loss at 9 months. Clinical event rates were comparable, without cases of stent thrombosis throughout 1 year of follow-up. CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifier: NCT01356888.
Abstract:
The determination of the size as well as the power of a test is a vital part of clinical trial design. This research focuses on the simulation of clinical trial data with time-to-event as the primary outcome. It investigates the impact of different recruitment patterns and time-dependent hazard structures on the size and power of the log-rank test. A non-homogeneous Poisson process is used to simulate entry times according to the different accrual patterns. A Weibull distribution is employed to simulate survival times according to the different hazard structures. The current study utilizes simulation methods to evaluate the effect of different recruitment patterns on the size and power estimates of the log-rank test. The size of the log-rank test is estimated by simulating survival times with identical hazard rates between the treatment and the control arm of the study, resulting in a hazard ratio of one. Powers of the log-rank test at specific values of the hazard ratio (≠ 1) are estimated by simulating survival times with different, but proportional, hazard rates for the two arms of the study. Different shapes (constant, decreasing, or increasing) of the hazard function of the Weibull distribution are also considered to assess the effect of hazard structure on the size and power of the log-rank test.
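A minimal version of such a power simulation can be sketched as follows, assuming Weibull survival times under proportional hazards, uniform accrual in place of the non-homogeneous Poisson process, administrative censoring at the end of follow-up, and a hand-rolled two-sample log-rank statistic; all parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

def logrank_z(time, event, group):
    """Two-sample log-rank statistic (group coded 0/1); returns the Z value."""
    order = np.argsort(time)
    time, event, group = time[order], event[order], group[order]
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

def simulate_trial(n_per_arm, hr, shape, scale, accrual, follow_up, rng):
    """Weibull survival under proportional hazards, uniform accrual,
    administrative censoring at the end of the study."""
    group = np.repeat([0, 1], n_per_arm)
    # Multiplying the Weibull hazard by hr rescales the scale parameter by hr**(-1/shape).
    scales = np.where(group == 1, scale * hr ** (-1 / shape), scale)
    t_event = scales * rng.weibull(shape, 2 * n_per_arm)
    entry = rng.uniform(0, accrual, 2 * n_per_arm)
    t_censor = follow_up - entry            # time from entry until the study ends
    time = np.minimum(t_event, t_censor)
    event = (t_event <= t_censor).astype(int)
    return time, event, group

rng = np.random.default_rng(1)
n_sim, alpha = 1000, 0.05
rejections = 0
for _ in range(n_sim):
    time, event, group = simulate_trial(100, hr=0.7, shape=1.0, scale=12.0,
                                        accrual=12.0, follow_up=36.0, rng=rng)
    rejections += abs(logrank_z(time, event, group)) > stats.norm.ppf(1 - alpha / 2)
print(f"estimated power at HR = 0.7: {rejections / n_sim:.2f}")
```

Setting hr=1.0 in the same loop estimates the size of the test instead of its power; varying the Weibull shape parameter gives constant (shape = 1), decreasing (< 1), or increasing (> 1) hazards.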
Abstract:
Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved upon. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome.

Current visual analytics tools for comparing groups of event sequences emphasize a purely statistical or purely visual approach for comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies that they were not expecting, but are limited by uncertainty in their findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question in mind and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the front end (e.g., displaying the results of many different metrics concisely) and in the back end (e.g., scalability challenges with running various metrics on multi-dimensional data at once).

I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security.

My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts.
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
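The high-volume hypothesis testing idea, running one test per metric across two cohorts and then correcting for multiple comparisons rather than inspecting raw p-values, can be sketched as follows; the cohort data and metric names are simulated, and the Mann-Whitney/Benjamini-Hochberg combination is one plausible choice rather than the dissertation's exact procedure.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
n_a, n_b = 120, 140

# One column per cohort metric (e.g., event counts, durations, gap times);
# only the first metric truly differs between the cohorts in this toy example.
metrics = ["n_events", "duration_h", "gap_mean_h", "readmissions", "n_providers"]
cohort_a = rng.normal(10, 2, size=(n_a, len(metrics)))
cohort_b = rng.normal(10, 2, size=(n_b, len(metrics)))
cohort_b[:, 0] += 1.5

# Run one test per metric, then control the false discovery rate across all tests.
p_values = [mannwhitneyu(cohort_a[:, j], cohort_b[:, j]).pvalue
            for j in range(len(metrics))]
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for name, p, padj, sig in zip(metrics, p_values, p_adj, reject):
    print(f"{name:>12}: p = {p:.4f}, FDR-adjusted = {padj:.4f}, significant = {sig}")
```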
Abstract:
Searching for multimedia is an important activity for users of Web search engines. Studying users' interactions with Web search engine multimedia buttons, including image, audio, and video, is important for the development of multimedia Web search systems. This article provides results from a Web log analysis study of multimedia Web searching by Dogpile users in 2006. The study analyzes the (a) duration, size, and structure of Web search queries and sessions; (b) user demographics; (c) most popular multimedia Web searching terms; and (d) use of advanced Web search techniques, including Boolean and natural language. The findings of the current study are compared with results from previous multimedia Web searching studies. The key findings are: (a) since 1997, image search has consistently been the dominant media type searched, followed by audio and video; (b) multimedia search duration is still short (>50% of searching episodes are <1 min), using few search terms; (c) many multimedia searches are for information about people, especially in audio search; and (d) multimedia search has begun to shift from entertainment to other categories such as medical, sports, and technology (based on the most repeated terms). Implications for the design of Web multimedia search engines are discussed.
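A transaction-log analysis of this kind could be sketched as follows, assuming a hypothetical log file with session_id, timestamp, query, and media_type columns; the metrics mirror items (a), (c), and (d) above.

```python
from collections import Counter
import pandas as pd

# Hypothetical search-engine transaction log.
log = pd.read_csv("search_log.csv", parse_dates=["timestamp"])

# (a) session duration and query length
sessions = log.groupby("session_id")["timestamp"].agg(["min", "max"])
durations = sessions["max"] - sessions["min"]
print("sessions under one minute:",
      f"{(durations < pd.Timedelta(minutes=1)).mean():.0%}")
print("mean terms per query:", log["query"].str.split().str.len().mean())

# (c) most popular terms per media type
for media, group in log.groupby("media_type"):
    terms = Counter(" ".join(group["query"]).lower().split())
    print(media, terms.most_common(5))

# (d) use of Boolean operators
has_boolean = log["query"].str.contains(r"\b(?:AND|OR|NOT)\b", regex=True)
print("queries with Boolean operators:", f"{has_boolean.mean():.0%}")
```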