903 results for Generalization of Ehrenfest’s urn Model
Abstract:
This thesis analyses problems related to the applicability of Process Mining tools and techniques in business environments. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying the circumstances where problems can emerge: data preparation, actual mining, and results interpretation. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and proposed a generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events. The use of interval-based recording leads to an important improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: one model-to-model and one model-to-log. Finally, we present an automatic approach for the extension of a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches. Two actual mining algorithms are proposed: the first is an adaptation of a frequency counting algorithm to the control-flow discovery problem; the second constitutes a framework of models which can be used for different kinds of streams (stationary versus evolving).
Abstract:
Logistics involves planning, managing, and organizing the flow of goods from the point of origin to the point of destination in order to meet given requirements. Logistics and transportation aspects are very important and represent relevant costs for producing and shipping companies, but also for public administrations and private citizens. The optimization of resources and the improvement in the organization of operations are crucial for all branches of logistics, from operations management to transportation. As we show in this work, optimization techniques, models, and algorithms represent important methods for solving the ever new and more complex problems arising in different segments of logistics. Many operations management and transportation problems are related to the class of optimization problems called Vehicle Routing Problems (VRPs). In this work, we consider several real-world deterministic and stochastic problems that belong to the wide class of VRPs, and we solve them by means of exact and heuristic methods. We treat three classes of real-world routing and logistics problems. We deal with one of the most important tactical problems that arises in the management of bike sharing systems, namely the Bike sharing Rebalancing Problem (BRP). We propose models and algorithms for real-world earthwork optimization problems. We describe the 3D printing (3DP) process and highlight several optimization issues in 3DP. Among these, we define the problem related to tool path definition in the 3DP process, the 3D Routing Problem (3DRP), which is a generalization of the arc routing problem. We present an ILP model and several heuristic algorithms to solve the 3DRP.
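As a minimal illustration of the heuristic side of VRP solving (a generic textbook construction, not one of the algorithms developed in this work), a nearest-neighbour route construction for a single vehicle can be sketched as:

```python
import math

def nearest_neighbor_route(depot, customers):
    """Greedy constructive heuristic for a single route: starting at the
    depot, repeatedly visit the nearest unvisited customer, then return
    to the depot. Coordinates are hypothetical (x, y) tuples."""
    route = [depot]
    remaining = list(customers)
    current = depot
    while remaining:
        # pick the unvisited customer closest to the current position
        nxt = min(remaining, key=lambda c: math.dist(current, c))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # close the tour
    return route
```

Such constructive heuristics typically provide the starting solution that exact methods or local-search improvement procedures then refine.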
Abstract:
The vitamin D3 and nicotine (VDN) model is a model of isolated systolic hypertension (ISH) due to arterial calcification raising arterial stiffness and vascular impedance, similar to an aged and stiffened arterial tree. We therefore analyzed the impact of this aging model on normal and diseased hearts with myocardial infarction (MI). Wistar rats were treated with VDN (n = 9), subjected to MI by coronary ligation (n = 10), or subjected to a combination of both MI and VDN treatment (VDN/MI, n = 14). A sham-treated group served as control (Ctrl, n = 10). Transthoracic echocardiography was performed every 2 wk, whereas invasive indexes were obtained at week 8 before death. Calcium, collagen, and protein contents were measured in the heart and the aorta. Systolic blood pressure, pulse pressure, thoracic aortic calcium, and end-systolic elastance as an index of myocardial contractility were highest in the aging model group compared with the MI and Ctrl groups (P_VDN < 0.05, 2-way ANOVA). Left ventricular wall stress and brain natriuretic peptide (P_VDN×MI = not significant) were highest, while ejection fraction, stroke volume, and cardiac output were lowest in the combined group versus all other groups (P_VDN×MI < 0.05). The combination of ISH due to this aging model and MI demonstrates significant alterations in cardiac function. This model mimics several clinical phenomena of cardiovascular aging and may thus serve to further study novel therapies.
Abstract:
OBJECTIVES: Implementation of an experimental model to compare cartilage MR imaging by means of histological analyses. MATERIAL AND METHODS: MRI was obtained from 4 patients awaiting total knee replacement, at 1.5 and/or 3T, prior to surgery. The timeframe between pre-op MRI and knee replacement was within two days. Resected cartilage-bone samples were tagged with Ethi®-pins to reproduce the histological cutting course. Pre-operative scanning at 1.5T included the following parameters for fast low angle shot (FLASH: TR/TE/FA = 33 ms/6 ms/30°, BW = 110 kHz, 120×120 mm FOV, 256×256 matrix, 0.65 mm slice thickness) and double echo steady state (DESS: TR/TE/FA = 23.7 ms/6.9 ms/40°, BW = 130 kHz, 120×120 mm FOV, 256×256 matrix, 0.65 mm slice thickness). At 3T, scan parameters were: FLASH (TR/TE/FA = 12.2 ms/5.1 ms/10°, BW = 130 kHz, 170×170 mm FOV, 320×320 matrix, 0.5 mm slice thickness) and DESS (TR/TE/FA = 15.6 ms/4.5 ms/25°, BW = 200 kHz, 135×150 mm FOV, 288×320 matrix, 0.5 mm slice thickness). Imaging of the specimens was done the same day at 1.5T. MRI (Noyes) and histological (Mankin) score scales were correlated using the paired t-test. Sensitivity and specificity for the detection of different grades of cartilage degeneration were assessed. Inter-reader and intra-reader reliability were determined using kappa analysis. RESULTS: Low correlation (sensitivity, specificity) was found for both sequences in normal to mild Mankin grades. Only moderate to severe changes were diagnosed with higher significance and specificity. The use of higher field strengths was advantageous for both protocols, with sensitivity values ranging from 13.6% to 93.3% (FLASH) and 20.5% to 96.2% (DESS). Kappa values ranged from 0.488 to 0.944. CONCLUSIONS: Correlating MR images with continuous histological slices was feasible by using three-dimensional imaging, multi-planar reformatting, and marker pins. The capability of diagnosing early cartilage changes with high accuracy could not be proven for either FLASH or DESS.
Abstract:
In this paper, a simulation model of glucose-insulin metabolism for Type 1 diabetes patients is presented. The proposed system is based on the combination of Compartmental Models (CMs) and artificial Neural Networks (NNs). This model aims at the development of an accurate system to assist Type 1 diabetes patients in handling their blood glucose profile and recognizing dangerous metabolic states. Data from a Type 1 diabetes patient, stored in a database, have been used as input to the hybrid system. The data contain information about measured blood glucose levels, insulin intake, and description of food intake, along with the corresponding time. The data are passed to three separate CMs, which produce estimations of (i) the effect of Short Acting (SA) insulin intake on blood insulin concentration, (ii) the effect of Intermediate Acting (IA) insulin intake on blood insulin concentration, and (iii) the effect of carbohydrate intake on blood glucose absorption from the gut. The outputs of the three CMs are passed to a Recurrent NN (RNN) in order to predict subsequent blood glucose levels. The RNN is trained with the Real Time Recurrent Learning (RTRL) algorithm. The resulting blood glucose predictions are promising regarding the use of the proposed model for blood glucose level estimation in Type 1 diabetes patients.
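The compartmental stage of such a hybrid system can be illustrated with a generic one-compartment, first-order model integrated by Euler's method; the rate constant, time step, and dose values below are illustrative stand-ins, not parameters from the paper:

```python
def compartment_response(doses, k=0.05, dt=1.0, steps=180):
    """One-compartment model: the plasma level rises with each dose and
    decays exponentially at rate k (per minute). Euler integration over
    `steps` minutes. `doses` maps minute -> dose; values are illustrative."""
    level = 0.0
    levels = []
    for t in range(steps):
        u = doses.get(t, 0.0)          # dose administered at minute t
        level += dt * (u - k * level)  # first-order input/elimination
        levels.append(level)
    return levels
```

In the architecture described above, time series of this kind (for SA insulin, IA insulin, and carbohydrate absorption) would form the inputs that the RNN maps to blood glucose predictions.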
Abstract:
Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with the population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected.
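The measurement-based aggregation described above amounts to a population-weighted mean of municipality-level averages. A minimal sketch with illustrative values (not the Swiss data):

```python
def population_weighted_mean(municipalities):
    """National exposure estimate: weight each municipality's mean radon
    level (Bq/m3) by its population size. Input is a list of
    (mean_radon_level, population) pairs; values are illustrative."""
    total_pop = sum(pop for _, pop in municipalities)
    weighted = sum(level * pop for level, pop in municipalities)
    return weighted / total_pop
```

The floor-distribution correction mentioned in the abstract would be applied to each municipality's mean level before this weighting step.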
Abstract:
Global wetlands are believed to be climate sensitive and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The models also varied in their methods to calculate wetland size and location, with some models simulating wetland area prognostically, while other models relied on remotely sensed inundation datasets, or an approach intermediate between the two. Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increasing global temperatures (+3.4 °C, globally spatially uniform), on average, the models decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9%, globally spatially uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently do not have wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
Abstract:
Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition, based on the Bag of Features (BoF) model. An extensive technical investigation was conducted for the identification and optimization of the best performing components involved in the BoF architecture, as well as the estimation of the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5,000 food images was created and organized into 11 classes. The optimized system computes dense local features, using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10,000 visual words by using hierarchical k-means clustering, and finally classifies the food images with a linear support vector machine classifier. The system achieved a classification accuracy on the order of 78%, demonstrating the feasibility of the proposed approach on a very challenging image dataset.
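The core of the BoF representation is the quantization of an image's local descriptors against the visual dictionary, yielding a fixed-length histogram that the classifier consumes. A minimal sketch, with a toy two-word vocabulary standing in for the 10,000-word dictionary and 2-D points standing in for SIFT descriptors:

```python
import math

def bof_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word
    (Euclidean distance) and return the normalized bag-of-features
    histogram. Descriptors and vocabulary entries are toy 2-D points
    here; real SIFT descriptors are 128-dimensional."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        nearest = min(range(len(vocabulary)),
                      key=lambda i: math.dist(d, vocabulary[i]))
        hist[nearest] += 1
    total = sum(hist) or 1  # avoid division by zero for empty input
    return [h / total for h in hist]
```

In the full pipeline described above, these histograms would then be fed to the linear SVM for classification into the 11 food classes.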
Abstract:
In order to bridge interdisciplinary differences in Presence research and to establish connections between Presence and “older” concepts of psychology and communication, a theoretical model of the formation of Spatial Presence is proposed. It is applicable to the exposure to different media and intended to unify the existing efforts to develop a theory of Presence. The model includes assumptions about attention allocation, mental models, and involvement, and considers the role of media factors and user characteristics as well, thus incorporating much previous work. It is argued that a commonly accepted model of Spatial Presence is the only solution to secure further progress within the international, interdisciplinary and multiple-paradigm community of Presence research.
Abstract:
OBJECTIVE To evaluate the suitability of a minipig model for the study of bone healing and osseointegration of dental implants following bone splitting and expansion of narrow ridges. MATERIAL AND METHODS In four minipigs, the mandibular premolars and first molars were extracted together with removal of the buccal bone plate. Three months later, ridge splitting and expansion was performed with simultaneous placement of three titanium implants per quadrant. On one side of the mandible, the expanded bone gap between the implants was filled with an alloplastic biphasic calcium phosphate (BCP) material, while the gap on the other side was left unfilled. A barrier membrane was placed in half of the quadrants. After a healing period of 6 weeks, the animals were sacrificed for histological evaluation. RESULTS In all groups, no bone fractures occurred, no implants were lost, all 24 implants were osseointegrated, and the gap created by bone splitting was filled with new bone, irrespective of whether BCP or a barrier membrane was used. Slight exposure of five implants was observed, but did not lead to implant loss. The level of the most coronal bone-to-implant contact varied without being dependent on the use of BCP or a barrier membrane. In all groups, the BCP particles were not present deep in the bone-filled gap. However, BCP particles were seen at the crestal bone margin, where they were partly integrated in the new bone. CONCLUSIONS This new minipig model holds great promise for studying experimental ridge splitting/expansion. However, efforts must be undertaken to reduce implant exposure and buccal bone resorption.
Abstract:
The sensitivity of the neodymium isotopic composition (ϵNd) to tectonic rearrangements of seaways is investigated using an Earth System Model of Intermediate Complexity. The shoaling and closure of the Central American Seaway (CAS) is simulated, as well as the opening and deepening of Drake Passage (DP). Multiple series of equilibrium simulations with various intermediate depths are performed for both seaways, providing insight into ϵNd and circulation responses to progressive throughflow evolutions. Furthermore, the sensitivity of these responses to the Atlantic Meridional Overturning Circulation (AMOC) and the neodymium boundary source is examined. Modeled ϵNd changes are compared to sediment core and ferromanganese (Fe-Mn) crust data. The model results indicate that the North Atlantic ϵNd response to the CAS shoaling is highly dependent on the AMOC state, i.e., on the AMOC strength before the shoaling to shallow depths (preclosure). Three scenarios based on different AMOC forcings are discussed, of which the model-data agreement favors a shallow preclosure (Miocene) AMOC (∼6 Sv). The DP opening causes a rather complex circulation response, resulting in an initial South Atlantic ϵNd decrease preceding a larger increase. This feature may be specific to our model setup, which induces a vigorous CAS throughflow that is strongly anticorrelated to the DP throughflow. In freshwater experiments following the DP deepening, ODP Site 1090 is mainly influenced by AMOC and DP throughflow changes, while ODP Site 689 is more strongly influenced by Southern Ocean Meridional Overturning Circulation and CAS throughflow changes. The boundary source uncertainty is largest for shallow seaways and at shallow sites.
Abstract:
The evolution of the Atlantic Meridional Overturning Circulation (MOC) in 30 models of varying complexity is examined under four distinct Representative Concentration Pathways. The models include 25 Atmosphere-Ocean General Circulation Models (AOGCMs) or Earth System Models (ESMs) that submitted simulations in support of the 5th phase of the Coupled Model Intercomparison Project (CMIP5) and 5 Earth System Models of Intermediate Complexity (EMICs). While none of the models incorporated the additional effects of ice sheet melting, they all projected very similar behaviour during the 21st century. Over this period the strength of MOC reduced by a best estimate of 22% (18%–25%; 5%–95% confidence limits) for RCP2.6, 26% (23%–30%) for RCP4.5, 29% (23%–35%) for RCP6.0 and 40% (36%–44%) for RCP8.5. Two of the models eventually realized a slow shutdown of the MOC under RCP8.5, although no model exhibited an abrupt change of the MOC. Through analysis of the freshwater flux across 30°–32°S into the Atlantic, it was found that 40% of the CMIP5 models were in a bistable regime of the MOC for the duration of their RCP integrations. The results support previous assessments that it is very unlikely that the MOC will undergo an abrupt change to an off state as a consequence of global warming.
Abstract:
PURPOSE Rapid assessment and intervention is important for the prognosis of acutely ill patients admitted to the emergency department (ED). The aim of this study was to prospectively develop and validate a model predicting the risk of in-hospital death based on all information available at the time of ED admission, and to compare its discriminative performance with a non-systematic risk estimate by the triaging first health-care provider. METHODS Prospective cohort analysis based on a multivariable logistic regression for the probability of death. RESULTS A total of 8,607 consecutive admissions of 7,680 patients admitted to the ED of a tertiary care hospital were analysed. The most frequent APACHE II diagnostic categories at the time of admission were neurological (2,052, 24 %), trauma (1,522, 18 %), infection categories [1,328, 15 %; including sepsis (357, 4.1 %), severe sepsis (249, 2.9 %), septic shock (27, 0.3 %)], cardiovascular (1,022, 12 %), gastrointestinal (848, 10 %) and respiratory (449, 5 %). The predictors of the final model were age, prolonged capillary refill time, blood pressure, mechanical ventilation, oxygen saturation index, Glasgow coma score and APACHE II diagnostic category. The model showed good discriminative ability, with an area under the receiver operating characteristic curve of 0.92 and good internal validity. The model performed significantly better than non-systematic triaging of the patient. CONCLUSIONS The use of the prediction model can facilitate the identification of ED patients with higher mortality risk. The model performs better than a non-systematic assessment and may facilitate more rapid identification and commencement of treatment of patients at risk of an unfavourable outcome.
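As an illustration of how a multivariable logistic regression of this kind turns predictor values into a probability of death, the following sketch uses hypothetical coefficients and a generic feature vector; the published model's coefficients are not reproduced here:

```python
import math

def predicted_mortality(features, coefficients, intercept):
    """Multivariable logistic regression: linear predictor
    z = intercept + sum(beta_i * x_i), passed through the logistic
    function to yield a probability in (0, 1). Coefficient values
    supplied by the caller are illustrative, not the published model."""
    z = intercept + sum(c * x for c, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-z))
```

A model of this form is what the reported area under the ROC curve of 0.92 evaluates: the probability that a randomly chosen non-survivor receives a higher predicted risk than a randomly chosen survivor.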
Abstract:
Large uncertainties exist concerning the impact of Greenland ice sheet melting on the Atlantic meridional overturning circulation (AMOC) in the future, partly due to the different sensitivity of the AMOC to freshwater input in the North Atlantic among climate models. Here we analyse five projections from different coupled ocean–atmosphere models with an additional 0.1 Sv (1 Sv = 10⁶ m³/s) of freshwater released around Greenland between 2050 and 2089. We find on average a further weakening of the AMOC at 26°N of 1.1 ± 0.6 Sv, representing a 27 ± 14% supplementary weakening in 2080–2089, as compared to the weakening relative to 2006–2015 due to the effect of the external forcing only. This weakening is lower than what has been found with the same ensemble of models in an identical experimental set-up but under recent historical climate conditions. This lower sensitivity in a warmer world is explained by two main factors. First, a tendency towards decoupling is detected between the surface and the deep ocean, caused by increased thermal stratification in the North Atlantic under the effect of global warming. This induces a shoaling of deep ocean ventilation through convection, hence ventilating only intermediate levels. The second important effect concerns the so-called Canary Current freshwater leakage: a process by which additionally released fresh water in the North Atlantic leaks along the Canary Current and escapes the convection zones towards the subtropical area. This leakage increases in a warming climate as a consequence of decreasing gyre asymmetry due to changes in Ekman pumping. We suggest that these modifications are related to the northward shift of the jet stream in a warmer world. For these two reasons the AMOC is less susceptible to freshwater perturbations (near the deep water formation sites) in the North Atlantic as compared to recent historical climate conditions. Finally, we propose a bilinear model that accounts for the two former processes to give a conceptual explanation of the decreasing AMOC sensitivity to freshwater input. Within the limits of this bilinear model, we find that 62 ± 8% of the reduction in sensitivity is related to the changes in gyre asymmetry and freshwater leakage, and 38 ± 8% is due to the reduction in deep ocean ventilation associated with the increased stratification in the North Atlantic.
Abstract:
Initializing the ocean for decadal predictability studies is a challenge, as it requires reconstructing the little-observed subsurface trajectory of ocean variability. In this study we explore to what extent surface nudging using well-observed sea surface temperature (SST) can reconstruct the deeper ocean variations for the 1949–2005 period. An ensemble is made with a nudged version of the IPSLCM5A model and compared to ocean reanalyses and reconstructed datasets. The SST is restored to observations using a physically based relaxation coefficient, in contrast to earlier studies, which use a much larger value. The assessment is restricted to the regions where the ocean reanalyses agree, i.e. in the upper 500 m of the ocean, although this can be latitude and basin dependent. Significant reconstruction of the subsurface is achieved in specific regions, namely regions of subduction in the subtropical Atlantic, below the thermocline in the equatorial Pacific and, in some cases, in the North Atlantic deep convection regions. Beyond the mean correlations, ocean integrals are used to explore the time evolution of the correlation over 20-year windows. Classical fixed-depth heat content diagnostics do not exhibit any significant reconstruction between the different existing observation-based references and can therefore not be used to assess global average time-varying correlations in the nudged simulations. Using the physically based average temperature above an isotherm (14°C) alleviates this issue in the tropics and subtropics and shows significant reconstruction of these quantities in the nudged simulations for several decades. This skill is attributed to the wind stress reconstruction in the tropics, as already demonstrated in a perfect model study using the same model. Thus, we also show here the robustness of this result in a historical and observational context.