953 results for "Interval discrete log problem"
Abstract:
The log-Burr XII regression model for grouped survival data is evaluated in the presence of many ties. The methodology for grouped survival data is based on life tables, where the times are grouped into k intervals, and we fit discrete lifetime regression models to the data. The model parameters are estimated by maximum likelihood and jackknife methods. To detect influential observations in the proposed model, we use diagnostic measures based on case deletion, so-called global influence, and measures based on small perturbations of the data or the model, referred to as local influence. In addition to these measures, total local influence and influential estimates are also used. We conduct Monte Carlo simulation studies to assess the finite-sample behavior of the maximum likelihood estimators of the proposed model for grouped survival data. A real data set is analyzed using a regression model for grouped data.
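As a hedged sketch of the grouped-data likelihood described above: the fit reduces to multinomial probabilities obtained from survival-function differences at the interval endpoints. The Burr XII parametrization, interval edges, and counts below are illustrative placeholders (covariates, censoring, and the open final interval are omitted for brevity), not the paper's data or code.

```python
import numpy as np
from scipy.optimize import minimize

# Burr XII survival function S(t) = (1 + (t/s)^c)^(-k); the log-Burr XII
# model works with log-lifetimes, but the grouped-data likelihood only
# needs survival probabilities at the interval endpoints.
def burr_xii_survival(t, s, c, k):
    return (1.0 + (t / s) ** c) ** (-k)

def grouped_neg_loglik(theta, edges, counts):
    # edges: k+1 interval endpoints [a_0 = 0, a_1, ..., a_k]
    # counts: number of failures observed in each of the k intervals
    s, c, k = np.exp(theta)              # enforce positivity via log-parameters
    S = burr_xii_survival(np.asarray(edges), s, c, k)
    p = S[:-1] - S[1:]                   # P(a_{j-1} < T <= a_j)
    return -np.sum(np.asarray(counts) * np.log(np.clip(p, 1e-300, None)))

edges = [0.0, 1.0, 2.0, 4.0, 8.0]        # illustrative interval boundaries
counts = [30, 25, 20, 10]                # illustrative grouped failure counts
fit = minimize(grouped_neg_loglik, x0=np.zeros(3),
               args=(edges, counts), method="Nelder-Mead")
print(np.exp(fit.x))                     # maximum likelihood estimates of (s, c, k)
```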
Abstract:
This paper introduces a skewed log-Birnbaum-Saunders regression model based on the skewed sinh-normal distribution proposed by Leiva et al. [A skewed sinh-normal distribution and its properties and application to air pollution, Comm. Statist. Theory Methods 39 (2010), pp. 426-443]. Some influence methods, such as local influence and generalized leverage, are presented. Additionally, we derive the normal curvatures of local influence under several perturbation schemes. An empirical application to a real data set illustrates the usefulness of the proposed model.
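For orientation, the backbone of log-Birnbaum-Saunders regression in standard notation; this is the symmetric sinh-normal form, which the skewed variant of Leiva et al. extends with an additional shape parameter (the exact skewed density should be taken from the cited paper):

```latex
% Log-lifetimes follow a sinh-normal law: y_i = x_i' beta + eps_i,
% with eps_i ~ SHN(alpha, 0, sigma = 2) and density
\[
  f(y) = \phi\!\left(\frac{2}{\alpha}\sinh\!\Big(\frac{y-\mu}{\sigma}\Big)\right)
         \frac{2}{\alpha\sigma}\,\cosh\!\Big(\frac{y-\mu}{\sigma}\Big),
  \qquad \mu = \mathbf{x}^{\top}\boldsymbol{\beta},\quad \sigma = 2,
\]
% where \phi is the standard normal density.
```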
Abstract:
The numerical simulation of flows of highly elastic fluids has been the subject of intense research over the past decades, with important industrial applications, and many efforts have been made to improve the convergence capabilities of the numerical methods employed to simulate viscoelastic fluid flows. An important contribution to the solution of the High Weissenberg Number Problem was presented by Fattal and Kupferman [J. Non-Newton. Fluid Mech. 123 (2004) 281-285], who developed the matrix-logarithm of the conformation tensor technique, henceforth called the log-conformation tensor. Its advantage is a better approximation of the large growth of the stress tensor that occurs in some regions of the flow, and it is doubly beneficial in that it ensures physically correct stress fields, allowing converged computations at high Weissenberg numbers. In this work we investigate the application of the log-conformation tensor to three-dimensional unsteady free surface flows. The log-conformation formulation was applied to solve the Upper-Convected Maxwell (UCM) constitutive equation, while the momentum equation was solved using a finite difference Marker-and-Cell type method. The resulting code is validated by comparing the log-conformation results with the analytic solution for fully developed pipe flows. To illustrate the stability of the log-conformation approach in solving three-dimensional free surface flows, results from simulations of the extrudate swell and jet buckling phenomena of UCM fluids at high Weissenberg numbers are presented.
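The change of variables at the heart of the method is easy to isolate: the conformation tensor A is symmetric positive definite, so Psi = log A can be computed through an eigendecomposition, the transformed evolution equation is advanced for Psi, and A = exp Psi is recovered with positive definiteness guaranteed by construction. A minimal, solver-free sketch with an illustrative tensor (not the paper's code):

```python
import numpy as np

def tensor_log(A):
    # Matrix logarithm of a symmetric positive-definite tensor via
    # eigendecomposition: A = R diag(w) R^T  =>  log A = R diag(log w) R^T.
    w, R = np.linalg.eigh(A)
    return R @ np.diag(np.log(w)) @ R.T

def tensor_exp(Psi):
    # Matrix exponential of a symmetric tensor; the result is always SPD,
    # which is what makes the log-conformation formulation robust.
    w, R = np.linalg.eigh(Psi)
    return R @ np.diag(np.exp(w)) @ R.T

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])           # illustrative conformation tensor
Psi = tensor_log(A)
print(np.allclose(tensor_exp(Psi), A))    # True: exact round trip
```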
Abstract:
We show how to construct a topological Markov map of the interval whose invariant probability measure is the stationary law of a given stochastic chain of infinite order. In particular, we characterize the maps corresponding to stochastic chains with memory of variable length. The problem treated here is the converse of the classical construction of the Gibbs formalism for Markov expanding maps of the interval.
Abstract:
In this paper, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises under two criteria. The first is an unconstrained mean-variance trade-off performance criterion over time, and the second is a minimum variance criterion over time with constraints on the expected output. We present explicit conditions for the existence of an optimal control strategy for these problems, generalizing previous results in the literature. We conclude the paper with a numerical example of a multi-period portfolio selection problem with regime switching, in which the goal is to minimize the sum of the variances of the portfolio over time under the restriction of keeping the expected value of the portfolio above minimum values specified by the investor.
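In generic notation (ours, not necessarily the paper's), the two problems have the following shape, with Markov mode theta_t and multiplicative noises w_t^j:

```latex
% Markov jump linear system with multiplicative noise:
\[
  x_{t+1} = \Big(A_{\theta_t} + \sum_{j} w_t^{j}\,\bar A^{j}_{\theta_t}\Big) x_t
          + \Big(B_{\theta_t} + \sum_{j} w_t^{j}\,\bar B^{j}_{\theta_t}\Big) u_t,
  \qquad y_t = C_{\theta_t} x_t .
\]
% Criterion 1: unconstrained mean-variance trade-off along the horizon.
\[
  \min_{u}\; \sum_{t=1}^{T} \Big( \operatorname{Var}(y_t) - \lambda_t\, \mathbb{E}(y_t) \Big)
\]
% Criterion 2: minimum variance with constraints on the expected output.
\[
  \min_{u}\; \sum_{t=1}^{T} \operatorname{Var}(y_t)
  \quad \text{s.t.} \quad \mathbb{E}(y_t) \ge \underline{y}_t,\; t = 1,\dots,T .
\]
```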
Abstract:
In this paper, we use Geographical Information Systems (GIS) to solve the planar Huff problem considering different demand distributions and forbidden regions. Most papers on competitive location problems assume that the demand is aggregated at a finite set of points. In a few other cases, the models suppose that the demand is distributed over the feasible region according to a functional form, mainly a uniform distribution. Here, in addition to the discrete and uniform demand distributions, we consider demand represented by a population surface model, that is, a raster map where each pixel has an associated value corresponding to the population living in the area it covers...
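A minimal sketch of the raster variant of the Huff computation: each pixel's population is split among facilities in proportion to attractiveness over distance raised to a decay power, and a candidate site's capture is the population-weighted sum of its shares. All inputs below (raster, facilities, exponent) are hypothetical placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
pop = rng.uniform(0, 50, size=(100, 100))            # people per pixel (placeholder)
facilities = [(20.0, 30.0, 5.0), (70.0, 60.0, 8.0)]  # (row, col, attractiveness)
lam = 2.0                                            # distance-decay exponent

rows, cols = np.indices(pop.shape)

def utilities(fac_list):
    # Huff utility S_j / d^lam of every facility at every pixel centre.
    U = []
    for r, c, S in fac_list:
        d = np.hypot(rows - r, cols - c) + 0.5       # offset avoids division by zero
        U.append(S / d ** lam)
    return np.array(U)

def captured_demand(candidate):
    # Expected demand captured by a candidate facility: its probability
    # share at each pixel, weighted by the pixel population.
    U = utilities(facilities + [candidate])
    share = U[-1] / U.sum(axis=0)
    return float((share * pop).sum())

print(captured_demand((50.0, 50.0, 6.0)))
```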
Abstract:
In order to improve animal welfare, Council Directive 1999/74/EC (defining minimum standards for the welfare of laying hens) bans conventional cage systems from 2012, in favour of enriched cages or floor systems. As a consequence, an increased risk of bacterial contamination of the eggshell is expected (EFSA, 2005). Furthermore, egg-associated salmonellosis is an important public health problem throughout the world (Roberts et al., 1994). In this regard, the introduction of efficient measures to reduce eggshell contamination by S. Enteritidis or other bacterial pathogens, and thus to prevent any potential or additional food safety risk for human health, may be envisaged. Hot air pasteurization can be a viable alternative for the decontamination of the eggshell surface. Few studies have been performed on the decontamination power of this technique on table eggs (Hou et al., 1996; James et al., 2002). The aim of this study was to develop innovative techniques to remove surface contamination of shell eggs by hot air under natural or forced convection. Initially, two simplified finite element models describing the thermal interaction between the air and the egg were developed, for natural and forced convection respectively. The numerical models were validated using an egg simulant equipped with a type-K thermocouple (Chromel/Alumel). Once validated, the models allowed the selection of a thermal cycle with an inner temperature always lower than 55°C. Subsequently, a specific apparatus composed of two hot air generators, one cold air generator and a rolling cylinder support was built to physically condition the eggs. The decontamination power of the thermal treatments was evaluated on shell eggs experimentally inoculated with Salmonella Enteritidis, Escherichia coli or Listeria monocytogenes, and on shell eggs carrying only the indigenous microflora. The applicability of the treatments was further evaluated by comparing quality traits of treated and untreated eggs immediately after the treatment and after 28 days of storage at 20°C. The results showed that the treatment characterized by two shots of hot air at 350°C for 8 sec, spaced by a cooling interval of 32 sec (forced convection), reduced the bacterial population by more than 90% (Salmonella Enteritidis and Listeria monocytogenes). No statistically significant differences were obtained when comparing treated and untreated eggs for E. coli, or for the indigenous microflora. A reduction of 2.6 log was observed in the Salmonella Enteritidis load of eggs immediately after treatment in an oven at 200°C for 200 minutes (natural convection). Furthermore, no detrimental effects on the quality traits of treated eggs were recorded. These results support hot air techniques as an effective industrial process for the surface decontamination of table eggs.
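To relate the two ways reductions are reported above, recall that a log reduction is the base-10 logarithm of the ratio of initial to surviving counts:

```latex
\[
  \text{log reduction} = \log_{10}\frac{N_0}{N}, \qquad
  90\%\ \text{reduction} \iff 1\ \log, \qquad
  2.6\ \log \iff \frac{N}{N_0} = 10^{-2.6} \approx 0.25\%\ \text{survival}.
\]
```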
Abstract:
This thesis analyses problems related to the applicability, in business environments, of Process Mining tools and techniques. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying circumstances where problems can emerge: data preparation, the actual mining, and interpretation of the results. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigated the data preparation problem and proposed a solution for the identification of "case-ids" whenever this field is not explicitly indicated. After that, we concentrated on problems at mining time and propose a generalization of a well-known control-flow discovery algorithm to exploit non-instantaneous events. The use of interval-based recording leads to an important performance improvement. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for extending a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches. Two actual mining algorithms are proposed: the first adapts a frequency counting algorithm to the control-flow discovery problem (see the sketch below); the second constitutes a framework of models which can be used for different kinds of streams (stationary versus evolving).
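As a hedged illustration of the streaming side: one standard way to adapt frequency counting to on-line control-flow discovery is to maintain approximate directly-follows counts with Lossy Counting (Manku and Motwani). The sketch below is our illustrative instantiation, not necessarily the thesis' algorithm:

```python
class StreamingDirectlyFollows:
    """Approximate directly-follows counts over an event stream via
    Lossy Counting: counts undercount by at most epsilon * n events."""

    def __init__(self, epsilon=0.01):
        self.w = max(1, int(1.0 / epsilon))  # bucket width
        self.n = 0                           # events seen so far
        self.last = {}                       # case id -> last activity seen
        self.counts = {}                     # (a, b) -> [frequency, delta]

    def observe(self, case_id, activity):
        self.n += 1
        bucket = (self.n - 1) // self.w + 1  # current bucket id
        prev = self.last.get(case_id)
        if prev is not None:
            pair = (prev, activity)
            if pair in self.counts:
                self.counts[pair][0] += 1
            else:                            # new pair: record max undercount
                self.counts[pair] = [1, bucket - 1]
        self.last[case_id] = activity
        if self.n % self.w == 0:             # bucket boundary: prune rare pairs
            self.counts = {p: fd for p, fd in self.counts.items()
                           if fd[0] + fd[1] > bucket}

# Tiny illustrative stream of (case id, activity) events.
events = [("c1", "a"), ("c2", "a"), ("c1", "b"),
          ("c2", "b"), ("c1", "c"), ("c2", "c")]
miner = StreamingDirectlyFollows(epsilon=0.1)
for cid, act in events:
    miner.observe(cid, act)
print({p: fd[0] for p, fd in miner.counts.items()})  # {('a','b'): 2, ('b','c'): 2}
```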
Abstract:
The main goal of this thesis is to facilitate the development of industrial automated systems by applying formal methods to ensure system reliability. A new formulation of the distributed diagnosability problem in terms of Discrete Event Systems theory and the automata framework is presented, which is then used to enforce the desired property of the system, rather than just verifying it. This approach tackles the state explosion problem with modeling patterns and new algorithms aimed at the verification of the diagnosability property in the context of the distributed diagnosability problem. The concepts are validated with a newly developed software tool.
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), which arises when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region of Italy is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Rain gauge direct measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications, addressing COSP in different ways, are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. Communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Ranked Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). Calibration is reached, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
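In schematic notation (an illustration consistent with the abstract, not the paper's exact specification), the two-part semicontinuous model at a gauge location s with collocated radar value x(s) reads:

```latex
% Occurrence part (probit link):
\[
  \Pr\big(Y(s) > 0\big) = \Phi\big(\alpha_0 + \alpha_1 \log x(s) + w_1(s)\big)
\]
% Intensity part (Gamma with log link), joined to the occurrence part:
\[
  Y(s) \mid Y(s) > 0 \;\sim\; \operatorname{Gamma}\big(\nu,\ \nu/\mu(s)\big),
  \qquad \log \mu(s) = \beta_0 + \beta_1 \log x(s) + w_2(s),
\]
% where w_1, w_2 are spatially correlated Gaussian processes carrying the
% residual spatial information.
```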
Abstract:
In the past two decades, the work of a growing number of researchers in robotics has focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better-known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the non-linear nature of cables, which can exert only tensile forces. The work presented in this thesis therefore focuses on the study of the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work is focused on the development of an interval-analysis-based procedure for the solution of the direct geometric problem of a generic cable manipulator. This technique, besides allowing a rapid solution of the problem, also guarantees the results against rounding and elimination errors, and can take into account any uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator whose realization is described in this dissertation, together with the auxiliary work done during its design and simulation phases.
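The guarantee against rounding errors comes from evaluating the kinematic constraints in interval arithmetic, where every operation returns an enclosure of the true result. A minimal sketch using mpmath's interval context (the anchor point, box, and pruning test are illustrative; the thesis' actual solver is more elaborate):

```python
from mpmath import iv   # mpmath's interval arithmetic context

iv.prec = 53

def cable_length_enclosure(p, a):
    # Guaranteed enclosure of ||p - a|| when the platform position p is
    # only known to lie in a box (each coordinate is an interval).
    return iv.sqrt(sum((pi - ai) ** 2 for pi, ai in zip(p, a)))

anchor = (iv.mpf(0), iv.mpf(0), iv.mpf(3))                          # cable exit point
box = (iv.mpf([0.9, 1.1]), iv.mpf([1.9, 2.1]), iv.mpf([0.0, 0.2]))  # candidate poses

L = cable_length_enclosure(box, anchor)
print(L)  # an interval certain to contain every feasible cable length

# A box can be discarded whenever L does not intersect the measured cable
# length: the basic pruning test of a branch-and-prune interval solver.
```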
Abstract:
Purpose: Recently, multiple clinical trials have demonstrated improved outcomes in patients with metastatic colorectal cancer. This study investigated whether the improved survival is race dependent. Patients and Methods: Overall and cancer-specific survival of 77,490 White and Black patients with metastatic colorectal cancer from the 1988–2008 Surveillance, Epidemiology, and End Results registry were compared using unadjusted and multivariable adjusted Cox proportional hazards regression as well as competing risk analyses. Results: Median age was 69 years, 47.4% were female and 86.0% White. Median survival was 11 months overall, with an overall increase from 8 to 14 months between 1988 and 2008. Overall survival increased from 8 to 14 months for White patients, and from 6 to 13 months for Black patients. After multivariable adjustment, the following characteristics were associated with better survival: White race, female sex, younger age, better education, being married, higher income, living in an urban area, rectosigmoid junction or rectal cancer, cancer-directed surgery, well/moderately differentiated tumors, and N0 tumors (p<0.05 for all covariates). Discrepancies in overall survival based on race did not change significantly over time; however, there was a significant decrease of cancer-specific survival discrepancies over time between White and Black patients, with a hazard ratio of 0.995 (95% confidence interval 0.991–1.000) per year (p=0.03). Conclusion: A clinically relevant overall survival increase was found from 1988 to 2008 in this population-based analysis for both White and Black patients with metastatic colorectal cancer. Although both White and Black patients benefited from this improvement, a slight discrepancy between the two groups remained.
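The race-by-year question maps onto a Cox model with an interaction term; below is a toy illustration on simulated data (lifelines API, with fabricated covariates and effect sizes; not the study's registry data or code):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
black = rng.integers(0, 2, n)
year = rng.integers(1988, 2009, n)
# Simulated hazard: a race effect that shrinks slightly over calendar time.
rate = 0.08 * np.exp(0.3 * black - 0.005 * black * (year - 1988))
months = rng.exponential(1.0 / rate)
event = (rng.uniform(size=n) < 0.8).astype(int)   # some censored follow-up

df = pd.DataFrame({"months": months, "event": event,
                   "black": black, "year": year})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event",
        formula="black * year")
cph.print_summary()  # the black:year coefficient estimates the trend in the gap
```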
Abstract:
We consider the problem of nonparametric estimation of a concave regression function F. We show that the supremum distance between the least squares estimator and F on a compact interval is typically of order (log(n)/n)^{2/5}. This entails rates of convergence for the estimator's derivative. Moreover, we discuss the impact of additional constraints on F, such as monotonicity and pointwise bounds. We then apply these results to the analysis of current status data, where the distribution function of the event times is assumed to be concave.
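The estimator itself is a finite-dimensional quadratic program: minimize the residual sum of squares over vectors whose slopes between consecutive design points are non-increasing. A hedged sketch on a toy design (not the paper's code):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.sqrt(x) + rng.normal(scale=0.1, size=n)   # concave truth plus noise

f = cp.Variable(n)                               # fitted values at the design points
h = np.diff(x)
slopes = cp.multiply(1.0 / h, f[1:] - f[:-1])
constraints = [slopes[1:] <= slopes[:-1]]        # concavity: slopes non-increasing
cp.Problem(cp.Minimize(cp.sum_squares(y - f)), constraints).solve()

print(float(np.max(np.abs(f.value - np.sqrt(x)))))  # sup-distance to the truth
```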