88 results for Linear program model
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
We introduce the log-beta Weibull regression model based on the beta Weibull distribution (Famoye et al., 2005; Lee et al., 2007). We derive expansions for the moment generating function that do not involve complicated functions. The new regression model represents a parametric family that includes as sub-models several widely known regression models applicable to censored survival data. We employ a frequentist analysis, a jackknife estimator, and a parametric bootstrap for the parameters of the proposed model. We define martingale and deviance residuals to evaluate the model assumptions, and derive the appropriate matrices for assessing local influences on the parameter estimates under different perturbation schemes, together with some ways to assess global influence. Further, several simulations are performed for different parameter settings, sample sizes, and censoring percentages. In addition, the empirical distribution of some modified residuals is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be extended to a modified deviance residual in the proposed regression model applied to censored data. The extended regression model is very useful for the analysis of real data and may give more realistic fits than other special regression models.
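For reference, the martingale and deviance residuals for censored survival data are usually defined as follows, with δ_i the censoring indicator and Ŝ the fitted survival function (these are the standard forms, sketched here for orientation, not the paper's exact modified residuals):

```latex
r_{M_i} = \delta_i + \log \hat S(t_i), \qquad
r_{D_i} = \operatorname{sign}(r_{M_i})
\left\{ -2\left[ r_{M_i} + \delta_i \log(\delta_i - r_{M_i}) \right] \right\}^{1/2}
```

Under a well-specified model, the deviance residuals are approximately symmetric around zero, which motivates comparing their empirical distribution with the standard normal.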
Abstract:
The concept of Fock-space representation is developed to deal with stochastic spin lattices written in terms of fermion operators. A density operator is introduced in order to parallel the developments for the bosonic case in the literature. Some general conceptual quantities for spin lattices are then derived, including the notion of a generating function and a path integral via Grassmann variables. The formalism is used to derive the Liouvillian of the d-dimensional linear Glauber dynamics in the Fock-space representation. The time evolution equations for the magnetization and the two-point correlation function are then derived in terms of the number operator. (C) 2008 Elsevier B.V. All rights reserved.
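For orientation, in one dimension the linear Glauber dynamics leads to a closed evolution equation for the local magnetization m_k = ⟨σ_k⟩ (Glauber's classic result, with flip rate α and γ = tanh(2βJ); the paper derives the d-dimensional analogue within the Fock-space formalism):

```latex
\frac{d m_k}{dt} = -\alpha\, m_k + \frac{\alpha\gamma}{2}\left(m_{k-1} + m_{k+1}\right)
```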
Abstract:
A generalized version of the nonequilibrium linear Glauber model with q states in d dimensions is introduced and analyzed. The model is fully symmetric, its dynamics being invariant under all permutations of the q states. Exact expressions for the two-time autocorrelation and response functions on a d-dimensional lattice are obtained. In the stationary regime the fluctuation-dissipation theorem holds, while in the transient regime aging is observed, with the fluctuation-dissipation ratio approaching the value predicted for the linear Glauber model.
Abstract:
The implementation of confidential contracts between a container liner carrier and its customers, brought about by the Ocean Shipping Reform Act (OSRA) of 1998, demands a revision of the methodology applied in the carrier's marketing and sales planning. The planning process should be more scientific and make better use of operational research tools, since the selection of customers under contract, the duration of the contracts, the freight rates, and the container imbalances of these contracts are basic factors for the carrier's yield. This work develops a decision support system based on a linear programming model to generate the business plan for a container liner carrier, maximizing the contribution margin of its freight.
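The flavor of such a linear programming model can be sketched with a toy instance. The contracts, volumes, and margins below are entirely hypothetical, and the model is reduced to a single capacity constraint with fractional acceptance; in that special case the LP optimum is obtained greedily by margin per TEU (the continuous knapsack rule), whereas the paper's actual model also handles contract durations and container imbalances:

```python
# Sketch: choose contract volumes to maximize total contribution margin
# subject to one slot-capacity constraint (hypothetical data). With a single
# linear constraint and fractional acceptance, serving the most profitable
# freight first is LP-optimal (continuous knapsack).

contracts = [  # (name, max volume in TEU, contribution margin per TEU)
    ("shipper_A", 400, 120.0),
    ("shipper_B", 300, 150.0),
    ("shipper_C", 500, 90.0),
]
CAPACITY = 800  # vessel slots available (TEU)

def plan(contracts, capacity):
    chosen = {}
    remaining = capacity
    # Accept contracts in decreasing order of margin per TEU.
    for name, vol, margin in sorted(contracts, key=lambda c: -c[2]):
        take = min(vol, remaining)
        if take > 0:
            chosen[name] = take
            remaining -= take
    total = sum(chosen.get(name, 0) * margin for name, vol, margin in contracts)
    return chosen, total

chosen, total = plan(contracts, CAPACITY)
# shipper_B fully (300), shipper_A fully (400), shipper_C partially (100)
```

A full-scale model would add per-contract duration, port-pair imbalance, and repositioning-cost terms, which is what makes a general-purpose LP solver necessary in practice.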
Abstract:
Background: The worldwide distribution of surgical interventions is unequal. Developed countries account for the majority of surgeries, and information about non-cardiac operations in developing countries is scarce. The purpose of our study was to describe epidemiological data on non-cardiac surgeries performed in Brazil in recent years. Methods and Findings: This is a retrospective cohort study covering the time window from 1995 to 2007. We collected information from DATASUS, a national public health system database. The following variables were studied: number of surgeries, in-hospital expenses, blood transfusion related costs, length of stay and case fatality rates. The results are presented as sums, averages and percentages. The trend analysis was performed with a linear regression model. There were 32,659,513 non-cardiac surgeries performed in Brazil over thirteen years. An increment of 20.42% was observed in the number of surgeries in this period, and nearly 3 million operations are now performed annually. The cost of these procedures increased markedly: the increment in surgical cost was almost 200%. The total expenses related to surgical hospitalizations exceeded $10 billion over these years. The yearly cost of surgical procedures to the public health system was more than $1.27 billion for all surgical hospitalizations, on average US$445.24 per surgical procedure. The total cost of blood transfusion was near $98 million over all years, with approximately $10 million spent annually on perioperative transfusion. Surgical mortality had an increment of 31.11% in the period; in 2007, surgical mortality in Brazil was 1.77%. All the variables had a significant increment along the studied period: r² = 0.447 for the number of surgeries (P = 0.012), r² = 0.439 for in-hospital expenses (P = 0.014) and r² = 0.907 for surgical mortality (P = 0.0055).
Conclusion: The volume of surgical procedures has increased substantially in Brazil over the past years. The expenditure related to these procedures and their mortality have also increased along with the number of operations. Better planning of public health resources and investment strategies are needed to meet the growing demand for surgery in Brazil.
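The trend analysis described — fitting a linear regression to yearly totals and reading off r² — can be sketched in plain Python. The yearly figures below are illustrative placeholders, not the study's DATASUS data:

```python
# Simple linear trend fit y = a + b*x with coefficient of determination r^2,
# the kind of analysis applied to each yearly series (surgeries, expenses,
# mortality). The numbers below are made up for illustration.

def linear_trend(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                      # slope: average yearly change
    a = my - b * mx                    # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot           # fraction of variance explained
    return a, b, r2

years = [1995, 1998, 2001, 2004, 2007]
surgeries_millions = [2.4, 2.5, 2.6, 2.8, 2.9]  # hypothetical totals
a, b, r2 = linear_trend(years, surgeries_millions)
```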
Abstract:
This paper studies a simplified methodology to integrate the real-time optimization (RTO) of a continuous system into the model predictive controller in the one-layer strategy. The gradient of the economic objective function is included in the cost function of the controller. Optimal steady-state conditions of the process are searched for through the use of a rigorous non-linear process model, while the trajectory to be followed is predicted with a linear dynamic model obtained through a plant step test. The main advantage of the proposed strategy is that the resulting control/optimization problem can still be solved with a quadratic programming routine at each sampling step. Simulation results show that the proposed approach is comparable to the strategy that solves the full economic optimization problem inside the MPC controller, where the resulting control problem becomes a non-linear programming problem with a much higher computational load. (C) 2010 Elsevier Ltd. All rights reserved.
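The key point — adding the economic gradient to a quadratic MPC cost keeps the problem a QP — can be illustrated on an unconstrained two-variable case, where the QP minimizer is available in closed form. The matrices H, c, the gradient g, and the weight w below are illustrative, not taken from the paper:

```python
# Sketch of the one-layer idea: the economic gradient g enters the quadratic
# MPC cost J(du) = 0.5*du'H du + (c + w*g)'du, which remains a QP. In the
# unconstrained 2x2 case the minimizer solves H du = -(c + w*g).

def solve_2x2(H, rhs):
    # Cramer's rule for a 2x2 linear system H x = rhs.
    (a, b), (c_, d) = H
    det = a * d - b * c_
    return [(rhs[0] * d - b * rhs[1]) / det,
            (a * rhs[1] - c_ * rhs[0]) / det]

H = [[4.0, 1.0], [1.0, 3.0]]    # Hessian of the tracking cost (pos. definite)
c = [1.0, -2.0]                  # linear tracking term
g = [0.5, 0.2]                   # gradient of the economic objective
w = 2.0                          # economic weighting factor

rhs = [-(ci + w * gi) for ci, gi in zip(c, g)]
du = solve_2x2(H, rhs)           # optimal control move
```

With input constraints added, the same structure is handed to a QP routine at each sampling step, which is what keeps the one-layer strategy cheap compared with an NLP-based controller.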
Abstract:
This study presents the results of Raman spectroscopy applied to the classification of arterial tissue, based on a simplified model using basal morphological and biochemical information extracted from the Raman spectra of arteries. The Raman system uses an 830-nm diode laser, an imaging spectrograph, and a CCD camera. A total of 111 Raman spectra from arterial fragments were used to develop the model; those spectra were compared to the spectra of collagen, fat cells, smooth muscle cells, calcification, and cholesterol in a linear fit model. Non-atherosclerotic (NA), fatty and fibrous-fatty atherosclerotic plaque (A) and calcified (C) arteries exhibited different spectral signatures related to the different morphological structures present in each tissue type. Discriminant analysis based on the Mahalanobis distance was employed to classify the tissue type with respect to the relative intensity of each compound. This model was subsequently tested prospectively on a set of 55 spectra. The simplified diagnostic model showed that cholesterol, collagen, and adipocytes were the tissue constituents that gave the best classification capability, and that these changes were correlated with histopathology. The simplified model, using spectra obtained from a few tissue morphological and biochemical constituents, proved feasible with a small number of variables easily extracted from gross samples.
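The linear fit step can be sketched as an ordinary least-squares decomposition of a measured spectrum into known component spectra via the normal equations. The four-channel "spectra" below are made-up toy vectors, not real Raman data:

```python
# Sketch of the linear fit: express a measured spectrum as a linear
# combination of known component spectra (two hypothetical components,
# standing in for e.g. collagen and cholesterol) via 2x2 normal equations.

def fit_two_components(spectrum, comp1, comp2):
    # Solve min ||spectrum - a*comp1 - b*comp2||^2.
    s11 = sum(x * x for x in comp1)
    s22 = sum(x * x for x in comp2)
    s12 = sum(x * y for x, y in zip(comp1, comp2))
    r1 = sum(x * y for x, y in zip(comp1, spectrum))
    r2 = sum(x * y for x, y in zip(comp2, spectrum))
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - s12 * r2) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b

collagen    = [1.0, 2.0, 1.0, 0.0]   # hypothetical component spectra
cholesterol = [0.0, 1.0, 2.0, 1.0]
measured    = [2.0, 5.0, 4.0, 1.0]   # exactly 2*collagen + 1*cholesterol

a, b = fit_two_components(measured, collagen, cholesterol)
```

The fitted coefficients (relative intensities of each compound) are then the low-dimensional features fed to the Mahalanobis-distance discriminant analysis.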
Abstract:
This article evaluates different parameter-estimation strategies for a multiple linear regression model. To estimate the model parameters, data were used from a clinical trial whose aim was to verify whether the mechanical property of maximum force (EM-FM) is associated with femoral mass, femoral diameter and the experimental group of ovariectomized rats of the species Rattus norvegicus albinus, Wistar variety. Three methodologies are compared for estimating the model parameters: the classical methodology, based on the least squares method; the Bayesian methodology, based on Bayes' theorem; and the bootstrap method, based on resampling.
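Of the three methodologies compared, the bootstrap is the easiest to sketch: resample (x, y) pairs with replacement and re-estimate the least-squares slope on each resample. The data below are hypothetical and noise-free (y = 2x + 1), so every non-degenerate resample recovers the slope exactly:

```python
# Sketch of the bootstrap strategy: resample observations with replacement
# and recompute the OLS slope each time. Data are hypothetical, on the
# exact line y = 2x + 1.

import random

def ols_slope(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    return sxy / sxx

random.seed(42)
data = [(x, 2.0 * x + 1.0) for x in range(8)]
slopes = []
for _ in range(200):
    sample = [random.choice(data) for _ in data]
    if len({x for x, _ in sample}) > 1:   # skip degenerate resamples
        slopes.append(ols_slope(sample))

boot_mean = sum(slopes) / len(slopes)     # bootstrap point estimate
```

With noisy data, the spread of `slopes` gives the bootstrap standard error and percentile confidence intervals, which is what the article compares against the classical and Bayesian answers.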
Abstract:
The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem. Here, however, the items are grouped into different classes, so that the overall knapsack has to be divided into compartments, each compartment being loaded with items from the same class. Moreover, building a compartment incurs a fixed cost and a fixed loss of capacity in the original knapsack, and the compartment loads are bounded from below and above. The objective is to maximize the total value of the items loaded into the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer non-linear program, and in this paper we reformulate the non-linear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new and more compact integer linear model is also presented, which can be solved by a commercial branch-and-bound solver that found most of the optimal solutions for the constrained compartmentalized knapsack problem. On the other hand, the heuristics provide good solutions with low computational effort. (C) 2011 Elsevier B.V. All rights reserved.
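The problem structure can be made concrete on a tiny instance solved by brute force. The data are invented, and exhaustive enumeration is only for illustration; the paper's contribution is precisely avoiding enumeration via a linear master problem and heuristics:

```python
# Brute-force sketch of a tiny constrained compartmentalized knapsack
# instance (illustrative data). A compartment is built for each class with
# at least one chosen item; it costs FIXED_COST in value, consumes
# CAPACITY_LOSS of capacity, and its load must lie in [LOAD_MIN, LOAD_MAX].

from itertools import product

CAPACITY = 10                # overall knapsack capacity
FIXED_COST = 1               # value lost per compartment built
CAPACITY_LOSS = 1            # capacity consumed per compartment built
LOAD_MIN, LOAD_MAX = 2, 6    # compartment load bounds

classes = {  # class -> list of (weight, value) items
    "A": [(3, 6), (4, 7)],
    "B": [(2, 4), (5, 8)],
}

def subsets(items):
    for mask in product([0, 1], repeat=len(items)):
        yield [it for it, keep in zip(items, mask) if keep]

best = 0
for picks in product(*(list(subsets(items)) for items in classes.values())):
    weight = value = 0
    feasible = True
    for chosen in picks:
        if not chosen:
            continue                     # empty class: no compartment built
        load = sum(w for w, _ in chosen)
        if not LOAD_MIN <= load <= LOAD_MAX:
            feasible = False
            break
        weight += load + CAPACITY_LOSS
        value += sum(v for _, v in chosen) - FIXED_COST
    if feasible and weight <= CAPACITY:
        best = max(best, value)
```

On this instance the optimum packs item (3, 6) from class A and item (5, 8) from class B, for a net value of 12 after compartment costs.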
Abstract:
We consider the three-particle scattering S-matrix for the Landau-Lifshitz model by directly computing the set of Feynman diagrams up to second order. We show, following the analogous computations for the non-linear Schrödinger model [1, 2], that the three-particle S-matrix is factorizable in the first non-trivial order.
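Factorizability here has its standard integrable-model meaning: the three-particle S-matrix decomposes into a product of two-particle S-matrices, with consistency of the two possible orderings guaranteed by the Yang-Baxter relation:

```latex
S_{123} = S_{12}\, S_{13}\, S_{23}, \qquad
S_{12}\, S_{13}\, S_{23} = S_{23}\, S_{13}\, S_{12}.
```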
Abstract:
The class of symmetric linear regression models has the normal linear regression model as a special case and includes several models that assume that the errors follow a symmetric distribution with longer-than-normal tails. An important member of this class is the t linear regression model, which is commonly used as an alternative to the usual normal regression model when the data contain extreme or outlying observations. In this article, we develop second-order asymptotic theory for score tests in this class of models. We obtain Bartlett-corrected score statistics for testing hypotheses on the regression and the dispersion parameters. The corrected statistics have chi-squared distributions with errors of order O(n^(-3/2)), n being the sample size. The corrections represent an improvement over the corresponding original Rao's score statistics, which are chi-squared distributed up to errors of order O(n^(-1)). Simulation results show that the corrected score tests perform much better than their uncorrected counterparts in samples of small or moderate size.
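Bartlett-type corrections for score statistics typically take the polynomial form introduced by Cordeiro and Ferrari, with coefficients of order n^(-1) (a sketch of the general shape, not this paper's specific coefficients):

```latex
S^{*} = S\left[1 - \left(c + b\,S + a\,S^{2}\right)\right],
\qquad
\Pr\!\left(S^{*} \le x\right) = \Pr\!\left(\chi^{2}_{k} \le x\right) + O(n^{-3/2}),
```

where S is the original score statistic, k the number of restrictions tested, and a, b, c depend on cumulants of log-likelihood derivatives.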
Abstract:
In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, this problem is handled with the so-called linear calibration model, which assumes that the errors associated with the independent variables are negligible compared with those affecting the response variable. In this work, a new linear calibration model is proposed, assuming that the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are considered to verify the performance of the new approach. Copyright (C) 2010 John Wiley & Sons, Ltd.
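One simple way to account for heteroscedasticity in a calibration line is weighted least squares with weights 1/σ_i². This is a simplified stand-in for the paper's model (which additionally treats measurement errors in the independent variable), and the data below are hypothetical:

```python
# Sketch: fit a calibration line when error variances differ across
# observations, using weighted least squares with weights 1/sigma_i^2.
# Simplified stand-in for the paper's heteroscedastic measurement-error
# model; data are hypothetical, on the exact line y = 0.5*x + 1.

def wls_line(xs, ys, sigmas):
    ws = [1.0 / s ** 2 for s in sigmas]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw   # weighted means
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    sxx = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    sxy = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    b = sxy / sxx
    return my - b * mx, b                          # intercept, slope

xs = [1.0, 2.0, 4.0, 8.0]          # reference concentrations
ys = [1.5, 2.0, 3.0, 5.0]          # instrument responses
sigmas = [0.1, 0.1, 0.5, 1.0]      # less precise at high concentrations
a, b = wls_line(xs, ys, sigmas)
```

Once a and b are estimated, an unknown concentration is read off by inverting the line, x = (y - a)/b, which is the calibration step proper.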
Abstract:
In this article, we present the EM algorithm for performing maximum likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study is conducted to evaluate the performance of the calibration estimator in interpolation and extrapolation situations. As an application to a real data set, we fitted the studied model to a dimensional measurement method used for calculating testicular volume with a caliper, calibrated against ultrasonography as the standard method. With this methodology, we do not need to transform the variables to obtain symmetrical errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular when the skewness parameter is near zero leaves the parameter of interest unchanged. Model fitting is implemented, and the best choice between the usual calibration model and the model proposed in this article was evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion and the Hannan-Quinn criterion.
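The three information criteria used for the model choice have the standard forms, with ℓ̂ the maximized log-likelihood, p the number of parameters and n the sample size (the model with the smallest value is preferred):

```latex
\mathrm{AIC} = -2\hat\ell + 2p, \qquad
\mathrm{BIC} = -2\hat\ell + p \log n, \qquad
\mathrm{HQC} = -2\hat\ell + 2p \log\log n.
```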
Abstract:
In this article, we consider local influence analysis for the skew-normal linear mixed model (SN-LMM). As the observed-data log-likelihood associated with the SN-LMM is intractable, Cook's well-known approach cannot be applied to obtain measures of local influence. Instead, we develop local influence measures following the approach of Zhu and Lee (2001). This approach is based on the use of an EM-type algorithm and is invariant under reparametrizations. Four specific perturbation schemes are discussed. Results obtained for a simulated data set and a real data set are reported, illustrating the usefulness of the proposed methodology.
Abstract:
CONTEXT AND OBJECTIVES: Osteoporosis has frequently been observed in patients with rheumatoid arthritis. The present study was undertaken in order to evaluate factors associated with osteoporosis among women with rheumatoid arthritis. DESIGN AND SETTING: Cross-sectional study, carried out in a public hospital in São Paulo. METHODS: The participants were 83 women with rheumatoid arthritis (53.7 ± 10.0 years old). Bone mineral density (BMD) and body composition were measured by dual energy X-ray absorptiometry. The patients were divided into three groups according to BMD: group 1, normal BMD (n = 24); group 2, osteopenia (n = 38); and group 3, osteoporosis (n = 21). Tests were performed to compare differences in means and correlations, with adjustments for age, duration of disease and cumulative corticosteroid dose. The relationships between clinical factors, physical activity score, dietary intake, body composition and biochemical parameters were analyzed using linear regression models. RESULTS: Mean calcium, vitamin D and omega-6 intakes were lower than the recommendations. Associations were found between BMD and age, disease duration, parathyroid hormone concentration and fat intake. The linear regression model showed that older age, longer disease duration and lower weight were associated with lower BMD [Total femur = 0.552 + 0.06 (weight) + 0.019 (total physical activity) - 0.05 (age) - 0.003 (disease duration); R² = 48.1; P < 0.001]. CONCLUSION: The present study indicates that nutritional factors and body composition are associated with bone mass in women with rheumatoid arthritis.
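The fitted equation reported in the abstract can be applied directly as a prediction formula. The coefficients below are copied from the abstract as printed; the input values are hypothetical, chosen only to show the arithmetic:

```python
# Evaluate the regression equation reported in the abstract:
# total femur BMD = 0.552 + 0.06*weight + 0.019*physical_activity
#                   - 0.05*age - 0.003*disease_duration
# Coefficients are quoted as printed; the inputs below are hypothetical.

def total_femur_bmd(weight, activity, age, duration):
    return (0.552 + 0.06 * weight + 0.019 * activity
            - 0.05 * age - 0.003 * duration)

bmd = total_femur_bmd(weight=60, activity=5, age=55, duration=10)
```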