888 results for Two-stage stochastic model


Relevance: 100.00%

Abstract:

The design of translation-invariant and locally defined binary image operators over large windows is made difficult by decreased statistical precision and increased training time. We present a complete framework for the application of stacked design, a recently proposed technique to create two-stage operators that circumvents that difficulty. We propose a novel algorithm, based on information theory, to find groups of pixels that should be used together to predict the output value. We employ this algorithm to automate the process of creating a set of first-level operators that are later combined in a global operator. We also propose a principled way to guide this combination, by using feature selection and model comparison. Experimental results show that the proposed framework leads to better results than single-stage design. (C) 2009 Elsevier B.V. All rights reserved.
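The abstract does not spell out the information-theoretic grouping criterion, but the general idea of scoring pixel groups by how much they tell us about the output can be sketched as follows. This is a minimal illustration on synthetic data; the empirical mutual-information score, the 3x3 window, and the pair-wise search are assumptions, not the paper's algorithm.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (bits) between two discrete sequences."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for xi, yi in zip(x, y):
        joint[(xi, yi)] = joint.get((xi, yi), 0) + 1
        px[xi] = px.get(xi, 0) + 1
        py[yi] = py.get(yi, 0) + 1
    return sum(c / n * np.log2((c / n) / ((px[xi] / n) * (py[yi] / n)))
               for (xi, yi), c in joint.items())

# Toy training data: each row is a flattened 3x3 binary window, y the target pixel.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 9))
y = (X[:, 4] & X[:, 1]) ^ X[:, 7]          # synthetic target rule

# Score pixel pairs by MI between their joint pattern and the output; the top
# groups would become the domains of the first-level operators.
scores = {}
for i, j in combinations(range(9), 2):
    pattern = [tuple(p) for p in X[:, [i, j]]]
    scores[(i, j)] = mutual_information(pattern, y)
best = sorted(scores, key=scores.get, reverse=True)[:3]
print("top pixel pairs:", best)
```

On this synthetic rule, the highest-scoring pairs involve pixels 1, 4, and 7, i.e., the pixels that actually determine the output.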

Relevance: 100.00%

Abstract:

Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptual infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model, with the additional assumptions that all variance components are known and that within-cluster variances are equal, have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method-of-moments estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best, and when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (c) 2007 Elsevier B.V. All rights reserved.
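As a rough illustration of the setting (not the authors' finite population predictor), the following sketch simulates two-stage sampling from a finite clustered population, estimates variance components by the method of moments, and compares the MSE of the usual ME-style shrinkage predictor of realized cluster means against the raw sample means.

```python
import numpy as np

rng = np.random.default_rng(1)
N_clusters, N_units, n_clusters, n_units = 20, 50, 8, 10
sigma2_b, sigma2_e, n_sims = 4.0, 9.0, 2000
mse_shrink = mse_raw = 0.0

for _ in range(n_sims):
    # Finite population: cluster effects and unit errors are fixed once drawn.
    b = rng.normal(0, np.sqrt(sigma2_b), N_clusters)
    pop = 10.0 + b[:, None] + rng.normal(0, np.sqrt(sigma2_e), (N_clusters, N_units))
    cluster_means = pop.mean(axis=1)          # finite-population targets

    # Two-stage sample: clusters first, then units within sampled clusters.
    cl = rng.choice(N_clusters, n_clusters, replace=False)
    sample = np.array([rng.choice(pop[i], n_units, replace=False) for i in cl])
    ybar_i, ybar = sample.mean(axis=1), sample.mean()

    # Method-of-moments variance components (one-way ANOVA decomposition).
    msb = n_units * ybar_i.var(ddof=1)
    msw = sample.var(axis=1, ddof=1).mean()
    s2_b = max((msb - msw) / n_units, 0.0)

    # Empirical ME-style shrinkage predictor of the realized cluster means.
    k = s2_b / (s2_b + msw / n_units) if s2_b + msw > 0 else 0.0
    pred = ybar + k * (ybar_i - ybar)

    mse_shrink += np.mean((pred - cluster_means[cl]) ** 2)
    mse_raw += np.mean((ybar_i - cluster_means[cl]) ** 2)

print(f"MSE shrinkage: {mse_shrink/n_sims:.3f}   MSE raw mean: {mse_raw/n_sims:.3f}")
```

The Stanek-Singer predictor additionally exploits the finite sampling fractions, which is what the paper's MSE comparison is about.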

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

When the X̄ chart is in use, samples are regularly taken from the process, and their means are plotted on the chart. In some cases, it is too expensive to obtain the X values, but not the values of a correlated variable Y. This paper presents a model for the economic design of a two-stage control chart, that is, a control chart based on both performance (X) and surrogate (Y) variables. The process is monitored by the surrogate variable until it signals an out-of-control behavior, and then a switch is made to the X̄ chart. The X̄ chart is built with central, warning, and action regions. If an X sample mean falls in the central region, the process surveillance returns to the Ȳ chart. Otherwise, the process remains under the X̄ chart's surveillance until an X̄ sample mean falls outside the control limits. The search for an assignable cause is undertaken when the performance variable signals an out-of-control behavior. In this way, the two variables are used in an alternating fashion. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A study is performed to examine the economic advantages of using performance and surrogate variables. (C) 2003 Elsevier B.V. All rights reserved.
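The alternating Ȳ/X̄ scheme can be cast directly as a Markov chain over the monitoring states, which is what makes the cost function tractable. The sketch below computes average run lengths for such a scheme; all limits, the correlation, and the shift model for the surrogate are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

# Assumed parameters: Y-bar control limit k_y, X-bar warning/action limits
# k_w/k_a (all in standard-error units), correlation rho between X and Y,
# and a mean shift delta in standard-error units.
k_y, k_w, k_a, rho = 2.5, 1.0, 3.0, 0.8

def arl(delta):
    """Expected samples to an action signal, via the fundamental matrix."""
    p_switch = 1 - (norm.cdf(k_y - rho * delta) - norm.cdf(-k_y - rho * delta))
    p_central = norm.cdf(k_w - delta) - norm.cdf(-k_w - delta)   # back to Y-bar
    p_action = 1 - (norm.cdf(k_a - delta) - norm.cdf(-k_a - delta))
    p_warn = 1 - p_central - p_action                            # stay on X-bar
    # Transient states: 0 = monitoring with Y-bar, 1 = monitoring with X-bar.
    Q = np.array([[1 - p_switch, p_switch],
                  [p_central,    p_warn]])
    t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
    return t[0]                                                  # start on Y-bar

print(f"ARL in control: {arl(0.0):.0f}, ARL after a 1-sigma shift: {arl(1.0):.0f}")
```

The economic design then attaches sampling, false-alarm, and repair costs to these expected cycle lengths and optimizes the limits.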

Relevance: 100.00%

Abstract:

The GPS observables are subject to several errors. Among them, the systematic ones have great impact, because they degrade the accuracy of the positioning. These errors are mainly related to the GPS satellite orbits, multipath, and atmospheric effects. Recently, a method has been suggested to mitigate these errors: the semiparametric model with the penalized least squares technique (PLS). In this method, the errors are modeled as functions varying smoothly in time. Incorporating these error functions amounts to changing the stochastic model, and the results obtained are similar to those obtained by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimizing costs. The method's performance was analyzed in two experiments using data from single-frequency receivers. The first one involved a short baseline, where the main error was multipath. In the second experiment, a baseline of 102 km was used; in this case, the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data collection, the largest coordinate discrepancies relative to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for PLS and CLS, respectively. In the second one, also using 5 minutes of data, the discrepancies were 27 cm in h for PLS and 175 cm in h for CLS. These tests also showed a considerable improvement in ambiguity resolution using PLS relative to CLS, with a reduced data collection time interval. © Springer-Verlag Berlin Heidelberg 2007.
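The core of penalized least squares here is joint estimation of parametric unknowns and a smooth error function under a roughness penalty. A toy version (a generic linear model plus a smooth disturbance, penalized by second differences; nothing GPS-specific, and all values invented) might look like:

```python
import numpy as np

# Observations y = A*beta + g(t) + noise, where g varies smoothly in time
# (standing in for multipath/atmospheric errors). Estimate beta and g jointly:
#   minimize ||y - A beta - g||^2 + lam * ||D2 g||^2
rng = np.random.default_rng(3)
m = 120
t = np.linspace(0, 1, m)
A = np.column_stack([np.ones(m), t])         # parametric part (2 unknowns)
beta_true = np.array([2.0, -1.0])
g_true = 0.3 * np.sin(4 * np.pi * t)         # smooth systematic error
y = A @ beta_true + g_true + rng.normal(0, 0.05, m)

D2 = np.diff(np.eye(m), n=2, axis=0)         # second-difference operator
lam = 10.0
X = np.hstack([A, np.eye(m)])                # stacked unknowns [beta, g]
P = np.zeros((m + 2, m + 2))
# Tiny ridge on g: A and the null space of D2 share linear trends, so the
# extra term pins g down and keeps the normal matrix nonsingular.
P[2:, 2:] = lam * (D2.T @ D2) + 1e-6 * np.eye(m)

theta = np.linalg.solve(X.T @ X + P, X.T @ y)
beta_hat, g_hat = theta[:2], theta[2:]
rmse_g = np.sqrt(np.mean((g_hat - g_true) ** 2))
print("beta estimate:", np.round(beta_hat, 3), " g RMSE:", round(float(rmse_g), 4))
```

Setting lam = 0 with no penalty recovers ordinary least squares, where the smooth error is absorbed into the residuals instead of being estimated.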

Relevance: 100.00%

Abstract:

The deterministic optimal reactive power dispatch problem has been extensively studied under the assumption that the power demand and the availability of shunt reactive power compensators are known and fixed. Given this background, a two-stage stochastic optimization model is first formulated under the presumption that the load demand can be modeled by specified random parameters. A second, stochastic chance-constrained model is presented considering uncertainty in the demand and in the equivalent availability of shunt reactive power compensators. Simulations on six-bus and 30-bus test systems are used to illustrate the validity and essential features of the proposed models. These simulations show that the proposed models can warn the power system operator about a deficit of reactive power in the system and suggest that shunt reactive sources should be dispatched to hedge against the unavailability of any reactive source. © 2012 IEEE.
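The two-stage structure (commit to compensation before demand is known, take recourse after it is revealed) can be shown on a deliberately tiny scenario LP. The demands, costs, and capacity cap below are invented for illustration; the paper's model is a full power-system dispatch, not this toy.

```python
import numpy as np
from scipy.optimize import linprog

# Stage 1: choose installed shunt compensation x now.
# Stage 2: buy expensive recourse reactive support y_s once demand d_s is known.
scenarios = np.array([30.0, 45.0, 60.0])    # reactive demand (Mvar), assumed
probs = np.array([0.3, 0.5, 0.2])
c_install, c_recourse = 1.0, 4.0            # per-Mvar costs, assumed

# Variables: [x, y1, y2, y3]; minimize c*x + sum_s p_s * q * y_s.
c = np.concatenate([[c_install], c_recourse * probs])
# Coverage constraints x + y_s >= d_s, written as -x - y_s <= -d_s.
A_ub = np.zeros((3, 4))
A_ub[:, 0] = -1.0
A_ub[np.arange(3), 1 + np.arange(3)] = -1.0
b_ub = -scenarios
bounds = [(0, 50.0)] + [(0, None)] * 3      # installation capped at 50 Mvar

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x, ys = res.x[0], res.x[1:]
print(f"install x = {x:.1f} Mvar; recourse per scenario = {np.round(ys, 1)}")
```

Here the optimum installs 45 Mvar and pays for recourse only in the high-demand scenario: covering the last 15 Mvar up front would cost more than its expected recourse saving (4.0 x 0.2 = 0.8 < 1.0 per Mvar).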

Relevance: 100.00%

Abstract:

Experiments on continuous alcoholic fermentation of sugarcane juice with flocculating yeast recycle were conducted in a system of two 0.22-L tower bioreactors in series, operated at a range of dilution rates (D₁ = D₂ = 0.27-0.95 h⁻¹), a constant recycle ratio (α = F_R/F = 4.0), and a sugar concentration in the feed stream (S₀) of around 150 g/L. The data obtained under these experimental conditions were used to adjust the parameters of a mathematical model previously developed for the single-stage process. This model considers each of the tower bioreactors as a perfectly mixed continuous reactor, and the kinetics of cell growth and product formation take into account limitation by substrate and inhibition by ethanol and biomass, as well as substrate consumption for cellular maintenance. The model predictions agreed satisfactorily with the measurements taken in both stages of the cascade. The major differences with respect to the kinetic parameters previously estimated for the single-stage system were observed for the maximum specific growth rate, the inhibition constants of cell growth, and the specific rate of substrate consumption for cell maintenance. The mathematical models were validated and used to simulate alternative operating conditions, as well as to analyze the performance of the two-stage process against that of the single-stage process.
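A stripped-down version of such a cascade model (two ideally mixed stages in series, Monod growth with ethanol inhibition, recycle and biomass inhibition omitted) can be integrated to steady state as below. The kinetic form and every parameter value are illustrative placeholders, not the fitted values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Pmax = 0.4, 5.0, 90.0      # 1/h, g/L, g/L  (assumed)
Yxs, Yps = 0.05, 0.45                  # biomass and ethanol yields, g/g (assumed)
D, S0 = 0.3, 150.0                     # dilution rate (1/h), feed sugar (g/L)

def stage_rates(X, S, P, Xin, Sin, Pin):
    """Mass balances for one perfectly mixed stage."""
    mu = mu_max * S / (Ks + S) * max(1 - P / Pmax, 0.0)  # ethanol-inhibited growth
    dX = D * (Xin - X) + mu * X
    dS = D * (Sin - S) - mu * X / Yxs
    dP = D * (Pin - P) + (Yps / Yxs) * mu * X
    return dX, dS, dP

def cascade(t, u):
    X1, S1, P1, X2, S2, P2 = u
    d1 = stage_rates(X1, S1, P1, 0.0, S0, 0.0)   # stage 1: fresh medium feed
    d2 = stage_rates(X2, S2, P2, X1, S1, P1)     # stage 2: fed by stage 1
    return [*d1, *d2]

sol = solve_ivp(cascade, (0, 200), [1, 150, 0, 1, 150, 0], rtol=1e-8)
X1, S1, P1, X2, S2, P2 = sol.y[:, -1]
print(f"stage 1: S={S1:.1f} g/L, P={P1:.1f} g/L;  stage 2: S={S2:.1f} g/L, P={P2:.1f} g/L")
```

The qualitative behavior the paper exploits is visible even here: the second stage continues the conversion left unfinished by the first, raising overall ethanol concentration at the same total dilution rate.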

Relevance: 100.00%

Abstract:

In this paper, we propose three novel mathematical models for the two-stage lot-sizing and scheduling problems present in many process industries. The problem combines a continuous or quasi-continuous production feature upstream with a discrete manufacturing feature downstream, which must be synchronized. Different time-based scale representations are discussed. The first formulation encompasses a discrete-time representation. The second one is a hybrid continuous-discrete model. The last formulation is based on a continuous-time model representation. Computational tests with a state-of-the-art MIP solver show that the discrete-time representation provides better feasible solutions in short running times. On the other hand, the hybrid model achieves better solutions for longer computational times and was able to prove optimality more often. The continuous-time model is the most flexible of the three for incorporating additional operational requirements, at the cost of having the worst computational performance. Journal of the Operational Research Society (2012) 63, 1613-1630. doi:10.1057/jors.2011.159, published online 7 March 2012.
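To make the discrete-time representation concrete: its single-item, uncapacitated core reduces to the classic Wagner-Whitin lot-sizing recursion, sketched below with made-up data. The paper's formulations add everything this sketch leaves out, notably the synchronized upstream stage, capacities, and sequencing decisions.

```python
# Wagner-Whitin dynamic program for single-item, uncapacitated lot-sizing
# over discrete periods (illustrative data, not from the paper).
demand = [20, 50, 10, 50, 50, 10]     # demand per period
setup, hold = 100.0, 1.0              # setup cost per lot, holding cost per unit-period

T = len(demand)
best = [0.0] + [float("inf")] * T     # best[t] = min cost to cover periods 1..t
last = [0] * (T + 1)                  # period in which the last lot is produced

for t in range(1, T + 1):
    for j in range(1, t + 1):         # candidate: produce demands j..t in period j
        holding = sum((k - j) * hold * demand[k - 1] for k in range(j, t + 1))
        cost = best[j - 1] + setup + holding
        if cost < best[t]:
            best[t], last[t] = cost, j

# Recover the production plan by walking back through the chosen lots.
plan, t = {}, T
while t > 0:
    j = last[t]
    plan[j] = sum(demand[j - 1:t])
    t = j - 1
print(f"optimal cost: {best[T]:.0f}, lots (period -> quantity): {dict(sorted(plan.items()))}")
```

Capacitated, multi-item versions of this structure are NP-hard, which is why the paper compares MIP formulations and solver performance rather than closed-form recursions.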

Relevance: 100.00%

Abstract:

Background: For most cytotoxic and biologic anti-cancer agents, the response rate of the drug is commonly assumed to be non-decreasing with increasing dose. However, an increasing dose does not always result in an appreciable increase in the response rate. This may especially be true at high doses for a biologic agent. Therefore, in a phase II trial the investigators may be interested in testing the anti-tumor activity of a drug at more than one (often two) doses, instead of only at the maximum tolerated dose (MTD). This way, when the lower dose appears equally effective, this dose can be recommended for further confirmatory testing in a phase III trial under potential long-term toxicity and cost considerations. A common approach to designing such a phase II trial has been to use an independent (e.g., Simon's two-stage) design at each dose, ignoring the prior knowledge about the ordering of the response probabilities at the different doses. However, failure to account for this ordering constraint in estimating the response probabilities may result in an inefficient design. In this dissertation, we developed extensions of Simon's optimal and minimax two-stage designs, including both frequentist and Bayesian methods, for two doses that assume ordered response rates between doses.

Methods: Optimal and minimax two-stage designs are proposed for phase II clinical trials in settings where the true response rates at two dose levels are ordered. We borrow strength between doses using isotonic regression and control the joint and/or marginal error probabilities. Bayesian two-stage designs are also proposed under a stochastic ordering constraint.

Results: Compared to Simon's designs, when controlling the power and type I error at the same levels, the proposed frequentist and Bayesian designs reduce the maximum and expected sample sizes. Most of the proposed designs also increase the probability of early termination when the true response rates are poor.

Conclusion: The proposed frequentist and Bayesian designs are superior to Simon's designs in terms of operating characteristics (expected sample size and probability of early termination when the response rates are poor). Thus, the proposed designs lead to more cost-efficient and ethical trials, and may consequently improve and expedite the drug discovery process. The proposed designs may be extended to designs of multiple-group trials and drug-combination trials.
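For reference, the operating characteristics the dissertation benchmarks against (probability of early termination, expected sample size, and rejection probability of a standard single-dose Simon design) follow directly from binomial probabilities. The sketch below evaluates the well-known optimal design for p0 = 0.10 vs p1 = 0.30 (stage-1 boundary r1/n1 = 1/10, overall r/n = 5/29); the proposed two-dose, order-constrained designs are not reproduced here.

```python
from scipy.stats import binom

def simon_oc(p, r1, n1, r, n):
    """PET, expected N, and P(reject H0) of a Simon two-stage design at rate p."""
    pet = binom.cdf(r1, n1, p)                          # early stop if X1 <= r1
    reject = sum(binom.pmf(x1, n1, p) * (1 - binom.cdf(r - x1, n - n1, p))
                 for x1 in range(r1 + 1, n1 + 1))       # need X1 + X2 > r overall
    en = n1 + (1 - pet) * (n - n1)                      # expected sample size
    return pet, en, reject

# Simon's optimal design for p0=0.10 vs p1=0.30 (alpha <= 0.05, power >= 0.80).
for p, label in [(0.10, "under H0"), (0.30, "under H1")]:
    pet, en, rej = simon_oc(p, r1=1, n1=10, r=5, n=29)
    print(f"{label}: PET={pet:.2f}, E[N]={en:.1f}, P(reject H0)={rej:.3f}")
```

The proposed designs improve on these numbers by pooling information across the two doses (e.g., via isotonic regression) instead of running two such designs independently.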

Relevance: 100.00%

Abstract:

We propose an alternative crack propagation algorithm which effectively circumvents the variable transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage, where a local damage model is employed with the objective of defining a crack-conforming mesh, and a subsequent analysis stage with a localization limiter in the form of a modified screened Poisson equation, which dispenses with crack path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for the standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge split operations using a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
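The role of a screened Poisson equation as a localization limiter is easiest to see in one dimension: solving -l²u'' + u = d spreads a raw damage field d over the length scale l, preventing pathological mesh-dependent localization. A finite-difference sketch (1D, zero-flux boundaries, invented data; no claim to match the paper's discretization or its modified form):

```python
import numpy as np

n, L, l = 200, 1.0, 0.05                     # grid size, domain length, length scale
h = L / (n - 1)
x = np.linspace(0, L, n)
d = (np.abs(x - 0.5) < 0.01).astype(float)   # raw local damage spike at mid-span

# Assemble -l^2 u'' + u = d with centered differences.
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1] = A[i, i + 1] = -l**2 / h**2
    A[i, i] = 1 + 2 * l**2 / h**2
# Zero-flux (Neumann) boundaries: u'(0) = u'(L) = 0, via one-sided ghost nodes.
A[0, 0], A[0, 1] = 1 + l**2 / h**2, -l**2 / h**2
A[-1, -1], A[-1, -2] = 1 + l**2 / h**2, -l**2 / h**2

u = np.linalg.solve(A, d)
print(f"raw peak {d.max():.2f} -> smoothed peak {u.max():.3f}, spread over ~ l = {l}")
```

In the paper's staggered scheme, a solve of this type alternates with the standard equilibrium solve, so the regularized field, rather than an explicit crack-path computation, drives where failure localizes.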

Relevance: 100.00%

Abstract:

The aim of this study was to evaluate in situ changes in the alveolar crest bone height around immediate implant-supported crowns in comparison to tooth-supported crowns (control) with the cervical margins located at the bone crest level, without occlusal load. In Group I, after extraction of 12 mandibular premolars from 4 adult dogs, implants from the Branemark System (MK III TiU RP 4.0 x 11.5 mm) were placed to retain complete acrylic crowns. In Group II, premolars were prepared to receive complete metal crowns. Sixteen weeks after placement of the crowns (38 weeks after tooth extraction), the height of the alveolar bone crest was measured with a digital caliper. Data were analyzed statistically by the Mann-Whitney test at a 5% significance level. The in situ analysis showed no statistically significant difference (p=0.880) between the implant-supported and the tooth-supported groups (1.528 ± 0.459 mm and 1.570 ± 0.263 mm, respectively). Based on the findings of the present study, it may be concluded that initial peri-implant bone loss may result from the remodeling process necessary to establish the biological space, similar to what occurs with tooth-supported crowns.

Relevance: 100.00%

Abstract:

A new two-parameter integrable model with quantum superalgebra U_q[gl(3/1)] symmetry is proposed, which is an eight-state fermion model with correlated single-particle and pair hoppings as well as uncorrelated triple-particle hopping. The model is solved and the Bethe ansatz equations are obtained.

Relevance: 100.00%

Abstract:

Hepatectomy may prolong the survival of colorectal cancer patients with liver metastases. Two-stage liver surgery is a valid option for the treatment of bilobar colorectal liver metastasis. This video demonstrates technical aspects of a two-stage pure laparoscopic hepatectomy for bilateral liver metastasis. To the authors' knowledge, this is the first description of a two-stage laparoscopic liver resection in the English literature. A 54-year-old man with right colon cancer and synchronous bilobar colorectal liver metastasis underwent laparoscopic right colon resection followed by oxaliplatin-based chemotherapy. The patient then was referred for surgical treatment of liver metastasis. Liver volumetry showed a small left liver remnant. Surgical planning was for a totally laparoscopic two-stage liver resection. The first stage involved laparoscopic resection of segment 3 and ligation of the right portal vein. The postoperative pathology showed high-grade liver steatosis. After 4 weeks, the left liver had regenerated, and volumetry of the left liver was 43%. The second stage involved laparoscopic right hepatectomy using the intrahepatic Glissonian approach. Intrahepatic access to the main right Glissonian pedicle was achieved with two small incisions, and an endoscopic vascular stapling device was inserted between these incisions and fired. The line of liver transection was marked following the ischemic area. Liver transection was accomplished with the Harmonic scalpel and an endoscopic stapling device. The specimen was extracted through a suprapubic incision. The falciform ligament was fixed to maintain the left liver in its original anatomic position, avoiding hepatic vein kinking and outflow syndrome. The operative time was 90 min for stage 1 and 240 min for stage 2 of the procedure. The recoveries after the first and second operations were uneventful, and the patient was discharged on postoperative days 2 and 7, respectively. Two-stage liver resections can be performed safely using laparoscopy. The intrahepatic Glissonian approach is a useful tool for pedicle control of the right liver, especially after previous dissection of the hilar plate.

Relevance: 100.00%

Abstract:

Aim: To compare cervical length (CL) at 18-21 and 22-25 weeks' gestation in twin pregnancies for the prediction of spontaneous preterm delivery, and to examine cervical shortening. Methods: Retrospective cohort study of CL measured at 18-21 and 22-25 weeks' gestation in twin pregnancies. Results: Receiver operating characteristic (ROC) curves revealed areas of 0.64 (95% CI 0.53-0.75) and 0.80 (95% CI 0.72-0.88) for measurements at 18-21 and 22-25 weeks, respectively (P <= 0.001). Measurements at 18-21 weeks reached sensitivities of 33.3% and 23% and negative predictive values (NPV) of 97.3% and 86.8% for delivery at <28 and <34 weeks' gestation. Measurements at 22-25 weeks reached sensitivities of 71.4% and 38.2% and NPV of 99.1% and 91.4% for delivery at <28 and <34 weeks' gestation. Cervical shortening analysis showed an area under the ROC curve of 0.81 (95% CI 0.73-0.89), with the best cut-off at >= 2 mm/week; sensitivities of 80% and 60.8% and NPV of 98.9% and 90.6% for delivery at <28 and <34 weeks' gestation were reached. Conclusions: In twin gestations, assessment of CL at 22-25 weeks is better than assessment at 18-21 weeks for predicting preterm delivery before 34 weeks. Cervical shortening at a rate of >= 2 mm/week between 18 and 25 weeks' gestation was a good predictor of spontaneous preterm birth in this high-risk population.
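The reported metrics (sensitivity, NPV at a cutoff, area under the ROC curve) are straightforward to compute from raw data. A self-contained sketch on synthetic data follows; the distributions and the 25 mm screening cutoff are invented for illustration and are not the study's data or threshold.

```python
import numpy as np

def screen_stats(values, outcome, cutoff):
    """Sensitivity and NPV of a 'short cervix' screen: positive if value <= cutoff."""
    pos = values <= cutoff
    tp = np.sum(pos & outcome)
    fn = np.sum(~pos & outcome)
    tn = np.sum(~pos & ~outcome)
    return tp / (tp + fn), tn / (tn + fn)

# Synthetic cohort: cervical length in mm and preterm-delivery outcome.
rng = np.random.default_rng(4)
n = 300
preterm = rng.random(n) < 0.12
cl = np.where(preterm, rng.normal(22, 8, n), rng.normal(35, 8, n))
sens, npv = screen_stats(cl, preterm, cutoff=25.0)

# AUC: probability that a preterm case has a shorter CL than a term case.
auc = np.mean(cl[preterm][:, None] < cl[~preterm][None, :])
print(f"sensitivity={sens:.2f}, NPV={npv:.2f}, AUC={auc:.2f}")
```

The high NPVs in the abstract reflect the low prevalence of very early delivery, which is why NPV alone can look reassuring even when sensitivity is modest.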

Relevance: 100.00%

Abstract:

BACKGROUND AND OBJECTIVE: To compare the results of preoperative Nd:YAG laser anterior capsulotomy versus two-stage continuous curvilinear capsulorhexis (CCC) in phacoemulsification of eyes with white intumescent cataracts and liquefied cortex. PATIENTS AND METHODS: Twenty-three eyes with white intumescent cataract were consecutively randomized to phacoemulsification with preoperative Nd:YAG laser anterior capsulotomy (group 1, n = 11) or a two-stage CCC (group 2, n = 12). Intraoperative findings and postoperative outcomes were compared using nonparametric tests. RESULTS: Postoperative visual acuity, mean surgical time, mean effective phacoemulsification time, and frequency of complications were not significantly different between the two groups (P > .05). Two cases in each group were converted to the extracapsular technique. Excluding these four patients, surgical time was shorter in group 1 (P = .017). CONCLUSION: Preoperative Nd:YAG laser anterior capsulotomy is a safe technique for decompressing the capsular bag before phacoemulsification of white intumescent cataracts with liquefied cortex.