992 results for Multistage stochastic linear programs
Abstract:
A rigorous derivation of the non-linear equations governing the dynamics of an axially loaded beam is given, with a clear focus on developing robust low-dimensional models. Two important loading scenarios were considered, in which the structure is subjected to a uniformly distributed axial load and to a thrust force. These loads mimic the main forces acting on an offshore riser, for which an analytical methodology has been developed and applied. In particular, non-linear normal modes (NNMs) and non-linear multi-modes (NMMs) have been constructed by using the method of multiple scales, in order to analyse the transverse vibration responses effectively by monitoring the modal responses and mode interactions. The developed analytical models have been cross-checked against the results of FEM simulation. The FEM model, with 26 elements and 77 degrees of freedom, gave results similar to those of the low-dimensional (one-degree-of-freedom) non-linear oscillator, which was developed by constructing a so-called invariant manifold. The comparisons of the dynamical responses were made in terms of time histories, phase portraits and mode shapes. (C) 2008 Elsevier Ltd. All rights reserved.
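As context for the one-degree-of-freedom reduction mentioned above, single-mode invariant-manifold models of axially loaded beams typically take a Duffing-type form; the equation below is a generic illustrative form, not the specific oscillator derived in the paper:

\[
  \ddot{q} + \omega_1^{2}\, q + \alpha_2\, q^{2} + \alpha_3\, q^{3} = 0 ,
\]

where q is the modal coordinate of the dominant transverse mode, \(\omega_1\) its linear natural frequency, and \(\alpha_2\), \(\alpha_3\) the quadratic and cubic coefficients produced by the axial load and the geometric non-linearities.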
Abstract:
We define a new type of self-similarity for one-parameter families of stochastic processes, which applies to certain important families of processes that are not self-similar in the conventional sense. These include Hougaard Lévy processes such as the Poisson processes, Brownian motions with drift and the inverse Gaussian processes, as well as some new fractional Hougaard motions defined as moving averages of a Hougaard Lévy process. Such families have many properties in common with ordinary self-similar processes, including the form of their covariance functions and the fact that they appear as limits in a Lamperti-type limit theorem for families of stochastic processes.
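For context, the conventional (single-process) notion of self-similarity that the family-level definition above generalizes is the classical one, not the new definition introduced in the paper:

\[
  \{X(at)\}_{t \ge 0} \;\overset{d}{=}\; \{a^{H} X(t)\}_{t \ge 0}
  \qquad \text{for every } a > 0 ,
\]

for some exponent H, where \(\overset{d}{=}\) denotes equality of finite-dimensional distributions. Poisson processes and Brownian motions with drift fail this property, which is what motivates the weaker, family-level notion.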
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo (MCMC) algorithms, allowing inference on the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
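A minimal sketch of the kind of MCMC-based Bayesian fit described above, assuming a logit model for proportion data with an observation-level normal random effect to capture overdispersion; the data, priors, and step size are illustrative assumptions, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: n_i trials, y_i successes, one binary covariate x_i.
n = np.array([30, 30, 30, 30, 30, 30, 30, 30])
x = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
y = np.array([4, 9, 2, 12, 20, 14, 25, 17])      # overdispersed proportions

def log_post(theta):
    """Log-posterior of a logit model with observation-level random effects."""
    b0, b1, log_s = theta[:3]
    u = theta[3:]                                  # one random effect per observation
    s = np.exp(log_s)
    eta = b0 + b1 * x + u
    p = np.clip(1.0 / (1.0 + np.exp(-eta)), 1e-12, 1 - 1e-12)
    loglik = np.sum(y * np.log(p) + (n - y) * np.log1p(-p))   # binomial kernel
    logprior = (-0.5 * (b0**2 + b1**2) / 100.0                 # vague N(0, 100) on b0, b1
                - 0.5 * np.sum(u**2) / s**2 - len(u) * np.log(s)  # u_i ~ N(0, s^2)
                - 0.5 * log_s**2)                               # weak N(0, 1) prior on log s
    return loglik + logprior

# Random-walk Metropolis over the full parameter vector.
theta = np.zeros(3 + len(y))
lp = log_post(theta)
samples = []
for it in range(20000):
    prop = theta + 0.05 * rng.standard_normal(theta.shape)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it >= 5000 and it % 10 == 0:                # burn-in, then thinning
        samples.append(theta.copy())

samples = np.array(samples)
print("posterior means (b0, b1, log s):", samples[:, :3].mean(axis=0))
```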
Abstract:
Capybaras were monitored weekly from 1998 to 2006 by counting individuals in three anthropogenic environments (mixed agricultural fields, forest and open areas) of southeastern Brazil in order to examine the possible influence of environmental variables (temperature, humidity, wind speed, precipitation and global radiation) on the detectability of this species. There was consistent seasonality in the number of capybaras in the study area, with a specific seasonal pattern in each area. Log-linear models were fitted to the sample counts of adult capybaras separately for each sampled area, with an allowance for monthly effects, time trends and the effects of environmental variables. Log-linear models containing effects for the months of the year and a quartic time trend were highly significant. The effects of environmental variables on sample counts were different in each type of environment. As environmental variables affect capybara detectability, they should be considered in future species survey/monitoring programs.
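A minimal sketch of a log-linear (Poisson) count model with month-of-year effects and a quartic time trend, of the general kind fitted above, using statsmodels; the simulated counts and covariate names are illustrative assumptions, not the survey data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Illustrative weekly counts over several years with a seasonal pattern.
weeks = np.arange(9 * 52)
month = (weeks // 4.33).astype(int) % 12 + 1
t = weeks / weeks.max()
mu = np.exp(1.5 + 0.4 * np.sin(2 * np.pi * month / 12) + 0.8 * t - 1.2 * t**2)
counts = rng.poisson(mu)

df = pd.DataFrame({"count": counts, "month": month, "t": t})

# Log-linear model: month effects plus a quartic time trend.
model = smf.glm("count ~ C(month) + t + I(t**2) + I(t**3) + I(t**4)",
                data=df, family=sm.families.Poisson())
result = model.fit()
print(result.summary())
```

Environmental covariates such as temperature or precipitation would enter the formula as additional terms in the same way.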
Abstract:
The economic use of a 500 ha area in Piracicaba was studied for the irrigated crops of maize, tomato, sugarcane and beans, using a deterministic linear programming model and a risk-including linear programming (Target-MOTAD) model; two situations were analyzed. In the deterministic model the area was the restrictive factor, and water was not restrictive in either of the tested situations. For the first situation the maximum income obtained was R$ 1,883,372.87, and for the second situation it was R$ 1,821,772.40. In the risk-including model, a risk-accepting producer can obtain, in the first situation, the maximum income of R$ 1,883,372.87 with a minimum risk of R$ 350 per year, and in the second situation R$ 1,821,772.40 with a minimum risk of R$ 40 per year. A risk-averse producer, in turn, can obtain in the first situation a maximum income of R$ 1,775,974.81 with null risk, and in the second situation R$ 1,707,706.26 with null risk, both without water restriction. These results highlight the importance of including risk when offering alternative land uses to the producer, supporting decision making that accounts for risk aversion and the desired income.
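A minimal sketch of the deterministic linear programming formulation of the kind used above (maximize income from four irrigated crops subject to land and water limits), using scipy.optimize.linprog; the per-hectare margins, water requirements, and resource limits are illustrative assumptions, not the values of the study:

```python
import numpy as np
from scipy.optimize import linprog

crops = ["maize", "tomato", "sugarcane", "beans"]

# Illustrative gross margins (R$/ha) and water requirements (m^3/ha).
margin = np.array([2500.0, 9000.0, 3500.0, 2000.0])
water = np.array([4500.0, 6000.0, 12000.0, 3000.0])

total_area = 500.0          # ha
total_water = 4.0e6         # m^3 per season (assumed)

# linprog minimizes, so negate the margins to maximize income.
c = -margin
A_ub = np.vstack([np.ones(4), water])    # row 1: area constraint, row 2: water constraint
b_ub = np.array([total_area, total_water])
bounds = [(0, None)] * 4

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal area (ha) per crop:", dict(zip(crops, np.round(res.x, 1))))
print("maximum income (R$):", round(-res.fun, 2))
```

A Target-MOTAD formulation would add a target-income row per state of nature and deviation variables whose expected sum is bounded by the accepted risk level.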
Abstract:
We introduce the log-beta Weibull regression model based on the beta Weibull distribution (Famoye et al., 2005; Lee et al., 2007). We derive expansions for the moment generating function which do not depend on complicated functions. The new regression model represents a parametric family of models that includes as sub-models several widely known regression models that can be applied to censored survival data. We employ a frequentist analysis, a jackknife estimator, and a parametric bootstrap for the parameters of the proposed model. We derive the appropriate matrices for assessing local influences on the parameter estimates under different perturbation schemes and present some ways to assess global influences. Further, for different parameter settings, sample sizes, and censoring percentages, several simulations are performed. In addition, the empirical distribution of some modified residuals is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be extended to a modified deviance residual in the proposed regression model applied to censored data. We define martingale and deviance residuals to evaluate the model assumptions. The extended regression model is very useful for the analysis of real data and could give more realistic fits than other special regression models.
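As context for the distribution underlying this regression model, a minimal sketch of the beta-Weibull density built from the usual beta-generated construction, f(x) = g(x) G(x)^(a-1) (1-G(x))^(b-1) / B(a,b), with G the Weibull cdf; the parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy.special import beta as beta_fn
from scipy.stats import weibull_min

def beta_weibull_pdf(x, a, b, shape, scale):
    """Beta-Weibull density via the beta-generated construction."""
    g = weibull_min.pdf(x, shape, scale=scale)   # Weibull baseline density
    G = weibull_min.cdf(x, shape, scale=scale)   # Weibull baseline cdf
    return g * G**(a - 1) * (1.0 - G)**(b - 1) / beta_fn(a, b)

x = np.linspace(0.01, 5.0, 500)
pdf = beta_weibull_pdf(x, a=2.0, b=1.5, shape=1.8, scale=1.0)
print("approximate integral over the grid:", np.sum(pdf) * (x[1] - x[0]))
```

The log-beta Weibull regression model is then obtained by working with the logarithm of a beta-Weibull lifetime and letting the location depend on covariates.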
Abstract:
In this study, 20 Brazilian public schools were assessed regarding the implementation of good manufacturing practices and sanitation standard operating procedures. We used a checklist comprising 10 parts (facilities and installations, water supply, equipment and tools, pest control, waste management, personal hygiene, sanitation, storage, documentation, and training), for a total of 69 questions. The cost of implementing the modifications required by the nonconformities found was also determined, so that prioritization decisions could be based on technical data. The average nonconformity percentage at the schools with respect to the prerequisite program was 36%: 66% of the schools had inadequate facilities, 65% inadequate waste management, 44% inadequate documentation, and 35% inadequate water supply and sanitation. The estimated initial cost of the changes was U.S.$24,438, with monthly investments of 1.55% of the initially required investment. This would result in a U.S.$0.015 increase in the cost of each served meal, with the investment recovered within a year. Thus, we concluded that such modifications are economically feasible and that technical requirements should be considered when prerequisite program implementation priorities are established.
Abstract:
The detection of seizures in the newborn is a critical aspect of neurological research. Current automatic detection techniques are difficult to assess due to the problems associated with acquiring and labelling newborn electroencephalogram (EEG) data. A realistic model for newborn EEG would allow confident development, assessment and comparison of these detection techniques. This paper presents a model for newborn EEG that accounts for its self-similar and non-stationary nature. The model consists of background and seizure sub-models. The newborn EEG background model is based on the short-time power spectrum with a time-varying power law. The relationship between the fractal dimension and the power law of a power spectrum is utilized for accurate estimation of the short-time power law exponent. The newborn EEG seizure model is based on a well-known time-frequency signal model. This model addresses all significant time-frequency characteristics of newborn EEG seizure, which include: multiple components or harmonics, piecewise-linear instantaneous frequency laws, and harmonic amplitude modulation. Estimates of the parameters of both models are shown to be random and are modelled using the data from a total of 500 background epochs and 204 seizure epochs. The newborn EEG background and seizure models are validated against real newborn EEG data using the correlation coefficient. The results show that the output of the proposed models has a higher correlation with real newborn EEG than currently accepted models (a 10% and 38% improvement for background and seizure models, respectively).
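A minimal sketch of the seizure-signal ingredients listed above (a fundamental with a piecewise-linear instantaneous frequency law, a few harmonics, and amplitude modulation), written as a NumPy synthesis; the sampling rate, frequencies, breakpoints, and modulation depth are illustrative assumptions, not the fitted parameters of the paper:

```python
import numpy as np

fs = 256.0                          # sampling rate (Hz), assumed
t = np.arange(0, 30.0, 1.0 / fs)    # 30 s epoch

# Piecewise-linear instantaneous frequency law for the fundamental (Hz).
breakpoints_t = np.array([0.0, 10.0, 20.0, 30.0])
breakpoints_f = np.array([2.5, 1.8, 2.2, 1.5])
f_inst = np.interp(t, breakpoints_t, breakpoints_f)

# Phase is the (discrete) integral of the instantaneous frequency.
phase = 2 * np.pi * np.cumsum(f_inst) / fs

# Fundamental plus two harmonics with decaying, slowly modulated amplitudes.
am = 1.0 + 0.3 * np.sin(2 * np.pi * 0.2 * t)     # slow amplitude modulation
seizure = np.zeros_like(t)
for k, amp in enumerate([1.0, 0.5, 0.25], start=1):
    seizure += amp * am * np.cos(k * phase)

print("synthesised epoch:", seizure.shape[0], "samples")
```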
Abstract:
Mediated physical activity interventions can reach large numbers of people at low cost. Programs delivered through the mail that target the stage of motivational readiness have been shown to increase activity. Communication technology (websites and e-mail) might provide a means for delivering similar programs. A randomized trial was conducted between August and October 2001. Participants were staff at an Australian university (n=655; mean age = 43 years, standard deviation = 10 years). Participants were randomized to either an 8-week, stage-targeted print program (Print) or an 8-week, stage-targeted website (Web) program. The main outcome was change in self-reported physical activity.
Abstract:
The calculation of quantum dynamics is currently a central issue in theoretical physics, with diverse applications ranging from ultracold atomic Bose-Einstein condensates to condensed matter, biology, and even astrophysics. Here we demonstrate a conceptually simple method of determining the regime of validity of stochastic simulations of unitary quantum dynamics by employing a time-reversal test. We apply this test to a simulation of the evolution of a quantum anharmonic oscillator with up to 6.022×10^23 (Avogadro's number) particles. This system is realizable as a Bose-Einstein condensate in an optical lattice, for which the time-reversal procedure could be implemented experimentally.
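A minimal illustration of the time-reversal idea on a classical anharmonic oscillator (integrate forward, flip the momentum, integrate for the same duration, and check the return to the initial state); this is a deterministic toy analogue of the test, not the stochastic phase-space simulation used in the paper, and the parameters are assumptions:

```python
import numpy as np

def rk4_step(state, dt, force):
    """One RK4 step for dq/dt = p, dp/dt = force(q)."""
    def deriv(s):
        q, p = s
        return np.array([p, force(q)])
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Anharmonic (quartic) oscillator: V(q) = q^2/2 + lam * q^4/4.
lam = 0.5
force = lambda q: -q - lam * q**3

dt, n_steps = 1e-3, 5000
state0 = np.array([1.0, 0.0])

state = state0.copy()
for _ in range(n_steps):                 # forward evolution
    state = rk4_step(state, dt, force)

state[1] = -state[1]                     # time reversal: flip the momentum
for _ in range(n_steps):                 # evolve again for the same duration
    state = rk4_step(state, dt, force)
state[1] = -state[1]                     # undo the flip for comparison

print("return error:", np.abs(state - state0).max())
```

In the stochastic setting the analogous check is whether the reversed ensemble recovers the initial moments to within sampling error, which bounds the regime of validity of the simulation.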
Abstract:
A technique to simulate the grand canonical ensembles of interacting Bose gases is presented. Results are generated for many temperatures by averaging over energy-weighted stochastic paths, each corresponding to a solution of coupled Gross-Pitaevskii equations with phase noise. The stochastic gauge method used relies on an off-diagonal coherent-state expansion, thus taking into account all quantum correlations. As an example, the second-order spatial correlation function and momentum distribution for an interacting 1D Bose gas are calculated.
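A minimal split-step sketch of a single stochastic Gross-Pitaevskii-type trajectory with multiplicative phase noise on a 1D grid; this is only a schematic of one noisy trajectory, not the energy-weighted stochastic-gauge averaging described above, and the grid, coupling, and noise strength are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1D grid and its momentum space.
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

g, dt, n_steps = 0.1, 1e-3, 2000         # interaction strength, time step (assumed)
noise_amp = 0.05                          # phase-noise strength (assumed)

psi = np.exp(-x**2).astype(complex)       # initial field: a Gaussian wavepacket

half_kinetic = np.exp(-0.25j * dt * k**2)  # half-step of exp(-i k^2/2 dt)
for _ in range(n_steps):
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
    # Nonlinear step plus Gaussian phase noise (schematic Ito increment dW).
    dW = rng.standard_normal(n) * np.sqrt(dt)
    psi *= np.exp(-1j * (g * np.abs(psi)**2 * dt + noise_amp * dW))
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))

print("norm after evolution:", np.sum(np.abs(psi)**2) * dx)
```

The grand canonical averages of the paper would be built by weighting many such trajectories by their accumulated energy-dependent weights and averaging observables over the ensemble.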
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
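A minimal sketch of the dual-rail encoding mentioned above (a qubit carried by one photon in two optical modes), showing how a beam splitter and a phase shifter act as single-qubit rotations on that encoding; the angles are illustrative, and this covers only the single-photon subspace, not a full linear-optics gate set:

```python
import numpy as np

def beam_splitter(theta):
    """Mode transformation of a lossless beam splitter, restricted to one photon."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def phase_shifter(phi):
    """Phase shift applied to the second mode."""
    return np.diag([1.0, np.exp(1j * phi)])

# Dual-rail qubit: |0> = photon in mode a, |1> = photon in mode b.
ket0 = np.array([1.0, 0.0], dtype=complex)

# A 50/50 beam splitter turns |0> into an equal superposition,
# and a phase shifter then rotates the relative phase.
state = phase_shifter(np.pi / 2) @ beam_splitter(np.pi / 4) @ ket0
print("output amplitudes:", np.round(state, 3))
print("detection probabilities:", np.round(np.abs(state)**2, 3))
```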