992 results for Multistage stochastic linear programs
Abstract:
Biller-Andorno and Jüni (2014), in a widely debated commentary published in the May 22 issue of the New England Journal of Medicine, accept the estimate that mammography every 2 years from age 50 can decrease breast cancer mortality by 20%, that is, from five to four deaths per 1000 women over a 10-year period. Both the absolute and the relative risk of breast cancer death may vary depending on the baseline mortality rates in various populations and on the impact of screening mammography in reducing breast cancer mortality, which may well vary around the 20% estimate adopted. We accept, therefore, that there are still uncertainties in the absolute and relative impact of mammography screening on breast cancer mortality, given the different study designs and mammography intervals, the differences in populations, and the continuous improvements in technology (Warner, 2011; Independent UK Panel on Breast Cancer Screening, 2012). We also agree with the observation that mammography has an appreciable impact on breast cancer mortality (Bosetti et al., 2012), but clearly a much smaller one on total mortality.
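The relative and absolute risk figures quoted above can be made concrete with a short calculation (the numbers are those of the abstract; the derived quantities are standard epidemiological measures):

```python
# Relative vs. absolute risk reduction for the quoted figures:
# 5 vs. 4 breast cancer deaths per 1000 women over 10 years.
deaths_control = 5 / 1000   # baseline 10-year breast cancer mortality
deaths_screened = 4 / 1000  # mortality with biennial mammography

relative_risk_reduction = (deaths_control - deaths_screened) / deaths_control
absolute_risk_reduction = deaths_control - deaths_screened
number_needed_to_screen = 1 / absolute_risk_reduction

print(relative_risk_reduction)   # the 20% figure (~0.2)
print(absolute_risk_reduction)   # ~1 death averted per 1000 women
print(number_needed_to_screen)   # ~1000 women screened per death averted
```

The gap between the 20% relative reduction and the 0.1% absolute reduction is exactly what makes the impact on total mortality much smaller than the headline figure suggests.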
Abstract:
Langevin equations of Ginzburg-Landau form, with multiplicative noise, are proposed to study the effects of fluctuations in domain growth. These equations are derived from a coarse-grained methodology. A Cahn-Hilliard-Cook linear stability analysis predicts some effects in the transient regime. We also derive numerical algorithms for the computer simulation of these equations. The numerical results corroborate the analytical predictions of the linear analysis. We also present simulation results for spinodal decomposition at large times.
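A minimal sketch of the kind of simulation described, assuming a standard Euler-Maruyama scheme for a 1D Ginzburg-Landau Langevin equation; the multiplicative noise amplitude used here is a hypothetical illustration, not the paper's coarse-grained form:

```python
import numpy as np

# Euler-Maruyama integration of a 1D Ginzburg-Landau Langevin equation
#   d(phi)/dt = phi - phi**3 + laplacian(phi) + g(phi) * xi(t)
# with a multiplicative noise amplitude g(phi); parameters are illustrative.
rng = np.random.default_rng(0)
N, dt, steps, eps = 128, 1e-3, 2000, 0.01

phi = 0.01 * rng.standard_normal(N)          # small random initial field
for _ in range(steps):
    lap = np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)   # periodic Laplacian
    drift = phi - phi**3 + lap
    noise_amp = np.sqrt(eps * (1 + phi**2))  # hypothetical multiplicative form
    phi += drift * dt + noise_amp * np.sqrt(dt) * rng.standard_normal(N)

print(phi.std())  # field amplitude grows from the disordered initial state
```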
Abstract:
Two graphs with adjacency matrices $\mathbf{A}$ and $\mathbf{B}$ are isomorphic if there exists a permutation matrix $\mathbf{P}$ for which the identity $\mathbf{P}^{\mathrm{T}} \mathbf{A} \mathbf{P} = \mathbf{B}$ holds. Multiplying through by $\mathbf{P}$ and relaxing the permutation matrix to a doubly stochastic matrix leads to the linear programming relaxation known as fractional isomorphism. We show that the levels of the Sherali--Adams (SA) hierarchy of linear programming relaxations applied to fractional isomorphism interleave in power with the levels of a well-known color-refinement heuristic for graph isomorphism called the Weisfeiler--Lehman algorithm, or, equivalently, with the levels of indistinguishability in a logic with counting quantifiers and a bounded number of variables. This tight connection has quite striking consequences. For example, it follows immediately from a deep result of Grohe in the context of logics with counting quantifiers that a fixed number of levels of SA suffice to determine isomorphism of planar and minor-free graphs. We also offer applications in both finite model theory and polyhedral combinatorics. First, we show that certain properties of graphs, such as that of having a flow circulation of a prescribed value, are definable in the infinitary logic with counting with a bounded number of variables. Second, we exploit a lower bound construction due to Cai, Fürer, and Immerman in the context of counting logics to give simple explicit instances that show that the SA relaxations of the vertex-cover and cut polytopes do not reach their integer hulls for up to $\Omega(n)$ levels, where $n$ is the number of vertices in the graph.
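The color-refinement heuristic mentioned above (1-dimensional Weisfeiler-Lehman) can be sketched in a few lines; this is a generic textbook formulation, not the paper's construction:

```python
# Color refinement (1-dimensional Weisfeiler-Lehman): iteratively refine
# vertex colors by the multiset of neighbor colors until stable.
def color_refinement(adj):
    """adj: dict vertex -> set of neighbors. Returns the stable coloring."""
    colors = {v: 0 for v in adj}
    while True:
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        # relabel signatures with fresh canonical integer colors
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        new = {v: palette[signatures[v]] for v in adj}
        if new == colors:
            return colors
        colors = new

# Two graphs are fractionally isomorphic iff refinement yields the same
# color histogram; here the histograms differ, so the graphs are not.
path = {0: {1}, 1: {0, 2}, 2: {1}}          # path on 3 vertices
tri  = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}    # triangle
print(sorted(color_refinement(path).values()))  # [0, 0, 1]: middle vertex split off
print(sorted(color_refinement(tri).values()))   # [0, 0, 0]: vertex-transitive
```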
Abstract:
This master's thesis examines threaded programming at the upper hierarchy level of parallel programming, focusing in particular on Hyper-Threading technology. The thesis examines the advantages and disadvantages of Hyper-Threading as well as its effects on parallel algorithms. The goal of the work was to understand the implementation of Hyper-Threading in the Intel Pentium 4 processor and to enable its exploitation where it brings a performance advantage. Performance data was collected and analyzed by running a large set of benchmarks under different conditions (memory handling, compiler settings, environment variables...). Two types of algorithms were examined: matrix operations and sorting. These applications have a regular memory-access pattern, which is a double-edged sword: it is an advantage in arithmetic-logic processing but, on the other hand, degrades memory performance. The reason is that modern processors have very good raw performance when processing regular data, whereas the memory architecture is limited by cache sizes and various buffers. When the problem size exceeds a certain limit, the actual performance can drop to a fraction of the peak performance.
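The cache effect behind that performance cliff can be illustrated with a micro-benchmark (my illustration, not the thesis's test suite): the same matrix sum traversed along versus against the memory layout produces identical results but very different memory behavior once the working set exceeds the cache.

```python
import time

import numpy as np

# The same reduction computed twice over a row-major matrix: once along
# contiguous rows (cache-friendly), once along strided columns (cache-hostile).
n = 1000
a = np.ones((n, n))  # row-major (C order) by default

t0 = time.perf_counter()
row_sum = sum(a[i, :].sum() for i in range(n))   # contiguous accesses
t_row = time.perf_counter() - t0

t0 = time.perf_counter()
col_sum = sum(a[:, j].sum() for j in range(n))   # strided accesses
t_col = time.perf_counter() - t0

print(row_sum == col_sum)  # identical result, different memory behavior
```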
Abstract:
We present a dual-trap optical tweezers setup which directly measures forces using linear momentum conservation. The setup uses a counter-propagating geometry, which allows momentum measurement on each beam separately. The experimental advantages of this setup include low drift due to all-optical manipulation, and a robust calibration (independent of the features of the trapped object or buffer medium) due to the force measurement method. Although this design does not attain the high resolution of some co-propagating setups, we show that it can be used to perform different single-molecule measurements: fluctuation-based molecular stiffness characterization at different forces and hopping experiments on molecular hairpins. Remarkably, in our setup it is possible to manipulate very short tethers (such as molecular hairpins with short handles) down to the limit where the beads are almost in contact. The setup is used to illustrate a novel method for measuring the stiffness of optical traps and tethers on the basis of equilibrium force fluctuations, i.e., without the need to measure the force vs. molecular extension curve. This method is of general interest for dual-trap optical tweezers setups and can be extended to setups that do not directly measure forces.
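The fluctuation-based stiffness idea can be sketched from equipartition alone (a minimal model with synthetic data, not the paper's analysis): for a bead in a harmonic trap, var(x) = kB*T/k, so the measured force f = k*x has var(f) = k*kB*T, and the trap stiffness follows from force fluctuations without a force-extension curve.

```python
import numpy as np

# Equipartition-based stiffness estimate from equilibrium force fluctuations.
kB_T = 4.11e-21          # J, thermal energy near room temperature
k_true = 0.05e-3         # N/m (0.05 pN/nm), illustrative trap stiffness

rng = np.random.default_rng(1)
x = rng.normal(0.0, np.sqrt(kB_T / k_true), 200_000)  # equilibrium positions
f = k_true * x                                        # measured forces

k_est = np.var(f) / kB_T  # stiffness recovered from force variance alone
print(k_est / k_true)     # close to 1
```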
Abstract:
Lying at the core of statistical physics is the need to reduce the number of degrees of freedom in a system. Coarse-graining is a frequently used procedure to bridge molecular modeling with experiments. In equilibrium systems, this task can be readily performed; however, in systems out of equilibrium, a possible lack of equilibration of the eliminated degrees of freedom may lead to incomplete or even misleading descriptions. Here, we present some examples showing how an improper coarse-graining procedure may result in linear approaches to nonlinear processes, miscalculations of activation rates, and violations of the fluctuation-dissipation theorem.
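As a minimal reference point for the fluctuation-dissipation constraint that improper coarse-graining can break (my illustration, not one of the paper's examples): for the overdamped Langevin equation dx = -gamma*x dt + sqrt(2*D) dW, equilibrium requires the stationary variance to equal D/gamma, and a coarse-grained model whose noise and friction fail this relation is not describing equilibrium.

```python
import numpy as np

# Check the fluctuation-dissipation relation var(x) -> D/gamma for an
# ensemble of overdamped Langevin trajectories (Euler-Maruyama scheme).
rng = np.random.default_rng(6)
gamma, D, dt, steps = 1.0, 0.5, 1e-3, 20_000

x = np.zeros(5_000)                       # ensemble of trajectories
for _ in range(steps):
    x += -gamma * x * dt + np.sqrt(2 * D * dt) * rng.standard_normal(x.size)

print(x.var())   # approaches D / gamma = 0.5
```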
Abstract:
This research examines the impacts of the Swiss reform of the allocation of tasks, which was accepted in 2004 and implemented in 2008 to "re-assign" responsibilities between the federal government and the cantons. The public tasks were redistributed according to the leading and fundamental principle of subsidiarity. Seven tasks came under exclusive federal responsibility; ten came under the control of the cantons; and twenty-two "common tasks" were allocated to both the Confederation and the cantons. For these common tasks it was not possible to separate management from implementation. To deal with nineteen of them, the reform introduced the conventions-programs (CPs), which are public-law contracts signed by the Confederation with each canton. These CPs are generally valid for periods of four years (2008-11, 2012-15 and 2016-19, respectively); the third period is currently being prepared. Using principal-agent theory, I examine how contracts can improve political relations between a principal (the Confederation) and an agent (a canton). I also provide a first qualitative analysis of the impacts of these contracts on vertical cooperation and on the involvement of different actors, focusing my study on five CPs - protection of cultural heritage and conservation of historic monuments, encouragement of the integration of foreigners, economic development, protection against noise, and protection of nature and landscape - applied in five cantons, which represents twenty-five case studies.
Abstract:
Development of research methods requires a systematic review of their status. This study focuses on the use of Hierarchical Linear Modeling methods in psychiatric research. The evaluation includes 207 documents published up to 2007 and indexed in the ISI Web of Knowledge databases; the analyses focus on the 194 articles in the sample. Bibliometric methods are used to describe the publication patterns. Results indicate a growing interest in applying the models and an establishment of the methods after 2000. Both Lotka's and Bradford's distributions fit the data.
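For readers unfamiliar with the Lotka distribution mentioned above, a short sketch of its classic inverse-square form (a textbook illustration, not the study's fitted parameters): the fraction of authors publishing exactly n papers is proportional to 1/n**2.

```python
import math

# Lotka's law in its classic form: author share with n papers ~ C / n**2,
# where C = 6/pi**2 normalizes the infinite sum of 1/n**2 to 1.
C = 6 / math.pi**2                     # ~0.608: share of one-paper authors
share = [C / n**2 for n in range(1, 6)]
print([round(s, 3) for s in share])    # fraction of authors with 1..5 papers
```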
Abstract:
Background: Combining different sources of information to improve the available biological knowledge is a current challenge in bioinformatics. Kernel-based methods are among the most powerful approaches for integrating heterogeneous data types. Kernel-based data integration consists of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge.
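The two integration steps can be sketched as follows, assuming the simplest choices (linear kernels, combination by summation, random stand-in data); the paper's kernel choices and variable-representation plots are not reproduced here:

```python
import numpy as np

# Kernel-based integration: one kernel per data source, summed into a single
# kernel, followed by kernel PCA (center the Gram matrix, eigendecompose).
rng = np.random.default_rng(2)
X1 = rng.standard_normal((30, 5))     # data source 1 (30 shared samples)
X2 = rng.standard_normal((30, 8))     # data source 2 (same samples)

def linear_kernel(X):
    return X @ X.T

K = linear_kernel(X1) + linear_kernel(X2)      # combined kernel
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
Kc = H @ K @ H                                 # centered Gram matrix

eigvals, eigvecs = np.linalg.eigh(Kc)          # ascending eigenvalues
scores = eigvecs[:, -2:] * np.sqrt(np.abs(eigvals[-2:]))  # top-2 components
print(scores.shape)  # (30, 2): one 2-D embedding per sample
```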
Abstract:
The Feller process is a one-dimensional diffusion process with linear drift and a state-dependent diffusion coefficient that vanishes at the origin. The process remains nonnegative, and it is this property, along with its linear character, that has made the Feller process a convenient candidate for modeling a number of phenomena ranging from single-neuron firing to the volatility of financial assets. While the general properties of the process have long been well known, less known are properties related to level crossing, such as the first-passage and escape problems. In this work we thoroughly address these questions.
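A minimal simulation sketch of the process (parameter names and values are illustrative; the paper treats first-passage properties analytically): linear drift toward a mean m, diffusion coefficient proportional to the state and vanishing at the origin, integrated with a truncated Euler scheme.

```python
import numpy as np

# Feller process dX = -b*(X - m) dt + sqrt(2*k*X) dW, simulated with a
# full-truncation Euler scheme so the square root stays real.
rng = np.random.default_rng(3)
b, m, k = 1.0, 1.0, 0.25
dt, steps = 1e-3, 5000

x = np.full(10_000, m)                    # ensemble started at the mean
for _ in range(steps):
    xp = np.maximum(x, 0.0)               # truncate at the origin
    x = (x - b * (xp - m) * dt
         + np.sqrt(2 * k * xp * dt) * rng.standard_normal(x.size))

print(x.mean())        # stays near the mean m
print((x < 0).mean())  # discretization excursions below zero are rare
```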
Abstract:
The goal of this work was to implement a simulation model for studying how the torque ripple caused by a permanent-magnet synchronous machine affects the mechanics connected to the electric motor. A further aim was to determine how such a simulation model can be implemented using modern simulation software. The correctness of the simulation results was verified with test equipment built for this work. The structure under study consisted of a shaft to which an eccentric rod was attached. A mass whose position could be varied was attached to the eccentric rod; by changing the position of the mass, different natural frequencies were obtained for the structure. The eccentric rod was modeled as flexible using the finite element method. The mechanics were modeled in the ADAMS dynamics simulation software, into which the flexible eccentric rod was imported from the ANSYS finite element program. The mechanical model was then transferred to SIMULINK, where the electric drive representing the permanent-magnet synchronous machine was also modeled. The equations of the permanent-magnet synchronous machine are based on linear differential equations, to which the effect of cogging torque has been added as a disturbance signal. The electric drive model produces a torque that is fed into the mechanics modeled in ADAMS, and the rotor's angular acceleration is fed back from the mechanical model to the electric motor model. This yields a combined simulation consisting of the electric drive and the mechanics. Based on the results, the combined simulation of electric drives and mechanics can be implemented with the chosen methods, and the simulated results agree well with the measured results.
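The torque-feedback loop described above can be sketched with a one-mass stand-in for the ADAMS/SIMULINK models (all parameter values are assumptions for illustration): a drive torque with an angle-dependent ripple disturbance feeds a torsional mechanical model, whose angular acceleration closes the loop.

```python
import numpy as np

# One-mass torsional model driven by a torque with a cogging-ripple term.
J = 0.01          # kg*m^2, rotor + load inertia (assumed)
c = 0.05          # N*m*s, viscous damping (assumed)
T_mean, T_ripple, harm = 1.0, 0.05, 24   # ripple at 24x rotor angle (assumed)

dt, steps = 1e-4, 50_000
theta, omega = 0.0, 0.0
speeds = []
for _ in range(steps):
    T_drive = T_mean + T_ripple * np.sin(harm * theta)  # torque with ripple
    alpha = (T_drive - c * omega) / J    # angular acceleration (fed back)
    omega += alpha * dt
    theta += omega * dt
    speeds.append(omega)

print(np.mean(speeds[-1000:]))  # settles near T_mean / c = 20 rad/s
```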
Abstract:
ABSTRACT The traditional net present value (NPV) method for analyzing the economic profitability of an investment (based on a deterministic approach) does not adequately represent the implicit risk associated with different but correlated input variables. Using a stochastic simulation approach to evaluate the profitability of blueberry (Vaccinium corymbosum L.) production in Chile, the objective of this study is to illustrate the complexity of including risk in economic feasibility analysis when the project is subject to several correlated risks. The results of the simulation analysis suggest that ignoring the intratemporal correlation between input variables underestimates the risk associated with investment decisions. The methodological contribution of this study illustrates the complexity of the interrelationships between uncertain variables and their impact on the viability of this type of business in Chile. The steps of the economic viability analysis were as follows. First, fitted probability distributions for the stochastic input variables (SIV) were simulated and validated. Second, the random values of the SIV were used to calculate random values of variables such as production, revenues, costs, depreciation, taxes and net cash flows. Third, the complete stochastic model was simulated with 10,000 iterations using random values for the SIV. This provided information to estimate the probability distributions of the stochastic output variables (SOV), such as the net present value, internal rate of return, value at risk, average cost of production, contribution margin and return on capital. Fourth, the simulation results of the complete stochastic model were used to analyze alternative scenarios and were provided to decision makers in the form of probabilities, probability distributions and probabilistic forecasts of the SOV. The main conclusion is that this project is a profitable investment alternative among fruit crops in Chile.
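The simulation steps listed above can be sketched as follows; every parameter value here is an illustrative assumption, not the study's data, and only two correlated inputs (price and yield) stand in for the full set of SIV:

```python
import numpy as np

# Monte Carlo NPV with correlated stochastic inputs: draw correlated price
# and yield per year, build random net cash flows, discount, and estimate
# the NPV distribution over 10,000 iterations.
rng = np.random.default_rng(4)
n_iter, years, rate = 10_000, 10, 0.10
investment = 50_000.0

mean = [4.0, 8_000.0]                 # price (USD/kg), yield (kg/ha), assumed
cov = [[0.25, -150.0],                # negative intratemporal price-yield
       [-150.0, 1.2e6]]               # correlation, assumed
draws = rng.multivariate_normal(mean, cov, size=(n_iter, years))
price, yld = draws[..., 0], draws[..., 1]

revenue = price * yld
costs = 18_000.0                      # annual operating cost, assumed fixed
cash = revenue - costs                # random net cash flow per year
discount = (1 + rate) ** -np.arange(1, years + 1)
npv = cash @ discount - investment    # one NPV per iteration

print(npv.mean())                     # expected NPV
print((npv < 0).mean())               # probability of a loss
```

Dropping the off-diagonal covariance terms is exactly the "non-inclusion of intratemporal correlation" the abstract warns about: it changes the spread of the NPV distribution and hence the estimated risk.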
Abstract:
Prediction filters are well-known models for signal estimation in communications, control and many other areas. The classical method for deriving linear prediction coding (LPC) filters is based on the minimization of a mean square error (MSE). Consequently, only second-order statistics are required, but the estimate is optimal only if the residue is independent and identically distributed (iid) Gaussian. In this paper, we derive the ML estimate of the prediction filter. Relationships with robust estimation of auto-regressive (AR) processes, with blind deconvolution and with source separation based on mutual information minimization are then detailed. The algorithm, based on the minimization of a high-order-statistics criterion, uses on-line estimation of the residue statistics. Experimental results highlight the interest of this approach.
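The classical MSE baseline mentioned above can be sketched directly: the LPC coefficients solve the normal (Yule-Walker) equations built from the signal's autocorrelation. This is the second-order-statistics method the paper's ML/high-order-statistics approach generalizes; the synthetic AR(2) signal is an illustration.

```python
import numpy as np

# MSE-based LPC: solve the Yule-Walker normal equations R a = r[1:p+1].
rng = np.random.default_rng(5)

# Synthetic AR(2) signal: x[n] = 0.75*x[n-1] - 0.5*x[n-2] + e[n]
a_true = np.array([0.75, -0.5])
e = rng.standard_normal(20_000)
x = np.zeros_like(e)
for n in range(2, x.size):
    x[n] = a_true[0] * x[n - 1] + a_true[1] * x[n - 2] + e[n]

p = 2
r = np.array([x[:x.size - k] @ x[k:] for k in range(p + 1)]) / x.size
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz
a_hat = np.linalg.solve(R, r[1:p + 1])       # normal equations

print(a_hat)   # close to [0.75, -0.5]
```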