888 results for Two-stage stochastic model
Abstract:
In the field of structural dynamics, computer-aided model validation techniques are now widely used. Experimental modal data are used to correct a numerical model for subsequent analyses. However, the validated model represents only the dynamic behaviour of the structure that was tested. In reality, many factors inevitably lead to varying modal test results: changing ambient conditions during a test, slightly different test setups, a test on a nominally identical but different structure (e.g., from series production), and so on. Before a stochastic simulation can be performed, a number of assumptions must be made for the random variables involved. Consequently, an inverse method is needed that makes it possible to identify a stochastic model from experimental modal data. This work describes the development of a parameter-based approach for identifying stochastic simulation models in structural dynamics. The developed method relies on first-order sensitivities, from which the parameter means and covariances of the numerical model can be determined from stochastic experimental modal data.
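A minimal numerical sketch of the first-order identification idea described above, on synthetic data and with an assumed sensitivity matrix S (in practice S would come from the numerical, e.g. finite element, model; all values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order sensitivity matrix S (n_outputs x n_params):
# S[i, j] = d f_i / d p_j at the current parameter mean.
S = np.array([[1.0, 0.2, 0.0],
              [0.3, 1.5, 0.1],
              [0.0, 0.4, 2.0],
              [0.8, 0.0, 0.5]])

# Synthetic "experimental" modal data: repeated tests scatter around the
# model prediction f_model at the current parameter mean.
f_model = np.array([10.0, 25.0, 40.0, 55.0])
true_Sigma_p = np.diag([0.04, 0.09, 0.01])
p_samples = rng.multivariate_normal(np.zeros(3), true_Sigma_p, size=500)
f_obs = f_model + p_samples @ S.T + rng.normal(scale=0.01, size=(500, 4))

mu_f_obs = f_obs.mean(axis=0)
Sigma_f_obs = np.cov(f_obs, rowvar=False)

# Linearized propagation Sigma_f ~ S Sigma_p S^T, inverted with the
# Moore-Penrose pseudoinverse to identify the parameter statistics.
S_pinv = np.linalg.pinv(S)
delta_mu_p = S_pinv @ (mu_f_obs - f_model)     # mean update (~0 here by construction)
Sigma_p_hat = S_pinv @ Sigma_f_obs @ S_pinv.T  # identified parameter covariance

print(np.round(Sigma_p_hat, 3))  # close to diag(0.04, 0.09, 0.01)
```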
Abstract:
Optimal control theory is a powerful tool for solving control problems in quantum mechanics, ranging from the control of chemical reactions to the implementation of gates in a quantum computer. Gradient-based optimization methods are able to find high fidelity controls, but require considerable numerical effort and often yield highly complex solutions. We propose here to employ a two-stage optimization scheme to significantly speed up convergence and achieve simpler controls. The control is initially parametrized using only a few free parameters, such that optimization in this pruned search space can be performed with a simplex method. The result, considered now simply as an arbitrary function on a time grid, is the starting point for further optimization with a gradient-based method that can quickly converge to high fidelities. We illustrate the success of this hybrid technique by optimizing a geometric phase gate for two superconducting transmon qubits coupled with a shared transmission line resonator, showing that a combination of Nelder-Mead simplex and Krotov’s method yields considerably better results than either one of the two methods alone.
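A runnable toy sketch of the two-stage scheme, with a least-squares pulse-matching objective standing in for the gate infidelity and L-BFGS-B standing in for Krotov's method (these are stand-in assumptions, not the paper's actual setup):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a gate infidelity: match a target pulse shape.
t = np.linspace(0.0, 1.0, 200)
target = np.exp(-((t - 0.4) / 0.1) ** 2) + 0.5 * np.exp(-((t - 0.7) / 0.05) ** 2)

def infidelity(pulse):
    return np.sum((pulse - target) ** 2)

# Stage 1: parametrize the control with only a few free parameters
# (two Gaussians: amplitude, center, width) and use simplex search.
def pulse_from_params(p):
    a1, c1, w1, a2, c2, w2 = p
    return a1 * np.exp(-((t - c1) / w1) ** 2) + a2 * np.exp(-((t - c2) / w2) ** 2)

res1 = minimize(lambda p: infidelity(pulse_from_params(p)),
                x0=[1.0, 0.5, 0.2, 0.5, 0.8, 0.1],
                method="Nelder-Mead")

# Stage 2: treat the stage-1 result as an arbitrary function on the time
# grid (200 free values) and refine with a gradient-based method.
res2 = minimize(infidelity, x0=pulse_from_params(res1.x), method="L-BFGS-B")

print(f"simplex stage: {res1.fun:.3e}, gradient stage: {res2.fun:.3e}")
```

The design point carries over directly: the pruned search space makes the derivative-free stage cheap, and the gradient-based stage only has to polish an already reasonable control.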
Abstract:
Two formulations of model-based object recognition are described. MAP Model Matching evaluates joint hypotheses of match and pose, while Posterior Marginal Pose Estimation evaluates the pose only. Local search in pose space is carried out with the Expectation-Maximization (EM) algorithm. Recognition experiments are described where the EM algorithm is used to refine and evaluate pose hypotheses in 2D and 3D. Initial hypotheses for the 2D experiments were generated by a simple indexing method: Angle Pair Indexing. The Linear Combination of Views method of Ullman and Basri is employed as the projection model in the 3D experiments.
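A hedged sketch of EM-based 2D pose refinement in the common soft-correspondence formulation (isotropic Gaussian feature noise in the E-step, weighted Procrustes/SVD in the M-step); the data and the initialization are synthetic, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model points and a ground-truth 2D pose (rotation R_true, translation t_true).
model = rng.uniform(-1.0, 1.0, size=(15, 2))
a = 0.6
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
t_true = np.array([0.3, -0.2])
image = model @ R_true.T + t_true + rng.normal(scale=0.02, size=model.shape)

# EM refinement of an initial pose hypothesis.
R, t, sigma2 = np.eye(2), np.zeros(2), 0.5
for _ in range(100):
    proj = model @ R.T + t
    d2 = ((image[:, None, :] - proj[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-0.5 * d2 / sigma2)                  # E-step: responsibilities
    w /= w.sum(axis=1, keepdims=True)
    W = w.sum()
    x_bar = (w.sum(axis=1) @ image) / W             # weighted centroids
    m_bar = (w.sum(axis=0) @ model) / W
    C = (model - m_bar).T @ w.T @ (image - x_bar)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(C)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # M-step: optimal rotation
    t = x_bar - R @ m_bar
    sigma2 = max((w * d2).sum() / (2.0 * W), 1e-8)  # noise variance update

print(np.round(R, 3), np.round(t, 3))  # should approach R_true and t_true
```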
Abstract:
In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial Least Squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on the data of Bisbe and Otley (in press), who examine the interaction effect of innovation and style of budget use on performance. Sizeable differences emerge between OLS and disattenuated regression.
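A small synthetic illustration of the disattenuation idea: measurement error in a regressor biases the OLS slope toward zero by the regressor's reliability, so dividing by an (assumed known) reliability removes the bias. The reliability value below is constructed, not estimated from scale items as it would be in practice:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60                                   # deliberately small sample

# True scores and a noisy indicator: x_obs = x_true + measurement error.
x_true = rng.normal(size=n)
y = 0.5 * x_true + rng.normal(scale=0.5, size=n)
x_obs = x_true + rng.normal(scale=0.6, size=n)

# Reliability of x_obs = var(true) / var(observed); here var(x_true) = 1
# and the error variance is 0.36, so rel_x = 1 / 1.36.
rel_x = 1.0 / (1.0 + 0.6 ** 2)

# The OLS slope is attenuated by the reliability; dividing disattenuates it.
b_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
b_disattenuated = b_ols / rel_x

print(f"OLS: {b_ols:.3f}, disattenuated: {b_disattenuated:.3f} (true 0.5)")
```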
Abstract:
Introduction: Preeclampsia occurs in 2-7% of pregnancies. Previous studies have suggested an association between altered levels of PAPP-A and free β-hCG and the development of preeclampsia (PE) and/or low birth weight (LBW). Methods: The study is designed as a diagnostic test study with a case-control approach. Serum measurements of PAPP-A and free β-hCG were taken between week 11 and week 13+6 days over a 2-year period. Results: The cohort included 399 patients; the incidence of PE was 2.26% and that of LBW was 14.54%. The 10th-percentile cut-offs were 0.368293 MoM for PAPP-A and 0.412268 MoM for free β-hCG; specificity was 90.5 for mild PE and 90 for LBW. Free β-hCG MoM, maternal age and maternal weight behaved as risk factors, whereas higher PAPP-A MoM values and higher parity were protective factors. For severe SGA (small for gestational age), maternal age and parity behaved as risk factors, whereas an average increase in PAPP-A and free β-hCG MoM values acted as protective factors against the development of severe SGA. Conclusions: There is a significant relationship between altered PAPP-A and free β-hCG values, measured at weeks 11 to 13, and the incidence of preeclampsia and low birth weight in chromosomally normal fetuses, with levels significantly lower as disease severity increased.
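A minimal sketch, on synthetic data, of the MoM / percentile-cutoff screening arithmetic used in studies of this kind (the distributions, sample sizes and cutoff percentile below are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic marker values: unaffected controls and (fewer) affected cases.
controls = rng.lognormal(mean=0.0, sigma=0.4, size=390)
cases = rng.lognormal(mean=-0.5, sigma=0.4, size=9)

# MoM = multiples of the median of the unaffected population.
median_unaffected = np.median(controls)
mom_controls = controls / median_unaffected
mom_cases = cases / median_unaffected

# Screen-positive if MoM falls below the 10th percentile of controls.
cutoff = np.percentile(mom_controls, 10)
specificity = np.mean(mom_controls >= cutoff)   # ~0.90 by construction
sensitivity = np.mean(mom_cases < cutoff)

print(f"cutoff={cutoff:.3f} MoM, specificity={specificity:.2f}, "
      f"sensitivity={sensitivity:.2f}")
```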
Abstract:
Introduction: Underground coal mining in Colombia is one of the most important productive sectors, offering substantial employment opportunities. However, accident rates in the sector have been rising, a problem expected to grow given the development projected for 2019 in terms of workforce size and the arrival of new technologies. Objective: To determine the association between the risks identified by workers and those established by the companies in their preventive and occupational medicine, industrial hygiene and industrial safety subprograms, and to characterize the activities of those subprograms as implemented in the companies. Materials and methods: A cross-sectional study was carried out to establish frequencies of association between the risks identified by workers and those established by underground coal mining companies in the department of Boyacá. Two questionnaires (individual and company) were administered, the activities were characterized, and the association between the two was determined by statistical analysis; the sample was probabilistic and stratified, with proportional random allocation and two-stage clusters. Results: The study showed compliance only slightly above 50% of the legal requirements. Regarding risk identification and awareness, the association between risks recognized by workers and those reported by the company was significant for physical noise (16.2%) and hot work (8.88%). The association between workers' use of personal protective equipment and the equipment provided by the company was significant for cartridge respiratory protection (75.80%) and for hearing protection of the insertion type (78.00%) and ear-muff type (80.40%). Conclusions: Among other findings, the risks identified by workers and established by the companies are very few, and current legal requirements are barely met.
Abstract:
The article analyzes the determinants of the presence of unwanted children in Colombia, using data from the National Demographic and Health Survey (ENDS, 2005), specifically for women aged 40 or older. Given the special characteristics of the variable analyzed, count models are used to test whether certain socioeconomic characteristics, such as education or economic stratum, explain the presence of unwanted children. The results show that the mother's education and the area of residence are significant determinants of unplanned births. Moreover, the negative relationship between the number of unwanted children and the mother's education has key implications for social policy.
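A brief sketch of a count model of the kind described, here a Poisson regression fitted with statsmodels on synthetic data (the covariates and coefficient values are illustrative assumptions, not the study's estimates):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000

# Synthetic covariates mimicking the setting: years of education and an
# urban/rural residence indicator.
educ = rng.integers(0, 16, size=n)
urban = rng.integers(0, 2, size=n)

# Unwanted births drawn from a Poisson model with a negative education
# effect (the direction of the relationship the article reports).
lam = np.exp(0.8 - 0.10 * educ - 0.30 * urban)
y = rng.poisson(lam)

# Poisson count regression; a negative binomial model would relax the
# equidispersion assumption if the counts were overdispersed.
X = sm.add_constant(np.column_stack([educ, urban]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)  # should be near [0.8, -0.10, -0.30]
```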
Abstract:
We study the role of natural resource windfalls in explaining the efficiency of public expenditures. Using a rich dataset of expenditures and public good provision for 1,836 municipalities in Peru for the period 2001-2010, we estimate a non-monotonic relationship between the efficiency of public good provision and the level of natural resource transfers. Local governments that were extremely favored by the boom in mineral prices were more efficient in using fiscal windfalls, whereas those that received only modest transfers were less efficient. These results can be explained by the increase in political competition associated with the boom. However, the fact that increases in efficiency were related to reductions in public good provision casts doubt on the beneficial effects of political competition in promoting efficiency.
Abstract:
Changes to stratospheric sudden warmings (SSWs) over the coming century, as predicted by the Geophysical Fluid Dynamics Laboratory (GFDL) chemistry climate model [Atmospheric Model With Transport and Chemistry (AMTRAC)], are investigated in detail. Two sets of integrations, each a three-member ensemble, are analyzed. The first set is driven with observed climate forcings between 1960 and 2004; the second is driven with climate forcings from a coupled model run, including trace gas concentrations representing a midrange estimate of future anthropogenic emissions between 1990 and 2099. A small positive trend in the frequency of SSWs is found. This trend, amounting to 1 event/decade over a century, is statistically significant at the 90% confidence level and is consistent over the two sets of model integrations. Comparison of the model SSW climatology between the late 20th and 21st centuries shows that the increase is largest toward the end of the winter season. In contrast, the dynamical properties are not significantly altered in the coming century, despite the increase in SSW frequency. Owing to the intrinsic complexity of our model, the direct cause of the predicted trend in SSW frequency remains an open question.
Abstract:
A traditional method of validating the performance of a flood model, when remotely sensed data of the flood extent are available, is to compare the predicted flood extent to that observed. The performance measure employed often uses areal pattern-matching to assess the degree to which the two extents overlap. Recently, remote sensing of flood extents using synthetic aperture radar (SAR) and airborne scanning laser altimetry (LIDAR) has made the synoptic measurement of water surface elevations along flood waterlines more straightforward, and this has emphasised the possibility of using alternative performance measures based on height. This paper considers the advantages that can accrue from using a performance measure based on waterline elevations rather than one based on areal patterns of wet and dry pixels. The two measures were compared for their ability to estimate flood inundation uncertainty maps from a set of model runs carried out to span the acceptable model parameter range in a GLUE-based analysis. A 1-in-5-year flood on the Thames in 1992 was used as a test event. As is typical for UK floods, only a single SAR image of observed flood extent was available for model calibration and validation. A simple implementation of a two-dimensional flood model (LISFLOOD-FP) was used to generate model flood extents for comparison with that observed. The performance measure based on height differences of corresponding points along the observed and modelled waterlines was found to be significantly more sensitive to the channel friction parameter than the measure based on areal patterns of flood extent. The former was able to restrict the parameter range of acceptable model runs and hence reduce the number of runs necessary to generate an inundation uncertainty map. As a result, there was less uncertainty in the final flood risk map. The uncertainty analysis included the effects of uncertainties in the observed flood extent as well as in model parameters. The height-based measure was found to be more sensitive when increased heighting accuracy was achieved by requiring that observed waterline heights varied slowly along the reach. The technique allows for the decomposition of the reach into sections, with different effective channel friction parameters used in different sections, which in this case resulted in lower r.m.s. height differences between observed and modelled waterlines than those achieved by runs using a single friction parameter for the whole reach. However, a validation of the modelled inundation uncertainty using the calibration event showed a significant difference between the uncertainty map and the observed flood extent. While this was true for both measures, the difference was especially significant for the height-based one. This is likely to be due to the conceptually simple flood inundation model and the coarse application resolution employed in this case. The increased sensitivity of the height-based measure may place an increased onus on the model developer to produce a valid model.
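A toy comparison of the two kinds of performance measure discussed: an areal wet/dry overlap statistic and an r.m.s. height difference along the waterline. The grids, heights and the particular form of the overlap measure are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic wet/dry grids for observed and modelled flood extents.
obs_wet = rng.random((100, 100)) < 0.3
mod_wet = obs_wet ^ (rng.random((100, 100)) < 0.05)   # model mostly agrees

# Areal pattern measure: F = |A_obs intersect A_mod| / |A_obs union A_mod|
# (one common form of the wet/dry overlap statistic).
F = (obs_wet & mod_wet).sum() / (obs_wet | mod_wet).sum()

# Height-based measure: r.m.s. difference between observed and modelled
# water-surface elevations at corresponding waterline points.
z_obs = rng.normal(10.0, 0.1, size=50)                # waterline heights (m)
z_mod = z_obs + rng.normal(0.0, 0.15, size=50)
rmse = np.sqrt(np.mean((z_mod - z_obs) ** 2))

print(f"areal F = {F:.3f}, waterline r.m.s. height diff = {rmse:.3f} m")
```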
Abstract:
Using a simple stochastic model, the authors illustrate that the occurrence of a meridional dipole in the first empirical orthogonal function (EOF) of a time-dependent zonal jet is a simple consequence of the north–south excursion of the jet center, and this geometrical fact can be understood without appealing to fluid dynamical principles. From this it follows that one ought not, perhaps, be surprised at the fact that such dipoles, commonly referred to as the Arctic Oscillation (AO) or the Northern Annular Mode (NAM), have robustly been identified in many observational studies and appear to be ubiquitous in atmospheric models across a wide range of complexity.
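A short sketch reproducing the geometric point: for a Gaussian jet whose only time variability is a meridional shift of its center, the first EOF of the anomalies is a dipole, proportional to the latitudinal derivative of the mean jet (jet width, center and shift amplitude below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)

lat = np.linspace(-90, 90, 181)                    # latitude grid (degrees)
center = 45.0 + rng.normal(scale=3.0, size=2000)   # jet center wanders in time

# Gaussian zonal jet whose only variability is a north-south excursion.
u = np.exp(-((lat[None, :] - center[:, None]) / 10.0) ** 2)

# EOFs = right singular vectors of the anomaly matrix (time x latitude).
anom = u - u.mean(axis=0)
_, s, vt = np.linalg.svd(anom, full_matrices=False)
eof1 = vt[0]

# For a meridionally shifted jet, EOF1 is proportional to the latitudinal
# derivative of the mean jet, i.e. a dipole flanking the jet center.
print(lat[np.argmax(eof1)], lat[np.argmin(eof1)])  # lobes near 45 +/- ~7 deg
```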
Abstract:
1. We compared the baseline phosphorus (P) concentrations inferred by diatom-P transfer functions and export coefficient models at 62 lakes in Great Britain to assess whether the techniques produce similar estimates of historical nutrient status. 2. There was a strong linear relationship between the two sets of values over the whole total P (TP) gradient (2-200 µg TP L⁻¹). However, a systematic bias was observed with the diatom model producing the higher values in 46 lakes (of which values differed by more than 10 µg TP L⁻¹ in 21). The export coefficient model gave the higher values in 10 lakes (of which the values differed by more than 10 µg TP L⁻¹ in only 4). 3. The difference between baseline and present-day TP concentrations was calculated to compare the extent of eutrophication inferred by the two sets of model output. There was generally poor agreement between the amounts of change estimated by the two approaches. The discrepancy in both the baseline values and the degree of change inferred by the models was greatest in the shallow and more productive sites. 4. Both approaches were applied to two lakes in the English Lake District where long-term P data exist, to assess how well the models track measured P concentrations since approximately 1850. There was good agreement between the pre-enrichment TP concentrations generated by the models. The diatom model paralleled the steeper rise in maximum soluble reactive P (SRP) more closely than the gradual increase in annual mean TP in both lakes. The export coefficient model produced a closer fit to observed annual mean TP concentrations for both sites, tracking the changes in total external nutrient loading. 5. A combined approach is recommended, with the diatom model employed to reflect the nature and timing of the in-lake response to changes in nutrient loading, and the export coefficient model used to establish the origins and extent of changes in the external load and to assess potential reduction in loading under different management scenarios. 6. However, caution must be exercised when applying these models to shallow lakes where the export coefficient model TP estimate will not include internal P loading from lake sediments and where the diatom TP inferences may over-estimate TP concentrations because of the high abundance of benthic taxa, many of which are poor indicators of trophic state.
Abstract:
Remote sensing from space-borne platforms is often seen as an appealing method of monitoring components of the hydrological cycle, including river discharge, due to its spatial coverage. However, data from these platforms are often less than ideal because the geophysical properties of interest are rarely measured directly and the measurements that are taken can be subject to significant errors. This study assimilated water levels derived from a TerraSAR-X synthetic aperture radar image and digital aerial photography with simulations from a two-dimensional hydraulic model to estimate discharge, inundation extent, depths and velocities at the confluence of the rivers Severn and Avon, UK. An ensemble Kalman filter was used to assimilate spot-height water levels derived by intersecting shorelines from the imagery with a digital elevation model. Discharge was estimated from the ensemble of simulations using state augmentation and then compared with gauge data. Assimilating the real data reduced the error between analyzed mean water levels and levels from three gauging stations to less than 0.3 m, which is less than typically found in post-event water-mark data from the field at these scales. Measurement bias was evident, but the method still provided a means of improving estimates of discharge for high flows where gauge data are unavailable or of poor quality. Posterior estimates of discharge had standard deviations between 52.7 m³ s⁻¹ and 63.3 m³ s⁻¹, which were below 15% of the gauged flows along the reach. Therefore, assuming a roughness uncertainty of 0.03-0.05 and no model structural errors, discharge could be estimated by the EnKF with accuracy similar to that arguably expected from gauging stations during flood events. Quality control prior to assimilation, where measurements were rejected for being in areas of high topographic slope or close to tall vegetation and trees, was found to be essential. The study demonstrates the potential, but also the significant limitations, of currently available imagery to reduce discharge uncertainty in un-gauged or poorly gauged basins when combined with model simulations in a data assimilation framework.
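A compact sketch of a stochastic EnKF analysis step with state augmentation, using a toy stage-discharge relation in place of the 2D hydraulic model (the grid size, rating curve and observation values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n_ens, n_grid = 50, 20

# Toy forecast ensemble: water levels along a reach driven by an uncertain
# inflow discharge Q through a crude stage-discharge relation.
Q = rng.normal(400.0, 60.0, size=n_ens)                    # m^3/s
levels = (8.0 + 0.005 * Q[:, None]
          + np.linspace(0.5, 0.0, n_grid)[None, :]
          + rng.normal(scale=0.05, size=(n_ens, n_grid)))

# State augmentation: stack discharge onto the level state so the Kalman
# update transfers information from observed levels to Q.
X = np.column_stack([levels, Q]).T                         # (n_state, n_ens)

# Observations: spot water levels at three cells (e.g. SAR shorelines
# intersected with a DEM), with observation error sigma_o.
obs_idx = [3, 10, 16]
H = np.zeros((len(obs_idx), X.shape[0]))
H[np.arange(len(obs_idx)), obs_idx] = 1.0
sigma_o = 0.25
y = np.array([10.4, 10.2, 10.1])                           # measured levels (m)

# Stochastic EnKF analysis step with perturbed observations.
Xm = X.mean(axis=1, keepdims=True)
A = X - Xm
P = A @ A.T / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + sigma_o**2 * np.eye(len(obs_idx)))
Y = y[:, None] + rng.normal(scale=sigma_o, size=(len(obs_idx), n_ens))
Xa = X + K @ (Y - H @ X)

Q_post = Xa[-1]                                            # augmented component
print(f"prior Q: {Q.mean():.0f} +/- {Q.std():.0f}, "
      f"posterior Q: {Q_post.mean():.0f} +/- {Q_post.std():.0f} m^3/s")
```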
Abstract:
In this article we examine sources of technical efficiency for rice farming in Bangladesh. The motivation for the analysis is the need to close the rice yield gap to enable food security. We employ the DEA double bootstrap of Simar and Wilson (2007) to estimate and explain technical efficiency. This technique overcomes severe limitations inherent in the two-stage DEA approach commonly employed in the efficiency literature. From a policy perspective, our results show that the potential efficiency gains available to reduce the yield gap are greater than previously found. Statistically significant positive influences on technical efficiency are education, extension and credit, with age being a negative influence.
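A minimal sketch of the first stage only: input-oriented, constant-returns DEA scores computed as linear programs on synthetic farms. In the second stage one would regress these scores on environmental covariates; Simar and Wilson's contribution is to replace that naive regression with a bootstrapped truncated regression so that inference is valid, which is omitted here for brevity:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(8)
n = 30

# Synthetic farms: two inputs (land, labour), one output (rice yield).
X = rng.uniform(1.0, 5.0, size=(n, 2))
eff_true = rng.uniform(0.6, 1.0, size=n)
y = eff_true * (X[:, 0] ** 0.4 * X[:, 1] ** 0.5)

def dea_input_efficiency(o):
    """Input-oriented CRS DEA score theta for unit o.

    min theta  s.t.  sum_j lam_j x_j <= theta * x_o,
                     sum_j lam_j y_j >= y_o,  lam >= 0.
    Decision vector: [theta, lam_1, ..., lam_n].
    """
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[o], X.T]            # input rows: sum lam x - theta x_o <= 0
    A_out = np.c_[0.0, -y[None, :]]     # output row: -sum lam y <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(2), -y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

scores = np.array([dea_input_efficiency(o) for o in range(n)])
print(scores.round(2))  # scores of 1.0 mark the efficient frontier
```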
Abstract:
The paper provides one of the first applications of the double bootstrap procedure (Simar and Wilson 2007) in a two-stage estimation of the effect of environmental variables on non-parametric estimates of technical efficiency. This procedure enables consistent inference within models explaining efficiency scores, while simultaneously producing standard errors and confidence intervals for these efficiency scores. The application is to 88 livestock and 256 crop farms in the Czech Republic, split into individual and corporate farms.