63 results for the Balanced Scorecard
Abstract:
Many scientific and engineering applications involve inverting large matrices or solving systems of linear algebraic equations. Solving these problems with proven direct methods can take a very long time, since their cost grows with the size of the matrix. The computational complexity of stochastic Monte Carlo methods, by contrast, depends only on the number of chains and the length of those chains. The computing power needed by these inherently parallel Monte Carlo methods can be supplied very efficiently by distributed computing technologies such as Grid computing. In this paper we show how a load-balanced Monte Carlo method for computing the inverse of a dense matrix can be constructed, show how the method can be implemented on the Grid, and demonstrate how efficiently the method scales on multiple processors. (C) 2007 Elsevier B.V. All rights reserved.
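The abstract does not give implementation details, but a minimal NumPy sketch of one standard Monte Carlo inversion scheme (not necessarily the authors' load-balanced Grid implementation) illustrates why the cost depends only on the number and length of the chains: the inverse is estimated from the Neumann series A⁻¹ = Σₖ Cᵏ with C = I − A, assuming ‖C‖ < 1, and every chain is an independent random walk, so chains can be distributed across processors freely.

```python
import numpy as np

def mc_inverse(A, n_chains=10_000, chain_len=20, seed=None):
    """Monte Carlo estimate of A^{-1} via the Neumann series
    A^{-1} = sum_k C^k with C = I - A (valid only when ||C|| < 1)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    C = np.eye(n) - A
    # Row-stochastic transition matrix; assumes every row of C has a nonzero entry.
    P = np.abs(C) / np.abs(C).sum(axis=1, keepdims=True)
    inv_est = np.zeros((n, n))
    for i in range(n):                              # chains from i estimate row i of A^{-1}
        for _ in range(n_chains):
            state, weight = i, 1.0
            inv_est[i, state] += weight             # k = 0 term (identity)
            for _ in range(chain_len):
                nxt = rng.choice(n, p=P[state])
                weight *= C[state, nxt] / P[state, nxt]
                state = nxt
                inv_est[i, state] += weight         # contribution of the k-th power
    return inv_est / n_chains
```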
Abstract:
In this paper we consider bilinear forms of matrix polynomials and show that these polynomials can be used to construct solutions to the problems of solving systems of linear algebraic equations, matrix inversion, and finding extremal eigenvalues. An Almost Optimal Monte Carlo (MAO) algorithm for computing bilinear forms of matrix polynomials is presented. Results for the computational cost of a balanced algorithm for computing the bilinear form of a matrix power are presented, i.e., an algorithm for which the probability and systematic errors are of the same order, and this cost is compared with that of a corresponding deterministic method.
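As a hedged illustration of the quantity being estimated (the abstract does not specify the sampling densities), the sketch below computes the bilinear form (v, A^k h) of a single matrix power by plain importance-sampled Markov chains; the MAO algorithm of the paper chooses the initial and transition densities to be (almost) optimal rather than this naive proportional choice.

```python
import numpy as np

def mc_bilinear_form(v, A, h, k, n_chains=20_000, seed=None):
    """Plain Monte Carlo estimate of the bilinear form (v, A^k h).

    Chains start at index i with probability |v_i| / sum|v| and move with
    transition probabilities proportional to |A_ij| (rows assumed nonzero).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    p0 = np.abs(v) / np.abs(v).sum()
    P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)
    total = 0.0
    for _ in range(n_chains):
        i = rng.choice(n, p=p0)
        w = v[i] / p0[i]                  # importance weight of the start index
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            w *= A[i, j] / P[i, j]        # weight update along the walk
            i = j
        total += w * h[i]
    return total / n_chains
```

Balancing in the paper's sense then amounts to choosing the number of chains so that the stochastic (probability) error is of the same order as the systematic error.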
Abstract:
The precision of quasioptical null-balanced bridge instruments for transmission and reflection coefficient measurements at millimeter and submillimeter wavelengths is analyzed. A Jones matrix analysis is used to describe the amount of power reaching the detector as a function of grid angle orientation, sample transmittance/reflectance and phase delay. An analysis is performed of the errors involved in determining the complex transmission and reflection coefficient after taking into account the quantization error in the grid angle and micrometer readings, the transmission or reflection coefficient of the sample, the noise equivalent power of the detector, the source power and the post-detection bandwidth. For a system fitted with a rotating grid with resolution of 0.017 rad and a micrometer quantization error of 1 μm, a 1 mW source, and a detector with a noise equivalent power of 5 × 10⁻⁹ W Hz⁻¹/², the maximum errors at an amplitude transmission or reflection coefficient of 0.5 are below ±0.025.
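As a generic illustration of the Jones-matrix bookkeeping described above (an ideal wire grid followed by an isotropic sample; the function names and simple geometry are assumptions, not the instrument-specific analysis of the paper):

```python
import numpy as np

def grid_jones(theta):
    """Jones matrix of an ideal wire-grid polariser whose pass axis lies
    at angle theta (rad) to the horizontal reference polarisation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s],
                     [c * s, s * s]])

def detector_power(theta, t_amp, phase, e_in=(1.0, 0.0)):
    """Relative power reaching the detector for an isotropic sample with
    amplitude transmittance t_amp and a given phase delay, after the grid."""
    sample = t_amp * np.exp(1j * phase) * np.eye(2)
    e_out = sample @ grid_jones(theta) @ np.asarray(e_in, dtype=complex)
    return float(np.vdot(e_out, e_out).real)
```

Sweeping theta in such a model reproduces the cos²θ dependence of detected power that the null-balancing procedure exploits; quantization of theta and of the micrometer reading are the instrument error sources the analysis above accounts for.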
Abstract:
Milk is a complex and complete food containing an array of essential nutrients that contribute toward a healthy, balanced diet. Numerous epidemiological studies have revealed that high consumption of milk and dairy products may have protective effects against coronary heart disease (CHD), stroke, diabetes, certain cancers (such as colorectal and bladder cancers), and dementia, although the mechanisms of action are unclear. Despite this epidemiological evidence, milk fatty acid profiles often lead to a negative perception of milk and dairy products. However, altering the fatty acid profile of milk by changing the dairy cow diet is a successful strategy, and intervention studies have shown that this approach may lead to further benefits of milk/dairy consumption. Overall, evidence suggests that individuals who consume a greater amount of milk and dairy products have a slight health advantage over those who do not consume milk and dairy products.
Abstract:
This article examines the operational characteristics of supply-chain partnerships and identifies the relational attributes that cultivate knowledge transfer in such partnerships. A set of theoretical propositions is developed. A case study of a computer manufacturer's supply chain was conducted to examine their validity. The findings support the view that trust, commitment, interdependence, shared meaning, and balanced power facilitate knowledge transfer in supply-chain partnerships, and that knowledge transfer should be treated as a dynamic multistage process.
Abstract:
The synthesis of galactooligosaccharides (GOS) by whole cells of Bifidobacterium bifidum NCIMB 41171 was investigated by developing a set of mathematical models. These were second order polynomial equations, which described responses related to the production of GOS constituents, the selectivity of lactose conversion into GOS, and the relative composition of the produced GOS mixture, as a function of the amount of biocatalyst, temperature, initial lactose concentration, and time. The synthesis reactions were followed for up to 36 h. Samples were withdrawn every 4 h, tested for β-galactosidase activity, and analysed for their carbohydrate content. GOS synthesis was well explained by the models, which were all significant (P < 0.001). The GOS yield increased as temperature increased from 40 °C to 60 °C, as transgalactosylation became more pronounced compared to hydrolysis. The relative composition of GOS produced changed significantly with the initial lactose concentration (P < 0.001); higher ratios of tri-, tetra-, and penta-galactooligosaccharides to transgalactosylated disaccharides were obtained as lactose concentration increased. Time was a critical factor, as a balanced state between GOS synthesis and hydrolysis was roughly attained in most cases between 12 and 20 h, and was followed by more pronounced GOS hydrolysis than synthesis.
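A hedged sketch of how second-order polynomial (response-surface) models of this kind are typically fitted by least squares; the factor names follow the abstract, but the data below are purely illustrative and none of the study's fitted coefficients are reproduced:

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Design matrix for a full second-order polynomial in the columns of X:
    intercept, linear, squared, and two-factor interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

# Factors: biocatalyst amount, temperature, initial lactose concentration, time.
# Response: e.g. GOS yield. Values below are random placeholders.
rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 4))
y = rng.uniform(size=30)
D = quadratic_design_matrix(X)
coeffs, *_ = np.linalg.lstsq(D, y, rcond=None)   # least-squares polynomial fit
y_hat = D @ coeffs                               # fitted response-surface values
```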
Abstract:
The case for property has typically rested on the application of modern portfolio theory (MPT), in that property has been shown to offer increased diversification benefits within a multi-asset portfolio without hurting portfolio returns, especially for lower-risk portfolios. However, this view is based on the use of historic, usually appraisal-based, data for property. Recent research suggests strongly that such data significantly underestimate the risk characteristics of property, because appraisals explicitly or implicitly smooth out much of the real volatility in property returns. This paper examines the portfolio diversification effects of including property in a multi-asset portfolio, using UK appraisal-based (smoothed) data and several derived de-smoothed series. Having considered the effects of de-smoothing, we then include a further low-risk asset (cash) to investigate whether property's place in a low-risk portfolio is maintained. The conclusions of this study are that the previously supposed benefits of including property have been overstated. Although property may still have a place in a 'balanced' institutional portfolio, the case for property needs to be reassessed and not rest simplistically on the application of MPT.
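The de-smoothed series referred to above are typically derived with a first-order reverse filter of the appraisal returns; a minimal sketch, assuming the common first-order smoothing model with parameter alpha (the paper's exact unsmoothing procedure may differ):

```python
import numpy as np

def desmooth(appraisal_returns, alpha):
    """Recover an unsmoothed return series r_t from appraisal returns r*_t
    under the smoothing model r*_t = alpha * r*_{t-1} + (1 - alpha) * r_t."""
    r_star = np.asarray(appraisal_returns, dtype=float)
    r = np.empty_like(r_star)
    r[0] = r_star[0]                      # no earlier appraisal to unsmooth against
    r[1:] = (r_star[1:] - alpha * r_star[:-1]) / (1.0 - alpha)
    return r
```

Under this model the de-smoothed series keeps the mean of the appraisal series but has a substantially higher variance, which is what weakens property's apparent diversification benefit.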
Abstract:
Our objective was to determine whether the endothelial nitric oxide synthase (eNOS) Glu298Asp polymorphism influences vascular response to raised NEFA enriched with saturated fatty acids (SFA) or long-chain (LC) n-3 polyunsaturated fatty acids (PUFA). Subjects, prospectively recruited by genotype (Glu298, n = 30 and Asp298, n = 29; balanced for age and gender), consumed SFA on two occasions, with and without the substitution of 0.07 g fat/kg body weight with LC n-3 PUFA, and with heparin infusion to elevate NEFA. Endothelial function was measured before and after NEFA elevation (240 min), with blood samples taken every 30 min. Flow-mediated dilation (FMD) decreased following SFA alone and increased following SFA+LC n-3 PUFA. There were 2-fold differences in the change in FMD response to the different fat loads between the Asp298 and Glu298 genotypes (P = 0.002) and between genders (P < 0.02). Sodium nitroprusside-induced reactivity, measured by laser Doppler imaging with iontophoresis, was significantly greater with SFA+LC n-3 PUFA in all female subjects (P < 0.001) but not in males. Elevated NEFA influences both endothelial-dependent and endothelial-independent vasodilation during the postprandial phase. Effects of fat composition appear to be genotype and gender dependent, with the greatest difference in vasodilatory response to the two fat loads seen in the Asp298 females.
Abstract:
We describe a one-port de-embedding technique suitable for the quasi-optical characterization of terahertz integrated components at frequencies beyond the operational range of most vector network analyzers. This technique is also suitable when precision terminations cannot be manufactured to sufficiently fine tolerances for the application of a TRL de-embedding technique. The technique is based on vector reflection measurements of a series of easily realizable test pieces. A theoretical analysis is presented for the precision of the technique when implemented using a quasi-optical null-balanced bridge reflectometer. The analysis takes into account quantization effects in the linear and angular encoders associated with the balancing procedure, as well as source power and detector noise equivalent power. The precision in measuring waveguide characteristic impedance and attenuation using this de-embedding technique is further analyzed after taking into account changes in the power coupled due to axial, rotational, and lateral alignment errors between the device under test and the instrument's test port. The analysis is based on the propagation of errors after assuming imperfect coupling of two fundamental Gaussian beams. The required precision in repositioning the samples at the instrument's test port is discussed. Quasi-optical measurements using the de-embedding process for a WR-8 adjustable precision short at 125 GHz are presented. The de-embedding methodology may be extended to allow the determination of S-parameters of arbitrary two-port junctions. The measurement technique proposed should prove most useful above 325 GHz, where there is a lack of measurement standards.
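The abstract does not give the error model, but one-port de-embedding from reflection measurements of known test pieces is commonly expressed through the three-term bilinear error model Γ_m = e00 + e01·e10·Γ/(1 − e11·Γ). A hedged sketch, with purely illustrative standard and measurement values:

```python
import numpy as np

def solve_error_terms(gamma_true, gamma_meas):
    """Solve the one-port error model  G_m = a + G*G_m*b - G*c,
    where a = e00, b = e11, c = e00*e11 - e01*e10, from three known standards."""
    G = np.asarray(gamma_true, dtype=complex)
    Gm = np.asarray(gamma_meas, dtype=complex)
    M = np.column_stack([np.ones(3), G * Gm, -G])
    a, b, c = np.linalg.solve(M, Gm)
    return a, b, c

def de_embed(gamma_meas, a, b, c):
    """Correct a raw reflection measurement using the solved error terms."""
    return (gamma_meas - a) / (gamma_meas * b - c)

# Illustrative test pieces (e.g. flush short, matched load, offset short) and
# made-up raw readings; real standards and readings come from the instrument.
standards = np.array([-1.0 + 0j, 0.0 + 0j, 0.6 + 0.8j])
measured  = np.array([-0.93 + 0.05j, 0.02 - 0.01j, 0.55 + 0.78j])
a, b, c = solve_error_terms(standards, measured)
gamma_dut = de_embed(0.40 + 0.10j, a, b, c)   # corrected reflection coefficient
```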
Abstract:
The time-dependent climate response to changing concentrations of greenhouse gases and sulfate aerosols is studied using a coupled general circulation model of the atmosphere and the ocean (ECHAM4/OPYC3). The concentrations of the well-mixed greenhouse gases such as CO2, CH4, N2O, and CFCs are prescribed for the past (1860–1990) and projected into the future according to Intergovernmental Panel on Climate Change (IPCC) scenario IS92a. In addition, the space–time distribution of tropospheric ozone is prescribed, and the tropospheric sulfur cycle is calculated within the coupled model using sulfur emissions of the past and projected into the future (IS92a). The radiative impact of the aerosols is considered via both the direct and the indirect (i.e., through cloud albedo) effect. It is shown that the simulated trend in sulfate deposition since the end of the last century is broadly consistent with ice core measurements, and the calculated radiative forcings from preindustrial to present time are within the uncertainty range estimated by the IPCC. Three climate perturbation experiments are performed, applying different forcing mechanisms, and the results are compared with those obtained from a 300-yr unforced control experiment. As in previous experiments, the climate response is similar, but weaker, if aerosol effects are included in addition to greenhouse gases. One notable difference from previous experiments is that the strength of the Indian summer monsoon is not fundamentally affected by the inclusion of aerosol effects. Although the monsoon is damped compared to a greenhouse-gas-only experiment, it is still more vigorous than in the control experiment. This different behavior, compared to previous studies, is the result of the different land–sea distribution of aerosol forcing. Somewhat unexpectedly, the intensity of the global hydrological cycle becomes weaker in a warmer climate if both direct and indirect aerosol effects are included in addition to the greenhouse gases. This can be related to anomalous net radiative cooling of the earth's surface through aerosols, which is balanced by reduced turbulent transfer of both sensible and latent heat from the surface to the atmosphere.
Abstract:
A system for continuous data assimilation is presented and discussed. To simulate the dynamical development, a channel version of a balanced barotropic model is used, and geopotential (height) data are assimilated into the model's computations as they become available. In the first experiment the updating is performed every 24th, 12th and 6th hour with a given network. The stations are distributed at random in 4 groups in order to simulate 4 areas with different densities of stations. Optimum interpolation is performed for the difference between the forecast and the valid observations. The RMS error of the analyses is reduced in time, the error being smaller the more frequently the updating is performed. Updating every 6th hour yields an error in the analysis less than the RMS error of the observations. In a second experiment the updating is performed with data from a moving satellite with a side-scan capability of about 15°. If the satellite data are analysed at every time step before they are introduced into the system, the error of the analysis is reduced to a value below the RMS error of the observations already after 24 hours, and the procedure as a whole gives a better result than updating from a fixed network. If the satellite data are introduced without any modification, the error of the analysis is reduced much more slowly, and it takes about 4 days to reach a result comparable to the one obtained when the data have been analysed.
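The optimum interpolation step applied to the forecast-minus-observation differences can be sketched in its generic form; the observation operator, covariances, and station layout below are illustrative assumptions, not the configuration of the experiments:

```python
import numpy as np

def oi_analysis(x_f, y_obs, H, B, R):
    """Generic optimum-interpolation (statistical) analysis update:
    x_a = x_f + K (y - H x_f),  with static gain K = B H^T (H B H^T + R)^{-1}."""
    innovation = y_obs - H @ x_f
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_f + K @ innovation

# Illustrative setup: 10 grid points of geopotential height, 4 stations.
rng = np.random.default_rng(0)
x_f = rng.normal(size=10)                                   # forecast heights
H = np.zeros((4, 10)); H[np.arange(4), [1, 3, 6, 8]] = 1.0  # stations sample 4 points
B = 2.0 * np.eye(10)                                        # assumed forecast error covariance
R = 0.5 * np.eye(4)                                         # assumed observation error covariance
y = H @ x_f + rng.normal(scale=0.7, size=4)                 # synthetic observations
x_a = oi_analysis(x_f, y, H, B, R)                          # updated analysis
```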
Abstract:
Observational and numerical evidence suggest that variability in the extratropical stratospheric circulation has a demonstrable impact on tropospheric variability on intraseasonal time scales. In this study, it is demonstrated that the amplitude of the observed tropospheric response to vacillations in the stratospheric flow is quantitatively similar to the zonal-mean balanced response to the anomalous wave forcing at stratospheric levels. It is further demonstrated that the persistence of the tropospheric response is consistent with the impact of anomalous diabatic heating in the polar stratosphere as stratospheric temperatures relax to climatology. The results contradict previous studies that suggest that variations in stratospheric wave drag are too weak to account for the attendant changes in the tropospheric flow. However, the results also reveal that stratospheric processes alone cannot account for the observed meridional redistribution of momentum within the troposphere.
Abstract:
The behavior of the ensemble Kalman filter (EnKF) is examined in the context of a model that exhibits a nonlinear chaotic (slow) vortical mode coupled to a linear (fast) gravity wave of a given amplitude and frequency. It is shown that accurate recovery of both modes is enhanced when covariances between fast and slow normal-mode variables (which reflect the slaving relations inherent in balanced dynamics) are modeled correctly. More ensemble members are needed to recover the fast, linear gravity wave than the slow, vortical motion. Although the EnKF tends to diverge in the analysis of the gravity wave, the filter divergence is stable and does not lead to a great loss of accuracy. Consequently, provided the ensemble is large enough and observations are made that reflect both time scales, the EnKF is able to recover both time scales more accurately than optimal interpolation (OI), which uses a static error covariance matrix. For OI it is also found to be problematic to observe the state at a frequency that is a subharmonic of the gravity wave frequency, a problem that is in part overcome by the EnKF. However, error in the modeled gravity wave parameters can be detrimental to the performance of the EnKF and remove its implied advantages, suggesting that a modified algorithm or a method for accounting for model error is needed.
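A minimal sketch of the perturbed-observation EnKF analysis step, in which the forecast error covariances (including the fast–slow cross-covariances mentioned above) are estimated directly from the ensemble; this is the generic algorithm, not the specific configuration used in the study:

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, rng):
    """Perturbed-observation EnKF analysis step.

    ensemble : (n_members, n_state) forecast ensemble
    H        : (n_obs, n_state) observation operator
    R        : (n_obs, n_obs) observation error covariance
    """
    n_members = ensemble.shape[0]
    x_mean = ensemble.mean(axis=0)
    X = ensemble - x_mean                         # ensemble anomalies
    # Sample covariance carries the cross-covariances between slow (vortical)
    # and fast (gravity wave) variables that keep the analysis balanced.
    Pf = X.T @ X / (n_members - 1)
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    analysis = np.empty_like(ensemble)
    for m in range(n_members):
        y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R)
        analysis[m] = ensemble[m] + K @ (y_pert - H @ ensemble[m])
    return analysis
```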
Abstract:
Many physical systems exhibit dynamics with vastly different time scales. Often the different motions interact only weakly and the slow dynamics is naturally constrained to a subspace of phase space, in the vicinity of a slow manifold. In geophysical fluid dynamics this reduction in phase space is called balance. Classically, balance is understood by way of the Rossby number R or the Froude number F; either R ≪ 1 or F ≪ 1. We examined the shallow-water equations and Boussinesq equations on an f-plane and determined a dimensionless parameter ε, small values of which imply a time-scale separation. In terms of R and F, ε = RF/√(R² + F²). We then developed a unified theory of (extratropical) balance based on ε that includes all cases of small R and/or small F. The leading-order systems are ensured to be Hamiltonian and turn out to be governed by the quasi-geostrophic potential-vorticity equation. However, the height field is not necessarily in geostrophic balance, so the leading-order dynamics are more general than in quasi-geostrophy. Thus the quasi-geostrophic potential-vorticity equation (as distinct from the quasi-geostrophic dynamics) is valid more generally than its traditional derivation would suggest. In the case of the Boussinesq equations, we have found that balanced dynamics generally implies hydrostatic balance without any assumption on the aspect ratio; only when the Froude number is not small and it is the Rossby number that guarantees a time-scale separation must we impose the requirement of a small aspect ratio to ensure hydrostatic balance.
Abstract:
The problem of spurious excitation of gravity waves in the context of four-dimensional data assimilation is investigated using a simple model of balanced dynamics. The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode, and can be initialized such that the model evolves on a so-called slow manifold, where the fast motion is suppressed. Identical twin assimilation experiments are performed, comparing the extended and ensemble Kalman filters (EKF and EnKF, respectively). The EKF uses a tangent linear model (TLM) to estimate the evolution of forecast error statistics in time, whereas the EnKF uses the statistics of an ensemble of nonlinear model integrations. Specifically, the case is examined where the true state is balanced, but observation errors project onto all degrees of freedom, including the fast modes. It is shown that the EKF and EnKF will assimilate observations in a balanced way only if certain assumptions hold, and that, outside of ideal cases (i.e., with very frequent observations), dynamical balance can easily be lost in the assimilation. For the EKF, the repeated adjustment of the covariances by the assimilation of observations can easily unbalance the TLM, and destroy the assumptions on which balanced assimilation rests. It is shown that an important factor is the choice of initial forecast error covariance matrix. A balance-constrained EKF is described and compared to the standard EKF, and shown to offer significant improvement for observation frequencies where balance in the standard EKF is lost. The EnKF is advantageous in that balance in the error covariances relies only on a balanced forecast ensemble, and that the analysis step is an ensemble-mean operation. Numerical experiments show that the EnKF may be preferable to the EKF in terms of balance, though its validity is limited by ensemble size. It is also found that overobserving can lead to a more unbalanced forecast ensemble and thus to an unbalanced analysis.
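For contrast with the ensemble approach above, a hedged sketch of the extended Kalman filter cycle referred to in the abstract, with the tangent linear model (TLM) approximated here by finite differences; the nonlinear forecast model and the covariance choices are left as user-supplied assumptions:

```python
import numpy as np

def tangent_linear(model, x, eps=1e-6):
    """Finite-difference approximation of the tangent linear model (Jacobian)
    of a nonlinear forecast step `model` about the state x."""
    n = x.size
    fx = model(x)
    M = np.empty((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        M[:, j] = (model(x + dx) - fx) / eps
    return M

def ekf_cycle(model, x_a, P_a, y_obs, H, R, Q):
    """One EKF forecast/analysis cycle: propagate state and error covariance
    with the TLM, then update both with the observations."""
    x_f = model(x_a)
    M = tangent_linear(model, x_a)
    P_f = M @ P_a @ M.T + Q                        # forecast error covariance
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
    x_a_new = x_f + K @ (y_obs - H @ x_f)
    P_a_new = (np.eye(x_f.size) - K @ H) @ P_f     # analysis error covariance
    return x_a_new, P_a_new
```

The repeated adjustment of the analysis covariance by the observations is exactly the step that, as the abstract notes, can unbalance the TLM-propagated error statistics unless a balance constraint or a balanced initial covariance is imposed.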