917 results for vector error correction model


Relevance:

30.00%

Publisher:

Abstract:

Aims. Solar colors have been determined on the uvby-β photometric system to test absolute solar fluxes, to examine colors predicted by model atmospheres as a function of the stellar parameters (T_eff, log g, [Fe/H]), and to probe the zero-points of the T_eff and metallicity scales. Methods. New uvby-β photometry is presented for 73 solar-twin candidates. Most stars of our sample have also been observed spectroscopically to obtain accurate stellar parameters. Using the stars that most closely resemble the Sun, and complementing our data with photometry available in the literature, the solar colors on the uvby-β system have been inferred. Our solar colors are compared with synthetic solar colors computed from absolute solar spectra and from the latest Kurucz (ATLAS9) and MARCS model atmospheres. The zero-points of different T_eff and metallicity scales are verified and corrections are proposed. Results. Our solar colors are (b−y)⊙ = 0.4105 ± 0.0015, m₁⊙ = 0.2122 ± 0.0018, c₁⊙ = 0.3319 ± 0.0054, and β⊙ = 2.5915 ± 0.0024. The (b−y)⊙ and m₁⊙ colors obtained from absolute spectrophotometry of the Sun agree within 3σ with the solar colors derived here when the photometric zero-points are determined from either the STIS HST observations of Vega or an ATLAS9 Vega model, but the c₁⊙ and β⊙ synthetic colors inferred from absolute solar spectra agree with our solar colors only when the zero-points based on the ATLAS9 model are adopted. The Kurucz solar model provides a better fit to our observations than the MARCS model. For photometric values computed from the Kurucz models, (b−y)⊙ and m₁⊙ are in excellent agreement with our solar colors independently of the adopted zero-points, but for c₁⊙ and β⊙ agreement is found only when the ATLAS9 zero-points are adopted. The c₁⊙ color computed from both the Kurucz and MARCS models is the most discrepant, probably revealing problems either with the models or with the observations in the u band. The T_eff calibration of Alonso and collaborators has the poorest performance (~140 K off), while the relation of Casagrande and collaborators is the most accurate (within 10 K). We confirm that the Ramirez & Melendez uvby metallicity calibration, recommended by Arnadottir and collaborators to obtain [Fe/H] in F, G, and K dwarfs, needs a small (~10%) zero-point correction to place the stars and the Sun on the same metallicity scale. Finally, we confirm that the c₁ index in solar analogs has a strong metallicity sensitivity.


Using the published KTeV samples of K_L → π±e∓ν and K_L → π±μ∓ν decays, we perform a reanalysis of the scalar and vector form factors based on the dispersive parametrization. We obtain the phase-space integrals I_K^e = 0.15446 ± 0.00025 and I_K^μ = 0.10219 ± 0.00025. For the scalar form factor parametrization, the only free parameter is C, the normalized form factor value at the Callan-Treiman point; our best fit gives ln C = 0.1915 ± 0.0122. We also study the sensitivity of C to different parametrizations of the vector form factor. The results for the phase-space integrals and C are then used to test the standard model. Finally, we compare our results with lattice QCD calculations of f_K/f_π and f_+(0).


We analyze the dynamical behavior of a quantum system under the action of two counteracting baths: the inevitable energy-draining reservoir and, in opposition, an engineered Glauber amplifier that excites the system. We follow the system dynamics towards equilibrium to map the distinctive behavior arising from the interplay of attenuation and amplification. This mapping, with the corresponding parameter regimes, is achieved by calculating the evolution of both the excitation and the Glauber-Sudarshan P function. Techniques to compute the decoherence and the fidelity of quantum states under the action of both counteracting baths, based on the Wigner function rather than the density matrix, are also presented. They enable us to analyze the similarity of the system's evolved state vector with respect to the original one for all parameter regimes. Applications of this attenuation-amplification interplay are discussed.
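The attenuation-amplification interplay can be illustrated, for the mean excitation alone, by a minimal rate-equation sketch. This assumes a damping rate γ and a linear Glauber-type gain rate A with γ > A; it is an illustrative reduction, not the paper's P-function or Wigner-function treatment:

```python
import math

def mean_excitation(n0, gamma, A, t):
    """Mean excitation <n>(t) of a mode coupled to a damping bath (rate
    gamma) and a linear amplifier (gain rate A), assuming gamma > A:
        d<n>/dt = -gamma*<n> + A*(<n> + 1)
    which has the closed-form solution returned below."""
    r = gamma - A            # net relaxation rate set by the competition
    n_ss = A / r             # steady-state excitation
    return n_ss + (n0 - n_ss) * math.exp(-r * t)

n_t = mean_excitation(n0=5.0, gamma=1.0, A=0.2, t=10.0)
```

For γ > A the excitation relaxes to the finite value A/(γ − A) rather than to zero, which is the signature of the competing baths.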


In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is necessary because the power system equations are strongly correlated with each other and, as a consequence, part of each measurement's error is masked. For that purpose, an innovation index (II) is proposed that quantifies the amount of new information a measurement contains. A critical measurement is the limiting case of a measurement with low II: its II is zero and its error is totally masked. In other words, such a measurement brings no innovation to the gross error test. Using a measurement's II, the gross error masked by the state estimation is recovered, and the total gross error of that measurement is composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The resulting gross error processing is very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
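For context, the classical normalised-residual test that the composed-residual test extends can be sketched on a toy linear measurement model. The matrices and the injected error below are illustrative, not from the paper, and the II-based composition step itself is not reproduced:

```python
import numpy as np

# Toy WLS state estimation: z = H x + e with 4 measurements, 2 states.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
R = np.diag([0.01, 0.01, 0.01, 0.01])        # measurement error covariance
x_true = np.array([1.0, 0.5])
z = H @ x_true
z[2] += 1.0                                  # inject a gross error in z[2]

Rinv = np.linalg.inv(R)
G = H.T @ Rinv @ H                           # gain matrix
x_hat = np.linalg.solve(G, H.T @ Rinv @ z)   # WLS state estimate
r = z - H @ x_hat                            # measurement residuals
Omega = R - H @ np.linalg.solve(G, H.T)      # residual covariance
r_N = np.abs(r) / np.sqrt(np.diag(Omega))    # normalised residuals
suspect = int(np.argmax(r_N))                # largest-r_N identification
```

Note how the gross error also inflates the residuals of correlated measurements (indices 0 and 1 here), illustrating the error spreading and masking that motivate the innovation index.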


Artesian confined aquifers do not need pumping energy: water from the aquifer flows naturally at the wellhead. This study proposes a correction to the method for analyzing flowing-well tests presented by Jacob and Lohman (1952) that accounts for the head losses due to friction in the well casing. Applying the proposed correction allowed the determination of a transmissivity (T = 411 m²/d) and storage coefficient (S = 3 × 10⁻⁴) which appear to be representative of the confined Guarani Aquifer in the study area. If the correction for head losses in the well casing is ignored, the error in the transmissivity evaluation is about 18%; for the storage coefficient the error is of five orders of magnitude, resulting in a physically unacceptable value. The effect of the proposed correction on the calculated radius of the cone of depression and the corresponding well interference is also discussed.
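The casing head-loss term at the centre of such a correction can be sketched with the Darcy-Weisbach equation. The discharge, casing length, diameter, and friction factor below are hypothetical illustration values, not the well data from the study:

```python
import math

def casing_friction_loss(Q, L, D, f=0.02):
    """Darcy-Weisbach head loss (m) in the casing of a flowing well.
    Q: discharge (m^3/s), L: casing length (m), D: inner diameter (m),
    f: friction factor (assumed constant here for illustration)."""
    v = Q / (math.pi * D**2 / 4.0)          # mean flow velocity in the casing
    return f * (L / D) * v**2 / (2 * 9.81)  # h_f = f (L/D) v^2 / 2g

# Hypothetical flowing well: 50 L/s through 500 m of 0.15 m casing.
h_f = casing_friction_loss(Q=0.05, L=500.0, D=0.15)
```

Subtracting h_f from the observed wellhead drawdown before the Jacob-Lohman analysis is the kind of correction the study argues for; with these illustrative numbers h_f comes out near 27 m, far from negligible.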


We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error-correcting codes, rather than on restricted families such as alternant codes, for which a decoding trapdoor is known to exist.
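The basic operation underlying syndrome decoding can be sketched over GF(2); the parity-check matrix and error pattern below are a toy example, not the scheme's actual parameters:

```python
import numpy as np

# Toy parity-check matrix of a binary linear code over GF(2).
H = np.array([[1, 0, 1, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 1, 0, 1, 1]], dtype=np.uint8)

def syndrome(H, e):
    """Syndrome s = H e^T over GF(2) of an error pattern e; syndrome
    decoding asks for a low-weight e producing a given s."""
    return (H @ e) % 2

e = np.array([1, 0, 0, 0, 1], dtype=np.uint8)   # low-weight error pattern
s = syndrome(H, e)
```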


Reconciliation can be divided into stages, each stage representing the performance of a mining operation, such as long-term estimation, short-term estimation, planning, mining and mineral processing. The gold industry includes another stage, the budget, in which the company informs the financial market of its annual production forecast. The division of reconciliation into stages increases the reliability of the annual budget reported by mining companies, while also detecting the critical steps responsible for the overall estimation error, which can then be corrected by optimizing sampling protocols and equipment. This paper develops and validates a new reconciliation model for the gold industry, based on correct sampling practices and the subdivision of reconciliation into stages, aiming at better grade estimates and more efficient control of the mining industry's processes, from resource estimation to final production.
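The stage idea can be made concrete with hypothetical grade estimates: a reconciliation factor per stage localises where the overall error arises. The grades below are invented for illustration and are not from the paper:

```python
# Hypothetical gold grades (g/t) estimated at successive stages; each
# reconciliation factor compares one stage with the preceding one, so
# the overall factor decomposes into per-stage contributions.
stages = {
    "long-term model":  1.90,
    "short-term model": 1.95,
    "mine production":  1.80,
    "plant production": 1.75,
}
values = list(stages.values())
factors = [values[i + 1] / values[i] for i in range(len(values) - 1)]
overall = values[-1] / values[0]   # product of the stage factors
```

A stage whose factor departs strongly from 1.0 is the critical step whose sampling protocol deserves attention.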


Model predictive control (MPC) is usually implemented as a control strategy where the system outputs are controlled within specified zones instead of at fixed set points. One way to implement zone control is through the selection of different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement stable zone control is through an infinite-horizon cost in which the set point is an additional decision variable of the control problem. In this case, the set point is restricted to remain inside the output zone, and an appropriate output slack variable is included in the optimisation problem to ensure the recursive feasibility of the control optimisation problem. Following this approach, a robust MPC is developed for the case of multi-model uncertainty in open-loop stable systems. The controller is designed to maintain the outputs within their corresponding feasible zones while reaching the desired optimal input target. Simulation of a process from the oil refining industry illustrates the performance of the proposed strategy.
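The set-point-as-decision-variable idea can be reduced to a one-step sketch. This ignores the dynamics, the infinite-horizon cost, the slack variable and the multi-model machinery; it only shows why the output error vanishes whenever the predicted output already lies inside the zone:

```python
def zone_error(y_pred, z_min, z_max):
    """With the set point free to move inside [z_min, z_max], the optimal
    choice is the point of the zone nearest to the predicted output, so
    the output error is zero inside the zone and the distance to the
    nearest zone boundary outside it."""
    y_sp = min(max(y_pred, z_min), z_max)   # optimal set point: clip into zone
    return y_pred - y_sp, y_sp

err, y_sp = zone_error(y_pred=7.3, z_min=4.0, z_max=6.5)
```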


The TCP/IP architecture has been consolidated as the standard for distributed systems. However, there is considerable research and discussion about alternatives for the evolution of this architecture, and within this study area the present work introduces the Title Model, which aims to support application needs through a cross-layer ontology and horizontal addressing in a next-generation Internet. From a practical viewpoint, the reduction in network cost is shown for a distributed-programming example in networks with layer-2 connectivity. To demonstrate the improvement offered by the Title Model, a network analysis is presented for a message-passing interface that sends a vector of integers and returns its sum. This analysis confirms that, in this environment, the proposal reduces the total network traffic, in bytes, by 15.23%.


Leaf wetness duration (LWD) models based on empirical approaches offer practical advantages over physically based models in agricultural applications, but their spatial portability is questionable because they may be biased towards the climatic conditions under which they were developed. In our study, the spatial portability of three LWD models with empirical characteristics - a RH threshold model, a decision tree model with wind speed correction, and a fuzzy logic model - was evaluated using weather data collected in Brazil, Canada, Costa Rica, Italy and the USA. The fuzzy logic model was more accurate than the other models in estimating LWD measured by painted leaf wetness sensors. The fraction of correct estimates for the fuzzy logic model was greater (0.87) than for the other models (0.85-0.86) across the 28 sites where painted sensors were installed, and the degree of agreement (kappa statistic) between the model and the painted sensors was greater for the fuzzy logic model (0.71) than for the other models (0.64-0.66). Values of the kappa statistic for the fuzzy logic model were also less variable across sites than those of the other models. When model estimates were compared with measurements from unpainted leaf wetness sensors, the fuzzy logic model had a smaller mean absolute error (2.5 h day⁻¹) than the other models (2.6-2.7 h day⁻¹) after the model was calibrated for the unpainted sensors. The results suggest that the fuzzy logic model has greater spatial portability than the other models evaluated and merits further validation in comparison with physical models under a wider range of climate conditions.
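The degree-of-agreement statistic quoted above is Cohen's kappa; a minimal implementation, with invented wet/dry counts rather than the study's data, is:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa: agreement beyond chance between two raters
    (here, model estimates vs wetness-sensor observations)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    p_o = np.trace(c) / n                      # observed agreement
    p_e = (c.sum(0) * c.sum(1)).sum() / n**2   # agreement expected by chance
    return (p_o - p_e) / (1.0 - p_e)

# Rows: sensor says wet/dry; columns: model says wet/dry (invented counts).
kappa = cohens_kappa([[80, 10],
                      [15, 95]])
```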


By applying a directed evolution methodology, specific enzymatic characteristics can be enhanced, but to select mutants of interest from a large mutant bank this approach requires high-throughput screening and facile selection. To facilitate such primary screening of enhanced clones, an expression system was tested that uses a green fluorescent protein (GFP) tag from Aequorea victoria linked to the enzyme of interest. As GFP's fluorescence is readily measured, and as there is a 1:1 molar correlation between the target protein and GFP, the concept proposed was to determine whether GFP could facilitate primary screening of error-prone PCR (EPP) clones. For this purpose a thermostable beta-glucosidase (BglA) from Fervidobacterium sp. was used as a model enzyme. A vector expressing the chimeric protein BglA-GFP-6XHis was constructed and the fusion protein purified and characterized. When compared to the native proteins, the components of the fusion displayed modified characteristics, such as enhanced GFP thermostability and a higher BglA optimum temperature. Clones carrying mutant BglA proteins obtained by EPP were screened based on the BglA/GFP activity ratio. Purified tagged enzymes from selected clones displayed modified substrate specificity.


The quantitative description of the quantum entanglement between a qubit and its environment is considered. Specifically, for the ground state of the spin-boson model, the entropy of entanglement of the spin is calculated as a function of α, the strength of the ohmic coupling to the environment, and ε, the level asymmetry. This is done by a numerical renormalization group treatment of the related anisotropic Kondo model. For ε = 0, the entanglement increases monotonically with α until it becomes maximal for α → 1⁻. For fixed ε > 0, the entanglement is a maximum as a function of α for a value α = α_M
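The quantity being computed, the entropy of entanglement, is the von Neumann entropy of the spin's reduced density matrix; a minimal generic numerical version (not the NRG treatment used in the paper) is:

```python
import numpy as np

def entanglement_entropy(rho_spin):
    """Von Neumann entropy S = -Tr(rho log2 rho) of the spin's reduced
    density matrix, computed from its eigenvalues."""
    evals = np.linalg.eigvalsh(np.asarray(rho_spin, dtype=float))
    evals = evals[evals > 1e-12]                 # drop numerical zeros
    return float(-(evals * np.log2(evals)).sum())

S_mixed = entanglement_entropy([[0.5, 0.0], [0.0, 0.5]])   # maximal entanglement
S_pure  = entanglement_entropy([[1.0, 0.0], [0.0, 0.0]])   # product state
```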


In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing expression data on very many (possibly thousands of) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance because the rule is either tested on tissue samples that were used in the first instance to select the genes being used in the rule, or because the cross-validation of the rule is not external to the selection process; that is, gene selection is not performed anew in training the rule at each stage of the cross-validation process. We describe how in practice the selection bias can be assessed and corrected for by either performing a cross-validation or applying the bootstrap external to the selection process. We recommend using 10-fold rather than leave-one-out cross-validation and, concerning the bootstrap, we suggest using the so-called .632+ bootstrap error estimate designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when correction is made for the selection bias, the cross-validated error is no longer zero for a subset of only a few genes.
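The selection bias can be demonstrated numerically on pure-noise data: selecting genes once on all samples makes the cross-validated error look small even though no signal exists, while repeating the selection inside each fold (external cross-validation) gives an error near chance. The sketch below uses a nearest-centroid rule on synthetic data; all sizes and the classifier are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 40, 500, 10                 # samples, genes, genes selected
X = rng.standard_normal((n, p))       # pure noise: no real class signal
y = np.repeat([0, 1], n // 2)

def top_k(X, y, k):
    """Indices of the k genes with the largest class-mean difference."""
    d = np.abs(X[y == 0].mean(0) - X[y == 1].mean(0))
    return np.argsort(d)[-k:]

def loo_error(X, y, select_inside):
    """Leave-one-out error of a nearest-centroid rule; gene selection is
    done once on all data (biased) or inside each fold (external)."""
    genes_all = top_k(X, y, k)
    errs = 0
    for i in range(len(y)):
        tr = np.arange(len(y)) != i
        genes = top_k(X[tr], y[tr], k) if select_inside else genes_all
        c0 = X[tr & (y == 0)][:, genes].mean(0)
        c1 = X[tr & (y == 1)][:, genes].mean(0)
        pred = int(np.linalg.norm(X[i, genes] - c1) <
                   np.linalg.norm(X[i, genes] - c0))
        errs += pred != y[i]
    return errs / len(y)

biased = loo_error(X, y, select_inside=False)   # selection bias: looks good
honest = loo_error(X, y, select_inside=True)    # external CV: near chance
```

On noise, the honest estimate sits near 0.5 while the internally selected one is far lower, which is exactly the bias the abstract warns about.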


Fed-batch culture can offer significant improvement in recombinant protein production compared to batch culture in the baculovirus expression vector system (BEVS), as shown by Nguyen et al. (1993) and Bedard et al. (1994), among others. However, a thorough analysis of fed-batch culture to determine its limits in improving recombinant protein production over batch culture has yet to be performed. In this work, this issue is addressed by the optimisation of single-addition fed-batch culture. This type of fed-batch culture involves the manual addition of a multi-component nutrient feed to batch culture before infection with the baculovirus. The nutrient feed consists of yeastolate ultrafiltrate, lipids, amino acids, vitamins, trace elements, and glucose, which were added to batch cultures of Spodoptera frugiperda (Sf9) cells before infection with a recombinant Autographa californica nuclear polyhedrosis virus (Ac-NPV) expressing beta-galactosidase (beta-Gal). The fed-batch production of beta-Gal was optimised using response surface methods (RSM). The optimisation was performed in two stages, starting with a screening procedure to determine the most important variables and ending with a central-composite experiment to obtain a response surface model of volumetric beta-Gal production. The predicted optimum volumetric yield of beta-Gal in fed-batch culture was 2.4-fold that of the best yields in batch culture. This result was confirmed by a statistical analysis of the best fed-batch and batch data (with average beta-Gal yields of 1.2 and 0.5 g/L, respectively) obtained from this laboratory. The response surface model generated can be used to design a more economical fed-batch operation, in which nutrient feed volumes are minimised while maintaining acceptable improvements in beta-Gal yield.
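The central-composite step fits a quadratic response surface whose stationary point gives the predicted optimum. A one-factor sketch with invented feed-volume/yield data (the real design involved several factors) is:

```python
import numpy as np

# Invented one-factor response surface data: product yield vs feed volume.
feed = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # nutrient feed volume (a.u.)
response = np.array([0.5, 0.9, 1.2, 1.1, 0.8])    # beta-Gal yield (g/L)

# Least-squares fit of y = b0 + b1*x + b2*x^2.
A = np.column_stack([np.ones_like(feed), feed, feed**2])
b = np.linalg.lstsq(A, response, rcond=None)[0]
feed_opt = -b[1] / (2 * b[2])                     # stationary point of the fit
```

Because the fitted surface is concave (b2 < 0), the stationary point is the predicted optimum feed volume; in a multi-factor RSM the same idea applies to the full quadratic model.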


A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated: a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and that the relative performance of the Bayesian estimator improves as the responses become more scrambled.
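Multiplicative scrambling of the Eichhorn & Hayre kind can be sketched as follows: each respondent reports y·s for a scrambling variable s of known distribution, so individual answers stay private while moments of y remain estimable. The distributions below are invented for illustration; the paper treats the harder regression case with likelihood-based estimators:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
y = rng.exponential(scale=3.0, size=n)   # sensitive variable, E[y] = 3 (hypothetical)
s = rng.uniform(0.5, 1.5, size=n)        # scrambling variable with known E[s] = 1.0
z = y * s                                # only the scrambled value is reported
E_s = 1.0                                # known mean of the scrambling distribution
y_mean_hat = z.mean() / E_s              # method-of-moments estimate of E[y]
```

Since y and s are independent, E[z] = E[y]·E[s], so dividing the sample mean of the scrambled reports by the known E[s] recovers E[y] without anyone's individual answer being observed.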