970 results for Offer calculation


Relevance:

20.00%

Publisher:

Abstract:

Public opinion surveys have become progressively incorporated into systems of official statistics. Surveys of the economic climate are usually qualitative because they collect the opinions of businesspeople and/or experts about indicators described by a number of variables. In such cases the responses are expressed in ordinal terms: respondents report, for example, whether during a given quarter sales or new orders have increased, decreased or remained the same as in the previous quarter. These data allow calculation of the percentage of respondents in the total population (results are extrapolated) who select each of the three options. Data are often presented in the form of an index calculated as the difference between the percentage of respondents who claim that a given variable has improved and the percentage who claim that it has deteriorated. As in any sample survey, the sampling error of the results must be measured, since it affects both the reliability of the results and the calculation of the sample size required for a desired confidence interval. The results presented here are based on data from the Survey of the Business Climate (Encuesta de Clima Empresarial) developed through the collaboration of the Statistical Institute of Catalonia (Institut d’Estadística de Catalunya) with the Chambers of Commerce (Cámaras de Comercio) of Sabadell and Terrassa.
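The balance index and its sampling error can be sketched as follows. This is a minimal illustration under simple random sampling; the function names and figures are invented, not the institute's actual methodology:

```python
import math

def balance_index(p_up, p_down):
    """Balance index: share reporting an increase minus share reporting a decrease."""
    return p_up - p_down

def balance_std_error(p_up, p_down, n):
    """Standard error of the balance under simple random sampling.

    For multinomial shares, Cov(p_up, p_down) = -p_up*p_down/n, so
    Var(p_up - p_down) = [p_up(1-p_up) + p_down(1-p_down) + 2*p_up*p_down] / n.
    """
    var = (p_up * (1 - p_up) + p_down * (1 - p_down) + 2 * p_up * p_down) / n
    return math.sqrt(var)

# Hypothetical quarter: 40% report an increase, 25% a decrease, n = 400 firms.
b = balance_index(0.40, 0.25)            # balance of +15 points
se = balance_std_error(0.40, 0.25, 400)
half_width = 1.96 * se                   # half-width of a 95% confidence interval
```

The positive covariance term is what distinguishes the error of the balance from that of a single proportion.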

Relevance:

20.00%

Publisher:

Abstract:

The paper considers the use of artificial regression in calculating different types of score test when the log

Relevance:

20.00%

Publisher:

Abstract:

To further validate the doubly labeled water method for measurement of CO₂ production and energy expenditure in humans, we compared it with near-continuous respiratory gas exchange in nine healthy young adult males. Subjects were housed in a respiratory chamber for 4 days. Each received ²H₂¹⁸O at either a low (n = 6) or a moderate (n = 3) isotope dose. Low and moderate doses produced initial ²H enrichments of 5 and 10 × 10⁻³ atom percent excess, respectively, and initial ¹⁸O enrichments of 2 and 2.5 × 10⁻² atom percent excess, respectively. Total body water was calculated from isotope dilution in saliva collected at 4 and 5 h after the dose. CO₂ production was calculated by the two-point method using the isotopic enrichments of urines collected just before each subject entered and left the chamber. Isotope enrichments relative to predose samples were measured by isotope ratio mass spectrometry. At low isotope dose, doubly labeled water overestimated average daily energy expenditure by 8 ± 9% (SD) (range −7 to 22%). At moderate dose the difference was reduced to 4 ± 5% (range 0–9%). The isotope elimination curves for ²H and ¹⁸O from serial urines collected from one of the subjects showed expected diurnal variations but were otherwise quite smooth. The overestimate may be due to approximations in the corrections for isotope fractionation and isotope dilution. An alternative approach to the corrections is presented that reduces the overestimate to 1%.
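The two-point method reduces to computing isotope elimination rate constants from enrichments at chamber entry and exit. The sketch below uses invented enrichments and a simplified proportionality constant; it omits the dilution-space and fractionation corrections that the paper is actually about:

```python
import math

def elimination_rate(e_start, e_end, days):
    """Two-point elimination rate constant k (per day), assuming
    single-exponential decay of isotope enrichment above baseline."""
    return math.log(e_start / e_end) / days

# Hypothetical enrichments (atom percent excess) at entry and exit of the chamber.
k_o18 = elimination_rate(2.0e-2, 1.2e-2, 4.0)  # 18O traces water + CO2 turnover
k_h2  = elimination_rate(5.0e-3, 3.4e-3, 4.0)  # 2H traces water turnover only

# CO2 production is proportional to total body water N times (kO - kH);
# the factor 1/2 below is the simplest uncorrected form, not the paper's equation.
N_mol = 2200.0                               # hypothetical total body water, mol
r_co2 = (N_mol / 2.0) * (k_o18 - k_h2)       # mol CO2 per day, uncorrected
```

Because ¹⁸O leaves the body in both water and CO₂ while ²H leaves only in water, the difference of the two rates isolates CO₂ production.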

Relevance:

20.00%

Publisher:

Abstract:

The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Thus, ionization chambers are calibrated in absorbed dose to water, the same quantity calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation on the scale of the secondary particle range. Therefore, the measured quantity is close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of the absorbed dose to water in a low-density medium, D̄(w,Q,V₁)^low, averaged over a scoring volume V₁ for a geometry where V₁ is filled with the low-density medium, to the absorbed dose to water D̄(w,Q,V₂)^low averaged over a volume V₂ for a geometry where V₂ is filled with water. In the Monte Carlo simulations, D̄(w,Q,V₂)^low is obtained by replacing the volume of the ionization chamber with an equivalent volume of water, in accordance with the definition of the absorbed dose to water. The method is validated in two different configurations, which allowed us to study the behavior of this correction factor as a function of depth in the phantom, photon beam energy, phantom density and field size.
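A toy sketch of the correction-factor ratio, using synthetic dose tallies in place of real Monte Carlo output (all numbers are invented for illustration, not results from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-history dose tallies (Gy) in the scoring volume:
# geometry 1 - the volume V1 is filled with the low-density medium,
# geometry 2 - the volume V2 is replaced by water (definition of dose to water).
d_v1_low   = rng.normal(1.30, 0.01, 10_000)   # dose to water averaged over V1
d_v2_water = rng.normal(1.00, 0.01, 10_000)   # dose to water averaged over V2

# Correction factor: ratio of the two volume-averaged doses.
correction = d_v1_low.mean() / d_v2_water.mean()

# Statistical (type A) relative uncertainty of the ratio, added in quadrature.
rel_u = np.hypot(
    d_v1_low.std(ddof=1) / np.sqrt(d_v1_low.size) / d_v1_low.mean(),
    d_v2_water.std(ddof=1) / np.sqrt(d_v2_water.size) / d_v2_water.mean(),
)
```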

Relevance:

20.00%

Publisher:

Abstract:

The authors focus on one of the methods for connection acceptance control (CAC) in an ATM network: the convolution approach. To reduce the cost in terms of calculation and storage requirements, they propose the use of the multinomial distribution function. This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements, which in turn makes a simple deconvolution process possible. Moreover, under certain conditions, additional improvements may be achieved.
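The idea can be sketched in the simplest homogeneous case, where all sources are identical on/off sources and the multinomial reduces to its two-category (binomial) form; the names and figures below are illustrative assumptions, not the authors' algorithm:

```python
from math import comb

def active_source_pmf(n, p_on):
    """P(k of n identical on/off sources are simultaneously active):
    the two-category case of the multinomial distribution."""
    return [comb(n, k) * p_on**k * (1 - p_on)**(n - k) for k in range(n + 1)]

def overload_probability(n, p_on, rate_per_source, link_capacity):
    """P(instantaneous aggregate bandwidth demand exceeds the link capacity)."""
    pmf = active_source_pmf(n, p_on)
    return sum(pk for k, pk in enumerate(pmf)
               if k * rate_per_source > link_capacity)

# Hypothetical admission check: 50 sources, 10% duty cycle, 2 Mbit/s peak each,
# on a 20 Mbit/s link - accept the set only if the overload probability is tiny.
p_loss = overload_probability(50, 0.10, 2.0, 20.0)
```

With heterogeneous source classes the same computation would use a full multinomial over class-activity vectors, which is where the storage savings over repeated convolution come from.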

Relevance:

20.00%

Publisher:

Abstract:

Throughout the history of Electrical Engineering education, vectorial and phasorial diagrams have been used as a fundamental learning tool. At present, computational power has replaced them with long data lists, the result of solving systems of equations by numerical methods. Diagrams have thus been shifted to the academic background: although explained in theory, they are not used in a practical way within specific examples. This may hinder students' understanding of the complex behavior of electrical power systems. This article proposes a modification of the classical Perrine-Baum diagram construction that allows both a more practical representation and a better understanding of the behavior of a high-voltage electric line under different levels of load. This modification also allows forecasting of this behavior and of the line's loading capacity. In addition, we evaluate the impact of this tool on the learning process, showing comparative undergraduate results over three academic years.

Relevance:

20.00%

Publisher:

Abstract:

In the static field limit, the vibrational hyperpolarizability consists of two contributions due to: (1) the shift in the equilibrium geometry (known as nuclear relaxation), and (2) the change in the shape of the potential energy surface (known as curvature). Simple finite field methods have previously been developed for evaluating these static field contributions and also for determining the effect of nuclear relaxation on dynamic vibrational hyperpolarizabilities in the infinite frequency approximation. In this paper the finite field approach is extended to include, within the infinite frequency approximation, the effect of curvature on the major dynamic nonlinear optical processes.
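At its core, the finite field approach is numerical differentiation of a field-dependent property. A minimal sketch with an invented model dipole function (not the authors' actual procedure):

```python
def finite_field_polarizability(dipole, field_step=1e-3):
    """Central-difference estimate of a static polarizability component,
    alpha = d(mu)/dF, from a dipole function mu(F)."""
    f = field_step
    return (dipole(+f) - dipole(-f)) / (2 * f)

# Hypothetical model dipole: mu(F) = mu0 + alpha*F + (1/2)*beta*F**2.
mu0, alpha, beta = 0.5, 9.0, 30.0
mu = lambda F: mu0 + alpha * F + 0.5 * beta * F**2

alpha_ff = finite_field_polarizability(mu)   # recovers alpha = 9.0
```

The central difference cancels the even-order (beta) term exactly, which is why symmetric field points are preferred; higher derivatives (hyperpolarizabilities) follow from longer difference stencils in the same spirit.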

Relevance:

20.00%

Publisher:

Abstract:

The vibrational configuration interaction method used to obtain static vibrational (hyper)polarizabilities is extended to dynamic nonlinear optical properties in the infinite optical frequency approximation. Illustrative calculations are carried out on H₂O and NH₃. The former molecule is weakly anharmonic while the latter contains a strongly anharmonic umbrella mode. The effects on vibrational (hyper)polarizabilities due to various truncations of the potential energy and property surfaces involved in the calculation are examined.

Relevance:

20.00%

Publisher:

Abstract:

Our new simple method for calculating accurate Franck-Condon factors including nondiagonal (i.e., mode-mode) anharmonic coupling is used to simulate the C₂H₄⁺ X̃²B₃u ← C₂H₄ X̃¹Ag band in the photoelectron spectrum. An improved vibrational basis set truncation algorithm, which permits very efficient computations, is employed. Because the torsional mode is highly anharmonic, it is separated from the other modes and treated exactly. All other modes are treated through second-order perturbation theory. The perturbation-theory corrections are significant and lead to good agreement with experiment, although the separability assumption for torsion causes the C₂D₄ results to be not as good as those for C₂H₄. A variational formulation to overcome this limitation, and to deal with large anharmonicities in general, is suggested.

Relevance:

20.00%

Publisher:

Abstract:

Two common methods of accounting for electric-field-induced perturbations to molecular vibration are analyzed and compared. The first method is based on a perturbation-theoretic treatment and the second on a finite-field treatment. The relationship between the two, which is not immediately apparent, is established by developing an algebraic formalism for the latter. Some of the higher-order terms in this development are documented here for the first time. As well as considering vibrational dipole polarizabilities and hyperpolarizabilities, we also make mention of the vibrational Stark effect.

Relevance:

20.00%

Publisher:

Abstract:

A variational approach for reliably calculating vibrational linear and nonlinear optical properties of molecules with large electrical and/or mechanical anharmonicity is introduced. This approach utilizes a self-consistent solution of the vibrational Schrödinger equation for the complete field-dependent potential-energy surface and then adds higher-level vibrational correlation corrections as desired. An initial application is made to static properties for three molecules of widely varying anharmonicity using the lowest-level vibrational correlation treatment (i.e., vibrational Møller-Plesset perturbation theory). Our results indicate when the conventional Bishop-Kirtman perturbation method can be expected to break down and when high-level vibrational correlation methods are likely to be required. Future improvements and extensions are discussed.

Relevance:

20.00%

Publisher:

Abstract:

Gross-to-Net is a payroll calculator modeled after the actual payroll calculation used for state employees’ paychecks. This calculator can be used to project changes in net pay when there are changes in pay amounts, hours worked, or mandatory and voluntary deductions, including all pre-tax deductions such as retirement, insurance, deferred compensation or flexible spending plans. Federal and state tax withholding, retirement rates, OASDI and Medicare (FICA) are calculated using current rates.
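A toy sketch of a gross-to-net projection. The flat rates, the flat-rate tax treatment, and the handling of pre-tax deductions for FICA below are simplifications invented for illustration, not the calculator's actual withholding tables:

```python
def gross_to_net(gross, pretax_deductions, fed_rate, state_rate,
                 oasdi_rate=0.062, medicare_rate=0.0145):
    """Project net pay: income taxes apply to gross minus pre-tax deductions.

    Simplification: FICA is taken on the full gross here; in reality some
    pre-tax deductions also reduce the FICA wage base.
    """
    taxable = gross - pretax_deductions
    fica = gross * (oasdi_rate + medicare_rate)    # OASDI + Medicare
    income_taxes = taxable * (fed_rate + state_rate)
    return taxable - income_taxes - fica

# Hypothetical paycheck: $4,000 gross, $400 pre-tax, 12% federal, 5% state.
net = gross_to_net(4000.00, 400.00, fed_rate=0.12, state_rate=0.05)
```

Projecting a "what if" change (say, raising a pre-tax deduction) is then just a second call with the modified inputs.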

Relevance:

20.00%

Publisher:

Abstract:

The aim of this project is to develop an application, based on Matlab, to calculate the radioelectrical coverage by surface wave of broadcast radio stations in the Medium Wave (MW) band anywhere in the world. Also, given the locations of a transmitting and a receiving station, the software should be able to calculate the electric field strength that the receiver should measure at that specific site. In the case of several transmitters, the program should check for inter-symbol interference and calculate the field strength accordingly. The application should ask for the configuration parameters of the transmitting station through a Graphical User Interface (GUI) and display the resulting coverage over a map of the area under study. For the development of this project, several conductivity databases of different countries have been used, along with a high-resolution elevation database (GLOBE). Also, to calculate the field strength due to groundwave propagation, the ITU GRWAVE program has been used, integrated into a Matlab interface so it can be called by the application.
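As a point of reference for such calculations, the free-space field strength gives an upper bound before the groundwave attenuation that GRWAVE computes is applied. A minimal sketch with invented figures:

```python
import math

def field_strength_v_per_m(power_w, gain, distance_m):
    """Free-space field strength E = sqrt(30*P*G)/d in V/m.

    This is only an upper bound: groundwave propagation over earth of finite
    conductivity (what GRWAVE models) attenuates the field further.
    """
    return math.sqrt(30.0 * power_w * gain) / distance_m

# Hypothetical MW transmitter: 10 kW radiated, unity gain, receiver at 50 km.
e = field_strength_v_per_m(10_000.0, 1.0, 50_000.0)   # V/m
e_dbuv = 20 * math.log10(e * 1e6)                     # dB(uV/m), the usual unit
```

Coverage maps are essentially this calculation repeated over a grid, with the free-space value replaced by the conductivity- and terrain-dependent groundwave result.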

Relevance:

20.00%

Publisher:

Abstract:

In studies of the natural history of HIV-1 infection, the time scale of primary interest is the time since infection. Unfortunately, this time is very often unknown for HIV infection, and using the follow-up time instead of the time since infection is likely to produce biased results because of onset confounding. Laboratory markers such as the CD4 T-cell count carry important information concerning disease progression and can be used to predict the unknown date of infection. Previous work on this topic has made use of only one CD4 measurement or based the imputation on incident patients only. However, because of considerable intrinsic variability in CD4 levels, and because incident cases differ from prevalent cases, back-calculation based on only one CD4 determination per person, or on characteristics of the incident sub-cohort, may provide unreliable results. Therefore, we propose a methodology, based on repeated individual CD4 T-cell marker measurements, that uses both incident and prevalent cases to impute the unknown date of infection. Our approach uses joint modelling of the time since infection, the CD4 time path and the drop-out process. This methodology has been applied to estimate the CD4 slope and impute the unknown date of infection in HIV patients from the Swiss HIV Cohort Study. A procedure based on the comparison of different slope estimates is proposed to assess the goodness of fit of the imputation. Results of simulation studies indicated that the imputation procedure worked well, despite the intrinsic high volatility of the CD4 marker.
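The back-calculation idea can be caricatured with a single-patient linear fit on square-root CD4. The paper's joint model of infection time, CD4 path and drop-out is far richer; every number below is invented, and the assumed CD4 level at infection is a placeholder:

```python
import numpy as np

def fit_cd4_slope(times, cd4_sqrt):
    """Least-squares slope and intercept of square-root CD4 versus follow-up
    time; square-root CD4 is commonly modelled as roughly linear in time."""
    slope, intercept = np.polyfit(times, cd4_sqrt, 1)
    return slope, intercept

def impute_infection_time(slope, intercept, cd4_sqrt_at_infection):
    """Back-calculate the time (negative, i.e. before the first visit) at
    which the fitted line crosses the assumed CD4 level at infection."""
    return (cd4_sqrt_at_infection - intercept) / slope

# Hypothetical prevalent patient: 5 visits over 2 years, declining CD4 counts.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])               # years since first visit
cd4 = np.array([420.0, 395.0, 360.0, 340.0, 310.0])   # cells/uL

slope, intercept = fit_cd4_slope(t, np.sqrt(cd4))
t_infect = impute_infection_time(slope, intercept, np.sqrt(550.0))
```

Pooling repeated measurements across incident and prevalent cases, as the paper does, stabilizes the slope that this single-patient fit estimates so noisily.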

Relevance:

20.00%

Publisher:

Abstract:

Summary