Abstract:
We present two integrable spin ladder models that possess a general free parameter besides the rung coupling J. The models are exactly solvable by means of the Bethe ansatz method, and we present the corresponding Bethe ansatz equations. An analysis of the elementary excitations reveals that both models exhibit a gap that depends on the free parameter. (C) 2003 American Institute of Physics.
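For orientation, Bethe ansatz equations for su(2)-type models typically take the algebraic form below; this is the standard form for a periodic chain of N sites with M rapidities λ_j, not the specific equations of the two ladder models, which depend on the rung coupling J and the free parameter:

```latex
\left(\frac{\lambda_j + i/2}{\lambda_j - i/2}\right)^{N}
= \prod_{k=1,\, k \neq j}^{M}
  \frac{\lambda_j - \lambda_k + i}{\lambda_j - \lambda_k - i},
\qquad j = 1, \dots, M.
```

Each solution set {λ_j} yields an eigenstate; the gap referred to in the abstract emerges from the energy of the lowest excitations above the ground-state rapidity configuration.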
Abstract:
This paper presents a review of time-domain polarization measurement techniques for the condition assessment of aged transformer insulation. The polarization process is first described with the appropriate dielectric response theories, and the commonly used polarization methods are then described, with special emphasis on the most widely used return voltage (RV) measurement. Recent attention has been directed towards techniques for determining the moisture content of insulation indirectly by measuring RV parameters. The major difficulty still lies in the accurate interpretation of return voltage results. This paper examines different schools of thought regarding the interpretation of RV results for different moisture and ageing conditions. Other time-domain polarization measurement techniques and their results are also presented.
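As a rough illustration of the RV measurement principle (not the paper's own procedure), the sketch below simulates the classic charge-short-open sequence on a hypothetical extended-Debye insulation model; all component values are invented for the example:

```python
import numpy as np

# Illustrative extended-Debye model of oil-paper insulation: geometric
# capacitance C0 and leakage R0 in parallel with series R-C polarization
# branches (all values are hypothetical).
C0, R0 = 1e-9, 1e12
R = np.array([1e10, 1e11, 1e12])   # branch resistances (ohm)
C = np.array([5e-9, 2e-8, 8e-8])   # branch capacitances (F)

def simulate_rv(u_charge=1000.0, t_charge=1000.0, t_short=500.0,
                t_open=2000.0, dt=0.1):
    """Return (time, terminal voltage) during the open-circuit phase."""
    vb = np.zeros_like(C)              # branch capacitor voltages
    # 1) charging: terminal held at u_charge, branches polarize
    for _ in range(int(t_charge / dt)):
        vb += dt * (u_charge - vb) / (R * C)
    # 2) shorting: terminal held at 0, branches partially discharge
    for _ in range(int(t_short / dt)):
        vb += dt * (0.0 - vb) / (R * C)
    # 3) open circuit: relaxing branches recharge C0 -> return voltage
    v, out_t, out_v = 0.0, [], []
    for k in range(int(t_open / dt)):
        i_branches = np.sum((vb - v) / R)   # current flowing back into C0
        v += dt * (i_branches - v / R0) / C0
        vb += dt * (v - vb) / (R * C)
        out_t.append(k * dt)
        out_v.append(v)
    return np.array(out_t), np.array(out_v)

t, v = simulate_rv()
print(f"peak RV: {v.max():.1f} V at t = {t[v.argmax()]:.0f} s")
```

The classic RV parameters, peak return voltage and time to peak, are read off the open-circuit phase; their sensitivity to the slow branches is what makes them attractive as indirect moisture indicators.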
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the outputs of two independent implementations in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
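A minimal sketch of the back-to-back idea on a hypothetical scalar model (the paper's observer construction and error-dependent residual structure go well beyond this): two independent implementations are driven by identical inputs and their outputs are differenced to form a residual sequence.

```python
import numpy as np

def model_a(u, a=0.9, b=0.1):
    """Reference implementation: x[k+1] = a*x[k] + b*u[k]."""
    x, y = 0.0, []
    for uk in u:
        x = a * x + b * uk
        y.append(x)
    return np.array(y)

def model_b(u, a=0.9, b=0.1):
    """Second implementation; a deliberate coding error (wrong sign on b)
    is injected so the residual becomes non-zero."""
    x, y = 0.0, []
    for uk in u:
        x = a * x - b * uk        # bug: '-' should be '+'
        y.append(x)
    return np.array(y)

rng = np.random.default_rng(0)
u = rng.standard_normal(500)          # identical input sequence
residual = model_a(u) - model_b(u)    # non-zero residual flags a code error
print(f"residual RMS: {np.sqrt(np.mean(residual**2)):.4f}")
```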
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices describing the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset-testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
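The core subspace test can be sketched as follows; the feature matrices and residuals here are invented stand-ins, and resolving the 'definite'/'possible'/'impossible' classification in the paper additionally involves the dynamic subset testing described above:

```python
import numpy as np

def subspace_misfit(feature, residuals):
    """Fraction of residual energy outside span(feature); ~0 => candidate."""
    Q, _ = np.linalg.qr(feature)              # orthonormal basis of the span
    projected = Q @ (Q.T @ residuals)
    return np.linalg.norm(residuals - projected) / np.linalg.norm(residuals)

# Hypothetical 4-dimensional residuals and two candidate feature matrices.
F1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
F2 = np.array([[0.0], [0.0], [0.0], [1.0]])
residuals = F1 @ np.random.default_rng(1).standard_normal((2, 100))

for name, F in [("error 1", F1), ("error 2", F2)]:
    misfit = subspace_misfit(F, residuals)
    verdict = "possible" if misfit < 1e-6 else "impossible"
    print(f"{name}: misfit {misfit:.2e} -> {verdict}")
```

Here the residuals were constructed to lie in span(F1), so error 1 is flagged as possible while error 2, whose feature matrix spans an orthogonal direction, is ruled out.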
Abstract:
The two steps of nitrification, namely the oxidation of ammonia to nitrite and of nitrite to nitrate, often need to be considered separately in process studies. For a detailed examination, it is desirable to monitor the two-step sequence using online measurements. In this paper, the use of online titrimetric and off-gas analysis (TOGA) methods for examining the process is presented. Using the known reaction stoichiometry, the combination of the measured signals (rates of hydrogen ion production, oxygen uptake and carbon dioxide transfer) allows the determination of the three key process rates, namely the ammonia consumption rate, the nitrite accumulation rate and the nitrate production rate. Individual reaction rates determined with the TOGA sensor under a number of operating conditions are presented. The rates calculated directly from the measured signals are compared with those obtained from offline liquid sample analysis. Statistical analysis confirms that the results from the two approaches match well, a result that could not have been guaranteed using alternative online methods. As a case study, the influences of pH and dissolved oxygen (DO) on nitrite accumulation are tested using the proposed method. It is shown that nitrite accumulation decreased with increasing DO and pH. Possible reasons for these observations are discussed. (C) 2003 Elsevier Science Ltd. All rights reserved.
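A toy version of the rate computation, assuming the textbook two-step nitrification stoichiometry in molar units; the actual TOGA signal model, including the carbon dioxide transfer rate and the bicarbonate buffer system, is richer than this, and the signal values below are invented:

```python
import numpy as np

# Assumed stoichiometry (per mol N):
#   NH4+ + 1.5 O2 -> NO2- + H2O + 2 H+     (step rate r1)
#   NO2- + 0.5 O2 -> NO3-                  (step rate r2)
# The measured hydrogen-ion production rate (HPR) and oxygen uptake rate
# (OUR) are then linear in the unknown step rates.
A = np.array([[2.0, 0.0],     # HPR = 2*r1
              [1.5, 0.5]])    # OUR = 1.5*r1 + 0.5*r2
signals = np.array([0.40, 0.35])    # hypothetical HPR, OUR (mmol/L/h)

r1, r2 = np.linalg.solve(A, signals)
print(f"ammonia consumption rate : {r1:.3f} mmol N/L/h")
print(f"nitrite accumulation rate: {r1 - r2:.3f} mmol N/L/h")
print(f"nitrate production rate  : {r2:.3f} mmol N/L/h")
```

The three key process rates quoted in the abstract follow directly: ammonia consumption equals r1, nitrate production equals r2, and nitrite accumulation is their difference.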
Abstract:
Comprehensive measurements are presented of the piezometric head in an unconfined aquifer during steady, simple harmonic oscillations driven by a hydrostatic clear-water reservoir through a vertical interface. The results are analyzed and used to test existing hydrostatic and nonhydrostatic, small-amplitude theories along with capillary fringe effects. As expected, the amplitude of the water table wave decays exponentially. However, the decay rates and phase lags indicate the influence of both vertical flow and capillary effects. The capillary effects are reconciled with observations of water table oscillations in a sand column packed with the same sand. The effects of vertical flows and the corresponding nonhydrostatic pressure are reasonably well described by small-amplitude theory for water table waves in finite depth aquifers; this includes the oscillation amplitudes being greater at the bottom than at the top and the phase lead of the bottom compared with the top. The main problems in interpreting the measurements through existing theory relate to the complicated boundary condition at the interface between the driving head reservoir and the aquifer. That is, the small-amplitude, finite depth expansion solution, which matches a hydrostatic boundary condition between the bottom and the mean driving head level, is unrealistic with respect to the pressure variation above this level. Hence it cannot describe the finer details of the multiple-mode behavior close to the driving head boundary. The mean water table height initially increases with distance from the forcing boundary but then decreases again, and its asymptotic value is considerably smaller than that previously predicted for finite depth aquifers without capillary effects. Just as the mean water table over-height is smaller than predicted by capillarity-free shallow aquifer models, so is the amplitude of the second harmonic. In fact, there is no indication of extra second harmonics (in addition to that contained in the driving head) being generated at the interface or in the interior.
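For context, in the hydrostatic shallow-aquifer (linearized Boussinesq) limit, a water table wave driven by simple harmonic forcing of amplitude A decays and lags as

```latex
h(x,t) = A\, e^{-kx} \cos(\omega t - kx),
\qquad k = \sqrt{\frac{n_e\,\omega}{2 K D}},
```

where K is the hydraulic conductivity, D the mean saturated depth, n_e the effective porosity and ω the forcing frequency. The measurements discussed above show how vertical flow and capillary fringe effects modify these exponential decay rates and phase lags relative to this baseline.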
Abstract:
Background and aims: Hip fracture is a devastating event in terms of outcome in the elderly, and the best predictor of hip fracture risk is hip bone density, usually measured by dual X-ray absorptiometry (DXA). However, bone density can also be ascertained from computerized tomography (CT) scans, and mid-thigh scans are frequently employed to assess the muscle and fat composition of the lower limb. Therefore, we examined whether it was possible to predict hip bone density using mid-femoral bone density. Methods: Subjects were 803 ambulatory white and black women and men, aged 70-79 years, participating in the Health, Aging and Body Composition (Health ABC) Study. Bone mineral content (BMC, g) and volumetric bone mineral density (vBMD, mg/cm³) of the mid-femur were obtained by CT, whereas BMC and areal bone mineral density (aBMD, g/cm²) of the hip (femoral neck and trochanter) were derived from DXA. Results: In regression analyses stratified by race and sex, the coefficient of determination was low, with mid-femoral BMC explaining 6-27% of the variance in hip BMC, with a standard error of estimate (SEE) ranging from 16 to 22% of the mean. For mid-femur vBMD, the variance explained in hip aBMD was 2-17%, with a SEE ranging from 15 to 18%. Adjusting aBMD to approximate volumetric density did not improve the relationships. In addition, the utility of fracture prediction was examined. Forty-eight subjects had one or more fractures (various sites) during a mean follow-up of 4.07 years. In logistic regression analysis, there was no association between mid-femoral vBMD and fracture (all fractures), whereas a 1 SD increase in hip BMD was associated with reduced odds for fracture of approximately 60%. Conclusions: These results do not support the use of CT-derived mid-femoral vBMD or BMC to predict DXA-measured hip bone mineral status, irrespective of race or sex in older adults. Further, in contrast to femoral neck and trochanter BMD, mid-femur vBMD was not able to predict fracture (all fractures). (C) 2003, Editrice Kurtis.
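To make the "odds per 1 SD" statistic concrete, here is a hedged sketch on simulated data (not the Health ABC sample): hip BMD is standardized, a logistic model is fit for fracture, and exp(coef) gives the odds ratio for a 1 SD increase. A coefficient near -0.9 corresponds to exp(-0.9) ≈ 0.41, i.e. roughly 60% lower odds.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 800
bmd_z = rng.standard_normal(n)                  # standardized hip BMD
p = 1 / (1 + np.exp(-(-2.8 - 0.9 * bmd_z)))     # higher BMD -> lower risk
fracture = rng.random(n) < p                    # simulated fracture outcomes

model = LogisticRegression().fit(bmd_z.reshape(-1, 1), fracture)
odds_ratio = np.exp(model.coef_[0, 0])
print(f"odds ratio per +1 SD BMD: {odds_ratio:.2f}")   # ~0.4 => ~60% lower odds
```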
Abstract:
The coffee crop has always occupied a prominent place in the country's economy, given its economic and social importance, with the sector increasingly seeking a differentiated market and new technologies to improve beverage quality. The washing and pulping of coffee fruits, necessary to reduce drying costs and to improve beverage quality, generates large volumes of solid and liquid residues that are rich in organic and inorganic material. Wastewater from coffee processing (ARC) is generated annually in large volumes in Espírito Santo, and the nutrients present in this effluent indicate its potential for reuse in the fertigation of agricultural crops. The objective of the present work was to evaluate the effect of different doses of coffee wastewater on the growth, nutrient uptake, nutrient interactions and nutritional status of maize. To this end, a greenhouse experiment was conducted in a completely randomized design, in which 7 doses of ARC were applied, with 3 replicates, to experimental units consisting of pots with 2 dm³ of soil. The doses were equivalent to 0.00, 15.17, 30.35, 45.52, 60.70, 75.87 and 91.05 litres of ARC per m² of soil. Five seeds of the hybrid maize BR 206 were sown per pot, and five days after germination the plants were thinned to three per pot. Thirty days after germination, stem diameter (DC), leaf area (AF), shoot dry matter (MSPA), root dry matter (MSR), shoot/root ratio (MSPA/MSR), root mass ratio (root dry matter/total dry matter) and leaf area ratio (leaf area/total dry matter) were determined. The macronutrient contents (N, P, K, Ca, Mg and S) were determined in the shoots. The data were subjected to analysis of variance, and the variables were subjected to regression analysis as a function of the ARC doses. Pearson's linear correlation coefficient was calculated for the dependent variables. The ARC served as a source of nutrients for the maize plants and increased most of the growth variables as well as the N, K and S contents; however, it decreased the Ca, Mg and P contents of the shoots, and the results indicate that high doses promote an imbalance in the relationships between nutrients.
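As a flavour of the statistical treatment described (regression on dose, Pearson correlations between dependent variables), here is a sketch with hypothetical response values; only the seven ARC doses are taken from the study:

```python
import numpy as np
from scipy import stats

dose = np.array([0.00, 15.17, 30.35, 45.52, 60.70, 75.87, 91.05])  # L/m^2
mspa = np.array([3.1, 4.0, 4.8, 5.2, 5.5, 5.4, 5.1])   # hypothetical g/pot

coeffs = np.polyfit(dose, mspa, 2)        # quadratic dose-response fit
print("MSPA = %.5f*d^2 + %.4f*d + %.4f" % tuple(coeffs))

leaf_area = np.array([210, 260, 300, 330, 345, 340, 325.0])  # hypothetical cm^2
r, p = stats.pearsonr(mspa, leaf_area)    # correlation between two responses
print(f"Pearson r(MSPA, AF) = {r:.3f} (p = {p:.3g})")
```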
Abstract:
This paper examines the performance of Portuguese equity funds investing in the domestic and in the European Union market, using several unconditional and conditional multi-factor models. In terms of overall performance, we find that National funds are neutral performers, while European Union funds under-perform the market significantly. These results do not seem to be a consequence of management fees. Overall, our findings support the robustness of conditional multi-factor models. In fact, Portuguese equity funds seem to be relatively more exposed to small caps and more value-oriented. They also present strong evidence of time-varying betas and, in the case of the European Union funds, of time-varying alphas too. Finally, in terms of market timing, our tests suggest that the mutual fund managers in our sample do not exhibit any market timing abilities. Nevertheless, we find some evidence of time-varying conditional market timing abilities, but only at the individual fund level.
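A minimal sketch of a conditional (Ferson and Schadt-style) specification in which beta is allowed to vary with a lagged public-information variable; the data below are simulated, and the instrument z is a hypothetical stand-in for a variable such as dividend yield:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T = 240                                     # months of simulated returns
z = rng.standard_normal(T)                  # demeaned lagged instrument
mkt = 0.005 + 0.04 * rng.standard_normal(T) # market excess return
# fund return with a beta that moves with z, plus idiosyncratic noise
fund = 0.001 + (0.9 + 0.3 * z) * mkt + 0.01 * rng.standard_normal(T)

X = sm.add_constant(np.column_stack([mkt, z * mkt]))  # interaction = z*mkt
res = sm.OLS(fund, X).fit()
alpha, beta0, beta_z = res.params
print(f"alpha={alpha:.4f}, beta0={beta0:.2f}, time-variation={beta_z:.2f}")
```

A significant coefficient on the interaction term is the kind of evidence of time-varying betas that the abstract reports.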
Abstract:
Interest in the design and development of graphical user interfaces (GUIs) has grown in recent years. However, the correctness of GUI code is essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper describes our approach to reverse engineering abstract GUI models directly from Java/Swing code.
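To give a flavour of the idea (the paper's tooling works on real Java/Swing code and builds far richer models), even a crude scanner can recover widget instantiations and event-listener registrations from source text:

```python
import re

# Toy Java/Swing fragment to scan (hypothetical code).
java_src = """
    JButton ok = new JButton("OK");
    JTextField name = new JTextField(20);
    ok.addActionListener(e -> submit());
"""

# Widget instantiations: variable = new J<Widget>(...)
widgets = re.findall(r"(\w+)\s*=\s*new\s+(J\w+)\s*\(", java_src)
# Listener registrations: variable.add<Something>Listener(...)
handlers = re.findall(r"(\w+)\.add(\w*Listener)\s*\(", java_src)

print("widgets :", widgets)    # [('ok', 'JButton'), ('name', 'JTextField')]
print("handlers:", handlers)   # [('ok', 'ActionListener')]
```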
Abstract:
Color model representation allows any defined color spectrum of visible light, i.e. with a wavelength between 400 nm and 700 nm, to be characterized in a quantitative manner. To accomplish this, each model, or color space, is associated with a function that maps the spectral power distribution of the visible electromagnetic radiation into a space defined by a set of discrete values that quantify the color components composing the model. Some color spaces are sensitive to changes in lighting conditions; others preserve certain chromatic features and remain immune to these changes. It therefore becomes necessary to identify the strengths and weaknesses of each model in order to justify the adoption of particular color spaces in image processing and analysis techniques. This chapter addresses the topic of digital imaging and its main standards and formats. We then establish a mathematical model of the image acquisition sensor response, which enables the various color spaces to be assessed with the aim of determining their invariance to illumination changes.
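A quick illustration of the invariance question: uniformly scaling the RGB components (a crude stand-in for an illumination intensity change) alters all three RGB values, while the hue and saturation of the HSV representation are unchanged.

```python
import colorsys

rgb = (0.8, 0.4, 0.2)
dimmed = tuple(0.5 * c for c in rgb)   # same surface, half the light

for label, c in (("original", rgb), ("dimmed", dimmed)):
    h, s, v = colorsys.rgb_to_hsv(*c)
    # H and S are identical for both rows; only V follows the scaling.
    print(f"{label}: RGB={c}  H={h:.3f} S={s:.3f} V={v:.3f}")
```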
Abstract:
Current software development relies increasingly on non-trivial coordination logic for combining autonomous services, often running on different platforms. As a rule, however, in typical non-trivial software systems, such a coordination layer is strongly weaved within the application at source code level. Therefore, its precise identification becomes a major methodological (and technical) problem whose importance cannot be overestimated in any program understanding or refactoring process. Open access to source code, as granted in OSS certification, provides an opportunity for the development of methods and technologies to extract the relevant coordination information from source code. This paper is a step in this direction, combining a number of program analysis techniques to automatically recover coordination information from legacy code. Such information is then expressed as a model in Orc, a general-purpose orchestration language.
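As a toy analogue of the extraction step, here is a sketch that uses Python's own ast module (instead of the paper's analyses, which ultimately produce an Orc model) to locate calls to an assumed set of coordination primitives in source code:

```python
import ast

# Assumed coordination primitives to look for (illustrative set).
COORDINATION_CALLS = {"Thread", "Lock", "Queue", "put", "get", "start", "join"}

source = """
import threading, queue
q = queue.Queue()
t = threading.Thread(target=worker, args=(q,))
t.start()
q.put(job)
t.join()
"""

found = []
for node in ast.walk(ast.parse(source)):
    if isinstance(node, ast.Call):
        fn = node.func
        # handle both obj.method(...) and bare name(...) call forms
        name = fn.attr if isinstance(fn, ast.Attribute) else getattr(fn, "id", None)
        if name in COORDINATION_CALLS:
            found.append((node.lineno, name))

print("coordination calls:", found)
```

The recovered call sites are the raw material from which a coordination model, such as an Orc orchestration, could then be assembled.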
Abstract:
A growing number of corporate failure prediction models has emerged since the 1960s. The economic and social consequences of business failure can be dramatic, so it is no surprise that the issue has been of growing interest in academic research as well as in a business context. The main purpose of this study is to compare the predictive ability of five models based on three statistical techniques (Discriminant Analysis, Logit and Probit) and two models based on Artificial Intelligence (Neural Networks and Rough Sets). The five models were applied to a dataset of 420 non-bankrupt firms and 125 bankrupt firms belonging to the textile and clothing industry, over the period 2003-09. The results show that all the models performed well, with an overall correct classification level higher than 90% and a type II error always below 2%. The type I error increases as we move away from the year prior to failure. Our models contribute to the discussion of the causes of corporate financial distress. Moreover, they can be used to assist the decisions of creditors, investors and auditors. Additionally, this research can be of great value to the devisers of national economic policies that aim to reduce industrial unemployment.
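A hedged sketch of such a model comparison on synthetic data with the same 420/125 class sizes; only three of the five techniques are shown (scikit-learn has no Probit or Rough Sets implementation), and the features are random stand-ins for the real financial ratios:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

# Synthetic sample mimicking the class imbalance: y=1 marks a failed firm.
X, y = make_classification(n_samples=545, n_features=8,
                           weights=[420 / 545], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

models = {"LDA": LinearDiscriminantAnalysis(),
          "Logit": LogisticRegression(max_iter=1000),
          "NN": MLPClassifier(max_iter=2000, random_state=0)}

for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    type1 = fn / (fn + tp)    # failed firm classified as healthy
    type2 = fp / (fp + tn)    # healthy firm classified as failed
    print(f"{name}: type I = {type1:.2%}, type II = {type2:.2%}")
```

The type I / type II split mirrors the error definitions used in the study, where misclassifying a failing firm as healthy is the costlier mistake.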