Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use recession forecasts (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts, and it also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of univariate models considered in Chapters 2–4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained, and the bivariate model is found to outperform the univariate models in terms of predictive power.
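For illustration, the sketch below shows one common way such an autoregressive probit model can be written and estimated: the probit index follows the recursion pi_t = omega + alpha*pi_{t-1} + beta*x_{t-1} and P(y_t = 1) = Phi(pi_t). This is an assumed textbook-style specification; the simulated predictor, parameter values and estimation code are illustrative and not taken from the thesis.

```python
# Minimal sketch of a dynamic ("autoregressive") probit for a binary
# recession indicator, estimated by maximum likelihood. Illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
T = 400
x = rng.normal(size=T)                       # stand-in predictor (e.g. a term spread)
true = dict(omega=-0.8, alpha=0.6, beta=-0.9)  # assumed "true" parameters

# Simulate the binary series from the assumed recursion.
pi = np.zeros(T)
y = np.zeros(T, dtype=int)
for t in range(1, T):
    pi[t] = true["omega"] + true["alpha"] * pi[t - 1] + true["beta"] * x[t - 1]
    y[t] = rng.random() < norm.cdf(pi[t])

def neg_loglik(theta):
    """Negative log-likelihood of the autoregressive probit index recursion."""
    omega, alpha, beta = theta
    pi_t, ll = 0.0, 0.0
    for t in range(1, T):
        pi_t = omega + alpha * pi_t + beta * x[t - 1]
        p = np.clip(norm.cdf(pi_t), 1e-9, 1 - 1e-9)
        ll += y[t] * np.log(p) + (1 - y[t]) * np.log1p(-p)
    return -ll

fit = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
print("ML estimates (omega, alpha, beta):", np.round(fit.x, 2))
```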
Abstract:
Phantoms with 16 electrodes are developed and studied with simple instrumentation developed for Electrical Impedance Tomography (EIT). Analog instrumentation is developed with a sinusoidal current generator and a signal conditioner circuit. The current generator is built around a modified Howland constant current source fed by a voltage-controlled oscillator, and the signal conditioner circuit consists of an instrumentation amplifier and a narrow band-pass filter. The electronic hardware is connected to the electrodes through a DIP-switch-based multiplexer module. Phantoms with different electrode sizes and positions are developed, and the EIT forward problem is studied using the forward solver. A low-frequency, low-magnitude sinusoidal current is injected into the surface electrodes surrounding the phantom boundary, and the differential potential is measured by a digital multimeter. By comparing the measured potentials with the simulated data, the aim is to reduce the measurement error, and an optimum phantom geometry is suggested. The results show that the common mode electrode reduces the common mode error of the EIT electronics and reduces the error potential in the measured data. The differential potential is reduced up to 67 mV at the voltage electrode pair opposite to the current electrodes. The offset potential is measured and subtracted from the measured data for further correction. It is noticed that the potential data pattern depends on the electrode width, and an optimum electrode width is suggested. It is also observed that the measured potential becomes acceptable with a 20 mm solution column above and below the electrode array level.
Abstract:
One mole of diethyl dixanthogen reacts with 26 moles of chloramine-T, and this reaction can be used for the determination of the dixanthogen. Higher alkyl dixanthogens react in a more complicated fashion, but may still be estimated using an empirical correction.
Abstract:
A better understanding of the limiting step in a first order phase transition, the nucleation process, is of major importance to a variety of scientific fields ranging from atmospheric sciences to nanotechnology and even to cosmology. This is due to the fact that in most phase transitions the new phase is separated from the mother phase by a free energy barrier. This barrier is crossed in a process called nucleation. Nowadays it is considered that a significant fraction of all atmospheric particles is produced by vapour-to-liquid nucleation. In atmospheric sciences, as well as in other scientific fields, the theoretical treatment of nucleation is mostly based on a theory known as the Classical Nucleation Theory. However, the Classical Nucleation Theory is known to have only limited success in predicting the rate at which vapour-to-liquid nucleation takes place at given conditions. This thesis studies unary homogeneous vapour-to-liquid nucleation from a statistical mechanics viewpoint. We apply Monte Carlo simulations of molecular clusters to calculate the free energy barrier separating the vapour and liquid phases and compare our results against laboratory measurements and Classical Nucleation Theory predictions. According to our results, the work of adding a monomer to a cluster in equilibrium vapour is accurately described by the liquid drop model applied by the Classical Nucleation Theory, once the clusters are larger than some threshold size. The threshold cluster sizes contain only a few or some tens of molecules, depending on the interaction potential and temperature. However, the error made in modelling the smallest clusters as liquid drops results in an erroneous absolute value for the cluster work of formation throughout the size range, as predicted by the McGraw-Laaksonen scaling law. By calculating correction factors to Classical Nucleation Theory predictions for the nucleation barriers of argon and water, we show that the corrected predictions produce nucleation rates that are in good agreement with experiments. For the smallest clusters, the deviation between the simulation results and the liquid drop values is accurately modelled by the low order virial coefficients at modest temperatures and vapour densities, or in other words, in the validity range of the non-interacting cluster theory of Frenkel, Band and Bijl. Our results do not indicate a need for a size-dependent replacement free energy correction. The results also indicate that Classical Nucleation Theory predicts the size of the critical cluster correctly. We also present a new method for the calculation of the equilibrium vapour density, the size dependence of the surface tension and the planar surface tension directly from cluster simulations. We also show how the size dependence of the cluster surface tension at the equimolar surface is a function of virial coefficients, a result confirmed by our cluster simulations.
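For context, the liquid drop model invoked by the Classical Nucleation Theory writes the work of forming an n-molecule cluster from a supersaturated vapour in the standard textbook form below (quoted for orientation; the thesis' own expressions may differ in detail):

```latex
\[
W(n) \approx -\,n\,k_{\mathrm{B}}T\ln S \;+\; \theta_\infty\, n^{2/3},
\qquad
\theta_\infty = (36\pi)^{1/3}\, v_l^{2/3}\, \sigma_\infty ,
\]
```

where S is the saturation ratio, v_l the molecular volume in the liquid and sigma_infinity the planar surface tension; the nucleation barrier is the maximum of W(n), attained at the critical cluster size.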
Abstract:
Recently, the focus of real estate investment has expanded from the building-specific level to the aggregate portfolio level. The portfolio perspective requires investment analysis for real estate which is comparable with that of other asset classes, such as stocks and bonds. Thus, despite its distinctive features, such as heterogeneity, high unit value, illiquidity and the use of valuations to measure performance, real estate should not be considered in isolation. This means that techniques which are widely used for other asset classes can also be applied to real estate. An important part of investment strategies which support decisions on multi-asset portfolios is identifying the fundamentals of movements in property rents and returns, and predicting them on the basis of these fundamentals. The main objective of this thesis is to find the key drivers and the best methods for modelling and forecasting property rents and returns in markets which have experienced structural changes. The Finnish property market, which is a small European market with structural changes and limited property data, is used as a case study. The findings in the thesis show that it is possible to use modern econometric tools for modelling and forecasting property markets. The thesis consists of an introductory part and four essays. Essays 1 and 3 model Helsinki office rents and returns, and assess the suitability of alternative techniques for forecasting these series. Simple time series techniques are able to account for structural changes in the way markets operate, and thus provide the best forecasting tool. Theory-based econometric models, in particular error correction models, which are constrained by long-run information, are better for explaining past movements in rents and returns than for predicting their future movements. Essay 2 proceeds by examining the key drivers of rent movements for several property types in a number of Finnish property markets. The essay shows that commercial rents in local markets can be modelled using national macroeconomic variables and a panel approach. Finally, Essay 4 investigates whether forecasting models can be improved by accounting for asymmetric responses of office returns to the business cycle. The essay finds that the forecast performance of time series models can be improved by introducing asymmetries, and the improvement is sufficient to justify the extra computational time and effort associated with the application of these techniques.
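As an illustration of the error correction models mentioned above, a generic single-equation specification (a standard form; the exact models estimated in the essays may differ) is:

```latex
\[
\Delta r_t = \mu + \gamma' \Delta x_t + \lambda\left(r_{t-1} - \beta' x_{t-1}\right) + \varepsilon_t ,
\]
```

where r_t denotes rents (or returns), x_t the macroeconomic fundamentals, the term in parentheses the long-run cointegrating relation, and lambda < 0 the speed of adjustment back towards that relation.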
Abstract:
In the thesis we consider inference for cointegration in vector autoregressive (VAR) models. The thesis consists of an introduction and four papers. The first paper proposes a new test for cointegration in VAR models that is directly based on the eigenvalues of the least squares (LS) estimate of the autoregressive matrix. In the second paper we compare a small sample correction for the likelihood ratio (LR) test of cointegrating rank and the bootstrap. The simulation experiments show that the bootstrap works very well in practice and dominates the correction factor. The tests are applied to international stock price data, and the finite sample performance of the tests is investigated by simulating the data. The third paper studies the demand for money in Sweden in 1970–2000 using the I(2) model. In the fourth paper we re-examine the evidence of cointegration between international stock prices. The paper shows that some of the previous empirical results can be explained by the small-sample bias and size distortion of Johansen's LR tests for cointegration. In all papers we work with two data sets. The first data set is a Swedish money demand data set with observations on the money stock, the consumer price index, gross domestic product (GDP), the short-term interest rate and the long-term interest rate. The data are quarterly and the sample period is 1970(1)–2000(1). The second data set consists of month-end stock market index observations for Finland, France, Germany, Sweden, the United Kingdom and the United States from 1980(1) to 1997(2). Both data sets are typical of the sample sizes encountered in economic data, and the applications illustrate the usefulness of the models and tests discussed in the thesis.
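For readers who want to reproduce the flavour of an LR test of cointegrating rank, the sketch below runs Johansen's trace test on a simulated bivariate system using statsmodels. The simulated data, lag choice and deterministic-term setting are illustrative assumptions, not the thesis' data or code.

```python
# Minimal sketch: Johansen's LR (trace) test for cointegrating rank on a
# simulated bivariate system with one common stochastic trend (true rank 1).
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)
T = 200
trend = np.cumsum(rng.normal(size=T))            # shared I(1) trend
y1 = trend + rng.normal(scale=0.5, size=T)
y2 = 0.8 * trend + rng.normal(scale=0.5, size=T)
data = np.column_stack([y1, y2])

# det_order=0: constant term in the VECM; k_ar_diff=1: one lagged difference.
res = coint_johansen(data, det_order=0, k_ar_diff=1)
for r, (stat, cv) in enumerate(zip(res.lr1, res.cvt[:, 1])):
    print(f"H0: rank <= {r}: trace stat = {stat:.2f}, 5% critical value = {cv:.2f}")
```

A small-sample bootstrap version of this test would re-estimate the restricted model, resample its residuals to generate pseudo-data, and compare the observed trace statistic with the bootstrap distribution instead of the asymptotic critical values.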
Abstract:
Experimental studies are presented to show the effect of thermal stresses on thermal contact conductance (TCC) at low contact pressures. It is observed that in a closed contact assembly, the contact pressure acting on the interface changes with the changing temperature of the contact members. This change in contact pressure consequently causes variations in the TCC of the junction. A relationship between the temperature change and the corresponding magnitude of the developed thermal stress in a contact assembly is determined experimentally. Inclusion of a term called the temperature-dependent load correction factor is suggested in the theoretical model for TCC to make it capable of predicting TCC values more accurately in contact assemblies that experience large temperature fluctuations. [DOI: 10.1115/1.4001615]
Abstract:
1,2-Enedioic systems, being sterically perturbed from planarity, do not show the effect of the extended conjugation expected of a (formal) trienic entity. In the absence of a model which approximates to a uniplanar situation, the strategy of replacing an ester group in the enedioates by a cyano group (for which a less stringent steric demand may be presumed) and noting the correction concomitant to this replacement was adopted to arrive at a notional figure for the position of maximal absorption in the planar enedioates. From this the conclusion, subject to substantiation by molecular mechanical or quantum chemical calculations, was drawn that even the E-isomeric and comparatively less substituted enedioates are highly sterically perturbed. An alternative to an earlier explanation of the bathochromic shift of absorption maxima encountered in the 5-cyclic ene-ester and ene-nitrile, relative to the 6-cyclic analogues (observed also with the enedioates and cyanovinyl ester systems), seen later to have been based on unwarranted premises, has been advanced. A comment on the absorption characteristics of enedioic anhydrides has been appended.
Abstract:
A formal way of deriving fluctuation-correlation relations in dense sheared granular media, starting with the Enskog approximation for the collision integral in the Chapman-Enskog theory, is discussed. The correlation correction to the viscosity is obtained using the ring-kinetic equation, in terms of the correlations in the hydrodynamic modes of the linearised Enskog equation. It is shown that the Green-Kubo formula for the shear viscosity emerges from the two-body correlation function obtained from the ring-kinetic equation.
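For reference, the Green-Kubo formula for the shear viscosity referred to above is usually written as (standard form, reproduced here for context rather than quoted from the paper):

```latex
\[
\eta = \frac{V}{k_{\mathrm{B}}T}\int_0^{\infty}
\left\langle \sigma_{xy}(t)\,\sigma_{xy}(0) \right\rangle \mathrm{d}t ,
\]
```

where sigma_xy is an off-diagonal component of the microscopic stress tensor, V the system volume, and the angular brackets denote an equilibrium ensemble average.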
Abstract:
People in many countries are affected by fluorosis owing to the high levels of fluoride in drinking water. An inexpensive method for estimating the concentration of the fluoride ion in drinking water would be helpful in identifying safe sources of water and also in monitoring the performance of defluoridation techniques. For this purpose, a simple, inexpensive, and portable colorimeter has been developed in the present work. It is used in conjunction with the SPADNS method, which shows a color change in the visible region on addition of water containing fluoride to a reagent solution. Groundwater samples were collected from different parts of the state of Karnataka, India, and analysed for fluoride. The results obtained using the colorimeter and the double beam spectrophotometer agreed fairly well. The costs of the colorimeter and of the chemicals required per test were about Rs. 250 (US$ 5) and Rs. 2.5 (US$ 0.05), respectively. In addition, the cost of the chemicals required for constructing the calibration curve was about Rs. 15 (US$ 0.3).
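The calibration curve mentioned above can be handled with a simple linear fit; the sketch below shows one hypothetical way to do it. The standard concentrations, absorbance readings and the assumption that absorbance decreases with fluoride (as is typical for SPADNS bleaching) are illustrative and not the authors' data.

```python
# Hypothetical SPADNS-type calibration: fit a line to absorbance readings of
# fluoride standards, then invert it to estimate an unknown sample.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # mg/L fluoride standards
absorbance = np.array([0.60, 0.52, 0.45, 0.37, 0.30])  # illustrative readings

slope, intercept = np.polyfit(conc, absorbance, 1)   # linear calibration curve

def fluoride_mg_per_l(sample_absorbance: float) -> float:
    """Invert the calibration line to estimate fluoride concentration."""
    return (sample_absorbance - intercept) / slope

print(f"Estimated fluoride: {fluoride_mg_per_l(0.41):.2f} mg/L")
```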
Abstract:
According to Wen's theory, a universal behavior of the fractional quantum Hall edge is expected at sufficiently low energies, where the dispersion of the elementary edge excitation is linear. A microscopic calculation shows that the actual dispersion is indeed linear at low energies, but deviates from linearity beyond a certain energy, and also exhibits an "edge roton minimum." We determine the edge exponent from a microscopic approach, and find that the nonlinearity of the dispersion makes a surprisingly small correction to the edge exponent even at energies higher than the roton energy. We explain this insensitivity as arising from the fact that the energy at maximum spectral weight continues to show an almost linear behavior up to fairly high energies. We also study, in an effective-field theory, how interactions modify the exponent for a reconstructed edge with multiple edge modes. Relevance to experiment is discussed.
Abstract:
We revise and extend the extreme value statistic, introduced in Gupta et al., to study direction dependence in the high-redshift supernova data, arising either from departures from the cosmological principle or from direction-dependent statistical systematics in the data. We introduce a likelihood function that analytically marginalizes over the Hubble constant and use it to extend our previous statistic. We also introduce a new statistic that is sensitive to direction dependence arising from living off-centre inside a large void, as well as from the previously mentioned sources of anisotropy. We show that for large data sets, this statistic has a limiting form that can be computed analytically. We apply our statistics to the gold data sets from Riess et al., as in our previous work. Our revision and extension of the previous statistic show that marginalizing over the Hubble constant instead of using its best-fitting value has only a marginal effect on our results. However, correction of errors in our previous work reduces the level of non-Gaussianity in the 2004 gold data that was found in our earlier work. The revised results for the 2007 gold data show that the data are consistent with isotropy and Gaussianity. Our second statistic confirms these results.
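A standard analytic marginalization over the Hubble constant used in supernova analyses (shown here only as an illustration; the likelihood in the paper may be defined differently) absorbs H0 into an additive offset on the distance moduli and integrates it out with a flat prior, giving

```latex
\[
\chi^2_{\mathrm{marg}} = A - \frac{B^2}{C} + \ln\frac{C}{2\pi},
\qquad
A = \sum_i \frac{\Delta\mu_i^2}{\sigma_i^2},\quad
B = \sum_i \frac{\Delta\mu_i}{\sigma_i^2},\quad
C = \sum_i \frac{1}{\sigma_i^2},
\]
```

where Delta mu_i is the residual between the observed and model distance modulus of the i-th supernova before the offset is applied.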
Abstract:
Although the first procedure in a seeing human eye using an excimer laser was reported in 1988 (McDonald et al. 1989, O'Connor et al. 2006), just three studies (Kymionis et al. 2007, O'Connor et al. 2006, Rajan et al. 2004) with a follow-up of over ten years had been published when this thesis was started. The present thesis aims to investigate 1) the long-term outcomes of excimer laser refractive surgery performed for myopia and/or astigmatism by photorefractive keratectomy (PRK) and laser in situ keratomileusis (LASIK), 2) the possible differences in postoperative outcomes and complications when moderate-to-high astigmatism is treated with PRK or LASIK, 3) the presence of irregular astigmatism that depends exclusively on the corneal epithelium, and 4) the role of corneal nerve recovery in corneal wound healing in PRK enhancement. Our results revealed that in the long term the number of eyes that achieved uncorrected visual acuity (UCVA) ≤0.0 and ≤0.5 (logMAR) was higher after PRK than after LASIK. Postoperative stability was slightly better after PRK than after LASIK. In LASIK-treated eyes the incidence of myopic regression was more pronounced when the intended correction was over 6.0 D and in patients aged under 30 years. Yet the intended corrections in our study were higher for LASIK than for PRK eyes. No long-term differences were found between PRK and LASIK in the percentages of eyes with best corrected visual acuity (BCVA) or with a loss of two or more lines of visual acuity. The long-term postoperative outcomes of PRK with two different delivery systems, a broad-beam and a scanning-slit laser, were compared and revealed no differences. Postoperative outcomes for moderate-to-high astigmatism yielded better results in terms of UCVA, and less compromise or loss of two or more lines of BCVA, after LASIK than after PRK. Similar stability was revealed for both procedures. Vector analysis showed that LASIK outcomes tended to be more accurate than PRK outcomes, yet no statistically significant differences were found. Irregular astigmatism secondary to recurrent corneal erosion due to map-dot-fingerprint dystrophy was successfully treated with phototherapeutic keratectomy (PTK). Preoperative videokeratographies (VK) showed irregular astigmatism. However, postoperatively, all eyes showed a regular pattern. No correlation was found between pre- and postoperative VK patterns. Postoperative outcomes of late PRK in eyes originally subjected to LASIK showed that all (7/7) eyes achieved UCVA ≤0.5 at the last follow-up (range 3–11 months), and no eye lost lines of BCVA. Postoperatively, all eyes developed an initial mild haze (0.5–1) during the first month. Yet, at the last follow-up, 5/7 eyes showed a haze of 0.5 and this was no longer evident in 2/7 eyes. Based on these results, we demonstrated that the long-term outcomes after PRK and LASIK were safe and efficient, with similar stability for both procedures. The PRK outcomes were similar whether treated with a broad-beam or a scanning-slit laser. LASIK was better than PRK for correcting moderate-to-high astigmatism, yet both procedures showed a tendency towards undercorrection. Irregular astigmatism was shown to be able to depend exclusively on the corneal epithelium. If this kind of astigmatism is present in the cornea and a customized PRK/LASIK correction is performed based on wavefront measurements, an irregular astigmatism may be produced rather than treated. Corneal sensory nerve recovery is likely to have an important role in the modulation of corneal wound healing and postoperative anterior stromal scarring. PRK enhancement may be an option in eyes with previous LASIK after a sufficient time interval of at least 2 years.
Abstract:
The conformation of (Pro-Gly-Phe)n in trifluoroethanol was investigated using CD, NMR and IR techniques. After making appropriate corrections for the contribution of the phenylalanine chromophore to the observed CD spectra of the polytripeptide at several temperatures, it is found that (Pro-Gly-Phe)n can exist in a partially triple-helical conformation in this solvent at low temperatures. The NMR and IR data support this conclusion. In conjunction with recent theoretical studies, our data offer an explanation for the preferential occurrence of the Phe residue in position 2 of the tripeptide sequence Gly-R2-R3 in collagen.
Abstract:
The aim of this dissertation is to model economic variables by a mixture autoregressive (MAR) model. The MAR model is a generalization of the linear autoregressive (AR) model. The MAR model consists of K linear autoregressive components. At any given point of time one of these autoregressive components is randomly selected to generate a new observation for the time series. The mixture probability can be constant over time or a direct function of some observable variable. Many economic time series have properties which cannot be described by linear and stationary time series models. A nonlinear autoregressive model such as the MAR model can be a plausible alternative for these time series. In this dissertation the MAR model is used to model stock market bubbles and the relationship between inflation and the interest rate. In the case of the inflation rate we arrived at a MAR model in which the inflation process is less mean-reverting in the case of high inflation than in the case of normal inflation. The interest rate moves one-for-one with expected inflation. We use the data from the Livingston survey as a proxy for inflation expectations. We have found that survey inflation expectations are not perfectly rational. According to our results, information stickiness plays an important role in expectation formation. We also found that survey participants have a tendency to underestimate inflation. A MAR model is also used to model stock market bubbles and crashes. This model has two regimes: the bubble regime and the error correction regime. In the error correction regime the price depends on a fundamental factor, the price-dividend ratio, and in the bubble regime the price is independent of fundamentals. In this model a stock market crash is usually caused by a regime switch from the bubble regime to the error correction regime. According to our empirical results, bubbles are related to low inflation. Our model also implies that bubbles influence the investment return distribution in both the short and the long run.
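To make the regime-switching mechanism concrete, the sketch below simulates a two-component MAR process in which each observation is generated by one of two AR(1) regimes drawn at random. The regime parameters and the constant mixture probability are illustrative assumptions, not the dissertation's estimates.

```python
# Minimal simulation sketch of a two-component mixture autoregressive process.
import numpy as np

rng = np.random.default_rng(1)
T = 500
mix_prob = 0.9           # probability of the first ("normal") regime, assumed constant
phi = (0.5, 0.95)        # AR(1) coefficients of the two regimes
sigma = (0.5, 1.5)       # innovation standard deviations of the two regimes

y = np.zeros(T)
for t in range(1, T):
    k = 0 if rng.random() < mix_prob else 1          # randomly select a regime
    y[t] = phi[k] * y[t - 1] + rng.normal(scale=sigma[k])

print(f"sample mean {y.mean():.3f}, sample std {y.std():.3f}")
```

Making mix_prob a function of an observed variable (for example the price-dividend ratio) turns this into the time-varying mixture probability case described above.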