931 results for parametric oscillators and amplifiers
Abstract:
Modern medical imaging techniques enable the acquisition of in vivo high resolution images of the vascular system. Most common methods for the detection of vessels in these images, such as multiscale Hessian-based operators and matched filters, rely on the assumption that at each voxel there is a single cylinder. Such an assumption is clearly violated at the multitude of branching points that are easily observed in all but the most focused vascular image studies. In this paper, we propose a novel method for detecting vessels in medical images that relaxes this single-cylinder assumption. We directly exploit local neighborhood intensities and extract characteristics of the local intensity profile (in a spherical polar coordinate system), which we term the polar neighborhood intensity profile. We present a new method to capture the common properties shared by the polar neighborhood intensity profiles of all types of vascular points in the vascular system. The new method enables us to detect vessels even near complex extreme points, including branching points. Our method demonstrates improved performance over standard methods on both 2D synthetic images and 3D animal and clinical vascular images, particularly close to vessel branching regions. (C) 2008 Elsevier B.V. All rights reserved.
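As a rough illustration of the feature the abstract describes, the sketch below samples a polar neighborhood intensity profile around a voxel. It is a minimal stand-in, not the authors' implementation; the radius, angular resolution, and trilinear interpolation are assumptions.

```python
# Minimal sketch (not the paper's code): read image intensities on a sphere
# around a voxel, parameterised by polar and azimuthal angles.
import numpy as np
from scipy.ndimage import map_coordinates

def polar_profile(volume, center, radius, n_theta=16, n_phi=32):
    """Sample intensities on a sphere around `center` (z, y, x)."""
    theta = np.linspace(0.0, np.pi, n_theta)                      # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)    # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    z = center[0] + radius * np.cos(T)
    y = center[1] + radius * np.sin(T) * np.sin(P)
    x = center[2] + radius * np.sin(T) * np.cos(P)
    # Trilinear interpolation at the spherical sample points.
    return map_coordinates(volume, [z, y, x], order=1, mode="nearest")

vol = np.random.rand(64, 64, 64)                 # placeholder volume
profile = polar_profile(vol, center=(32, 32, 32), radius=3.0)
print(profile.shape)                             # (16, 32) intensity profile
```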
Abstract:
The objective of the study was to evaluate saliva flow rate, buffer capacity, pH levels, and dental caries experience (DCE) in autistic individuals, comparing the results with a control group (CG). The study was performed on 25 noninstitutionalized autistic boys, divided into two groups: G1, composed of ten children aged 3-8, and G2, composed of 15 adolescents aged 9-13. The CG was composed of 25 healthy boys, randomly selected and also divided into two groups: CG3, composed of 14 children aged 4-8, and CG4, composed of 11 adolescents aged 9-14. Whole saliva was collected under slight suction, and pH and buffer capacity were determined using a digital pH meter. Buffer capacity was measured by titration with 0.01 N HCl, flow rate was expressed in ml/min, and DCE was expressed as decayed, missing, and filled teeth (DMFT for permanent dentition, dmft for primary dentition). Data were plotted and submitted to nonparametric (Kruskal-Wallis) and parametric (Student's t test) statistical tests at a significance level of 0.05. When comparing G1 and CG3, the groups did not differ in flow rate, pH levels, buffer capacity, or DMFT. Groups G2 and CG4 differed significantly in pH (p = 0.007) and pHi = 7.0 (p = 0.001), with lower scores for G2. In autistic individuals aged 3-8 and 9-13, medicated or not, there was no statistically significant difference in flow rate, pH, or buffer capacity. The comparison of DCE between autistic children and CG children with deciduous (dmft) and mixed/permanent (DMFT) dentition did not show a statistical difference (p = 0.743). The data suggest that autistic individuals have neither a higher flow rate nor a better buffer capacity. Similar DCE was observed in both groups studied.
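For illustration, the group comparison described above can be reproduced in outline with scipy.stats; the flow-rate values below are fabricated placeholders, not the study's measurements.

```python
# Sketch of the nonparametric and parametric group tests named in the abstract,
# on made-up flow-rate data (ml/min).
from scipy import stats

g1_flow = [0.41, 0.38, 0.52, 0.47, 0.35, 0.44, 0.40, 0.49, 0.36, 0.43]   # autistic children (n=10)
cg3_flow = [0.45, 0.39, 0.50, 0.42, 0.48, 0.37, 0.46, 0.41, 0.44, 0.40,
            0.51, 0.38, 0.47, 0.43]                                       # control children (n=14)

h, p_kw = stats.kruskal(g1_flow, cg3_flow)       # nonparametric test
t, p_t = stats.ttest_ind(g1_flow, cg3_flow)      # parametric test
print(f"Kruskal-Wallis p={p_kw:.3f}, t-test p={p_t:.3f}")  # significant if p < 0.05
```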
Abstract:
In Sweden, 90% of solar heating systems are combined solar domestic hot water and heating systems (SDHW&H), so-called combisystems. These generally supply most of the domestic hot water needs during the summer and have enough capacity to supply some energy to the heating system during spring and autumn. This paper describes a standard Swedish combisystem, how its output varies with heating load and with climate within Sweden, and how it can be increased through improved system design. A base case is defined using the standard combisystem, a modern Swedish single-family house, and the climate of Stockholm. Using the simulation program TRNSYS, parametric studies have been performed on the base case and on improved system designs. The solar fraction could be increased from 17.1% for the base case to 22.6% for the best system design, given the same system size, collector type, and load. A short analysis of the costs of the changed system designs is given, showing that payback times for the additional investment are 5-8 years. Measurements on system components in the laboratory have been used to verify the simulation models. More work is being carried out to find even better system designs, and further improvements in system performance are expected.
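A back-of-the-envelope sketch of how the reported solar fractions translate into a simple payback estimate; the annual load, added investment, and energy price are assumptions chosen only to land in the reported 5-8 year range.

```python
# Illustrative arithmetic only: extra solar yield and simple payback time.
annual_load_kwh = 12_000.0                        # assumed heating + DHW load
base_solar_kwh = 0.171 * annual_load_kwh          # 17.1% solar fraction (base case)
best_solar_kwh = 0.226 * annual_load_kwh          # 22.6% (best design)

extra_kwh = best_solar_kwh - base_solar_kwh       # auxiliary energy saved per year
extra_cost = 400.0                                # assumed added investment (EUR)
energy_price = 0.12                               # assumed EUR/kWh
payback_years = extra_cost / (extra_kwh * energy_price)
print(f"extra yield: {extra_kwh:.0f} kWh/a, simple payback: {payback_years:.1f} years")
```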
Abstract:
The study reported here is part of a large project for evaluation of the Thermo-Chemical Accumulator (TCA), a technology under development by the Swedish company ClimateWell AB. The studies concentrate on the use of the technology for comfort cooling. This report concentrates on measurements in the laboratory, modelling, and system simulation. The TCA is a three-phase absorption heat pump that stores energy in the form of crystallised salt, in this case lithium chloride (LiCl), with water being the other substance. The process requires vacuum conditions, as with standard absorption chillers using LiBr/water. Measurements were carried out in the laboratories at the Solar Energy Research Center SERC at Högskolan Dalarna as well as at ClimateWell AB. The measurements at SERC were performed on prototype version 7:1 and showed that this prototype had several problems resulting in poor and unreliable performance. The main results were that: there was significant corrosion leading to non-condensable gases, which in turn caused very poor performance; unwanted crystallisation caused blockages as well as inconsistent behaviour; and poor wetting of the heat exchangers resulted in relatively high temperature drops there. A measured thermal COP for cooling of 0.46 was found, which is significantly lower than the theoretical value. These findings resulted in a thorough redesign for the new prototype, called ClimateWell 10 (CW10), which was tested briefly by the authors at ClimateWell. The data set collected was small, but enough to show that the machine worked consistently with no noticeable vacuum problems. It was also sufficient for identifying the main parameters in a simulation model developed for the TRNSYS simulation environment, but not enough to verify the model properly. This model was shown to be able to simulate the dynamic as well as the static performance of the CW10, and was then used in a series of system simulations. A single system model was developed as the basis of the system simulations, consisting of a CW10 machine, 30 m2 of flat plate solar collectors with a backup boiler, and an office in Stockholm with a design cooling load of 50 W/m2, resulting in a 7.5 kW design load for the 150 m2 floor area. Two base cases were defined from this: one for Stockholm using a dry cooler with a design cooling rate of 30 kW, and one for Madrid with a cooling tower with a design cooling rate of 34 kW. A number of parametric studies were performed based on these two base cases. These showed that the temperature lift is a limiting factor for cooling at higher ambient temperatures and for charging with a fixed-temperature source such as district heating. The simulated evacuated tube collector performs only marginally better than a good flat plate collector when considering the gross area, the margin being greater for larger solar fractions. For 30 m2 of collector, solar fractions of 49% and 67% were achieved for the Stockholm and Madrid base cases, respectively. The average annual efficiency of the collector in Stockholm (12%) was much lower than that in Madrid (19%). The thermal COP was simulated to be approximately 0.70, but it has not been possible to verify this with measured data. The annual electrical COP was shown to be very dependent on the cooling load, as a large proportion of the electricity use is for components that are permanently on. For the cooling loads studied, the annual electrical COP ranged from 2.2 for a 2000 kWh cooling load to 18.0 for a 21000 kWh cooling load.
There is, however, potential to reduce the electricity consumption of the machine, which would improve these figures significantly. It was shown that a cooling tower is necessary for the Madrid climate, whereas a dry cooler is sufficient for Stockholm, although a cooling tower does improve performance there as well. The simulation study was only a first pass and has identified a number of areas that are important to study in more depth. One such area is advanced control strategy, which is necessary to mitigate the weakness of the technology (low temperature lift for cooling) and to make optimal use of its strength (storage).
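The strong load dependence of the annual electrical COP follows directly from a fixed standby electricity draw. A minimal sketch, with a standby consumption and a load-proportional term assumed only so as to roughly reproduce the reported 2.2 and 18.0:

```python
# Illustrative model: electricity use = fixed standby draw + load-dependent part,
# so the annual electrical COP rises with delivered cooling. Numbers assumed.
standby_kwh = 800.0                      # assumed always-on electricity per year
variable_kwh_per_kwh_cool = 0.017        # assumed load-proportional electricity

for cooling_kwh in (2_000.0, 21_000.0):
    electricity = standby_kwh + variable_kwh_per_kwh_cool * cooling_kwh
    print(f"load {cooling_kwh:>7.0f} kWh -> electrical COP {cooling_kwh / electricity:.1f}")
```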
Abstract:
Climate change has resulted in substantial variations in annual extreme rainfall quantiles across durations and return periods. Predicting future changes in extreme rainfall quantiles is essential for water resources design, assessment, and decision-making purposes. Current predictions of future rainfall extremes, however, exhibit large uncertainties. According to extreme value theory, rainfall extremes are random variables whose distributions change across return periods; there are therefore uncertainties even under current climate conditions. Regarding future conditions, our large-scale knowledge is obtained from global climate models forced with certain emission scenarios. There are widely known deficiencies in climate models, particularly with respect to precipitation projections. There is also recognition of the limitations of emission scenarios in representing future global change. Apart from these large-scale uncertainties, the downscaling methods add further uncertainty to estimates of future extreme rainfall when they convert the larger-scale projections to the local scale. The aim of this research is to address these uncertainties in future projections of extreme rainfall for different durations and return periods. We combined three emission scenarios with two global climate models and used LARS-WG, a well-known weather generator, to stochastically downscale the daily climate model projections for the city of Saskatoon, Canada, to 2100. The downscaled projections were further disaggregated to hourly resolution using our new stochastic and non-parametric rainfall disaggregator. Extreme rainfall quantiles can then be identified for different durations (1-hour, 2-hour, 4-hour, 6-hour, 12-hour, 18-hour, and 24-hour) and return periods (2-year, 10-year, 25-year, 50-year, 100-year) using the Generalized Extreme Value (GEV) distribution. By providing multiple realizations of future rainfall, we attempt to measure the extent of the total predictive uncertainty contributed by climate models, emission scenarios, and the downscaling/disaggregation procedures. The results show different proportions for these contributors across durations and return periods.
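The quantile-estimation step can be sketched with scipy's GEV implementation; the annual-maximum series below is synthetic, standing in for one duration of disaggregated rainfall.

```python
# Sketch: fit a GEV distribution to annual maxima and read off return-period
# quantiles. Data and parameters are placeholders, not the study's results.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
annual_max_mm = genextreme.rvs(c=-0.1, loc=25, scale=8, size=50, random_state=rng)

shape, loc, scale = genextreme.fit(annual_max_mm)
for T in (2, 10, 25, 50, 100):                       # return periods in years
    q = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T:>3}-year quantile: {q:.1f} mm")
```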
Abstract:
Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, managing risk, and studying monetary policy implications. In turn, dynamic term structure models, equipped with stronger economic structure, have mainly been adopted to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of these two classes of models to test whether no-arbitrage affects forecasting. We construct cross-sectional (allowing arbitrage) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve overall smaller biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding return premia indicates that the superior performance of the no-arbitrage versions is due to better identification of the bond risk premium.
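The evaluation metrics mentioned above, bias and root mean square error per maturity and horizon, amount to the following computation; the yields are illustrative, not the paper's data.

```python
# Sketch of forecast evaluation: bias and RMSE of yield forecasts (in
# percentage points) for one maturity/horizon pair, on made-up numbers.
import numpy as np

actual = np.array([4.10, 4.25, 4.40, 4.60])      # realized yields (%)
forecast = np.array([4.05, 4.30, 4.35, 4.70])    # model forecasts (%)

errors = forecast - actual
bias = errors.mean()
rmse = np.sqrt((errors ** 2).mean())
print(f"bias={bias:.3f} pp, RMSE={rmse:.3f} pp")
```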
Abstract:
This paper is a theoretical and empirical study of the relationship between indexing policy and feedback mechanisms in the inflationary adjustment process in Brazil. The focus of our study is on two policy issues: (1) did the Brazilian system of indexing of interest rates, the exchange rate, and wages make inflation so dependent on its own past values that it created a significant feedback process and inertia in the behaviour of inflation? and (2) was the feedback effect of past inflation upon itself so strong that it dominated the effect of monetary/fiscal variables upon current inflation? This paper develops a simple model designed to capture several "stylized facts" of Brazilian indexing policy. Separate rules of "backward indexing" for interest rates, the exchange rate, and wages, reflecting the evolution of policy changes in Brazil, are incorporated in a two-sector model of industrial and agricultural prices. A transfer function derived from this model shows inflation depending on three factors: (1) past values of inflation, (2) monetary and fiscal variables, and (3) supply-shock variables. The indexing rules for interest rates, the exchange rate, and wages place restrictions on the coefficients of the transfer function. Variations in the policy-determined parameters of the indexing rules imply changes in the coefficients of the transfer function for inflation. One implication of this model, in contrast to previous results derived in analytically simpler models of indexing, is that a higher degree of indexing does not make current inflation more responsive to current monetary shocks. The empirical section of this paper studies the central hypotheses of this model through estimation of the inflation transfer function with time-varying parameters. The results show a systematic, non-random variation of the transfer function coefficients closely synchronized with changes in the observed values of the wage-indexing parameters. Non-parametric tests show the variation of the transfer function coefficients to be statistically significant at the times of the changes in wage indexing rules in Brazil. As the degree of indexing increased, the inflation feedback coefficients increased, while the effect of external price and agricultural shocks progressively increased and monetary effects progressively decreased.
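A hedged sketch of estimating a transfer function with time-varying coefficients: rolling least squares of inflation on its own lag, a monetary variable, and a supply-shock variable, on simulated series. The specification is a deliberate simplification of the paper's model.

```python
# Sketch: rolling-window OLS tracks drift in the inflation feedback
# coefficient over time. All series are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 200
money, supply = rng.normal(size=n), rng.normal(size=n)
infl = np.zeros(n)
for t in range(1, n):
    infl[t] = 0.6 * infl[t - 1] + 0.3 * money[t] + 0.2 * supply[t] + rng.normal(scale=0.1)

window = 60
for start in range(0, n - window, 40):
    s = slice(start + 1, start + window)
    X = np.column_stack([infl[start:start + window - 1],      # lagged inflation
                         money[s], supply[s], np.ones(window - 1)])
    beta, *_ = np.linalg.lstsq(X, infl[s], rcond=None)
    print(f"t={start + 1:3d}: feedback={beta[0]:.2f}, money={beta[1]:.2f}, supply={beta[2]:.2f}")
```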
Abstract:
This paper presents semiparametric estimators of changes in inequality measures of a dependent variable's distribution, taking into account possible changes in the distributions of covariates. When we do not impose parametric assumptions on the conditional distribution of the dependent variable given covariates, this problem becomes equivalent to estimating the distributional impacts of interventions (treatment) when selection into the program is based on observable characteristics. The distributional impacts of a treatment are calculated as differences in inequality measures of the potential outcomes of receiving and not receiving the treatment. These differences are called here Inequality Treatment Effects (ITE). The estimation procedure involves a first non-parametric step, in which the probability of receiving treatment given covariates, the propensity score, is estimated. In the second step, using the inverse probability weighting method to estimate parameters of the marginal distributions of the potential outcomes, weighted sample versions of the inequality measures are computed. Root-N consistency, asymptotic normality, and semiparametric efficiency are shown for the proposed semiparametric estimators. A Monte Carlo exercise is performed to investigate the finite-sample behavior of the estimators derived in the paper. We also apply our method to the evaluation of a job training program.
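The two-step procedure can be sketched as follows: a flexible first-stage propensity-score fit, then inverse-probability-weighted inequality measures of the two potential-outcome distributions. The simulated data, the logistic first stage, and the Gini index as the inequality measure are illustrative choices, not the paper's exact setup.

```python
# Sketch of an Inequality Treatment Effect: IPW-weighted Gini of the treated
# potential outcome minus that of the untreated one.
import numpy as np
from sklearn.linear_model import LogisticRegression

def weighted_gini(y, w):
    """Gini index from the weighted Lorenz curve."""
    order = np.argsort(y)
    y, w = y[order], w[order]
    cw, cy = np.cumsum(w), np.cumsum(w * y)
    lorenz = cy / cy[-1]
    prev = np.concatenate(([0.0], lorenz[:-1]))
    return 1.0 - np.sum(w * (lorenz + prev)) / cw[-1]

rng = np.random.default_rng(2)
n = 5_000
x = rng.normal(size=(n, 2))
p = 1.0 / (1.0 + np.exp(-(0.5 * x[:, 0] - 0.3 * x[:, 1])))   # true propensity
t = rng.binomial(1, p)
# Earnings with a treatment effect that is heterogeneous in x, so inequality differs.
y = np.exp(1.0 + t * (0.4 + 0.3 * x[:, 0]) + 0.5 * x[:, 0] + rng.normal(scale=0.5, size=n))

e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]     # step 1: propensity score
w1, w0 = t / e, (1 - t) / (1 - e)                             # IPW weights
ite = weighted_gini(y, w1) - weighted_gini(y, w0)             # step 2
print(f"Inequality (Gini) treatment effect: {ite:.3f}")
```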
Abstract:
In this thesis, we investigate some aspects of the interplay between economic regulation and the risk of the regulated firm. In the first chapter, the main goal is to understand the implications a mainstream regulatory model (Laffont and Tirole, 1993) has for the systematic risk of the firm. We generalize the model in order to incorporate aggregate risk, and find that the optimal regulatory contract must be severely constrained in order to reproduce real-world systematic risk levels. We also consider the optimal profit-sharing mechanism, with an endogenous sharing rate, to explore the relationship between contract power and beta. We find results compatible with the available evidence that high-powered regimes impose more risk on the firm. In the second chapter, a joint work with Daniel Lima from the University of California, San Diego (UCSD), we start from the observation that regulated firms are subject to regulatory practices that potentially affect the symmetry of the distribution of their future profits. If these practices are anticipated by investors in the stock market, the pattern of asymmetry in the empirical distribution of stock returns may differ between regulated and non-regulated companies. We review some recently proposed asymmetry measures that are robust to the empirical regularities of return data and use them to investigate whether there are meaningful differences in the distribution of asymmetry between these two groups of companies. In the third and last chapter, three different approaches to the capital asset pricing model of Kraus and Litzenberger (1976) are tested with recent Brazilian data and estimated using the generalized method of moments (GMM) as a unifying procedure. We find that ex-post stock returns generally exhibit statistically significant coskewness with the market portfolio, and hence are sensitive to squared market returns. However, while the theoretical ground for the preference for skewness is well established and fairly intuitive, we did not find supporting evidence that investors require a premium for bearing this risk factor in Brazil.
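A simple regression version of the coskewness test from the third chapter (the thesis uses GMM; plain least squares is used here for brevity), on simulated returns:

```python
# Sketch: excess stock returns loading on the market return and on the squared
# market return (the coskewness term of Kraus and Litzenberger).
import numpy as np

rng = np.random.default_rng(3)
rm = rng.normal(0.01, 0.05, size=600)                         # market excess return
ri = 0.002 + 1.1 * rm + 0.8 * (rm - rm.mean()) ** 2 + rng.normal(0, 0.02, 600)

X = np.column_stack([np.ones_like(rm), rm, (rm - rm.mean()) ** 2])
alpha, beta, gamma = np.linalg.lstsq(X, ri, rcond=None)[0]
print(f"beta={beta:.2f}, coskewness loading gamma={gamma:.2f}")
```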
Abstract:
With ever-increasing demands for high-complexity consumer electronic products, market pressures demand faster product development at lower cost. SoC-based design can provide the required design flexibility and speed by allowing the use of IP cores. However, testing costs in the SoC environment can reach a substantial percentage of the total production cost. Analog testing costs may dominate the total test cost, as testing of analog circuits usually requires functional verification of the circuit and special testing procedures. For the RF analog circuits commonly used in wireless applications, testing is further complicated by the high frequencies involved. In summary, reducing analog test cost is of major importance in the electronics industry today. BIST techniques for analog circuits, though potentially able to solve the analog test cost problem, have some limitations. Some techniques are circuit dependent, requiring reconfiguration of the circuit being tested, and are generally not usable in RF circuits. In the SoC environment, processing and memory resources are available and could be used in the test. However, the overhead of adding additional AD and DA converters may be too costly for most systems, and analog routing of signals may not be feasible and may introduce signal distortion. In this work a simple and low-cost digitizer is used instead of an ADC in order to enable analog testing strategies to be implemented in a SoC environment. Thanks to the low analog area overhead of the converter, multiple analog test points can be observed and specific analog test strategies can be enabled. As the digitizer is always connected to the analog test point, it is not necessary to include muxes and switches that would degrade the signal path. For RF analog circuits this is especially useful, as the circuit impedance is fixed and the influence of the digitizer can be accounted for in the design phase. Thanks to the simplicity of the converter, it is able to reach higher frequencies and enables the implementation of low-cost RF test strategies. The digitizer has been applied successfully in the testing of both low-frequency and RF analog circuits. Also, as testing is based on frequency-domain characteristics, nonlinear characteristics like intermodulation products can also be evaluated. Specifically, practical results were obtained for prototyped baseband filters and a 100 MHz mixer. The application of the converter to noise figure evaluation was also addressed, and experimental results for low-frequency amplifiers using conventional opamps were obtained. The proposed method is able to enhance the testability of current mixed-signal designs, being suitable for the SoC environment used in many industrial products today.
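The frequency-domain evaluation mentioned above can be sketched offline: a two-tone stimulus passed through a toy cubic nonlinearity stands in for the device under test, and the digitized record is Fourier-transformed to read off the fundamentals and third-order intermodulation products. All signal parameters are assumptions.

```python
# Sketch: two-tone test with FFT readout of fundamentals and IM3 products.
import numpy as np

fs, n = 1_024_000.0, 4096                     # sample rate (Hz), record length
t = np.arange(n) / fs
f1, f2 = 90_000.0, 110_000.0                  # two-tone stimulus (on FFT bins)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.01 * x ** 3                         # weakly nonlinear device model

spec = np.abs(np.fft.rfft(y * np.hanning(n))) / n
freqs = np.fft.rfftfreq(n, 1.0 / fs)
for f in (f1, f2, 2 * f1 - f2, 2 * f2 - f1):  # fundamentals and IM3 products
    k = np.argmin(np.abs(freqs - f))
    print(f"{f / 1e3:6.0f} kHz: {20 * np.log10(spec[k] + 1e-12):6.1f} dB")
```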
Abstract:
Building risk-neutral densities (RND) from options data can provide market-implied expectations about the future behavior of a financial variable. Market expectations about financial variables may influence macroeconomic policy decisions, and can also be useful for corporate and financial institutions' decision making. This paper uses the Liu et al. (2007) approach to estimate the option-implied risk-neutral density of the Brazilian Real/US Dollar exchange rate. We then compare the RND with actual exchange rates, on a monthly basis, in order to estimate the relative risk aversion of investors and also obtain a real-world density for the exchange rate. We are the first to calculate relative risk aversion and the option-implied real-world density for an emerging market currency. Our empirical application uses a sample of Brazilian Real/US Dollar options traded at BM&F-Bovespa from 1999 to 2011. The RND is estimated using a mixture of two lognormal distributions, and the real-world density is then obtained by means of the Liu et al. (2007) parametric risk transformations. The relative risk aversion is calculated for the full sample. Our estimated value of the relative risk aversion parameter is around 2.7, which is in line with other articles that have estimated this parameter for the Brazilian economy, such as Araújo (2005) and Issler and Piqueira (2000). Our out-of-sample evaluation results show that the RND has some ability to forecast the Brazilian Real exchange rate; Abe et al. (2007) also found mixed results in their out-of-sample analysis of the RND's forecasting ability for exchange rate options. However, when we incorporate risk aversion into the RND in order to obtain a real-world density, the out-of-sample performance improves substantially, with satisfactory results in both the Kolmogorov and Berkowitz tests. Therefore, we suggest not using the "pure" RND, but rather taking risk aversion into account when forecasting the Brazilian Real exchange rate.
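The two building blocks, a two-lognormal mixture RND and a power-utility risk transformation, can be sketched as follows. The mixture parameters and grid are illustrative, not the paper's estimates; only the risk-aversion value 2.7 is taken from the text, and the power-utility transformation is the standard one, used here as a stand-in for the Liu et al. (2007) procedure.

```python
# Sketch: two-lognormal mixture RND, then tilt by s**gamma (power utility)
# to obtain a real-world density.
import numpy as np
from scipy.stats import lognorm

def mixture_rnd(s, w, mu1, sig1, mu2, sig2):
    """Two-lognormal mixture risk-neutral density on grid s."""
    f1 = lognorm.pdf(s, sig1, scale=np.exp(mu1))
    f2 = lognorm.pdf(s, sig2, scale=np.exp(mu2))
    return w * f1 + (1.0 - w) * f2

s = np.linspace(1.5, 3.0, 601)                    # BRL/USD grid (illustrative)
ds = s[1] - s[0]
q = mixture_rnd(s, w=0.7, mu1=np.log(2.0), sig1=0.08, mu2=np.log(2.2), sig2=0.15)

gamma = 2.7                                       # relative risk aversion (from the text)
p = q * s ** gamma                                # power-utility risk transformation
p /= (p * ds).sum()                               # renormalise to a density
print(f"RND mean: {(s * q * ds).sum():.3f}, real-world mean: {(s * p * ds).sum():.3f}")
```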
Abstract:
We study semiparametric two-step estimators which have the same structure as parametric doubly robust estimators in their second step. The key difference is that we do not impose any parametric restriction on the nuisance functions that are estimated in a first stage, but retain a fully nonparametric model instead. We call these estimators semiparametric doubly robust estimators (SDREs), and show that they possess superior theoretical and practical properties compared to generic semiparametric two-step estimators. In particular, our estimators have substantially smaller first-order bias, allow for a wider range of nonparametric first-stage estimates, rate-optimal choices of smoothing parameters and data-driven estimates thereof, and their stochastic behavior can be well-approximated by classical first-order asymptotics. SDREs exist for a wide range of parameters of interest, particularly in semiparametric missing data and causal inference models. We illustrate our method with a simulation exercise.
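To fix ideas, here is a compact sketch of a doubly robust (AIPW) second step built on nonparametric first-stage estimates, in the spirit of the SDREs described above; the k-nearest-neighbour fits and the simulated design are stand-ins for the paper's estimators.

```python
# Sketch: nonparametric nuisance estimates feed the usual doubly robust score
# for the average treatment effect.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

rng = np.random.default_rng(4)
n = 4_000
x = rng.uniform(-1, 1, size=(n, 1))
e_true = 1 / (1 + np.exp(-x[:, 0]))              # true propensity
t = rng.binomial(1, e_true)
y = x[:, 0] + t * (1 + 0.5 * x[:, 0]) + rng.normal(0, 0.3, n)   # true ATE = 1.0

# First stage: fully nonparametric nuisance estimates.
ehat = KNeighborsClassifier(n_neighbors=200).fit(x, t).predict_proba(x)[:, 1]
m1 = KNeighborsRegressor(n_neighbors=100).fit(x[t == 1], y[t == 1]).predict(x)
m0 = KNeighborsRegressor(n_neighbors=100).fit(x[t == 0], y[t == 0]).predict(x)

# Second stage: doubly robust (AIPW) score.
psi = m1 - m0 + t * (y - m1) / ehat - (1 - t) * (y - m0) / (1 - ehat)
print(f"AIPW ATE estimate: {psi.mean():.3f} (truth = 1.0)")
```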
Abstract:
This paper performs a thorough statistical examination of the time-series properties of the daily market volatility index (VIX) from the Chicago Board Options Exchange (CBOE). The motivation lies not only in the widespread consensus that the VIX is a barometer of overall market sentiment with respect to investors' risk appetite, but also in the fact that many trading strategies rely on the VIX index for hedging and speculative purposes. Preliminary analysis suggests that the VIX index displays long-range dependence. This is well in line with the strong empirical evidence in the literature supporting long memory in both options-implied and realized variances. We thus resort to both parametric and semiparametric heterogeneous autoregressive (HAR) processes for modeling and forecasting purposes. Our main findings are as follows. First, we confirm the evidence in the literature that there is a negative relationship between the VIX index and the S&P 500 index return, as well as a positive contemporaneous link with the volume of the S&P 500 index. Second, the term spread has a slightly negative long-run impact on the VIX index once possible multicollinearity and endogeneity are controlled for. Finally, we cannot reject the linearity of the above relationships, either in sample or out of sample. As for the latter, we actually show that it is quite hard to beat the pure HAR process because of the very persistent nature of the VIX index.
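A minimal HAR regression of the kind relied on above: the current value regressed on its previous day and on weekly (5-day) and monthly (22-day) backward averages. The series here is a simulated stand-in for the VIX.

```python
# Sketch of the heterogeneous autoregressive (HAR) regression.
import numpy as np

rng = np.random.default_rng(5)
v = 20 + np.cumsum(rng.normal(0, 0.3, 1_000))            # persistent toy "VIX"

def back_mean(series, t, k):
    """Average of the k observations before time t."""
    return series[t - k:t].mean()

rows, target = [], []
for t in range(22, len(v)):
    rows.append([1.0, v[t - 1], back_mean(v, t, 5), back_mean(v, t, 22)])
    target.append(v[t])
beta = np.linalg.lstsq(np.array(rows), np.array(target), rcond=None)[0]
print("HAR coefficients (const, daily, weekly, monthly):", np.round(beta, 3))
```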
Abstract:
In this paper, we propose a class of ACD-type models that accommodates overdispersion, intermittent dynamics, multiple regimes, and sign and size asymmetries in financial durations. In particular, our functional coefficient autoregressive conditional duration (FC-ACD) model relies on a smooth-transition autoregressive specification. The motivation lies in the fact that the latter yields a universal approximation if one lets the number of regimes grow without bound. After establishing that the sufficient conditions for strict stationarity do not exclude explosive regimes, we address model identifiability as well as the existence, consistency, and asymptotic normality of the quasi-maximum likelihood (QML) estimator for the FC-ACD model with a fixed number of regimes. In addition, we discuss how to consistently estimate, using a sieve approach, a semiparametric variant of the FC-ACD model that takes the number of regimes to infinity. An empirical illustration indicates that our functional coefficient model is flexible enough to model IBM price durations.
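A sketch of a two-regime smooth-transition duration recursion in the spirit of the FC-ACD model: the ACD coefficients are blended through a logistic function of the lagged duration. Parameter values are illustrative, not estimates from the paper.

```python
# Sketch: simulate durations whose conditional mean follows a two-regime
# smooth-transition ACD recursion with i.i.d. exponential errors.
import numpy as np

def logistic(z, gamma=5.0, c=1.0):
    """Smooth-transition weight between the two regimes."""
    return 1.0 / (1.0 + np.exp(-gamma * (z - c)))

def fc_acd_path(n, omega=(0.1, 0.2), alpha=(0.05, 0.15), beta=(0.9, 0.7), seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n)                    # durations
    psi = np.empty(n)                  # conditional expected durations
    psi[0], x[0] = 1.0, rng.exponential(1.0)
    for t in range(1, n):
        g = logistic(x[t - 1])                          # regime weight
        w = (1 - g) * omega[0] + g * omega[1]
        a = (1 - g) * alpha[0] + g * alpha[1]
        b = (1 - g) * beta[0] + g * beta[1]
        psi[t] = w + a * x[t - 1] + b * psi[t - 1]
        x[t] = psi[t] * rng.exponential(1.0)
    return x, psi

durations, _ = fc_acd_path(1_000)
print(f"mean duration: {durations.mean():.2f}")
```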
Abstract:
This paper deals with the estimation and testing of conditional duration models by looking at the density and baseline hazard rate functions. More precisely, we focus on the distance between the parametric density (or hazard rate) function implied by the duration process and its non-parametric estimate. Asymptotic justification is derived using the functional delta method for fixed and gamma kernels, whereas finite sample properties are investigated through Monte Carlo simulations. Finally, we show the practical usefulness of such testing procedures by carrying out an empirical assessment of whether autoregressive conditional duration models are appropriate tools for modelling the price durations of stocks traded at the New York Stock Exchange.
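The testing idea can be sketched as a distance between a fitted parametric density and a kernel estimate; a Gaussian kernel stands in for the paper's fixed and gamma kernels, and the Weibull data are simulated so that the exponential fit is visibly misspecified.

```python
# Sketch: sup-norm distance between the parametric density implied by a
# fitted duration model and a nonparametric kernel density estimate.
import numpy as np
from scipy.stats import expon, gaussian_kde

rng = np.random.default_rng(6)
durations = rng.weibull(1.3, size=2_000)       # "true" process is not exponential

loc, scale = expon.fit(durations, floc=0.0)    # parametric (mis)specification
kde = gaussian_kde(durations)                  # nonparametric estimate

grid = np.linspace(0.05, durations.max(), 400)
dist = np.max(np.abs(expon.pdf(grid, loc, scale) - kde(grid)))
print(f"sup-norm distance between densities: {dist:.3f}")
```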