993 results for Coefficients
Abstract:
JPEG2000 is an image compression standard that applies the wavelet transform followed by uniform dead-zone quantization of the coefficients. Wavelet coefficients exhibit certain dependencies, both statistical and visual. The statistical dependencies are taken into account in the JPEG2000 scheme; the visual dependencies, however, are not. In this work, we aim to find a representation better adapted to the visual system than the one JPEG2000 provides directly. To find it, we use divisive normalization of the coefficients, a technique that has already proved effective both in the statistical and in the perceptual decorrelation of coefficients. Ideally, we would like to map the coefficients into a space of values in which a larger coefficient value implies a larger visual contribution, and use this space of values for coding. In practice, however, we want our coding system to be integrated into a standard. For this reason we use JPEG2000, an ITU standard that allows a choice of distortion measures in coding, and we use the distortion in the normalized-coefficient domain as the distortion measure for choosing which data are sent first.
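As a rough illustration of the divisive normalization mentioned above, the sketch below normalizes each coefficient of a wavelet subband by a local pool of neighbouring magnitudes. The pool size and the constants `sigma` and `gamma` are illustrative placeholders, not the parameters used in this work.

```python
import numpy as np

def divisive_normalization(coeffs, sigma=0.1, gamma=1.0, kernel_size=3):
    """Divisive normalization of a 2-D coefficient subband:
       r_ij = c_ij / (sigma + gamma * mean(|c_kl|) over a neighbourhood of (i, j)).
    Larger normalized values then indicate coefficients that stand out
    relative to their local context."""
    c = np.asarray(coeffs, dtype=float)
    pad = kernel_size // 2
    padded = np.pad(np.abs(c), pad, mode="edge")
    pool = np.empty_like(c)
    for i in range(c.shape[0]):
        for j in range(c.shape[1]):
            # local mean of magnitudes around coefficient (i, j)
            pool[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
    return c / (sigma + gamma * pool)
```

On a flat subband every coefficient gets the same pool, so the transform reduces to a global rescaling; the perceptual interest lies in how it attenuates coefficients surrounded by high local energy.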
Abstract:
In contrast to previous results combining all ages, we find positive effects of comparison income on happiness for the under 45s, and negative effects for those over 45. In the BHPS these coefficients are several times the magnitude of own income effects. In GSOEP they cancel to give no effect of comparison income on life satisfaction in the whole sample, when controlling for fixed effects and time-in-panel, and with flexible age-group dummies. The residual age-happiness relationship is hump-shaped in all three countries. Results are consistent with a simple life cycle model of relative income under uncertainty.
Abstract:
This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise due to the fact that macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models which have a large number of parameters to estimate relative to the number of observations. A wide range of approaches are surveyed which aim to overcome the resulting problems. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail. This involves the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.
Abstract:
We use factor augmented vector autoregressive models with time-varying coefficients to construct a financial conditions index. The time-variation in the parameters allows for the weights attached to each financial variable in the index to evolve over time. Furthermore, we develop methods for dynamic model averaging or selection which allow the financial variables entering into the FCI to change over time. We discuss why such extensions of the existing literature are important and show them to be so in an empirical application involving a wide range of financial variables.
Abstract:
Using survey expectations data and Markov-switching models, this paper evaluates the characteristics and evolution of investors' forecast errors about the yen/dollar exchange rate. Since our model is derived from the uncovered interest rate parity (UIRP) condition and our data cover a period of low interest rates, this study is also related to the forward premium puzzle and the currency carry trade strategy. We obtain the following results. First, with the same forecast horizon, exchange rate forecasts are homogeneous among different industry types, but within the same industry, exchange rate forecasts differ if the forecast time horizon is different. In particular, investors tend to undervalue the future exchange rate for long-term forecast horizons; however, in the short run they tend to overvalue the future exchange rate. Second, while forecast errors are found to be partly driven by interest rate spreads, evidence against the UIRP is provided regardless of the forecasting time horizon; the forward premium puzzle becomes more significant in shorter-term forecasting errors. Consistent with this finding, our coefficients on interest rate spreads provide indirect evidence of the yen carry trade over only a short-term forecast horizon. Furthermore, the carry trade seems to be active when there is a clear indication that the interest rate will be low in the future.
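For reference, the UIRP condition from which the model is derived can be written in its standard textbook form (notation assumed here, not taken from the paper):

```latex
% Uncovered interest rate parity: expected log-depreciation of the home
% currency equals the interest rate differential.
%   s_t   : log spot exchange rate
%   i_t   : home interest rate,  i_t^{*} : foreign interest rate
E_t\left[s_{t+1}\right] - s_t = i_t - i_t^{*}
```

Deviations of surveyed expectations from this relation are the forecast errors studied above; the forward premium puzzle is the recurring empirical finding that the regression coefficient of realized depreciation on the interest differential falls well below one, and is often negative.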
Abstract:
Purpose: Revolutionary endovascular treatments are on the verge of being available for the management of ascending aortic diseases. Morphometric measurements of the ascending aorta have already been performed with ECG-gated MDCT to support such therapeutic development. However, the reliability of these measurements remains unknown. The objective of this work was to compare the intraobserver and interobserver variability of CAD (computer-aided diagnosis) versus manual measurements in the ascending aorta. Methods and materials: Twenty-six consecutive patients referred for ECG-gated CT thoracic angiography (64-row CT scanner) were evaluated. Measurements of the maximum and minimum ascending aorta diameters at mid-distance between the brachiocephalic artery and the aortic valve were obtained automatically with a commercially available CAD and manually by two observers separately. Both observers repeated the measurements during a different session at least one month after the first measurements. Intraclass coefficients as well as the Bland and Altman method were used for comparison between measurements. A paired t-test was used to determine the significance of intraobserver and interobserver differences (alpha = 0.05). Results: There was a significant difference between CAD and manual measurements in the maximum diameter (p = 0.004) for the first observer, whereas the difference was significant for the minimum diameter between the second observer and the CAD (p < 0.001). Interobserver variability showed weak agreement when measurements were done manually. Intraobserver variability was lower with the CAD compared to the manual measurements (limits of variability: from -0.7 to 0.9 mm for the former and from -1.2 to 1.3 mm for the latter). Conclusion: In order to improve the reproducibility of measurements whenever needed, pre- and post-therapeutic management of the ascending aorta may benefit from follow-up done by a single observer with the help of CAD.
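The Bland and Altman method referred to above reduces, in essence, to the bias and 95% limits of agreement of the paired differences between two measurement methods. A minimal sketch (function name and the 1.96 normal-quantile factor are the usual convention, not taken from the paper):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman analysis for two sets of paired measurements:
    returns (bias, lower limit, upper limit), where the limits of
    agreement are bias +/- 1.96 * SD of the differences."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

The "limits of variability" quoted in the abstract (e.g. -0.7 to 0.9 mm) are exactly intervals of this kind.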
Abstract:
We analyse the role of time-variation in coefficients and other sources of uncertainty in exchange rate forecasting regressions. Our techniques incorporate the notion that the relevant set of predictors, and their corresponding weights, change over time. We find that predictive models which allow for sudden, rather than smooth, changes in coefficients significantly beat the random walk benchmark in an out-of-sample forecasting exercise. Using an innovative variance decomposition scheme, we identify uncertainty in coefficients' estimation and uncertainty about the precise degree of coefficients' variability as the main factors hindering the models' forecasting performance. The uncertainty regarding the choice of the predictor is small.
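A common minimal formalization of time-varying coefficients in forecasting regressions of this kind is a model whose coefficients follow a random walk, estimated with the Kalman filter. The sketch below is a generic illustration under that assumption, not the authors' specification; the state and measurement noise variances `q` and `r` are placeholders.

```python
import numpy as np

def tvp_filter(y, X, q=0.01, r=1.0):
    """Kalman filter for a regression with random-walk coefficients:
         y_t    = x_t' beta_t + eps_t,      eps_t ~ N(0, r)
         beta_t = beta_{t-1} + eta_t,       eta_t ~ N(0, q * I)
    Returns filtered coefficient paths and one-step-ahead forecasts."""
    T, k = X.shape
    beta = np.zeros(k)       # filtered coefficient estimate
    P = np.eye(k)            # coefficient estimation covariance
    betas = np.zeros((T, k))
    preds = np.zeros(T)
    for t in range(T):
        P = P + q * np.eye(k)        # predict: random-walk state drifts
        x = X[t]
        preds[t] = x @ beta          # one-step-ahead forecast of y_t
        S = x @ P @ x + r            # forecast error variance
        K = P @ x / S                # Kalman gain
        beta = beta + K * (y[t] - preds[t])
        P = P - np.outer(K, x @ P)
        betas[t] = beta
    return betas, preds
```

With `q` near zero the filter collapses to recursive least squares (smooth, slowly adapting coefficients); a large `q` lets coefficients move abruptly, the case the abstract finds most useful.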
Abstract:
OBJECTIVES: Advances in biopsychosocial science have underlined the importance of taking social history and the life course perspective into consideration in primary care. For both clinical and research purposes, this study aims to develop and validate a standardised instrument measuring both material and social deprivation at an individual level. METHODS: We identified relevant potential questions regarding deprivation using a systematic review, structured interviews, focus group interviews and a think-aloud approach. Item response theory analysis was then used to reduce the length of the 38-item questionnaire and derive the deprivation in primary care questionnaire (DiPCare-Q) index using data obtained from a random sample of 200 patients during their planned visits to an ambulatory general internal medicine clinic. Patients completed the questionnaire a second time over the phone 3 days later to enable us to assess reliability. Content validity of the DiPCare-Q was then assessed by 17 general practitioners. Psychometric properties and validity of the final instrument were investigated in a second set of patients. The DiPCare-Q was administered to a random sample of 1898 patients attending one of 47 different private primary care practices in western Switzerland, along with questions on subjective social status, education, source of income, welfare status and subjective poverty. RESULTS: Deprivation was defined along three distinct dimensions: material (eight items), social (five items) and health deprivation (three items). Item consistency was high in both the derivation (Kuder-Richardson Formula 20 (KR20) = 0.827) and the validation set (KR20 = 0.778). The DiPCare-Q index was reliable (intraclass correlation coefficient = 0.847) and correlated with subjective social status (r(s) = -0.539). CONCLUSION: The DiPCare-Q is a rapid, reliable and validated instrument that may prove useful for measuring both material and social deprivation in primary care.
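The KR20 values quoted above come from Kuder-Richardson Formula 20, an internal-consistency statistic for dichotomous (yes/no) items. A minimal sketch, assuming the population-variance convention for both the item and total-score variances (implementations differ on this detail):

```python
import numpy as np

def kr20(items):
    """Kuder-Richardson Formula 20 for a (respondents x items) 0/1 matrix:
         KR20 = (n / (n - 1)) * (1 - sum(p_i * q_i) / var(total score)),
    where p_i is the proportion answering item i positively, q_i = 1 - p_i."""
    X = np.asarray(items, dtype=float)
    n = X.shape[1]                       # number of items
    p = X.mean(axis=0)                   # proportion of 1s per item
    q = 1.0 - p
    total_var = X.sum(axis=1).var(ddof=0)  # population variance of total scores
    return (n / (n - 1)) * (1.0 - (p * q).sum() / total_var)
```

Perfectly consistent response patterns (every respondent answers all items the same way) yield KR20 = 1 under this convention; values around 0.8, as in the abstract, indicate high but not degenerate item consistency.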
Abstract:
It has been recently emphasized that, if individuals have heterogeneous dynamics, estimates of shock persistence based on aggregate data are significantly higher than those derived from their disaggregate counterparts. However, a careful examination of the implications of this statement for the various tools routinely employed to measure persistence is missing in the literature. This paper formally examines this issue. We consider a disaggregate linear model with heterogeneous dynamics and compare the values of several measures of persistence across aggregation levels. Interestingly, we show that the average persistence of aggregate shocks, as measured by the impulse response function (IRF) of the aggregate model or by the average of the individual IRFs, is identical at all horizons. This result remains true even in situations where the units are (short-memory) stationary but the aggregate process is long-memory or even nonstationary. In contrast, other popular persistence measures, such as the sum of the autoregressive coefficients or the largest autoregressive root, tend to be higher the higher the aggregation level. We argue, however, that this should be seen more as an undesirable property of these measures than as evidence of different average persistence across aggregation levels. The results are illustrated in an application using U.S. inflation data.
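The persistence measures contrasted above can be made concrete for an AR(p) process: the IRF follows the standard recursion psi_h = sum_i phi_i * psi_{h-i} (with psi_0 = 1), while the sum-of-coefficients measure is simply sum_i phi_i. A small Python sketch, purely illustrative of the definitions rather than of the paper's aggregation setup:

```python
import numpy as np

def ar_irf(phi, horizons):
    """Impulse response function of an AR(p) process y_t = sum_i phi_i y_{t-i} + e_t.
    Returns psi_0, ..., psi_{horizons} where psi_h is the response of y_{t+h}
    to a unit shock e_t, via psi_h = sum_{i=1..min(p,h)} phi_i * psi_{h-i}."""
    p = len(phi)
    psi = [1.0]
    for h in range(1, horizons + 1):
        psi.append(sum(phi[i] * psi[h - 1 - i] for i in range(min(p, h))))
    return np.array(psi)

# Two heterogeneous AR(1) units: the average of their IRFs at each horizon
# is well defined, while a single "sum of AR coefficients" for the aggregate
# need not equal the average of the individual sums.
rho = [0.9, 0.1]
avg_irf = np.mean([ar_irf([r], 10) for r in rho], axis=0)
```

For an AR(1) with coefficient rho, psi_h = rho**h, so the average IRF of the two units decays like (0.9**h + 0.1**h) / 2, which no single AR(1) coefficient reproduces at every horizon.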
Abstract:
Therapeutic drug monitoring (TDM) may contribute to optimizing the efficacy and safety of antifungal therapy because of the large variability in drug pharmacokinetics. Rapid, sensitive, and selective laboratory methods are needed for efficient TDM. Quantification of several antifungals in a single analytical run may best fulfill these requirements. We therefore developed a multiplex ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method requiring 100 μl of plasma for simultaneous quantification within 7 min of fluconazole, itraconazole, hydroxyitraconazole, posaconazole, voriconazole, voriconazole-N-oxide, caspofungin, and anidulafungin. Protein precipitation with acetonitrile was used in a single extraction procedure for eight analytes. After reverse-phase chromatographic separation, antifungals were quantified by electrospray ionization-triple-quadrupole mass spectrometry by selected reaction monitoring detection using the positive mode. Deuterated isotopic compounds of azole antifungals were used as internal standards. The method was validated based on FDA recommendations, including assessment of extraction yields, matrix effect variability (<9.2%), and analytical recovery (80.1 to 107%). The method is sensitive (lower limits of azole quantification, 0.01 to 0.1 μg/ml; those of echinocandin quantification, 0.06 to 0.1 μg/ml), accurate (intra- and interassay biases of -9.9 to +5% and -4.0 to +8.8%, respectively), and precise (intra- and interassay coefficients of variation of 1.2 to 11.1% and 1.2 to 8.9%, respectively) over clinical concentration ranges (upper limits of quantification, 5 to 50 μg/ml). Thus, we developed a simple, rapid, and robust multiplex UPLC-MS/MS assay for simultaneous quantification of plasma concentrations of six antifungals and two metabolites. 
This offers, by optimized and cost-effective lab resource utilization, an efficient tool for daily routine TDM aimed at maximizing the real-time efficacy and safety of different recommended single-drug antifungal regimens and combination salvage therapies, as well as a tool for clinical research.
Abstract:
Purpose: To evaluate the feasibility, determine the optimal b-value, and assess the utility of 3-T diffusion-weighted MR imaging (DWI) of the spine in differentiating benign from pathologic vertebral compression fractures. Methods and Materials: Twenty patients with 38 vertebral compression fractures (24 benign, 14 pathologic) and 20 controls (total: 23 men, 17 women, mean age 56.2 years) were included from December 2010 to May 2011 in this IRB-approved prospective study. MR imaging of the spine was performed on a 3-T unit with T1-w, fat-suppressed T2-w, gadolinium-enhanced fat-suppressed T1-w and zoomed-EPI (2D RF excitation pulse combined with reduced field-of-view single-shot echo-planar readout) diffusion-w (b-values: 0, 300, 500 and 700 s/mm^2) sequences. Two radiologists independently assessed zoomed-EPI image quality in random order using a 4-point scale: 1 = excellent to 4 = poor. They subsequently measured apparent diffusion coefficients (ADCs) in normal vertebral bodies and compression fractures, in consensus. Results: Lower b-values correlated with better image quality scores, with significant differences between b = 300 (mean ± SD = 2.6 ± 0.8), b = 500 (3.0 ± 0.7) and b = 700 (3.6 ± 0.6) (all p < 0.001). Mean ADCs of normal vertebral bodies (n = 162) were 0.23, 0.17 and 0.11 × 10^-3 mm^2/s with b = 300, 500 and 700 s/mm^2, respectively. In contrast, mean ADCs were 0.89, 0.70 and 0.59 × 10^-3 mm^2/s for benign vertebral compression fractures and 0.79, 0.66 and 0.51 × 10^-3 mm^2/s for pathologic fractures with b = 300, 500 and 700 s/mm^2, respectively. No significant difference was found between ADCs of benign and pathologic fractures. Conclusion: 3-T DWI of the spine is feasible and lower b-values (300 s/mm^2) are recommended. However, our preliminary results show no advantage of DWI in differentiating benign from pathologic vertebral compression fractures.
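The ADCs reported above follow from the standard mono-exponential diffusion model. As a sketch, assuming a single b-value measured against the b = 0 image (the study's multi-b acquisition would typically fit all b-values jointly):

```python
import numpy as np

def adc(s0, sb, b):
    """Apparent diffusion coefficient from the mono-exponential model
         S_b = S_0 * exp(-b * ADC)   =>   ADC = ln(S_0 / S_b) / b
    with b in s/mm^2 and ADC in mm^2/s."""
    return np.log(s0 / sb) / b
```

For example, a benign fracture with ADC around 0.8 × 10^-3 mm^2/s loses roughly half its signal between b = 0 and b = 700 s/mm^2, which is why higher b-values trade signal (and image quality) for diffusion contrast.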
Abstract:
As is known, the Kyoto Protocol proposes to reinforce national policies for emission reduction and, furthermore, to cooperate with other contracting parties. In this context, it is necessary to assess these emissions, both in general and specifically, by pollutant and/or by productive sector. The object of this paper is precisely to estimate the polluting emissions of industrial origin in Catalonia in the year 2001, in a multivariate context which explicitly allows a distinction to be made between the pollutant and/or the productive sector causing the emission. Six pollutants were considered, four of them directly related to the greenhouse effect. A multi-level model with two levels, pollutants and productive sectors, was specified. Both technological progress and the elasticity of capital were introduced as random effects; hence, these coefficients were allowed to vary at either level. The most important finding of this paper is that the elasticity of capital is estimated to be very inelastic, with a range varying between 0.162 (the paper industry) and 0.556 (commerce). In fact, generally speaking, the greater the sector's capital, the lower its estimated elasticity of capital. Key words: Kyoto protocol, multilevel model, technological progress
Abstract:
We present a new a priori estimate for discrete coagulation-fragmentation systems with size-dependent diffusion within a bounded, regular domain confined by homogeneous Neumann boundary conditions. Obtained via a duality argument, this a priori estimate provides a global L2 bound on the mass density and was previously used, for instance, in the context of reaction-diffusion equations. In this paper we demonstrate two lines of application for such an estimate: on the one hand, it enables us to simplify parts of the known existence theory and to show existence of solutions for generalised models involving collision-induced, quadratic fragmentation terms, for which the previous existence theory seems difficult to apply. On the other hand, and most prominently, it proves mass conservation (and thus the absence of gelation) for almost all the coagulation coefficients for which mass conservation is known to hold true in the space-homogeneous case.
Abstract:
There are far-reaching conceptual similarities between bi-static surface georadar and post-stack, "zero-offset" seismic reflection data, which are expressed in largely identical processing flows. One important difference is, however, that standard deconvolution algorithms routinely used to enhance the vertical resolution of seismic data are notoriously problematic or even detrimental to the overall signal quality when applied to surface georadar data. We have explored various options for alleviating this problem and have tested them on a geologically well-constrained surface georadar dataset. Standard stochastic and direct deterministic deconvolution approaches proved to be largely unsatisfactory. While least-squares-type deterministic deconvolution showed some promise, the inherent uncertainties involved in estimating the source wavelet introduced some artificial "ringiness". In contrast, we found spectral balancing approaches to be effective, practical and robust means for enhancing the vertical resolution of surface georadar data, particularly, but not exclusively, in the uppermost part of the georadar section, which is notoriously plagued by the interference of the direct air- and groundwaves. For the data considered in this study, it can be argued that band-limited spectral blueing may provide somewhat better results than standard band-limited spectral whitening, particularly in the uppermost part of the section affected by the interference of the air- and groundwaves. Interestingly, this finding is consistent with the fact that the amplitude spectrum resulting from least-squares-type deterministic deconvolution is characterized by a systematic enhancement of higher frequencies at the expense of lower frequencies and hence is blue rather than white. It is also consistent with increasing evidence that spectral "blueness" is a seemingly universal, albeit enigmatic, property of the distribution of reflection coefficients in the Earth.
Our results therefore indicate that spectral balancing techniques in general and spectral blueing in particular represent simple, yet effective means of enhancing the vertical resolution of surface georadar data and, in many cases, could turn out to be a preferable alternative to standard deconvolution approaches.
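As a concrete illustration of band-limited spectral whitening, one of the spectral balancing approaches discussed above, the sketch below flattens the amplitude spectrum of a trace inside a pass band while preserving phase; the band limits and stabilization constant are illustrative. Spectral blueing would instead impose a target spectrum that rises with frequency rather than a flat one.

```python
import numpy as np

def spectral_whiten(trace, dt, fmin, fmax, eps=1e-8):
    """Band-limited spectral whitening of a 1-D trace sampled at interval dt:
    set the amplitude spectrum to unity inside [fmin, fmax] (Hz) while
    keeping the original phase, and zero it outside the band."""
    n = len(trace)
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(n, dt)
    band = (freqs >= fmin) & (freqs <= fmax)
    out = np.zeros_like(spec)
    # divide each in-band component by its own magnitude (eps avoids 0/0)
    out[band] = spec[band] / (np.abs(spec[band]) + eps)
    return np.fft.irfft(out, n)
```

Because only amplitudes are touched, reflection arrival times are preserved; the cost, as with any whitening, is that noise inside the pass band is boosted along with the signal.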
Abstract:
The McMillan map is a one-parameter family of integrable symplectic maps of the plane, for which the origin is a hyperbolic fixed point with a homoclinic loop, with small Lyapunov exponent when the parameter is small. We consider a perturbation of the McMillan map for which we show that the loop breaks into two invariant curves which are exponentially close to one another and which intersect transversely along two primary homoclinic orbits. We compute the asymptotic expansion of several quantities related to the splitting, namely the Lazutkin invariant and the area of the lobe between two consecutive primary homoclinic points. Complex matching techniques are at the core of this work. The coefficients involved in the expansion have a resurgent origin, as shown in [MSS08].