33 results for encoding of measurement streams


Relevance:

100.00%

Publisher:

Abstract:

We did a subject-level meta-analysis of the changes (Δ) in blood pressure (BP) observed 3 and 6 months after renal denervation (RDN) at 10 European centers. Recruited patients (n=109; 46.8% women; mean age 58.2 years) had essential hypertension confirmed by ambulatory BP. From baseline to 6 months, treatment score declined slightly from 4.7 to 4.4 drugs per day. Systolic/diastolic BP fell by 17.6/7.1 mm Hg for office BP, and by 5.9/3.5, 6.2/3.4, and 4.4/2.5 mm Hg for 24-h, daytime and nighttime BP (P≤0.03 for all). In 47 patients with 3- and 6-month ambulatory measurements, systolic BP did not change between these two time points (P≥0.08). Normalization was a systolic BP of <140 mm Hg on office measurement or <130 mm Hg on 24-h monitoring, and improvement was a fall of ≥10 mm Hg, irrespective of measurement technique. For office BP, at 6 months, normalization, improvement or no decrease occurred in 22.9, 59.6 and 22.9% of patients, respectively; for 24-h BP, these proportions were 14.7, 31.2 and 34.9%, respectively. Higher baseline BP predicted a greater BP fall at follow-up; higher baseline serum creatinine was associated with a lower probability of improvement of 24-h BP (odds ratio for a 20-μmol/l increase, 0.60; P=0.05) and a higher probability of experiencing no BP decrease (OR, 1.66; P=0.01). In conclusion, BP responses to RDN include regression to the mean and remain to be consolidated in randomized trials based on ambulatory BP monitoring. For now, RDN should remain the last resort in patients in whom all other ways to control BP have failed, and it must be used cautiously in patients with renal impairment.

Relevance:

100.00%

Publisher:

Abstract:

A review of nearly three decades of cross-cultural research shows that this domain still has to address several issues: biases in data collection and sampling methods, the lack of clear and consensual definitions of constructs and variables, and measurement-invariance problems that seriously limit the comparability of results across cultures. Indeed, a large majority of existing studies are still based on the anthropological model, which compares two cultures and mainly uses convenience samples of university students. This paper stresses the need to incorporate a larger variety of regions and cultures into research designs, the necessity of theorizing and identifying a larger set of variables to describe a human environment, and the importance of overcoming methodological weaknesses to improve the comparability of measurement results. Cross-cultural psychology is at the next crossroads in its development, and researchers can certainly make major contributions to this domain if they can address these weaknesses and challenges.

Relevance:

100.00%

Publisher:

Abstract:

Executive Summary

The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative-agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov–Smirnov test to determine whether two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones resulting from optimization with respect to only a single measure, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, that is, the sequence of expected shortfalls over a range of quantiles.
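The dominance checks described in this abstract can be sketched in a few lines of code. The following is a minimal illustration on simulated returns, not the thesis's actual procedure: the empirical CDF and the absolute (generalized) Lorenz curve are simple sample approximations, and the grid sizes, quantile range, and simulated return series are all assumptions made here for demonstration.

```python
import numpy as np
from scipy import stats

def first_order_dominates(x, y, grid_size=200):
    """Empirical FSD check: x dominates y if F_x(t) <= F_y(t) on a common grid."""
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return bool(np.all(Fx <= Fy))

def absolute_lorenz(x, quantiles):
    """Absolute Lorenz curve L(p): mean of the worst p-fraction of returns, times p
    (equivalently, a sequence of expected shortfalls over the quantile range)."""
    xs = np.sort(x)
    n = len(xs)
    return np.array([xs[: max(1, int(np.ceil(p * n)))].mean() * p for p in quantiles])

def second_order_dominates(x, y, quantiles=np.linspace(0.01, 1.0, 100)):
    """Empirical SSD check: x dominates y if its absolute Lorenz curve lies weakly above y's."""
    return bool(np.all(absolute_lorenz(x, quantiles) >= absolute_lorenz(y, quantiles)))

# Toy data: a uniformly shifted-up return series should dominate the original.
rng = np.random.default_rng(0)
base = rng.normal(0.005, 0.02, 1000)
better = base + 0.01
ks = stats.ks_2samp(base, better)  # first, are the two distributions different at all?
print(ks.pvalue < 0.05, first_order_dominates(better, base), second_order_dominates(better, base))
```

Note that FSD implies SSD, so the shifted series passes both checks; real portfolio return series would typically fail the FSD test while the Lorenz-curve comparison remains informative.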
Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a distribution of portfolio returns that second-order stochastically dominates the distributions obtained under virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member-country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results on the bounds of the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
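The structural-break finding above, i.e. a shift in both the level and the volatility of the integration spread around the Euro's introduction, can be sketched with a toy comparison. Everything here is hypothetical: the spread series is simulated, the break point is taken as known, and a Welch t-test plus a variance ratio stand in for whatever formal break tests the thesis actually employs.

```python
import numpy as np
from scipy import stats

def break_in_level_and_vol(spread, break_idx):
    """Toy structural-break check at a known break date: compare the mean (level)
    via Welch's t-test and the variance (volatility) via an F-ratio, pre vs. post."""
    pre, post = spread[:break_idx], spread[break_idx:]
    t_stat, t_p = stats.ttest_ind(pre, post, equal_var=False)  # level shift
    f_ratio = post.var(ddof=1) / pre.var(ddof=1)               # volatility shift
    return t_p, f_ratio

# Simulated spread series in which both level and volatility rise after the
# break, mimicking the counterintuitive post-Euro pattern described above.
rng = np.random.default_rng(1)
spread = np.concatenate([rng.normal(0.02, 0.005, 250),
                         rng.normal(0.03, 0.010, 250)])
t_p, f_ratio = break_in_level_and_vol(spread, 250)
print(t_p, f_ratio)  # a small p-value and an F-ratio above 1 flag both shifts
```

With an unknown break date one would instead scan candidate dates (a sup-Wald/Chow-type procedure), but the pre/post comparison conveys the idea.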