79 results for Estimator standard error and efficiency


Relevance: 100.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate the power of various parameters of the vestibulo-ocular reflex (VOR) in detecting unilateral peripheral vestibular dysfunction and in characterizing certain inner ear pathologies. STUDY DESIGN: Prospective study of consecutive ambulatory patients presenting with acute onset of peripheral vertigo and spontaneous nystagmus. SETTING: Tertiary referral center. PATIENTS: Seventy-four patients (40 females, 34 males) and 22 normal subjects (11 females, 11 males) were included in the study. Patients were classified into three main diagnoses: vestibular neuritis: 40; viral labyrinthitis: 22; Meniere's disease: 12. METHODS: The VOR function was evaluated by standard caloric and impulse rotary tests (velocity step). A mathematical model of vestibular function was used to characterize the VOR response to rotational stimulation. The diagnostic value of the different VOR parameters was assessed by uni- and multivariable logistic regression. RESULTS: In univariable analysis, caloric asymmetry emerged as the most powerful VOR parameter for identifying a unilateral vestibular deficit, with a cutoff set at 20%. In multivariable analysis, the combination of caloric asymmetry and rotational time constant asymmetry significantly improved the discriminatory power over caloric asymmetry alone (p<0.0001) and produced a detection score with a correct classification rate of 92.4%. In discriminating labyrinthine diseases, different combinations of the VOR parameters were obtained for each diagnosis (p<0.003), supporting the view that VOR characteristics differ between the three inner ear disorders. However, the clinical usefulness of these characteristics in separating the pathologies was limited. CONCLUSION: We propose a powerful logistic model combining the indices of caloric and time constant asymmetries to detect a peripheral vestibular loss, with an accuracy of 92.4%. Based on vestibular data only, discrimination between the different inner ear diseases is statistically possible, which supports different pathophysiologic changes in labyrinthine pathologies.
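
For illustration, a minimal sketch of how a two-predictor logistic detection score of this kind could be fit; the data values, variable names, and the 0.5 decision threshold below are invented for the example and are not the study's actual figures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented values: caloric asymmetry (%) and time constant asymmetry (%).
caloric = np.array([35.0, 8.0, 42.0, 5.0, 28.0, 12.0, 50.0, 15.0])
tc_asym = np.array([30.0, 5.0, 55.0, 10.0, 40.0, 8.0, 60.0, 12.0])
deficit = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = unilateral deficit

X = np.column_stack([caloric, tc_asym])
model = LogisticRegression().fit(X, deficit)

# Detection score = predicted probability of a peripheral deficit;
# classify as a deficit when the score exceeds 0.5.
score = model.predict_proba(X)[:, 1]
print(np.round(score, 2), (score > 0.5).astype(int))
```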

Relevance: 100.00%

Publisher:

Abstract:

BACKGROUND: We devised a randomised controlled trial to evaluate the effectiveness and efficiency of an intervention based on case management care for frequent emergency department users. The aim of the intervention is to reduce such patients' emergency department use, to improve their quality of life, and to reduce the costs consequent on frequent use. The intervention consists of a combination of comprehensive case management care and standard emergency care. It uses a clinical case management model that is patient-identified, patient-directed, and developed to provide high-intensity services. It provides a continuum of hospital- and community-based patient services, which include clinical assessment, outreach referral, and coordination and communication with other service providers. METHODS/DESIGN: We aim to recruit, during the first year of the study, 250 patients who visit the emergency department of the University Hospital of Lausanne, Switzerland. Eligible patients will have visited the emergency department 5 or more times during the previous 12 months. Randomisation of the participants to the intervention or control groups will be computer generated and concealed. The statistician and each patient will be blinded to the patient's allocation. Participants in the intervention group (N = 125) will receive, in addition to standard emergency care, case management from a team, 1 (ambulatory care) to 3 (hospitalization) times during their stay and after 1, 3, and 5 months, at their residence, in the hospital or in the ambulatory care setting. Between these consultations, the patients will be able to contact the case management team at any time. Participants in the control group (N = 125) will receive standard emergency care only. Data will be collected at baseline and 2, 5.5, 9, and 12 months later, including: number of emergency department visits, quality of life (EuroQOL and WHOQOL), health services use, and relevant costs. Data on feelings of discrimination and patient satisfaction will also be collected at baseline and 12 months later. DISCUSSION: Our study will help to clarify knowledge gaps regarding the positive outcomes (emergency department visits, quality of life, efficiency, and cost-utility) of an intervention based on case management care. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT01934322.
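
As an aside, computer-generated concealed allocation of the kind described is often implemented with permuted blocks; the sketch below assumes a block size of 4 and is purely illustrative, not the trial's actual procedure.

```python
import random

def permuted_block_allocation(n_participants, block_size=4, seed=2013):
    """Concealed, computer-generated 1:1 allocation in permuted blocks."""
    rng = random.Random(seed)  # the seed stays with the statistician
    allocation = []
    while len(allocation) < n_participants:
        block = ["intervention", "control"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

# 250 participants -> 125 per arm, balanced within every block of 4.
print(permuted_block_allocation(250)[:8])
```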

Relevance: 100.00%

Publisher:

Abstract:

1. Identifying the boundary of a species' niche from observational and environmental data is a common problem in ecology and conservation biology, and a variety of techniques have been developed or applied to model niches and predict distributions. Here, we examine the performance of some pattern-recognition methods as ecological niche models (ENMs). In particular, one-class pattern recognition is a flexible and seldom-used methodology for modelling ecological niches and distributions from presence-only data. The development of one-class methods that perform comparably to two-class methods (for presence/absence data) would remove modelling decisions about sampling pseudo-absences or background data points when absence points are unavailable. 2. We studied nine methods for one-class classification and seven methods for two-class classification (five common to both), all primarily used in pattern recognition and therefore not common in species distribution and ecological niche modelling, across a set of 106 mountain plant species for which presence-absence data were available. We assessed accuracy using standard metrics and compared trade-offs in omission and commission errors between classification groups, as well as the effects of prevalence and spatial autocorrelation on accuracy. 3. One-class models fit to presence-only data were comparable to two-class models fit to presence-absence data when performance was evaluated with a measure weighting omission and commission errors equally. One-class models were superior for reducing omission errors (i.e. yielding higher sensitivity), and two-class models were superior for reducing commission errors (i.e. yielding higher specificity). For these methods, spatial autocorrelation was only influential when prevalence was low. 4. These results differ from previous efforts to evaluate alternative modelling approaches to building ENMs and are particularly noteworthy because the data are from exhaustively sampled populations, minimizing false absence records. Accurate, transferable models of species' ecological niches and distributions are needed to advance ecological research and are crucial for effective environmental planning and conservation; the pattern-recognition approaches studied here show good potential for future modelling studies. This study also provides an introduction to promising methods for ecological modelling inherited from the pattern-recognition discipline.
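
To make the one-class versus two-class distinction concrete, here is a small synthetic sketch using generic pattern-recognition tools (a one-class SVM fit to presences only, against a two-class SVM fit to presences and absences); the data and settings are invented and do not reproduce the study's methods.

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(0)
presence = rng.normal(loc=[1.5, 1.5], scale=0.8, size=(200, 2))
absence  = rng.normal(loc=[-1.5, -1.5], scale=1.0, size=(200, 2))

one_class = OneClassSVM(nu=0.1).fit(presence)  # presence-only model
X = np.vstack([presence, absence])
y = np.r_[np.ones(200), np.zeros(200)]
two_class = SVC().fit(X, y)                    # presence/absence model

# Sensitivity = 1 - omission error; specificity = 1 - commission error.
sens = (one_class.predict(presence) == 1).mean()
spec = (one_class.predict(absence) == -1).mean()
print(f"one-class: sensitivity {sens:.2f}, specificity {spec:.2f}")
print(f"two-class: accuracy {two_class.score(X, y):.2f}")
```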

Relevance: 100.00%

Publisher:

Abstract:

Executive Summary The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of the measurement of financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility we use nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation carried out to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics than realized returns from portfolio strategies optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization with respect to only a single measure, for example the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, or the sequence of expected shortfalls for a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were led to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
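
A minimal sketch of the two distributional checks described above (the two-sample Kolmogorov-Smirnov test and the pointwise comparison of absolute Lorenz curves, i.e. cumulative expected shortfalls) on synthetic return series; all numbers are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
agg    = rng.normal(0.008, 0.04, 1000)  # returns, aggregated measures
single = rng.normal(0.004, 0.05, 1000)  # returns, a single measure

print(stats.ks_2samp(agg, single))      # are the distributions different?

def absolute_lorenz(x, p):
    """Integral of the empirical quantile function up to level p."""
    xs = np.sort(x)
    k = max(1, int(round(p * len(xs))))
    return xs[:k].sum() / len(xs)

levels = np.linspace(0.05, 1.0, 20)
# Empirical second-order dominance: the aggregated curve lies above the
# single-measure curve at every quantile level considered.
dominates = all(absolute_lorenz(agg, p) >= absolute_lorenz(single, p)
                for p in levels)
print("second-order dominance (empirical):", dominates)
```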

Relevance: 100.00%

Publisher:

Abstract:

BACKGROUND: After a peak in the late 1980s, cancer mortality in Europe declined by ∼10% in both sexes up to the early 2000s. We provide an up-to-date picture of patterns and trends in mortality from major cancers in Europe. METHODS: We analyzed cancer mortality data from the World Health Organization for 25 cancer sites and 34 European countries (plus the European Union, EU) in 2005-2009. We computed age-standardized rates (per 100 000 person-years) using the world standard population and provided an overview of trends since 1980 for major European countries, using joinpoint regression. RESULTS: Cancer mortality in the EU has steadily declined since the late 1980s, with reductions of 1.6% per year in 2002-2009 in men and 1% per year in 1993-2009 in women. In western Europe, rates steadily declined over the last two decades for stomach and colorectal cancer, Hodgkin lymphoma, and leukemias in both sexes, breast and (cervix) uterine cancer in women, and testicular cancer in men. In central/eastern Europe, mortality from major cancer sites increased up to the late 1990s/early 2000s. In most of Europe, rates have been increasing for lung cancer in women and for pancreatic cancer and soft tissue sarcomas in both sexes, while they have started to decline over recent years for multiple myeloma. In 2005-2009, there was still a more than twofold difference between the highest male cancer mortality, in Hungary (235.2/100 000), and the lowest, in Sweden (112.9/100 000), and a 1.7-fold difference in women (from 124.4/100 000 in Denmark to 71.0/100 000 in Spain). CONCLUSIONS: With the major exceptions of female lung cancer and pancreatic cancer in both sexes, over the last quinquennium cancer mortality has moderately but steadily declined across Europe. However, substantial differences across countries persist, requiring targeted interventions on risk factor control, early diagnosis, and improved management and pharmacological treatment for selected cancer sites.
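
For readers unfamiliar with direct standardization, the toy example below shows how an age-standardized rate per 100 000 person-years is computed from age-specific rates and standard-population weights; all counts are invented.

```python
import numpy as np

# Invented example with four broad age bands.
deaths      = np.array([10, 40, 160, 520])        # deaths per band
person_yrs  = np.array([5e5, 4e5, 3e5, 2e5])      # person-years at risk
std_weights = np.array([0.35, 0.30, 0.20, 0.15])  # standard shares (sum to 1)

age_specific = deaths / person_yrs                # rate per person-year
asr = (age_specific * std_weights).sum() * 1e5    # direct standardization
print(f"age-standardized rate: {asr:.1f} per 100 000 person-years")
```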

Relevance: 100.00%

Publisher:

Abstract:

The Organization of the Thesis The remainder of the thesis comprises five chapters and a conclusion. The next chapter formalizes the envisioned theory into a tractable model. Section 2.2 presents a formal description of the model economy: the individual heterogeneity, the individual objective, the UI setting, the population dynamics and the equilibrium. The welfare and efficiency criteria for qualifying various equilibrium outcomes are proposed in section 2.3. The fourth section shows how the model-generated information can be computed. Chapter 3 transposes the model from chapter 2 into conditions that enable its use in the analysis of individual labor market strategies and their implications for the labor market equilibrium. In section 3.2 the Swiss labor market data sets, stylized facts, and the UI system are presented. The third section outlines and motivates the parameterization method. In section 3.4 the model's replication ability is evaluated and some aspects of the parameter choice are discussed. Numerical solution issues can be found in the appendix. Chapter 4 examines the determinants of search-strategic behavior in the model economy and its implications for the labor market aggregates. In section 4.2, the unemployment duration distribution is examined and related to search strategies. Section 4.3 shows how search-strategic behavior is influenced by UI eligibility and section 4.4 how it is determined by individual heterogeneity. The composition effects generated by search strategies in labor market aggregates are examined in section 4.5. The last section evaluates the model's replication of the empirical unemployment escape frequencies reported in Sheldon [67]. Chapter 5 applies the model economy to examine the effects on the labor market equilibrium of shocks to the labor market risk structure, to the deep underlying labor market structure and to the UI setting. Section 5.2 examines the effects of the labor market risk structure on the labor market equilibrium and labor market strategic behavior. The effects of alterations in the deep structural economic parameters of the labor market, i.e. individual preferences and production technology, are shown in section 5.3. Finally, the impacts of the UI setting on the labor market are studied in section 5.4. This section also evaluates the role of UI authority monitoring and the differences in the way changes in the replacement rate and the UI benefit duration affect the labor market. In chapter 6 the model economy is applied in counterfactual experiments to assess several aspects of the Swiss labor market movements in the nineties. Section 6.2 examines the two equilibria characterizing the Swiss labor market in the nineties, the "growth" equilibrium with a "moderate" UI regime and the "recession" equilibrium with a more "generous" UI. Section 6.3 evaluates the isolated effects of the structural shocks, while the isolated effects of the UI reforms are analyzed in section 6.4. Particular dimensions of the UI reforms, the duration, replacement rate and tax rate effects, are studied in section 6.5, while labor market equilibria without benefits are evaluated in section 6.6. In section 6.7 the structural and institutional interactions that may act as unemployment amplifiers are discussed in view of the obtained results. A welfare analysis based on individual welfare in different structural and UI settings is presented in the eighth section. Finally, the results are related to the more favorable unemployment trends after 1997.
The conclusion evaluates the features embodied in the model economy with respect to the resulting model dynamics, to derive lessons from the model design. The thesis ends by proposing guidelines for future improvements of the model and directions for further research.

Relevance: 100.00%

Publisher:

Abstract:

Preface The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is non-observable. There are several estimation methodologies that deal with the estimation problems of latent variables. One appeared to be particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function estimator (ECF) based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each one is written as an independent and self-contained article. At the same time, the questions that are answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function for the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which has been chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to carry out that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter demonstrates that our estimator indeed has the ability to do so. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, owing to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can in some cases be reduced without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations on the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
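
A conceptual sketch of a continuous ECF-type estimator follows: parameters are chosen so the model characteristic function matches the empirical one in a weighted L2 sense. For brevity the "model" here is a one-dimensional normal law rather than a stochastic volatility jump-diffusion, and the weight function is an arbitrary choice, so this shows only the structure of such an estimator, not the thesis's actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.normal(0.05, 0.2, 5000)        # stand-in return sample
u = np.linspace(-10.0, 10.0, 201)      # grid approximating the integral
du = u[1] - u[0]

ecf = np.exp(1j * np.outer(u, x)).mean(axis=1)  # empirical CF on the grid
weight = np.exp(-u ** 2)                        # keeps the integral finite

def objective(theta):
    mu, log_sigma = theta
    model_cf = np.exp(1j * u * mu - 0.5 * (np.exp(log_sigma) * u) ** 2)
    return np.sum(np.abs(ecf - model_cf) ** 2 * weight) * du

est = minimize(objective, x0=[0.0, np.log(0.1)])
print(est.x[0], np.exp(est.x[1]))      # recovered mu, sigma
```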

Relevance: 100.00%

Publisher:

Abstract:

BACKGROUND: In 2004, complementary and alternative medicine (CAM) was offered by physicians in one-third of Swiss hospitals. Since then, CAM health policy has changed considerably. This study aimed to describe the present supply and use of CAM in hospitals in the French-speaking part of Switzerland, and to explore qualitatively the characteristics of this offer. METHODS: Between June 2011 and March 2012, a short questionnaire was sent to the medical directors of hospitals (n = 46), asking them whether CAM was offered, where and by whom. Then, a semi-structured interview was conducted with ten CAM therapists. RESULTS: Among 37 responses (return rate 80%), 19 medical directors indicated that their hospital offered at least one CAM and 18 reported that they did not. Acupuncture was the most frequently available CAM, followed by manual therapies, osteopathy and aromatherapy. The disciplines that offered CAM most frequently were rehabilitation, gynaecology and obstetrics, palliative care, psychiatry, and anaesthetics. In eight out of ten interviews, it appeared that the procedures for introducing a CAM in the hospital were not tightly supervised by the hospital and were based mainly on the goodwill of the therapists rather than on clinical/scientific evidence. CONCLUSION: The number of hospitals offering CAM in the French-speaking part of Switzerland seems to have risen since 2004. The selection of a CAM to be offered in a hospital should be based on the same procedure of evaluation and validation as for conventional therapy, and if the safety and efficiency of the CAM are evidence-based, it should receive the same resources as a conventional therapy.

Relevance: 100.00%

Publisher:

Abstract:

BACKGROUND: Human speech is greatly influenced by the speakers' affective state, such as sadness, happiness, grief, guilt, fear, anger, aggression, faintheartedness, shame, sexual arousal, love, amongst others. Attentive listeners discover a lot about the affective state of their dialog partners with no great effort, and without having to talk about it explicitly during a conversation or on the phone. On the other hand, speech dysfunctions, such as slow, delayed or monotonous speech, are prominent features of affective disorders. METHODS: This project comprised four studies with healthy volunteers from Bristol (English: n = 117), Lausanne (French: n = 128), Zurich (German: n = 208), and Valencia (Spanish: n = 124). All samples were stratified according to gender, age, and education. The specific study design, with different types of spoken text along with repeated assessments at 14-day intervals, allowed us to estimate the 'natural' variation of speech parameters over time, and to analyze the sensitivity of speech parameters with respect to form and content of spoken text. Additionally, our project included a longitudinal self-assessment study with university students from Zurich (n = 18) and unemployed adults from Valencia (n = 18) in order to test the feasibility of the speech analysis method in home environments. RESULTS: The normative data showed that speaking behavior and voice sound characteristics can be quantified in a reproducible and language-independent way. The high resolution of the method was verified by a computerized assignment of speech parameter patterns to languages at a success rate of 90%, while the correct assignment to texts was 70%. In the longitudinal self-assessment study we calculated individual 'baselines' for each test person along with deviations thereof. The significance of such deviations was assessed through the normative reference data. CONCLUSIONS: Our data provided gender-, age-, and language-specific thresholds that allow one to reliably distinguish between 'natural fluctuations' and 'significant changes'. The longitudinal self-assessment study with repeated assessments at 1-day intervals over 14 days demonstrated the feasibility and efficiency of the speech analysis method in home environments, thus clearing the way to a broader range of applications in psychiatry. © 2014 S. Karger AG, Basel.
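
The "individual baseline plus deviation" logic can be sketched as follows, with invented numbers: a per-person baseline from repeated sessions, and a deviation judged against the spread of a normative reference sample; the 2-SD flag is an illustrative threshold, not the study's calibrated one.

```python
import numpy as np

rng = np.random.default_rng(3)
reference = rng.normal(120.0, 15.0, 500)            # normative reference sample
sessions  = np.array([118.0, 122.0, 119.0, 121.0])  # one person's repeats
today     = 150.0                                   # new measurement

baseline = sessions.mean()
z = (today - baseline) / reference.std(ddof=1)
# Flag a "significant change" when the deviation exceeds ~2 reference SDs.
print(f"baseline={baseline:.1f}, z={z:.2f}, significant={abs(z) > 2}")
```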

Relevance: 100.00%

Publisher:

Abstract:

Cilengitide, a cyclicized arginine-glycine-aspartic acid-containing pentapeptide, potently blocks αvβ3 and αvβ5 integrin activation. Integrins are upregulated in many malignancies and mediate a wide variety of tumor-stroma interactions. Cilengitide and other integrin-targeting therapeutics have preclinical activity against many cancer subtypes including glioblastoma (GBM), the most common and deadliest CNS tumor. Cilengitide is active against orthotopic GBM xenografts and can augment radiotherapy and chemotherapy in these models. In Phase I and II GBM trials, cilengitide and the combination of cilengitide with standard temozolomide and radiation demonstrate consistent antitumor activity and a favorable safety profile. Cilengitide is currently under evaluation in a pivotal, randomized Phase III study (Cilengitide in Combination With Temozolomide and Radiotherapy in Newly Diagnosed Glioblastoma Phase III Randomized Clinical Trial [CENTRIC]) for newly diagnosed GBM. In addition, randomized controlled Phase II studies with cilengitide are ongoing for non-small-cell lung cancer and squamous cell carcinoma of the head and neck. Cilengitide is the first integrin inhibitor in clinical Phase III development for oncology.

Relevance: 100.00%

Publisher:

Abstract:

Questionnaire studies indicate that high-anxious musicians may suffer from hyperventilation symptoms before and/or during performance. Reported symptoms include, amongst others, shortness of breath, fast or deep breathing, dizziness and a thumping heart. A self-report study by Widmer, Conway, Cohen and Davies (1997) shows that up to seventy percent of the tested highly anxious musicians are hyperventilators during performance. However, no study has yet tested whether these self-reported symptoms reflect actual cardiorespiratory changes just before and during performance. Disturbances in breathing patterns and hyperventilation may negatively affect performance quality in stressful performance situations. The main goal of this study is to determine whether music performance anxiety is manifest physiologically in specific correlates of cardiorespiratory activity. We studied 74 professional music students of Swiss Music Universities divided into two groups (high- and low-anxious) based on their self-reported performance anxiety (State-Trait Anxiety Inventory by Spielberger). The students were tested in three distinct situations: baseline, performance without audience, performance with audience. We measured a) breathing patterns, end-tidal carbon dioxide, which is a good non-invasive estimator for hyperventilation, and cardiac activation and b) self-perceived emotions and self-perceived physiological activation. Analyses of heart rate, respiratory rate, self-perceived palpitations, self-perceived shortness of breath and self-perceived anxiety for the 15 most and the 15 least anxious musicians show that high-anxious and low-anxious music students have a comparable physiological activation during the different measurement periods. However, high-anxious music students feel significantly more anxious and perceive significantly stronger palpitations and significantly stronger shortness of breath just before and during a public performance. The results indicate that low- and high-anxious music students a) do not differ in the considered physiological responses and b) differ in the considered self-perceived physiological symptoms and the self-reported anxiety before and/or during a public performance.

Relevance: 100.00%

Publisher:

Abstract:

It has been shown that repolarization alternans, a beat-to-beat alternation in action potential duration, enhances dispersion of repolarization above a critical heart rate and promotes susceptibility to ventricular arrhythmias. It is unknown whether repolarization alternans is measurable in the atria using standard pacemakers and whether it plays a role in promoting atrial fibrillation. In this work, atrial repolarization alternans amplitude and periodicity are studied in a sheep model of pacing-induced atrial fibrillation. Two pacemakers, each with one right atrial and ventricular lead, were implanted in 4 male sheep after ablation of the atrioventricular junction. The first one was used to deliver rapid pacing for measurements of right atrial repolarization alternans and the second one to record a unipolar electrogram. Atrial repolarization alternans appeared rate-dependent and its amplitude increased as a function of pacing rate. Repolarization alternans was intermittent but no periodicity was detected. An increase of repolarization alternans preceding episodes of non-sustained atrial fibrillation suggests that repolarization alternans is a promising parameter for assessment of atrial fibrillation susceptibility.
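
One simple way to quantify alternans amplitude from a beat series is half the mean difference between even and odd beats; the sketch below uses synthetic repolarization measurements and is not the study's actual measurement pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)
n_beats = 100
abab = 5.0 * (-1.0) ** np.arange(n_beats)           # true ABAB pattern, +/-5 ms
apd = 180.0 + abab + rng.normal(0.0, 1.0, n_beats)  # noisy beat series (ms)

# Alternans amplitude: half the mean difference between even and odd beats.
amplitude = 0.5 * abs(apd[0::2].mean() - apd[1::2].mean())
print(f"estimated alternans amplitude: {amplitude:.2f} ms (true 5.00 ms)")
```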

Relevance: 100.00%

Publisher:

Abstract:

Body fat distribution, particularly centralized obesity, is associated with metabolic risk above and beyond total adiposity. We performed genome-wide association of abdominal adipose depots quantified using computed tomography (CT) to uncover novel loci for body fat distribution among participants of European ancestry. Subcutaneous and visceral fat were quantified in 5,560 women and 4,997 men from 4 population-based studies. Genome-wide genotyping was performed using standard arrays and imputed to ~2.5 million HapMap SNPs. Each study performed a genome-wide association analysis of subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), VAT adjusted for body mass index, and VAT/SAT ratio (a metric of the propensity to store fat viscerally as compared to subcutaneously) in the overall sample and in women and men separately. A weighted z-score meta-analysis was conducted. For the VAT/SAT ratio, the most significant association was with rs11118316 at the LYPLAL1 gene (p = 3.1 × 10^-9), previously identified in association with waist-hip ratio. For SAT, the most significant SNP was in the FTO gene (p = 5.9 × 10^-8). Given the known gender differences in body fat distribution, we performed sex-specific analyses. Our most significant finding was for VAT in women, rs1659258 near THNSL2 (p = 1.6 × 10^-8), but not in men (p = 0.75). Validation of this SNP in the GIANT consortium data demonstrated a similar sex-specific pattern, with observed significance in women (p = 0.006) but not men (p = 0.24) for BMI, and for waist circumference (p = 0.04 [women], p = 0.49 [men]). Finally, we interrogated our data for the 14 recently published loci for body fat distribution (measured by waist-hip ratio adjusted for BMI); associations were observed at 7 of these loci. In contrast, we observed associations at only 7/32 loci previously identified in association with BMI; the majority of the overlap was observed with SAT. Genome-wide association for visceral and subcutaneous fat revealed a SNP for VAT in women. More refined phenotypes for body composition and fat distribution can detect new loci not previously uncovered in large-scale GWAS of anthropometric traits.
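
A weighted z-score meta-analysis of the kind mentioned is conventionally the Stouffer method with sample-size weights; the sketch below uses invented per-study statistics for a single hypothetical SNP.

```python
import numpy as np
from scipy import stats

z = np.array([2.1, 1.4, 2.8, 0.9])      # per-study z for one SNP (invented)
n = np.array([2000, 1500, 3500, 1200])  # per-study sample sizes (invented)

w = np.sqrt(n)                          # sample-size weights
z_meta = (w * z).sum() / np.sqrt((w ** 2).sum())
p_meta = 2 * stats.norm.sf(abs(z_meta)) # two-sided p-value
print(f"combined z = {z_meta:.2f}, p = {p_meta:.1e}")
```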

Relevance: 100.00%

Publisher:

Abstract:

OBJECTIVE: Prospective non-randomised comparison of a full-thickness pedicled diaphragm flap with an intercostal muscle flap in terms of morbidity and efficiency for bronchial stump coverage after induction therapy followed by pneumonectomy for non-small cell lung cancer (NSCLC). METHODS: Between 1996 and 1998, a consecutive series of 26 patients underwent pneumonectomy following induction therapy. Half of the patients underwent mediastinal reinforcement by use of a pedicled intercostal muscle flap (IF) and half by use of a pedicled full-thickness diaphragm muscle flap (DF). Patients in both groups were matched according to age, gender, side of pneumonectomy and stage of NSCLC. Postoperative morbidity and mortality were recorded. Six-month follow-up including physical examination and pulmonary function testing was performed to examine the incidence of bronchial stump fistulae, gastro-esophageal disorders or chest wall complaints. RESULTS: There was no 30-day mortality in either group. Complications were observed in one of 13 patients after IF and five of 13 after DF, including pneumonia in two (one IF and one DF), visceral herniations in three (DF) and bronchopleural fistula in one patient (DF). There were no symptoms of gastro-esophageal reflux disease (GERD). Postoperative pulmonary function testing revealed no significant differences between the two groups. CONCLUSIONS: Pedicled intercostal and diaphragmatic muscle flaps are both valuable and effective tools for prophylactic mediastinal reinforcement following induction therapy and pneumonectomy. In our series of patients, IF seemed to be associated with lower operation-related morbidity than DF, although the difference was not significant. Pedicled full-thickness diaphragmatic flaps may be indicated after induction therapy and extended pneumonectomy with pericardial resection in order to cover the stump and close the pericardial defect, since they do not adversely influence pulmonary function.

Relevance: 100.00%

Publisher:

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. © 2012 Elsevier B.V. All rights reserved.
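
The fixed-lid planar water-surface approximation mentioned above can be illustrated in a few lines: depth is taken as a constant-slope water surface minus the bed elevation; the elevations and slope below are invented, with a slope of a few centimetres per kilometre in the spirit of large sand-bed rivers.

```python
import numpy as np

x   = np.linspace(0.0, 30_000.0, 7)   # downstream distance along the reach (m)
bed = np.array([52.0, 51.4, 50.9, 50.1, 49.8, 49.2, 48.7])  # bed elevation (m)

# Planar water surface, fixed ("lid") during the flow calculation;
# slope of 4 cm per km as an illustrative very low value.
ws_upstream, slope = 55.0, 4e-5
water_surface = ws_upstream - slope * x

depth = water_surface - bed           # depths fed to the RC flow model
print(np.round(depth, 2))
```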