991 results for Quit Attempt Methods
Abstract:
OBJECTIVES: Pulmonary valve insufficiency remains a leading cause of reoperations in congenital cardiac surgery. The current percutaneous approach is limited by the size of the access vessel and by variable right ventricular outflow tract morphology. This study assesses the feasibility of transapical pulmonary valve replacement based on a new valved stent construction concept. METHODS: A new valved stent design was implanted off-pump, under continuous intracardiac echocardiographic and fluoroscopic guidance, into the native right ventricular outflow tract of 8 pigs (48.5 +/- 6.0 kg) through the right ventricular apex, and device function was studied using invasive and noninvasive measures. RESULTS: Procedural success was 100% at the first attempt. Procedural time was 75 +/- 15 minutes. All devices were delivered to the target site with good acute valve function. No valved stents dislodged. No animal had significant regurgitation or paravalvular leakage on intracardiac echocardiographic analysis. All animals had a competent tricuspid valve and no signs of right ventricular dysfunction. The planimetric valve orifice was 2.85 +/- 0.32 cm². No damage to the pulmonary artery and no structural defect of the valved stents were found at necropsy. CONCLUSIONS: This study confirms the feasibility of direct-access valve replacement through the transapical procedure for replacement of the pulmonary valve, as well as the validity of the new valved stent design concept. The transapical procedure targets a broader patient pool, including very young and adult patients. The device design might not be restricted to failing conduits and could allow implantation in a larger patient population, including patients with native right ventricular outflow tract configurations.
Abstract:
This paper presents 3-D brain tissue classification schemes using three recent, promising energy minimization methods for Markov random fields: graph cuts, loopy belief propagation and tree-reweighted message passing. The classification is performed using the well-known finite Gaussian mixture Markov random field model. Results from the above methods are compared with the widely used iterated conditional modes algorithm. The evaluation is performed on a dataset containing simulated T1-weighted MR brain volumes with varying noise and intensity non-uniformities. The comparisons are performed in terms of energies as well as against ground truth segmentations, using various quantitative metrics.
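As an illustration of the baseline the comparison is made against, here is a minimal sketch of iterated conditional modes for a Gaussian mixture MRF on a synthetic 2-D image; the class means, noise level and Potts smoothness weight are illustrative choices, not values from the paper.

```python
# Iterated conditional modes (ICM): greedy coordinate-wise MAP estimation
# under a Gaussian-mixture Potts MRF, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "tissue" image: 3 classes with Gaussian intensity noise.
true_labels = np.repeat(np.arange(3), 400).reshape(60, 20)
means, sigma = np.array([0.0, 1.0, 2.0]), 0.4
image = means[true_labels] + rng.normal(0, sigma, true_labels.shape)

def icm(image, means, sigma, beta=1.0, n_iter=10):
    labels = np.abs(image[..., None] - means).argmin(-1)  # ML initialization
    H, W = image.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                best, best_e = labels[i, j], np.inf
                for k in range(len(means)):
                    data_e = (image[i, j] - means[k]) ** 2 / (2 * sigma**2)
                    # Potts prior: penalize disagreement with 4-neighbours.
                    nbrs = [labels[x, y] for x, y in
                            ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                            if 0 <= x < H and 0 <= y < W]
                    smooth_e = beta * sum(k != n for n in nbrs)
                    if data_e + smooth_e < best_e:
                        best, best_e = k, data_e + smooth_e
                labels[i, j] = best
    return labels

seg = icm(image, means, sigma)
print("pixel accuracy:", (seg == true_labels).mean())
```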
Abstract:
BACKGROUND: Smokers have a lower body weight than non-smokers, and smoking cessation is associated with weight gain in most cases. A hormonal mechanism of action might underlie weight variations related to smoking, and leptin might be involved. We performed secondary analyses of an RCT, with a hypothesis-free exploratory approach, to study the dynamics of leptin following smoking cessation. METHODS: We measured serum leptin levels in 271 sedentary smokers willing to quit who participated in a randomized controlled trial assessing a 9-week moderate-intensity physical activity intervention as an aid for smoking cessation. We adjusted leptin for body fat levels. We performed linear regressions to test for an association between leptin levels and study group over time. RESULTS: One year after smoking cessation, the mean serum leptin change was +3.23 mg/l (SD 4.89) in the control group and +1.25 mg/l (SD 4.86) in the intervention group (p of the difference < 0.05). When adjusted for body fat levels, leptin was higher in the control group than in the intervention group (p of the difference < 0.01). The mean weight gain was +2.91 kg (SD 6.66) in the intervention group and +3.33 kg (SD 4.47) in the control group (difference not significant). CONCLUSIONS: Serum leptin levels increased significantly after smoking cessation, in spite of substantial weight gain. Leptin dynamics might differ in chronic tobacco users who quit smoking, and physical activity might affect leptin dynamics in this situation. CLINICAL TRIAL REGISTRATION NUMBER: NCT00521391.
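The adjusted comparison described in the methods can be sketched as an ordinary least squares regression of leptin change on study group and body fat. The data below are simulated and all variable names are hypothetical stand-ins, not trial data from NCT00521391.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 271
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),      # 0 = control, 1 = intervention
    "body_fat": rng.normal(30, 6, n),    # percent body fat, simulated
})
# Simulated one-year leptin change: higher in controls, rising with body fat.
df["leptin_change"] = (3.2 - 2.0 * df["group"]
                       + 0.15 * df["body_fat"] + rng.normal(0, 4.9, n))

# The coefficient on `group` is the body-fat-adjusted group difference.
model = smf.ols("leptin_change ~ group + body_fat", data=df).fit()
print(model.summary().tables[1])
```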
Abstract:
Introduction: Quantitative measures of the degree of lumbar spinal stenosis (LSS), such as the antero-posterior diameter of the canal or the dural sac cross-sectional area, vary widely and do not correlate with clinical symptoms or with the results of surgical decompression. In an effort to improve the quantification of stenosis we have developed a grading system based on the morphology of the dural sac and its contents as seen on T2 axial images. The grading comprises seven categories ranging from normal to the most severe stenosis and takes into account the ratio of rootlet/CSF content. Material and methods: Fifty T2 axial MRI images taken at disc level from twenty-seven symptomatic lumbar spinal stenosis patients who underwent decompressive surgery were classified into the seven categories by five observers and reclassified 2 weeks later by the same investigators. Intra- and inter-observer reliability of the classification were assessed using Cohen's and Fleiss' kappa statistics, respectively. Results: Overall, the morphology grading system was well adopted by the observers. Its successful application is strongly influenced by the identification of the dural sac. The average intra-observer Cohen's kappa was 0.53 ± 0.2. The inter-observer Fleiss' kappa was 0.38 ± 0.02 in the first rating and 0.3 ± 0.03 in the second rating repeated after two weeks. Discussion: In this study, the training of the observers was limited to an introduction to the general idea of the morphology grading system and one example MRI image per category. Identifying the dimensions of the dural sac can be difficult in the absence of a complete T1/T2 MRI image series, as was the case here. The similarity of CSF to fat, which may also be present on T2 images, was the main reason for mismatches in the assignment of cases to categories. The Fleiss kappa values of the five observers are fair, and the proposed morphology grading system is promising.
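A brief sketch of the two reliability statistics reported above, using random stand-in ratings for the 50 images, five observers and seven grades; the standard `cohen_kappa_score` (scikit-learn) and `fleiss_kappa` (statsmodels) routines are used.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(1)
n_images, n_raters, n_grades = 50, 5, 7
ratings = rng.integers(0, n_grades, size=(n_images, n_raters))

# Intra-observer: one rater's first vs. second reading of the same images
# (here the second reading randomly agrees with the first 80% of the time).
second_reading = np.where(rng.random(n_images) < 0.8,
                          ratings[:, 0],
                          rng.integers(0, n_grades, n_images))
print("Cohen's kappa:", cohen_kappa_score(ratings[:, 0], second_reading))

# Inter-observer: Fleiss' kappa over all five raters in one session.
table, _ = aggregate_raters(ratings)   # counts per image x category
print("Fleiss' kappa:", fleiss_kappa(table, method='fleiss'))
```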
Abstract:
This paper is an attempt to clarify the relationship between fractionalization, polarization and conflict. The literature on the measurement of ethnic diversity has taken as given that the proper measure of heterogeneity can be calculated using the fractionalization index. This index is widely used in industrial economics and, for empirical purposes, ethnolinguistic fragmentation data are readily available for regression exercises. Nevertheless, the adequacy of a synthetic index of heterogeneity depends on the intrinsic characteristics of the heterogeneous dimension to be measured. In the case of ethnic diversity there is a very strong conflictive dimension. For this reason we argue that the measure of heterogeneity should be one of the class of polarization measures. In fact, the intuition of the relationship between conflict and fractionalization does not hold for more than two groups. In contrast with the usual problem of polarization indices, which are difficult to implement empirically without making some arbitrary choice of parameters, we show that the RQ index, proposed by Reynal-Querol (2002), is the only discrete polarization measure that satisfies the basic properties of polarization. Additionally, we present a derivation of the RQ index from a simple rent-seeking model. In the empirical section we show that while ethnic polarization has a positive effect on civil wars and, indirectly, on growth, this effect is not present when we use ethnic fractionalization.
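The two indices being contrasted have simple closed forms: fractionalization F = 1 - Σ π_i² and the discrete polarization index RQ = 4 Σ π_i²(1 - π_i) of Reynal-Querol (2002). A minimal sketch with illustrative group shares shows the divergence for more than two groups that the abstract's argument relies on.

```python
import numpy as np

def fractionalization(shares):
    p = np.asarray(shares)
    return 1 - np.sum(p**2)

def rq_polarization(shares):
    p = np.asarray(shares)
    return 4 * np.sum(p**2 * (1 - p))

# Two equal groups: both indices agree that this is maximally conflictive.
print(fractionalization([0.5, 0.5]), rq_polarization([0.5, 0.5]))  # 0.5  1.0

# Ten equal groups: fractionalization rises while polarization falls --
# the case where the fractionalization intuition breaks down.
p10 = [0.1] * 10
print(fractionalization(p10), rq_polarization(p10))                # 0.9  0.36
```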
Abstract:
The paper contrasts empirically the results of alternative methods for estimating the value and the depreciation of mineral resources. The historical data of Mexico and Venezuela, covering the period from the 1920s to the 1980s, are used to contrast the results of several methods. These are the present value, the net price method, the user cost method and the imputed income method. The paper establishes that the net price and the user cost are not competing methods as such, but alternative adjustments to different scenarios of closed and open economies. The results prove that the biases of the methods, as commonly described in the theoretical literature, only hold under the most restricted scenario of constant rents over time. It is argued that the difference between what is expected to happen and what actually did happen is for the most part due to a missing variable, namely technological change. This is an important caveat to the recommendations made based on these models.
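As a hedged sketch of how two of these methods differ, the following contrasts the net price method (the whole resource rent counted as depreciation) with El Serafy's user cost method (only the fraction 1/(1+r)^(n+1) of net receipts counted as depreciation, for remaining resource life n and discount rate r). The numbers are illustrative, not the Mexican or Venezuelan historical data.

```python
def net_price_depreciation(price, unit_cost, extraction):
    # Net price method: depreciation equals the full rent on extraction.
    return (price - unit_cost) * extraction

def user_cost_depreciation(net_receipts, r, n):
    # El Serafy user cost: the share of net receipts set aside as
    # depreciation so the rest can be treated as sustainable income.
    return net_receipts / (1 + r) ** (n + 1)

rent = net_price_depreciation(price=30.0, unit_cost=18.0, extraction=100.0)
print("net price method:", rent)                                   # 1200.0
print("user cost method:", user_cost_depreciation(rent, r=0.05, n=20))

# As r -> 0 the user cost approaches the full rent, so the two methods
# converge -- the restricted constant-rent scenario under which, the paper
# argues, the textbook biases hold.
```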
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
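A minimal sketch of the stepdown logic under discussion, using Holm's (1979) procedure: the per-step thresholds α/k, α/(k-1), ... grow monotonically as hypotheses are rejected, which is the kind of monotonicity requirement on critical values the paper builds on.

```python
import numpy as np

def holm_stepdown(pvalues, alpha=0.05):
    """Boolean rejection vector controlling the FWE at level alpha."""
    p = np.asarray(pvalues)
    k = len(p)
    order = np.argsort(p)                 # step down from the smallest p-value
    reject = np.zeros(k, dtype=bool)
    for step, idx in enumerate(order):
        if p[idx] <= alpha / (k - step):  # Bonferroni on the remaining set
            reject[idx] = True
        else:
            break                         # stop: accept this and all larger
    return reject

print(holm_stepdown([0.001, 0.02, 0.04, 0.30]))
# [ True False False False ]: 0.001 <= 0.05/4, but 0.02 > 0.05/3, so stop.
```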
Abstract:
Bacteria are generally difficult specimens to prepare for conventional resin section electron microscopy, and mycobacteria, with their thick and complex cell envelope layers, are especially prone to artefacts. Here we made a systematic comparison of different methods for preparing Mycobacterium smegmatis for thin section electron microscopy analysis. These methods were: (1) conventional preparation with fixatives and epoxy resins at ambient temperature; (2) Tokuyasu cryo-sectioning of chemically fixed bacteria; (3) rapid freezing followed by freeze substitution and embedding in epoxy resin at room temperature, or (4) combined with Lowicryl HM20 embedding and ultraviolet (UV) polymerization at low temperature; and (5) CEMOVIS, or cryo electron microscopy of vitreous sections. The best preservation of bacteria was obtained with the cryo electron microscopy of vitreous sections method, as expected, especially with respect to the preservation of the cell envelope and lipid bodies. By comparison with cryo electron microscopy of vitreous sections, both the conventional and Tokuyasu methods produced different, undesirable artefacts. The two freeze-substitution protocols showed variable preservation of the cell envelope but gave acceptable preservation of the cytoplasm and bacterial DNA, though not of lipid bodies. In conclusion, although cryo electron microscopy of vitreous sections must be considered the 'gold standard' among sectioning methods for electron microscopy, because it avoids solvents and stains, optimally performed freeze substitution also offers some advantages for the ultrastructural analysis of bacteria.
Abstract:
The question of where retroviral DNA becomes integrated in chromosomes is important for (i) understanding the mechanisms of viral growth, (ii) devising new anti-retroviral therapies, (iii) understanding how genomes evolve, and (iv) developing safer methods for gene therapy. With the completion of genome sequences for many organisms, it has become possible to study integration targeting by cloning and sequencing large numbers of host-virus DNA junctions, then mapping the host DNA segments back onto the genomic sequence. This allows statistical analysis of the distribution of integration sites relative to the myriad types of genomic features that are also being mapped onto the sequence scaffold. Here we present methods for recovering and analyzing integration site sequences.
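A sketch of the final statistical step described here: once junction sequences are mapped, compare the fraction of integration sites falling inside an annotated feature class with a matched random control. The genome length, gene intervals and site coordinates below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
genome_len = 1_000_000
# Hypothetical gene annotation: (start, end) intervals covering 40% of
# this toy genome.
genes = [(s, s + 20_000) for s in range(0, genome_len, 50_000)]

def fraction_in_genes(sites, genes):
    in_gene = [any(s <= x < e for s, e in genes) for x in sites]
    return np.mean(in_gene)

observed_sites = rng.integers(0, genome_len, 500)    # mapped junctions
random_sites = rng.integers(0, genome_len, 10_000)   # random control

print("observed fraction in genes:", fraction_in_genes(observed_sites, genes))
print("random expectation:        ", fraction_in_genes(random_sites, genes))
```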
Abstract:
BACKGROUND: Profiling sperm DNA present on vaginal swabs taken from rape victims often contributes to identifying and incarcerating rapists. However, large amounts of the victim's epithelial cells contaminate the sperm present on swabs and complicate this process. The standard method for obtaining relatively pure sperm DNA from a vaginal swab is to digest the epithelial cells with Proteinase K in order to solubilize the victim's DNA, and to then physically separate the soluble DNA from the intact sperm by pelleting the sperm, removing the victim's fraction, and repeatedly washing the sperm pellet. An alternative approach that does not require washing steps is to digest with Proteinase K, pellet the sperm, remove the victim's fraction, and then digest the residual victim's DNA with a nuclease. METHODS: The nuclease approach has been commercialized in a product, the Erase Sperm Isolation Kit (PTC Labs, Columbia, MO, USA), and five crime laboratories have tested it on semen-spiked female buccal swabs in a direct comparison with their standard methods. Comparisons have also been performed on timed post-coital vaginal swabs and on evidence collected from sexual assault cases. RESULTS: For the semen-spiked buccal swabs, Erase outperformed the standard methods in all five laboratories and in most cases was able to provide a clean male profile from buccal swabs spiked with only 1,500 sperm. The vaginal swabs taken after consensual sex and the evidence collected from rape victims showed a similar pattern, with Erase providing superior profiles. CONCLUSIONS: In all samples tested, STR profiles of the male DNA fractions obtained with Erase were as good as or better than those obtained using the standard methods.
Abstract:
In this paper we attempt to describe the general reasons behind the world population explosion in the 20th century. The size of the population at the end of the century in question, deemed excessive by some, was a consequence of a dramatic improvement in life expectancies, attributable, in turn, to scientific innovation, the circulation of information and economic growth. Nevertheless, fertility is a variable that plays a crucial role in differences in demographic growth. We identify infant mortality, female education levels and racial identity as important exogenous variables affecting fertility. It is estimated that in poor countries one additional year of primary schooling for women leads, on average, to 0.614 fewer children per couple (worldwide). While it may be possible to identify a global tendency towards convergence in demographic trends, particular attention should be paid to the case of Africa, not only because of its different demographic patterns, but also because much of the continent's population has yet to experience the improvement in quality of life generally enjoyed across the rest of the planet.
Abstract:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters in their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made where the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
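A sketch of the "movie" idea under one assumed parametrization: a power transform (x^α - 1)/α, which tends to log(x) as α → 0, slides a plain double-centred PCA map of a compositional table toward its logratio form frame by frame. The data and this particular method pair are illustrative stand-ins, not the full family of linked methods from the presentation.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.dirichlet(np.ones(5), size=30)       # 30 compositions, 5 parts

def map_frame(X, alpha):
    # Box-Cox power transform; the alpha -> 0 limit is the log transform.
    T = (X**alpha - 1) / alpha if alpha > 0 else np.log(X)
    # Double centring: remove row means, column means, add grand mean.
    T = T - T.mean(0) - T.mean(1, keepdims=True) + T.mean()
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return U[:, :2] * s[:2]                   # 2-D row coordinates

# Recalculate the map "frame by frame" as the linking parameter varies.
for alpha in (1.0, 0.5, 0.25, 0.1, 0.0):
    frame = map_frame(X, alpha)
    print(f"alpha={alpha:4.2f}  first row coords: {frame[0].round(3)}")
```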
Abstract:
Conventional financial accounting information is slanted in favour of certain economic interests. This paper argues in favour of accounting information that captures and shows relevant aspects of the economic-social situation, and of decision-making based on it, allowing decisions to be taken with economic-social awareness rather than a purely economic-weighted one.