505 results for After-images.
Abstract:
Background Comparison of a multimodal intervention, WE CALL (study-initiated phone support and information provision), versus a passive intervention, YOU CALL (the participant can contact a resource person), in individuals with a first mild stroke. Methods and Results This study is a single-blinded randomized clinical trial. Primary outcomes include unplanned use of health services for adverse events (participant diaries) and quality of life (EuroQol-5D, Quality of Life Index). Secondary outcomes include planned use of health services (diaries), mood (Beck Depression Inventory II), and participation (Assessment of Life Habits [LIFE-H]). Blind assessments were done at baseline, 6, and 12 months. A mixed-model approach on an intention-to-treat basis was used for statistical analysis, with intervention type as the group factor and time as the occasion factor, at a significance level of 0.01. We enrolled 186 patients (WE CALL, n=92; YOU CALL, n=94) with a mean age of 62.5±12.5 years; 42.5% were women. No significant differences were seen between groups at 6 months for any outcome, with both groups improving from baseline on all measures (effect sizes ranged from 0.25 to 0.7). The only significant change in both groups from 6 months to 1 year (n=139) was in the social domains of the LIFE-H (increment in score, 0.4/9±1.3 [95% confidence interval, 0.1–0.7]; effect size, 0.3). Qualitatively, the WE CALL intervention was perceived as reassuring and as increasing insight and problem solving while decreasing anxiety. Only 6 of 94 (6.4%) YOU CALL participants availed themselves of the intervention. Conclusions Although the 2 groups improved equally over time, the WE CALL intervention was perceived as helpful, whereas the YOU CALL intervention was not used.
Abstract:
Background More than 60% of new strokes each year are "mild" in severity, and this proportion is expected to rise in the years to come. Within our current health care system, those with "mild" stroke are typically discharged home within days, without further referral to health or rehabilitation services other than advice to see their family physician. Those with mild stroke often have limited access to support from health professionals with stroke-specific knowledge, who would typically provide critical information on topics such as secondary stroke prevention, community reintegration, medication counselling, and problem solving with regard to specific concerns that arise. Isolation and lack of knowledge may lead to a worsening of health problems, including stroke recurrence and unnecessary, costly health care utilization. The purpose of this study is to assess the effectiveness, for individuals who experience a first "mild" stroke, of a sustainable, low-cost, multimodal support intervention comprising information, education, and telephone support ("WE CALL"), compared to a passive intervention providing the name and phone number of a resource person available if they feel the need ("YOU CALL"), on two primary outcomes: unplanned use of health services for negative events and quality of life. Method/Design We will recruit 384 adults who meet inclusion criteria for a first mild stroke across six Canadian sites. Baseline measures will be taken within the first month after stroke onset. Participants will be stratified according to comorbidity level and randomised to one of two groups: YOU CALL or WE CALL. Both interventions will be offered over a six-month period. Primary outcomes include unplanned use of health services for negative events (frequency calendar) and quality of life (EQ-5D and Quality of Life Index).
Secondary outcomes include participation level (LIFE-H), depression (Beck Depression Inventory II), and use of health services for health promotion or prevention (frequency calendar). Blind assessors will gather data at mid-intervention, end of intervention, and one-year follow-up. Discussion If effective, this multimodal intervention could be delivered in both urban and rural environments: existing infrastructure, such as regional stroke centres and secondary stroke prevention clinics, would make it deliverable and sustainable.
Abstract:
Background: Prediction of outcome after stroke is important for triage decisions, for prognostic estimates given to families, and for appropriate resource utilization. Prognostication must be timely and simple to apply. Several scales have shown good prognostic value. In Calgary, the Orpington Prognostic Score (OPS) has been used to predict outcome as an aid to rehabilitation triage. However, the OPS has not been assessed at one week for predictive capability. Methods: Among patients admitted to a sub-acute stroke unit, OPS from the first week were examined to determine whether any correlation existed between first-week score and final disposition after rehabilitation. The predictive validity of the OPS at one week was compared to the National Institutes of Health Stroke Scale (NIHSS) score at 24 hours using logistic regression and receiver operating characteristic (ROC) analysis. The primary outcome was final disposition after discharge, whether from the stroke unit (if the patient went directly home or died) or from the inpatient rehabilitation unit. Results: The first-week OPS was highly predictive of final disposition. However, no major advantage of the first-week OPS was observed when compared with the 24-hour NIHSS score. Both scales were equally predictive of the final disposition of stroke patients after rehabilitation. Conclusion: The first-week OPS can be used to predict final outcome. The NIHSS at 24 hours provides the same prognostic information.
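The ROC comparison above turns on the area under the curve (AUC) for each scale. As an illustrative aside (not the authors' code; the function name and scores below are hypothetical), the AUC equals the Mann-Whitney probability that a randomly chosen patient with the outcome of interest scores higher than one without:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) score pairs ranked in the
    correct order, counting ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical OPS-like scores: patients needing inpatient rehabilitation
# (positives) versus those discharged directly home (negatives).
a = auc([4.8, 5.2, 3.9], [2.1, 3.0, 3.9])
```

An AUC of 0.5 means the scale carries no discriminative information, and 1.0 means perfect separation; comparing two scales amounts to comparing their AUCs on the same patients.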
Abstract:
One hundred seventy-six consecutive patients treated with IV tissue plasminogen activator (tPA) for acute ischemic stroke were examined prospectively, and orolingual angioedema was found in nine (5.1%; 95% CI 2.3 to 9.5). The reaction was typically mild, transient, and contralateral to the ischemic hemisphere. Risk of angioedema was associated with angiotensin-converting enzyme inhibitors (relative risk [RR] 13.6; 95% CI 3.0 to 62.7) and signs on initial CT of ischemia in the insular and frontal cortex (RR 9.1; 95% CI 1.4 to 30.0).
Abstract:
In response to scientific breakthroughs in biotechnology, the development of new technologies, and the demands of a hungry capitalist marketplace, patent law has expanded to accommodate a range of biological inventions. There has been much academic and public debate as to whether gene patents have a positive impact upon research and development, health care, and the protection of the environment. In a satire of prevailing patenting practices, the English poet and part-time casino waitress Donna MacLean sought a patent application - GB0000180.0 - in respect of herself. She explained that she had satisfied the usual patent criteria, in that she was novel, inventive, and useful: 'It has taken 30 years of hard labor for me to discover and invent myself, and now I wish to protect my invention from unauthorized exploitation, genetic or otherwise. I am new: I have led a private existence and I have not made the invention of myself public. I am not obvious' (2000: 18). MacLean said she had many industrial applications. 'For example, my genes can be used in medical research to extremely profitable ends - I therefore wish to have sole control of my own genetic material' (2000: 18). She observed in an interview: 'There's a kind of unpleasant, grasping, greedy atmosphere at the moment around the mapping of the human genome ... I wanted to see if a human being could protect their own genes in law' (Meek, 2000). This special issue of Law in Context charts a new era in the long-standing debate over biological inventions. In the wake of the expansion of patentable subject matter, great strain has been placed upon patent criteria such as 'novelty', 'inventive step', and 'utility'. Furthermore, there has been a new focus upon legal doctrines which facilitate access to patented inventions, such as the defence of experimental use, the 'Bolar' exception, patent pooling, and compulsory licensing.
There has been a concerted effort to renew patent law with an infusion of ethical principles dealing with informed consent and benefit sharing. There has also been a backlash against the commercialisation of biological inventions, and a call by some activists for the abolition of patents on genetic inventions. This collection considers a wide range of biological inventions, ranging from micro-organisms, plants, flowers, and transgenic animals to genes, expressed sequence tags, and research tools, as well as genetic diagnostic tests and pharmaceutical drugs. It is thus an important corrective to much policy work, which has been limited in its purview to gene patents and biomedical research alone. This collection compares and contrasts the approaches of a number of jurisdictions to the legal problems posed by biological inventions. In particular, it looks at the complexities of the 1998 European Union Directive on the Legal Protection of Biotechnological Inventions, as well as decisions of member states, such as the Netherlands, and peripheral states, like Iceland. The edition considers US jurisprudence on patent law and policy, as well as recent developments in Canada. It also focuses upon recent developments in Australia, especially in the wake of parallel policy inquiries into gene patents and access to genetic resources.
Abstract:
This paper investigates the reasons why some technologies, defying general expectations and the established models of technological change, may not disappear from the market after having been displaced from their once-dominant status. Our point of departure is that the established models of technological change are not suited to explaining this, as they predominantly focus on technological dominance, giving attention to the technologies that display the highest performance levels and gain the greatest market share. And yet, technological landscapes are rife with technological designs that do not fulfil these conditions. Using the LP record as an empirical case, we propose that the central mechanism at play in the continuing market presence of once-dominant technologies is the recasting of their technological features from the functional-utilitarian to the aesthetic realm, with an additional element concerning communal interaction among users. The findings that emerge from our quantitative textual analysis of over 200,000 posts on a prominent online LP-related discussion forum (between 2002 and 2010) also suggest that post-dominance technology adopters and users share many key characteristics with the earliest adopters of new technologies, rather than with the late-stage adopters who precede them.
Abstract:
The behavior of small molecules on a surface depends critically on both molecule–substrate and intermolecular interactions. We present here a detailed comparative investigation of 1,3,5-benzene tricarboxylic acid (trimesic acid, TMA) on two different surfaces: highly oriented pyrolytic graphite (HOPG) and single-layer graphene (SLG) grown on a polycrystalline Cu foil. On the basis of high-resolution scanning tunnelling microscopy (STM) images, we show that the epitaxy matrix for the hexagonal TMA chicken wire phase is identical on these two surfaces, and, using density functional theory (DFT) with a non-local van der Waals correlation contribution, we identify the most energetically favorable adsorption geometries. Simulated STM images based on these calculations suggest that the TMA lattice can stably adsorb on sites other than those identified to maximize binding interactions with the substrate. This is consistent with our net energy calculations that suggest that intermolecular interactions (TMA–TMA dimer bonding) are dominant over TMA–substrate interactions in stabilizing the system. STM images demonstrate the robustness of the TMA films on SLG, where the molecular network extends across the variable topography of the SLG substrates and remains intact after rinsing and drying the films. These results help to elucidate molecular behavior on SLG and suggest significant similarities between adsorption on HOPG and SLG.
Abstract:
In this paper, we used a nonconservative Lagrangian mechanics approach to formulate a new statistical algorithm for fluid registration of 3-D brain images, named SAFIRA (statistically assisted fluid image registration algorithm). A nonstatistical version of this algorithm was implemented, in which the deformation was regularized by penalizing deviations from a zero rate of strain. In the statistical formulation, the terms regularizing the deformation included the covariance of the deformation matrices (Σ) and of the vector fields (q). Here, we used a Lagrangian framework to reformulate this algorithm, showing that the regularizing terms essentially allow nonconservative work to occur during the flow. Given 3-D brain images from a group of subjects, vector fields and their corresponding deformation matrices are computed in a first round of registrations using the nonstatistical implementation. Covariance matrices for both the deformation matrices and the vector fields are then obtained and incorporated (separately or jointly) into the nonconservative terms, creating four versions of SAFIRA. We evaluated and compared our algorithms' performance on 92 3-D brain scans from healthy monozygotic and dizygotic twins; 2-D validations are also shown for corpus callosum shapes delineated at the midline in the same subjects. After preliminary tests to demonstrate each method, we compared their detection power using tensor-based morphometry (TBM), a technique to analyze local volumetric differences in brain structure. We compared the accuracy of each algorithm variant using various statistical metrics derived from the images and deformation fields. All these tests were also run with a traditional fluid method, which has been widely used in TBM studies. The versions incorporating vector-based empirical statistics on brain variation were consistently more accurate than their counterparts when used for automated volumetric quantification in new brain images, suggesting the advantages of this approach for large-scale neuroimaging studies.
Abstract:
We apply an information-theoretic cost metric, the symmetrized Kullback-Leibler (sKL) divergence, or J-divergence, to fluid registration of diffusion tensor images. The difference between diffusion tensors is quantified based on the sKL-divergence of their associated probability density functions (PDFs). Three-dimensional DTI data from 34 subjects were fluidly registered to an optimized target image. To allow large image deformations but preserve image topology, we regularized the flow with a large-deformation diffeomorphic mapping based on the kinematics of a Navier-Stokes fluid. A driving force was developed to minimize the J-divergence between the deforming source and target diffusion functions, while reorienting the flowing tensors to preserve fiber topography. In initial experiments, we showed that the sKL-divergence based on full diffusion PDFs is adaptable to higher-order diffusion models, such as high angular resolution diffusion imaging (HARDI). The sKL-divergence was sensitive to subtle differences between two diffusivity profiles, showing promise for nonlinear registration applications and multisubject statistical analysis of HARDI data.
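For reference, with the convention J(p,q) = KL(p‖q) + KL(q‖p), the sKL-divergence named above has a standard closed form for the zero-mean Gaussian displacement PDFs associated with diffusion tensors (a textbook identity, not a result of the paper):

```latex
% Symmetrized KL (J) divergence of densities p and q:
J(p,q) = D_{\mathrm{KL}}(p \,\|\, q) + D_{\mathrm{KL}}(q \,\|\, p)
       = \int \bigl( p(\mathbf{x}) - q(\mathbf{x}) \bigr)
         \log \frac{p(\mathbf{x})}{q(\mathbf{x})} \, d\mathbf{x}

% Closed form for zero-mean Gaussians with covariances
% \Sigma_1, \Sigma_2 in n dimensions:
J = \tfrac{1}{2} \operatorname{tr}\!\bigl( \Sigma_2^{-1}\Sigma_1
    + \Sigma_1^{-1}\Sigma_2 \bigr) - n
```

The log-determinant terms of the two directed KL divergences cancel in the symmetrized form, which is why only trace terms remain.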
Abstract:
Reliable quantitative analysis of white matter connectivity in the brain is an open problem in neuroimaging, with common solutions requiring tools for fiber tracking, tractography segmentation, and estimation of intersubject correspondence. This paper proposes a novel template-matching approach to the problem. In the proposed method, a deformable fiber-bundle model is aligned directly with the subject tensor field, skipping the fiber-tracking step. Furthermore, the use of a common template eliminates the need for tractography segmentation and defines intersubject shape correspondence. The method is validated using phantom DTI data, and applications are presented, including automatic fiber-bundle reconstruction and tract-based morphometry.
Abstract:
The ENIGMA (Enhancing NeuroImaging Genetics through Meta-Analysis) Consortium was set up to analyze brain measures and genotypes from multiple sites across the world to improve the power to detect genetic variants that influence the brain. Diffusion tensor imaging (DTI) yields quantitative measures sensitive to brain development and degeneration, and some common genetic variants may be associated with white matter integrity or connectivity. DTI measures, such as the fractional anisotropy (FA) of water diffusion, may be useful for identifying genetic variants that influence brain microstructure. However, genome-wide association studies (GWAS) require large populations to obtain sufficient power to detect and replicate significant effects, motivating a multi-site consortium effort. As part of an ENIGMA-DTI working group, we analyzed high-resolution FA images from multiple imaging sites across North America, Australia, and Europe to address the challenge of harmonizing imaging data collected at multiple sites. Four hundred images of healthy adults aged 18-85 from four sites were used to create a template and a corresponding skeletonized FA image as a common reference space. Using twin and pedigree samples of different ethnicities, we used our common template to evaluate the heritability of tract-derived FA measures. We show that our template is reliable for integrating multiple datasets by combining results through meta-analysis and unifying the data through exploratory mega-analyses. Our results may help prioritize regions of the FA map that are consistently influenced by additive genetic factors for future genetic discovery studies. Protocols and templates are publicly available at http://enigma.loni.ucla.edu/ongoing/dti-working-group/.
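A meta-analysis of the kind mentioned above is commonly a fixed-effect, inverse-variance weighted pooling of per-site effect estimates. The sketch below is a generic illustration with hypothetical numbers, not the ENIGMA pipeline:

```python
def fixed_effect_meta(estimates, variances):
    """Fixed-effect meta-analysis: each site's effect estimate is
    weighted by the inverse of its variance; the pooled variance is
    the reciprocal of the summed weights."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Hypothetical per-site heritability estimates and their variances.
est, var = fixed_effect_meta([0.6, 0.5, 0.7], [0.01, 0.02, 0.04])
```

Sites with more precise estimates (smaller variances) dominate the pooled value, which is what makes a common template and harmonized protocol important: they keep the per-site estimates comparable before pooling.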
Abstract:
Information from the full diffusion tensor (DT) was used to compute voxel-wise genetic contributions to brain fiber microstructure. First, we designed a new multivariate intraclass correlation formula in the log-Euclidean framework. We then used the full multivariate structure of the tensor in a multivariate version of a voxel-wise maximum-likelihood structural equation model (SEM) that computes the variance contributions in the DTs from genetic (A), common environmental (C), and unique environmental (E) factors. Our algorithm was tested on DT images from 25 identical and 25 fraternal twin pairs. After linear and fluid registration to a mean template, we computed the intraclass correlation and Falconer's heritability statistic for several scalar DT-derived measures and for the full multivariate tensors. Covariance matrices were found from the DTs and input into the SEM. Analyzing the full DT enhanced the detection of A and C effects. This approach should empower imaging genetics studies that use DTI.
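Falconer's heritability statistic computed above has a simple closed form in terms of the monozygotic (MZ) and dizygotic (DZ) twin intraclass correlations. A minimal, generic sketch (the correlation values shown are illustrative, not the study's data):

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's estimates from twin intraclass correlations:
    additive genetic (A), common environmental (C), and unique
    environmental (E) variance components."""
    a2 = 2.0 * (r_mz - r_dz)   # heritability h^2: MZ pairs share twice
                               # the additive genetic variance of DZ pairs
    c2 = 2.0 * r_dz - r_mz     # shared-environment component
    e2 = 1.0 - r_mz            # unique environment plus measurement error
    return a2, c2, e2

# Hypothetical voxel-wise FA correlations for MZ and DZ pairs.
a2, c2, e2 = falconer_ace(0.80, 0.50)
```

By construction the three components sum to 1, so each can be read directly as a proportion of trait variance; the SEM approach in the abstract fits the same A/C/E decomposition by maximum likelihood rather than by this closed form.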
Abstract:
There is a major effort in medical imaging to develop algorithms to extract information from DTI and HARDI, which provide detailed information on brain integrity and connectivity. Although these images now offer extraordinarily high angular resolution and spatial detail, including an entire manifold of information at each point in the 3D image, there has been no readily available means to view the results. This impedes HARDI research, which needs some way to check the plausibility and validity of image-processing operations on HARDI data and to appreciate data features or invariants that might serve as a basis for new directions in image segmentation, registration, and statistics. We present a set of tools for interactive display of HARDI data, including both a local rendering application and an off-screen renderer that works with a web-based viewer. Visualizations are presented after registration and averaging of HARDI data from 90 human subjects, revealing important details that could not be directly appreciated using conventional display of scalar images.
Abstract:
An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images (proton density-, T1-, and T2-weighted) of the human head is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized, then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.