938 results for H-INDEX DISTRIBUTION
Abstract:
Electrical impedance tomography is a novel technology capable of quantifying ventilation distribution in the lung in real time during various therapeutic manoeuvres. The technique requires changes to the patient’s position to place the electrical impedance tomography electrodes circumferentially around the thorax. The impact of these position changes on the time taken to stabilise the regional distribution of ventilation determined by electrical impedance tomography is unknown. This study aimed to determine the time taken for the regional distribution of ventilation determined by electrical impedance tomography to stabilise after a change of position. Eight healthy male volunteers were connected to electrical impedance tomography and a pneumotachometer. After 30 minutes of stabilisation supine, participants were moved into a 60-degree Fowler’s position and then returned to supine, spending 30 minutes in each position. Concurrent readings of ventilation distribution and tidal volumes were taken every five minutes. A mixed regression model with a random intercept was used to compare the positions and changes over time. The anterior-posterior distribution stabilised after ten minutes in Fowler’s position and ten minutes after returning to supine. Left-right stabilisation was achieved after 15 minutes in both Fowler’s position and supine. A minimum of 15 minutes of stabilisation should therefore be allowed when assessing ventilation distribution in spontaneously breathing individuals, as this allows stabilisation to occur in both the anterior-posterior and the left-right direction.
Abstract:
Background More than 60% of new strokes each year are "mild" in severity, and this proportion is expected to rise in the years to come. Within our current health care system, those with "mild" stroke are typically discharged home within days, without further referral to health or rehabilitation services other than advice to see their family physician. Those with mild stroke often have limited access to support from health professionals with stroke-specific knowledge, who would typically provide critical information on topics such as secondary stroke prevention, community reintegration, medication counselling and problem solving with regard to specific concerns that arise. Isolation and lack of knowledge may lead to a worsening of health problems, including stroke recurrence, and to unnecessary and costly health care utilization. The purpose of this study is to assess the effectiveness, for individuals who experience a first "mild" stroke, of a sustainable, low-cost, multimodal support intervention (comprising information, education and telephone support) - "WE CALL" - compared to a passive intervention (providing the name and phone number of a resource person to contact if the need arises) - "YOU CALL" - on two primary outcomes: unplanned use of health services for negative events and quality of life. Method/Design We will recruit 384 adults who meet the inclusion criteria for a first mild stroke across six Canadian sites. Baseline measures will be taken within the first month after stroke onset. Participants will be stratified according to comorbidity level and randomised to one of two groups: YOU CALL or WE CALL. Both interventions will be offered over a six-month period. Primary outcomes include unplanned use of health services for negative events (frequency calendar) and quality of life (EQ-5D and Quality of Life Index).
Secondary outcomes include participation level (LIFE-H), depression (Beck Depression Inventory II) and use of health services for health promotion or prevention (frequency calendar). Blinded assessors will gather data at mid-intervention, end of intervention and one-year follow-up. Discussion If effective, this multimodal intervention could be delivered in both urban and rural environments, since existing infrastructure such as regional stroke centers and secondary stroke prevention clinics would make it deliverable and sustainable.
Abstract:
BACKGROUND CONTEXT: The Neck Disability Index is frequently used to measure neck-related outcomes. The statistical rigor of the Neck Disability Index has been assessed with conflicting outcomes. To date, Confirmatory Factor Analysis of the Neck Disability Index has not been reported for a suitably large population study. Because the Neck Disability Index is not a condition-specific measure of neck function, initial Confirmatory Factor Analysis should consider problematic neck patients as a homogeneous group. PURPOSE: We sought to analyze the factor structure of the Neck Disability Index through Confirmatory Factor Analysis in a symptomatic, homogeneous neck population, with respect to the pooled population and gender subgroups. STUDY DESIGN: This was a secondary analysis of pooled data. PATIENT SAMPLE: A total of 1,278 symptomatic neck patients (67.5% female, median age 41 years), 803 nonspecific and 475 with whiplash-associated disorder. OUTCOME MEASURES: The Neck Disability Index was used to measure outcomes. METHODS: We analyzed pooled baseline data from six independent studies of patients with neck problems who completed Neck Disability Index questionnaires at baseline. The Confirmatory Factor Analysis was considered in three scenarios: the full sample, males only and females only. Models were compared empirically for best fit. RESULTS: Two-factor models have good psychometric properties across both the pooled and sex subgroups. However, according to these analyses, the one-factor solution is preferable from the perspectives of both statistics and parsimony. The two-factor model was close to significant for the male subgroup (p<.07), where questions separated into constructs of mental function (pain, reading, headaches and concentration) and physical function (personal care, lifting, work, driving, sleep, and recreation).
CONCLUSIONS: The Neck Disability Index demonstrated a one-factor structure when analyzed by Confirmatory Factor Analysis in a pooled, homogeneous sample of patients with neck problems. However, a two-factor model did approach significance for male subjects, where questions separated into constructs of mental and physical function. Further investigation in different conditions and in subgroup- and sex-specific populations is warranted.
Abstract:
Background The purpose of this study was to adapt the Foot Function Index to Spanish (FFI-Sp) and validate it following the guidelines of the American Academy of Orthopaedic Surgeons. Methods A cross-sectional study was conducted with 80 participants with some foot pathology. A statistical analysis was performed, including a correlation study with other questionnaires (the Foot Health Status Questionnaire, EuroQol 5-D, Visual Analogue Pain Scale, and the Short Form SF-12 Health Survey). Data analysis included reliability, construct and criterion-related validity, and factor analyses. Results The principal components analysis with varimax rotation produced 3 principal factors that explained 80% of the variance. The confirmatory factor analysis showed an acceptable fit, with a comparative fit index of 0.78. The FFI-Sp demonstrated excellent internal consistency on the pain (0.95) and disability (0.96) subscales, with activity limitation scoring lowest (0.69). The correlation between the FFI-Sp and the other questionnaires was moderate to high. Conclusions The Spanish version of the Foot Function Index (FFI-Sp) is a valid and reliable tool with very good internal consistency for assessing pain, disability and limitation of foot function, for use both in the clinic and in research.
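The internal-consistency figures quoted above are Cronbach's alpha values. As a minimal sketch of the underlying computation (not the study's code; the item scores below are invented for illustration), alpha can be obtained from a subjects-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1 - item_var / total_var)

# Toy example: three items that mostly move together across five subjects,
# so the scale should show high internal consistency.
scores = np.array([[2, 3, 2],
                   [4, 4, 5],
                   [1, 2, 1],
                   [5, 5, 4],
                   [3, 3, 3]])
print(round(cronbach_alpha(scores), 2))  # 0.95
```

Items that vary independently of one another would drive alpha toward zero, which is why the activity-limitation subscale's 0.69 reads as markedly weaker than the other two.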
Abstract:
We commend Swanenburg et al. (2013) on the translation, development, and clinimetric analysis of the NDI-G. However, the dual-factor structure found with factor analysis and the high level of internal consistency (IC) highlighted in their discussion were not emphasized in the abstract or conclusion. These points may imply some inconsistencies with the final conclusions, since determining stable point estimates with the study's small sample is exceedingly difficult.
Abstract:
Low levels of physical fitness often affect other aspects of living for people with intellectual disability (ID), such as dependency in carrying out activities of daily living; poor fitness is thus associated with high levels of dependency. The aim of this study was to explore the criterion validity of the Barthel index against a physical fitness test. An observational cross-sectional study was conducted. Data from the Barthel index and a physical fitness test were collected for 122 adults with intellectual disability. The data were analysed to examine the relationship between four categories of the physical fitness test and the Barthel index. The correlations between the Barthel index and leg, abdominal and arm strength confirm that these physical tests are predictive of the Barthel index. The balance variables, functional reach and single-leg stance with eyes open, also showed relationships with the Barthel index. We found important correlations between the physical fitness test and the Barthel index, so we can affirm that some physical fitness features are predictor variables of the Barthel index.
Abstract:
Species distribution models (SDMs) are considered to exemplify pattern-based rather than process-based models of a species' response to its environment. Hence, when used to map species distribution, the purpose of SDMs can be viewed as interpolation: species response is measured at a few sites in the study region, and the aim is to interpolate species response at intermediate sites. Increasingly, however, SDMs are also being used to extrapolate species-environment relationships beyond the limits of the study region as represented by the training data. Regardless of whether SDMs are used for interpolation or extrapolation, the debate over how to implement them focusses on evaluating the quality of the SDM, both ecologically and mathematically. This paper proposes a framework that includes useful tools previously employed to address uncertainty in habitat modelling. Together with existing frameworks for addressing uncertainty in modelling more generally, we outline how these existing tools help inform the development of a broader framework for addressing uncertainty specifically when building habitat models. We focus on extrapolation rather than interpolation, where the emphasis on predictive performance is diluted by concerns for robustness and ecological relevance. We are cognisant of the dangers of excessively propagating uncertainty. Thus, although the framework provides a smorgasbord of approaches, it is intended that the exact menu selected for a particular application be small in size and target the most important sources of uncertainty. We conclude with some guidance on a strategic approach to identifying these important sources of uncertainty. Whilst various aspects of uncertainty in SDMs have previously been addressed, either as the main aim of a study or as a necessary element of constructing SDMs, this is the first paper to provide a more holistic view.
Abstract:
We propose a new information-theoretic metric, the symmetric Kullback-Leibler divergence (sKL-divergence), to measure the difference between two water diffusivity profiles in high angular resolution diffusion imaging (HARDI). Water diffusivity profiles are modeled as probability density functions on the unit sphere, and the sKL-divergence is computed from a spherical harmonic series, which greatly reduces computational complexity. Adjustment of the orientation of diffusivity functions is essential when the image is being warped, so we propose a fast algorithm to determine the principal direction of diffusivity functions using principal component analysis (PCA). We compare sKL-divergence with other inner-product based cost functions using synthetic samples and real HARDI data, and show that the sKL-divergence is highly sensitive in detecting small differences between two diffusivity profiles and therefore shows promise for applications in the nonlinear registration and multisubject statistical analysis of HARDI data.
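The paper evaluates the sKL-divergence from spherical harmonic coefficients; as a minimal sketch of the underlying definition only, here it is for two discrete densities sampled at a common (hypothetical) set of sphere directions, with made-up values:

```python
import math

def kl(p, q):
    """Discrete Kullback-Leibler divergence KL(p || q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def skl(p, q):
    """Symmetric KL-divergence: KL(p||q) + KL(q||p)."""
    return kl(p, q) + kl(q, p)

# Two toy diffusivity profiles sampled at the same four sphere directions,
# normalised to sum to 1 so they behave as discrete probability densities.
p = [0.4, 0.3, 0.2, 0.1]
q = [0.1, 0.2, 0.3, 0.4]

print(skl(p, q))  # positive; zero only when the profiles coincide
```

Unlike plain KL, the symmetrised form treats the two profiles interchangeably, which matters when neither profile is a privileged "reference" in registration or group statistics.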
Abstract:
Diffusion-weighted magnetic resonance (MR) imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of 6 directions, second-order tensors can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than 6 gradient directions have been employed to address this issue. In this paper, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric positive definite matrices. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once the optimal TDF has been estimated, the diffusion orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function.
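The second-order tensor fit from six or more gradient directions mentioned above is conventionally done by log-linear least squares. A self-contained sketch under standard single-tensor DTI assumptions (unit baseline signal, one b-value, noiseless synthetic data; the direction set and tensor are illustrative, not from the paper):

```python
import numpy as np

def fit_tensor(bvals, gradients, signals, s0):
    """Log-linear least-squares fit of a symmetric 3x3 diffusion tensor.

    Solves  -ln(S_i / S0) / b_i = g_i^T D g_i  for the 6 unique entries of D.
    """
    g = np.asarray(gradients, dtype=float)
    y = -np.log(np.asarray(signals) / s0) / np.asarray(bvals)
    # Design matrix rows: [gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz]
    B = np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ])
    d, *_ = np.linalg.lstsq(B, y, rcond=None)
    # d = [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])

# Synthetic single-fiber example: an anisotropic tensor aligned with x.
true_D = np.diag([1.5e-3, 0.4e-3, 0.4e-3])
dirs = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1],
                 [1, -1, 0], [1, 0, -1], [0, 1, -1]], dtype=float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
b, s0 = 1000.0, 1.0
signals = s0 * np.exp(-b * np.einsum('ij,jk,ik->i', dirs, true_D, dirs))
D_hat = fit_tensor(np.full(6, b), dirs, signals, s0)
print(np.allclose(D_hat, true_D))  # True in this noiseless example
```

A single tensor recovered this way is exactly what fails in crossing-fiber voxels, which motivates the mixture-of-tensors TDF model.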
Abstract:
Fractional anisotropy (FA), a very widely used measure of fiber integrity based on diffusion tensor imaging (DTI), is a problematic concept, as it is influenced by several quantities including the number of dominant fiber directions within each voxel, each fiber's anisotropy, and partial volume effects from neighboring gray matter. With high-angular resolution diffusion imaging (HARDI) and the tensor distribution function (TDF), one can reconstruct multiple underlying fibers per voxel and their individual anisotropy measures by representing the diffusion profile as a probabilistic mixture of tensors. We found that FA, when compared with TDF-derived anisotropy measures, correlates poorly with individual fiber anisotropy and may sub-optimally detect disease processes that affect myelination. By contrast, mean diffusivity (MD) as defined in standard DTI appears to be more accurate. Overall, we argue that novel measures derived from the TDF approach may yield more sensitive and accurate information than DTI-derived measures.
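For reference, the standard single-tensor definitions of FA and MD criticised above can be written in terms of the tensor's eigenvalues; a short sketch (the eigenvalue choices are illustrative, not from the paper):

```python
import numpy as np

def fa_md(eigvals):
    """Fractional anisotropy and mean diffusivity from tensor eigenvalues."""
    lam = np.asarray(eigvals, dtype=float)
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return fa, md

# Isotropic diffusion: FA is (numerically) zero.
print(fa_md([1e-3, 1e-3, 1e-3]))
# Strongly anisotropic single fiber: FA approaches 1.
print(fa_md([1.7e-3, 0.2e-3, 0.2e-3]))
```

The single-voxel FA of a two-fiber crossing can sit anywhere between these extremes even when each fiber is highly anisotropic, which is the confound the TDF-derived per-fiber measures are meant to avoid.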
Abstract:
High-angular resolution diffusion imaging (HARDI) can reconstruct fiber pathways in the brain with extraordinary detail, identifying anatomical features and connections not seen with conventional MRI. HARDI overcomes several limitations of standard diffusion tensor imaging, which fails to model diffusion correctly in regions where fibers cross or mix. As HARDI can accurately resolve sharp signal peaks in angular space where fibers cross, we studied how many gradients are required in practice to compute accurate orientation density functions, to better understand the tradeoff between longer scanning times and greater angular precision. We computed orientation density functions analytically from tensor distribution functions (TDFs), which model the HARDI signal at each point as a unit-mass probability density on the 6D manifold of symmetric positive definite tensors. In simulated two-fiber systems with varying Rician noise, we assessed how many diffusion-sensitized gradients were sufficient to (1) accurately resolve the diffusion profile, and (2) measure the exponential isotropy (EI), a TDF-derived measure of fiber integrity that exploits the full multidirectional HARDI signal. At lower SNR, the reconstruction accuracy, measured using the Kullback-Leibler divergence, rapidly increased with additional gradients, and EI estimation accuracy plateaued at around 70 gradients.
Abstract:
Index tracking is an investment approach whose primary objective is to keep the portfolio return as close as possible to a target index without purchasing all index components. The main purpose is to minimize the tracking error between the returns of the selected portfolio and a benchmark. In this paper, quadratic as well as linear models are presented for minimizing the tracking error. Uncertainty in the input data is handled using a tractable robust framework that controls the level of conservatism while maintaining linearity. The linearity of the proposed robust optimization models allows a simple implementation in an ordinary optimization software package to find the optimal robust solution. The proposed model employs the Morgan Stanley Capital International Index as the target index, and results are reported for six national indices: Japan, the USA, the UK, Germany, Switzerland and France. The performance of the proposed models is evaluated using several financial criteria, e.g. the information ratio, market ratio, Sharpe ratio and Treynor ratio. The preliminary results demonstrate that the proposed model lowers the tracking error while raising the values of the portfolio performance measures.
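The quadratic core of such tracking models can be sketched as a small equality-constrained least-squares problem. This is an illustration only, with made-up return data and just a budget constraint; the paper's robust and linear formulations add further structure (conservatism control, no-short or cardinality limits) not shown here:

```python
import numpy as np

def min_tracking_error(R, r):
    """Weights minimising ||R w - r||^2 subject to sum(w) = 1.

    R : T x n matrix of asset returns; r : length-T index returns.
    Solved via the KKT system of the equality-constrained quadratic program.
    Short sales are allowed in this sketch.
    """
    T, n = R.shape
    Q = 2 * R.T @ R
    ones = np.ones(n)
    KKT = np.block([[Q, ones[:, None]],
                    [ones[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([2 * R.T @ r, [1.0]])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]  # drop the Lagrange multiplier

rng = np.random.default_rng(0)
R = rng.normal(0.001, 0.02, size=(250, 4))   # four assets, 250 trading days
true_w = np.array([0.4, 0.3, 0.2, 0.1])
r = R @ true_w                               # index built from the same assets
w = min_tracking_error(R, r)
print(np.round(w, 3))  # recovers the weights; tracking error is ~0 here
```

Because the toy index lies exactly in the span of the assets, the optimiser reproduces the construction weights; with a real benchmark and a restricted asset subset, a strictly positive residual tracking error remains.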
Abstract:
This paper describes part of an engineering study undertaken to demonstrate that a multi-megawatt photovoltaic (PV) generation system could be connected to a rural 11 kV feeder without creating power quality issues for other consumers. The paper concentrates solely on the voltage regulation aspect of the study, as this was its most innovative part. The study was carried out using the time-domain software package PSCAD/EMTDC. The software model included real-time data input of actual measured load and scaled PV generation data, along with real-time substation voltage regulator and PV inverter reactive power control. The model outputs plot the voltage, current and power variations throughout the daily load and PV generation cycle. Other aspects of the study not described in the paper include the analysis of harmonics, voltage flicker, power factor, voltage unbalance and system losses.
Abstract:
The development of Electric Energy Storage (EES) integrated with Renewable Energy Resources (RER) has increased the use of optimal scheduling strategies in distribution systems. Optimal scheduling of EES can reduce the cost of energy purchased by retailers while improving reliability for customers in the distribution system. This paper proposes an optimal scheduling strategy for EES and evaluates its impact on the reliability of a distribution system. A case study shows the impact of the proposed strategy on the reliability indices of a distribution system.
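As a toy illustration of why scheduling EES reduces purchase cost (hypothetical prices and battery parameters, not from the paper; a real scheduler must also respect state-of-charge dynamics, efficiency losses and reliability constraints, which this greedy sketch ignores):

```python
def greedy_schedule(prices, capacity, power):
    """Charge in the cheapest hours, discharge in the dearest ones.

    prices   : hourly energy prices
    capacity : storage capacity (MWh)
    power    : charge/discharge rate (MW), so capacity/power hours fill it
    """
    n_slots = int(capacity / power)          # hours needed to fill or empty
    order = sorted(range(len(prices)), key=lambda h: prices[h])
    charge_hours = set(order[:n_slots])      # cheapest hours
    discharge_hours = set(order[-n_slots:])  # dearest hours
    return ['charge' if h in charge_hours
            else 'discharge' if h in discharge_hours
            else 'idle' for h in range(len(prices))]

prices = [30, 25, 20, 22, 35, 60, 80, 75]    # $/MWh, hypothetical day profile
print(greedy_schedule(prices, capacity=2.0, power=1.0))
```

The retailer's saving per cycle is the price spread between the discharge and charge hours times the energy shifted; an optimal (rather than greedy) formulation would trade part of that arbitrage profit against reliability improvements, as the paper's strategy does.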
Abstract:
Distribution Revolution is a collection of interviews with leading film and TV professionals concerning the many ways that digital delivery systems are transforming the entertainment business. These interviews provide lively insider accounts from studio executives, distribution professionals, and creative talent of the tumultuous transformation of film and TV in the digital era. The first section features interviews with top executives at major Hollywood studios, providing a window into the big-picture concerns of media conglomerates with respect to changing business models, revenue streams, and audience behaviors. The second focuses on innovative enterprises that are providing path-breaking models for new modes of content creation, curation, and distribution—creatively meshing the strategies and practices of Hollywood and Silicon Valley. And the final section offers insights from creative talent whose professional practices, compensation, and everyday working conditions have been transformed over the past ten years. Taken together, these interviews demonstrate that virtually every aspect of the film and television businesses is being affected by the digital distribution revolution, a revolution that has likely just begun. Interviewees include:
• Gary Newman, Chairman, 20th Century Fox Television
• Kelly Summers, Former Vice President, Global Business Development and New Media Strategy, Walt Disney Studios
• Thomas Gewecke, Chief Digital Officer and Executive Vice President, Strategy and Business Development, Warner Bros. Entertainment
• Ted Sarandos, Chief Content Officer, Netflix
• Felicia D. Henderson, Writer-Producer, Soul Food, Gossip Girl
• Dick Wolf, Executive Producer and Creator, Law & Order