96 results for Filmic approach methods


Relevance:

30.00%

Publisher:

Abstract:

Background
Low patient adherence to treatment is associated with poorer health outcomes in bronchiectasis. We sought to use the Theoretical Domains Framework (TDF) (a framework derived from 33 psychological theories) and behavioural change techniques (BCTs) to define the content of an intervention to change patients’ adherence in bronchiectasis (Stages 1 and 2) and stakeholder expert panels to define its delivery (Stage 3).

Methods
We conducted semi-structured interviews with patients with bronchiectasis about barriers and motivators to adherence to treatment, and focus groups or interviews with bronchiectasis healthcare professionals (HCPs) about their ability to change patients’ adherence to treatment. We coded these data to the 12-domain TDF to identify relevant domains for patients and HCPs (Stage 1). Three researchers independently mapped relevant domains for patients and HCPs to a list of 35 BCTs to identify two lists (patient and HCP) of potential BCTs for inclusion (Stage 2). We presented these lists to three expert panels (two with patients and one with HCPs/academics from across the UK). We asked panels who the intervention should target, who should deliver it, at what intensity, in what format and setting, and using which outcome measures (Stage 3).

Results
Eight TDF domains were perceived to influence patients’ and HCPs’ behaviours: Knowledge, Skills, Beliefs about capability, Beliefs about consequences, Motivation, Social influences, Behavioural regulation and Nature of behaviours (Stage 1). Twelve BCTs common to patients and HCPs were included in the intervention: Monitoring, Self-monitoring, Feedback, Action planning, Problem solving, Persuasive communication, Goal/target specified:behaviour/outcome, Information regarding behaviour/outcome, Role play, Social support and Cognitive restructuring (Stage 2). Participants thought that an individualised combination of these BCTs should be delivered to all patients, by a member of staff, over several one-to-one and/or group visits in secondary care. Efficacy should be measured using pulmonary exacerbations, hospital admissions and quality of life (Stage 3).

Conclusions
Twelve BCTs form the intervention content. An individualised selection from these 12 BCTs will be delivered to all patients over several face-to-face visits in secondary care. Future research should focus on developing physical materials to aid delivery of the intervention prior to feasibility and pilot testing. If effective, this intervention may improve adherence and health outcomes for those with bronchiectasis in the future.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new approach to speech enhancement from single-channel measurements involving both noise and channel distortion (i.e., convolutional noise), and demonstrates its applications for robust speech recognition and for improving noisy speech quality. The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. The approach adds three novel developments to our previous LMS research. First, we address the problem of channel distortion as well as additive noise. Second, we present an improved method for modeling noise for speech estimation. Third, we present an iterative algorithm which updates the noise and channel estimates of the corpus data model. In experiments using speech recognition as a test with the Aurora 4 database, the use of our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In another comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to the other methods for noisy speech enhancement.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new approach to single-channel speech enhancement involving both noise and channel distortion (i.e., convolutional noise). The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. The approach adds three novel developments to our previous LMS research. First, we address the problem of channel distortion as well as additive noise. Second, we present an improved method for modeling noise. Third, we present an iterative algorithm for improved speech estimates. In experiments using speech recognition as a test with the Aurora 4 database, the use of our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In another comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to the other methods for noisy speech enhancement.
Index Terms: corpus-based speech model, longest matching segment, speech enhancement, speech recognition
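The segment-search idea at the heart of the LMS approach can be illustrated with a toy sketch: given a quantized frame-label sequence for the noisy input and one for a clean corpus, find the longest contiguous run they share. The real system matches acoustic feature segments under noise and channel models; the data and names below are purely illustrative.

```python
def longest_matching_segment(query, corpus):
    """Longest common contiguous segment (classic dynamic programming)
    between a quantized query frame sequence and a clean-speech corpus."""
    best_len, best_end = 0, 0
    prev = [0] * (len(corpus) + 1)  # match lengths ending at previous query frame
    for i in range(1, len(query) + 1):
        cur = [0] * (len(corpus) + 1)
        for j in range(1, len(corpus) + 1):
            if query[i - 1] == corpus[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend the diagonal match
                if cur[j] > best_len:
                    best_len, best_end = cur[j], j
        prev = cur
    return corpus[best_end - best_len:best_end]

# frame labels stand in for quantized acoustic features
noisy = list("abXcdefXg")
clean = list("zcdefq")
print(longest_matching_segment(noisy, clean))  # → ['c', 'd', 'e', 'f']
```

In the full method this retrieval is iterated jointly with noise and channel estimation, so that corpus segments are compared against the input under the current distortion model rather than literally.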

Relevance:

30.00%

Publisher:

Abstract:

Several one-dimensional design methods have been used to predict the off-design performance of three modern centrifugal compressors for automotive turbocharging. The three methods used are single-zone, two-zone, and a more recent statistical method. The predicted results from each method are compared against empirical data taken from standard hot gas stand tests for each turbocharger. Each of the automotive turbochargers considered in this study have notably different geometries and are of varying application. Due to the non-adiabatic test conditions, the empirical data has been corrected for the effect of heat transfer to ensure comparability with the 1D models. Each method is evaluated for usability and accuracy in both pressure ratio and efficiency prediction. The paper presents an insight into the limitations of each of these models when applied to one-dimensional automotive turbocharger design, and proposes that a corrected single-zone modelling approach has the greatest potential for further development, whilst the statistical method could be immediately introduced to a design process where design variations are limited.

Relevance:

30.00%

Publisher:

Abstract:

Worldwide, the building sector requires the production of 4 billion tonnes of cement annually, consuming more than 40% of global energy. Alkali activated “cementless” binders have recently emerged as a novel eco-friendly construction material with a promising potential to replace ordinary Portland cement. These binders consist of a class of inorganic polymer formed mainly by the reaction between an alkaline solution and an aluminosilicate source. Precursor materials for this reaction can be found in secondary material streams from different industrial sectors, from energy to agro-alimentary. However, the suitability of these materials in developing the polymerisation reaction must be assessed through a detailed chemical and physical characterisation, ensuring the availability of required chemical species in the appropriate quantity and physical state. Furthermore, the binder composition needs to be defined in terms of proper alkali activation dosages, water content in the mix, and curing conditions. The mix design must satisfy mechanical requirements and compliance to desired engineering properties (workability, setting time) for ensuring the suitability of the binder in replacing Portland cement in concrete applications. This paper offers a structured approach for the development of secondary material-based binders, from their identification to mix design and production procedure development. Essential features of precursor material can be determined through chemical and physical characterisation methods and advanced microscope techniques. Important mixing parameters and binder properties requirements are examined and some examples of developed binders are reported.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: We proposed to exploit hypoxia-inducible factor (HIF)-1alpha overexpression in prostate tumours and use this transcriptional machinery to control the expression of the suicide gene cytosine deaminase (CD) through binding of HIF-1alpha to arrangements of hypoxia response elements. CD is a prodrug activation enzyme, which converts inactive 5-fluorocytosine to active 5-fluorouracil (5-FU), allowing selective killing of vector containing cells.

METHODS: We developed a pair of vectors, containing either five or eight copies of the hypoxia response element (HRE) isolated from the vascular endothelial growth factor (pH5VCD) or glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (pH8GCD) gene, respectively. The kinetics of the hypoxic induction of the vectors and sensitization effects were evaluated in 22Rv1 and DU145 cells in vitro.

RESULTS: The CD protein was selectively detected in lysates of transiently transfected 22Rv1 and DU145 cells following hypoxic exposure. This is the first evidence of GAPDH HREs being used to control a suicide gene therapy strategy. Detectable CD levels were sustained upon reoxygenation and prolonged hypoxic exposures. Hypoxia-induced chemoresistance to 5-FU was overcome in both cell lines treated with this suicide gene therapy approach. Hypoxic transfectants were sensitized to prodrug concentrations that were ten-fold lower than those that are clinically relevant. Moreover, the surviving fraction of reoxygenated transfectants could be further reduced with the concomitant delivery of clinically relevant single radiation doses.

CONCLUSIONS: This strategy thus has the potential to sensitize the hypoxic compartment of prostate tumours and improve the outcome of current therapies.

Relevance:

30.00%

Publisher:

Abstract:

The paper is a reflection on the use of photographs in multiple case study research. It explores the crossovers between interpreting visual artefacts, the qualitative approach to case study research in organisations, and the move from cases to theory guided by the grounded theory tenets. The paper proposes an additional use of photographs as a visual method to those in the literature, as a device for data analysis. Photograph-based analysis techniques are explored, using a sequence of individual images and photo collages on case data, moving from the interpretation of single-case to multiple-case themes. This makes the case for using photograph analysis as an interpretation device in case research to illuminate theory development.

Relevance:

30.00%

Publisher:

Abstract:

Background: Heckman-type selection models have been used to control HIV prevalence estimates for selection bias when participation in HIV testing and HIV status are associated after controlling for observed variables. These models typically rely on the strong assumption that the error terms in the participation and the outcome equations that comprise the model are distributed as bivariate normal.
Methods: We introduce a novel approach for relaxing the bivariate normality assumption in selection models using copula functions. We apply this method to estimating HIV prevalence and new confidence intervals (CI) in the 2007 Zambia Demographic and Health Survey (DHS) by using interviewer identity as the selection variable that predicts participation (consent to test) but not the outcome (HIV status).
Results: We show in a simulation study that selection models can generate biased results when the bivariate normality assumption is violated. In the 2007 Zambia DHS, HIV prevalence estimates are similar irrespective of the structure of the association assumed between participation and outcome. For men, we estimate a population HIV prevalence of 21% (95% CI = 16%–25%) compared with 12% (11%–13%) among those who consented to be tested; for women, the corresponding figures are 19% (13%–24%) and 16% (15%–17%).
Conclusions: Copula approaches to Heckman-type selection models are a useful addition to the methodological toolkit of HIV epidemiology and of epidemiology in general. We develop the use of this approach to systematically evaluate the robustness of HIV prevalence estimates based on selection models, both empirically and in a simulation study.
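The selection problem the paper addresses can be illustrated with a toy simulation: when consent to test and HIV status share a latent factor, prevalence among those tested understates population prevalence. All coefficients and thresholds below are invented; the real analysis uses interviewer identity as the selection variable and copula-based error structures rather than this simple mechanism.

```python
import random

random.seed(0)

# Toy selection-bias simulation (invented numbers): positives are
# less likely to consent to testing via a shared latent factor u.
N = 100_000
pop_pos = tested = tested_pos = 0
for _ in range(N):
    u = random.gauss(0, 1)                           # shared latent factor
    positive = 0.8 * u + random.gauss(0, 1) > 1.2    # outcome equation
    consents = -0.8 * u + random.gauss(0, 1) > -0.5  # participation equation
    pop_pos += positive
    if consents:
        tested += 1
        tested_pos += positive

print(f"population prevalence:   {pop_pos / N:.3f}")
print(f"prevalence among tested: {tested_pos / tested:.3f}")
```

A Heckman-type model recovers the population figure by jointly modelling the two equations; the paper's contribution is to let the dependence between their error terms take non-Gaussian forms via copulas.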

Relevance:

30.00%

Publisher:

Abstract:

During the past decade, many molecular components of clathrin-mediated endocytosis have been identified and proposed to play various hypothetical roles in the process [Nat. Rev. Neurosci. 1 (2000) 161; Nature 422 (2003) 37]. One limitation to the evaluation of these hypotheses is the efficiency and resolution of immunolocalization protocols currently in use. In order to facilitate the evaluation of these hypotheses and to understand more fully the molecular mechanisms of clathrin-mediated endocytosis, we have developed a protocol allowing enhanced and reliable subcellular immunolocalization of proteins in synaptic endocytic zones in situ. Synapses established by giant reticulospinal axons in lamprey are used as a model system for these experiments. These axons are unbranched and reach up to 80–100 µm in diameter. Synaptic active zones and surrounding endocytic zones are established on the surface of the axonal cylinder. To provide access for antibodies to the sites of synaptic vesicle recycling, axons are lightly fixed and cut along their longitudinal axis. To preserve the ultrastructure of the synaptic endocytic zone, antibodies are applied without the addition of detergents. Opened axons are incubated with primary antibodies, which are detected with secondary antibodies conjugated to gold particles. Specimens are then post-fixed and processed for electron microscopy. This approach allows preservation of the ultrastructure of the endocytic sites during immunolabeling procedures, while simultaneously achieving reliable immunogold detection of proteins on endocytic intermediates. To explore the utility of this approach, we have investigated the localization of a GTPase, dynamin, on clathrin-coated intermediates in the endocytic zone of the lamprey giant synapse.
Using the present immunogold protocol, we confirm the presence of dynamin on late stage coated pits [Nature 422 (2003) 37] and also demonstrate that dynamin is recruited to the coat of endocytic intermediates from the very early stages of the clathrin coat formation. Thus, our experiments show that the current pre-embedding immunogold method is a useful experimental tool to study the molecular mechanisms of synaptic vesicle recycling.

Relevance:

30.00%

Publisher:

Abstract:

Poverty means more than having a low income and includes exclusion from a minimally accepted way of life. It is now common practice in Europe to measure progress against poverty in terms of low income, material deprivation rates and some combination of both. This makes material deprivation indicators, and their selection, highly significant in their own right. The ‘consensual poverty’ approach is to identify deprivation items which a majority of the population agree constitute life’s basic necessities, accepting that these items will need to be revised over time to reflect social change. Traditionally, this has been carried out in the UK through specialised poverty surveys using a Sort Card (SC) technique.

Based on analysis of a 2012 omnibus survey, and discussions with three interviewers, this article examines how the perception of necessities is affected by mode of administration – SC and Computer Assisted Personal Interviewing (CAPI). More CAPI respondents rated deprivation items as necessary. The greatest disparities are in material items, where 25 of the 32 items were rated necessary significantly more often via CAPI. Closer agreement is found in social participation, where 3 of the 14 activities differed significantly. Consensus is higher on children’s material deprivation.
We consider influencing variables which could account for the disparities and believe that the SC method produces a more considered response. However, in light of technological advances, we question how long the SC method will remain socially acceptable. This paper concludes that the CAPI method can be easily modified without compromising the benefits of the SC method in capturing thoughtful responses.
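The mode-effect findings above rest on testing differences in endorsement proportions between SC and CAPI respondents. A minimal sketch of one such comparison, using a standard two-proportion z-test with invented counts (the article's actual analysis may differ):

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-proportion z-test: do respondents under two survey modes
    endorse an item as 'necessary' at different rates?
    Returns the z statistic and a two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_val = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_val

# hypothetical endorsement counts for one deprivation item
z, p = two_prop_ztest(620, 1000, 540, 1000)  # CAPI vs Sort Card
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these invented counts the difference (62% vs 54%) is highly significant, the pattern the article reports for most material items.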

Relevance:

30.00%

Publisher:

Abstract:

A previous review of research on the practice of offender supervision identified the predominant use of interview-based methodologies and limited use of other research approaches (Robinson and Svensson, 2013). It also found that most research has tended to be locally focussed (i.e. limited to one jurisdiction) with very few comparative studies. This article reports on the application of a visual method in a small-scale comparative study. Practitioners in five European countries participated and took photographs of the places and spaces where offender supervision occurs. The aims of the study were two-fold: firstly to explore the utility of a visual approach in a comparative context; and secondly to provide an initial visual account of the environment in which offender supervision takes place. In this article we address the first of these aims. We describe the application of the method in some depth before addressing its strengths and weaknesses. We conclude that visual methods provide a useful tool for capturing data about the environments in which offender supervision takes place and potentially provide a basis for more normative explorations about the practices of offender supervision in comparative contexts.

Relevance:

30.00%

Publisher:

Abstract:

Introduction
Mild cognitive impairment (MCI) has clinical value in its ability to predict later dementia. A better understanding of cognitive profiles can further help delineate who is most at risk of conversion to dementia. We aimed to (1) examine to what extent the usual MCI subtyping using core criteria corresponds to empirically defined clusters of patients (latent profile analysis [LPA] of continuous neuropsychological data) and (2) compare the two methods of subtyping memory clinic participants in their prediction of conversion to dementia.

Methods
Memory clinic participants (MCI, n = 139) and age-matched controls (n = 98) were recruited. Participants had a full cognitive assessment, and results were grouped (1) according to traditional MCI subtypes and (2) using LPA. MCI participants were followed over approximately 2 years after their initial assessment to monitor for conversion to dementia.

Results
Groups were well matched for age and education. Controls performed significantly better than MCI participants on all cognitive measures. With the traditional analysis, most MCI participants were in the amnestic multidomain subgroup (46.8%) and this group was most at risk of conversion to dementia (63%). From the LPA, a three-profile solution fit the data best. Profile 3 was the largest group (40.3%), the most cognitively impaired, and most at risk of conversion to dementia (68% of the group).

Discussion
LPA provides a useful adjunct in delineating MCI participants most at risk of conversion to dementia and adds confidence to standard categories of clinical inference.
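Latent profile analysis is typically fit as a finite Gaussian mixture via expectation-maximisation, with the number of profiles chosen by model fit. As a minimal stand-in, the sketch below runs EM for a two-profile, one-dimensional mixture over invented composite cognitive scores; the study's actual LPA used multivariate neuropsychological data and compared solutions with different numbers of profiles.

```python
import math
import random

random.seed(1)

# Invented composite cognitive scores from two latent profiles
scores = ([random.gauss(-1.5, 0.5) for _ in range(60)]    # impaired profile
          + [random.gauss(0.5, 0.5) for _ in range(90)])  # milder profile

def em_two_gaussians(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture (toy LPA)."""
    mu = [min(x), max(x)]
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each profile for each score
        resp = []
        for xi in x:
            dens = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                    * math.exp(-0.5 * ((xi - mu[k]) / sd[k]) ** 2)
                    for k in (0, 1)]
            s = dens[0] + dens[1]
            resp.append((dens[0] / s, dens[1] / s))
        # M-step: re-estimate weights, means and standard deviations
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            sd[k] = math.sqrt(sum(r[k] * (xi - mu[k]) ** 2
                                  for r, xi in zip(resp, x)) / nk) or 1e-6
    return mu, sd, w

mu, sd, w = em_two_gaussians(scores)
print(sorted(round(m, 2) for m in mu))  # recovered profile means
```

The recovered means sit near the two generating profiles; in a clinical application the profile with the lowest cognitive means would correspond to the group most at risk of conversion to dementia.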

Relevance:

30.00%

Publisher:

Abstract:

It is shown that, under certain conditions, it is possible to obtain a good speech estimate from noisy observations without requiring noise estimation. We study an implementation of the theory, namely wide matching, for speech enhancement. The new approach performs sentence-wide joint speech segment estimation subject to maximum recognizability to gain noise robustness. Experiments have been conducted to evaluate the new approach with varied noises and SNRs from −5 dB to noise-free. It is shown that the new approach, without any estimation of the noise, significantly outperformed conventional methods in the low SNR conditions while retaining comparable performance in the high SNR conditions. It is further suggested that the wide matching and deep learning approaches can be combined towards a highly robust and accurate speech estimator.

Relevance:

30.00%

Publisher:

Abstract:

Online forums are becoming a popular way of finding useful information on the web. Search over forums for existing discussion threads has so far been limited to keyword-based search, owing to the minimal effort required on the part of users. However, it is often not possible to capture all the relevant context of a complex query using a small number of keywords. Example-based search, which retrieves similar discussion threads given one exemplary thread, is an alternative approach that lets the user provide richer context and can vastly improve forum search results. In this paper, we address the problem of finding threads similar to a given thread. Towards this, we propose a novel methodology to estimate similarity between discussion threads. Our method exploits the thread structure to decompose threads into sets of weighted overlapping components. It then estimates pairwise thread similarities by quantifying how well the information in the threads is mutually contained within each other, using lexical similarities between their underlying components. We compare our proposed methods against state-of-the-art thread retrieval mechanisms on real datasets and show that our techniques outperform the others by large margins on popular retrieval evaluation measures such as NDCG, MAP, Precision@k and MRR. In particular, consistent improvements of up to 10% are observed on all evaluation measures.
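The decomposition-and-containment idea can be sketched in miniature: represent a thread as overlapping components (the whole thread, individual posts, reply pairs), score component pairs with a lexical similarity, and average the best matches in both directions. The paper's actual component weighting and containment scores are more elaborate; everything below, including the Jaccard choice, is an illustrative simplification.

```python
def components(thread):
    """Decompose a thread into overlapping components: the whole
    thread, each post, and each (parent, reply) pair."""
    comps = [" ".join(post for post, _ in thread)]
    for post, parent in thread:
        comps.append(post)
        if parent is not None:
            comps.append(thread[parent][0] + " " + post)
    return comps

def lexical_sim(a, b):
    """Jaccard similarity over word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def thread_similarity(t1, t2):
    """Average best-match similarity of t1's components in t2 and
    vice versa: a symmetric 'mutual containment' score."""
    c1, c2 = components(t1), components(t2)
    s12 = sum(max(lexical_sim(c, d) for d in c2) for c in c1) / len(c1)
    s21 = sum(max(lexical_sim(c, d) for d in c1) for c in c2) / len(c2)
    return (s12 + s21) / 2

# a thread is a list of (post_text, parent_index_or_None)
t_a = [("how to reset my router", None), ("hold the reset button", 0)]
t_b = [("router keeps dropping wifi", None), ("try a factory reset button", 0)]
print(round(thread_similarity(t_a, t_b), 3))
```

In example-based retrieval, this score would be computed between the exemplary thread and each candidate thread, and candidates ranked by it before evaluating with measures such as NDCG or MAP.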