1000 results for inhaler techniques


Relevance:

20.00%

Publisher:

Abstract:

European Regulation 1169/2011 requires producers of foods that contain refined vegetable oils to label the oil types. A novel, rapid and staged methodology has been developed for the first time to identify common oil species in oil blends. The qualitative method combines Fourier transform infrared (FTIR) spectroscopy, to profile the oils, with fatty acid chromatographic analysis to confirm the composition of the oils when required. Calibration models and specific classification criteria were developed, and all data were fused into a simple decision-making system. Single-laboratory validation of the method demonstrated very good performance (96% correct classification, 100% specificity, 4% false positive rate). Only a small fraction of the samples needed confirmatory analysis; the majority of oils were identified rapidly using only the spectroscopic procedure. The results demonstrate the considerable potential of the methodology for a wide range of oil authenticity work.
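A minimal sketch of a staged decision system of this kind, not the authors' implementation: a classifier screens each FTIR spectrum, and only low-confidence calls are flagged for confirmatory fatty acid (chromatographic) analysis. The class names, the classifier, and the confidence threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

OIL_SPECIES = ["sunflower", "rapeseed", "palm", "soybean"]  # assumed class names
CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off for accepting the spectroscopic call

def classify_oil(ftir_spectrum, model):
    """Stage 1: classify the oil from its FTIR spectrum; defer if uncertain."""
    probs = model.predict_proba(ftir_spectrum.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    if probs[best] >= CONFIDENCE_THRESHOLD:
        return OIL_SPECIES[best], "FTIR only"
    # Stage 2: confidence too low -- flag for confirmatory fatty acid analysis.
    return OIL_SPECIES[best], "confirm by chromatography"

# Toy demonstration with synthetic "spectra" (absorbance vectors).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, len(OIL_SPECIES), size=200)
model = LogisticRegression(max_iter=1000).fit(X, y)
print(classify_oil(X[0], model))
```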

Relevance:

20.00%

Publisher:

Abstract:

The in-line measurement of COD and NH4-N in the WWTP inflow is crucial for the timely monitoring of biological wastewater treatment processes and for the development of advanced control strategies for optimized WWTP operation. As direct measurement of COD and NH4-N requires expensive, high-maintenance in-line probes or analyzers, this paper presents an approach that estimates COD and NH4-N from standard and spectroscopic in-line inflow measurement systems using machine learning techniques. The results show that COD estimation with Random Forest Regression, at a normalized MSE of 0.3 that is sufficiently accurate for practical applications, can be achieved using only standard in-line measurements. In the case of NH4-N, a good estimation using Partial Least Squares Regression with a normalized MSE of 0.16 is only possible based on a combination of standard and spectroscopic in-line measurements. Furthermore, a comparison of regression and classification methods shows that both perform equally well in most cases.
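A minimal sketch of the kind of estimation described, not the paper's pipeline: Random Forest Regression mapping stand-in inflow signals to a synthetic COD target, scored with a normalized MSE (MSE divided by the variance of the observations). All data here are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))  # stand-in for standard in-line measurements
cod = 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.5, size=1000)  # synthetic COD

X_tr, X_te, y_tr, y_te = train_test_split(X, cod, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Normalized MSE: 1.0 corresponds to always predicting the mean,
# so values well below 1.0 (e.g. the paper's 0.3) indicate useful skill.
nmse = np.mean((pred - y_te) ** 2) / np.var(y_te)
print(f"normalized MSE: {nmse:.2f}")
```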

Relevance:

20.00%

Publisher:

Abstract:

Monitoring the emergence and transmission of Pseudomonas aeruginosa strains among cystic fibrosis (CF) patients is important for infection control in CF centers internationally. A recently developed multilocus sequence typing (MLST) scheme is used for epidemiologic analyses of P. aeruginosa outbreaks; however, little is known about its suitability for isolates from CF patients compared with that of pulsed-field gel electrophoresis (PFGE) and enterobacterial repetitive intergenic consensus-PCR (ERIC-PCR). As part of a prevalence study of P. aeruginosa strains in Australian CF clinics, we compared the discriminatory power and concordance of ERIC-PCR, PFGE, and MLST among 93 CF sputum and 11 control P. aeruginosa isolates. PFGE and MLST analyses were also performed on 30 paired isolates collected 85 to 354 days apart from 30 patients attending two CF centers separated by 3,600 kilometers, in order to detect within-host evolution. Each of the three methods displayed high levels of concordance and discrimination; however, overall lower discrimination was seen with ERIC-PCR than with MLST and PFGE. Analysis of the 50 ERIC-PCR types yielded 54 PFGE types, which were related by ≤ 6 band differences, and 59 sequence types, which were classified into 7 BURST groups and 42 singletons. MLST also proved useful for detecting novel and known strains and for inferring relatedness among unique PFGE types. However, 47% of the paired isolates produced PFGE patterns that differed by one to five bands within one year, whereas with MLST all paired isolates remained identical. MLST thus represents a categorical analysis tool with resolving power similar to that of PFGE for typing P. aeruginosa. Its focus on highly conserved housekeeping genes makes it particularly well suited to long-term clinical monitoring and to detecting novel strains.
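The "discriminatory power" of a typing method is conventionally quantified with Simpson's index of diversity (the Hunter-Gaston formulation). The following is a generic sketch of that calculation, not code from this study; the example isolate labels are invented.

```python
from collections import Counter

def simpsons_diversity(type_assignments):
    """Hunter-Gaston discriminatory index:
    D = 1 - sum(n_j * (n_j - 1)) / (N * (N - 1)),
    where n_j is the number of isolates assigned to typing group j."""
    counts = Counter(type_assignments)
    n = len(type_assignments)
    if n < 2:
        raise ValueError("need at least two isolates")
    return 1.0 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Illustrative: a method resolving the same isolates into more groups
# (e.g. 59 sequence types vs. 50 ERIC-PCR types) yields a higher D.
print(simpsons_diversity(["ST1", "ST1", "ST2", "ST3", "ST3", "ST3"]))
```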

Relevance:

20.00%

Publisher:

Abstract:

Background: In a selective group of patients, accelerated partial breast irradiation (APBI) may be applied after breast-conserving surgery to reduce the amount of irradiated healthy tissue. The role of volumetric modulated arc therapy (VMAT) and voluntary moderately deep inspiration breath-hold (vmDIBH) techniques in further reducing the irradiated healthy tissue, especially the heart, is investigated.

Material and methods: For 37 partial breast planning target volumes (PTVs), three-dimensional conformal radiotherapy (3D-CRT) plans (3–5 coplanar or non-coplanar 6 and/or 10 MV beams) and VMAT plans (two partial 6 MV arcs) were made on CTs acquired in free-breathing (FB) and/or in vmDIBH. Dose-volume parameters for the PTV, heart, lungs, and breasts were compared.

Results: Better dose conformity was achieved with VMAT than with 3D-CRT (conformity index 1.24 ± 0.09 vs. 1.49 ± 0.20). The non-PTV ipsilateral breast volume receiving 50% of the prescribed dose was on average reduced by 28% in VMAT plans compared to 3D-CRT plans. Mean heart dose (MHD) was reduced from 2.0 (range 0.1–5.1) Gy with 3D-CRT(FB) to 0.6 (range 0.1–1.6) Gy with VMAT(vmDIBH). VMAT is beneficial for MHD reduction if the MHD with 3D-CRT exceeds 0.5 Gy. The cardiac dose reduction achieved by VMAT increases with increasing initial MHD, and adding vmDIBH reduces the cardiac dose further. Mean dose to the ipsilateral lung decreased from 3.7 (range 0.7–8.7) Gy to 1.8 (range 0.5–4.0) Gy with VMAT(vmDIBH) compared to 3D-CRT(FB). VMAT resulted in a slight increase in the contralateral breast dose (Dmean always remaining ≤ 1.9 Gy).

Conclusions: For APBI patients, VMAT improves PTV dose conformity and delivers lower doses to the ipsilateral breast and lung compared to 3D-CRT. This comes at the cost of a slight but acceptable increase in the contralateral breast dose. VMAT reduces the cardiac dose if the MHD exceeds 0.5 Gy for 3D-CRT. Adding vmDIBH results in a further reduction of the heart and ipsilateral lung dose.
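A generic sketch of the conformity index comparison reported above, using one common definition (volume receiving the reference dose divided by PTV volume, on a uniform voxel grid); the paper's exact definition may differ, and the dose grid here is a toy placeholder.

```python
import numpy as np

def conformity_index(dose, ptv_mask, reference_dose):
    """CI = (voxels receiving >= reference dose) / (PTV voxels).
    Values near 1 indicate the reference dose conforms tightly to the PTV;
    larger values mean more surrounding tissue receives the reference dose."""
    treated_voxels = np.count_nonzero(dose >= reference_dose)
    ptv_voxels = np.count_nonzero(ptv_mask)
    return treated_voxels / ptv_voxels

# Toy grid: a synthetic dose distribution and a cubic PTV mask.
dose = np.zeros((20, 20, 20))
dose[4:17, 4:17, 4:17] = 40.0            # region receiving the reference dose
ptv = np.zeros((20, 20, 20), dtype=bool)
ptv[5:15, 5:15, 5:15] = True             # planning target volume
print(f"CI = {conformity_index(dose, ptv, reference_dose=38.0):.2f}")
```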

Relevance:

20.00%

Publisher:

Abstract:

Automatically determining and assigning shared and meaningful text labels to data extracted from an e-Commerce web page is a challenging problem. An e-Commerce web page can display a list of data records, each of which can contain a combination of data items (e.g. product name and price) and explicit labels, which describe some of these data items. Recent advances in extraction techniques have made it much easier to precisely extract individual data items and labels from a web page; however, two problems remain open: (1) assigning an explicit label to a data item, and (2) determining labels for the remaining data items. Furthermore, improvements in the availability and coverage of vocabularies, especially in the context of e-Commerce web sites, mean that we now have access to a bank of relevant, meaningful and shared labels which can be assigned to extracted data items. However, there is a need for a technique that takes as input a set of extracted data items and automatically assigns to them the most relevant and meaningful labels from a shared vocabulary. We observe that the Information Extraction (IE) community has developed a great number of techniques which solve problems similar to our own. In this work-in-progress paper we propose to theoretically and experimentally evaluate different IE techniques to ascertain which is most suitable for this problem.
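A minimal illustrative baseline for the label-assignment step described, not the technique the authors propose: each explicit label extracted from the page is matched to the closest term in a shared vocabulary by string similarity. The vocabulary terms and acceptance threshold are assumptions.

```python
from difflib import SequenceMatcher

VOCABULARY = ["productName", "price", "currency", "availability"]  # assumed terms

def best_vocabulary_label(explicit_label, vocabulary=VOCABULARY):
    """Return the shared-vocabulary term most similar to the page's own label,
    or None if nothing is similar enough."""
    scored = [(SequenceMatcher(None, explicit_label.lower(), term.lower()).ratio(), term)
              for term in vocabulary]
    score, term = max(scored)
    return term if score >= 0.5 else None  # assumed acceptance threshold

print(best_vocabulary_label("Price:"))        # -> "price"
print(best_vocabulary_label("Product name"))  # -> "productName"
```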

Relevance:

20.00%

Publisher:

Abstract:

Critical decisions are made by decision-makers throughout the life-cycle of large-scale projects. These decisions are crucial, as they have a direct impact upon the outcome and the success of projects. To aid decision-makers in the decision-making process, we present an evidential reasoning framework. This approach utilizes Dezert-Smarandache theory to fuse heterogeneous evidence sources that suffer from varying levels of uncertainty, imprecision and conflict, providing beliefs for decision options. To analyze the impact of source reliability and priority upon the decision-making process, a reliability discounting technique and a priority discounting technique are applied. A maximal consistent subset is constructed to aid in defining where discounting should be applied. Application of the evidential reasoning framework is illustrated using a case study based in the aerospace domain.
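A generic sketch of classical (Shafer) reliability discounting, the standard operation behind "reliability discounting" in evidence fusion; the paper's DSmT-based variant may differ in detail. The decision options and mass values here are invented for illustration.

```python
FRAME = frozenset({"optionA", "optionB"})  # assumed frame of decision options

def discount(mass, alpha):
    """Classical discounting by reliability alpha in [0, 1]:
    each focal element's mass is scaled by alpha, and the removed mass
    is transferred to the whole frame (total ignorance)."""
    discounted = {A: alpha * m for A, m in mass.items() if A != FRAME}
    discounted[FRAME] = 1.0 - alpha + alpha * mass.get(FRAME, 0.0)
    return discounted

# A source that strongly supports optionA, discounted to 80% reliability.
source = {frozenset({"optionA"}): 0.7, frozenset({"optionB"}): 0.2, FRAME: 0.1}
print(discount(source, alpha=0.8))
# -> optionA: 0.56, optionB: 0.16, frame: 0.28 (masses still sum to 1)
```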

Relevance:

20.00%

Publisher:

Abstract:

Background: Medical Research Council (MRC) guidelines recommend applying theory within complex interventions to explain how behaviour change occurs. Guidelines endorse self-management of chronic low back pain (CLBP) and osteoarthritis (OA), but evidence for its effectiveness is weak. Objective: This literature review aimed to determine the use of behaviour change theory and techniques within randomised controlled trials of group-based self-management programmes for chronic musculoskeletal pain, specifically CLBP and OA. Methods: A two-phase search strategy of electronic databases was used to identify systematic reviews and studies relevant to this area. Articles were coded for their use of behaviour change theory, and the number of behaviour change techniques (BCTs) was identified using the 93-item Behaviour Change Technique Taxonomy (v1). Results: 25 articles covering 22 studies met the inclusion criteria, of which only three reported having based their intervention on theory, and all three used Social Cognitive Theory. A total of 33 BCTs were coded across all articles, the most commonly identified techniques being 'instruction on how to perform the behaviour', 'demonstration of the behaviour', 'behavioural practice', 'credible source', 'graded tasks' and 'body changes'. Conclusion: The results demonstrate that theoretically driven research within group-based self-management programmes for chronic musculoskeletal pain is lacking, or is poorly reported. Future research that follows recommended guidelines regarding the use of theory in study design and reporting is warranted.

Relevance:

20.00%

Publisher:

Abstract:

Statistical downscaling (SD) methods have become a popular, low-cost and accessible means of bridging the gap between the coarse spatial resolution at which climate models output climate scenarios and the finer spatial scale at which impact modellers require these scenarios, with various SD techniques used for a wide range of applications across the world. This paper compares the Generator for Point Climate Change (GPCC) model and the Statistical DownScaling Model (SDSM), two contrasting SD methods, in terms of their ability to generate precipitation series under non-stationary conditions across ten contrasting global climates. The mean, the maximum and a selection of distribution statistics, as well as the cumulative frequencies of dry and wet spells, were compared at four different temporal resolutions between the models and the observed series for a validation period. Results indicate that both methods can generate daily precipitation series that generally closely mirror observed series for a wide range of non-stationary climates. However, GPCC tends to overestimate higher precipitation amounts, whilst SDSM tends to underestimate them. This implies that GPCC is more likely to overestimate the effects of precipitation on a given impact sector, whilst SDSM is likely to underestimate them. GPCC performs better than SDSM in reproducing wet and dry day frequency, which is a key advantage for many impact sectors. Overall, the mixed performance of the two methods illustrates the importance of users performing a thorough validation in order to determine the influence of simulated precipitation on their chosen impact sector.
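A generic sketch of one validation step named above, comparing wet-day frequency between an observed and a downscaled daily precipitation series; this is not code from either model, and the 1 mm wet-day threshold is an assumption (studies use various thresholds). The series here are synthetic stand-ins.

```python
import numpy as np

WET_THRESHOLD_MM = 1.0  # assumed wet-day definition

def wet_day_frequency(daily_precip_mm):
    """Fraction of days with precipitation at or above the wet-day threshold."""
    daily = np.asarray(daily_precip_mm)
    return float(np.mean(daily >= WET_THRESHOLD_MM))

# Ten years of synthetic daily precipitation for observed vs. downscaled.
rng = np.random.default_rng(7)
observed = rng.gamma(shape=0.4, scale=6.0, size=3650)
downscaled = rng.gamma(shape=0.45, scale=5.0, size=3650)

print(f"observed wet-day frequency:   {wet_day_frequency(observed):.3f}")
print(f"downscaled wet-day frequency: {wet_day_frequency(downscaled):.3f}")
```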