Abstract:
The suitability of the capillary dried blood spot (DBS) sampling method was assessed for simultaneous phenotyping of cytochrome P450 (CYP) enzymes and P-glycoprotein (P-gp) using a cocktail approach. Ten volunteers received an oral cocktail capsule containing low doses of the probes bupropion (CYP2B6), flurbiprofen (CYP2C9), omeprazole (CYP2C19), dextromethorphan (CYP2D6), midazolam (CYP3A), and fexofenadine (P-gp) with coffee/Coke (CYP1A2) on four occasions. They received the cocktail alone (session 1), and with the CYP inhibitors fluvoxamine and voriconazole (session 2) and quinidine (session 3). In session 4, subjects received the cocktail after a 7-day pretreatment with the inducer rifampicin. The concentrations of probes/metabolites were determined in DBS and plasma using a single liquid chromatography-tandem mass spectrometry method. The pharmacokinetic profiles of the drugs were comparable in DBS and plasma. Marked modulation of CYP and P-gp activities was observed in the presence of the inhibitors and the inducer. Minimally invasive one- and three-point (at 2, 3, and 6 h) DBS-sampling methods were found to reliably reflect CYP and P-gp activities at each session.
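Cocktail phenotyping studies such as this one typically summarize enzyme activity as a metabolic ratio: metabolite concentration over parent-drug concentration at a fixed sampling time. A minimal sketch of that calculation, with hypothetical single-point concentrations (the values and the drop under inhibition are illustrative, not data from this study):

```python
# Single-point metabolic ratio (MR) phenotyping sketch.
# MR = [metabolite] / [parent drug] at a fixed sampling time;
# a fall in MR after an inhibitor suggests reduced enzyme activity.

def metabolic_ratio(metabolite_ng_ml, parent_ng_ml):
    """Return the metabolite/parent concentration ratio."""
    if parent_ng_ml <= 0:
        raise ValueError("parent concentration must be positive")
    return metabolite_ng_ml / parent_ng_ml

# Hypothetical dextromethorphan (CYP2D6) data, 3 h post-dose:
baseline = metabolic_ratio(metabolite_ng_ml=4.0, parent_ng_ml=2.0)   # cocktail alone
inhibited = metabolic_ratio(metabolite_ng_ml=0.5, parent_ng_ml=5.0)  # with quinidine

print(baseline, inhibited)  # 2.0 0.1 -> marked drop under CYP2D6 inhibition
```

The single-point design matters here because the DBS method relies on one to three capillary samples rather than a full concentration-time profile.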
Abstract:
BACKGROUND: Postmenopausal women with hormone receptor-positive early breast cancer have persistent, long-term risk of breast-cancer recurrence and death. Therefore, trials assessing endocrine therapies for this patient population need extended follow-up. We present an update of efficacy outcomes in the Breast International Group (BIG) 1-98 study at 8·1 years median follow-up. METHODS: BIG 1-98 is a randomised, phase 3, double-blind trial of postmenopausal women with hormone receptor-positive early breast cancer that compares 5 years of tamoxifen or letrozole monotherapy, or sequential treatment with 2 years of one of these drugs followed by 3 years of the other. Randomisation was done with permuted blocks, and stratified according to the two-arm or four-arm randomisation option, participating institution, and chemotherapy use. Patients, investigators, data managers, and medical reviewers were masked. The primary efficacy endpoint was disease-free survival (events were invasive breast cancer relapse, second primaries [contralateral breast and non-breast], or death without previous cancer event). Secondary endpoints were overall survival, distant recurrence-free interval (DRFI), and breast cancer-free interval (BCFI). The monotherapy comparison included patients randomly assigned to tamoxifen or letrozole for 5 years. In 2005, after a significant disease-free survival benefit was reported for letrozole as compared with tamoxifen, a protocol amendment facilitated the crossover to letrozole of patients who were still receiving tamoxifen alone; Cox models and Kaplan-Meier estimates with inverse probability of censoring weighting (IPCW) are used to account for selective crossover to letrozole of patients (n=619) in the tamoxifen arm. 
Comparison of sequential treatments to letrozole monotherapy included patients enrolled and randomly assigned to letrozole for 5 years, letrozole for 2 years followed by tamoxifen for 3 years, or tamoxifen for 2 years followed by letrozole for 3 years. Treatment has ended for all patients and detailed safety results for adverse events that occurred during the 5 years of treatment have been reported elsewhere. Follow-up is continuing for those enrolled in the four-arm option. BIG 1-98 is registered at ClinicalTrials.gov, number NCT00004205. FINDINGS: 8010 patients were included in the trial, with a median follow-up of 8·1 years (range 0-12·4). 2459 were randomly assigned to monotherapy with tamoxifen for 5 years and 2463 to monotherapy with letrozole for 5 years. In the four-arm option of the trial, 1546 were randomly assigned to letrozole for 5 years, 1548 to tamoxifen for 5 years, 1540 to letrozole for 2 years followed by tamoxifen for 3 years, and 1548 to tamoxifen for 2 years followed by letrozole for 3 years. At a median follow-up of 8·7 years from randomisation (range 0-12·4), letrozole monotherapy was significantly better than tamoxifen, whether by IPCW or intention-to-treat analysis (IPCW disease-free survival HR 0·82 [95% CI 0·74-0·92], overall survival HR 0·79 [0·69-0·90], DRFI HR 0·79 [0·68-0·92], BCFI HR 0·80 [0·70-0·92]; intention-to-treat disease-free survival HR 0·86 [0·78-0·96], overall survival HR 0·87 [0·77-0·999], DRFI HR 0·86 [0·74-0·998], BCFI HR 0·86 [0·76-0·98]). At a median follow-up of 8·0 years from randomisation (range 0-11·2) for the comparison of the sequential groups with letrozole monotherapy, there were no statistically significant differences in any of the four endpoints for either sequence.
8-year intention-to-treat estimates (each with SE ≤1·1%) for letrozole monotherapy, letrozole followed by tamoxifen, and tamoxifen followed by letrozole were 78·6%, 77·8%, and 77·3% for disease-free survival; 87·5%, 87·7%, and 85·9% for overall survival; 89·9%, 88·7%, and 88·1% for DRFI; and 86·1%, 85·3%, and 84·3% for BCFI. INTERPRETATION: For postmenopausal women with endocrine-responsive early breast cancer, a reduction in breast cancer recurrence and mortality is obtained by letrozole monotherapy when compared with tamoxifen monotherapy. Sequential treatments involving tamoxifen and letrozole do not improve outcome compared with letrozole monotherapy, but might be useful strategies when considering an individual patient's risk of recurrence and treatment tolerability. FUNDING: Novartis, United States National Cancer Institute, International Breast Cancer Study Group.
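The survival percentages above come from Kaplan-Meier analysis (with IPCW adjustment for the crossover, which is not shown here). The product-limit estimator itself fits in a few lines; the follow-up times below are invented for illustration, not trial data:

```python
# Kaplan-Meier product-limit estimator (unweighted sketch; the trial
# additionally applied inverse probability of censoring weights, IPCW).

def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = event, 0 = censored.
    Returns [(time, survival probability)] at each event time."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t, e in sorted(zip(times, events)):
        if e == 1:
            surv *= (at_risk - 1) / at_risk  # multiply by (1 - d/n) at each event
            curve.append((t, surv))
        at_risk -= 1  # both events and censorings leave the risk set
    return curve

# Hypothetical follow-up (years) for 5 patients:
curve = kaplan_meier([2.0, 3.5, 4.0, 6.0, 8.0], [1, 0, 1, 1, 0])
print(curve)  # survival drops to 0.8, then ~0.53, then ~0.27
```

Censored patients (e.g. the one at 3.5 years) contribute to the risk set until they drop out, which is why the step at 4.0 years divides by 3, not 4.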
Abstract:
In this thesis, we study the use of prediction markets for technology assessment. We particularly focus on their ability to assess complex issues, the design constraints required for such applications and their efficacy compared to traditional techniques. To achieve this, we followed a design science research paradigm, iteratively developing, instantiating, evaluating and refining the design of our artifacts. This allowed us to make multiple contributions, both practical and theoretical. We first showed that prediction markets are adequate for properly assessing complex issues. We also developed a typology of design factors and design propositions for using these markets in a technology assessment context. Then, we showed that they are able to solve some issues related to the R&D portfolio management process and we proposed a roadmap for their implementation. Finally, by comparing the instantiation and the results of a multi-criteria decision method and a prediction market, we showed that the latter are more efficient, while offering similar results. We also proposed a framework for comparing forecasting methods, to identify the constraints based on contingency factors. In conclusion, our research opens a new field of application of prediction markets and should help hasten their adoption by enterprises.
Abstract:
Body accelerations during human walking were recorded by a portable measuring device. A new method for parameterizing body accelerations and finding the pattern of walking is outlined. Two neural networks were designed to recognize each pattern and estimate the speed and incline of walking. Six subjects performed treadmill walking followed by self-paced walking on an outdoor test circuit involving roads of various inclines. The neural networks were first "trained" by known patterns of treadmill walking. Then the inclines, the speeds, and the distance covered during overground walking (outdoor circuit) were estimated. The results show good agreement between actual and predicted variables. The standard deviation of the estimated incline was less than 2.6%, and the maximum coefficient of variation of the speed estimation was 6%. To the best of our knowledge, these results constitute the first assessment of speed, incline and distance covered during level and slope walking, and offer investigators a new tool for assessing levels of outdoor physical activity.
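As a toy illustration of the learning step behind such networks, the sketch below trains a single linear neuron by stochastic gradient descent on synthetic acceleration-versus-speed pairs. This is a stand-in, not the authors' architecture or data; the feature values and the 2:1 mapping are invented:

```python
import random

# Minimal gradient-descent regressor: one linear neuron mapping an
# acceleration feature to walking speed. A stand-in for the paper's
# neural networks; data and learning rate here are synthetic.

random.seed(0)
# Synthetic, noise-free training pairs: speed = 2 * acceleration feature
data = [(a / 10.0, 2.0 * a / 10.0) for a in range(1, 21)]

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):        # plain stochastic gradient descent
    x, y = random.choice(data)
    err = (w * x + b) - y    # prediction error for this sample
    w -= lr * err * x        # dL/dw for squared-error loss
    b -= lr * err            # dL/db

print(round(w, 2), round(b, 2))  # approaches w = 2.0, b = 0.0
```

A real speed/incline estimator would use multiple acceleration-derived features and hidden units, but the update rule is the same idea applied layer by layer.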
Abstract:
OBJECTIVE: To describe a method to obtain a profile of the duration and intensity (speed) of walking periods over 24 hours in women under free-living conditions. DESIGN: A new method based on accelerometry was designed for analyzing walking activity. In order to take into account inter-individual variability of acceleration, an individual calibration process was used. Different experiments were performed to highlight the variability of the acceleration vs walking speed relationship, to analyze the speed prediction accuracy of the method, and to test the assessment of walking distance and duration over 24 h. SUBJECTS: Twenty-eight women were studied (mean+/-s.d.: age 39.3+/-8.9 y; body mass 79.7+/-11.1 kg; body height 162.9+/-5.4 cm; body mass index (BMI) 30.0+/-3.8 kg/m(2)). RESULTS: Accelerometer output was significantly correlated with speed during treadmill walking (r=0.95, P<0.01) and short unconstrained walks (r=0.86, P<0.01), although with a large inter-individual variation of the regression parameters. By using individual calibration, it was possible to predict walking speed on a standard urban circuit (predicted vs measured r=0.93, P<0.01, s.e.e.=0.51 km/h). In the free-living experiment, women spent on average 79.9+/-36.0 (range: 31.7-168.2) min/day in displacement activities, of which discontinuous short walking activities represented about two-thirds and continuous ones one-third. Total walking distance averaged 2.1+/-1.2 (range: 0.4-4.7) km/day. It was performed at an average speed of 5.0+/-0.5 (range: 4.1-6.0) km/h. CONCLUSION: An accelerometer measuring the anteroposterior acceleration of the body can estimate walking speed together with the pattern, intensity and duration of daily walking activity.
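The individual calibration step amounts to fitting each subject's own acceleration-to-speed regression line. A minimal ordinary-least-squares sketch, with invented accelerometer counts and treadmill speeds (not the study's data):

```python
# Per-subject calibration: fit speed = a * acceleration + b by least squares.
# Each subject gets her own (a, b), absorbing inter-individual variability.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical calibration walks: accelerometer output vs treadmill speed (km/h)
counts = [0.8, 1.1, 1.5, 1.9, 2.4]
speeds = [3.0, 4.0, 5.0, 6.0, 7.0]
a, b = fit_line(counts, speeds)
predicted = a * 1.7 + b  # predict speed for a new, unconstrained walk
print(round(predicted, 2))  # ~5.4 km/h
```

Because the slope and intercept vary from subject to subject (the "large inter-individual variation" above), applying one pooled regression to everyone would bias the speed estimates.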
Abstract:
The increasing availability and precision of digital elevation models (DEMs) help in the assessment of landslide-prone areas where only few data are available. This approach is performed in six main steps: DEM creation; identification of geomorphologic features; determination of the main sets of discontinuities; mapping of the most likely dangerous structures; preliminary rock-fall assessment; and estimation of the volumes of large instabilities. The method is applied to two case studies on the Oppstadhornet mountain (730 m a.s.l.): (1) a 10 million m3 slow-moving rockslide and (2) an area prone to potential high-energy rock falls. The orientations of the foliation and of the major discontinuities were determined directly from the DEM. These results are in very good agreement with field measurements. The spatial arrangement of discontinuities and foliation with respect to the topography revealed hazardous structures. Maps of the potential occurrence of these hazardous structures show highly probable sliding areas at the foot of the main landslide and potential rock falls in the eastern part of the mountain.
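The first steps, deriving geomorphologic features from the DEM, rest on finite differences of the elevation grid. A minimal slope computation on a toy grid (cell size and elevations are invented; real workflows operate on full rasters):

```python
import math

# Slope from a DEM by central finite differences (toy 3x3 grid).
# dem[i][j] is elevation (m); cell is the grid spacing (m).

def slope_deg(dem, i, j, cell):
    """Slope (degrees) at interior cell (i, j) from central differences."""
    dz_dx = (dem[i][j + 1] - dem[i][j - 1]) / (2 * cell)
    dz_dy = (dem[i + 1][j] - dem[i - 1][j]) / (2 * cell)
    return math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))

dem = [[100, 110, 120],
       [100, 110, 120],
       [100, 110, 120]]          # uniform eastward rise of 10 m per cell
print(round(slope_deg(dem, 1, 1, 10.0), 1))  # 45.0 : 10 m rise over 10 m run
```

Discontinuity orientations are extracted similarly, by fitting planes to DEM facets and converting the plane normals to dip and dip-direction.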
Abstract:
The prognosis of superficial bladder cancer in terms of recurrence and disease progression is related to bladder tumor multiplicity and the presence of concomitant "flat" tumors such as high-grade dysplasia and carcinoma in situ. This study in 33 patients aimed to demonstrate the role of fluorescence cystoscopy in the transurethral resection of superficial bladder cancer. The method is based on the detection of protoporphyrin-IX-induced fluorescence in urothelial cancer cells after topical administration of 5-aminolevulinic acid. The sensitivity and the specificity of this procedure on apparently normal mucosa in superficial bladder cancer are estimated to be 82.9% and 81.3%, respectively. Thus, fluorescence cystoscopy is a simple and reliable method for mapping the bladder mucosa, especially in the case of multifocal bladder disease, and it facilitates the screening of occult dysplasia.
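The reported sensitivity and specificity follow directly from the 2×2 confusion counts of fluorescence findings against histology. A minimal sketch; the counts below are hypothetical (chosen only so the resulting percentages resemble those reported, since the study's raw table is not given here):

```python
# Sensitivity and specificity from a 2x2 table of biopsy-confirmed outcomes.

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # true positives among diseased sites
    specificity = tn / (tn + fp)  # true negatives among healthy sites
    return sensitivity, specificity

# Hypothetical biopsy counts for fluorescent vs non-fluorescent mucosa:
se, sp = sens_spec(tp=34, fn=7, tn=61, fp=14)
print(round(se * 100, 1), round(sp * 100, 1))  # 82.9 81.3
```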
Abstract:
Delta(9)-Tetrahydrocannabinol (THC) is frequently found in the blood of drivers suspected of driving under the influence of cannabis or involved in traffic crashes. The present study used a double-blind crossover design to compare the effects of medium (16.5 mg THC) and high doses (45.7 mg THC) of hemp milk decoctions, or of a medium dose of dronabinol (20 mg synthetic THC, Marinol), on several skills required for safe driving. Forensic interpretation of cannabinoid blood concentrations was attempted using the models proposed by Daldrup (cannabis influencing factor, or CIF) and by Huestis and coworkers. First, the time-concentration profiles of THC, 11-hydroxy-Delta(9)-tetrahydrocannabinol (11-OH-THC) (the active metabolite of THC), and 11-nor-9-carboxy-Delta(9)-tetrahydrocannabinol (THCCOOH) in whole blood were determined by gas chromatography-mass spectrometry with negative ion chemical ionization. Compared to smoking studies, relatively low concentrations were measured in blood. The highest mean THC concentration (8.4 ng/mL) was achieved 1 h after ingestion of the strongest decoction. The mean maximum 11-OH-THC level (12.3 ng/mL) slightly exceeded that of THC. THCCOOH reached its highest mean concentration (66.2 ng/mL) 2.5-5.5 h after intake. Individual blood levels showed considerable intersubject variability. The willingness to drive was influenced by the importance of the requested task. Under significant cannabinoid influence, the participants refused to drive when asked whether they would agree to accomplish several unimportant tasks (e.g., driving a friend to a party). Most of the participants reported a significant feeling of intoxication and did not appreciate the effects, notably those felt after drinking the strongest decoction. Road sign and tracking testing revealed obvious and statistically significant differences between placebo and treatments. A marked impairment was detected after ingestion of the strongest decoction.
A CIF value greater than 10 (the CIF relies on the molar ratio of the main active to inactive cannabinoids) was found to correlate with a strong feeling of intoxication. It also coincided with a significant decrease in the willingness to drive and with a significant impairment in tracking performance. The mathematical model II proposed by Huestis et al. (1992) provided at best a rough estimate of the time of oral administration, with 27% of actual values falling outside the 95% confidence interval. The sum of the THC and 11-OH-THC blood concentrations provided a better estimate of impairment than THC alone. This controlled clinical study points out the negative influence of medium or high oral doses of THC or dronabinol on fitness to drive.
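Because the CIF is a molar (not mass) ratio, blood levels in ng/mL must first be divided by molecular weights. The sketch below shows only that conversion and a simplified active/inactive ratio, assuming the usual approximate molecular weights (THC 314.5, 11-OH-THC 330.5, THCCOOH 344.5 g/mol); Daldrup's full CIF formula contains additional terms (e.g. glucuronidated THCCOOH), so this index is illustrative only:

```python
# Simplified molar active/inactive cannabinoid ratio (CIF-style index).
# Concentrations in ng/mL divided by molecular weight (g/mol) give nmol/mL.

MW = {"THC": 314.5, "OH_THC": 330.5, "THCCOOH": 344.5}  # approximate g/mol

def active_inactive_molar_ratio(thc, oh_thc, thccooh):
    active = thc / MW["THC"] + oh_thc / MW["OH_THC"]   # active species
    inactive = thccooh / MW["THCCOOH"]                 # inactive metabolite
    return active / inactive

# Peak mean values reported above (ng/mL); note the peaks occur at
# different times, so this is a worked conversion, not a CIF at one instant.
ratio = active_inactive_molar_ratio(8.4, 12.3, 66.2)
print(round(ratio, 2))
```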
Abstract:
Purpose. The aim of this study was to identify new surfactants with low skin-irritant properties for use in pharmaceutical and cosmetic formulations, employing cell culture as an alternative method to in vivo testing. In addition, we sought to establish whether potential cytotoxic properties were related to the size of the counterions bound to the surfactants. Methods. Cytotoxicity was assessed in the mouse fibroblast cell line 3T6 and the human keratinocyte cell line NCTC 2544, using the MTT assay and uptake of the vital dye neutral red 24 h after dosing (NRU). Results. Lysine-derivative surfactants showed higher IC50s than did commercial anionic irritant compounds such as sodium dodecyl sulphate, proving to be no more harmful than amphoteric betaines. The aggressiveness of the surfactants depended upon the size of their constituent counterions: surfactants associated with lighter counterions showed proportionally higher aggressiveness than those with heavier ones. Conclusions. Synthetic lysine-derivative anionic surfactants are less irritant than commercial surfactants such as sodium dodecyl sulphate and hexadecyltrimethylammonium bromide, and are similar to betaines. These surfactants may offer promising applications in pharmaceutical and cosmetic preparations, representing a potential alternative to commercial anionic surfactants as a result of their low irritancy potential.
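An IC50 can be read off a concentration-response curve by log-linear interpolation between the two points that bracket 50% viability (curve-fitting packages do this with a full Hill model; interpolation is the minimal version). The dose-response data below are invented:

```python
import math

# IC50 by log-linear interpolation on a viability curve (MTT-style data).

def ic50(concs, viability):
    """concs ascending (e.g. ug/mL); viability as fractions of control."""
    points = list(zip(concs, viability))
    for (c1, v1), (c2, v2) in zip(points, points[1:]):
        if v1 >= 0.5 >= v2:  # bracketing pair around 50 % viability
            f = (v1 - 0.5) / (v1 - v2)
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    raise ValueError("curve does not cross 50 % viability")

# Hypothetical surfactant dose-response (fractions of untreated control):
concs = [1, 10, 100, 1000]
viab = [0.95, 0.80, 0.40, 0.05]
print(round(ic50(concs, viab), 1))  # ~56.2
```

A higher IC50 means more of the compound is needed to kill half the cells, i.e. lower cytotoxicity, which is why the lysine derivatives' higher IC50s indicate milder surfactants.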
Abstract:
BACKGROUND: Human speech is greatly influenced by the speaker's affective state, such as sadness, happiness, grief, guilt, fear, anger, aggression, faintheartedness, shame, sexual arousal, and love, amongst others. Attentive listeners discover a lot about the affective state of their dialog partners with no great effort, and without having to talk about it explicitly during a conversation or on the phone. On the other hand, speech dysfunctions, such as slow, delayed or monotonous speech, are prominent features of affective disorders. METHODS: This project comprised four studies with healthy volunteers from Bristol (English: n = 117), Lausanne (French: n = 128), Zurich (German: n = 208), and Valencia (Spanish: n = 124). All samples were stratified according to gender, age, and education. The specific study design, with different types of spoken text along with repeated assessments at 14-day intervals, allowed us to estimate the 'natural' variation of speech parameters over time, and to analyze the sensitivity of speech parameters with respect to the form and content of spoken text. Additionally, our project included a longitudinal self-assessment study with university students from Zurich (n = 18) and unemployed adults from Valencia (n = 18) in order to test the feasibility of the speech analysis method in home environments. RESULTS: The normative data showed that speaking behavior and voice sound characteristics can be quantified in a reproducible and language-independent way. The high resolution of the method was verified by a computerized assignment of speech parameter patterns to languages at a success rate of 90%, while the rate of correct assignment to texts was 70%. In the longitudinal self-assessment study we calculated individual 'baselines' for each test person along with deviations thereof. The significance of such deviations was assessed through the normative reference data.
CONCLUSIONS: Our data provided gender-, age-, and language-specific thresholds that allow one to reliably distinguish between 'natural fluctuations' and 'significant changes'. The longitudinal self-assessment study with repeated assessments at 1-day intervals over 14 days demonstrated the feasibility and efficiency of the speech analysis method in home environments, thus clearing the way to a broader range of applications in psychiatry. © 2014 S. Karger AG, Basel.
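Flagging a 'significant change' against an individual baseline amounts to a z-score of the new measurement relative to the subject's own mean and spread, with the threshold calibrated on normative data. The parameter, values, and cutoff below are illustrative, not the study's:

```python
import statistics

# Deviation of a new speech parameter from an individual baseline,
# expressed as a z-score; |z| above a chosen threshold flags a change.

def baseline_z(history, new_value):
    mu = statistics.mean(history)
    sd = statistics.stdev(history)   # sample standard deviation
    return (new_value - mu) / sd

# Hypothetical speaking-rate baseline (syllables/s) over 14 daily sessions:
history = [4.1, 4.0, 4.2, 4.1, 3.9, 4.0, 4.1, 4.2, 4.0, 4.1,
           4.0, 3.9, 4.1, 4.0]
z = baseline_z(history, new_value=3.5)
print(round(z, 1), abs(z) > 2.0)  # large negative z -> flagged slowing
```

The key design point is that the reference distribution is the individual's own history; normative data then tell you which |z| cutoff separates natural fluctuation from clinically meaningful change for that gender, age, and language.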
Abstract:
The purpose of this work was to evaluate the ability of 80 MHz ultrasonography to differentiate intra-retinal layers and quantitatively assess photoreceptor dystrophy in small animal models. Four groups of 10 RCS rats each (five dystrophic and five controls) were explored at 25, 35, 45 and 55 days post-natal (PN). A series of retina cross-sections were obtained ex vivo from outside intact eyes using an 80 MHz three-dimensional ultrasound backscatter microscope (20-microm axial resolution). Ultrasound features of the normal retina were correlated with those of the corresponding histology, and thickness measurements of the photoreceptor segment and nuclear layers were performed in all groups. To show the ability of 80 MHz ultrasonography to distinguish retinal degeneration in vivo, one RCS rat was explored at 25 and 55 days post-natal. The ultrasound image of the normal retina displayed four distinct layers marked by reflections at neurites/nuclei interfaces and made it possible to differentiate the photoreceptor segment and nuclear layers. The backscatter level from the retina was shown to be related to the size, density and organization of the intra-layer structure. Ultrasound thickness measurements correlated highly with histologic measurements. A thinning (p<0.05) of the outer nuclear layer (ONL) was detected over time for controls and was attributed to retinal maturation. Retinal degeneration started at PN35 and resulted in a more pronounced ONL thinning (p<0.05) over time. ONL degeneration was accompanied by segment layer thickening (p<0.05) at PN35 and thinning thereafter. These changes may indicate accumulation of outer segment debris at PN35 followed by progressive destruction. In vivo images of rat intra-retinal structure showed the ability of the method to distinguish the photoreceptor layer changes. Our results indicate that 80 MHz ultrasonography reveals intra-retinal layers and is sensitive to age-related and degenerative changes of photoreceptors.
This technique has great potential for the in vivo follow-up of retinal dystrophy and of therapeutic effects.
Abstract:
The goal of this work is to develop a method to objectively compare the performance of a digital and a screen-film mammography system in terms of image quality. The method takes into account the dynamic range of the image detector, the detection of high and low contrast structures, the visualisation of the images and the observer response. A test object, designed to represent a compressed breast, was constructed from various tissue equivalent materials ranging from purely adipose to purely glandular composition. Different areas within the test object permitted the evaluation of low and high contrast detection, spatial resolution and image noise. All the images (digital and conventional) were captured using a CCD camera to include the visualisation process in the image quality assessment. A mathematical model observer (non-prewhitening matched filter), that calculates the detectability of high and low contrast structures using spatial resolution, noise and contrast, was used to compare the two technologies. Our results show that for a given patient dose, the detection of high and low contrast structures is significantly better for the digital system than for the conventional screen-film system studied. The method of using a test object with a large tissue composition range combined with a camera to compare conventional and digital imaging modalities can be applied to other radiological imaging techniques. In particular it could be used to optimise the process of radiographic reading of soft copy images.
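The non-prewhitening matched-filter observer scores detectability from the expected signal profile and the image noise; in the special case of white noise its SNR reduces to the square root of the signal energy over the noise standard deviation. A toy sketch under that white-noise assumption (the paper's model additionally folds in the measured noise power spectrum and system MTF, which is not shown here):

```python
import math

# Non-prewhitening matched filter SNR for a known signal in white noise:
# SNR = sum(s_i^2) / sqrt(sigma^2 * sum(s_i^2)) = sqrt(sum(s_i^2)) / sigma

def npw_snr(signal, sigma):
    energy = sum(s * s for s in signal)
    return math.sqrt(energy) / sigma

# Toy 1-D low-contrast disc profile at two noise levels:
disc = [0.0, 0.5, 1.0, 1.0, 1.0, 0.5, 0.0]
print(round(npw_snr(disc, sigma=0.2), 2),
      round(npw_snr(disc, sigma=0.4), 2))  # SNR scales inversely with noise
```

This is why, at fixed dose, a detector with lower noise or better preserved contrast yields higher model-observer detectability for the same test-object structure.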
Abstract:
This work deals with the elaboration of flood hazard maps. These maps reflect the areas prone to floods based on the effects of Hurricane Mitch in the Municipality of Jucuarán, El Salvador. Stream channels located in the coastal range in the SE of El Salvador flow into the Pacific Ocean and generate alluvial fans. Communities that inhabit these fans are often affected by floods. The geomorphology of these stream basins is associated with small areas, steep slopes, well-developed regolith and extensive deforestation. These features play a key role in the generation of flash floods. This zone lacks comprehensive rainfall data and gauging stations. The most detailed topographic maps are on a scale of 1:25 000. Given that this scale was not sufficiently detailed, we used aerial photographs enlarged to a scale of 1:8000. The effects of Hurricane Mitch mapped on these photographs were regarded as the reference event. Flood maps have a dual purpose: (1) community emergency plans, and (2) regional land-use planning carried out by local authorities. The geomorphological method is based on mapping the geomorphological evidence (alluvial fans, preferential stream channels, erosion and sedimentation, man-made terraces). Following the interpretation of the photographs, this information was validated in the field and complemented by eyewitness reports such as the height of water and flow typology. In addition, community workshops were organized to obtain information about the evolution and the impact of the phenomena. The superimposition of this information enabled us to obtain a comprehensive geomorphological map. Another aim of the study was the calculation of the peak discharge using the Manning and paleohydraulic methods and estimates based on geomorphological criteria. The results were compared with those obtained using the rational method. Significant differences in the order of magnitude of the calculated discharges were noted.
The rational method underestimated the results owing to short and discontinuous periods of rainfall data, with the result that probabilistic equations cannot be applied. The Manning method yields a wide range of results because of its dependence on the roughness coefficient. The paleohydraulic method yielded higher values than the rational and Manning methods; however, it is possible that bigger boulders could have been moved had they existed. These discharge values are lower than those obtained by the geomorphological estimates, i.e. much closer to reality. The flood hazard maps were derived from the comprehensive geomorphological map. Three categories of hazard were established (very high, high and moderate) using flood energy, water height and flow velocity deduced from geomorphological evidence and eyewitness reports.
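The Manning estimate above follows the standard open-channel formula Q = (1/n) A R^(2/3) S^(1/2), and its sensitivity to the roughness coefficient n is easy to see numerically. The channel geometry below is invented for illustration:

```python
# Manning's equation for peak discharge in an open channel:
#   Q = (1/n) * A * R**(2/3) * S**0.5
# A: cross-section area (m^2), R: hydraulic radius (m),
# S: energy slope (m/m), n: Manning roughness coefficient (s/m^(1/3)).

def manning_discharge(area, radius, slope, n):
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * slope ** 0.5

# Hypothetical channel: A = 10 m^2, R = 1 m, S = 1 %
for n in (0.03, 0.05, 0.08):   # smooth to very rough natural channel
    print(n, round(manning_discharge(10.0, 1.0, 0.01, n), 1), "m^3/s")
```

For the same cross-section, plausible n values for rough natural channels span more than a factor of two in discharge, which is exactly the "wide range of results" the text attributes to the method.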
Abstract:
OBJECTIVES: To assess the accuracy of high-resolution (HR) magnetic resonance imaging (MRI) in diagnosing early-stage optic nerve (ON) invasion in a retinoblastoma cohort. METHODS: This IRB-approved, prospective multicenter study included 95 patients (55 boys, 40 girls; mean age, 29 months). 1.5-T MRI was performed using surface coils before enucleation, including spin-echo unenhanced and contrast-enhanced (CE) T1-weighted sequences (slice thickness, 2 mm; pixel size <0.3 × 0.3 mm(2)). Images were read by five neuroradiologists blinded to histopathologic findings. ROC curves were constructed with AUC assessment using a bootstrap method. RESULTS: Histopathology identified 41 eyes without ON invasion and 25 with prelaminar, 18 with intralaminar and 12 with postlaminar invasion. All but one were postoperatively classified as stage I by the International Retinoblastoma Staging System. The accuracy of CE-T1 sequences in identifying ON invasion was limited (AUC = 0.64; 95 % CI, 0.55 - 0.72) and not confirmed for postlaminar invasion diagnosis (AUC = 0.64; 95 % CI, 0.47 - 0.82); high specificities (range, 0.64 - 1) and negative predictive values (range, 0.81 - 0.97) were confirmed. CONCLUSION: HR-MRI with surface coils is recommended to appropriately select retinoblastoma patients eligible for primary enucleation without the risk of IRSS stage II but cannot substitute for pathology in differentiating the first degrees of ON invasion. KEY POINTS: • HR-MRI excludes advanced optic nerve invasion with high negative predictive value. • HR-MRI accurately selects patients eligible for primary enucleation. • Diagnosis of early stages of optic nerve invasion still relies on pathology. • Several physiological MR patterns may mimic optic nerve invasion.
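Bootstrap AUC confidence bounds like those above can be sketched with a pairwise-comparison AUC and case resampling; the labels and reader scores below are synthetic, not the study's:

```python
import random

# AUC via pairwise comparison, plus a percentile bootstrap CI.

def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    idx = list(range(len(labels)))
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(idx) for _ in idx]   # resample cases with replacement
        ls = [labels[i] for i in sample]
        ss = [scores[i] for i in sample]
        if 0 < sum(ls) < len(ls):                 # need both classes present
            stats.append(auc(ls, ss))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi

# Synthetic reader scores: invaded nerves (label 1) tend to score higher
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.5, 0.3, 0.2, 0.2, 0.1]
ci = bootstrap_auc_ci(labels, scores)
print(round(auc(labels, scores), 3), ci)
```

With only a few positive cases, as in the postlaminar subgroup, the resampled AUCs spread widely, which is why the reported CI there (0.47-0.82) straddles chance.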
Abstract:
OBJECTIVES: A survey was undertaken among Swiss occupational hygienists and other professionals to identify the different exposure assessment methods used, the contextual parameters observed and the uses, difficulties and possible developments of exposure models for field application. METHODS: A questionnaire was mailed to 121 occupational hygienists, all members of the Swiss Occupational Hygiene Society. A shorter questionnaire was also sent to registered occupational physicians and selected safety specialists. Descriptive statistics and multivariate analyses were performed. RESULTS: The response rate for occupational hygienists was 60%. The so-called expert judgement appeared to be the most widely used method, but its efficiency and reliability were both judged with very low scores. Long-term sampling was perceived as the most efficient and reliable method. Various determinants of exposure, such as emission rate and work activity, were often considered important, even though they were not included in the exposure assessment processes. Near field local phenomena determinants were also judged important for operator exposure estimation. CONCLUSION: Exposure models should be improved to integrate factors which are more easily accessible to practitioners. Descriptors of emission and local phenomena should also be included.