987 results for Threshold Limit Values
Abstract:
The main objective of this paper is to develop a methodology that takes into account the human factor extracted from the databases used by recommender systems, and that makes it possible to address the specific problems of prediction and recommendation. In this work, we propose to extract each user's scale of human values from the user database, in order to improve the suitability of recommendations in open environments such as recommender systems. For this purpose, the methodology is applied to the user's data after interaction with the system. The methodology is illustrated with a case study.
Abstract:
OBJECTIVE: Home blood pressure (BP) monitoring is recommended by several clinical guidelines and has been shown to be feasible in elderly persons. Wrist manometers have recently been proposed for such home BP measurement, but their accuracy has not been previously assessed in elderly patients. METHODS: Forty-eight participants (33 women and 15 men, mean age 81.3±8.0 years) had their BP measured with a wrist device with position sensor and an arm device in random order in a sitting position. RESULTS: Average BP measurements were consistently lower with the wrist than arm device for systolic BP (120.1±2.2 vs. 130.5±2.2 mmHg, P<0.001, means±SD) and diastolic BP (66.0±1.3 vs. 69.7±1.3 mmHg, P<0.001). Moreover, a 10 mmHg or greater difference between the arm and wrist device was observed in 54.2 and 18.8% of systolic and diastolic measures, respectively. CONCLUSION: Compared with the arm device, the wrist device with position sensor systematically underestimated systolic as well as diastolic BP. The magnitude of the difference is clinically significant and questions the use of the wrist device to monitor BP in elderly persons. This study points to the need to validate BP measuring devices in all age groups, including in elderly persons.
Abstract:
Objectives. The goal of this study was to evaluate a T2-mapping sequence by (1) measuring intra- and inter-observer reproducibility in healthy volunteers in two separate scanning sessions with a T2 reference phantom; and (2) measuring the mean T2 relaxation times by T2-mapping in infarcted myocardium in patients with subacute MI, comparing them with the gold standard, X-ray coronary angiography, and with results from healthy volunteers. Background. Myocardial edema is a consequence of tissue inflammation, as seen in myocardial infarction (MI). It can be visualized by cardiovascular magnetic resonance (CMR) imaging using the T2 relaxation time. T2-mapping is a quantitative methodology that has the potential to address the limitations of conventional T2-weighted (T2W) imaging. Methods. The T2-mapping protocol used for all MRI scans consisted of a radial gradient echo acquisition with a lung-liver navigator for free-breathing acquisition and affine image registration. Mid-basal short axis slices were acquired. For T2-map analysis, two observers semi-automatically segmented the left ventricle into 6 segments according to the AHA standards. Eight healthy volunteers (age: 27 ± 4 years; 62.5% male) were scanned in two separate sessions. Seventeen patients (age: 61.9 ± 13.9 years; 82.4% male) with subacute STEMI (70.6%) or NSTEMI underwent a T2-mapping scanning session. Results. In healthy volunteers, the mean inter- and intra-observer variability over the entire short axis slice (segments 1 to 6) was 0.1 ms (95% confidence interval (CI): -0.4 to 0.5, p = 0.62) and 0.2 ms (95% CI: -2.8 to 3.2, p = 0.94), respectively. T2 relaxation time measurements with and without the phantom correction yielded an average difference of 3.0 ± 1.1% and 3.1 ± 2.1% (p = 0.828), respectively. In patients, the inter-observer variability over the entire short axis slice (S1-S6) was 0.3 ms (95% CI: -1.8 to 2.4, p = 0.85).
Edema location as determined by T2-mapping and the coronary artery occlusion as determined on X-ray coronary angiography correlated in 78.6% of cases, but in only 60% of apical infarcts. All but one of the maximal T2 values in infarct patients were greater than the upper limit of the 95% confidence interval for normal myocardium. Conclusions. The T2-mapping methodology is accurate in detecting infarcted, i.e. edematous, tissue in patients with subacute infarcts. This study further demonstrated that the technique is reproducible and robust enough to be used on a segmental basis for edema detection without the need for a phantom-derived T2 correction factor. This new quantitative T2-mapping technique is promising and should allow serial follow-up studies in patients to improve our knowledge of infarct pathophysiology and infarct healing, and to assess novel treatment strategies for acute infarction.
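The classification rule described above (a segment is abnormal when its maximal T2 exceeds the upper bound of the normal range) can be sketched as follows. This is an illustrative reading of the abstract, assuming the upper limit is taken as mean + 1.96 SD of healthy-volunteer T2 values; the function name and inputs are hypothetical.

```python
import numpy as np

def edema_flag(max_t2_ms, normal_t2_ms):
    """Flag segments as edematous when max T2 exceeds the upper bound
    of the normal 95% interval (mean + 1.96 SD of healthy values).
    Inputs are illustrative; the study's exact interval definition
    may differ."""
    normal = np.asarray(normal_t2_ms, float)
    upper = normal.mean() + 1.96 * normal.std(ddof=1)
    return np.asarray(max_t2_ms, float) > upper
```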
Abstract:
A low arousal threshold is believed to predispose to breathing instability during sleep. The present authors hypothesised that trazodone, a nonmyorelaxant sleep-promoting agent, would increase the effort-related arousal threshold in obstructive sleep apnoea (OSA) patients. In total, nine OSA patients (mean±SD age 49±9 yrs, apnoea/hypopnoea index 52±32 events/h) were studied on 2 nights, one with trazodone 100 mg and one with a placebo, in a double-blind randomised fashion. While patients received continuous positive airway pressure (CPAP), repeated arousals were induced: 1) by increasing inspired CO2; and 2) by stepwise decreases in CPAP level. Respiratory effort was measured with an oesophageal balloon. End-tidal CO2 tension (PET,CO2) was monitored with a nasal catheter. During trazodone nights, compared with placebo nights, arousals occurred at a higher PET,CO2 level (mean±SD 7.30±0.57 versus 6.62±0.64 kPa (54.9±4.3 versus 49.8±4.8 mmHg), respectively). When arousals were triggered by increasing the inspired CO2 level, the maximal oesophageal pressure swing was greater (19.4±4.0 versus 13.1±4.9 cmH2O) and the oesophageal pressure nadir before the arousals was lower (-5.1±4.7 versus -0.38±4.2 cmH2O) with trazodone. When arousals were induced by stepwise CPAP drops, the maximal oesophageal pressure swings before the arousals did not differ. Trazodone 100 mg increased the effort-related arousal threshold in response to hypercapnia in obstructive sleep apnoea patients and allowed them to tolerate higher CO2 levels.
Abstract:
There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros", which result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same occurs if we replace the zero values with a small amount using non-parametric or parametric techniques (imputation). The method that we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". We therefore take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
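The imputation procedure described above can be sketched as follows. This is a minimal reading of the abstract, assuming a log-log regression (since metal grades are described as lognormal); the function name, array layout, and regression form are illustrative, not the authors' exact implementation.

```python
import numpy as np

def impute_rounded_zeros(cu, mo, detection_limit):
    """Impute below-detection Mo values from correlated Cu values.

    Fits a regression on the lower quartile of the real (detected)
    Mo values, as the abstract describes, then predicts Mo for
    samples reported below the detection limit, so each imputed
    value depends on its sample's Cu rather than being a constant.
    """
    cu, mo = np.asarray(cu, float), np.asarray(mo, float)
    detected = mo >= detection_limit
    # lower quartile of the detected Mo values
    q1 = np.quantile(mo[detected], 0.25)
    low = detected & (mo <= q1)
    # log-log fit, assuming approximately lognormal grades
    slope, intercept = np.polyfit(np.log(cu[low]), np.log(mo[low]), 1)
    imputed = mo.copy()
    imputed[~detected] = np.exp(intercept + slope * np.log(cu[~detected]))
    return imputed
```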
Abstract:
The pharmacokinetic profile of imatinib has been assessed in healthy subjects and in population studies among thousands of patients with CML or GIST. Imatinib is rapidly and extensively absorbed from the GI tract, reaching a peak plasma concentration (Cmax) within 1-4 h following administration. Imatinib bioavailability is high (98%) and independent of food intake. Imatinib undergoes rapid and extensive distribution into tissues, with minimal penetration into the central nervous system. In the circulation, it is approximately 95% bound to plasma proteins, principally α1-acid glycoprotein (AGP) and albumin. Imatinib undergoes metabolism in the liver via the cytochrome P450 enzyme system (CYP), with CYP3A4 being the main isoenzyme involved. The N-desmethyl metabolite CGP74588 is the major circulating active metabolite. The typical elimination half-life for imatinib is approximately 14-22 h. Imatinib is characterized by large inter-individual pharmacokinetic variability, which is reflected in the wide spread of concentrations observed under standard dosage. Besides adherence, several factors have been shown to influence this variability, especially demographic characteristics (sex, age, body weight and disease diagnosis), blood count characteristics, enzyme activity (mainly CYP3A4), drug interactions, activity of efflux transporters and plasma levels of AGP. Additionally, recent retrospective studies have shown that drug exposure, reflected in either the area under the concentration-time curve (AUC) or, more conveniently, the trough level (Cmin), correlates with treatment outcomes. Increased toxicity has been associated with high plasma levels, and impaired clinical efficacy with low plasma levels. While no upper concentration limit has been formally established, a lower limit for imatinib Cmin of about 1000 ng/mL has been proposed repeatedly for improving outcomes in CML and GIST patients.
Imatinib is licensed for use in chronic phase CML and GIST at a fixed dose of 400 mg once daily (600 mg in some other indications) despite substantial pharmacokinetic variability caused by both genetic and acquired factors. The dose can be modified on an individual basis in cases of insufficient response or substantial toxic effects. Imatinib would, however, meet traditional criteria for a therapeutic drug monitoring (TDM) program: long-term therapy, measurability, high inter-individual but restricted intra-individual variability, limited pharmacokinetic predictability, effect of drug interactions, consistent association between concentration and response, suggested therapeutic threshold, reversibility of effect and absence of early markers of efficacy and toxic effects. Large-scale, evidence-based assessments of drug concentration monitoring are therefore still warranted for the personalization of imatinib treatment.
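To make the link between dose, half-life and trough level concrete, a back-of-envelope steady-state calculation can be sketched. This assumes simple one-compartment kinetics with instantaneous absorption, which is a simplification; the clearance value used in the example is illustrative, not a reference pharmacokinetic parameter.

```python
import math

def steady_state_trough(dose_mg, clearance_l_per_h, half_life_h,
                        tau_h=24, bioavailability=0.98):
    """Rough steady-state trough (Cmin) for repeated oral dosing.

    One-compartment, bolus-like approximation: superposition of
    exponentially decaying doses at steady state, then decay over
    one dosing interval. All inputs are illustrative.
    """
    k = math.log(2) / half_life_h                  # elimination rate
    v = clearance_l_per_h / k                      # volume of distribution
    c_peak = (bioavailability * dose_mg / v) / (1 - math.exp(-k * tau_h))
    cmin_mg_per_l = c_peak * math.exp(-k * tau_h)
    return cmin_mg_per_l * 1000                    # ng/mL

# e.g. 400 mg once daily with an assumed clearance of 14 L/h
# and a half-life of 18 h lands in the high-hundreds ng/mL range
trough = steady_state_trough(400, 14.0, 18.0)
```

Such a sketch also illustrates why inter-individual variability in clearance translates directly into the wide spread of Cmin values that motivates therapeutic drug monitoring.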
Abstract:
Background: Excessive exposure to solar ultraviolet (UV) light is the main cause of most skin cancers in humans. Factors such as the increase of solar irradiation at ground level (anthropic pollution), the rise in standard of living (vacations in sunny areas), and (mostly) the development of outdoor activities have contributed to increased exposure. Thus, unsurprisingly, the incidence of skin cancers has increased over the last decades more than that of any other cancer. Melanoma is the most lethal cutaneous cancer, while cutaneous carcinomas are the most common cancer type worldwide. UV exposure depends on environmental as well as individual factors related to activity. The influence of individual factors on exposure among building workers was investigated in a previous study. Posture and orientation were found to account for at least 38% of the total variance of relative individual exposure. A high variance of short-term exposure was observed between different body locations, indicating the occurrence of intense, subacute exposures. It was also found that effective short-term exposure ranged between 0 and 200% of ambient irradiation, suggesting that ambient irradiation is a poor predictor of effective exposure. Various dosimetric techniques make it possible to assess individual effective exposure, but dosimetric measurements remain tedious and tend to be situation-specific. In fact, individual factors (exposure time, body posture and orientation in the sun) often limit the extrapolation of exposure results to similar activities conducted in other conditions. Objective: The research presented in this paper aims to develop and validate a predictive tool of effective individual exposure to solar UV. Methods: Existing computer graphics techniques (3D rendering) were adapted to reflect solar exposure conditions and calculate short-term anatomical doses. A numerical model, represented as a 3D triangular mesh, is used to represent the exposed body.
The amount of solar energy received by each triangle is calculated, taking into account irradiation intensity, incidence angle and possible shadowing from other body parts. The model takes into account the three components of the solar irradiation (direct, diffuse and albedo) as well as the orientation and posture of the body. Field measurements were carried out using a forensic mannequin at the Payerne MeteoSwiss station. Short-term dosimetric measurements were performed in 7 anatomical locations for 5 body postures. Field results were compared to the prediction obtained from the numerical model. Results: The best match between prediction and measurements was obtained for upper body parts such as the shoulders (ratio modelled/measured: mean = 1.21, SD = 0.34) and neck (mean = 0.81, SD = 0.32). Small curved body parts such as the forehead (mean = 6.48, SD = 9.61) exhibited poorer matching. The prediction is less accurate for complex postures such as kneeling (mean = 4.13, SD = 8.38) compared to standing up (mean = 0.85, SD = 0.48). The values obtained from the dosimeters and the ones computed from the model are globally consistent. Conclusion: Although further development and validation are required, these results suggest that effective exposure could be predicted for a given activity (work or leisure) in various ambient irradiation conditions. Using a generic modelling approach is of high interest in terms of implementation costs as well as predictive and retrospective capabilities.
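The per-triangle calculation described above can be sketched for the direct-beam component alone. This is a minimal illustration, assuming a unit sun direction vector and ignoring the diffuse, albedo and inter-part shadowing terms that the paper's full model includes; names and inputs are hypothetical.

```python
import numpy as np

def triangle_direct_dose(vertices, triangles, sun_dir,
                         direct_irradiance_w_m2, seconds):
    """Direct-beam UV dose (J) per triangle of a body mesh.

    vertices: (N, 3) coordinates; triangles: (M, 3) vertex indices;
    sun_dir: unit vector pointing toward the sun. Shadowing,
    diffuse sky and ground albedo are deliberately omitted here.
    """
    v = np.asarray(vertices, float)
    t = np.asarray(triangles, int)
    e1 = v[t[:, 1]] - v[t[:, 0]]
    e2 = v[t[:, 2]] - v[t[:, 0]]
    n = np.cross(e1, e2)                       # normal; |n| = 2 * area
    area = 0.5 * np.linalg.norm(n, axis=1)
    unit_n = n / (2.0 * area[:, None])
    # cosine of the incidence angle, clamped: back-facing gets zero
    cos_inc = np.clip(unit_n @ np.asarray(sun_dir, float), 0.0, None)
    return direct_irradiance_w_m2 * cos_inc * area * seconds
```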
Abstract:
The relative contributions of Alzheimer disease (AD) and vascular lesion burden to the occurrence of cognitive decline are more difficult to define in the oldest-old than they are in younger cohorts. To address this issue, we examined 93 prospectively documented autopsy cases aged 90 to 103 years with various degrees of AD lesions, lacunes, and microvascular pathology. Cognitive assessment was performed prospectively using the Clinical Dementia Rating scale. Neuropathologic evaluation included Braak neurofibrillary tangle (NFT) and β-amyloid (Aβ) protein deposition staging and bilateral semiquantitative assessment of vascular lesions. Statistics included regression models and receiver operating characteristic analyses. Braak NFTs, Aβ deposition, and cortical microinfarcts (CMIs) predicted 30% of Clinical Dementia Rating variability and 49% of the presence of dementia. Braak NFT and CMI thresholds yielded 0.82 sensitivity, 0.91 specificity, and 0.84 correct classification rates for dementia. Using these threshold values, we could distinguish 3 groups of demented cases and propose criteria for the neuropathologic definition of mixed dementia, pure vascular dementia, and AD in very old age. Braak NFT staging and severity of CMI allow most demented cases in the oldest-old to be defined. Most importantly, we identified single cutoff scores for these variables that could be used in the future to formulate neuropathologic criteria for mixed dementia in this age group.
Abstract:
The neo-liberal capitalist ideology has come under heavy fire, with anecdotal evidence indicating a link between these values and unethical behavior. Academic institutions reflect social values and act as socializing agents for the young. Can this explain the high and increasing rates of cheating that currently prevail in education? Our first chapter examines the question of whether self-enhancement values of power and achievement, the individual-level equivalent of neo-liberal capitalist values, predict positive attitudes towards cheating. Furthermore, we explore the mediating role of motivational factors. Results of four studies reveal that self-enhancement value endorsement predicts the adoption of performance-approach goals, a relationship mediated by introjected regulation, namely the desire for social approval, and that self-enhancement value endorsement also predicts the condoning of cheating, a relationship mediated by performance-approach goal adoption. However, self-transcendence values prescribed by a normatively salient source have the potential to reduce the link between self-enhancement value endorsement and attitudes towards cheating. Normative assessment constitutes a key tool used by academic institutions to socialize young people to accept the competitive, meritocratic nature of a society driven by a neo-liberal capitalist ideology. As such, the manifest function of grades is to motivate students to work hard and to buy into the competitive ethos. Does normative assessment fulfill these functions? Our second chapter explores the reward-intrinsic motivation question in the context of grading, arguably a high-stakes reward. In two experiments, the relative capacity of graded high performance, as compared to the task autonomy experienced in an ungraded task, to predict post-task intrinsic motivation is assessed. Results show that whilst graded task performance predicts post-task appreciation, it fails to predict ongoing motivation.
However, perceived autonomy experienced in the non-graded condition predicts both post-task appreciation and ongoing motivation. Our third chapter asks whether normative assessment inspires the spirit of competition in students. Results of three experimental studies reveal that expectation of a grade for a task, compared to no grade, induces greater adoption of performance-avoidance, but not performance-approach, goals. Experiment 3 provides an explanatory mechanism for this, showing that reduced autonomous motivation experienced in previous graded tasks mediates the relationship between grading and the adoption of performance-avoidance goals in a subsequent task. The above results, when combined, provide evidence of the deleterious effects of self-enhancement values and the associated practice of normative assessment in school on student motivation, goals and ethics. We conclude by using value and motivation theory to explore solutions to this problem.
Abstract:
We present the derivation of the continuous-time equations governing the limit dynamics of discrete-time reaction-diffusion processes defined on heterogeneous metapopulations. We show that, when a rigorous time limit is performed, the lack of an epidemic threshold in the spread of infections is not limited to metapopulations with a scale-free architecture, as had been predicted from dynamical equations in which reaction and diffusion occur sequentially in time.
Abstract:
Purpose: To investigate the effect of incremental increases in intraocular straylight on threshold measurements made by three modern forms of perimetry: Standard Automated Perimetry (SAP) using Octopus (Dynamic, G-Pattern), Pulsar Perimetry (PP) (TOP, 66 points) and the Moorfields Motion Displacement Test (MDT) (WEBS, 32 points). Methods: Four healthy young observers were recruited (mean age 26 yrs [25 yrs, 28 yrs]; refractive correction [+2 D, -4.25 D]). Five white opacity filters (WOF), each scattering light by a different amount, were used to create incremental increases in intraocular straylight (IS). Resultant IS values were measured with each WOF and at baseline (no WOF) for each subject using a C-Quant Straylight Meter (Oculus, Wetzlar, Germany). A 25-yr-old has an IS value of ~0.85 log(s); an increase of 40% in IS to 1.2 log(s) corresponds to the physiological value of a 70-yr-old. Each WOF created an increase in IS of between 10 and 150% from baseline, ranging from effects similar to normal aging to those found with considerable cataract. Each subject underwent 6 test sessions over a 2-week period; each session consisted of the 3 perimetric tests using one of the five WOFs or baseline (both instrument and filter were randomised). Results: The reduction in sensitivity from baseline was calculated. A two-way ANOVA on mean change in threshold (where subjects were treated as rows in the block and each increment in fog filters was treated as a column) was used to examine the effect of incremental increases in straylight. Both SAP (p<0.001) and Pulsar (p<0.001) were significantly affected by increases in straylight. The MDT (p=0.35) remained comparatively robust to increases in straylight. Conclusions: The Moorfields MDT measurement of threshold is robust to the effects of additional straylight compared with SAP and PP.
Exact asymptotics and limit theorems for supremum of stationary chi-processes over a random interval
Abstract:
The availability of high resolution Digital Elevation Models (DEM) at a regional scale enables the analysis of topography with high levels of detail. Hence, a DEM-based geomorphometric approach becomes more accurate for detecting potential rockfall sources. Potential rockfall source areas are identified according to the slope angle distribution deduced from the high resolution DEM, crossed with other information extracted from geological and topographic maps in GIS format. The slope angle distribution can be decomposed into several Gaussian distributions that can be considered characteristic of morphological units: rock cliffs, steep slopes, footslopes and plains. Terrain is considered a potential rockfall source when its slope angle lies above an angle threshold, which is defined where the Gaussian distribution of the morphological unit "rock cliffs" becomes dominant over that of "steep slopes". In addition to this analysis, the cliff outcrops indicated by the topographic maps were added. Because these outcrops also contain "flat areas", only slope angle values above the mode of the Gaussian distribution of the morphological unit "steep slopes" were considered. An application of this method is presented over the entire Canton of Vaud (3200 km2), Switzerland. The results were compared with rockfall sources observed in the field and by orthophoto analysis in order to validate the method. Finally, the influence of the cell size of the DEM is inspected by applying the methodology over six different DEM resolutions.
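The threshold definition above (the angle where the "rock cliffs" Gaussian overtakes the "steep slopes" one) can be sketched once the two components of the decomposition are known. The weights, means and standard deviations below are purely illustrative, not the Canton of Vaud fit, and the mixture-fitting step itself is assumed to have been done beforehand.

```python
from scipy.optimize import brentq
from scipy.stats import norm

def source_angle_threshold(steep, cliff):
    """Slope angle (deg) above which the 'rock cliffs' Gaussian
    dominates the 'steep slopes' one.

    steep, cliff: (weight, mean_deg, sd_deg) for the two fitted
    morphological-unit components. The crossing point of the two
    weighted densities is found between the two means.
    """
    ws, ms, ss = steep
    wc, mc, sc = cliff

    def diff(x):
        return wc * norm.pdf(x, mc, sc) - ws * norm.pdf(x, ms, ss)

    return brentq(diff, ms, mc)

# illustrative components: steep slopes centred at 35°, cliffs at 60°
threshold = source_angle_threshold((0.6, 35, 8), (0.4, 60, 7))
```

Cells whose slope angle exceeds this threshold (and that fall on the relevant lithology) would then be mapped as potential rockfall sources.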
Abstract:
We present new analytical data of major and trace elements for the geological MPI-DING glasses KL2-G, ML3B-G, StHs6/80-G, GOR128-G, GOR132-G, BM90/21-G, T1-G, and ATHO-G. Different analytical methods were used to obtain a large spectrum of major and trace element data, in particular EPMA, SIMS, LA-ICPMS, and isotope dilution by TIMS and ICPMS. Altogether, more than 60 qualified geochemical laboratories worldwide contributed to the analyses, allowing us to present new reference and information values and their uncertainties (at 95% confidence level) for up to 74 elements. We complied with the recommendations for the certification of geological reference materials by the International Association of Geoanalysts (IAG). The reference values were derived from the results of 16 independent techniques, including definitive (isotope dilution), comparative bulk (e.g., INAA, ICPMS, SSMS) and microanalytical (e.g., LA-ICPMS, SIMS, EPMA) methods. Agreement between two or more independent methods and the use of definitive methods provided traceability to the fullest extent possible. We also present new and recently published data for the isotopic compositions of H, B, Li, O, Ca, Sr, Nd, Hf, and Pb. The results were mainly obtained by high-precision bulk techniques, such as TIMS and MC-ICPMS. In addition, LA-ICPMS and SIMS isotope data of B, Li, and Pb are presented.