985 results for Objective Monitoring
Abstract:
OBJECTIVE: To assess the association between microalbuminuria and ambulatory blood pressure monitoring findings in normotensive individuals with insulin-dependent diabetes mellitus. METHODS: Thirty-seven patients underwent determination of the urinary albumin excretion rate by radioimmunoassay and ambulatory blood pressure monitoring. Their mean age was 26.5±6.7 years, and the mean duration of their disease was 8 (1-34) years. Microalbuminuria was defined as urinary albumin excretion ≥20 and <200 µg/min in at least 2 of 3 urine samples. RESULTS: Nine (24.3%) patients were microalbuminuric. On ambulatory blood pressure monitoring, the microalbuminuric patients had higher mean pressure values, mainly systolic pressure during sleep, than the normoalbuminuric patients (120.1±8.3 vs 110.8±7.1 mmHg; p=0.007). Pressure load was also higher in the microalbuminuric individuals, mainly the systolic pressure load during wakefulness [6.3 (2.9-45.9) vs 1.6 (0-16)%; p=0.001]; this was the variable that best correlated with urinary albumin excretion (rS=0.61; p<0.001). A systolic pressure load >50% together with a diastolic pressure load >30% during sleep was associated with microalbuminuria (p=0.008). The pressure drop during sleep did not differ between the groups. CONCLUSION: Microalbuminuric normotensive insulin-dependent diabetic patients show greater mean pressure values and pressure loads on ambulatory blood pressure monitoring, and these variables correlate with urinary albumin excretion.
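The two key ABPM-derived quantities in this abstract, "pressure load" (the percentage of readings above a threshold) and the nocturnal pressure drop, can be computed directly from the recordings. A minimal sketch assuming the conventional definitions (the study does not give its exact formulas, and all readings below are hypothetical):

```python
# Illustrative sketch, not the study's code. "Pressure load" is taken here as the
# percentage of ABPM readings above a threshold; the nocturnal "dip" as the
# relative fall of the mean pressure during sleep. All numbers are hypothetical.

def pressure_load(readings, threshold):
    """Percentage of readings strictly above the threshold."""
    return 100.0 * sum(1 for r in readings if r > threshold) / len(readings)

def nocturnal_dip(mean_wake, mean_sleep):
    """Relative fall in mean pressure during sleep, in percent."""
    return 100.0 * (mean_wake - mean_sleep) / mean_wake

wake_sbp = [128, 135, 142, 131, 147, 139, 125, 150]   # hypothetical systolic readings
print(pressure_load(wake_sbp, threshold=140))          # share of readings > 140 mmHg
print(nocturnal_dip(mean_wake=131.0, mean_sleep=118.0))
```

A "non-dipper" pattern is usually defined as a nocturnal dip below 10%, which is why the dip is reported alongside the load.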
Abstract:
OBJECTIVE: To evaluate the relationship between 24-hour ambulatory arterial blood pressure monitoring and the prognosis of patients with advanced congestive heart failure. METHODS: We studied 38 patients with NYHA functional class IV congestive heart failure and analyzed left ventricular ejection fraction, diastolic diameter, and ambulatory blood pressure monitoring data. RESULTS: Twelve deaths occurred. Left ventricular ejection fraction (35.2±7.3%) and diastolic diameter (72.2±7.8 mm) were not correlated with survival. The mean 24-hour (SBP24), waking (SBPw), and sleeping (SBPs) systolic pressures of the surviving patients were higher than those of the deceased patients and were significant predictors of survival. Patients with mean SBP24, SBPw, and SBPs ≥105 mmHg had longer survival (p=0.002, p=0.01, and p=0.0007, respectively). Patients with a diastolic blood pressure sleep decrement (dip) and patients with a mean blood pressure dip ≤6 mmHg had longer survival (p=0.04 and p=0.01, respectively). In the multivariate analysis, SBPs was the only significant variable, with an odds ratio of 7.61 (CI: 1.56; 3704) (p=0.01): patients with mean SBP <105 mmHg were 7.6 times more likely to die than those with SBP ≥105 mmHg. CONCLUSION: Ambulatory blood pressure monitoring appears to be a useful method for evaluating patients with congestive heart failure.
Abstract:
OBJECTIVE: To assess the influence of the quality of sleep on the physiological nocturnal drop in blood pressure during ambulatory blood pressure monitoring. METHODS: We consecutively assessed ambulatory blood pressure monitoring, the degree of tolerance for the examination, and the quality of sleep in 168 patients with hypertension or with a suspected "white-coat" effect. The fall in blood pressure during sleep was analyzed together with a specific questionnaire and a visual analog scale of tolerance for ambulatory blood pressure monitoring, which were used to assess usual sleep and sleep on the day of the examination. Two specialists in sleep disturbances classified the patients into 2 groups: those with normal sleep and those with abnormal sleep. RESULTS: Fifty-nine (35%) patients comprised the abnormal sleep group. Findings regarding the quality of sleep on the day of ambulatory blood pressure monitoring differed from those regarding the quality of sleep on a usual day, as follows, respectively: total duration of sleep (-12.4±4.7 versus -42.2±14.9 minutes, P=0.02), latency of sleep (0.4±2.7 versus 17±5.1 minutes, P<0.001), number of awakenings (0.1±0.1 versus 1.35±0.3 times, P<0.001), and tolerance for ambulatory blood pressure monitoring (8±0.2 versus 6.7±0.35, P=0.035). An abnormal drop in blood pressure during sleep occurred in 20 (18%) patients in the normal sleep group and in 14 (24%) patients in the abnormal sleep group, P=0.53. CONCLUSION: Ambulatory blood pressure monitoring causes sleep disturbances in some patients, and a positive association between quality of sleep and tolerance for the examination was observed.
Abstract:
Background: The importance of measuring blood pressure before morning micturition and in the afternoon, while working, is yet to be established in relation to the accuracy of home blood pressure monitoring (HBPM). Objective: To compare two HBPM protocols, considering 24-hour ambulatory blood pressure monitoring (wakefulness ABPM) as the gold standard, with measurements taken before morning micturition (BM) and in the afternoon (AM), for the best diagnosis of systemic arterial hypertension (SAH), and to assess their association with prognostic markers. Methods: After undergoing 24-hour wakefulness ABPM, 158 participants (84 women) were randomized to 3- or 5-day HBPM. Two variations of the 3-day protocol were considered: with measurements taken before morning micturition and in the afternoon (BM+AM); and with post-morning-micturition and evening measurements (PM+EM). All patients underwent echocardiography (for left ventricular hypertrophy - LVH) and urinary albumin measurement (for microalbuminuria - MAU). Results: Kappa statistics for the diagnosis of SAH between wakefulness ABPM and standard 3-day HBPM, 3-day HBPM (BM+AM), 3-day HBPM (PM+EM), and 5-day HBPM were 0.660, 0.638, 0.348, and 0.387, respectively. The sensitivity of (BM+AM) versus (PM+EM) was 82.6% vs 71%, and the specificity 84.8% vs 74%. The positive and negative predictive values were 69.1% vs 40% and 92.2% vs 91.2%, respectively. The intraclass correlations for the diagnosis of LVH and MAU for (BM+AM) vs (PM+EM) were 0.782 vs 0.474 and 0.511 vs 0.276, respectively. Conclusions: The 3-day HBPM protocol including measurements taken before morning micturition and during work in the afternoon showed the best agreement with the SAH diagnosis and the best association with prognostic markers.
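The agreement and accuracy measures reported in this abstract (kappa, sensitivity, specificity, predictive values) all derive from a single 2x2 table of the HBPM diagnosis against the wakefulness-ABPM gold standard. A hedged sketch of these computations; the counts are not from the paper but were chosen so the results approximate the (BM+AM) figures reported:

```python
# Illustrative sketch, not the study's code. Counts (tp, fp, fn, tn) are
# hypothetical, chosen so the derived measures approximate the (BM+AM)
# figures in the abstract (sens 82.6%, spec 84.8%, PPV 69.1%, NPV 92.2%).

def diagnostics(tp, fp, fn, tn):
    """Diagnostic accuracy and Cohen's kappa from a 2x2 confusion table."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                 # true positives / all diseased
    spec = tn / (tn + fp)                 # true negatives / all healthy
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    po = (tp + tn) / n                    # observed agreement
    # chance agreement from the row/column marginals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (po - pe) / (1 - pe)          # agreement beyond chance
    return {"sensitivity": sens, "specificity": spec,
            "ppv": ppv, "npv": npv, "kappa": kappa}

print(diagnostics(tp=38, fp=17, fn=8, tn=95))
```

With these counts the kappa comes out near the 0.638 reported for the (BM+AM) protocol, illustrating why moderate kappa can coexist with high negative predictive value when hypertension prevalence is below 50%.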
Abstract:
Within the last few years, a new type of instrument called the Terrestrial Laser Scanner (TLS) has entered the commercial market. These devices make it possible to obtain a completely new type of spatial, three-dimensional data describing the object of interest. TLS instruments generate data that need special treatment, and the appearance of this technique has made it possible to monitor deformations of very large objects, such as the landslides investigated here, at a new level of quality. This change is especially visible in the size and number of details that can be observed with the new method. In this context, the work presented here is oriented towards the recognition and characterization of the raw data received from TLS instruments, as well as the processing phases and the tools and techniques they require. The main objectives are to define and recognize the problems related to the use of TLS data, to characterize the quality of a single point generated by TLS, to describe and investigate a TLS processing approach for landslide deformation measurements that yields a 3D deformation characteristic, and finally to validate the obtained results. These objectives are addressed through bibliographic study and research work, followed by several experiments supporting the conclusions.
Abstract:
OBJECTIVE: To examine the accuracy of brain multimodal monitoring - consisting of intracranial pressure, brain tissue PO2, and cerebral microdialysis - in detecting cerebral hypoperfusion in patients with severe traumatic brain injury. DESIGN: Prospective single-center study. PATIENTS: Patients with severe traumatic brain injury. SETTING: Medico-surgical ICU, university hospital. INTERVENTION: Intracranial pressure, brain tissue PO2, and cerebral microdialysis monitoring (right frontal lobe, apparently normal tissue) combined with cerebral blood flow measurements using perfusion CT. MEASUREMENTS AND MAIN RESULTS: Cerebral blood flow was measured using perfusion CT in the tissue area around the intracranial monitoring (regional cerebral blood flow) and in bilateral supraventricular brain areas (global cerebral blood flow) and was matched to cerebral physiologic variables. The accuracy of intracranial monitoring in predicting cerebral hypoperfusion (defined as an oligemic regional cerebral blood flow <35 mL/100 g/min) was examined using areas under the receiver-operating characteristic curves. Thirty perfusion CT scans (median, 27 hr [interquartile range, 20-45] after traumatic brain injury) were performed on 27 patients (age, 39 yr [24-54 yr]; Glasgow Coma Scale, 7 [6-8]; 24/27 [89%] with diffuse injury). Regional cerebral blood flow correlated significantly with global cerebral blood flow (Pearson r = 0.70, p < 0.01). Compared with normal regional cerebral blood flow (n = 16), low regional cerebral blood flow (n = 14) measurements had a higher proportion of samples with intracranial pressure more than 20 mm Hg (13% vs 30%), brain tissue PO2 less than 20 mm Hg (9% vs 20%), cerebral microdialysis glucose less than 1 mmol/L (22% vs 57%), and lactate/pyruvate ratio more than 40 (4% vs 14%; all p < 0.05). Compared with intracranial pressure monitoring alone (area under the receiver-operating characteristic curve, 0.74 [95% CI, 0.61-0.87]), monitoring intracranial pressure + brain tissue PO2 (area under the curve, 0.84 [0.74-0.93]) or intracranial pressure + brain tissue PO2 + cerebral microdialysis (area under the curve, 0.88 [0.79-0.96]) was significantly more accurate in predicting low regional cerebral blood flow (both p < 0.05). CONCLUSION: Brain multimodal monitoring - including intracranial pressure, brain tissue PO2, and cerebral microdialysis - is more accurate than intracranial pressure monitoring alone in detecting cerebral hypoperfusion at the bedside in patients with severe traumatic brain injury and predominantly diffuse injury.
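The areas under the receiver-operating characteristic curve reported here summarize how well a monitored variable (or a combination) separates scans with low versus normal regional cerebral blood flow. A nonparametric AUC equals the Mann-Whitney probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch of that computation; the scores below are invented, not study data:

```python
# Illustrative sketch, not the study's analysis. A nonparametric ROC AUC is
# the probability that a random positive case outranks a random negative
# case, with ties counted as one-half. All scores below are hypothetical.

def auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of the area under the ROC curve."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical combined "risk scores" for scans with low vs normal regional CBF
low_cbf = [0.9, 0.8, 0.75, 0.6, 0.55]
normal_cbf = [0.7, 0.5, 0.4, 0.35, 0.2]
print(auc(low_cbf, normal_cbf))
```

An AUC of 0.5 means no discrimination and 1.0 perfect separation, which is why adding brain tissue PO2 and microdialysis moving the AUC from 0.74 to 0.88 is a meaningful gain.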
Abstract:
The Kilombero Malaria Project (KMP) attempts to define operationally useful indicators of levels of transmission and disease, and health-system-relevant monitoring indicators, to evaluate the impact of disease control at the community or health-facility level. The KMP is a longitudinal community-based study (N = 1024) in rural Southern Tanzania investigating risk factors for malarial morbidity and developing household-based malaria control strategies. Biweekly morbidity and bimonthly serological, parasitological, and drug consumption surveys are carried out in all study households. Mosquito densities are measured biweekly in 50 sentinel houses by timed light traps. Determinants of transmission and indicators of exposure were not strongly aggregated within households. Subjective morbidity (recalled fever), objective morbidity (elevated body temperature and high parasitaemia), and chloroquine consumption were strongly aggregated within a few households. Nested analysis of anti-NANP40 antibody suggests that only approximately 30% of the titer variance can be explained by household clustering, and that the largest proportion of antibody titer variability must be explained by non-measured behavioral determinants relating to an individual's level of exposure within a household. Indicators for evaluation and monitoring, and outcome measures, are described within the context of health service management to describe control-measure output in terms of community effectiveness.
Abstract:
OBJECTIVE: To reach a consensus on the clinical use of ambulatory blood pressure monitoring (ABPM). METHODS: A task force on the clinical use of ABPM wrote this overview in preparation for the Seventh International Consensus Conference (23-25 September 1999, Leuven, Belgium). This article was amended to account for opinions aired at the conference and to reflect the common ground reached in the discussions. POINTS OF CONSENSUS: The Riva Rocci/Korotkoff technique, although it is prone to error, is easy and cheap to perform and remains worldwide the standard procedure for measuring blood pressure. ABPM should be performed only with properly validated devices as an accessory to conventional measurement of blood pressure. Ambulatory recording of blood pressure requires considerable investment in equipment and training and its use for screening purposes cannot be recommended. ABPM is most useful for identifying patients with white-coat hypertension (WCH), also known as isolated clinic hypertension, which is arbitrarily defined as a clinic blood pressure of more than 140 mmHg systolic or 90 mmHg diastolic in a patient with daytime ambulatory blood pressure below 135 mmHg systolic and 85 mmHg diastolic. Some experts consider a daytime blood pressure below 130 mmHg systolic and 80 mmHg diastolic optimal. Whether WCH predisposes subjects to sustained hypertension remains debated. However, outcome is better correlated to the ambulatory blood pressure than it is to the conventional blood pressure. Antihypertensive drugs lower the clinic blood pressure in patients with WCH but not the ambulatory blood pressure, and also do not improve prognosis. Nevertheless, WCH should not be left unattended. If no previous cardiovascular complications are present, treatment could be limited to follow-up and hygienic measures, which should also account for risk factors other than hypertension. 
ABPM is superior to conventional measurement of blood pressure not only for selecting patients for antihypertensive drug treatment but also for assessing the effects both of non-pharmacological and of pharmacological therapy. The ambulatory blood pressure should be reduced by treatment to below the thresholds applied for diagnosing sustained hypertension. ABPM makes the diagnosis and treatment of nocturnal hypertension possible and is especially indicated for patients with borderline hypertension, the elderly, pregnant women, patients with treatment-resistant hypertension and patients with symptoms suggestive of hypotension. In centres with sufficient financial resources, ABPM could become part of the routine assessment of patients with clinic hypertension. For patients with WCH, it should be repeated at annual or 6-monthly intervals. Variation of blood pressure throughout the day can be monitored only by ABPM, but several advantages of the latter technique can also be obtained by self-measurement of blood pressure, a less expensive method that is probably better suited to primary practice and use in developing countries. CONCLUSIONS: ABPM or equivalent methods for tracing the white-coat effect should become part of the routine diagnostic and therapeutic procedures applied to treated and untreated patients with elevated clinic blood pressures. Results of long-term outcome trials should better establish the advantage of further integrating ABPM as an accessory to conventional sphygmomanometry into the routine care of hypertensive patients and should provide more definite information on the long-term cost-effectiveness. Because such trials are not likely to be funded by the pharmaceutical industry, governments and health insurance companies should take responsibility in this regard.
Abstract:
Background and objective: Therapeutic drug monitoring (TDM) was introduced in the early 1970s in our hospital (CHUV). It now represents an important routine activity of the Division of Clinical Pharmacology and Toxicology (PCL), and its impact and utility for clinicians required assessment. This study therefore evaluated the impact of TDM recommendations in terms of dosage-regimen adaptation. Design: A prospective observational study was conducted over 5 weeks. The primary objective was to evaluate the application of our TDM recommendations and to identify potential factors associated with variations in their implementation. The secondary objective was to identify pre-analytical problems linked to the collection and processing of blood samples. Setting: Four representative clinical units at CHUV. Main outcome measure: Clinical data, drug-related data (intake, collection, and processing), and all information regarding the implementation of clinical recommendations were collected and analyzed by descriptive statistics. Results: A total of 241 blood measurement requests were collected, of which 105 triggered a recommendation. Of the recommendations delivered, 37% were applied, 25% partially applied, and 34% not applied; in 4% the recommendation was not applicable. The factors determining implementation were the clinical unit and the mode of transmission of the recommendation (written vs oral). No clear difference between types of drugs could be detected. Pre-analytical problems were not uncommon, mostly related to the completion of request forms and to delays in blood sampling (equilibration or steady state not reached). We identified 6% of drug-level measurements as inappropriate and unusable, which could represent a substantial cost for the hospital. Conclusion: This survey highlighted better implementation of TDM recommendations in clinical units where this routine is well integrated and understood by the medical staff. Our results emphasize the importance of communication with the nurse or physician in charge, either to transmit clinical recommendations or to establish consensual therapeutic targets in specific conditions. The development of strong partnerships between clinical pharmacists or pharmacologists and clinical units would be beneficial to improve the impact of this clinical activity.
Abstract:
Objective: Biomonitoring of solvents using the unchanged substance in urine as an exposure indicator is still relatively scarce owing to discrepancies between the results reported in the literature. Based on the assessment of toluene exposure, the aim of this work was to evaluate the effects of some steps likely to bias the results and to measure urinary toluene both in volunteers experimentally exposed and in workers of rotogravure factories. Methods: Static headspace was used for toluene analysis; o-cresol was also measured for comparison. Urine collection, storage, and conservation conditions were studied to evaluate possible loss or contamination of toluene in controlled situations applied to six volunteers in an exposure chamber according to four scenarios, with exposure at stable levels from 10 to 50 ppm. The kinetics of elimination of toluene were determined over 24 h. A field study was then carried out in a total of 29 workers from two rotogravure printing facilities. Results: Potential contamination during urine collection in the field is confirmed to be a real problem, but technical precautions for sampling, storage, and analysis can easily be followed to control the situation. In the volunteers at rest, urinary toluene showed a rapid increase after 2 h, with a steady level after about 3 h. At 47.1 ppm, the mean cumulative excretion was about 0.005% of the amount of toluene ventilated. The correlation between toluene levels in air and in the end-of-exposure urinary sample was excellent (r = 0.965). In the field study, the median personal exposure to toluene was 32 ppm (range 3.6-148). According to the correlations between environmental and biological monitoring data, the post-shift urinary toluene (r = 0.921) and o-cresol (r = 0.873) concentrations were, respectively, 75.6 µg/l and 0.76 mg/g creatinine for 50 ppm toluene personal exposure. The corresponding urinary toluene concentration before the next shift was 11 µg/l (r = 0.883). Conclusion: Urinary toluene was once more shown to be a very interesting surrogate for o-cresol and could be recommended as a biomarker of choice for solvent exposure. [Authors]
Abstract:
Background Delirium is an independent predictor of increased length of stay, mortality, and treatment costs in critical care patients. Its incidence may be underestimated or overestimated if delirium is assessed by using subjective clinical impression alone rather than an objective instrument. Objectives To determine frequency of discrepancies between subjective and objective delirium monitoring. Methods An observational cohort study was performed in a surgical-cardiosurgical 31-bed intensive care unit of a university hospital. Patients' delirium status was rated daily by bedside nurses on the basis of subjective individual clinical impressions and by medical students on the basis of scores on the objective Confusion Assessment Method for the Intensive Care Unit. Results Of 160 patients suitable for analysis, 38.8% (n = 62) had delirium according to objective criteria at some time during their stay in the intensive care unit. A total of 436 paired observations were analyzed. Delirium was diagnosed in 26.1% of observations (n = 114) with the objective method. This percentage included 6.4% (n = 28) in whom delirium was not recognized via subjective criteria. According to subjective criteria, delirium was present in 29.4% of paired observations (n = 128), including 9.6% (n = 42) with no objective indications of delirium. A total of 8 patients with no evidence of delirium according to the objective criteria were prescribed haloperidol and lorazepam because the subjective method indicated they had delirium. Conclusions Use of objective criteria helped detect delirium in more patients and also identified patients mistakenly thought to have delirium who actually did not meet objective criteria for diagnosis of the condition.
Abstract:
The purpose of resource management is the efficient and effective use of network resources, for instance bandwidth. In this article, a connection-oriented network scenario is considered, in which a certain amount of bandwidth is reserved for each label switched path (LSP), a logical path in an MPLS or GMPLS environment. Assuming there is also some kind of admission control (explicit or implicit), these environments typically provide quality of service (QoS) guarantees. It can happen that some LSPs become busy, and thus reject connections, while other LSPs are under-utilised. We propose a distributed lightweight monitoring technique, based on threshold values, whose objective is to detect congestion when it occurs in an LSP and to activate the corresponding alarm, which triggers a dynamic bandwidth reallocation mechanism.
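The threshold-based monitoring idea described above can be sketched in a few lines. This is an assumption-laden illustration, not the paper's implementation: the class name, thresholds, and the use of a low "clear" threshold for hysteresis (to avoid alarm flapping) are all choices made here, not details stated in the abstract:

```python
# Minimal sketch of per-LSP threshold monitoring. Hypothetical design: an alarm
# is raised when utilisation of the reserved bandwidth crosses a high threshold
# (signalling congestion, which would trigger bandwidth reallocation) and is
# cleared below a low threshold, giving hysteresis so the alarm does not flap.

class LspMonitor:
    def __init__(self, reserved_bw, high=0.9, low=0.7):
        self.reserved_bw = reserved_bw    # bandwidth reserved for this LSP
        self.high = high                  # alarm threshold (fraction of reserved bw)
        self.low = low                    # clear threshold (fraction of reserved bw)
        self.alarm = False

    def observe(self, used_bw):
        """Update the alarm state from the latest utilisation sample."""
        util = used_bw / self.reserved_bw
        if not self.alarm and util >= self.high:
            self.alarm = True             # congestion detected: raise alarm
        elif self.alarm and util <= self.low:
            self.alarm = False            # congestion cleared
        return self.alarm

mon = LspMonitor(reserved_bw=100.0)
for sample in [50, 85, 95, 88, 65]:       # hypothetical bandwidth samples
    print(mon.observe(sample))
```

Because each LSP keeps only its own counters and thresholds, the scheme stays lightweight and distributed: no central entity has to poll the whole network.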
Abstract:
Aims: Therapeutic drug monitoring (TDM) is an established tool to optimize pharmacotherapy with immunosuppressants, antibiotics, antiretroviral agents, anticonvulsants, and psychotropic drugs. The TDM expert group of the Association of Neuropsychopharmacology and Pharmacopsychiatry recommended clinical guidelines for TDM of psychotropic drugs in 2004 and in 2011. They allocate 4 levels of recommendation based on studies reporting plasma concentrations and clinical outcomes. To evaluate the additional benefit for drugs without direct evidence for TDM, and to verify the recommendation levels of the expert group, the authors built a new rating scale. Methods: This rating scale comprised 28 items and was divided into 5 categories: efficacy, toxicity, pharmacokinetics, patient characteristics, and cost-effectiveness. A literature search was performed for 10 antidepressants, 10 antipsychotics, 8 drugs used in the treatment of substance-related disorders, and lithium; thereafter, a comparison with the assessment of the TDM expert group was carried out. Results: The antidepressants as well as the antipsychotics showed a high and significant correlation with the recommendations in the consensus guidelines. However, deviations could be detected for the drugs used in the therapy of substance-related disorders, for which TDM is mostly not yet established. The results for the antidepressants and antipsychotics permit a classification by the number of points reached: above 13, TDM strongly recommended; 10 to 13, TDM recommended; 8 to 10, TDM useful; and below 8, TDM potentially useful. Conclusion: These results suggest that this rating scale is sensitive enough to detect the appropriateness of TDM for drug treatment. For drugs for which TDM is not established, a more objective estimation is possible; thus the scoring helps to focus on the drugs most likely to require TDM.
Abstract:
Abstract Study objective: The arousal state changes during spinal anesthesia. It is not clear whether BIS and other devices can monitor the sedation induced by neuraxial blockade. Our objective was to evaluate BIS and entropy values during spinal anesthesia. Design: We conducted a prospective study. Patients: 40 patients were included in this study, ASA I-III, over 60 years old, undergoing spinal anesthesia without premedication, scheduled for orthopedic procedures. Intervention: Spinal anesthesia was performed with the patient in the lateral decubitus position with a 25-gauge Whitacre needle at the L2-L3 space, using 12 mg of 0.5% hyperbaric bupivacaine. Patients were positioned supine for 5 min after spinal anesthesia. Measurements: Observer's Assessment of Alertness/Sedation (OAA/S), response entropy (RE), state entropy (SE), BIS, and standard hemodynamic measures. Main results: Statistical analyses were performed with the Wilcoxon test or ANOVA; p<0.05 was considered statistically significant. RE and BIS showed a better correlation with the OAA/S scale values (Pk 0.81 and 0.82) than SE (Pk 0.69). The OAA/S, RE, and SE showed significant differences from baseline values after 30 min of neuraxial anesthesia (ANOVA p<0.05); BIS showed differences after 40 min (ANOVA p<0.05). There were no differences between BIS and RE values throughout the study (ANOVA p>0.05). Conclusions: Spinal anesthesia decreased cortical activity, and this was detected by the OAA/S scale and by anesthetic-depth monitors. OAA/S was a more sensitive measure of this induced sedation. BIS and RE showed a better correlation with the OAA/S scale than SE.
Abstract:
OBJECTIVE: Routine prenatal screening for Down syndrome challenges professional non-directiveness and patient autonomy in daily clinical practices. This paper aims to describe how professionals negotiate their role when a pregnant woman asks them to become involved in the decision-making process implied by screening. METHODS: Forty-one semi-structured interviews were conducted with gynaecologists-obstetricians (n=26) and midwives (n=15) in a large Swiss city. RESULTS: Three professional profiles were constructed along a continuum that defines the relative distance or proximity towards patients' demands for professional involvement in the decision-making process. The first profile insists on enforcing patient responsibility, wherein the healthcare provider avoids any form of professional participation. A second profile defends the idea of a shared decision making between patients and professionals. The third highlights the intervening factors that justify professionals' involvement in decisions. CONCLUSIONS: These results illustrate various applications of the principle of autonomy and highlight the complexity of the doctor-patient relationship amidst medical decisions today.