974 results for monitoring framework
Abstract:
OBJECTIVE: Incomplete compliance is one of several possible causes of uncontrolled hypertension. Yet non-compliance remains largely unrecognized and is falsely interpreted as treatment resistance, because it is difficult to confirm or exclude objectively. The goal of this study was to evaluate the potential benefits of electronic monitoring of drug compliance in the management of patients with resistant hypertension. METHODS: Forty-one hypertensive patients resistant to a three-drug regimen (average blood pressure 156/106 +/- 23/11 mmHg, mean +/- SD) were studied prospectively. They were informed that for the next 2 months, their presently prescribed drugs would be provided in electronic monitors, without any change in treatment, so as to provide the treating physician with a measure of their compliance. Thereafter, patients were offered the possibility of prolonging the monitoring of compliance for another 2-month period, during which treatment was adapted if necessary. RESULTS: Monitoring of compliance alone was associated with a significant improvement of blood pressure at 2 months (145/97 +/- 20/15 mmHg, P < 0.01). During monitoring, blood pressure was normalized (systolic < 140 mmHg or diastolic < 90 mmHg) in one-third of the patients, and insufficient compliance was unmasked in another 20%. When analysed according to tertiles of compliance, patients with the lowest compliance exhibited significantly higher achieved diastolic blood pressures (P = 0.04). In 30 patients, compliance was monitored for up to 4 months and drug therapy was adapted whenever necessary. In these patients, a further significant decrease in blood pressure was obtained (from 150/100 +/- 18/15 to 143/94 +/- 22/11 mmHg, P = 0.04/0.02). CONCLUSIONS: These results suggest that objective monitoring of compliance using electronic devices may be a useful step in the management of patients with refractory hypertension, as it enables physicians to make rational decisions based on reliable and objective data on drug compliance and hence to improve blood pressure control.
Abstract:
Locating new wind farms is of crucial importance for the energy policies of the next decade. To select new locations, an accurate picture of the wind fields is necessary. However, characterizing wind fields is a difficult task, since the phenomenon is highly nonlinear and related to complex topographical features. In this paper, we propose both a nonparametric model to estimate wind speed at different time instants and a procedure to discover underrepresented topographic conditions, where new measuring stations could be added. Compared to space-filling techniques, the latter approach privileges optimization of the output space, locating new potential measuring sites through the uncertainty of the model itself.
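The abstract does not specify the estimator. Purely as a rough illustration of the idea of placing new stations where the model itself is least certain, the following minimal sketch assumes a Gaussian process regressor over hypothetical topographic features (elevation, slope, exposure) and synthetic data; it is not the authors' model.

```python
# Minimal sketch (not the paper's model): nonparametric regression of wind
# speed on topographic features, with candidate sites ranked by predictive
# uncertainty. Feature names and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Existing stations: topographic descriptors (elevation, slope, exposure) and wind speed
X_obs = rng.uniform(0.0, 1.0, size=(60, 3))
y_obs = 5.0 + 3.0 * X_obs[:, 0] - 2.0 * X_obs[:, 1] + rng.normal(0.0, 0.3, 60)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3) + WhiteKernel(0.1),
                              normalize_y=True).fit(X_obs, y_obs)

# Candidate locations: keep the ones where the model is least certain
X_cand = rng.uniform(0.0, 1.0, size=(500, 3))
_, std = gp.predict(X_cand, return_std=True)
best = np.argsort(std)[::-1][:5]
print("Candidate sites with highest predictive uncertainty:", best)
```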
Abstract:
This article presents both a brief systemic intervention method (ISB) consisting of six sessions, developed in an ambulatory service for couples and families, and two research projects conducted in collaboration with the Institute for Psychotherapy of the University of Lausanne. The first project is quantitative and aims at evaluating the effectiveness of ISB. One of its main features is that outcomes are assessed at different levels of individual and family functioning: 1) symptoms and individual functioning; 2) quality of the marital relationship; 3) parental and co-parental relationships; 4) familial relationships. The second project is a qualitative case study of a marital therapy that identifies and analyses significant moments of the therapeutic process from the patients' perspective. The methodology was largely inspired by Daniel Stern's work on "moments of meeting" in psychotherapy. Results show that patients' theories about relationship and change are important elements that deepen our understanding of the change process in couple and family therapy. The interest of associating clinicians and researchers in the development and validation of a new clinical model is discussed.
Abstract:
Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculations currently represent the gold standard TDM approach but require computational assistance. In recent decades, computer programs have been developed to assist clinicians in this assignment. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each program. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled by the programs varies widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens, based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including by non-experienced users. Computer-assisted TDM is attracting growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
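As a rough illustration of the a posteriori (Bayesian) adjustment mentioned above, the following minimal sketch assumes a hypothetical one-compartment drug at steady state, an illustrative log-normal population prior on clearance, and a single measured concentration; it is not the model used by any of the reviewed programs, and all numbers are invented for the example.

```python
# Minimal sketch of Bayesian (MAP) dose individualization for a hypothetical
# one-compartment drug at steady state; priors, the measured level and the
# target concentration are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

dose, tau = 500.0, 12.0           # current dose (mg) and dosing interval (h)
c_obs = 20.0                      # measured steady-state average concentration (mg/L)
cl_pop, omega = 3.0, 0.3          # population clearance (L/h), log-normal SD
sigma = 0.15                      # proportional residual error

def css(cl):                      # steady-state average concentration
    return dose / (cl * tau)

def neg_log_posterior(log_cl):    # prior on log-clearance + residual likelihood
    cl = np.exp(log_cl)
    prior = 0.5 * ((log_cl - np.log(cl_pop)) / omega) ** 2
    lik = 0.5 * ((np.log(c_obs) - np.log(css(cl))) / sigma) ** 2
    return prior + lik

res = minimize_scalar(neg_log_posterior, bounds=(np.log(0.5), np.log(20.0)),
                      method="bounded")
cl_map = np.exp(res.x)            # individual (a posteriori) clearance estimate
c_target = 10.0                   # desired average concentration (mg/L)
new_dose = c_target * cl_map * tau
print(f"MAP clearance {cl_map:.2f} L/h -> suggested dose {new_dose:.0f} mg every {tau:.0f} h")
```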
Abstract:
INTRODUCTION: Although long-term video-EEG monitoring (LVEM) is routinely used to investigate paroxysmal events, short-term video-EEG monitoring (SVEM) lasting <24 h is increasingly recognized as a cost-effective tool. However, since relatively few studies have addressed the yield of SVEM among different diagnostic groups, we undertook the present study to investigate this aspect. METHODS: We retrospectively analyzed 226 consecutive SVEM recordings over 6 years. All patients were referred because routine EEGs were inconclusive. Patients were classified into 3 suspected diagnostic groups: (1) group with epileptic seizures, (2) group with psychogenic nonepileptic seizures (PNESs), and (3) group with other or undetermined diagnoses. We assessed recording lengths, interictal epileptiform discharges, epileptic seizures, PNESs, and the definitive diagnoses obtained after SVEM. RESULTS: The mean age was 34 (±18.7) years, and the median recording length was 18.6 h. Of the 226 patients, 127 were referred for suspected epilepsy: 73 had a diagnosis of epilepsy, none had a diagnosis of PNESs, and 54 had other or undetermined diagnoses post-SVEM. Of the 24 patients with pre-SVEM suspected PNESs, 1 had epilepsy, 12 had PNESs, and 11 had other or undetermined diagnoses. Of the 75 patients with other diagnoses pre-SVEM, 17 had epilepsy, 11 had PNESs, and 47 had other or undetermined diagnoses. After SVEM, 15 patients had definite diagnoses other than epilepsy or PNESs, while in 96 patients, the diagnosis remained unclear. Overall, a definitive diagnosis could be reached in 129/226 (57%) patients. CONCLUSIONS: This study demonstrates that in nearly three out of five patients without a definitive diagnosis after routine EEG, SVEM allowed us to reach a diagnosis. This procedure should be encouraged in this setting, given its time-effectiveness compared with LVEM.
Abstract:
Questions: A multiple plot design was developed for permanent vegetation plots. How reliable are the different methods used in this design and which changes can we measure? Location: Alpine meadows (2430 m a.s.l.) in the Swiss Alps. Methods: Four inventories were obtained from 40 m² plots: four subplots (0.4 m²) with a list of species, two 10 m transects with the point method (50 points on each), one subplot (4 m²) with a list of species and visual cover estimates as a percentage, and the complete plot (40 m²) with a list of species and visual estimates in classes. This design was tested by five to seven experienced botanists in three plots. Results: Whatever the sampling size, only 45-63% of the species were seen by all the observers. However, the majority of the overlooked species had cover < 0.1%. Pairs of observers overlooked 10-20% fewer species than single observers. The point method was the best method for cover estimation, but it took much longer than visual cover estimates, and 100 points allowed for the monitoring of only a very limited number of species. The visual estimate as a percentage was more precise than classes. Working in pairs did not improve the estimates, but one botanist repeating the survey was more reliable than a succession of different observers. Conclusion: Lists of species are insufficient for monitoring. It is necessary to add cover estimates to allow for subsequent interpretations in spite of the overlooked species. The choice of the method depends on the available resources: the point method is time consuming but gives precise data for a limited number of species, while visual estimates are quick but allow for recording only large changes in cover. Constant pairs of observers improve the reliability of the records.
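As a rough illustration of the point method described above, the minimal sketch below computes percent cover as the share of sampling points at which a species is intercepted (two 10 m transects of 50 points each, 100 points in total); the species names and hit counts are illustrative assumptions, not data from the study.

```python
# Minimal sketch of the point-intercept cover estimate: cover = hits / points.
# Species names and hit counts are invented for illustration.
from collections import Counter

total_points = 100                      # two 10 m transects, 50 points each
hits = Counter({"Carex curvula": 42, "Festuca halleri": 17, "Leontodon helveticus": 3})

for species, n in hits.items():
    print(f"{species}: {100.0 * n / total_points:.0f}% cover")

# With 100 points the smallest detectable cover is 1/100 = 1%, so the many
# overlooked species with cover < 0.1% cannot be monitored by this method.
```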
Abstract:
This article sets out a theoretical framework for the study of organisational change within political alliances. To achieve this objective it uses as a starting point a series of premises, the most notable of which include the definition of organisational change as a discrete, complex and focussed phenomenon of changes in power within the party. In accordance with these premises, it analyses the synthetic model of organisational change proposed by Panebianco (1988). After examining its limitations, a number of amendments are proposed to adapt it to the way political alliances operate. The above has resulted in the design of four new models. In order to test its validity and explanatory power in a preliminary manner, the second part looks at the organisational change of the UDC within the CiU alliance between 1978 and 2001. The discussion and conclusions reached demonstrate the problems of determinism of the Panebianco model and suggest, tentatively, the importance of the power balance within the alliance as a key factor.
Abstract:
Fluctuations in ammonium (NH4+), measured as NH4-N loads using an ion-selective electrode installed at the inlet of a sewage treatment plant, showed a distinctive pattern which was associated with weekly (i.e., commuters) and seasonal (i.e., holidays) fluctuations of the population. Moreover, population size estimates based on NH4-N loads were lower than census data. Diurnal profiles of benzoylecgonine (BE) and 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (THC-COOH) were shown to be strongly correlated to NH4-N. Characteristic patterns, which reflect the prolonged nocturnal activity of people during the weekend, could be observed for BE, cocaine, and a major metabolite of MDMA (i.e., 4-hydroxy-3-methoxymethamphetamine). Additional 24-h composite samples were collected between February and September 2013. Per-capita loads (i.e., grams per day per 1000 inhabitants) were computed using census data and NH4-N measurements. Normalization with NH4-N did not modify the overall pattern, suggesting that the magnitude of fluctuations in the size of the population is negligible compared to that of illicit drug loads. Results show that fluctuations in the size of the population over longer periods of time or during major events can be monitored using NH4-N loads, either directly or as population size estimates derived from them, provided that information about site-specific NH4-N population equivalents is available.
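As a rough illustration of the normalization described above, the minimal sketch below derives a population estimate from a daily NH4-N load and uses it to express a benzoylecgonine load per 1000 inhabitants; the per-capita NH4-N value and both loads are illustrative assumptions, since the abstract stresses that NH4-N population equivalents are site-specific.

```python
# Minimal sketch: NH4-N-based population estimate used to normalize a drug load.
# All numeric values are illustrative assumptions.
nh4n_load_g_per_day = 52_000.0    # measured NH4-N load at the plant inlet (g/day)
nh4n_per_capita_g = 6.5           # assumed NH4-N population equivalent (g/person/day)
be_load_g_per_day = 38.0          # benzoylecgonine load in the 24-h composite (g/day)

population_estimate = nh4n_load_g_per_day / nh4n_per_capita_g
be_per_1000 = be_load_g_per_day / population_estimate * 1000.0

print(f"Population estimate from NH4-N: {population_estimate:.0f} inhabitants")
print(f"BE load: {be_per_1000:.2f} g/day per 1000 inhabitants")
```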
Abstract:
The paper uses a range of primary-source empirical evidence to address the question: ‘why is it so hard to value intangible assets?’ The setting is venture capital investment in high-technology companies. While the investors are risk specialists and financial experts, the entrepreneurs are more knowledgeable about product innovation. Thus the context lends itself to analysis within a principal-agent framework, in which information asymmetry may give rise to adverse selection, pre-contract, and moral hazard, post-contract. We examine how the investor might attenuate such problems and attach a value to such high-tech investments in what are often merely intangible assets, through expert due diligence, monitoring and control. Qualitative evidence is used to qualify the rather clear-cut picture provided by a principal-agent approach into a more mixed picture in which the ‘art and science’ of investment appraisal are utilised by both parties alike.
Abstract:
This paper develops a general theoretical framework within which a heterogeneous group of taxpayers confronts a market that supplies a variety of schemes for reducing tax liability, and uses this framework to explore the impact of a wide range of anti-avoidance policies. Schemes differ in their legal effectiveness and hence in the risks to which they expose taxpayers - risks which go beyond the risk of audit considered in the conventional literature on evasion. Given the individual taxpayer's circumstances, the prices charged for the schemes and the policy environment, the model predicts (i) whether or not any given taxpayer will acquire a scheme, and (ii) if they do so, which type of scheme they will acquire. The paper then analyses how these decisions, and hence the tax gap, are influenced by four generic types of policy: Disclosure - earlier information leading to faster closure of loopholes; Penalties - introduction of penalties for failed avoidance; Policy Design - fundamental policy changes that design out opportunities for avoidance; Product Register - the introduction of GAARs or mini-GAARs that give greater clarity about how different types of scheme will be treated. The paper shows that when considering the indirect/behavioural effects of policies on the tax gap it is important to recognise that these operate on two different margins. First, policies will have deterrence effects - their impact on the number of taxpayers choosing to acquire different types of schemes, as distinct from acquiring no scheme at all. There will be a range of such deterrence effects reflecting the range of schemes available in the market. Secondly, since different schemes generate different tax gaps, policies will also have switching effects as they induce taxpayers who previously acquired one type of scheme to acquire another. The first three types of policy generate positive deterrence effects but differ in the switching effects they produce. The fourth type of policy produces mixed deterrence effects.
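The abstract does not give the formal model. Purely as an illustrative reading of the choice it describes, the minimal sketch below compares the expected cost of hypothetical scheme types (price plus expected repayment and penalty if the scheme fails legally) with paying the tax in full, so that policy changes to failure probabilities or penalties shift both deterrence (acquiring no scheme) and switching (moving between scheme types); all parameters are invented for the example.

```python
# Minimal sketch (not the paper's formal model): a taxpayer picks the option
# with the lowest expected cost. All numbers are illustrative assumptions.
tax_due = 100.0

schemes = {
    "no scheme":     {"price": 0.0,  "tax_sheltered": 0.0,  "p_fail": 0.0, "penalty_rate": 0.0},
    "mass-marketed": {"price": 10.0, "tax_sheltered": 80.0, "p_fail": 0.6, "penalty_rate": 0.3},
    "bespoke":       {"price": 25.0, "tax_sheltered": 95.0, "p_fail": 0.2, "penalty_rate": 0.3},
}

def expected_cost(s):
    tax_paid = tax_due - s["tax_sheltered"]
    # If the scheme fails, the sheltered tax is repaid plus a penalty on it
    failure_cost = s["p_fail"] * s["tax_sheltered"] * (1 + s["penalty_rate"])
    return s["price"] + tax_paid + failure_cost

for name in schemes:
    print(f"{name}: expected cost {expected_cost(schemes[name]):.1f}")
print("Chosen option:", min(schemes, key=lambda name: expected_cost(schemes[name])))
```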
Abstract:
OBJECTIVE: To examine the accuracy of brain multimodal monitoring, consisting of intracranial pressure, brain tissue PO2, and cerebral microdialysis, in detecting cerebral hypoperfusion in patients with severe traumatic brain injury. DESIGN: Prospective single-center study. PATIENTS: Patients with severe traumatic brain injury. SETTING: Medico-surgical ICU, university hospital. INTERVENTION: Intracranial pressure, brain tissue PO2, and cerebral microdialysis monitoring (right frontal lobe, apparently normal tissue) combined with cerebral blood flow measurements using perfusion CT. MEASUREMENTS AND MAIN RESULTS: Cerebral blood flow was measured using perfusion CT in the tissue area around the intracranial monitoring (regional cerebral blood flow) and in bilateral supraventricular brain areas (global cerebral blood flow) and was matched to cerebral physiologic variables. The accuracy of intracranial monitoring to predict cerebral hypoperfusion (defined as an oligemic regional cerebral blood flow < 35 mL/100 g/min) was examined using area under the receiver-operating characteristic curves. Thirty perfusion CT scans (median, 27 hr [interquartile range, 20-45 hr] after traumatic brain injury) were performed on 27 patients (age, 39 yr [24-54 yr]; Glasgow Coma Scale, 7 [6-8]; 24/27 [89%] with diffuse injury). Regional cerebral blood flow correlated significantly with global cerebral blood flow (Pearson r = 0.70, p < 0.01). Compared with normal regional cerebral blood flow (n = 16), low regional cerebral blood flow (n = 14) measurements had a higher proportion of samples with intracranial pressure more than 20 mm Hg (13% vs 30%), brain tissue PO2 less than 20 mm Hg (9% vs 20%), cerebral microdialysis glucose less than 1 mmol/L (22% vs 57%), and lactate/pyruvate ratio more than 40 (4% vs 14%; all p < 0.05). Compared with intracranial pressure monitoring alone (area under the receiver-operating characteristic curve, 0.74 [95% CI, 0.61-0.87]), monitoring intracranial pressure + brain tissue PO2 (area under the receiver-operating characteristic curve, 0.84 [0.74-0.93]) or intracranial pressure + brain tissue PO2 + cerebral microdialysis (area under the receiver-operating characteristic curve, 0.88 [0.79-0.96]) was significantly more accurate in predicting low regional cerebral blood flow (both p < 0.05). CONCLUSION: Brain multimodal monitoring, including intracranial pressure, brain tissue PO2, and cerebral microdialysis, is more accurate than intracranial pressure monitoring alone in detecting cerebral hypoperfusion at the bedside in patients with severe traumatic brain injury and predominantly diffuse injury.
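As a rough illustration of the accuracy comparison reported above, the minimal sketch below computes the area under the ROC curve for predicting low regional cerebral blood flow (< 35 mL/100 g/min) from intracranial pressure alone and from a combination of monitoring variables; the data are synthetic and the combination rule (logistic regression) is an assumption, not the study's method.

```python
# Minimal sketch: ROC AUC for a single monitoring variable versus a combined
# multimodal predictor. The data are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
icp = rng.normal(18, 6, n)               # intracranial pressure (mm Hg)
pbto2 = rng.normal(24, 7, n)             # brain tissue PO2 (mm Hg)
lp_ratio = rng.normal(30, 12, n)         # lactate/pyruvate ratio

# Synthetic "low regional CBF" label loosely driven by all three signals
logit = 0.15 * (icp - 20) - 0.12 * (pbto2 - 20) + 0.05 * (lp_ratio - 40)
low_cbf = rng.random(n) < 1 / (1 + np.exp(-logit))

features = np.column_stack([icp, pbto2, lp_ratio])
auc_icp = roc_auc_score(low_cbf, icp)
combined = LogisticRegression().fit(features, low_cbf)
auc_all = roc_auc_score(low_cbf, combined.predict_proba(features)[:, 1])
print(f"AUC, ICP alone: {auc_icp:.2f}; AUC, multimodal: {auc_all:.2f}")
```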
Abstract:
In the following document you will find, presented in a clear and comprehensible way through the creation of a simple example application, the procedure for building a J2EE application based on the Jakarta Struts development framework. It starts from scratch, beginning with requirements gathering, continuing through the analysis and design phase, and ending with the subsequent implementation.