50 results for Optimal Control Problems
Abstract:
OBJECTIVE: Current pulsatile ventricular assist devices operate asynchronously with the left ventricle in fixed-rate or fill-to-empty modes because electrocardiogram-triggered modes have been abandoned. We hypothesize that varying the ejection delay in the synchronized mode yields more precise control of hemodynamics and left ventricular loading, allowing for refined management that may be clinically beneficial. METHODS: Eight sheep received a Thoratec paracorporeal ventricular assist device (Thoratec Corp, Pleasanton, Calif) via ventriculo-aortic cannulation. Left ventricular pressure and volume, aortic pressure, pulmonary flow, pump chamber pressure, and pump inflow and outflow were recorded. The pump was driven by a clinical pneumatic drive unit (Medos Medizintechnik AG, Stolberg, Germany) synchronously with the native R-wave. The start of pump ejection was delayed between 0% and 100% of the cardiac period in 10% increments. For each of these delays, hemodynamic variables were compared with baseline data using paired t tests. RESULTS: Minimum stroke work was observed at a delay of 10% (soon after aortic valve opening), with a median 43% reduction in stroke work compared with baseline. Maximum stroke work occurred at a median delay of 70%, with a median stroke work increase of 11% above baseline. Left ventricular volume unloading, expressed by end-diastolic volume, was most pronounced for copulsation (delay 0%). CONCLUSIONS: The timing of pump ejection in synchronized mode yields control over left ventricular energetics and can be a method to achieve gradual reloading of a recoverable left ventricle. The traditionally suggested counterpulsation is not optimal in ventriculo-aortic cannulation when maximum unloading is desired.
Abstract:
Potent anthelmintics were introduced into the Swiss market several decades ago. Despite this, gastrointestinal nematodes (GIN), lungworms and the large liver fluke (Fasciola hepatica) still successfully inhabit Swiss ruminant farms, mainly owing to their high reproductive capacity and very efficient survival strategies. In addition, some species readily develop anthelmintic resistance. GIN infections in young cattle are under comparatively good control. However, prophylactic measures are compromised where adult stock is also affected owing to incomplete development of immune protection; under these circumstances control measures must include all age groups. This leaves fewer helminths in refugia and may thus accelerate the development of anthelmintic resistance. This review aims to present a synopsis of the significance of the major helminth infections acquired on pasture by large and small ruminants in Switzerland. Currently available strategies for helminth control are summarized, and an outlook is given on new developments that might expand the spectrum of control measures relevant for veterinary practice in the future.
Abstract:
BACKGROUND: Short-acting agents for neuromuscular block (NMB) require frequent dosing adjustments for individual patients' needs. In this study, we verified a new closed-loop controller for mivacurium dosing in clinical trials. METHODS: Fifteen patients were studied. T1%, measured with electromyography, was used as the input signal for the model-based controller. After induction of propofol/opiate anaesthesia, stabilization of the baseline electromyography signal was awaited and a bolus of 0.3 mg kg-1 mivacurium was then administered to facilitate endotracheal intubation. Closed-loop infusion was started thereafter, targeting a neuromuscular block of 90%. Setpoint deviation, the number of manual interventions and surgeons' complaints were recorded. Drug use and its variability between and within patients were evaluated. RESULTS: Median duration of closed-loop control for the 11 patients included in the data processing was 135 [89-336] min (median [range]). Four patients had to be excluded because of sensor problems. Mean absolute deviation from the setpoint was 1.8 +/- 0.9 T1%. Neither manual interventions nor complaints from the surgeons were recorded. Mean necessary mivacurium infusion rate was 7.0 +/- 2.2 microg kg-1 min-1. Intrapatient variability of mean infusion rates over 30-min intervals showed differences up to a factor of 1.8 between the highest and lowest requirement in the same patient. CONCLUSIONS: Neuromuscular block can be precisely controlled with mivacurium using our model-based controller. The amount of mivacurium needed to maintain T1% at defined constant levels differed largely between and within patients. Closed-loop control therefore seems advantageous for automatically maintaining neuromuscular block at constant levels.
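The abstract does not disclose the internals of the model-based controller, but the feedback structure it describes (EMG-derived T1% as input, a fixed 90% block target, an adjustable infusion rate as output) can be illustrated with a generic proportional-integral sketch. All gains, limits, and the read_t1_percent / set_infusion_rate callables below are hypothetical, not the study's:

```python
# Minimal sketch of a closed-loop infusion controller in PI form. The
# published controller is model-based and its internals are not given in
# the abstract; gains, limits, and I/O callables here are illustrative only.

def pi_infusion_controller(read_t1_percent, set_infusion_rate,
                           setpoint=10.0,    # T1% of 10 ~ 90% neuromuscular block
                           kp=0.5, ki=0.02,  # hypothetical controller gains
                           dt=20.0,          # control interval, seconds
                           max_rate=15.0):   # safety cap, microg/kg/min
    """Adjust the infusion rate to hold the measured T1% at the setpoint."""
    integral = 0.0
    while True:
        t1 = read_t1_percent()                 # EMG-derived twitch height
        error = t1 - setpoint                  # positive error: block too shallow
        integral += error * dt
        rate = kp * error + ki * integral      # PI control law
        rate = min(max(rate, 0.0), max_rate)   # clamp to a safe range
        set_infusion_rate(rate)
        yield t1, rate                         # expose state for logging
```

Each step of the generator performs one control cycle; the study's model-based controller would replace the simple PI law with predictions from a pharmacokinetic-pharmacodynamic model.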
Abstract:
OBJECTIVES: To examine differences in risk factor (RF) management between peripheral artery disease (PAD) and coronary artery disease (CAD) or cerebrovascular disease (CVD), as well as the impact of RF control on major 1-year cardiovascular (CV) event rates. METHODS: The REACH Registry recruited >68,000 outpatients aged ≥45 years with established atherothrombotic disease or ≥3 RFs for atherothrombosis. The predictors of RF control that were evaluated included: (1) patient demographics, (2) mode of PAD diagnosis, and (3) concomitant CAD and/or CVD. RESULTS: RF control was less frequent in patients with PAD (n=8322) compared with those with CAD or CVD (but no PAD, n=47492) [blood pressure; glycemia; total cholesterol; smoking cessation (each P<0.001)]. Factors independently associated with optimal RF control in patients with PAD were male gender (OR=1.9); residence in North America (OR=3.5), Japan (OR=2.5) or Latin America (OR=1.5); previous coronary revascularization (OR=1.3); and statin use (OR=1.4); prior leg amputation was a negative predictor (OR=0.7) (P<0.001). Optimal RF control was associated with fewer 1-year CV ischemic symptoms or events. CONCLUSIONS: Patients with PAD do not achieve RF control as frequently as individuals with CAD or CVD. Improved RF control is associated with a positive impact on 1-year CV event rates.
Abstract:
OBJECTIVE: To review trial design issues related to control groups. DESIGN: Review of the literature with specific reference to critical care trials. MAIN RESULTS AND CONCLUSIONS: Performing randomized controlled trials in the critical care setting presents specific problems: studies include patients with rapidly lethal conditions, the majority of intensive care patients suffer from syndromes rather than from well-definable diseases, the severity of such syndromes cannot be precisely assessed, and treatment consists of interacting therapies. Interactions between physiology, pathophysiology, and therapies are at best marginally understood and may have a major impact on study design and interpretation of results. Selection of the right control group is crucial for the interpretation and clinical implementation of results. Studies comparing new interventions with current ones, or with different levels of current treatments, face the problem of defining "usual care." Usual care controls without any constraints typically include substantial heterogeneity. Constraints on the usual therapy may help to reduce some of this variation, while inclusion of unrestricted usual care groups may help to enhance safety. Practice misalignment is a novel problem in which patients receive a treatment that is the direct opposite of usual care; it occurs when fixed-dose interventions are used in situations where care is normally titrated. Practice misalignment should be considered in the design and interpretation of studies on titrated therapies.
Abstract:
Bovine spongiform encephalopathy (BSE) rapid tests and routine BSE-testing laboratories are subject to strict approval regulations. Due to the lack of BSE-positive control samples, however, full assay validation at the level of individual test runs and continuous monitoring of test performance on-site are difficult. Most rapid tests use synthetic prion protein peptides, but it is not known to what extent these reflect assay performance on field samples, and whether they are sufficient to indicate on-site assay quality problems. To address this question we compared the test scores of the provided kit peptide controls to those of standardized weak BSE-positive tissue samples in individual test runs, as well as continuously over time by quality control charts, in two widely used BSE rapid tests. Our results reveal only a weak correlation between the weak positive tissue control scores and the peptide control scores. We identified kit-lot-related shifts in assay performance that were not reflected by the peptide control scores. Conversely, not all shifts indicated by the peptide control scores reflected an actual shift in assay performance. In conclusion, these data highlight that the use of the kit peptide controls for continuous quality control purposes may result in unjustified rejection or acceptance of test runs. Standardized weak positive tissue controls in combination with Shewhart-CUSUM control charts, however, appear to be reliable for continuously monitoring assay performance on-site and identifying undesired deviations.
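The charting approach the authors favour, tracking standardized weak-positive control scores with combined Shewhart and tabular CUSUM rules, can be sketched briefly. The target score, standard deviation, and decision limits below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def shewhart_cusum(scores, target, sd, k=0.5, h=4.0, shewhart_z=3.0):
    """Flag runs whose weak-positive control score drifts from the target.

    scores:     per-run scores of the standardized weak positive tissue control
    target, sd: expected mean and standard deviation of that control
    k, h:       CUSUM reference value and decision limit (in SD units)
    """
    z = (np.asarray(scores, dtype=float) - target) / sd
    c_hi = c_lo = 0.0
    for run, zi in enumerate(z, start=1):
        c_hi = max(0.0, c_hi + zi - k)   # accumulate upward drift
        c_lo = max(0.0, c_lo - zi - k)   # accumulate downward drift
        if abs(zi) > shewhart_z:
            print(f"run {run}: Shewhart rule violated (z = {zi:.2f})")
        if c_hi > h or c_lo > h:
            print(f"run {run}: CUSUM signal (C+ = {c_hi:.2f}, C- = {c_lo:.2f})")
            c_hi = c_lo = 0.0            # reset after signalling

# Example: a gradual downward shift, e.g. from a new kit lot
shewhart_cusum([1.0, 0.98, 0.95, 0.93, 0.90, 0.88, 0.85], target=1.0, sd=0.05)
```

The CUSUM accumulates small, sustained deviations that a per-run Shewhart limit would miss, which is why the combination suits slow kit-lot-related shifts.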
Abstract:
BACKGROUND Muscle strength greatly influences gait kinematics. The question was whether this association is similar across different diseases. METHODS Data from instrumented gait analysis of 716 patients were retrospectively assessed. The effect of muscle strength on gait deviations, namely the Gait Profile Score (GPS), was evaluated by means of generalised least-squares models. This was done for seven different patient groups, formed according to the type of disease: orthopaedic/neurologic, uni-/bilateral affection, and flaccid/spastic muscles. RESULTS Muscle strength had a negative effect on GPS values (greater strength, fewer gait deviations), and this effect did not differ significantly amongst the patient groups. However, an offset of the GPS regression line was found, which depended mostly on the underlying disease. Surprisingly, spastic patients, who show reduced strength and additionally spasticity on clinical examination, and flaccid neurologic patients showed the same offset. Patients with an additional lack of trunk control (tetraplegia) showed the largest offset. CONCLUSION Gait kinematics grossly depend on muscle strength, as seen in patients with very different pathologies. Nevertheless, optimal correction of biomechanics and muscle strength may still not lead to a normal gait, especially in neurologic patients: the underlying disease itself has an additional effect on gait deviations, expressed as a GPS offset of the linear regression line.
Abstract:
BACKGROUND Infectious diseases and social contacts in early life have been proposed to modulate brain tumour risk during late childhood and adolescence. METHODS CEFALO is an interview-based case-control study in Denmark, Norway, Sweden and Switzerland, including children and adolescents aged 7-19 years with primary intracranial brain tumours diagnosed between 2004 and 2008 and matched population controls. RESULTS The study included 352 cases (participation rate: 83%) and 646 controls (71%). There was no association with various measures of social contacts: daycare attendance, number of child-hours at daycare, attending baby groups, birth order or living with other children. Cases of glioma and embryonal tumours had more frequent sick days with infections in the first 6 years of life compared with controls. In 7-19 year olds with 4+ monthly sick days, the respective odds ratios were 2.93 (95% confidence interval: 1.57-5.50) and 4.21 (95% confidence interval: 1.24-14.30). INTERPRETATION There was little support for the hypothesis that social contacts influence childhood and adolescent brain tumour risk. The association between reported sick days due to infections and the risk of glioma and embryonal tumours may reflect involvement of immune functions, recall bias or reverse causality, and deserves further attention.
Abstract:
BACKGROUND The use of combination antiretroviral therapy (cART) comprising three antiretroviral medications from at least two classes of drugs is the current standard treatment for HIV infection in adults and children. Current World Health Organization (WHO) guidelines for antiretroviral therapy recommend early treatment regardless of immunologic thresholds or clinical condition for all infants (less than one year of age) and children under the age of two years. For children aged two to five years, current WHO guidelines recommend (based on low-quality evidence) that clinical and immunological thresholds be used to identify those who need to start cART (advanced clinical stage, CD4 count ≤750 cells/mm³, or per cent CD4 ≤25%). This Cochrane review summarizes the currently available evidence regarding the optimal time for treatment initiation in children aged two to five years, with the goal of informing the revision of the WHO 2013 recommendations on when to initiate cART in children. OBJECTIVES To assess the evidence for the optimal time to initiate cART in treatment-naive, HIV-infected children aged 2 to 5 years. SEARCH METHODS We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, the AEGIS conference database, specific relevant conferences, www.clinicaltrials.gov, the World Health Organization International Clinical Trials Registry Platform and reference lists of articles. The date of the most recent search was 30 September 2012. SELECTION CRITERIA Randomised controlled trials (RCTs) that compared immediate with deferred initiation of cART, and prospective cohort studies that followed children from enrolment to the start of cART and on cART. DATA COLLECTION AND ANALYSIS Two review authors considered studies for inclusion in the review, assessed the risk of bias, and extracted data on the primary outcome of death from all causes and several secondary outcomes, including the incidence of CDC category C and B clinical events and per cent CD4 cells (CD4%) at study end. For RCTs we calculated relative risks (RR) or mean differences with 95% confidence intervals (95% CI). For cohort data, we extracted relative risks with 95% CI from adjusted analyses. We combined results from RCTs using a random-effects model and examined statistical heterogeneity. MAIN RESULTS Two RCTs in HIV-positive children aged 1 to 12 years were identified. One trial was the pilot study for the larger second trial, and both compared initiation of cART regardless of clinical or immunological condition with deferred initiation until per cent CD4 dropped to <15%. The two trials were conducted in Thailand, and in Thailand and Cambodia, respectively. Unpublished analyses of the 122 children enrolled at ages 2 to 5 years were included in this review. There was one death in the immediate cART group and no deaths in the deferred group (RR 2.9; 95% CI 0.12 to 68.9). In the subgroup analysis of children aged 24 to 59 months, there was one CDC C event in each group (RR 0.96; 95% CI 0.06 to 14.87) and 8 and 11 CDC B events in the immediate and deferred groups, respectively (RR 0.95; 95% CI 0.24 to 3.73). In this subgroup, the mean difference in CD4 per cent at study end was 5.9% (95% CI 2.7 to 9.1). One cohort study from South Africa, which compared the effect of delaying cART for up to 60 days in 573 HIV-positive children starting tuberculosis treatment (median age 3.5 years), was also included.
The adjusted hazard ratio for the effect on mortality of delaying ART for more than 60 days was 1.32 (95% CI 0.55 to 3.16). AUTHORS' CONCLUSIONS This systematic review shows that there is insufficient evidence from clinical trials in support of either early or CD4-guided initiation of ART in HIV-infected children aged 2 to 5 years. Programmatic issues, such as the retention in care of children in ART programmes in resource-limited settings, will need to be considered when formulating the WHO 2013 recommendations.
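The review combines RCT results with a random-effects model. A minimal sketch of DerSimonian-Laird random-effects pooling of log relative risks follows, with example inputs reconstructed loosely from two of the confidence intervals quoted above, purely to exercise the function rather than to reproduce the review's estimates:

```python
import numpy as np

def dersimonian_laird(log_rr, se):
    """Pool log relative risks with the DerSimonian-Laird random-effects model."""
    log_rr, se = np.asarray(log_rr, dtype=float), np.asarray(se, dtype=float)
    w = 1.0 / se**2                           # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed)**2)       # Cochran's Q heterogeneity statistic
    df = len(log_rr) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)               # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    return np.exp(pooled), ci

# Illustrative two-trial pooling; standard errors back-calculated from 95% CIs
rr, ci = dersimonian_laird(log_rr=[np.log(2.9), np.log(0.96)], se=[1.62, 1.41])
print(f"pooled RR = {rr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```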
Abstract:
Objectives: To update the 2006 systematic review of the comparative benefits and harms of erythropoiesis-stimulating agent (ESA) strategies and non-ESA strategies to manage anemia in patients undergoing chemotherapy and/or radiation for malignancy (excluding myelodysplastic syndrome and acute leukemia), including the impact of alternative thresholds for initiating treatment and the optimal duration of therapy. Data sources: Literature searches were updated in electronic databases (n=3), conference proceedings (n=3), and Food and Drug Administration transcripts. Multiple sources (n=13) were searched for potential gray literature. A primary source for current survival evidence was a recently published individual patient data meta-analysis, in which patient data were obtained from investigators for studies enrolling more than 50 patients per arm. Because those data constitute the most current data available for this update, as well as the source for on-study (active treatment) mortality data, we limited inclusion in the current report to studies enrolling more than 50 patients per arm to avoid potential differential endpoint ascertainment in smaller studies. Review methods: Title and abstract screening was performed by one or two (to resolve uncertainty) reviewers; potentially included publications were reviewed in full text. Two or three (to resolve disagreements) reviewers assessed trial quality. Results were independently verified and pooled for outcomes of interest. The balance of benefits and harms was examined in a decision model. Results: We evaluated evidence from 5 trials directly comparing darbepoetin with epoetin, 41 trials comparing epoetin with control, and 8 trials comparing darbepoetin with control; 5 trials evaluated early versus late (delay until Hb ≤9 to 11 g/dL) treatment. Trials varied according to duration, tumor types, cancer therapy, trial quality, iron supplementation, baseline hemoglobin, ESA dosing frequency (and therefore amount per dose), and dose escalation. ESAs decreased the risk of transfusion (pooled relative risk [RR], 0.58; 95% confidence interval [CI], 0.53 to 0.64; I² = 51%; 38 trials) without evidence of a meaningful difference between epoetin and darbepoetin. Thromboembolic event rates were higher in ESA-treated patients (pooled RR, 1.51; 95% CI, 1.30 to 1.74; I² = 0%; 37 trials), again without a difference between epoetin and darbepoetin. In 14 trials reporting the Functional Assessment of Cancer Therapy (FACT)-Fatigue subscale, the most common patient-reported outcome, scores decreased by −0.6 in control arms (95% CI, −6.4 to 5.2; I² = 0%) and increased by 2.1 in ESA arms (95% CI, −3.9 to 8.1; I² = 0%). There were fewer thromboembolic and on-study mortality adverse events when ESA treatment was delayed until baseline Hb was less than 10 g/dL, in keeping with current treatment practice, but the difference in effect from early treatment was not significant, and the evidence was limited and insufficient for conclusions. No evidence informed the optimal duration of therapy. Mortality was increased during the on-study period (pooled hazard ratio [HR], 1.17; 95% CI, 1.04 to 1.31; I² = 0%; 37 trials). There was one additional death for every 59 treated patients when the control-arm on-study mortality was 10 percent, and one additional death for every 588 treated patients when the control-arm on-study mortality was 1 percent. A cohort decision model yielded a consistent result: greater loss of life-years when control-arm on-study mortality was higher.
There was no discernible increase in mortality with ESA use over the longest available follow-up (pooled HR, 1.04; 95% CI, 0.99 to 1.10; I² = 38%; 44 trials), but many trials did not include an overall survival endpoint and potential time-dependent confounding was not considered. Conclusions: Results of this update were consistent with the 2006 review. ESAs reduced the need for transfusions and increased the risk of thromboembolism. FACT-Fatigue scores were better with ESA use, but the magnitude of the difference was less than the minimal clinically important difference. An increase in mortality accompanied the use of ESAs. An important unanswered question is whether dosing practices and overall ESA exposure might influence harms.
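The number-needed-to-harm figures quoted above follow directly from the pooled on-study hazard ratio and the control-arm mortality. A short sketch reproducing the arithmetic, treating the hazard ratio as an approximate relative risk:

```python
# Reproduce the report's number-needed-to-harm arithmetic, treating the
# pooled on-study HR of 1.17 as an approximate relative risk.
hr = 1.17
for control_mortality in (0.10, 0.01):
    excess_risk = control_mortality * (hr - 1.0)   # absolute risk increase
    nnh = 1.0 / excess_risk                        # treated patients per extra death
    print(f"control mortality {control_mortality:.0%}: "
          f"1 extra death per {nnh:.0f} treated")  # prints ~59 and ~588
```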
Abstract:
Vibrations, Posture, and the Stabilization of Gaze: An Experimental Study on Impedance Control. R. Kredel, A. Grimm & E.-J. Hossner, University of Bern, Switzerland. Introduction: Franklin and Wolpert (2011) identify impedance control, i.e., the competence to resist changes in position, velocity or acceleration caused by environmental disturbances, as one of five computational mechanisms that allow for skilled and fluent sensorimotor behavior. Accordingly, impedance control is of particular interest in situations in which the motor task exhibits unpredictable components, as is the case in downhill biking or downhill skiing. In an experimental study, the question is asked whether impedance control, beyond its benefits for motor control, also helps to stabilize gaze, which, in turn, may be essential for keeping other control mechanisms (e.g., the internal modeling of future states) in an optimal range. Method: In a 3x2x4 within-subject ANOVA design, 72 participants completed three tests of visual acuity and contrast (Landolt/Grating and Vernier) in two different postures (standing vs. squat) on a platform vibrating at four different frequencies (ZEPTOR; 0 Hz, 4 Hz, 8 Hz, 12 Hz; no random noise; constant amplitude) in a counterbalanced order with 1-minute breaks in between. In addition, perceived exertion (Borg) was rated by participants after each condition. Results: For Landolt and Grating, significant main effects for posture and frequency were revealed, representing lower acuity/contrast thresholds for standing and for higher frequencies in general, as well as a significant interaction (p < .05), reflecting increasing posture differences with increasing frequencies. Overall, performance could be maintained at the 0 Hz/standing level up to a frequency of 8 Hz if bending of the knees was allowed. That this result is not merely due to exertion is supported by the Borg ratings, which showed only significant main effects, i.e., higher exertion scores for standing and for higher frequencies, but no significant interaction (p > .40). The same pattern, although not significant, was revealed for the Vernier test. Discussion: Apparently, postures improving impedance control not only help to resist disturbances but also assist in stabilizing gaze in spite of these perturbations. Consequently, studying the interaction of these control mechanisms in complex unpredictable environments seems to be a fruitful field of research for the future. References: Franklin, D. W., & Wolpert, D. M. (2011). Computational mechanisms of sensorimotor control. Neuron, 72, 425-442.
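A within-subject design of this kind can be analyzed per visual test with a repeated-measures ANOVA. A sketch on synthetic data using statsmodels, in which the subject count matches the abstract but all effect sizes and noise levels are invented:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subject in range(72):                      # 72 participants, as in the study
    for posture in ("standing", "squat"):
        for freq in (0, 4, 8, 12):             # vibration frequencies in Hz
            # Hypothetical thresholds: performance degrades with frequency,
            # more steeply when standing (the reported posture x frequency
            # interaction); slopes and noise are invented for illustration.
            slope = 0.020 if posture == "standing" else 0.005
            rows.append({"subject": subject, "posture": posture, "freq": freq,
                         "threshold": 1.0 + slope * freq + rng.normal(0, 0.05)})

df = pd.DataFrame(rows)
# Two-way repeated-measures ANOVA: posture, frequency, and their interaction
print(AnovaRM(df, depvar="threshold", subject="subject",
              within=["posture", "freq"]).fit())
```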
Abstract:
Although persons infected with human immunodeficiency virus (HIV), particularly men who have sex with men, are at excess risk for anal cancer, it has been difficult to disentangle the influences of anal exposure to human papillomavirus (HPV) infection, immunodeficiency, and combined antiretroviral therapy. A case-control study that included 59 anal cancer cases and 295 individually matched controls was nested in the Swiss HIV Cohort Study (1988-2011). In a subset of 41 cases and 114 controls, HPV antibodies were tested. A majority of anal cancer cases (73%) were men who have sex with men. Current smoking was significantly associated with anal cancer (odds ratio (OR) = 2.59, 95% confidence interval (CI): 1.25, 5.34), as were antibodies against L1 (OR = 4.52, 95% CI: 2.00, 10.20) and E6 (OR = ∞, 95% CI: 4.64, ∞) of HPV16, as well as low CD4+ cell counts, whether measured at nadir (OR per 100-cell/μL decrease = 1.53, 95% CI: 1.18, 2.00) or at cancer diagnosis (OR per 100-cell/μL decrease = 1.24, 95% CI: 1.08, 1.42). However, the influence of CD4+ cell counts appeared to be strongest 6-7 years prior to anal cancer diagnosis (OR for <200 vs. ≥500 cells/μL = 14.0, 95% CI: 3.85, 50.9). Smoking cessation and avoidance of even moderate levels of immunosuppression appear to be important in reducing long-term anal cancer risks.
Abstract:
The study assessed the economic efficiency of different strategies for the control of post-weaning multi-systemic wasting syndrome (PMWS) and porcine circovirus type 2 subclinical infection (PCV2SI), which have a major economic impact on the pig farming industry worldwide. The control strategies investigated consisted of combinations of up to 5 different control measures: (1) PCV2 vaccination of piglets (vac); (2) ensuring an age-adjusted diet for growers (diets); (3) reduction of stocking density (stock); (4) improvement of biosecurity measures (bios); and (5) total depopulation and repopulation of the farm for the elimination of other major pathogens (DPRP). A model was developed to simulate 5 years of production of a 100-sow pig farm with a 3-weekly batch system. A PMWS/PCV2SI disease and economic model, based on PMWS severity scores, was linked to the production model in order to assess disease losses. The PMWS severity score depends on the combination of post-weaning mortality, PMWS morbidity in younger pigs and the proportion of PCV2-infected pigs observed on a farm. The economic analysis investigated eleven different farm scenarios, depending on the number of risk factors present before the intervention. For each strategy, an investment appraisal assessed the extra costs and benefits of reducing a given PMWS severity score to the average score of a slightly affected farm. The net present value obtained for each strategy was then multiplied by the corresponding probability of success to obtain an expected value. A stochastic simulation was performed to account for uncertainty and variability. For moderately affected farms PCV2 vaccination alone was the most cost-efficient strategy; for highly affected farms it was either PCV2 vaccination alone or in combination with biosecurity measures, with the marginal profitability between 'vac' and 'vac+bios' being small. Other strategies such as 'diets', 'vac+diets' and 'bios+diets' were frequently identified as the second or third best strategy. The mean expected values of the best strategy for a moderately and a highly affected farm were £14,739 and £57,648 after 5 years, respectively. This is the first study to compare the economic efficiency of control strategies for PMWS and PCV2SI. The results demonstrate the economic value of PCV2 vaccination and highlight that on highly affected farms biosecurity measures are required to achieve optimal profitability. The model has potential as a farm-level decision support tool for the control of this economically important syndrome.
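The decision rule described (expected value = net present value of a strategy multiplied by its probability of success, with stochastic simulation for uncertainty and variability) can be sketched as follows; all monetary figures, probabilities, and distributions are illustrative, not the study's:

```python
import numpy as np

def expected_npv(annual_benefit, cost_year0, p_success,
                 years=5, discount=0.05, n_sims=10_000, benefit_sd=0.2, seed=0):
    """Monte Carlo expected NPV of a control strategy over a 5-year horizon."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, years + 1)
    discounts = 1.0 / (1.0 + discount) ** t
    # Stochastic annual benefits capture uncertainty in the losses avoided
    benefits = rng.normal(annual_benefit, benefit_sd * annual_benefit,
                          size=(n_sims, years))
    npv = (benefits * discounts).sum(axis=1) - cost_year0
    return p_success * npv.mean()    # NPV weighted by probability of success

# Illustrative comparison of two strategies (figures invented):
for name, benefit, cost, p in [("vac", 18_000, 9_000, 0.80),
                               ("vac+bios", 22_000, 14_000, 0.85)]:
    print(name, round(expected_npv(benefit, cost, p)))
```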
Abstract:
Because of increasing bulk milk somatic cell counts and continuous clinical mastitis problems in a substantial number of herds, a national mastitis control program was started in 2005 to improve udder health in the Netherlands. The program began with the founding of the Dutch Udder Health Centre (UGCN), which was tasked with coordinating the program. The program consisted of two parts, a research part and a knowledge-transfer part, which were integrated as much as possible. The knowledge-transfer part comprised two communication strategies: a central and a peripheral approach. The central approach was based on educating farmers using comprehensive science-based and rational argumentation about mastitis prevention and included on-farm study group meetings. Comprehensive education materials were developed for farmers who were internally motivated to improve udder health. The peripheral approach sought to motivate farmers to implement certain management measures using nontechnical arguments, through mass media campaigns that each focused on a single aspect of mastitis prevention. These communication strategies, as well as an integrated approach between various stakeholders and different scientific disciplines, were used to reach as many farmers as possible. It should be noted that, because this intervention took place at a national level, no control group was available, as it would be impossible to isolate farmers from all forms of communication for 5 years. Based on several studies executed during and after the program, however, the results suggest that udder health improved on a national level during the course of the program from 2005 to 2010. Within a cohort of dairy herds monitored during the program, the prevalence of subclinical mastitis did not change significantly (23.0 in 2004 vs. 22.2 in 2009). The incidence rate of clinical mastitis, however, decreased significantly, from 33.5 to 28.1 quarter cases per 100 cow-years at risk. The most important elements of the farmers' mindset toward mastitis control also changed favorably. The simulated costs of mastitis per farm were reduced by €400 per year compared with a situation in which mastitis had not changed. Extrapolated to all Dutch farms, the sector as a whole reduced the total costs of mastitis by €8 million per year. It is difficult to attribute the improved udder health entirely to the efforts of the program, given the lack of a control group. Nevertheless, the €8 million invested by the Dutch dairy industry in a 5-yr national mastitis control program likely improved udder health and seemed to pay for itself financially.
Abstract:
Aggressive behavior can be classified into hostile and instrumental aggression (Anderson & Bushman, 2002). This classification is mostly used synonymously with reactive and proactive aggression, whereas the differences between hostile and instrumental aggression lie on three dimensions: the primary goal, the amount of anger, and the degree of planning and calculation (Bushman & Anderson, 2001). Although there are rating instruments and experimental paradigms to measure hostile aggression, there is no instrument to measure instrumental aggression. The following study presents an approach to measuring instrumental aggression with an experimental laboratory paradigm. The instrument was first tested on two samples of normal young adolescents (n1 = 100, mean age = 19.14; n2 = 60, mean age = 21.46). The first study revealed a strong correlation with a laboratory aggression paradigm measuring hostile aggression, but no correlations with self-reported aggression in the Buss and Perry questionnaire. These results were replicated in a second study, which revealed an additional correlation with aggressive but not adaptive assertiveness. Secondly, the instrument was part of the evaluation of the Reasoning and Rehabilitation program R&R2 (Ross, Hilborn & Lidell, 1984) in an institution for male adolescents with adjustment problems in Switzerland. The R&R2 is a cognitive-behavioral group therapy to reduce antisocial and promote prosocial cognitions and behavior. The treatment group (n = 16; age range = 15-17) was compared to a no-treatment control group (n = 24; age range = 17-19) pre- and post-treatment. Furthermore, aggressive behavior was surveyed and experimentally measured. Hostile rumination, aggressive and adaptive assertiveness, and emotional and social competence were included in the measurement to estimate construct validity.