184 results for Dynamic criteria
Abstract:
The development of forensic intelligence relies on the expression of suitable models that better represent the contribution of forensic intelligence in relation to the criminal justice system, policing and security. Such models assist in comparing and evaluating methods and new technologies, provide transparency and foster the development of new applications. Interestingly, strong similarities between two separate projects focusing on specific forensic science areas were recently observed. These observations have led to the induction of a general model (Part I) that could guide the use of any forensic science case data from an intelligence perspective. The present article builds upon this general approach by focusing on decisional and organisational issues. The article investigates the comparison process and evaluation system that lie at the heart of the forensic intelligence framework, advocating scientific decision criteria and a structured but flexible and dynamic architecture. These building blocks are crucial and clearly lie within the expertise of forensic scientists. However, they are only part of the problem. Forensic intelligence includes other blocks with their respective interactions, decision points and tensions (e.g. regarding how to guide detection and how to integrate forensic information with other information). Formalising these blocks identifies many questions and potential answers. Addressing these questions is essential for the progress of the discipline. Such a process requires clarifying the role and place of the forensic scientist within the whole process and their relationship to other stakeholders.
Abstract:
Invasive fungal diseases (IFDs) have become major causes of morbidity and mortality among highly immunocompromised patients. Authoritative consensus criteria to diagnose IFD have been useful in establishing eligibility criteria for antifungal trials. There is an important need for generation of consensus definitions of outcomes of IFD that will form a standard for evaluating treatment success and failure in clinical trials. Therefore, an expert international panel consisting of the Mycoses Study Group and the European Organization for Research and Treatment of Cancer was convened to propose guidelines for assessing treatment responses in clinical trials of IFDs and for defining study outcomes. Major fungal diseases that are discussed include invasive disease due to Candida species, Aspergillus species and other molds, Cryptococcus neoformans, Histoplasma capsulatum, and Coccidioides immitis. We also discuss potential pitfalls in assessing outcome, such as conflicting clinical, radiological, and/or mycological data and gaps in knowledge.
Abstract:
Functional connectivity (FC) as measured by correlation between fMRI BOLD time courses of distinct brain regions has revealed meaningful organization of spontaneous fluctuations in the resting brain. However, an increasing amount of evidence points to non-stationarity of FC; i.e., FC dynamically changes over time, reflecting additional, rich information about brain organization but also presenting new challenges for analysis and interpretation. Here, we propose a data-driven approach based on principal component analysis (PCA) to reveal hidden patterns of coherent FC dynamics across multiple subjects. We demonstrate the feasibility and relevance of this new approach by examining the differences in dynamic FC between 13 healthy control subjects and 15 minimally disabled relapsing-remitting multiple sclerosis patients. We estimated whole-brain dynamic FC of regionally-averaged BOLD activity using sliding time windows. We then used PCA to identify FC patterns, termed "eigenconnectivities", that reflect meaningful patterns in FC fluctuations. We then assessed the contributions of these patterns to the dynamic FC at any given time point and identified a network of connections centered on the default-mode network with altered contribution in patients. Our results complement traditional stationary analyses and reveal novel insights into brain connectivity dynamics and their modulation in a neurodegenerative disease.
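As an illustration of the pipeline described above (sliding-window correlations followed by PCA to extract "eigenconnectivities"), the following minimal Python sketch shows the idea on synthetic data; the window length, number of components and array shapes are assumptions for the example, not the study's settings.

```python
import numpy as np

def dynamic_fc_eigenconnectivities(bold, win_len=30, step=1, n_components=10):
    """Sliding-window dynamic FC followed by PCA (illustrative sketch).

    bold: (n_timepoints, n_regions) array of regionally averaged BOLD signals.
    Returns the principal FC patterns ("eigenconnectivities") and the
    contribution of each pattern to every windowed FC estimate.
    """
    n_t, n_r = bold.shape
    iu = np.triu_indices(n_r, k=1)           # unique region pairs (connections)
    starts = range(0, n_t - win_len + 1, step)

    # One row per window: vectorised correlation matrix (dynamic FC)
    dfc = np.array([np.corrcoef(bold[s:s + win_len].T)[iu] for s in starts])

    # PCA via SVD of the demeaned windows-by-connections matrix
    dfc_c = dfc - dfc.mean(axis=0)
    u, s, vt = np.linalg.svd(dfc_c, full_matrices=False)
    eigenconnectivities = vt[:n_components]            # patterns in connection space
    weights = u[:, :n_components] * s[:n_components]   # per-window contributions
    return eigenconnectivities, weights

# Synthetic example: 20 regions, 300 time points
rng = np.random.default_rng(0)
bold = rng.standard_normal((300, 20))
patterns, contributions = dynamic_fc_eigenconnectivities(bold)
print(patterns.shape, contributions.shape)   # (10, 190) (271, 10)
```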
Abstract:
It is well established that Notch signaling plays a critical role at multiple stages of T cell development and activation. However, detailed analysis of the cellular and molecular events associated with Notch signaling in T cells is hampered by the lack of reagents that can unambiguously measure cell surface Notch receptor expression. Using novel rat mAbs directed against the extracellular domains of Notch1 and Notch2, we find that Notch1 is already highly expressed on common lymphoid precursors in the bone marrow and remains at high levels during intrathymic maturation of CD4(-)CD8(-) thymocytes. Notch1 is progressively down-regulated at the CD4(+)CD8(+) and mature CD4(+) or CD8(+) thymic stages and is expressed at low levels on peripheral T cells. Immunofluorescence staining of thymus cryosections further revealed a localization of Notch1(+)CD25(-) cells adjacent to the thymus capsule. Notch1 was up-regulated on peripheral T cells following activation in vitro with anti-CD3 mAbs or infection in vivo with lymphocytic choriomeningitis virus or Leishmania major. In contrast to Notch1, Notch2 was expressed at intermediate levels on common lymphoid precursors and CD117(+) early intrathymic subsets, but disappeared completely at subsequent stages of T cell development. However, transient up-regulation of Notch2 was also observed on peripheral T cells following anti-CD3 stimulation. Collectively, our novel mAbs reveal a dynamic regulation of Notch1 and Notch2 surface expression during T cell development and activation. Furthermore, they provide an important resource for future analysis of Notch receptors in various tissues including the hematopoietic system.
Abstract:
PURPOSE: Cardiovascular magnetic resonance (CMR) has become a robust and important diagnostic imaging modality in cardiovascular medicine. However, insufficient image quality may compromise its diagnostic accuracy. No standardized criteria are available to assess the quality of CMR studies. We aimed to describe and validate standardized criteria to evaluate the quality of CMR studies including: a) cine steady-state free precession, b) delayed gadolinium enhancement, and c) adenosine stress first-pass perfusion. These criteria will serve for the assessment of image quality in the setting of the Euro-CMR registry. METHOD AND MATERIALS: First, a total of 45 quality criteria were defined (35 qualitative criteria with a score from 0-3, and 10 quantitative criteria). The qualitative score ranged from 0 to 105. The lower the qualitative score, the better the quality. The quantitative criteria were based on the absolute signal intensity (delayed enhancement) and on the signal increase (perfusion) of the anterior/posterior left ventricular wall after gadolinium injection. These criteria were then applied in 30 patients scanned with a 1.5T system and in 15 patients scanned with a 3.0T system. The examinations were jointly interpreted by 3 CMR experts and 1 study nurse. In these 45 patients the correlation between the results of the quality assessment obtained by the different readers was calculated. RESULTS: On the 1.5T machine, the mean quality score was 3.5. The mean difference between each pair of observers was 0.2 (5.7%) with a mean standard deviation of 1.4. On the 3.0T machine, the mean quality score was 4.4. The mean difference between each pair of observers was 0.3 (6.4%) with a mean standard deviation of 1.6. The quantitative quality assessments between observers were well correlated for the 1.5T machine: R was between 0.78 and 0.99 (p …). CONCLUSION: The described criteria for the assessment of CMR image quality are robust and have a low inter-observer variability, especially on 1.5T systems. CLINICAL RELEVANCE/APPLICATION: These criteria will allow the standardization of CMR examinations. They will help to improve the overall quality of examinations and the comparison between clinical studies.
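To make the qualitative part of such a scoring scheme concrete, the sketch below sums 35 criteria, each rated 0 (best) to 3 (worst), into the 0-105 score described above; the criterion names and example ratings are hypothetical and not taken from the Euro-CMR criteria.

```python
def qualitative_quality_score(ratings: dict) -> int:
    """Sum 35 qualitative criteria, each rated 0 (best) to 3 (worst).

    The total therefore ranges from 0 to 105; lower means better image quality.
    """
    if len(ratings) != 35:
        raise ValueError("expected ratings for all 35 qualitative criteria")
    if any(not (0 <= r <= 3) for r in ratings.values()):
        raise ValueError("each rating must be between 0 and 3")
    return sum(ratings.values())

# Hypothetical example: a study with two minor quality issues
ratings = {f"criterion_{i:02d}": 0 for i in range(1, 36)}
ratings["criterion_04"] = 2   # e.g. a cine artefact (hypothetical criterion)
ratings["criterion_17"] = 1   # e.g. suboptimal contrast timing (hypothetical)
print(qualitative_quality_score(ratings))   # 3
```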
Abstract:
The shape of alliance processes over the course of psychotherapy has already been studied in several process-outcome studies on very brief psychotherapy. The present study applies the shape-of-change methodology to short-term dynamic psychotherapies and complements this method with hierarchical linear modeling. A total of 50 psychotherapies of up to 40 sessions were included. Alliance was measured at the end of each session. The results indicate that a linear progression model is most adequate. Three main patterns were found: stable, linear, and quadratic growth. The linear growth pattern, along with the slope parameter, was related to treatment outcome. This study sheds additional light on alliance process research, underscores the importance of linear alliance progression for outcome, and also fosters a better understanding of its limitations.
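A two-level (hierarchical) linear growth model of the kind mentioned above, with sessions nested within patients, could be fitted along the following lines; the simulated data, column names and model call are an illustrative sketch rather than the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate long-format data: 50 therapies, up to 40 sessions, alliance
# measured at the end of each session (all values are made up).
rng = np.random.default_rng(1)
rows = []
for patient in range(50):
    intercept = rng.normal(4.0, 0.5)   # patient-specific starting alliance
    slope = rng.normal(0.02, 0.01)     # patient-specific linear growth
    for session in range(1, 41):
        rows.append({"patient": patient, "session": session,
                     "alliance": intercept + slope * session + rng.normal(0, 0.3)})
data = pd.DataFrame(rows)

# Random-intercept, random-slope model: the fixed 'session' coefficient is the
# average linear alliance progression across therapies.
model = smf.mixedlm("alliance ~ session", data,
                    groups=data["patient"], re_formula="~session")
result = model.fit()
print(result.summary())
```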
Abstract:
Recently published criteria using clinical items (ataxia or asymmetrical distribution at onset or full development, and sensory loss not restricted to the lower limbs) and electrophysiological items (fewer than two abnormal lower limb motor nerves and at least one abolished SAP or three SAPs below 30% of the lower limit of normal in the upper limbs) were sensitive and specific for the diagnosis of sensory neuronopathy (SNN) (Camdessanche et al., Brain, 2009). However, these criteria need to be validated in a large multicenter population. For this, a database collecting cases from fifteen Reference Centers for Neuromuscular Diseases in France and Switzerland is currently being developed. So far, data from 120 patients with clinically pure sensory neuropathy have been collected. Cases were classified independently from the evaluated criteria as SNN (53), non-SNN (46) or suspected SNN (21) according to the expert's diagnosis. Using the criteria, SNN was possible in 83% (44/53), 23.9% (11/46) and 71.4% (15/21) of cases, respectively. In the non-SNN group, half of the patients with a diagnosis of possible SNN had an ataxic form of inflammatory demyelinating neuropathy. In the SNN group, half of those not retained as possible SNN had CANOMAD, paraneoplasia, or B12 deficiency. In a second step, after application of the items necessary to reach the level of probable SNN (no biological or electrophysiological abnormalities excluding SNN; presence of onconeural antibody, cisplatin treatment, Sjögren's syndrome or spinal cord MRI high signal in the posterior column), a final diagnosis of possible or probable SNN was obtained in, respectively, 90.6% (48/53), 8.8% (4/45), and 71.4% (15/21) of patients in the three groups. Among the 5 patients with a final non-SNN but initial SNN diagnosis, 3 had motor conduction abnormalities (one with CANOMAD), and among the 4 patients with a final SNN but initial non-SNN diagnosis, one had anti-Hu antibody and one was discussed as a possible ataxic CIDP. These preliminary results confirm the sensitivity and specificity of the proposed criteria for the diagnosis of SNN.
Abstract:
To estimate the prevalence of metabolically healthy obesity (MHO) according to different definitions. A population-based sample of 2803 women and 2557 men participated in the study. Metabolic abnormalities were defined using six sets of criteria, which included different combinations of the following: waist; blood pressure; total, high-density lipoprotein or low-density lipoprotein cholesterol; triglycerides; fasting glucose; homeostasis model assessment; high-sensitivity C-reactive protein; personal history of cardiovascular, respiratory or metabolic diseases. For each set, the prevalence of MHO was assessed for body mass index (BMI), waist, or percent body fat. Among obese (BMI ≥30 kg/m(2)) participants, the prevalence of MHO ranged between 3.3 and 32.1% in men and between 11.4 and 43.3% in women according to the criteria used. Using abdominal obesity, the prevalence of MHO ranged between 5.7 and 36.7% (men) and 12.2 and 57.5% (women). Using percent body fat led to a prevalence of MHO ranging between 6.4 and 43.1% (men) and 12.0 and 55.5% (women). MHO participants had lower odds of presenting a family history of type 2 diabetes. After multivariate adjustment, the odds of presenting with MHO decreased with increasing age, whereas no relationship was found with gender, alcohol consumption or tobacco smoking using most sets of criteria. Physical activity was positively related to BMI-defined MHO, whereas increased waist was negatively related. MHO prevalence varies considerably according to the criteria used, underscoring the need for a standard definition of this metabolic entity. Physical activity increases the likelihood of presenting with MHO, and MHO is associated with a lower prevalence of family history of type 2 diabetes.
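To illustrate how strongly the prevalence estimate depends on the chosen definition, the sketch below computes BMI-defined MHO prevalence under a single illustrative criteria set; the column names, units and cut-offs are assumptions for the example and do not reproduce the six sets compared in the study.

```python
import numpy as np
import pandas as pd

def mho_prevalence(df: pd.DataFrame) -> float:
    """Percentage of BMI-defined obese participants (BMI >= 30 kg/m2) who are
    'metabolically healthy' under one illustrative criteria set: no elevated
    blood pressure, triglycerides or fasting glucose, and no low HDL
    (sex-specific threshold). Cut-offs are common values, not the study's.
    """
    obese = df[df["bmi"] >= 30]
    hdl_ok = np.where(obese["sex"] == "M", obese["hdl"] >= 1.0, obese["hdl"] >= 1.3)
    healthy = (
        (obese["systolic_bp"] < 130) & (obese["diastolic_bp"] < 85)
        & (obese["triglycerides"] < 1.7)      # mmol/L
        & (obese["fasting_glucose"] < 5.6)    # mmol/L
        & hdl_ok                              # mmol/L, sex-specific
    )
    return 100 * healthy.mean()

# Tiny illustrative data set (all values are made up)
df = pd.DataFrame({
    "sex": ["M", "F", "M", "F"],
    "bmi": [31.2, 33.5, 28.0, 30.4],
    "systolic_bp": [125, 142, 130, 118],
    "diastolic_bp": [80, 92, 85, 75],
    "triglycerides": [1.2, 2.1, 1.5, 1.0],
    "fasting_glucose": [5.1, 6.2, 5.4, 5.0],
    "hdl": [1.1, 1.2, 1.3, 1.5],
})
print(mho_prevalence(df))   # share of obese rows meeting all criteria
```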
Abstract:
PURPOSE: From February 2001 to February 2002, 946 patients with advanced GI stromal tumors (GISTs) treated with imatinib were included in a controlled EORTC/ISG/AGITG (European Organisation for Research and Treatment of Cancer/Italian Sarcoma Group/Australasian Gastro-Intestinal Trials Group) trial. This analysis investigates whether the response classification assessed by RECIST (Response Evaluation Criteria in Solid Tumors) predicts time to progression (TTP) and overall survival (OS). PATIENTS AND METHODS: Per protocol, the first three disease assessments were done at 2, 4, and 6 months. For the purpose of the analysis (landmark method), disease response was subclassified into six categories: partial response (PR; > 30% size reduction), minor response (MR; 10% to 30% reduction), no change (NC) as either NC- (0% to 10% reduction) or NC+ (0% to 20% size increase), progressive disease (PD; > 20% increase/new lesions), and subjective PD (clinical progression). RESULTS: A total of 906 patients had measurable disease at entry. At all measurement time points, complete response (CR), PR, and MR resulted in similar TTP and OS; this was also true for NC- and NC+, and for PD and subjective PD. Patients were subsequently classified as responders (CR/PR/MR), NC (NC+/NC-), or PD. This three-class response categorization was found to be highly predictive of further progression or survival at the first two measurement points. After 6 months of imatinib, responders (CR/PR/MR) had the same survival prognosis as patients classified as NC. CONCLUSION: RECIST perfectly enables early discrimination between patients who benefited long term from imatinib and those who did not. After 6 months of imatinib, if the patient is not experiencing PD, the pattern of radiologic response by tumor size criteria has no prognostic value for further outcome. Imatinib needs to be continued as long as there is no progression according to RECIST.
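The six-category subclassification used in this analysis maps directly onto the percent change in tumour size, as in the sketch below; handling of the exact boundary values and the omission of CR (complete disappearance of lesions) are simplifications for illustration.

```python
def classify_response(pct_change: float, new_lesions: bool = False,
                      clinical_progression: bool = False) -> str:
    """Subclassify response from percent change in tumour size (negative =
    reduction), following the six categories described above. Boundary
    handling at the exact cut-offs is an illustrative choice."""
    if clinical_progression:
        return "subjective PD"          # clinical progression
    if new_lesions or pct_change > 20:
        return "PD"                     # > 20% increase or new lesions
    if pct_change < -30:
        return "PR"                     # > 30% size reduction
    if pct_change <= -10:
        return "MR"                     # 10% to 30% reduction
    if pct_change <= 0:
        return "NC-"                    # 0% to 10% reduction
    return "NC+"                        # 0% to 20% increase

for change in (-45, -15, -5, 12, 30):
    print(change, classify_response(change))
# -45 PR, -15 MR, -5 NC-, 12 NC+, 30 PD
```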
Abstract:
OBJECTIVE: To develop a provisional definition for the evaluation of response to therapy in juvenile dermatomyositis (DM) based on the Paediatric Rheumatology International Trials Organisation juvenile DM core set of variables. METHODS: Thirty-seven experienced pediatric rheumatologists from 27 countries achieved consensus on 128 difficult patient profiles, rating each as clinically improved or not improved, using a stepwise approach (patient's rating, statistical analysis, definition selection). Using the physicians' consensus ratings as the "gold standard measure," chi-square, sensitivity, specificity, false-positive and false-negative rates, area under the receiver operating characteristic curve, and kappa agreement for candidate definitions of improvement were calculated. For definitions with kappa values >0.8, the kappa value was multiplied by the face validity score to select the top definitions. RESULTS: The top definition of improvement was at least 20% improvement from baseline in 3 of 6 core set variables with no more than 1 of the remaining worsening by more than 30%, which cannot be muscle strength. The second-highest scoring definition was at least 20% improvement from baseline in 3 of 6 core set variables with no more than 2 of the remaining worsening by more than 25%, which cannot be muscle strength (definition P1 selected by the International Myositis Assessment and Clinical Studies group). The third is similar to the second with the maximum amount of worsening set to 30%. This indicates convergent validity of the process. CONCLUSION: We propose a provisional data-driven definition of improvement that reflects well the consensus rating of experienced clinicians, and which incorporates clinically meaningful change in core set variables in a composite end point for the evaluation of global response to therapy in juvenile DM.
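The top definition can be written as a simple rule over the six core set variables, as sketched below; the variable names are hypothetical labels, and the sketch assumes all variables are oriented so that a higher value means improvement (in practice several core set measures are scored in the opposite direction and would need to be inverted first).

```python
def improved(baseline: dict, follow_up: dict,
             muscle_strength_key: str = "muscle_strength") -> bool:
    """Top provisional definition described above: >= 20% improvement from
    baseline in at least 3 of 6 core set variables, with no more than 1
    variable worsening by more than 30%, and that variable must not be
    muscle strength. Assumes higher = better and non-zero baseline values.
    """
    pct = {k: (follow_up[k] - baseline[k]) / baseline[k] * 100 for k in baseline}
    n_improved = sum(v >= 20 for v in pct.values())
    worsened = [k for k, v in pct.items() if v < -30]
    return (n_improved >= 3
            and len(worsened) <= 1
            and muscle_strength_key not in worsened)

# Hypothetical core set scores (labels and values are illustrative only)
baseline = {"muscle_strength": 60, "physician_global": 40, "parent_global": 50,
            "functional_ability": 55, "muscle_enzymes": 45, "disease_activity": 50}
follow_up = {"muscle_strength": 75, "physician_global": 50, "parent_global": 62,
             "functional_ability": 54, "muscle_enzymes": 44, "disease_activity": 51}
print(improved(baseline, follow_up))   # True: 3 variables improved >= 20%, none worsened > 30%
```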
Abstract:
A procedure for the dynamic generation of 1,6-hexamethylene diisocyanate (HDI) aerosol atmospheres of 70 micrograms m-3 (0.01 ppm) to 1.75 mg m-3 (0.25 ppm), based on the precise control of the evaporation of pure liquid HDI and subsequent dilution with air, was developed. The apparatus consisted of a home-made glass nebulizer coupled with a separation stage to exclude non-respirable droplets (greater than 10 microns). The aerosol concentrations were achieved by passing air through the nebulizer at 1.5-4.5 l min-1 to generate dynamically 0.01-0.25 ppm of diisocyanate in an experimental chamber of 8.55 m3. The distribution of the liquid aerosol was established with an optical counter, and the diisocyanate concentration was determined from samples collected in impingers by a high-pressure liquid chromatographic method. The atmospheres generated were suitable for the full-scale evaluation of both sampling procedures and analytical methods: at 140 micrograms m-3 (0.02 ppm) they remained stable for 15-min provocation tests in clinical asthma, as verified by breathing-zone sampling of exposed patients.
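The correspondence between the mass concentrations and ppm values quoted above follows from the standard ideal-gas conversion; the short sketch below assumes a molar mass of about 168.2 g/mol for HDI and a molar volume of 24.45 L/mol at 25 °C and 101.3 kPa, values that are not stated in the abstract.

```python
# Mass-to-volume concentration conversion for HDI (assumed molar mass and
# molar volume; both are illustrative constants, not taken from the paper).
HDI_MOLAR_MASS = 168.2    # g/mol
MOLAR_VOLUME = 24.45      # L/mol at 25 degrees C and 101.3 kPa

def ug_per_m3_to_ppm(ug_per_m3: float) -> float:
    """Convert micrograms per cubic metre to parts per million (by volume)."""
    return ug_per_m3 * MOLAR_VOLUME / HDI_MOLAR_MASS / 1000.0

print(round(ug_per_m3_to_ppm(70), 3))     # ~0.01 ppm, the lower end of the range
print(round(ug_per_m3_to_ppm(1750), 2))   # ~0.25 ppm, the upper end of the range
```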
Abstract:
BACKGROUND: Complex foot and ankle fractures, such as calcaneum fractures or Lisfranc dislocations, are often associated with a poor outcome, especially in terms of gait capacity. Indeed, degenerative changes often lead to chronic pain and chronic functional limitations. Prescription footwear represents an important therapeutic tool during the rehabilitation process. Local Dynamic Stability (LDS) is the ability of the locomotor system to maintain continuous walking by accommodating the small perturbations that occur naturally during walking. Because it reflects the degree of control over the gait, LDS has been advocated as a relevant indicator for evaluating different conditions and pathologies. The aim of this study was to analyze changes in LDS induced by orthopaedic shoes in patients with persistent foot and ankle injuries. We hypothesised that footwear adaptation might help patients to improve gait control, which could lead to higher LDS. METHODS: Twenty-five middle-aged inpatients (5 females, 20 males) participated in the study. They were treated for chronic post-traumatic disabilities following ankle and/or foot fractures in a Swiss rehabilitation clinic. During their stay, included inpatients received orthopaedic shoes with custom-made orthoses (insoles). They performed two 30 s walking trials with standard shoes and two 30 s trials with orthopaedic shoes. A triaxial motion sensor recorded 3D accelerations at the lower back level. LDS was assessed by computing divergence exponents (maximal Lyapunov exponents) from the acceleration signals. Pain was evaluated with a Visual Analogue Scale (VAS). LDS and pain differences between the trials with standard shoes and the trials with orthopaedic shoes were assessed. RESULTS: Orthopaedic shoes significantly improved LDS in all three axes (medio-lateral: 10% relative change, paired t-test p < 0.001; vertical: 9%, p = 0.03; antero-posterior: 7%, p = 0.04). A significant decrease in pain level (VAS score -29%) was observed. CONCLUSIONS: Footwear adaptation led to pain relief and to improved foot and ankle proprioception. It is likely that this enhancement allows patients to better control foot placement; as a result, higher dynamic stability was observed. LDS therefore seems to be a valuable index that could be used in the early evaluation of footwear outcomes in clinical settings.
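A divergence (maximal Lyapunov) exponent of the kind used above can be estimated with a Rosenstein-style procedure, sketched below on a synthetic signal; the embedding dimension, delay, fit range and Theiler window are illustrative choices and do not reproduce the study's settings.

```python
import numpy as np
from scipy.spatial.distance import cdist

def max_divergence_exponent(signal, dim=5, tau=10, fit_steps=30, min_sep=50):
    """Rosenstein-style estimate of the maximal Lyapunov (divergence) exponent
    of a 1D signal, expressed per sample. All parameters are illustrative."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal) - (dim - 1) * tau
    # Time-delay embedding: each row is one state-space point
    emb = np.array([signal[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

    # Nearest neighbour of each point, excluding temporally close points
    dist = cdist(emb, emb)
    idx = np.arange(n)
    dist[np.abs(idx[:, None] - idx[None, :]) < min_sep] = np.inf
    nn = dist.argmin(axis=1)

    # Average log divergence between each point and its neighbour over time
    steps = np.arange(fit_steps)
    div = []
    for k in steps:
        valid = (idx + k < n) & (nn + k < n)
        d = np.linalg.norm(emb[idx[valid] + k] - emb[nn[valid] + k], axis=1)
        div.append(np.mean(np.log(d[d > 0])))

    # Slope of the initial divergence curve = divergence exponent per sample
    slope, _ = np.polyfit(steps, div, 1)
    return slope

# Illustrative call on a synthetic acceleration-like signal
rng = np.random.default_rng(2)
print(max_divergence_exponent(rng.standard_normal(2000)))
```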
Abstract:
PURPOSE: Spine surgery rates are increasing worldwide. Treatment failures are often attributed to poor patient selection and inappropriate treatment, but for many spinal disorders there is little consensus on the precise indications for surgery. With an aging population, more patients with lumbar degenerative spondylolisthesis (LDS) will present for surgery. The aim of this study was to develop criteria for the appropriateness of surgery in symptomatic LDS. METHODS: A systematic review was carried out to summarize the current level of evidence for the treatment of LDS. Clinical scenarios were generated comprising combinations of signs and symptoms in LDS and other relevant variables. Based on the systematic review and their own clinical experience, twelve multidisciplinary international experts rated each scenario on a 9-point scale (1 highly inappropriate, 9 highly appropriate) with respect to performing decompression only, fusion, and instrumented fusion. Surgery for each theoretical scenario was classified as appropriate, inappropriate, or uncertain based on the median ratings and the disagreement in the ratings. RESULTS: 744 hypothetical scenarios were generated; overall, surgery (of some type) was rated appropriate in 27 %, uncertain in 41 % and inappropriate in 31 %. Frank panel disagreement was low (7 % of scenarios). Face validity was shown by the logical relationship between each variable's subcategories and the appropriateness ratings, e.g., no/mild disability had a mean appropriateness rating of 2.3 ± 1.5, whereas the rating for moderate disability was 5.0 ± 1.6 and for severe disability, 6.6 ± 1.6. Similarly, the average rating for no/minimal neurological abnormality was 2.3 ± 1.5, increasing to 4.3 ± 2.4 for moderate and 5.9 ± 1.7 for severe abnormality. The three variables most likely (p < 0.0001) to be components of scenarios rated "appropriate" were: severe disability, no yellow flags, and severe neurological deficit. CONCLUSION: This is the first study to report criteria for determining candidacy for surgery in LDS developed by a multidisciplinary international panel using a validated method (the RAND Appropriateness Method, RAM). The panel ratings followed logical clinical rationale, indicating good face validity. The work refines the clinical classification and phenotype of degenerative spondylolisthesis. The predictive validity of the criteria should be evaluated prospectively to examine whether patients treated "appropriately" have better clinical outcomes.
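The appropriate/uncertain/inappropriate classification from the panel's ratings can be sketched as below; the disagreement rule used (at least one third of the panel rating in 1-3 and at least one third in 7-9) and the median cut-offs are common RAND/UCLA conventions assumed for illustration, not necessarily those applied in the study.

```python
import statistics

def classify_scenario(ratings):
    """Classify a clinical scenario from panel ratings on the 1 (highly
    inappropriate) to 9 (highly appropriate) scale, in the spirit of the
    RAND/UCLA appropriateness method. The disagreement rule and median
    cut-offs are assumed conventions, not taken from the paper."""
    med = statistics.median(ratings)
    third = len(ratings) / 3
    disagreement = (sum(r <= 3 for r in ratings) >= third
                    and sum(r >= 7 for r in ratings) >= third)
    if disagreement or 3.5 <= med <= 6.5:
        return "uncertain"
    return "appropriate" if med >= 7 else "inappropriate"

# Hypothetical ratings from a twelve-expert panel, as in the study
print(classify_scenario([8, 9, 7, 8, 9, 8, 7, 9, 8, 7, 8, 9]))   # appropriate
print(classify_scenario([2, 9, 1, 8, 2, 9, 1, 8, 3, 7, 2, 9]))   # uncertain (disagreement)
```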