718 results for Intuitive
Abstract:
BACKGROUND High-risk prostate cancer (PCa) is an extremely heterogeneous disease. A clear definition of prognostic subgroups is mandatory. OBJECTIVE To develop a pretreatment prognostic model for PCa-specific survival (PCSS) in high-risk PCa based on combinations of unfavorable risk factors. DESIGN, SETTING, AND PARTICIPANTS We conducted a retrospective multicenter cohort study including 1360 consecutive patients with high-risk PCa treated at eight European high-volume centers. INTERVENTION Retropubic radical prostatectomy with pelvic lymphadenectomy. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Two Cox multivariable regression models were constructed to predict PCSS as a function of dichotomization of clinical stage (< cT3 vs cT3-4), Gleason score (GS) (2-7 vs 8-10), and prostate-specific antigen (PSA; ≤ 20 ng/ml vs > 20 ng/ml). The first "extended" model includes all seven possible combinations; the second "simplified" model includes three subgroups: a good prognosis subgroup (one single high-risk factor); an intermediate prognosis subgroup (PSA >20 ng/ml and stage cT3-4); and a poor prognosis subgroup (GS 8-10 in combination with at least one other high-risk factor). The predictive accuracy of the models was summarized and compared. Survival estimates and clinical and pathologic outcomes were compared between the three subgroups. RESULTS AND LIMITATIONS The simplified model yielded an R² of 33% with a 5-yr area under the curve (AUC) of 0.70, with no significant loss of predictive accuracy compared with the extended model (R²: 34%; AUC: 0.71). The 5- and 10-yr PCSS rates were 98.7% and 95.4%, 96.5% and 88.3%, and 88.8% and 79.7% for the good, intermediate, and poor prognosis subgroups, respectively (p = 0.0003). Overall survival, clinical progression-free survival, and histopathologic outcomes worsened significantly in a stepwise fashion from the good to the poor prognosis subgroups.
Limitations of the study are the retrospective design and the long study period. CONCLUSIONS This study presents an intuitive and easy-to-use stratification of high-risk PCa into three prognostic subgroups. The model is useful for counseling and decision making in the pretreatment setting.
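The simplified three-subgroup rule is concrete enough to sketch directly. The following is a minimal, hypothetical encoding of the grouping described in the abstract; the function name and boolean inputs are ours, not the authors':

```python
def risk_subgroup(stage_ct3_4: bool, gleason_8_10: bool, psa_gt_20: bool) -> str:
    """Classify a high-risk prostate cancer patient into the simplified
    three-group model: each argument flags one dichotomized risk factor.
    Assumes at least one factor is present (otherwise not 'high-risk')."""
    n_factors = sum([stage_ct3_4, gleason_8_10, psa_gt_20])
    if n_factors == 0:
        raise ValueError("no high-risk factor present")
    if gleason_8_10 and n_factors >= 2:
        return "poor"          # GS 8-10 plus at least one other factor
    if psa_gt_20 and stage_ct3_4:
        return "intermediate"  # PSA >20 ng/ml combined with cT3-4
    return "good"              # any single high-risk factor
```

The three branches cover all seven combinations of the extended model, which is why the simplified model loses almost no predictive accuracy.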
Abstract:
In this paper we introduce a class of descriptors for regular languages arising from an application of the Stone duality between finite Boolean algebras and finite sets. These descriptors, called classical fortresses, are objects specified in classical propositional logic and capable of accepting exactly the regular languages. To prove this, we show that the languages accepted by classical fortresses and by deterministic finite automata coincide. Classical fortresses, besides being propositional descriptors for regular languages, also turn out to be an efficient tool for providing alternative and intuitive proofs of the closure properties of regular languages.
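The abstract does not spell out the fortress construction, but the DFA side of the claimed equivalence is standard and can be sketched as follows (the example automaton is ours, for illustration only):

```python
def dfa_accepts(delta, start, accepting, word):
    """Run a deterministic finite automaton on a word.
    delta maps (state, symbol) -> state."""
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

# Example regular language: binary strings with an even number of 1s.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
```

A classical fortress, per the abstract, would describe the same language by a propositional specification rather than by an explicit transition table.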
Abstract:
Aim: The landscape metaphor allows viewing corrective experiences (CEs) as pathways to a state with relatively lower 'tension' (a local minimum). However, such local minima are not easily accessible; according to the landscape metaphor, they are obstructed by states with relatively high tension (local maxima) (Caspar & Berger, 2012). For example, an individual with spider phobia has to transiently tolerate high levels of tension during exposure therapy to reach the local minimum of habituation. To allow for more specific therapeutic guidelines and empirically testable hypotheses, we advance the landscape metaphor to a scientific model based on motivational processes. Specifically, we conceptualize CEs as available but unusual trajectories (i.e., pathways) through a motivational space. The dimensions of the motivational space are set up by basic motives such as the need for agency or attachment. Methods: Dynamic systems theory is used to model motivational states and trajectories using mathematical equations. Fortunately, these equations have easy-to-comprehend, intuitive visual representations similar to the landscape metaphor. Thus, trajectories that represent CEs are informative and action-guiding for both therapists and patients without knowledge of dynamic systems, while the mathematical underpinnings of the model allow researchers to deduce hypotheses for empirical testing. Results: First, the results of simulations of CEs during exposure therapy in anxiety disorders are presented and compared to empirical findings. Second, hypothetical CEs in an autonomy-attachment conflict are reported from a simulation study. Discussion: Preliminary clinical implications for the evocation of CEs are drawn after a critical discussion of the proposed model.
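As a toy illustration of the landscape idea (not the authors' actual model), a motivational state rolling downhill on a double-well 'tension landscape' can be simulated by Euler integration; crossing the local maximum requires a sustained external push, analogous to tolerating high tension during exposure:

```python
def simulate(x0, dV, steps=2000, dt=0.01, drive=0.0):
    """Euler integration of dx/dt = -dV(x) + drive: a state descending
    a 'tension landscape' V, with an optional external push."""
    x = x0
    for _ in range(steps):
        x += dt * (-dV(x) + drive)
    return x

# Double-well landscape V(x) = (x^2 - 1)^2: minima ('low tension') at
# x = -1 and x = +1, separated by a local maximum ('high tension') at x = 0.
dV = lambda x: 4.0 * x * (x**2 - 1.0)

stuck = simulate(-1.0, dV)               # no push: stays in the starting well
crossed = simulate(-1.0, dV, drive=2.0)  # sustained push: crosses the barrier
```

Without the drive term the trajectory never leaves the initial minimum; with it, the state ends up in the other well, which is the CE-as-unusual-trajectory picture in miniature.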
Abstract:
PURPOSE To develop a method for computing and visualizing pressure differences derived from time-resolved velocity-encoded three-dimensional phase-contrast magnetic resonance imaging (4D flow MRI) and to compare pressure difference maps of patients with unrepaired and repaired aortic coarctation to young healthy volunteers. METHODS 4D flow MRI data of four patients with aortic coarctation either before or after repair (mean age 17 years, age range 3-28, one female, three males) and four young healthy volunteers without history of cardiovascular disease (mean age 24 years, age range 20-27, one female, three males) was acquired using a 1.5-T clinical MR scanner. Image analysis was performed with in-house developed image processing software. Relative pressures were computed based on the Navier-Stokes equation. RESULTS A standardized method for intuitive visualization of pressure difference maps was developed and successfully applied to all included patients and volunteers. Young healthy volunteers exhibited smooth and regular distribution of relative pressures in the thoracic aorta at mid systole with very similar distribution in all analyzed volunteers. Patients demonstrated disturbed pressures compared to volunteers. Changes included a pressure drop at the aortic isthmus in all patients, increased relative pressures in the aortic arch in patients with residual narrowing after repair, and increased relative pressures in the descending aorta in a patient after patch aortoplasty. CONCLUSIONS Pressure difference maps derived from 4D flow MRI can depict alterations of spatial pressure distribution in patients with repaired and unrepaired aortic coarctation. The technique might allow identifying pathophysiological conditions underlying complications after aortic coarctation repair.
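The in-house software is not described in detail in the abstract. As a rough sketch of the underlying principle only, relative pressure along a 1-D centerline can be integrated from the inviscid Navier-Stokes momentum balance (the viscous term is neglected here, and the density value and discretization are our assumptions):

```python
import numpy as np

RHO = 1060.0  # approximate blood density in kg/m^3 (assumed value)

def relative_pressure_1d(v_now, v_prev, dx, dt):
    """Relative pressure along a 1-D centerline from the inviscid
    Navier-Stokes momentum balance dp/dx = -rho*(dv/dt + v*dv/dx).
    v_now, v_prev: velocity samples (m/s) at two consecutive time frames,
    spaced dx metres apart; returns pressure in mmHg, zero at the inlet."""
    v_now = np.asarray(v_now, dtype=float)
    dvdt = (v_now - np.asarray(v_prev, dtype=float)) / dt
    dvdx = np.gradient(v_now, dx)
    dpdx = -RHO * (dvdt + v_now * dvdx)
    # trapezoidal integration of the pressure gradient from the inlet
    p_pa = np.concatenate([[0.0], np.cumsum(0.5 * (dpdx[1:] + dpdx[:-1]) * dx)])
    return p_pa / 133.322  # Pa -> mmHg

# Steady flow accelerating from 1 to 3 m/s over 10 cm (a crude 'narrowing'):
v = [1.0 + 0.2 * i for i in range(11)]
p = relative_pressure_1d(v, v, dx=0.01, dt=0.02)  # Bernoulli predicts about -31.8 mmHg at the outlet
```

The real 4D flow computation solves this in three dimensions over the full velocity field, but the steady 1-D case reduces exactly to the Bernoulli pressure drop, which makes the sketch easy to sanity-check.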
Abstract:
The finite depth of field of a real camera can be used to estimate the depth structure of a scene. The distance of an object from the plane in focus determines the defocus blur size. The shape of the blur depends on the shape of the aperture. The blur shape can be designed by masking the main lens aperture. In fact, aperture shapes different from the standard circular aperture give improved accuracy of depth estimation from defocus blur. We introduce an intuitive criterion to design aperture patterns for depth from defocus. The criterion is independent of a specific depth estimation algorithm. We formulate our design criterion by imposing constraints directly in the data domain and optimize the amount of depth information carried by blurred images. Our criterion is a quadratic function of the aperture transmission values. As such, it can be numerically evaluated to estimate optimized aperture patterns quickly. The proposed mask optimization procedure is applicable to different depth estimation scenarios. We use it for depth estimation from two images with different focus settings, for depth estimation from two images with different aperture shapes as well as for depth estimation from a single coded aperture image. In this work we show masks obtained with this new evaluation criterion and test their depth discrimination capability using a state-of-the-art depth estimation algorithm.
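The abstract states only that the criterion is a quadratic function of the transmission values. Assuming a given criterion matrix Q (whose construction from the imaging model is the paper's contribution and is not reproduced here), candidate masks could be scored like this:

```python
import numpy as np
from itertools import product

def best_mask(Q, n):
    """Exhaustively score every n-cell binary transmission pattern x by the
    quadratic criterion x^T Q x and return the best pattern and its score.
    Exhaustive search is only feasible for small n; cheap numerical
    evaluation of each candidate is what the quadratic form buys."""
    best, best_score = None, -np.inf
    for bits in product([0.0, 1.0], repeat=n):
        x = np.array(bits)
        score = x @ Q @ x
        if score > best_score:
            best, best_score = x, score
    return best, best_score
```

For realistic mask resolutions one would replace the exhaustive loop with a combinatorial or relaxation-based optimizer, but the scoring step stays a single quadratic form per candidate.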
Abstract:
BACKGROUND/AIMS Several countries are working to adapt clinical trial regulations to align the approval process with the level of risk for trial participants. The optimal framework for categorizing clinical trials according to risk remains unclear, however. In January 2014, Switzerland became the first European country to adopt a risk-based categorization procedure. We assessed how accurately and consistently clinical trials are categorized using two different approaches: an approach using criteria set forth in the new law (concept) or an intuitive approach (ad hoc). METHODS This was a randomized controlled trial with a method-comparison study nested in each arm. We used clinical trial protocols approved by eight Swiss ethics committees between 2010 and 2011. Protocols were randomly assigned to be categorized into one of three risk categories using the concept or the ad hoc approach. Each protocol was independently categorized by the trial's sponsor, a group of experts, and the approving ethics committee. The primary outcome was the difference in categorization agreement between the expert group and sponsors across arms. Linear weighted kappa was used to quantify agreement, with the difference between kappas being the primary effect measure. RESULTS We included 142 of 231 protocols in the final analysis (concept = 78; ad hoc = 64). Raw agreement between the expert group and sponsors was 0.74 in the concept arm and 0.78 in the ad hoc arm. Chance-corrected agreement was higher in the ad hoc arm (kappa: 0.34; 95% confidence interval = 0.10-0.58) than in the concept arm (0.27; 0.06-0.50), but the difference was not significant (p = 0.67). LIMITATIONS The main limitation was the large number of protocols excluded from the analysis, mostly because they did not fit the clinical trial definition of the new law. CONCLUSION A structured risk categorization approach was not better than an ad hoc approach.
Laws introducing risk-based approaches should provide guidelines, examples and templates to ensure correct application.
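The agreement statistic used in the trial, linear weighted kappa, can be computed from first principles. A minimal sketch for k ordered risk categories follows (the toy data are ours, not the study's):

```python
import numpy as np

def linear_weighted_kappa(r1, r2, k=3):
    """Linear weighted kappa for two raters assigning one of k ordered
    categories (coded 0..k-1) to each protocol."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    obs = np.zeros((k, k))
    for a, b in zip(r1, r2):
        obs[a, b] += 1          # observed joint frequencies
    obs /= len(r1)
    # expected joint frequencies under independence (chance agreement)
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # linear weights: full credit on the diagonal, partial credit nearby
    w = 1 - np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
    po = (w * obs).sum()
    pe = (w * exp).sum()
    return (po - pe) / (1 - pe)
```

Linear weights penalize a one-category disagreement less than a two-category one, which suits ordered risk categories better than unweighted kappa.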
Abstract:
The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is among the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).

                               Class I            Class II          Class III               Class IV
Blood loss (ml)                Up to 750          750–1500          1500–2000               >2000
Blood loss (% blood volume)    Up to 15%          15–30%            30–40%                  >40%
Pulse rate (bpm)               <100               100–120           120–140                 >140
Systolic blood pressure        Normal             Normal            Decreased               Decreased
Pulse pressure                 Normal or ↑        Decreased         Decreased               Decreased
Respiratory rate               14–20              20–30             30–40                   >35
Urine output (ml/h)            >30                20–30             5–15                    Negligible
CNS/mental status              Slightly anxious   Mildly anxious    Anxious, confused       Confused, lethargic
Initial fluid replacement      Crystalloid        Crystalloid       Crystalloid and blood   Crystalloid and blood

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated according to the injuries in nearly 165,000 adult trauma patients, and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min−1 postulated by ATLS.
Moreover, deterioration of the different parameters does not necessarily run in parallel, as suggested by the ATLS shock classification [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II, and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint of the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion or of >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined. As in the retrospective studies, Lawton did not find a statistical difference in heart rate and blood pressure among the five groups either, although there was a tendency toward a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of "clinical estimation of blood loss" and the requirement of fluid substitution.
This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation of the shock classification. Nevertheless, it was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].

                                    Normal (no haemorrhage)   Class I (mild)                   Class II (moderate)              Class III (severe)                Class IV (moribund)
Vitals                              Normal                    Normal                           HR >100 with SBP >90 mmHg        SBP <90 mmHg                      SBP <90 mmHg or imminent arrest
Response to fluid bolus (1000 ml)   NA                        Yes, no further fluid required   Yes, no further fluid required   Requires repeated fluid boluses   Declining SBP despite fluid boluses
Estimated blood loss (ml)           None                      Up to 750                        750–1500                         1500–2000                         >2000

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiologic concept of the organism's response to fluid loss: decrease of cardiac output, increase of heart rate and decrease of pulse pressure occur first; hypotension and bradycardia occur only later. Increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit or the anaesthetized patient in the OR. We will therefore continue to teach this typical pattern but will continue to mention the exceptions and pitfalls at a second stage. The shock classification of ATLS is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension) as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to the neurogenic shock of acute tetraplegia or high paraplegia (relative bradycardia and hypotension).
Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS refers to the pitfalls in the signs of acute haemorrhage as well (advanced age, athletes, pregnancy, medications and pacemakers) and explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be applied literally in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss, together with clinical examination and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification tied to absolute values. In conclusion, the typical physiologic response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching. The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs; it also includes the pattern of injuries, the requirement of fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but must be interpreted in view of the clinical context. Conflict of interest: none declared. The author is a member of the Swiss national ATLS core faculty.
Abstract:
OBJECTIVES To assess the use of quality assessment tools among a cross-section of systematic reviews (SRs) and to further evaluate whether quality was used as a parameter in the decision to include primary studies within subsequent meta-analysis. STUDY DESIGN AND SETTING We searched PubMed for SRs (interventional, observational, and diagnostic) published in Core Clinical Journals between January 1 and March 31, 2014. RESULTS Three hundred nine SRs were identified. Quality assessment was undertaken in 222 (71.8%) with isolated use of the Cochrane risk of bias tool (26.1%, n = 58) and the Newcastle-Ottawa Scale (15.3%, n = 34) most common. A threshold level of primary study quality for subsequent meta-analysis was used in 12.9% (40 of 309) of reviews. Overall, fifty-four combinations of quality assessment tools were identified with a similar preponderance of tools used among observational and interventional reviews. Multiple tools were used in 11.7% (n = 36) of SRs overall. CONCLUSION We found that quality assessment tools were used in a majority of SRs; however, a threshold level of quality for meta-analysis was stipulated in just 12.9% (n = 40). This cross-sectional analysis provides further evidence of the need for more active or intuitive editorial processes to enhance the reporting of SRs.
Abstract:
Introduction: Clinical reasoning is essential to the practice of medicine. The theory of the development of medical expertise states that clinical reasoning starts from analytical processes, namely the storage of isolated facts and the logical application of the 'rules' of diagnosis. Learners then successively develop so-called semantic networks and illness scripts, which are finally used in an intuitive, non-analytic fashion [1], [2]. The script concordance test (SCT) is one example of an instrument for assessing clinical reasoning [3]. However, the aggregate scoring [3] of the SCT is recognized as problematic [4]: it leads to logical inconsistencies and is likely to reflect construct-irrelevant differences in examinees' response styles [4]. The expert panel judgments might also introduce an unintended error of measurement [4]. This PhD project will address the following research questions: 1. What would a format look like that assesses clinical reasoning (similarly to the SCT but) with multiple true-false questions or other formats with unambiguous correct answers, thereby addressing the above-mentioned pitfalls in the traditional scoring of the SCT? 2. How well does this format fulfill the Ottawa criteria for good assessment, with special regard to educational and catalytic effects [5]? Methods: 1. A first study will assess whether designing a new format that uses multiple true-false items to assess clinical reasoning, similar to the SCT format, is theoretically and practically sound. For this study, focus groups or interviews with assessment experts and students will be undertaken. 2. In a study using focus groups and psychometric data, Norcini and colleagues' Criteria for Good Assessment [5] will be determined for the new format in a real assessment. Furthermore, the scoring method for this new format will be optimized using real and simulated data.
Abstract:
Percentile shares provide an intuitive and easy-to-understand way of analyzing income or wealth distributions. A celebrated example is the top income shares reported in the works of Thomas Piketty and colleagues. Moreover, series of percentile shares, defined as differences between Lorenz ordinates, can be used to visualize whole distributions or changes in distributions. In this talk, I present a new command called pshare that computes and graphs percentile shares (or changes in percentile shares) from individual-level data. The command also provides confidence intervals and supports survey estimation.
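The definition of percentile shares as differences between Lorenz ordinates can be sketched in a few lines. This is an illustration of the definition only, not the pshare implementation (pshare is a Stata command):

```python
import numpy as np

def percentile_shares(income, cuts=(0, 25, 50, 75, 90, 99, 100)):
    """Share of total income going to each percentile group, computed as
    differences between Lorenz curve ordinates L(p) at the cut points."""
    y = np.sort(np.asarray(income, dtype=float))
    # Lorenz ordinates on the grid p = 0, 1/n, 2/n, ..., 1
    lorenz = np.concatenate([[0.0], np.cumsum(y) / y.sum()])
    p = np.linspace(0, 100, len(lorenz))
    ordinates = np.interp(cuts, p, lorenz)
    return np.diff(ordinates)
```

For a perfectly equal distribution every group's share equals its population share; for the incomes 1..100, the top 10% receive 955/5050, or about 18.9%, of the total.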
Abstract:
A problem with the practical application of Varian's Weak Axiom of Cost Minimization (WACM) is that an observed violation may be due to random variation in the output quantities produced by firms rather than to inefficiency on the part of the firm. In this paper, unlike in Varian (1985), the output rather than the input quantities are treated as random, and an alternative statistical test of the violation of WACM is proposed. We assume that there is no technical inefficiency and provide a test of the hypothesis that an observed violation of WACM is merely due to random variation in the output levels of the firms being compared. We suggest an intuitive approach for specifying a value of the variance of the noise term that is needed for the test. The paper includes an illustrative example utilizing a data set relating to a number of U.S. airlines.
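The deterministic WACM condition itself is easy to state in code; a sketch of the raw violation check follows (this is the check before any statistical treatment of output noise, which is the paper's contribution):

```python
import numpy as np

def wacm_violations(W, X, Y):
    """List pairs (i, j) violating the Weak Axiom of Cost Minimization:
    firm j produces at least as much of every output as firm i, yet firm i's
    own input bundle costs more at i's prices than j's bundle would.
    W, X: (n_firms, n_inputs) price and input matrices; Y: (n_firms, n_outputs)."""
    W, X, Y = map(np.asarray, (W, X, Y))
    viol = []
    for i in range(len(W)):
        for j in range(len(W)):
            if i != j and np.all(Y[j] >= Y[i]) and W[i] @ X[j] < W[i] @ X[i]:
                viol.append((i, j))
    return viol
```

The paper's point is that a pair flagged by this check may still be consistent with cost minimization once randomness in Y is taken into account.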
Abstract:
The purpose of this formative study was to determine and prioritize the HIV-prevention needs of Latino young men who have sex with men (YMSM) in Chihuahua (Mexico), Texas, and California, based on YMSM and service provider perceptions of the factors affecting the assimilation and implementation of HIV-preventive behaviors. These factors included: perceived social support, identification of the modes of HIV transmission, perceived risk of HIV, and perceived norms and attitudes of peers. The study, drawn from a secondary data set, was a convenience sample of providers (n=8) and clients (n=15). Participants completed face-to-face interviews and a survey instrument. Interviews were analyzed to identify common themes and congruence among client groups, and among clients and providers. Providers' understanding of theoretical constructs of interventions was also assessed. Survey data were analyzed to determine variable frequencies and their congruence with the qualitative analysis. The results revealed several differences and many commonalities in the assimilation of protective messages. Client and provider perceptions were congruent across all domains. Providers demonstrated intuitive command of theoretical concepts but inconsistently verbalized their application. Both clients and providers recognized that Latinos possessed high HIV-knowledge levels, despite inconsistent protective behaviors. Clients and providers consistently identified important reasons for inconsistent protective behaviors, such as: lack of access to targeted information and condoms, self-esteem, sexual identification, situational factors, decreased perceived HIV risk, and concerns about homophobia, stigma, and rejection. Other factors included: poverty, failure to reach disenfranchised populations, and lack of role models/positive parental figures.
The principal conclusion of the study was that further research is needed to understand the interrelationship between larger socioeconomic issues and consistent protective behaviors.
Abstract:
Public health departments play an important role in promoting and preserving the health of communities. The lack of a system to ensure their quality and accountability led to the development of a national voluntary accreditation program by the Public Health Accreditation Board (PHAB). The concept that accreditation will lead to quality improvement in public health, which will ultimately lead to healthy communities, seems intuitive but lacks a robust body of evidence. A critical review of the literature was conducted to explore whether accreditation can lead to quality improvement in public health. The articles were selected from publicly available databases using a specific set of criteria for inclusion, exclusion, and appraisal. To understand the relationship between accreditation and quality improvement, the potential strengths and limitations of the accreditation process were evaluated. Recommendations for best practices are suggested so that public health accreditation can yield maximum benefits. A logic model framework to help depict the impact of accreditation on various levels of public health outcomes is also discussed in this thesis. The literature review shows that existing accreditation programs in other industries offer limited but encouraging evidence that accreditation will improve quality and strengthen the delivery of public health services. While progress in introducing accreditation in public health can be informed by other accredited industries, the public health field has its own set of challenges. Providing incentives, creating financing strategies, and having strong leadership will allow greater access to accreditation by all public health departments. The suggested recommendations include that continuous evaluation, public participation, a systems approach, clear vision, and dynamic standards should become hallmarks of the accreditation process.
Understanding the link between accreditation, quality improvement, and health outcomes will influence the successful adoption and implementation of the public health accreditation program. This review of the literature suggests that accreditation is an important step in improving the quality of public health departments and, ultimately, the health of communities. However, accreditation should be considered within an integrated system of tools and approaches to improve public health practice. It is a means to an end, not an end unto itself.
Abstract:
A phase I clinical trial is mainly designed to determine the maximum tolerated dose (MTD) of a new drug. Optimization of phase I trial design is crucial to minimize the number of enrolled patients exposed to unsafe dose levels and to provide reliable information to the later phases of clinical trials. Although it has been criticized for its inefficient MTD estimation, the traditional 3+3 method remains dominant in practice due to its simplicity and conservative estimation. Many newer designs have been shown to generate more credible MTD estimates, such as the Continual Reassessment Method (CRM). Despite its accepted better performance, the CRM design is still not widely used in real trials. Several factors contribute to the difficulty of CRM adoption in practice. First, CRM is not widely accepted by regulatory agencies such as the FDA in terms of safety: it is considered less conservative and tends to expose more patients above the MTD level than the traditional design. Second, CRM is relatively complex and not intuitive for clinicians to fully understand. Third, the CRM method takes much more time and needs statistical experts and computer programs throughout the trial. The current situation is that clinicians still tend to follow the trial process they are comfortable with, and this is not likely to change in the near future. This motivated us to improve the accuracy of MTD selection while following the procedure of the traditional design to maintain simplicity. We found that in the 3+3 method, the dose transition and the MTD determination are relatively independent, so we proposed to separate the two stages. The dose transition rule remains the same as in the 3+3 method. After obtaining the toxicity information from the dose transition stage, we apply an isotonic transformation to ensure a monotonically increasing order before selecting the optimal MTD.
To compare the operating characteristics of the proposed isotonic method and the other designs, we carried out 10,000 simulated trials under different dose-setting scenarios, comparing the design characteristics of the isotonic modified method with the standard 3+3 method, CRM, the biased coin design (BC), and the k-in-a-row design (KIAW). The isotonic modified method improved the MTD estimation of the standard 3+3 in 39 out of 40 scenarios. The improvement was much greater when the target was 0.3 rather than 0.25. The modified design was also competitive when compared with the other selected methods. The CRM performed better in general but was not as stable as the isotonic method across the different dose settings. The results demonstrate that our proposed isotonic modified method is not only easily conducted, using the same procedure as the 3+3, but also outperforms the conventional 3+3 design. It can also be applied to determine the MTD for any given target toxicity level (TTL). These features make the isotonic modified method of practical value in phase I clinical trials.
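The second stage of the proposed design (isotonic MTD selection after 3+3-style dose transitions) can be sketched as follows; the pool-adjacent-violators step and the tie-breaking rule are our assumptions about the details, not the authors' exact specification:

```python
def pava(rates, weights):
    """Pool-adjacent-violators: least-squares monotone non-decreasing fit
    of per-dose toxicity rates, weighted by patient counts."""
    blocks = []  # each block: [mean, total weight, number of doses pooled]
    for r, w in zip(rates, weights):
        blocks.append([r, w, 1])
        # merge backwards while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            r2, w2, c2 = blocks.pop()
            r1, w1, c1 = blocks.pop()
            blocks.append([(r1 * w1 + r2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    fit = []
    for r, w, c in blocks:
        fit.extend([r] * c)
    return fit

def select_mtd(tox, n, target=0.25):
    """Isotonic MTD selection: tox[d] toxicities out of n[d] patients at
    dose d (n[d] > 0). Picks the dose whose monotone-fitted rate is closest
    to the target toxicity level, breaking ties toward the lower dose."""
    rates = [t / m for t, m in zip(tox, n)]
    fit = pava(rates, n)
    return min(range(len(fit)), key=lambda d: (abs(fit[d] - target), d))
```

The isotonic fit repairs non-monotone observed rates (e.g. a higher dose showing fewer toxicities by chance) before the target comparison, which is what lets the simple 3+3 transition data support a better MTD estimate.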