9 results for Critical Trends Assessment Program.
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
«Cultural mapping» has become a central keyword in the UNESCO strategy to protect world cultural and natural heritage. It can be described as a tool to increase awareness of cultural diversity. As Crawhall (2009) pointed out, cultural mapping was initially considered to represent the «landscapes in two or three dimensions from the perspectives of indigenous and local peoples». It thus renders intangible cultural heritage visible by establishing profiles of cultures and communities, including music traditions. Cultural mapping is used as a resource for purposes as broad as peace building, adaptation to climate change, sustainability management, and heritage debate and management, but it can also become highly useful in the analysis of conflict points. Music plays a significant role in each of these aspects. This year’s symposium invites participants to highlight, yet also to critically reassess, this topic from the following ethnomusicological perspectives:
- The method of cultural mapping in ethnomusicology: What approaches and research techniques have been used so far to establish musical maps in this context? What kinds of maps have been developed (and, for example, how far do these relate to indigenous mental maps that have only been transmitted orally)? How far do these modern approaches deviate from the earlier cultural-area approaches that were still evident in the work of Alan P. Merriam and in Alan Lomax’s Cantometrics? To what extent do the methods of cultural mapping and of ethnomusicological fieldwork differ, and how can they benefit from each other?
- Intangible cultural heritage and musical diversity: As the 2003 UNESCO Convention for the Safeguarding of the Intangible Cultural Heritage states in Article 12, each state signing the declaration «shall draw up, in a manner geared to its own situation, one or more inventories of the intangible cultural heritage present in its territory» and monitor these. This symposium calls for a critical reassessment of the hitherto established UNESCO intangible cultural heritage lists. The idea is to highlight the sensitive nature and the effects of the various heritage representations. «Heritage» is understood here as a selection from a selection – a small subset of history that relates to a given group of people in a particular place, at a specific time (Dann and Seaton 2001:26). This can include presentations of case studies, but also a critical re-analysis of the selection process: Who was included – or even excluded (and why)? Who were the decision makers? How can the role of ethnomusicology be described here? Where are the existing and possible conflict points (political, social, legal, etc.)? What kinds of solution strategies are available to us? How is the issue of diversity – so strongly emphasized in the UNESCO declarations – reflected in the approaches? How might diversity be represented in future approaches? How does the selection process affect musical canonization (and exclusion)? What is the role of archives in this process?
- Cultural landscape and music: As defined by the World Heritage Committee, cultural landscapes can be understood as a distinct geographical area representing the «combined work of nature and man» (http://whc.unesco.org/en/culturallandscape/). This sub-topic calls for a more detailed – and general – exploration of the exact relation between nature/landscape (and definitions thereof) and music/sound. How exactly is landscape interrelated with music, and how is each identified through the other? How is this interrelation being applied and exploited in a national and international context?
Abstract:
Swidden systems consisting of temporarily cultivated land and associated fallows often do not appear on land use maps or in statistical records. This is partly due to the fact that swidden is a diverse and dynamic land use system that is difficult to map and partly because of the practice of grouping land covers associated with swidden systems into land use or land cover categories that are not self-evidently linked to swiddening. Additionally, in many parts of Southeast Asia swidden systems have changed or are in the process of changing into other land use systems. This paper assesses the extent of swidden on the basis of regional and national sources for nine countries, and determines the pattern of changes of swidden on the basis of 151 cases culled from 67 articles. Findings include (1) a majority of the cases document swidden being replaced by other forms of agriculture or by other livelihood systems; (2) in cases where swiddening is still practiced, fallow lengths are usually, but not always, shorter; and (3) shortened fallow length does not necessarily indicate a trend away from swidden since it is observed that short fallow swidden is sometimes maintained along with other more intensive farming practices and not completely abandoned. The paper concludes that there is a surprising lack of conclusive data on the extent of swidden in Southeast Asia. In order to remedy this, methods are reviewed that may lead to more precise future assessments.
Abstract:
INTRODUCTION: Patients admitted to intensive care following surgery for faecal peritonitis present particular challenges in terms of clinical management and risk assessment. Collaborating surgical and intensive care teams need shared perspectives on prognosis. We aimed to determine the relationship between dynamic assessment of trends in selected variables and outcomes.
METHODS: We analysed trends in physiological and laboratory variables during the first week of intensive care unit (ICU) stay in 977 patients at 102 centres across 16 European countries. The primary outcome was 6-month mortality. Secondary endpoints were ICU, hospital and 28-day mortality. For each trend, Cox proportional hazards (PH) regression analyses, adjusted for age and sex, were performed for each endpoint.
RESULTS: Trends over the first 7 days of the ICU stay independently associated with 6-month mortality were worsening thrombocytopaenia (hazard ratio (HR) = 1.02; 95% confidence interval (CI), 1.01 to 1.03; P < 0.001) and worsening renal function (total daily urine output: HR = 1.02; 95% CI, 1.01 to 1.03; P < 0.001; Sequential Organ Failure Assessment (SOFA) renal subscore: HR = 0.87; 95% CI, 0.75 to 0.99; P = 0.047), maximum bilirubin level (HR = 0.99; 95% CI, 0.99 to 0.99; P = 0.02) and the Glasgow Coma Scale (GCS) SOFA subscore (HR = 0.81; 95% CI, 0.68 to 0.98; P = 0.028). Changes in renal function (total daily urine output and renal component of the SOFA score), the GCS component of the SOFA score, the total SOFA score and worsening thrombocytopaenia were also independently associated with the secondary outcomes (ICU, hospital and 28-day mortality). We detected the same pattern when we analysed trends on days 2, 3 and 5.
Dynamic trends in all other measured laboratory and physiological variables, and in radiological findings, changes in respiratory support, renal replacement therapy and inotrope and/or vasopressor requirements, were not retained as independently associated with outcome in the multivariate analysis.
CONCLUSIONS: Only deterioration in renal function, thrombocytopaenia and SOFA score over the first 2, 3, 5 and 7 days of the ICU stay were consistently associated with mortality at all endpoints. These findings may help to inform clinical decision-making in patients with this common cause of critical illness.
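The study's dynamic-trend idea – condensing each variable's first-week trajectory into a single number before relating it to outcome – can be sketched as a per-patient least-squares slope. The numbers below are hypothetical, not study data, and the study itself fed such trends into Cox PH models rather than comparing slopes directly:

```python
import numpy as np

def first_week_trend(values):
    """Least-squares slope of one variable over consecutive ICU days.

    A negative slope in platelet count signals worsening
    thrombocytopaenia; a negative slope in daily urine output
    signals deteriorating renal function.
    """
    days = np.arange(1, len(values) + 1)
    slope = np.polyfit(days, values, 1)[0]
    return slope

# Hypothetical platelet counts (x10^9/L) over ICU days 1-7
worsening = [220, 190, 160, 140, 120, 100, 85]
stable = [210, 205, 215, 208, 212, 209, 211]

print(first_week_trend(worsening))  # clearly negative slope
print(first_week_trend(stable))     # slope near zero
```

Summarising a trajectory as one covariate per variable is what lets a standard Cox regression, adjusted for age and sex, test each trend against each mortality endpoint.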
Abstract:
The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is among the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on the patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).

                              Class I           Class II        Class III            Class IV
Blood loss (ml)               Up to 750         750–1500        1500–2000            >2000
Blood loss (% blood volume)   Up to 15%         15–30%          30–40%               >40%
Pulse rate (bpm)              <100              100–120         120–140              >140
Systolic blood pressure       Normal            Normal          Decreased            Decreased
Pulse pressure                Normal or ↑       Decreased       Decreased            Decreased
Respiratory rate              14–20             20–30           30–40                >35
Urine output (ml/h)           >30               20–30           5–15                 Negligible
CNS/mental status             Slightly anxious  Mildly anxious  Anxious, confused    Confused, lethargic
Initial fluid replacement     Crystalloid       Crystalloid     Crystalloid + blood  Crystalloid + blood

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated according to the injuries in nearly 165,000 adult trauma patients, and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min−1 postulated by ATLS.
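Read purely as a lookup on estimated blood loss, the bands of Table 1 amount to a simple classifier. The sketch below is a toy illustration of those bands only – the criticism discussed here is precisely that real patients' vital signs do not track these cut-offs:

```python
def atls_shock_class(blood_loss_ml):
    """Map estimated blood loss (ml) to an ATLS shock class per Table 1.

    Toy illustration of the table's blood-loss bands; not a clinical
    tool, and the editorial argues these bands fail validation.
    """
    if blood_loss_ml <= 750:
        return "I"
    if blood_loss_ml <= 1500:
        return "II"
    if blood_loss_ml <= 2000:
        return "III"
    return "IV"

print(atls_shock_class(500), atls_shock_class(1750))  # I III
```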
Moreover, deterioration of the different parameters does not necessarily occur in parallel, as suggested by the ATLS shock classification [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II, and only 1% as Class III and Class IV respectively. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion or of >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined. As in the retrospective studies, Lawton did not find a statistical difference in heart rate and blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of “clinical estimation of blood loss” and the requirement of fluid substitution.
This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation of the shock classification. Nevertheless, it was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].

                                   Normal (No haemorrhage)  Class I (Mild)                  Class II (Moderate)             Class III (Severe)               Class IV (Moribund)
Vitals                             Normal                   Normal                          HR > 100 with SBP > 90 mmHg     SBP < 90 mmHg                    SBP < 90 mmHg or imminent arrest
Response to fluid bolus (1000 ml)  NA                       Yes, no further fluid required  Yes, no further fluid required  Requires repeated fluid boluses  Declining SBP despite fluid boluses
Estimated blood loss (ml)          None                     Up to 750                       750–1500                        1500–2000                        >2000

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiologic concept of the organism's response to fluid loss: decreased cardiac output, increased heart rate and decreased pulse pressure occur first; hypotension and bradycardia occur only later. Increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit and the anesthetized patient in the OR. We will therefore continue to teach this typical pattern, but will also continue to mention the exceptions and pitfalls at a second stage. The shock classification of ATLS is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension) as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to neurogenic shock in acute tetraplegia or high paraplegia (relative bradycardia and hypotension).
Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the “typical” pictures of our textbooks [1]. ATLS refers to the pitfalls in the signs of acute haemorrhage as well – advanced age, athletes, pregnancy, medications and pacemakers – and explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be literally applied in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss, together with clinical examination and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification related to absolute values. In conclusion, the typical physiologic response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching. The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs, but includes the pattern of injuries, the requirement of fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but must be interpreted in view of the clinical context.
Conflict of interest: None declared. Member of the Swiss national ATLS core faculty.
Abstract:
Introduction: Fan violence is a frequent occurrence in Swiss football (Bundesamt für Polizei, 2015), leading to high costs for prevention and control (Mensch & Maurer, 2014). Various theories put forward an explanation of fan violence, such as the Elaborated Social Identity Model (Drury & Reicher, 2000) and the Aggravation Mitigation Model (Hylander & Guvå, 2010). Important observations from these theories are the multi-dimensional understanding of fan violence and the dynamics occurring in the fan group. Nevertheless, none of them deals with critical incidents (CIs), which involve a tense atmosphere combined with a higher risk of fan violence. Schumacher Dimech, Brechbühl and Seiler (2015) tackled this gap in research and explored CIs; 43 defining criteria were identified and compiled into an integrated model of CIs. The defining criteria were categorised into four higher-order themes: “antecedents” (e.g. a documented history of fan rivalry), “triggers” (e.g. the arrest of a fan), “reactions” (e.g. fans masking themselves) and “consequences” (e.g. fans avoiding communication with fan social workers). Methods: An inventory based on this model is being developed, including these 43 criteria. In an exploratory phase, this inventory was presented as an online questionnaire and was completed by 143 individuals. Three main questions are examined: Firstly, the individual items are tested using descriptive analyses. An item analysis is conducted to test reliability, item difficulty and discriminatory power. Secondly, the model’s four higher-order themes are tested using exploratory factor analysis (EFA). Thirdly, differences between sub-groups are explored, such as gender and age-related differences. Results: Respondents rated the items’ importance as high and the quota of incomplete responses was not systematic. Two items were removed from the inventory because of a low mean or a high rate of “don’t know” responses.
EFA produced a six-factor solution grouping items into match-related factors, repressive measures, fans’ delinquent behaviour, intra-group behaviour, communication and control and inter-group factors. The item “fans consume alcohol” could not be ordered into any category but was retained since literature accentuates this factor’s influence on fan violence. Analyses examining possible differences between groups are underway. Discussion: Results exploring the adequacy of this inventory assessing defining criteria of CIs in football are promising and thus further evaluative investigation is recommended. This inventory can be used in two ways: as a standardised instrument of assessment for experts evaluating specific CIs and as an instrument for exploring differences in perception and assessment of a CI e.g. gender and age differences, differences between interest groups and stakeholders.
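The EFA step described above can be sketched with a principal-axis extraction and the Kaiser criterion for choosing the number of factors. The data below are synthetic stand-ins for the 143 questionnaire responses (two latent factors, six items, names purely illustrative); the study's actual six-factor solution came from a full EFA of the real inventory items:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic responses: 200 raters, 6 items driven by two latent factors
# (loosely, "match-related factors" and "repressive measures" above).
n = 200
f1, f2 = rng.standard_normal(n), rng.standard_normal(n)
items = np.column_stack(
    [f1 + 0.3 * rng.standard_normal(n) for _ in range(3)]
    + [f2 + 0.3 * rng.standard_normal(n) for _ in range(3)]
)

# Principal-axis step of an EFA: eigendecomposition of the item
# correlation matrix; Kaiser criterion keeps factors with eigenvalue > 1.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # descending order
n_factors = int(np.sum(eigenvalues > 1.0))
print(n_factors)  # recovers the two latent factors
```

A full EFA would additionally rotate the retained loadings (e.g. varimax) before interpreting factors such as the six reported here.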
References:
Bundesamt für Polizei. (2015). Jahresbericht 2014. Kriminalitätsbekämpfung Bund. Lage, Massnahmen und Mittel [Electronic version].
Drury, J., & Reicher, S. (2000). Collective action and psychological change: The emergence of new social identities. British Journal of Social Psychology, 39, 579-604.
Hylander, I., & Guvå, G. (2010). Misunderstanding of out-group behaviour: Different interpretations of the same crowd events among police officers and demonstrators. Nordic Psychology, 62, 25-47.
Schumacher Dimech, A., Brechbühl, A., & Seiler, R. (2016). Dynamics of critical incidents with potentially violent outcomes involving ultra fans: An explorative study. Sport in Society. Advance online publication. doi: 10.1080/17430437.2015.1133597