35 results for Expanded critical incident approach
Abstract:
The two major subtypes of diffuse large B-cell lymphoma (DLBCL), germinal centre B-cell-like (GCB-DLBCL) and activated B-cell-like (ABC-DLBCL), are defined by means of gene expression profiling (GEP). Patients with GCB-DLBCL survive longer under the current standard regimen R-CHOP than patients with ABC-DLBCL. As GEP is not part of the current routine diagnostic work-up, efforts have been made to find a substitute that involves immunohistochemistry (IHC). Various algorithms achieved this with 80-90% accuracy. However, conflicting results on the appropriateness of IHC have been reported. Because it is likely that the molecular subtypes will play a role in future clinical practice, we assessed the determination of the molecular DLBCL subtypes by means of IHC at our University Hospital, as well as some aspects of this determination elsewhere in Switzerland. The most frequently used Hans algorithm relies on three antibodies (against CD10, bcl-6 and MUM1). From records of the routine diagnostic work-up, we identified 51 of 172 (29.7%) newly diagnosed and treated DLBCL cases from 2005 to 2010 with an assigned DLBCL subtype. DLBCL subtype information was expanded by means of tissue microarray analysis. The outcome for patients with the GCB subtype was significantly better than for those with the non-GC subtype, independent of the age-adjusted International Prognostic Index. We found a lack of standardisation in subtype determination by means of IHC in Switzerland and significant problems of reproducibility. We conclude that the Hans algorithm performs well in our hands and that awareness of this important matter is increasing. However, outside clinical trials, vigorous efforts to standardise IHC determination are needed as DLBCL subtype-specific therapies emerge.
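For orientation, the published Hans decision logic reduces to a short decision tree over the three markers. The sketch below assumes the markers have already been dichotomised into positive/negative calls (a ≥30% positivity cut-off is the commonly cited threshold, but that cut-off is an assumption here, not taken from the abstract, and interpretation must follow the local diagnostic protocol):

```python
def hans_subtype(cd10_pos: bool, bcl6_pos: bool, mum1_pos: bool) -> str:
    """Assign a DLBCL subtype from three IHC markers following the Hans decision tree.

    Illustrative sketch only: inputs are pre-dichotomised marker calls
    (commonly >=30% stained tumour cells counts as positive).
    """
    if cd10_pos:
        return "GCB"                      # CD10+ -> germinal centre B-cell-like
    if bcl6_pos and not mum1_pos:
        return "GCB"                      # CD10-/bcl-6+/MUM1- -> GCB
    return "non-GC"                       # CD10-/bcl-6- or CD10-/bcl-6+/MUM1+

print(hans_subtype(cd10_pos=False, bcl6_pos=True, mum1_pos=True))  # -> "non-GC"
```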
Abstract:
Vulvar intraepithelial neoplasia (VIN) is a rare chronic skin condition that may progress to an invasive carcinoma of the vulva. Major issues affecting women's health include the symptoms themselves, negative effects on sexuality, uncertainty about disease progression and changes in body image. Despite this, little is known about the lived experience of the illness trajectory. The aim of this study was therefore to describe the experiences of women with VIN during the illness trajectory. In a secondary data analysis of a preceding qualitative study, we analysed eight narrative interviews with women with VIN using thematic analysis in combination with critical hermeneutics. Central for these women during their course of illness was a sense of "Hope and Fear". This constitutive pattern reflects the fear of recurrence but also trust in healing. The eight narratives showed that women's experiences during the course of illness occurred in five phases: "there is something unknown"; "one knows what IT is"; "IT is treated and should heal"; "IT has effects on daily life"; "meanwhile it works". Women's experiences were particularly influenced by the feeling of "embarrassment" and by "dealing with professionals". Current care seems to lack adequate support for women with VIN in managing these phases. Based on our study and the international literature, we suggest that new models of counselling and information provision need to be developed and evaluated.
Abstract:
Definitions of shock and resuscitation endpoints traditionally focus on blood pressure and cardiac output. This carries a high risk of overemphasizing systemic hemodynamics at the cost of tissue perfusion. In line with novel shock definitions and evidence of the lack of a correlation between macro- and microcirculation in shock, we recommend that macrocirculatory resuscitation endpoints, particularly arterial and central venous pressure as well as cardiac output, be reconsidered. In this viewpoint article, we propose a three-step approach to resuscitation endpoints in shock of all origins. This approach first targets only a minimum individual and context-sensitive mean arterial blood pressure (for example, 45 to 50 mm Hg) to preserve heart and brain perfusion. Further resuscitation is then guided exclusively by endpoints of tissue perfusion, irrespective of the presence of arterial hypotension ('permissive hypotension'). Finally, optimization of individual tissue (for example, renal) perfusion is targeted. Prospective clinical studies are necessary to confirm the postulated benefits of targeting these resuscitation endpoints.
Abstract:
Purpose – A growing body of literature points to the importance of public service motivation (PSM) for the performance of public organizations. The purpose of this paper is to assess the method predominantly used for studying this linkage by comparing the findings it yields without and with a correction suggested by Brewer (2006), which removes the common-method bias arising from employee-specific response tendencies. Design/methodology/approach – First, the authors conduct a systematic review of published empirical research on the effects of PSM on performance and show that all studies found have been conducted at the individual level. Performance indicators in all but three studies were obtained by surveying the same employees who were also asked about their PSM. Second, the authors conduct an empirical analysis. Using survey data from 240 organizational units within the Swiss federal government, the paper compares results from an individual-level analysis (comparable to existing research) to two analyses where the data are aggregated to the organizational level, one without and one with the correction for common-method bias suggested by Brewer (2006). Findings – Looking at the Attraction to Policy-Making dimension of PSM, there is an interesting contrast: While this variable is positively correlated with performance in both the individual-level analysis and the aggregated data analysis without the correction for common-method bias, it is not statistically associated with performance in the aggregated data analysis with the correction. Originality/value – The analysis is the first to assess the robustness of the performance-PSM linkage to a correction for common-method bias. The findings place the validity of at least one part of the individual-level linkage between PSM and performance into question.
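To make the contrast between the three analyses concrete, the sketch below illustrates the general idea of aggregating to the unit level and of removing same-source bias by drawing the two constructs from different respondents within each unit. The column names, the random split rule and the synthetic data are assumptions for illustration only; this is not a reconstruction of Brewer's (2006) exact procedure or of the paper's dataset:

```python
# Illustrative sketch: individual-level vs unit-level PSM-performance correlations,
# with and without a split-source correction for common-method (same-source) bias.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic survey: 240 units, 10 respondents each, self-rated PSM and performance.
n_units, n_per_unit = 240, 10
df = pd.DataFrame({
    "unit": np.repeat(np.arange(n_units), n_per_unit),
    "psm": rng.normal(size=n_units * n_per_unit),
})
df["performance"] = 0.2 * df["psm"] + rng.normal(size=len(df))  # same-source rating

# (1) Individual level: PSM and performance come from the same respondent.
r_individual = df["psm"].corr(df["performance"])

# (2) Aggregated, uncorrected: unit means of both variables (same raters for both).
unit_means = df.groupby("unit")[["psm", "performance"]].mean()
r_aggregated = unit_means["psm"].corr(unit_means["performance"])

# (3) Aggregated, corrected: PSM from one half of each unit's respondents,
#     performance from the other half, so no respondent rates both constructs.
df["half"] = df.groupby("unit").cumcount() % 2
psm_half = df[df["half"] == 0].groupby("unit")["psm"].mean()
perf_half = df[df["half"] == 1].groupby("unit")["performance"].mean()
r_corrected = psm_half.corr(perf_half)

print(r_individual, r_aggregated, r_corrected)
```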
Abstract:
The discussion on the New Philology triggered by French and North American scholars in the last decade of the 20th century emphasized the material character of textual transmission inside and outside the written evidence of medieval manuscripts, downgrading the active role of the historical author. However, the reception of the ideas propagated by the New Philology adherents was rather divided. Some researchers questioned its innovative status (K. Stackmann: “Neue Philologie?”), others saw a new era of the “powers of philology” evoked (H.-U. Gumbrecht). Besides the debates on the New Philology, another concept of textual materiality gained strength in the last decade, maintaining that textual alterations resemble biogenetic mutations in some respects. As a matter of fact, phenomena such as genetic and textual variation, gene recombination and ‘contamination’ (the mixing of different exemplars in one manuscript text) share common features. The paper discusses to what extent biogenetic concepts can be used for evaluating manifestations of textual production (as the approach of ‘critique génétique’ does) and of textual transmission (as the phylogenetic analysis of manuscript variation does). In this context, the genealogical concept of stemmatology – the treelike representation of textual development abhorred by the New Philology adepts – might yet prove useful for describing the history of texts. The textual material to be analyzed is drawn from the Parzival Project, which is currently preparing a new electronic edition of Wolfram von Eschenbach’s Parzival novel, written shortly after 1200 and transmitted in numerous manuscripts up to the age of printing (www.parzival.unibe.ch). Research within the project suggests that advanced knowledge of the manuscript transmission yields a more precise idea of the author’s own writing process.
Abstract:
The comparison of radiotherapy techniques with regard to secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of the different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques, including conventional open tangents, 3D conformal wedged tangents and hybrid intensity-modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and for all solid organs. Then, different general non-linear risk models were applied to the whole-body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), highlighting the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment than the absolute risk. We found that this risk ratio could also vary substantially depending on the approach to risk estimation. In some cases the ratio between two techniques spanned values both smaller and larger than one, translating into inconsistent conclusions about which technique carries the higher risk. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared with the other techniques investigated, even though the magnitude of this reduction varied substantially across the different approaches. Based on the epidemiological data available, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (lungs and contralateral breast in the case of breast radiotherapy), as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
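As a point of reference, the simplest of the approaches mentioned above, an organ-specific linear model applied to mean organ doses, amounts to a dose-weighted sum of nominal risk coefficients, and the risk ratio between two techniques is then just the quotient of two such sums. The sketch below uses placeholder coefficients and doses, not ICRP or BEIR VII values:

```python
# Illustrative sketch of an organ-specific *linear* risk estimate: excess risk is
# approximated as the sum over organs of mean organ dose times a nominal risk
# coefficient. All numbers below are placeholders, not published values.
RISK_COEFF_PER_GY = {"lung": 0.005, "contralateral_breast": 0.004, "thyroid": 0.001}

def linear_risk(mean_organ_dose_gy: dict) -> float:
    """Total excess risk (as a fraction) from mean organ doses under a linear model."""
    return sum(RISK_COEFF_PER_GY[organ] * dose for organ, dose in mean_organ_dose_gy.items())

# Risk ratio between two techniques, the comparative measure discussed in the text.
risk_open_tangents = linear_risk({"lung": 1.2, "contralateral_breast": 0.8, "thyroid": 0.05})
risk_hybrid_imrt   = linear_risk({"lung": 0.9, "contralateral_breast": 0.6, "thyroid": 0.10})
print(risk_hybrid_imrt / risk_open_tangents)
```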
Abstract:
Patients with complaints and symptoms caused by spinal degenerative diseases show a high rate of spontaneous improvement. Except for severe neurological symptoms such as high-grade motor deficits, medically intractable pain and vegetative symptoms (cauda equina syndrome), surgery requires 1) symptoms, 2) a mechanical cause visible on imaging that sufficiently explains the symptoms, 3) a completed conservative treatment protocol, 4) performed over a 6-12 week period. According to the evidence in the literature, patients with lumbar disk herniation benefit significantly from surgery through faster relief of pain and return to social and professional activity; however, the results converge after a period of 1-2 years. Surgery for lumbar spinal stenosis is considered a gold standard and superior to conservative care when symptoms are severe and leg pain is present. Bilateral microsurgical decompression, using either a bilateral approach or a unilateral approach with over-the-top decompression of the contralateral nerve root, is superior to laminectomy as the decompression procedure. Lumbar fusion is only indicated in patients with spinal stenosis when a major or mobile spondylolisthesis is diagnosed. There is no indication for prophylactic surgery to avoid a "dangerous" deficit that might develop in the future.
Abstract:
Purpose – There is much scientific interest in the connection between the emergence of gender-based inequalities and key biographical transition points of couples in long-term relationships. Little empirical research is available that compares the evolution of a couple’s respective professional careers over space and time. The purpose of this paper is to contribute to filling this gap by addressing the following questions: what are the critical biographical moments at which gender (in)equalities within a relationship begin to arise and consolidate? Which biographical decisions precede and follow such critical moments? How does decision making at critical moments affect the opportunities of both partners to gain equal access to paid employment? Design/methodology/approach – These questions are addressed from the perspectives of intersectionality and economic citizenship. Biographical interviewing is used to collect the personal and professional narratives of Swiss, bi-national and migrant couples. The case study of a Swiss-Norwegian couple illustrates typical processes by which many skilled migrant women end up without employment or in precarious employment. Findings – The analysis reveals that the Scandinavian woman’s migration to Switzerland is a primary and critical moment for emerging inequality, which is then reinforced by relocation (to a small town characterized by conservative gender values) and the subsequent births of their children. It is concluded that traditional gender roles, ethnicity and age intersect to create a hierarchical situation which affords the male Swiss partner more weight in terms of decision making and career advancement. Practical implications – The paper’s findings are highly relevant to the formulation of policies on gender inequalities and the implementation of preventive programmes in this context. Originality/value – Little empirical research compares the evolution of a couple’s respective professional careers over space and time. The originality of this paper lies in filling this research gap; in including migration as a critical moment for gender inequalities; in using an intersectional and geographical perspective that has been given scant attention in the literature; in using the original concept of economic citizenship; and in examining the case of a bi-national couple, which has so far not been examined in the literature on couple relationships.
Abstract:
The majority of people who sustain hip fractures after a fall to the side would not have been identified using current screening techniques such as areal bone mineral density. Identifying them, however, is essential so that appropriate pharmacological or lifestyle interventions can be implemented. A protocol, demonstrated on a single specimen, is introduced, comprising the following components: in vitro biofidelic drop tower testing of a proximal femur; high-speed image analysis through digital image correlation; detailed accounting of the energy present during the drop tower test; organ-level finite element simulations of the drop tower test; and micro-level finite element simulations of critical volumes of interest in the trabecular bone. Fracture in the femoral specimen initiated in the superior part of the neck. The measured fracture load was 3760 N, compared with 4871 N predicted by the finite element analysis. Digital image correlation showed compressive surface strains as high as 7.1% prior to fracture. Voxel-level results were consistent with the high-speed video data and helped identify hidden local structural weaknesses. Using the drop tower test protocol, we found that a femoral neck fracture can be created with a fall velocity and energy representative of a sideways fall from standing. Additionally, we found that the nested explicit finite element method used allowed us to identify local structural weaknesses associated with femur fracture initiation.
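To give a sense of the loading condition being reproduced, the fall velocity and energy of a sideways fall from standing can be estimated with simple kinematics. The effective mass and hip height below are illustrative assumptions, not values from the study:

```python
import math

# Back-of-the-envelope estimate of sideways-fall impact conditions.
hip_height_m = 0.9        # assumed height of the hip centre above the ground
effective_mass_kg = 35.0  # assumed fraction of body mass participating in the impact
g = 9.81                  # gravitational acceleration, m/s^2

impact_velocity = math.sqrt(2 * g * hip_height_m)             # free-fall velocity, ~4.2 m/s
impact_energy = 0.5 * effective_mass_kg * impact_velocity**2  # kinetic energy, ~310 J

print(f"velocity ≈ {impact_velocity:.1f} m/s, energy ≈ {impact_energy:.0f} J")
```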
Abstract:
The phase assemblages and compositions in a K-bearing lherzolite + H2O system are determined between 4 and 6 GPa and 850–1200 °C, and the melting reactions occurring at subarc depth in subduction zones are constrained. Experiments were performed on a rocking multi-anvil apparatus. The experiments had around 16 wt% water content, and hydrous melt or aqueous fluid was segregated and trapped in a diamond aggregate layer. The compositions of the aqueous fluid and hydrous melt phases were measured using the cryogenic LA-ICP-MS technique. The residual lherzolite consists of olivine, orthopyroxene, clinopyroxene, and garnet, while diamond (C) is assumed to be inert. Hydrous and alkali-rich minerals were absent from the run products due to preferred dissolution of K2O (and Na2O) to the aqueous fluid/hydrous melt phases. The role of phlogopite in melting relations is, thus, controlled by the water content in the system: at the water content of around 16 wt% used here, phlogopite is unstable and thus does not participate in melting reactions. The water-saturated solidus, i.e., the first appearance of hydrous melt in the K–lherzolite composition, is located between 900 and 1000 °C at 4 GPa and between 1000 and 1100 °C at 5 and 6 GPa. Compositional jumps between hydrous melt and aqueous fluid at the solidus include a significant increase in the total dissolved solids load. All melts/fluids are peralkaline and calcium-rich. The melting reactions at the solidus are peritectic, as olivine, clinopyroxene, garnet, and H2O are consumed to generate hydrous melt plus orthopyroxene. Our fluid/melt compositional data demonstrate that the water-saturated hybrid peridotite solidus lies above 1000 °C at depths greater than 150 km and that the second critical endpoint is not reached at 6 GPa for a K2O–Na2O–CaO–FeO–MgO–Al2O3–SiO2–H2O–Cr2O3(–TiO2) peridotite composition.
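Written as a reaction, the peritectic melting relation described above is:

```latex
\mathrm{olivine} + \mathrm{clinopyroxene} + \mathrm{garnet} + \mathrm{H_2O}
\;\longrightarrow\; \mathrm{hydrous\ melt} + \mathrm{orthopyroxene}
```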
Abstract:
The precise arraying of functional entities in morphologically well-defined shapes remains one of the key challenges in the processing of organic molecules1. Among various π-conjugated species, pyrene exhibits a set of unique properties that make it an attractive compound for use in materials science2. In this contribution we report on the properties of self-assembled structures prepared from amphiphilic pyrene trimers (Py3) consisting of phosphodiester-linked pyrenes. Depending on the geometry of the pyrene core substitution (1.6-, 1.8- or 2.7-type, see Scheme), the thermally controlled self-assembly allows the preparation of supramolecular architectures of different morphologies in a bottom-up approach: two-dimensional (2D) nanosheets3 are formed in the case of 1.6- and 2.7-substitution4, whereas one-dimensional (1D) fibers are built from 1.8-substituted isomers. The morphologies of the assemblies are established by AFM and TEM, and the results are further correlated with spectroscopic and scattering data. Two-dimensional assemblies consist of an inner layer of hydrophobic pyrenes sandwiched between a net of phosphates. Due to the repulsion of the negative charges, the 2D assemblies exist mostly as free-standing sheets. An internal alignment of pyrenes leads to strong exciton coupling with an unprecedented observation: the simultaneous development of J- and H-bands from two different electronic transitions. Despite the similarity in spectroscopic properties, the structural parameters of the 2D aggregates depend drastically on the preparation procedure. Under certain conditions, extra-large sheets (thickness of 2 nm, aspect ratio area/thickness ~10^7) are formed in aqueous solution4B. Finally, one-dimensional assemblies are formed as micrometer-long and nanometer-thick fibers. Both planar and linear structures are intriguing objects for the creation of conductive nanowires that may find applications in supramolecular electronics.
Abstract:
Recurrent wheezing or asthma is a common problem in children that has increased considerably in prevalence in the past few decades. The causes and underlying mechanisms are poorly understood, and it is thought that a number of distinct diseases causing similar symptoms are involved. Due to the lack of a biologically founded classification system, children are classified according to their observed disease-related features (symptoms, signs, measurements) into phenotypes. The objectives of this PhD project were a) to develop tools for analysing the phenotypic variation of a disease, and b) to examine the phenotypic variability of wheezing among children by applying these tools to existing epidemiological data. A combination of graphical methods (multivariate correspondence analysis) and statistical models (latent variable models) was used. In a first phase, a model for discrete variability (latent class model) was applied to data on symptoms and measurements from an epidemiological study to identify distinct phenotypes of wheezing. In a second phase, the modelling framework was expanded to include continuous variability (e.g. along a severity gradient) and combinations of discrete and continuous variability (factor models and factor mixture models). The third phase focused on validating the methods using simulation studies. The main body of this thesis consists of 5 articles (3 published, 1 submitted and 1 to be submitted), including applications, methodological contributions and a review. The main findings and contributions were: 1) The application of a latent class model to epidemiological data (symptoms and physiological measurements) yielded plausible phenotypes of wheezing with distinguishing characteristics that have previously been used as phenotype-defining characteristics. 2) A method was proposed for including responses to conditional questions (e.g. questions on severity or triggers of wheezing asked only of children with wheeze) in multivariate modelling. 3) A panel of clinicians was set up to agree on a plausible model for wheezing diseases; the model can be used to generate datasets for testing the modelling approach. 4) A critical review of methods for defining and validating phenotypes of wheeze in children was conducted. 5) The simulation studies showed that a parsimonious parameterisation of the models is required to identify the true underlying structure of the data. The developed approach can deal with some challenges of real-life cohort data, such as variables of mixed mode (continuous and categorical), missing data and conditional questions. If carefully applied, the approach can be used to identify whether the underlying phenotypic variation is discrete (classes), continuous (factors) or a combination of these. These methods could help improve the precision of research into causes and mechanisms and contribute to the development of a new classification of wheezing disorders in children and of other diseases that are difficult to classify.
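A minimal sketch of the general model-selection idea behind such phenotype discovery (fit mixtures with an increasing number of latent classes and choose the number that minimises the BIC) is given below. GaussianMixture on synthetic continuous data is used purely as a generic stand-in; the thesis's actual latent class and factor mixture models for categorical symptom data require dedicated implementations:

```python
# Illustrative sketch: choose the number of latent classes (phenotypes) by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "symptom/measurement" matrix: 500 children, 6 standardised features,
# generated from two overlapping groups to mimic two underlying phenotypes.
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(300, 6)),
    rng.normal(loc=1.5, scale=1.0, size=(200, 6)),
])

bics = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bics[k] = gm.bic(X)                      # lower BIC = better parsimony/fit trade-off

best_k = min(bics, key=bics.get)
labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(X)
print("chosen number of phenotypes:", best_k)
```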
Abstract:
HYPOTHESIS A multielectrode probe in combination with an optimized stimulation protocol could provide sufficient sensitivity and specificity to act as an effective safety mechanism for preservation of the facial nerve in case of an unsafe drill distance during image-guided cochlear implantation. BACKGROUND Minimally invasive cochlear implantation is enabled by image-guided and robotic-assisted drilling of an access tunnel to the middle ear cavity. The approach requires the drill to pass at distances below 1 mm from the facial nerve, and thus safety mechanisms for protecting this critical structure are required. Neuromonitoring is currently used to determine facial nerve proximity in mastoidectomy but lacks the sensitivity and specificity necessary to effectively distinguish the close distance ranges encountered in the minimally invasive approach, possibly because of current shunting of uninsulated stimulating drilling tools in the drill tunnel and because of nonoptimized stimulation parameters. To this end, we propose an advanced neuromonitoring approach using varying levels of stimulation parameters together with an integrated bipolar and monopolar stimulating probe. MATERIALS AND METHODS An in vivo study (sheep model) was conducted in which measurements at specifically planned and navigated lateral distances from the facial nerve were performed to determine whether specific sets of stimulation parameters in combination with the proposed neuromonitoring system could reliably detect an imminent collision with the facial nerve. For the accurate positioning of the neuromonitoring probe, a dedicated robotic system for image-guided cochlear implantation was used, and drilling accuracy was corrected on postoperative micro-computed tomographic images. RESULTS From 29 trajectories analyzed in five different subjects, a correlation between stimulus threshold and drill-to-facial nerve distance was found in trajectories colliding with the facial nerve (distance <0.1 mm). The shortest pulse duration that provided the highest linear correlation between stimulation intensity and drill-to-facial nerve distance was 250 μs. Only at low stimulus intensity values (≤0.3 mA) and with the bipolar configurations of the probe did the neuromonitoring system provide sufficient lateral specificity (>95%) at distances to the facial nerve below 0.5 mm. However, reducing the stimulus threshold to 0.3 mA or lower narrowed the facial nerve distance detection range to below 0.1 mm (>95% sensitivity). Subsequent histopathology follow-up of three representative cases in which the neuromonitoring system could reliably detect a collision with the facial nerve (distance <0.1 mm) revealed either mild or no damage to the nerve fascicles. CONCLUSION Our findings suggest that, although no general correlation between facial nerve distance and stimulation threshold existed, possibly because of variance in patient-specific anatomy, the correlations at very close distances to the facial nerve and the high levels of specificity would enable a binary warning system to be developed using the proposed probe at low stimulation currents.
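To illustrate how such a binary warning rule is evaluated, the sketch below flags a trajectory when a response is evoked at or below a chosen stimulation current and compares the flag against the ground-truth drill-to-nerve distance; the numbers are synthetic and purely illustrative, not measurements from the study:

```python
import numpy as np

# Synthetic per-trajectory data: lowest current (mA) that evoked a response, and the
# true drill-to-facial-nerve distance (mm) measured afterwards.
threshold_ma = np.array([0.2, 0.3, 0.5, 0.7, 0.3, 1.0, 0.2, 0.8])
distance_mm  = np.array([0.05, 0.08, 0.4, 0.6, 0.09, 1.2, 0.07, 0.9])

warn = threshold_ma <= 0.3        # warning rule: response evoked at <= 0.3 mA
collision = distance_mm < 0.1     # ground truth: drill closer than 0.1 mm to the nerve

sensitivity = (warn & collision).sum() / collision.sum()      # warnings among true collisions
specificity = (~warn & ~collision).sum() / (~collision).sum() # silence among safe trajectories
print(sensitivity, specificity)
```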
Abstract:
Xu and colleagues evaluated the impact of increasing mean arterial blood pressure levels through norepinephrine administration on systemic hemodynamics, tissue perfusion, and sublingual microcirculation of septic shock patients with chronic hypertension. The authors concluded that, although increasing arterial blood pressure improved sublingual microcirculation parameters, no concomitant improvement in systemic tissue perfusion indicators was found. Here, we discuss why resuscitation targets may need to be individualized, taking into account the patient's baseline condition, and present directions for future research in this field.