942 results for Glasgow Outcome Scale
Abstract:
We assessed the prevalence of vertebral artery (VA) stenosis or occlusion and its influence on outcome in patients with acute basilar artery occlusion (BAO). We studied 141 patients with acute BAO enrolled in the Basilar Artery International Cooperation Study (BASICS) registry in whom baseline CT angiography (CTA) of the intracranial VAs was available. In 72 patients, an additional CTA of the extracranial VAs was available. Adjusted risk ratios (aRRs) for death and poor outcome, defined as a modified Rankin Scale score ≥4, were calculated with Poisson regression in relation to VA occlusion, VA occlusion or stenosis ≥50%, and bilateral VA occlusion. Sixty-six of 141 (47%) patients had uni- or bilateral intracranial VA occlusion or stenosis ≥50%. Of the 72 patients with intra- and extracranial CTA, 46 (64%) had uni- or bilateral VA occlusion or stenosis ≥50% and 9 (12%) had bilateral VA occlusion. Overall, VA occlusion or stenosis ≥50% was not associated with the risk of poor outcome. Patients with intra- and extracranial CTA and bilateral VA occlusion had a higher risk of poor outcome than patients without bilateral VA occlusion (aRR 1.23; 95% CI 1.02-1.50). The risk of death did not depend on the presence of unilateral or bilateral VA occlusion or stenosis ≥50%. In conclusion, in patients with acute BAO, unilateral VA occlusion or stenosis ≥50% is frequent but not associated with an increased risk of poor outcome or death. Patients with BAO and bilateral VA occlusion have a slightly increased risk of poor outcome.
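The aRRs above come from Poisson regression applied to a binary outcome, i.e. the "modified Poisson" approach in which a robust (sandwich) variance corrects the standard errors. A minimal Python sketch, using synthetic data and illustrative variable names (none of this is the BASICS data):

```python
# A minimal sketch of estimating an adjusted risk ratio (aRR) for a
# binary outcome via modified Poisson regression. Data and variable
# names are illustrative assumptions, not study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 141
df = pd.DataFrame({
    "poor_outcome": rng.integers(0, 2, n),       # mRS >= 4 (0/1)
    "bilateral_va_occl": rng.integers(0, 2, n),  # exposure of interest
    "age": rng.normal(65, 12, n),                # example confounder
})

# Poisson regression with a robust (sandwich) variance yields risk
# ratios directly for a binary outcome.
fit = smf.glm("poor_outcome ~ bilateral_va_occl + age",
              data=df, family=sm.families.Poisson()).fit(cov_type="HC0")
arr = np.exp(fit.params["bilateral_va_occl"])
lo, hi = np.exp(fit.conf_int().loc["bilateral_va_occl"])
print(f"aRR = {arr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```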
Abstract:
Background: Recent research suggested that religious coping, based on dispositional religiousness and spirituality (R/S), is an important modulating factor in the process of dealing with adversity. In contrast to the United States, the effect of R/S on psychological adjustment to stress is a widely unexplored area in Europe. Methods: We examined a Swiss sample of 328 church attendees in the aftermath of stressful life events to explore associations of positive or negative religious coping with psychological outcome. Applying a cross-sectional design, we used Huber's Centrality Scale to specify religiousness and Pargament's measure of religious coping (RCOPE) for the assessment of positive and negative religious coping. Depressive symptoms and anxiety as outcome variables were examined with the Brief Symptom Inventory. The Stress-Related Growth Scale and the Marburg questionnaire for the assessment of well-being were used to assess positive outcome aspects. We conducted Mann-Whitney tests for group comparisons and cumulative logit analysis for the assessment of associations of religious coping with our outcome variables. Results: Both forms of religious coping were positively associated with stress-related growth (p < 0.01). However, negative religious coping additionally reduced well-being (p = 0.05, β = 0.52, 95% CI = 0.27–0.99) and increased anxiety (p = 0.02, β = 1.94, 95% CI = 1.10–3.39) and depressive symptoms (p = 0.01, β = 2.27, 95% CI = 1.27–4.06). Conclusions: The effects of religious coping on psychological adjustment to stressful life events seem relevant. These findings should be confirmed in prospective studies.
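For readers unfamiliar with the two methods named above, here is a minimal Python sketch of a Mann-Whitney group comparison and a cumulative logit (proportional odds) model. The data, variable names, and coding are illustrative assumptions, not the study's data:

```python
# A minimal sketch of the two analyses named in the abstract:
# Mann-Whitney group comparison and a cumulative logit model.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 328
df = pd.DataFrame({
    "neg_rcope": rng.integers(0, 2, n),  # negative religious coping (0/1)
    "anxiety": rng.integers(0, 5, n),    # ordinal outcome category
})

# Group comparison of anxiety between coping groups
u, p = mannwhitneyu(df.loc[df.neg_rcope == 1, "anxiety"],
                    df.loc[df.neg_rcope == 0, "anxiety"])

# Cumulative logit model: exponentiated slopes are odds ratios
anx = pd.Categorical(df["anxiety"], ordered=True)
res = OrderedModel(anx, df[["neg_rcope"]], distr="logit").fit(
    method="bfgs", disp=False)
print(f"Mann-Whitney p = {p:.3f}, OR = {np.exp(res.params['neg_rcope']):.2f}")
```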
Abstract:
BACKGROUND The aim of this study was to analyze the influence of the location of middle cerebral artery (MCA) occlusion on recanalization, complications and outcome after endovascular therapy. METHODS Four hundred sixty-four patients with acute MCA occlusions were treated with endovascular therapy. RESULTS Two hundred ninety-three patients had M1 occlusions, 116 had M2 occlusions, and 55 had M3/4 occlusions. Partial or complete recanalization was achieved more frequently in M1 (76.8%) than in M2 (59.1%) or M3/4 (47.3%, p < 0.001) occlusions, but favorable outcome (modified Rankin Scale 0-2) was less frequent in M1 (50.9%) than in M2 (63.7%) or M3/4 (72.7%, p = 0.018) occlusions. Symptomatic intracerebral hemorrhage (ICH) did not differ between occlusion sites, but asymptomatic ICH was more common in M1 (22.6%) than in M2 occlusions (8.6%, p = 0.003). Recanalization was associated with favorable outcome in M1 (p < 0.001) and proximal M2 (p = 0.003) but not in distal M2 or M3/4 occlusions. CONCLUSIONS Recanalization with endovascular therapy was achieved more frequently in patients with proximal than with distal MCA occlusions, but recanalization was associated with favorable outcome only in M1 and proximal M2 occlusions. Outcome was better with distal than with proximal occlusions. This study shows that recanalization can be used as a surrogate marker for clinical outcome only in patients with proximal occlusions.
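The site-wise comparison of recanalization rates can be illustrated with a chi-square test on the counts implied by the reported percentages (rounded here, so the result is approximate):

```python
# A minimal sketch of comparing recanalization rates across occlusion
# sites. Counts are back-calculated from the abstract's percentages
# and rounded, so treat them as approximate.
from scipy.stats import chi2_contingency

# rows: M1, M2, M3/4; columns: recanalized, not recanalized
table = [
    [225, 68],  # M1:   ~76.8% of 293
    [69, 47],   # M2:   ~59.1% of 116
    [26, 29],   # M3/4: ~47.3% of 55
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")
```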
Temporary Internal Fixation for Ligamentous and Osseous Lisfranc Injuries: Outcome and Technical Tip
Abstract:
BACKGROUND Open rather than closed reduction and internal fixation, as well as primary definitive arthrodesis, are well accepted for ligamentous and osseous Lisfranc injuries. For ligamentous injuries, a better outcome after primary definitive partial arthrodesis has been reported. METHODS Of 135 Lisfranc injuries treated from 1998 to 2012 with open reduction, temporary internal fixation by screws and plates, and restricted weight bearing in a lower leg cast for 3 months followed by an arch support for another 4 to 6 weeks, 29 ligamentous Lisfranc injuries were available for follow-up. They were compared with 29 osseous Lisfranc injuries matched for age and gender. RESULTS Between the groups, there were no significant differences in average age (39.9 vs 38 years) or in average follow-up time (8.3 vs 9.1 years). Also, no significant differences were seen in the AOFAS midfoot score (84 vs 85.3 points), the FFI pain scale (9.9 vs 14.9 points), the SF-36 physical component (56.2 vs 53.9 points), the SF-36 mental component (57 vs 56.4 points), or the VAS for pain (1.6 vs 1.5 points). The FFI function scale was significantly lower in the ligamentous group (11.6 vs 19.5 points). Radiographically, loss of reduction was recorded 3 times in the ligamentous injuries and 4 times in the osseous injuries. Arthritis was mild/moderate/severe in 5/3/0 ligamentous injuries and in 7/2/1 osseous injuries, requiring 1 definitive secondary Lisfranc arthrodesis in each group. CONCLUSION With a longer, conservative postoperative management, open reduction and temporary internal fixation led to equivalent medium-term outcomes in ligamentous and osseous Lisfranc injuries. No inferior outcome was found for ligamentous injuries. LEVEL OF EVIDENCE Level III, retrospective comparative cohort study.
Abstract:
BACKGROUND The presence of prodromal transient ischemic attacks (TIAs) has been associated with a favorable outcome in anterior circulation stroke. We aimed to determine the association between prodromal TIAs or minor stroke and outcomes at 1 month in the Basilar Artery International Cooperation Study, a registry of patients presenting with an acute symptomatic and radiologically confirmed basilar artery occlusion. METHODS A total of 619 patients were enrolled in the registry. Information on prodromal TIAs was available for 517 patients and on prodromal stroke for 487 patients. We calculated risk ratios and corresponding 95% confidence intervals (CIs) for poor clinical outcome (modified Rankin Scale score ≥4) according to the variables of interest. RESULTS Prodromal minor stroke was associated with poor outcome (crude risk ratio [cRR], 1.26; 95% CI, 1.12-1.42), but TIAs were not (cRR, 0.93; 95% CI, 0.79-1.09). These associations remained essentially the same after adjustment for confounding variables. CONCLUSIONS Prodromal minor stroke was associated with an unfavorable outcome in patients with basilar artery occlusion, whereas prodromal TIA was not.
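A crude risk ratio with its 95% CI can be computed directly from a 2x2 table with the Katz log method; the sketch below uses hypothetical counts, since the abstract reports only the ratios:

```python
# A minimal sketch of a crude risk ratio (cRR) with a Katz log-based
# 95% CI, the kind of estimate reported above. Counts are hypothetical.
import math

def crude_rr(a, n1, b, n0):
    """a/n1: events/total in exposed; b/n0: events/total in unexposed."""
    r1, r0 = a / n1, b / n0
    rr = r1 / r0
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)  # SE of log(RR)
    lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
    return rr, lo, hi

# e.g. prodromal minor stroke (exposed) vs none (unexposed)
rr, lo, hi = crude_rr(60, 80, 250, 407)
print(f"cRR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```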
Abstract:
The objectives of this study were to describe a new spinal cord injury scale for dogs, evaluate its repeatability by determining inter-rater variability of scores, compare these scores to another established system (a modified Frankel scale), and determine whether the modified Frankel scale and the newly developed scale were useful as prognostic indicators for return to ambulation. A group of client-owned dogs with spinal cord injury were examined by 2 independent observers who applied the new Texas Spinal Cord Injury Score (TSCIS) and a modified Frankel scale that has been used previously. The newly developed scale was designed to describe gait, postural reactions and nociception in each limb. Weighted kappa statistics were used to determine inter-rater variability for the modified Frankel scale and the individual components of the TSCIS. Comparisons were made between raters for the overall TSCIS score, and between scales, using Spearman's rho. An additional group of dogs with surgically treated thoracolumbar disk herniation was enrolled to examine the correlation of both scores with spinal cord signal characteristics on magnetic resonance imaging (MRI) and with ambulatory outcome at discharge. The actual agreement between raters for the modified Frankel scale was 88%, with a weighted kappa value of 0.93. The TSCIS had weighted kappa scores for the gait, proprioceptive positioning and nociception components that ranged from 0.72 to 0.94. Correlation between raters for the overall TSCIS score was Spearman's rho = 0.99 (P < 0.001). Comparison of the overall TSCIS score to the modified Frankel score resulted in a Spearman's rho value of 0.90 (P < 0.001). The modified Frankel score was weakly correlated with the ratio of spinal cord hyperintensity length to L2 vertebral body length on mid-sagittal T2-weighted MRI (Spearman's rho = -0.45, P = 0.042), as was the overall TSCIS score (Spearman's rho = -0.47, P = 0.037). There was also a significant difference in admitting modified Frankel scores (P = 0.029) and admitting overall TSCIS scores (P = 0.02) between dogs that were ambulatory at discharge and those that were not. Results from this study suggest that the TSCIS is an easy-to-administer scale for evaluating canine spinal cord injury based on the standard neurological exam, and that it correlates well with a previously described modified Frankel scale.
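Both agreement statistics used here are available in standard Python libraries; a minimal sketch with made-up rater scores (linear weighting is an assumption, since the abstract does not state the weighting scheme):

```python
# A minimal sketch of the agreement statistics named in the abstract:
# weighted kappa for inter-rater agreement and Spearman's rho for
# correlation between scales. Scores below are made up.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rater1 = [0, 1, 2, 2, 3, 4, 4, 5]  # e.g. TSCIS gait subscores
rater2 = [0, 1, 2, 3, 3, 4, 5, 5]

kappa_w = cohen_kappa_score(rater1, rater2, weights="linear")
rho, p = spearmanr(rater1, rater2)
print(f"weighted kappa = {kappa_w:.2f}, rho = {rho:.2f} (p = {p:.3f})")
```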
Abstract:
BACKGROUND: A lack of adaptive coping and enhanced maladaptive coping with stress and negative emotions are implicated in many psychopathological disorders. We describe the development of a new scale to investigate the relative contribution of different coping styles to psychopathology in a large population sample. We hypothesized that the magnitude of the supposed positive correlation between maladaptive coping and psychopathology would be stronger than the supposed negative correlation between adaptive coping and psychopathology. We also examined whether distinct coping style patterns emerge for different psychopathological syndromes. METHODS: A total of 2200 individuals from the general population participated in an online survey. The Patient Health Questionnaire-9 (PHQ-9), the Obsessive-Compulsive Inventory-Revised (OCI-R) and the Paranoia Checklist were administered along with a novel instrument called the Maladaptive and Adaptive Coping Styles (MAX) questionnaire. Participants were reassessed six months later. RESULTS: MAX consists of three dimensions representing adaptive coping, maladaptive coping and avoidance. Similar response patterns emerged across all psychopathological syndromes. Maladaptive coping was more strongly related to psychopathology than adaptive coping, both cross-sectionally and longitudinally. The overall number of coping styles adopted by an individual predicted greater psychopathology. Mediation analysis suggests that a mild positive relationship between adaptive and certain maladaptive styles (emotional suppression) partially accounts for the attenuated relationship between adaptive coping and depressive symptoms. LIMITATIONS: Results should be replicated in a clinical population. CONCLUSIONS: The results suggest that maladaptive and adaptive coping styles are not reciprocal. Reducing maladaptive coping seems to be more important for outcome than enhancing adaptive coping. The study supports transdiagnostic approaches advocating that maladaptive coping is a common factor across different psychopathologies.
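The mediation result can be illustrated with the classic regression decomposition (total effect vs. direct plus indirect effect). The sketch below uses synthetic data and simplified variable names; a real analysis would bootstrap the indirect effect:

```python
# A minimal sketch of regression-based mediation logic
# (adaptive coping -> emotional suppression -> depressive symptoms).
# Synthetic data throughout; not the study's data or exact method.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2200
adaptive = rng.normal(size=n)
suppression = 0.3 * adaptive + rng.normal(size=n)               # mediator
phq9 = -0.2 * adaptive + 0.4 * suppression + rng.normal(size=n)  # outcome
df = pd.DataFrame({"adaptive": adaptive,
                   "suppression": suppression,
                   "phq9": phq9})

a = smf.ols("suppression ~ adaptive", df).fit().params["adaptive"]
b = smf.ols("phq9 ~ adaptive + suppression", df).fit().params["suppression"]
total = smf.ols("phq9 ~ adaptive", df).fit().params["adaptive"]
print(f"indirect effect a*b = {a * b:.3f}, total effect = {total:.3f}")
```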
Abstract:
Background: The demand for international harmonization in medical education increases with the growing mobility of students and health professionals. Many medical societies and governmental offices have issued outcome frameworks (OF), which describe the aims and contents of medical education based on competencies. These national standards affect the development of curricula as well as assessment and licensing procedures. Comparing OF and identifying factors that limit their comparability may thus foster international harmonization of medical education. Summary of Work: We conducted a systematic search for national OF in MedLine, EmBase and the internet. We included all OF in German or English that resulted from a national consensus process and were published or endorsed by a national society or governmental body. We extracted information in five predetermined categories: history of origin, audience, formal structure, medical schooling system and key terms. Summary of Results: Out of 1816 results, 13 OF were included in further analyses. OF reference each other, often without addressing existing differences (e.g., in target audiences). The two most cited OF are "CanMEDS" and "Scottish Doctor". OF differ especially in their level of detail as well as in the underlying educational system. Discussion and Conclusions: Based on our results, we propose a two-step blueprint for OF that may help to establish comparability for internationally aligned key features – so-called "core competencies" – while at the same time allowing for necessary regional adaptations in terms of "secondary competencies". Take-home messages: Considerable differences in at least five categories of OF currently hinder the comparability of outcome frameworks.
Abstract:
PURPOSE Few studies have used multivariate models to quantify the effect of multiple previous spine surgeries on patient-oriented outcome after spine surgery. This study sought to quantify the effect of prior spine surgery on 12-month postoperative outcomes in patients undergoing surgery for different degenerative disorders of the lumbar spine. METHODS The study included 4940 patients with lumbar degenerative disease documented in the Spine Tango Registry of EUROSPINE, the Spine Society of Europe, from 2004 to 2015. Preoperatively and 12 months postoperatively, patients completed the multidimensional Core Outcome Measures Index (COMI; 0-10 scale). Patients' medical history and surgical details were recorded using the Spine Tango Surgery 2006 and 2011 forms. Multiple linear regression models were used to investigate the relationship between the number of previous surgeries and the 12-month postoperative COMI score, controlling for the baseline COMI score and other potential confounders. RESULTS In the adjusted model including all cases, the 12-month COMI score was 0.37 points worse (95% confidence interval [CI] 0.29-0.45; p < 0.001) for each additional prior spine surgery. In the subgroup of patients with lumbar disc herniation, the corresponding effect was 0.52 points (95% CI 0.27-0.77; p < 0.001), and in lumbar degenerative spondylolisthesis, 0.40 points (95% CI 0.17-0.64; p = 0.001). CONCLUSIONS We were able to demonstrate a clear "dose-response" effect for previous surgery: the greater the number of prior spine surgeries, the systematically worse the outcome at 12 months' follow-up. The results of this study can be used when considering or consenting a patient for further surgery, to better inform the patient of the likely outcome and to set realistic expectations.
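A minimal sketch of the kind of adjusted linear model described in the METHODS, with synthetic data and an illustrative confounder (the registry model adjusts for more covariates):

```python
# A minimal sketch of regressing the 12-month COMI score on the number
# of prior surgeries, controlling for the baseline score. Variable
# names and data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 4940
df = pd.DataFrame({
    "comi_baseline": rng.uniform(2, 10, n),
    "n_prior_surgeries": rng.integers(0, 4, n),
    "age": rng.normal(60, 12, n),
})
# Synthetic outcome built to mimic a dose-response effect
df["comi_12m"] = (0.5 * df.comi_baseline + 0.37 * df.n_prior_surgeries
                  + rng.normal(0, 2, n))

fit = smf.ols("comi_12m ~ comi_baseline + n_prior_surgeries + age", df).fit()
coef = fit.params["n_prior_surgeries"]
lo, hi = fit.conf_int().loc["n_prior_surgeries"]
print(f"{coef:.2f} points worse per prior surgery (95% CI {lo:.2f}-{hi:.2f})")
```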
Abstract:
Trauma and severe head injuries are important issues because they are prevalent, because they occur predominantly in the young, and because variations in clinical management may matter. Trauma is the leading cause of death for those under age 40. The focus of this head injury study is to determine whether variations in time from the scene of the accident to a trauma center hospital make a difference in patient outcomes. A trauma registry is maintained in the Houston-Galveston area and includes all patients admitted to any one of three trauma center hospitals with mild or severe head injuries. A study cohort, derived from the Registry, includes 254 severe head injury cases from 1980 with a Glasgow Coma Score of 8 or less. Multiple influences relate to patient outcomes from severe head injury. Two primary variables and four confounding variables are identified: time to emergency room, time to intubation, patient age, severity of injury, type of injury, and mode of transport to the emergency room. Regression analysis, analysis of variance, and chi-square analysis were the principal statistical methods utilized. Analysis indicates that within an urban setting, with a four-hour time span, variations in time to emergency room do not provide any strong influence or predictive value for patient outcome. However, the data suggest that longer time periods have a negative influence on outcomes. Age is influential only when the older group (55-64) is included. Mode of transport (helicopter or ambulance) did not indicate any significant difference in outcome. In a multivariate regression model, outcomes are influenced primarily by severity of injury and age, which together explain 36% (R²) of the variance. Inclusion of time to emergency room, time to intubation, transport mode, and type of injury adds only 4% (R²) to the explained variation in patient outcome. The research concludes that since the group most at risk of head trauma is the young adult male involved in automobile/motorcycle accidents, more may be gained by modifying driving habits and other preventive measures. Continuous clinical and evaluative research is required to provide updated clinical wisdom in patient management and trauma treatment protocols. A National Institute of Trauma may be required to develop a national public policy and evaluate the many medical, behavioral and social changes required to cope with the country's number 3 killer and the primary killer of young adults.
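The variance-decomposition argument (36% explained by severity and age, plus 4% from the timing and transport variables) corresponds to comparing the R² of nested regression models, as sketched below with synthetic data:

```python
# A minimal sketch of comparing R-squared between nested models: a base
# model (severity + age) versus one adding a timing variable. Synthetic
# data and simplified variables; not the dissertation's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 254
df = pd.DataFrame({
    "severity": rng.normal(0, 1, n),       # e.g. derived from GCS
    "age": rng.normal(35, 12, n),
    "time_to_er": rng.uniform(0, 240, n),  # minutes
})
df["outcome"] = 0.6 * df.severity + 0.01 * df.age + rng.normal(0, 1, n)

base = smf.ols("outcome ~ severity + age", df).fit()
full = smf.ols("outcome ~ severity + age + time_to_er", df).fit()
print(f"base R2 = {base.rsquared:.2f}, "
      f"added by time_to_er = {full.rsquared - base.rsquared:.3f}")
```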
Abstract:
The use of GaAsSbN capping layers on InAs/GaAs quantum dots (QDs) has recently been proposed for micro- and optoelectronic applications for their ability to independently tailor electron and hole confinement potentials. However, there is a lack of knowledge about the structural and compositional changes associated with the process of simultaneous Sb and N incorporation. In the present work, we have characterized, using transmission electron microscopy techniques, the effects of adding N to the GaAsSb/InAs/GaAs QD system. First, strain maps of the regions away from the InAs QDs revealed a strong reduction of the strain fields with N incorporation but a higher inhomogeneity, which points to an enhancement of composition modulation, with Sb-rich and Sb-poor regions in the range of a few nanometers. On the other hand, the average strain in the QDs and their surroundings is similar in both cases. This could be explained by the accumulation of Sb above the QDs, compensating the tensile strain induced by the N incorporation, together with an inhibition of In-Ga intermixing. Indeed, column-resolution compositional maps from aberration-corrected Z-contrast images confirmed that the addition of N enhances the preferential deposition of Sb above the InAs QD, giving rise to an undulation of the growth front. As an outcome, the strong redshift in the photoluminescence spectrum of the GaAsSbN sample cannot be attributed only to the N-related reduction of the conduction band offset, but also to an enhancement of the effect of Sb on the QD band structure.
Abstract:
Background Gray-scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific and do not provide a clear approach for shaping a new command line tool from a prototype shell script. Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. Conclusion In this article, we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
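The prototyping style described here, one small tool per task with intermediate results on disk, can be scripted from Python just as well as from a shell. In the sketch below, the tool name `mia-2dimagefilter` and its `name:param=value` filter syntax are assumptions for illustration rather than the documented MIA interface:

```python
# A minimal sketch of chaining single-task command line tools whose
# filters are selected by string descriptions, with intermediate
# results stored on disk. The tool name and filter syntax below are
# assumed for illustration, not the documented MIA interface.
import subprocess

steps = [
    "gauss:w=2",         # smooth
    "median:w=1",        # denoise
    "binarize:min=120",  # segment
]

infile = "input.png"
for i, filt in enumerate(steps):
    outfile = f"stage{i}.png"
    # one small tool per task; a shell-script for loop is equivalent
    subprocess.run(["mia-2dimagefilter", "-i", infile, "-o", outfile, filt],
                   check=True)
    infile = outfile
print("pipeline finished:", infile)
```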
Abstract:
The exhaustion, absolute absence, or simply the uncertainty about the size of fossil fuel reserves, added to the variability of their prices and the increasing instability of the supply chain, create strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that additionally comprises strong public concern about pollution and greenhouse gas emissions. Given its excellent environmental impact, public acceptance of the new energy carrier will depend on the control of the risks associated with its handling and storage. Among these, an undeniable explosion hazard appears as the main drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is severely limited. The introduction gives a general description of explosion processes and concludes that the resolution restrictions make modeling of both turbulence and combustion necessary. A critical review of the available methodologies for turbulence and combustion follows, pointing out the strengths, deficiencies and suitability of each. The review concludes that, given the existing limitations, the only viable strategy for combustion modeling is to use an expression for the turbulent burning velocity, as a function of several parameters, to close a balance equation for the combustion progress variable; such models are known as turbulent flame speed models. It also concludes that the most adequate solution for turbulence is to use different methodologies, LES or RANS, depending on the geometry and the resolution restrictions of each particular problem. 
Based on these findings, a combustion model is developed within the turbulent flame speed framework that overcomes the deficiencies of the available models for problems requiring calculations at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a newly developed correlation that accounts for the simultaneous influence of the equivalence ratio, temperature, pressure, and dilution with water vapor on the laminar burning velocity; the resulting formulation is valid over a wider range of temperature, pressure, and steam dilution than any previously available formulation. The turbulent burning velocity is obtained from correlations available in the literature; to select the most suitable one, several expressions were compared against experiments and ranked, with the outcome that the formulation due to Schmidt is the most adequate for the conditions studied. The role of flame instabilities in the propagation of combustion fronts is then assessed. Their relevance is significant for fuel-lean mixtures in which the turbulence intensity remains moderate, conditions that are typical of accidents at nuclear power plants. A model is therefore developed to account for the effect of instabilities, and specifically the acoustic-parametric instability, on the flame propagation velocity. This includes the mathematical derivation of the heuristic formulation of Bauwebs et al. for the burning velocity enhancement due to flame instabilities, as well as an analysis of the stability of flames with respect to a cyclic velocity perturbation; the results are combined to complete the model of the acoustic-parametric instability. 
The model developed was then applied to several problems of importance for industrial safety, and the results were analyzed and compared with the corresponding experimental data. Specifically, explosions in tunnels and in large containers were simulated, with and without concentration gradients and venting. As a general outcome, the model is validated, confirming its suitability for these problems. As a final undertaking, a thorough study of the Fukushima-Daiichi catastrophe was carried out. The analysis aims at determining the amount of hydrogen that took part in the explosion in reactor one, in contrast with other studies on the subject, which have focused on the amount of hydrogen generated during the accident. As an outcome of the research, it was determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of such a relatively small quantity of material can cause such significant damage; this illustrates the importance of this type of investigation. The industrial branches for which the model developed will be of interest span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a particular impact on the transport sector and on nuclear safety, for both fission and fusion technologies.
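For reference, a turbulent flame speed closure of the kind described here solves a balance equation for the Favre-averaged combustion progress variable; a standard textbook form of the Zimont-type closure (not necessarily the exact equation of the thesis) is:

```latex
\frac{\partial(\bar{\rho}\,\tilde{c})}{\partial t}
  + \nabla\cdot\left(\bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{c}\right)
  = \nabla\cdot\left(\frac{\mu_t}{\mathrm{Sc}_t}\,\nabla\tilde{c}\right)
  + \rho_u\, S_T\, \left|\nabla\tilde{c}\right|
```

where $\tilde{c}$ is the progress variable, $\rho_u$ the unburned-gas density, $\mu_t/\mathrm{Sc}_t$ the turbulent diffusivity, and $S_T$ the turbulent burning velocity, which is exactly where the laminar and turbulent burning-velocity correlations discussed above enter the model.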
Abstract:
Apolipoprotein E (apoE) is critical in the modulation of cholesterol and phospholipid transport between cells of different types. Human apoE is a polymorphic protein with three common alleles, APOE ε2, APOE ε3, and APOE ε4. ApoE4 is associated with sporadic and late-onset familial Alzheimer disease (AD). Gene dose was shown to have an effect on risk of developing AD, age of onset, accumulation of senile plaques in the brain, and reduction of choline acetyltransferase (ChAT) activity in the hippocampus of AD subjects. To characterize the possible impact of the apoE4 allele on cholinergic markers in AD, we examined the effect of apoE4 allele copy number on pre- and postsynaptic markers of cholinergic activity. ApoE4 allele copy number showed an inverse relationship with residual brain ChAT activity and nicotinic receptor binding sites in both the hippocampal formation and the temporal cortex of AD subjects. AD cases lacking the apoE4 allele showed ChAT activities close to or within age-matched normal control values. The effect of the apoE4 allele on cholinomimetic drug responsiveness was assessed next in a group (n = 40) of AD patients who completed a double-blind, 30-week clinical trial of the cholinesterase inhibitor tacrine. Results showed that > 80% of apoE4-negative AD patients showed marked improvement after 30 weeks as measured by the AD Assessment Scale (ADAS), whereas 60% of apoE4 carriers had ADAS scores that were worse compared to baseline. These results strongly support the concept that apoE4 plays a crucial role in the cholinergic dysfunction associated with AD and may be a prognostic indicator of poor response to therapy with acetylcholinesterase inhibitors in AD patients.
Abstract:
Background: There is strong evidence of the efficacy of family psychosocial interventions for schizophrenia, but evidence of the role played by the attitudes of relatives in the therapeutic process is lacking. Method: To study the effect of a family intervention on family attitudes and to analyse their mediating role in the therapeutic process, 50 patients with schizophrenia and their key relatives undergoing a trial on the efficacy of a family psychosocial intervention were studied by means of the Affective Style Coding System, the Scale of Empathy, and the Relational Control Coding System. Specific statistical methods were used to determine the nature of the relationship of the relatives' attitudes to the outcome of the family intervention. Results: Family psychosocial intervention was associated with a reduction in relatives' guilt induction and dominance and an improvement in empathy. Empathy and lack of dominance were identified as independent mediators of the effect of the family psychosocial intervention. The change in empathy and dominance during the first 9 months of the intervention predicted the outcome over the following 15 months. Conclusion: Relatives' empathy and lack of dominance are mediators of the beneficial effect of family psychosocial intervention on patient outcome.