940 results for Post disaster reconstruction
Abstract:
Homicides in which the victim survives for several days are not uncommon in routine forensic work. Reconstructing these cases by autopsy alone is very difficult and may occasionally lead to unsatisfactory results. For the medico-legal reconstruction of such cases, ante-mortem and post-mortem radiological imaging should always be included in the expert assessment. We report a case of fatal penetrating stab wounds to the skull in which reconstruction was possible only by combining the ante- and post-mortem radiological data with the autopsy findings.
Abstract:
This event study investigates the impact of the Japanese nuclear disaster at Fukushima Daiichi on the daily stock prices of French, German, Japanese, and U.S. nuclear utility and alternative energy firms. Hypotheses regarding the (cumulative) abnormal returns based on a three-factor model are analyzed through joint tests using multivariate regression models and bootstrapping. Our results show significant abnormal returns for Japanese nuclear utility firms during the one-week event window and the subsequent four-week post-event window. Furthermore, while French and German nuclear utility and alternative energy stocks exhibit significant abnormal returns during the event window, we cannot confirm abnormal returns for U.S. stocks.
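As a rough illustration of the event-study machinery described above, the sketch below estimates (cumulative) abnormal returns from a three-factor model. It is a generic Fama-French-style setup, not the paper's exact specification; the factor column names (mkt_rf, smb, hml, rf) and the window handling are assumptions.

```python
# Generic Fama-French-style sketch of abnormal-return estimation; the
# factor column names and window handling are assumptions, not the
# paper's exact specification.
import statsmodels.api as sm

def abnormal_returns(stock_ret, factors, est_window, event_window):
    """Fit a three-factor model on the estimation window, then compute
    abnormal returns AR_t = R_t - E[R_t | factors] over the event window."""
    est = factors.loc[est_window]
    X = sm.add_constant(est[["mkt_rf", "smb", "hml"]])
    y = stock_ret.loc[est_window] - est["rf"]            # excess returns
    model = sm.OLS(y, X).fit()

    ev = factors.loc[event_window]
    X_ev = sm.add_constant(ev[["mkt_rf", "smb", "hml"]], has_constant="add")
    expected = model.predict(X_ev) + ev["rf"]
    ar = stock_ret.loc[event_window] - expected
    return ar, ar.cumsum()                               # AR_t and CAR_t
```

Joint tests and bootstrapping, as used in the study, would then operate on the cumulative abnormal returns produced by a function of this kind.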
Abstract:
We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical of video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction, and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision, and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme, in combination with a simple yet efficient data term compression, can cope with high-resolution data. The incorporation of SIFT (Scale-Invariant Feature Transform) features into the data term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating the correspondence symmetry, and we further apply geodesic matting to automatically determine plausible values in these regions.
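Since the abstract centers on belief propagation for energy minimization, a minimal sketch of min-sum loopy belief propagation on a 4-connected grid may help fix ideas. The data cost volume D and the truncated-linear smoothness term are generic stand-ins, not the authors' formulation, and the paper's hierarchical and compression tricks are omitted.

```python
# Minimal sketch of min-sum loopy belief propagation on a 4-connected
# grid -- the kind of energy minimization the paper builds on. D and
# the truncated-linear pairwise cost are generic stand-ins.
import numpy as np

def bp_labeling(D, lam=1.0, trunc=2.0, iters=5):
    """D: (H, W, L) data costs; returns an (H, W) label map."""
    D = np.asarray(D, dtype=float)
    H, W, L = D.shape
    labels = np.arange(L, dtype=float)
    # Pairwise cost V[a, b] = lam * min(|a - b|, trunc)
    V = lam * np.minimum(np.abs(labels[:, None] - labels[None, :]), trunc)
    msg = {d: np.zeros_like(D) for d in "udlr"}  # incoming messages

    def send(belief):
        # m(b) = min_a (belief(a) + V(a, b)), renormalized to min 0
        m = (belief[..., :, None] + V).min(axis=-2)
        return m - m.min(axis=-1, keepdims=True)

    for _ in range(iters):
        base = D + sum(msg.values())  # data cost + all incoming messages
        new = {d: np.zeros_like(D) for d in "udlr"}
        # A node's outgoing message excludes what it received from the target.
        new["d"][1:] = send((base - msg["u"])[:-1])        # flows downward
        new["u"][:-1] = send((base - msg["d"])[1:])        # flows upward
        new["r"][:, 1:] = send((base - msg["l"])[:, :-1])  # flows rightward
        new["l"][:, :-1] = send((base - msg["r"])[:, 1:])  # flows leftward
        msg = new
    return (D + sum(msg.values())).argmin(axis=-1)
```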
Abstract:
The Chakhama Valley, a remote area in Pakistan-administered Kashmir, was badly damaged by the 7.6-magnitude earthquake that struck India and Pakistan on 8 October 2005. More than 5% of the population lost their lives, and about 90% of the existing housing was irreparably damaged or completely destroyed. In early 2006, the Aga Khan Development Network (AKDN) initiated a multisector, community-driven reconstruction program in the Chakhama Valley on the premise that the scale of the disaster required a response that would address all aspects of people's lives. One important aspect covered the promotion of disaster risk management for sustainable recovery in a safe environment. Accordingly, prevailing hazards (rockfalls, landslides, and debris flow, in addition to earthquake hazards) and existing risks were thoroughly assessed, and the information was incorporated into the main planning processes. Hazard maps, detailed site investigations, and proposals for precautionary measures assisted engineers in supporting the reconstruction of private homes in safe locations to render investments disaster resilient. The information was also used for community-based land use decisions and disaster mitigation and preparedness. The work revealed three main problems: (1) thorough assessment of hazards and incorporation of this assessment into planning processes is time consuming and often little understood by the population directly affected, but it pays off in the long run; (2) relocating people out of dangerous places is a highly sensitive issue that requires the support of clear and forceful government policies; and (3) the involvement of local communities is essential for the success of mitigation and preparedness.
Abstract:
The main objective of the game is to increase the coping capacity of players and familiarise them with the Integrated Disaster Reduction Approach. The game is intended to prepare players for, and introduce them to, a subsequent Learning for Sustainability capacity-building workshop for community leaders. The game represents a typical emergency situation resulting from a natural disaster. Before and after the event, adequate measures help to prevent or minimise potential damage. Once a disaster has occurred, concerted actions and immediate measures need to be taken to rescue as much as possible (human lives, livestock, material) and to safeguard the village against further damage and losses. In the course of the game, each playing team can prove its knowledge of the measures that must be taken to avoid or reduce losses related to natural disasters. Such measures relate to the assessment and monitoring of risks, prevention and mitigation, preparedness and response, as well as recovery and reconstruction.
Abstract:
This chapter proposes a personalized X-ray reconstruction-based planning and post-operative treatment evaluation framework called iJoint for advancing modern Total Hip Arthroplasty (THA). Based on a mobile X-ray image calibration phantom and a unique 2D-3D reconstruction technique, iJoint can generate patient-specific models of the hip joint by non-rigidly matching statistical shape models to the X-ray radiographs. Such a reconstruction enables true 3D planning and treatment evaluation of hip arthroplasty from just 2D X-ray radiographs, whose acquisition is part of the standard diagnostic and treatment loop. As part of the system, a 3D model-based planning environment provides surgeons with hip arthroplasty parameters such as implant type, size, position, offset, and leg length equalization. With this newly developed system, we are able to provide true 3D solutions for computer-assisted planning of THA using only 2D X-ray radiographs, which is not only innovative but also cost-effective.
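The 2D-3D reconstruction step can be pictured as fitting the coefficients of a PCA-based statistical shape model so that projected 3D landmarks match 2D landmarks in a calibrated radiograph. The toy least-squares sketch below illustrates that idea only; mean_shape, modes, the 3x4 projection matrix P, and target_2d are assumed inputs, and the actual iJoint pipeline (non-rigid matching to the radiographs) is considerably more involved.

```python
# Toy least-squares sketch of fitting a PCA statistical shape model to
# 2D landmarks from one calibrated radiograph. All inputs are assumed;
# this is not the iJoint implementation.
import numpy as np
from scipy.optimize import least_squares

def project(points3d, P):
    """Project (N, 3) points with a 3x4 projection matrix to (N, 2)."""
    homog = np.hstack([points3d, np.ones((len(points3d), 1))])
    uvw = homog @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def fit_shape(mean_shape, modes, P, target_2d, n_modes=10):
    """mean_shape: (N, 3); modes: (K, N, 3) PCA modes; target_2d: (N, 2)."""
    def residuals(b):
        shape = mean_shape + np.tensordot(b, modes[:n_modes], axes=1)
        return (project(shape, P) - target_2d).ravel()
    fit = least_squares(residuals, x0=np.zeros(n_modes))
    return mean_shape + np.tensordot(fit.x, modes[:n_modes], axes=1)
```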
Abstract:
PURPOSE Despite the existence of different methods, monitoring of free muscle transfers remains challenging. In the current study we evaluated our clinical setting for the monitoring of such tissues, using a recent microcirculation-imaging camera (EasyLDI) as an additional tool for the detection of perfusion incompetency. PATIENTS AND METHODS This study was performed on seven patients with soft tissue defects who underwent reconstruction with a free gracilis muscle flap. Besides the standard monitoring protocol (clinical assessment, temperature strips, and surface Doppler), hourly EasyLDI monitoring was performed for 48 hours. A baseline value (flap raised but still connected to its vascular bundle) and an ischaemia perfusion value (flap completely resected) were measured at the same point. RESULTS The mean age of the patients, mean baseline value, and mean ischaemia perfusion value were 48.00 ± 13.42 years, 49.31 ± 17.33 arbitrary perfusion units (APU), and 9.87 ± 4.22 APU, respectively. The LDI values measured in six free muscle transfers were compatible with the hourly standard monitoring protocol, and normalized LDI values increased significantly over time (P < 0.001, r = 0.412). One of the flaps required a return to theatre 17 hours after the operation, where an unsalvageable flap loss was detected. All normalized LDI values of this flap were below the ischaemia perfusion level, and the trend was significantly descending over time (P < 0.001, r = -0.870). CONCLUSION Owing to its capability for early detection of perfusion incompetency, LDI may be recommended as an additional post-operative monitoring device for free muscle flaps, both for the early detection of suspected failing flaps and for the validation of other methods.
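The normalization and trend statistics reported above can be pictured with a short sketch: hourly readings are scaled between the ischaemia and baseline perfusion values, and the time trend is tested with a Pearson correlation. All numbers below are synthetic stand-ins, not patient data.

```python
# Illustrative sketch of normalizing hourly LDI readings and testing
# the perfusion trend over time. Values are synthetic, not patient data.
import numpy as np
from scipy import stats

hours = np.arange(48)
rng = np.random.default_rng(0)
ldi_raw = 45 + 0.3 * hours + rng.normal(0, 3, hours.size)  # APU readings

baseline_apu, ischaemia_apu = 49.3, 9.9  # per-flap reference values
ldi_norm = (ldi_raw - ischaemia_apu) / (baseline_apu - ischaemia_apu)

r, p = stats.pearsonr(hours, ldi_norm)
print(f"trend: r = {r:.3f}, P = {p:.3g}")  # r < 0 with small P flags decline
```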
Abstract:
Venous angioplasty with stenting of iliac veins is an important treatment option for patients suffering from post-thrombotic syndrome due to chronic venous obstruction. Interventional treatment of a chronically occluded vena cava, however, is challenging and often associated with failure. We describe a case of a chronic total occlusion of the entire inferior vena cava that was successfully recanalized using bidirectional wire access and a balloon puncture by a re-entry catheter to establish patency of the inferior vena cava.
Abstract:
Background. In public health preparedness, disaster preparedness refers to the strategic planning of responses to all types of disasters. Preparation and training for disaster response can be conducted using different teaching modalities, ranging from discussion-based programs such as seminars, drills, and tabletop exercises to more complex operation-based programs such as functional exercises and full-scale exercises. Each method of instruction has its advantages and disadvantages. Tabletop exercises are facilitated discussions designed to evaluate programs, policies, and procedures; they are usually conducted in a classroom, often with tabletop props (e.g. models, maps, or diagrams). Objective. The overall goal of this project was to determine whether tabletop exercises are effective teaching modalities for disaster preparedness, with an emphasis on intentional chemical exposure. Method. The target audience for the exercise was the Medical Reserve Brigade of the Texas State Guard, a group of volunteer healthcare providers and first responders who prepare for response to local disasters. A new tabletop exercise was designed to provide information on the complex, interrelated organizations within the national disaster preparedness program that this group would interact with in the event of a local disaster. This educational intervention consisted of a four-hour multipart program that included a pretest of knowledge, a lecture series, an interactive group discussion using a mock disaster scenario, a posttest of knowledge, and a course evaluation. Results. Approximately 40 volunteers attended the intervention session; roughly half (n=21) had previously participated in a full-scale drill. There was an 11% improvement in fund of knowledge between the pre- and post-test scores (p=0.002). Overall, the tabletop exercise was well received by those with and without prior training, with no significant differences found between these two groups in terms of relevance and appropriateness of content. However, the separate components of the tabletop exercise were variably effective, as gauged by written comments on the questionnaire. Conclusions. Tabletop exercises can be a useful training modality in disaster preparedness, as evidenced by the improvement in knowledge and the qualitative feedback on its value. Future offerings could incorporate recordings of participant responses during the drill so that better feedback can be provided to them. Additional research should be conducted, using the same or a similar design, in different populations that are stakeholders in disaster preparedness, so that the generalizability of these findings can be determined.
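For readers who want to reproduce the kind of pre/post comparison reported here, a minimal sketch of a paired t-test on knowledge scores follows; the scores are synthetic stand-ins, not the study's data.

```python
# Minimal sketch of a paired pre/post-test comparison of the kind
# reported (knowledge gain with p = 0.002). Scores are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(70, 10, 40).clip(0, 100)            # pretest, n = 40
post = (pre + rng.normal(8, 10, 40)).clip(0, 100)    # posttest

t, p = stats.ttest_rel(post, pre)                    # paired t-test
print(f"mean gain {post.mean() - pre.mean():.1f} points, p = {p:.4f}")
```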
Abstract:
High Angular Resolution Diffusion Imaging (HARDI) techniques, including Diffusion Spectrum Imaging (DSI), have been proposed to resolve crossing and other complex fiber architecture in human brain white matter. In these methods, the directional information of diffusion is inferred from the peaks in the orientation distribution function (ODF). Extensive studies using histology on macaque brain, cat cerebellum, rat hippocampus and optic tracts, and bovine tongue are qualitatively in agreement with DSI-derived ODFs and tractography. However, only two studies in the literature have validated DSI results using physical phantoms, and neither was performed on a clinical MRI scanner. Also, the few studies that optimized DSI in a clinical setting did not involve a comparison against physical phantoms. Finally, there is a lack of consensus on the necessary pre- and post-processing steps in DSI, and ground-truth diffusion fiber phantoms are not yet standardized. Therefore, the aims of this dissertation were to design and construct novel diffusion phantoms, to employ post-processing techniques in order to systematically validate and optimize DSI-derived fiber ODFs in crossing regions on a clinical 3T MR scanner, and to develop user-friendly software for DSI data reconstruction and analysis. Phantoms with fixed crossing-fiber configurations (two fibers crossing at 90° and at 45°, and three fibers crossing at 60°) were constructed using novel hollow plastic capillaries and novel placeholders. T2-weighted MRI results on these phantoms demonstrated high SNR, homogeneous signal, and the absence of air bubbles. Also, a technique to deconvolve the response function of an individual peak from the overall ODF was implemented, in addition to other DSI post-processing steps. This technique greatly improved the angular resolution of otherwise unresolvable peaks in a crossing-fiber ODF. The effects of DSI acquisition parameters and SNR on the resultant angular accuracy of DSI on the clinical scanner were studied and quantified using the developed phantoms. With high angular direction sampling and reasonable levels of SNR, quantification of the crossing regions in the 90°, 45°, and 60° phantoms resulted in successful detection of the angular information, with mean ± SD of 86.93° ± 2.65°, 44.61° ± 1.6°, and 60.03° ± 2.21°, respectively, while simultaneously enhancing the ODFs in regions containing single fibers. To demonstrate the applicability of these validated methodologies, improvements in ODFs and fiber tracking from known crossing-fiber regions in normal human subjects were shown, and an in-house MATLAB software package that streamlines data reconstruction and post-processing for DSI, with an easy-to-use graphical user interface, was developed. In conclusion, the phantoms developed in this dissertation offer a means of providing ground truth for the validation of reconstruction and tractography algorithms of various diffusion models (including DSI). Also, the deconvolution methodology, when applied as an additional DSI post-processing step, significantly improved the angular accuracy of the ODFs obtained from DSI, and should be applicable to ODFs obtained from other high angular resolution diffusion imaging techniques.
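The core DSI reconstruction step referred to above can be sketched compactly: the diffusion propagator is the 3D Fourier transform of the Cartesian q-space signal, and the ODF is a radial integral of the propagator along each sampling direction. This is a bare-bones illustration of the standard formulation, not the dissertation's software; grid size, radial range, and nearest-neighbour interpolation are simplifying assumptions.

```python
# Bare-bones sketch of standard DSI reconstruction: propagator via 3D
# FFT of q-space data, ODF via radial integration. Parameters are
# illustrative assumptions.
import numpy as np

def dsi_odf(qspace, directions, n_radial=16, r_max=6.0):
    """qspace: (N, N, N) q-space samples; directions: (M, 3) unit vectors."""
    pdf = np.abs(np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(qspace))))
    center = np.array(pdf.shape) // 2
    radii = np.linspace(1.0, r_max, n_radial)
    odf = np.empty(len(directions))
    for i, u in enumerate(directions):
        pts = center + radii[:, None] * np.asarray(u)   # samples along u
        idx = np.clip(np.rint(pts).astype(int), 0, np.array(pdf.shape) - 1)
        # ODF(u) ~ sum_r P(r * u) * r^2  (solid-angle weighting)
        odf[i] = np.sum(pdf[idx[:, 0], idx[:, 1], idx[:, 2]] * radii**2)
    return odf / odf.sum()
```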
Abstract:
One important task in the design of an antenna is to carry out an analysis to find the antenna characteristics that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency, and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetic absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be used independently of weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art was carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to transform the data from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors, and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms for extrapolating functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis, with a total of three alternatives proposed for filtering out a significant noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second uses a source reconstruction technique to obtain the extreme near field, where a spatial filtering can be applied. The last is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then to apply spatial filtering there as well. All the alternatives are analyzed in the three most common near-field systems, including comprehensive noise statistical analyses in order to deduce the signal-to-noise ratio improvement achieved in each case.
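The first (modal filtering) alternative can be illustrated for the planar case: the measured field is transformed to its plane-wave spectrum, and spectral content outside the region the AUT can radiate into (here, simply the visible region) is zeroed before the near-field-to-far-field transformation. This is a simplified sketch of the general idea, not the Thesis' exact filter.

```python
# Simplified sketch of modal (plane-wave spectrum) filtering for planar
# near-field data; geometry and sampling values are illustrative.
import numpy as np

def modal_filter(E_near, dx, dy, wavelength):
    """E_near: (Ny, Nx) complex sampled field; returns filtered field."""
    k0 = 2 * np.pi / wavelength
    Ny, Nx = E_near.shape
    kx = 2 * np.pi * np.fft.fftfreq(Nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(Ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    spectrum = np.fft.fft2(E_near)                 # plane-wave spectrum
    visible = KX**2 + KY**2 <= k0**2               # keep propagating modes
    return np.fft.ifft2(spectrum * visible)
```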
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify, and later suppress, the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical, and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field sample and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later substitution easier; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
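The truncation-error method described above follows the pattern of Gerchberg-Papoulis band-limited extrapolation. The 1D toy sketch below alternates between enforcing the known samples over the measured region and the finite support of the field on the antenna plane; the masks and domain assignments are illustrative assumptions, not the Thesis' actual planar/cylindrical/spherical implementation.

```python
# 1D toy sketch of a Gerchberg-Papoulis-style iteration for truncation
# error reduction. Masks and domains are illustrative assumptions.
import numpy as np

def extrapolate(measured, meas_mask, support_mask, iters=100):
    """measured: complex samples, valid only where meas_mask is True;
    support_mask: True where the aperture-domain field may be nonzero."""
    f = np.where(meas_mask, measured, 0.0)
    for _ in range(iters):
        F = np.fft.ifft(f) * support_mask      # enforce finite AUT support
        f = np.fft.fft(F)                      # back to the measured domain
        f[meas_mask] = measured[meas_mask]     # keep reliable data fixed
    return f
```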
Abstract:
Two-stage reconstruction with a tissue expander and implant is the most widespread technique for post-mastectomy breast reconstruction. The formation of a periprosthetic capsule is a universal physiological response to any foreign body present in the human body; however, the formation of a pathological capsule often leads to complications and, consequently, to suboptimal aesthetic results. The scanning electron microscope (SEM) is a powerful tool that allows an unparalleled evaluation of the ultrastructural topography of specimens. The first objective of this thesis is to compare conventional SEM (Hi-Vac) with a more recent technology, environmental SEM (ESEM), in order to determine whether the latter provides a superior evaluation of breast capsular tissue. The second objective is to apply the superior SEM modality to study the ultrastructural modifications of periprosthetic capsules in women undergoing different tissue expansion protocols in the context of prosthetic breast reconstruction. Two prospective studies were carried out to address our research objectives. Ten patients were included in the first and 48 in the second. The Hi-Vac modality proved superior for the comprehensive analysis of breast capsular tissue. Using Hi-Vac mode within our established research protocol, a more pronounced 3-D relief was observed around BIOCELL® expanders in the delayed-intervention group (6 weeks). No significant changes were observed in the SILTEX® capsules in either the early-intervention (2 weeks) or the delayed-intervention group.
Abstract:
Significant uncertainties persist in the reconstruction of past sea surface temperatures in the eastern equatorial Pacific, especially regarding the amplitude of the glacial cooling and the details of the post-glacial warming. Here we present the first regional calibration of alkenone unsaturation in surface sediments versus mean annual sea surface temperature (maSST). Based on 81 new and 48 previously published data points, it is shown that open-ocean samples conform to established global regressions of Uk'37 versus maSST and that there is no systematic bias from seasonality in the production or export of alkenones, or from surface ocean nutrient concentrations or salinity. The flattening of the regression at the highest maSSTs is found to be statistically insignificant. For the near-coastal Peru upwelling zone between 11-15°S and 76-79°W, however, we corroborate earlier observations that Uk'37 SST estimates significantly overestimate maSSTs at many sites. We posit that this is caused either by uncertainties in the determination of maSSTs in this highly dynamic environment or by biasing of the alkenone paleothermometer toward El Niño events, as postulated by Rein et al. (2005).
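A regional calibration of this kind is, at heart, an ordinary linear regression of core-top Uk'37 on overlying maSST, later inverted to estimate past SSTs. The sketch below uses synthetic data; the slope and intercept (0.033, 0.044) echo the widely used global Uk'37 regression and are included only for illustration, not taken from this study.

```python
# Sketch of a regional calibration: linear regression of core-top Uk'37
# on maSST, then inversion to estimate paleo-SST. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
ma_sst = rng.uniform(18, 28, 129)                   # 81 + 48 core tops
uk37 = 0.033 * ma_sst + 0.044 + rng.normal(0, 0.01, ma_sst.size)

slope, intercept = np.polyfit(ma_sst, uk37, 1)      # regional calibration
sst_estimate = (uk37 - intercept) / slope           # inverted for paleo-SST
print(f"Uk'37 = {slope:.4f} * maSST + {intercept:.4f}")
```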