976 results for inside-outside algorithm


Relevance:

30.00%

Publisher:

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance:

30.00%

Publisher:

Abstract:

Assuming that the heat capacity of a body is negligible outside certain inclusions, the heat equation degenerates to a parabolic-elliptic interface problem. In this work we aim to detect these interfaces from thermal measurements on the surface of the body. We deduce an equivalent variational formulation for the parabolic-elliptic problem and give a new proof of the unique solvability based on Lions’s projection lemma. For the case that the heat conductivity is higher inside the inclusions, we develop an adaptation of the factorization method to this time-dependent problem. In particular, this shows that the locations of the interfaces are uniquely determined by boundary measurements. The method also yields a numerical algorithm to recover the inclusions and thus the interfaces. We demonstrate how measurement data can be simulated numerically by a coupling of a finite element method with a boundary element method, and finally we present some numerical results for the inverse problem.
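To make the degenerate structure concrete, the following is a minimal schematic of such a parabolic-elliptic interface problem; the domain Ω, inclusion D, conductivity κ and heat capacity c are assumed notation for illustration and are not taken from the abstract.

```latex
% Schematic parabolic-elliptic interface problem (assumed notation):
% heat capacity negligible outside the inclusion D, so the equation is
% parabolic in D and elliptic in \Omega \setminus \overline{D}.
\begin{aligned}
  c\,\partial_t u - \nabla\cdot(\kappa\,\nabla u) &= 0
     && \text{in } D \times (0,T),\\
  -\nabla\cdot(\kappa\,\nabla u) &= 0
     && \text{in } (\Omega\setminus\overline{D}) \times (0,T),
\end{aligned}
\qquad
u \ \text{and}\ \kappa\,\partial_\nu u \ \text{continuous across } \partial D .
```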

Relevance:

30.00%

Publisher:

Abstract:

A system for the digital holographic imaging of airborne objects, suitable for ground-based field measurements, was developed and constructed. Depending on the depth position, it is suited to directly determining the size of airborne objects larger than approximately 20 µm, as well as their shape for sizes from approximately 100 µm up to the millimetre range. The development additionally included an algorithm for automated improvement of the hologram quality and for semi-automatic distance determination of large objects. An approach for intrinsically increasing the efficiency of the depth-position determination by computing angle-averaged profiles was presented. Furthermore, a method was developed that, using an iterative approach for isolated objects, allows recovery of the phase information and thus removal of the twin image. In addition, the effects of various limitations of digital holography, such as the finite pixel size, were investigated and discussed by means of simulations. The appropriate presentation of the three-dimensional position information is a particular problem in digital holography, since the three-dimensional light field is not reconstructed physically. A method was developed and implemented that allows a quasi-three-dimensional, magnified view by constructing a stereoscopic representation of the numerically reconstructed measurement volume. Selected digital holograms recorded during field campaigns at the Jungfraujoch were reconstructed. In some cases a very high fraction of irregular crystal shapes was found, in particular as a result of heavy riming. Objects down to the range of ≤20 µm were observed even during periods with formally ice-subsaturated conditions. Furthermore, applying the theory of the "phase edge effect" developed here, an object of only about 40 µm in size could be identified as an ice platelet. The greatest disadvantage of digital holography compared with conventional photographic imaging techniques is the need for elaborate numerical reconstruction. A high computational effort is required to achieve a result comparable to a photograph. On the other hand, digital holography has unique strengths. Access to the three-dimensional position information can serve the local investigation of relative object distances. However, it became apparent that the constraints of digital holography currently make it difficult to observe sufficiently large numbers of objects on the basis of individual holograms. It was demonstrated that complete object boundaries could be reconstructed even when an object was located partially or entirely outside the geometric measurement volume. Furthermore, the sub-pixel reconstruction, first demonstrated in simulations, was applied to real holograms. It could be shown that quasi point-like objects could in part be localized with sub-pixel accuracy, and that additional information could also be obtained for extended objects. Finally, interference patterns were observed on reconstructed ice crystals and in some cases tracked over time. At present both internal reflection within the crystal and the existence of a (quasi-)liquid layer appear possible as explanations, with some of the evidence pointing towards the latter possibility.
As a result of this work, a system comprising a new measurement instrument and an extensive set of algorithms is now available. S. M. F. Raupach, H.-J. Vössing, J. Curtius and S. Borrmann: Digital crossed-beam holography for in-situ imaging of atmospheric particles, J. Opt. A: Pure Appl. Opt. 8, 796-806 (2006). S. M. F. Raupach: A cascaded adaptive mask algorithm for twin image removal and its application to digital holograms of ice crystals, Appl. Opt. 48, 287-301 (2009). S. M. F. Raupach: Stereoscopic 3D visualization of particle fields reconstructed from digital inline holograms, accepted for publication, Optik - Int. J. Light El. Optics (2009).
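Since the work relies throughout on numerical reconstruction of in-line holograms at trial depths, here is a minimal sketch of such a reconstruction using the angular spectrum method; the wavelength, pixel pitch, grid size and propagation depth are illustrative assumptions, not values from the thesis.

```python
# Minimal angular-spectrum reconstruction of an in-line hologram at a trial depth z.
# All numerical parameters below are illustrative placeholders.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a complex field by distance z (metres) via the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)   # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Propagation kernel; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    kernel[arg < 0] = 0.0
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

# Example: reconstruct an intensity image at one trial depth from a recorded hologram.
hologram = np.random.rand(512, 512)          # placeholder for a recorded hologram
field_z = angular_spectrum_propagate(hologram, wavelength=532e-9,
                                     pixel_pitch=6.8e-6, z=0.05)
image_z = np.abs(field_z) ** 2               # reconstructed intensity at depth z
```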

Relevance:

30.00%

Publisher:

Abstract:

The most consistent feature of Wiskott Aldrich syndrome (WAS) is profound thrombocytopenia with small platelets. The responsible gene encodes WAS protein (WASP), which functions in leucocytes as an actin filament nucleating agent, yet actin filament nucleation proceeds normally in patient platelets regarding shape change, filopodia and lamellipodia generation. Because WASP localizes in the platelet membrane skeleton and is mobilized by alphaIIbbeta3 integrin outside-in signalling, we questioned whether its function might be linked to integrin. Agonist-induced alphaIIbbeta3 activation (PAC-1 binding) was normal for patient platelets, indicating normal integrin inside-out signalling. Inside-out signalling (fibrinogen, JON/A binding) was also normal for wasp-deficient murine platelets. However, adherence/spreading on immobilized fibrinogen was decreased for patient platelets and wasp-deficient murine platelets, indicating decreased integrin outside-in responses. Another integrin outside-in dependent response, fibrin clot retraction, involving contraction of the post-aggregation actin cytoskeleton, was also decreased for patient platelets and wasp-deficient murine platelets. Rebleeding from tail cuts was more frequent for wasp-deficient mice, suggesting decreased stabilisation of the primary platelet plug. In contrast, phosphatidylserine exposure, a pro-coagulant response, was enhanced for WASP-deficient patient and murine platelets. The collective results reveal a novel function for WASP in regulating pro-aggregatory and pro-coagulant responses downstream of integrin outside-in signalling.

Relevance:

30.00%

Publisher:

Abstract:

Three-dimensional flow visualization plays an essential role in many areas of science and engineering, such as aero- and hydro-dynamical systems, which dominate various physical and natural phenomena. For popular methods such as streamline visualization to be effective, they should capture the underlying flow features while facilitating user observation and understanding of the flow field in a clear manner. My research mainly focuses on the analysis and visualization of flow fields using various techniques, e.g. information-theoretic techniques and graph-based representations. Since streamline visualization is a popular technique in flow field visualization, how to select good streamlines to capture flow patterns and how to pick good viewpoints to observe flow fields become critical. We treat streamline selection and viewpoint selection as symmetric problems and solve them simultaneously using the dual information channel [81]. To the best of my knowledge, this is the first attempt in flow visualization to combine these two selection problems in a unified approach. This work selects streamlines in a view-independent manner, so the selected streamlines do not change across viewpoints. Another work of mine [56] uses an information-theoretic approach to evaluate the importance of each streamline under various sample viewpoints and presents a solution for view-dependent streamline selection that guarantees coherent streamline updates when the view changes gradually. When projecting 3D streamlines to 2D images for viewing, occlusion and clutter become inevitable. To address this challenge, we design FlowGraph [57, 58], a novel compound graph representation that organizes field line clusters and spatiotemporal regions hierarchically for occlusion-free and controllable visual exploration. We enable observation and exploration of the relationships among field line clusters, spatiotemporal regions and their interconnection in the transformed space. Most viewpoint selection methods only consider external viewpoints outside of the flow field. This does not convey a clear observation when the flow field is cluttered near the boundary. Therefore, we propose a new way to explore flow fields by selecting several internal viewpoints around the flow features inside the flow field and then generating a B-spline curve path traversing these viewpoints, providing users with close-up views of the flow field for detailed observation of hidden or occluded internal flow features [54]. This work is also extended to deal with unsteady flow fields. Besides flow field visualization, some other topics relevant to visualization also attract my attention. In iGraph [31], we leverage a distributed system along with a tiled display wall to provide users with high-resolution visual analytics of big image and text collections in real time. Developing pedagogical visualization tools forms my other research focus. Since most cryptography algorithms use sophisticated mathematics, it is difficult for beginners to understand both what an algorithm does and how it does it. Therefore, we develop a set of visualization tools to provide users with an intuitive way to learn and understand these algorithms.
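As one concrete illustration of the internal-viewpoint idea, a smooth camera path through a handful of chosen viewpoints can be generated with a B-spline fit. The sketch below uses SciPy; the viewpoint coordinates are made-up placeholders rather than data from the cited work [54].

```python
# Minimal sketch: fit a B-spline camera path through a few internal viewpoints
# (placeholder 3D coordinates) and sample it densely for a fly-through.
import numpy as np
from scipy.interpolate import splprep, splev

viewpoints = np.array([            # assumed internal viewpoints (x, y, z)
    [0.2, 0.1, 0.5],
    [0.4, 0.3, 0.6],
    [0.5, 0.6, 0.4],
    [0.7, 0.5, 0.3],
    [0.9, 0.8, 0.5],
])

# splprep expects a sequence of coordinate arrays; s=0 interpolates the viewpoints exactly.
tck, _ = splprep(viewpoints.T, s=0, k=3)
u = np.linspace(0.0, 1.0, 200)
path = np.stack(splev(u, tck), axis=1)      # 200 samples along the smooth camera path
```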

Relevance:

30.00%

Publisher:

Abstract:

A detailed microdosimetric characterization of the M. D. Anderson 42 MeV (p,Be) fast neutron beam was performed using microdosimetric techniques with a 1/2 inch diameter Rossi proportional counter. These measurements were performed at 5, 15, and 30 cm depths on the central axis, 3 cm inside, and 3 cm outside the field edge for 10 $\times$ 10 and 20 $\times$ 20 cm field sizes. Spectra were also measured at 5 and 15 cm depth on the central axis for a 6 $\times$ 6 cm field size. Continuous slowing down approximation calculations were performed to model the nuclear processes that occur in the fast neutron beam. CR-39 track-etch detectors were irradiated using a tandem electrostatic accelerator with protons of 10, 6, and 3 MeV and alpha particles of 15, 10, and 7 MeV incident energy on target, at angles of incidence from 0 to 85 degrees. The critical angle, as well as the track etch rate and normal-incidence track diameter versus linear energy transfer (LET), were obtained from these measurements. The bulk etch rate was also calculated from these measurements. The dose response of the material was studied, and the angular distribution of charged particles created by the fast neutron beam was measured with CR-39. The efficiency of CR-39 was calculated relative to that of the Rossi chamber, and an algorithm was devised for deriving LET spectra from the major and minor axis dimensions of the observed tracks. The CR-39 was irradiated in the same positions as the Rossi chamber, and the derived spectra were compared directly.
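For context, the critical angle mentioned above follows from the standard track-etch geometry relating the bulk etch rate and the track etch rate; this relation is textbook background with assumed notation, not a result quoted from the abstract.

```latex
% Standard track-etch relation (background, assumed notation):
% only tracks whose dip angle exceeds the critical angle are revealed by etching.
\sin\theta_c = \frac{V_B}{V_T},
\qquad \text{track revealed if } \theta > \theta_c ,
```
where $V_B$ is the bulk etch rate and $V_T$ the track etch rate along the particle trajectory.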

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE In this study, the "Progressive Resolution Optimizer PRO3" (Varian Medical Systems) is compared to the previous version "PRO2" with respect to its potential to improve dose sparing of the organs at risk (OAR) and dose coverage of the PTV for head and neck cancer patients. MATERIALS AND METHODS Volumetric modulated arc therapy (VMAT) treatment plans were generated for eight head and neck cancer patients in this study. All cases have 2-3 phases, and the total prescribed dose (PD) was 60-72 Gy in the PTV. The study focuses mainly on the phase 1 plans, which all have an identical PD of 54 Gy and complex PTV structures overlapping the parotids. Optimization was performed based on planning objectives for the PTV according to ICRU 83, with minimal dose to the spinal cord and to the parotids outside the PTV. In order to assess the quality of the optimization algorithms, an identical set of constraints was used for both PRO2 and PRO3. The resulting treatment plans were investigated with respect to dose distribution based on an analysis of the dose volume histograms. RESULTS For the phase 1 plans (PD = 54 Gy), the near-maximum dose D2% of the spinal cord could be reduced to 22±5 Gy with PRO3, as compared to 32±12 Gy with PRO2, averaged over all patients. The mean dose to the parotids was also lower in PRO3 plans than in PRO2 plans, but the differences were less pronounced. A PTV coverage of V95% = 97±1% could be reached with PRO3, as compared to 86±5% with PRO2. In clinical routine, these PRO2 plans would require modifications to obtain better PTV coverage at the cost of higher OAR doses. CONCLUSION A comparison between the PRO3 and PRO2 optimization algorithms was performed for eight head and neck cancer patients. In general, the quality of VMAT plans for head and neck patients is improved with PRO3 as compared to PRO2. The dose to OARs can be reduced significantly, especially for the spinal cord. These reductions are achieved with better PTV coverage than with PRO2. The improved spinal cord sparing offers new opportunities for all types of paraspinal tumors and for re-irradiation of recurrent tumors or second malignancies.
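The comparison above is expressed through two dose-volume-histogram metrics, D2% (near-maximum dose) and V95% (coverage). The following is a minimal sketch of how such metrics are computed from per-voxel doses; the array, prescription and values are illustrative assumptions, not data from the study.

```python
# Minimal sketch of the two DVH metrics used above (D2% and V95%), assuming a flat
# array of voxel doses in Gy for one structure; all numbers are placeholders.
import numpy as np

def d_percent(doses_gy, percent=2.0):
    """Near-maximum dose DX%: minimum dose received by the hottest X% of the volume."""
    return np.percentile(doses_gy, 100.0 - percent)

def v_percent(doses_gy, prescribed_dose_gy, threshold=0.95):
    """VX%: fraction of the volume receiving at least X% of the prescribed dose."""
    return np.mean(doses_gy >= threshold * prescribed_dose_gy)

ptv_doses = np.random.normal(54.0, 1.5, size=100_000)   # placeholder PTV voxel doses
print(f"D2%  = {d_percent(ptv_doses):.1f} Gy")
print(f"V95% = {100 * v_percent(ptv_doses, 54.0):.1f} %")
```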

Relevance:

30.00%

Publisher:

Abstract:

Diamonds are known for both their beauty and their durability. Jefferson National Lab in Newport News, VA has found a way to utilize the diamond's strength to view the beauty of the inside of the atomic nucleus with the hopes of finding exotic forms of matter. By firing very fast electrons at a diamond sheet no thicker than a human hair, high energy particles of light known as photons are produced with a high degree of polarization that can illuminate the constituents of the nucleus known as quarks. The University of Connecticut Nuclear Physics group has responsibility for crafting these extremely thin, high quality diamond wafers. These wafers must be cut from larger stones that are about the size of a human finger, and then carefully machined down to the final thickness. The thinning of these diamonds is extremely challenging, as the diamond's greatest strength also becomes its greatest weakness. The Connecticut Nuclear Physics group has developed a novel technique to assist industrial partners in assessing the quality of the final machining steps, using a technique based on laser interferometry. The images of the diamond surface produced by the interferometer encode the thickness and shape of the diamond surface in a complex way that requires detailed analysis to extract. We have developed a novel software application to analyze these images based on the method of simulated annealing. Being able to image the surface of these diamonds without requiring costly X-ray diffraction measurements allows rapid feedback to the industrial partners as they refine their thinning techniques. Thus, by utilizing a material found to be beautiful by many, the beauty of nature can be brought more clearly into view.
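The image-analysis step is described only as being "based on the method of simulated annealing"; the following is a generic simulated annealing sketch of that kind of parameter fit, with a toy cost function, step size and cooling schedule standing in for the group's actual objective and software.

```python
# Generic simulated annealing sketch: minimize a cost function over a few parameters
# (e.g. surface-model parameters fitted to an interferogram). The cost function,
# schedule and step size are illustrative placeholders, not the group's code.
import math
import random

def simulated_annealing(cost, x0, step=0.1, t_start=1.0, t_end=1e-3, n_iter=10_000):
    x, fx = list(x0), cost(x0)
    best_x, best_f = list(x), fx
    for i in range(n_iter):
        t = t_start * (t_end / t_start) ** (i / n_iter)   # geometric cooling schedule
        # Propose a random perturbation of one parameter.
        cand = list(x)
        j = random.randrange(len(cand))
        cand[j] += random.gauss(0.0, step)
        fc = cost(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = list(x), fx
    return best_x, best_f

# Example with a toy cost: distance of (a, b) from the point (1.0, -2.0).
params, residual = simulated_annealing(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                                       x0=[0.0, 0.0])
```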

Relevance:

30.00%

Publisher:

Abstract:

The effectiveness of the Anisotropic Analytical Algorithm (AAA) implemented in the Eclipse treatment planning system (TPS) was evaluated with the Radiological Physics Center anthropomorphic lung phantom using both flattened and flattening-filter-free high-energy beams. Radiation treatment plans were developed following the Radiation Therapy Oncology Group and Radiological Physics Center guidelines for lung treatment using Stereotactic Body Radiation Therapy (SBRT). The tumor was covered such that at least 95% of the Planning Target Volume (PTV) received 100% of the prescribed dose, while ensuring that normal tissue constraints were followed as well. Calculated doses were exported from the Eclipse TPS and compared with the experimental data measured using thermoluminescence detectors (TLD) and radiochromic films placed inside the phantom. The results demonstrate that the AAA superposition-convolution algorithm is able to calculate SBRT treatment plans with all clinically used photon beams in the range from 6 MV to 18 MV. The measured dose distribution showed good agreement with the calculated distribution using the clinically acceptable criteria of ±5% dose difference or 3 mm distance to agreement. These results show that in a heterogeneous environment a 3D pencil-beam superposition-convolution algorithm with Monte Carlo pre-calculated scatter kernels, such as AAA, is able to reliably calculate dose, accounting for increased lateral scattering due to the loss of electronic equilibrium in low-density media. The data for high-energy plans (15 MV and 18 MV) showed very good tumor coverage, in contrast to findings by other investigators for less sophisticated dose calculation algorithms, which demonstrated lower than expected tumor doses and generally worse tumor coverage for high-energy plans compared to 6 MV plans. This demonstrates that the modern superposition-convolution AAA algorithm is a significant improvement over previous algorithms and is able to calculate doses accurately for SBRT treatment plans in the highly heterogeneous environment of the thorax for both lower (≤12 MV) and higher (greater than 12 MV) beam energies.
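The acceptance criterion quoted above is a composite test: a point passes if either the dose difference is within ±5% or the measured dose is found within 3 mm (distance to agreement). The sketch below is a deliberately simplified 1D illustration of that logic; the profiles, grid and tolerances are made-up placeholders, not the study's gamma-analysis implementation.

```python
# Simplified 1D sketch of a composite dose-difference / distance-to-agreement test
# (±5% dose OR 3 mm DTA). Profiles and tolerances are illustrative placeholders.
import numpy as np

def point_passes(x_m, d_m, x_calc, d_calc, dose_tol=0.05, dta_mm=3.0):
    """Return True if a measured point agrees with the calculated profile."""
    # Dose-difference test at the nearest calculated position.
    i = np.argmin(np.abs(x_calc - x_m))
    if abs(d_calc[i] - d_m) <= dose_tol * max(d_m, 1e-9):
        return True
    # Distance-to-agreement test: is the measured dose level reached within dta_mm?
    window = d_calc[np.abs(x_calc - x_m) <= dta_mm]
    return window.min() <= d_m <= window.max()

x = np.linspace(0.0, 100.0, 201)                    # positions in mm
calc = 2.0 * np.exp(-((x - 50.0) / 20.0) ** 2)      # calculated dose profile (Gy)
meas = calc * (1 + 0.02 * np.random.randn(x.size))  # noisy "measured" profile
pass_rate = np.mean([point_passes(xm, dm, x, calc) for xm, dm in zip(x, meas)])
print(f"Points passing: {100 * pass_rate:.1f} %")
```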

Relevance:

30.00%

Publisher:

Abstract:

Prostate cancer is the most prevalent cancer amongst men in the Western world and, despite having a relatively high survival rate, is the second leading cause of cancer death in this sector of the population. The treatment of choice against prostate cancer is, in most cases, external beam radiation therapy. The most modern techniques of external radiotherapy, such as intensity-modulated radiotherapy, allow increasing the dose to the tumor whilst reducing the dose to healthy tissue. However, the location of the target volume varies with the day of treatment, and very small movements of the organs are enough to pull parts of the target volume outside the therapeutic region, or to introduce critical healthy tissues inside it. Advanced techniques, such as image-guided radiotherapy (IGRT), have been developed to avoid this. IGRT is defined by a more precise handling of internal movements by adapting treatment planning based on the anatomical information obtained from computed tomography (CT) images prior to the therapy session. Moreover, adaptive radiotherapy adds the dosimetric information of previous fractions to the anatomical information. One of the fundamentals of adaptive radiotherapy is deformable image registration, very useful when modeling the displacements and deformations of the internal organs. However, its use brings new scientific and technological challenges in image processing, mainly associated with the variability of the organs, both in location and appearance.
The aim of this thesis is to improve the clinical processes of automatic contour delineation and cumulative dose calculation for planning and monitoring of adaptive radiotherapy treatments, based on new methods of CT image processing (1) in the presence of varying contrasts, and (2) rectum appearance changes. It also aims (3) to provide tools for assessing the quality of the contours obtained in the case of the gross tumor volume (GTV). The main contributions of this PhD thesis are as follows:
1. The adaptation, implementation and evaluation of a registration algorithm based on the optical flow of the image phase as a tool for the calculation of non-rigid transformations in the presence of intensity changes, and its applicability to adaptive radiotherapy treatment in prostate cancer with use of radiological contrast agents. The results demonstrate that the selected algorithm shows better qualitative results in the presence of radiological contrast agents in the urinary bladder, and does not distort the image by forcing unrealistic deformations.
2. The definition, development and validation of a new method for masking the contents of the rectum (MER, Spanish acronym), and the assessment of its impact on the adaptive radiotherapy process in prostate cancer. The segmentations obtained by the MER for the creation of homogeneous masks in the session CT images significantly improve the results of registration algorithms in the rectal region. Thus, the use of the proposed methodology increases the volume overlap index between manual and automatic contours of the rectum to a value of 89%, close to the results obtained using manual masks for the registration of the two images. In this way, both the calculation of the new contours and the calculation of the accumulated dose can be corrected.
3. The definition of a methodology for assessing the quality of the contours of the GTV, which allows the representation of the spatial distribution of the error and adapts it to non-convex volumes such as the one formed by the prostate and seminal vesicles. This evaluation methodology, based on a new three-dimensional reconstruction algorithm and a new quantification metric, yields accurate results with high spatial resolution in a time that is negligible compared to the registration time. This new approach may be a useful tool to compare different deformable registration algorithms oriented to adaptive radiotherapy in prostate cancer.
In conclusion, this PhD thesis corroborates the postulated research hypotheses, and is intended to serve as a foundation for future advances in medical image processing for adaptive radiotherapy treatments in prostate cancer. In addition, it opens new lines of future application of medical image processing methods aimed at improving adaptive radiotherapy processes in the presence of organ appearance changes, and at increasing patient safety.
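The 89% figure above refers to a volume overlap index between manual and automatic rectum contours. One common overlap measure over binary masks is the Dice coefficient; the sketch below illustrates it with made-up arrays standing in for real segmentations (the thesis does not specify this exact formula, so treat it as an assumption).

```python
# Minimal sketch of a volume overlap index between two binary segmentation masks,
# here the Dice coefficient; the masks are placeholders, not real contours.
import numpy as np

def dice_overlap(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((64, 64, 64), dtype=bool)
manual[20:40, 20:40, 20:40] = True            # placeholder "manual" contour
auto = np.roll(manual, shift=2, axis=0)       # placeholder "automatic" contour
print(f"Overlap index: {dice_overlap(manual, auto):.2f}")
```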

Relevance:

30.00%

Publisher:

Abstract:

Background Magnetoencephalography (MEG) provides a direct measure of brain activity with high combined spatiotemporal resolution. Preprocessing is necessary to reduce contributions from environmental interference and biological noise. New method The effect of different preprocessing techniques on the signal-to-noise ratio is evaluated. The signal-to-noise ratio (SNR) was defined as the ratio between the mean signal amplitude (evoked field) and the standard error of the mean over trials. Results Recordings from 26 subjects obtained during an event-related visual paradigm with an Elekta MEG scanner were employed. Two methods were considered as first-step noise reduction: Signal Space Separation and temporal Signal Space Separation, which decompose the signal into components with origin inside and outside the head. Both algorithms increased the SNR by approximately 100%. Epoch-based methods, aimed at identifying and rejecting epochs containing eye blinks, muscular artifacts and sensor jumps, provided an SNR improvement of 5–10%. The decomposition methods evaluated were independent component analysis (ICA) and second-order blind identification (SOBI). The increase in SNR was about 36% with ICA and 33% with SOBI. Comparison with existing methods No previous systematic evaluation of the effect of the typical preprocessing steps on the SNR of the MEG signal has been performed. Conclusions The application of either SSS or tSSS is mandatory in Elekta systems. No significant differences were found between the two. While epoch-based methods have been routinely applied, the less often considered decomposition methods were clearly superior, and therefore their use seems advisable.
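As a reference for the SNR definition used above (mean evoked amplitude divided by the standard error of the mean over trials), here is a minimal sketch on a synthetic trials array; the data shape and values are assumptions for illustration, not MEG recordings.

```python
# Minimal sketch of the SNR definition quoted above: ratio of the mean evoked
# amplitude to the standard error of the mean across trials (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 120, 300
evoked_true = np.sin(np.linspace(0, np.pi, n_times))        # placeholder evoked field
trials = evoked_true + rng.normal(0.0, 1.0, (n_trials, n_times))

evoked = trials.mean(axis=0)                                  # mean over trials
sem = trials.std(axis=0, ddof=1) / np.sqrt(n_trials)          # standard error of the mean
snr = np.abs(evoked) / sem                                    # SNR per time sample
print(f"Peak SNR: {snr.max():.1f}")
```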

Relevance:

30.00%

Publisher:

Abstract:

PAMELA (Phased Array Monitoring for Enhanced Life Assessment) SHM™ System is an integrated embedded system based on ultrasonic guided waves, consisting of several electronic devices and one system manager controller. The data collected by all PAMELA devices in the system must be transmitted to the controller, which is responsible for carrying out the advanced signal processing to obtain SHM maps. PAMELA devices consist of hardware based on a Virtex 5 FPGA with a PowerPC 440 running an embedded Linux distribution. Therefore, PAMELA devices, in addition to the capability of performing tests and transmitting the collected data to the controller, have the capability of performing local data processing or pre-processing (reduction, normalization, pattern recognition, feature extraction, etc.). Local data processing decreases the data traffic over the network and allows the CPU load of the external computer to be reduced. PAMELA devices can even run autonomously, performing scheduled tests and communicating with the controller only when structural damage is detected or when programmed to do so. Each PAMELA device integrates a software management application (SMA) that allows the developer to download their own algorithm code and add new data processing algorithms to the device. The development of the SMA is done in a virtual machine with an Ubuntu Linux distribution including all necessary software tools to perform the entire development cycle. The Eclipse IDE (Integrated Development Environment) is used to develop the SMA project and to write the code of each data processing algorithm. This paper presents the developed software architecture and describes the necessary steps to add new data processing algorithms to the SMA in order to increase the processing capabilities of PAMELA devices. An example of basic damage index estimation using the delay-and-sum algorithm is provided.
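The paper's example is a damage index based on delay-and-sum. Below is a generic delay-and-sum imaging sketch for one actuator and a few sensors; the geometry, wave speed and signals are made-up placeholders rather than PAMELA data or code.

```python
# Generic delay-and-sum sketch: for each image pixel, sum the envelope of each
# actuator-sensor signal at the time of flight actuator -> pixel -> sensor.
# Geometry, wave speed and signals are illustrative placeholders.
import numpy as np

fs = 1.0e6                 # sampling rate [Hz]
c = 5000.0                 # assumed guided-wave group velocity [m/s]
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])  # positions [m]
actuator = np.array([0.25, 0.0])
signals = np.abs(np.random.randn(len(sensors), 2000))   # placeholder signal envelopes

xs = np.linspace(0.0, 0.5, 101)
ys = np.linspace(0.0, 0.5, 101)
image = np.zeros((ys.size, xs.size))

for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        pixel = np.array([x, y])
        for sensor, sig in zip(sensors, signals):
            tof = (np.linalg.norm(actuator - pixel) + np.linalg.norm(pixel - sensor)) / c
            idx = int(round(tof * fs))
            if idx < sig.size:
                image[iy, ix] += sig[idx]    # delay-and-sum accumulation
```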

Relevance:

30.00%

Publisher:

Abstract:

This work presents the development of a methodology for obtaining a universe of Green's functions, and the corresponding algorithm, to estimate tsunami wave heights along the west coast of Mexico as a function of the seismic moment and of the extent of the rupture area of interplate earthquakes located between the coast and the Middle America Trench. Taking as a case study the earthquake that occurred on October 9, 1995 on the Jalisco-Colima coast, the hydrodynamic effects of the tsunami generated in the Port of Manzanillo, Mexico were studied, with a methodology comprising the following steps. The first step was the application of the tsunami inverse method to constrain the parameters of the seismic source through the creation of a universe of Green's functions for the west coast of Mexico. Both the seismic moment and the location and extent of the earthquake rupture area are prescribed on fault-plane segments of 30 x 30 km. To each of these fault-plane segments corresponds a set of Green's functions located on the 100 m isobath, at 172 locations along the coast, separated on average by 12 km from each other.
The second step was the study of the hydrodynamics (speed and direction of the currents and sea levels inside the port, and the run-up on Las Brisas beach) caused by the tsunami, which was studied in a fixed-bed hydraulic model and in a numerical model, representing a synthetic tsunami at a depth of 34 m as the initial condition, which was propagated to the coast as a solitary wave signal. Based on the resulting hydrodynamics of the port of Manzanillo, a risk analysis was carried out to define the operating conditions of the port in terms of the velocities inside and outside the port; taking the initial conditions of the 1995 Manzanillo earthquake and tsunami as a starting point, the limiting operating conditions for ships inside and outside the port were defined.
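The Green's function approach implies a linear superposition of precomputed subfault responses. A schematic form of that estimate is shown below; the symbols (subfault weights m_j, responses G_j) are assumed notation for illustration, not the authors' own.

```latex
% Schematic linear superposition over fault-plane segments (assumed notation):
% the tsunami waveform at coastal location x is a weighted sum of precomputed
% Green's functions G_j, with weights m_j set by the moment/slip on segment j.
\eta(x, t) \;\approx\; \sum_{j=1}^{N} m_j \, G_j(x, t)
```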

Relevance:

30.00%

Publisher:

Abstract:

The objective of this paper is to develop a method to hide information inside a binary image. An algorithm to embed data in scanned text or figures is proposed, based on the detection of suitable pixels that satisfy certain conditions so that the embedding is not detectable. In broad terms, the algorithm locates pixels placed at the contours of the figures or in areas where some scattering of the two colors can be found. The hidden information is independent of the values of the pixels where it is embedded. Notice that, depending on the sequence of bits to be hidden, around half of the pixels used to hold data bits will not be modified. The other basic characteristic of the proposed scheme is that the modified bits must be taken into consideration in order to perform the recovery process of the information, which consists of recovering the sequence of bits placed in the proper positions. An application to the banking sector is proposed for hiding information in signatures.
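The abstract describes the scheme only at this level of detail, so the sketch below illustrates just the pixel-selection idea: marking contour pixels of a binary image whose neighbourhood contains both colors. The 3x3 neighbourhood rule is an assumption for illustration, not the authors' exact conditions.

```python
# Sketch of the pixel-selection step: mark pixels of a binary image that have at least
# one neighbour of the opposite color (contour / "scattered" areas). The rule is an
# illustrative assumption, not the exact condition from the paper.
import numpy as np

def candidate_pixels(img: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels with at least one opposite-color neighbour."""
    img = img.astype(bool)
    padded = np.pad(img, 1, mode="edge")
    mask = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = padded[1 + dy: 1 + dy + img.shape[0],
                               1 + dx: 1 + dx + img.shape[1]]
            mask |= neighbour != img          # neighbour has the opposite color
    return mask

binary_image = np.zeros((32, 32), dtype=bool)
binary_image[8:24, 8:24] = True               # a white square on a black background
print(candidate_pixels(binary_image).sum(), "candidate pixels")
```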

Relevance:

30.00%

Publisher:

Abstract:

Evacuation route planning is a fundamental task for building engineering projects. Safety regulations are established so that all occupants are guided out of a building to a secure place in time when faced with an emergency situation. As an example, the Spanish building code requires the planning of evacuation routes in large and, usually, public buildings. Engineers often plan these routes on single building projects, repeatedly assigning clusters of rooms to each emergency exit in a trial-and-error process. But problems may arise for a building complex where changes in distribution and use make visual analysis cumbersome and sometimes unfeasible. This problem could be solved by using well-known spatial analysis techniques, implemented as specialized software able to partially emulate engineers' reasoning. In this paper we propose and test an easily reproducible methodology that makes use of free and open source software components for solving a case study. We ran a complete test on a building floor at the University of Alicante (Spain). This institution offers a web service (WFS) that allows retrieval of 2D geometries from any building within its campus. We demonstrate how geospatial technologies and computational geometry algorithms can be used to automate the creation and optimization of evacuation routes. In our case study, the engineers' task is to verify that the load capacity of each emergency exit does not exceed the standards specified by Spain's current regulations. Using Dijkstra's algorithm, we obtain the shortest paths from every room to the most appropriate emergency exit. Once these paths are calculated, engineers can run simulations and validate, based on path statistics, different cluster configurations. Techniques and tools applied in this research would be helpful in the design and risk management phases of any complex building project.
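As a minimal illustration of the routing step, the sketch below runs Dijkstra's algorithm on a small hand-made room graph using only the standard library; the node names, edge lengths and exit set are placeholders, not data from the Alicante case study.

```python
# Minimal Dijkstra sketch: shortest walking distance from every room to the nearest
# emergency exit on a toy graph (edge weights in metres; all values are placeholders).
import heapq

graph = {                       # adjacency list: node -> [(neighbour, distance_m)]
    "room_a": [("corridor", 4.0)],
    "room_b": [("corridor", 6.0)],
    "corridor": [("room_a", 4.0), ("room_b", 6.0), ("exit_1", 10.0), ("exit_2", 15.0)],
    "exit_1": [("corridor", 10.0)],
    "exit_2": [("corridor", 15.0)],
}

def dijkstra(graph, source):
    """Return shortest distances from source to every reachable node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for neighbour, w in graph[node]:
            nd = d + w
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

for room in ("room_a", "room_b"):
    dist = dijkstra(graph, room)
    nearest = min(("exit_1", "exit_2"), key=lambda e: dist.get(e, float("inf")))
    print(room, "->", nearest, f"{dist[nearest]:.0f} m")
```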