978 results for Processing technique


Relevance:

30.00%

Publisher:

Abstract:

Within the framework of cost-effective patterning processes, a novel technique that saves photolithographic processing steps and is easily scalable to wide-area production is proposed. It consists of a tip probe that is biased with respect to a conductive substrate and slides on it, keeping contact with the material. The sliding tip leaves an insulating path (currently as narrow as 30 μm) across the material, which enables the drawing of tracks and pads electrically insulated from their surroundings. This ablation method, called arc-erosion, requires an experimental setup that had to be customized for this purpose and is described here. Based on instrumental monitoring, a brief proposal of the physics behind this process is also presented. As a result, optimal control of the patterning process has been achieved. The system has been used on different substrates, including indium tin oxide either on glass or on polyethylene terephthalate, as well as metals such as Au/Cr and Al. The influence of conditions such as tip speed and applied voltage is discussed.

Relevance:

30.00%

Publisher:

Abstract:

One important task in the design of an antenna is to carry out an analysis to find the characteristics of the antenna that best fulfil the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetic absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be used independently of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms that improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been carried out in order to give a general vision of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, no additional measurements are required. Noise is the most widely studied error in this Thesis; a total of three alternatives are proposed to filter out an important part of the noise contribution before obtaining the far-field pattern. The first one is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near field, where it is possible to apply spatial filtering. The last one is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also apply spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise-ratio improvement achieved in each case.
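The modal-filtering alternative lends itself to a compact illustration. Below is a minimal Python sketch of the idea for a planar acquisition, assuming a regularly sampled grid: the measured field is transformed to its plane-wave spectrum, the modes outside the visible region (which carry no propagating information, only noise) are discarded, and the field is transformed back. Grid size, spacing and function names are illustrative, not the Thesis's actual implementation.

```python
import numpy as np

def modal_filter_planar(E, dx, dy, wavelength):
    """Suppress non-propagating (evanescent) modal content in a planar
    near-field scan by filtering its plane-wave spectrum.

    E: complex 2-D array of sampled tangential field values.
    dx, dy: sample spacings in metres; wavelength in metres.
    """
    k0 = 2 * np.pi / wavelength
    nx, ny = E.shape
    # Plane-wave spectrum of the measured field.
    spectrum = np.fft.fft2(E)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    # Only modes with kx^2 + ky^2 <= k0^2 propagate to the far field;
    # anything outside this visible region is noise and can be removed.
    visible = KX**2 + KY**2 <= k0**2
    return np.fft.ifft2(spectrum * visible)

# Example call with synthetic data: a 128 x 128 scan sampled at lambda/2.
wavelength = 0.03  # 10 GHz
E_noisy = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
E_clean = modal_filter_planar(E_noisy, wavelength / 2, wavelength / 2, wavelength)
```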
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify, and later suppress, the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, making easier its identification within the measurement environment and its later substitution; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
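The truncation-error method belongs to the family of iterative band-limited extrapolation algorithms. A minimal 1-D sketch of that family (in the spirit of the Papoulis-Gerchberg algorithm, which the Thesis's method refines with a studied termination point) is given below; the masks and iteration count are illustrative assumptions, not the Thesis's actual parameters.

```python
import numpy as np

def extrapolate_pg(signal, known, band, n_iter=50):
    """Papoulis-Gerchberg-style extrapolation of a band-limited signal.

    signal: 1-D complex array, reliable only where `known` is True.
    known:  boolean mask of reliable samples (the un-truncated region).
    band:   boolean mask of spectral bins the signal may occupy.
    """
    x = np.where(known, signal, 0.0)
    for _ in range(n_iter):
        # Enforce the band limitation in the spectral domain...
        X = np.fft.fft(x) * band
        x = np.fft.ifft(X)
        # ...and restore the reliable samples in the spatial domain.
        x = np.where(known, signal, x)
    return x
```

A fixed iteration count is used here for simplicity; the Thesis instead studies a proper termination criterion for the iteration.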

Relevance:

30.00%

Publisher:

Abstract:

Laser shock processing (LSP) is increasingly being applied as an effective technology for improving the surface properties of metallic materials in different types of components, as a means of enhancing their corrosion and fatigue life behavior. As reported in previous contributions by the authors, a main effect resulting from the application of the LSP technique is the generation of relatively deep compressive residual stress fields in metallic alloy pieces, enabling improved mechanical behaviour, specifically improved life of the treated specimens against wear, crack growth and stress corrosion cracking. Additional results obtained by the authors in the practical development of the LSP technique at an experimental level (aiming at its integral assessment from an interrelated theoretical and experimental point of view) are presented in this paper. Concretely, follow-on experimental results on the residual stress profiles and associated surface-property modifications successfully achieved in typical materials (especially Al and Ti alloys) under different LSP irradiation conditions are presented, along with a correlated practical analysis of the protective character of the residual stress profiles obtained under different irradiation strategies and an evaluation of the corresponding induced properties, such as specific volume reduction at the surface, microhardness and wear resistance. Additional remarks on the advantage of the LSP technique over the traditional "shot peening" technique in terms of the depth of the induced compressive residual stress fields are also made throughout the paper.

Relevance:

30.00%

Publisher:

Abstract:

Laser shock processing (LSP) is increasingly being applied as an effective technology for improving the mechanical properties of metallic materials in different types of components, as a means of enhancing their fatigue life behavior. As reported in previous contributions by the authors, a main effect resulting from the application of the LSP technique is the generation of relatively deep compressive residual stress fields in metallic components, enabling improved mechanical behaviour, specifically improved life of the treated specimens against wear, crack growth and stress corrosion cracking. Additional results obtained by the authors in the practical development of the LSP technique at an experimental level (aiming at its integral assessment from an interrelated theoretical and experimental point of view) are presented in this paper. Concretely, experimental results on the residual stress profiles and associated mechanical-property modifications successfully achieved in typical materials under different LSP irradiation conditions are presented. In this case, the specific behavior of AISI 316L, a material widely used in high-reliability components (especially in nuclear and biomedical applications), is analyzed, and the effect of possible "in-service" thermal conditions on the relaxation of the LSP effects is specifically characterized.

Relevance:

30.00%

Publisher:

Abstract:

Continuous and long-pulse lasers have been extensively used for the forming of metal sheets for macroscopic mechanical applications. However, for the manufacturing of Micro-Mechanical Systems (MMS), the applicability of such lasers is limited by the long relaxation time of the thermal fields responsible for the forming phenomena. As a consequence, the final sheet deformation state is attained only after a certain time, which makes the generated internal residual stress fields more dependent on ambient conditions and may hinder the subsequent assembly process. The use of short-pulse (ns) lasers provides a suitable parameter match for the laser forming of an important range of sheet components used in MMS. The short interaction time scale required for the predominantly mechanical (shock) induction of deformation residual stresses allows the successful processing of components in a medium range of miniaturization (particularly important given its frequent use in such systems). In the present paper, Laser Shock Micro-Forming (LSμF) is presented as an emerging technique for the shaping and adjustment of Microsystems parts, along with a discussion of its physical foundations and the practical implementation possibilities developed by the authors.

Relevance:

30.00%

Publisher:

Abstract:

Laser shock processing (LSP) is increasingly being applied as an effective technology for improving the mechanical and surface properties of metallic materials in different types of components, as a means of enhancing their corrosion and fatigue life behavior. As reported in previous contributions by the authors, a main effect resulting from the application of the LSP technique is the generation of relatively deep compressive residual stress fields in metallic alloy pieces, enabling improved mechanical behaviour, specifically improved life of the treated specimens against wear, crack growth and stress corrosion cracking. Additional results obtained by the authors in the practical development of the LSP technique at an experimental level (aiming at its integral assessment from an interrelated theoretical and experimental point of view) are presented in this paper. Concretely, follow-on experimental results on the residual stress profiles and associated surface-property modifications successfully achieved in typical materials (especially Al and Ti alloys characteristic of high-reliability components in the aerospace, nuclear and biomedical sectors) under different LSP irradiation conditions are presented, along with a correlated practical analysis of the protective character of the residual stress profiles obtained under different irradiation strategies. Additional remarks on the advantage of the LSP technique over the traditional "shot peening" technique in terms of the depth of the induced compressive residual stress fields are also made throughout the paper.

Relevance:

30.00%

Publisher:

Abstract:

Evolvable Hardware (EH) is a technique that consists of using reconfigurable hardware devices whose configuration is controlled by an Evolutionary Algorithm (EA). Our system is a scalable EH platform implemented entirely on an FPGA, in which the Reconfigurable processing Core (RC) can adaptively increase or decrease in size. Figure 1 shows the architecture of the proposed System-on-Programmable-Chip (SoPC), consisting of a MicroBlaze processor responsible for controlling the whole system operation, a Reconfiguration Engine (RE), and a Reconfigurable processing Core able to change its size in both height and width. This system is used to implement image filters, which are generated autonomously thanks to the evolutionary process. The system is complemented with a camera that enables the use of the platform for real-time applications.
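As a rough software analogue of the evolutionary loop such a platform runs, the sketch below evolves a 3x3 convolution kernel with a simple (1+1) evolutionary strategy against a reference image. In the real SoPC the genome would describe the RC configuration rather than kernel weights, so all names and parameters here are illustrative assumptions, not the platform's actual algorithm.

```python
import numpy as np
from scipy.ndimage import convolve

def fitness(kernel, noisy, reference):
    """Lower is better: distance between the filtered image and the target."""
    filtered = convolve(noisy, kernel.reshape(3, 3), mode="nearest")
    return np.abs(filtered - reference).mean()

def evolve_filter(noisy, reference, generations=200, mu=0.1, rng=None):
    """(1+1) evolutionary strategy over a 3x3 convolution kernel."""
    rng = rng or np.random.default_rng(0)
    parent = rng.normal(0, 0.5, 9)
    best = fitness(parent, noisy, reference)
    for _ in range(generations):
        child = parent + rng.normal(0, mu, 9)   # mutate every gene
        f = fitness(child, noisy, reference)
        if f <= best:                            # keep the better individual
            parent, best = child, f
    return parent.reshape(3, 3), best

# Toy usage: evolve a filter that denoises a noisy ramp image.
rng = np.random.default_rng(1)
reference = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = reference + rng.normal(0, 0.1, reference.shape)
kernel, err = evolve_filter(noisy, reference)
```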

Relevance:

30.00%

Publisher:

Abstract:

Foliage Penetration (FOPEN) radar systems were introduced in the 1960s and have been constantly improved by several organizations since that time. The use of Synthetic Aperture Radar (SAR) approaches for this application has important advantages, due to the need for high resolution in two dimensions. The design of this type of system, however, includes some complications that are not present in standard SAR systems. FOPEN SAR systems need to operate at a low central frequency (VHF or UHF bands) in order to penetrate the foliage, and high bandwidth is also required to obtain high resolution. Due to the low central frequency, large integration angles are required during SAR image formation, and therefore the Range Migration Algorithm (RMA) is used. This thesis identifies the three main complications that arise from these requirements. First, a high fractional bandwidth makes narrowband propagation models no longer valid. Second, the VHF and UHF bands are used by many communication systems, so the transmitted signal spectrum needs to be notched to avoid interfering with them. Third, those communication systems cause Radio Frequency Interference (RFI) in the received signal. The thesis carries out a thorough analysis of the three problems, their degrading effects and possible solutions to compensate for them. The ultra-wideband (UWB) model is applied to the SAR signal, and the degradation induced by it is derived. The result is tested through simulation of both a single-pulse stretch processor and the complete RMA image formation. Both methods show that the degradation is negligible, and therefore the UWB propagation effect does not need compensation. A technique for designing a notched transmitted signal is then derived, and its effect on the SAR image formation is evaluated analytically. It is shown that the stretch processor introduces a processing gain that reduces the degrading effects of the notches. The degradation remaining after the processing gain is assessed through simulation, and an experimental graph of degradation as a function of the percentage of nulled frequencies is obtained. The RFI is characterized and its effect on the SAR processor is derived. Once again, a processing gain is found to be introduced by the receiver. As the RFI power can be much higher than that of the desired signal, an algorithm is proposed to remove the RFI from the received signal before RMA processing. This algorithm is a modification of the Chirp Least Squares Algorithm (CLSA) explained in [4], adapting it to deramped signals. The algorithm is derived analytically and its performance is then evaluated through simulation, showing that it is effective in removing the RFI and in reducing the degradation caused by both RFI and notching. Finally, conclusions are drawn as to the importance of each of these problems in SAR system design.
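The spectral notching step can be illustrated compactly. The sketch below zeroes the bins of a baseband linear-FM pulse that fall inside protected communication bands; the sample rate, chirp parameters and band list are illustrative, not the thesis's actual waveform design technique.

```python
import numpy as np

def notch_waveform(waveform, fs, protected_bands):
    """Return a copy of `waveform` with its spectrum nulled inside each
    (f_low, f_high) interval, so the radar does not interfere with
    communication systems sharing the VHF/UHF bands."""
    spectrum = np.fft.rfft(waveform)
    freqs = np.fft.rfftfreq(len(waveform), d=1 / fs)
    for f_low, f_high in protected_bands:
        spectrum[(freqs >= f_low) & (freqs <= f_high)] = 0.0
    return np.fft.irfft(spectrum, n=len(waveform))

# Illustrative baseband LFM pulse: 0-100 MHz over 10 us, sampled at 250 MHz,
# with two made-up busy bands nulled out.
fs, T = 250e6, 10e-6
t = np.arange(0, T, 1 / fs)
chirp = np.cos(np.pi * (100e6 / T) * t**2)   # instantaneous freq sweeps 0-100 MHz
notched = notch_waveform(chirp, fs, [(20e6, 24e6), (60e6, 61e6)])
```

Hard-zeroing bins is the simplest choice; the thesis evaluates how much of the resulting image degradation the stretch processor's processing gain absorbs.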

Relevance:

30.00%

Publisher:

Abstract:

Laser shock processing (LSP) is increasingly being applied as an effective technology for improving the mechanical properties of metallic materials in different types of components, as a means of enhancing their fatigue life behavior. As reported in previous contributions by the authors, a main effect resulting from the application of the LSP technique is the generation of relatively deep compressive residual stress fields in metallic components, enabling improved mechanical behaviour, specifically improved life of the treated specimens against wear, crack growth and stress corrosion cracking. Additional results obtained by the authors in the practical development of the LSP technique at an experimental level (aiming at its integral assessment from an interrelated theoretical and experimental point of view) are presented in this paper. Concretely, experimental results on the residual stress profiles and associated mechanical-property modifications successfully achieved in typical materials under different LSP irradiation conditions are presented. In this case, the specific behavior of AISI 316L, a material widely used in high-reliability components (especially in nuclear and biomedical applications), is analyzed, and the effect of possible "in-service" thermal conditions on the relaxation of the LSP effects is specifically characterized.

Relevance:

30.00%

Publisher:

Abstract:

Laser Shock Processing (LSP) has been demonstrated as an emerging technique for the induction of residual stress (RS) fields in subsurface layers of relatively thick specimens. However, the LSP treatment of relatively thin specimens brings, as an additional consequence, possible bending in a process of laser shock forming. This effect poses a new class of problems regarding the attainment of specified RS depth profiles in such sheets and, what can be more critical, an overall deformation of the treated component. This paper addresses the analysis of LSP treatments intended to induce tentatively through-thickness RS fields for fatigue life enhancement in relatively thin sheets, in a way compatible with reduced overall workpiece deformation due to spring-back self-equilibration. The coupled theoretical-experimental predictive approach developed by the authors has been applied to the specification of LSP treatments for achieving RS fields tentatively able to retard crack propagation in normalized specimens. The convergence between numerical code results and experimental results from direct RS measurements is presented as a first step towards the treatment of the normalized specimens under optimized conditions and the verification of the virtually induced crack-retardation properties.

Relevance:

30.00%

Publisher:

Abstract:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms need to be parallel, so that they can run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPUs). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usual in 3D TV. Using a backward-mapping approach with a depth-inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. In addition, we show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes with multimodal backgrounds, but have not been widely used because of their high computational and memory cost. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel-bandwidth estimation for the background model, selective update of the background model, update of the positions of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce the computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for approximating arbitrarily complex functions using continuous piecewise-linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal features a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function that minimizes the approximation error.
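The texture-filtering approximation admits a simple CPU analogue: sample the function uniformly and let linear interpolation between samples stand in for the GPU's texture filtering unit. The sketch below, with an illustrative test function and sample counts, shows the resulting piecewise-linear approximation and its maximum error. Note it uses a uniform partition; the thesis's quasi-optimal partition would do better for the same sample budget.

```python
import numpy as np

def make_pwl_approx(f, a, b, n_samples):
    """Approximate f on [a, b] by n_samples uniform samples plus linear
    interpolation -- the operation a GPU texture filtering unit performs
    for free on a 1-D texture."""
    xs = np.linspace(a, b, n_samples)
    ys = f(xs)
    return lambda x: np.interp(x, xs, ys)

# Maximum approximation error vs. number of samples for a test function.
f = lambda x: np.exp(-x) * np.sin(5 * x)
x_test = np.linspace(0.0, np.pi, 100_000)
for n in (16, 64, 256):
    approx = make_pwl_approx(f, 0.0, np.pi, n)
    print(n, np.max(np.abs(f(x_test) - approx(x_test))))
```

For a smooth function the error of this scheme decreases roughly quadratically with the number of samples, which is the kind of bound the thesis's error analysis formalizes.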

Relevance:

30.00%

Publisher:

Abstract:

On-line partial discharge (PD) measurements have become a common technique for assessing the insulation condition of installed high voltage (HV) insulated cables. When on-line tests are performed in noisy environments, or when more than one source of pulse-shaped signals is present in a cable system, it is difficult to perform accurate diagnoses. In these cases, an adequate selection of the non-conventional measuring technique and the implementation of effective signal processing tools are essential for a correct evaluation of the insulation degradation. Once a specific noise rejection filter is applied, many signals can be identified as potential PD pulses; therefore, a classification tool is required to discriminate among the PD sources involved. This paper proposes an efficient method for the classification of PD signals and pulse-type noise interferences measured in power cables with high-frequency current transformer (HFCT) sensors. By using a signal-feature generation algorithm, representative parameters associated with the waveform of each acquired pulse are calculated so that the pulses can be separated into different clusters. The efficiency of the proposed clustering technique is demonstrated through an example with three different PD sources and several pulse-shaped interferences measured simultaneously in a cable system with an HFCT.
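As an illustration of the feature-plus-clustering idea, the sketch below maps each acquired pulse to two classic waveform descriptors (equivalent time width and equivalent bandwidth) and groups the pulses with k-means. The features and the clusterer are stand-ins chosen for brevity, not the paper's actual feature-generation algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def pulse_features(pulse, fs):
    """Map one acquired HFCT pulse to a small waveform descriptor."""
    pulse = pulse - pulse.mean()
    energy = np.sum(pulse**2)
    t = np.arange(len(pulse)) / fs
    t0 = np.sum(t * pulse**2) / energy                    # energy centroid
    t_eq = np.sqrt(np.sum((t - t0) ** 2 * pulse**2) / energy)
    spec = np.abs(np.fft.rfft(pulse)) ** 2
    freqs = np.fft.rfftfreq(len(pulse), d=1 / fs)
    f0 = np.sum(freqs * spec) / np.sum(spec)
    f_eq = np.sqrt(np.sum((freqs - f0) ** 2 * spec) / np.sum(spec))
    return [t_eq, f_eq]   # equivalent time width, equivalent bandwidth

def separate_sources(pulses, fs, n_sources):
    """Cluster pulses so each PD source or interference forms a group."""
    X = np.array([pulse_features(p, fs) for p in pulses])
    return KMeans(n_clusters=n_sources, n_init=10).fit_predict(X)
```

Pulses from a common source share propagation path and hence waveform shape, which is why they fall into compact clusters in this time-frequency feature plane.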

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we demonstrate the use of a video camera for measuring the frequency of small-amplitude vibration movements. The method is based on image acquisition and multilevel thresholding, and it only requires a video camera with a sufficiently high acquisition rate; no targets or auxiliary laser beams are necessary. Our proposal is accurate and robust. We demonstrate the technique with a pocket camera recording low-resolution videos with AVI-JPEG compression, measuring different objects that vibrate parallel or perpendicular to the optical sensor. Despite the low resolution and the noise, we are able to measure the main vibration modes of a tuning fork, a loudspeaker and a bridge. The results compare successfully with design parameters and with measurements taken with alternative devices.
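A minimal sketch of this processing chain, assuming OpenCV for frame acquisition: binarise each frame (a single threshold stands in for the paper's multilevel scheme), track the bright object's centroid, and take the FFT of the centroid trajectory. The file name and threshold are illustrative.

```python
import cv2
import numpy as np

def vibration_frequency(video_path, threshold=128):
    """Estimate the dominant vibration frequency of a bright object by
    thresholding each frame, tracking the object's centroid, and taking
    the FFT of the centroid trajectory."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    positions = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        ys, xs = np.nonzero(mask)
        if len(xs):
            positions.append(xs.mean())   # sub-pixel horizontal centroid
    cap.release()
    x = np.asarray(positions) - np.mean(positions)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

# Example: print(vibration_frequency("tuning_fork.avi"))
```

Averaging the thresholded pixel coordinates gives sub-pixel resolution, which is what lets low-resolution, compressed video still resolve small-amplitude vibration.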

Relevance:

30.00%

Publisher:

Abstract:

Magnetic fluid hyperthermia (MFH) is considered a promising therapeutic technique for the treatment of cancer, in which magnetic nanoparticles (MNPs) with superparamagnetic behavior generate mild temperatures under an AC magnetic field to selectively destroy abnormal cancer cells while sparing healthy ones. However, the poor heating efficiency of most MNPs and the imprecise experimental determination of the temperature field during treatment are two of the major drawbacks for its clinical advance. Thus, in this work, different MNPs were developed and tested under an AC magnetic field (~1.10 kA/m and 200 kHz), and the heat they generated was assessed with an infrared camera. The resulting thermal images were processed in MATLAB after the thermographic calibration of the infrared camera. The results show the potential of this thermal technique for the improvement and advance of MFH as a clinical therapy.
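A minimal NumPy sketch of this post-processing step (the paper used MATLAB): apply the thermographic calibration to each raw infrared frame and estimate the initial heating rate of a region of interest, from which the heating efficiency of the nanoparticles can be derived. Calibration coefficients, frame layout and fitting window are illustrative assumptions.

```python
import numpy as np

def calibrate(raw_frames, poly_coeffs):
    """Convert raw infrared counts to temperature (deg C) using the
    polynomial obtained from the thermographic calibration."""
    return np.polyval(poly_coeffs, raw_frames)

def heating_rate(frames, fps, roi):
    """Initial dT/dt (K/s) of a region of interest; combined with the
    sample's specific heat capacity this yields the heating efficiency
    (specific absorption rate, SAR ~ c * dT/dt)."""
    r0, r1, c0, c1 = roi
    T = frames[:, r0:r1, c0:c1].mean(axis=(1, 2))   # mean ROI temperature
    t = np.arange(len(T)) / fps
    n = min(len(T), int(30 * fps))                  # fit the first 30 s
    slope, _ = np.polyfit(t[:n], T[:n], 1)
    return slope

# Example with illustrative values: frames shape (n_frames, rows, cols).
# coeffs = [2.1e-3, 0.95, -12.0]          # hypothetical calibration polynomial
# T = calibrate(raw_frames, coeffs)
# dTdt = heating_rate(T, fps=10, roi=(40, 60, 40, 60))
```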

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND With increasing demand for umbilical cord blood units (CBUs) with total nucleated cell (TNC) counts of more than 150 × 10^7, preshipping assessment is mandatory. Umbilical cord blood processing requires aseptic techniques and laboratories with specific air quality and cleanliness. Our aim was to establish a fast and efficient method for determining TNC counts at the obstetric ward without exposing the CBU to the environment. STUDY DESIGN AND METHODS Data from a total of 151 cord blood donations at a single procurement site were included in this prospective study. We measured TNC counts in cord blood aliquots taken from the umbilical cord (TNC_cord), from the placenta (TNC_plac), and from a tubing segment of the sterile collection system (TNC_TS). TNC counts were compared to reference TNC counts in the CBU (TNC_CBU), which were ascertained at the cord blood bank. RESULTS TNC_TS counts (173 ± 33 × 10^7 cells, calculated for one unit) correlated fully with the TNC_CBU reference counts (166 ± 33 × 10^7 cells; Pearson's r = 0.97, p < 0.0001). In contrast, TNC_cord and TNC_plac counts were more disparate from the reference (r = 0.92 and r = 0.87, respectively). CONCLUSIONS A novel method of measuring TNC counts in tubing segments from the sterile cord blood collection system allows rapid and correct identification of CBUs with high cell numbers at the obstetric ward, without exposing cells to the environment. This approach may contribute to cost efficiency, as only CBUs with satisfactory TNC counts need to be shipped to the cord blood bank.
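The reported agreement is an ordinary Pearson correlation between the tubing-segment counts and the reference counts. A minimal sketch with made-up counts (not the study's data) shows how the correlation and the preshipping check could be computed.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative TNC counts in units of 1e7 cells -- not the study's data.
tnc_ts = np.array([150, 172, 195, 160, 188, 210, 145, 176])   # tubing segment
tnc_cbu = np.array([143, 165, 190, 152, 181, 204, 139, 170])  # bank reference

r, p = pearsonr(tnc_ts, tnc_cbu)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# Preselection rule at the obstetric ward: ship only units whose
# tubing-segment count meets the 150e7-cell demand threshold.
ship = tnc_ts >= 150
```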