978 results for Processing technique


Relevance:

60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Oral Rehabilitation (Pós-graduação em Reabilitação Oral) - FOAR

Relevance:

60.00%

Publisher:

Abstract:

Land seismic data are affected by irregularities of the measurement surface, e.g., topography. To obtain a high-resolution seismic image, these irregularities must therefore be corrected using seismic processing techniques, e.g., field and residual static corrections. The Common-Reflection-Surface (CRS) stack method is a new processing technique for simulating zero-offset (ZO) seismic sections from multi-coverage seismic data. The method is based on a second-order hyperbolic approximation of paraxial traveltimes referred to the (central) normal ray. For a planar measurement surface, the CRS stack operator depends on three parameters: the emergence angle of the normal ray, the curvature of the Normal-Incidence-Point (NIP) wave, and the curvature of the Normal (N) wave. In this paper, the 2-D ZO CRS stack method is modified to account for a measurement surface with smooth topography, still depending on these three parameters. With this new CRS formalism, a high-resolution ZO seismic section is obtained without applying static corrections, and the three relevant parameters of the CRS stacking process are estimated at every point of the section.
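For reference, a minimal sketch of the operator this formalism generalizes: in the standard CRS literature, the second-order hyperbolic ZO traveltime for a planar measurement surface is written in terms of the midpoint displacement Δx_m = x_m − x_0 and half-offset h as

```latex
t^{2}(\Delta x_m, h) = \left( t_0 + \frac{2\sin\alpha}{v_0}\,\Delta x_m \right)^{2}
                     + \frac{2\,t_0\cos^{2}\alpha}{v_0}
                       \left( \frac{\Delta x_m^{2}}{R_{N}} + \frac{h^{2}}{R_{\mathrm{NIP}}} \right)
```

where α is the emergence angle of the normal ray, R_NIP and R_N are the radii of curvature of the NIP and N waves, and v_0 is the near-surface velocity. The topography-dependent operator derived in the paper modifies this planar-surface form.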

Relevance:

60.00%

Publisher:

Abstract:

With the aim of gaining competitiveness in the international market and contributing to technological development in the country, this work presents the resin transfer molding (RTM) processing technique, used in the manufacture of structural composite materials and still little studied in Brazil. Composites processed by this technique exhibit a higher fiber volume fraction, better surface finish, and little or no need for finishing of the produced component. This work comprises the characterization of composites produced with the single-component epoxy resin RTM6 and non-crimp carbon-fiber fabric. The composites, produced by Hexcel Composites, were analyzed by ultrasonic C-scan, and the results showed that the processed laminates are homogeneous with respect to impregnation. Mechanical tests show that the fabric laminates have characteristics comparable to those of composites produced in an autoclave with higher reinforcement contents. In fatigue, the laminates exhibited a high, short life interval, with stresses close to the tensile strength. Regarding thermal behavior, an improvement in properties was observed with the addition of carbon-fiber reinforcement, which increased the glass transition temperature (Tg). Regarding viscoelastic behavior, the influence of temperature and frequency on the material was observed. Considering the mechanical and thermal properties, both composites were classified as suitable for the proposed application.

Relevance:

60.00%

Publisher:

Abstract:

Nanoscience aims at manipulating atoms, molecules and nano-size particles in a precise and controlled manner. Nano-scale control of the thin-film structures of organic/polymeric materials is a prerequisite for the fabrication of sophisticated functional devices. The work presented in this thesis is a compilation of various polymer thin films built from newly synthesized functional polymers. Cationic and anionic LC amphotropic polymers, and p-type and n-type semiconducting polymers with triarylamine, oxadiazole, thiadiazole and triazine moieties, are suitable materials for fabricating multilayers with a well-defined internal structure by layer-by-layer (LBL) self-assembly. LBL assembly is an ideal processing technique for preparing thin polymer film composites with fine control over morphology and composition at nano-scale thickness, which may find applications in photo-detectors, light-emitting diodes (LEDs), displays and sensors, as well as in solar cells. Multilayer build-up was investigated with the amphotropic LC polymers individually by solution-dipping and spin-coating methods; the films showed different internal order with respect to layering and orientation of the mesogens, as a result of the liquid crystalline phase. The synthesized p-type and n-type semiconducting polymers were examined optically and electrochemically, suggesting that they are promising as hole- (p-type) or electron- (n-type) transport materials in electronic and optoelectronic devices. In addition, we report successful film deposition of the polymers by vacuum deposition. The vapor deposition method provides a clean environment; it is solvent-free and well suited to sequential depositions in hetero-structured multilayer systems. As potential applications, the fabricated polymer thin films were used as simple electrochromic films and as hole-transporting layers in LEDs. Electrochemical and electrochromic characterization of the assembled films reveals that the newly synthesized polymers give rise to electrochromic films with high contrast ratios and fast switching. The LEDs with vacuum-deposited films show dramatic improvements in device characteristics, indicating that the films are promising as hole-transporting layers. These improvements result not only from the nano-scale film structures but also from the high charge-carrier mobility of the synthesized semiconducting polymers.

Relevance:

60.00%

Publisher:

Abstract:

Polylactic acid (PLA) is a bio-derived, biodegradable polymer with a number of mechanical properties similar to commodity plastics like polyethylene (PE) and polyethylene terephthalate (PETE). There has recently been great interest in using PLA to replace these typical petroleum-derived polymers because of the developing trend toward more sustainable materials and technologies. However, PLA's inherently slow crystallization behavior is not compatible with prototypical polymer processing techniques such as molding and extrusion, which in turn inhibits its widespread use in industrial applications. In order to make PLA a commercially viable material, it needs to be processed in such a way that its tendency to form crystals is enhanced. The industry standard for producing PLA products is twin screw extrusion (TSE), in which polymer pellets are fed into a heated extruder, mixed at a temperature above the melting temperature, and molded into a desired shape. A relatively novel processing technique called solid-state shear pulverization (SSSP) processes the polymer in the solid state so that nucleation sites can develop and fast crystallization can occur. SSSP has also been found to enhance the mechanical properties of a material, but its powder output form is undesirable in industry. A new process called solid-state/melt extrusion (SSME), developed at Bucknell University, combines the TSE and SSSP processes in one instrument. This technique has proven to produce moldable polymer products with increased mechanical strength. This thesis first investigated the effects of the TSE, SSSP, and SSME polymer processing techniques on PLA, seeking to determine the process that yields products with the most enhanced thermal and mechanical properties. For characterization, percent crystallinity, crystallization half-time, storage modulus, softening temperature, degradation temperature, and molecular weight were analyzed for all samples. Through these characterization techniques, it was observed that SSME-processed PLA had enhanced properties relative to TSE- and SSSP-processed PLA. Because of these findings, an optimization study for SSME-processed PLA was conducted in which throughput and screw design were varied. The optimization study determined that PLA processed at a low flow rate with a moderate screw design in an SSME process produced the largest increase in thermal properties and a high retention of polymer structure relative to TSE-, SSSP-, and all other SSME-processed PLA. It was concluded that the SSSP stage of processing cleaves polymer chains (chain scission), creating defects within the material, while the TSE stage allows these defects to be mixed thoroughly throughout the sample. The study showed that a proper SSME setup allows for both an increase in nucleation sites within the polymer and sufficient mixing, which in turn leads to the development of a large number of crystals in a short period of time.
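As a side note on the characterization step, percent crystallinity is typically computed from DSC melting and cold-crystallization enthalpies. The sketch below assumes the commonly cited literature value of about 93 J/g for the melting enthalpy of 100% crystalline PLA; the function and the example numbers are illustrative, not values from the thesis.

```python
# Percent crystallinity of PLA from DSC enthalpies: a minimal sketch.
# DH_100 = 93.0 J/g is the commonly cited melting enthalpy of 100%
# crystalline PLA (an assumed literature value, not from the thesis).

DH_100 = 93.0  # J/g

def percent_crystallinity(dh_melt, dh_cold_cryst, filler_mass_fraction=0.0):
    """Xc = (dH_m - dH_cc) / (dH_100 * (1 - w_filler)) * 100.

    dh_melt:       measured melting enthalpy (J/g)
    dh_cold_cryst: measured cold-crystallization enthalpy (J/g)
    """
    return 100.0 * (dh_melt - dh_cold_cryst) / (DH_100 * (1.0 - filler_mass_fraction))

# Hypothetical sample (illustrative numbers only)
print(f"{percent_crystallinity(38.2, 10.5):.1f} % crystalline")  # -> 29.8 %
```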

Relevance:

60.00%

Publisher:

Abstract:

Purpose: Respiratory motion causes substantial uncertainty in radiotherapy treatment planning. Four-dimensional computed tomography (4D-CT) is a useful tool for imaging tumor motion during normal respiration. Treatment margins can be reduced by targeting the motion path of the tumor. The expense and complexity of 4D-CT, however, may be cost-prohibitive at some facilities. We developed an image processing technique to produce images from cine CT that contain significant motion information without 4D-CT. The purpose of this work was to compare cine CT and 4D-CT for the purposes of target delineation and dose calculation, and to explore the role of PET in target delineation of lung cancer. Methods: To determine whether cine CT could substitute for 4D-CT for small mobile lung tumors, we compared target volumes delineated by a physician on cine CT and 4D-CT for 27 tumors with intrafractional motion greater than 1 cm. We assessed dose calculation by comparing dose distributions calculated on respiratory-averaged cine CT and respiratory-averaged 4D-CT using the gamma index. A threshold-based PET segmentation model of size, motion, and source-to-background ratio was developed from phantom scans and validated with 24 lung tumors. Finally, the feasibility of integrating cine CT and PET for contouring was assessed on a small group of larger tumors. Results: Cine CT to 4D-CT target volume ratios were 1.05±0.14 and 0.97±0.13 for high-contrast and low-contrast tumors, respectively, which was within intraobserver variation. Dose distributions on cine CT produced good agreement (< 2%/1 mm) with 4D-CT for 71 of 73 patients. The segmentation model fit the phantom data with R² = 0.96 and produced PET target volumes that matched CT better than 6 published methods (−5.15%). Application of the model to more complex tumors produced mixed results, and further research is necessary to adequately integrate PET and cine CT for delineation. Conclusions: Cine CT can be used for target delineation of small mobile lesions with minimal differences from 4D-CT. PET, utilizing the segmentation model, can provide additional contrast. Additional research is required to assess the efficacy of complex tumor delineation with cine CT and PET. Respiratory-averaged cine CT can substitute for respiratory-averaged 4D-CT for dose calculation with negligible differences.
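A minimal 1-D sketch of the gamma-index comparison (the 2%/1 mm criterion mentioned above); the clinical evaluation was presumably performed on full 3-D dose distributions, and the profiles below are synthetic.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dose_tol=0.02, dta_mm=1.0):
    """1-D global gamma index: fraction of reference points with gamma <= 1.

    dose_tol is relative to the maximum reference dose (global normalization);
    dta_mm is the distance-to-agreement criterion. A brute-force sketch of a
    2%/1 mm comparison, not the clinical 3-D implementation.
    """
    dose_ref = np.asarray(dose_ref, float)
    dose_eval = np.asarray(dose_eval, float)
    x = np.arange(dose_ref.size) * spacing_mm
    dd = dose_tol * dose_ref.max()
    gammas = np.empty(dose_ref.size)
    for i in range(dose_ref.size):
        dist_term = ((x - x[i]) / dta_mm) ** 2
        dose_term = ((dose_eval - dose_ref[i]) / dd) ** 2
        gammas[i] = np.sqrt(np.min(dist_term + dose_term))
    return np.mean(gammas <= 1.0)

# Example with two slightly shifted synthetic dose profiles
x = np.linspace(-30, 30, 121)                 # 0.5 mm spacing
ref = np.exp(-x**2 / 200.0)
ev = np.exp(-(x - 0.4)**2 / 200.0)            # 0.4 mm spatial shift
print(gamma_pass_rate(ref, ev, spacing_mm=0.5))  # -> 1.0 (all points pass)
```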

Relevance:

60.00%

Publisher:

Abstract:

No single processing technique is capable of optimally preserving all of the structural entities of cartilaginous tissue. Hence, the choice of methodology must be governed by the nature of the component that is targeted for analysis, for example, the fibrillar collagens or proteoglycans within the extracellular matrix, or the chondrocytes themselves. This article affords an insight into the pitfalls encountered when implementing the available techniques and how best to circumvent them. Adult articular cartilage is taken, pars pro toto, as representative of the different types found in the body. In mammals, this layer of tissue is a component of the synovial joints, wherein it fulfills crucial and diverse biomechanical functions. The biomechanical functions of articular cartilage have their structural and molecular correlates. During the natural course of postnatal development and after the onset of pathological processes such as osteoarthritis, the tissue undergoes structural changes which are intimately reflected in biomechanical modulations. The fine structural intricacies that subserve the changes in tissue function can be accurately assessed only if they are faithfully preserved at the molecular level. For this reason, careful consideration of the tissue-processing technique is indispensable. Since, as aforementioned, no single methodological tool is capable of optimally preserving all constituents, the approach must be pre-selected with a targeted structure in view. Guidance in this choice is offered.

Relevance:

60.00%

Publisher:

Abstract:

Arterial spin labeling (ASL) is a technique for noninvasively measuring cerebral perfusion using magnetic resonance imaging. Clinical applications of ASL include functional activation studies, evaluation of the effect of pharmaceuticals on perfusion, and assessment of cerebrovascular disease, stroke, and brain tumor. The use of ASL in the clinic has been limited by poor image quality when large anatomic coverage is required and by the time required for data acquisition and processing. This research sought to address these difficulties by optimizing the ASL acquisition and processing schemes. To improve data acquisition, optimal acquisition parameters were determined through simulations, phantom studies and in vivo measurements. The scan time for ASL data acquisition was limited to fifteen minutes to reduce potential subject motion. A processing scheme was implemented that rapidly produced regional cerebral blood flow (rCBF) maps with minimal user input. To provide a measure of the precision of the rCBF values produced by ASL, bootstrap analysis was performed on a representative data set. The bootstrap analysis of single gray and white matter voxels yielded coefficients of variation of 6.7% and 29%, respectively, implying that the calculated rCBF value is far more precise for gray matter than for white matter. Additionally, bootstrap analysis was performed to investigate the sensitivity of the rCBF data to the input parameters and to provide a quantitative comparison of several existing perfusion models. This study guided the selection of the optimal perfusion quantification model for further experiments. The optimized ASL acquisition and processing schemes were evaluated with two ASL acquisitions on each of five normal subjects. The gray-to-white matter rCBF ratios for nine of the ten acquisitions were within ±10% of 2.6, and none were statistically different from 2.6, the typical ratio produced by a variety of quantitative perfusion techniques. Overall, this work produced an ASL data acquisition and processing technique for quantitative perfusion and functional activation studies, while revealing the limitations of the technique through bootstrap analysis.
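A minimal sketch of the kind of bootstrap precision estimate described above: resample the repeated perfusion measurements of a voxel with replacement and report the coefficient of variation of the resampled mean. The voxel values below are synthetic, not data from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_cov(voxel_samples, n_boot=10_000):
    """Coefficient of variation of the mean rCBF estimate via bootstrap.

    voxel_samples: repeated perfusion-weighted measurements for one voxel.
    """
    samples = np.asarray(voxel_samples, float)
    idx = rng.integers(0, samples.size, size=(n_boot, samples.size))
    boot_means = samples[idx].mean(axis=1)   # mean of each resampled set
    return boot_means.std() / boot_means.mean()

# Hypothetical gray- and white-matter voxels (mL/100 g/min, synthetic)
gray = rng.normal(60.0, 12.0, size=40)
white = rng.normal(22.0, 15.0, size=40)
print(f"gray-matter CoV:  {bootstrap_cov(gray):.1%}")
print(f"white-matter CoV: {bootstrap_cov(white):.1%}")
```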

Relevance:

60.00%

Publisher:

Abstract:

A refined sample processing technique using glacial acetic acid has been applied to Upper Cenomanian and Lower Turonian limestones from Baddeckenstedt (Lower Saxony), enabling the first quantitative analysis of planktonic foraminiferal populations through the stage-boundary succession in northwestern Germany. Measurements of carbonate content, organic carbon, and stable carbon and oxygen isotopes are also reported. These data allow the Baddeckenstedt section to be correlated with those at Misburg (basinal facies, northwestern Germany) and Dover (Plenus Marls, southern England). Significant maxima of the organic carbon content at Baddeckenstedt correspond to prominent black shale couplets at Misburg. At Baddeckenstedt, the planktonic foraminiferal generic groups show fluctuations similar to those reported from Dover. Their correlation reveals details of a complex paleoceanographic regime in the NW-German Basin during the Cenomanian/Turonian Oceanic Anoxic Event.

Relevance:

60.00%

Publisher:

Abstract:

Computed tomography (CT) imaging is a non-invasive alternative for observing soil structures, mainly the pore space. In soil data, the pore space corresponds to empty or free space, in the sense that no solid material is present there, only fluids; since fluid transport in soil depends on the pore space, it is important to identify the regions that correspond to pore zones. In this paper we present a methodology for detecting pore space and solid soil based on the synergy of image processing, pattern recognition and artificial intelligence. Mathematical morphology is an image processing technique used here for image enhancement. In order to find groups of pixels with similar gray-level intensity, i.e., more or less homogeneous groups, a novel image sub-segmentation based on a Possibilistic Fuzzy c-Means (PFCM) clustering algorithm is used. Artificial neural networks (ANNs) are very efficient in demanding, large-scale, generic pattern recognition applications; for this reason, a classifier based on an artificial neural network is finally applied to classify the soil images into two classes, pore space and solid soil.
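A minimal sketch of the processing chain described above, under two substitutions: scikit-learn's KMeans stands in for the PFCM sub-segmentation (PFCM has no off-the-shelf implementation in common Python libraries), and a grayscale morphological opening stands in for the enhancement step. All parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def pore_mask(ct_slice, structure_size=3):
    """Sketch: morphological enhancement + intensity clustering of a CT
    soil slice into pore space vs. solid soil.

    KMeans is a stand-in for the paper's PFCM sub-segmentation; the
    opening is a stand-in for the morphological enhancement.
    """
    smoothed = ndimage.grey_opening(ct_slice, size=(structure_size,) * 2)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(
        smoothed.reshape(-1, 1)).reshape(ct_slice.shape)
    # The darker cluster (lower mean gray level) is taken as pore space
    means = [smoothed[labels == k].mean() for k in (0, 1)]
    return labels == int(np.argmin(means))

# Synthetic example: bright solid matrix with a darker pore region
rng = np.random.default_rng(1)
img = rng.normal(180.0, 10.0, size=(64, 64))
img[10:20, 10:20] = rng.normal(60.0, 10.0, size=(10, 10))  # a "pore"
print(pore_mask(img).sum())  # ~100 pore pixels
```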

Relevance:

60.00%

Publisher:

Abstract:

A two-dimensional finite-difference seismic wave modeling program was used to analyze the Source Ghost effect at depths of 4, 14, 24 and 34 meters. This effect occurs when a buried source is fired and, owing to the soil-air contact, a reflected wave is generated that at some point superimposes on the main wave, reducing the wave amplitude (Source Ghost). The theoretical predictions of the effect were compared with the practical results of the modeling program, leading to the conclusion that the frequency range affected by the effect can be determined. However, the receiver-source distance is a new variable that shifts the effect toward higher frequencies, preventing its prediction. Using a basic processing technique such as the Normal Move-Out (NMO) correction when stacking the traces counteracts the receiver-source distance variable, so the frequency range of the Source Ghost effect can be calculated. Abstract A two-dimensional seismic wave forward-modeling code based on the finite-difference method has been used to analyze the Source Ghost effect at depths between 4 and 34 meters. A shot from a buried source generates a downgoing reflection due to the free-surface boundary and, at some point, it interferes with the main wave propagation, causing a reduction of wave amplitude in a certain frequency range (Source Ghost). Theoretical results and experimental results provided by the forward modeling are compared, leading to the conclusion that the forward modeling is able to identify the frequency range affected by the source ghost. Nevertheless, it has been found that the receiver-source distance (offset) is a new variable that shifts the frequency range, making it unpredictable. A basic seismic processing technique, Normal Move-Out (NMO) correction, has been used for a single twenty-fold CMP gather. The final stack shows that the processing technique neutralizes the offset effect, and therefore the forward modeling is still capable of determining the frequency range affected by the source ghost regardless of the distance between receiver and source.
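For vertical propagation the affected frequencies can be predicted in closed form: with a free-surface reflection coefficient close to −1 and a source at depth d, the ghost delay is τ = 2d/v and the combined spectrum is proportional to |sin(πfτ)|, with notches at f_n = n·v/(2d). The sketch below evaluates these notches for the four depths studied; the near-surface velocity is an illustrative assumption, not a value from the study.

```python
import numpy as np

def ghost_notch_freqs(depth_m, velocity_ms=1500.0, n_max=3):
    """Notch frequencies (Hz) of the source ghost for vertical propagation.

    The free surface reflects with coefficient ~ -1, so the ghost cancels
    the direct wave at f_n = n * v / (2 * d), n = 0, 1, 2, ...
    velocity_ms is an assumed near-surface velocity.
    """
    return np.arange(n_max + 1) * velocity_ms / (2.0 * depth_m)

for d in (4, 14, 24, 34):  # source depths analyzed in the study
    print(f"depth {d:2d} m -> notches at {ghost_notch_freqs(d)} Hz")
```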

Relevance:

60.00%

Publisher:

Abstract:

This thesis focuses on the study and development of electronic warfare (EW) and radar algorithms for implementation in real-time systems. The arrival of radio, radar and navigation systems in the military field led to the development of technologies to counter them. The objective of electronic warfare systems is thus the control of the electromagnetic spectrum. One of the functions of electronic warfare is signals intelligence (SIGINT), whose task is to detect, store, analyze, classify and locate the origin of all kinds of signals present in the spectrum. The signals intelligence subsystem devoted to radar signals is electronic intelligence (ELINT). A real-time system is one whose figure of merit depends both on the result provided and on the time at which that result is delivered. Radar and electronic warfare systems must provide information as fast as possible and continuously, so they can be classed as real-time systems. The introduction of real-time constraints implies a feedback process between the design of the algorithm and its implementation on hardware platforms. There are two real-time constraints: latency and area of the implementation. In this thesis, all the presented algorithms have been implemented on field-programmable gate array (FPGA) platforms, since these offer a good trade-off between speed, total cost, power consumption and reconfigurability. The first part of the thesis centers on the study of different subsystems of an ELINT equipment: signal detection with a channelized detector, extraction of radar pulse parameters, modulation classification and passive location. The discrete Fourier transform (DFT) is a quasi-optimal detector and frequency estimator for narrow-band signals in white noise. The development of efficient algorithms for computing the DFT, known as the fast Fourier transform (FFT), has made the FFT the most widely used algorithm for the detection of narrow-band signals under real-time requirements. Accordingly, a detection and spectral analysis algorithm has been designed and implemented for real-time operation. The most characteristic parameters of a radar pulse are its time of arrival and pulse width. An algorithm capable of extracting these parameters has been designed and implemented. This algorithm can be used for several purposes: to perform generic recognition of the radar transmitting the signal, to locate the position of that radar, or as the preprocessing stage of an automatic modulation classifier. Automatic modulation classification is extremely difficult in non-cooperative environments. An automatic modulation classifier is divided into two parts: preprocessing and the classification algorithm itself. Feature-based classification algorithms compute different statistics of the input signal and classify it by processing those statistics. Location algorithms can be divided into two types: triangulation and quadratic systems. In triangulation-based algorithms, the position is estimated from the intersection of the bearing lines given by the signal's direction of arrival. In quadratic systems, by contrast, the position is estimated from the intersection of surfaces of equal time difference of arrival (TDOA) or frequency difference of arrival (FDOA). Although only TDOA and FDOA estimation from time-of-arrival and frequency differences has been implemented, exhaustive studies of the different algorithms for TDOA estimation, FDOA estimation and passive TDOA-FDOA location are presented. The second part of the thesis is devoted to the design and implementation of finite impulse response (FIR) discrete filters for two radar applications: wideband phased arrays using true-time-delay (TTD) filters, and improving the range of a radar without modifying the existing hardware so that the solution is low-cost. Operating a wideband phased array with phase shifters is not feasible, since the time delay cannot be approximated by a phase shift. The solution adopted and implemented consists in replacing the phase shifters with digital filters with programmable delay. The maximum range of a radar depends on the average signal-to-noise ratio at the receiver. The signal-to-noise ratio depends in turn on the transmitted signal energy, power times pulse width. Any hardware change entails a high cost. The proposed solution is to use a pulse-compression technique, which consists in introducing an internal modulation into the signal, decoupling range and resolution. ABSTRACT This thesis is focused on the study and development of electronic warfare (EW) and radar algorithms for real-time implementation. The arrival of radar, radio and navigation systems to the military sphere led to the development of technologies to fight them. Therefore, the objective of EW systems is the control of the electromagnetic spectrum. Signals intelligence (SIGINT) is one of the EW functions, whose mission is to detect, collect, analyze, classify and locate all kinds of electromagnetic emissions. Electronic intelligence (ELINT) is the SIGINT subsystem that is devoted to radar signals. A real-time system is one whose correctness depends not only on the provided result but also on the time in which this result is obtained. Radar and EW systems must provide information as fast as possible on a continuous basis, and they can be defined as real-time systems. The introduction of real-time constraints implies a feedback process between the design of the algorithms and their hardware implementation. Moreover, a real-time constraint consists of two parameters: latency and area of the implementation. All the algorithms in this thesis have been implemented on field-programmable gate array (FPGA) platforms, presenting a trade-off among performance, cost, power consumption and reconfigurability. The first part of the thesis is related to the study of different key subsystems of an ELINT equipment: signal detection with channelized receivers, pulse parameter extraction, modulation classification for radar signals and passive location algorithms. The discrete Fourier transform (DFT) is a nearly optimal detector and frequency estimator for narrow-band signals buried in white noise. The introduction of fast algorithms to calculate the DFT, known as the FFT, reduces the complexity and the processing time of the DFT computation.
These properties have placed the FFT as one of the most conventional methods for narrow-band signal detection in real-time applications. An algorithm for real-time spectral analysis with user-defined bandwidth, instantaneous dynamic range and resolution is presented. The most characteristic parameters of a pulsed signal are its time of arrival (TOA) and pulse width (PW). The estimation of these basic parameters is a fundamental task in ELINT equipment. A basic pulse parameter extractor (PPE) able to estimate all these parameters is designed and implemented. The PPE may be used to perform a generic radar recognition process, to support an emitter location technique, or as the preprocessing part of an automatic modulation classifier (AMC). Modulation classification is a difficult task in a non-cooperative environment. An AMC consists of two parts: signal preprocessing and the classification algorithm itself. Feature-based algorithms obtain different characteristics or features of the input signals. Once these features are extracted, the classification is carried out by processing them. A feature-based AMC for pulsed radar signals with real-time requirements is studied, designed and implemented. Passive emitter location techniques can be divided into two classes: triangulation systems, in which the emitter location is estimated from the intersection of the lines of bearing created from the estimated directions of arrival, and quadratic position-fixing systems, in which the position is estimated through the intersection of iso-time-difference-of-arrival (TDOA) or iso-frequency-difference-of-arrival (FDOA) quadratic surfaces. Although TDOA and FDOA are only implemented with time-of-arrival and frequency differences, different algorithms for TDOA, FDOA and position estimation are studied and analyzed. The second part is dedicated to FIR filter design and implementation for two different radar applications: wideband phased arrays with true-time-delay (TTD) filters, and the range improvement of an operative radar with no hardware changes, to minimize costs. Wideband operation of phased arrays is unfeasible with phase shifters because time delays cannot be approximated by phase shifts. The presented solution is based on the substitution of the phase shifters by FIR discrete delay filters. The maximum range of a radar depends on the averaged signal-to-noise ratio (SNR) at the receiver. Among other factors, the SNR depends on the transmitted signal energy, that is, power times pulse width. Any possible hardware change implies high costs. The proposed solution lies in the use of a signal processing technique known as pulse compression, which consists of introducing an internal modulation within the pulse width, decoupling range and resolution.
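As an illustration of the pulse-compression idea that closes the abstract, the sketch below builds a linear-FM pulse, delays it, and matched-filters it; all numeric values (sample rate, pulse width, bandwidth, delay) are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

# Pulse compression with a linear-FM (chirp) pulse: a minimal sketch.
fs = 20e6                                   # sample rate, Hz (assumed)
pw = 50e-6                                  # transmitted pulse width, s
bw = 2e6                                    # swept bandwidth, Hz

t = np.arange(int(pw * fs)) / fs
tx = np.exp(1j * np.pi * (bw / pw) * t**2)  # LFM pulse, chirp rate bw/pw

delay = 3000                                # true echo delay, samples
echo = np.concatenate([np.zeros(delay), tx, np.zeros(1000)])

# Matched filter: correlate the received signal with the transmitted pulse
c = np.correlate(echo, tx, mode="full")
est_delay = np.argmax(np.abs(c)) - (len(tx) - 1)

print("estimated delay:", est_delay, "samples")               # -> 3000
print("time-bandwidth product (compression gain):", pw * bw)  # -> 100.0
```

The compressed pulse is roughly 1/bw wide instead of pw, so range resolution improves by the time-bandwidth product while the transmitted energy (and hence maximum range) is set independently by pw.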

Relevance:

60.00%

Publisher:

Abstract:

Worldwide, breast cancer is the most frequent type of cancer and one of the main causes of death among the female population. Currently, the most effective method for detecting breast lesions at an early stage is mammography. It contributes decisively to the early diagnosis of this disease which, if detected in time, has a very high probability of cure. One of the main and most frequent findings in a mammogram are microcalcifications, which are considered an important indicator of breast cancer. When analyzing mammograms, factors such as visualization capability, fatigue or the professional experience of the radiologist increase the risk of missing some of the lesions present. To reduce this risk, it is important to have alternatives such as a second opinion from another specialist or a double reading by the same one. The first option raises the cost, and both prolong the diagnosis time. This is a strong motivation for the development of decision-support systems. This thesis proposes, develops and justifies a system capable of detecting microcalcifications in regions of interest extracted from digitized mammograms, to contribute to the early detection of breast cancer. The system is based on digital image processing, pattern recognition and artificial intelligence techniques. Its development takes the following considerations into account: 1. In order to train and test the proposed system, an image database is created, consisting of regions of interest extracted from digitized mammograms. 2. The application of the top-hat transform is proposed, a digital image processing technique based on mathematical morphology operations. The purpose of applying this technique is to improve the contrast between the microcalcifications and the tissue present in the image. 3. A novel algorithm called sub-segmentation is proposed, based on pattern recognition techniques applying an unsupervised clustering algorithm, the PFCM (Possibilistic Fuzzy c-Means). The goal is to find the regions corresponding to the microcalcifications and to distinguish them from healthy tissue. In addition, in order to show the advantages and disadvantages of the proposed algorithm, it is compared with two algorithms of the same type: k-means and FCM (Fuzzy c-Means). It should also be noted that in this work, for the first time, sub-segmentation is used to detect regions belonging to microcalcifications in mammography images. 4. Finally, the use of a classifier based on an artificial neural network, specifically an MLP (Multi-Layer Perceptron), is proposed. The purpose of the classifier is to discriminate, in a binary fashion, patterns created from the gray-level intensity of the original image, distinguishing between microcalcification and healthy tissue. ABSTRACT Breast cancer is one of the leading causes of mortality among women in the world, and its early detection remains key to improving prognosis and survival.
Currently, the most reliable and practical method for early detection of breast cancer is mammography. The presence of microcalcifications has been considered a very important indicator of malignant types of breast cancer, and their detection and classification are important to prevent and treat the disease. However, the detection and classification of microcalcifications remain a hard task because, in mammograms, there is poor contrast between microcalcifications and the surrounding tissue. Factors such as visualization, tiredness or insufficient experience of the specialist increase the risk of missing some present lesions. To reduce this risk, it is important to have alternatives such as a second opinion or a double analysis by the same specialist. In the first option the cost increases, and in both the diagnosis time increases. This is why there is great motivation for the development of help systems or assistance in the decision-making process. This work presents, develops and justifies a system for the detection of microcalcifications in regions of interest extracted from digitized mammograms to contribute to the early detection of breast cancer. This system is based on image processing techniques, pattern recognition and artificial intelligence. For system development, the following features are considered: with the aim of training and testing the system, an image database is created, consisting of regions of interest extracted from digitized mammograms. The application of the top-hat transform is proposed; this image processing technique is based on mathematical morphology operations, and its aim is to improve the contrast between microcalcifications and the tissue present in the image. A novel algorithm called sub-segmentation is proposed, based on pattern recognition techniques applying a non-supervised clustering algorithm known as Possibilistic Fuzzy c-Means (PFCM). The aim is to find regions corresponding to the microcalcifications and distinguish them from the healthy tissue. Furthermore, with the aim of showing its main advantages and disadvantages, it is compared with two algorithms of the same type: k-means and fuzzy c-means (FCM). On the other hand, it is important to highlight that in this work, for the first time, sub-segmentation is used for microcalcification detection. Finally, a classifier based on an artificial neural network, a Multi-Layer Perceptron, is used. The purpose of this classifier is to discriminate, from a binary perspective, the patterns built from the gray-level intensity of the original image. This classification distinguishes between microcalcifications and healthy tissue.
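A minimal sketch of the top-hat enhancement step described in both abstracts, using SciPy's white top-hat (the image minus its morphological opening), which keeps bright details smaller than the structuring element; the structuring-element size and the synthetic region of interest are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def enhance_microcalcifications(roi, structure_size=9):
    """White top-hat: roi minus its grayscale opening.

    Bright structures smaller than the structuring element (candidate
    microcalcifications) are kept; the smooth background tissue is
    suppressed. structure_size is an illustrative assumption.
    """
    return ndimage.white_tophat(roi, size=(structure_size, structure_size))

# Synthetic example: smooth tissue-like background with two bright spots
roi = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
roi += 0.5 * np.exp(-((xx - 32)**2 + (yy - 32)**2) / 800.0)  # background
roi[20, 20] += 1.0                                           # "microcalcifications"
roi[40, 45] += 1.0
enhanced = enhance_microcalcifications(roi)
print(enhanced[20, 20] > 10 * enhanced.mean())  # spots stand out -> True
```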