10 results for Structural similarity index

at Universidad Politécnica de Madrid


Relevance:

80.00%

Abstract:

Since the beginning of digital video coding, the uncompressed video fed to the encoder and the decompressed output of the decoder have always used 8 bits to represent each sample, regardless of resolution, chroma subsampling scheme, etc. Likewise, video coding standards require encoders to work internally with 8 bits of precision when operating on samples that have not yet been transformed to the frequency domain. However, the widely used H.264 standard allows video with more than 8 bits per sample to be coded in some of its professionally oriented profiles. When these profiles are used, all operations on samples still in the spatial domain are carried out with the same precision as the bit depth of the input video. This increase in internal precision has the potential to allow more accurate predictions, reducing the residual to be encoded and thus increasing coding efficiency for a given bitrate. The goal of this final degree project is to study, using the objective video quality metrics PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity), the effect on coding efficiency and performance of a 10-bit H.264 encoding/decoding chain compared with a traditional 8-bit chain. To this end the open-source x264 encoder is used, which can encode video with 8 and 10 bits per sample using the High, High 10, High 4:2:2 and High 4:4:4 Predictive profiles of the H.264 standard. Since no suitable tools exist for computing PSNR and SSIM on video with more than 8 bits per sample or with chroma subsampling schemes other than 4:2:0, an analysis application written in the C programming language is also developed as part of this project. The application computes both metrics from two uncompressed video files in YUV or Y4M format.
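As an illustration of how such an analysis tool must account for bit depth, the sketch below computes the PSNR of a single plane of B-bit samples, using the peak value 2^B - 1 (255 for 8-bit video, 1023 for 10-bit video). It is a minimal example that assumes samples wider than 8 bits are stored one per 16-bit word, as is conventional for raw high-bit-depth YUV; it is not the project's actual implementation, and SSIM is omitted.

#include <math.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* PSNR of one plane of bit_depth-bit samples stored in 16-bit words. */
double plane_psnr(const uint16_t *ref, const uint16_t *dec,
                  size_t n_samples, unsigned bit_depth)
{
    double peak = (double)((1u << bit_depth) - 1u);   /* 255, 1023, ... */
    double sse  = 0.0;

    for (size_t i = 0; i < n_samples; i++) {
        double d = (double)ref[i] - (double)dec[i];
        sse += d * d;
    }
    if (sse == 0.0)
        return INFINITY;                              /* identical planes */

    double mse = sse / (double)n_samples;
    return 10.0 * log10(peak * peak / mse);
}

int main(void)
{
    /* toy 10-bit example: four samples, each differing by one code value */
    const uint16_t ref[4] = {0, 512, 1023, 700};
    const uint16_t dec[4] = {1, 511, 1022, 701};
    printf("PSNR = %.2f dB\n", plane_psnr(ref, dec, 4, 10));
    return 0;
}

A complete tool along these lines would read the luma and chroma planes of the two YUV/Y4M files frame by frame and average the per-frame values.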

Relevance:

80.00%

Abstract:

In general, the assignment of a fleet of vehicles to fixed routes is not made entirely on the basis of objective criteria; other, harder-to-quantify considerations tend to prevail. A proper analysis should take into account the variability among the different routes within a city in order to determine which technology best suits the characteristics of each itinerary. This work presents a methodology for optimizing the assignment of a fleet of vehicles to its routes so as to reduce fuel consumption and pollutant emissions. The proposed method is organized according to the following procedure:
- Recording of the kinematic characteristics of the vehicles travelling a representative set of routes.
- Grouping of the lines into clusters of similar routes using a hierarchical algorithm that optimizes a similarity index between routes, obtained by hypothesis testing on a set of representative variables.
- Generation of a specific kinematic cycle representing each cluster.
- Selection of macroscopic variables that allow the remaining lines to be classified with a neural network trained on the information gathered from the measured routes.
- Identification of the operational characteristics of the available fleet.
- Availability of a model that estimates, according to the vehicle technology, the fuel consumption and emissions associated with the kinematic variables of the cycles.
- Development of a reassignment algorithm that optimizes an emissions-dependent objective function (an illustrative formulation is given after this abstract).
Two scenarios of major relevance in environmental assessment are considered in the fleet optimization: minimizing carbon dioxide emissions, because of their impact as a greenhouse gas (GHG), and, alternatively, minimizing nitrogen oxide (NOx) emissions, because of their role in acid rain and in the formation of tropospheric ozone in urban areas. In both scenarios additional constraints are introduced so that the emissions of the remaining pollutants do not exceed the levels corresponding to the fleet organization currently implemented by the operator. The methodology has been applied to 160 bus lines of the EMT of Madrid, with kinematic data available for 25 routes. The results show that, in both scenarios, it is feasible to obtain a redistribution of the fleet that significantly reduces most pollutant emissions while preventing, in return, an increase in the emission of any other pollutant.
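To make the reassignment step concrete, one possible formulation, purely illustrative and not necessarily the one used in the study, is a constrained assignment problem. With x_{vr} = 1 if vehicle v is assigned to route r and e_p(v, r) the emission of pollutant p estimated by the consumption/emission model for that pairing, the CO2 scenario reads

\min_{x} \sum_{v}\sum_{r} e_{\mathrm{CO_2}}(v,r)\, x_{vr}
\quad \text{subject to} \quad
\sum_{v} x_{vr} = 1 \;\; \forall r, \qquad
\sum_{r} x_{vr} \le 1 \;\; \forall v, \qquad
\sum_{v,r} e_{p}(v,r)\, x_{vr} \le E_p^{\mathrm{current}} \;\; \forall p \neq \mathrm{CO_2},

where E_p^{current} is the emission of pollutant p under the operator's current assignment. The NOx scenario swaps the pollutant in the objective with the corresponding constraint.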

Relevance:

80.00%

Abstract:

The objective of the current study was to assess how closely batch cultures (BC) of rumen microorganisms can mimic the dietary differences in fermentation characteristics found in the rumen, and to analyse changes in bacterial diversity over the in vitro incubation period. Four ruminally and duodenally cannulated sheep were fed four diets having forage : concentrate ratios (FCR) of 70 : 30 or 30 : 70, with either alfalfa hay or grass hay as forage. Rumen fluid from each sheep was used to inoculate BC containing the same diet fed to the donor sheep, and the main rumen fermentation parameters were determined after 24 h of incubation. There were differences between BC and sheep in the magnitude of most measured parameters, but BC detected differences among diets due to forage type similar to those found in sheep. In contrast, BC did not reproduce the dietary differences due to FCR found in sheep for pH, degradability of neutral detergent fibre and total volatile fatty acid (VFA) concentrations. Compared with sheep, BC showed higher pH values and NH3–N concentrations, but lower fibre degradability and lower VFA and lactate concentrations. There were significant relationships between in vivo and in vitro values for the molar proportions of acetate, propionate and butyrate, and for the acetate : propionate ratio. The automated ribosomal intergenic spacer analysis (ARISA) of 16S ribosomal deoxyribonucleic acid showed that FCR had no effect on bacterial diversity either in the sheep rumen fluid used as inoculum (IN) or in BC samples. In contrast, bacterial diversity in the IN was greater with alfalfa hay diets than with grass hay diets, but was unaffected by forage type in the BC. The similarity index between the bacterial communities in the inocula and those in the BC ranged from 67.2 to 74.7% and was unaffected by diet characteristics. Bacterial diversity was lower in BC than in the inocula, with 14 peaks out of a total of 181 detected in the ARISA electropherograms never appearing in BC samples, which suggests that incubation conditions in the BC may have caused a selection of some bacterial strains. However, each BC sample showed the highest similarity index with its corresponding rumen IN, which highlights the importance of using rumen fluid from donors fed a diet similar to that being incubated in BC when conducting in vitro experiments.
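For context, similarity between banding profiles such as the ARISA electropherograms compared here is often expressed with a band-sharing coefficient; a common choice is the Sørensen–Dice index, S = 2c / (a + b) × 100, where a and b are the numbers of peaks detected in each of the two profiles and c is the number of peaks shared by both. This is only an illustrative definition; the exact index used in the study may differ.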

Relevance:

40.00%

Abstract:

Research into the brain is a young science; its beginnings can be traced back to Santiago Ramón y Cajal's neuronal theory of 1888. In little more than a century only a small fraction of brain function has been uncovered, and much remains to be explored. Cognitive neuroscience today provides many models that bring us closer to understanding complex cognitive capacities, yet the field is still in its infancy and has a long way to go. One of the keys to progress in the study of brain function has been the combination of knowledge from several areas: physics, mathematics, statistics and psychology, where physicians, psychologists, engineers and other specialists work toward a common goal, the understanding of the brain; this thesis likewise draws on concepts from all of these fields. While isolated techniques each aim at unravelling the system that supports our cognition, multimodal approaches have arisen to provide more solid evidence, and this thesis is devoted specifically to the multimodal integration of magnetoencephalography (MEG) and diffusion-weighted magnetic resonance imaging (dMRI). These techniques are sensitive, respectively, to the magnetic fields emitted by neuronal currents and to the microstructure of the cerebral white matter. Their combination makes it possible to uncover structural-functional synergies in the processing of information in the healthy brain and to study which part of this synergy fails in specific neurological pathologies. In particular, this work studies the relationship between functional and structural connectivity and how to integrate the two. Functional connectivity is quantified by studying the phase synchronization or the amplitude correlation between time series obtained with MEG, yielding an index of similarity between neuronal groups or brain regions (see the formula after this abstract). Structural connectivity is quantified from the diffusion-weighted images, by performing diffusion tensor estimation, providing indices of white matter integrity or of the strength of the structural connections between regions. These measures are combined in chapters 3, 4 and 5 of this work following three approaches, from the lowest to the highest level of integration. Finally, the fused MEG and dMRI information is applied to the characterization of groups of subjects with mild cognitive impairment, a clinical condition regarded as an early stage in the pathological continuum of dementia and therefore relevant for the early identification of Alzheimer's disease. The dissertation is divided into six chapters. Chapter 1 places connectomics within the fields of neuroimaging and neuroscience, and then describes the objectives of the thesis and the specific objectives of each of the scientific publications that resulted from this work. Chapter 2 describes the methods for each of the techniques employed, namely structural connectivity, resting-state functional connectivity, complex brain networks and graph theory, and finally describes the clinical condition of mild cognitive impairment and the current state of the art in the search for new diagnostic biomarkers. Chapters 3, 4 and 5 contain the scientific publications produced during this thesis; they are included in the format of the journals in which they were published and are divided into introduction, materials and methods, results and discussion, with all the methods employed in these papers described in chapter 2. Finally, chapter 6 summarizes the overall results of the thesis and discusses the specific results of each publication.
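As an illustration of the kind of functional connectivity index referred to above, phase synchronization between two signals with instantaneous phases \phi_1(t) and \phi_2(t) is commonly quantified with the phase-locking value

\mathrm{PLV} = \left| \frac{1}{N} \sum_{t=1}^{N} e^{\, i\,(\phi_1(t) - \phi_2(t))} \right|,

which ranges from 0 (no consistent phase relation) to 1 (perfect locking) over the N time samples. This is a standard definition given for orientation only; the exact estimators used in the thesis are those described in its chapter 2.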

Relevance:

30.00%

Abstract:

This paper proposes a repairability index for damage assessment in reinforced concrete structural members. The procedure discussed in this paper differs from the standard methods in two aspects: the structural and damage analyses are coupled and it is based on the concepts of fracture and continuum damage mechanics. The relationship between the repairability index and the well-known Park and Ang index is shown in some particular cases.
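For reference, the Park and Ang index mentioned above is usually written as D = \delta_M / \delta_u + \beta \, E_H / (Q_y \, \delta_u), where \delta_M is the maximum deformation reached under the earthquake, \delta_u the ultimate deformation under monotonic loading, Q_y the yield strength, E_H the dissipated hysteretic energy and \beta a calibration parameter. This is the standard formulation, quoted here only to anchor the comparison with the proposed repairability index.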

Relevance:

30.00%

Abstract:

The refractive index changes induced by swift ion-beam irradiation in silica have been measured either by spectroscopic ellipsometry or through the effective indices of the optical modes propagating through the irradiated structure. The optical response has been analyzed by considering an effective homogeneous medium to simulate the nanostructured irradiated system, which consists of cylindrical tracks, associated with the ion impacts, embedded in the virgin material. The role of both irradiation fluence and stopping power has been investigated. Above a certain electronic stopping power threshold (∼2.5 keV/nm), every ion impact creates an axial region around the trajectory with a fixed refractive index (around n = 1.475) corresponding to a certain structural phase that is independent of stopping power. The results have been compared with previous data measured by means of infrared spectroscopy and small-angle X-ray scattering; possible mechanisms and theoretical models are discussed.
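A simple way to picture the effective-medium analysis described above, given only as an illustrative sketch and not as the exact model of the paper, is to assume that each ion creates a cylindrical track of radius R and that impacts are randomly distributed, so the surface fraction occupied by tracks at fluence \phi follows Poisson overlap statistics, f = 1 - e^{-\pi R^{2} \phi}. The effective refractive index of the irradiated layer can then be approximated with a mixing rule such as n_{\mathrm{eff}} \approx f\, n_{\mathrm{track}} + (1 - f)\, n_{\mathrm{virgin}}, with n_track around 1.475 above the ∼2.5 keV/nm threshold, so that n_eff evolves with fluence even though the track index itself is fixed.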

Relevance:

30.00%

Abstract:

Critical infrastructures support everyday activities in modern societies, facilitating the exchange of services and quantities of various kinds. Their functioning is the result of the integration of diverse technologies, systems and organizations into a complex network of interconnections. The benefits of networking are accompanied by new threats and risks. In particular, because of the increased interdependency, disturbances and failures may propagate and render the whole infrastructure network unstable. This paper presents a methodology for the resilience analysis of networked systems of systems. Resilience generalizes the concept of stability of a system around a state of equilibrium with respect to a disturbance, together with its ability to prevent, resist and recover. The methodology provides a tool for the analysis of off-equilibrium conditions that may occur in a single system and propagate through the network of dependencies. The analysis is conducted in two stages. The first stage is qualitative: it identifies the resilience scenarios, i.e. the sequences of events triggered by an initial disturbance, which include failures and the system response. The second stage is quantitative: the most critical scenarios can be simulated, for the desired parameter settings, in order to check whether they are successfully handled, i.e. recovered to nominal conditions, or end in failure of the network. The proposed methodology aims at providing effective support for resilience-informed design.

Relevance:

30.00%

Abstract:

Background: Several changes occur in the maternal cardiovascular system during pregnancy. These changes place considerable stress on this system, especially during the third trimester, and may be accentuated in the presence of certain risk factors. The aims of this study were to assess the maternal cardiovascular adaptations produced by a specific exercise programme, its safety regarding the maternal cardiovascular system and pregnancy outcomes, and its effectiveness in the control of cardiovascular risk factors. Material and methods: A randomized controlled trial was designed. 151 healthy pregnant women were assessed by echocardiography and electrocardiography at weeks 20 and 34 of gestation. A total of 89 pregnant women took part in a physical exercise programme (EG) from the first to the third trimester of pregnancy, consisting mainly of 25-30 minutes of aerobic conditioning (55-60% of heart rate reserve; see the note after this abstract), general and specific strength work, and pelvic floor muscle training, performed 3 times per week in sessions of 55-60 minutes. The women randomly allocated to the control group (CG; n=62) remained sedentary during pregnancy. The study was approved by the Clinical Research Ethics Committee of Hospital Universitario de Fuenlabrada. Results: Baseline characteristics were similar in both groups. Unlike the CG, the women in the EG avoided the significant decrease in indexed cardiac output between the 2nd and 3rd trimesters of pregnancy and preserved the normal geometric pattern of the left ventricle, whereas in the CG it shifted towards a concentric remodelling pattern. At week 20, the women in the EG showed significantly lower heart rate (CG: 79.56±10.76 vs. EG: 76.05±9.34; p=0.04), systolic blood pressure (CG: 110.19±10.23 vs. EG: 106.04±12.06; p=0.03), diastolic blood pressure (CG: 64.56±7.88 vs. EG: 61.81±7.15; p=0.03) and isovolumetric relaxation time (CG: 72.94±14.71 vs. EG: 67.05±16.48; p=0.04), and a longer deceleration time of the E wave (CG: 142.09±39.11 vs. EG: 162.10±48.59; p=0.01). At week 34, the EG showed significantly higher stroke volume (CG: 51.13±11.85 vs. EG: 56.21±12.79; p=0.04), early left ventricular filling (E wave) (CG: 78.38±14.07 vs. EG: 85.30±16.62; p=0.02) and deceleration time of the E wave (CG: 130.35±37.11 vs. EG: 146.61±43.40; p=0.04). Conclusion: Regular physical exercise during pregnancy can produce positive adaptations of the maternal cardiovascular system during the third trimester of pregnancy and helps to control cardiovascular risk factors, without compromising maternal or fetal health.
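Note on exercise intensity: a prescription expressed as a percentage of heart rate reserve is conventionally obtained with the Karvonen formula, HR_target = HR_rest + x (HR_max - HR_rest), so the 55-60% range used in the programme corresponds to x = 0.55-0.60. This is the standard definition of heart rate reserve, stated here for clarity rather than taken from the study itself.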

Relevance:

30.00%

Abstract:

The city of Lorca (Spain) was hit on May 11th, 2011, by two consecutive earthquakes of magnitudes 4.6 and 5.2 Mw, causing casualties and significant damage to buildings. Many of the damaged structures were reinforced concrete frames with wide beams. This study quantifies the expected level of damage to this structural type in the case of the Lorca earthquake by means of a seismic index Iv that compares the energy input by the earthquake with the energy absorption/dissipation capacity of the structure. The prototype frames investigated represent structures designed in two time periods (1994–2002 and 2003–2008), in which the applicable codes were different. The influence of the masonry infill walls and the proneness of the frames to concentrate damage in a given story were further investigated through nonlinear dynamic response analyses. It is found that (1) the seismic index method predicts levels of damage that range from moderate/severe to complete collapse; this prediction is consistent with the observed damage; (2) the presence of masonry infill walls makes the structure very prone to damage concentration and reduces the overall seismic capacity of the building; and (3) a proper hierarchy of strength between beams and columns that guarantees the formation of a strong column-weak beam mechanism (as prescribed by seismic codes), as well as the adoption of countermeasures to avoid the negative interaction between non-structural infill walls and the main frame, would have reduced the level of damage from Iv=1 (collapse) to about Iv=0.5 (moderate/severe damage).
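As background to the index used above: in energy-based assessment, the seismic demand is usually expressed through the equivalent velocity V_E = \sqrt{2 E_I / M}, where E_I is the energy input by the earthquake and M the total mass of the structure, and an index of the type Iv compares this demand with the energy absorption/dissipation capacity of the structure, so that Iv = 1 marks exhaustion of that capacity, consistent with the values Iv = 1 (collapse) and Iv ≈ 0.5 quoted in the abstract. The exact definition of Iv is the one given in the paper; the expression above is a standard energy-based formulation quoted only for orientation.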

Relevance:

30.00%

Abstract:

This Master's Thesis is aimed at modelling active faults for the estimation of seismic hazard in Haiti. A zoned probabilistic method has been used, in both its classical and hybrid variants, the latter incorporating active faults as independent units in the hazard calculation; in that case the seismic moment rate is shared between the faults and the seismogenic zone of the same region. The faults included in this study are the Septentrional, Matheux and Enriquillo faults. The results obtained with both methods are compared in order to determine the importance of considering the faults in the calculation. First of all, updating, homogenization, completeness analysis and cleaning of the seismic catalogue were necessary in order to obtain a catalogue ready for the hazard estimation. With the seismogenic zonation defined in previous studies and the updated catalogue, Gutenberg-Richter recurrence relationships are obtained for the shallow and deep seismicity of each zone (see the relation after this abstract). The attenuation models selected were those used in Benito et al. (2011), since the tectonic setting of the study area is very similar to that of Central America. They were implemented through a logic tree in which each branch is weighted according to the relevance of each combination of models. Results are presented as seismic hazard maps for return periods of 475, 975 and 2475 years and as spectral acceleration (SA) maps at structural periods of 0.1, 0.2, 0.5, 1.0 and 2.0 seconds, together with maps of the difference in acceleration between the classical and the hybrid method. The maps show the importance of including the faults as independent units in the hazard calculation. The zoned maps present higher values in the area where the shallow and deep zones overlap. The minimum values obtained with the zoned approach exceed those of the hybrid method, especially in areas where there are no faults, whereas the highest values are obtained in the fault zones with the hybrid method, showing that the contribution of the faults in this method is very significant. The maximum PGA obtained is close to 963 gal near the Septentrional fault and close to 460 gal near the Matheux fault, while on the Enriquillo fault the PGA reaches 760 gal in the eastern segment and 730 gal in the western segment; this compares with a PGA of 240 gal obtained in the same area with the zoned approach. These values are compared with those obtained by Frankel et al. (2011), with which they show great similarity in both values and morphology, in contrast with those presented by Benito et al. (2012) and with the seismic code of the Dominican Republic.
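For reference, the Gutenberg-Richter recurrence relationship fitted for each zone has the standard form \log_{10} N(m) = a - b\, m, where N(m) is the annual number of earthquakes with magnitude greater than or equal to m and a and b are the parameters estimated from the catalogue. This is the usual expression of the law and is quoted here only to make the recurrence step explicit.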