972 results for Almost unitary power factor


Relevance: 30.00%

Abstract:

Treatment allocation by epidermal growth factor receptor (EGFR) mutation status is a new standard in patients with metastatic non-small-cell lung cancer. Yet relatively few modern chemotherapy trials have been conducted in patients characterized by EGFR wild type. We describe the results of a multicenter phase II trial testing 2 novel combination therapies in parallel, with predefined molecular markers and tumor rebiopsy at progression. Objective: The goal was to demonstrate that tailored therapy, according to tumor histology and EGFR mutation status, and the introduction of novel drug combinations in the treatment of advanced non-small-cell lung cancer are promising for further investigation. Methods: We conducted a multicenter phase II trial with mandatory EGFR testing and 2 strata. Patients with EGFR wild type received 4 cycles of bevacizumab, pemetrexed, and cisplatin, followed by maintenance with bevacizumab and pemetrexed until progression. Patients with EGFR mutations received bevacizumab and erlotinib until progression. Patients had computed tomography scans every 6 weeks and repeat biopsy at progression. The primary end point was progression-free survival (PFS) ≥ 35% at 6 months in the EGFR wild-type stratum; 77 patients were required to reach a power of 90% with an alpha of 5%. Secondary end points were median PFS, overall survival, best overall response rate (ORR), and tolerability. Further biomarkers and biopsy at progression were also evaluated. Results: A total of 77 evaluable patients with EGFR wild type received an average of 9 cycles (range, 1-25). PFS at 6 months was 45.5%, median PFS was 6.9 months, overall survival was 12.1 months, and ORR was 62%. Kirsten rat sarcoma oncogene mutations and circulating vascular endothelial growth factor negatively correlated with survival, but thymidylate synthase expression did not. A total of 20 patients with EGFR mutations received an average of 16 cycles.
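
For readers less familiar with how such single-arm phase II designs are sized, the sketch below reproduces a standard normal-approximation sample-size calculation for a one-sided one-sample test of a proportion. The null and alternative PFS rates used here are illustrative assumptions only; the trial's actual design values are not stated in the abstract.

```python
from math import ceil, sqrt
from scipy.stats import norm

def one_sample_proportion_n(p0, p1, alpha=0.05, power=0.90):
    """Normal-approximation sample size for H0: p <= p0 vs H1: p = p1."""
    z_a = norm.ppf(1 - alpha)  # one-sided critical value under H0
    z_b = norm.ppf(power)      # quantile delivering the desired power
    num = (z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))) ** 2
    return ceil(num / (p1 - p0) ** 2)

# Illustrative rates only: null 6-month PFS of 35% vs a hoped-for 50%.
print(one_sample_proportion_n(0.35, 0.50))  # -> 91 for these assumed rates
```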

Relevance: 30.00%

Abstract:

BACKGROUND Strategies to improve risk prediction are of major importance in patients with heart failure (HF). Fibroblast growth factor 23 (FGF-23) is an endocrine regulator of phosphate and vitamin D homeostasis associated with increased cardiovascular risk. We aimed to assess the prognostic effect of FGF-23 on mortality in HF patients, with a particular focus on differences between patients with HF with preserved ejection fraction (HFpEF) and patients with HF with reduced ejection fraction (HFrEF). METHODS AND RESULTS FGF-23 levels were measured in 980 patients with HF enrolled in the Ludwigshafen Risk and Cardiovascular Health (LURIC) study, including 511 patients with HFrEF and 469 patients with HFpEF, with a median follow-up time of 8.6 years. FGF-23 was additionally measured in a second cohort comprising 320 patients with advanced HFrEF. FGF-23 was independently associated with mortality, with an adjusted hazard ratio per 1-SD increase of 1.30 (95% confidence interval, 1.14-1.48; P<0.001) in patients with HFrEF, whereas no such association was found in patients with HFpEF (P for interaction = 0.043). External validation confirmed the significant association with mortality, with an adjusted hazard ratio per 1 SD of 1.23 (95% confidence interval, 1.02-1.60; P=0.027). FGF-23 demonstrated increased discriminatory power for mortality in addition to N-terminal pro-B-type natriuretic peptide (C-statistic: 0.59 versus 0.63) and an improvement in the net reclassification index (39.6%; P<0.001). CONCLUSIONS FGF-23 is independently associated with an increased risk of mortality in patients with HFrEF but not in those with HFpEF, suggesting a different pathophysiologic role in the two entities.
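
As a hedged illustration of how such a hazard ratio per 1-SD increase is typically estimated (a sketch with synthetic data and invented column names, using the third-party lifelines library, not the study's actual analysis):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "fgf23": rng.lognormal(4.0, 0.6, n),   # hypothetical biomarker values
    "age": rng.normal(68, 10, n),          # hypothetical adjustment covariate
    "time": rng.exponential(8.0, n),       # follow-up in years
    "event": rng.integers(0, 2, n),        # 1 = death observed
})
# Standardize the biomarker so exp(coef) reads as the HR per 1-SD increase.
df["fgf23_sd"] = (df["fgf23"] - df["fgf23"].mean()) / df["fgf23"].std()

cph = CoxPHFitter()
cph.fit(df[["fgf23_sd", "age", "time", "event"]],
        duration_col="time", event_col="event")
print(cph.hazard_ratios_["fgf23_sd"])      # adjusted HR per 1 SD
```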

Relevance: 30.00%

Abstract:

Parasites and pathogens are apparently key factors in the detrimental health of managed European honey bee subspecies, Apis mellifera. Apicultural trade is arguably the main factor in the almost global distribution of most honey bee diseases, increasing the chances of multiple infestations/infections of regions, apiaries, colonies and even individual bees. This makes it difficult to evaluate the effects of pathogens in isolation, creating demand for surveys of remote areas. Here, we conducted the first comprehensive survey for 14 honey bee pathogens in Mongolia (N = 3 regions, N = 9 locations, N = 151 colonies), where honey bee colonies depend on humans to overwinter. In Mongolia, honey bees, Apis spp., are not native, and colonies of European A. mellifera subspecies were introduced ~60 years ago. Despite the high detection power and large sample size across Mongolian regions with beekeeping, the mite Acarapis woodi, the bacteria Melissococcus plutonius and Paenibacillus larvae, the microsporidian Nosema apis, Acute bee paralysis virus, Kashmir bee virus, Israeli acute paralysis virus and Lake Sinai virus strain 2 were not detected, suggesting that they are either very rare or absent. The mite Varroa destructor, Nosema ceranae and four viruses (Sacbrood virus, Black queen cell virus, Deformed wing virus (DWV) and Chronic bee paralysis virus) were found at differing prevalences. Despite the positive correlation between the prevalence of V. destructor mites and DWV, some areas had only mites but not DWV, most likely due to the exceptional isolation of apiaries (up to 600 km). Phylogenetic analyses of the detected viruses reveal their clustering and European origin, supporting the role of trade in pathogen spread and the isolation of Mongolia from South Asian countries. In conclusion, this survey reveals the distinctive honey bee pathosphere of Mongolia, which offers opportunities for exciting future research.

Relevance: 30.00%

Abstract:

Myxococcus xanthus is a Gram-negative soil bacterium that undergoes multicellular development when high-density cells are starved on a solid surface. Expression of the 4445 gene, predicted to encode a periplasmic protein, commences 1.5 h after the initiation of development and requires starvation and high-density conditions. Addition of crude or boiled supernatant from starving high-density cells restored 4445 expression to starving low-density cells. Addition of L-threonine or L-isoleucine to starving low-density cells also restored 4445 expression, indicating that the high-density signaling activity present in the supernatant might be composed of extracellular amino acids or small peptides. To investigate the circuitry integrating these starvation and high-density signals, the cis- and trans-acting elements controlling 4445 expression were identified. The 4445 transcription start site was determined by primer extension analysis to be 58 bp upstream of the predicted translation start site. The promoter region contained a consensus sequence characteristic of extracytoplasmic function (ECF) sigma factor-dependent promoters, suggesting that 4445 expression might be regulated by an ECF sigma factor-dependent pathway; such pathways are known to respond to envelope stresses. The small size of the minimum regulatory region, identified by 5′-end deletion analysis as being only 66 bp upstream of the transcription start site, suggests that RNA polymerase could be the sole direct regulator of 4445 expression. To identify trans-acting negative regulators of 4445 expression, a strain containing a 4445-lacZ fusion was mutagenized using the Himar1-tet transposon. The four transposon insertions characterized mapped to an operon encoding a putative ECF sigma factor, ecfA; an anti-sigma factor, reaA; and a negative regulator, reaB. The reaA and reaB mutants expressed 4445 during growth and development at levels almost 100-fold higher than wild type, indicating that these genes encode negative regulators. The ecfA mutant expressed 4445-lacZ at basal levels, indicating that ecfA is a positive regulator. High Mg2+ concentrations over-stimulated this ecfA pathway, possibly due to the depletion of exopolysaccharides and assembled type IV pili. These data indicate that the ecfA operon encodes a new regulatory stress pathway that integrates and transduces starvation and cell-density cues during early development and is also responsive to cell-surface alterations.

Relevance: 30.00%

Abstract:

I sought to examine the relationship between public approval of the president and his subsequent behavior. Specifically, I looked at the relationship between public approval and the use of signing statements, along with their usage following the 2006 outcry against President Bush's use of them.

Relevance: 30.00%

Abstract:

Detailed information about sediment properties and microstructure can be provided through the analysis of digital ultrasonic P wave seismograms recorded automatically during full waveform core logging. The physical parameter which predominantly affects elastic wave propagation in water-saturated sediments is the P wave attenuation coefficient. The related sedimentological parameter is the grain size distribution. A set of high-resolution ultrasonic transmission seismograms (ca. 50-500 kHz), which indicate downcore variations in grain size by their signal shape and frequency content, is presented. Layers of coarse-grained foraminiferal ooze can be identified by highly attenuated P waves, whereas almost unattenuated waves are recorded in fine-grained areas of nannofossil ooze. Color-encoded pixel graphics of the seismograms and instantaneous frequencies present full waveform images of the lithology and attenuation. A modified spectral difference method is introduced to determine the attenuation coefficient and its power law α = k·f^n. Applied to synthetic seismograms derived using a "constant Q" model, even low attenuation coefficients can be quantified. A downcore analysis gives an attenuation log which ranges from ca. 700 dB/m at 400 kHz and a power of n = 1-2 in coarse-grained sands to a few decibels per meter and n ≈ 0.5 in fine-grained clays. A least squares fit of a second-degree polynomial describes the mutual relationship between the mean grain size and the attenuation coefficient. When it is used to predict the mean grain size, an almost perfect coincidence with the values derived from sedimentological measurements is achieved.
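
The following is a minimal sketch of the general spectral-ratio idea behind such attenuation estimates, not the paper's modified method: the log spectral ratio between a reference window and a window attenuated over a known path length yields α(f) in dB/m, and the power law α = k·f^n is then fitted in log-log space. The function names, window handling and band limits are illustrative assumptions.

```python
import numpy as np

def attenuation_spectrum(ref, att, dx_m, fs_hz, fmin=50e3, fmax=500e3):
    """Attenuation alpha(f) in dB/m from the log spectral ratio of a
    reference trace and a trace propagated dx_m metres further."""
    f = np.fft.rfftfreq(len(ref), d=1.0 / fs_hz)
    a_ref = np.abs(np.fft.rfft(ref))
    a_att = np.abs(np.fft.rfft(att))
    band = (f >= fmin) & (f <= fmax)
    alpha = 20.0 * np.log10(a_ref[band] / a_att[band]) / dx_m
    return f[band], alpha

def fit_power_law(f, alpha):
    """Least-squares fit of alpha = k * f**n in log-log coordinates
    (valid where alpha > 0, i.e. genuine attenuation)."""
    n, log_k = np.polyfit(np.log(f), np.log(alpha), 1)
    return np.exp(log_k), n
```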

Relevance: 30.00%

Abstract:

Public participation is an integral part of Environmental Impact Assessment (EIA) and, as such, has been incorporated into regulatory norms. Assessment of the effectiveness of public participation has remained elusive, however. This is partly due to the difficulty of identifying appropriate effectiveness criteria. This research uses Q methodology to discover and analyze stakeholders' social perspectives on the effectiveness of EIAs in the Western Cape, South Africa. It considers two case studies (the Main Road and Saldanha Bay EIAs) for contextual participant perspectives on effectiveness based on experience. It further considers the more general opinion of provincial consent regulator staff at the Department of Environmental Affairs and the Department of Planning (DEA&DP). Two main themes of investigation are drawn from the South African National Environmental Management Act (NEMA) imperative for effectiveness: firstly, the participation procedure, and secondly, the stakeholder capabilities necessary for effective participation. Four theoretical frameworks drawn from planning, politics and EIA theory are adapted to public participation and used to triangulate the analysis and discussion of the revealed social perspectives. They consider citizen power in deliberation, Habermas' preconditions for the Ideal Speech Situation (ISS), a Foucauldian perspective on knowledge, power and politics, and a Capabilities Approach to public participation effectiveness. The empirical evidence from this research shows that the capacity and contextual constraints faced by participants demand the legislative imperatives for effective participation set out in the NEMA. The implementation of effective public participation has been shown to be a complex, dynamic and sometimes nebulous practice. The functional level of participant understanding of the process was found to be significantly wide-ranging, with the consequence of unequal and dissatisfied stakeholder engagements. Furthermore, the considerable variance in stakeholder capabilities in the South African social context resulted in inequalities in deliberation. The social perspectives revealed significant differences in participant experience in terms of citizen power in deliberation. The ISS preconditions are highly contested in both the Saldanha EIA case study and the DEA&DP social perspectives. Only one Main Road EIA case study social perspective considered Foucault's notion of governmentality a reality in EIA public participation. The freedom to control one's environment, based on a Capabilities Approach, is a highly contested notion. Although agreed with in principle, all of the social perspectives indicate that contextual and capacity realities constrain its realisation. This research has shown that Q method can be applied to EIA public participation in South Africa and, with the appropriate research or monitoring applications, could serve as a useful feedback tool to inform best-practice public participation.

Relevance: 30.00%

Abstract:

Illumination uniformity of a spherical capsule directly driven by laser beams has been assessed numerically. Laser facilities characterized by ND = 12, 20, 24, 32, 48 and 60 directions of irradiation, with each direction associated with a single laser beam or a bundle of NB laser beams, have been considered. The laser beam intensity profile is assumed super-Gaussian, and the calculations take into account beam imperfections such as power imbalance and pointing errors. The optimum laser intensity profile, which minimizes the root-mean-square deviation of the capsule illumination, depends on the values of the beam imperfections. Assuming that the NB beams are statistically independent, it is found that they provide a stochastic homogenization of the laser intensity associated with the whole bundle, reducing the errors associated with the whole bundle by the factor 1/√NB, which in turn improves the illumination uniformity of the capsule. Moreover, it is found that the uniformity of the irradiation is almost the same for all facilities and depends only on the total number of laser beams Ntot = ND × NB.
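
The stated reduction factor follows from standard error propagation for independent beams; a minimal derivation (our reconstruction, assuming NB identically distributed beam errors of standard deviation σ_beam per bundle) reads:

```latex
% Each bundle sums N_B independent beam contributions, so its relative
% error averages down as the square root of the number of beams:
\sigma_{\mathrm{bundle}} = \frac{\sigma_{\mathrm{beam}}}{\sqrt{N_B}},
\qquad
\sigma_{\mathrm{capsule}} \propto \frac{1}{\sqrt{N_D\,N_B}} = \frac{1}{\sqrt{N_{\mathrm{tot}}}}
```

The second proportionality is consistent with the abstract's conclusion that uniformity depends only on Ntot = ND × NB.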

Relevance: 30.00%

Abstract:

Surface treatments have recently been shown to play an active role in the electrical characteristics of AlGaN/GaN HEMTs, in particular during passivation processing [1-4]. However, the responsible mechanisms are partially unknown and further studies are needed. The effects of the power and duration of an N2 plasma pre-treatment, applied prior to SiN deposition using PE-CVD (plasma enhanced chemical vapour deposition), on GaN and AlGaN/GaN HEMTs have been investigated. The low-power (60 W) plasma pre-treatment was found to improve the electronic characteristics of GaN-based HEMT devices, independently of the time duration up to 20 min. In contrast, high-power (150 and 210 W) plasma pre-treatment showed detrimental effects on the electronic properties (Fig. 1), increasing the sheet resistance of the 2DEG, decreasing the 2DEG charge density in AlGaN/GaN HEMTs, reducing the transconductance and lowering the fT and fmax values by up to 40% with respect to the case using 60 W N2 plasma power. Although AFM (atomic force microscopy) results showed that the AlGaN and GaN surface roughness is not strongly affected by the N2 plasma, KFM (Kelvin force microscopy) surface analysis shows significant changes in the surface potential, which tends to increase as the plasma power rises. The overall results point to energetic ions inducing polarization-charge changes that dramatically affect the 2DEG charge density and the final characteristics of the HEMT devices. Therefore, we conclude that the AlGaN surface is strongly sensitive to the N2 plasma power conditions, which turn out to be a key factor in achieving good surface preparation prior to SiN passivation.

Relevance: 30.00%

Abstract:

To date, the majority of quality controls performed at PV plants are based on the measurement of a small sample of individual modules. Consequently, there is very little representative data on the real Standard Test Conditions (STC) power output values for PV generators. This paper presents the power output values for more than 1300 PV generators having a total installed power capacity of almost 15.3 MW. The values were obtained by the INGEPER-UPNA group, in collaboration with the IES-UPM, through a study to monitor the power output of a number of PV plants from 2006 to 2009. This work has made it possible to determine, amongst other things, the power dispersion that can be expected amongst generators made by different manufacturers, amongst generators made by the same manufacturer but comprising modules of different nameplate ratings and also amongst generators formed by modules with the same characteristics. The work also analyses the STC power output evolution over time in the course of this 4-year study. The values presented here could be considered to be representative of generators with fault-free modules.

Relevance: 30.00%

Abstract:

This PhD thesis mainly concentrates on countermeasure techniques related to the Side Channel Attack (SCA), which has been the subject of academic investigation for some 17 years. The related research has seen remarkable growth in the past decades, while the design of solid and efficient protection curiously remains an open research topic, in which more reliable initiatives are required for the protection of personal, enterprise and national data. The earliest documented usage of secret code can be traced back to around 1700 B.C., when hieroglyphs in ancient Egypt were scribed in inscriptions. Information security has always been a serious concern in diplomatic and military intelligence transmission. With the rapid evolution of modern communication techniques, cryptographic solutions were first incorporated into electronic signals to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of contexts transmitted over insecure cable or wireless channels. Given the restricted computation power available before the computer era, simple encryption tricks were practically sufficient to conceal information. However, algorithmic vulnerabilities could be exploited to restore the encoding rules with affordable effort. This fact motivated the development of modern cryptography, aiming at guarding information systems with complex and advanced algorithms. The appearance of computers has greatly pushed forward the invention of robust cryptography, which offers efficient resistance by relying on highly strengthened computing capabilities. Likewise, advanced cryptanalysis has in turn driven computing technologies. Nowadays, the information world has become a crypto world, with pervasive crypto solutions protecting every field. These approaches are strong because of the optimized merger of modern mathematical theory and effective hardware practice, making it possible to implement cryptographic theory on various platforms (microprocessors, ASICs, FPGAs, etc.). Security needs from industry are the major driving metrics in electronic design, promoting the construction of high-performance systems without sacrificing security. Yet a vulnerability in practical implementation found by Prof. Paul Kocher et al. in 1996 implies that modern digital circuits are inherently vulnerable to an unconventional attack approach, which has since been named the side-channel attack after its analysis source. Critical suspicion of theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. Specifically, digital circuits typically consist of a great number of essential logic elements (such as MOS - Metal Oxide Semiconductor - cells), built upon a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably results in characteristic physical emanations that can be measured and correlated with internal circuit behavior. SCAs can be used to reveal confidential data (e.g., crypto keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs and smart cards. Using various means of comparison between the predicted leakage quantity and the measured leakage, secrets can be reconstructed at a much lower expense of time and computation.
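
As a hedged, self-contained sketch of the correlation-based reconstruction just described (Correlation Power Analysis with a Hamming-weight leakage model over simulated traces; the random byte substitution below stands in for the real AES S-box, and the trace counts and noise levels are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the AES S-box: any fixed non-linear byte substitution
# illustrates the principle; a real attack uses the actual AES table.
SBOX = rng.permutation(256)
HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

def simulate_traces(key_byte, n_traces=2000, noise=1.0):
    """Hypothetical measurements: leakage = HW(SBOX[pt ^ key]) + noise."""
    pts = rng.integers(0, 256, n_traces)
    leakage = HW[SBOX[pts ^ key_byte]] + rng.normal(0.0, noise, n_traces)
    return pts, leakage

def cpa_recover(pts, leakage):
    """Correlate the measured leakage with the model for every key guess;
    the correct guess maximizes the absolute correlation."""
    corr = np.empty(256)
    for guess in range(256):
        model = HW[SBOX[pts ^ guess]]
        corr[guess] = abs(np.corrcoef(model, leakage)[0, 1])
    return int(np.argmax(corr))

pts, leakage = simulate_traces(key_byte=0x3A)
print(hex(cpa_recover(pts, leakage)))  # -> 0x3a
```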
More precisely, SCA encompasses a wide range of attack types, typically analyses of power consumption or electromagnetic (EM) radiation. Both rely on statistical analyses and hence require numerous samples. Crypto algorithms are not intrinsically fortified with SCA resistance; because of the severity of the threat, much attention must be paid during implementation to assembling countermeasures that camouflage the leakage via "side channels". Countermeasures against SCA are evolving along with the development of attack techniques. The physical nature of the leakage requires countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic vectors. Extrinsic countermeasures are executed to confuse the attacker by adding noise or misalignment to the internal activity. Comparatively, intrinsic countermeasures are built into the algorithm itself, modifying the implementation to minimize the measurable leakage, or even to make it no longer measurable. Hiding and Masking are two typical techniques in this category. Concretely, masking applies at the algorithmic level, altering the sensitive intermediate values with a mask in reversible ways. Unlike linear operations, the non-linear operations that widely exist in modern cryptography are difficult to mask. The hiding method, proven to be an effective counter-solution, mainly refers to dual-rail logic, which is specially devised to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from describing the attack methodologies, effort has also been dedicated to the logic prototype, mounting extensive security investigations of logic-level countermeasures. A characteristic of SCA resides in the format of the leakage sources. The typical side-channel attack concerns power-based analysis, where the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential leakage sources. Hence, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro units, such as basic logic gates, I/O ports and routings. Vendor-provided EDA tools manipulate the logic from a higher behavioral level, rather than the lower gate level where side-channel leakage is generated. Thus, classical implementations barely satisfy these needs and inevitably stunt the prototype. In this case, a customized and flexible design scheme needs to be devised. This thesis profiles an innovative logic style to counter SCA, which mainly addresses three major aspects: I. The proposed logic is based on a hiding strategy over a gate-level dual-rail style to dynamically balance the side-channel leakage from the lower circuit layer; II. The logic exploits architectural features of modern FPGAs to minimize the implementation expense; III. It is supported by a set of assistant custom tools, incorporated into the generic FPGA design flow, to perform circuit manipulations in an automatic manner. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical implementation on Xilinx FPGA families, while the methodologies and tools are flexible enough to be extended to a wide range of applications where rigid and sophisticated gate or routing constraints are desired. In this thesis a great effort is made to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated with selected and widely used crypto algorithms, for thorough and fair evaluation with respect to
prior solutions. All the proposals are effectively verified by security experiments. The presented research work attempts to solve the implementation difficulties: in essence, a customized execution toolkit for modern FPGA systems is developed to work together with the generic FPGA design flow to create the innovative dual-rail logic. A new approach in the crypto security area is constructed to obtain customization, automation and flexibility in low-level circuit prototyping, with fine granularity in intractable routings. The main contributions of the presented work are summarized next: Precharge Absorbed-DPL logic: using netlist conversion to reserve free LUT inputs to execute the Precharge and Ex signals in a dual-rail logic style. A row-crossed interleaved placement method with identical routing pairs in dual-rail networks, which helps to increase the resistance against selective EM measurement and mitigate the impacts of process variations. Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting nets; (b) to detect and repair asymmetric nets; (c) to be used in other logic styles where strict network control is required in Xilinx scenarios. A customized correlation analysis testbed for EM and power attacks, including the platform construction, measurement method and attack analysis. A timing-analysis-based method for quantifying the security grades. A methodology of security partitioning of complex crypto systems for reducing the protection cost. A proof-of-concept self-adaptive heating system to mitigate the electrical impacts of process variations in a dynamic dual-rail compensation manner. The thesis chapters are organized as follows: Chapter 1 discusses the side-channel attack fundamentals, covering theoretic basics, analysis models, platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks. In this chapter, a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates the implementation challenges of generic dual-rail styles. A customized design flow to solve the implementation problems is described, along with a self-developed automatic implementation toolkit, for mitigating the design barriers and facilitating the processes. Chapter 4 elaborates the tool specifics and construction details. The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and perspectives for future work are included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next: Chapter 1 introduces the hardware implementation testbed and side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm - a commercially and extensively used Advanced Encryption Standard (AES) module; (c) the essentials of side-channel methods.
These reveal the dissipation leakage correlated with internal behavior, and the method to recover the relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) the setups of the power/EM testing platforms used throughout the thesis work are given. The content of this thesis expands and deepens from Chapter 2, which covers several aspects. First, the protection principle of dynamic compensation of the generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed, detailing the logic protocol and an implementation case study. Third, a couple of custom workflows are shown for realizing the rail conversion. Meanwhile, the technical definitions involved in manipulating the LUT-level netlist are clarified. A brief discussion of the batched process is given in the final part. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is degraded by the implementation barriers of conventional EDA tools. In the studied FPGA scenario, problems are discussed concerning the dual-rail format, parasitic impacts, technological bias and implementation feasibility. From these elaborations, two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is validated with a series of initiatives, from structural features to implementation methods. Furthermore, a self-adaptive heating system is depicted and applied to a dual-core logic, intended to alternately adjust the local temperature to balance, in real time, the negative impacts of silicon technological bias. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-P&R ncd circuit (an unreadable binary counterpart of the XDL) converted to the Xilinx XDL format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for dual-rail logic, for both separate and interleaved placement. This chapter particularly specifies the technical essentials supporting the implementations on Xilinx devices and the flexibility to extend the toolkit to other applications. Chapter 5 focuses on the case studies used to validate the security grades of the proposed logic style built with the proposed toolkit. Comprehensive implementation techniques are discussed: (a) the placement impacts of using the proposed toolkit are discussed, and different execution schemes, considering the global optimization of security and cost, are verified with experiments so as to find the optimized placement and repair schemes; (b) security validations are realized with correlation and timing methods; (c) a systematic method is applied to a BCDL structured module to validate the routing impact on security metrics; (d) preliminary results using the self-adaptive heating system against process variation are given; (e) a practical application of the proposed toolkit to a complete cipher design is introduced. Chapter 6 includes the general summary of the complete work presented in this thesis.
Finally, a brief perspective on future work is drawn, which might expand the potential utilization of the thesis contributions to implementation domains beyond cryptography on FPGAs.

Relevance: 30.00%

Abstract:

Fitness is a variable that is gaining prominence, especially from the health perspective. The improvement in quality of life experienced in recent years in developed societies brings greater life expectancy, so more and more people are living longer.
This population of people over 60 years of age, an almost forgotten group from the point of view of scientific research in the field of physical activity and sport, is becoming increasingly important, with the main aim of helping to achieve the saying "do not only add years to life, but also add life to years". The principal aim of the current thesis was to assess physical fitness levels in Spanish elderly people over 65 years, analyzing the relationship between physical fitness, its determinants, and other aspects of health such as body composition and cognitive status. In order to establish further public health policies in relation to physical activity and active ageing, it is necessary to identify the starting physical fitness levels of the Spanish elderly population and their determinants. The work is based on data from the EXERNET multi-center study ("Multi-center Study for the Evaluation of Fitness levels and their relationship to Healthy Lifestyles in non-institutionalized Spanish elderly") and on data from two studies conducted in institutionalized elderly people: a total of 3136 non-institutionalized elderly from 6 regions of Spain, and 153 institutionalized elderly in nursing homes of Madrid. The main outcomes of this thesis are: a) sex- and age-specific physical fitness normative values and percentile curves for independent and non-institutionalized Spanish elderly were established; b) greater physical fitness was present in elderly men than in women, except for the flexibility test, and physical fitness tended to decrease in both sexes as age increased; c) lower levels of functional fitness were associated with increased perceived problems; d) the minimum functional fitness level at which older adults perceive problems in their activities of daily living (ADLs) is similar for both sexes; e) higher levels of physical fitness were associated with a reduced risk of suffering sarcopenic obesity and with better perceived health among the elderly; f) the elderly with sarcopenic obesity have lower physical functioning than healthy counterparts; g) higher strength values were associated with better cognitive status, cognitive status being the variable that most influences strength deterioration, even more than sex and age.

Relevance: 30.00%

Abstract:

In this paper the power-frequency control of hydropower plants with long penstocks is addressed. In such a configuration the effects of pressure waves cannot be neglected, and therefore commonly used criteria for the adjustment of PID governors would not be appropriate. A second-order Π model of the turbine-penstock, based on a lumped-parameter approach, is considered. A correction factor is introduced in order to approximate the model frequency response to the continuous case in the frequency interval of interest. Using this model, several criteria are analysed for adjusting the PI governor of a hydropower plant operating in an isolated system, and practical criteria for adjusting the PI governor are given. The results are applied to a real case of a small island where the objective is to achieve 100% renewable generation (wind and hydro). Frequency control is assumed to be provided exclusively by the hydropower plant. It is verified that the usual criterion for tuning the PI controller of isolated hydro plants gives poor results; however, with the newly proposed adjustment, the time response is considerably improved.
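
As a hedged illustration of the kind of analysis involved (a sketch, not the paper's model or tuning criteria): the snippet below closes a PI governor around the classic first-order non-minimum-phase hydro turbine approximation using the third-party python-control package. The water starting time, inertia, damping and PI gains are invented for illustration, whereas the paper uses a second-order Π penstock model with a frequency-response correction factor.

```python
import control as ct

Tw, H, D = 1.5, 4.0, 1.0    # water starting time [s], inertia, damping (assumed)
Kp, Ki = 1.2, 0.3           # illustrative PI gains, not the paper's criteria

turbine = ct.tf([-Tw, 1], [0.5 * Tw, 1])  # ideal turbine: (1 - Tw*s)/(1 + Tw*s/2)
rotor = ct.tf([1], [2 * H, D])            # swing dynamics: 1/(2*H*s + D)
pi_gov = ct.tf([Kp, Ki], [1, 0])          # PI governor: Kp + Ki/s

# Closed loop from frequency reference to frequency deviation.
closed_loop = ct.feedback(ct.series(pi_gov, turbine, rotor), 1)

t, y = ct.step_response(closed_loop)
print(f"final value ~ {y[-1]:.3f}")       # integral action -> zero steady-state error
```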

Relevance: 30.00%

Abstract:

Voice pathologies have recently become a social problem of some concern. Pollution in cities, habits such as smoking, the use of air conditioning and so on all contribute to it. The problem is more relevant for professionals who use their voice frequently: speakers, singers, teachers, actors, telemarketers, etc. Therefore, techniques capable of drawing clinical conclusions from a voice sample recorded with a microphone are of particular interest for diagnosis, as opposed to invasive ones involving exploration with laryngoscopes, fiberscopes or video endoscopes, which are much less comfortable for patients. Voice quality analysis has come a long way in a relatively short period of time.
In regard to the diagnosis of voice diseases, we have gone in the last fifteen years from working primarily with parameters extracted from the voice signal (in both the time and frequency domains) and with scales built from subjective assessments by experts, to working also with parameters derived from estimates of the glottal source. The importance of using the glottal source lies, broadly speaking, in the fact that this signal is directly linked to the state of the speaker's laryngeal structure. Unlike the voice signal (phonated speech), the glottal source, if conveniently reconstructed using adaptive lattices, is generally less influenced by the vocal tract. As is well known, the vocal tract is related mainly to the articulation of the spoken message, and its influence complicates the detection of voice pathology; in the reconstructed glottal source that influence has been almost completely removed. The estimates of the glottal source have been obtained through inverse filtering techniques developed by our research group. We have also deepened our understanding of the nature of the glottal signal: we are able to decompose it and relate it to biomechanical parameters of the vocal folds themselves, obtaining estimates of quantities such as the mass, energy loss, or elasticity of the body and cover of the vocal fold, among others. From the components of the glottal source also arise the so-called biometric parameters, related to the shape of the signal, which in themselves constitute a biometric signature of the individual. We also work with temporal parameters, related to the different stages observed in the glottal signal during a cycle of phonation, and, finally, with classical perturbation and energy parameters. In short, we now have a considerable number of glottal parameters forming a multidimensional statistical basis, intended to discriminate people with pathological or dysphonic voices from those with healthy or normophonic voices. This thesis addresses several issues. First, these new parameters must be analysed carefully, so we offer a complete statistical description of them. We also study the distribution of the parameters against criteria such as statistical normality, paying special attention to the differences between the distributions of healthy subjects and those of subjects with voice pathology. To reach these goals we use different statistical techniques: descriptive statistics and diagrams, normality tests, and parametric and nonparametric hypothesis tests contrasting the group of healthy subjects with the group of subjects presenting a voice-related pathology. In addition, we are interested in finding statistical relationships between the parameters, in order to eliminate possible redundancies in the model, reduce the dimensionality of the problem, and rank the parameters by their discriminatory power for the pathological/healthy criterion. To this end, statistical techniques such as Bivariate Linear Correlation and Factor Analysis based on Principal Components are applied. Finally, we use the well-known classification technique of Discriminant Analysis, applied to different combinations of parameters and factors, to determine which combinations offer the most promising success rates.
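As an illustration only, a pipeline of the kind just described could be sketched with standard scipy/scikit-learn tools as follows. The data here are synthetic stand-ins, and the group sizes, parameter count, and retained-component count are placeholders, not the thesis database or its settings.

    import numpy as np
    from scipy import stats
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the glottal parameter matrix: 200 subjects x 12
    # parameters, with a shifted distribution for the pathological group.
    X_healthy = rng.normal(0.0, 1.0, size=(100, 12))
    X_pathol = rng.normal(0.6, 1.2, size=(100, 12))
    X = np.vstack([X_healthy, X_pathol])
    y = np.array([0] * 100 + [1] * 100)   # 0 = healthy, 1 = pathological

    # 1) Normality check per parameter (Shapiro-Wilk) to choose between
    #    parametric and nonparametric group contrasts.
    normal = [stats.shapiro(X[:, j]).pvalue > 0.05 for j in range(X.shape[1])]

    # 2) Healthy-vs-pathological contrast per parameter: t-test where the
    #    parameter looks normal, Mann-Whitney U otherwise.
    pvals = [
        (stats.ttest_ind if normal[j] else stats.mannwhitneyu)(
            X_healthy[:, j], X_pathol[:, j]
        ).pvalue
        for j in range(X.shape[1])
    ]
    print("parameters with significant group differences:",
          sum(p < 0.05 for p in pvals))

    # 3) Bivariate linear correlation between parameters (redundancy check).
    corr = np.corrcoef(X, rowvar=False)

    # 4) Factor extraction via principal components, then discriminant
    #    analysis on the retained factors, scored by cross-validation.
    Xf = PCA(n_components=6).fit_transform(X)
    acc = cross_val_score(LinearDiscriminantAnalysis(), Xf, y, cv=10).mean()
    print(f"cross-validated detection rate (synthetic data): {acc:.2%}")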
To perform the experiments we used a balanced and robust database consisting of two hundred speakers, one hundred male and one hundred female, with an equally balanced proportion of subjects with and without vocal pathology. A computer application designed to carry out the collection of samples is also presented in this thesis. The different statistical analyses performed allow us to determine which parameters contribute most decisively to the detection of vocal pathology; some of the analyses also allow us to rank the parameters by their importance for detection. We further conclude that it is sometimes desirable to reduce the dimensionality of the parameter set in order to improve detection rates. Finally, the detection rates themselves are perhaps the most important conclusion of the work. All analyses in this work are performed separately for each gender, in agreement with previous studies showing that male and female voices should be treated independently because of the organic differences observed between them. With regard to the detection of vocal pathology, however, we also consider the possibility of working with the unified database, verifying that the success rates obtained are also high.
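To make the per-gender versus unified-database comparison concrete, a minimal sketch such as the following (again on synthetic placeholder features, not the real database) fits a discriminant classifier per gender and on the pooled data and compares cross-validated detection rates.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    def make_group(mean_shift, n=100, d=12):
        """Synthetic glottal-parameter block: half healthy, half pathological."""
        Xh = rng.normal(0.0, 1.0, size=(n // 2, d))
        Xp = rng.normal(mean_shift, 1.2, size=(n // 2, d))
        return np.vstack([Xh, Xp]), np.array([0] * (n // 2) + [1] * (n // 2))

    # Separate feature blocks per gender, with slightly different statistics
    # to mimic the organic differences mentioned above.
    X_f, y_f = make_group(0.7)   # female subjects
    X_m, y_m = make_group(0.5)   # male subjects

    for name, X, y in [("female", X_f, y_f), ("male", X_m, y_m),
                       ("unified", np.vstack([X_f, X_m]),
                        np.concatenate([y_f, y_m]))]:
        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=10).mean()
        print(f"{name:8s} detection rate (synthetic): {acc:.2%}")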