920 results for Modified reflected normal loss function
Abstract:
Each year, some two million people in the United Kingdom experience visual hallucinations. Infrequent, fleeting visual hallucinations, often around sleep, are a usual feature of life. In contrast, consistent, frequent, persistent hallucinations during waking are strongly associated with clinical disorders, in particular delirium, eye disease, psychosis, and dementia. Research interest in these disorders has driven a rapid expansion in investigatory techniques, new evidence, and explanatory models. In parallel, a move to generative models of normal visual function has resolved the theoretical tension between veridical and hallucinatory perceptions. From initial fragmented areas of investigation, the field has become increasingly coherent over the last decade. Controversies and gaps remain, but for the first time the shapes of possible unifying models are becoming clear, along with the techniques for testing these. This book provides a comprehensive survey of the neuroscience of visual hallucinations and the clinical techniques used to investigate them. It brings together the very latest evidence from cognitive neuropsychology, neuroimaging, neuropathology, and neuropharmacology, placing this within current models of visual perception. Leading researchers from a range of clinical and basic science areas describe visual hallucinations in their historical and scientific context, combining introductory information with up-to-date discoveries. They discuss results from the main investigatory techniques applied in a range of clinical disorders. The final section outlines future research directions, investigating the potential for new understandings of veridical and hallucinatory perceptions and for treatments of problematic hallucinations. Fully comprehensive, this is an essential reference for clinicians in the fields of the psychology and psychiatry of hallucinations, as well as for researchers in departments, research institutes, and libraries. It has strong foundations in neuroscience, cognitive science, optometry, psychiatry, psychology, clinical medicine, and philosophy. With its lucid explanations and many illustrations, it is a clear resource for educators and advanced undergraduate and graduate students.
Abstract:
There is increasing evidence that the complement system plays an important role in diabetes and the development of diabetic vascular complications. In particular, mannan-binding lectin (MBL) levels are elevated in diabetes patients, and diabetes patients with diabetic nephropathy have higher MBL levels than diabetes patients with normal renal function. The MBL-associated serine proteases (MASPs) MASP-1, MASP-2, and MASP-3, and MBL-associated protein MAp44 have not yet been studied in diabetes patients. We therefore measured plasma levels of MASP-1, MASP-2, MASP-3, and MAp44 in 30 children with type 1 diabetes mellitus (T1DM) and 17 matched control subjects, and in 45 adults with T1DM and 31 matched control subjects. MASP-1 and MASP-2 levels were significantly higher in children and adults with T1DM than in their respective control groups, whereas MASP-3 and MAp44 levels did not differ between patients and controls. MASP-1 and MASP-2 levels correlated with HbA1c, and MASP levels decreased when glycaemic control improved. Since MASP-1 and MASP-2 have been shown to directly interact with blood coagulation, elevated levels of these proteins may play a role in the enhanced thrombotic environment and consequent vascular complications in diabetes.
Abstract:
Cisplatin, a major antineoplastic drug used in the treatment of solid tumors, is a known nephrotoxin. This retrospective cohort study evaluated the prevalence and severity of cisplatin nephrotoxicity in 54 children and its impact on height and weight. We recorded the weight, height, serum creatinine, and electrolytes in each cisplatin cycle and after 12 months of treatment. Nephrotoxicity was graded as follows: normal renal function (Grade 0); asymptomatic electrolyte disorders, including an increase in serum creatinine up to 1.5 times the baseline value (Grade 1); need for electrolyte supplementation for less than 3 months and/or an increase in serum creatinine of 1.5 to 1.9 times baseline (Grade 2); an increase in serum creatinine of 2 to 2.9 times baseline or need for electrolyte supplementation for more than 3 months after treatment completion (Grade 3); and an increase in serum creatinine of 3 or more times baseline or renal replacement therapy (Grade 4). Nephrotoxicity was observed in 41 subjects (75.9%): Grade 1 in 18 patients (33.3%), Grade 2 in 5 patients (9.2%), and Grade 3 in 18 patients (33.3%). None had Grade 4 nephrotoxicity. Patients with nephrotoxicity were younger and received a higher cisplatin dose; they also showed impaired longitudinal growth, manifested as a statistically significant worsening of the height Z score at 12 months after treatment. We fitted a multiple logistic regression model with the delta of the height Z score (baseline minus 12 months) as the dependent variable in order to adjust for the main confounding variables: germ cell tumor, cisplatin total dose, serum magnesium level at 12 months, gender, and nephrotoxicity grade. Patients with Grade 1 nephrotoxicity were at higher risk of not growing (OR 5.1, 95% CI 1.07-24.3, P=0.04). The cisplatin total dose had a significant negative relationship with magnesium levels at 12 months (Spearman r=-0.527, P<0.001).
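The grading rubric above is essentially a small decision rule. A minimal sketch of how it could be encoded, assuming creatinine is expressed as a ratio to baseline and supplementation duration in months (the function and parameter names are illustrative, not from the study):

```python
def nephrotoxicity_grade(creat_ratio: float,
                         supplement_months: float = 0.0,
                         electrolyte_disorder: bool = False,
                         renal_replacement: bool = False) -> int:
    """Grade cisplatin nephrotoxicity (0-4) following the abstract's rubric.

    creat_ratio          -- serum creatinine divided by its baseline value
    supplement_months    -- duration of electrolyte supplementation (months)
    electrolyte_disorder -- asymptomatic electrolyte disorder present
    renal_replacement    -- renal replacement therapy required
    """
    if renal_replacement or creat_ratio >= 3.0:
        return 4  # >=3x baseline or renal replacement therapy
    if creat_ratio >= 2.0 or supplement_months > 3.0:
        return 3  # 2-2.9x baseline or supplementation for >3 months
    if creat_ratio >= 1.5 or 0.0 < supplement_months < 3.0:
        return 2  # 1.5-1.9x baseline and/or supplementation <3 months
    if electrolyte_disorder or creat_ratio > 1.0:
        return 1  # asymptomatic disorder, creatinine up to 1.5x baseline
    return 0      # normal renal function
```

For example, a child whose creatinine rose to 1.7 times baseline with no supplementation would fall under Grade 2.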
Abstract:
BACKGROUND Phosphate imbalances or disorders carry a high risk of morbidity and mortality in patients with chronic kidney disease. It is unknown whether this finding extends to mortality in patients presenting at an emergency room with or without normal kidney function. METHODS AND PATIENTS This cross-sectional analysis included all emergency room patients between 2010 and 2011 at the Inselspital Bern, Switzerland. A multivariable Cox regression model was applied to assess the association between phosphate levels and in-hospital mortality up to 28 days. RESULTS 22,239 subjects were screened for the study. Plasma phosphate concentrations were measured on hospital admission in 2,390 patients, who were included in the analysis. 3.5% of the 480 patients with hypophosphatemia and 10.7% of the 215 patients with hyperphosphatemia died. In univariate analysis, phosphate levels were associated with mortality, age, diuretic therapy, and kidney function (all p<0.001). In the multivariable Cox regression model, hyperphosphatemia (OR 3.29, p<0.001) was a strong independent risk factor for mortality. Hypophosphatemia was not associated with mortality (p>0.05). CONCLUSION Hyperphosphatemia is associated with 28-day in-hospital mortality in an unselected cohort of patients presenting at an emergency room.
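As a rough sketch of the kind of survival model described, using the lifelines library (the data frame, file name, and covariate coding here are illustrative; the abstract does not specify them):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical admission data: follow-up time (days, capped at 28),
# death indicator, phosphate categories, and the covariates named above.
df = pd.read_csv("er_admissions.csv")
df["followup_days"] = df["followup_days"].clip(upper=28)

cph = CoxPHFitter()
cph.fit(
    df[["followup_days", "died", "hyperphosphatemia", "hypophosphatemia",
        "age", "diuretic_therapy", "egfr"]],
    duration_col="followup_days",
    event_col="died",
)
cph.print_summary()  # hazard ratios, confidence intervals, p-values
```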
Abstract:
The movement of ions across specific channels embedded in the membrane of individual cardiomyocytes is crucial for the generation and propagation of the cardiac electric impulse. Emerging evidence over the past 20 years strongly suggests that the normal electric function of the heart is the result of dynamic interactions of membrane ion channels working in an orchestrated fashion as part of complex molecular networks. Such networks work together with exquisite temporal precision to generate each action potential and contraction. Macromolecular complexes play crucial roles in the transcription, translation, oligomerization, trafficking, membrane retention, glycosylation, post-translational modification, turnover, function, and degradation of all cardiac ion channels known to date. In addition, the accurate timing of each cardiac beat and contraction demands a comparable precision in the assembly and organization of sodium, calcium, and potassium channel complexes within specific subcellular microdomains, where physical proximity allows for prompt and efficient interaction. This review article, part of the Compendium on Sudden Cardiac Death, discusses the major issues related to the role of ion channel macromolecular assemblies in normal cardiac electric function and in the mechanisms of arrhythmias leading to sudden cardiac death. It describes how these issues are being addressed in the laboratory and in the clinic, which important questions remain unanswered, and what future research will be needed to improve knowledge and advance therapy.
Abstract:
BACKGROUND In recent years, the scientific discussion has focused on new strategies to enable a torn anterior cruciate ligament (ACL) to heal into mechanically stable scar tissue. Dynamic intraligamentary stabilization (DIS) was first performed in a pilot study of 10 patients. The purpose of the current study was to evaluate whether DIS would lead to similarly sufficient stability and good clinical function in a larger case series. METHODS Acute ACL ruptures were treated using an internal stabilizer, combined with anatomical repositioning of the torn bundles and microfracturing to promote self-healing. Clinical assessment (Tegner, Lysholm, IKDC, and visual analogue scale [VAS] patient satisfaction scores) and assessment of knee laxity were performed at 3, 6, 12, and 24 months. A one-sample design with a non-inferiority margin was chosen to compare the preoperative and postoperative IKDC and Lysholm scores. RESULTS 278 patients with a 6:4 male-to-female ratio were included. Average patient age was 31 years. Preoperative mean IKDC, Lysholm, and Tegner scores were 98.8, 99.3, and 5.1 points, respectively. The mean anteroposterior (AP) translation difference from the healthy contralateral knee was 4.7 mm preoperatively. After DIS treatment, the mean 12-month IKDC, Lysholm, and Tegner scores were 93.6, 96.2, and 4.9 points, respectively, and the mean AP translation difference was 2.3 mm. All these outcomes were significantly non-inferior to the preoperative or healthy contralateral values (p < 0.0001). Mean patient satisfaction was 8.8 (VAS 0-10). Eight ACL reruptures occurred, and 3 patients reported insufficient subjective stability of the knee at the end of the study period. CONCLUSIONS Anatomical repositioning, along with DIS and microfracturing, leads to clinically stable healing of the torn ACL in the large majority of patients. Most patients exhibited almost normal knee function, reported excellent satisfaction, and were able to return to their previous levels of sporting activity. Moreover, this strategy resulted in stable healing of all sutured menisci, which could lower the rate of osteoarthritic changes in the future. The present findings support the discussion of a new paradigm in ACL treatment based on preservation and self-healing of the torn ligament.
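The one-sample non-inferiority comparison can be sketched as follows; the scores are simulated and the margin is a placeholder, since the abstract reports neither the raw data nor the margin used:

```python
import numpy as np
from scipy import stats

# Simulated 12-month Lysholm scores, loosely matching the reported mean of 96.2.
rng = np.random.default_rng(0)
post_lysholm = rng.normal(loc=96.2, scale=5.0, size=278)

pre_mean = 99.3  # reported preoperative mean
margin = 5.0     # hypothetical non-inferiority margin

# H0: mean(post) <= pre_mean - margin  vs.  H1: mean(post) > pre_mean - margin
t_stat, p_value = stats.ttest_1samp(
    post_lysholm, popmean=pre_mean - margin, alternative="greater"
)
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4g}")
```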
Abstract:
This paper shows that optimal policy and consistent policy outcomes require the use of control-theory and game-theory solution techniques, respectively. While optimal policy and consistent policy often produce different outcomes even in a one-period model, we analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the central bank should adopt a loss function that differs from the social loss function. Carefully designing the central bank's loss function with consistent targets can harmonize optimal and consistent policy. This desirable result emerges from two observations. First, the social loss function reflects a normative process that does not necessarily prove consistent with the structure of the microeconomy. Thus, the social loss function cannot serve as a direct loss function for the central bank. Second, an optimal loss function for the central bank must depend on the structure of that microeconomy. In addition, this paper shows that control theory provides a benchmark for institution design in a game-theoretic framework.
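A standard way to see how an inconsistent target in the social loss function produces this tension is the textbook Barro-Gordon setup (the notation is illustrative, not the paper's):

```latex
% Social loss with an over-ambitious output target k\bar{y}, k > 1,
% subject to a surprise-inflation supply curve:
L^{s} = (\pi - \pi^{*})^{2} + \lambda\,(y - k\bar{y})^{2},
\qquad y = \bar{y} + \theta\,(\pi - \pi^{e}).
% Under discretion, the consistent outcome (\pi = \pi^{e}) carries an
% inflation bias proportional to the inconsistent part of the target:
\pi = \pi^{*} + \lambda\theta\,(k - 1)\,\bar{y}, \qquad y = \bar{y}.
```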
Abstract:
This paper shows that optimal policy and consistent policy outcomes require the use of control-theory and game-theory solution techniques, respectively. While optimal policy and consistent policy often produce different outcomes even in a one-period model, we analyze consistent policy and its outcome in a simple model, finding that the cause of the inconsistency with optimal policy traces to inconsistent targets in the social loss function. As a result, the social loss function cannot serve as a direct loss function for the central bank. Accordingly, we employ implementation theory to design a central bank loss function (mechanism design) with consistent targets, while the social loss function serves as a social welfare criterion. That is, with the correct mechanism design for the central bank loss function, optimal policy and consistent policy become identical. In other words, optimal policy proves implementable (consistent).
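Continuing the illustrative notation above, one minimal instance of such a mechanism design is to assign the bank a loss with the consistent target k = 1:

```latex
% Designed central bank loss:
L^{cb} = (\pi - \pi^{*})^{2} + \lambda\,(y - \bar{y})^{2}.
% Discretionary optimization now yields, at \pi = \pi^{e},
\pi = \pi^{*}, \qquad y = \bar{y},
% which coincides with the commitment outcome evaluated under L^{s}:
% the inflation bias disappears while output is unchanged.
```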
Abstract:
This paper examines four equivalent methods of optimal monetary policymaking: committing to the social loss function, using discretion with the central bank's long-run loss function, using discretion with its short-run loss function, and following monetary policy rules. All lead to optimal economic performance. The same performance emerges from these different policymaking methods because the central bank actually follows the same (or similar) policy rules. These objectives (the social loss function and the central bank's long-run and short-run loss functions) and monetary policy rules imply a complete regime for optimal policymaking. The central bank's long-run and short-run loss functions that produce the optimal policy with discretion differ from the social loss function. Moreover, the optimal policy rule emerges from the optimization of these different central bank loss functions.
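For a concrete sense of how the same optimal rule emerges from optimizing a (suitably designed) central bank loss function, consider the static New Keynesian illustration (again, notation not taken from the paper):

```latex
% Period loss and Phillips curve:
L = \pi^{2} + \lambda x^{2}, \qquad \pi = \kappa x + u.
% Minimizing L subject to the Phillips curve gives the targeting rule
x = -\frac{\kappa}{\lambda}\,\pi,
% the same first-order condition whichever of the equivalent
% policymaking methods is used to generate it in this static setting.
```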
Abstract:
Aortic aneurysms and dissections are the 15th most common cause of death in the United States. Genetic factors contribute to the pathogenesis of thoracic aortic aneurysms and dissections (TAAD). Currently, six loci and four genes have been identified for familial TAAD. Notably, mutations in the smooth muscle cell (SMC) contractile genes ACTA2 and MYH11 are responsible for 15% of familial TAAD, suggesting that proper SMC contraction is important for normal aortic function. We therefore hypothesized that mutations in other genes encoding SMC contractile proteins also cause familial TAAD. To test this hypothesis, we used a candidate gene approach to identify causative mutations in SMC contractile genes for familial TAAD. Sequencing DNA from 80 TAAD patients from unrelated families, we identified putative mutations in eight contractile genes. We chose myosin light chain kinase (MLCK) S1759P for further study for the following reasons: (1) serine 1759 is conserved between vertebrates and invertebrates; (2) S1759P is predicted by bioinformatics to be functionally deleterious; (3) low blood pressure is observed in SMC-selective MLCK-deficient mice. In the presence of Ca2+/calmodulin (CaM), MLCK, which contains CaM-binding and kinase domains, is activated to phosphorylate the myosin light chain, thereby initiating SMC contraction. The CaM-binding sequence of MLCK forms an α-helical structure required for CaM binding. MLCK serine 1759 is located within the CaM-binding domain, and S1759P is predicted to decrease the α-helix content of this domain. Hence, we hypothesized that MLCK mutations cause TAAD by disturbing CaM binding and MLCK activity. We further sequenced MLCK in DNA samples from an additional 86 probands with familial TAAD. Two more mutations, MLCK A1754T and R1480Stop, were identified, supporting the conclusion that MLCK mutations cause familial TAAD. To define whether the MLCK mutations disrupted CaM binding and MLCK activity, we performed co-immunoprecipitation and kinase assays. Decreased CaM binding and kinase activity were detected for A1754T and S1759P; moreover, R1480Stop is predicted to truncate the kinase and CaM-binding domains. We conclude that these MLCK mutations disrupt CaM binding and MLCK activity. Collectively, our study is the first to show that mutations in genes regulating SMC contraction cause TAAD. This finding further highlights the importance of SMC contraction in maintaining aortic function.
Abstract:
In the Pampean plain, degradation processes linked to surface water erosion take place that constrain agricultural and livestock activity. This work seeks to model sediment yield in a forested watershed of the northeastern Pampas. The methodology consists of applying a quantitative cartographic model developed on a geospatial basis with a Geographic Information System, supported by the Modified Universal Soil Loss Equation (MUSLE). A statistical validation analysis was carried out with field rainfall-microsimulator trials for a 30 mm/h rainfall with a two-year return period. The results obtained were georeferenced maps of each MUSLE factor, ranked by color intensity, which yield 33.77 Mg of sediment delivered at the watershed outlet, with a correlation coefficient of 0.94 and a Nash-Sutcliffe efficiency of 0.82. It is concluded that the cartographic model generated precise spatial information on the MUSLE components for a specific rainfall event. The application of the rainfall microsimulator provided measured sediment-yield values, achieving a high goodness of fit. Sediment yield in the watershed proved to be slight to nil.
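For reference, the MUSLE on which the cartographic model rests takes the standard form of Williams (1975), with Y the per-event sediment yield (Mg), Q the runoff volume (m³), q_p the peak discharge (m³/s), and K, LS, C, and P the usual USLE erodibility, topographic, cover, and practice factors:

```latex
Y = 11.8\,\left(Q \cdot q_{p}\right)^{0.56} K \cdot LS \cdot C \cdot P
```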
Abstract:
Damage models based on Continuum Damage Mechanics (CDM) explicitly include the coupling between damage and mechanical behavior and are therefore consistent with the definition of damage as a phenomenon with mechanical consequences. However, this kind of model is characterized by its complexity. Using the concept of lumped models, possible simplifications of the coupled models have been proposed in the literature to adapt them to the study of beams and frames. On the other hand, in most of these coupled models damage is associated only with the damage energy release rate, which is shown to be the elastic strain energy. Accordingly, damage is a function of the maximum amplitude of cyclic deformation but does not depend on the number of cycles, so low-cycle effects are not taken into account. Starting from the simplified model proposed by Flórez-López, the purpose of this paper is to present a formulation that takes into account the degradation produced not only by peak values but also by cumulative effects such as low-cycle fatigue. To this end, the classical damage dissipative potential based on the concept of damage energy release rate is modified using a fatigue function in order to include cumulative effects. The fatigue function is determined through parameters such as the cumulative rotation, the total rotation, and the number of cycles to failure. These parameters can be measured or identified physically through the characteristics of the reinforced concrete (RC) members. Thus the main advantage of the proposed model is the possibility of simulating low-cycle fatigue behavior without introducing parameters that lack a suitable physical meaning. The good performance of the proposed model is shown through a comparison between numerical and test results under cyclic loading.
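Schematically, and only as a generic illustration of the approach rather than the paper's exact equations, the modification amounts to scaling the damage criterion by a fatigue function:

```latex
% Classical damage criterion driven by the energy release rate G:
g(G, d) = G - R(d) \le 0.
% Modified criterion with a fatigue function F of the cumulative
% rotation \rho, the total rotation \phi, and the cycles to failure N_f:
g(G, d) = F(\rho, \phi, N_{f})\, G - R(d) \le 0.
```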
Abstract:
This is an account of some aspects of the geometry of Kähler affine metrics based on considering them as smooth metric measure spaces and applying the comparison geometry of Bakry-Émery Ricci tensors. Such techniques yield a version for Kähler affine metrics of Yau's Schwarz lemma for volume forms. By a theorem of Cheng and Yau, there is a canonical Kähler affine Einstein metric on a proper convex domain, and the Schwarz lemma gives a direct proof of its uniqueness up to homothety. The potential for this metric is a function canonically associated to the cone, characterized by the property that its level sets are hyperbolic affine spheres foliating the cone. It is shown that for an n-dimensional cone, a rescaling of the canonical potential is an n-normal barrier function in the sense of interior point methods for conic programming. It is explained also how to construct from the canonical potential Monge-Ampère metrics of both Riemannian and Lorentzian signatures, and a mean curvature zero conical Lagrangian submanifold of the flat para-Kähler space.
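In coordinates, a Kähler affine (Hessian) metric is the Hessian of a convex potential, and by the cited Cheng-Yau theorem the canonical potential solves a Monge-Ampère equation; schematically:

```latex
g = \nabla dF = \frac{\partial^{2} F}{\partial x^{i}\, \partial x^{j}}\, dx^{i}\, dx^{j},
\qquad
\det\!\left(\frac{\partial^{2} F}{\partial x^{i}\, \partial x^{j}}\right) = e^{2F},
```

with F tending to infinity at the boundary of the domain.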
Abstract:
The family of Boosting algorithms represents a type of classification and regression approach that has proven very effective in Computer Vision problems, such as the detection, tracking, and recognition of faces, people, deformable objects, and actions. The first and most popular algorithm, AdaBoost, was introduced in the context of binary classification. Since then, many works have been proposed to extend it to the more general multi-class, multi-label, cost-sensitive, etc. domains. Our interest is centered on extending AdaBoost to two problems in the multi-class field, considering it a first step for upcoming generalizations. In this dissertation we propose two Boosting algorithms for multi-class classification based on new generalizations of the concept of margin. The first of them, PIBoost, is conceived to tackle the multi-class problem by solving many binary sub-problems. We use a vectorial codification to represent class labels and a multi-class exponential loss function to evaluate classifier responses. This representation produces a set of margin values that provide a range of penalties for failures and rewards for successes. The stagewise optimization of this model introduces an asymmetric Boosting procedure whose costs depend on the number of classes separated by each weak learner. In this way the Boosting procedure takes class imbalances into account when building the ensemble. The resulting algorithm is a well-grounded method that canonically extends the original AdaBoost. The second algorithm proposed, BAdaCost, is conceived for multi-class problems endowed with a cost matrix. Motivated by the few cost-sensitive extensions of AdaBoost to the multi-class field, we propose a new margin that, in turn, yields a new loss function appropriate for evaluating costs. Since BAdaCost generalizes the SAMME, Cost-Sensitive AdaBoost, and PIBoost algorithms, we consider it a canonical extension of AdaBoost to this kind of problem. We additionally suggest a simple procedure to compute cost matrices that improve the performance of Boosting in standard and unbalanced problems. A set of experiments demonstrates the effectiveness of both methods against other relevant multi-class Boosting algorithms in their respective areas. In the experiments we use benchmark data sets from the Machine Learning community, first for minimizing classification errors and then for minimizing costs. In addition, we successfully applied BAdaCost to a segmentation task, a particular problem in the presence of imbalanced data. We conclude the thesis by justifying the horizon of future improvements encompassed by our framework, owing to its applicability and theoretical flexibility.
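A common form of the vectorial codification and multi-class exponential loss referred to above, as used in SAMME-type derivations (whether PIBoost adopts exactly these constants is not stated in the abstract), is:

```latex
% Margin-vector codification of a label among K classes:
y \in \mathbb{R}^{K}, \qquad
y_{c} = \begin{cases} 1 & \text{if } c \text{ is the true class},\\[2pt]
-\tfrac{1}{K-1} & \text{otherwise,} \end{cases}
% with the multi-class exponential loss on a response f(x) \in \mathbb{R}^{K}:
L\big(y, f(x)\big) = \exp\!\left(-\tfrac{1}{K}\, y^{\top} f(x)\right).
```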
Abstract:
This thesis compares and develops methodologies to improve the estimation of the design and extreme floods used in assessing the hydrologic safety of dams. First, the work focuses on fitting frequency laws to maximum peak flows and extrapolating them to high return periods. This has become a major issue, as the adoption of ever stricter standards of dam hydrologic safety involves very high design return periods whose estimation entails great uncertainty. Accordingly, it is important to incorporate all available techniques for the estimation of design peak flows in order to reduce this uncertainty. Selection of the statistical model (distribution function and fitting method) is also important, since its ability both to describe the sample and to make robust predictions of high-return-period quantiles must be guaranteed. To provide practical application of these methodologies, studies have been carried out on a national scale with the aim of determining the regionalization scheme that yields the best results for the annual maximum peak flows of Spanish basins, taking into account the length of the available records. The methodology starts with the delimitation of homogeneous regions, whose boundaries account for the physiographic and climatic characteristics of the basins and the variability of their statistical properties, followed by homogeneity testing. A statistical model for annual maximum peak flows is then selected with the best behaviour across the different regions of peninsular Spain, both in describing the sample data and in making robust predictions for the highest return periods. This selection is based, among other things, on the synthetic generation of data series by Monte Carlo simulation and on the statistical analysis of the results obtained from fitting distribution functions to these series under different hypotheses.
Secondly, the work addresses the relationship between peak flow and volume and the definition of design flood hydrographs based on it, which can be highly important for dams with large reservoir volumes. However, commonly used hydrologic procedures do not take the statistical dependence between these variables into account. A simple and robust method for characterizing this statistical dependence has been developed, representing the joint distribution function of peak flow and volume through the marginal distribution function of the peak flow and the conditional distribution function of the volume given the peak flow. The latter is determined by a log-normal distribution function fitted with a regional procedure. Practical application is proposed through a probabilistic estimation procedure based on the stochastic generation of a large number of hydrographs. Applying this procedure to dam hydrologic safety requires a proper interpretation of the return period concept for bivariate hydrologic variables. An interpretation is proposed in which the return period is understood as the inverse of the probability of exceeding a given reservoir level. When the return period is related to the hydrological variables in this way, the design flood hydrograph ceases to be a single hydrograph and becomes a family of hydrographs that generate the same maximum reservoir level, represented by a curve in the peak flow-volume plane. This family of design flood hydrographs depends on the dam itself, the peak flow-volume curves varying with, for example, the reservoir volume or the spillway length. Two case studies illustrate the application of the proposed procedure.
Finally, the work addresses the estimation of seasonal floods, which is essential for establishing reservoir operation rules and can also be fundamental for analysing the hydrologic safety of existing dams. However, seasonal flood estimation is complex and not yet fully resolved, and the procedures commonly used may present certain problems. The statistical method of partial duration series, or peaks over threshold, can be a valid alternative that solves these problems in cases where the floods in the different seasons are generated by the same type of event. A study has been carried out to verify whether the hypothesis of statistical homogeneity of seasonal peak-flow data holds in Spain. The seasonal periods most appropriate for the analysis have also been identified, which is highly relevant to guarantee correct results.
In addition, a simple procedure has been developed to set the threshold for data selection so that the independence of the data is guaranteed, one of the main difficulties in the practical application of partial duration series. Moreover, the practical application of seasonal frequency laws requires a correct interpretation of the seasonal return period. A criterion is proposed to determine seasonal return periods coherently with the annual return period and with an adequate distribution of probability among the seasons. Finally, a procedure for estimating seasonal peak flows is presented and illustrated through a case study.
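The core probabilistic construction described above can be summarized as follows (notation illustrative):

```latex
% Joint law of peak flow Q and volume V from marginal and conditional parts,
% with the conditional volume modelled as log-normal:
F_{Q,V}(q, v) = \int_{0}^{q} F_{V \mid Q}(v \mid s)\, dF_{Q}(s),
\qquad
V \mid Q = q \;\sim\; \mathrm{LogNormal}\big(\mu(q), \sigma(q)\big),
% and the return period of a maximum reservoir level z:
T(z) = \frac{1}{P(Z_{\max} > z)}.
```

All hydrographs (q, v) producing the same maximum reservoir level then lie on a single design curve in the peak flow-volume plane.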