974 results for Two-state Potts model
Abstract:
In this article, we perform an extensive study of flavor observables in a two-Higgs-doublet model with generic Yukawa structure (of type III). This model is interesting not only because it is the decoupling limit of the minimal supersymmetric standard model but also because of its rich flavor phenomenology, which allows for sizable effects not only in flavor-changing neutral-current (FCNC) processes but also in tauonic B decays. We examine the possible effects in flavor physics and constrain the model both from tree-level processes and from loop observables. The free parameters of the model are the heavy Higgs mass, tanβ (the ratio of vacuum expectation values) and the "nonholomorphic" Yukawa couplings ϵ^f_ij (f = u, d, ℓ). In our analysis we constrain the elements ϵ^f_ij in various ways: In a first step we give order-of-magnitude constraints on ϵ^f_ij from 't Hooft's naturalness criterion, finding that all ϵ^f_ij must be rather small unless the third generation is involved. In a second step, we constrain the Yukawa structure of the type-III two-Higgs-doublet model from tree-level FCNC processes (B_s,d→μ+μ−, K_L→μ+μ−, D̄0→μ+μ−, ΔF=2 processes, τ−→μ−μ+μ−, τ−→e−μ+μ− and μ−→e−e+e−) and observe that all flavor off-diagonal elements of these couplings, except ϵ^u_32,31 and ϵ^u_23,13, must be very small in order to satisfy the current experimental bounds. In a third step, we consider Higgs-mediated loop contributions to FCNC processes [b→s(d)γ, B_s,d mixing, K–K̄ mixing and μ→eγ], finding that ϵ^u_13 and ϵ^u_23 must also be very small, while the bounds on ϵ^u_31 and ϵ^u_32 are especially weak. Furthermore, considering the constraints from electric dipole moments, we obtain constraints on some parameters ϵ^u,ℓ_ij. Taking into account the constraints from FCNC processes, we study the size of possible effects in the tauonic B decays (B→τν, B→Dτν and B→D∗τν) as well as in D_(s)→τν, D_(s)→μν, K(π)→eν, K(π)→μν and τ→K(π)ν, which are all sensitive to tree-level charged Higgs exchange.
Interestingly, the unconstrained ϵ^u_32,31 are just the elements which directly enter the branching ratios for B→τν, B→Dτν and B→D∗τν. We show that they can explain the deviations from the SM predictions in these processes without fine-tuning. Furthermore, B→τν, B→Dτν and B→D∗τν can even be explained simultaneously. Finally, we give upper limits on the branching ratios of the lepton-flavor-violating neutral B meson decays (B_s,d→μe, B_s,d→τe and B_s,d→τμ) and correlate the radiative lepton decays (τ→μγ, τ→eγ and μ→eγ) to the corresponding neutral-current lepton decays (τ−→μ−μ+μ−, τ−→e−μ+μ− and μ−→e−e+e−). A detailed Appendix contains all relevant information for the considered processes for general scalar-fermion-fermion couplings.
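As a hedged sketch of the notation (the conventions below are assumed MSSM-like ones, not quoted from the article), the "nonholomorphic" couplings enter the fermion mass matrices alongside the holomorphic Yukawas once both doublets acquire vacuum expectation values:

```latex
% Assumed conventions: v_u, v_d are the two vevs, \tan\beta = v_u / v_d.
m^{d}_{ij} = v_d\, Y^{d}_{ij} + v_u\, \epsilon^{d}_{ij}, \qquad
m^{u}_{ij} = v_u\, Y^{u}_{ij} + v_d\, \epsilon^{u}_{ij}, \qquad
m^{\ell}_{ij} = v_d\, Y^{\ell}_{ij} + v_u\, \epsilon^{\ell}_{ij}.
```

Because the ϵ^f_ij cannot in general be diagonalized simultaneously with the mass matrices, the heavy neutral Higgses acquire tree-level flavor-changing couplings, which is what drives the FCNC constraints discussed above.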
Abstract:
Localized short-echo-time ¹H-MR spectra of human brain contain contributions from many low-molecular-weight metabolites as well as baseline contributions from macromolecules. Two approaches to model such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. We investigated what can be gained by basis parameterization, i.e., description of basis spectra as sums of parametric lineshapes. Effects of basis composition and the addition of experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. It was found that most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts. In individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small but significantly different estimates of tissue content for most metabolites. It provides a means to quantitate baseline contributions that may contain crucial clinical information.
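The linear-combination modeling described above can be illustrated with a toy fit; this is a minimal sketch with Lorentzian stand-ins for metabolite basis spectra and non-negative least squares (no macromolecular baseline or lineshape handling, unlike the methods compared in the paper):

```python
import numpy as np
from scipy.optimize import nnls

# Toy basis: three "metabolite" spectra as Lorentzian lines on a common ppm axis.
ppm = np.linspace(0, 4, 400)

def lorentz(center, width):
    return width ** 2 / ((ppm - center) ** 2 + width ** 2)

basis = np.column_stack([lorentz(2.0, 0.05),    # NAA-like singlet (illustrative)
                         lorentz(3.0, 0.05),    # Cr-like singlet (illustrative)
                         lorentz(3.2, 0.05)])   # Cho-like singlet (illustrative)

true_conc = np.array([1.0, 0.8, 0.3])
spectrum = basis @ true_conc + 0.01 * np.random.default_rng(0).standard_normal(ppm.size)

# Non-negative least squares: metabolite concentrations cannot be negative.
conc, resid = nnls(basis, spectrum)
```

In practice the basis would come from measured or simulated metabolite spectra, and a macromolecular baseline component would simply be appended as extra columns.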
Abstract:
This dissertation explores phase I dose-finding designs in cancer trials from three perspectives: alternative Bayesian dose-escalation rules, a design based on a time-to-dose-limiting-toxicity (DLT) model, and a design based on a discrete-time multi-state (DTMS) model. We list alternative Bayesian dose-escalation rules and perform a simulation study for intra-rule and inter-rule comparisons based on two statistical models to identify the most appropriate rule under certain scenarios. We provide evidence that all the Bayesian rules outperform the traditional "3+3" design in the allocation of patients and the selection of the maximum tolerated dose. The design based on a time-to-DLT model uses patients' DLT information over multiple treatment cycles in estimating the probability of DLT at the end of treatment cycle 1. Dose-escalation decisions are made whenever a cycle-1 DLT occurs, or two months after the previous checkpoint. Compared to the design based on a logistic regression model, the new design shows more safety benefits for trials in which more late-onset toxicities are expected. As a trade-off, the new design requires more patients on average. The design based on a discrete-time multi-state (DTMS) model has three important attributes: (1) toxicities are categorized over a distribution of severity levels, (2) early toxicity may inform dose escalation, and (3) no suspension is required between accrual cohorts. The proposed model accounts for the difference in the importance of the toxicity severity levels and for transitions between toxicity levels. We compare the operating characteristics of the proposed design with those of a similar design based on a fully-evaluated model that directly models the maximum observed toxicity level within the patients' entire assessment window. We describe settings in which, under comparable power, the proposed design shortens the trial.
The proposed design offers more benefit compared to the alternative design as patient accrual becomes slower.
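One generic Bayesian escalation rule can be sketched with a Beta-Binomial posterior for the DLT probability at the current dose (illustrative only; the dissertation compares several such rules, and the prior, target and threshold below are assumptions):

```python
from scipy.stats import beta

def escalate(n_treated, n_dlt, target=0.30, threshold=0.25, a=1.0, b=1.0):
    """Beta(a, b) prior on the DLT probability at the current dose.
    Escalate when the posterior probability that the DLT rate exceeds
    the target toxicity level is below the threshold."""
    post = beta(a + n_dlt, b + n_treated - n_dlt)
    p_too_toxic = 1.0 - post.cdf(target)   # P(p_DLT > target | data)
    return p_too_toxic < threshold

ok_to_escalate = escalate(3, 0)    # 0/3 DLTs observed
stay_or_deescalate = escalate(3, 2)  # 2/3 DLTs observed
```

With the flat Beta(1, 1) prior, 0/3 DLTs gives posterior P(p > 0.30) ≈ 0.24, just under the 0.25 threshold, so the rule escalates; 2/3 DLTs clearly does not.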
Abstract:
We consider a large quantum system of spin-1/2 particles whose dynamics is driven entirely by measurements of the total spin of spin pairs. This gives rise to a dissipative coupling to the environment. When one averages over the measurement results, the corresponding real-time path integral does not suffer from a sign problem. Using an efficient cluster algorithm, we study the real-time evolution from an initial antiferromagnetic state of the two-dimensional Heisenberg model, which is driven to a disordered phase, not by a Hamiltonian, but by sporadic measurements or by continuous Lindblad evolution.
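The elementary step driving the dynamics, a projective measurement of the total spin of a spin pair, can be sketched for a single pair; this toy snippet (not the paper's cluster algorithm) projects a two-qubit state onto the singlet (S = 0) or triplet (S = 1) subspace with Born probabilities:

```python
import numpy as np

# Two-qubit basis |00>, |01>, |10>, |11>; singlet = (|01> - |10>)/sqrt(2).
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
P_singlet = np.outer(singlet, singlet)      # projector onto total spin S = 0
P_triplet = np.eye(4) - P_singlet           # projector onto total spin S = 1

def measure_total_spin(psi, rng):
    """Projective measurement of the pair's total spin: sample an outcome
    with Born probabilities, then project and renormalize the state."""
    p0 = np.real(np.vdot(psi, P_singlet @ psi))   # probability of S = 0
    if rng.random() < p0:
        out, proj, p = 0, P_singlet, p0
    else:
        out, proj, p = 1, P_triplet, 1.0 - p0
    return out, (proj @ psi) / np.sqrt(p)

rng = np.random.default_rng(0)
up_down = np.array([0, 1, 0, 0], dtype=complex)   # |01>: S = 0 and S = 1 equally likely
S, post = measure_total_spin(up_down, rng)
```

Averaging trajectories of such measurements over the outcomes is what yields the sign-problem-free real-time path integral mentioned above.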
Abstract:
Activities of daily living (ADL) are important for quality of life. They are indicators of cognitive health status, and their assessment is a measure of independence in everyday living. ADL are difficult to assess reliably using questionnaires due to self-reporting biases. Various sensor-based (wearable, in-home, intrusive) systems have been proposed to successfully recognize and quantify ADL without relying on self-reporting. New classifiers for such sensor data continue to be proposed. We propose two ad-hoc classifiers that are based only on non-intrusive sensor data. METHODS: A wireless sensor system with ten sensor boxes was installed in the homes of ten healthy subjects to collect ambient data over 20 consecutive days. A handheld protocol device and a paper logbook were also provided to the subjects. Eight ADL were selected for recognition. We developed two ad-hoc ADL classifiers, namely the rule-based forward-chaining inference engine (RBI) classifier and the circadian activity rhythm (CAR) classifier. The RBI classifier finds facts in the data and matches them against the rules. The CAR classifier works within a framework to automatically rate routine activities to detect regular repeating patterns of behavior. For comparison, two state-of-the-art classifiers [Naïve Bayes (NB), Random Forest (RF)] were also used. All classifiers were validated with the collected data sets for classification and recognition of the eight specific ADL. RESULTS: Out of a total of 1,373 ADL, the RBI classifier correctly determined 1,264 while missing 109, and the CAR classifier determined 1,305 while missing 68. The RBI and CAR classifiers recognized activities with an average sensitivity of 91.27 and 94.36%, respectively, outperforming both RF and NB. CONCLUSIONS: The performance of the classifiers varied significantly and shows that the classifier plays an important role in ADL recognition.
Both the RBI and CAR classifiers performed better than the existing state-of-the-art classifiers (NB, RF) on all ADL. Of the two ad-hoc classifiers, the CAR classifier was more accurate and is likely to be better suited than the RBI for distinguishing and recognizing complex ADL.
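The forward-chaining idea behind an RBI-style classifier can be illustrated with a toy rule base (the rules and fact names here are invented for illustration, not taken from the study): facts derived from sensor events are matched against rules until no new conclusions fire.

```python
# Each rule: (set of required facts, derived fact). Forward chaining keeps
# applying rules until no new facts can be derived.
RULES = [
    ({"kettle_on", "cupboard_opened"}, "making_hot_drink"),
    ({"fridge_opened", "stove_on"}, "cooking"),
    ({"making_hot_drink", "time_morning"}, "breakfast_routine"),
]

def forward_chain(facts):
    """Derive all facts reachable from the initial sensor-derived facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

out = forward_chain({"kettle_on", "cupboard_opened", "time_morning"})
```

Note how "breakfast_routine" fires only via the intermediate fact "making_hot_drink", which is the chaining that distinguishes this approach from a flat lookup.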
Abstract:
Though E2F1 is deregulated in most human cancers by mutations of the p16-cyclin D-Rb pathway, it also exhibits tumor suppressive activity. A transgenic mouse model overexpressing E2F1 under the control of the bovine keratin 5 (K5) promoter exhibits epidermal hyperplasia and spontaneously develops tumors in the skin and other epithelial tissues after one year of age. In a p53-deficient background, aberrant apoptosis in K5 E2F1 transgenic epidermis is reduced and tumorigenesis is accelerated. In sharp contrast, K5 E2F1 transgenic mice are resistant to papilloma formation in the DMBA/TPA two-stage carcinogenesis protocol. K5 E2F4 and K5 DP1 transgenic mice were also characterized; both display epidermal hyperplasia but do not develop spontaneous tumors, even in cooperation with p53 deficiency. These transgenic mice do not have increased levels of apoptosis in their skin and are more susceptible to papilloma formation in the two-stage carcinogenesis model. These studies show that deregulated proliferation does not necessarily lead to tumor formation and that the ability to suppress skin carcinogenesis is unique to E2F1. E2F1 can also suppress skin carcinogenesis when okadaic acid is used as the tumor promoter and when a pre-initiated mouse model is used, demonstrating that E2F1's tumor suppressive activity is not specific to TPA and occurs at the promotion stage. E2F1 was thought to induce p53-dependent apoptosis through upregulation of the p19ARF tumor suppressor, which inhibits mdm2-mediated p53 degradation. Consistent with in vitro studies, the overexpression of E2F1 in mouse skin results in the transcriptional activation of p19ARF and the accumulation of p53. Inactivation of either p19ARF or p53 restores the sensitivity of K5 E2F1 transgenic mice to DMBA/TPA carcinogenesis, demonstrating that an intact p19ARF-p53 pathway is necessary for E2F1 to suppress carcinogenesis.
Surprisingly, while p53 is required for E2F1 to induce apoptosis in mouse skin, p19ARF is not, and inactivation of p19ARF actually enhances E2F1-induced apoptosis and proliferation in transgenic epidermis. This indicates that ARF is important for E2F1-induced tumor suppression but not apoptosis. Senescence is another potential mechanism of tumor suppression that involves p53 and p19ARF. K5 E2F1 transgenic mice initiated with DMBA and treated with TPA show an increased number of senescent cells in their epidermis. These experiments demonstrate that E2F1's unique tumor suppressive activity in two-stage skin carcinogenesis can be genetically separated from E2F1-induced apoptosis and suggest that senescence utilizing the p19ARF-p53 pathway plays a role in tumor suppression by E2F1.
Abstract:
SRI is unique among known photoreceptors in that it produces opposite signals depending on the color of light stimuli. Absorption of orange light (587 nm) triggers an attractant response by the cell, whereas absorption of orange light followed by near-UV light (373 nm) triggers a repellent response. Using behavioral mutants that exhibit aberrant color-sensing ability, we tested a two-conformation equilibrium model using FRET and EPR spectroscopy. The essence of the model applied to SRI-HtrI is that the complex exists in a metastable two-conformer equilibrium which is shifted in one direction by orange light absorption (producing an attractant signal) and in the opposite direction by a second UV-violet photon (producing a repellent signal). First, by FRET we found that the E-F cytoplasmic loop of SRI moves toward the HAMP domain of the HtrI transducer during the formation of the orange-light-activated signaling state of the complex. This is the first localization of a change in the physical relationship between the receptor and transducer subunits of the complex and provides a structural property of the two proposed conformers that we can monitor. Second, EPR spectra of a spin label probe at this cytoplasmic position showed shifts in the dark in the mutants toward shorter or longer EF loop-HAMP distances, explaining their behavior in terms of their mutations causing pre-stimulus shifts into one or the other conformer. Next, we applied a novel electrophysiological method for monitoring the directionality of proton movement during photoactivation of SRI, to investigate the process of proton transfer in the photoactive site from the chromophore to proton acceptors in both the wild-type and aberrant color-response mutants. We observed an unexpected and critical difference in the two signaling conformations of the SRI-HtrI complex. The finding is that the vectoriality (i.e.
movement away from or toward the cytoplasm) of the light-induced proton transfer from the chromophore to the protein is opposite in the formation of the two conformations. Retinylidene proton transfer is a common critical process in rhodopsins, and these results are the first to show differences in vectoriality in a rhodopsin receptor and to demonstrate the functional importance of the direction of proton transfer.
Abstract:
This study was carried out to model the spatial distribution of head smut of maize, caused by Sporisorium reilianum, during 2006 in the State of Mexico, and to visualize it through the generation of density maps. Sampling was performed in 100 georeferenced plots in each locality analyzed. Disease incidence (percentage of diseased plants) was determined by establishing five points per plot and counting 100 plants at each point. A geostatistical analysis was performed to estimate the experimental semivariogram, which was then fitted to a theoretical model (spherical, exponential or Gaussian) using the Variowin 2.2 software; the fit was validated through cross-validation. Subsequently, disease aggregation maps were produced by geostatistical interpolation, i.e., kriging. The results indicated that the disease was present in 20 localities of 19 municipalities of the State of Mexico; all localities showed a spatially aggregated pattern of the disease, with 16 localities fitting the spherical model, two the exponential model, and two the Gaussian model. Aggregation maps were established for all models, which will make it possible to target management actions at specific points or sites.
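The theoretical models mentioned can be written down directly; a minimal sketch of the spherical semivariogram model in its standard geostatistical parameterization (nugget, sill and range; the parameter values below are illustrative, not from the study):

```python
import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical semivariogram model for lag distances h > 0: rises from the
    nugget toward the sill, reaching it at the range a, and is flat beyond."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, sill)

# Evaluate at a few lags (e.g., meters) with an assumed range a = 50.
gamma = spherical([1.0, 25.0, 50.0, 100.0], nugget=0.1, sill=1.0, a=50.0)
```

In a workflow like the one described, these parameters would be fitted to the empirical semivariogram (e.g., with weighted least squares or `scipy.optimize.curve_fit`) and then used to build the kriging weights.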
Abstract:
The estimation of modal parameters of a structure from ambient measurements has attracted the attention of many researchers in recent years. The procedure is now well established, and the use of state space models, stochastic system identification methods and stabilization diagrams makes it possible to identify the modes of the structure. In this paper the contribution of each identified mode to the measured vibration is discussed. This modal contribution is computed using the Kalman filter and is an indicator of the importance of the modes. The variation of the modal contribution with the order of the model is also studied. This analysis suggests selecting the order of the state space model as the order that includes the modes with the highest contribution. The order obtained using this method is compared to those obtained using other well-known methods, like the Akaike criteria for time series or the singular values of the weighted projection matrix in the Stochastic Subspace Identification method. Both simulated and measured vibration data are used to show the practicability of the derived technique. Finally, it is important to remark that the method can be used with any identification method working in the state space model.
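The link between an identified state space model and the modal parameters can be sketched as follows; this is the generic textbook relation (assumed notation: discrete-time state matrix A, sampling interval dt), not the paper's modal-contribution computation:

```python
import numpy as np

def modal_parameters(A, dt):
    """Natural frequencies (Hz) and damping ratios from the discrete-time
    state matrix A of an identified state space model, sampled every dt s."""
    lam = np.linalg.eigvals(A)            # discrete-time poles
    mu = np.log(lam) / dt                 # continuous-time poles (principal branch,
                                          # valid for modes below Nyquist)
    freq = np.abs(mu) / (2.0 * np.pi)     # natural frequencies in Hz
    zeta = -np.real(mu) / np.abs(mu)      # damping ratios
    return freq, zeta

# Example: a single 3 Hz mode with 2% damping, sampled at dt = 0.01 s.
dt = 0.01
wn, z = 2 * np.pi * 3.0, 0.02
mu = complex(-z * wn, wn * np.sqrt(1 - z ** 2))
A = np.diag([np.exp(mu * dt), np.exp(np.conj(mu) * dt)])
f, zeta = modal_parameters(A, dt)
```

Feeding in the poles of a 3 Hz, 2%-damped oscillator recovers those values, which is the sanity check usually run before interpreting a stabilization diagram.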
Abstract:
At present, several models for quantum computation have been proposed. The adiabatic quantum computation scheme, in particular, is based on a sufficiently slow time evolution of the system, during which no transitions take place. In this work, a new strategy for quantum computation is provided from the opposite point of view. The objective is to control the non-adiabatic transitions between some states in order to produce the desired exit states after the evolution. The model is introduced by means of an analogy between adiabatic quantum computation and an inelastic atomic collision. By means of a simple two-state model, several quantum gates are reproduced, demonstrating the possibility of diabatic universal fault-tolerant quantum computation. Going a step further, a new quantum diabatic computation model is glimpsed, where a carefully chosen Hamiltonian could carry out a non-adiabatic transition between the initial state and the sought final state.
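The two-state picture invoked here can be illustrated with the canonical Landau-Zener avoided crossing (a generic textbook sketch, not the article's model): with diabatic energies ±vt/2 coupled by Δ/2 and ħ = 1, the probability of a non-adiabatic (diabatic) transition is exp(−πΔ²/2v), so the sweep rate v controls how much population crosses diabatically.

```python
import numpy as np

def landau_zener(v=2.0, delta=0.5, T=60.0, dt=0.005):
    """Propagate a two-state system through an avoided crossing using the
    exactly unitary midpoint-exponential step, and return the probability
    of a non-adiabatic (diabatic) transition."""
    psi = np.array([1.0, 0.0], dtype=complex)    # start in diabatic state 1
    c = delta / 2.0
    for t in np.arange(-T, T, dt):
        e = v * (t + dt / 2.0) / 2.0             # diabatic energy at the midpoint
        w = np.hypot(e, c)                       # instantaneous eigenfrequency
        H = np.array([[e, c], [c, -e]])
        # exp(-i H dt) in closed form, since H^2 = w^2 * I for traceless 2x2 H
        U = np.cos(w * dt) * np.eye(2) - 1j * np.sin(w * dt) / w * H
        psi = U @ psi
    return abs(psi[0]) ** 2                      # stayed on diabatic state 1

p_num = landau_zener()
p_lz = float(np.exp(-np.pi * 0.5 ** 2 / (2.0 * 2.0)))   # analytic Landau-Zener value
```

Tuning v and Δ between the adiabatic (p → 0) and sudden (p → 1) limits is exactly the control knob the diabatic-computation strategy exploits.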
Abstract:
"System identification deals with the problem of building mathematical models of dynamical systems based on observed data from the system" [1]. In the context of civil engineering, the system refers to a large-scale structure such as a building, bridge, or offshore structure, and identification mostly involves the determination of modal parameters (the natural frequencies, damping ratios, and mode shapes). This paper presents some modal identification results obtained using a state-of-the-art time domain system identification method (data-driven stochastic subspace algorithms [2]) applied to output-only data measured on a steel arch bridge. First, a three-dimensional finite element model was developed for the numerical analysis of the structure using ANSYS. Modal analysis was carried out and modal parameters were extracted in the frequency range of interest, 0-10 Hz. The results obtained from the finite element modal analysis were used to determine the location of the sensors. After that, ambient vibration tests were conducted during April 23-24, 2009. The response of the structure was measured using eight accelerometers. Two stations of three sensors each were formed (triaxial stations); these sensors were held stationary as references during the test. The two remaining sensors were placed at different measurement points along the bridge deck, at which only vertical and transversal measurements were taken (biaxial stations). Point and interval estimates have been obtained for the state space model using these ambient vibration measurements. In the case of parametric models (like state space models), the dynamic behaviour of a system is described using mathematical models, and mathematical relationships can then be established between the modal parameters and the estimated parameters (thus, it is common to use experimental modal analysis as a synonym for system identification). Stable modal parameters are found using a stabilization diagram.
Furthermore, this paper proposes a method for assessing the precision of the estimates of the parameters of state-space models (confidence intervals). This approach employs the nonparametric bootstrap procedure [3] and is applied to the subspace parameter estimation algorithm. Using the bootstrap results, a plot similar to a stabilization diagram is developed. These plots differentiate system modes from spurious noise modes for a given model order. Additionally, using the modal assurance criterion, the experimental modes obtained have been compared with those evaluated from a finite element analysis. Quite good agreement between numerical and experimental results is observed.
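The nonparametric bootstrap idea can be sketched on a toy identification problem; here an AR(1) coefficient stands in for a modal parameter, and a residual bootstrap yields a percentile confidence interval (all settings are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated AR(1) series, a stand-in for one channel of ambient vibration data.
phi_true, n = 0.8, 2000
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

def fit_ar1(series):
    """Least-squares estimate of the AR(1) coefficient."""
    return float(np.dot(series[:-1], series[1:]) / np.dot(series[:-1], series[:-1]))

phi_hat = fit_ar1(y)
resid = y[1:] - phi_hat * y[:-1]

# Residual bootstrap: resample residuals, rebuild the series, re-estimate.
boot = []
for _ in range(500):
    e = rng.choice(resid, size=n - 1, replace=True)
    yb = np.zeros(n)
    for t in range(1, n):
        yb[t] = phi_hat * yb[t - 1] + e[t - 1]
    boot.append(fit_ar1(yb))

lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% percentile confidence interval
```

Repeating this per model order, and plotting the intervals against order, is the bootstrap analogue of a stabilization diagram described above: parameters whose intervals stay narrow and stable across orders correspond to physical modes.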
Abstract:
The modal analysis of a structural system consists of computing its vibrational modes. The experimental way to estimate these modes requires exciting the system with a measured or known input and then measuring the system output at different points using sensors. Finally, system inputs and outputs are used to compute the modes of vibration. When the system is a large structure like a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind or traffic. Even if a known input is applied, the procedure is usually difficult and expensive, and there are still uncontrolled disturbances acting at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether ambient vibrations (wind, earthquakes, etc.) or operational loads (traffic, human loading, etc.). This procedure is usually called Operational Modal Analysis (OMA), and in general consists of fitting a mathematical model to the measured data, assuming the unobserved excitations are realizations of a stationary stochastic process (usually white noise processes). Then, the modes of vibration are computed from the estimated model. The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for the maximum likelihood estimation of the state space model in the field of OMA. The algorithm is described in detail, and how to apply it to vibration data is analysed. After that, it is compared to another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys some optimal properties from a statistical point of view, which makes it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA.
In this work, three additional state space models are proposed and estimated using the EM algorithm:
• The first model is proposed to estimate the modes of vibration when several tests are performed on the same structural system. Instead of analysing record by record and then computing averages, the EM algorithm is extended for the joint estimation of the proposed state space model using all the available data.
• The second state space model is used to estimate the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors). Here, the proposed state space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.
• Finally, a state space model is proposed to estimate the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes are obtained in the identification process. The idea is to measure the response of the structure under different inputs; it is then assumed that the parameters common to all the data correspond to the structure (modes of vibration), while the parameters found in a specific test correspond to the input in that test. The problem is solved using the proposed state space model and the EM algorithm.
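As a minimal sketch of how the EM algorithm estimates a state-space model, consider the simplest scalar local-level model (a toy stand-in, not the thesis's multi-record or multi-setup formulations): the E-step runs a Kalman filter and RTS smoother, and the M-step re-estimates the two noise variances.

```python
import numpy as np

def em_local_level(y, q, r, iters=30):
    """EM for x[t] = x[t-1] + N(0, q), y[t] = x[t] + N(0, r).
    Returns updated (q, r) and the log-likelihood at the final parameters."""
    n = len(y)
    for _ in range(iters):
        # --- E-step: Kalman filter ---
        mp, Pp = np.zeros(n), np.zeros(n)     # one-step predictions
        mf, Pf = np.zeros(n), np.zeros(n)     # filtered estimates
        m, P = 0.0, 1e7                       # vague prior on the initial state
        ll = 0.0
        for t in range(n):
            mp[t], Pp[t] = m, P + q
            S = Pp[t] + r                     # innovation variance
            K = Pp[t] / S                     # Kalman gain
            ll += -0.5 * (np.log(2 * np.pi * S) + (y[t] - mp[t]) ** 2 / S)
            mf[t] = mp[t] + K * (y[t] - mp[t])
            Pf[t] = (1.0 - K) * Pp[t]
            m, P = mf[t], Pf[t]
        # --- E-step: RTS smoother with lag-one covariances ---
        ms, Ps = mf.copy(), Pf.copy()
        C = np.zeros(n)                       # C[t] = Cov(x[t], x[t-1] | y)
        for t in range(n - 2, -1, -1):
            J = Pf[t] / Pp[t + 1]
            ms[t] = mf[t] + J * (ms[t + 1] - mp[t + 1])
            Ps[t] = Pf[t] + J ** 2 * (Ps[t + 1] - Pp[t + 1])
            C[t + 1] = J * Ps[t + 1]
        # --- M-step: expected squared state increments / residuals ---
        q = np.mean(Ps[1:] + ms[1:] ** 2
                    - 2 * (C[1:] + ms[1:] * ms[:-1])
                    + Ps[:-1] + ms[:-1] ** 2)
        r = np.mean((y - ms) ** 2 + Ps)
    return q, r, ll

# Simulated data with true q = 0.5, r = 1.0; start EM far from the truth.
rng = np.random.default_rng(1)
x = np.cumsum(np.sqrt(0.5) * rng.standard_normal(400))
y = x + rng.standard_normal(400)
q_hat, r_hat, ll = em_local_level(y, q=2.0, r=2.0)
```

The extensions listed above keep this same E-step/M-step skeleton, but share the structural parameters across records or sensor setups while letting nuisance parameters vary per test.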
Abstract:
This thesis describes, and applies in a novel way, the multivariate exponential smoothing technique for the day-ahead forecasting of hourly electricity prices, a problem that is being studied intensively in the recent statistics and economics literature. Certain interesting properties of multivariate exponential smoothing are demonstrated that drastically reduce the number of parameters needed to characterize the time series and that, at the same time, allow a dynamic factor analysis of the series of hourly electricity prices. In particular, this high-dimensional multivariate process (of dimension 24) is estimated by decomposing it into a reduced number of independent univariate exponential smoothing processes, each characterized by a single smoothing parameter that varies between zero (a white noise process) and one (a random walk). To this end, the state space formulation is used for the estimation of the model, since it connects this sequence of more efficient univariate models with the multivariate model. In a novel way, the relations between the two models are obtained from a simple algebraic treatment without requiring the application of the Kalman filter. In this way, the ultimate drivers of electricity price dynamics can be analysed and exposed. Moreover, the practical side of this methodology is shown by applying it to several electricity spot markets, such as Omel, Powernext and Nord Pool. In these markets the evolution of hourly prices is characterized, and their forecasts are produced and compared with those of other forecasting techniques.
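The role of the smoothing parameter can be seen in the univariate recursion itself; a minimal sketch (generic simple exponential smoothing, with illustrative data), where λ = 0 reproduces a constant forecast and λ = 1 a random-walk forecast:

```python
def ses_forecasts(y, lam, y0=None):
    """One-step-ahead forecasts from simple exponential smoothing:
    f[t+1] = lam * y[t] + (1 - lam) * f[t], started at f[0] = y0 (or y[0])."""
    f = y[0] if y0 is None else y0
    out = []
    for obs in y:
        out.append(f)              # forecast issued before seeing obs
        f = lam * obs + (1 - lam) * f
    return out

y = [10.0, 12.0, 11.0, 13.0]
rw = ses_forecasts(y, 1.0)   # lam = 1: each forecast is the previous observation
flat = ses_forecasts(y, 0.0)  # lam = 0: the forecast never updates
```

In the decomposition described above, each independent univariate component carries one such λ, which is how the 24-dimensional price process is reduced to a handful of interpretable parameters.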
Abstract:
Salmonid populations in the Iberian Peninsula (brown trout, Salmo trutta; and Atlantic salmon, Salmo salar) are close to the southern limit of their natural ranges, and are therefore of great importance for the conservation of these species. In this dissertation, some aspects of spawning and habitat management have been investigated in order to improve the knowledge of these southern salmonid populations. Brown trout spawning has been studied in the river Castril (Andalusia, southern Spain), where spawning was observed to occur from December until April, with maximum activity in February. This represents one of the latest and most protracted spawning periods within the natural range of the species. Furthermore, it is now known that the other Andalusian populations show similarly belated and extended spawning periods. Broad-scale analyses across the brown trout natural range showed that latitude partly explains both mean spawning time (R2 = 62.8%) and spawning duration (R2 = 24.4%) through negative relationships: the lower the latitude, the later the spawning time and the longer the spawning period. It is plausible that a long spawning period is an advantage for the survival of trout populations in unpredictable habitats, and the following hypothesis, yet to be tested, has therefore been proposed: spawning duration is longer in unpredictable habitats than in predictable ones. The high rate of redd superimposition observed in the river Castril was not caused solely by a high density of spawners. Trout females chose specific sites for redd construction instead of dispersing randomly over the suitable spawning habitat available to them. These observations suggest that female spawners have some kind of preference for superimposing redds.
Moreover, in limestone streams such as the Castril, unused gravels can be very cohesive and hard to dig, so redd superimposition may be an advantage for the female: digging may require less energy in previously used redd sites than in cohesive, embedded, unused sites. Hence, the following hypothesis, yet to be tested, has been proposed: females have a stronger preference for superimposing redds in streambeds with cohesive, embedded substrates than in rivers with loose gravels. Within the topic of habitat management, two different approaches to physical habitat assessment have been used to quantify potential changes in habitat availability prior to the actual implementation of proposed habitat measures. First, physical habitat for Atlantic salmon in the river Pas (Cantabria, northern Spain) was assessed at the microhabitat scale, using the IFIM approach together with a two-dimensional hydraulic model (River2D). Proposed habitat enhancement actions were simulated and the resulting habitat change was quantified. The results showed only a very small increase in habitat availability, so it would not be worthwhile to implement these measures in this stream reach. Second, physical habitat for brown trout in the river Tajuña (Guadalajara, central Spain) was assessed at the mesohabitat scale, using the MesoHABSIM approach. The river Tajuña is currently impacted by the surrounding agricultural land uses, so a restoration was designed to mitigate these impacts and to drive the river toward a more natural state. Habitat availability after the planned restoration has been quantified, and the results have made it possible to identify the sites where the restoration would be most effective.
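To make the microhabitat-scale assessment mentioned above more concrete, here is a minimal sketch (not the thesis' River2D workflow; the cell values, suitability factors, and the multiplicative composite index are all illustrative assumptions) of the core quantity in IFIM-style evaluations, a weighted usable area obtained by weighting each cell's area with a habitat suitability index:

```python
def weighted_usable_area(cells):
    """cells: list of (area_m2, depth_suit, velocity_suit, substrate_suit),
    each suitability in [0, 1]; the composite index here is their product,
    one common (but not the only) aggregation choice."""
    return sum(a * d * v * s for a, d, v, s in cells)

reach = [
    (10.0, 0.9, 0.8, 1.0),   # highly suitable cell
    (10.0, 0.2, 0.5, 0.5),   # marginal cell
]
print(weighted_usable_area(reach))  # 7.2 + 0.5 = 7.7 m2 of usable habitat
```

Comparing this quantity before and after a simulated enhancement action is what allows habitat change to be quantified prior to any physical intervention.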
Resumo:
Operational Modal Analysis consists of estimating the modal parameters of a structure (natural frequencies, damping ratios and modal vectors) from output-only vibration measurements. The modal vectors can only be estimated where a sensor is placed, so when the number of available sensors is smaller than the number of tested points, it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors): some sensors stay at the same position from setup to setup, while the others change position until all the tested points are covered. The permanent sensors are then used to merge the mode shapes estimated in each setup (the partial modal vectors) into global modal vectors. Traditionally, the partial modal vectors are estimated independently, setup by setup, and the global modal vectors are obtained in a post-processing phase. In this work we present two state space models that can be used to process all the recorded setups at the same time, and we show how these models can be estimated using the maximum likelihood method. As a result, the global mode shape of each mode is obtained automatically, and a single value for the natural frequency and damping ratio of each mode is subsequently computed. Finally, both models are compared using real measured data.
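The traditional post-processing step that this abstract contrasts with can be sketched as follows (a hedged illustration, not the paper's maximum-likelihood method; sensor labels and amplitudes are made up): each setup's partial mode shape is rescaled so that the shared permanent sensor agrees across setups, and the roving sensors are then concatenated into a global mode shape.

```python
def merge_mode_shapes(setup1, setup2, ref_key):
    """Merge two partial mode shapes of the same mode into a global one.
    setup1/setup2 map sensor labels to modal amplitudes; ref_key is the
    permanent (reference) sensor present in both setups."""
    scale = setup1[ref_key] / setup2[ref_key]  # align setup2 to setup1's scaling
    merged = dict(setup1)
    for sensor, amp in setup2.items():
        if sensor != ref_key:                  # roving sensors get rescaled
            merged[sensor] = amp * scale
    return merged

s1 = {"ref": 1.0, "a": 0.8}   # setup 1: permanent sensor + roving sensor a
s2 = {"ref": 2.0, "b": 1.0}   # setup 2: same mode, different arbitrary scaling
print(merge_mode_shapes(s1, s2, "ref"))  # b is rescaled from 1.0 to 0.5
```

Because this merging happens independently per mode and per setup pair, each setup also yields its own frequency and damping estimates; processing all setups jointly, as the paper proposes, is what yields a single estimate per mode.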