910 results for Discrete time pricing model


Relevance: 100.00%

Abstract:

The last decades have seen unrivaled growth and diffusion of mobile telecommunications. Several standards have been developed for this purpose, from GSM mobile phone communications to WLAN IEEE 802.11, providing services for the transmission of signals ranging from voice to high data rate digital communications and Digital Video Broadcasting (DVB). Within this wide research and market field, this thesis focuses on Ultra Wideband (UWB) communications, an emerging technology for very high data rate transmission over very short distances. In particular, the presented research deals with the circuit design of enabling blocks for MB-OFDM UWB CMOS single-chip transceivers, namely the frequency synthesizer and the transmission mixer and power amplifier. First we discuss three different models for the simulation of charge-pump phase-locked loops: the continuous-time s-domain and discrete-time z-domain approximations and the exact semi-analytical time-domain model. The limitations of the two approximate models are analyzed in terms of the error in the computed settling time as a function of the loop parameters, deriving practical conditions under which the different models are reliable for fast-settling PLLs up to fourth order. In addition, a phase noise analysis method based on the time-domain model is introduced and compared with the results obtained from the s-domain model. We compare the three models in the simulation of a fast-switching PLL to be integrated in a frequency synthesizer for WiMedia MB-OFDM UWB systems. In the second part, the theoretical analysis is applied to the design of a 60 mW, 3.4-to-9.2 GHz, 12-band frequency synthesizer for MB-OFDM UWB based on two wide-band PLLs. The design is presented and discussed up to layout level. A test chip has been implemented in TSMC 90 nm CMOS technology, and measured data are provided. The functionality of the circuit is proved and the specifications are met with state-of-the-art area occupation and power consumption. The last part of the thesis deals with the design of a transmission mixer and a power amplifier for MB-OFDM UWB band group 1. The design has been carried out up to layout level in STMicroelectronics 65 nm CMOS technology. The main characteristics of the system are its wideband behavior (1.6 GHz of bandwidth) and its constant performance over process parameters, temperature, and supply voltage, thanks to dedicated adaptive biasing circuits.
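As an illustration of the kind of model comparison described above, the following Python sketch builds a second-order type-II charge-pump PLL, compares the continuous-time s-domain step response against a z-domain discretization sampled at the reference rate, and extracts a 0.1% settling time. All loop parameters are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np
from scipy import signal

# Assumed loop parameters (hypothetical, for illustration only)
Icp  = 100e-6                 # charge-pump current [A]
Kvco = 2 * np.pi * 100e6      # VCO gain [rad/s per V]
N    = 16                     # feedback divider ratio
R, C = 5e3, 100e-12           # series-RC loop filter
fref = 50e6                   # reference frequency [Hz]

# Closed-loop transfer function of a type-II second-order CP-PLL:
# H(s) = (K*R*C*s + K) / (C*s^2 + K*R*C*s + K), with K = Icp*Kvco/(2*pi*N)
K = Icp * Kvco / (2 * np.pi * N)
num, den = [K * R * C, K], [C, K * R * C, K]

def settling_time(t, y, tol=1e-3):
    """Last time the response leaves the +/-tol band around its final value."""
    err = np.abs(y - y[-1]) / np.abs(y[-1])
    return t[np.where(err > tol)[0][-1] + 1]

# Continuous-time (s-domain) model
t, y = signal.step(signal.TransferFunction(num, den),
                   T=np.linspace(0, 10e-6, 5000))
print(f"s-domain 0.1% settling: {settling_time(t, y) * 1e6:.2f} us")

# Discrete-time (z-domain) model, sampled once per reference cycle
numz, denz, dt = signal.cont2discrete((num, den), 1 / fref, method='bilinear')
tz, (yz,) = signal.dstep((np.squeeze(numz), denz, dt), n=500)
print(f"z-domain 0.1% settling: {settling_time(tz, np.squeeze(yz)) * 1e6:.2f} us")
```

The gap between the two printed settling times grows as the loop bandwidth approaches the reference rate, which is exactly the regime where the approximate models become unreliable.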

Relevance: 100.00%

Abstract:

"Risk Measures in Financial Mathematics." Value-at-Risk (VaR) is a risk measure whose use is required by banking supervisors. The advantage of VaR, as a quantile of the profit-or-loss distribution, lies above all in its simple interpretability. A disadvantage is that it ignores the left tail of the probability distribution. Moreover, computing VaR is difficult because quantiles are not additive. The greatest drawback of VaR is its lack of subadditivity, which is why alternatives such as Expected Shortfall are investigated. In this thesis, financial risk measures are first introduced and some of their basic properties are recorded. We deal with various parametric and non-parametric methods for determining VaR, including their advantages and disadvantages, and then with parametric and non-parametric estimators of VaR in discrete time. We present portfolio optimization problems in the Black-Scholes model with bounded VaR and with bounded variance, and explain the advantage of the first approach over the second. We solve utility optimization problems for terminal wealth with bounded VaR and with bounded variance. VaR says nothing about the losses beyond the quantile, whereas Expected Shortfall takes them into account; we therefore use Expected Shortfall instead of the VaR risk measure considered by Emmer, Korn and Klüppelberg (2001) for portfolio optimization in the Black-Scholes model.
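As a small illustration of the non-parametric estimators discussed above, the following Python sketch computes historical VaR and Expected Shortfall from a simulated return series; the confidence level and the data are assumptions for illustration.

```python
import numpy as np

def var_es(returns, alpha=0.99):
    """Historical VaR and Expected Shortfall at level alpha.

    Losses are taken as positive numbers; VaR is the alpha-quantile of
    the loss distribution, ES the mean loss beyond that quantile.
    """
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()   # ES looks past the quantile
    return var, es

rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, 10_000)   # assumed daily returns
var, es = var_es(rets, alpha=0.99)
print(f"99% VaR: {var:.4f}, 99% ES: {es:.4f}")   # ES >= VaR always
```

The printed values make the abstract's point concrete: ES is always at least as large as VaR because it averages the losses that VaR itself ignores.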

Relevance: 100.00%

Abstract:

The original cefepime product was withdrawn from the Swiss market in January 2007 and replaced by a generic 10 months later. The goals of the study were to assess the impact of this cefepime shortage on the use and costs of alternative broad-spectrum antibiotics, on antibiotic policy, and on the resistance of Pseudomonas aeruginosa to carbapenems, ceftazidime, and piperacillin-tazobactam. A generalized regression-based interrupted time-series model assessed how much the shortage changed the monthly use and costs of cefepime and of selected alternative broad-spectrum antibiotics (ceftazidime, imipenem-cilastatin, meropenem, piperacillin-tazobactam) in 15 Swiss acute care hospitals from January 2005 to December 2008. Resistance of P. aeruginosa was compared before and after the cefepime shortage. There was a statistically significant increase in the consumption of piperacillin-tazobactam in hospitals with definitive interruption of the cefepime supply and of meropenem in hospitals with transient interruption of the cefepime supply. Consumption of each alternative antibiotic tended to increase during the shortage and to decrease when the cefepime generic was released. These shifts were associated with significantly higher overall costs. There was no significant change in hospitals with uninterrupted cefepime supply. The alternative antibiotics for which an increase in consumption showed the strongest association with a progression of resistance were the carbapenems. The use of alternative antibiotics after the cefepime withdrawal was thus associated with a significant increase in piperacillin-tazobactam and meropenem use and in overall costs, and with a decrease in the susceptibility of P. aeruginosa in hospitals. This warrants caution with regard to shortages and withdrawals of antibiotics.
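A minimal sketch of the segmented (interrupted) time-series regression underlying such an analysis is shown below, using statsmodels on simulated monthly consumption data; the interruption month, variable names, and data are illustrative assumptions, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 48                                    # Jan 2005 - Dec 2008, monthly
t = np.arange(n)
shortage = (t >= 25).astype(int)          # hypothetical interruption month
# Simulated use with a level change (+4) and a slope change (+0.1/month)
use = (10 + 0.05 * t + 4 * shortage + 0.1 * shortage * (t - 25)
       + rng.normal(0, 1, n))

df = pd.DataFrame({"use": use, "t": t, "shortage": shortage,
                   "t_after": shortage * (t - 25)})
# 'shortage' estimates the immediate level change, 't_after' the change
# in trend after the interruption
model = smf.ols("use ~ t + shortage + t_after", data=df).fit()
print(model.summary().tables[1])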

Relevance: 100.00%

Abstract:

Metals price risk management is a key issue in metal markets because of the uncertainty of commodity price fluctuations, exchange rates, and interest rate changes, and the resulting large price risk for both producers and consumers of metals. It is therefore a concern for all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators, and traders. Managing price risk provides stable income for both producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps, and options. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years, and they are now widely used by financial institutions, corporations, professional investors, and individuals. This project focuses on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project describes basic derivatives and risk management strategies; it also discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in derivative markets. The second part considers the valuation of commodity derivatives. In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare the theoretical values of the options with their observed market values. Predicting future trends of copper prices is essential for managing market price risk successfully. Therefore, the third part discusses econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part shows how LME copper prices can be explained by means of a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. The simultaneous econometric model built for the copper industry is

    Q_t^D = e^{-5.0485} \, P_{t-1}^{-0.1868} \, \mathit{GDP}_t^{1.7151} \, e^{0.0158 \, \mathit{IP}_t}
    Q_t^S = e^{-3.0785} \, P_{t-1}^{0.5960} \, T_t^{0.1408} \, P_{\mathit{OIL}(t)}^{-0.1559} \, \mathit{USDI}_t^{1.2432} \, \mathit{LIBOR}_{t-6}^{-0.0561}
    Q_t^D = Q_t^S

with the reduced-form price equation

    P_{t-1}^{CU} = e^{-2.5165} \, \mathit{GDP}_t^{2.1910} \, e^{0.0202 \, \mathit{IP}_t} \, T_t^{-0.1799} \, P_{\mathit{OIL}(t)}^{0.1991} \, \mathit{USDI}_t^{-1.5881} \, \mathit{LIBOR}_{t-6}^{0.0717}

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity; since industrial production should also be considered, global industrial production growth, denoted IP_t, is included in the model. T_t is a time variable, a useful proxy for technological change. The price of oil at time t, denoted P_OIL(t), is a proxy for the cost of energy in producing copper. USDI_t is the U.S. dollar index at time t, an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the six-month-lagged one-year London Interbank Offered Rate. Although the model can be applied to other base metal industries, omitted exogenous variables, such as the price of a substitute or a combined variable related to substitute prices, have not been considered in this study. Based on this econometric model and a Monte Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 would exceed the specific strike price of an option are determined. The final part evaluates risk management strategies, including option strategies, metal swaps, and simple options, in relation to the simulation results. The basic option strategies, such as bull spreads, bear spreads, and butterfly spreads, created using both call and put options in 2006 and 2007, are evaluated. Each risk management strategy in 2006 and 2007 is then analyzed based on the available market data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
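Because the project prices Asian options and runs Monte Carlo simulations of copper prices, the following Python sketch shows a generic Monte Carlo valuation of an arithmetic-average Asian call under geometric Brownian motion; the spot, strike, volatility, and rate are hypothetical inputs, not the project's DerivaGem settings.

```python
import numpy as np

def asian_call_mc(S0, K, r, sigma, T, n_steps=12, n_paths=100_000, seed=0):
    """Arithmetic-average Asian call under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)          # price at each monthly averaging date
    avg = S.mean(axis=1)                # arithmetic average over the year
    payoff = np.maximum(avg - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

# Hypothetical copper inputs: spot 6000 $/t, strike 6200 $/t, 35% vol
print(f"Asian call ~ {asian_call_mc(6000, 6200, 0.05, 0.35, 1.0):.1f} $/t")
```

Averaging dampens the effective volatility of the payoff, which is why an Asian option priced this way is cheaper than the corresponding European option on the terminal price.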

Relevance: 100.00%

Abstract:

OBJECTIVE: Hierarchical modeling has been proposed as a solution to the multiple exposure problem. We estimate associations between metabolic syndrome and different components of antiretroviral therapy using both conventional and hierarchical models. STUDY DESIGN AND SETTING: We use discrete-time survival analysis to estimate the association between metabolic syndrome and cumulative exposure to 16 antiretrovirals from four drug classes. We fit a hierarchical model in which the drug class provides a prior model of the association between metabolic syndrome and exposure to each antiretroviral. RESULTS: One thousand two hundred and eighteen patients were followed for a median of 27 months, with 242 cases of metabolic syndrome (20%) at a rate of 7.5 cases per 100 patient-years. Metabolic syndrome was more likely to develop in patients exposed to stavudine but less likely to develop in those exposed to atazanavir. The estimate for exposure to atazanavir increased from a hazard ratio of 0.06 per 6 months' use in the conventional model to 0.37 in the hierarchical model (or from 0.57 to 0.81 when using spline-based covariate adjustment). CONCLUSION: These results are consistent with trials that show the disadvantage of stavudine and the advantage of atazanavir relative to other drugs in their respective classes. The hierarchical model gave more plausible results than the equivalent conventional model.
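A conventional (non-hierarchical) discrete-time survival analysis of this kind can be written as a pooled logistic regression on person-period data, as in the following simulated sketch; the cohort, covariates, and effect sizes are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for pid in range(500):
    exposure = 0.0
    on_drug = rng.random() < 0.5           # hypothetical antiretroviral
    for period in range(1, 11):            # discrete 6-month intervals
        exposure += 0.5 * on_drug          # cumulative years of exposure
        p = 1 / (1 + np.exp(-(-3.0 + 0.4 * exposure)))  # true hazard
        event = rng.random() < p
        rows.append((pid, period, exposure, int(event)))
        if event:
            break                          # patient leaves the risk set

df = pd.DataFrame(rows, columns=["id", "period", "exposure", "event"])
# One logistic regression over all person-periods; exp(coef) approximates
# the hazard ratio per additional year of cumulative exposure.
fit = smf.logit("event ~ period + exposure", data=df).fit(disp=False)
print(np.exp(fit.params["exposure"]))
```

The hierarchical version described in the abstract would shrink each drug's coefficient toward its drug-class mean instead of estimating it in isolation, which is what pulls the implausible 0.06 estimate toward 0.37.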

Relevance: 100.00%

Abstract:

This dissertation explores phase I dose-finding designs in cancer trials from three perspectives: alternative Bayesian dose-escalation rules, a design based on a time-to-dose-limiting-toxicity (DLT) model, and a design based on a discrete-time multi-state (DTMS) model. We list alternative Bayesian dose-escalation rules and perform a simulation study of intra-rule and inter-rule comparisons based on two statistical models to identify the most appropriate rule under certain scenarios. We provide evidence that all the Bayesian rules outperform the traditional "3+3" design in the allocation of patients and the selection of the maximum tolerated dose. The design based on a time-to-DLT model uses patients' DLT information over multiple treatment cycles to estimate the probability of DLT at the end of treatment cycle 1. Dose-escalation decisions are made whenever a cycle-1 DLT occurs, or two months after the previous checkpoint. Compared with a design based on a logistic regression model, the new design shows greater safety benefits for trials in which more late-onset toxicities are expected; as a trade-off, it requires more patients on average. The design based on the DTMS model has three important attributes: (1) toxicities are categorized over a distribution of severity levels, (2) early toxicity may inform dose escalation, and (3) no suspension is required between accrual cohorts. The proposed model accounts for differences in the importance of the toxicity severity levels and for transitions between toxicity levels. We compare the operating characteristics of the proposed design with those of a similar design based on a fully-evaluated model that directly models the maximum observed toxicity level within each patient's entire assessment window. We describe settings in which, under comparable power, the proposed design shortens the trial. The proposed design offers more benefit than the alternative design as patient accrual becomes slower.
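For reference, a common form of the traditional "3+3" rule that the Bayesian designs are compared against can be simulated in a few lines; the true dose-toxicity curve below is an illustrative assumption, and implementations of the rule vary in detail.

```python
import numpy as np

def three_plus_three(p_dlt, rng):
    """Run one 3+3 trial; return the selected MTD index (-1 = none)."""
    dose = 0
    while True:
        dlts = rng.binomial(3, p_dlt[dose])           # first cohort of 3
        if dlts >= 2:
            return dose - 1                           # too toxic: step down
        if dlts == 1:
            dlts += rng.binomial(3, p_dlt[dose])      # expand to 6 patients
            if dlts >= 2:
                return dose - 1
        if dose == len(p_dlt) - 1:
            return dose                               # highest dose reached
        dose += 1                                     # escalate

rng = np.random.default_rng(3)
p_true = [0.05, 0.10, 0.20, 0.35, 0.50]               # assumed DLT curve
picks = [three_plus_three(p_true, rng) for _ in range(10_000)]
# Distribution of selected doses (first entry = no MTD declared)
print(np.bincount(np.array(picks) + 1, minlength=6) / 10_000)
```

Simulations like this are what expose the rule's weakness: it frequently selects a dose below or above the true maximum tolerated dose, which is the benchmark the Bayesian rules improve on.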

Relevance: 100.00%

Abstract:

Surgical robots have been proposed to drill precise holes in the temporal bone ex vivo for minimally invasive cochlear implantation. The main risk of the procedure is damage to the facial nerve, due either to mechanical interaction or to temperature elevation during the drilling process. To evaluate the thermal risk of the drilling process, a simplified model is proposed that aims to enable an assessment of the risk posed to the facial nerve for a given set of constant process parameters and different mastoid bone densities. The model uses the bone density distribution along the drilling trajectory in the mastoid bone to calculate a time-dependent heat production function at the tip of the drill bit. Using a time-dependent moving point-source Green's function, the heat equation can be solved at a given point in space, so that the resulting temperatures can be calculated over time. The model was calibrated and initially verified with in vivo temperature data. The data were collected during minimally invasive robotic drilling of 12 holes in four different sheep. The sheep were anesthetized, and the temperature elevations were measured with a thermocouple inserted in a previously drilled hole next to the planned drilling trajectory. Bone density distributions were extracted from preoperative CT data by averaging Hounsfield values over the drill bit diameter. Postoperative μCT data were used to verify the drilling accuracy of the trajectories. The comparison of measured and calculated temperatures shows a very good match for both the heating and cooling phases. The average prediction error of the maximum temperature was less than 0.7 °C, and the average root mean square error was approximately 0.5 °C. To analyze potential thermal damage, the model was used to calculate temperature profiles and cumulative equivalent minutes at 43 °C at a minimal distance to the facial nerve. For the selected drilling parameters, the temperature elevation profiles and cumulative equivalent minutes suggest that the thermal load of this minimally invasive cochlear implantation surgery may pose a risk to the facial nerve, especially in sclerotic or high-density mastoid bones. Optimized drilling parameters need to be evaluated, and the model could be used for future risk evaluation.
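The following Python sketch illustrates the Green's-function idea in its simplest form: the temperature rise at a fixed point from a time-dependent point heat source in an infinite homogeneous medium, obtained by superposing past heat pulses. The material constants, source power, and distance are assumed round-number values, not the calibrated parameters of the model above.

```python
import numpy as np

rho_c = 2.0e6     # volumetric heat capacity of bone [J/(m^3 K)], assumed
alpha = 0.5e-6    # thermal diffusivity [m^2/s], assumed
r     = 1.5e-3    # distance from drill tip to sensing point [m], assumed

def temp_rise(q, dt, r):
    """Superpose instantaneous point-source Green's functions.

    q[k] is the heat power [W] released during time step k; the response
    at step i sums contributions of all past pulses k <= i.
    """
    n = len(q)
    elapsed = (np.arange(n)[:, None] - np.arange(n)[None, :] + 1) * dt
    G = np.where(elapsed > 0,
                 np.exp(-r**2 / (4 * alpha * elapsed))
                 / (rho_c * (4 * np.pi * alpha * elapsed) ** 1.5), 0.0)
    return (G * q[None, :]).sum(axis=1) * dt   # discrete convolution

dt = 0.1                                       # time step [s]
q = np.where(np.arange(600) < 300, 0.2, 0.0)   # 30 s of 0.2 W drilling
T = temp_rise(q, dt, r)
print(f"peak temperature rise: {T.max():.2f} K")  # heating, then cooling
```

A density-dependent heat production profile, as in the abstract, would simply replace the constant 0.2 W segment with a q[k] derived from the Hounsfield values along the trajectory.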

Relevance: 100.00%

Abstract:

The discrete-time Markov chain is commonly used to describe changes in health states for chronic diseases in a longitudinal study. Statistical inference for comparing treatment effects or finding determinants of disease progression usually requires estimation of transition probabilities. In many situations, when the outcome data have missing observations or the variable of interest (a latent variable) cannot be measured directly, the estimation of transition probabilities becomes more complicated. In the latter case, a surrogate variable that is easier to access and can gauge the characteristics of the latent one is usually used for data analysis. This dissertation research proposes methods to analyze longitudinal data (1) that have a categorical outcome with missing observations or (2) that use complete or incomplete surrogate observations to analyze a categorical latent outcome. For (1), different missing-data mechanisms were considered in empirical studies using methods that include the EM algorithm, Monte Carlo EM, and a procedure that is not a data augmentation method. For (2), the hidden Markov model with the forward-backward procedure was applied for parameter estimation; this method was also extended to cover the computation of standard errors. The proposed methods are demonstrated with a schizophrenia example. The relevance to public health, the strengths and limitations, and possible future research are also discussed.
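A minimal sketch of the forward-backward procedure used for such hidden Markov models is given below; the two-state example and its parameters are illustrative assumptions.

```python
import numpy as np

A  = np.array([[0.9, 0.1], [0.2, 0.8]])  # latent-state transition matrix
B  = np.array([[0.8, 0.2], [0.3, 0.7]])  # P(surrogate obs | latent state)
pi = np.array([0.6, 0.4])                # initial state distribution
obs = [0, 1, 1, 0, 1]                    # observed surrogate sequence

n, S = len(obs), len(pi)
alpha = np.zeros((n, S))
beta = np.ones((n, S))
alpha[0] = pi * B[:, obs[0]]
for t in range(1, n):                    # forward pass
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
for t in range(n - 2, -1, -1):           # backward pass
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)  # P(latent state | all obs)
print(gamma)
```

The smoothed probabilities `gamma` are exactly the quantities an EM step would use to re-estimate the transition probabilities when the latent health state is observed only through the surrogate.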

Relevance: 100.00%

Abstract:

Statistical methods are developed to assess survival data for two attributes: (1) prolongation of life and (2) quality of life. Health-state transition probabilities correspond to prolongation of life and are modeled as a discrete-time semi-Markov process. Embedded within the sojourn time of a particular health state are the quality-of-life transitions; they reflect events that differentiate perceptions of pain and suffering over a fixed time period. Quality-of-life transition probabilities are derived from the assumptions of a simple Markov process and depend on the health state currently occupied and the next health state to which a transition is made. Using the two types of attributes, the model can estimate the distribution of expected quality-adjusted life years, in addition to the distribution of expected survival times. The expected quality of life can also be estimated within the health-state sojourn time, making the assessment of utility preferences more flexible. The methods are demonstrated on a subset of follow-up data from the Beta Blocker Heart Attack Trial (BHAT). This model contains the structure necessary to make inferences when assessing a general survival problem with a two-dimensional outcome.
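As a simplified illustration (an ordinary Markov chain rather than the semi-Markov process used in the model), the following sketch computes expected quality-adjusted life years from health-state transition probabilities and per-state quality weights; the states, weights, and transitions are assumed.

```python
import numpy as np

# States: 0 = well, 1 = sick, 2 = dead (absorbing)
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])   # annual transition probabilities
q = np.array([1.0, 0.6, 0.0])       # quality weight per state-year

dist = np.array([1.0, 0.0, 0.0])    # everyone starts in the well state
qaly = 0.0
for _ in range(50):                 # 50-year horizon
    qaly += dist @ q                # quality-weighted years accrued
    dist = dist @ P                 # advance the state distribution
print(f"expected QALYs: {qaly:.2f}")
```

The semi-Markov extension in the abstract additionally lets the quality weight evolve within a state's sojourn time, rather than being a single constant per state as assumed here.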

Relevance: 100.00%

Abstract:

A late Albian-early Cenomanian record (~103.3 to 99.0 Ma), including organic-rich deposits and a δ13C increase associated with oceanic anoxic event 1d (OAE 1d), is described from Ocean Drilling Program sites 1050 and 1052 in the subtropical Atlantic. Foraminifera are well preserved at these sites. Paleotemperatures estimated from benthic δ18O values average ~14°C for middle bathyal Site 1050 and ~17°C for upper bathyal Site 1052, whereas surface temperatures are estimated to have ranged from 26°C to 31°C at both sites. Among planktonic foraminifera, there is a steady balance of speciation and extinction with no discrete time of major faunal turnover. OAE 1d is recognized on the basis of a 1.2‰ δ13C increase (~100.0-99.6 Ma), which is similar in age and magnitude to δ13C excursions documented in the North Atlantic and western Tethys. Organic-rich "black shales" are present throughout the studied interval at both sites. However, deposition of individual black shale beds was not synchronous between the sites, and most of the black shale was deposited before the OAE 1d δ13C increase. A similar pattern is observed at the other sites where OAE 1d has been recognized, indicating that the site(s) of excess organic carbon burial that could have caused the δ13C increase has (have) yet to be found. Our findings add weight to the view that OAEs should be defined chemostratigraphically (δ13C) rather than lithostratigraphically.

Relevance: 100.00%

Abstract:

Apatite fission track (FT) ages and length characteristics of samples obtained from Cambrian to Paleocene-aged sandstones collected along the margin of Nares Strait on Ellesmere Island in the Canadian Arctic Archipelago are dominated by a thermal history related to Paleogene relative plate movements between Greenland and Ellesmere Island. A preliminary inverse FT thermal model for a Cambrian (Archer Fiord Formation) sandstone in the hanging wall of the Rawlings Bay thrust at Cape Lawrence is consistent with Paleocene exhumational cooling, likely as a result of erosion of the thrust. This suggests that thrusting at Cape Lawrence occurred prior to the onset of Eocene compression, likely due to transpression during earlier strike-slip movement along the strait. Models for samples from volcaniclastic sandstones of the Late Paleocene Pavy Formation (from Cape Back and near Pavy River) and a sandstone from the Late Paleocene Mount Lawson Formation (at Split Lake, near Makinson Inlet) are also consistent with minor burial heating following known periods of basaltic volcanism in Baffin Bay and Davis Strait (c. 61-59 Ma), or related tholeiitic volcanism and intrusive activity (c. 55-54 Ma). Thermal models for samples from sea-level dykes around Smith Sound suggest a period of Late Cretaceous-Paleocene heating prior to final cooling during Paleocene time. These model results imply that Paleocene tectonic movements along Nares Strait were significant, and they provide limited support for the former existence of the Wegener Fault. Apatite FT data from central Ellesmere Island suggest, however, that cooling there occurred during Early Eocene time (c. 50 Ma), likely as a result of erosion of thrusts during Eurekan compression. This diachronous cooling suggests that Eurekan deformation was partitioned at discrete intervals across Ellesmere Island, and thus it is likely that displacements along the strait were much less than the 150 km previously suggested for the Wegener Fault.

Relevance: 100.00%

Abstract:

The purpose of this paper is to add to the current empirical evidence on the relevance of real options for explaining firm investment decisions in oligopolistic markets. We study an actual investment case in the Spanish mobile telephony industry: the entry into the market of a new operator, Yoigo. We analyze the option to abandon in order to show the relevance of the possibility of selling the company in an oligopolistic market where competitors do not enjoy free entry. The NPV (net present value) of the new entrant is calculated as a starting point. Then, following the general approach proposed by Copeland and Antikarov (2001), a binomial tree is used to model managerial flexibility in discrete time periods and to value the option to abandon. The strike price of the option is calculated from the incremental EBITDA margins obtained by selling the customer base or merging with a competitor.
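A minimal sketch of such a binomial valuation of the option to abandon, in the spirit of the Copeland-Antikarov approach, is shown below; the project value, volatility, and salvage (abandonment) value are illustrative assumptions, not the Yoigo case inputs.

```python
import numpy as np

V0, salvage = 100.0, 80.0   # PV of operating cash flows; abandonment value
sigma, r, T, n = 0.35, 0.03, 5.0, 5
dt = T / n
u = np.exp(sigma * np.sqrt(dt))
d = 1 / u
p = (np.exp(r * dt) - d) / (u - d)      # risk-neutral up probability

# Terminal project values after n up/down moves (ordered by down moves)
V = V0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
W = np.maximum(V, salvage)              # abandon if salvage is worth more
for _ in range(n):                      # roll back through the tree
    V = V[:-1] * d                      # node values one step earlier (u*d=1)
    cont = np.exp(-r * dt) * (p * W[:-1] + (1 - p) * W[1:])
    W = np.maximum(cont, salvage)       # keep operating vs. abandon now

print(f"expanded value: {W[0]:.2f}, flexibility value: {W[0] - V0:.2f}")
```

The difference between the expanded value and the static value V0 is the value of the abandonment flexibility, which is what makes the entry decision viable even when the plain NPV looks marginal.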

Relevance: 100.00%

Abstract:

Financial capital is highly volatile, and if investors do not obtain a return adequate to the risk they bear, they may withdraw their capital from the firm and, in consequence, trigger structural change in any sector of the economy. The main objective is the study of the regression coefficients (beta coefficients) of the asset pricing models used in financial economics, that is, the study of how asset returns vary with changes in the markets. The choice of models is justified by their extensive theoretical and empirical use throughout the history of financial economics. The capital asset pricing model (CAPM), the model based on the arbitrage pricing theory (APT), and the three-factor model of Fama and French (FF) have been applied to the monthly returns of 27 mining-sector companies listed on the New York Stock Exchange (NYSE) or the London Stock Exchange (LSE), using data from January 2006 to December 2010. The time-series and cross-sectional results for the CAPM, APT, and FF models produce several errors, suggesting that many companies in the sector have been unable to earn their cost of capital. The results also show that higher-risk companies tend to have lower returns. These findings make it unlikely that the current equilibrium can be sustained in the long term, and they may be one of the main factors driving structural change in the mining sector in the form of business combinations.
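A time-series CAPM regression of the kind applied here reduces to an OLS fit of stock excess returns on market excess returns; the following sketch uses simulated data as a stand-in for the 27 mining companies.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 60                                    # Jan 2006 - Dec 2010, monthly
mkt = rng.normal(0.004, 0.05, n)          # market excess return
beta_true = 1.3                           # assumed true beta
stock = beta_true * mkt + rng.normal(0, 0.06, n)   # stock excess return

X = sm.add_constant(mkt)                  # intercept (alpha) + slope (beta)
fit = sm.OLS(stock, X).fit()
alpha, beta = fit.params
print(f"alpha={alpha:.4f}  beta={beta:.2f}  R2={fit.rsquared:.2f}")
```

In the cross-sectional stage, the betas estimated this way for each company would themselves become the regressors for average returns, which is where the pricing errors reported in the abstract show up.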

Relevance: 100.00%

Abstract:

The study examines the Capital Asset Pricing Model (CAPM) for the mining sector using weekly stock returns from 27 companies traded on the New York Stock Exchange (NYSE) or on the London Stock Exchange (LSE) for the period December 2008 to December 2010. The results support the use of the CAPM for the allocation of risk to companies. Most companies involved in precious metals (particularly gold) have beta values below unity (Table 1) and acted as safe-haven assets during the financial crisis. The R² values show that the fitted models have limited explanatory power (R² < 70%). The estimated beta coefficients are not sufficient to determine the expected returns on securities, but the results of the tests conducted on the sample data for the period analysed do not clearly reject the CAPM.

Relevance: 100.00%

Abstract:

This dissertation is an attempt to solve some of the problems found nowadays in the reception of satellite signals under two particularly challenging scenarios: Deep Space and Ka-band communications. Deep Space communications require large apertures on the ground in order to increase the data rate. The option of using single dishes with diameters larger than 35 meters has severe drawbacks: such antennas are expensive to maintain, prone to long downtimes, difficult to point, and have degraded performance in high frequency bands. The array solution, either with 12 meter or 35 meter antennas, is deemed to be the most economically and technically feasible one. Ka-band communications can also benefit from antenna arraying technology. The smaller aperture antennas that make up the array are easier to point and have a wider field of view, allowing multiple simultaneous beams. In addition, site diversity techniques can be replaced by pure combination in order to increase the link margin. Combining far-away antennas over a large bandwidth, either because a wideband signal or multiple narrowband signals are received, is a demanding task. This dissertation will show that the use of frequency-domain beamformers with subband delay compensation can help to ease calibration requirements and, at the same time, provide a wider field of view and enhanced equalization capabilities. In order to do so, the work has been focused on three main aspects: bibliographic research of previous work on this subject, mathematical modeling of the array combination process together with the development of new phase/delay estimation algorithms, and the proposal of new applications in which these techniques can be used.

Bibliographic research is mainly done in chapters 1, 2, 4 and 5. Chapter 1 gives a brief introduction to previous work in the field of large aperture antenna arraying; the main fields of application are described and the need for subband delay compensation is established. Filter bank theory is presented in chapter 2, where a linear-phase uniformly modulated filter bank is selected and simulated under diverse conditions. The convergence properties of several adaptive filters are shown in chapter 4. Finally, delay estimation techniques are studied and summarized in chapter 5.

From a mathematical point of view, the main contributions of this dissertation have been:

• Section 3.1.4. Calculation of the beam squint of an IF beamformer with delay compensation at discrete time steps.
• Section 3.2. Establishment of a mathematical model of a subband beamformer.
• Section 3.2.2. Calculation of the beam squint in a subband beamformer with a coarse delay buffer.
• Section 3.2.4. Analysis of the influence of internal aliasing on phase and delay subband compensation.
• Section 3.2.4.2. Calculation of the beam squint of a beamformer with subband delay compensation.
• Section 3.2.6. Calculation of the array SNR gain at each of the subbands.
• Section 3.3.2. Modeling of the transfer function of an array subject to delay estimation errors.
• Section 3.3.3. Modeling of the effects of phase and delay drifts between estimation updates.
• Section 3.4. Calculation of the array directivity with and without subband delay compensation.
• Section 5.2.6. Development of an algorithm to estimate the relative delay and phase between two signals from their subband decomposition in stationary environments.
• Section 5.5.1. Development of an algorithm to estimate the relative delay rate, delay and phase between two signals from their subband decomposition in non-stationary environments.

The applications that can benefit from these techniques are described in chapter 7:

• Section 6.2. Arrays of antennas for Deep Space communications with multibeam capacity and without geometric or group delay calibration requirements.
• Section 6.2.6. Wideband antenna arraying over long distances, in the range of thousands of kilometers, for the reception of Deep Space probes.
• Sections 6.4 and 6.3. Combination of remote stations in Ka-band site diversity scenarios for the reception of LEO or GEO satellites.
• Section 6.3. Reception of collocated GEO satellites with multibeam antenna arrays.

The publications that have resulted from the work in this dissertation are:

• A. Torre. Wideband antenna arraying over long distances. Interplanetary Progress Report, 42-194:1-18, 2013. This article summarizes the results in sections 3.2, 3.2.2 and 3.3.2, the algorithms in sections 5.2.6 and 5.5.1, and the application in section 6.2.6.
• A. Torre. Reception of wideband signals from geostationary collocated satellites with antenna arrays. IET Communications, Vol. 8, Issue 13:2229-2237, September 2014. This second article shows, among others, the results in section 3.2.4, the algorithm in section 5.2.6.1, and the application in section 6.3.
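As a toy illustration of the delay/phase estimation problem addressed in section 5.2.6, the following Python sketch recovers the relative delay and phase offset between two wideband signals from the slope and intercept of their cross-spectrum phase. The sampling rate, true delay, and noise level are assumed, and the single-FFT method shown is a simple frequency-domain stand-in for the subband algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
fs, n = 1e6, 4096                        # sample rate [Hz], block length
true_delay, true_phase = 3.7e-6, 0.8     # [s], [rad], assumed

x = rng.normal(size=n)                   # wideband signal at antenna 1
f = np.fft.rfftfreq(n, 1 / fs)
X = np.fft.rfft(x)
# Antenna 2: delayed and phase-rotated copy, built in the frequency domain
Y = X * np.exp(-1j * (2 * np.pi * f * true_delay + true_phase))
Y += 0.05 * np.fft.rfft(rng.normal(size=n))   # receiver noise

# Cross-spectrum phase is -(2*pi*f*tau + phi):
# its slope gives the delay, its intercept the phase offset
phase = np.unwrap(np.angle(Y * np.conj(X)))
slope, intercept = np.polyfit(f, phase, 1)
print(f"delay ~ {-slope / (2 * np.pi) * 1e6:.2f} us "
      f"(true {true_delay * 1e6:.2f}), phase ~ {-intercept:.2f} rad")
```

In the subband formulation of the dissertation, the per-subband phases play the role of the FFT-bin phases here, which is what allows the same slope/intercept fit to run continuously while the beamformer keeps combining the signals.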