917 results for vector error correction model
Abstract:
The Laurentide Ice Sheet (LIS) was a large, dynamic ice sheet in the early Holocene. The glacial events through Hudson Strait leading to its eventual demise are recorded in the well-dated Labrador shelf core MD99-2236 from the Cartwright Saddle. We develop a detailed history of the timing of ice-sheet discharge events from the Hudson Strait outlet of the LIS during the Holocene using high-resolution detrital carbonate, ice-rafted detritus (IRD), δ18O, and sediment color data. Eight detrital carbonate peaks (DCPs) associated with IRD peaks and light oxygen isotope events punctuate the MD99-2236 record between 11.5 and 8.0 ka. We use the stratigraphy of the DCPs developed from MD99-2236 to select the appropriate ΔR to calibrate the ages of recorded glacial events in Hudson Bay and Hudson Strait such that they match the DCPs in MD99-2236. We associate the eight DCPs with H0, the Gold Cove advance, the Noble Inlet advance, the initial retreat of the Hudson Strait ice stream (HSIS) from Hudson Strait, the opening of the Tyrrell Sea, and the drainage of glacial lakes Agassiz and Ojibway. The opening of Foxe Channel and retreat of glacial ice from Foxe Basin are represented by a shoulder in the carbonate data. A ΔR of 350 years applied to the radiocarbon ages constraining glacial events from H0 through the opening of the Tyrrell Sea provided the best match with the MD99-2236 DCPs; ΔR values and ages from the literature are used for the younger events. A very close age match was achieved between the 8.2 ka cold event in the Greenland ice cores, DCP7 (8.15 ka BP), and the drainage of glacial lakes Agassiz and Ojibway. Our stratigraphic comparison between the DCPs in MD99-2236 and the calibrated ages of Hudson Strait/Bay deglacial events shows that the retreat of the HSIS, the opening of the Tyrrell Sea, and the catastrophic drainage of glacial lakes Agassiz and Ojibway at 8.2 ka are separate events that have been combined in previous estimates of the timing of the 8.2 ka event from marine records. SW Iceland shelf core MD99-2256 documents freshwater entrainment into the subpolar gyre from the Hudson Strait outlet via the Labrador, North Atlantic, and Irminger currents. The timing of freshwater release from the LIS Hudson Strait outlet in MD99-2236 matches evidence for freshwater forcing and LIS icebergs carrying foreign minerals to the SW Iceland shelf between 11.5 and 8.2 ka. The congruency of these records supports the conclusion that freshwater from the retreat of the LIS through Hudson Strait was entrained into the subpolar gyre and provides specific time periods when pulses of LIS freshwater were present to influence climate.
Comparison of the stable carbon and nitrogen isotopic values of gill and white muscle tissue of fish
Abstract:
The potential use of stable carbon and nitrogen isotope ratios (δ13C, δ15N) of fish gills for studies of fish feeding ecology was evaluated by comparing the δ13C and δ15N of gill tissue with those of the more commonly used white muscle tissue. To account for the effect of lipid content on the δ13C signatures, a study-specific lipid correction model based on C:N ratios was developed and applied to the bulk δ13C data. For the majority of species in the study, we found no significant difference in δ13C values between gill and muscle tissue after correction, but several species showed a small (0.3-1.4 per mil) depletion in 13C in white muscle compared to gill tissue. The average species difference in δ15N between muscle and gill tissue ranged from -0.2 to 1.6 per mil across the fish species, with muscle tissue generally more enriched in 15N. The δ13C values of muscle and gill were strongly linearly correlated (R² = 0.85) over a large isotopic range (13 per mil), suggesting that both tissues can be used to determine long-term feeding or migratory habits of fish. Muscle and gill tissue bulk δ15N values were also strongly positively correlated (R² = 0.76), but with a small difference between muscle and gill tissue. This difference indicates that the bulk δ15N of the two tissue types may be influenced by different isotopic turnover rates or a different amino acid composition.
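The study-specific correction itself is not reproduced in the abstract, but the general idea of a C:N-based lipid normalization can be sketched as below. The calibration data, the linear functional form, and the fitted coefficients are purely illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical calibration data: paired bulk and lipid-extracted delta13C
# values (per mil) with the bulk C:N ratios of the same samples.
cn_ratio        = np.array([3.2, 3.8, 4.5, 5.1, 6.0, 7.2])
d13c_bulk       = np.array([-21.4, -21.9, -22.8, -23.5, -24.1, -25.0])
d13c_lipid_free = np.array([-21.2, -21.4, -21.9, -22.3, -22.5, -22.9])

# Fit a linear correction: the lipid offset (lipid-free minus bulk) as a
# function of C:N, in the spirit of a study-specific correction model.
slope, intercept = np.polyfit(cn_ratio, d13c_lipid_free - d13c_bulk, 1)

def lipid_correct(d13c, cn):
    """Apply the fitted C:N-based lipid correction to bulk delta13C values."""
    return np.asarray(d13c) + intercept + slope * np.asarray(cn)

# Correct two new (hypothetical) bulk gill-tissue measurements.
print(lipid_correct([-22.6, -23.9], [4.0, 6.5]))
```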
Abstract:
Thirty years after oxygen isotope records from microfossils deposited in ocean sediments confirmed the hypothesis that variations in the Earth's orbital geometry control the ice ages (Hays et al., 1976, doi:10.1126/science.194.4270.1121), fundamental questions remain over the response of the Antarctic ice sheets to orbital cycles (Raymo and Huybers, 2008, doi:10.1038/nature06589). Furthermore, an understanding of the behaviour of the marine-based West Antarctic ice sheet (WAIS) during the 'warmer-than-present' early-Pliocene epoch (~5-3 Myr ago) is needed to better constrain the possible range of ice-sheet behaviour in the context of future global warming (Solomon et al., 2007). Here we present a marine glacial record from the upper 600 m of the AND-1B sediment core recovered from beneath the northwest part of the Ross ice shelf by the ANDRILL programme and demonstrate well-dated, ~40-kyr cyclic variations in ice-sheet extent linked to cycles in insolation influenced by changes in the Earth's axial tilt (obliquity) during the Pliocene. Our data provide direct evidence for orbitally induced oscillations in the WAIS, which periodically collapsed, resulting in a switch from grounded ice, or ice shelves, to open waters in the Ross embayment when planetary temperatures were up to ~3 °C warmer than today (Kim and Crowley, 2000, doi:10.1029/1999PA000459) and atmospheric CO2 concentration was as high as ~400 p.p.m.v. (van der Burgh et al., 1993, doi:10.1126/science.260.5115.1788, Raymo et al., 1996, doi:10.1016/0377-8398(95)00048-8). The evidence is consistent with a new ice-sheet/ice-shelf model (Pollard and DeConto, 2009, doi:10.1038/nature07809) that simulates fluctuations in Antarctic ice volume of up to +7 m in equivalent sea level associated with the loss of the WAIS and up to +3 m in equivalent sea level from the East Antarctic ice sheet, in response to ocean-induced melting paced by obliquity. During interglacial times, diatomaceous sediments indicate high surface-water productivity, minimal summer sea ice and air temperatures above freezing, suggesting an additional influence of surface melt (Huybers, 2006, doi:10.1126/science.1125249) under conditions of elevated CO2.
Abstract:
This paper examines the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP). FEC can be used to reduce the number of retransmissions that would usually result from a lost packet, greatly reducing the requirement for TCP to deal with any losses. There is, however, a side-effect to using FEC as a countermeasure to packet loss: an additional requirement for bandwidth. For applications such as real-time video conferencing, delay must be kept to a minimum and retransmissions are certainly not desirable, so a balance must be struck between additional bandwidth and the delay caused by retransmissions. Our results show that, when packet loss occurs, a combination of FEC and TCP can significantly improve throughput compared to relying solely on TCP for retransmissions. Furthermore, a case study applies this result to demonstrate the achievable improvements in the quality of streaming video perceived by end users.
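As a minimal sketch of the trade-off described above (not the FEC scheme evaluated in the paper), the example below appends a single XOR parity packet to every block of k equal-length data packets: one loss per block can be repaired at the receiver without a TCP retransmission, at the cost of 1/k extra bandwidth.

```python
from functools import reduce

def xor_packets(packets):
    """Bitwise XOR of a list of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def add_parity(block):
    """Append one XOR parity packet to a block of k data packets."""
    return block + [xor_packets(block)]

def recover(coded_block, lost_index):
    """Rebuild the single packet at lost_index from the surviving packets."""
    survivors = [p for i, p in enumerate(coded_block) if i != lost_index]
    return xor_packets(survivors)

# k = 4 data packets plus 1 parity packet: 25% extra bandwidth, but a single
# loss in the block is repaired locally instead of triggering a retransmission.
data = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
coded = add_parity(data)
assert recover(coded, lost_index=2) == b"pkt2"
```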
Abstract:
In this work, we present an adaptive unequal loss protection (ULP) scheme for H.264/AVC video transmission over lossy networks. This scheme combines erasure coding, H.264/AVC error resilience techniques, and importance measures in video coding. The unequal importance of the video packets is identified at the group-of-pictures (GOP) and H.264/AVC data partitioning levels. The presented method can adaptively assign unequal amounts of forward error correction (FEC) parity across the video packets according to network conditions such as the available bandwidth, packet loss rate, and average packet burst loss length. A near-optimal algorithm is developed to solve the FEC assignment optimization. Simulation results show that our scheme effectively utilizes network resources such as bandwidth while improving the quality of the video transmission. In addition, the proposed ULP strategy ensures graceful degradation of the received video quality as the packet loss rate increases. © 2010 IEEE.
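The near-optimal assignment algorithm itself is not given in the abstract; the sketch below illustrates the underlying idea of unequal FEC allocation under a simplified i.i.d. loss model (the paper additionally models burst losses). The importance weights, packet counts, and loss rate are hypothetical.

```python
from math import comb

def residual_loss(k, parity, p):
    """Probability that an (n, k) = (k + parity, k) erasure code fails,
    i.e. more than `parity` of its n packets are lost (i.i.d. loss rate p)."""
    n = k + parity
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(parity + 1, n + 1))

def allocate_parity(classes, budget, p):
    """Greedily give each extra parity packet to the class where it most
    reduces importance-weighted expected loss. `classes` maps a label to
    (number of source packets k, importance weight w)."""
    parity = {label: 0 for label in classes}
    for _ in range(budget):
        def gain(label):
            k, w = classes[label]
            return w * (residual_loss(k, parity[label], p)
                        - residual_loss(k, parity[label] + 1, p))
        best = max(parity, key=gain)
        parity[best] += 1
    return parity

# Hypothetical GOP: I-partition packets matter more than P/B packets.
classes = {"I": (8, 10.0), "P": (16, 3.0), "B": (24, 1.0)}
print(allocate_parity(classes, budget=12, p=0.05))
```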
Abstract:
We develop a framework for estimating the quality of transmission (QoT) of a new lightpath before it is established, as well as for calculating the expected degradation it will cause to existing lightpaths. The framework correlates the QoT metrics of established lightpaths, which are readily available from coherent optical receivers that can be extended to serve as optical performance monitors. Past studies of this kind used only space (routing) information, neglecting spectrum, and focused on old-generation noncoherent networks. The proposed framework accounts for correlation in both the space and spectrum domains and can be applied to both fixed-grid wavelength division multiplexing (WDM) and elastic optical networks. It is based on a graph transformation that exposes and models the interference between spectrum-neighboring channels. Our results indicate that our QoT estimates are very close to the actual performance data, that is, to having perfect knowledge of the physical layer. The proposed estimation framework is shown to provide up to 4 × 10⁻² lower pre-forward error correction bit error ratio (BER) compared to the worst-case interference scenario, which overestimates the BER. The higher accuracy can be harvested when lightpaths are provisioned with low margins; our results showed up to a 47% reduction in required regenerators, a substantial saving in equipment cost.
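A minimal sketch of the kind of interference relation such a graph transformation exposes is shown below: two established lightpaths are treated as interfering neighbors when they share a fiber link and occupy adjacent spectrum slots. The topology, slot ranges, and guard-band size are invented for illustration; the actual framework builds a richer QoT model on top of this relation.

```python
from itertools import combinations

# Each lightpath: route as a set of links and an occupied spectrum-slot range.
lightpaths = {
    "lp1": {"links": {"A-B", "B-C"}, "slots": (0, 3)},
    "lp2": {"links": {"B-C", "C-D"}, "slots": (4, 7)},
    "lp3": {"links": {"A-B"},        "slots": (10, 13)},
}

def spectrum_adjacent(a, b, guard=1):
    """True if two slot ranges lie within `guard` slots of each other."""
    (a0, a1), (b0, b1) = a, b
    return not (a0 > b1 + guard or b0 > a1 + guard)

# Interference edges: pairs that share a link and occupy neighbouring spectrum,
# i.e. the established lightpaths whose monitored QoT would inform the estimate
# for a new, similarly placed lightpath.
edges = [
    (u, v) for u, v in combinations(lightpaths, 2)
    if lightpaths[u]["links"] & lightpaths[v]["links"]
    and spectrum_adjacent(lightpaths[u]["slots"], lightpaths[v]["slots"])
]
print(edges)   # [('lp1', 'lp2')]
```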
Abstract:
We quantify the error statistics and patterning effects in a 5 × 40 Gbit/s WDM RZ-DBPSK SMF/DCF fibre link using hybrid Raman/EDFA amplification. We propose an adaptive constrained coding scheme for the suppression of errors due to patterning effects. It is established that this coding technique can greatly reduce the bit error rate (BER) even at large BER values (BER > 10⁻¹). The proposed approach can be used in combination with forward error correction (FEC) schemes to correct errors even when the real channel BER is outside the FEC workspace.
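The adaptive constrained code itself is not specified in the abstract; as a generic stand-in, the sketch below implements a simple run-length-limited constrained code (bit stuffing after runs of identical bits), one classic way of forbidding the data patterns most prone to patterning errors. The run-length limit and the encoder/decoder are illustrative assumptions, not the scheme proposed in the paper.

```python
def rll_encode(bits, max_run=3):
    """Insert a complementary 'stuffed' bit after every run of max_run
    identical bits, limiting run lengths in the transmitted stream."""
    out, prev, run = [], None, 0
    for b in bits:
        run = run + 1 if b == prev else 1
        prev = b
        out.append(b)
        if run == max_run:
            out.append(1 - b)          # stuffed bit breaks the run
            prev, run = 1 - b, 1
    return out

def rll_decode(bits, max_run=3):
    """Remove the stuffed bits by mirroring the encoder's run counting."""
    out, prev, run, skip = [], None, 0, False
    for b in bits:
        if skip:                       # drop the stuffed bit, keep its state
            prev, run, skip = b, 1, False
            continue
        run = run + 1 if b == prev else 1
        prev = b
        out.append(b)
        if run == max_run:
            skip = True
    return out

msg = [1, 1, 1, 1, 1, 0, 0, 0, 0, 1]
assert rll_decode(rll_encode(msg)) == msg
```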
Abstract:
Dengue is an important vector-borne virus that infects on the order of 400 million individuals per year. Infection with one of the virus's four serotypes (denoted DENV-1 to 4) may be silent, result in symptomatic dengue 'breakbone' fever, or develop into the more severe dengue hemorrhagic fever/dengue shock syndrome (DHF/DSS). Extensive research has therefore focused on identifying factors that influence dengue infection outcomes. It has been well-documented through epidemiological studies that DHF is most likely to result from a secondary heterologous infection, and that individuals experiencing a DENV-2 or DENV-3 infection are more likely to present with more severe dengue disease than those experiencing a DENV-1 or DENV-4 infection. However, a mechanistic understanding of how these risk factors affect disease outcomes, and further, how the virus's ability to evolve these mechanisms will affect disease severity patterns over time, is lacking. In the second chapter of my dissertation, I formulate mechanistic mathematical models of primary and secondary dengue infections that describe how the dengue virus interacts with the immune response and the results of this interaction on the risk of developing severe dengue disease. I show that only the innate immune response is needed to reproduce characteristic features of a primary infection, whereas the adaptive immune response is needed to reproduce characteristic features of a secondary dengue infection. I then add to these models a quantitative measure of disease severity that assumes immunopathology, and analyze the effectiveness of virological indicators of disease severity. In the third chapter of my dissertation, I then statistically fit these mathematical models to viral load data of dengue patients to understand the mechanisms that drive variation in viral load. I specifically consider the roles that immune status, clinical disease manifestation, and serotype may play in explaining viral load variation observed across the patients. With this analysis, I show that there is statistical support for the theory of antibody dependent enhancement in the development of severe disease in secondary dengue infections and that there is statistical support for serotype-specific differences in viral infectivity rates, with infectivity rates of DENV-2 and DENV-3 exceeding those of DENV-1. In the fourth chapter of my dissertation, I integrate these within-host models with a vector-borne epidemiological model to understand the potential for virulence evolution in dengue. Critically, I show that dengue is expected to evolve towards intermediate virulence, and that the optimal virulence of the virus depends strongly on the number of serotypes that co-circulate. Together, these dissertation chapters show that dengue viral load dynamics provide insight into the within-host mechanisms driving differences in dengue disease patterns and that these mechanisms have important implications for dengue virulence evolution.
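The dissertation's models are not reproduced in the abstract; the sketch below only shows the general shape of such a within-host model: a target-cell-limited system with an innate-response compartment that speeds viral clearance, integrated with SciPy. All parameter values, initial conditions, and the specific response terms are hypothetical placeholders, not the fitted models from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

def within_host(t, y, beta, delta, p, c, k):
    """Generic target-cell-limited model with an innate-response compartment N
    that speeds viral clearance (illustrative only, not the thesis model)."""
    T, I, V, N = y                       # target cells, infected cells, virus, innate response
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V - k * N * V       # immune-enhanced clearance of free virus
    dN = 1e-6 * I - 0.5 * N              # innate response driven by infected cells
    return [dT, dI, dV, dN]

# Hypothetical parameters and initial conditions (placeholders).
params = (3e-9, 1.0, 1e3, 5.0, 0.5)      # beta, delta, p, c, k
y0 = [1e7, 0.0, 10.0, 0.0]
sol = solve_ivp(within_host, (0, 14), y0, args=params, rtol=1e-6, dense_output=True)

t = np.linspace(0, 14, 141)
viral_load = sol.sol(t)[2]               # the trajectory one would fit to patient data
print(f"peak log10 viral load: {np.log10(viral_load.max()):.1f}")
```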
Abstract:
My dissertation has three chapters which develop and apply microeconometric techniques to empirically relevant problems. All of the chapters examine robustness issues (e.g., measurement error and model misspecification) in econometric analysis. The first chapter studies the identifying power of an instrumental variable in the nonparametric heterogeneous treatment effect framework when a binary treatment variable is mismeasured and endogenous. I characterize the sharp identified set for the local average treatment effect under the following two assumptions: (1) the exclusion restriction of an instrument and (2) deterministic monotonicity of the true treatment variable in the instrument. The identification strategy allows for general measurement error. Notably, (i) the measurement error is nonclassical, (ii) it can be endogenous, and (iii) no assumptions are imposed on the marginal distribution of the measurement error, so that I do not need to assume the accuracy of the measurement. Based on the partial identification result, I provide a consistent confidence interval for the local average treatment effect with uniformly valid size control. I also show that the identification strategy can incorporate repeated measurements to narrow the identified set, even if the repeated measurements themselves are endogenous. Using the National Longitudinal Study of the High School Class of 1972, I demonstrate that my new methodology can produce nontrivial bounds for the return to college attendance when attendance is mismeasured and endogenous.
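For orientation, the point-identified object being bounded is the textbook Wald/IV estimand for the local average treatment effect, which is valid only when the binary treatment is measured correctly. The sketch below computes that baseline on simulated data; it does not implement the chapter's sharp bounds under mismeasurement, and the data-generating process is invented for illustration.

```python
import numpy as np

def wald_late(y, d, z):
    """Textbook Wald/IV estimand for the LATE:
    (E[Y|Z=1] - E[Y|Z=0]) / (E[D|Z=1] - E[D|Z=0]).
    Valid only when the binary treatment D is measured without error;
    with misreported D, only bounds are identified (not implemented here)."""
    y, d, z = map(np.asarray, (y, d, z))
    num = y[z == 1].mean() - y[z == 0].mean()
    den = d[z == 1].mean() - d[z == 0].mean()
    return num / den

# Tiny synthetic example (hypothetical data, true effect = 1.0).
rng = np.random.default_rng(0)
z = rng.integers(0, 2, 5000)
d = (rng.random(5000) < 0.2 + 0.5 * z).astype(int)   # instrument shifts take-up
y = 1.0 * d + rng.normal(size=5000)
print(wald_late(y, d, z))
```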
The second chapter, which is part of a coauthored project with Federico Bugni, considers the problem of inference in dynamic discrete choice problems when the structural model is locally misspecified. We consider two popular classes of estimators for dynamic discrete choice models: K-step maximum likelihood estimators (K-ML) and K-step minimum distance estimators (K-MD), where K denotes the number of policy iterations employed in the estimation problem. These estimator classes include popular estimators such as Rust (1987)'s nested fixed point estimator, Hotz and Miller (1993)'s conditional choice probability estimator, Aguirregabiria and Mira (2002)'s nested algorithm estimator, and Pesendorfer and Schmidt-Dengler (2008)'s least squares estimator. We derive and compare the asymptotic distributions of K-ML and K-MD estimators when the model is arbitrarily locally misspecified, and we obtain three main results. In the absence of misspecification, Aguirregabiria and Mira (2002) show that all K-ML estimators are asymptotically equivalent regardless of the choice of K. Our first result shows that this finding extends to a locally misspecified model, regardless of the degree of local misspecification. As a second result, we show that an analogous result holds for all K-MD estimators, i.e., all K-MD estimators are asymptotically equivalent regardless of the choice of K. Our third and final result compares K-MD and K-ML estimators in terms of asymptotic mean squared error. Under local misspecification, the optimally weighted K-MD estimator depends on the unknown asymptotic bias and is no longer feasible. In turn, feasible K-MD estimators could have an asymptotic mean squared error that is higher or lower than that of the K-ML estimators. To demonstrate the relevance of our asymptotic analysis, we illustrate our findings in a simulation exercise based on a misspecified version of Rust (1987)'s bus engine replacement problem.
The last chapter investigates the causal effect of the Omnibus Budget Reconciliation Act of 1993, which caused the biggest change to the EITC in its history, on unemployment and labor force participation among single mothers. Unemployment and labor force participation are difficult to define for several reasons, for example because of marginally attached workers. Instead of searching for a unique definition for each of these two concepts, this chapter bounds unemployment and labor force participation by observable variables and, as a result, considers various competing definitions of these two concepts simultaneously. This bounding strategy leads to partial identification of the treatment effect. The inference results depend on the construction of the bounds, but they imply a positive effect on labor force participation and a negligible effect on unemployment. The results imply that the difference-in-differences result based on the BLS definition of unemployment can be misleading due to misclassification of unemployment.
Abstract:
Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy-dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables full-spectrum CT, in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and are very noisy due to photon starvation. In this work, we propose two methods based on machine learning to address the spectral distortion issue and to improve the material decomposition. The first approach is to model the distortions using an artificial neural network (ANN) and compensate for them in a statistical reconstruction. The second approach is to directly correct for the distortion in the projections. Both techniques can be performed as a calibration process in which the neural network is trained on data from 3D-printed phantoms to learn the distortion model or the correction model for the spectral distortion. This replaces the need for the synchrotron measurements required by the conventional technique to derive the distortion model parametrically, which can be costly and time consuming. The results demonstrate the experimental feasibility and potential advantages of ANN-based distortion modeling and correction for more accurate K-edge imaging with a PCXD. Given the computational efficiency with which the ANN can be applied to projection data, the proposed scheme can be readily integrated into existing CT reconstruction pipelines.
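A rough sketch of the second (projection-domain) approach is given below: a small multilayer perceptron is trained to map distorted spectra back to undistorted ones. The synthetic Gaussian-blur distortion, the network size, and scikit-learn's MLPRegressor are stand-ins for the detector physics, the actual ANN, and the 3D-printed-phantom calibration data used in the work.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n_bins = 16                      # energy thresholds swept across the spectrum

# Synthetic stand-in for calibration data: `ideal` spectra a phantom should
# produce and `measured` spectra distorted by detector effects plus noise.
ideal = rng.gamma(shape=2.0, scale=1.0, size=(5000, n_bins))
blur = np.exp(-0.5 * (np.subtract.outer(np.arange(n_bins), np.arange(n_bins)) / 1.5) ** 2)
blur /= blur.sum(axis=1, keepdims=True)
measured = ideal @ blur.T + rng.normal(scale=0.05, size=ideal.shape)

# Train an MLP to invert the distortion: measured spectrum -> corrected spectrum.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(measured, ideal)

# Apply the learned correction to new (held-out) projections.
test_measured = ideal[:100] @ blur.T
corrected = net.predict(test_measured)
print("mean abs error before:", np.abs(test_measured - ideal[:100]).mean())
print("mean abs error after: ", np.abs(corrected - ideal[:100]).mean())
```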
Abstract:
Atomic ions trapped in micro-fabricated surface traps can be utilized as a physical platform with which to build a quantum computer. They possess many of the desirable qualities of such a device, including high-fidelity state preparation and readout, universal logic gates, and long coherence times, and they can be readily entangled with each other through photonic interconnects. The use of optical cavities integrated with trapped-ion qubits as a photonic interface presents the possibility of order-of-magnitude improvements in performance in several key areas of their use in quantum computation. The first part of this thesis describes the design and fabrication of a novel surface trap for integration with an optical cavity. The trap is custom made on a highly reflective mirror surface and includes the capability of moving the ion trap location along all three trap axes with nanometer-scale precision. The second part of this thesis demonstrates the suitability of small micro-cavities formed from laser-ablated fused silica substrates, with radii of curvature in the 300-500 micron range, for use with the mirror trap as part of an integrated ion trap cavity system. Quantum computing applications for such a system include dramatic improvements in the photonic entanglement rate (up to 10 kHz), the qubit measurement time (down to 1 microsecond), and the measurement error rate (down to the 10⁻⁵ range). The final part of this thesis details a performance simulator for exploring the physical resource requirements and performance demands of scaling such a quantum computer to sizes capable of performing quantum algorithms beyond the limits of classical computation.
Abstract:
This thesis studies the cross-validation criterion for choosing among small area models. The study is limited to unit-level small area models. The basic small area model was introduced by Battese, Harter and Fuller in 1988. It is a linear mixed regression model with a random intercept. It comprises several parameters: the fixed-effect parameter β, the random component, and the variances associated with the residual error. The Battese et al. model is used in a survey setting to predict the mean of a variable of interest y in each small area using an administrative auxiliary variable x known for the entire population. The estimation method uses a normal distribution to model the residual component of the model. Allowing a general residual dependence, that is, one other than the normal distribution, yields a more flexible methodology. This generalization leads to a new class of exchangeable models. Indeed, the generalization lies in the modelling of the residual dependence, which can be either normal (as in the Battese et al. model) or non-normal. The objective is to determine the small-area parameters as precisely as possible, which hinges on choosing the right residual dependence to use in the model. The cross-validation criterion is studied for this purpose.
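As a rough sketch (using Python rather than the thesis's own setting), the unit-level Battese–Harter–Fuller model can be fitted as a random-intercept linear mixed model, and a leave-one-domain-out cross-validation score of the kind studied here can be computed for it. The simulated data, the use of statsmodels' MixedLM, and the fixed-effects-only prediction rule are assumptions made for illustration; comparing alternative (non-normal) residual dependencies would require swapping in the corresponding fitted models.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic unit-level data: y_ij = beta0 + beta1 * x_ij + v_i + e_ij
areas = np.repeat(np.arange(20), 15)
x = rng.normal(size=areas.size)
v = rng.normal(scale=0.5, size=20)[areas]        # area-level random intercepts
y = 1.0 + 2.0 * x + v + rng.normal(scale=1.0, size=areas.size)
df = pd.DataFrame({"y": y, "x": x, "area": areas})

def loo_domain_cv(df):
    """Leave-one-domain-out CV error for the random-intercept
    (Battese-Harter-Fuller-type) model, predicting the held-out
    domain from the fixed effects only."""
    errors = []
    for a in df["area"].unique():
        train, test = df[df["area"] != a], df[df["area"] == a]
        model = sm.MixedLM.from_formula("y ~ x", groups="area", data=train)
        fit = model.fit(reml=True)
        pred = fit.predict(exog=test)            # fixed-effects prediction
        errors.append(np.mean((test["y"] - pred) ** 2))
    return np.mean(errors)

print("leave-one-domain-out MSE:", round(loo_domain_cv(df), 3))
```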