962 results for variable data printing
Abstract:
During the cleaning of the HPC core surfaces from Hole 480 for photography, the material removed was carefully conserved in approximately 10 cm intervals (by K. Kelts); this material was made available to us in the hope that it would be possible to obtain an oxygen isotope stratigraphy for the site. The samples were, of course, somewhat variable in size, but the majority were probably between 5 and 10 cm³. Had this been a normal marine environment, such sample sizes would have contained abundant planktonic foraminifers together with a small number of benthics. However, this is clearly not the case, for many samples contained no foraminifers, whereas others contained more benthics than planktonics. Among the planktonic foraminifers the commonest species are Globigerina bulloides, Neogloboquadrina dutertrei, and N. pachyderma. A few samples contain a more normal fauna with Globigerinoides spp. and occasional Globorotalia spp. Sample 480-3-3, 20-30 cm contained Globigerina rubescens, isolated specimens of which were noted in a few other samples in Cores 3, 4, and 5. This is a particularly solution-sensitive species; in the open Pacific it is only found widely distributed at horizons of exceptionally low carbonate dissolution, such as the last glacial-to-interglacial transition.
Abstract:
The mixing regime of the upper 180 m of a mesoscale eddy in the vicinity of the Antarctic Polar Front at 47° S and 21° E was investigated during the R.V. Polarstern cruise ANT-XVIII/2 within the scope of the iron fertilization experiment EisenEx. On the basis of hydrographic CTD and ADCP profiles we deduced the vertical diffusivity Kz from two different parameterizations. Since these parameterizations bear the character of empirical functions, based on theoretical and idealized assumptions, they were inter alia compared with Cox-number and Thorpe-scale related diffusivities deduced from microstructure measurements, which supplied the first direct insights into turbulence of this ocean region. Values of Kz in the range of 10⁻⁴ to 10⁻³ m²/s appear as a rather robust estimate of vertical diffusivity within the seasonal pycnocline. Values in the mixed layer above are more variable in time and reach 10⁻¹ m²/s during periods of strong winds. The results confirm a close agreement between the microstructure-based eddy diffusivities and eddy diffusivities calculated after the parameterization of Pacanowski and Philander [1981, Journal of Physical Oceanography 11, 1443-1451, doi:10.1175/1520-0485(1981)011<1443:POVMIN>2.0.CO;2].
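The Pacanowski and Philander scheme cited above maps the gradient Richardson number to a vertical eddy viscosity and diffusivity. A minimal sketch, using commonly quoted default coefficients (ν₀ = 10⁻² m²/s, α = 5, n = 2, plus small background values) rather than necessarily the values used in the study:

```python
def pp81_diffusivity(ri, nu0=1.0e-2, alpha=5.0, n=2, nu_b=1.0e-4, kappa_b=1.0e-5):
    """Eddy viscosity nu and diffusivity kappa after Pacanowski & Philander (1981).

    ri : gradient Richardson number, Ri = N^2 / (du/dz)^2
    Returns (nu, kappa) in m^2/s. Coefficient values are common defaults.
    """
    ri = max(ri, 0.0)                       # clip statically unstable (negative) Ri
    nu = nu0 / (1.0 + alpha * ri) ** n + nu_b
    kappa = nu / (1.0 + alpha * ri) + kappa_b
    return nu, kappa

def richardson_number(n2, shear2):
    """Ri from buoyancy frequency squared and velocity shear squared."""
    return n2 / max(shear2, 1e-12)

# Weak stratification / strong shear (mixed layer) -> large Kz;
# strong stratification (pycnocline) -> small Kz.
ri_mixed = richardson_number(1e-6, 1e-4)    # Ri = 0.01
ri_pycno = richardson_number(1e-4, 1e-6)    # Ri = 100
kz_mixed = pp81_diffusivity(ri_mixed)[1]
kz_pycno = pp81_diffusivity(ri_pycno)[1]
```

The scheme reproduces the qualitative contrast reported in the abstract: diffusivity collapses toward the molecular background as Ri grows.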
Abstract:
The compositional record of the AND-2A drillcore is examined using petrological, sedimentological, volcanological and geochemical analysis of clasts, sediments and pore waters. Preliminary investigations of basement clasts (granitoids and metasediments) indicate both local and distal sources corresponding to variable ice-volume and ice-flow directions. Low abundance of sedimentary clasts (e.g., arkose, litharenite) suggests reduced contributions from sedimentary covers, while intraclasts (e.g., diamictite, conglomerate) attest to intrabasinal reworking. Volcanic material includes pyroclasts (e.g., pumice, scoria), sediments and lava. Primary and reworked tephra layers occur within the Early Miocene interval (1093 to 640 metres below sea floor, mbsf). The compositions of volcanic clasts reveal a diversity of alkaline types derived from the McMurdo Volcanic Group. Finer-grained sediments (e.g., sandstone, siltstone) show increases in biogenic silica and volcanic glass from 230 to 780 mbsf and higher proportions of terrigenous material c. 350 to 750 mbsf and below 970 mbsf. Basement clast assemblages suggest a dominant provenance from the Skelton Glacier - Darwin Glacier area and from the Ferrar Glacier - Koettlitz Glacier area. Provenance of sand grains is consistent with clast sources. Thirteen Geochemical Units are established based on compositional trends derived from continuous XRF scanning. High values of Fe and Ti indicate terrigenous and volcanic sources, whereas high Ca values signify either biogenic or diagenetic sources. Highly alkaline and saline pore waters were produced by chemical exchange with glass at moderately elevated temperatures.
Abstract:
Owing to their important roles in biogeochemical cycles, phytoplankton functional types (PFTs) have been the aim of an increasing number of ocean color algorithms. Yet, none of the existing methods are based on phytoplankton carbon (C) biomass, which is a fundamental biogeochemical and ecological variable and the "unit of accounting" in Earth system models. We present a novel bio-optical algorithm to retrieve size-partitioned phytoplankton carbon from ocean color satellite data. The algorithm is based on existing methods to estimate particle volume from a power-law particle size distribution (PSD). Volume is converted to carbon concentrations using a compilation of allometric relationships. We quantify absolute and fractional biomass in three PFTs based on size - picophytoplankton (0.5-2 µm in diameter), nanophytoplankton (2-20 µm) and microphytoplankton (20-50 µm). The mean spatial distributions of total phytoplankton C biomass and individual PFTs, derived from global SeaWiFS monthly ocean color data, are consistent with current understanding of oceanic ecosystems, i.e., oligotrophic regions are characterized by low biomass and dominance of picoplankton, whereas eutrophic regions have high biomass to which nanoplankton and microplankton contribute relatively larger fractions. Global climatological, spatially integrated phytoplankton carbon biomass standing stock estimates using our PSD-based approach yield ~0.25 Gt of C, consistent with analogous estimates from two other ocean color algorithms and several state-of-the-art Earth system models. Satisfactory in situ closure observed between PSD and POC measurements lends support to the theoretical basis of the PSD-based algorithm.
Uncertainty budget analyses indicate that absolute carbon concentration uncertainties are driven by the PSD parameter N₀, which determines particle number concentration to first order, while uncertainties in PFTs' fractional contributions to total C biomass are mostly due to the allometric coefficients. The C algorithm presented here, which is not empirically constrained a priori, partitions biomass in size classes and introduces improvement over the assumptions of the other approaches. However, the range of phytoplankton C biomass spatial variability globally is larger than estimated by any other models considered here, which suggests an empirical correction to the N₀ parameter is needed, based on PSD validation statistics. These corrected absolute carbon biomass concentrations validate well against in situ POC observations.
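The retrieval chain described above (power-law PSD → particle volume → allometric carbon, summed over each size class) can be sketched as follows. The PSD slope, the number concentration and the allometric coefficients a and b below are illustrative placeholders (a, b in the style of published allometric fits), not the values compiled in the paper:

```python
import math

def size_class_carbon(n0, xi, dmin, dmax, a=0.216, b=0.939, steps=1000):
    """Carbon concentration contributed by one size class under a power-law PSD.

    PSD:            n(D) = n0 * D**(-xi)        [particles per m^3 per um]
    Cell volume:    V(D) = pi/6 * D**3          [um^3]
    Carbon/cell:    C(D) = a * V(D)**b          [pg] (illustrative allometry)
    Trapezoidal integration of C(D) * n(D) over [dmin, dmax] in um.
    Returns pg C per m^3 of water.
    """
    def integrand(d):
        v = math.pi / 6.0 * d ** 3
        return a * v ** b * n0 * d ** (-xi)
    h = (dmax - dmin) / steps
    total = 0.5 * (integrand(dmin) + integrand(dmax))
    for i in range(1, steps):
        total += integrand(dmin + i * h)
    return total * h

# The three size-based PFTs used in the paper (diameters in um)
pico  = size_class_carbon(1e11, 4.0, 0.5, 2.0)
nano  = size_class_carbon(1e11, 4.0, 2.0, 20.0)
micro = size_class_carbon(1e11, 4.0, 20.0, 50.0)
total_c = pico + nano + micro
fractions = [pico / total_c, nano / total_c, micro / total_c]
```

Fractional contributions per PFT follow directly by normalizing each class by the total, mirroring the paper's absolute-versus-fractional biomass split.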
Abstract:
Dissolution of non-aqueous phase liquids (NAPLs) or gases into groundwater is a key process, both for contamination problems originating from organic liquid sources, and for dissolution trapping in geological storage of CO2. Dissolution in natural systems typically will involve both high and low NAPL saturations and a wide range of pore water flow velocities within the same source zone for dissolution to groundwater. To correctly predict dissolution in such complex systems and as the NAPL saturations change over time, models must be capable of predicting dissolution under a range of saturations and flow conditions. To provide data to test and validate such models, an experiment was conducted in a two-dimensional sand tank, where the dissolution of a spatially variable, 5 × 5 cm² DNAPL tetrachloroethene source was carefully measured using X-ray attenuation techniques at a resolution of 0.2 × 0.2 cm². By continuously measuring the NAPL saturations, the temporal evolution of DNAPL mass loss by dissolution to groundwater could be measured at each pixel. Next, a general dissolution and solute transport code was written and several published rate-limited (RL) dissolution models and a local equilibrium (LE) approach were tested against the experimental data. It was found that none of the models could adequately predict the observed dissolution pattern, particularly in the zones of higher NAPL saturation. Combining these models with a model for NAPL pool dissolution produced qualitatively better agreement with experimental data, but the total matching error was not significantly improved. A sensitivity study of commonly used fitting parameters further showed that several combinations of these parameters could produce equally good fits to the experimental observations.
The results indicate that common empirical model formulations for RL dissolution may be inadequate in complex, variable-saturation NAPL source zones, and that further model development and testing are desirable.
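The contrast between rate-limited and local-equilibrium dissolution that these models embody can be illustrated with the simplest first-order mass-transfer closure in 1-D steady plug flow. The solubility and mass-transfer coefficient below are round illustrative numbers, not fitted parameters from the experiment:

```python
import math

def rl_effluent_conc(c_s, k_la, velocity, length):
    """Rate-limited NAPL dissolution in 1-D plug flow at steady state.

    Governing balance: v * dC/dx = k_la * (C_s - C), C(0) = 0,
    whose solution is   C(L) = C_s * (1 - exp(-k_la * L / v)).
    c_s      : solubility limit [mg/L]
    k_la     : lumped mass-transfer rate coefficient [1/h]
    velocity : pore-water velocity [m/h]; length : source length [m]
    """
    return c_s * (1.0 - math.exp(-k_la * length / velocity))

def le_effluent_conc(c_s):
    """Local-equilibrium limit: water leaves the source at solubility."""
    return c_s

c_s = 200.0    # tetrachloroethene solubility, approximate round number [mg/L]
slow = rl_effluent_conc(c_s, k_la=5.0, velocity=1.0, length=0.05)
fast = rl_effluent_conc(c_s, k_la=5.0, velocity=10.0, length=0.05)
```

Faster flow gives less contact time, so the rate-limited effluent concentration falls further below the local-equilibrium value, which is one reason a single empirical rate formulation struggles across the wide velocity and saturation range described above.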
Abstract:
Sedimentary proxies used to reconstruct marine productivity suffer from variable preservation and are sensitive to factors other than productivity. Therefore, proxy calibration is warranted. Here we map the spatial patterns of two paleoproductivity proxies, biogenic opal and barium fluxes, from a set of core-top sediments recovered in the Subarctic North Pacific. Comparisons of the proxy data with independent estimates of primary and export production, surface water macronutrient concentrations and biological pCO2 drawdown indicate that neither proxy shows a significant correlation with primary or export productivity for the entire region. Biogenic opal fluxes, when corrected for preservation using 230Th-normalized accumulation rates, show a good correlation with primary productivity along the volcanic arcs (tau = 0.71, p = 0.0024) and with export productivity throughout the western Subarctic North Pacific (tau = 0.71, p = 0.0107). Moderate and good correlations of biogenic barium flux with export production (tau = 0.57, p = 0.0022) and with surface water silicate concentrations (tau = 0.70, p = 0.0002) are observed for the central and eastern Subarctic North Pacific. For reasons unknown, however, no correlation is found in the western Subarctic North Pacific between biogenic barium flux and the reference data. Nonetheless, we show that barite saturation, uncertainty in the lithogenic barium corrections and problems with the reference datasets are not responsible for the lack of a significant correlation between biogenic barium flux and the reference data. Further studies evaluating the factors controlling the variability of the biogenic constituents in the sediments are desirable in this region.
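The correlations quoted above are Kendall rank correlations (tau). For reference, a minimal tau-a implementation (no tie correction, unlike the tie-aware variants statistical packages usually apply; the toy data are not the core-top measurements):

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs, no tie handling."""
    assert len(x) == len(y)
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1      # pair ordered the same way in both series
            elif s < 0:
                discordant += 1      # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)

# Toy proxy-vs-reference series: mostly monotone -> tau close to +1
opal_flux    = [1.0, 2.5, 3.1, 4.7, 6.0]
productivity = [0.8, 2.0, 3.5, 3.9, 5.5]
tau = kendall_tau(opal_flux, productivity)
```

Because tau counts pair orderings rather than fitting a line, it is robust to the strongly non-normal flux distributions typical of sediment data, which is presumably why it is used here.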
Abstract:
The MAREDAT atlas covers 11 types of plankton, ranging in size from bacteria to jellyfish. Together, these plankton groups determine the health and productivity of the global ocean and play a vital role in the global carbon cycle. Working within a uniform and consistent spatial and depth grid (map) of the global ocean, the researchers compiled thousands and tens of thousands of data points to identify regions of plankton abundance and scarcity as well as areas of data abundance and scarcity. At many of the grid points, the MAREDAT team accomplished the difficult conversion from abundance (numbers of organisms) to biomass (carbon mass of organisms). The MAREDAT atlas provides an unprecedented global data set for ecological and biochemical analysis and modeling as well as a clear mandate for compiling additional existing data and for focusing future data gathering efforts on key groups in key areas of the ocean. This is a gridded data product about diazotrophic organisms. There are 6 variables. Each variable is gridded on a dimension of 360 (longitude) × 180 (latitude) × 33 (depth) × 12 (month). The first group of 3 variables are: (1) number of biomass observations, (2) biomass, and (3) special nifH-gene-based biomass. The second group of 3 variables is the same as the first group except that it only grids non-zero data. We have constructed a database on diazotrophic organisms in the global pelagic upper ocean by compiling more than 11,000 direct field measurements including 3 sub-databases: (1) nitrogen fixation rates, (2) cyanobacterial diazotroph abundances from cell counts and (3) cyanobacterial diazotroph abundances from qPCR assays targeting nifH genes. Biomass conversion factors are estimated based on cell sizes to convert abundance data to diazotrophic biomass. Data are assigned to 3 groups including Trichodesmium, unicellular diazotrophic cyanobacteria (groups A, B and C when applicable) and heterocystous cyanobacteria (Richelia and Calothrix).
Total nitrogen fixation rates and diazotrophic biomass are calculated by summing the values from all the groups. Some of the nitrogen fixation rates are whole-seawater measurements and are used as total nitrogen fixation rates. Both volumetric and depth-integrated values were reported. Depth-integrated values are also calculated for those vertical profiles with values at 3 or more depths.
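The gridding and depth-integration rules described above can be sketched as follows; `grid_index` assumes a longitude-major flat layout, which is an illustrative choice rather than the product's actual storage order:

```python
def depth_integrate(depths, values):
    """Trapezoidal depth integration of a vertical profile.

    Mirrors the product's rule: only profiles with values at 3 or more
    depths are integrated; shallower profiles return None.
    """
    if len(depths) < 3:
        return None
    total = 0.0
    for (z0, v0), (z1, v1) in zip(zip(depths, values), zip(depths[1:], values[1:])):
        total += 0.5 * (v0 + v1) * (z1 - z0)   # trapezoid between adjacent depths
    return total

def grid_index(lon, lat, depth_level, month):
    """Flat index into a 360 x 180 x 33 x 12 array (lon-major, illustrative)."""
    return ((lon * 180 + lat) * 33 + depth_level) * 12 + month

# A 3-point biomass profile (depths in m, biomass in arbitrary units)
integrated = depth_integrate([0.0, 10.0, 20.0], [2.0, 1.0, 0.5])
```

With this layout, the product's full grid holds 360 × 180 × 33 × 12 = 25,660,800 cells per variable, which is why the non-zero-only second variable group is a useful companion to the full fields.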
Abstract:
River runoff is an essential climate variable as it is directly linked to the terrestrial water balance and controls a wide range of climatological and ecological processes. Despite its scientific and societal importance, there are to date no pan-European observation-based runoff estimates available. Here we employ a recently developed methodology to estimate monthly runoff rates on a regular spatial grid in Europe. For this we first assemble an unprecedented collection of river flow observations, combining information from three distinct databases. Observed monthly runoff rates are first tested for homogeneity and then related to gridded atmospheric variables (E-OBS version 12) using machine learning. The resulting statistical model is then used to estimate monthly runoff rates (December 1950 - December 2015) on a 0.5° × 0.5° grid. The performance of the newly derived runoff estimates is assessed in terms of cross validation. The paper closes with example applications, illustrating the potential of the new runoff estimates for climatological assessments and drought monitoring.
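The workflow above (fit a statistical model linking gridded atmospheric predictors to observed runoff, then judge it by cross validation) can be illustrated with a deliberately simplified stand-in: a single predictor, ordinary least squares in place of the paper's machine-learning model, and leave-one-out folds:

```python
import statistics

def fit_linear(xs, ys):
    """Ordinary least squares for one predictor: runoff ~ a + b * predictor."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b          # intercept, slope

def leave_one_out_errors(xs, ys):
    """Hold out each station in turn, refit, and record its prediction error."""
    errs = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_linear(tx, ty)
        errs.append(ys[i] - (a + b * xs[i]))
    return errs

# Toy station data: monthly precipitation (mm) vs observed runoff (mm)
precip = [50.0, 80.0, 120.0, 60.0, 100.0]
runoff = [2 * p + 1 for p in precip]          # perfectly linear toy relation
cv_errors = leave_one_out_errors(precip, runoff)
```

In the real method the predictors are multivariate E-OBS fields and the model is far more flexible, but the estimate-then-cross-validate structure is the same.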
Abstract:
River runoff is an essential climate variable as it is directly linked to the terrestrial water balance and controls a wide range of climatological and ecological processes. Despite its scientific and societal importance, there are to date no pan-European observation-based runoff estimates available. Here we employ a recently developed methodology to estimate monthly runoff rates on a regular spatial grid in Europe. For this we first assemble an unprecedented collection of river flow observations, combining information from three distinct databases. Observed monthly runoff rates are first tested for homogeneity and then related to gridded atmospheric variables (E-OBS version 11) using machine learning. The resulting statistical model is then used to estimate monthly runoff rates (December 1950 - December 2014) on a 0.5° × 0.5° grid. The performance of the newly derived runoff estimates is assessed in terms of cross validation. The paper closes with example applications, illustrating the potential of the new runoff estimates for climatological assessments and drought monitoring.
Abstract:
A CMOS vector-sum phase shifter covering the full 360° range is presented in this paper. Broadband operational transconductance amplifiers with variable transconductance provide coarse scaling of the quadrature vector amplitudes. Fine scaling of the amplitudes is accomplished using a passive resistive network. Expressions are derived to predict the maximum bit resolution of the phase shifter from the scaling factor of the coarse and fine vector-scaling stages. The phase shifter was designed and fabricated using a standard 130-nm CMOS process and was tested on-wafer over the frequency range of 4.9-5.9 GHz. The phase shifter delivers root mean square (rms) phase and amplitude errors of 1.25° and 0.7 dB, respectively, at the midband frequency of 5.4 GHz. The input and output return losses are both below 17 dB over the band, and the insertion loss is better than 4 dB over the band. The circuit occupies an area of 0.303 mm² excluding bonding pads and draws 28 mW from a 1.2 V supply.
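The vector-sum principle behind this phase shifter is that scaled in-phase and quadrature vectors combine as out = A_I·cos(ωt) + A_Q·sin(ωt), so the output phase is atan2(A_Q, A_I) and sign choices on the two weights select the quadrant, giving full 360° coverage. A small numerical sketch of that mapping (not a circuit model):

```python
import math

def vector_sum(a_i, a_q):
    """Amplitude and phase (degrees, 0-360) of a_i*cos(wt) + a_q*sin(wt)."""
    amplitude = math.hypot(a_i, a_q)
    phase = math.degrees(math.atan2(a_q, a_i)) % 360.0
    return amplitude, phase

def weights_for_phase(phase_deg, amplitude=1.0):
    """Inverse mapping: the quadrature weights that realize a target phase."""
    rad = math.radians(phase_deg)
    return amplitude * math.cos(rad), amplitude * math.sin(rad)

# Round trip through the third quadrant, where both weights are negative
amp, phase = vector_sum(*weights_for_phase(225.0))
```

In the chip, the coarse OTA stage and the fine resistive network together quantize the two weights, which is why the achievable bit resolution depends on both scaling factors.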
Abstract:
When we study the variables that affect survival time, we usually estimate their effects by the Cox regression model. In biomedical research, effects of the covariates are often modified by a biomarker variable. This leads to covariate-biomarker interactions. Here the biomarker is an objective measurement of the patient characteristics at baseline. Liu et al. (2015) built a local partial likelihood bootstrap model to estimate and test this interaction effect of covariates and biomarker, but the R code developed by Liu et al. (2015) can only handle one variable and one interaction term and cannot fit the model with adjustment for nuisance variables. In this project, we expand the model to allow adjustment for nuisance variables, expand the R code to take any chosen interaction terms, and we set up many parameters for users to customize their research. We also build an R package called "lplb" to integrate the complex computations into a simple interface. We conduct numerical simulations to show that the new method has excellent finite sample properties under both the null and alternative hypotheses. We also applied the method to analyze data from a prostate cancer clinical trial with the acid phosphatase (AP) biomarker.
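The covariate-biomarker interaction enters the Cox model through the linear predictor, e.g. η = β₁x + β₂z + β₃xz, where the interaction term β₃ makes the effect of covariate x depend on biomarker z. A minimal evaluation of the Cox partial log-likelihood with such a term (Breslow-style risk sets, no ties; a toy illustration in Python, not the local partial likelihood bootstrap implemented in the R package "lplb"):

```python
import math

def cox_partial_loglik(times, events, covs, beta):
    """Cox partial log-likelihood with a covariate-biomarker interaction.

    covs[i] = (x_i, z_i); linear predictor eta = b1*x + b2*z + b3*x*z,
    so b3 is the interaction effect being tested. Assumes no tied event
    times; censored subjects (events[i] == 0) only appear in risk sets.
    """
    b1, b2, b3 = beta

    def eta(x, z):
        return b1 * x + b2 * z + b3 * x * z

    ll = 0.0
    for i, (t_i, d_i) in enumerate(zip(times, events)):
        if not d_i:
            continue                       # censored: no event contribution
        x_i, z_i = covs[i]
        risk = [math.exp(eta(x, z))        # everyone still at risk at t_i
                for t, (x, z) in zip(times, covs) if t >= t_i]
        ll += eta(x_i, z_i) - math.log(sum(risk))
    return ll

# Three subjects, all events, arbitrary covariate/biomarker values
ll_null = cox_partial_loglik([3.0, 2.0, 1.0], [1, 1, 1],
                             [(0.5, 1.0), (1.0, 0.2), (0.0, 0.7)],
                             (0.0, 0.0, 0.0))
```

Maximizing this function over β (and bootstrapping the interaction estimate locally in z) is the computation the package wraps behind its interface.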
Abstract:
In this thesis, novel analog-to-digital and digital-to-analog generalized time-interleaved variable bandpass sigma-delta modulators are designed, analysed, evaluated and implemented that are suitable for high-performance data conversion for a broad spectrum of applications. These generalized time-interleaved variable bandpass sigma-delta modulators can perform noise-shaping for any centre frequency from DC to Nyquist. The proposed topologies are well-suited for Butterworth, Chebyshev, inverse-Chebyshev and elliptical filters, where designers have the flexibility of specifying the centre frequency, bandwidth as well as the passband and stopband attenuation parameters. The application of the time-interleaving approach, in combination with these bandpass loop-filters, not only overcomes the limitations that are associated with conventional and mid-band resonator-based bandpass sigma-delta modulators, but also offers an elegant means to increase the conversion bandwidth, thereby relaxing the need to use faster or higher-order sigma-delta modulators. A step-by-step design technique has been developed for the design of time-interleaved variable bandpass sigma-delta modulators. Using this technique, an assortment of lower- and higher-order single- and multi-path generalized A/D variable bandpass sigma-delta modulators were designed, evaluated and compared in terms of their signal-to-noise ratios, hardware complexity, stability, tonality and sensitivity for ideal and non-ideal topologies. Extensive behavioural-level simulations verified that one of the proposed topologies not only used fewer coefficients but also exhibited greater robustness to non-idealities. Furthermore, second-, fourth- and sixth-order single- and multi-path digital variable bandpass sigma-delta modulators are designed using this technique.
The mathematical modelling and evaluation of tones caused by the finite wordlengths of these digital multi-path sigma-delta modulators, when excited by sinusoidal input signals, are also derived from first principles and verified using simulation and experimental results. The fourth-order digital variable bandpass sigma-delta modulator topologies are implemented in VHDL and synthesized on a Xilinx® Spartan™-3 Development Kit using fixed-point arithmetic. Circuit outputs were taken via the RS232 connection provided on the FPGA board and evaluated using MATLAB routines developed by the author. These routines included the decimation process as well. The experiments undertaken by the author further validated the design methodology presented in the work. In addition, a novel tunable and reconfigurable second-order variable bandpass sigma-delta modulator has been designed and evaluated at the behavioural level. This topology offers a flexible set of choices for designers and can operate either in single- or dual-mode, enabling multi-band implementations on a single digital variable bandpass sigma-delta modulator. This work is also supported by a novel user-friendly design and evaluation tool that has been developed in MATLAB/Simulink that can speed up the design, evaluation and comparison of analog and digital single-stage and time-interleaved variable bandpass sigma-delta modulators. This tool enables the user to specify the conversion type, topology, loop-filter type, path number and oversampling ratio.
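The behavioural-level modelling referred to above can be illustrated with the textbook lowpass special case: a second-order, 1-bit sigma-delta loop whose bitstream average tracks a DC input. This is a generic sketch, not one of the thesis's variable bandpass topologies:

```python
def second_order_sigma_delta(samples):
    """Behavioural model of a second-order lowpass sigma-delta modulator:
    two cascaded discrete-time integrators, a 1-bit quantizer, and unit
    feedback. Inputs should stay well inside (-1, 1) for stability.
    """
    i1 = i2 = 0.0
    out = []
    for x in samples:
        y = 1.0 if i2 >= 0.0 else -1.0   # 1-bit quantizer on second integrator
        i1 += x - y                      # first integrator (input minus feedback)
        i2 += i1 - y                     # second integrator (also fed back)
        out.append(y)
    return out

# DC input: the average of the +/-1 bitstream converges to the input value,
# while the quantization error is pushed to high frequencies (noise shaping).
bits = second_order_sigma_delta([0.25] * 4096)
dc_estimate = sum(bits) / len(bits)
```

A bandpass variant replaces the integrators with resonator (or, in the thesis, generalized variable bandpass) loop filters so the noise null moves from DC to the chosen centre frequency.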
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
An important problem in machine learning is determining the complexity of the model to be learned. Too much complexity leads to overfitting, which corresponds to finding structures that do not actually exist in the data, while too little complexity leads to underfitting, meaning that the model's expressiveness is insufficient to capture all the structures present in the data. For some probabilistic models, model complexity takes the form of one or more hidden variables whose role is to explain the generative process of the data. Various approaches exist for identifying the appropriate number of hidden variables in a model. This thesis focuses on Bayesian nonparametric methods for determining the number of hidden variables to use as well as their dimensionality. The popularization of Bayesian nonparametric statistics within the machine learning community is fairly recent. Their main appeal is that they offer highly flexible models whose complexity scales with the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms, and applications. This thesis presents our contributions to these three research topics in the context of learning latent variable models. First, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm for discovering the hidden components of the model, which we evaluate on two concrete robotics applications.
Our results show that the proposed approach outperforms classical learning approaches in both performance and flexibility. Second, we propose the extended cascading Indian buffet process, a model serving as a prior probability distribution over the space of directed acyclic graphs. In the context of Bayesian networks, this prior makes it possible to identify both the presence of hidden variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is used for evaluation on structure identification and density estimation problems. Finally, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process for learning graphs and orders. The advantage of the new model is that it admits connections between observable variables and takes the ordering of variables into account. We present a reversible-jump Markov chain Monte Carlo inference algorithm for the joint learning of graphs and orders. Evaluation is carried out on density estimation and independence testing problems. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with a completely arbitrary structure.
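The Pitman-Yor process introduced above is commonly simulated through its Chinese-restaurant seating scheme, in which the discount parameter d thickens the tail of the cluster-size distribution relative to the Dirichlet process (recovered at d = 0). A small sketch with arbitrary parameter values, not the thesis's inference algorithm:

```python
import random

def pitman_yor_table_counts(n, alpha=1.0, d=0.5, seed=42):
    """Seat n customers by the Pitman-Yor Chinese-restaurant scheme.

    Customer arriving after n_seen others joins existing table j with
    probability (c_j - d) / (n_seen + alpha) and opens a new table with
    probability (alpha + d*K) / (n_seen + alpha), where K is the current
    number of tables. Returns the list of table occupancy counts.
    """
    rng = random.Random(seed)
    counts = []                              # customers per table
    for n_seen in range(n):
        k = len(counts)
        r = rng.random() * (n_seen + alpha)  # uniform over total mass
        if r < alpha + d * k:
            counts.append(1)                 # open a new table (new cluster)
        else:
            r -= alpha + d * k
            for j in range(k):
                r -= counts[j] - d
                if r < 0:
                    counts[j] += 1           # join existing table j
                    break
            else:
                counts[-1] += 1              # numerical-roundoff fallback
    return counts

clusters = pitman_yor_table_counts(500)
```

In the mixture-of-Gaussians setting, each table corresponds to one mixture component, so the number of components grows with the data instead of being fixed in advance.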
Abstract:
The equity market in Colombia is still developing, as is the confidence of investors when choosing optimal investment portfolios, ones that deliver the maximum expected return at minimum risk. This research therefore explores in depth the sectors that make up the stock market and determines which is more profitable than another, using the model proposed by Harry Markowitz and taking into account Sharpe's extensions of the theory through the Sharpe ratio and betas. The sectors that make up the Colombian equity market include Financials, Materials, Energy, Consumer Staples, Services and Industrials, all of which follow the same downward trend as the Colcap index, which has delivered negative returns in recent years. With this research, the reader and investor therefore has tools that apply the Markowitz model to discern, on the basis of historical data, the sectors in which investment is recommended and those which, given the trend, should instead be avoided. It should be noted, however, that this research is based on historical data, trends and mathematical calculations that may differ from current reality, since economic, political or social circumstances can affect the returns of the stocks and sectors in which people decide to invest.
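The Markowitz mean-variance quantities and the Sharpe ratio used in the study reduce to a few lines of arithmetic; the sector returns and covariances below are illustrative round numbers, not Colombian market data:

```python
import math

def portfolio_stats(weights, mean_returns, cov, risk_free=0.0):
    """Expected return, volatility and Sharpe ratio of a portfolio.

    Markowitz framework: return = w'mu, variance = w'Cov w,
    Sharpe = (return - risk_free) / volatility. Weights must sum to 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    ret = sum(w * m for w, m in zip(weights, mean_returns))
    var = sum(wi * wj * cov[i][j]
              for i, wi in enumerate(weights)
              for j, wj in enumerate(weights))
    vol = math.sqrt(var)
    return ret, vol, (ret - risk_free) / vol

# Two hypothetical sectors: annual mean returns and covariance matrix
mu  = [0.08, 0.12]
cov = [[0.04, 0.01],
       [0.01, 0.09]]
ret, vol, sharpe = portfolio_stats([0.6, 0.4], mu, cov)
```

Sweeping the weights over a grid and keeping, for each level of volatility, the portfolio with the highest return traces the efficient frontier from which the study's sector recommendations are read off.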