871 results for front-end of innovations


Relevance:

100.00%

Publisher:

Abstract:

This Thesis arose from the intensity and credibility of several warning signs associated with policies aimed at reducing the weight of petroleum in the energy sector, for economic, geopolitical, and environmental reasons alike. It took shape as it incorporated novel but essential elements of the petroleum world, particularly the "enabling technologies", whether of direct impact, such as fracking, or of indirect impact, of which the (pure) electric vehicle is a prime example. The Thesis was defined and structured to develop a series of inquiries and analyses leading to a set of conclusions useful to energy corporations, and also to an understanding of the evolution of the sector itself and of its technical and economic performance, with a view to delivering the service that end users demand. Within the analytical and reflective work of the Thesis, certain conceptual terms were coined to describe the reality of the sector more accurately. Such is the case of the "Investment burden", which weighs the specific investment (€/W) required by a facility against the duration of the construction period and against both tangible and regulatory risks. Alongside this, the Thesis proposes a tool for study and prognosis, called "Market integrated energy efficiency", which is especially applicable to dichotomies, such as the thermal car versus the electric car. The objective is to optimize a given energy activity, or the total productivity of the sector. This Thesis proposes several innovations, which can be grouped at two levels: the first within the field of energy, and the second within the field of corporations, especially those of the hydrocarbon sector. At the corporate level, adaptation to the new reality will be a direct function of each corporation's capacity to develop and/or acquire the technologies that allow it to maintain or increase market share.

The conclusions of the Thesis point mainly to three options for a corporate rethinking:
- energy diversification
- geographic displacement
- profiting from possible new technological niches, such as:
  • upstream: enhanced oil recovery using renewable energy
  • downstream: additives aimed at reducing emissions
  • management of change: energy storage for operational purposes

Some energy policies follow the zero-growth trend of certain OECD countries; the reality worldwide, however, is very different. For example, according to various estimates (based on reliable databases referenced in the Thesis), the number of vehicles will grow from roughly one billion today to twice that figure by 2035, while oil production will only rise from 95 to 145 million barrels per day: a 50% increase against a 100% increase. This will create a curious mismatch that will begin to be felt within a few years. Companies and corporations of the hydrocarbon sector may lose the monopoly over the transport sector that they currently hold against all other energy sources. That loss can be offset by better management of all their capabilities and by a more integrated participation in the world of energy, seeking synergies where until now there was only distance. Petroleum products can feed any type of thermal machine, such as Brayton turbines, or feed reformers for the mass production of H2 for later use in fuel cells. Storing petroleum products poses no challenge or problem whatsoever; and yet this storage is the key to solving many problems.

Petroleum trading may become less volatile owing to the effects associated with storage; what is certain is that the energy efficiency of the uses to which that petroleum is put will be higher. The Thesis started from certain threats to the future of petroleum, but after the analysis performed, a promising future can be glimpsed in the merging of coercive environmental policies with the new technologies emerging from the current portfolio of technical opportunities.

ABSTRACT

This Thesis arises from the force and credibility of a number of warning signs linked to policies aimed at reducing the role of petroleum in the energy industry for economic, geopolitical and environmental reasons. As such, it grew by aggregating new but essential elements into the petroleum sector. This is the case of the "enabling technologies" that have either a direct impact on the petroleum industry (such as fracking) or an indirect but deep one (such as the fully electric vehicle). The Thesis was defined and structured in such a way that it could convey useful conclusions for energy corporations through a series of inquiries and treatises. In addition, the Thesis aims at understanding the evolution of the energy industry and its technical and economic capabilities, towards delivering the services required by end users. Within the analytical work performed in the Thesis, new terms were coined that depict concepts which help explain the facts of the energy industry. This is the case of the "Investment burden", which weighs the specific capital investment (€/W) required to build a facility against the time it takes to build it, as well as against tangible risks and those posed by regulation.

In addition, the Thesis puts forward an application designed for review and prediction: the so-called "Market integrated energy efficiency", especially well suited to dichotomies, and very appealing for the case of the thermal car versus the electric car. The aim is to optimize an energy-related activity, or even the overall productivity of the system. The innovations proposed in this Thesis can be classified in two tiers: tier one, within the energy sector; and tier two, related to energy corporations in general, with oil and gas corporations at heart. At the corporate level, adaptation to the new energy era will depend on each corporation's capability to develop or acquire the technologies that will allow it to retain or enhance market share. The Thesis highlights three options for corporate evolution:
- diversification within energy
- geographic displacement
- profiting from new technologies relevant to important niches of future work, such as:
  o upstream: enhanced oil recovery using renewable energy sources (for upstream companies in the petroleum business)
  o downstream: additives for reducing combustion emissions
  o management of change: operational energy storage

Some energy policies tend to follow the zero growth of some OECD countries, but the global picture could be very different. For instance, according to estimates, the number of vehicles in use will grow from 1 billion to more than double that figure by 2035, while oil production will only grow from 95 to 145 million barrels per day (a 50% rise versus an increase of over 100%). Hydrocarbon corporations may lose the monopoly they currently hold over the supply of energy to transportation. This loss can be mitigated through an enhanced use of their capabilities and a higher degree of integration in the world of energy, exploring for synergies where until now there were only gaps.

Petroleum products can be used to feed any type of thermal machine, such as Brayton turbines, or steam reformers to produce H2 to be exploited in fuel cells. Storing petroleum products does not present any problem, yet very many problems can be solved with it. Petroleum trading will likely become less volatile because of the smoothing effects of distributed storage, and the efficiency of petroleum consumption will certainly be much higher. The Thesis kicked off with a menace to the future of petroleum. At the end of the analysis, however, a bright future can be foreseen in the merging of highly demanding environmental policies with the relevant technologies of the currently emerging technical portfolio.
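The abstract describes the "Investment burden" as a weighting of specific investment (€/W) against construction time and risk, but gives no formula. A minimal sketch of such an indicator is below; the multiplicative form and all the numbers are illustrative assumptions, not the thesis's actual definition.

```python
# Illustrative sketch of an "Investment burden"-style indicator.
# ASSUMPTION: the thesis gives no formula; the multiplicative
# combination below is chosen only to show how the three inputs
# (specific investment, build time, risk) could be weighed together.

def investment_burden(specific_investment_eur_per_w: float,
                      construction_years: float,
                      risk_factor: float) -> float:
    """Higher values mean a heavier burden on the investor."""
    return specific_investment_eur_per_w * construction_years * (1.0 + risk_factor)

# Hypothetical comparison: a fast-build gas plant vs. a slow-build,
# higher-risk nuclear plant (all figures invented for illustration).
gas = investment_burden(0.8, construction_years=3, risk_factor=0.1)
nuclear = investment_burden(4.5, construction_years=10, risk_factor=0.5)
```

Under these invented inputs the slow, capital-intensive project carries a burden more than twenty times that of the fast one, which is the kind of asymmetry the concept is meant to capture.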

Relevance:

100.00%

Publisher:

Abstract:

A Network of Evolutionary Processors, or NEP, is a computational model inspired by the evolutionary model of cells, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the way cells manipulate information. In particular, a NEP defines a theoretical computing machine capable of solving NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computing machines are expected to solve complex real-world problems (requiring high scalability) at the cost of high spatial complexity. In the NEP model, cells are represented by words encoding their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, each of which represents a cell. These fixed moments of evolution are called configurations. As in the biological model, the words (cells) mutate and divide according to simple bio-operations, but only fit words (much as in natural selection) are kept for the next configuration. As a computing tool, a NEP defines a parallel and distributed architecture for symbolic processing; in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency, and universality have been widely studied and proven. Today, therefore, the theoretical NEP model can be considered mature.

The main motivation of this Final Degree Project is to propose a practical approach that bridges the gap between the theoretical NEP model and a real implementation able to run on high-performance computing platforms, in order to solve the complex problems that today's society demands. Until now, the tools developed to simulate the NEP model, although correct and with satisfactory results, have usually been tied to their execution environment, whether through specific hardware or through implementations particular to one problem. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of a NEP model (or one of its variants), either locally, as a traditional application, or distributed using cloud services. Nepfix is a software application developed over 7 months and currently in its second iteration, the prototype phase having been left behind. Nepfix has been designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix contains two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development took into account the current state of the model, that is, the definitions of the most common processors and filters that make up the NEP family of models. Additionally, this component offers flexibility in execution: the simulator's capabilities can be extended without modifying Nepfix, by means of a scripting language.

As part of the development of this component, a standard representation of the NEP model based on the JSON format has also been defined, along with a proposed way of representing and encoding words, which is necessary for communication between servers. An important characteristic of this component is that it can be considered an isolated application, so the distribution and execution strategy are totally independent of it. The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. It is worth highlighting not only for the expected practical results, but also for the research process that must be undertaken with this new perspective for the execution of natural computing systems. The main characteristic of applications that run in the cloud is that they are managed by the platform and are normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the other instances. As added value, Nepfix addresses two distinct implementation perspectives of the distribution and execution model (developed in two different iterations), which have a very significant impact on the capabilities and restrictions of the simulator. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word.

This implementation is an optimization of a common topology in the NEP model that makes it possible to use cloud tools to achieve transparent scaling (with regard to load balancing among processors), but it produces undesired effects such as indeterminacy in the order of the results or the impossibility of efficiently distributing strongly interconnected networks. The second iteration, on the other hand, corresponds to the synchronous execution model. The elements of a NEP network follow a start-compute-synchronize cycle until the problem has been solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is costly and requires additional infrastructure; specifically, a RabbitMQ message queue server. In this perspective, however, the benefits for sufficiently large problems outweigh the drawbacks, since distribution is immediate (there are no restrictions), although the scaling process is not trivial. In short, the concept of Nepfix as a computational framework can be considered a success: the technology is viable, and the first results confirm that the characteristics originally sought have been achieved. Many fronts remain open for future research. This document proposes some approaches to solving the problems identified, such as error recovery and the dynamic division of a NEP into different subdomains. Other problems beyond the scope of this project remain open to future development, for example the standardization of word representation and optimizations in the execution of the synchronous model.

Finally, some preliminary results of this Final Degree Project were recently presented as a scientific paper at the International Work-Conference on Artificial Neural Networks (IWANN) 2015 and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that this work, more than a Final Degree Project, is only the beginning of work that may have a greater impact in the scientific community.

Abstract

A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which may model some properties of evolving cell communities at the syntactical level. A NEP defines theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells represented by a word in a given set of words, called the genotype space of the species, are accepted as surviving (correct) ones. This feature is analogous to the natural process of evolution. Formally, a NEP is based on an architecture for parallel and distributed processing; in other words, a network of language processors. Since the date when NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP; specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity.

The main motivation for this End of Grade project (EOG project for short) is to propose a practical approximation that closes the gap between the theoretical NEP model and a practical implementation on high-performance computing platforms, in order to solve some of the highly complex problems society requires today. Up until now, tools developed to simulate NEPs, while correct and successful, have usually been tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants, either locally, like a traditional application, or in a distributed cloud environment. Nepfix was developed during a 7-month cycle and is undergoing its second iteration now that the prototype period is over. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required, and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to NEP execution and therefore simulation. During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided by the use of Python as a scripting language to run custom logic. Along with the simulation, a definition language for NEPs has been defined based on JSON, as well as a mechanism to represent words and their possible manipulations. The NEP simulator is isolated from distribution and, as mentioned before, different applications can include it as a dependency; the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud.

The development carried a heavy R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on the feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider Nepfix successful as a computational framework: cloud technology is ready for the challenge, and the first results reassure us that the properties the Nepfix project pursued were met. Many investigation branches are left open for future work. In this EOG, implementation guidelines are proposed for some of them, such as error recovery and dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project, such as word representation standardization and NEP model optimizations, were identified during development. As confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published at the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015.

Development has not stopped since that point, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems, and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
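The configuration cycle described above (words mutate via bio-operations, then only fit words survive to the next configuration) can be sketched in a few lines. This is a toy illustration of the NEP model only, assuming simple substitution rules and a permitting output filter; it is not Nepfix's actual Java API or JSON format.

```python
# Minimal sketch of one NEP-style evolutionary step.
# ASSUMPTIONS: substitution rules as (old, new) pairs and an output
# filter defined by a permitted alphabet; invented for illustration.
from typing import List, Set, Tuple

def evolve(words: Set[str], rules: List[Tuple[str, str]]) -> Set[str]:
    """Apply every substitution rule at every position of every word."""
    out = set()
    for w in words:
        for old, new in rules:
            for i in range(len(w)):
                if w.startswith(old, i):
                    out.add(w[:i] + new + w[i + len(old):])
    return out or words  # words with no applicable rule survive unchanged

def output_filter(words: Set[str], allowed: Set[str]) -> Set[str]:
    """Keep only 'fit' words, i.e. those built solely from allowed symbols."""
    return {w for w in words if set(w) <= allowed}

# One configuration -> next configuration: mutate, then select.
config = {"ab", "ba"}
next_config = output_filter(evolve(config, [("a", "c")]), allowed={"b", "c"})
# next_config == {"cb", "bc"}
```

In a full NEP, each processor node holds its own rule set and the filters govern which words may migrate between nodes of the network; the synchronous model in the project runs exactly this mutate/filter cycle in lock-step across all nodes.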

Relevance:

100.00%

Publisher:

Abstract:

The Drosophila melanogaster Suppressor of forked [Su(f)] protein shares homology with the yeast RNA14 protein and the 77-kDa subunit of human cleavage stimulation factor, which are proteins involved in mRNA 3′ end formation. This suggests a role for Su(f) in mRNA 3′ end formation in Drosophila. The su(f) gene produces three transcripts; two of them are polyadenylated at the end of the transcription unit, and one is a truncated transcript, polyadenylated in intron 4. Using temperature-sensitive su(f) mutants, we show that accumulation of the truncated transcript requires wild-type Su(f) protein. This suggests that the Su(f) protein negatively autoregulates its own accumulation by stimulating 3′ end formation of the truncated su(f) RNA. Cloning of su(f) from Drosophila virilis and analysis of its RNA profile suggest that su(f) autoregulation is conserved in this species. Sequence comparison between su(f) from both species allowed us to identify three conserved regions in intron 4 downstream of the truncated RNA poly(A) site. These conserved regions include the GU-rich downstream sequence involved in poly(A) site definition. Using transgenes truncated within intron 4, we show that sequence up to the conserved GU-rich domain is sufficient for production of the truncated RNA and for regulation of this production by su(f). Our results indicate a role of su(f) in the regulation of poly(A) site utilization and an important role of the GU-rich sequence for this regulation to occur.

Relevance:

100.00%

Publisher:

Abstract:

WormBase (http://www.wormbase.org) is a web-based resource for the Caenorhabditis elegans genome and its biology. It builds upon the existing ACeDB database of the C.elegans genome by providing data curation services, a significantly expanded range of subject areas and a user-friendly front end.

Relevance:

100.00%

Publisher:

Abstract:

The mechanism by which elongation factor G (EF-G) catalyzes the translocation of tRNAs and mRNA on the ribosome is not known. The reaction requires GTP, which is hydrolyzed to GDP. Here we show that EF-G from Escherichia coli lacking the G domain still catalyzed partial translocation in that it promoted the transfer of the 3' end of peptidyl-tRNA to the P site on the 50S ribosomal subunit into a puromycin-reactive state in a slow-turnover reaction. In contrast, it did not bring about translocation on the 30S subunit, since (i) deacylated tRNA was not released from the P site and (ii) the A site remained blocked for aminoacyl-tRNA binding during and after partial translocation. The reaction probably represents the first EF-G-dependent step of translocation that follows the spontaneous formation of the A/P state that is not puromycin-reactive [Moazed, D. & Noller, H. F. (1989) Nature (London) 342, 142-148]. In the complete system--i.e., with intact EF-G and GTP--the 50S phase of translocation is rapidly followed by the 30S phase during which the tRNAs together with the mRNA are shifted on the small ribosomal subunit, and GTP is hydrolyzed. As to the mechanism of EF-G function, the results show that the G domain has an important role, presumably exerted through interactions with other domains of EF-G, in the promotion of translocation on the small ribosomal subunit. The G domain's intramolecular interactions are likely to be modulated by GTP binding and hydrolysis.

Relevance:

100.00%

Publisher:

Abstract:

ALICE is one of the four major experiments at the LHC particle accelerator installed at CERN, the European laboratory. The management committee of the LHC accelerator has just approved a program update for this experiment. Among the upgrades planned for the coming years of the ALICE experiment are improving the resolution and tracking efficiency while maintaining the excellent particle identification ability, and increasing the read-out event rate to 100 kHz. To achieve this, it is necessary to upgrade the Time Projection Chamber (TPC) and Muon tracking (MCH) detectors, modifying their read-out electronics, which is not suitable for this migration. To overcome this limitation, the design, fabrication and experimental testing of a new ASIC named SAMPA has been proposed. This ASIC will support both positive and negative polarities, with 32 channels per chip and continuous data read-out, at a smaller power consumption than the previous versions. This work covers the design, fabrication and experimental testing of a read-out front-end in 130 nm CMOS technology with configurable polarity (positive/negative), peaking time and sensitivity. The new SAMPA ASIC can be used in both chambers (TPC and MCH). The proposed front-end is composed of a Charge Sensitive Amplifier (CSA) and a semi-Gaussian shaper. In order to obtain an ASIC integrating 32 channels per chip, the design of the proposed front-end requires small area and low power consumption, but at the same time low noise. In this sense, a new technique for improving the noise and PSRR (Power Supply Rejection Ratio) of the CSA, with no impact on power or area, is proposed in this work. The analysis and equations of the proposed circuit are presented; they were verified by electrical simulations and by experimental tests of a produced chip with 5 channels of the designed front-end. The measured equivalent noise charge was <550 e- for a sensitivity of 30 mV/fC at an input capacitance of 18.5 pF. The total core area of the front-end was 2300 µm × 150 µm, and the measured total power consumption was 9.1 mW per channel.
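The two measured figures above can be tied together with a quick back-of-the-envelope check (not from the paper): an equivalent noise charge of 550 electrons, expressed as charge and scaled by the 30 mV/fC sensitivity, gives the rms noise voltage expected at the shaper output.

```python
# Back-of-the-envelope check: translate an equivalent noise charge (ENC)
# into an rms voltage at the shaper output via the front-end sensitivity.
# The ENC (550 e-) and sensitivity (30 mV/fC) are the abstract's figures;
# the conversion itself is elementary charge arithmetic.
E_CHARGE_FC = 1.602e-4  # electron charge in fC (1.602e-19 C = 1.602e-4 fC)

def enc_to_output_noise_mv(enc_electrons: float, sensitivity_mv_per_fc: float) -> float:
    """rms output noise in mV for a given ENC and front-end sensitivity."""
    return enc_electrons * E_CHARGE_FC * sensitivity_mv_per_fc

noise_mv = enc_to_output_noise_mv(550, 30)  # about 2.6 mV rms at the output
```

At roughly 2.6 mV rms of output noise against signals of tens to hundreds of mV, the quoted ENC leaves a comfortable signal-to-noise margin for minimum-ionizing signals.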

Relevance:

100.00%

Publisher:

Abstract:

The bound notebook contains academic texts copied by Harvard student James Varney in the early 1720s. The texts are written tête-bêche (where both ends of the volume are used to begin writing). The front paste-down endpaper reads 'James Varney his book 1724,' and the rear paste-down endpaper reads 'Joseph Lovett' [AB 1728].

Relevance:

100.00%

Publisher:

Abstract:

The present-day condition of bipolar glaciation characterized by rapid and large climate fluctuations began at the end of the Pliocene with the intensification of the Northern Hemisphere continental glaciations. The global cooling steps of the late Pliocene have been documented in numerous studies of Ocean Drilling Program (ODP) sites from the Northern Hemisphere. However, the interactions between oceans and between land and ocean during these cooling steps are poorly known. In particular, data from the Southern Hemisphere are lacking. Therefore I investigated the pollen of ODP Site 1082 in the southeast Atlantic Ocean in order to obtain a high-resolution record of vegetation change in Namibia between 3.4 and 1.8 Ma. Four phases of vegetation development are inferred that are connected to global climate change. (1) Before 3 Ma, extensive, rather open grass-rich savannahs with mopane trees existed in Namibia, but the extension of desert and semidesert vegetation was still restricted. (2) Increase of winter rainfall dependent Renosterveld-like vegetation occurred between 3.1 and 2.2 Ma connected to strong advection of polar waters along the Namibian coast and a northward shift of the Polar Front Zone in the Southern Ocean. (3) Climatically induced fluctuations became stronger between 2.7 and 2.2 Ma and semiarid areas extended during glacial periods probably as the result of an increased pole-equator thermal gradient and consequently globally enhanced atmospheric circulation. (4) Aridification and climatic variability further increased after 2.2 Ma, when the Polar Front Zone migrated southward and the influence of Atlantic moisture brought by the westerlies to southern Africa declined. 
It is concluded that the positions of the frontal systems in the Southern Ocean which determine the locations of the high-pressure cells over the South Atlantic and the southern Indian Ocean have a strong influence on the climate of southern Africa in contrast to the climate of northwest and central Africa, which is dominated by the Saharan low-pressure cell.

Relevance:

100.00%

Publisher:

Abstract:

Through the processes of the biological pump, carbon is exported to the deep ocean in the form of dissolved and particulate organic matter. There are several ways by which downward export fluxes can be estimated. The great attraction of the 234Th technique is that its fundamental operation allows a downward flux rate to be determined from a single water column profile of thorium, coupled to an estimate of the POC/234Th ratio in sinking matter. We present a database of 723 estimates of organic carbon export from the surface ocean derived from the 234Th technique. Data were collected from tables in papers published between 1985 and 2013 only. We also present sampling dates, publication dates and sampling areas. Most of the open-ocean Longhurst provinces are represented by several measurements; however, the western Pacific, the Atlantic Arctic, the South Pacific and the South Indian Ocean are not well represented. There is a variety of integration depths, ranging from the surface to 220 m. Globally, the fluxes ranged from -22 to 125 mmol C/m²/d. We believe that this database is important for providing a new global estimate of the magnitude of the biological carbon pump.
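The single-profile flux estimate described above rests on the 234Th activity balance. In its simplest steady-state form, neglecting advection and diffusion (a common simplification in this literature, though individual studies in the database may use more complete models), the 234Th export flux integrated to depth z, and the POC flux derived from it, are:

```latex
% Steady-state 234Th deficit integrated over the upper water column:
P_{\mathrm{Th}} = \lambda_{\mathrm{Th}} \int_{0}^{z} \left( A_{\mathrm{U}} - A_{\mathrm{Th}} \right) \, dz'
% where A_U and A_Th are the 238U and total 234Th activities, and
% \lambda_Th is the 234Th decay constant (ln 2 / 24.1 d ≈ 0.0288 d⁻¹).

% The organic carbon export flux then follows from the ratio measured
% on sinking particles:
F_{\mathrm{POC}} = P_{\mathrm{Th}} \cdot \left( \mathrm{POC} / {}^{234}\mathrm{Th} \right)_{\mathrm{sinking}}
```

The deficit of 234Th relative to its parent 238U in the upper water column measures particle export, and the POC/234Th ratio converts that thorium flux into the carbon fluxes tabulated in the database.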

Relevance:

100.00%

Publisher:

Abstract:

Changes in surface water hydrography in the Southern Ocean (eastern Atlantic sector) could be reconstructed on the basis of isotope-geochemical and micropaleontological studies. A total of 75 high-quality multicorer sediment surface samples from the southern South Atlantic Ocean and three Quaternary sediment cores, taken on a meridional transect across the Antarctic Circumpolar Current, have been investigated. The stable oxygen isotope compositions of 24 foraminiferal species and morphotypes were compared to the near-surface hydrography. The different foraminifera have been divided into four groups living at different depths in the upper water column. The δ18O differences between shallow-living (e.g. G. bulloides, N. pachyderma) and deeper-dwelling (e.g. G. inflata) species reflect the measured temperature gradient of the upper 250 m of the water column. Thus, the δ18O difference between shallow-living and deeper-living foraminifera can be used as an indicator of the vertical temperature gradient in the surface water of the Antarctic Circumpolar Current that is independent of ice volume. All planktonic foraminifera in the surface sediment samples have been counted, and 27 species and morphotypes were selected to form a reference data set for statistical purposes. Using R- and Q-mode principal component analysis, these planktonic foraminifera have been divided into four and five assemblages, respectively. The geographic distribution of these assemblages is mainly linked to the temperature of sea-surface waters. The five assemblages (factors) of the Q-mode principal component analysis account for 97.1% of the variance of the original data. Following the transfer function technique, a multiple regression between the Q-mode factors and the present-day mean sea-surface environmental parameters resulted in a set of equations.

The new transfer function can be used to estimate past seasonal sea-surface temperatures for paleoassemblages of planktonic foraminifera with a precision of approximately ±1.2°C. This transfer function, F75-27-5, encompasses in particular the environmental conditions in the Atlantic sector of the Antarctic Circumpolar Current. During the last 140,000 years, reconstructed sea-surface temperatures in the present northern Subantarctic Zone (PS2076-1/3) fluctuated with an amplitude of up to 7.5°C in summer and up to 8.5°C in winter. In the present Polar Frontal Zone (PS1754-1), these glacial-interglacial fluctuations show lower temperatures, from 2.5 to 8.5°C in summer and from 1.0 to 5.0°C in winter. Compared to today, calculated oxygen isotope temperature gradients in the present Subantarctic Zone were lower during the last 140,000 years, an indicator of good mixing of the upper water column. In the Polar Frontal Zone, lower oxygen isotope temperature gradients were also found for glacial stages 6, 4 and 2, but temperature gradients similar to today's were found during interglacial stages 5 and 3 and the Holocene, which implies upper water column mixing comparable to the present. Paleosalinities were reconstructed by combining the δ18O data with the transfer function paleotemperatures. Especially in the present Polar Frontal Zone (PS1754-1) and in the Antarctic Zone (PS1768-8), a short-term salinity reduction of up to 4‰ could be detected. This significant reduction in sea-surface salinity indicates the increased influx of meltwater at the beginning of deglaciation in the Southern Hemisphere at the end of the last glacial, approximately 16,500-13,000 years ago. The reconstruction of environmental parameters indicates only small changes in the position of the frontal systems in the eastern sector of the Antarctic Circumpolar Current during the last 140,000 years.

The average positions of the Subtropical Front and the Subantarctic Front shifted by approximately three degrees of latitude between interglacials and glacials; the Antarctic Polar Front shifted by approximately four degrees of latitude. Substantial modifications of this scenario, however, have been interpreted from the reconstruction of cold sea-surface temperatures at 41°S during oxygen isotope stages 16 and 14 to 12. During these times, the Subtropical Front was probably shifted up to seven degrees of latitude northwards.
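The transfer-function step described above (a multiple regression between Q-mode factor loadings and modern sea-surface parameters, then applied to fossil assemblages) can be sketched with synthetic numbers. Everything here is a toy illustration of the Imbrie-Kipp-style procedure; the actual F75-27-5 function is derived from the 75 core-top samples and 27 taxa described in the abstract.

```python
# Sketch of the transfer-function idea: regress modern sea-surface
# temperature (SST) on Q-mode factor loadings of core-top assemblages,
# then apply the fitted equation to a fossil sample.
# All data below are synthetic; only the procedure mirrors the abstract.
import numpy as np

rng = np.random.default_rng(0)
loadings = rng.random((75, 5))                    # 5 factor loadings per core-top sample
true_coef = np.array([12.0, -3.0, 5.0, 1.0, 0.5])
sst = loadings @ true_coef + 8.0                  # synthetic "modern" SSTs (°C)

# Multiple regression with an intercept, as in the transfer-function step.
X = np.column_stack([loadings, np.ones(len(sst))])
coef, *_ = np.linalg.lstsq(X, sst, rcond=None)

# Apply the fitted equation to a "fossil" assemblage's factor loadings.
fossil = rng.random(5)
paleo_sst = np.append(fossil, 1.0) @ coef
```

With real (noisy) core-top data, the residual scatter of this regression is what yields the quoted prediction error of roughly ±1.2°C.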

Relevance:

100.00%

Publisher:

Abstract:

Green cloth binding, with gilt spine titles.

Relevance:

100.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance:

100.00%

Publisher:

Abstract:

National Highway Traffic Safety Administration, Washington, D.C.