906 results for Quantities and measurements
Abstract:
Recurrent wheezing or asthma is a common problem in children that has increased considerably in prevalence in the past few decades. The causes and underlying mechanisms are poorly understood and it is thought that a number of distinct diseases causing similar symptoms are involved. Due to the lack of a biologically founded classification system, children are classified according to their observed disease-related features (symptoms, signs, measurements) into phenotypes. The objectives of this PhD project were a) to develop tools for analysing phenotypic variation of a disease, and b) to examine phenotypic variability of wheezing among children by applying these tools to existing epidemiological data. A combination of graphical methods (multivariate correspondence analysis) and statistical models (latent variable models) was used. In a first phase, a model for discrete variability (latent class model) was applied to data on symptoms and measurements from an epidemiological study to identify distinct phenotypes of wheezing. In a second phase, the modelling framework was expanded to include continuous variability (e.g. along a severity gradient) and combinations of discrete and continuous variability (factor models and factor mixture models). The third phase focused on validating the methods using simulation studies. The main body of this thesis consists of 5 articles (3 published, 1 submitted and 1 to be submitted) including applications, methodological contributions and a review. The main findings and contributions were: 1) The application of a latent class model to epidemiological data (symptoms and physiological measurements) yielded plausible phenotypes of wheezing with distinguishing characteristics that have previously been used as phenotype-defining characteristics. 2) A method was proposed for including responses to conditional questions (e.g. questions on severity or triggers of wheezing that are asked only of children with wheeze) in multivariate modelling. 3) A panel of clinicians was set up to agree on a plausible model for wheezing diseases. The model can be used to generate datasets for testing the modelling approach. 4) A critical review of methods for defining and validating phenotypes of wheeze in children was conducted. 5) The simulation studies showed that a parsimonious parameterisation of the models is required to identify the true underlying structure of the data. The developed approach can deal with some challenges of real-life cohort data such as variables of mixed mode (continuous and categorical), missing data and conditional questions. If carefully applied, the approach can be used to identify whether the underlying phenotypic variation is discrete (classes), continuous (factors) or a combination of these. These methods could help improve the precision of research into causes and mechanisms and contribute to the development of a new classification of wheezing disorders in children and of other diseases which are difficult to classify.
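The distinction drawn above between discrete variability (latent classes) and continuous variability (latent factors) can be illustrated with a short sketch. The code below is a toy illustration only, not the thesis's modelling framework: the synthetic data, the use of scikit-learn's GaussianMixture as a stand-in for a latent class model, and the one-factor FactorAnalysis comparison via BIC are all assumptions made for the example.

```python
# Toy sketch (not the thesis code): compare discrete latent structure
# (a K-class mixture, standing in for a latent class model) with continuous
# latent structure (a one-factor model) on synthetic symptom-score data,
# using BIC to favour the more parsimonious description.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n = 500
true_class = rng.integers(0, 2, size=n)                      # two hidden classes
X = rng.normal(loc=true_class[:, None] * 2.0, size=(n, 4))   # four observed variables

# Discrete model: BIC for mixtures with K = 1, 2, 3 components.
bic_mixture = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
               for k in (1, 2, 3)}

# Continuous model: one common factor; BIC built from the mean log-likelihood.
fa = FactorAnalysis(n_components=1).fit(X)
loglik = fa.score(X) * n
n_params = 3 * X.shape[1]            # loadings + means + noise variances
bic_factor = -2 * loglik + n_params * np.log(n)

print("mixture BIC by K:", {k: round(v, 1) for k, v in bic_mixture.items()})
print("one-factor BIC:  ", round(bic_factor, 1))
```

In this contrived example the two-class mixture would be expected to attain the lowest BIC, mirroring the abstract's point that a parsimonious parameterisation is needed to recover the true underlying structure.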
Abstract:
Today, there is little knowledge of the attitude state of decommissioned intact objects in Earth orbit. Observational means have advanced in the past years, but are still limited with respect to an accurate estimate of motion vector orientations and magnitude. Especially for the preparation of Active Debris Removal (ADR) missions as planned by ESA's Clean Space initiative or contingency scenarios for ESA spacecraft like ENVISAT, such knowledge is needed. ESA's "Debris Attitude Motion Measurements and Modelling" project (ESA Contract No. 40000112447), led by the Astronomical Institute of the University of Bern (AIUB), addresses this problem. The goal of the project is to achieve a good understanding of the attitude evolution and the considerable internal and external effects which occur. To characterize the attitude state of selected targets in LEO and GTO, multiple observation methods are combined. Optical observations are carried out by AIUB, Satellite Laser Ranging (SLR) is performed by the Space Research Institute of the Austrian Academy of Sciences (IWF), and radar measurements and signal level determination are provided by the Fraunhofer Institute for High Frequency Physics and Radar Techniques (FHR). The In-Orbit Tumbling Analysis tool (ιOTA) is a prototype software, currently in development by Hyperschall Technologie Göttingen GmbH (HTG) within the framework of the project. ιOTA will be a highly modular software tool to perform short- (days), medium- (months) and long-term (years) propagation of the orbit and attitude motion (six degrees of freedom) of spacecraft in Earth orbit. The simulation takes into account all relevant acting forces and torques, including aerodynamic drag, solar radiation pressure, gravitational influences of Earth, Sun and Moon, eddy current damping, impulse and momentum transfer from space debris or micrometeoroid impact, as well as the optional definition of particular spacecraft-specific influences like tank sloshing, reaction wheel behaviour, magnetic torquer activity and thruster firing. The purpose of ιOTA is to provide high-accuracy short-term simulations to support observers and potential ADR missions, as well as medium- and long-term simulations to study the significance of the particular internal and external influences on the attitude, especially damping factors and momentum transfer. The simulation will also enable the investigation of the altitude dependency of the particular external influences. ιOTA's post-processing modules will generate synthetic measurements for observers and for software validation. The validation of the software will be done by cross-calibration with observations and measurements acquired by the project partners.
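For readers unfamiliar with six-degree-of-freedom attitude propagation, the rotational part of such a simulation reduces, in its simplest form, to integrating Euler's rigid-body equations with the acting torques. The sketch below is a generic illustration of that core step, not ιOTA itself; the inertia values, the crude eddy-current-like damping term, and the integration settings are assumptions chosen only for the example.

```python
# Minimal attitude-rate propagation sketch (not ιOTA): Euler's rigid-body
# equations I*dw/dt = -w x (I*w) + T, with a simple damping torque standing in
# for eddy-current damping. All numerical values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

I = np.diag([3000.0, 4500.0, 5200.0])     # assumed principal inertias [kg m^2]
I_inv = np.linalg.inv(I)
k_damp = 5.0e-3                           # assumed damping coefficient [N m s]

def euler_rhs(t, w):
    torque = -k_damp * w                  # crude eddy-current-like damping
    return I_inv @ (np.cross(-w, I @ w) + torque)

w0 = np.array([0.02, -0.01, 0.05])        # initial body rates [rad/s]
sol = solve_ivp(euler_rhs, (0.0, 86400.0), w0, max_step=60.0)
print("body rates after one day [rad/s]:", np.round(sol.y[:, -1], 5))
```

A full tool would add orbit propagation, attitude kinematics (e.g. quaternions) and the environmental force and torque models listed in the abstract; the point here is only the structure of the dynamics that those models feed into.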
Abstract:
In this work we study the Zeeman effect on stratospheric O₂ using ground-based microwave radiometer measurements. The interaction of the Earth's magnetic field with the oxygen dipole leads to a splitting of O₂ energy states, which polarizes the emission spectra. A special campaign was carried out in order to measure this effect in the oxygen emission line centered at 53.07 GHz. Both a fixed and a rotating mirror were incorporated into the TEMPERA (TEMPERature RAdiometer) in order to be able to measure under different observational angles. This new configuration allowed us to change the angle between the observational path and the Earth's magnetic field direction. Moreover, a high-resolution spectrometer (1 kHz) was used in order to measure for the first time the polarization state of the radiation due to the Zeeman effect in the main isotopologue of oxygen from ground-based microwave measurements. The measured spectra showed a clear polarized signature when the observational angles were changed, evidencing the Zeeman effect in the oxygen molecule. In addition, simulations carried out with the Atmospheric Radiative Transfer Simulator (ARTS) allowed us to verify the microwave measurements, showing a very good agreement between model and measurements. The results suggest some interesting new aspects for research of the upper atmosphere.
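As textbook background (standard relations, not results from the paper): the energy of a magnetic sublevel shifts in proportion to the field, and the corresponding line component is displaced in frequency by

\[
\Delta E = g\,\mu_B\,B\,M
\qquad\Rightarrow\qquad
\Delta\nu = \frac{\mu_B B}{h}\,\bigl(g' M' - g'' M''\bigr),
\qquad \frac{\mu_B}{h} \approx 1.40\ \mathrm{MHz\,G^{-1}}.
\]

With geomagnetic field strengths of roughly 0.2-0.65 G, the Zeeman components of the 53.07 GHz line are therefore separated by only of order 1 MHz, which is consistent with the use of the kHz-resolution spectrometer mentioned above to resolve the polarized splitting.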
Abstract:
Serial quantification of BCR-ABL1 mRNA is an important therapeutic indicator in chronic myeloid leukaemia, but there is substantial variation in the results reported by different laboratories. To improve comparability, an internationally accepted plasmid certified reference material (CRM) was developed according to ISO Guide 34:2009. Fragments of BCR-ABL1 (e14a2 mRNA fusion), BCR and GUSB transcripts were amplified and cloned into pUC18 to yield plasmid pIRMM0099. Six different linearised plasmid solutions were produced with the following copy number concentrations, assigned by digital PCR, and expanded uncertainties: 1.08±0.13 × 10⁶, 1.08±0.11 × 10⁵, 1.03±0.10 × 10⁴, 1.02±0.09 × 10³, 1.04±0.10 × 10² and 10.0±1.5 copies/μl. The certification of the material for the number of specific DNA fragments per plasmid, the copy number concentration of the plasmid solutions and the assessment of inter-unit heterogeneity and stability were performed according to ISO Guide 35:2006. Two suitability studies performed by 63 BCR-ABL1 testing laboratories demonstrated that this set of 6 plasmid CRMs can help to standardise the number of measured transcripts of e14a2 BCR-ABL1 and three control genes (ABL1, BCR and GUSB). The set of six plasmid CRMs is distributed worldwide by the Institute for Reference Materials and Measurements (Belgium) and its authorised distributors (https://ec.europa.eu/jrc/en/reference-materials/catalogue/; CRM code ERM-AD623a-f).
Abstract:
Clinical oncologists and cancer researchers benefit from information on the vascularization or non-vascularization of solid tumors because of blood flow's influence on three popular treatment types: hyperthermia therapy, radiotherapy, and chemotherapy. The objective of this research is the development of a clinically useful tumor blood flow measurement technique. The designed technique is sensitive, has good spatial resolution, is non-invasive, and presents no risk to the patient beyond his usual treatment (measurements will be subsequent only to normal patient treatment). Tumor blood flow was determined by measuring the washout of positron-emitting isotopes created through neutron therapy treatment. In order to do this, several technical and scientific questions were addressed first. These questions were: (1) What isotopes are created in tumor tissue when it is irradiated in a neutron therapy beam, and how much of each isotope is expected? (2) What are the chemical states of the isotopes that are potentially useful for blood flow measurements, and will those chemical states allow these or other isotopes to be washed out of the tumor? (3) How should isotope washout by blood flow be modeled in order to most effectively use the data? These questions have been answered through both theoretical calculation and measurement. The first question was answered through the measurement of macroscopic cross sections for the predominant nuclear reactions in the body. These results correlate well with an independent mathematical prediction of tissue activation and with measurements of mouse spleen neutron activation. The second question was addressed by performing cell suspension and protein precipitation techniques on neutron-activated mouse spleens. The third and final question was answered by using first physical principles to develop a model mimicking the blood flow system and measurement technique. In a final set of experiments, the above were applied to flow models and animals. The ultimate aim of this project is to apply its methodology to neutron therapy patients.
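The modelling question raised above (question 3) is often approached, in its simplest form, with a single-compartment model in which the measured activity declines through both physical decay and flow-mediated clearance, so that blood flow appears as an excess decay rate over the known physical decay constant. The sketch below only illustrates that generic idea and is not the model developed in this dissertation; the isotope choice and parameter values are assumptions.

```python
# Generic single-compartment washout sketch (not the dissertation's model):
# observed activity decays with the physical decay constant plus a blood-flow
# clearance term, so flow can in principle be estimated from the excess of the
# observed decay rate over the known physical rate.
import numpy as np

half_life_s = 122.0                    # e.g. O-15, a positron emitter (~2.04 min)
lam_phys = np.log(2) / half_life_s     # physical decay constant [1/s]
k_flow = 0.002                         # assumed flow clearance rate [1/s]

def activity(t, A0=1.0):
    """Remaining activity at time t [s] under decay plus washout."""
    return A0 * np.exp(-(lam_phys + k_flow) * t)

t = np.arange(0.0, 601.0, 120.0)
print(np.round(activity(t), 4))
```

Fitting the observed washout curve and subtracting the known physical decay constant leaves the flow-dependent rate, which is the quantity of clinical interest.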
Abstract:
A Payment Cost Minimization (PCM) auction has been proposed as an alternative to the Offer Cost Minimization (OCM) auction for use in wholesale electric power markets, with the intention of lowering the procurement cost of electricity. Efficiency concerns about this proposal have relied on the assumption of true production cost revelation. Using an experimental approach, I compare the two auctions, strictly controlling for the level of unilateral market power. A specific feature of these complex-offer auctions is that the sellers submit not only the quantities and the minimum prices at which they are willing to sell, but also the start-up fees that are designed to reimburse the fixed start-up costs of the generation plants. I find that both auctions result in start-up fees that are significantly higher than the start-up costs. Overall, the two auctions perform similarly in terms of procurement cost and efficiency. Surprisingly, I do not find a substantial difference between the less-market-power and more-market-power designs. Both designs result in similar inefficiencies and equally higher procurement costs relative to the competitive prediction. The PCM auction tends to have lower price volatility than the OCM auction when market power is minimal, but this property vanishes in the designs with market power. These findings lead me to conclude that neither the PCM nor the OCM auction belongs to the class of truth-revealing mechanisms, and that neither easily elicits competitive behavior.
Abstract:
In this paper we consider the case for assigning tax revenues to Scotland, by which we mean that taxes levied on Scottish tax bases should be returned to the Scottish budget. The budget, however, would continue to be supplemented by transfers from the Westminster budget. This arrangement differs from the current situation whereby public spending is largely financed by a bloc grant from Westminster. Our suggestion falls short of full fiscal federalism for Scotland - meaning an arrangement in which Scotland would have control over the choice of tax base and of tax rates, and fiscal transfers from Westminster would be minimal. We use propositions drawn from the theory of fiscal federalism to argue for a smaller vertical imbalance between taxes retained in Scotland and public spending in Scotland. A closer matching of spending with taxes would better signal to beneficiaries the true costs of public spending in terms of taxes raised. It would also create more complete incentives for politicians to provide public goods and services in quantities and at qualities that voters are actually willing to pay for. Under the current bloc grant system, the marginal tax cost of spending does not enter into political agents' calculations, as spending is out of a fixed total budget. Moreover, the Scottish electorate is hindered in signaling its desire for local public goods and services since the size of the total budget is determined by a rigid formula set by Westminster. At the present time we reject proposals for full fiscal federalism because, in sharply reducing vertical imbalance in the Scottish budget, it is likely to worsen horizontal balance between Scotland and the other UK regions. Horizontal balance occurs where similarly situated regions enjoy the same per capita level of public goods and services at the same per capita tax cost. The complete removal of the bloc grant under full fiscal federalism would remove the mechanism that currently promotes horizontal equity in the UK. Variability in own-source tax revenues creates other problems for full fiscal federalism. Taxes derived from North Sea oil would constitute a large proportion of Scottish taxes, but these are known to be volatile in the face of variable oil prices and the pound-dollar exchange rate. At the present time variability in oil tax revenue is absorbed by Westminster, and Scotland is insulated through the bloc grant. This risk-sharing mechanism would be lost with full fiscal federalism. It is true that Scotland could turn to financial markets to tide itself over oil tax revenue downturns, but as a much smaller and less diversified financial entity than the UK as a whole it would probably have to borrow on less favorable terms than Westminster can. Scotland would have to bear this extra cost itself. Also, with full fiscal federalism it is difficult to see how the Scottish budget could be used as a macroeconomic stabilizer. At present, tax revenue downturns in Scotland - together with the steady bloc grant - are absorbed through an increase in vertical imbalance. This acts as an automatic stabilizer for the Scottish economy. No such mechanism would exist under full fiscal federalism. The borrowing alternative would still exist, but on less favorable terms - as with borrowing to finance oil tax shortfalls.
Abstract:
In my recent experimental research on wholesale electricity auctions, I discovered that the complex structure of the offers leaves a lot of room for strategic behavior, which consequently leads to anti-competitive and inefficient outcomes in the market. A specific feature of these complex-offer auctions is that the sellers submit not only the quantities and the minimum prices at which they are willing to sell, but also the start-up fees that are designed to reimburse the fixed start-up costs of the generation plants. In this paper, using the experimental method, I compare the performance of two complex-offer auctions (COAs) against the performance of a simple-offer auction (SOA), in which the sellers have to recover all their generation costs --- fixed and variable --- through a uniform market-clearing price. I find that the SOA significantly reduces consumer prices and lowers price volatility. It mitigates the anti-competitive effects that are present in the COAs and achieves allocative efficiency more quickly.
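For readers unfamiliar with the simple-offer format, a uniform-price clearing rule can be sketched in a few lines. This is a generic illustration of merit-order dispatch, not the experimental market software used in the study, and the offer data are invented.

```python
# Toy uniform-price market clearing (merit order), illustrating the simple-offer
# auction format: sellers offer (quantity, price); offers are dispatched from
# cheapest upward until demand is met, and all dispatched sellers are paid the
# price of the marginal (last accepted) offer. Offer data are invented.
def clear_uniform_price(offers, demand):
    """offers: list of (quantity, price); returns (clearing_price, dispatch)."""
    dispatch, remaining = [], demand
    clearing_price = None
    for qty, price in sorted(offers, key=lambda o: o[1]):
        if remaining <= 0:
            break
        take = min(qty, remaining)
        dispatch.append((take, price))
        clearing_price = price            # price of the marginal offer
        remaining -= take
    return clearing_price, dispatch

offers = [(100, 20.0), (80, 35.0), (120, 28.0), (60, 50.0)]
price, dispatch = clear_uniform_price(offers, demand=250)
print("clearing price:", price)           # 35.0 in this example
print("dispatched (qty, offer price):", dispatch)
```

The complex-offer formats add start-up fees on top of this single price dimension, which is the extra strategic margin the abstract identifies as the source of anti-competitive behavior.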
Abstract:
Each year an estimated 180,000 people in the United States (U.S.) die as a result of medication errors, now considered a major public health problem. If a patient cannot correctly act on information related to medication use, or "Medication Literacy", there is an increased potential for error. Medication use issues are unique on the US-Mexico border because they include high rates of use of herbal products and of medication products from Mexico, as well as issues related to the preferred language (English or Spanish) of the patient. To evaluate medication literacy in a US-Mexico border community, this retrospective study evaluates 180 subjects, representing four diverse economic segments of a metropolitan US-Mexico border community, who have taken a Medication Literacy Assessment. The assessment tool was created to understand how patients interpret medication information for prescription, over-the-counter, herbal, and Mexican medication product use, and how they problem-solve medication questions. The Medication Literacy Assessment tool specifically assesses document literacy (e.g., prescription labels), prose literacy (e.g., patient leaflets) and numeracy (e.g., calculations and measurements), as well as qualitative data related to medication use practices. The main hypothesis of this study is that the ability to interpret and use medications will vary based on education, language (Spanish or English), and recruitment site (economically diverse communities). The results will provide information to better characterize medication use in a primarily Hispanic population on the US-Mexico border and may be used to influence policy decisions regarding prescription and over-the-counter product information.
Abstract:
The Neogene biostratigraphy presented here is based on the study of 230 samples through 737 m of pelagic sediment in Hole 806B. Sediment accumulation is interrupted only once in the uppermost lower Miocene (Zone N6), apparently coincident with a widespread deep-sea hiatus. Preservation of planktonic foraminifers through the section ranges from good to moderately poor. One hundred and ten species of planktonic foraminifers were identified; taxonomic notes on most species are included. All of the standard low-latitude Neogene foraminiferal zones are delineated, with the exceptions of Zones N8 and N9 because of a high first occurrence of Orbulina, and Zones N18 and N19 because of a high first occurrence of Sphaeroidinella dehiscens. Good agreement exists between the published account of the variation in planktonic foraminiferal species richness and the rates of diversification and turnover, and measurements of these evolutionary indexes in the record of Hole 806B. The global pattern of change in tropical/transitional species richness is paralleled in Hole 806B, with departures caused by either ecological conditions peculiar to the western equatorial Pacific or by inexactness in the estimation of million-year intervals in Hole 806B. Temporal changes in the relative abundance of taxa in the sediment assemblages, considered in light of their depth habitats, reveal a detailed picture of historical change in the structure of the upper water column over the Ontong Java Plateau. The dominance of surface dwellers (Paragloborotalia kugleri, P. mayeri, Dentoglobigerina altispira, Globigerinita glutinata, and Globigerinoides spp.) throughout the lower and middle Miocene is replaced by a more equitable distribution of surface (D. altispira and Globigerinoides spp.), intermediate (Globorotalia menardii plexus), and deep (Streptochilus spp.) dwellers in the late Miocene, following the closing of the Indo-Pacific Seaway and the initiation of large-scale glaciation in the Antarctic. The shoaling of the thermocline along the equator engendered by these climatic and tectonic events persisted through the Pliocene, when initial increases in the abundance of a new set of shallow, intermediate, and deep dwelling species of planktonic foraminifers coincide with the closing of the Panamanian Seaway.
Abstract:
At the end of 2001 the cultivar AZ-1 of Pennisetum clandestinum Hochst. ex Chiov. (kikuyu grass) was introduced to the Argentine market for use on sports fields or for ornamental purposes. To characterize this commercial variety in the Paraná department (Entre Ríos province, Argentina), its performance and suitability as an ornamental and/or sports turf were evaluated under quality-turf maintenance conditions and two drainage regimes. Cover, color, texture, internode distance, stolon thickness and dormancy period were evaluated. The trial was carried out in Oro Verde, Paraná department, on a Mollisol soil and consisted of two treatments, with and without drainage, with four replicates each. Plot size was 2.5 by 5.0 m. The experimental design was paired plots, and measurements were taken from June to November 2005. The local climate is characterized by a mean annual temperature of 18.1°C and an isohygrous regime with 947.6 mm of annual precipitation. The cultivar proved suitable for turf under the characteristics and management conditions evaluated, with no differences between the treatments with and without drainage. It showed rapid establishment, high levels of cover, light and uniform color, medium texture and thick to medium stolons over time. In winter it lost neither cover nor color. It was characterized as a turf not suitable for high-demand sports fields, but recommendable for clubs with low budgets and low maintenance.
Abstract:
The studies described here are based mainly on sedimentary material collected during the "Indian Ocean Expedition" of the German research vessel "Meteor" in the region of the Indian-Pakistan continental margin in February and March 1965. Moreover, samples from the mouth of the Indus River were available, which were collected by the Pakistan fishing vessel "Machhera" in March 1965. Altogether, the following quantities of sedimentary material were collected: 59.73 m of piston cores, 54.52 m of gravity cores, 33 box grab samples and 68 bottom grab samples. Component analyses of the coarse fraction were made of these samples and the sedimentary fabric was examined. Moreover, the CaCO3 and Corg contents were discussed. From these investigations the following history of sedimentation can be derived: Recent sedimentation on the shelf is mainly characterized by hydrodynamic processes and terrigenous supply of material. In the shallow water, wave action and currents running parallel to the coast cause repeated reworking, which induces a sorting of the grains and layering of the sediments as well as a lack of bioturbation. The sedimentation rate is very high here. From the coastline down to approximately 50 m the sediment becomes progressively finer and the conditions of deposition become less turbulent. On the outer shelf the sediment is again considerably coarser. It contains many relicts of planktonic organisms and shows traces of burrowing. Indications of redeposition are nearly missing; a considerable part of the fine fraction of the sediments is, however, whirled up and carried away. In wide areas of the outer shelf this stirring has reached such a degree that recent deposits are nearly completely missing. Here, coarse relict sands rich in ooids are exposed, which were formed in very shallow, stirred water during the time when the sea reached its lowest level, i.e. at the turn of the Pleistocene to the Holocene. Below the relict sand, white, very fine-grained aragonite mud was found at one location (core 228). This aragonite mud was obviously deposited in very calm water at somewhat greater depth, possibly behind a reef barrier. Biochemical carbonate precipitation played an important part in the formation of the relict sands and aragonite muds. In postglacial times the relict sands were exposed for long periods to violent wave action and to areal erosion. At present they are gradually being covered by recent sediments proceeding from the sides. On the continental margin beyond the shelf edge the distribution of the sediments is to a considerable extent determined by the morphology of the sea bottom. The material originating from the continent and/or the shelf is transported less by the action of the water than by the force of gravity. Within the range of the uppermost part of the continental slope recent sedimentation reaches its maximum. Here the fine material is deposited which has been whirled up in the zone of the relict sands. A laminated fine-grained sediment is formed here due to the very high sedimentation rate as well as to the extremely low O2 content in the bottom water, which prevents life on the bottom of the sea and thus also impedes bioturbation. The lamination probably reflects annual variations in deposition and can be attributed to the rhythm of the monsoon with its effects on the water and the weather conditions. In the lower part of the upper continental slope sediments are found which show, in varying intensity, intercalations of fine material (silt) from the shelf in large sections of the core.
These fine intercalations of allochthonous material are closely related to the autochthonous normal sediment, so that a great number of small individual depositional processes can be inferred. In general the intercalations are missing in the uppermost part of the cores; in the lower part they occur in varying quantities, and they reach their maximum frequency in the upper part of the lower core section. The deposits described here were designated turbid layer sediments, since they derive their material from turbid layers that transport components whirled up from the shelf to the continental slope. Turbidites are missing in this zone. Since the whole upper continental slope shows a low oxygen content in the bottom water, the structure of the turbid layer sediments is more or less preserved. The lenticular-phacoidal fine structure does not, however, reflect annual rhythms, but sporadic individual events, such as tsunamis. At the lower part of the continental slope and on the continental rise the majority of the turbidites was deposited, which, during glacial times and particularly at the beginning of the postglacial period, transported material from the zone of relict sands. The Laccadive Ridge represented a natural obstacle to the transport of suspended sediments into the deep sea. Core SIC-181 from the Arabian Basin shows some intercalations of turbidites; their material, however, does not originate from the Indian Shelf, but from the Laccadive Ridge. Within the range of the Indus Cone it is surprising that distinct turbidites are nearly completely missing; on the other hand, turbid layer sediments are found. The sea bottom still shows a slight slope here, so that the turbidites funneled through the Canyon of the Swatch probably rush down to greater water depths. Due to the particularly large supply of suspended material by the Indus River, the turbid layer sediments extend farther than in other regions. In general the terrigenous components are concentrated on the Indus Cone. The only sliding mass discovered (core 186) is located within the range of the lower continental slope. It can be assumed that this was set in motion during the Holocene. During the period of time discussed here the following development of the kind and intensity of the deposition of allochthonous material can be observed on the Indian-Pakistan continental margin: At the time of the lowest sea level the shelf was only very narrow, and the zone in which bottom currents were able to stir up material by oscillating motion was considerably confined. The rivers flowed into the sea near the edge of the shelf. For this reason the percentage of terrigenous material, quartz and mica is higher in the lower part of many cores (e.g. cores 210 and 219) than in the upper part. The transition from glacial to postglacial times caused a series of environmental changes. Among them the rise of the sea level (approximately 150 m in the area of investigation) had the most important influence on the sedimentation process. In connection with this event many river valleys became canyons, which sucked sedimentary material away from the shelf and transported it in the form of turbidites into the deep sea. During the rise of the sea level a situation can be expected in which a maximum area of the comparatively plane shelf was exposed to wave action. During this time the stirring up of sediments and the formation of turbid layers reached a maximum.
Accordingly, the formation of turbidites and turbid layer sediments was most frequent at the same time. This happened in general in the older postglacial period. The present-day high water level results in a reduced supply of sediments into the canyons. The stirring up of sediments from the shelf by wave action is restricted to the finest material. The absence of shelf material in the uppermost core sections can thus be explained. The laminated muds reflect these calm sedimentation conditions as well. In the southwestern part of the area of investigation fine volcanic glass was blown in during the Pleistocene, probably from the southeast. It has thus become possible to correlate cores 181, 182 and 202. Eolian dust from the Indian subcontinent probably represents an important component of the deep-sea sediments. The chemistry of the bottom water as well as of the pore water has a considerable influence on the development of the sediments. Of particular importance in this connection is a layer with a minimum content of oxygen in the sea water (200-1500 m), which today touches the upper part of the continental slope. Above and below this oxygen minimum layer somewhat higher O2 values are observed at the sea bottom. During the Pleistocene the oxygen minimum layer was obviously located at greater depth, as is indicated by the facies of laminated mud occurring in the lower part of core 219. The type of bioturbation is mainly determined by the chemistry. Moreover, the chemistry is responsible for a considerable selective dissolution, either complete or partial, of the sedimentary components. Within the range of the oxygen minimum layer an alkaline milieu develops at the bottom. This causes a complete or partial dissolution of the siliceous organisms. Here, bioturbation is in general completely missing; sometimes small pyrite-filled burrowing tracks are found. In the areas rich in O2, high pH values result in a partial dissolution of the calcareous shells. Large, non-pyritized burrowing tracks characterize the type of bioturbation in this environment. A study of the "lebensspuren" in the cores supports the assumption that, particularly within the region of the Laccadive Basin, the oxygen content in the bottom sediments was lower during the Pleistocene than during the Holocene. This may be attributed to a high sedimentation rate and to a lower O2 content of the bottom water. The composition of the allochthonous sedimentary components, detritus and/or volcanic glass, may locally change the chemistry to a considerable extent for a certain time; under such special circumstances the type of bioturbation and the state of preservation of the components may differ from those of the normal sediment.
Abstract:
Ocean acidification causes corals to calcify at reduced rates, but current understanding of the underlying processes is limited. Here, we conduct a mechanistic study into how seawater acidification alters skeletal growth of the coral Stylophora pistillata. Reductions in colony calcification rates are manifested as increases in skeletal porosity at lower pH, while linear extension of skeletons remains unchanged. Inspection of the microstructure of skeletons and measurements of pH at the site of calcification indicate that dissolution is not responsible for changes in skeletal porosity. Instead, changes occur by enlargement of corallite-calyxes and thinning of associated skeletal elements, constituting a modification in skeleton architecture. We also detect increases in the organic matrix protein content of skeletons formed under lower pH. Overall, our study reveals that seawater acidification not only causes decreases in calcification, but can also cause morphological change of the coral skeleton to a more porous and potentially fragile phenotype.
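The decoupling of calcification and linear extension reported here follows from a simple mass-balance relation: calcification per unit area is roughly linear extension times skeletal bulk density, and bulk density falls as porosity rises. The numbers below are illustrative only, not values from the study, and the fixed extension rate is an assumption for the example.

```python
# Illustrative arithmetic (not data from the study): if linear extension is
# unchanged but skeletal porosity rises, bulk density and hence calcification
# per unit area fall proportionally.
rho_aragonite = 2.93          # aragonite mineral density [g/cm^3]
extension = 1.0               # assumed linear extension [cm/yr], held constant

for porosity in (0.30, 0.40, 0.50):            # assumed porosity fractions
    bulk_density = rho_aragonite * (1 - porosity)
    calcification = extension * bulk_density    # [g/cm^2/yr]
    print(f"porosity {porosity:.0%}: calcification ~ {calcification:.2f} g cm^-2 yr^-1")
```

This is why a colony can keep extending at the same rate while depositing less carbonate overall, producing the more porous phenotype described in the abstract.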
Abstract:
We present Holocene and last glacial maximum (LGM) oxygen and carbon isotope measurements on Planulina wuellerstorfi in six southeast Pacific cores. Sedimentation rates are low in this part of the ocean, and measurements were made on individual foraminiferal shells in order to identify the Holocene and glacial individuals on the basis of their extreme δ18O. The new δ13C data were combined with previous P. wuellerstorfi data for interpretation of global thermohaline circulation. Data from the Southern Ocean were examined closely for regional coherency, and a few anomalous δ13C values suspected of having a productivity overprint were removed. The resulting global δ13C distributions and gradients indicate that the deep water circulation was similar during the Holocene and LGM. This interpretation brings the δ13C data into better agreement with Cd/Ca data and marks a sharp contrast with a widely held view, based on δ13C measurements, that the glacial Southern Ocean was the terminus of the thermohaline circulation. The proposed presence of glacial North Atlantic Deep Water does not necessarily contradict the postulated presence of Glacial North Atlantic Intermediate Water.
Abstract:
Climate warming is expected to differentially affect CO2 exchange of the diverse ecosystems in the Arctic. Quantifying responses of CO2 exchange to warming in these ecosystems will require coordinated experimentation using standard temperature manipulations and measurements. Here, we used the International Tundra Experiment (ITEX) standard warming treatment to determine CO2 flux responses to growing-season warming for ecosystems spanning natural temperature and moisture ranges across the Arctic biome. We used the four North American Arctic ITEX sites (Toolik Lake, Atqasuk, and Barrow [USA] and Alexandra Fiord [Canada]) that span 10° of latitude. At each site, we investigated the CO2 responses to warming in both dry and wet or moist ecosystems. Net ecosystem CO2 exchange (NEE), ecosystem respiration (ER), and gross ecosystem photosynthesis (GEP) were assessed using chamber techniques conducted over 24-h periods sampled regularly throughout the summers of two years at all sites. At Toolik Lake, warming increased net CO2 losses in both moist and dry ecosystems. In contrast, at Atqasuk and Barrow, warming increased net CO2 uptake in wet ecosystems but increased losses from dry ecosystems. At Alexandra Fiord, warming improved net carbon uptake in the moist ecosystem in both years, but in the wet and dry ecosystems uptake increased in one year and decreased the other. Warming generally increased ER, with the largest increases in dry ecosystems. In wet ecosystems, high soil moisture limited increases in respiration relative to increases in photosynthesis. Warming generally increased GEP, with the notable exception of the Toolik Lake moist ecosystem, where warming unexpectedly decreased GEP >25%. Overall, the respiration response determined the effect of warming on ecosystem CO2 balance. Our results provide the first multiple-site comparison of arctic tundra CO2 flux responses to standard warming treatments across a large climate gradient. These results indicate that (1) dry tundra may be initially the most responsive ecosystems to climate warming by virtue of strong increases in ER, (2) moist and wet tundra responses are dampened by higher water tables and soil water contents, and (3) both GEP and ER are responsive to climate warming, but the magnitudes and directions are ecosystem-dependent.
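As a point of reference for the three fluxes named above, chamber studies commonly relate them through the identity (sign conventions vary between papers and the one used here is an assumption, not necessarily the authors'):

\[
\mathrm{NEE} = \mathrm{ER} - \mathrm{GEP},
\]

so that, for example, a plot with ER = 3.0 and GEP = 2.5 μmol CO2 m⁻² s⁻¹ has NEE = 0.5, a net loss of CO2 to the atmosphere. This bookkeeping is why the overall CO2 balance response to warming can be attributed to whichever of ER or GEP responded more strongly, as the abstract concludes.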