950 results for Decomposition of Ranked Models


Relevance:

100.00%

Publisher:

Abstract:

Following the introduction of the Basel II capital accord, banks and lenders in Hungary also began to build their own internal rating systems, whose maintenance and development are a continuing task. The author explores whether the predictive power of bankruptcy-forecasting models can be increased by traditional mathematical-statistical methods in such a way that the models also incorporate the degree of change in the financial indicators over time. The empirical findings suggest that the temporal development of the financial indicators of Hungarian firms carries important information about their future ability to pay, since using such indicators significantly increases the predictive power of bankruptcy models. The author also examines whether correcting extremely high or low observation values before modelling improves the classification performance of the models.
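The modelling idea described above can be sketched with hypothetical data: a logistic-regression bankruptcy classifier fitted once on the level of a financial ratio alone, and once on the level plus its year-over-year change. All variable names, coefficients and the data-generating process below are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
ratio_prev = rng.normal(1.5, 0.5, n)                 # liquidity ratio, year t-1 (hypothetical)
ratio_now = ratio_prev + rng.normal(-0.1, 0.3, n)    # liquidity ratio, year t
change = ratio_now - ratio_prev                      # temporal-change feature

# Assumed data-generating process: failure risk depends on the level AND the deterioration
logit = 1.0 - 1.5 * ratio_now - 2.0 * change
failed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_level = ratio_now.reshape(-1, 1)
X_both = np.column_stack([ratio_now, change])
auc_level = roc_auc_score(failed, LogisticRegression().fit(X_level, failed).predict_proba(X_level)[:, 1])
auc_both = roc_auc_score(failed, LogisticRegression().fit(X_both, failed).predict_proba(X_both)[:, 1])
print(f"AUC levels only: {auc_level:.3f}  levels + change: {auc_both:.3f}")
```

When the underlying risk truly depends on deterioration, the model with the change feature separates failing from healthy firms better, which is the effect the abstract reports.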

Relevance:

100.00%

Publisher:

Abstract:

Ensuring the correctness of software has been a major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received considerable attention in recent years, with several methods, techniques and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. To this end, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures. The software architectures studied are expressed in SAM. For the formal verification approach, the technique applied was model checking, and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for correctness with respect to temporal properties. For testing, an approach for SAM architectures was defined that includes the evaluation of test cases, based on Petri net testing theory, in the testing process at the design level. Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (the SAM tool) was implemented to support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
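A minimal sketch of the Petri-net semantics underlying SAM's behavioral models (the net and the mutual-exclusion property below are invented for illustration; this is not the SAM tool's API): a transition fires when its input places hold enough tokens, and exhaustive exploration of the reachable markings is the kind of analysis that a model checker such as Spin automates.

```python
def enabled(marking, pre):
    # A transition is enabled when every input place holds enough tokens
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    # Firing consumes tokens from input places and produces tokens in output places
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m.get(p, 0) - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}   # drop empty places

# Hypothetical net: two processes competing for a shared lock place
transitions = {
    "enter1": ({"idle1": 1, "lock": 1}, {"crit1": 1}),
    "exit1":  ({"crit1": 1}, {"idle1": 1, "lock": 1}),
    "enter2": ({"idle2": 1, "lock": 1}, {"crit2": 1}),
    "exit2":  ({"crit2": 1}, {"idle2": 1, "lock": 1}),
}
m0 = {"idle1": 1, "idle2": 1, "lock": 1}

# Exhaustive exploration of the reachability graph
seen, frontier = set(), [m0]
while frontier:
    m = frontier.pop()
    key = tuple(sorted(m.items()))
    if key in seen:
        continue
    seen.add(key)
    for pre, post in transitions.values():
        if enabled(m, pre):
            frontier.append(fire(m, pre, post))

markings = [dict(k) for k in seen]
# Safety property: both processes must never be critical at once
violations = [m for m in markings if m.get("crit1") and m.get("crit2")]
print(len(markings), "reachable markings;", len(violations), "violations")
```

Spin performs essentially this search on the Promela translation of the SAM model, additionally checking temporal-logic properties along execution paths.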

Relevance:

100.00%

Publisher:

Abstract:

Arsenic has been classified as a Group I carcinogen. It has been ranked number one on the CERCLA priority list of hazardous substances due to its frequency, toxicity, and potential for human exposure. Paradoxically, arsenic has been employed as a successful chemotherapeutic agent for acute promyelocytic leukemia and has found some success in multiple myeloma. Since arsenic toxicity and efficacy are species-dependent, a speciation method based on the complementary use of reverse-phase and cation-exchange chromatography was developed. An inductively coupled plasma mass spectrometer (ICP-MS), as an element-specific detector, and an electrospray ionization mass spectrometer (ESI-MS), as a molecule-specific detector, were employed. Low detection limits in the µg L−1 range on the ICP-MS and the mg L−1 range on the ESI-MS were obtained. The developed methods were validated against each other through the use of a Deming plot. With the developed speciation method, the effects of pH on the stability of As species and of reduced glutathione (GSH) concentration on the formation and stability of arsenic-glutathione complexes were studied. To identify arsenicals in multiple myeloma (MM) cell lines after incubation with arsenic trioxide (ATO) and darinaparsin (DAR), an extraction method based on the use of an ultrasonic probe was developed. Extraction tools and solvents were evaluated, and the effect of GSH concentration on the quantitation of arsenic-glutathione (As-GSH) complexes in MM cell extracts was studied. The developed method was employed for the identification of metabolites in DAR-incubated cell lines, where the effect of extraction pH, DAR incubation concentration, and incubation time on the relative distribution of the As metabolites was assessed. A new arsenic species, dimethylarsinothioyl glutathione (DMMTA(V)-GS), a pentavalent thiolated arsenical, was identified in the cell extracts through the use of liquid chromatography tandem mass spectrometry. The formation of the new metabolite in the extracts was dependent on the decomposition of S-dimethylarsino glutathione (DMA(GS)). These results have major implications in both the medical and toxicological fields of As, because they involve the metabolism of a chemotherapeutic agent and the role sulfur compounds play in this mechanism.
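The cross-validation step mentioned above, a Deming plot, can be sketched as follows. Unlike ordinary least squares, Deming regression allows measurement error in both methods, which is appropriate when neither ICP-MS nor ESI-MS is an error-free reference. The data here are made up; a slope near 1 and intercept near 0 indicate the two methods agree.

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression slope/intercept; lam is the ratio of the y to x error variances."""
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, ym - slope * xm

rng = np.random.default_rng(1)
true = np.linspace(5, 100, 20)                 # hypothetical As concentrations
icpms = true + rng.normal(0, 2, true.size)     # both instruments carry noise
esims = true + rng.normal(0, 2, true.size)
slope, intercept = deming(icpms, esims)
print(f"slope {slope:.2f}, intercept {intercept:.2f}")
```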

Relevance:

100.00%

Publisher:

Abstract:

The effects of nutrient availability and litter quality on litter decomposition were measured in two oligotrophic, phosphorus (P)-limited Florida Everglades estuaries, United States. The two estuaries differ in that one (Shark River estuary) is directly connected to the Gulf of Mexico and receives marine P, while the other (Taylor Slough estuary) does not receive marine P because Florida Bay separates it from the Gulf of Mexico. Decomposition of three macrophytes, Cladium jamaicense, Eleocharis spp., and Juncus roemerianus, was studied using a litter bag technique over 18 mo. Litter was exposed to three treatments: soil surface + macroinvertebrates (=macro), soil surface without macroinvertebrates (=wet), and above the soil and water (=aerial). The third treatment replicated the decomposition of standing dead leaves. Decomposition rates showed that litter exposed to the wet and macro treatments decomposed significantly faster than in the aerial treatment, where atmospheric deposition was the only source of nutrients. Macroinvertebrates had no influence on litter decomposition rates. C. jamaicense decomposed faster at sites with higher P, and Eleocharis spp. decomposed significantly faster at sites with higher nitrogen (N). Initial tissue C:N and C:P molar ratios revealed that the nutrient quality of litter of both Eleocharis spp. and J. roemerianus was higher than that of C. jamaicense, but only Eleocharis spp. decomposed faster than C. jamaicense. C. jamaicense litter tended to immobilize P, while Eleocharis spp. litter showed net remineralization of N and P. A comparison with other estuarine and wetland systems revealed the dependence of litter decomposition on nutrient availability and litter quality. The results from this experiment suggest that Everglades restoration may have an important effect on key ecosystem processes in the estuarine ecotone of this landscape.
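Litter-bag decomposition rates like those compared above are conventionally summarized by the single-exponential model M_t = M_0 · exp(−k·t), with the decay constant k estimated by linear regression on log-transformed mass remaining. The numbers below are illustrative, not from the study.

```python
import numpy as np

months = np.array([0, 2, 4, 8, 12, 18])          # litter-bag retrieval times
mass_pct = np.array([100, 82, 70, 52, 40, 27])   # % of initial dry mass remaining (made up)

# Fit ln(M_t/M_0) = -k t; the slope of the regression line is -k
k = -np.polyfit(months, np.log(mass_pct / 100.0), 1)[0]
halflife = np.log(2) / k
print(f"k = {k:.3f} per month, half-life = {halflife:.1f} months")
```

Comparing k values across treatments (macro, wet, aerial) and sites is then a matter of fitting this model to each litter-bag series.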

Relevance:

100.00%

Publisher:

Abstract:

1. Large pools of dead wood in mangrove forests following disturbances such as hurricanes may influence nutrient fluxes. We hypothesized that decomposition of wood of mangroves from Florida, USA (Avicennia germinans, Laguncularia racemosa and Rhizophora mangle), and the consequent nutrient dynamics, would depend on species, on location in the forest relative to freshwater and marine influences, and on whether the wood was standing, lying on the sediment surface or buried. 2. Wood disks (8–10 cm diameter, 1 cm thick) from each species were set to decompose at sites along the Shark River, either buried in the sediment, on the soil surface or in the air (above both the soil surface and high-tide elevation). 3. A simple exponential model described the decay of wood in the air, and neither species nor site had any effect on the decay coefficient during the first 13 months of decomposition. 4. Over 28 months of decomposition, buried and surface disks decomposed following a two-component model, with labile and refractory components. Avicennia germinans had the largest labile component (18 ± 2% of dry weight), while Laguncularia racemosa had the lowest (10 ± 2%). Labile components decayed at rates of 0.37–23.71% month−1, while refractory components decayed at rates of 0.001–0.033% month−1. Disks decomposing on the soil surface had higher decay rates than buried disks, and disks in both of these treatments decayed faster than disks in the air. All species had similar decay rates for the labile and refractory components, but A. germinans exhibited faster overall decay because of its higher proportion of labile components. 5. Nitrogen content generally increased in buried and surface disks, but there was little change in the N content of disks in the air over the 2-year study. Between 17% and 68% of total phosphorus in wood leached out during the first 2 months of decomposition, with buried disks having the greater losses; P remained constant or increased slightly thereafter. 6.
Newly deposited wood from living trees was a short-term source of N for the ecosystem but, by the end of 2 years, had become a net sink. Wood, however, remained a source of P for the ecosystem. 7. As in other forested ecosystems, coarse woody debris can have a significant impact on carbon and nutrient dynamics in mangrove forests. The prevalence of disturbances, such as hurricanes, that can deposit large amounts of wood on the forest floor accentuates the importance of downed wood in these forests.
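The two-component model named in point 4 has the form M(t) = L·exp(−kL·t) + (100 − L)·exp(−kR·t), with a labile pool L and a refractory remainder. A sketch of fitting it with nonlinear least squares, on synthetic data generated from assumed parameters (the "true" values below are illustrative, loosely inspired by the labile fractions quoted above, not the study's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_pool(t, L, kL, kR):
    # % dry mass remaining: labile pool L plus refractory pool (100 - L)
    return L * np.exp(-kL * t) + (100.0 - L) * np.exp(-kR * t)

t = np.arange(0, 29, 2.0)                       # retrieval times, months
true_L, true_kL, true_kR = 18.0, 0.20, 0.0005   # assumed parameters
rng = np.random.default_rng(2)
mass = two_pool(t, true_L, true_kL, true_kR) + rng.normal(0, 0.5, t.size)

(L, kL, kR), _ = curve_fit(two_pool, t, mass, p0=[15, 0.1, 0.001],
                           bounds=([0, 0, 0], [100, 5, 1]))
print(f"labile pool {L:.1f}%  kL {kL:.3f}/mo  kR {kR:.5f}/mo")
```

The fitted labile fraction and the two rate constants are the quantities compared across species and treatments in the abstract.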

Relevance:

100.00%

Publisher:

Abstract:

Quantitative Structure-Activity Relationship (QSAR) modelling has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which are in turn correlated with molecular descriptors. The primary goals of this thesis are: (1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modelling and analysis; (2) to validate the models by using internal and external cross-validation techniques; and (3) to quantify the model uncertainties through Taylor series expansion and Monte Carlo simulation. One of the most important ways to predict molecular properties such as ELUMO is QSAR analysis. In this study, the number of chlorine atoms (NCl) and the number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. There are typically three approaches used in QSAR model development: (1) Linear or Multi-linear Regression (MLR); (2) Partial Least Squares (PLS); and (3) Principal Component Regression (PCR). In QSAR analysis, a very critical step is model validation, after QSAR models are established and before applying them to toxicity prediction. The DBPs studied include five chemical classes: chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups (i.e., chloro-alkane and aromatic compounds with a nitro- or cyano group) of DBP chemicals to three types of organisms (e.g., fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature. The results show that: (1) QSAR models to predict molecular properties built by MLR, PLS or PCR can be used either to select valid data points or to eliminate outliers; (2) the Leave-One-Out cross-validation procedure by itself is not enough to give a reliable representation of the predictive ability of the QSAR models; however, Leave-Many-Out/K-fold cross-validation and external validation can be applied together to achieve more reliable results; (3) ELUMO is shown to correlate highly with NCl for several classes of DBPs; and (4) according to uncertainty analysis using the Taylor method, the uncertainty of the QSAR models comes mostly from NCl for all DBP classes.
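The MLR branch of the workflow above amounts to regressing a molecular property (here ELUMO) on the descriptors NCl, NC and EHOMO. The descriptor values and coefficients below are hypothetical; a real study would use quantum-chemically computed values.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cl = rng.integers(1, 5, 30).astype(float)   # number of chlorine atoms (hypothetical)
n_c = rng.integers(1, 7, 30).astype(float)    # number of carbon atoms
e_homo = rng.normal(-9.0, 0.5, 30)            # eV

# Assumed linear structure-property relationship plus noise
e_lumo = 0.5 - 0.4 * n_cl + 0.05 * n_c + 0.1 * e_homo + rng.normal(0, 0.05, 30)

# Multi-linear regression by ordinary least squares
X = np.column_stack([np.ones_like(n_cl), n_cl, n_c, e_homo])
beta, *_ = np.linalg.lstsq(X, e_lumo, rcond=None)
print("intercept, b_NCl, b_NC, b_EHOMO =", np.round(beta, 3))
```

The fitted coefficient on NCl dominating the others mirrors finding (3): ELUMO correlates most strongly with the chlorine count.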


Relevance:

100.00%

Publisher:

Abstract:

A new modality for preventing HIV transmission is emerging in the form of topical microbicides. Some clinical trials have shown promising results for these methods of protection, while other trials have failed to show efficacy. Given the relatively novel nature of microbicide drug transport, a rigorous, deterministic analysis of that transport can help improve the design of microbicide vehicles and aid the interpretation of results from clinical trials. This type of analysis can support microbicide product design by organizing and clarifying the determinants of drug transport and the potential efficacies of candidate microbicide products.

Microbicide drug transport is modeled as a diffusion process with convection and reaction effects in appropriate compartments. This is applied here to vaginal gels and rings and a rectal enema, all delivering the microbicide drug Tenofovir. Although the focus here is on Tenofovir, the methods established in this dissertation can readily be adapted to other drugs, given knowledge of their physical and chemical properties, such as the diffusion coefficient, partition coefficient, and reaction kinetics. Other dosage forms such as tablets and fiber meshes can also be modeled using the perspective and methods developed here.

The analyses here include convective details of intravaginal flows by both ambient fluid and spreading gels with different rheological properties and applied volumes. These are input to the overall conservation equations for drug mass transport in different compartments. The results are Tenofovir concentration distributions in time and space for a variety of microbicide products and conditions. The Tenofovir concentrations in the vaginal and rectal mucosal stroma are converted, via a coupled reaction equation, to concentrations of Tenofovir diphosphate, which is the active form of the drug that functions as a reverse transcriptase inhibitor against HIV. Key model outputs are related to concentrations measured in experimental pharmacokinetic (PK) studies, e.g. concentrations in biopsies and blood. A new measure of microbicide prophylactic functionality, the Percent Protected, is calculated. This is the time dependent volume of the entire stroma (and thus fraction of host cells therein) in which Tenofovir diphosphate concentrations equal or exceed a target prophylactic value, e.g. an EC50.
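The core computation described above can be sketched under heavy simplifying assumptions: one-dimensional diffusion of drug into a mucosal layer (explicit finite differences, no convection or reaction), followed by "Percent Protected" as the fraction of the layer at or above a target concentration. The geometry, diffusion coefficient and threshold below are hypothetical stand-ins, not the dissertation's parameters.

```python
import numpy as np

D = 5e-10          # m^2/s, assumed effective diffusion coefficient in stroma
L = 200e-6         # m, assumed stromal thickness
nx, dt = 25, 0.05  # grid points, time step (s)
dx = L / (nx - 1)
assert D * dt / dx**2 < 0.5        # explicit-scheme stability criterion

c = np.zeros(nx)
c[0] = 1.0                         # fixed luminal concentration (normalized)
for _ in range(int(3600 / dt)):    # one hour of exposure
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[-1] = c[-2]                  # no-flux boundary at the deep edge

ec50 = 0.1                         # hypothetical protective threshold (normalized)
percent_protected = 100.0 * np.mean(c >= ec50)
print(f"Percent Protected after 1 h: {percent_protected:.0f}%")
```

The full models additionally couple convective gel spreading, epithelial partitioning and the reaction converting Tenofovir to Tenofovir diphosphate, but the thresholded-volume definition of Percent Protected is the same.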

Results show the prophylactic potentials of the studied microbicide vehicles against HIV infections. Key design parameters for each are addressed in application of the models. For a vaginal gel, fast spreading at small volume is more effective than slower spreading at high volume. Vaginal rings are shown to be most effective if inserted and retained as close to the fornix as possible. Because of the long half-life of Tenofovir diphosphate, temporary removal of the vaginal ring (after achieving steady state) for up to 24 h does not appreciably diminish Percent Protected. However, full steady state (for the entire stromal volume) is not achieved until several days after ring insertion. Delivery of Tenofovir to the rectal mucosa by an enema is dominated by the surface area of coated mucosa and by whether the interiors of rectal crypts are filled with the enema fluid. For the enema, 100% Percent Protected is achieved much more rapidly than for vaginal products, primarily because of the much thinner epithelial layer of the mucosa. For example, 100% Percent Protected can be achieved with a one-minute enema application and a 15-minute wait time.

Results of these models have good agreement with experimental pharmacokinetic data, in animals and clinical trials. They also improve upon traditional, empirical PK modeling, and this is illustrated here. Our deterministic approach can inform design of sampling in clinical trials by indicating time periods during which significant changes in drug concentrations occur in different compartments. More fundamentally, the work here helps delineate the determinants of microbicide drug delivery. This information can be the key to improved, rational design of microbicide products and their dosage regimens.

Relevance:

100.00%

Publisher:

Abstract:

The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art in methods for evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data-processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.

This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly-dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise and extracts principal components from the singular value decomposition of this large matrix of linearly-dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil structure interaction model.
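The extraction step described above, assembling Hankel matrices from measured accelerations and taking principal components via the singular value decomposition, can be sketched on synthetic stand-in data. The two-tone signal below plays the role of a floor-acceleration record; real records would be noisy and multi-channel.

```python
import numpy as np

def hankel_matrix(signal, rows):
    # Each row is the signal shifted by one sample (block-Hankel structure)
    cols = signal.size - rows + 1
    return np.array([signal[i:i + cols] for i in range(rows)])

t = np.linspace(0, 10, 1000)
# Synthetic "measured" acceleration: two modal frequencies (assumed values)
accel = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 3.5 * t)
H = hankel_matrix(accel, rows=100)

U, s, Vt = np.linalg.svd(H, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("variance share of first 4 singular components:", np.round(energy[:4], 3))
```

A noiseless sum of two sinusoids gives a Hankel matrix of rank 4 (two components per sinusoid), so virtually all the energy concentrates in the leading components; the method above relies on exactly this low-rank structure to separate linearly dependent response components from sparse measurements.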

Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extraction and interpolation of the shear wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model for the hospital is checked by comparing peak floor responses and force-displacement relations within the isolation system obtained from OpenSees simulations to the recorded measurements. General explanations and implications, supported by drifts, floor acceleration and displacement responses, and force-displacement relations, are described to address the effects of soil-structure interaction.
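The hysteretic isolator element mentioned above can be illustrated with a generic Bouc-Wen model integrated by explicit Euler under an imposed sinusoidal displacement. The shape parameters below are textbook defaults, not the hospital bearings' properties, and this is a sketch of the model class, not of OpenSees' element implementation.

```python
import numpy as np

A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0   # Bouc-Wen shape parameters (assumed)
dt = 1e-3
t = np.arange(0, 4.0, dt)
x = 3.0 * np.sin(2 * np.pi * 0.5 * t)    # imposed bearing displacement history
v = np.gradient(x, dt)                   # velocity

z = np.zeros_like(t)                     # hysteretic internal variable
for i in range(len(t) - 1):
    dz = (A * v[i]
          - beta * abs(v[i]) * abs(z[i]) ** (n - 1) * z[i]
          - gamma * v[i] * abs(z[i]) ** n)
    z[i + 1] = z[i] + dz * dt

# z saturates at (A / (beta + gamma))**(1/n) = 1, tracing a hysteresis loop vs x
z_max = np.max(np.abs(z))
print(f"max |z| = {z_max:.3f}")
```

Plotting z against x yields the characteristic hysteresis loop whose area represents energy dissipated per cycle, the quantity compared against the recovered in-service force-displacement relations.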

Relevance:

100.00%

Publisher:

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation for simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility-maximization and regret-minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
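The dynamic-programming step behind these route choice models can be sketched on a toy network. In recursive-logit-style models, the expected maximum utility ("value") at each node satisfies a logsum recursion, V(i) = log Σ_j exp(v_ij + V(j)), with V = 0 at the destination; arc choice probabilities then take a multinomial logit form. The network and utilities below are made up.

```python
import math

# arcs: node -> list of (next_node, deterministic utility of taking the arc)
arcs = {
    "o": [("a", -1.0), ("b", -1.5)],   # origin has two outgoing arcs
    "a": [("d", -1.0)],
    "b": [("d", -0.4)],
    "d": [],                           # destination
}

# Solve the logsum recursion backwards from the destination
V = {"d": 0.0}
for node in ["a", "b", "o"]:           # reverse topological order of this DAG
    V[node] = math.log(sum(math.exp(v + V[j]) for j, v in arcs[node]))

# Arc choice probabilities at the origin (logit form over downstream values)
probs = {j: math.exp(v + V[j] - V["o"]) for j, v in arcs["o"]}
print({k: round(p, 3) for k, p in probs.items()})
```

On a cyclic real network the recursion is a fixed-point system solved by value iteration or a linear-system trick, but the logsum structure, and the fact that path correlation enters through the network topology, is the same.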


Relevance:

100.00%

Publisher:

Abstract:

Distributions of pore-water O2, NO2−, NO3−, NH4+, Si(OH)4, PO43−, Mn2+, F−, and total alkalinity (T.A.) were determined at 15 stations in the eastern equatorial Atlantic. While overall profile characteristics are consistent with previous models of organic matter diagenesis, profile shapes suggest that a deep reaction layer, rich in organic C, is also present at many sites. While it is unlikely that the oxidation of organic C in this layer has had a major effect on the ocean C cycle, pore-water profile shapes are significantly altered. Despite exposure to seawater SO42− concentrations for >1000 years, decomposition of the organic matter in the layer appears to be restricted to oxic and suboxic processes. These results suggest major differences in organic carbon decomposition and preservation under oxic/suboxic and anoxic conditions. Present-day benthic fluxes are largest adjacent to the eastern-boundary coastal upwelling region and similar in magnitude to values reported for the eastern Pacific. Preliminary estimates suggest that benthic respiration in the eastern third of the North Atlantic south of 20°N may alone account for >20% of the total deep North Atlantic respiration. Combining these results with estimates of organic C burial and deep water-column decomposition suggests that this region is a major location of organic C input into the deep sea.
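Benthic fluxes of the kind estimated above are conventionally computed from the pore-water concentration gradient at the sediment-water interface using Fick's first law with porosity and tortuosity corrections. All values below are illustrative round numbers, not the study's data.

```python
import numpy as np

phi = 0.8                      # porosity (assumed)
D_sw = 1.2e-9                  # m^2/s, approximate free-solution O2 diffusivity
D_sed = D_sw * phi**2          # crude Archie-type tortuosity correction

z = np.array([0.0, 2.5, 5.0]) * 1e-3        # m below the interface
c = np.array([250.0, 150.0, 80.0]) * 1e-3   # mol/m^3 (from µM), hypothetical O2 profile

dcdz = (c[1] - c[0]) / (z[1] - z[0])        # gradient at the interface
flux = -phi * D_sed * dcdz                  # mol m^-2 s^-1, positive = into sediment
flux_daily = flux * 86400 * 1e3             # mmol m^-2 d^-1
print(f"O2 flux into sediment: {flux_daily:.2f} mmol m^-2 d^-1")
```

Summing such station-by-station fluxes over the region's area is what underlies the basin-scale respiration estimate quoted in the abstract.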

Relevance:

100.00%

Publisher:

Abstract:

Current models of the global carbon cycle lack natural mechanisms to explain known large, transient shifts in past records of the stable carbon-isotope ratio (delta13C) of carbon reservoirs. The injection into the atmosphere of ~1,200-2,000 gigatons of carbon, as methane from the decomposition of sedimentary methane hydrates, has been proposed to explain a delta13C anomaly associated with high-latitude warming and changes in marine and terrestrial biota near the Palaeocene-Eocene boundary, about 55 million years ago. These events may thus be considered as a natural 'experiment' on the effects of transient greenhouse warming. Here we use physical, chemical and spectral analyses of a sediment core from the western North Atlantic Ocean to show that two-thirds of the carbon-isotope anomaly occurred within no more than a few thousand years, indicating that carbon was catastrophically released into the ocean and atmosphere. Both the delta13C anomaly and biotic changes began between 54.93 and 54.98 million years ago, and are synchronous in oceans and on land. The longevity of the delta13C anomaly suggests that the residence time of carbon in the Palaeocene global carbon cycle was ~120 thousand years, which is similar to the modelled response after a massive input of methane. Our results suggest that large natural perturbations to the global carbon cycle have occurred in the past, probably through abrupt failure of sedimentary carbon reservoirs, at rates that are similar to those induced today by human activity.
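The ~1,200-2,000 Gt figure follows from a simple isotope mass balance: mixing isotopically light methane carbon (delta13C near -60 permil) into the exogenic carbon pool until the pool's mean delta13C drops by the observed excursion. The reservoir size and delta values below are round assumed numbers for illustration.

```python
M0 = 40000.0      # Gt C, assumed pre-event exogenic (ocean + atmosphere) carbon pool
d0 = 0.0          # permil, assumed initial mean delta13C of the pool
d_add = -60.0     # permil, typical biogenic methane
d_final = -2.5    # permil, roughly the observed negative excursion

# Mixing: d_final = (M0*d0 + M*d_add) / (M0 + M)  =>  solve for the added mass M
M = M0 * (d0 - d_final) / (d_final - d_add)
print(f"required methane-derived carbon: ~{M:.0f} Gt")
```

With these inputs the balance yields roughly 1,700 Gt, consistent with the 1,200-2,000 Gt range quoted in the abstract; the answer scales linearly with the assumed reservoir size and excursion magnitude.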

Relevância:

100.00% 100.00%

Publicador:

Resumo:

To effectively assess and mitigate the risk of permafrost disturbance, disturbance-prone areas can be predicted through the application of susceptibility models. In this study we developed regional susceptibility models for permafrost disturbances using a field disturbance inventory to test the transferability of the model to a broader region in the Canadian High Arctic. The resulting susceptibility maps were then used to explore the effect of terrain variables on the occurrence of disturbances within this region. To account for a large range of landscape characteristics, the model was calibrated using two locations: Sabine Peninsula, Melville Island, NU, and Fosheim Peninsula, Ellesmere Island, NU. Spatial patterns of disturbance were predicted with a generalized linear model (GLM) and a generalized additive model (GAM), each calibrated using disturbed and randomized undisturbed locations from both sites and GIS-derived terrain predictor variables including slope, potential incoming solar radiation, wetness index, topographic position index, elevation, and distance to water. Each model was validated for the Sabine and Fosheim Peninsulas using independent data sets, while the transferability of the model to an independent site was assessed at Cape Bounty, Melville Island, NU. The regional GLM and GAM validated well for both calibration sites (Sabine and Fosheim), with areas under the receiver operating curve (AUROC) > 0.79. Both models were applied directly to Cape Bounty without calibration and validated equally well, with AUROCs of 0.76; however, each model predicted disturbed and undisturbed samples differently. Additionally, the sensitivity of the transferred model was assessed using data sets with different sample sizes. Results indicated that models based on larger sample sizes transferred more consistently and captured the variability within the terrain attributes in the respective study areas.
Terrain attributes associated with the initiation of disturbances were similar regardless of location. Disturbances commonly occurred on slopes between 4 and 15°, below the Holocene marine limit, and in areas with low potential incoming solar radiation.
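The modelling workflow summarized above (a binary-response model fit to disturbed and undisturbed points with terrain predictors, then validated by AUROC on held-out data) can be sketched as follows. This is a minimal illustration on synthetic data; the predictor stand-ins, coefficients, and sample sizes are assumptions, not the study's actual inventory:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic terrain predictors (stand-ins for slope, solar radiation,
# wetness index, topographic position index, and distance to water).
n = 500
X = rng.normal(size=(n, 5))

# Synthetic binary response: disturbance made more likely by column 0
# and less likely by column 1 -- purely illustrative coefficients.
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

# Fit a logistic GLM on a calibration subset, validate on a held-out subset.
glm = LogisticRegression().fit(X[:400], y[:400])
auroc = roc_auc_score(y[400:], glm.predict_proba(X[400:])[:, 1])
print(f"validation AUROC: {auroc:.2f}")
```

Transferability would then be tested by scoring the fitted `glm` on predictors from a site excluded from calibration, exactly as the Cape Bounty assessment does.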

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Although ancestral polymorphisms and incomplete lineage sorting are commonly invoked at the population level, these models are increasingly being invoked and tested to explain deep radiations. Hypotheses are put forward that ancestral polymorphisms are the likely reason for paraphyletic taxa at the class level in the diatoms, based on an ancient rapid radiation of the entire group. Models of deep ancestral coalescence are invoked to explain paraphyly and molecular evolution at the class level in the diatoms. Other examples at more recent divergences are also documented. Whether or not the paraphyletic groups seen in the diatoms at all taxonomic levels should be recognized is discussed. The continued use of the terms 'centric' and 'pennate' diatoms is substantiated, with additional evidence supporting their use both as descriptive terms for the two groups and, for the pennates, as a taxonomic group, because new morphological evidence from the auxospores justifies the formal classification of the basal and core araphids as new subclasses of pennate diatoms in the Class Bacillariophyceae. Keys to the higher levels of the diatoms, showing how the terms 'centric' and 'araphid' diatoms can be defined, are provided.