913 results for "orders of worth"
Abstract:
Carbon Capture and Storage may use deep saline aquifers for CO2 sequestration, but small CO2 leaks could pose a risk to overlying fresh groundwater. We performed laboratory incubations of CO2 infiltration under oxidizing conditions for >300 days on samples from four freshwater aquifers to 1) understand how CO2 leakage affects freshwater quality; 2) develop selection criteria for deep sequestration sites based on inorganic metal contamination caused by CO2 leaks to shallow aquifers; and 3) identify geochemical signatures for early detection criteria. After exposure to CO2, water pH declines of 1-2 units were apparent in all aquifer samples. CO2 caused concentrations of the alkali and alkaline earth metals and of manganese, cobalt, nickel, and iron to increase by more than 2 orders of magnitude. Potentially dangerous uranium and barium increased throughout the entire experiment in some samples. Solid-phase metal mobility, carbonate buffering capacity, and redox state in the shallow overlying aquifers influence the impact of CO2 leakage and should be considered when selecting deep geosequestration sites. Manganese, iron, calcium, and pH could be used as geochemical markers of a CO2 leak, as their concentrations increase within 2 weeks of exposure to CO2.
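As a rough illustration of the chemistry behind the reported 1-2 unit pH decline, the pH of water equilibrated with CO2 can be estimated from Henry's law and the first dissociation of carbonic acid. This is a minimal, unbuffered sketch using textbook equilibrium constants, not values measured in the study:

```python
import math

# Illustrative textbook constants at 25 C (not from the study)
K_H = 0.034      # Henry's law constant for CO2, mol L^-1 atm^-1
K_A1 = 4.45e-7   # first dissociation constant of carbonic acid, mol L^-1

def ph_unbuffered(p_co2_atm):
    """pH of pure water equilibrated with CO2 at a given partial pressure,
    neglecting carbonate buffering (a worst case for an aquifer)."""
    h2co3 = K_H * p_co2_atm      # dissolved CO2 (as H2CO3*)
    h = math.sqrt(K_A1 * h2co3)  # [H+] from the first dissociation only
    return -math.log10(h)

print(ph_unbuffered(4e-4))  # ambient air: ~5.6
print(ph_unbuffered(1.0))   # CO2-saturated leak scenario: ~3.9
```

The unbuffered drop of roughly 1.7 units brackets the 1-2 unit declines reported above; carbonate buffering in real aquifers moderates the effect, which is why the abstract flags buffering capacity as a site-selection criterion.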
Abstract:
Carbon sequestration in sandstone saline reservoirs holds great potential for mitigating climate change, but its storage potential and cost per ton of avoided CO2 emissions are uncertain. We develop a general model to determine the maximum theoretical constraints on both storage potential and injection rate and use it to characterize the economic viability of geosequestration in sandstone saline aquifers. When applied to a representative set of aquifer characteristics, the model yields results that compare favorably with pilot projects currently underway. Over a range of reservoir properties, maximum effective storage peaks at an optimal depth of 1600 m, at which point 0.18-0.31 metric tons can be stored per cubic meter of bulk reservoir volume. Maximum modeled injection rates predict minima for storage costs in a typical basin in the range of $2-7/ton CO2 (2005 U.S.$), depending on depth and basin characteristics in our base-case scenario. Because the properties of natural reservoirs in the United States vary substantially, storage costs could in some cases be lower or higher by orders of magnitude. We conclude that available geosequestration capacity exhibits a wide range of technological and economic attractiveness. Like traditional projects in the extractive industries, geosequestration capacity should be exploited starting with the low-cost storage options first, then moving gradually up the supply curve.
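The quoted 0.18-0.31 t of CO2 per cubic meter of bulk reservoir can be sanity-checked with a back-of-envelope capacity estimate. The parameter values below are assumed round numbers for illustration, not outputs of the paper's model:

```python
def storage_per_bulk_m3(porosity, sweep_efficiency, co2_density_kg_m3):
    """Stored CO2 mass (metric tons) per cubic meter of bulk reservoir rock."""
    return porosity * sweep_efficiency * co2_density_kg_m3 / 1000.0

# At ~1600 m depth, supercritical CO2 density is roughly 600-750 kg/m3.
# Porosity 0.30 and full sweep are assumed illustrative values.
print(storage_per_bulk_m3(0.30, 1.0, 700))  # -> 0.21 t/m3, inside the quoted range
```

Lower porosity or incomplete sweep efficiency pushes the estimate down, which is consistent with the wide cost spread the abstract reports across real reservoirs.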
Abstract:
Regions of the hamster alpha 1-adrenergic receptor (alpha 1 AR) that are important in GTP-binding protein (G protein)-mediated activation of phospholipase C were determined by studying the biological functions of mutant receptors constructed by recombinant DNA techniques. A chimeric receptor consisting of the beta 2-adrenergic receptor (beta 2AR) into which the putative third cytoplasmic loop of the alpha 1AR had been placed activated phosphatidylinositol metabolism as effectively as the native alpha 1AR, as did a truncated alpha 1AR lacking the last 47 residues in its cytoplasmic tail. Substitutions of beta 2AR amino acid sequence in the intermediate portions of the third cytoplasmic loop of the alpha 1AR or at the N-terminal portion of the cytoplasmic tail caused marked decreases in receptor coupling to phospholipase C. Conservative substitutions of two residues in the C terminus of the third cytoplasmic loop (Ala293→Leu, Lys290→His) increased the potency of agonists for stimulating phosphatidylinositol metabolism by up to 2 orders of magnitude. These data indicate (i) that the regions of the alpha 1AR that determine coupling to phosphatidylinositol metabolism are similar to those previously shown to be involved in coupling of the beta 2AR to adenylate cyclase stimulation and (ii) that point mutations of a G-protein-coupled receptor can cause remarkable increases in sensitivity of biological response.
Abstract:
BACKGROUND: The evolutionary relationships of modern birds are among the most challenging to understand in systematic biology and have been debated for centuries. To address this challenge, we assembled or collected the genomes of 48 avian species spanning most orders of birds, including all Neognathae and two of the five Palaeognathae orders, and used the genomes to construct a genome-scale avian phylogenetic tree and perform comparative genomics analyses (Jarvis et al. in press; Zhang et al. in press). Here we release assemblies and datasets associated with the comparative genome analyses, which include 38 newly sequenced avian genomes plus previously or simultaneously released genomes of Chicken, Zebra finch, Turkey, Pigeon, Peregrine falcon, Duck, Budgerigar, Adelie penguin, Emperor penguin and the Medium Ground Finch. We hope that this resource will serve future efforts in phylogenomics and comparative genomics. FINDINGS: The 38 bird genomes were sequenced using the Illumina HiSeq 2000 platform and assembled using a whole-genome shotgun strategy. The 48 genomes were categorized into two groups according to the N50 scaffold size of the assemblies: a high-depth group comprising 23 species sequenced at high coverage (>50X) with multiple insert-size libraries, resulting in N50 scaffold sizes greater than 1 Mb (except the White-throated Tinamou and Bald Eagle); and a low-depth group comprising 25 species sequenced at low coverage (~30X) with two insert-size libraries, resulting in an average N50 scaffold size of about 50 kb. Repetitive elements comprised 4%-22% of the bird genomes. The assembled scaffolds allowed the homology-based annotation of 13,000-17,000 protein-coding genes in each avian genome relative to chicken, zebra finch and human, as well as comparative and sequence conservation analyses.
CONCLUSIONS: Here we release full genome assemblies of 38 newly sequenced avian species, link to genome assembly downloads for 7 of the remaining 10 species, and provide a guide to the genomic data that have been generated and used in our Avian Phylogenomics Project. To the best of our knowledge, the Avian Phylogenomics Project is the largest vertebrate comparative genomics project to date. The genomic data presented here are expected to accelerate further analyses in many fields, including phylogenetics, comparative genomics, evolution, neurobiology, developmental biology, and other related areas.
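The N50 statistic used above to split the assemblies into high- and low-depth groups is straightforward to compute from a list of scaffold lengths; a minimal sketch:

```python
def n50(lengths):
    """N50: the largest scaffold length L such that scaffolds of length >= L
    together cover at least half of the total assembly size."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

print(n50([100, 200, 300, 400]))  # -> 300 (400 + 300 covers >= half of 1000)
```

An assembly dominated by a few very long scaffolds (the >1 Mb group above) thus reports a much larger N50 than one of the same total size built from many short scaffolds.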
Abstract:
BACKGROUND: Determining the evolutionary relationships among the major lineages of extant birds has been one of the biggest challenges in systematic biology. To address this challenge, we assembled or collected the genomes of 48 avian species spanning most orders of birds, including all Neognathae and two of the five Palaeognathae orders. We used these genomes to construct a genome-scale avian phylogenetic tree and perform comparative genomic analyses. FINDINGS: Here we present the datasets associated with the phylogenomic analyses, which include sequence alignment files consisting of nucleotides, amino acids, indels, and transposable elements, as well as tree files containing gene trees and species trees. Inferring an accurate phylogeny required generating: 1) a well-annotated data set across species based on genome synteny; 2) alignments with unaligned or incorrectly overaligned sequences filtered out; and 3) diverse data sets, including genes and their inferred trees, indels, and transposable elements. Our total evidence nucleotide tree (TENT) data set (consisting of exons, introns, and UCEs) gave what we consider our most reliable species tree when using the concatenation-based ExaML algorithm or when using statistical binning with the coalescence-based MP-EST algorithm (which we refer to as MP-EST*). Other data sets, such as the coding sequence of some exons, revealed other properties of genome evolution, namely convergence. CONCLUSIONS: The Avian Phylogenomics Project is, to our knowledge, the largest vertebrate phylogenomics project to date. The sequence, alignment, and tree data are expected to accelerate analyses in phylogenomics and other related areas.
Abstract:
Sound waves are propagating pressure fluctuations, which are typically several orders of magnitude smaller than the pressure variations in the flow field that account for flow acceleration. On the other hand, these fluctuations travel at the speed of sound in the medium, not as a transported fluid quantity. Because of these two properties, the Reynolds-averaged Navier–Stokes equations do not resolve the acoustic fluctuations. This paper discusses a defect correction method for this type of multi-scale problem in aeroacoustics. Numerical examples in one and two dimensions are used to illustrate the concept.
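The defect-correction idea named above can be sketched in its generic form: repeatedly solve a cheap, approximate operator against the residual (the "defect") of the accurate operator. The linear-algebra analogue below is illustrative only and is not the paper's aeroacoustic scheme:

```python
import numpy as np

A = np.array([[2.0, 0.3],
              [0.1, 1.5]])    # accurate but expensive-to-invert operator
B = np.diag(np.diag(A))       # cheap approximation (here: just the diagonal)
b = np.array([1.0, 2.0])

x = np.zeros(2)
for _ in range(30):
    defect = b - A @ x                   # residual w.r.t. the accurate operator
    x = x + np.linalg.solve(B, defect)   # correction from the cheap operator

print(np.linalg.norm(b - A @ x))  # residual shrinks toward zero
```

The iteration converges whenever B is close enough to A (spectral radius of I - B⁻¹A below 1), so each step needs only the cheap solve while accuracy is set by the operator used to measure the defect; that separation is what makes the approach attractive for multi-scale problems.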
Abstract:
The relationship between date of first description and size, geographic range and depth of occurrence is investigated for 18 orders of marine holozooplankton (comprising over 4000 species). Results of multiple regression analyses suggest that all attributes are linked, which reflects the complex interplay between them. Partial correlation coefficients suggest that geographic range is the most important predictor of description date, and shows an inverse relationship. By contrast, size is generally a poor indicator of description date, which probably mirrors the size-independent way in which specimens are collected, though there is clearly a positive relationship between both size and depth (for metabolic/trophic reasons), and size and geographic range. There is also a positive relationship between geographic range and depth that probably reflects the near constant nature of the deep-water environment and the wide-ranging currents to be found there. Although we did not explicitly incorporate either abundance or location into models predicting the date of first description, neither should be ignored.
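The partial-correlation reasoning above can be illustrated with a small sketch: the correlation between two variables after regressing out a control. The synthetic data are purely illustrative (not the zooplankton dataset), constructed so that description date is driven down by geographic range while body size is a correlated nuisance variable:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing out control variable z."""
    def residuals(v, z):
        Z = np.column_stack([np.ones_like(z), z])
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta
    return np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]

# Synthetic illustration: range depends partly on size; date depends
# strongly (negatively) on range and only weakly on size.
rng = np.random.default_rng(1)
size = rng.normal(size=500)
geo_range = 0.5 * size + rng.normal(size=500)
date = -2.0 * geo_range + 0.1 * size + rng.normal(size=500)

print(partial_corr(date, geo_range, size))  # strongly negative
```

Holding size fixed, the range-date relationship stays strongly inverse, mirroring the abstract's finding that geographic range is the dominant predictor of description date.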
Abstract:
We frequently require sensitive bioassay techniques with which to study the effects of marine contaminants at environmentally realistic concentrations. Unfortunately, it is difficult to achieve sensitivity and precision in an organism amenable to indefinite periods of laboratory culture. Results from different laboratories are often extremely variable: LC50 values for the same substance, using the same organism, may differ by two or even three orders of magnitude (Wilson, Cowell & Beynon, 1975). Moreover, some of the most sensitive bioassay organisms require nutrient media, which may alter the availability and toxicity of metals by complexing them (Jones, 1964; Kamp-Nielsen, 1971; Hannan & Patouillet, 1972) and often contain metal impurities at significant levels (Albert, 1968; Steeman Nielsen & Wium Anderson, 1970). The object of the work reported here has been to develop a technique by which these problems might be minimized or avoided. Hydroids were chosen as bioassay organisms for a variety of reasons. They are tolerant but sensitive to small variations in their chemical environment. Techniques for growing hydroids are simple and they can be cultured under conditions of near optimal temperature, salinity and food supply, thus minimizing the errors frequent in bioassay work arising from variations in the history of the test organisms, their size, sex or physiological state. An important source of variability in all work with organisms is that inherent in the genetic material, but with hydroids this can be avoided by the use of a single clone.
Abstract:
1. A first step in the analysis of complex movement data often involves discretisation of the path into a series of step-lengths and turns, for example in the analysis of specialised random walks, such as Lévy flights. However, the identification of turning points, and therefore step-lengths, in a tortuous path depends on ad hoc parameter choices. Consequently, studies testing for movement patterns in these data, such as Lévy flights, have generated debate. However, studies focusing on one-dimensional (1D) data, as in the vertical displacements of marine pelagic predators, where turning points can be identified unambiguously, have provided strong support for Lévy flight movement patterns. 2. Here, we investigate how step-length distributions in 3D movement patterns would be interpreted by tags recording in 1D (i.e. depth) and demonstrate the dimensional symmetry previously shown mathematically for Lévy-flight movements. We test the veracity of this symmetry by simulating several measurement errors common in empirical datasets and find Lévy patterns and exponents to be robust to low-quality movement data. 3. We then consider exponential and composite Brownian random walks and show that these also project into 1D with sufficient symmetry to be clearly identifiable as such. 4. By extending the symmetry paradigm, we propose a new methodology for step-length identification in 2D or 3D movement data. The methodology is successfully demonstrated in a re-analysis of wandering albatross Global Positioning System (GPS) location data previously analysed using a complex methodology to determine bird-landing locations as turning points in a Lévy walk. For these high-resolution GPS data, we show that there is strong evidence for albatross foraging patterns approximated by truncated Lévy flights spanning over 3.5 orders of magnitude. 5. Our simple methodology and freely available software can be used with any 2D or 3D movement data at any scale or resolution and are robust to common empirical measurement errors. The method should find wide applicability in the field of movement ecology, spanning the study of motile cells to humans.
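The dimensional symmetry described above can be checked numerically: power-law step lengths projected onto one axis keep the same tail exponent. The sketch below draws Pareto-distributed 3D step lengths, projects them onto the vertical ("depth") axis via an isotropic direction, and fits both with the standard maximum-likelihood Pareto estimator; the parameter values are assumptions for illustration, not the paper's data:

```python
import math, random

random.seed(42)
ALPHA, XMIN, N = 2.0, 1.0, 20000

def mle_exponent(samples, xmin):
    """Maximum-likelihood Pareto (power-law) exponent for samples >= xmin."""
    tail = [s for s in samples if s >= xmin]
    return 1.0 + len(tail) / sum(math.log(s / xmin) for s in tail)

# Power-law step lengths via inverse-transform sampling
steps = [XMIN * random.random() ** (-1.0 / (ALPHA - 1.0)) for _ in range(N)]

# Project each step onto the vertical axis with an isotropic random direction:
# for a uniform direction in 3D, |cos(theta)| is uniform on (0, 1].
depth_steps = [s * random.random() for s in steps]

print(mle_exponent(steps, XMIN))        # ~2.0
print(mle_exponent(depth_steps, XMIN))  # ~2.0: the exponent survives projection
```

The projected tail retains the same exponent because multiplying a Pareto length by a bounded random factor leaves the power-law tail index unchanged, which is the essence of the symmetry that lets 1D depth tags stand in for full 3D paths.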
Abstract:
The decisions animals make about how long to wait between activities can determine the success of diverse behaviours such as foraging, group formation or risk avoidance. Remarkably, for diverse animal species, including humans, spontaneous patterns of waiting times show random ‘burstiness’ that appears scale-invariant across a broad set of scales. However, a general theory linking this phenomenon across the animal kingdom currently lacks an ecological basis. Here, we demonstrate from tracking the activities of 15 sympatric predator species (cephalopods, sharks, skates and teleosts) under natural and controlled conditions that bursty waiting times are an intrinsic spontaneous behaviour well approximated by heavy-tailed (power-law) models over data ranges up to four orders of magnitude. Scaling exponents quantifying ratios of frequent short to rare very long waits are species-specific, being determined by traits such as foraging mode (active versus ambush predation), body size and prey preference. A stochastic–deterministic decision model reproduced the empirical waiting time scaling and species-specific exponents, indicating that apparently complex scaling can emerge from simple decisions. Results indicate temporal power-law scaling is a behavioural ‘rule of thumb’ that is tuned to species’ ecological traits, implying a common pattern may have naturally evolved that optimizes move–wait decisions in less predictable natural environments.
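A toy version of a stochastic decision rule shows how heavy-tailed waits can emerge from simple decisions: if the per-step probability of ending a wait decays with the time already waited, the waiting-time distribution develops a power-law tail. This is an illustrative construction, not the paper's stochastic-deterministic model:

```python
import math, random

random.seed(7)

def wait_time(c=0.5, t_max=10**5):
    """Wait until a 'move' decision fires; the hazard c/t decays with elapsed
    time, so survival P(T > n) ~ n^(-c): a power-law tail with exponent 1 + c."""
    t = 1
    while t < t_max:
        if random.random() < c / t:
            return t
        t += 1
    return t_max

waits = [wait_time() for _ in range(5000)]

# Rough continuous maximum-likelihood tail-exponent fit above a cutoff
xmin = 10
tail = [w for w in waits if w >= xmin]
alpha_hat = 1.0 + len(tail) / sum(math.log(w / xmin) for w in tail)
print(alpha_hat)  # near 1 + c = 1.5
```

Changing the single parameter c retunes the fitted exponent, loosely analogous to how the abstract's species-specific exponents track traits such as foraging mode.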
Abstract:
This study investigates the use of co-melt fluidised bed granulation for the agglomeration of model pharmaceutical powders, namely lactose monohydrate, PEG 10000, polyvinylpyrrolidone (PVP) and ibuprofen as a model drug. Granulation within the co-melt system was found to follow a nucleation–steady growth–coating regime profile. Using the high molecular weight PEG binder, the granulation mechanism, and thus the extent of granulation, was found to be significantly influenced by binder viscosity. The compression properties of the granulate within the hot fluidised bed were correlated using a novel high-temperature experimental procedure. It was found that the fracture stress and fracture modulus of the materials under hot processing conditions were orders of magnitude lower than those measured under ambient conditions. A range of particle velocities within the granulator were considered based on theoretical models. After an initial period of nucleation, Stokes deformation number analysis indicated that only velocities within the high-shear region of the fluidised bed were sufficient to promote significant granule deformation and, therefore, coalescence. The data also indicated that larger granules de-fluidised, preventing agglomeration by coalescence. Furthermore, experimental data indicated that dissipation of the viscous molten binder to the surface was the most important factor in the latter stages of the granulation process. From a pharmaceutical perspective, the inclusion of the model drug ibuprofen combined with PVP in the co-melt process proved highly significant. DSC analysis of the formulations indicated that the decrease in the heat of fusion associated with the melting of ibuprofen within the FHMG systems may be attributed to interaction between PVP and ibuprofen through intermolecular hydrogen bonding. This interaction decreases the crystallinity of ibuprofen and facilitates solubilisation and bioavailability within the solid matrix.
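The Stokes deformation number invoked above compares the kinetic energy density of a granule collision with the granule's resistance to plastic deformation, St_def = ρU²/(2Y). The sketch below uses assumed illustrative values (density, yield stress, velocities), not the paper's measurements:

```python
def stokes_deformation_number(granule_density, collision_velocity, yield_stress):
    """St_def = rho * U^2 / (2 * Y): ratio of impact kinetic energy density
    to the granule's dynamic yield stress."""
    return granule_density * collision_velocity**2 / (2.0 * yield_stress)

# Illustrative values only: a 1500 kg/m3 granule with a 10 kPa yield stress,
# at two assumed characteristic velocities in the bed.
print(stokes_deformation_number(1500, 0.1, 1e4))  # low-velocity region: 7.5e-4
print(stokes_deformation_number(1500, 2.0, 1e4))  # high-shear region: 0.3
```

Because St_def scales with velocity squared, only the fast high-shear region pushes the number up by orders of magnitude, consistent with the finding that deformation and coalescence are confined to that region.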
Abstract:
In this study, a series of N-chloroacetylated dipeptides were synthesised by applying Houghten's methodology of multiple analog peptide synthesis. The peptides, all of which contain a C-terminal free acid, were tested as inactivators of bovine cathepsin B, in an attempt to exploit the known and, amongst the cysteine proteinases, unique carboxydipeptidyl peptidase activity of the protease. We have succeeded in obtaining a number of effective inactivators, the most potent of which, chloroacetyl-Leu-Leu-OH, inactivates the enzyme with an apparent second-order rate constant of 3.8 x 10^4 M^-1 min^-1. In contrast, the esterified analog, chloroacetyl-Leu-Leu-OMe, inactivates the enzyme some three orders of magnitude less efficiently, lending credence to our thesis that a free carboxylic acid moiety is an important determinant of inhibitor effectiveness. This preliminary study has highlighted a number of interesting features of the specificity requirements of the bovine proteinase, and we believe that our approach has great potential for the rapid delineation of the subsite specificities of cathepsin B-like proteases from various species.
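A second-order rate constant translates into an intuitive timescale under pseudo-first-order conditions (inhibitor in large excess over enzyme): k_obs = k2·[I] and the inactivation half-life is ln 2/k_obs. The rate constant is the one reported above; the 10 µM inhibitor concentration is an assumed illustrative value:

```python
import math

def inactivation_half_life_min(k2_per_M_min, inhibitor_M):
    """Pseudo-first-order half-life for enzyme inactivation,
    assuming [I] >> [E] so that k_obs = k2 * [I]."""
    return math.log(2) / (k2_per_M_min * inhibitor_M)

K2_ACID = 3.8e4  # M^-1 min^-1, chloroacetyl-Leu-Leu-OH (from the abstract)

print(inactivation_half_life_min(K2_ACID, 10e-6))         # ~1.8 min
print(inactivation_half_life_min(K2_ACID / 1000, 10e-6))  # ~1800 min for an analog 1000x slower
```

At 10 µM the free-acid inactivator halves the enzyme activity in under two minutes, while a compound three orders of magnitude slower would need more than a day: the practical meaning of the potency gap reported above.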
Abstract:
Optical transmission of a two-dimensional array of subwavelength holes in a metal film has been numerically studied using a differential method. Calculated transmission spectra show a significant increase of the transmission in certain spectral ranges corresponding to the excitation of surface polariton Bloch waves on a metal surface with a periodic hole structure. Under the enhanced transmission conditions, the near-field distribution of the transmitted light reveals an intensity enhancement greater than 2 orders of magnitude in localized (~40 nm) spots resulting from the interference of the surface polaritons Bragg-scattered by the holes in the array.
Abstract:
We present a numerical and theoretical study of intense-field single-electron ionization of helium at 390 nm and 780 nm. Accurate ionization rates (over an intensity range of (0.175-34) × 10^14 W/cm^2 at 390 nm, and (0.275-14.4) × 10^14 W/cm^2 at 780 nm) are obtained from full-dimensionality integrations of the time-dependent helium-laser Schrödinger equation. We show that the power law of lowest-order perturbation theory, modified with a ponderomotive-shifted ionization potential, is capable of modelling the ionization rates over an intensity range that extends up to two orders of magnitude higher than that applicable to perturbation theory alone. Writing the modified perturbation theory in terms of scaled wavelength and intensity variables, we obtain to first approximation a single ionization law for both the 390 nm and 780 nm cases. To model the data in the high-intensity limit as well as the low, a new function is introduced for the rate. This function has, in part, a resemblance to that derived from tunnelling theory but, importantly, retains the correct frequency dependence and scaling behaviour derived from the perturbative-like models at lower intensities. Comparison with the predictions of classical ADK tunnelling theory confirms that ADK performs poorly in the frequency and intensity domain treated here.
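The ponderomotive shift that enters the modified power law can be made concrete with the standard formulas: U_p[eV] ≈ 9.337 × 10⁻¹⁴ I[W/cm²] λ[µm]², and the minimum photon number in lowest-order perturbation theory is the ceiling of (I_p + U_p)/ħω. The specific intensity below is an illustrative choice, not one of the paper's data points:

```python
import math

def ponderomotive_ev(intensity_w_cm2, wavelength_um):
    """Ponderomotive (quiver) energy U_p in eV (standard formula)."""
    return 9.337e-14 * intensity_w_cm2 * wavelength_um**2

def min_photons(ip_ev, intensity_w_cm2, wavelength_um):
    """Minimum photon number for ionization with a ponderomotive-shifted I_p."""
    photon_ev = 1.2398 / wavelength_um  # photon energy from wavelength
    shifted_ip = ip_ev + ponderomotive_ev(intensity_w_cm2, wavelength_um)
    return math.ceil(shifted_ip / photon_ev)

IP_HE = 24.587  # helium ionization potential, eV

print(min_photons(IP_HE, 1e14, 0.390))  # 390 nm at 10^14 W/cm^2
print(min_photons(IP_HE, 1e14, 0.780))  # 780 nm at 10^14 W/cm^2
```

Because U_p grows as Iλ², the shift matters far more at 780 nm than at 390 nm at the same intensity, which is one reason a single scaled-variable ionization law for both wavelengths is a nontrivial result.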
Abstract:
A comparative study of models used to predict contaminant dispersion in a partially stratified room is presented. The experiments were carried out in a ventilated test room, with an initially evenly dispersed pollutant. Air was extracted from the outlet in the ceiling of the room at 1 and 3 air changes per hour. A small temperature difference between the top and bottom of the room causes very low air velocities, and higher concentrations, in the lower half of the room. Grid-independent CFD calculations were compared with predictions from a zonal model and from CFD using a very coarse grid. All the calculations show broadly similar contaminant concentration decay rates for the three locations monitored in the experiments, with the zonal model performing surprisingly well. For the lower air change rate, the models predict a less well mixed contaminant distribution than the experimental measurements suggest. With run times of less than a few minutes, the zonal model is around two orders of magnitude faster than coarse-grid CFD and could therefore be used more easily in parametric studies and sensitivity analyses. For a more detailed picture of internal dispersion, a CFD study using coarse and standard grids may be more appropriate.
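The baseline against which all of these models are judged is the well-mixed decay law, C(t)/C₀ = exp(−ACH·t), for an initially uniform contaminant removed at a given number of air changes per hour. A minimal sketch for the two ventilation rates in the study:

```python
import math

def concentration_fraction(ach, hours):
    """Well-mixed approximation: C(t)/C0 = exp(-ACH * t) for a contaminant
    removed by ventilation at ACH air changes per hour."""
    return math.exp(-ach * hours)

print(concentration_fraction(1, 1.0))  # ~0.37 of initial after 1 h at 1 ACH
print(concentration_fraction(3, 1.0))  # ~0.05 of initial after 1 h at 3 ACH
```

Stratification makes the real room depart from this single exponential (higher concentrations persisting in the lower half), which is exactly the behaviour the zonal and CFD models are trying to capture.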