958 results for Data Generation
Abstract:
The solution structure of A beta(1-40)Met(O), the methionine-oxidized form of amyloid beta-peptide A beta(1-40), has been investigated by CD and NMR spectroscopy. Oxidation of Met35 may have implications in the aetiology of Alzheimer's disease. Circular dichroism experiments showed that whereas A beta(1-40) and A beta(1-40)Met(O) both adopt essentially random coil structures in water (pH 4) at micromolar concentrations, the former aggregates within several days while the latter is stable for at least 7 days under these conditions. This remarkable difference led us to determine the solution structure of A beta(1-40)Met(O) using H-1 NMR spectroscopy. In a water-SDS micelle medium needed to solubilize both peptides at the millimolar concentrations required to measure NMR spectra, chemical shift and NOE data for A beta(1-40)Met(O) strongly suggest the presence of a helical region between residues 16 and 24. This is supported by slow H-D exchange of amide protons in this region and by structure calculations using simulated annealing with the program XPLOR. The remainder of the structure is relatively disordered. Our previously reported NMR data for A beta(1-40) in the same solvent show that helices are present over residues 15-24 (helix 1) and 28-36 (helix 2). Oxidation of Met35 thus causes a local and selective disruption of helix 2. In addition to this helix-coil rearrangement in aqueous micelles, the CD data show that oxidation inhibits a coil-to-beta-sheet transition in water. These significant structural rearrangements in the C-terminal region of A beta may be important clues to the chemistry and biology of A beta(1-40) and A beta(1-42).
Abstract:
Two major factors are likely to impact the utilisation of remotely sensed data in the near future: (1) an increase in the number and availability of commercial and non-commercial image data sets with a range of spatial, spectral and temporal dimensions, and (2) increased access to image display and analysis software through GIS. A framework was developed to provide an objective approach to selecting remotely sensed data sets for specific environmental monitoring problems. Preliminary applications of the framework have provided successful approaches for monitoring disturbed and restored wetlands in southern California.
Abstract:
Open system pyrolysis (heating rate 10 degrees C/min) of a coal maturity (vitrinite reflectance, VR) sequence (0.5%, 0.8% and 1.4% VR) demonstrates that there are two stages of thermogenic methane generation from Bowen Basin coals. The first and major stage shows a steady increase in methane generation maximising at 570 degrees C, corresponding to a VR of 2-2.5%. This is followed by a less intense methane generation which has not as yet maximised by 800 degrees C (equivalent to VR of 5%). Heavier (C2+) hydrocarbons are generated up to 570 degrees C, after which only the C-1 (CH4, CO and CO2) gases are produced. The main phase of heavy hydrocarbon generation occurs between 420 and 510 degrees C. Over this temperature range, methane generation accounts for only a minor component, whereas the wet gases (C-2-C-5) are either equal in abundance to, or up to a factor of two more abundant than, the liquid hydrocarbons. The yields of the non-hydrocarbon gases CO2 and CO are greater than methane during the early stages of gas generation from an immature coal, subordinate to methane during the main phase of methane generation, after which they are again dominant. Compositional data for desorbed and produced coal seam gases from the Bowen Basin show that CO2 and wet gases are a minor component. This discrepancy between the proportion of wet gas components produced during open system pyrolysis and that observed in naturally matured coals may be the result of preferential migration of wet gas components, of dilution of methane generated during secondary cracking of bitumen, or of kinetic effects associated with different activation energies for production of individual hydrocarbon gases. Extrapolation of results of artificial pyrolysis of the main organic components in coal to geologically significant heating rates suggests that isotopically light methane, with delta(13)C values down to -50 parts per thousand, can be generated. Carbon isotope depletions in C-13 are further enhanced, however, as a result of trapping of gases over selected rank levels (instantaneous generation), which is a probable explanation for the range of delta(13)C values we have recorded in methane desorbed from Bowen Basin coals (-51 +/- 9 parts per thousand). Pervasive carbonate-rich veins in Bowen Basin coals are the product of magmatism-related hydrothermal activity. Furthermore, the pyrolysis results suggest an additional organic carbon source: CO2 released at any stage during the maturation history could mix in varying proportions with CO2 from the other sources. This interpretation is supported by C and O isotopic ratios of carbonates, which indicate mixing between magmatic and meteoric fluids. Also, the steep slope of the C and O isotope correlation trend suggests that the carbonates were deposited over a very narrow temperature interval basin-wide, or at relatively high temperatures (i.e., greater than 150 degrees C) where mineral-fluid oxygen isotope fractionations are small. These temperatures are high enough for catagenic production of methane and higher hydrocarbons from the coal and coal-derived bitumen. The results suggest that a combination of thermogenic generation of methane and thermodynamic processes associated with CH4/CO2 equilibria are the two most important factors that control the primary isotope and molecular composition of coal seam gases in the Bowen Basin. Biological processes are regionally subordinate but may be locally significant. (C) 1998 Published by Elsevier Science Ltd. All rights reserved.
Abstract:
The use of computational fluid dynamics simulations for calibrating a flush air data system is described. In particular, the flush air data system of the HYFLEX hypersonic vehicle is used as a case study. The HYFLEX air data system consists of nine pressure ports located flush with the vehicle nose surface, connected to onboard pressure transducers. After appropriate processing, surface pressure measurements can be converted into useful air data parameters. The processing algorithm requires an accurate pressure model, which relates air data parameters to the measured pressures. In the past, such pressure models have been calibrated using combinations of flight data, ground-based experimental results, and numerical simulation. We perform a calibration of the HYFLEX flush air data system using computational fluid dynamics simulations exclusively. The simulations are used to build an empirical pressure model that accurately describes the HYFLEX nose pressure distribution over a range of flight conditions. We believe that computational fluid dynamics provides a quick and inexpensive way to calibrate the air data system and is applicable to a broad range of flight conditions. When tested with HYFLEX flight data, the calibrated system is found to work well. It predicts vehicle angle of attack and angle of sideslip to accuracy levels that generally satisfy flight control requirements. Dynamic pressure is predicted to within the resolution of the onboard inertial measurement unit. We find that wind-tunnel experiments and flight data are not necessary to accurately calibrate the HYFLEX flush air data system for hypersonic flight.
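The core of the calibration described above is fitting an empirical model that maps measured port pressures back to an air data parameter. A minimal sketch of that idea follows; the single normalized pressure ratio, the quadratic response, and the noise level are invented stand-ins for the CFD-generated training set, not the HYFLEX pressure model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "CFD" training samples: angle of attack (deg) and a
# normalized upper/lower port pressure difference with an assumed
# mildly nonlinear response plus numerical scatter.
alpha = rng.uniform(-10.0, 10.0, size=50)            # deg
dp_norm = 0.035 * alpha + 0.0004 * alpha**2          # assumed response
dp_norm = dp_norm + rng.normal(0.0, 1e-3, size=alpha.size)

# Calibration: least-squares fit of the inverse map alpha(dp_norm),
# here a simple quadratic polynomial.
coef = np.polyfit(dp_norm, alpha, deg=2)

# Check the calibrated model against the training set.
alpha_pred = np.polyval(coef, dp_norm)
rmse = float(np.sqrt(np.mean((alpha_pred - alpha) ** 2)))  # deg
```

In a real calibration the model would take all nine port pressures and return angle of attack, sideslip and dynamic pressure, and would be validated on held-out flight conditions rather than the training points.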
Abstract:
In an investigation intended to determine training needs of flight crews, Bowers et al. (1998, this issue) report two studies showing that the patterning of communication is a better discriminator of good and poor crews than is the content of communication. Bowers et al. characterize their studies as intended to generate hypotheses for training needs and draw connections with Exploratory Sequential Data Analysis (ESDA). Although we applaud the intentions of Bowers et al., we point out some concerns with their characterization and implementation of ESDA. Our principal concern is that the Bowers et al. exploration of the data does not convincingly lead them back to a better fundamental understanding of the original phenomena they are investigating.
Abstract:
It is recognized that vascular dispersion in the liver is a determinant of high first-pass extraction of solutes by that organ. Such dispersion is also required for translation of in-vitro microsomal activity into in-vivo predictions of hepatic extraction for any solute. We therefore investigated the relative dispersion of albumin transit times (CV2) in the livers of adult and weanling rats and in elasmobranch livers. The mean and normalized variance of the hepatic transit time distribution of albumin was estimated using parametric non-linear regression (with a correction for catheter influence) after an impulse (bolus) input of labelled albumin into a single-pass liver perfusion. The mean +/- s.e. of CV2 for albumin determined in each of the liver groups were 0.85 +/- 0.20 (n = 12), 1.48 +/- 0.33 (n = 7) and 0.90 +/- 0.18 (n = 4) for the livers of adult and weanling rats and elasmobranch livers, respectively. These CV2 values are comparable with that reported previously for the dog and suggest that the CV2 of the liver is of a similar order of magnitude irrespective of the age and morphological development of the species. It might, therefore, be justified, in the absence of other information, to predict the hepatic clearances and availabilities of highly extracted solutes by scaling livers within and between species using hepatic elimination models such as the dispersion model with a CV2 of approximately unity.
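As an illustrative aside, the mean transit time and normalized variance (CV2) of an outflow curve can be estimated by moment analysis; the study itself used parametric non-linear regression with a catheter correction, and the lognormal curve below is synthetic, not perfusion data.

```python
import numpy as np

# Synthetic outflow curve: a lognormal transit-time density with a known
# mean transit time (30 s) and normalized variance (CV2 = 0.85).
mtt_true, cv2_true = 30.0, 0.85
sigma2 = np.log(1.0 + cv2_true)                 # lognormal shape parameter
mu = np.log(mtt_true) - 0.5 * sigma2

t = np.linspace(0.0, 600.0, 12000)              # s; long tail captured
ts = np.maximum(t, 1e-9)                        # avoid log(0) at t = 0
c = np.exp(-(np.log(ts) - mu) ** 2 / (2.0 * sigma2)) / ts

# Moment analysis of the curve: zeroth, first and second central moments.
dt = t[1] - t[0]
area = np.sum(c) * dt
mtt = np.sum(t * c) * dt / area                 # mean transit time
var = np.sum((t - mtt) ** 2 * c) * dt / area    # variance of transit times
cv2 = var / mtt ** 2                            # normalized variance (CV2)
```

Moment estimates are sensitive to tail truncation and baseline noise, which is one reason a fitted parametric model is often preferred for real perfusion data.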
Abstract:
We tested the effects of four data characteristics on the results of reserve selection algorithms. The data characteristics were nestedness of features (land types in this case), rarity of features, size variation of sites (potential reserves) and size of data sets (numbers of sites and features). We manipulated data sets to produce three levels, with replication, of each of these data characteristics while holding the other three characteristics constant. We then used an optimizing algorithm and three heuristic algorithms to select sites to solve several reservation problems. We measured efficiency as the number or total area of selected sites, indicating the relative cost of a reserve system. Higher nestedness increased the efficiency of all algorithms (reduced the total cost of new reserves). Higher rarity reduced the efficiency of all algorithms (increased the total cost of new reserves). More variation in site size increased the efficiency of all algorithms expressed in terms of total area of selected sites. We measured the suboptimality of heuristic algorithms as the percentage increase of their results over optimal (minimum possible) results. Suboptimality is a measure of the reliability of heuristics as indicative costing analyses. Higher rarity reduced the suboptimality of heuristics (increased their reliability) and there is some evidence that more size variation did the same for the total area of selected sites. We discuss the implications of these results for the use of reserve selection algorithms as indicative and real-world planning tools.
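A minimal sketch of the kind of heuristic algorithm under test: greedily pick the site that covers the most features not yet represented, until every feature is in the reserve system. The site-by-feature data here are invented for illustration.

```python
def greedy_reserve(sites):
    """Greedy reserve selection.

    sites: dict mapping site id -> set of features present at that site.
    Returns the list of selected site ids covering every feature.
    """
    remaining = set().union(*sites.values())   # features still unrepresented
    selected = []
    while remaining:
        # Pick the site covering the most still-unrepresented features
        # (first one wins on ties, following dict order).
        best = max(sites, key=lambda s: len(sites[s] & remaining))
        selected.append(best)
        remaining -= sites[best]
    return selected

# Invented example: four candidate sites, five land types.
sites = {
    "A": {"wetland", "heath"},
    "B": {"heath", "forest", "dune"},
    "C": {"forest"},
    "D": {"dune", "wetland", "grassland"},
}
print(greedy_reserve(sites))   # → ['B', 'D']
```

The greedy solution is not guaranteed optimal, which is exactly the suboptimality the study measures by comparing heuristic selections against an optimizing algorithm; site size (area) can be incorporated by scoring features covered per unit area instead of raw counts.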
Abstract:
The Fellodistomidae Nicoll, 1909 is a disparate group of digeneans. Although all known life-cycles of the group include bivalves as first intermediate hosts, there is no convincing morphological synapomorphy that can be used to unite the group. Sequences from the V4 region of small subunit (18S) rRNA genes were used to infer phylogenetic relationships among 13 species of Fellodistomidae from four subfamilies and eight species from seven other digenean families: Bivesiculidae; Brachylaimidae; Bucephalidae; Gorgoderidae; Gymnophallidae; Opecoelidae; and Zoogonidae. Outgroup comparison was made initially with an aspidogastrean. Various species from the other digenean families were used as outgroups in subsequent analyses. Three methods of analysis indicated polyphyly of the Fellodistomidae and at least two independent radiations of the subfamilies, such that they were more closely associated with other digeneans than with each other. The Tandanicolinae was monophyletic (100% bootstrap support) and was weakly associated with the Gymnophallidae (< 50-55% bootstrap support). Monophyly of the Baccigerinae was supported with 78-87% bootstrap support, and monophyly of the Zoogonidae + Baccigerinae received 77-86% support. The remaining fellodistomid species, Fellodistomum fellis, F. agnotum and Coomera brayi (Fellodistominae) plus Proctoeces maculatus and Complexobursa sp. (Proctoecinae), formed a separate clade with 74-92% bootstrap support. On the basis of molecular, morphological and life-cycle evidence, the subfamilies Baccigerinae and Tandanicolinae are removed from the Fellodistomidae and promoted to familial status. The Baccigerinae is promoted under the senior synonym Faustulidae Poche, 1926, and the Echinobrevicecinae Dronen, Blend & McEachran, 1994 is synonymised with the Faustulidae. Consequently, species that were formerly in the Fellodistomidae are now distributed in three families: Fellodistomidae; Faustulidae (syn. Baccigerinae Yamaguti, 1954); and Tandanicolidae Johnston, 1927. We infer that the use of bivalves as intermediate hosts by this broad range of families indicates multiple host-switching events within the radiation of the Digenea.
Abstract:
Poor root development due to constraining soil conditions could be an important factor influencing health of urban trees. Therefore, there is a need for efficient techniques to analyze the spatial distribution of tree roots. An analytical procedure for describing tree rooting patterns from X-ray computed tomography (CT) data is described and illustrated. Large irregularly shaped specimens of undisturbed sandy soil were sampled from various positions around the base of trees using field impregnation with epoxy resin, to stabilize the cohesionless soil. Cores approximately 200 mm in diameter by 500 mm in height were extracted from these specimens. These large core samples were scanned with a medical X-ray CT device, and contiguous images of soil slices (2 mm thick) were thus produced. X-ray CT images are regarded as regularly-spaced sections through the soil although they are not actual 2D sections but matrices of voxels of approximately 0.5 mm x 0.5 mm x 2 mm. The images were used to generate the equivalent of horizontal root contact maps from which three-dimensional objects, assumed to be roots, were reconstructed. The resulting connected objects were used to derive indices of the spatial organization of roots, namely: root length distribution, root length density, root growth angle distribution, root spatial distribution, and branching intensity. The successive steps of the method, from sampling to generation of indices of tree root organization, are illustrated through a case study examining rooting patterns of valuable urban trees. (C) 1999 Elsevier Science B.V. All rights reserved.
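The reconstruction step — stacking CT slices into a voxel volume, labeling connected objects assumed to be roots, and deriving simple indices — can be sketched as below. The voxel dimensions match those quoted above, but the toy volume and the bounding-box length estimate are illustrative simplifications, not the paper's procedure.

```python
import numpy as np
from scipy import ndimage

vx, vy, vz = 0.5, 0.5, 2.0   # voxel size in mm (in-plane x, y; slice thickness z)

# Toy segmented volume: True where a voxel was classified as root.
volume = np.zeros((40, 40, 30), dtype=bool)
volume[20, 20, :] = True       # a vertical "root" spanning all 30 slices
volume[5:15, 10, 4] = True     # a short horizontal "root" within one slice

# Label 26-connected objects (voxels touching by face, edge or corner).
structure = np.ones((3, 3, 3), dtype=int)
labels, n_objects = ndimage.label(volume, structure=structure)

# Crude per-object length index: bounding-box diagonal in physical units.
# (A skeleton-based measure would be more faithful for branched roots.)
lengths = []
for sl in ndimage.find_objects(labels):
    dx = (sl[0].stop - sl[0].start) * vx
    dy = (sl[1].stop - sl[1].start) * vy
    dz = (sl[2].stop - sl[2].start) * vz
    lengths.append(float(np.sqrt(dx**2 + dy**2 + dz**2)))

core_volume_mm3 = volume.size * vx * vy * vz
root_length_density = sum(lengths) / core_volume_mm3   # mm of root per mm^3
```

From the same labeled volume, growth angles and branching intensity could be derived per object, e.g. from principal axes and skeleton branch points.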
Abstract:
Training-needs analysis is critical for defining and procuring effective training systems. However, traditional approaches to training-needs analysis are not suitable for capturing the demands of highly automated and computerized work domains. In this article, we propose that work domain analysis can identify the functional structure of a work domain that must be captured in a training system, so that workers can be trained to deal with unpredictable contingencies that cannot be handled by computer systems. To illustrate this argument, we outline a work domain analysis of a fighter aircraft that defines its functional structure in terms of its training objectives, measures of performance, basic training functions, physical functionality, and physical context. The functional structure or training needs identified by work domain analysis can then be used as a basis for developing functional specifications for training systems, specifically their design objectives, data collection capabilities, scenario generation capabilities, physical functionality, and physical attributes. Finally, work domain analysis also provides a useful framework for evaluating whether a tendered solution fulfills the training needs of a work domain.
Abstract:
Fine-grained pyrite is the earliest generation of pyrite and the most abundant sulfide within the Urquhart Shale at Mount Isa, northwest Queensland. The pyrite is intimately interbanded with ore-grade Pb-Zn mineralization at the Mount Isa mine but is also abundant north and south of the mine at several stratigraphic horizons within the Urquhart Shale. Detailed sedimentologic, petrographic, and sulfur isotope studies of the Urquhart Shale, mostly north of the mine, reveal that the fine-grained pyrite (delta(34)S = -3.3 to +26.3 parts per thousand) formed by thermochemical sulfate reduction during diagenesis. The sulfate source was local sulfate evaporites, pseudomorphs of which are present throughout the Urquhart Shale (i.e., gypsum, anhydrite, and barite). Deep-burial diagenetic replacement of these evaporites resulted in sulfate-bearing ground waters which migrated parallel to bedding. Fine-grained pyrite formed where these fluids infiltrated and then interacted with carbon-rich laminated siltstones. Comparison of the sulfur isotope systematics of fine-grained pyrite and spatially associated base metal sulfides from the Mount Isa Pb-Zn and Cu orebodies indicates a common sulfur source of ultimately marine origin for all sulfide types. Different sulfur isotope ratio distributions for the various sulfides are the result of contrasting formation mechanisms and/or depositional conditions rather than differing sulfur sources. The sulfur isotope systematics of the base metal and associated iron sulfide generations are consistent with mineralization by reduced hydrothermal fluids, perhaps generated by bulk reduction of evaporite-sourced sulfate-bearing waters generated deeper in the Mount Isa Group, the sedimentary sequence which contains the Urquhart Shale. The available sulfur isotope data from the Mount Isa orebodies are consistent with either a chemically and thermally zoned, evolving Cu-Pb-Zn system, or discrete Cu and Pb-Zn mineralizing events linked by a common sulfur source.
Abstract:
Gauging data are available from numerous streams throughout Australia, and these data provide a basis for historical analysis of geomorphic change in stream channels in response to both natural phenomena and human activities. We present a simple method for analysis of these data, and a brief case study of its application to channel change in the Tully River, in the humid tropics of north Queensland. The analysis suggests that this channel has narrowed and deepened, rather than aggraded: channel aggradation was expected, given the intensification of land use in the catchment, upstream of the gauging station. Limitations of the method relate to the time periods over which stream gauging occurred; the spatial patterns of stream gauging sites; the quality and consistency of data collection; and the availability of concurrent land-use histories on which to base the interpretation of the channel changes.
Abstract:
The development of large-scale solid-state fermentation (SSF) processes is hampered by the lack of simple tools for the design of SSF bioreactors. The use of semifundamental mathematical models to design and operate SSF bioreactors can be complex. In this work, dimensionless design factors are used to predict the effects of scale and of operational variables on the performance of rotating drum bioreactors. The dimensionless design factor (DDF) is a ratio of the rate of heat generation to the rate of heat removal at the time of peak heat production. It can be used to predict maximum temperatures reached within the substrate bed for given operational variables. Alternatively, given the maximum temperature that can be tolerated during the fermentation, it can be used to explore the combinations of operating variables that prevent that temperature from being exceeded. Comparison of the predictions of the DDF approach with literature data for operation of rotating drums suggests that the DDF is a useful tool. The DDF approach was used to explore the consequences of three scale-up strategies on the required air flow rates and maximum temperatures achieved in the substrate bed as the bioreactor size was increased on the basis of geometric similarity. The first of these strategies was to maintain the superficial flow rate of the process air through the drum constant. The second was to maintain the ratio of volumes of air per volume of bioreactor constant. The third strategy was to adjust the air flow rate with increase in scale in such a manner as to maintain constant the maximum temperature attained in the substrate bed during the fermentation. (C) 2000 John Wiley & Sons, Inc.
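The DDF idea can be illustrated with a toy energy balance. The heat-removal model below (sensible heat carried off by the process air for a given allowable temperature rise) and all numbers are assumptions for illustration, not the authors' correlation.

```python
def ddf(q_gen_peak, air_flow, rho_air, cp_air, dT_max_allowed):
    """Dimensionless design factor sketch: peak heat generation (W) divided
    by the removal capacity of the process air (W) when the outlet runs
    dT_max_allowed (K) above the inlet.

    q_gen_peak      peak metabolic heat generation, W
    air_flow        volumetric air flow through the drum, m^3/s
    rho_air, cp_air air density (kg/m^3) and specific heat (J/kg/K)
    """
    q_removal = air_flow * rho_air * cp_air * dT_max_allowed
    return q_gen_peak / q_removal

# Invented example: 10 kW peak heat load, 0.5 m^3/s of air,
# 20 K allowable temperature rise across the bed.
d = ddf(q_gen_peak=10_000.0, air_flow=0.5,
        rho_air=1.15, cp_air=1006.0, dT_max_allowed=20.0)
# d < 1: this air flow can hold the bed below the temperature ceiling;
# d > 1: the peak heat load exceeds the removal capacity.
```

Under this toy model, the third scale-up strategy described above amounts to solving `d = 1` for `air_flow` at each scale, since `q_gen_peak` grows with substrate volume while the allowable temperature rise stays fixed.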