16 results for Chemical processes Data processing

at University of Queensland eSpace - Australia


Relevance:

100.00%

Publisher:

Abstract:

The effects of ionizing radiation in different compositions of polymer gel dosimeters are investigated using FT-Raman spectroscopy and NMR T-2 relaxation times. The dosimeters are manufactured from different concentrations of comonomers (acrylamide and N,N'-methylene-bis-acrylamide) dispersed in different concentrations of an aqueous gelatin matrix. Results are analysed using a model of fast exchange of magnetization between three proton pools. The fraction of protons in each pool is determined using the known chemical composition of the dosimeter and FT-Raman spectroscopy. Based on these results, the physical and chemical processes in interplay in the dosimeters are examined in view of their effect on the changes in T-2. The precipitation of growing macroradicals and the scavenging of free radicals by gelatin are used to explain the rate of polymerization. The model describes the changes in T-2 as a function of the absorbed dose up to 50 Gy for the different compositions. This is expected to aid the theoretical design of new, more efficient dosimeters, since it was demonstrated that the optimum dosimeter (i.e., with the lowest dose resolution) must have a range of relaxation times that matches the range of T-2 values that can be determined with the lowest uncertainty using an MRI scanner.
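Under the fast-exchange assumption described above, the observed relaxation rate is the fraction-weighted sum of the pool relaxation rates, 1/T2_obs = Σ f_i / T2_i. A minimal sketch of that relation, using invented pool fractions and T2 values rather than the dosimeter data from the study:

```python
# Fast-exchange model of transverse relaxation: with rapid exchange of
# magnetization between proton pools, the observed rate 1/T2 is the
# fraction-weighted sum of the pool rates 1/T2_i.
# Pool fractions and T2 values below are illustrative, not measured data.

def t2_observed(fractions, t2_pools):
    """Observed T2 (s) for fast exchange between proton pools."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("pool fractions must sum to 1")
    rate = sum(f / t2 for f, t2 in zip(fractions, t2_pools))
    return 1.0 / rate

# Three pools, e.g. free water, gelatin-bound and polymer-bound protons
fractions = [0.90, 0.07, 0.03]   # hypothetical proton fractions
t2_pools = [2.0, 0.2, 0.01]      # hypothetical pool T2 values in seconds

print(f"T2_obs = {t2_observed(fractions, t2_pools):.4f} s")
```

As polymerization proceeds with dose, the fraction in the short-T2 polymer pool grows, which is how the model links absorbed dose to the measured T2.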

Relevance:

100.00%

Publisher:

Abstract:

The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Obtaining quickly the appropriate data increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation of the data structure based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries. (c) 2005 Elsevier B.V. All rights reserved.
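The three Halstead metrics used in the study have standard definitions based on operator and operand counts: length N = N1 + N2, volume V = N log2(n1 + n2), difficulty D = (n1/2)(N2/n2), and effort E = D × V. A small sketch of the calculation; the example query and its token counts are hypothetical, not taken from the experiment:

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead metrics from operator/operand counts.
    n1, n2: distinct operators/operands; N1, N2: total occurrences."""
    length = N1 + N2                    # program length N
    vocab = n1 + n2                     # vocabulary n
    volume = length * math.log2(vocab)  # volume V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)   # difficulty D
    effort = difficulty * volume        # effort E = D * V
    return {"length": length, "difficulty": difficulty, "effort": effort}

# Hypothetical counts for a small query such as:
#   SELECT name FROM staff WHERE dept = 'IT'
# operators: SELECT, FROM, WHERE, =  (each once) -> n1 = 4, N1 = 4
# operands:  name, staff, dept, 'IT' (each once) -> n2 = 4, N2 = 4
m = halstead(n1=4, n2=4, N1=4, N2=4)
print(m)
```

Averaging such scores over a representative sample of queries gives the weighted average complexity the theory compares across instantiations.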

Relevance:

100.00%

Publisher:

Abstract:

In the microbial competition observed in enhanced biological phosphorus removal (EBPR) systems, an undesirable group of micro-organisms known as glycogen-accumulating organisms (GAOs) compete for carbon in the anaerobic period with the desired polyphosphate-accumulating organisms (PAOs). Some studies have suggested that a propionate carbon source provides PAOs with a competitive advantage over GAOs in EBPR systems; however, the metabolism of GAOs with this carbon source has not been previously investigated. In this study, GAOs were enriched in a laboratory-scale bioreactor with propionate as the sole carbon source, in an effort to better understand their biochemical processes. Based on comprehensive solid-, liquid- and gas-phase chemical analytical data from the bioreactor, a metabolic model was proposed for the metabolism of propionate by GAOs. The model adequately described the anaerobic stoichiometry observed through chemical analysis, and can be a valuable tool for further investigation of the competition between PAOs and GAOs, and for the optimization of the EBPR process. A group of Alphaproteobacteria dominated the biomass (96% of Bacteria) from this bioreactor, while post-fluorescence in situ hybridization (FISH) chemical staining confirmed that these Alphaproteobacteria produced poly-beta-hydroxyalkanoates (PHAs) anaerobically and utilized them aerobically, demonstrating that they were putative GAOs. Some of the Alphaproteobacteria were related to Defluvicoccus vanus (16% of Bacteria), but the specific identity of many could not be determined by FISH. Further investigation into the identity of other GAOs is necessary.
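Metabolic models of this kind are typically constrained by elemental balances: the carbon taken up anaerobically (propionate plus glycogen) must reappear in the storage polymers and CO2. A toy carbon-balance check in that spirit, with purely illustrative coefficients rather than the stoichiometry actually proposed in the study:

```python
# Illustrative carbon-balance check of the kind used to verify anaerobic
# stoichiometry in metabolic models. All amounts below are hypothetical,
# not the values reported for the propionate-fed GAO culture.

C_ATOMS = {"propionate": 3, "PHV": 5, "PH2MV": 6, "CO2": 1, "glycogen": 6}

def carbon_cmol(amounts_mol):
    """Total carbon (C-mol) in a dict of compound -> mol."""
    return sum(mol * C_ATOMS[name] for name, mol in amounts_mol.items())

consumed = {"propionate": 1.0, "glycogen": 0.33}     # hypothetical uptake
produced = {"PHV": 0.6, "PH2MV": 0.25, "CO2": 0.48}  # hypothetical products

c_in = carbon_cmol(consumed)
c_out = carbon_cmol(produced)
print(f"C in: {c_in:.2f}, C out: {c_out:.2f}, closure: {c_out / c_in:.1%}")
```

A closure near 100% is one of the consistency checks that lets a proposed stoichiometry be said to "adequately describe" the measured solid-, liquid- and gas-phase data.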

Relevance:

100.00%

Publisher:

Abstract:

The Thames Estuary, UK, and the Brisbane River, Australia, are comparable in size and catchment area. Both are representative of the large and growing number of the world's estuaries associated with major cities. Principal differences between the two systems relate to climate and human population pressures. To assess the potential phytotoxic impact of herbicide residues in the estuaries, surface waters were analysed with a PAM fluorometry-based bioassay that employs the photosynthetic efficiency (photosystem II quantum yield) of laboratory-cultured microalgae as an endpoint measure of phytotoxicity. In addition, surface waters were chemically analysed for a limited number of herbicides. Diuron, atrazine and simazine were detected in both systems at comparable concentrations. In contrast, bioassay results revealed that whilst the detected herbicides accounted for the observed phytotoxicity of Brisbane River extracts with great accuracy, they consistently explained only around 50% of the phytotoxicity induced by Thames Estuary extracts. The unaccounted-for phytotoxicity in Thames surface waters is indicative of unidentified phytotoxins. The greatest phytotoxic response was measured at Charing Cross, Thames Estuary, corresponding to a diuron-equivalent concentration of 180 ng L-1. The study employs relative potencies (REPs) of PSII-impacting herbicides and demonstrates that chemical analysis alone is prone to omitting valuable information. The results support the incorporation of bioassays into routine monitoring programs, where bioassay data may be used to predict and verify chemical contamination data, alert to unidentified compounds, and provide the user with information on the cumulative toxicity of complex mixtures. (c) 2005 Elsevier B.V. All rights reserved.
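A diuron-equivalent concentration of the kind reported is obtained by scaling each detected herbicide's concentration by its relative potency (REP, with diuron defined as 1.0) and summing. A sketch of the arithmetic; the REP values and sample concentrations are invented placeholders, not the study's measurements:

```python
# Diuron-equivalent (DEq) concentration from relative potencies (REPs)
# of PSII-inhibiting herbicides. REPs and concentrations are illustrative.

def diuron_equivalents(concentrations_ng_l, reps):
    """DEq (ng/L) = sum of concentration_i * REP_i (REP of diuron = 1.0)."""
    return sum(concentrations_ng_l[h] * reps[h] for h in concentrations_ng_l)

reps = {"diuron": 1.0, "atrazine": 0.2, "simazine": 0.1}        # hypothetical
sample = {"diuron": 60.0, "atrazine": 100.0, "simazine": 50.0}  # ng/L, hypothetical

deq = diuron_equivalents(sample, reps)
print(f"chemically accounted DEq = {deq:.0f} ng/L")

# Comparing against the bioassay-derived DEq flags unidentified phytotoxins:
bioassay_deq = 180.0  # e.g. a bioassay result in ng/L diuron equivalents
print(f"unaccounted fraction = {1 - deq / bioassay_deq:.0%}")
```

The gap between the chemically accounted DEq and the bioassay DEq is exactly the "unaccounted-for phytotoxicity" the abstract describes for the Thames extracts.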

Relevance:

100.00%

Publisher:

Abstract:

Background and purpose: Survey data quality is a combination of the representativeness of the sample, the accuracy and precision of measurements, and data processing and management, with several subcomponents in each. The purpose of this paper is to show how, in the final risk factor surveys of the WHO MONICA Project, information on data quality was obtained, quantified, and used in the analysis. Methods and results: In the WHO MONICA (Multinational MONItoring of trends and determinants in CArdiovascular disease) Project, information about the data quality components was documented in retrospective quality assessment reports. On the basis of the documented information and the survey data, the quality of each data component was assessed and summarized using quality scores. The quality scores were used in sensitivity testing of the results, both by excluding populations with low quality scores and by weighting the data by their quality scores. Conclusions: Detailed documentation of all survey procedures with standardized protocols, training, and quality control are steps towards optimizing data quality. Quantifying data quality is a further step. The methods used in the WHO MONICA Project could be adopted to improve quality in other health surveys.
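The two sensitivity tests described, excluding low-quality populations and weighting by quality score, can be sketched as follows; the risk-factor estimates and quality scores below are invented for illustration:

```python
# Sensitivity testing of a pooled estimate against data quality:
# (a) recompute excluding populations below a quality threshold,
# (b) recompute weighting each population by its quality score.
# (estimate, quality score) pairs are hypothetical, not MONICA data.

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

data = [(5.2, 2.0), (5.8, 1.5), (6.4, 0.5), (5.5, 2.0)]

overall = sum(v for v, _ in data) / len(data)
high_quality = [v for v, q in data if q >= 1.0]   # drop low-quality populations
excl_mean = sum(high_quality) / len(high_quality)
wt_mean = weighted_mean([v for v, _ in data], [q for _, q in data])

print(f"all: {overall:.3f}  excluded: {excl_mean:.3f}  weighted: {wt_mean:.3f}")
```

If the three summaries are close, the pooled result is robust to data quality; a large shift flags that low-quality populations are driving it.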

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Although managers consider accurate, timely, and relevant information as critical to the quality of their decisions, evidence of large variations in data quality abounds. Over a period of twelve months, the action research project reported herein attempted to investigate and track data quality initiatives undertaken by the participating organisation. The investigation focused on two types of errors: transaction input errors and processing errors. Whenever the action research initiative identified non-trivial errors, the participating organisation introduced actions to correct the errors and prevent similar errors in the future. Data quality metrics were taken quarterly to measure improvements resulting from the activities undertaken during the action research project. The action research project results indicated that for a mission-critical database to ensure and maintain data quality, commitment to continuous data quality improvement is necessary. Also, communication among all stakeholders is required to ensure common understanding of data quality improvement goals. The action research project found that to further substantially improve data quality, structural changes within the organisation and to the information systems are sometimes necessary. The major goal of the action research study is to increase the level of data quality awareness within all organisations and to motivate them to examine the importance of achieving and maintaining high-quality data.
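A quarterly data-quality metric of the kind tracked, for instance errors per thousand transactions split into the two error types studied, might be computed as follows; the counts are invented, not the participating organisation's figures:

```python
# Quarterly error-rate tracking for the two error types studied:
# transaction input errors and processing errors. Counts are hypothetical.

def error_rate_per_1000(errors, transactions):
    """Errors per 1000 transactions."""
    return 1000.0 * errors / transactions

quarters = {  # quarter -> (input errors, processing errors, transactions)
    "Q1": (120, 45, 50_000),
    "Q2": (90, 30, 52_000),
    "Q3": (60, 22, 51_000),
    "Q4": (40, 15, 53_000),
}

for q, (inp, proc, n) in quarters.items():
    print(f"{q}: input {error_rate_per_1000(inp, n):.2f}/1000, "
          f"processing {error_rate_per_1000(proc, n):.2f}/1000")
```

A falling trend across quarters is the kind of evidence the project used to show that corrective actions were improving data quality.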

Relevance:

100.00%

Publisher:

Abstract:

Even when data repositories exhibit near-perfect data quality, users may formulate queries that do not correspond to the information requested. Users' poor information retrieval performance may arise either from problems understanding the data models that represent the real-world systems or from their query skills. This research focuses on users' understanding of the data structures, i.e., their ability to map the information request to the data model. The Bunge-Wand-Weber ontology was used to formulate three sets of hypotheses. Two laboratory experiments (one using a small data model and one using a larger data model) tested the effect of ontological clarity on users' performance when undertaking component-, record-, and aggregate-level tasks. For the hypotheses involving different representations with equivalent semantics, the results indicate that participants using the parsimonious data model performed better on component-level tasks, whereas participants using the ontologically clearer data model performed better on record- and aggregate-level tasks.

Relevance:

100.00%

Publisher:

Abstract:

The activities of conantokin-G (con-G), conantokin-T (con-T), and several novel analogues have been studied using polyamine enhancement of [H-3]MK-801 binding to human glutamate-N-methyl-D-aspartate (NMDA) receptors, and their structures have been examined using CD and H-1 NMR spectroscopy. The potencies of con-G[A7], con-G, and con-T as noncompetitive inhibitors of spermine-enhanced [H-3]MK-801 binding to NMDA receptors obtained from human brain tissue are similar to those obtained using rat brain tissue. The secondary structure and activity of con-G are found to be highly sensitive to amino acid substitution and modification. NMR chemical shift data indicate that con-G, con-G[D8,D17], and con-G[A7] have similar conformations in the presence of Ca2+. This consists of a helix for residues 2-16, which is kinked in the vicinity of Gla10. This is confirmed by 3D structure calculations on con-G[A7]. Restraining this helix in a linear form (i.e., con-G[A7,E10-K13]) results in a minor reduction in potency. Incorporation of a 7-10 salt-bridge replacement (con-G[K7-E10]) prevents helix formation in aqueous solution and produces a peptide with low potency. Peptides with the Leu5-Tyr5 substitution also have low potencies (con-G[Y5,A7] and con-G[Y5,K7]), indicating that Leu5 in con-G is important for full antagonist behavior. We have also shown that the Gla-Ala7 substitution increases potency, whereas the Gla-Lys7 substitution has no effect. Con-G and con-G[K7] both exhibit selectivity between NMDA subtypes from mid-frontal and superior temporal gyri, but not between sensorimotor and mid-frontal gyri. Asn8 and/or Asn17 appear to be important for the ability of con-G to function as an inhibitor of polyamine-stimulated [H-3]MK-801 binding, but not in maintaining secondary structure. The presence of Ca2+ does not increase the potencies of con-G and con-T for NMDA receptors but does stabilize the helical structures of con-G, con-G[D8,D17], and, to a lesser extent, con-G[A7].
The NMR data support the existence of at least two independent Ca2+-chelating sites in con-G, one involving Gla7 and possibly Gla3 and the other likely to involve Gla10 and/or Gla14.

Relevance:

100.00%

Publisher:

Abstract:

The finite element method is used to simulate coupled problems, which describe the related physical and chemical processes of ore body formation and mineralization, in geological and geochemical systems. The main purpose of this paper is to illustrate some simulation results for different types of modelling problems in pore-fluid saturated rock masses. The aims of the simulation results presented in this paper are: (1) getting a better understanding of the processes and mechanisms of ore body formation and mineralization in the upper crust of the Earth; (2) demonstrating the usefulness and applicability of the finite element method in dealing with a wide range of coupled problems in geological and geochemical systems; (3) qualitatively establishing a set of showcase problems, against which any numerical method and computer package can be reasonably validated. (C) 2002 Published by Elsevier Science B.V.
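As a toy illustration of the finite element method on a far simpler problem than the coupled pore-fluid systems discussed above, the sketch below assembles and solves steady one-dimensional diffusion with linear elements; it is a minimal teaching example, not any of the models used in the paper:

```python
# Minimal 1-D finite element sketch: steady diffusion -d/dx(k du/dx) = 0
# on [0, 1] with u(0) = 0, u(1) = 1, using linear elements. The exact
# solution is the linear profile u(x) = x, so the FEM nodal values
# should recover it.

def solve_fem_1d(n_elems, k=1.0):
    n = n_elems + 1
    h = 1.0 / n_elems
    # Assemble global stiffness matrix from the local matrix
    # (k/h) * [[1, -1], [-1, 1]] for each linear element.
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):
        K[e][e] += k / h
        K[e][e + 1] -= k / h
        K[e + 1][e] -= k / h
        K[e + 1][e + 1] += k / h
    f = [0.0] * n  # zero source term
    # Impose Dirichlet boundary conditions by replacing the BC rows.
    for j in range(n):
        K[0][j] = 1.0 if j == 0 else 0.0
        K[n - 1][j] = 1.0 if j == n - 1 else 0.0
    f[0], f[n - 1] = 0.0, 1.0
    # Forward elimination (no pivoting; adequate for this system).
    for i in range(n):
        for j in range(i + 1, n):
            m = K[j][i] / K[i][i]
            for c in range(i, n):
                K[j][c] -= m * K[i][c]
            f[j] -= m * f[i]
    # Back substitution.
    u = [0.0] * n
    for i in range(n - 1, -1, -1):
        u[i] = (f[i] - sum(K[i][j] * u[j] for j in range(i + 1, n))) / K[i][i]
    return u

u = solve_fem_1d(4)
print([round(x, 3) for x in u])  # linear profile: 0.0, 0.25, 0.5, 0.75, 1.0
```

The coupled ore-body problems in the paper extend this same assemble-and-solve pattern to multiple interacting fields (pore-fluid flow, heat transfer, species transport and reaction) in two or three dimensions.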