23 results for Scientific apparatus and instruments
in University of Queensland eSpace - Australia
Abstract:
Application of novel analytical and investigative methods such as fluorescence in situ hybridization, confocal laser scanning microscopy (CLSM), microelectrodes and advanced numerical simulation has led to new insights into micro- and macroscopic processes in bioreactors. However, the question is still open whether or not these new findings and the subsequent gain of knowledge are of significant practical relevance and, if so, where and how. To find suitable answers, it is necessary for engineers to know what can be expected from applying these modern analytical tools. Similarly, scientists could benefit significantly from an intensive dialogue with engineers in order to find out about practical problems and conditions existing in wastewater treatment systems. In this paper, an attempt is made to help bridge the gap between science and engineering in biological wastewater treatment. We provide an overview of recently developed methods in microbiology and in mathematical modeling and numerical simulation. A questionnaire is presented which may help generate a platform from which further technical and scientific developments can be accomplished. Both the paper and the questionnaire are aimed at encouraging scientists and engineers to enter into an intensive, mutually beneficial dialogue. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
One of the challenges in scientific visualization is to generate software libraries suitable for the large-scale data emerging from tera-scale simulations and instruments. We describe the efforts currently under way at SDSC and NPACI to address these challenges. The scope of the SDSC project spans data handling, graphics, visualization, and scientific application domains. Components of the research focus on the following areas: intelligent data storage, layout and handling, using associated “Floor-Plan” metadata; performance optimization on parallel architectures; extension of SDSC’s scalable, parallel, direct volume renderer to allow perspective viewing; and interactive rendering of fractional images (“imagelets”), which facilitates the examination of large datasets. These concepts are coordinated within a data-visualization pipeline, which operates on component data blocks sized to fit within the available computing resources. A key feature of the scheme is that the metadata which tag the data blocks can be propagated and applied consistently: at the disk level, in distributing the computations across parallel processors, in “imagelet” composition, and in feature tagging. The work reflects the emerging challenges and opportunities presented by the ongoing progress in high-performance computing (HPC) and the deployment of the data, computational, and visualization Grids.
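As a rough illustration of the block-based scheme this abstract describes, the following minimal Python sketch partitions a dataset into component blocks sized to the available resources, tags each block with metadata that is propagated through a rendering stage, and composites the resulting "imagelets". All names (Block, floor_plan, render_imagelet) and the placeholder transform are illustrative assumptions, not SDSC code.

```python
# Illustrative sketch only: not SDSC's library, just the pipeline shape the
# abstract describes. Block, floor_plan and render_imagelet are invented names.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Block:
    data: np.ndarray                            # one component data block
    meta: dict = field(default_factory=dict)    # "Floor-Plan"-style tags

def floor_plan(volume: np.ndarray, block_len: int):
    """Split a dataset into blocks sized to fit the available resources,
    tagging each with its origin so the tags can be propagated downstream."""
    for start in range(0, volume.size, block_len):
        chunk = volume[start:start + block_len]
        yield Block(chunk, {"offset": start, "len": chunk.size})

def render_imagelet(block: Block) -> Block:
    """Stand-in for a per-block rendering stage; metadata travels with it."""
    image = np.sqrt(np.abs(block.data))         # placeholder transform
    return Block(image, {**block.meta, "stage": "rendered"})

if __name__ == "__main__":
    volume = np.random.rand(1_000_000)          # stand-in for a large dataset
    imagelets = [render_imagelet(b) for b in floor_plan(volume, 65_536)]
    # composition step: reassemble guided by the propagated offset tags
    ordered = sorted(imagelets, key=lambda b: b.meta["offset"])
    final = np.concatenate([b.data for b in ordered])
    print(len(imagelets), "imagelets composited into", final.size, "samples")
```

The point of the sketch is the metadata discipline: because every stage copies and extends the tags rather than discarding them, the same tags can drive disk layout, parallel distribution, and composition, as the abstract notes.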
Abstract:
One of the main objectives of the first International Junior Researcher and Engineer Workshop on Hydraulic Structures is to provide an opportunity for young researchers and engineers to present their research. But a research project is only complete when it has been published and shared with the community. Referees and peer experts play an important role in controlling research quality. While some new electronic tools provide further means to disseminate research information, the quality and impact of the work remain linked to a thorough expert-review process and to publication in international scientific journals and books. Importantly, unethical publishing standards are not acceptable, and cheating is despicable.
Abstract:
Knowledge of residual perturbations in the orbit of Uranus in the early 1840s did not lead to the refutation of Newton's law of gravitation but instead to the discovery of Neptune in 1846. Karl Popper asserts that this case is atypical of science and that the law of gravitation was at least prima facie falsified by these perturbations. I argue that these assertions are the product of a false, a priori methodological position I call 'Weak Popperian Falsificationism' (WPF). Further, on the evidence, the law was not prima facie false and was not generally considered so by astronomers at the time. Many of Popper's commentators (Kuhn, Lakatos, Feyerabend and others) presuppose WPF, and their views on this case and its implications for scientific rationality and method suffer from this same defect.
Abstract:
Arriving in Brisbane some six years ago, I could not help being impressed by what may be prosaically described as its atmospheric amenity resources. Perhaps this was in part due to my recent experiences in major urban centres in North America, but since that time, that sparkling quality and the blue skies seem to have progressively diminished. Unfortunately, there is also objective evidence available to suggest that this apparent deterioration is not merely the result of habituation of the senses. Air pollution data for the city show trends of increasing concentrations of those very substances that have destroyed the attractiveness of major population centres elsewhere, with climates initially as salubrious. Indeed, present figures indicate that photochemical smog in unacceptably high concentrations is rapidly becoming endemic over Brisbane as well. These regrettable developments should come as no surprise. Society at large has not been inclined to respond purposefully to warnings of impending environmental problems, despite the experiences and publicity from overseas and even from other cities within Australia. Nor, up to the present, have certain politicians and government officials displayed stances beyond those necessary for the maintenance of a decorum of concern. At this stage, there still exists the possibility for meaningful government action without the embarrassment of losing political favour with the electorate. On the contrary, there is every chance that such action may be turned to advantage with increased public enlightenment. It would be more than a pity to miss perhaps the final remaining opportunity: Queensland is one of the few remaining places in the world with sufficient resources to permit both rational development and high environmental quality. The choice appears to be one of making a relatively minor investment now for a large financial and social gain in the near future, or permitting Brisbane to degenerate gradually into just another stagnating Los Angeles or Sydney. The present monograph attempts to introduce the problem by reviewing the available research on air quality in the Brisbane area. It also tries to elucidate some seemingly obvious, but so far unapplied, management approaches. By necessity, such a broad treatment must make inroads into an extensive range of subject areas, from political and legal practices and public perceptions to scientific measurement, statistical analysis and the dynamics of air flow. Clearly, it does not pretend to be definitive in any of these fields, but it does try to emphasize those adjustable facets of the human use system of natural resources, too often neglected in favour of air pollution control technology. The crossing of disciplinary boundaries, however, needs no apology: air quality problems are ubiquitous, touching upon space, time and human interaction.
Abstract:
Some diverse indicators used to measure the innovation process are considered. They include those with an aggregate, and often national, focus, which rely on data from scientific publications, patents, R&D expenditures, etc. Others have a firm-level perspective, relying primarily on surveys or case studies. Also included are indicators derived from specialized databases, or from consensual agreements reached through foresight exercises. There is an obvious need for greater integration of the various approaches, to capture more effectively the richness of available data and better reflect the reality of innovation. The focus for such integration could be the area of technology strategy, which integrates the diverse scientific, technological, and innovation activities of firms within their operating environments; improved capacity to measure it has implications for policy-makers, managers and researchers.
Abstract:
Coastal wetlands are dynamic and include the freshwater-intertidal interface. In many parts of the world such wetlands are under pressure from increasing human populations and from predicted sea-level rise. Their complexity and the limited knowledge of processes operating in these systems combine to make them a management challenge. Adaptive management is advocated for complex ecosystem management (Hackney 2000; Meretsky et al. 2000; Thom 2000; National Research Council 2003). Adaptive management identifies management aims, makes an inventory/environmental assessment, plans management actions, implements these, assesses outcomes, and provides feedback to iterate the process (Holling 1978; Walters and Holling 1990). This allows for a dynamic management system that is responsive to change. In the area of wetland management, recent adaptive approaches are exemplified by Natuhara et al. (2004) for wild bird management, Bunch and Dudycha (2004) for a river system, Thom (2000) for restoration, and Quinn and Hanna (2003) for seasonal wetlands in California. There are many wetland habitats for which we currently have only rudimentary knowledge (Hackney 2000), emphasizing the need for good information as a prerequisite for effective management. The management framework must also provide a way to incorporate the best available science into management decisions and to use management outcomes as opportunities to improve scientific understanding and provide feedback to the decision system. Figure 9.1 shows a model developed by Anorov (2004), based on the process-response model of Maltby et al. (1994), that forms a framework for the science underlying an adaptive management system in the wetland context.
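To make the cycle concrete, here is a minimal Python sketch of the adaptive management loop the abstract outlines (assess, plan, implement, evaluate, feed back). The toy "wetland condition" model and all numbers are illustrative assumptions, not a model drawn from any of the cited studies.

```python
# Toy illustration of the adaptive management cycle; the response model and
# the numbers are invented for the example, not taken from the literature.
import random

def adaptive_management(target_condition=0.8, cycles=6):
    condition = 0.5                                       # initial inventory/assessment
    for i in range(1, cycles + 1):
        effort = max(0.0, target_condition - condition)   # plan actions from the gap
        response = effort * random.uniform(0.4, 0.9)      # implement: noisy system response
        condition = min(1.0, condition + response)        # assess outcomes
        print(f"cycle {i}: effort={effort:.2f}, condition={condition:.2f}")
        # feedback: the updated assessment drives the next cycle's plan
    return condition

if __name__ == "__main__":
    random.seed(1)
    adaptive_management()
```

The design point is the feedback edge: each cycle's plan is a function of the previous cycle's measured outcome, which is what makes the system responsive to change.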
Abstract:
Direct comparisons between photosynthetic O₂ evolution rate and electron transport rate (ETR) were made in situ over 24 h using the benthic macroalga Ulva lactuca (Chlorophyta), growing and measured at a depth of 1.8 m, where the midday irradiance rose to 400-600 μmol photons m⁻² s⁻¹. O₂ exchange was measured with a 5-chamber data-logging apparatus and ETR with a submersible pulse amplitude modulated (PAM) fluorometer (Diving-PAM). Steady-state quantum yield ((Fm′−Ft)/Fm′) decreased from 0.7 during the morning to 0.45 at midday, followed by some recovery in the late afternoon. At low to medium irradiances (0-300 μmol photons m⁻² s⁻¹), there was a significant correlation between O₂ evolution and ETR, but at higher irradiances ETR continued to increase steadily while O₂ evolution tended towards an asymptote. However, at high irradiance levels (600-1200 μmol photons m⁻² s⁻¹) ETR was significantly lowered. Two methods of measuring ETR, based on either diel ambient light levels and fluorescence yields or rapid light curves, gave similar results at low to moderate irradiance levels. Nutrient enrichment (increases in [NO₃⁻], [NH₄⁺] and [HPO₄²⁻] of 5- to 15-fold over ambient concentrations) resulted in an increase, within hours, in photosynthetic rates measured by both ETR and O₂ evolution techniques. At low irradiances, approximately 6.5 to 8.2 electrons passed through PS II during the evolution of one molecule of O₂, i.e., up to twice the theoretical minimum number of four. However, in nutrient-enriched treatments this ratio dropped to 5.1. The results indicate that PAM fluorescence can be used as a good indication of the photosynthetic rate only at low to medium irradiances.
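For readers unfamiliar with the fluorescence quantities compared here, the sketch below computes the steady-state quantum yield (Fm′ − Ft)/Fm′ and a relative ETR using the common PAM convention (yield × PAR × absorptance × 0.5 for the PSII fraction). The absorptance value and the sample fluorescence numbers are illustrative assumptions, not data from this study.

```python
# Worked sketch of the quantities the abstract compares. The 0.84 absorptance
# and the sample numbers are illustrative assumptions, not values from the paper.

def quantum_yield(fm_prime: float, ft: float) -> float:
    """Steady-state PSII quantum yield, (Fm' - Ft)/Fm'."""
    return (fm_prime - ft) / fm_prime

def etr(yield_psii: float, par: float, absorptance: float = 0.84) -> float:
    """Relative electron transport rate (umol electrons m^-2 s^-1);
    the 0.5 factor assumes photons are shared equally by PSII and PSI."""
    return yield_psii * par * absorptance * 0.5

if __name__ == "__main__":
    y = quantum_yield(fm_prime=1500.0, ft=825.0)  # gives 0.45, the midday value
    e = etr(y, par=400.0)                         # midday irradiance ~400 umol
    implied_o2 = e / 7.0                          # abstract: ~6.5-8.2 electrons per O2
    print(f"yield={y:.2f}  ETR={e:.1f}  implied O2 evolution={implied_o2:.1f}")
```

Dividing ETR by the measured O₂ evolution rate, in matching molar units, gives the electrons-per-O₂ ratio the abstract reports, against the theoretical minimum of four.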
Abstract:
Several anomalies occur in the developing neural and visceral head skeleton of young specimens of Neoceratodus forsteri that have been reared under laboratory conditions. These include anomalies of the basicranium and its derivatives, aberrations of the anterior mandible and hyoid apparatus, and abnormalities in the articulation of the jaws and the elements that produce them. Apart from the occasional absence of the basihyal, and the failure of the quadrate processes to form, the anomalies are not deficiencies. Most involve malformations of parts of the neurocranium and visceral skeleton, inappropriate articulations or fusions between elements, disunity in structures that are normally fused, and the appearance of supernumerary elements. The incidence of chondral anomalies, generally higher than that of aberrations in the dermal skeleton of juvenile lungfish, ranges from 1% to 10% in laboratory-reared individuals that have not been subjected to experimental interference. The anomalies differ from those found in many amphibian populations, in the field and in the laboratory, because they involve the cranium, not the limbs, and the lungfish have not been exposed to the factors that cause anomalies in the amphibians. It is unlikely that these anomalies, if reflected in the wild population, place a selective pressure on the lungfish because, in a normal season, less than 1% of the total number of eggs produced survive to be recruited into the adult population.
Abstract:
Occupational standards concerning allowable concentrations of chemical compounds in the ambient air of workplaces have been established in several countries worldwide. With the integration of the European Union (EU), there has been a need to establish harmonised Occupational Exposure Limits (OELs). The European Commission Directive 95/320/EC of 12 July 1995 tasked a Scientific Committee for Occupational Exposure Limits (SCOEL) with proposing, based on scientific data and where appropriate, occupational limit values, which may include the 8-h time-weighted average (TWA), short-term limits/excursion limits (STEL) and Biological Limit Values (BLVs). In 2000, the European Union issued a list of 62 chemical substances with Occupational Exposure Limits. Of these, 25 substances received a skin notation, indicating that toxicologically significant amounts may be taken up via the skin. For such substances, monitoring of concentrations in ambient air may not be sufficient, and biological monitoring strategies appear to be of potential importance in the medical surveillance of exposed workers. Recent progress has been made with respect to the formulation of a strategy for health-based BLVs. (c) 2005 Elsevier Ireland Ltd. All rights reserved.
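As a minimal worked example of the 8-h TWA named in the abstract: the TWA is the exposure-time-weighted mean concentration over an 8-h shift, TWA = Σ(cᵢtᵢ)/8. The shift profile and the comparison limit in the sketch below are illustrative assumptions, not values from Directive 95/320/EC.

```python
# Minimal 8-h time-weighted average (TWA) calculation; the shift profile and
# the 50 ppm comparison limit are invented for illustration.

def twa_8h(exposures):
    """exposures: iterable of (concentration_ppm, duration_h) pairs.
    Unexposed time simply contributes zero to the sum."""
    return sum(c * t for c, t in exposures) / 8.0

if __name__ == "__main__":
    shift = [(60.0, 2.0), (30.0, 4.0), (0.0, 2.0)]  # 2 h @ 60 ppm, 4 h @ 30 ppm, 2 h clean
    print(f"8-h TWA = {twa_8h(shift):.1f} ppm")     # 30.0 ppm, below a 50 ppm OEL
```

Note how a period at 60 ppm can still yield a compliant TWA; that is exactly why STELs exist alongside TWAs, and why skin-notation substances additionally call for biological monitoring.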
Abstract:
How can empirical evidence of adverse effects from exposure to noxious agents, which is often incomplete and uncertain, be used most appropriately to protect human health? We examine several important questions on the best uses of empirical evidence in regulatory risk management decision-making raised by the US Environmental Protection Agency (EPA)'s science policy concerning uncertainty and variability in human health risk assessment. In our view, the US EPA (and other agencies that have adopted similar views of risk management) can often improve decision-making by decreasing reliance on default values and assumptions, particularly when causation is uncertain. This can be achieved by more fully exploiting decision-theoretic methods and criteria that explicitly account for uncertain, possibly conflicting scientific beliefs, and that can be fully studied by advocates and adversaries of a policy choice in administrative decision-making involving risk assessment. Substituting decision-theoretic frameworks for default assumption-driven policies also allows stakeholder attitudes toward risk to be incorporated into policy debates, so that the public and risk managers can more explicitly identify the roles of risk-aversion or other attitudes toward risk and uncertainty in policy recommendations. Decision theory provides a sound scientific way to account explicitly for new knowledge and its effects on eventual policy choices. Although these improvements can complicate regulatory analyses, simplifying default assumptions can impose substantial costs on society and can prematurely cut off consideration of new scientific insights (e.g., possible beneficial health effects from exposure to sufficiently low 'hormetic' doses of some agents). In many cases, the administrative burden of applying decision-analytic methods is likely to be more than offset by the improved effectiveness of regulations in achieving desired goals. Because many foreign jurisdictions adopt US EPA reasoning and methods of risk analysis, it may be especially valuable to incorporate decision-theoretic principles that transcend local differences among jurisdictions.
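A minimal sketch of the decision-theoretic framing the abstract advocates: rather than committing to one default assumption about causation, each uncertain hypothesis is weighted by a belief probability and policies are compared by probability-weighted cost. The hypotheses, probabilities, and costs below are illustrative assumptions only.

```python
# Toy expected-cost comparison across uncertain dose-response hypotheses;
# every number here is invented for illustration.

# cost of each policy under each hypothesis about the agent's dose-response
costs = {
    "strict_limit":  {"linear_harm": 2.0, "threshold": 2.0, "hormetic": 3.0},
    "lenient_limit": {"linear_harm": 9.0, "threshold": 1.0, "hormetic": 0.5},
}
beliefs = {"linear_harm": 0.5, "threshold": 0.3, "hormetic": 0.2}  # sums to 1

def expected_cost(policy: str) -> float:
    """Probability-weighted cost of a policy over the competing hypotheses."""
    return sum(beliefs[h] * costs[policy][h] for h in beliefs)

if __name__ == "__main__":
    for policy in costs:
        print(f"{policy}: expected cost = {expected_cost(policy):.2f}")
    print("chosen policy:", min(costs, key=expected_cost))
```

A risk-averse variant would replace the plain expectation with, say, a weighting that emphasizes the worst hypothesis; making that choice explicit is how stakeholder attitudes toward risk enter the comparison, as the abstract argues.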
Abstract:
This article examines the seventeenth-century debate between the Dutch philosopher Benedict de Spinoza and the British scientist Robert Boyle, with a view to explicating what the twentieth-century French philosopher Gilles Deleuze considers to be the difference between science and philosophy. The two main themes that are usually drawn from the correspondence of Boyle and Spinoza, and used to polarize the exchange, are the different views on scientific methodology and on the nature of matter that are attributed to each correspondent. Commentators have tended to focus on one or the other of these themes in order to champion either Boyle or Spinoza in their assessment of the exchange. This paper draws upon the resources made available by Gilles Deleuze and Felix Guattari in their major work What is Philosophy?, in order to offer a more balanced account of the exchange, which in its turn contributes to our understanding of Deleuze and Guattari's conception of the difference between science and philosophy.