990 results for Science Libraries
Abstract:
Science education has been the subject of increasing public interest over the last few years. While a good part of this attention has been due to the fundamental reshaping of school curricula and teacher professional standards currently underway, there has been a heightened level of critical media commentary about the state of science education in schools and science teacher education in universities. In some cases, the commentary has been informed by sound evidence and balanced perspectives. More recently, however, a greater degree of ignorance and misrepresentation has crept into the discourse. This chapter provides background on the history and status of science teacher education in Australia, along with insights into recent developments and challenges.
Abstract:
Architecture today is often praised for its tectonics, floating volumes, and sensational, gravity-defying stunts of “starchitecture.” Yet every so often there is a building that inspires descriptions of the sublime, the experiential, and the power of light and architecture to transcend our expectations. The new Meinel Optical Sciences Research Building, designed by Phoenix-based Richärd+Bauer for the University of Arizona, Tucson, is one of these architectural rarities. It is already drawing comparisons to Louis Kahn's 1965 Salk Institute for Biological Studies in La Jolla, California, and the indescribable quality of light that characterizes the best of Kahn's work resonates in Richärd+Bauer's new building as well. Both an expansion and a renovation of the existing College of Optical Sciences facilities, the Meinel building includes teaching and research laboratories, six floors of offices, discussion areas, conference rooms, and an auditorium. The new 47,000-square-foot cast-in-place concrete structure, wrapped on three sides in copper-alloy panels, harmonizes with the largely brick vocabulary of the campus while reflecting the ethereal quality of the wide Arizona sky. The façade, however, is merely a prelude for what awaits inside, where light and architecture seamlessly combine to create moments of pure awe.
Abstract:
The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy, and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using the ‘magnitude-based inference’ approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
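The kind of probabilistic statement reported above can be reproduced in outline with a short simulation. The following Python sketch runs a basic Metropolis sampler for a mean treatment difference under a flat prior and reports the posterior probability that the effect exceeds a smallest-worthwhile-change threshold; all numbers (data, threshold, proposal scale) are illustrative, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-athlete differences in hemoglobin-mass change (g),
# LHTL minus IHE. Illustrative values, not the study's data.
diffs = rng.normal(loc=25.0, scale=30.0, size=10)

n = diffs.size
sigma = diffs.std(ddof=1)   # treat the sample SD as known, for simplicity
xbar = diffs.mean()

def log_post(mu):
    # Flat prior on mu; likelihood of the sample mean: xbar ~ N(mu, sigma^2/n)
    return -0.5 * (xbar - mu) ** 2 / (sigma ** 2 / n)

# Basic Metropolis sampler for the posterior of mu
samples = []
mu = xbar
for _ in range(20_000):
    prop = mu + rng.normal(0.0, 5.0)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    samples.append(mu)
post = np.array(samples[2_000:])   # discard burn-in

swc = 10.0   # smallest worthwhile change (illustrative threshold)
print(f"P(LHTL - IHE > {swc}) = {np.mean(post > swc):.2f}")
```

Because the posterior is sampled directly, statements such as "a 0.96 probability of a substantially greater increase" fall out as simple tail proportions of the posterior draws, with no asymptotic approximation or multiple-testing adjustment required.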
Abstract:
This work focuses on the role of macroseismology in the assessment of seismicity and probabilistic seismic hazard in Northern Europe. The main type of data under consideration is a set of macroseismic observations available for a given earthquake. The macroseismic questionnaires used to collect earthquake observations from local residents since the late 1800s constitute a special part of the seismological heritage in the region. Information on the earthquakes felt on the coasts of the Gulf of Bothnia between 31 March and 2 April 1883 and on 28 July 1888 was retrieved from contemporary Finnish and Swedish newspapers, while the earthquake of 4 November 1898 GMT is an example of an early systematic macroseismic survey in the region. A data set of more than 1200 macroseismic questionnaires is available for the earthquake in Central Finland on 16 November 1931. Basic macroseismic investigations, including the preparation of new intensity data point (IDP) maps, were conducted for these earthquakes. Previously disregarded usable observations were found in the press. The improved collection of IDPs for the 1888 earthquake shows that this event was a rare occurrence in the area: in contrast to earlier notions, it was felt on both sides of the Gulf of Bothnia. The data on the earthquake of 4 November 1898 GMT were augmented with historical background information discovered in various archives and libraries. This earthquake was of some concern to the authorities, because extra fire inspections were conducted in at least three towns, namely Tornio, Haparanda and Piteå, located in the centre of the area of perceptibility. This event posed the indirect hazard of fire, although its magnitude of around 4.6 was minor on a global scale. The distribution of slightly damaging intensities was larger than previously outlined. This may have resulted from the amplification of ground shaking in the soft soil of the coast and river valleys, where most of the population was found. The large data set of the 1931 earthquake provided an opportunity to apply statistical methods and assess methodologies that can be used when dealing with macroseismic intensity; the data set was evaluated using correspondence analysis. Different approaches, such as gridding, were tested to estimate the macroseismic field from intensity values distributed irregularly in space. In general, the characteristics of intensity warrant careful consideration, and a more pervasive perception of intensity as an ordinal quantity affected by uncertainties is advocated. A parametric earthquake catalogue comprising entries from both the macroseismic and instrumental eras was used for probabilistic seismic hazard assessment. The parametric-historic methodology was applied to estimate seismic hazard at a given site in Finland and to prepare a seismic hazard map for Northern Europe. The interpretation of these results is an important issue, because the recurrence times of damaging earthquakes may well exceed thousands of years in an intraplate setting such as Northern Europe. This application may therefore be seen as an example of short-term hazard assessment.
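As an illustration of the gridding step mentioned above, the sketch below interpolates irregularly spaced intensity data points onto a regular grid with inverse-distance weighting. The coordinates and intensity values are invented, and the thesis's own caveat applies: averaging an ordinal quantity such as intensity is itself a methodological choice, not a neutral operation.

```python
import numpy as np

# Hypothetical intensity data points (IDPs): lon, lat, macroseismic intensity.
# Illustrative values, not from the 1931 Central Finland data set.
idps = np.array([
    (25.7, 62.2, 5.0),
    (26.1, 62.6, 4.0),
    (25.2, 62.0, 4.5),
    (27.0, 62.4, 3.0),
])

def idw(grid_x, grid_y, points, power=2.0):
    """Inverse-distance-weighted estimate of the macroseismic field on a grid."""
    xs, ys, vals = points[:, 0], points[:, 1], points[:, 2]
    field = np.empty((len(grid_y), len(grid_x)))
    for i, gy in enumerate(grid_y):
        for j, gx in enumerate(grid_x):
            d = np.hypot(xs - gx, ys - gy)
            if d.min() < 1e-9:                # grid node coincides with an IDP
                field[i, j] = vals[d.argmin()]
            else:
                w = 1.0 / d ** power
                field[i, j] = (w * vals).sum() / w.sum()
    return field

grid_x = np.linspace(25.0, 27.0, 5)
grid_y = np.linspace(62.0, 62.8, 5)
print(idw(grid_x, grid_y, idps).round(1))
```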
Abstract:
The Body Area Network (BAN) is an emerging technology that focuses on monitoring physiological data in, on and around the human body. BAN technology permits wearable and implanted sensors to collect vital data about the human body and transmit it to other nodes via low-energy communication. In this paper, we investigate interactions in terms of data flows between parties involved in BANs under four different scenarios targeting outdoor and indoor medical environments: hospital, home, emergency and open areas. Based on these scenarios, we identify data flow requirements between BAN elements such as sensors and control units (CUs) and parties involved in BANs such as the patient, doctors, nurses and relatives. Identified requirements are used to generate BAN data flow models. Petri Nets (PNs) are used as the formal modelling language. We check the validity of the models and compare them with the existing related work. Finally, using the models, we identify communication and security requirements based on the most common active and passive attack scenarios.
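A Petri net in this setting is just places (data states), transitions (flows), and a firing rule over a marking. The minimal sketch below models one hypothetical flow, sensor to CU to doctor, as a marked net; the place and transition names are illustrative and not taken from the paper's models.

```python
# Minimal Petri net sketch of one BAN data flow: a sensor reading passes
# through the control unit (CU) to a doctor. Names are hypothetical.
marking = {"sensor_data": 1, "cu_buffer": 0, "doctor_inbox": 0}

# transition name -> (input places with token demands, output places)
transitions = {
    "sensor_to_cu": ({"sensor_data": 1}, {"cu_buffer": 1}),
    "cu_to_doctor": ({"cu_buffer": 1}, {"doctor_inbox": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    pre, post = transitions[name]
    assert enabled(name), f"{name} is not enabled"
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

fire("sensor_to_cu")
fire("cu_to_doctor")
print(marking)   # {'sensor_data': 0, 'cu_buffer': 0, 'doctor_inbox': 1}
```

Validity checks of the kind the paper describes, for example that a doctor can only receive a reading that has passed through the CU, then reduce to reachability questions over the net's markings.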
Abstract:
This paper reports the development of a new class of single-pan, high-efficiency, low-emission stoves, named gasifier stoves, that promise constant, controllable power using any solid biomass fuel in the form of pellets. These stoves use a battery-run, fan-based air supply for gasification (primary air) and for combustion (secondary air). A design with the correct secondary air flow ensures near-stoichiometric combustion, which allows attainment of peak combustion temperatures with accompanying high water-boiling efficiencies (up to 50% for vessels of practical relevance) and very low emissions (of carbon monoxide, particulate matter and oxides of nitrogen). The use of high-density agro-residue-based pellets or coconut shell pieces ensures an operational duration of about an hour or more at power levels of 3 kWth (about 12 g/min). The principles involved and the optimization aspects of the design are outlined. The dependence of efficiency and emissions on the design parameters is described. The field imperatives that drive the choice of the rechargeable battery source and the fan are brought out. The implications of the development of the Oorja-Plus and OorjaSuper stoves for the domestic cooking scenario of India are briefly discussed. The process development, testing and internal qualification tasks were undertaken by the Indian Institute of Science; product development and fuel pellet production were dealt with by First Energy Private Ltd. Close interaction at several points during this period helped progress the project from the laboratory to large-scale commercial operation. At this time, over four hundred thousand stoves and 30 kilotonnes of fuel have been sold in four states in India.
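The quoted operating point can be sanity-checked with a one-line energy balance, sketched below; the pellet calorific value is an assumed typical figure for agro-residue pellets, not a number taken from the paper.

```python
# Back-of-envelope check of the quoted operating point: 3 kWth at ~12 g/min.
fuel_rate_g_per_s = 12.0 / 60.0         # 12 g/min -> 0.2 g/s
calorific_value_kj_per_g = 16.0         # assumed ~16 MJ/kg for pellet fuel
power_kw = fuel_rate_g_per_s * calorific_value_kj_per_g
print(f"thermal power ~ {power_kw:.1f} kWth")   # ~3.2 kWth, consistent with 3 kWth
```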
Abstract:
This paper reports on findings from the Interests and Recruitment in Science study, which explored the experiences of first-year students studying science, technology, engineering and mathematics (STEM) courses in Australian universities. First-year STEM students who went to school in rural or regional areas were as engaged, aspirational and motivated as their more metropolitan counterparts. However, they were less likely to have studied physics or advanced mathematics, and more likely to have enrolled in an Agricultural or Environmental Science degree. The relationships between these results and broader contextual issues, such as employment and higher education budgetary and policy settings, are discussed.
Abstract:
In 2002, a number of lecturers from different clinical schools within the Faculty of Health Sciences at La Trobe University embarked on the development of a new interdisciplinary professional practice subject to be undertaken by all final-year undergraduate health science students. The subject was designed to better prepare students for their first professional appointment by introducing them to the concepts of interdisciplinary teamwork, the health care context, and the challenges and constraints that organizational contexts present. This report details the background of the project, the consultation and development that took place in the design of the subject, and the implementation of the subject. The uniqueness of the project lies in the number of disciplines involved, the online delivery, and the focus on a set of generic graduate attributes for health science students. It is hoped that students who have undertaken this subject will have a better understanding of the roles of other health professionals and of the context in which they will be working, having grappled with many real-life professional issues that they will face when they graduate and enter the workforce.
Abstract:
This thesis describes current and past n-in-one methods and presents three early experimental studies, using mass spectrometry and the triple quadrupole instrument, on the application of n-in-one in drug discovery. The n-in-one strategy pools and mixes samples in drug discovery prior to measurement or analysis. This allows the most promising compounds to be rapidly identified and then analysed. Nowadays, the properties of drugs are characterised earlier and in parallel with pharmacological efficacy. The studies presented here use in vitro methods such as Caco-2 cells and immobilized artificial membrane (IAM) chromatography for drug absorption and lipophilicity measurements. The high sensitivity and selectivity of liquid chromatography mass spectrometry are especially important for new analytical methods using n-in-one. In the first study, the fragmentation patterns of ten nitrophenoxy benzoate compounds, a homologous series, were characterised and the presence of the compounds was determined in a combinatorial library. The influence of one or two nitro substituents and of alkyl chain lengths from methyl to pentyl on collision-induced fragmentation was studied, and interesting structure–fragmentation relationships were detected. Compounds with two nitro groups fragmented more than those with one, whereas less fragmentation was noted in molecules with a longer alkyl chain. The most abundant product ions were nitrophenoxy ions, which were also tested in precursor ion screening of the combinatorial library. In the second study, the immobilized artificial membrane chromatographic method was transferred from ultraviolet detection to mass spectrometric analysis and a new method was developed. Mass spectra were scanned and the chromatographic retention of compounds was analysed using extracted ion chromatograms. When changing detectors and buffers and including n-in-one in the method, the results showed good correlation. Finally, the results demonstrated that mass spectrometric detection with gradient elution can provide a rapid and convenient n-in-one method for ranking the lipophilic properties of several structurally diverse compounds simultaneously. In the final study, a new method was developed for Caco-2 samples. Compounds were separated by liquid chromatography and quantified by selected reaction monitoring using mass spectrometry. This method was applied to Caco-2 samples in which the absorption of ten chemically and physiologically different compounds was screened using both single and n-in-one approaches. These three studies used mass spectrometry for compound identification, method transfer and quantitation in the area of mixture analysis. Different mass spectrometric scanning modes of the triple quadrupole instrument were used in the different methods. Early drug discovery with n-in-one is an area where mass spectrometric analysis, its possibilities and its proper use, is especially important.
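The extracted-ion-chromatogram step described in the second study can be illustrated with a toy trace: pick one m/z channel from the scanned spectra and read off its retention-time peak, which under IAM chromatography serves as a lipophilicity ranking. The data below are synthetic and the peak position is arbitrary.

```python
import numpy as np

# Toy extracted ion chromatogram (XIC) for one m/z channel: a Gaussian
# elution peak at 6.2 min plus baseline noise. Entirely synthetic data.
rt = np.linspace(0.0, 10.0, 500)                    # retention time, min
trace = np.exp(-0.5 * ((rt - 6.2) / 0.15) ** 2)     # elution peak
trace += 0.02 * np.random.default_rng(0).random(rt.size)

peak_rt = rt[trace.argmax()]
print(f"retention time ~ {peak_rt:.2f} min")

# Longer IAM retention implies higher lipophilicity, so a pooled n-in-one
# sample can be ranked compound-by-compound from the XIC peak times.
```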
Abstract:
According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. During the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim for the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science. Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.
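For readers unfamiliar with the formalism, a combinatorial-state automaton differs from an ordinary finite-state automaton in that its states are vectors of substates, and each component is updated by a rule that may consult the whole state and input vectors. The toy Python encoding below is a sketch for illustration only, not the thesis's formal definition.

```python
# Toy combinatorial-state automaton (CSA): the state is a vector of binary
# substates, and each component's next value depends on the whole current
# state vector plus the input. Hypothetical encoding, not Chalmers's own.
def step(state, inp):
    """One CSA transition: component-wise update rules over the full vector."""
    a, b = state
    return ((a ^ inp) & 1,   # component 0 flips whenever the input is 1
            (a | b) & 1)     # component 1 latches once any 1 has been seen

state = (0, 0)
for inp in (1, 0, 1):
    state = step(state, inp)
print(state)                 # (0, 1)
```

Framed this way, the relativist worry becomes precise: what counts as a physical system implementing such a transition structure, and how much freedom does the mapping from physical states to substate vectors allow?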
Abstract:
One of the effects of the Internet is that the dissemination of scientific publications has, within a few years, migrated to electronic formats. The basic business practices between libraries and publishers for selling and buying content, however, have not changed much. In protest against the high subscription prices of mainstream publishers, scientists have started Open Access (OA) journals and e-print repositories, which distribute scientific information freely. Despite widespread agreement among academics that OA would be the optimal distribution mode for publicly financed research results, such channels still constitute only a marginal phenomenon in the global scholarly communication system. This paper discusses, in view of the experiences of the last ten years, the many barriers hindering a rapid proliferation of Open Access. The discussion is structured according to the main OA channels: peer-reviewed journals for primary publishing, and subject-specific and institutional repositories for secondary parallel publishing. It also discusses the types of barriers, which can be classified as relating to the legal framework, the information technology infrastructure, business models, indexing services and standards, the academic reward system, marketing, and critical mass.