999 results for Source Expertise
Abstract:
A body of research in conversation analysis has identified a range of structurally provided positions in which sources of trouble in talk-in-interaction can be addressed using repair. These practices are contained within what Schegloff (1992) calls the repair space. In this paper, I examine a rare instance in which a source of trouble is not resolved within the repair space and comes to be addressed outside of it. The practice by which this occurs is a post-completion account; that is, an account produced after the possible completion of the sequence containing a source of trouble. Unlike fourth position repair, the final repair position available within the repair space, this account is not made in preparation for a revised response to the trouble-source turn. Its more restrictive aim, rather, is to circumvent an ongoing difference between the parties involved. I argue that because the trouble is addressed in this manner, and in this particular position, the repair space can be considered limited to the sequence in which a source of trouble originates.
Abstract:
For the evaluation, design, and planning of traffic facilities and measures, traffic simulation packages are the de facto tools for consultants, policy makers, and researchers. However, the available commercial simulation packages do not always offer the desired workflow and flexibility for academic research. In many cases, researchers resort to designing and building their own dedicated models, without an intrinsic incentive (or the practical means) to make the results available in the public domain. To make matters worse, a substantial part of these efforts goes into rebuilding basic functionality and, in many respects, reinventing the wheel. This problem not only affects the research community but adversely affects the entire traffic simulation community and frustrates the development of traffic simulation in general. To address this problem, this paper describes an open source approach, OpenTraffic, which is being developed as a collaborative effort between the Queensland University of Technology, Australia; the National Institute of Informatics, Tokyo; and the Technical University of Delft, the Netherlands. The OpenTraffic simulation framework enables academics from different geographic areas and disciplines within the traffic domain to work together and contribute to a specific topic of interest, ranging from travel choice behavior to car following, and from response to intelligent transportation systems to activity planning. The modular approach enables users of the software to focus on their area of interest, whereas other functional modules can be regarded as black boxes. Specific attention is paid to a standardization of data inputs and outputs for traffic simulations. Such standardization will allow the sharing of data with many existing commercial simulation packages.
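The abstract does not specify OpenTraffic's actual API. As a hypothetical illustration of the modular, black-box approach it describes, the sketch below shows how a behavioural component such as car following might be exposed as an interchangeable module; the class names and parameters are assumptions, and the Intelligent Driver Model (Treiber et al.) stands in for any contributed model.

```python
from abc import ABC, abstractmethod
import math

class CarFollowingModel(ABC):
    """A pluggable behavioural module: the framework can treat any
    implementation as a black box behind this interface."""
    @abstractmethod
    def acceleration(self, v, gap, dv):
        """Acceleration (m/s^2) given own speed v, gap to leader, and
        approach rate dv = v - v_leader."""

class IDM(CarFollowingModel):
    """Intelligent Driver Model as one interchangeable implementation."""
    def __init__(self, v0=33.3, T=1.5, a=1.0, b=2.0, s0=2.0):
        self.v0, self.T, self.a, self.b, self.s0 = v0, T, a, b, s0

    def acceleration(self, v, gap, dv):
        # Desired dynamic gap, then the standard IDM acceleration law.
        s_star = self.s0 + max(0.0, v * self.T + v * dv /
                               (2 * math.sqrt(self.a * self.b)))
        return self.a * (1 - (v / self.v0) ** 4 - (s_star / gap) ** 2)

model = CarFollowingModel.__subclasses__()[0]()  # i.e. IDM()
acc = model.acceleration(v=20.0, gap=30.0, dv=0.0)
```

A researcher interested only in, say, route choice could swap in a different `CarFollowingModel` subclass without touching the rest of the framework.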
Abstract:
The steady problem of free surface flow due to a submerged line source is revisited for the case in which the fluid depth is finite and there is a stagnation point on the free surface directly above the source. Both the strength of the source and the fluid speed in the far field are measured by a dimensionless parameter, the Froude number. By applying techniques in exponential asymptotics, it is shown that there is a train of periodic waves on the surface of the fluid with an amplitude which is exponentially small in the limit that the Froude number vanishes. This study clarifies that periodic waves do form for flows due to a source, contrary to a suggestion by Chapman & Vanden-Broeck (2006, J. Fluid Mech., 567, 299--326). The exponentially small nature of the waves means they appear beyond all orders of the original power series expansion; this result explains why attempts at describing these flows using a finite number of terms in an algebraic power series incorrectly predict a flat free surface in the far field.
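To sketch what "beyond all orders" means here (a schematic only; the constant $\alpha > 0$ and the wave phase depend on the flow geometry and are not given in the abstract): every term of the algebraic power series in the Froude number $F$ is wave-free, and the periodic waves enter at an amplitude that no power of $F$ can capture,

```latex
% Schematic small-Froude structure of the free surface elevation \eta:
% the power series is flat in the far field; the waves are exponentially small.
\eta(x) \sim \sum_{n=0}^{N} \eta_n(x)\, F^{2n}
  \;+\; \underbrace{A(x)\,
      \exp\!\left(-\frac{\alpha}{F^{2}}\right)\cos\theta(x)}_{\text{exponentially small waves}},
  \qquad F \to 0 .
```

Since $\exp(-\alpha/F^{2})$ vanishes faster than $F^{2n}$ for every $n$, truncating the series at any finite $N$ necessarily predicts a flat far-field surface, which is exactly the failure the abstract describes.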
Abstract:
The content of the school science curriculum has always been an interplay, or contest, between the interests of a number of stakeholders who have an interest in establishing it at a new level of schooling or in changing its current form. For most of its history, the interplay was dominated by the interests of academic scientists, but in the 1980s the needs of both future scientists and future citizens began to be more evenly balanced as science educators promoted a wider sense of science. The contest changed again in the 1990s, with super-ordinate control being exerted by government bureaucrats at the expense of the subject experts. This change coincides with the rise in a number of countries of a market view of education, and of science education in particular, accompanied by demands for public accountability via simplistic auditing measures. This shift from expertise to bureaucracy, and its consequences for the quality of science education, is illustrated with five case studies of science curriculum reform in Australia.
Abstract:
None of the currently used tonometers produces estimated IOP values that are free of errors. Measurement unreliability arises from the indirect measurement of corneal deformation and the fact that pressure calculations are based on population-averaged parameters of the anterior segment. Reliable IOP values are crucial for understanding and monitoring a number of eye pathologies, e.g. glaucoma. We have combined high speed swept source OCT with an air-puff chamber. The system provides direct measurement of the deformation of the cornea and the anterior surface of the lens. This paper describes in detail the performance of the air-puff ssOCT instrument. We present different approaches to data presentation and analysis. Changes in deformation amplitude appear to be a good indicator of IOP changes. However, it seems that in order to provide accurate intraocular pressure values, additional information on corneal biomechanics is necessary. We believe that such information could be extracted from data provided by air-puff ssOCT.
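The abstract does not detail the analysis pipeline. As a minimal, hypothetical sketch of the "deformation amplitude" quantity it refers to — assuming the corneal apex position is tracked frame by frame in the ssOCT sequence — the amplitude can be taken as the peak apex displacement from its pre-puff baseline:

```python
def deformation_amplitude(apex_positions, baseline=None):
    """Peak corneal apex displacement (same units as input) during an
    air puff. apex_positions: axial apex position over time, as tracked
    in successive ssOCT frames; baseline defaults to the first
    (pre-puff) sample. This is an illustrative metric, not the
    instrument's actual algorithm."""
    if baseline is None:
        baseline = apex_positions[0]
    return max(abs(p - baseline) for p in apex_positions)

# Synthetic trace: the apex deflects by up to 0.9 mm and recovers.
trace_mm = [0.0, 0.1, 0.4, 0.8, 0.9, 0.7, 0.3, 0.05]
amp = deformation_amplitude(trace_mm)
```

Under the abstract's observation, a stiffer (higher-IOP) eye would yield a smaller `amp` for the same puff, which is why corneal biomechanics must also be accounted for before converting amplitude to pressure.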
Abstract:
A general electrical model of a piezoelectric transducer for ultrasound applications consists of a capacitor in parallel with RLC legs. A high power voltage source converter can, however, generate significant voltage stress across the transducer, which creates high leakage currents. One solution is to reduce the voltage stress across the piezoelectric transducer by using an LC filter; a main drawback, however, is that the filter changes the piezoelectric resonant frequency and its characteristics, thereby reducing the efficiency of energy conversion through the transducer. This paper proposes a high frequency current source converter as a suitable topology for driving high power piezoelectric transducers efficiently.
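The "capacitor in parallel with RLC legs" model is the standard Butterworth-Van Dyke equivalent circuit. As a sketch of it with a single motional leg (the component values below are illustrative assumptions, not taken from the paper), its impedance and series resonance can be computed directly:

```python
import math

def bvd_impedance(f, C0, R1, L1, C1):
    """Impedance of the Butterworth-Van Dyke model at frequency f (Hz):
    shunt capacitance C0 in parallel with one series RLC (motional) leg."""
    w = 2 * math.pi * f
    z_leg = complex(R1, w * L1 - 1 / (w * C1))   # series RLC branch
    z_c0 = complex(0, -1 / (w * C0))             # shunt capacitor
    return z_leg * z_c0 / (z_leg + z_c0)         # parallel combination

# Assumed transducer values, giving a series resonance near 28 kHz.
C0, R1, L1, C1 = 4e-9, 50.0, 80e-3, 0.4e-9
f_s = 1 / (2 * math.pi * math.sqrt(L1 * C1))     # motional (series) resonance
```

Near `f_s` the motional reactance cancels and the impedance collapses toward `R1`; off resonance the transducer looks strongly capacitive, which is why a stiff voltage source drives large leakage currents through `C0` while a current source converter can deliver power into the motional leg more selectively.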
Abstract:
On August 16, 2012, the SIGIR 2012 Workshop on Open Source Information Retrieval was held as part of the SIGIR 2012 conference in Portland, Oregon, USA. There were two invited talks, one from industry and one from academia. Six full papers and six short papers were presented, as well as demonstrations of four open source tools. Finally, there was a lively discussion on future directions for the open source Information Retrieval community. This contribution discusses the events of the workshop and outlines future directions for the community.
Abstract:
Understanding the link between tectonically driven extensional faulting and volcanism is crucial from a hazard perspective in active volcanic environments, while ancient volcanic successions provide records of how volcanic eruption styles, compositions, magnitudes and frequencies can change in response to extension timing, distribution and intensity. This study draws on the intimate relationships between volcanism and extension preserved in the Sierra Madre Occidental (SMO) and Gulf of California (GoC) regions of western Mexico. Here, a major Oligocene rhyolitic ignimbrite “flare-up” (>300,000 km3) switched to a dominantly bimodal and mixed effusive-explosive volcanic phase in the Early Miocene (~100,000 km3), associated with distributed extension and the opening of numerous grabens. Rhyolitic dome fields were emplaced along graben edges and at intersections of cross-graben and graben-parallel structures during the early stages of graben development. Concomitant with this change in rhyolite eruption style was a change in crustal source, revealed by zircon chronochemistry, with rapid rates of rhyolite magma generation due to remelting of mid- to upper-crustal, highly differentiated igneous rocks emplaced during earlier SMO magmatism. Extension became more focused ~18 Ma, resulting in volcanic activity being localised along the site of GoC opening. This localised volcanism (known as the Comondú “arc”) was dominantly effusive and andesite-dacite in composition. This compositional change resulted from increased mixing of basaltic and rhyolitic magmas rather than fluid-flux melting of the mantle wedge above the subducting Guadalupe Plate. A poor understanding of the space-time relationships of volcanism and extension has thus led to incorrect past tectonic interpretations of Comondú-age volcanism.
Abstract:
The Beauty Leaf tree (Calophyllum inophyllum) is a potential source of non-edible vegetable oil for producing future generation biodiesel because of its ability to grow in a wide range of climate conditions, easy cultivation, high fruit production rate, and the high oil content in the seed. This plant naturally occurs in the coastal areas of Queensland and the Northern Territory in Australia, and is also widespread in south-east Asia, India and Sri Lanka. Although Beauty Leaf is traditionally used as a source of timber and as an ornamental plant, its potential as a source of second generation biodiesel is yet to be exploited. In this study, the extraction process from the Beauty Leaf oil seed has been optimised in terms of seed preparation, moisture content and oil extraction methods. The two methods that have been considered to extract oil from the seed kernel are mechanical oil extraction using an electric powered screw press, and chemical oil extraction using n-hexane as an oil solvent. The study found that seed preparation has a significant impact on oil yields, especially in the screw press extraction method. Kernels prepared to 15% moisture content provided the highest oil yields for both extraction methods. Mechanical extraction using the screw press can produce oil from correctly prepared product at a low cost; however, overall this method is ineffective, with relatively low oil yields. Chemical extraction was found to be a very effective method for oil extraction owing to its consistent performance and high oil yield, but the cost of production was relatively higher due to the high cost of the solvent. However, a solvent recycling system can be implemented to reduce the production cost of Beauty Leaf biodiesel. The findings of this study are expected to serve as the basis from which industrial scale biodiesel production from Beauty Leaf can be developed.
Abstract:
We describe a pedagogical approach that addresses challenges in design education for novices. These include an inability to frame new problems and limited-to-no design capability or domain knowledge. Such challenges can reduce student engagement with design practice and cause derivative design solutions, as well as the inappropriate simplification of design assignments and assessment criteria by educators. We argue that a curriculum that develops the student’s design process will enable them to deal with the uncertain and dynamic situations that characterise design. We describe how this may be achieved and explain our pedagogical approach in terms of methods from Reflective Practice and theories of abstraction and creativity. We present a recently taught landscape architecture unit as an example. It comprises design exercises that require little domain or design expertise in order to support the development of conceptual thinking and a design rationale. We show how this approach (a) leveraged the novice’s existing spatial and thinking skills while (b) retaining contextually rich design situations. Examples of the design exercises taught are described, along with samples of student work. The assessment rationale is also presented and explained. Finally, we conclude by reflecting on how this approach relates to innovation, sustainability and other disciplines.
Abstract:
Particulate matter research is essential because of the well known significant adverse effects of aerosol particles on human health and the environment. In particular, identification of the origin or sources of particulate matter emissions is of paramount importance in assisting efforts to control and reduce air pollution in the atmosphere. This thesis aims to: identify the sources of particulate matter; compare pollution conditions at urban, rural and roadside receptor sites; combine information about the sources with meteorological conditions at the sites to locate the emission sources; compare sources based on particle size or mass; and ultimately, provide the basis for control and reduction in particulate matter concentrations in the atmosphere. To achieve these objectives, data was obtained from assorted local and international receptor sites over long sampling periods. The samples were analysed using Ion Beam Analysis and Scanning Mobility Particle Sizer methods to measure the particle mass with chemical composition and the particle size distribution, respectively. Advanced data analysis techniques were employed to derive information from large, complex data sets. Multi-Criteria Decision Making (MCDM), a ranking method, drew on data variability to examine the overall trends, and provided the rank ordering of the sites and years that sampling was conducted. Coupled with the receptor model Positive Matrix Factorisation (PMF), the pollution emission sources were identified and meaningful information pertinent to the prioritisation of control and reduction strategies was obtained. This thesis is presented in the thesis by publication format. It includes four refereed papers which together demonstrate a novel combination of data analysis techniques that enabled particulate matter sources to be identified and sampling site/year ranked. 
The strength of this source identification process was corroborated when the analysis procedure was expanded to encompass multiple receptor sites. Initially applied to identify the contributing sources at roadside and suburban sites in Brisbane, the technique was subsequently applied to three receptor sites (roadside, urban and rural) located in Hong Kong. The comparable results from these international and national sites over several sampling periods indicated similarities in source contributions between receptor site-types, irrespective of global location and suggested the need to apply these methods to air pollution investigations worldwide. Furthermore, an investigation into particle size distribution data was conducted to deduce the sources of aerosol emissions based on particle size and elemental composition. Considering the adverse effects on human health caused by small-sized particles, knowledge of particle size distribution and their elemental composition provides a different perspective on the pollution problem. This thesis clearly illustrates that the application of an innovative combination of advanced data interpretation methods to identify particulate matter sources and rank sampling sites/years provides the basis for the prioritisation of future air pollution control measures. Moreover, this study contributes significantly to knowledge based on chemical composition of airborne particulate matter in Brisbane, Australia and on the identity and plausible locations of the contributing sources. Such novel source apportionment and ranking procedures are ultimately applicable to environmental investigations worldwide.
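The thesis's receptor model, Positive Matrix Factorisation, decomposes the sample-by-species data matrix into source contributions and source profiles under non-negativity constraints. The sketch below uses plain Lee-Seung multiplicative-update NMF as a simplified stand-in: PMF proper additionally weights each residual by its measurement uncertainty, which is omitted here, and the synthetic data are invented for illustration.

```python
import numpy as np

def nmf(X, n_factors, n_iter=1000, seed=0):
    """Non-negative factorisation X ~ G @ F: rows of F are source
    profiles (species signatures), columns of G are per-sample source
    contributions. Simplified stand-in for PMF (no uncertainty weights)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, n_factors))
    F = rng.random((n_factors, m))
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity by construction.
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F

# Two synthetic "sources" mixed across 40 samples of 6 chemical species.
rng = np.random.default_rng(1)
true_F = np.array([[5, 1, 0, 2, 0, 1],
                   [0, 2, 4, 0, 3, 0]], dtype=float)
true_G = rng.random((40, 2))
X = true_G @ true_F
G, F = nmf(X, n_factors=2)
err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

In a real apportionment study, the recovered profiles in `F` are matched against known source signatures (e.g. vehicle exhaust, sea salt) and the contributions in `G` are combined with meteorological data to locate the emitters.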
Abstract:
In a recent paper, Gordon, Muratov, and Shvartsman studied a partial differential equation (PDE) model describing radially symmetric diffusion and degradation in two and three dimensions. They paid particular attention to the local accumulation time (LAT), also known in the literature as the mean action time, which is a spatially dependent timescale that can be used to provide an estimate of the time required for the transient solution to effectively reach steady state. They presented exact results for three-dimensional applications and gave approximate results for the two-dimensional analogue. Here we make two generalizations of Gordon, Muratov, and Shvartsman’s work: (i) we present an exact expression for the LAT in any dimension and (ii) we present an exact expression for the variance of the distribution. The variance provides useful information regarding the spread about the mean that is not captured by the LAT. We conclude by describing further extensions of the model that were not considered by Gordon, Muratov, and Shvartsman. We have found that exact expressions for the LAT can also be derived for these important extensions.
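The abstract does not reproduce the formulas; as a sketch of the definitions commonly used in the mean-action-time literature (assuming the concentration $c(x,t)$ starts from zero and relaxes to the steady state $c_s(x)$), the LAT and the variance it discusses take the form

```latex
% F(x,t) measures the approach to steady state; \tau is the LAT, and
% \sigma^2 is the variance about it. Integration by parts gives the
% second form of \tau, assuming t\,F(x,t) \to 0 as t \to \infty.
F(x,t) = 1 - \frac{c(x,t)}{c_s(x)}, \qquad
\tau(x) = \int_0^{\infty} t \left(-\frac{\partial F}{\partial t}\right) \mathrm{d}t
        = \int_0^{\infty} F(x,t)\,\mathrm{d}t, \qquad
\sigma^2(x) = \int_0^{\infty} t^2 \left(-\frac{\partial F}{\partial t}\right) \mathrm{d}t
            - \tau(x)^2 .
```

Treating $-\partial F/\partial t$ as a probability density over arrival times is what makes $\tau$ a mean and $\sigma^2$ a genuine variance, quantifying the spread about the mean that the LAT alone does not capture.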
Abstract:
The electron Volt Spectrometer (eVS) is an inverse geometry filter difference spectrometer that has been optimised to measure the single atom properties of condensed matter systems using a technique known as Neutron Compton Scattering (NCS) or Deep Inelastic Neutron Scattering (DINS). The spectrometer utilises the high flux of epithermal neutrons produced by the ISIS neutron spallation source, enabling the direct measurement of atomic momentum distributions and ground state kinetic energies. In this paper the procedure used to calibrate the spectrometer is described. This includes details of the method used to determine detector positions and neutron flight path lengths, as well as the determination of the instrument resolution. Example measurements on three different samples (ZrH2, 4He and Sn) are shown, demonstrating the self-consistency of the calibration procedure.
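The flight path lengths being calibrated enter through the basic time-of-flight kinematics. As an illustrative sketch (not the instrument's calibration code; the path lengths and incident energy below are assumed values), in an inverse-geometry spectrometer a neutron travels the primary leg at its incident energy and the secondary leg at the fixed analyser energy:

```python
import math

M_N = 1.67492749804e-27   # neutron mass, kg
EV = 1.602176634e-19      # joules per electron volt

def flight_time(E0_eV, E1_eV, L0, L1):
    """Total time of flight (s): source-to-sample leg L0 (m) at incident
    energy E0, sample-to-detector leg L1 (m) at the fixed analyser
    energy E1, using v = sqrt(2E/m)."""
    v0 = math.sqrt(2 * E0_eV * EV / M_N)
    v1 = math.sqrt(2 * E1_eV * EV / M_N)
    return L0 / v0 + L1 / v1

# Assumed geometry: 11 m primary path, 0.7 m secondary path, 10 eV
# incident neutrons analysed at a ~4.9 eV filter resonance.
t = flight_time(10.0, 4.9, 11.0, 0.7)  # a few hundred microseconds
```

Calibration inverts this relation: with samples of known scattering kinematics, the measured arrival times pin down `L0`, `L1` and the detector positions, which is why consistency across ZrH2, 4He and Sn is a meaningful check.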