932 results for A priori
Abstract:
An on-line algorithm is developed for the location of single cross-point faults in a PLA (FPLA). The main feature of the algorithm is the determination of a fault set corresponding to the response obtained for a failed test. For this apparently small set of faults, all other tests are generated and a fault table is formed. Subsequently, an adaptive procedure is used to diagnose the fault. If the adaptive testing results in a set of faults with identical tests, a functional equivalence test is carried out to determine the actual fault class. The large amount of computation time and storage required to determine, a priori, all the fault equivalence classes or to construct a fault dictionary is not needed here. A brief study of functional equivalence among the cross-point faults is also made.
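For illustration, here is a minimal sketch of adaptive diagnosis over a fault table of the kind the abstract describes. The representation (a dict mapping each candidate fault to its expected test responses) and all names are my own assumptions, not the paper's:

```python
# Adaptive fault diagnosis over a fault table: repeatedly apply the test that
# best splits the remaining candidates, then keep only the faults consistent
# with the observed response. A sketch, not the paper's actual procedure.

def diagnose(fault_table, apply_test):
    # fault_table: {fault: {test: expected_response}} for the candidate set
    # apply_test: runs a test on the failed device and returns its response
    candidates = set(fault_table)
    tests = {t for resp in fault_table.values() for t in resp}

    def largest_group(test):
        # size of the biggest subset of candidates sharing a response to test
        groups = {}
        for f in candidates:
            groups.setdefault(fault_table[f][test], set()).add(f)
        return max(len(g) for g in groups.values())

    while len(candidates) > 1 and tests:
        test = min(tests, key=largest_group)  # most discriminating test
        if largest_group(test) == len(candidates):
            # no remaining test separates them: a candidate equivalence class
            break
        observed = apply_test(test)
        candidates = {f for f in candidates if fault_table[f][test] == observed}
        tests.discard(test)
    return candidates
```

If more than one fault remains, the functional equivalence test mentioned in the abstract would then be applied to the returned set.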
Abstract:
Plasticity in amorphous alloys is associated with strain softening, induced by the creation of additional free volume during deformation. In this paper, the role played in work softening by the free volume present a priori in the material was investigated. For this, an as-cast Zr-based bulk metallic glass (BMG) was systematically annealed below its glass transition temperature, so as to reduce the free volume content. The bonded-interface indentation technique was used to generate extensively deformed and well-defined plastic zones. Nanoindentation was utilized to estimate the hardness of the deformed as well as undeformed regions. The results show that the structural relaxation annealing enhances the hardness and that both the subsurface shear band number density and the plastic zone size decrease with annealing time. The serrations in the nanoindentation load-displacement curves become smoother with structural relaxation. Regardless of the annealing condition, the nanohardness of the deformed regions is approximately 12-15% lower, implying that the prior free volume only changes the yield stress (or hardness) but not the relative flow stress (or the extent of strain softening). Statistical distributions of the nanohardness obtained from deformed and undeformed regions have no overlap, suggesting that shear band number density has no influence on the plastic characteristics of the deformed region.
Abstract:
The paper presents a new approach to improve the detection and tracking performance of a track-while-scan (TWS) radar. The contribution consists of three parts. In Part 1, the scope of various papers in this field is reviewed. In Part 2, a new approach for integrating the detection and tracking functions is presented. It shows how a priori information from the TWS computer can be used to improve detection. A new multitarget tracking algorithm has also been developed; it is specifically oriented towards solving the combinatorial problems in multitarget tracking. In Part 3, analytical derivations are presented for quantitatively assessing, a priori, the performance of a track-while-scan radar system (true track initiation, false track initiation, true track continuation and false track deletion characteristics). Simulation results are also shown.
Abstract:
The paper presents, in three parts, a new approach to improve the detection and tracking performance of a track-while-scan radar. Part 1 presents a review of the current status of the subject. Part 2 details the new approach. It shows how a priori information provided by the tracker can be used to improve detection. It also presents a new multitarget tracking algorithm. In the present Part, analytical derivations are presented for assessing, a priori, the performance of the TWS radar system. True track initiation, false track initiation, true track continuation and false track deletion characteristics have been studied. It indicates how the various thresholds can be chosen by the designer to optimise performance. Simulation results are also presented.
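As a concrete illustration of one standard way such track-initiation characteristics are quantified (an M-of-N track-confirmation rule, offered here as a generic example rather than the paper's own derivation):

```python
from math import comb

def confirmation_probability(p_hit, m, n):
    # Probability that at least m of n scans produce a detection inside the
    # track gate, assuming independent scans with per-scan probability p_hit.
    # With p_hit = Pd for a real target this approximates true-track
    # initiation; with p_hit equal to the per-gate false-alarm probability
    # it approximates false-track initiation.
    return sum(comb(n, k) * p_hit**k * (1 - p_hit)**(n - k)
               for k in range(m, n + 1))

# Example: a 2-of-3 rule with Pd = 0.8 vs a per-gate false-alarm rate of 1e-3
print(confirmation_probability(0.8, 2, 3))   # ~0.896 true-track confirmation
print(confirmation_probability(1e-3, 2, 3))  # ~3e-6 false-track confirmation
```

Raising M or N trades a lower false-track initiation rate against slower, less reliable confirmation of true tracks, which is the kind of threshold choice the abstract says the designer must optimise.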
Abstract:
This research is based on the problems in secondary school algebra I have noticed in my own work as a teacher of mathematics. Algebra does not touch the pupil; it remains knowledge that is not used or tested. Furthermore, the performance level in algebra is quite low. This study presents a model for 7th grade algebra instruction in order to make algebra more natural and useful to students. I refer to the instruction model as Idea-based Algebra (IDEAA). The basic ideas of this IDEAA model are 1) to combine children's own informal mathematics with scientific mathematics ("math math") and 2) to structure algebra content as a "map of big ideas", not as a traditional sequence of powers, polynomials, equations, and word problems. This research project is a kind of design process or design research. As such, this project has three intertwined goals: research, design, and pedagogical practice. I also assume three roles. As a researcher, I want to learn about learning and school algebra, its problems and possibilities. As a designer, I use research in the intervention to develop a shared artefact, the instruction model. In addition, I want to improve practice through intervention and research. Design research like this is quite challenging. Its goals and means are intertwined and change in the research process. Theory emerges from the inquiry; it is not given a priori. The aim of improving instruction is normative, as one should take into account what "good" means in school algebra. An important part of my study is to work out these paradigmatic questions. The result of the study is threefold. The main result is the instruction model designed in the study. The second result is the theory developed of teaching, learning, and algebra. The third result is knowledge of the design process. The instruction model (IDEAA) is connected to four main features of good algebra education: 1) the situationality of learning, 2) learning as knowledge building, in which natural language and intuitive thinking work as "intermediaries", 3) the emergence and diversity of algebra, and 4) the development of high performance skills at any stage of instruction.
Abstract:
BACKGROUND OR CONTEXT Thermodynamics is a core subject for mechanical engineers, yet it is notoriously difficult. Evidence suggests students struggle to understand and apply the fundamental concepts of thermodynamics, with analysis indicating a problem with student learning/engagement. A contributing factor is that thermodynamics is a ‘science involving concepts based on experiments’ (Mayhew, 1990) with subject matter that cannot be completely defined a priori. To succeed, students must engage in a deep, holistic approach while taking ownership of their learning. The difficulty in achieving this often manifests itself in students ‘not getting’ the principles and declaring thermodynamics ‘hard’. PURPOSE OR GOAL Traditionally, students practise and “learn” the application of thermodynamics in their tutorials; however, these do not consider prior conceptions (Holman & Pilling, 2004). As ‘hands-on’ learning is the desired outcome of tutorials, it is pertinent to study methods of improving their efficacy. Within the Australian context, the format of thermodynamics tutorials has remained relatively unchanged over the decades, relying anecdotally on a primarily didactic pedagogical approach. Such approaches are not conducive to deep learning (Ramsden, 2003), with students often disengaged from the learning process. Evidence suggests (Haglund & Jeppsson, 2012), however, that a deeper level and ownership of learning can be achieved using a more constructivist approach, for example through self-generated analogies. This pilot study aimed to collect data to support the hypothesis that the ‘difficulty’ of thermodynamics is associated with the pedagogical approach of tutorials rather than actual difficulty in subject content or deficiency in students. APPROACH Successful application of thermodynamic principles requires solid knowledge of the core concepts. Typically, tutorial sessions guide students in this application. However, a lack of deep and comprehensive understanding can lead to student confusion in the applications, resulting in the learning of the ‘process’ of application without understanding ‘why’. The aim of this study was to gain empirical data on student learning of both concepts and application within thermodynamics tutorials. The approach taken for data collection and analysis was: - 1 Four concurrent tutorial streams were timetabled to examine student engagement/learning in traditional ‘didactic’ (3 weeks) and non-traditional (3 weeks) formats. In each week, two of the selected four sessions were traditional and two non-traditional. This provided a control group for each week. - 2 The non-traditional tutorials involved activities designed to promote student-centered deep learning. Specific pedagogies employed were: self-generated analogies, constructivist learning, peer-to-peer learning, inquiry-based learning, ownership of learning, and active learning. - 3 After a three-week period, the teaching styles of the selected groups were switched, to allow each group to experience both approaches with the same tutor. This also acted to minimise any influence of tutor personality/style on the data. - 4 At the conclusion of the trial, participants completed a ‘5-minute essay’ on how they liked the sessions; a small questionnaire modelled on the SPQ designed by Biggs (1987), as modified by Christo & Hoang (2013); and a small formative quiz to gauge the level of learning achieved.
DISCUSSION Preliminary results indicate that overall students respond positively to in-class demonstrations (inquiry-based learning) and active learning activities. Within the active learning exercises, the current data suggest students preferred individual rather than group or peer-to-peer activities. Preliminary results from the open-ended questions, such as “What did you like most/least about this tutorial” and “Do you have other comments on how this tutorial could better facilitate your learning”, however, indicated polarising views on the non-traditional tutorial. Some students responded that they really liked the format and the emphasis on understanding the concepts, while others were very vocal that they ‘hated’ the style and just wanted the solutions to be presented by the tutor. RECOMMENDATIONS/IMPLICATIONS/CONCLUSION Preliminary results indicated a mixed, but overall positive, response by students to the more collaborative tutorials employing tasks promoting inquiry-based, peer-to-peer, active, and ownership-of-learning activities. Preliminary results from student feedback support evidence that students learn differently, and that running tutorials focused on only one pedagogical approach (typically didactic) may not be beneficial to all students. Further, preliminary data suggest that the learning/teaching styles of both students and tutor are important in promoting deep learning in students. Data collection is still ongoing and scheduled for completion at the end of First Semester (Australian academic calendar). The final paper will examine in more detail the results and analysis of this project.
Abstract:
The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g., Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables. Examples include generalized linear or additive models with variable selection (Hastie et al., 2002), or classification trees with complexity-based or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable, without considering particular combinations. Examples include neural networks, boosted or bagged regression trees, and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approaches. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see the review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O’Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be flexibly applied to any of the three approaches to variable selection described above, Bayesian or otherwise. We demonstrate how this information can be obtained and then used to guide variable selection in classical or machine learning SDMs, or to define priors within Bayesian SDMs.
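As a sketch of one way elicited quality judgements could enter variable selection (my own illustration under assumed names and a simple weighting scheme, not the authors' protocol):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X: (n_sites, n_vars) environmental variables; y: presence/absence (0/1).
# quality: elicited expert scores in (0, 1], one per candidate variable.
# Scaling column j by quality[j] before an L1 fit weakens the penalty on
# high-quality variables: with x'_j = q_j * x_j the fitted coefficient is
# beta_j / q_j, so its L1 cost is |beta_j| / q_j.
def prior_weighted_l1_sdm(X, y, quality, C=1.0):
    quality = np.asarray(quality, dtype=float)
    Xw = X * quality                      # broadcast column-wise weights
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    model.fit(Xw, y)
    beta = model.coef_.ravel() * quality  # map back to the original scale
    selected = np.flatnonzero(beta != 0)
    return model, beta, selected
```

Here higher elicited quality lowers the effective L1 penalty on a variable, so expert-preferred variables survive selection more easily; in a Bayesian SDM the same scores could instead set prior inclusion probabilities.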
Abstract:
The method of initial functions has been applied for deriving higher order theories for cross-ply laminated composite thick rectangular plates. The equations of three-dimensional elasticity have been used. No a priori assumptions regarding the distribution of stresses or displacements are needed. Numerical solutions of the governing equations have been presented for simply supported edges and the results are compared with available ones.
Abstract:
Remote detection of management-related trend in the presence of inter-annual climatic variability in the rangelands is difficult. Minimally disturbed reference areas provide a useful guide, but suitable benchmarks are usually difficult to identify. We describe a method that uses a unique conceptual framework to identify reference areas from multitemporal sequences of ground cover derived from Landsat TM and ETM+ imagery. The method requires neither ground-based reference sites nor GIS layers about management. We calculate a minimum ground-cover image across all years to identify locations of most persistent ground cover in years of lowest rainfall. We then use a moving-window approach to calculate the difference between the window's central pixel and its surrounding reference pixels. This difference estimates ground-cover change between successive below-average rainfall years, which provides a seasonally interpreted measure of management effects. We examine the approach's sensitivity to window size and to the cover-index percentiles used to define persistence. The method successfully detected management-related change in ground cover in Queensland tropical savanna woodlands in two case studies: (1) a grazing trial where heavy stocking resulted in substantial decline in ground cover in small paddocks, and (2) commercial paddocks where wet-season spelling (destocking) resulted in increased ground cover. At a larger scale, there was broad agreement between our analysis of ground-cover change and ground-based land-condition change for commercial beef properties with different a priori ratings of initial condition, but there was also some disagreement where changing condition reflected pasture composition rather than ground cover. We conclude that the method is suitably robust to analyse grazing effects on ground cover across the 1.3 × 10^6 km^2 of Queensland's rangelands.
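A minimal numpy sketch of the persistence-and-window computation described above (window size, reference percentile, and names are illustrative assumptions, not the paper's parameter choices):

```python
import numpy as np

def persistence_image(cover_stack):
    # cover_stack: (years, rows, cols) ground-cover images from dry years;
    # the minimum across years locates the most persistent ground cover.
    return cover_stack.min(axis=0)

def management_signal(min_cover, window=25, ref_percentile=90):
    # Difference between each central pixel and high-percentile "reference"
    # pixels in its surrounding window. window should be odd.
    half = window // 2
    padded = np.pad(min_cover, half, mode="reflect")
    rows, cols = min_cover.shape
    out = np.empty((rows, cols), dtype=float)
    for i in range(rows):
        for j in range(cols):
            win = padded[i:i + window, j:j + window]
            ref = np.percentile(win, ref_percentile)
            out[i, j] = min_cover[i, j] - ref
    return out
```

Strongly negative values then flag pixels whose persistent cover sits well below the local reference, the candidate signature of a management effect rather than climate.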
Abstract:
The mean flow development in an initially turbulent boundary layer subjected to a large favourable pressure gradient beginning at a point x₀ is examined through analyses expected a priori to be valid on either side of relaminarization. The ‘quasi-laminar’ flow in the later stages of reversion, where the Reynolds stresses have by definition no significant effect on the mean flow, is described by an asymptotic theory constructed for large values of a pressure-gradient parameter Λ, scaled on a characteristic Reynolds stress gradient. The limiting flow consists of an inner laminar boundary layer and a matching inviscid (but rotational) outer layer. There is consequently no entrainment to lowest order in Λ⁻¹, and the boundary layer thins down to conserve outer vorticity. In fact, the predictions of the theory for the common measures of boundary-layer thickness are in excellent agreement with experimental results, almost all the way from x₀. On the other hand the development of wall parameters like the skin friction suggests the presence of a short bubble-shaped reverse-transitional region on the wall, where neither turbulent nor quasi-laminar calculations are valid. The random velocity fluctuations inherited from the original turbulence decay with distance, in the inner layer, according to inverse-power laws characteristic of quasi-steady perturbations on a laminar flow. In the outer layer, there is evidence that the dominant physical mechanism is a rapid distortion of the turbulence, with viscous and inertia forces playing a secondary role. All the observations available suggest that final retransition to turbulence quickly follows the onset of instability in the inner layer. It is concluded that reversion in highly accelerated flows is essentially due to domination of pressure forces over the slowly responding Reynolds stresses in an originally turbulent flow, accompanied by the generation of a new laminar boundary layer stabilized by the favourable pressure gradient.
Abstract:
The monograph dissertation deals with kernel integral operators and their mapping properties on Euclidean domains. The associated kernels are weakly singular, and examples of such are given by Green functions of certain elliptic partial differential equations. It is well known that mapping properties of the corresponding Green operators can be used to deduce a priori estimates for the solutions of these equations. In the dissertation, natural size and cancellation conditions are quantified for kernels defined in domains. These kernels induce integral operators which are then composed with any partial differential operator of prescribed order, depending on the size of the kernel. Since the main object of study in this dissertation is the boundedness properties of such compositions, the main result is the characterization of their L^p-boundedness on suitably regular domains. In case the aforementioned kernels are defined in the whole Euclidean space, their partial derivatives of prescribed order turn out to be so-called standard kernels that arise in connection with singular integral operators. The L^p-boundedness of singular integrals is characterized by the T1 theorem, which is originally due to David and Journé and was published in 1984 (Ann. of Math. 120). The main result in the dissertation can be interpreted as a T1 theorem for weakly singular integral operators. The dissertation also deals with special convolution-type weakly singular integral operators that are defined on Euclidean spaces.
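For orientation, the size and cancellation conditions in question are of the type familiar from Calderón–Zygmund theory; stated here in a standard textbook form, not quoted from the dissertation:

```latex
% Weakly singular kernel of order \alpha, 0 < \alpha < n: size condition
|K(x,y)| \le \frac{C}{|x-y|^{\,n-\alpha}}, \qquad x \neq y.

% Standard (Calderon--Zygmund) kernel: size and smoothness conditions
|K(x,y)| \le \frac{C}{|x-y|^{\,n}}, \qquad
|K(x,y)-K(x',y)| + |K(y,x)-K(y,x')|
  \le \frac{C\,|x-x'|^{\delta}}{|x-y|^{\,n+\delta}}
\quad \text{whenever } |x-x'| \le \tfrac{1}{2}\,|x-y|.
```

This matches the abstract's remark: taking partial derivatives of prescribed order of a weakly singular kernel degrades the size bound from |x−y|^{α−n} to |x−y|^{−n}, which is how standard kernels, and hence a T1-type theorem, enter the picture.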
Abstract:
In the High Middle Ages female saints were customarily noble virgins. Thus, as a wife and a mother of eight children, the Swedish noble lady Birgitta (1302/3–1373) was an atypical candidate for sanctity. However, in 1391 she was canonized only 18 years after her death and became a role model for many late medieval women who were mothers and widows. The dissertation Power and Authority: Birgitta of Sweden and Her Revelations investigates how Birgitta went about establishing her power and authority during the first ten years of her career as a living saint, in 1340–1349. It is written from the perspectives of gender, authority, and power. The sources consist of approximately seven hundred revelations, hagiographical texts, and other medieval documents. This work concentrates on the interaction between Birgitta and her audience. During her lifetime Birgitta was already regarded as a holy woman, as a living saint. A living saint could be given no formal papal or other recognition, for one could never be certain about his or her future activities. Thus, the living saint needed an audience for whom to perform signs of sanctity. In this study particular attention is paid to situations within which the power relations between the living saint and her audience can be traced and are open to critical analysis. Situations of conflict that arose in Birgitta's life are especially fruitful for this purpose. During the Middle Ages, institutional power and authority were exclusively in the hands of secular male leaders and churchmen. In this work it is argued, however, that Birgitta used different kinds of power than men. It is evident that she exercised influence on lay people as well as on secular and clerical authorities. The second, third, and fourth chapters of this study examine the beginning of Birgitta's career as a visionary, what factors and influences lay behind it, and what kind of roles they played in establishing her religious authority. The fifth, sixth, and seventh chapters concentrate on Birgitta's exercise of power in specific situations during her time in Sweden, until she left on a pilgrimage to Rome in 1349. The central question is how she exercised power with different people. As a result, this book offers a narrative of Birgitta's social interactions in Sweden seen from the perspectives of power and authority. Along with the concept of power, authority is a key issue. By definition, one who has power also has authority, but a person who does not have official power can, nevertheless, have authority. Authority in action is defined here as meaning that a person was listened to. Birgitta acted both in situations of open conflict and where no conflict was evident. Her strategies included, for example, inducement, encouragement, and flattery. In order to make people do as she felt was right, she also threatened them openly with divine wrath. Sometimes she even used both positive persuasion and threats. Birgitta's power seems very similar to that of priests and ascetics. Common to all of them was that their power demanded interaction with other people and audiences. Because Birgitta did not have power and authority ex officio, she had to persuade people to believe in her powers. She did this because she was convinced of her mission and sought to make people change their lives. In so doing, she moved from the domestic field to the public fields of religion and politics.
Abstract:
In this work we numerically model isothermal turbulent swirling flow in a cylindrical burner. Three versions of the RNG k-epsilon model are assessed against the performance of the standard k-epsilon model. Sensitivity of numerical predictions to grid refinement, differing convective differencing schemes, and the choice of (unknown) inlet dissipation rate was closely scrutinised to ensure accuracy. Particular attention is paid to modelling the inlet conditions to within the range of uncertainty of the experimental data, as model predictions proved to be significantly sensitive to relatively small changes in upstream flow conditions. We also examine the characteristics of the swirl-induced recirculation zone (IRZ) predicted by the models over an extended range of inlet conditions. Our main findings are: - (i) the standard k-epsilon model performed best compared with experiment; - (ii) no one inlet specification can simultaneously optimize the performance of the models considered; - (iii) the RNG models predict both single-cell and double-cell IRZ characteristics, the latter both with and without additional internal stagnation points. The first finding indicates that the examined RNG modifications to the standard k-epsilon model do not result in an improved eddy-viscosity-based model for the prediction of swirl flows. The second finding suggests that tuning established models a priori for optimal performance in swirl flows is not straightforward. The third finding indicates that the RNG-based models exhibit a greater variety of structural behaviour, despite being of the same level of complexity as the standard k-epsilon model. The plausibility of the predicted IRZ features is discussed in terms of known vortex breakdown phenomena.
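The abstract notes that the inlet dissipation rate is unknown and must be chosen. A common way to estimate inlet values (standard CFD practice, offered as background rather than the specification used in this work) is from a turbulence intensity and a length scale:

```python
# Standard estimates for inlet turbulence quantities in k-epsilon models:
#   k   = 1.5 * (U * I)^2              turbulent kinetic energy
#   eps = C_mu^(3/4) * k^(3/2) / l     dissipation rate from a length scale l
# U: mean inlet velocity, I: turbulence intensity, l: turbulence length scale.
C_MU = 0.09  # standard k-epsilon model constant

def inlet_k_epsilon(U, intensity, length_scale):
    k = 1.5 * (U * intensity) ** 2
    eps = C_MU ** 0.75 * k ** 1.5 / length_scale
    return k, eps

# Example (illustrative numbers): U = 10 m/s, 5% intensity,
# l taken as 7% of a 0.05 m duct radius
print(inlet_k_epsilon(10.0, 0.05, 0.07 * 0.05))
```

The sensitivity reported in the abstract amounts to saying that predictions, particularly of the IRZ structure, can hinge on exactly such estimated inlet values.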
Abstract:
CONTEXT: The role and importance of circulating sclerostin is poorly understood. High bone mass (HBM) caused by activating LRP5 mutations has been reported to be associated with increased plasma sclerostin concentrations; whether the same applies to HBM due to other causes is unknown. OBJECTIVE: Our objective was to determine circulating sclerostin concentrations in HBM. DESIGN AND PARTICIPANTS: In this case-control study, 406 HBM index cases were identified by screening dual-energy x-ray absorptiometry (DXA) databases from 4 United Kingdom centers (n = 219 088), excluding significant osteoarthritis/artifact. Controls comprised unaffected relatives and spouses. MAIN MEASURES: Plasma sclerostin; lumbar spine L1, total hip, and total body DXA; and radial and tibial peripheral quantitative computed tomography (pQCT; subgroup only) were evaluated. RESULTS: Sclerostin concentrations were significantly higher in both LRP5 HBM and non-LRP5 HBM cases compared with controls: mean (SD) 130.1 (61.7) and 88.0 (39.3) vs 66.4 (32.3) pmol/L (both P < .001; differences persisted after adjustment for a priori confounders). In combined adjusted analyses of cases and controls, sclerostin concentrations were positively related to all bone parameters found to be increased in HBM cases (ie, L1, total hip, and total body DXA bone mineral density, and radial/tibial cortical area, cortical bone mineral density, and trabecular density). Although these relationships were broadly equivalent in HBM cases and controls, there was some evidence that associations between sclerostin and trabecular phenotypes were stronger in HBM cases, particularly for radial trabecular density (interaction P < .01). CONCLUSIONS: Circulating plasma sclerostin concentrations are increased in both LRP5 and non-LRP5 HBM compared with controls. In addition to the general positive relationship between sclerostin and DXA/pQCT parameters, genetic factors predisposing to HBM may contribute to increased sclerostin levels.
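As an illustration of the kind of confounder-adjusted group comparison described here (a minimal sketch with hypothetical column names; the study's actual covariate set and model are not stated in the abstract):

```python
import pandas as pd
import statsmodels.formula.api as smf

# df = pd.read_csv("participants.csv")  # hypothetical data source
# Hypothetical columns: sclerostin (pmol/L); group taking values 'control',
# 'lrp5_hbm', 'non_lrp5_hbm'; and placeholder a priori confounders age,
# sex, bmi (not necessarily those adjusted for in the study).
model = smf.ols(
    "sclerostin ~ C(group, Treatment('control')) + age + C(sex) + bmi",
    data=df,
).fit()
# The coefficients on the group terms estimate the case-control differences
# in sclerostin after adjustment for the listed confounders.
print(model.summary())
```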
Abstract:
Visual information processing in the brain proceeds in both serial and parallel fashion throughout various functionally distinct, hierarchically organised cortical areas. Feedforward signals from the retina and hierarchically lower cortical levels are the major activators of visual neurons, but top-down and feedback signals from higher-level cortical areas have a modulating effect on neural processing. My work concentrates on visual encoding in hierarchically low-level cortical visual areas in the human brain and examines neural processing especially in the cortical representation of the visual field periphery. I use magnetoencephalography and functional magnetic resonance imaging to measure neuromagnetic and hemodynamic responses from healthy volunteers during visual stimulation and oculomotor and cognitive tasks. My thesis comprises six publications. Visual cortex poses a great challenge for the modeling of neuromagnetic sources. My work shows that a priori information about source locations is needed for modeling neuromagnetic sources in visual cortex. In addition, my work examines other potential confounding factors in vision studies, such as light scatter inside the eye, which may result in erroneous responses in cortex outside the representation of the stimulated region, as well as eye movements and attention. I mapped cortical representations of the peripheral visual field and identified a putative human homologue of functional area V6 of the macaque in the posterior bank of the parieto-occipital sulcus. My work shows that human V6 activates during eye movements and that it responds to visual motion at short latencies. These findings suggest that human V6, like its monkey homologue, is involved in fast processing of visual stimuli and visually guided movements. I demonstrate that peripheral vision is functionally related to eye movements and connected to a rapid stream of functional areas that process visual motion. In addition, my work shows two different forms of top-down modulation of neural processing at the hierarchically lowest cortical levels: one that is related to dorsal stream activation and may reflect motor processing or resetting signals that prepare visual cortex for change in the environment, and another, a local signal enhancement at the attended region, that reflects a local feedback signal and may perceptually increase stimulus saliency.