909 results for THRESHOLD
Abstract:
It is commonly perceived that the variables ‘measuring’ different dimensions of teaching (construed as instructional attributes) in student evaluation of teaching (SET) questionnaires are so highly correlated that they pose a serious multicollinearity problem for quantitative analysis, including regression analysis. Using nearly 12,000 individual student responses to SET questionnaires, covering ten key dimensions of teaching across 25 courses at various undergraduate and postgraduate levels over multiple years at a large Australian university, this paper investigates whether this is indeed the case and, if so, under what circumstances. The paper tests this proposition first by examining variance inflation factors (VIFs) across courses, levels and time using individual responses, and secondly by using class averages. In the first instance, the paper finds no sustained evidence of multicollinearity: while there were one or two isolated cases of VIFs marginally exceeding the conservative threshold of 5, in no case did the VIFs for any of the instructional attributes come anywhere close to the high threshold value of 10. In the second instance, however, the paper finds that the attributes are highly correlated, as all the VIFs exceed 10. These findings have two implications: (a) given the ordinal nature of the data, ordered probit analysis using individual student responses can be employed to quantify the impact of instructional attributes on the TEVAL score; and (b) data based on class averages cannot be used for probit analysis. An illustrative exercise using data from level 2 undergraduate courses suggests that higher TEVAL scores depend first and foremost on improving the explanation, presentation and organization of lecture materials.
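The VIF screening described above is straightforward to reproduce. Below is a minimal sketch, assuming the individual responses sit in a table with one numeric column per instructional attribute (the column names are hypothetical, not the questionnaire's actual item labels); each attribute's VIF can then be compared against the conservative threshold of 5 and the high threshold of 10.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(df: pd.DataFrame) -> pd.Series:
    """Return the VIF for each attribute column in df."""
    X = sm.add_constant(df)          # include an intercept so VIFs are centred
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != "const"}
    return pd.Series(vifs).sort_values(ascending=False)

# Example usage with hypothetical attribute ratings on a 1-5 scale:
# responses = pd.read_csv("set_responses.csv")[["explanation", "presentation",
#                                               "organisation", "feedback"]]
# print(vif_table(responses))        # flag anything above ~5 (conservative) or ~10 (high)
```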
Abstract:
Introduced predators can have pronounced effects on naïve prey species; thus, predator control is often essential for conservation of threatened native species. Complete eradication of the predator, although desirable, may be elusive in budget-limited situations, whereas predator suppression is more feasible and may still achieve conservation goals. We used a stochastic predator-prey model based on a Lotka-Volterra system to investigate the cost-effectiveness of predator control to achieve prey conservation. We compared five control strategies: immediate eradication, removal of a constant number of predators (fixed-number control), removal of a constant proportion of predators (fixed-rate control), removal of predators that exceed a predetermined threshold (upper-trigger harvest), and removal of predators whenever their population falls below a lower predetermined threshold (lower-trigger harvest). We looked at the performance of these strategies when managers could always remove the full number of predators targeted by each strategy, subject to budget availability. Under this assumption immediate eradication reduced the threat to the prey population the most. We then examined the effect of reduced management success in meeting removal targets, assuming removal is more difficult at low predator densities. In this case there was a pronounced reduction in performance of the immediate eradication, fixed-number, and lower-trigger strategies. Although immediate eradication still yielded the highest expected minimum prey population size, upper-trigger harvest yielded the lowest probability of prey extinction and the greatest return on investment (as measured by improvement in expected minimum population size per amount spent). Upper-trigger harvest was relatively successful because it operated when predator density was highest, which is when predator removal targets can be more easily met and the effect of predators on the prey is most damaging. This suggests that controlling predators only when they are most abundant is the "best" strategy when financial resources are limited and eradication is unlikely. © 2008 Society for Conservation Biology.
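The comparison of removal rules can be illustrated with a very small stochastic Lotka-Volterra simulation. The sketch below is not the study's calibrated model: the dynamics (logistic prey growth with multiplicative noise), parameter values and the two removal rules shown (fixed-rate and upper-trigger) are illustrative assumptions, and performance is summarised only by the minimum prey population reached.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(control, years=50, prey0=500.0, pred0=50.0,
             r=0.8, K=2000.0, a=0.01, c=0.002, m=0.4, noise=0.05):
    """Run one stochastic trajectory and return the minimum prey population reached."""
    prey, pred = prey0, pred0
    min_prey = prey
    for _ in range(years):
        # Lotka-Volterra dynamics (logistic prey growth) with environmental noise
        d_prey = r * prey * (1 - prey / K) - a * prey * pred
        d_pred = c * prey * pred - m * pred
        prey = max(prey + d_prey + rng.normal(0, noise * prey), 0.0)
        pred = max(pred + d_pred + rng.normal(0, noise * pred), 0.0)
        pred = max(pred - control(pred), 0.0)        # apply the removal rule
        min_prey = min(min_prey, prey)
    return min_prey

fixed_rate    = lambda p: 0.3 * p                    # remove 30% of predators each year
upper_trigger = lambda p: max(p - 40.0, 0.0)         # cull back down to a ceiling of 40

for name, rule in [("fixed-rate", fixed_rate), ("upper-trigger", upper_trigger)]:
    runs = [simulate(rule) for _ in range(200)]
    print(name, "mean minimum prey population:", round(float(np.mean(runs)), 1))
```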
Abstract:
A crucial issue with hybrid quantum secret sharing schemes is the amount of data allocated to the participants: the smaller the amount of allocated data, the better the performance of a scheme. Moreover, quantum data are very hard and expensive to deal with; it is therefore desirable to use as little quantum data as possible. To achieve this goal, we first construct extended unitary operations by the tensor product of n (n ≥ 2) basic unitary operations, and then use those extended operations to design two quantum secret sharing schemes. The resulting dual compressible hybrid quantum secret sharing schemes, in which classical data play a complementary role to quantum data, range from threshold to access structure. Compared with existing hybrid quantum secret sharing schemes, our proposed schemes reduce not only the number of quantum participants but also the number of particles and the size of the classical shares. To be exact, the number of particles used to carry quantum data is reduced to 1, while the size of the classical secret shares is also reduced to l−2 m−1 based on the ((m+1, n′)) threshold and to l−2 r2 (where r2 is the number of maximal unqualified sets) based on the adversary structure. Consequently, our proposed schemes can greatly reduce the cost and difficulty of generating and storing EPR pairs and lower the risk of transmitting encoded particles.
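The tensor-product construction of the extended unitary operations can be illustrated directly. In the sketch below the basic single-qubit unitaries are ordinary Pauli gates, chosen purely for illustration; the schemes' actual basic operations are not specified here.

```python
import numpy as np
from functools import reduce

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)    # bit flip
Z = np.array([[1, 0], [0, -1]], dtype=complex)   # phase flip

def extended_unitary(ops):
    """Tensor n >= 2 basic unitary operations into one operation on n qubits."""
    return reduce(np.kron, ops)

U = extended_unitary([X, Z, I])                   # acts on 3 qubits (an 8x8 matrix)
# The tensor product of unitaries is itself unitary:
assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))
```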
Abstract:
This thesis consists of three studies on investment strategies for Australian retirees. Specifically, it investigates retirees' preference between alternative drawdown strategies in the presence of government pensions, appropriate management of longevity risk through the use of deferred annuities and asset allocation in retirement. It finds drawdown strategies linked to life expectancy to be the best performers. Deferred annuities are found to improve retirement incomes for risk averse retirees. For retirees who want to meet certain wealth thresholds in retirement, equity dominated portfolios provide superior outcomes for higher threshold levels.
Abstract:
The consequences of falls are often dreadful for individuals with lower limb amputation using bone-anchored prostheses.[1-5] Typically, the impact on the fixation is responsible for bending the percutaneous piece, which could lead to complete breakage over time.[3, 5-8] The surgical replacement of this piece is possible but complex and expensive. Clearly, there is a need for solid data enabling an evidence-based design of protective devices limiting the impact forces and torsion applied during a fall. The impact on the fixation during an actual fall is obviously difficult to record during a scientific experiment.[6, 8-13] Consequently, Schwartze and colleagues opted for one of the next best options science has to offer: simulation with an able-bodied participant. They recorded body movements and knee impacts on the floor while the participant mimicked several plausible falling scenarios. They then calculated the forces and moments that would be applied at four levels along the femur corresponding to amputation heights.[6, 8-11, 14-25] The overall forces applied during the falls were similar regardless of amputation height, indicating that the impact forces were simply translated along the femur. As expected, the overall moments generally increased with amputation height owing to changes in lever arm. This work demonstrates that devices protecting only against force overload need not take amputation height into account, whereas those protecting against bending moments should. Another significant contribution is to provide, for the first time, the magnitude of the impact load during different falls. This loading range is crucial to the overall design and, more precisely, to the triggering threshold of protective devices. Unfortunately, the analysis of only a single able-bodied participant replicating falls greatly limits the generalisation of the findings. Nonetheless, this case study is an important milestone contributing to a better understanding of impact loads during a fall. This new knowledge will improve the treatment, safe ambulation and, ultimately, quality of life of individuals fitted with bone-anchored prostheses.
Abstract:
Objective: Explosive ordnance disposal (EOD) often requires technicians to wear multiple protective garments in challenging environmental conditions. The cumulative effect of increased metabolic cost coupled with decreased heat dissipation associated with these garments predisposes technicians to high levels of physiological strain. It has been proposed that a perceptual strain index (PeSI), using subjective ratings of thermal sensation and perceived exertion as surrogate measures of core body temperature and heart rate, may provide an accurate estimation of physiological strain. Therefore, this study aimed to determine whether the PeSI could estimate the physiological strain index (PSI) across a range of metabolic workloads and environments while wearing heavy EOD and chemical protective clothing. Methods: Eleven healthy males wore an EOD and chemical protective ensemble while walking on a treadmill at 2.5, 4 and 5.5 km·h⁻¹ at 1% grade in environmental conditions equivalent to wet bulb globe temperatures (WBGT) of 21, 30 and 37 °C. WBGT conditions were randomly presented, and a maximum of three randomised treadmill walking trials were completed in a single testing day. Trials ceased at a maximum of 60 min or on attainment of termination criteria. Pearson's correlation coefficient, a mixed linear model, absolute agreement and receiver operating characteristic (ROC) curves were used to determine the relationship between the PeSI and PSI. Results: A significant moderate relationship between the PeSI and the PSI was observed [r = 0.77; p < 0.001; mean difference = 0.8 ± 1.1 a.u. (modified 95% limits of agreement: −1.3 to 3.0)]. The ROC curves indicated that the PeSI had good predictive power when used with two single-threshold cut-offs to differentiate between low and high levels of physiological strain (area under curve: PSI cut-off of three = 0.936; cut-off of seven = 0.841). Conclusions: These findings support the use of the PeSI for monitoring physiological strain while wearing EOD and chemical protective clothing. However, future research is needed to confirm the validity of the PeSI for active EOD technicians operating in the field.
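For readers unfamiliar with the two indices, the sketch below shows the general form of each on a 0-10 scale. The PSI follows the widely cited Moran et al. formulation; the PeSI shown scales thermal sensation and perceived exertion analogously, and the scale anchors used here are assumptions for illustration rather than the study's reported equations.

```python
def psi(tc, tc0, hr, hr0):
    """Physiological strain index (0-10) from core temperature (deg C) and heart rate (bpm)."""
    return 5 * (tc - tc0) / (39.5 - tc0) + 5 * (hr - hr0) / (180 - hr0)

def pesi(thermal_sensation, rpe):
    """Perceptual strain index (0-10) from a 13-point thermal sensation rating
    and Borg's 6-20 rating of perceived exertion (anchor values assumed here)."""
    return 5 * (thermal_sensation - 7) / (13 - 7) + 5 * (rpe - 6) / (20 - 6)

# Illustrative values only:
print(round(psi(tc=38.6, tc0=37.0, hr=160, hr0=70), 1))   # physiological strain ~7.3
print(round(pesi(thermal_sensation=11, rpe=15), 1))       # perceptual strain ~6.5
```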
Abstract:
This paper outlines the results of a study into the educational use of the board game Monopoly City™ in a first year real estate unit. The game play was introduced as a fun and interactive way of achieving a number of desired outcomes, including: introduction of foundational threshold concepts in real estate education; introduction of problem-solving and critical analysis skills; early acculturation of real estate students to enhance student retention; early team building within the student cohort; and enhanced engagement of first year students, all in an engaging and entertaining way. Results from this two-stage research project are encouraging. The students participating in this project have demonstrated explicit linkages between their Monopoly City™ experiences and foundational urban economic and valuation theories. Students are also recognising the role that strategy and chance play in the real estate sector. Findings from this project and key success factors are presented.
Abstract:
Nursing students used GoSoapBox, a web-based student response system, to poll responses to multiple choice questions (MCQs) presented during bioscience lectures. Participation in GoSoapBox appears to have facilitated student engagement, interaction and learning. The majority of students surveyed appreciated the immediate feedback on their responses and the ability to participate anonymously. The use of this tool facilitated collaborative group and class discussion and clarification of any misconceptions or challenging concepts. Information collected using GoSoapBox provided the academic with feedback, allowing for reflection, adjustment and improvement in the framing of formative and summative MCQs.
Abstract:
The Source Monitoring Framework is a promising model of constructive memory, yet it fails because it is connectionist and does not allow content tagging. The Dual-Process Signal Detection Model is an improvement because it reduces mnemic qualia to a single memory signal (or degree of belief), but it still commits itself to non-discrete representation. If ‘tagging’ is taken to mean the assignment of propositional attitudes to aggregates of mnemic characteristics informed inductively, then a discrete model becomes plausible. A Bayesian model of source monitoring accounts for the continuous variation of inputs and the assignment of prior probabilities to memory content. A modified version of the High-Threshold Dual-Process model is recommended for further source monitoring research.
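The Bayesian step being argued for can be made concrete with a toy calculation: a prior probability is assigned to a candidate source and then updated from graded (mnemic) characteristics via Bayes' rule. The feature and likelihood values below are purely hypothetical.

```python
def posterior_source(prior, likelihood_given_source, likelihood_given_other):
    """Posterior probability that a memory came from the candidate source."""
    evidence = (prior * likelihood_given_source
                + (1 - prior) * likelihood_given_other)
    return prior * likelihood_given_source / evidence

# e.g. vivid perceptual detail assumed more likely for perceived than imagined events
print(posterior_source(prior=0.5,
                       likelihood_given_source=0.8,
                       likelihood_given_other=0.3))   # ~0.73
```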
Abstract:
The growing call for physical educators to move beyond the bounds of performance has been a powerful discourse. However, it is a discourse that has tended to be heavy on theory but light on practical application. This paper discusses recent work in the area of skill acquisition and what it might mean for pedagogical practices in physical education. The acquisition of motor skill has traditionally been a core objective for physical educators, and there has been a perception that child-centred pedagogies have failed to meet this traditional yardstick. However, drawing on the work of Rovegno and Kirk (1995) and Langley (1995; 1997), and making links with current work in the motor learning area, it is possible to show that skill acquisition is not necessarily compromised by child-centred pedagogy. Indeed, working beyond Mosston's discovery threshold and using models such as Games for Understanding can provide deeper skill-learning experiences as well as being socially just.
Abstract:
This paper details the design and performance assessment of a unique collision avoidance decision and control strategy for autonomous vision-based See and Avoid systems. The general approach revolves around re-positioning a collision object in the image using image-based visual servoing, without estimating range or time to collision. The decision strategy thus involves determining where to move the collision object, to induce a safe avoidance manoeuvre, and when to cease the avoidance behaviour. These tasks are accomplished by exploiting human navigation models, spiral motion properties, expected image feature uncertainty and the rules of the air. The result is a simple threshold-based system that can be tuned and statistically evaluated by extending performance assessment techniques derived for alerting systems. Our results demonstrate how autonomous vision-only See and Avoid systems may be designed under realistic problem constraints, and then evaluated in a manner consistent with aviation expectations.
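As a rough illustration of what a threshold-based cease/continue decision can look like, the sketch below keeps the avoidance behaviour active until the collision object has been servoed to within a pixel threshold of a "safe" reference location in the image. The error metric, reference location and threshold value are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def continue_avoidance(object_px, safe_reference_px, threshold_px=20.0):
    """True while the collision object is still too far from the safe image location."""
    error = np.linalg.norm(np.asarray(object_px, float) - np.asarray(safe_reference_px, float))
    return error > threshold_px

print(continue_avoidance(object_px=(310, 240), safe_reference_px=(580, 240)))  # True: keep avoiding
print(continue_avoidance(object_px=(572, 244), safe_reference_px=(580, 240)))  # False: cease avoidance
```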
Abstract:
Objective: To prospectively test two simplified peer review processes, estimate the agreement between the simplified and official processes, and compare the costs of peer review. Design, participants and setting: A prospective parallel study of Project Grant proposals submitted in 2013 to the National Health and Medical Research Council (NHMRC) of Australia. The official funding outcomes were compared with two simplified processes using proposals in Public Health and Basic Science. The two simplified processes were: panels of 7 reviewers who met face-to-face and reviewed only the nine-page research proposal and track record (simplified panel); and 2 reviewers who independently reviewed only the nine-page research proposal (journal panel). The official process used panels of 12 reviewers who met face-to-face and reviewed longer proposals of around 100 pages. We compared the funding outcomes of 72 proposals that were peer reviewed by the simplified and official processes. Main outcome measures: Agreement in funding outcomes; costs of peer review based on reviewers’ time and travel costs. Results: The agreement between the simplified and official panels (72%, 95% CI 61% to 82%), and the journal and official panels (74%, 62% to 83%), was just below the acceptable threshold of 75%. Using the simplified processes would save $A2.1–$A4.9 million per year in peer review costs. Conclusions: Using shorter applications and simpler peer review processes gave reasonable agreement with the more complex official process. Simplified processes save time and money that could be reallocated to actual research. Funding agencies should consider streamlining their application processes.
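The reported agreement figures can be sanity-checked with a short calculation. With 72 proposals reviewed under both processes, an observed agreement of 72% and a normal-approximation 95% confidence interval give roughly the interval quoted; the exact interval method used in the study is an assumption here.

```python
from math import sqrt

n = 72                                      # proposals reviewed by both processes
p = 0.72                                    # observed agreement, simplified vs official
se = sqrt(p * (1 - p) / n)                  # standard error of a proportion
low, high = p - 1.96 * se, p + 1.96 * se
print(f"95% CI: {low:.0%} to {high:.0%}")   # roughly 62% to 82%
```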
Abstract:
Our aim was to make a quantitative comparison of the response of the different visual cortical areas to selective stimulation of the two cone-opponent pathways [long- and medium-wavelength (L/M)-cone-opponent and short-wavelength (S)-cone-opponent] and the achromatic pathway under equivalent conditions. The appropriate stimulus-contrast metric for comparing colour and achromatic sensitivity is unknown, however, so a secondary aim was to investigate whether equivalent fMRI responses in each cortical area are better predicted by stimulus contrast matched in multiples of detection threshold, which approximately equates visibility, or by direct (cone) contrast matches in which psychophysical sensitivity is uncorrected. We found that the fMRI response across the two colour pathways and the achromatic pathway is not well predicted by threshold-scaled stimuli (perceptual visibility) but is better predicted by cone contrast, particularly for area V1. Our results show that the early visual areas (V1, V2, V3, VP and hV4) all have robust responses to colour. No area showed an overall colour preference, however, until anterior to V4, where we found a ventral occipital region with a significant preference for chromatic stimuli, indicating a functional distinction from earlier areas. We found that all of these areas have a surprisingly strong response to S-cone stimuli, at least as great as the L/M response, suggesting a relative enhancement of the S-cone cortical signal. We also identified two areas (V3A and hMT+) with a significant preference for achromatic over chromatic stimuli, indicating a functional grouping into a dorsal pathway with a strong magnocellular input.
Abstract:
Fusing data from multiple sensing modalities, e.g. laser and radar, is a promising approach to achieving resilient perception in challenging environmental conditions. However, this may lead to catastrophic fusion in the presence of inconsistent data, i.e. when the sensors do not detect the same target due to distinct attenuation properties. It is often difficult to discriminate consistent from inconsistent data across sensing modalities using local spatial information alone. In this paper we present a novel consistency test based on the log marginal likelihood of a Gaussian process model that evaluates data from range sensors in a relative manner. A new data point is deemed to be consistent if the model statistically improves as a result of its fusion. This approach avoids the need for the absolute spatial distance threshold parameters required by previous work. We report results from object reconstruction with both synthetic and experimental data that demonstrate an improvement in reconstruction quality, particularly in cases where data points are inconsistent yet spatially proximal.
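A minimal sketch of the consistency-test idea, assuming a scikit-learn Gaussian process with an RBF-plus-noise kernel: a candidate measurement is accepted if fusing it raises the model's average log marginal likelihood. The kernel choice and the per-point normalisation are assumptions made for this illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def is_consistent(X, y, x_new, y_new):
    """True if fusing (x_new, y_new) improves the per-point log marginal likelihood."""
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

    gp.fit(X, y)
    lml_before = gp.log_marginal_likelihood_value_ / len(y)

    gp.fit(np.vstack([X, x_new]), np.append(y, y_new))
    lml_after = gp.log_marginal_likelihood_value_ / (len(y) + 1)

    return lml_after >= lml_before

# Toy example: range readings along a smooth profile, plus one inconsistent reading.
X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + 0.02 * np.random.default_rng(0).normal(size=20)
print(is_consistent(X, y, x_new=[[0.5]], y_new=0.0))   # plausible point
print(is_consistent(X, y, x_new=[[0.5]], y_new=5.0))   # spatially proximal but inconsistent
```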
Abstract:
Abnormally high price spikes in spot electricity markets represent a significant risk to market participants. As such, a literature has developed that focuses on forecasting the probability of such spike events, moving beyond simply forecasting the price level. Many univariate time series models have been proposed to deal with spikes within an individual market region. This paper is the first to develop a multivariate self-exciting point process model for dealing with price spikes across connected regions in the Australian National Electricity Market. The importance of the physical infrastructure connecting the regions for the transmission of spikes is examined. It is found that spikes are transmitted between the regions and that the size of spikes is influenced by the available transmission capacity. It is also found that improved risk estimates are obtained when inter-regional linkages are taken into account.
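The self-exciting structure can be sketched with a bivariate Hawkes-type conditional intensity, where a spike in one region temporarily raises the spike intensity in both its own and the connected region. The baseline rates, excitation sizes and decay rate below are illustrative values, not the paper's estimates.

```python
import numpy as np

def intensity(t, events, mu, alpha, beta):
    """Conditional spike intensity for each of two regions at time t.

    events : list of (time, region) spike occurrences observed before t
    mu     : baseline intensities, shape (2,)
    alpha  : alpha[i, j] = jump in region i's intensity when region j spikes
    beta   : exponential decay rate of the excitation
    """
    lam = np.array(mu, dtype=float)
    for s, j in events:
        if s < t:
            lam += alpha[:, j] * np.exp(-beta * (t - s))   # past spikes excite both regions
    return lam

mu = np.array([0.02, 0.02])                  # background spike rates
alpha = np.array([[0.5, 0.2],                # within-region and cross-region excitation
                  [0.3, 0.6]])
events = [(1.0, 0), (1.5, 1)]                # (time, region) of observed spikes
print(intensity(2.0, events, mu, alpha, beta=2.0))
```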