Abstract:
Problem: This study considers whether requiring learner drivers to complete a set number of hours while on a learner licence affects the number of hours of supervised practice they undertake. It compares the amount of practice that learners in Queensland and New South Wales report undertaking. At the time the study was conducted, learner drivers in New South Wales were required to complete 50 hours of supervised practice while those from Queensland were not. Method: Participants were approached outside driver licensing centres just after they had completed the practical driving test for their provisional (intermediate) licence. Those agreeing to participate were later interviewed over the phone and asked a range of questions covering socio-demographic details and the amount of supervised practice completed. Results: There was a significant difference in the amount of practice that learners reported undertaking. Participants from New South Wales reported completing significantly more practice (M = 73.3 hours, SD = 29.12 hours) on their learner licence than those from Queensland (M = 64.1 hours, SD = 51.05 hours). However, the distribution of hours of practice among the Queensland participants was bimodal: they reported completing either much less or much more practice than the New South Wales average. Summary: While requiring learner drivers to complete a set number of hours appears to increase the average number of hours of practice obtained, it may also discourage drivers from obtaining additional practice over and above the required hours. Impact on Industry: The results of this study suggest that the implications of requiring learner drivers to complete a set number of hours of supervised practice are complex. In some cases, policy makers may inadvertently limit the number of hours learners obtain to the mandated amount rather than encouraging them to obtain as much practice as possible.
Abstract:
Unmanned Aerial Vehicles (UAVs) are emerging as an ideal platform for a wide range of civil applications such as disaster monitoring, atmospheric observation and outback delivery. However, the operation of UAVs is currently restricted to specially segregated regions of airspace outside of the National Airspace System (NAS). Mission Flight Planning (MFP) is an integral part of UAV operation that addresses some of the requirements (such as safety and the rules of the air) of integrating UAVs in the NAS. Automated MFP is a key enabler for a number of UAV operating scenarios as it aids in increasing the level of onboard autonomy. For example, onboard MFP is required to ensure continued conformance with the NAS integration requirements when there is an outage in the communications link. MFP is a motion planning task concerned with finding a path between a designated start waypoint and goal waypoint. This path is described with a sequence of four-dimensional (4D) waypoints (three spatial and one time dimension) or, equivalently, with a sequence of trajectory segments (or tracks). It is necessary to consider the time dimension as the UAV operates in a dynamic environment. Existing methods for generic motion planning, UAV motion planning and general vehicle motion planning cannot adequately address the requirements of MFP. The flight plan needs to optimise for multiple decision objectives, including mission safety objectives, the rules of the air and mission efficiency objectives. Online (in-flight) replanning capability is needed as the UAV operates in a large, dynamic and uncertain outdoor environment. This thesis derives a multi-objective 4D search algorithm entitled Multi-Step A* (MSA*) based on the seminal A* search algorithm. MSA* is proven to find the optimal (least cost) path given a variable successor operator (which enables arbitrary track angle and track velocity resolution). Furthermore, it is shown to be of comparable complexity to multi-objective, vector neighbourhood based A* (Vector A*, an extension of A*). A variable successor operator enables the imposition of a multi-resolution lattice structure on the search space (which results in fewer search nodes). Unlike cell decomposition based methods, soundness is guaranteed with multi-resolution MSA*. MSA* is demonstrated through Monte Carlo simulations to be computationally efficient. It is shown that multi-resolution, lattice-based MSA* finds paths of equivalent cost (less than 0.5% difference) to Vector A* (the benchmark) in a third of the computation time (on average). This is the first contribution of the research. The second contribution is the discovery of the additive consistency property for planning with multiple decision objectives. Additive consistency ensures that the planner is not biased (which results in a suboptimal path) by ensuring that the cost of traversing a track using one step equals that of traversing the same track using multiple steps. MSA* mitigates uncertainty through online replanning, Multi-Criteria Decision Making (MCDM) and tolerance. Each trajectory segment is modelled with a cell sequence that completely encloses the trajectory segment. The tolerance, measured as the minimum distance between the track and cell boundaries, is the third major contribution. Even though MSA* is demonstrated for UAV MFP, it is extensible to other 4D vehicle motion planning applications. Finally, the research proposes a self-scheduling replanning architecture for MFP.
This architecture replicates the decision strategies of human experts to meet the time constraints of online replanning. Based on a feedback loop, the proposed architecture switches between fast, near-optimal planning and optimal planning to minimise the need for hold manoeuvres. The derived MFP framework is original and shown, through extensive verification and validation, to satisfy the requirements of UAV MFP. As MFP is an enabling factor for operation of UAVs in the NAS, the presented work is both original and significant.
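MSA* itself is specified in the thesis rather than here; as a rough illustration of the kind of search it extends, the following is a minimal Python sketch of an A*-style search over 4D (x, y, z, t) states with a weighted-sum scalarisation of multiple decision objectives. The objective weights, successor interface and goal test are hypothetical simplifications, and the sketch omits MSA*'s variable successor operator and multi-resolution lattice.

```python
import heapq
import itertools

# Hypothetical weights scalarising the decision objectives
# (mission safety, rules of the air, mission efficiency).
WEIGHTS = {"safety": 10.0, "rules": 5.0, "efficiency": 1.0}

def combined_cost(objectives):
    """Weighted-sum scalarisation of per-objective step costs."""
    return sum(WEIGHTS[name] * cost for name, cost in objectives.items())

def plan_4d(start, goal_xyz, successors, heuristic):
    """A*-style search over 4D (x, y, z, t) states.

    successors(state) yields (next_state, objectives) pairs, where
    objectives maps each decision objective to its incremental cost.
    heuristic(state) must not overestimate the remaining combined
    cost if the returned path is to be least-cost.
    """
    tie = itertools.count()  # tie-breaker so the heap never compares states
    frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if state[:3] == goal_xyz:  # goal waypoint reached (at any time)
            return path, g
        for nxt, objectives in successors(state):
            g_next = g + combined_cost(objectives)
            if g_next < best_g.get(nxt, float("inf")):
                best_g[nxt] = g_next
                heapq.heappush(
                    frontier,
                    (g_next + heuristic(nxt), next(tie), g_next, nxt, path + [nxt]),
                )
    return None, float("inf")
```

Note how the additive consistency property described above relates to this scalarisation: when each objective's step cost is itself additive along a track, a fixed weighted sum gives the same combined cost whether the track is traversed in one step or several.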
Abstract:
The strategies employed by 130 Grade 5 Brisbane students in comparing decimal numbers with the same whole-number part were compared with those identified in similar studies conducted in the USA, France and Israel. Three new strategies were identified. As in the USA results, the most common comparison errors stemmed from the incorrect whole-number strategy, in which length is confused with size. The findings of the present study tend to support Resnick et al.'s (1989) hypothesis that the introduction of decimal-fraction recording before common-fraction recording seems to promote better comparison of decimal numbers.
Abstract:
This paper reports on an intervention study planned to help Year 6 students construct the multiplicative structure underlying decimal-number numeration. Three types of intervention were designed from a numeration model developed from a large study of 173 Year 6 students’ decimal-number knowledge. The study found that students could acquire multiplicative structure as an abstract schema if instruction took account of prior knowledge as informed by the model.
Abstract:
This paper reports on a study in which Years 6 and 10 students were individually interviewed to determine their ability to unitise and reunitise number lines used to represent mixed numbers and improper fractions. Only 16.7% of the students (all Year 6) were successful on all three tasks and, in general, Year 6 students outperformed Year 8 students. The interviews revealed that the remaining students had incomplete, fragmented or non-existent structural knowledge of mixed numbers and improper fractions, and were unable to unitise or reunitise number lines. The implication for teaching is that instruction should focus on providing students with a variety of fraction representations in order to develop rich and flexible schema for all fraction types (mixed numbers, and proper and improper fractions).
Abstract:
Few studies have evaluated the reliability of lifetime sun exposure estimated by asking how many hours people spent outdoors in a given period on a typical weekday or weekend day (the time-based approach). Some investigations have suggested that women have a particularly difficult task in estimating time outdoors in adulthood due to their family and occupational roles. We hypothesized that people might gain additional memory cues and estimate lifetime hours spent outdoors more reliably if asked about time spent outdoors according to specific activities (an activity-based approach). Using self-administered, mailed questionnaires, test-retest responses to the time-based and activity-based approaches were evaluated in 124 volunteer radiologic technologist participants from the United States: 64 females and 60 males, 48 to 80 years of age. Intraclass correlation coefficients (ICC) were used to evaluate the test-retest reliability of the average number of hours spent outdoors in the summer estimated under each approach. We tested the difference between the two ICCs, one for each approach, using a t test with the variance of the difference estimated by the jackknife method. During childhood and adolescence, the two approaches gave similar ICCs for average numbers of hours spent outdoors in the summer. By contrast, compared with the time-based approach, the activity-based approach showed significantly higher ICCs during adult ages (0.69 versus 0.43, P = 0.003) and over the lifetime (0.69 versus 0.52, P = 0.05); the higher ICCs for the activity-based questionnaire were primarily derived from the results for females. Research is needed to further improve the activity-based questionnaire approach for long-term sun exposure assessment. (Cancer Epidemiol Biomarkers Prev 2009;18(2):464–71)
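The abstract does not state which ICC form the authors used; purely as a sketch, the following computes the common one-way random-effects ICC(1,1) from an n-subjects-by-k-administrations array of test-retest responses (such as estimated average hours outdoors in summer). The jackknife variance behind the paper's t test on the difference of two ICCs is not reproduced here.

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects intraclass correlation, ICC(1,1).

    scores: (n_subjects, k_administrations) array of repeated
    responses, e.g. estimated hours outdoors per typical summer day.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    subject_means = scores.mean(axis=1)
    grand_mean = scores.mean()
    # Mean squares from a one-way ANOVA with subjects as the factor.
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((scores - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical test-retest data: 5 subjects, 2 administrations.
example = np.array([[3.0, 2.5], [1.0, 1.5], [4.0, 4.5], [2.0, 2.0], [5.0, 4.0]])
print(f"ICC = {icc_oneway(example):.2f}")
```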
Abstract:
Games and related virtual environments have been a much-hyped area of the entertainment industry. The classic quote is that games are now approaching the size of Hollywood box office sales [1]. Books are now appearing that talk up the influence of games on business [2], and games are one of the key drivers of present hardware development. Some of this 3D technology is now embedded right down at the operating system level via the Windows Presentation Foundation – hit Windows/Tab on your Vista box to find out... In addition to this continued growth in the area of games, there are a number of factors that affect its development in the business community. Firstly, the average age of gamers is approaching the mid-thirties, so a number of people in management positions in large enterprises are experienced in using 3D entertainment environments. Secondly, due to the demand for more computational power in both CPUs and Graphics Processing Units (GPUs), the average desktop, and any decent laptop, can run a game or virtual environment. In fact, the demonstrations at the end of this paper were developed at the Queensland University of Technology (QUT) on a standard Software Operating Environment, with an Intel Dual Core CPU and a basic Intel graphics option. This means the potential exists for easy uptake of such technology because: 1. a broad range of workers is regularly exposed to 3D virtual environment software via games; and 2. present desktop computing power is now strong enough to roll out a virtual environment solution across an entire enterprise. We believe such visual simulation environments can have a great impact in the area of business process modeling. Accordingly, in this article we outline the communication capabilities of such environments, which offer fantastic possibilities for business process modeling applications, where enterprises need to create, manage and improve their business processes, and then communicate those processes to stakeholders, both process and non-process cognizant. The article concludes with a demonstration of the work we are doing in this area at QUT.
Abstract:
The main objective of this PhD was to further develop Bayesian spatio-temporal models (specifically the Conditional Autoregressive (CAR) class of models) for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analyzing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and from health service planning requirements. Data from a large probabilistically linked database covering 1990 to 2004, consisting of fields from two separate registries, the Birth Defect Registry (BDR) and the Midwives Data Collection (MDC), were used in the analyses in this thesis. The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix affects the smoothing properties of the CAR model; this is the focus of chapter 6. The second goal was to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model for modelling a sparse outcome; this is carried out in chapter 7. The third goal was to identify optimal sampling and sample size schemes designed to select individual-level data for a hybrid ecological spatial model; this is done in chapter 8. Finally, I wanted to combine the earlier improvements to the CAR model with demographic projections to provide forecasts for birth defects at the Statistical Local Area (SLA) level. Chapter 9 describes how this is done. For the first objective, I examined a series of neighbourhood weight matrices, and showed how smoothing the relative risk estimates according to similarity on an important covariate (i.e. maternal age) improved the model's ability to recover the underlying risk, compared with the traditional adjacency (specifically the Queen) method of applying weights. Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared several models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared-component model to improve the estimation of sparse counts by borrowing strength across a shared component (e.g. latent risk factor/s) with the referent outcome (caesarean section was used in this example). Using the Deviance Information Criterion (DIC), I showed that the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation. The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies, evaluating thirteen different sampling schemes along with variations in sample size. This was done in the context of an ecological regression model that incorporated spatial correlation in the outcomes and accommodated both individual and areal measures of covariates. Using the Average Mean Squared Error (AMSE), I showed that a simple random sample of 20% of the SLAs, followed by selecting all cases in the chosen SLAs along with an equal number of controls, provided the lowest AMSE. The final objective involved combining the improved spatio-temporal CAR model with population (i.e. women) forecasts to provide 30-year annual estimates of birth defects at the SLA level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken. By the end of the thesis, I show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed by specifically formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), incorporating a ZIP component to model excess zeros in outcomes, and borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy to sample individual-level data, and sample size considerations for rare diseases, are also presented. Finally, projections in birth defect categories at the SLA level are made.
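The exact weighting scheme from chapter 6 is not given in the abstract; the sketch below simply contrasts binary Queen-adjacency weights with a hypothetical covariate-similarity weighting among neighbours (here, closeness in area-level mean maternal age), which is the general idea described. The adjacency list and covariate values are invented for illustration.

```python
import numpy as np

def adjacency_weights(neighbours, n_areas):
    """Binary (Queen-style) weight matrix: w_ij = 1 if areas share a border."""
    W = np.zeros((n_areas, n_areas))
    for i, nbrs in neighbours.items():
        for j in nbrs:
            W[i, j] = W[j, i] = 1.0
    return W

def covariate_weights(neighbours, covariate):
    """Smooth more strongly between neighbouring areas whose covariate
    values (e.g. mean maternal age) are similar."""
    n = len(covariate)
    W = np.zeros((n, n))
    for i, nbrs in neighbours.items():
        for j in nbrs:
            w = 1.0 / (1.0 + abs(covariate[i] - covariate[j]))
            W[i, j] = W[j, i] = w
    return W

# Hypothetical 4-area map: adjacency list and mean maternal age per area.
neighbours = {0: [1, 2], 1: [3], 2: [3]}
maternal_age = np.array([27.1, 33.4, 28.0, 33.9])
print(adjacency_weights(neighbours, 4))
print(covariate_weights(neighbours, maternal_age))
```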
Abstract:
Understanding the future development of interaction design as it applies to learning and training scenarios is crucial to the effective development of curriculum and the appropriate application of social and mobile communication technologies. As Attewell and Saville-Smith (2004) recognised, the use of mobile communication devices for improving literacy and numeracy is a desirable prospect among young people in the age range typical of undergraduate students. Further, with the growing penetration of broadband internet access, the ubiquity of wireless access in educational locations, the rise of ultra-mobile portable computers and the proliferation of social software applications in educational contexts, there is a growing number of channels for the facilitation of learning. Nevertheless, there has been insufficient consideration of the interaction design issues that affect the effective facilitation of such learning. This paper contends that there is a clear need to design mobile and social learning to accommodate the benefits of these diverse channels for interaction. Additionally, there is a need to implement suitable testing processes to ensure participants in mobile and social learning are contributing effectively and maximising their learning. Through the presentation of case studies in mobile and social learning, the paper attempts to demonstrate how considered interaction design techniques can improve the effectiveness of new learning channels.
Abstract:
Purpose: The Hong Kong Special Administrative Region (referred to as Hong Kong from here onwards) is a leading international commercial hub, particularly in Asia. To maintain this reputation, a number of large public works projects have been considered. Public Private Partnership (PPP) has increasingly been suggested for these projects, but the suitability of this procurement method in Hong Kong is yet to be studied empirically. The findings presented in this paper specifically consider whether PPPs should be used to procure public works projects in Hong Kong, by studying the attractive and negative factors for adopting PPP. Design/methodology/approach: As part of this study a questionnaire survey was conducted with industrial practitioners. The respondents were asked to rank the importance of fifteen attractive factors and thirteen negative factors for adopting PPP. Findings: In general, the top attractive factors ranked by respondents from Hong Kong were efficiency related; these included (1) ‘Provide an integrated solution (for public infrastructure / services)’; (2) ‘Facilitate creative and innovative approaches’; and (3) ‘Solve the problem of public sector budget restraint’. Australian respondents reported similar rankings to those from Hong Kong, but United Kingdom respondents gave higher priority to the economically driven attractive factors. Also, the ranking of the attractive and negative factors for adopting PPP showed that, on average, the attractive factors were scored higher than the negative factors. Originality/value: The results of this research enable a comparison of the attractive and negative factors for adopting PPP across three administrative systems. These findings confirm that PPP is a suitable means of procuring large public projects, and they should be useful and interesting to PPP researchers and practitioners.
Abstract:
PURPOSE: We report our telephone-based system for selecting a community control series appropriate for a complete Australia-wide series of Ewing's sarcoma cases. METHODS: We used electronic directory random sampling to select age-matched controls. The sampling frame comprised all listed telephone numbers on an updated CD-ROM. RESULTS: 95% of the 2245 telephone numbers selected were successfully contacted. The mean number of attempts needed was 1.94, with 58% answering at the first attempt. On average, we needed 4.5 contacts per control selected. Calls were more likely to be successful (reach a respondent) when made in the evening (except Saturdays). The overall response rate among contacted telephone numbers was 92.8%. Participation rates among female and male respondents were practically the same. The exclusion of unlisted numbers (13.5% of connected households) and unconnected households (3.7%) led to potential selection bias. However, restricting the case series to listed cases only, together with external information on the direction of the potential bias, allowed meaningful interpretation of our data. CONCLUSION: Sampling from an electronic directory is convenient, economical and simple, and gives a very good yield of eligible subjects compared with other methods.
Abstract:
In this chapter we propose clipping with amplitude and phase corrections to reduce the peak-to-average power ratio (PAR) of orthogonal frequency division multiplexed (OFDM) signals in high-speed wireless local area networks defined in the IEEE 802.11a physical layer. The proposed techniques can be implemented with a small modification at the transmitter, and the receiver remains standard compliant. PAR reduction of as much as 4 dB can be achieved by selecting a suitable clipping ratio and a correction factor depending on the constellation used. Out-of-band noise (OBN) is also reduced.
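The chapter's amplitude and phase correction factors are specific to its scheme and are not reproduced here; the sketch below only illustrates the baseline operations involved, namely OFDM modulation, amplitude clipping at a chosen clipping ratio, and the PAR measurement. The parameters (64 subcarriers, QPSK, CR = 1.4) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # one OFDM symbol of 64 subcarriers (the IEEE 802.11a FFT size)

def par_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Random QPSK subcarrier symbols -> time-domain OFDM signal via IFFT.
bits = rng.integers(0, 2, (2, N))
symbols = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
x = np.fft.ifft(symbols) * np.sqrt(N)

# Amplitude clipping at CR times the RMS amplitude, preserving phase.
CR = 1.4
a_max = CR * np.sqrt(np.mean(np.abs(x) ** 2))
mag = np.maximum(np.abs(x), 1e-12)  # guard against division by zero
clipped = np.where(mag > a_max, a_max * x / mag, x)

print(f"PAR before: {par_db(x):.2f} dB  after: {par_db(clipped):.2f} dB")
```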
Abstract:
Parallel combinatory orthogonal frequency division multiplexing (PC-OFDM yields lower maximum peak-to-average power ratio (PAR), high bandwidth efficiency and lower bit error rate (BER) on Gaussian channels compared to OFDM systems. However, PC-OFDM does not improve the statistics of PAR significantly. In this chapter, the use of a set of fixed permutations to improve the statistics of the PAR of a PC-OFDM signal is presented. For this technique, interleavers are used to produce K-1 permuted sequences from the same information sequence. The sequence with the lowest PAR, among K sequences is chosen for the transmission. The PAR of a PC-OFDM signal can be further reduced by 3-4 dB by this technique. Mathematical expressions for the complementary cumulative density function (CCDF)of PAR of PC-OFDM signal and interleaved PC-OFDM signal are also presented.
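As a schematic of the interleaving step only (not the full parallel combinatory mapping), the following sketch generates K-1 fixed pseudo-random permutations of a symbol block, OFDM-modulates each candidate, and selects the one with the lowest PAR for transmission; the receiver would deinterleave using the transmitted interleaver index. The subcarrier count, K, and plain QPSK symbols are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 64, 8  # subcarriers and number of candidate sequences (hypothetical)

def par_db(x):
    """Peak-to-average power ratio of a complex baseband block, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# K-1 fixed interleavers plus the identity, known to both ends of the
# link, so the receiver can deinterleave given the selected index.
interleavers = [np.arange(N)] + [rng.permutation(N) for _ in range(K - 1)]

# One block of symbols (plain QPSK standing in for the PC-OFDM mapping).
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)

# Modulate every permuted candidate and transmit the one with least PAR.
candidates = [np.fft.ifft(symbols[perm]) for perm in interleavers]
best = min(range(K), key=lambda k: par_db(candidates[k]))
print(f"interleaver {best} selected, PAR = {par_db(candidates[best]):.2f} dB")
```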
Abstract:
A point interpolation method with a locally smoothed strain field (PIM-LS2) is developed for mechanics problems using a triangular background mesh. In PIM-LS2, the strain within each sub-cell of a nodal domain is assumed to be the average strain over the adjacent sub-cells of the neighbouring element sharing the same field node. We prove theoretically that the energy norm of the smoothed strain field in PIM-LS2 is equivalent to that of the compatible strain field, and then prove that the solution of PIM-LS2 converges to the exact solution of the original strong form. Furthermore, the softening effects of PIM-LS2 on the system, and the effects of the number of sub-cells participating in the smoothing operation on the convergence of PIM-LS2, are investigated. Intensive numerical studies verify the convergence, softening effects and bound properties of PIM-LS2, and show that very "tight" lower and upper bound solutions can be obtained using PIM-LS2.
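The following is a schematic of the local strain-smoothing idea only, not the full PIM-LS2 formulation: the smoothed strain in a sub-cell is taken as the area-weighted average of the compatible strains over the adjacent sub-cells sharing the same field node. The strain values, sub-cell areas and adjacency lists are hypothetical inputs.

```python
import numpy as np

def smooth_strains(strains, areas, adjacent):
    """Area-weighted local strain smoothing.

    strains:  (n_subcells, 3) compatible strains (eps_xx, eps_yy, gamma_xy)
    areas:    (n_subcells,) sub-cell areas
    adjacent: adjacent[i] lists the sub-cells sharing sub-cell i's
              field node (including i itself)
    """
    smoothed = np.empty_like(strains)
    for i, nbrs in enumerate(adjacent):
        w = areas[nbrs]
        smoothed[i] = (w[:, None] * strains[nbrs]).sum(axis=0) / w.sum()
    return smoothed

# Hypothetical three sub-cells around a single field node.
strains = np.array([[1.0e-3, 0.0,    2.0e-4],
                    [1.2e-3, 1.0e-4, 1.0e-4],
                    [0.8e-3, 5.0e-5, 3.0e-4]])
areas = np.array([0.5, 0.3, 0.2])
adjacent = [np.array([0, 1, 2])] * 3
print(smooth_strains(strains, areas, adjacent))
```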