873 results for Beginning inference


Relevance:

10.00%

Publisher:

Abstract:

Australian constitutional referendums have been part of the Australian political system since federation. Up to 1999 (the time of the last referendum in Australia), constitutional change in Australia has not had a good history of acceptance. Since 1901, there have been 44 proposed constitutional changes, with eight gaining the required acceptance under section 128 of the Australian Constitution. In the modern era since 1967, there have been 20 proposals over seven referendum votes for a total of four changes. Over the same period, there have been 13 federal general elections, which have realised a change of government just five times. This research examines the electoral behaviour of Australian voters from 1967 to 1999 for each referendum. Party identification has long been a key indicator in general election voting. This research considers whether the dominant theory of voter behaviour in general elections (the Michigan Model) provides a plausible explanation for voting in Australian referendums. In order to explain electoral behaviour in each referendum, this research utilised available data from the Australian Electoral Commission, the 1996 Australian Bureau of Statistics Census, and the 1999 Australian Constitutional Referendum Study. These data provided the variables required to measure the impact of the Michigan Model of voter behaviour. Measurements were conducted using bivariate and multivariate analyses. For each referendum, an overview is provided of the events at the time as well as the 'yes' and 'no' cases at the time the referendum was initiated. Results from this research provide support for the Michigan Model of voter behaviour in Australian referendum voting. This research concludes that party identification, as a key variable of the Michigan Model, shows that voters continue to take their cues for voting in Australian referendums from the political party they identify with.
However, the outcome of Australian referendums clearly shows that partisanship is only one of a number of contributory factors in constitutional referendums.

Abstract:

In this paper, several high-frequency issues of modern AC motor drive systems, such as common-mode voltage, shaft voltage and the resultant bearing currents and leakage currents, are discussed. Conducted emission is a major problem in modern motor drives, producing undesirable effects on electronic devices. In modern power electronic systems, increasing power density and decreasing system cost and size are market requirements. Switching losses, harmonics and EMI are the key factors that should be considered at the beginning stage of a design to optimise a drive system.
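The common-mode voltage mentioned above has a simple conventional definition that is worth making explicit. The sketch below is an illustration, not taken from the paper; the 600 V DC-link figure is an assumed example value.

```python
# Hedged illustration (not from the paper): for a three-phase two-level
# inverter, the common-mode voltage is conventionally the average of the
# three pole (leg) voltages referred to the DC-link midpoint. Since each
# leg outputs only +Vdc/2 or -Vdc/2, the common-mode voltage steps
# between +/-Vdc/2 and +/-Vdc/6 as the switch states change, which is
# what drives shaft voltage and bearing/leakage currents.
def common_mode_voltage(va, vb, vc):
    return (va + vb + vc) / 3.0

# Assumed example with Vdc = 600 V, so pole voltages are +/-300 V:
v_all_high = common_mode_voltage(300, 300, 300)    # -> 300.0 (Vdc/2)
v_two_high = common_mode_voltage(300, 300, -300)   # -> 100.0 (Vdc/6)
```

These discrete steps, repeated at the switching frequency, are why common-mode behaviour must be considered from the beginning stage of a drive design rather than patched in afterwards.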

Abstract:

Electromagnetic compatibility of power electronic systems has become an engineering discipline in its own right and should be considered at the beginning stage of a design. A power electronics design thus becomes more complex and challenging, and requires good communication between EMI and power electronics experts. The three major issues in designing a power electronic system are losses, EMI and harmonics. These issues affect system cost, size, efficiency and quality, and designing a power converter involves a trade-off between them.

Abstract:

This thesis describes outcomes of a research study conducted to investigate the nutrient build-up and wash-off processes on urban impervious surfaces. The data needed for the study was generated through a series of field investigations and laboratory test procedures. The study sites were selected in urbanised catchments to represent typical characteristics of residential, industrial and commercial land uses. The build-up and wash-off samples were collected from road surfaces in the selected study sites. A specially designed vacuum collection system and a rainfall simulator were used for sample collection. According to the data analysis, the solids build-up on road surfaces was significantly finer, with more than 80% of the particles below 150 μm for all the land uses. Nutrients were mostly associated with the particle size range below 150 μm in both build-up and wash-off samples, irrespective of the type of land use. Therefore, the finer fraction of solids was the most important for the nutrient build-up and particulate nutrient wash-off processes. Consequently, the design of stormwater quality mitigation measures should target particles less than 150 μm for the removal of nutrients, irrespective of the type of land use. Total Kjeldahl nitrogen (TKN) was the most dominant form of nitrogen species in build-up on road surfaces. Phosphorus build-up on road surfaces was mainly in inorganic form, and phosphate (PO4³⁻) was the most dominant form. The nutrient wash-off process was found to be dependent on rainfall intensity and duration. The concentration of both total nitrogen and total phosphorus was higher at the beginning of the rain event and decreased with increasing rainfall duration. Consequently, in the design of stormwater quality mitigation strategies for nutrient removal, it is important to target the initial period of rain events. The variability of nitrogen wash-off with rainfall intensity was significantly different to phosphorus wash-off.
The concentration of nitrogen was higher in the wash-off for low intensity rain events compared to the wash-off for high intensity rain events. On the other hand, the concentration of phosphorus in the wash-off was high for high intensity rain events compared to low intensity rain events. Consequently, nitrogen wash-off can be defined as a source limiting process and phosphorus wash-off as a transport limiting process. This highlights the importance of taking into consideration the wash-off of low intensity rain events in the design of stormwater quality mitigation strategies targeting nitrogen removal. All the nitrogen species in wash-off are primarily in dissolved form, whereas phosphorus is in particulate form. The differences in the nitrogen and phosphorus wash-off processes are principally due to the degree of solubility, attachment to particulates, the composition of total nitrogen and total phosphorus, and the degree of adherence to the surface of the solid particles to which nutrients are attached. The particulate nitrogen available for wash-off is removed readily as it is mobilised as free solid particles on the surface. Phosphorus is washed off mostly with the solid particles which are strongly adhered to the surface, or as the fixed solids load. Investigation of the nitrogen wash-off process using bulk wash-off samples was in close agreement with the investigation of the dissolved fraction of wash-off solids. This was primarily due to the predominantly dissolved nature of nitrogen. However, investigating the processes which underpin phosphorus wash-off using bulk wash-off samples could lead to loss of information. This is due to the composition of total phosphorus in wash-off solids and the inherent variability of the wash-off process for the different particle size ranges. This variability should preferably be taken into consideration as phosphorus wash-off is predominantly in particulate form.
Therefore, care needs to be taken in the investigation of the phosphorus wash-off process using bulk wash-off samples to ensure that there is no loss of information which could result in misleading outcomes. The investigation of different particle size ranges of wash-off solids is preferable in the interest of designing effective stormwater quality management strategies targeting phosphorus removal.
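The intensity- and duration-dependence of wash-off described above is often captured with an exponential wash-off model. The sketch below uses the widely cited Sartor–Boyd formulation purely as an illustration; the thesis's own analysis is empirical, and the coefficient value here is an assumed example, not a result fitted in this study.

```python
import math

# Exponential (Sartor-Boyd) wash-off model: the fraction of the available
# pollutant load washed off a surface grows with rainfall intensity I
# (mm/h) and event duration t (h). k is an empirical wash-off coefficient
# per mm of runoff; 0.18 is an assumed example value, not from this study.
def washed_off_fraction(intensity_mm_h, duration_h, k=0.18):
    runoff_depth_mm = intensity_mm_h * duration_h
    return 1.0 - math.exp(-k * runoff_depth_mm)

f_low = washed_off_fraction(10, 1)   # 10 mm/h for 1 h
f_high = washed_off_fraction(20, 1)  # 20 mm/h for 1 h removes a larger fraction
```

Under this form, a high-intensity event removes a larger fraction of the available load over the same duration, consistent with a transport-limited pollutant such as particulate phosphorus; a source-limited pollutant such as dissolved nitrogen is instead capped by the load actually available on the surface.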

Abstract:

The wavelet packet transform decomposes a signal into a set of bases for time–frequency analysis. This decomposition creates an opportunity for implementing distributed data mining, where features are extracted from different wavelet packet bases and serve as feature vectors for applications. This paper presents a novel approach for integrated machine fault diagnosis based on localised wavelet packet bases of vibration signals. The best basis is first determined according to its classification capability. Data mining is then applied to extract features, and local decisions are drawn using Bayesian inference. A final conclusion is reached using a weighted average method in data fusion. A case study on rolling element bearing diagnosis shows that this approach can greatly improve the accuracy of diagnosis.
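The decision-fusion step described above can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the feature extraction from the wavelet packet bases is omitted, and the priors, likelihoods and basis weights are assumed toy numbers.

```python
# Each wavelet packet basis yields a local decision via Bayes' rule;
# the local posteriors are then fused by a weighted average, with the
# weights reflecting each basis's classification capability (summing to 1).
def bayes_posterior(prior, likelihoods):
    # P(class | feature) is proportional to P(feature | class) * P(class)
    joint = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

def fuse(local_posteriors, weights):
    # Weighted average across bases, one posterior vector per basis.
    n_classes = len(local_posteriors[0])
    return [sum(w * post[c] for w, post in zip(weights, local_posteriors))
            for c in range(n_classes)]

# Two bases voting on (healthy, faulty); the more discriminative basis
# gets the larger weight. All numbers are illustrative.
p1 = bayes_posterior([0.5, 0.5], [0.2, 0.8])   # -> [0.2, 0.8]
p2 = bayes_posterior([0.5, 0.5], [0.4, 0.6])   # -> [0.4, 0.6]
fused = fuse([p1, p2], [0.7, 0.3])             # -> [0.26, 0.74]
```

Both local decisions lean towards "faulty", and the weighted fusion preserves that conclusion while tempering the less reliable basis's vote.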

Abstract:

The research project described in this paper was designed to explore the potential of a wiki to facilitate collaboration and to reduce the isolation of postgraduate students enrolled in a professional doctoral program at a Queensland university. It was also intended to foster a community of practice for reviewing and commenting on one another’s work despite the small number of students and their disparate topics. The students were interviewed and surveyed at the beginning and during the face-to-face sessions of the course and their wikis were examined over the year to monitor, analyse and evaluate the extent to which the agency of the technology (wiki) mediated their development of scholarly skills. The study showed that students paradoxically eschewed use of the structured wiki and formed their own informal networks. This paper will contend that this paradox arose from a mismatch between the agency of technology and its intended purpose.

Abstract:

An educational priority of many nations is to enhance mathematical learning in early childhood. One area in need of special attention is that of statistics. This paper argues for a renewed focus on statistical reasoning in the beginning school years, with opportunities for children to engage in data modelling activities. Such modelling involves investigations of meaningful phenomena, deciding what is worthy of attention (i.e., identifying complex attributes), and then progressing to organising, structuring, visualising, and representing data. Results are reported from the first year of a three-year longitudinal study in which three classes of first-grade children and their teachers engaged in activities that required the creation of data models. The theme of “Looking after our Environment,” a component of the children’s science curriculum at the time, provided the context for the activities. Findings focus on how the children dealt with given complex attributes and how they generated their own attributes in classifying broad data sets, and the nature of the models the children created in organising, structuring, and representing their data.

Abstract:

Beginning around 2003, television studies has seen the growth of interest in the genre of reality shows. However, concentrating on this genre has tended to sideline the even more significant emergence of the program format as a central mode of business and culture in the new television landscape. "Localizing Global TV" redresses this balance, and heralds the emergence of an important, exciting and challenging area of television studies. Topics explored include reality TV, makeover programs, sitcoms, talent shows and fiction serials, as well as broadcaster management policies, production decision chains and audience participation processes. This seminal work will be of considerable interest to media scholars internationally.

Abstract:

Building Information Modelling (BIM) is evolving in the construction industry as a successor to CAD. CAD is mostly a technical tool that conforms to existing industry practices; BIM, however, has the capacity to revolutionise industry practice. Rather than producing representations of design intent, BIM produces an exact virtual prototype of any building that, in an ideal situation, is centrally stored and freely exchanged between the project team, facilitating collaboration and allowing experimentation in design. Exposing design students to this technology through their formal studies allows them to engage with cutting-edge industry practices and to help shape the industry upon their graduation. Since this technology is relatively new to the construction industry, there are no accepted models for how to “teach” BIM effectively at university level. Developing learning models to enable students to make the most out of their learning with BIM presents significant challenges to those teaching in the field of design. To date there are also no studies of students’ experiences of using this technology. This research reports on the introduction of Building Information Modelling (BIM) software into a second year Bachelor of Design course. This software has the potential to change industry standards through its ability to revolutionise the work practices of those involved in large scale design projects. Students’ understandings and experiences of using the software in order to complete design projects as part of their assessment are reported here. In-depth semi-structured interviews with six students revealed that students held views of the software that ranged from novice to sophisticated. They had variations in understanding of how the software could be used to complete course requirements, to assist with the design process, and in the workplace. They had engaged in limited exploration of the collaborative potential of the software as a design tool.
Their understanding of the significance of BIM for the workplace was also variable. The results indicate that students are beginning to develop an appreciation for how BIM could aid or constrain the work of designers, but that this appreciation is highly varied and likely to be dependent on the students’ previous experiences of working in a design studio environment. Their range of understandings of the significance of the technology is a reflection of their level of development as designers (they are “novice” designers). The results also indicate that there is a need for subjects in later years of the course that allow students to specialise in the area of digital design and to develop more sophisticated views of the role of technology in the design process. There is also a need to capitalise on the collaborative potential inherent in the software in order to realise its capability to streamline some aspects of the design process. As students become more sophisticated designers we should explore their understanding of the role of technology as a design tool in more depth in order to make recommendations for improvements to teaching and learning practice related to BIM and other digital design tools.

Abstract:

This study is the first to investigate the effect of prolonged reading on reading performance and visual functions in students with low vision. The study focuses on one of the most common modes of achieving adequate magnification for reading by students with low vision: their close reading distance (proximal or relative distance magnification). Close reading distances impose high demands on near visual functions, such as accommodation and convergence. Previous research on accommodation in children with low vision shows that their accommodative responses are reduced compared to normal vision. In addition, there is an increased lag of accommodation for higher stimulus levels, as may occur at close reading distances. Reduced accommodative responses in low vision and a higher lag of accommodation at close reading distances together could impact on the reading performance of students with low vision, especially during prolonged reading tasks. The presence of convergence anomalies could further affect reading performance. Therefore, the aims of the present study were: 1) to investigate the effect of prolonged reading on reading performance in students with low vision; and 2) to investigate the effect of prolonged reading on visual functions in students with low vision. This study was conducted as cross-sectional research on 42 students with low vision and a comparison group of 20 students with normal vision, aged 7 to 20 years. The students with low vision had vision impairments arising from a range of causes and represented a typical group of students with low vision, with no significant developmental delays, attending school in Brisbane, Australia. All participants underwent a battery of clinical tests before and after a prolonged reading task.
An initial reading-specific history and pre-task measurements that included Bailey-Lovie distance and near visual acuities, Pelli-Robson contrast sensitivity, ocular deviations, sensory fusion, ocular motility, near point of accommodation (pull-away method), accuracy of accommodation (Monocular Estimation Method (MEM) retinoscopy) and Near Point of Convergence (NPC, push-up method) were recorded for all participants. Reading performance measures were Maximum Oral Reading Rates (MORR), Near Text Visual Acuity (NTVA) and acuity reserves, using Bailey-Lovie text charts. Symptoms of visual fatigue were assessed using the Convergence Insufficiency Symptom Survey (CISS) for all participants. Pre-task measurements of reading performance, accuracy of accommodation and NPC were compared with post-task measurements to test for any effects of prolonged reading. The prolonged reading task involved reading a storybook silently for at least 30 minutes. The task was controlled for print size, contrast, difficulty level and content of the reading material. Silent Reading Rate (SRR) was recorded every 2 minutes during prolonged reading. Symptom scores and visual fatigue scores were also obtained for all participants. A visual fatigue analogue scale (VAS) was used to assess visual fatigue at the beginning, middle and end of the task. In addition to the subjective assessments of visual fatigue, tonic accommodation was monitored using a photorefractor (PlusoptiX CR03™) every 6 minutes during the task as an objective assessment of visual fatigue. Reading measures were taken at the habitual reading distance of students with low vision and at 25 cm for students with normal vision. The initial history showed that the students with low vision read for significantly shorter periods at home compared to the students with normal vision.
The working distances of participants with low vision ranged from 3–25 cm, and half of them were not using any optical devices for magnification. Nearly half of the participants with low vision were able to resolve 8-point print (1M) at 25 cm. Half of the participants in the low vision group had ocular deviations and suppression at near. Reading rates were significantly reduced in students with low vision compared to those of students with normal vision. In addition, there was a significantly larger number of participants in the low vision group who could not sustain the 30-minute task compared to the normal vision group. However, there were no significant changes in reading rates during or following prolonged reading in either the low vision or normal vision groups. Individual changes in reading rates were independent of baseline reading rates, indicating that changes in reading rates during prolonged reading cannot be predicted from a typical clinical assessment of reading using brief reading tasks. Contrary to previous reports, the silent reading rates of the students with low vision were significantly lower than their oral reading rates, although oral and silent reading were assessed using different methods. Although visual acuity, contrast sensitivity, near point of convergence and accuracy of accommodation were significantly poorer for the low vision group compared to those of the normal vision group, there were no significant changes in any of these visual functions following prolonged reading in either group. Interestingly, a few students with low vision (n = 10) were found to be reading at a distance closer than their near point of accommodation. This suggests a decreased sensitivity to blur.
Further evaluation revealed that the equivalent intrinsic refractive errors (an estimate of the spherical dioptric defocus which would be expected to yield a patient’s visual acuity in normal subjects) were significantly larger for the low vision group compared to those of the normal vision group. As expected, accommodative responses were significantly reduced for the low vision group compared to the expected norms, which is consistent with their close reading distances, reduced visual acuity and contrast sensitivity. For those in the low vision group who had an accommodative error exceeding their equivalent intrinsic refractive errors, a significant decrease in MORR was found following prolonged reading. The silent reading rates, however, were not significantly affected by accommodative errors in the present study. Suppression also had a significant impact on the changes in reading rates during prolonged reading. The participants who did not have suppression at near showed significant decreases in silent reading rates during and following prolonged reading. This impact of binocular vision at near on prolonged reading was possibly due to the high demands on convergence. The significant predictors of MORR in the low vision group were age, NTVA, reading interest and reading comprehension, accounting for 61.7% of the variance in MORR. SRR was not significantly influenced by any factors, except for the duration of the reading task sustained; participants with higher reading rates were able to sustain a longer reading duration. In students with normal vision, age was the only predictor of MORR. Participants with low vision also reported significantly greater visual fatigue compared to the normal vision group. Measures of tonic accommodation, however, were little influenced by visual fatigue in the present study. Visual fatigue analogue scores were found to be significantly associated with reading rates in students with low vision and normal vision.
However, the patterns of association between visual fatigue and reading rates were different for SRR and MORR. The participants with low vision with higher symptom scores had lower SRRs and participants with higher visual fatigue had lower MORRs. As hypothesized, visual functions such as accuracy of accommodation and convergence did have an impact on prolonged reading in students with low vision, for students whose accommodative errors were greater than their equivalent intrinsic refractive errors, and for those who did not suppress one eye. Those students with low vision who have accommodative errors higher than their equivalent intrinsic refractive errors might significantly benefit from reading glasses. Similarly, considering prisms or occlusion for those without suppression might reduce the convergence demands in these students while using their close reading distances. The impact of these prescriptions on reading rates, reading interest and visual fatigue is an area of promising future research. Most importantly, it is evident from the present study that a combination of factors such as accommodative errors, near point of convergence and suppression should be considered when prescribing reading devices for students with low vision. Considering these factors would also assist rehabilitation specialists in identifying those students who are likely to experience difficulty in prolonged reading, which is otherwise not reflected during typical clinical reading assessments.

Abstract:

In mid 2007, the Australian Learning and Teaching Council (ALTC), formerly the Carrick Institute for Learning and Teaching in Higher Education, commissioned an intensive research project to examine the use of ePortfolios by university students in Australia. The project was awarded to a consortium of four universities: Queensland University of Technology as lead institution, The University of Melbourne, University of New England and University of Wollongong.

The overarching aim of the research project, which was given the working title of the Australian ePortfolio Project, was to examine the current levels of ePortfolio practice in Australian higher education. The principal project goals sought to provide an overview and analysis of the national and international ePortfolio contexts, document the types of ePortfolios used in Australian higher education, examine the relationship with the National Diploma Supplement project funded by the Federal government, identify any significant issues relating to ePortfolio implementation, and offer guidance about future opportunities for ePortfolio development. The research findings revealed that there was a high level of interest in the use of ePortfolios in the context of higher education, particularly in terms of the potential to help students become reflective learners who are conscious of their personal and professional strengths and weaknesses, as well as to make their existing and developing skills more explicit. There were some good examples of early adoption in different institutions, although this tended to be distributed across the sector. The greatest use of ePortfolios was recorded in coursework programs, rather than in research programs, with implementation generally reflecting subject-specific or program-based activity, as opposed to faculty- or university-wide activity.
Accordingly, responsibility for implementation frequently rested with the individual teaching unit, although an alternative centralised model of coordination by ICT services, careers and employment or teaching and learning support was beginning to emerge. The project report concludes with a series of recommendations to guide the process, drawing on the need for open dialogue and effective collaboration between the stakeholders across the range of contexts: government policy, international technical standards, academic policy, and learning and teaching research and practice.

Abstract:

The increase in life expectancy worldwide during the last three decades has increased age-related disability, leading to the risk of loss of quality of life. How to improve quality of life, including physical and mental health, for older people and optimise their life potential has become an important health issue. This study used the Theory of Planned Behaviour to examine factors influencing health behaviours and their relationship with quality of life. A cross-sectional mailed survey of 1300 Australians over 50 years was conducted at the beginning of 2009, with 730 completed questionnaires returned (response rate 63%). Preliminary analysis reveals that physiological changes of old age, especially increasing waist circumference and comorbidity, were closely related to health status, especially a worse physical health summary score. Physical activity was the least adhered-to behaviour among the respondents, compared to eating healthy food and taking medication regularly as prescribed. An increasing number of older people living alone with comorbid disease may face barriers that influence their attitude and self-control toward physical activity. A multidisciplinary and integrated approach including hospital and non-hospital care is required to provide appropriate services and facilities for older people.

Abstract:

This paper reports on the opportunities for transformational learning experienced by a group of pre-service teachers who were engaged in service-learning as a pedagogical process with a focus on reflection. Critical social theory informed the design of the reflection process as it enabled a move away from knowledge transmission toward knowledge transformation. The structured reflection log was designed to illustrate the critical social theory expectations of quality learning that teach students to think critically: ideology critique and utopian critique. Butin's lenses and a reflection framework informed by the work of Bain, Ballantyne, Mills and Lester were used in the design of the service-learning reflection log. Reported data provide evidence of transformational learning and highlight how the students critique their world and imagine how they could contribute to a better world in their work as a beginning teacher.

Abstract:

In the quest for shorter time-to-market, higher quality and reduced cost, model-driven software development has emerged as a promising approach to software engineering. The central idea is to promote models to first-class citizens in the development process. Starting from a set of very abstract models in the early stage of the development, they are refined into more concrete models and finally, as a last step, into code. As early phases of development focus on different concepts compared to later stages, various modelling languages are employed to most accurately capture the concepts and relations under discussion. In light of this refinement process, translating between modelling languages becomes a time-consuming and error-prone necessity. This is remedied by model transformations providing support for reusing and automating recurring translation efforts. These transformations typically can only be used to translate a source model into a target model, but not vice versa. This poses a problem if the target model is subject to change. In this case the models get out of sync and therefore do not constitute a coherent description of the software system anymore, leading to erroneous results in later stages. This is a serious threat to the promised benefits of quality, cost-saving, and time-to-market. Therefore, providing a means to restore synchronisation after changes to models is crucial if the model-driven vision is to be realised. This process of reflecting changes made to a target model back to the source model is commonly known as Round-Trip Engineering (RTE). While there are a number of approaches to this problem, they impose restrictions on the nature of the model transformation. Typically, in order for a transformation to be reversed, for every change to the target model there must be exactly one change to the source model. 
While this makes synchronisation relatively “easy”, it is ill-suited for many practically relevant transformations as they do not have this one-to-one character. To overcome these issues and to provide a more general approach to RTE, this thesis puts forward an approach in two stages. First, a formal understanding of model synchronisation on the basis of non-injective transformations (where a number of different source models can correspond to the same target model) is established. Second, detailed techniques are devised that allow the implementation of this understanding of synchronisation. A formal underpinning for these techniques is drawn from abductive logic reasoning, which allows the inference of explanations from an observation in the context of a background theory. As non-injective transformations are the subject of this research, there might be a number of changes to the source model that all equally reflect a certain target model change. To help guide the procedure in finding “good” source changes, model metrics and heuristics are investigated. Combining abductive reasoning with best-first search and a “suitable” heuristic enables efficient computation of a number of “good” source changes. With this procedure Round-Trip Engineering of non-injective transformations can be supported.
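Under the assumptions above, the combination of abduction and best-first search can be sketched on a deliberately tiny non-injective "transformation". Everything here is illustrative: the thesis operates on model transformations, not integers, and the candidate generator and distance heuristic below stand in for abductive inference over a background theory and for model metrics, respectively.

```python
import heapq

# A non-injective transformation: a source value and its negation both
# map to the same target, so a target change admits several source
# "explanations" and the transformation cannot simply be inverted.
def transform(source):
    return abs(source)

def candidate_sources(new_target):
    # Abduction stand-in: enumerate every source model that would
    # explain (i.e. transform to) the observed target value.
    return {new_target, -new_target}

def best_explanations(old_source, new_target, k=1):
    # Best-first search ranked by a heuristic: prefer source changes
    # that disturb the existing source model as little as possible.
    heap = [(abs(c - old_source), c) for c in candidate_sources(new_target)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(min(k, len(heap)))]

# The target changes from 3 to 5 while the old source was -3; of the two
# candidates {5, -5}, the heuristic selects -5 as the smaller change.
repaired = best_explanations(-3, 5)  # -> [-5]
```

Asking for k best explanations rather than one mirrors the thesis's point that several "good" source changes may equally reflect a target change, leaving the final choice to the engineer.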

Abstract:

Building on the investigation of the Charity Commission (2009) into the effects of the economic downturn on the largest trusts and foundations in the United Kingdom, the purpose of this research was to assess the extent to which Australian trusts and foundations were taking an actively strategic approach to their investments and pursuit of mission (including grant-making), and the relationship between the two in the context of the economic downturn. Focus was given to identifying the issues raised as a consequence of the economic downturn, rather than providing a generalised snapshot of the ‘average’ foundation’s response. In September 2009, semi-structured, in-depth interviews were conducted with executives of 23 grant-making trusts and foundations. The interviews for this research focused on the largest grant-makers in terms of grant expenditure, but included foundations from different geographical locations and from across different cause areas. It is important to stress at the outset that this was not a representative sample of foundations; the study aimed to identify issues rather than to present a representative picture of the ‘average’ foundation’s response. It is also important to note that the study was undertaken in September 2009, at a time when many foundations were beginning to feel more optimistic about the longer term future, but were aware of continuing and possibly worsening short term income problems. But whatever the financial future, some of the underlying issues concerning investment and grant-making management practices raised in this report will be of continuing relevance and worthy of wider discussion. If a crisis is too good to waste, it is also too good to forget. One other introductory point: as previously noted, interviews for this study were conducted in September 2009, just one month prior to the introduction of the new Private Ancillary Fund (PAF) legislation which replaced the previous Prescribed Private Fund (PPF) arrangement.
References to PAFs and/or PPFs reflect that time.