514 results for uses


Relevance: 10.00%

Abstract:

This thesis develops a critical realist explanatory critique of alternative schooling programs for youth at risk taking place at three case study sites. Throughout the thesis the author pursues the question, "Are alternative provisions of schooling working academically and socially for youth at risk?" The academic lens targets literacy learning and associated pedagogies. Social outcomes are posited as positive social behaviours and continued engagement in learning. A four-phase analysis, drawing on critical realism, interpretive and subject-specific theories, is used to elicit explanations for the research question. The overall framework is a critical realist methodology as set out by Danermark, Ekstrom, Jakobsen and Karlsson (2002, p. 129). Consequently, phase one describes the phenomena of alternative schooling programs taking place at three case study sites. This is reported first as staff narratives that are resolved into imaginable historical causal components of "generative events", "prior schooling structures", "models of alternative schooling", "purpose", "individual agency", and "relations with linked community organisations". Then transcendental questions are posed about each component using retroduction to uncover structures, underlying mechanisms and powers, and individual agency. In the second phase the researcher uses a modified grounded theory methodology to theoretically redescribe causal categories related to a "needed different teaching and administrative approach" that emerged from the previous critique. A transcendental question is then applied to this redescription. The research phenomena are again theoretically redescribed in the third phase, this time using three theoretically based constructs associated with literacy and literacy pedagogies: the NRS, the 4 Resources Model, and Productive Pedagogies. This redescription is again questioned in terms of its core or "necessary" components.
The fourth phase makes an explanatory critique by comparing and critiquing all previous explanations, recontextualising them in a wider macro reality of alternative schooling. Through this critical realist explanatory critiquing process, a response emerges not only to whether alternative provisions of schooling are working, but also how they are working, and how they are not working, with realistically based implications for future improvement.

Relevance: 10.00%

Abstract:

Value creation is an area of long-standing importance in the marketing field, yet little is known about the value construct itself. In social marketing, value can be regarded as an incentive for consumers to perform desirable behaviours that lead to both greater social good and individual benefit. An understanding of customer value in the consumption of social products is an important aspect of designing social marketing interventions that can effectively change social behaviours. This paper uses qualitative data, gathered during in-depth interviews, to explore the value dimensions women experience from using government-provided breast screening services every two years. Thematic analysis revealed that emotional, functional, social and altruistic dimensions of value were present in women's experiences with these services, as well as in the outcomes from using them.

Relevance: 10.00%

Abstract:

In condition-based maintenance (CBM), effective diagnostics and prognostics are essential tools for maintenance engineers to identify imminent faults and to predict the remaining useful life before components finally fail. This enables remedial actions to be taken in advance and production to be rescheduled if necessary. This paper presents a technique for accurate assessment of the remnant life of machines based on historical failure knowledge embedded in a closed-loop diagnostic and prognostic system. The technique uses the Support Vector Machine (SVM) classifier for both fault diagnosis and evaluation of the health stages of machine degradation. To validate the feasibility of the proposed model, data at five different levels for four typical faults from High-Pressure Liquefied Natural Gas (HP-LNG) pumps were used for multi-class fault diagnosis. In addition, two sets of impeller-rub data were analysed and employed to predict the remnant life of the pump based on estimation of health state. The results obtained were very encouraging and showed that the proposed prognosis system has the potential to be used as an estimation tool for machine remnant life prediction in real-life industrial applications.
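The diagnosis step in the abstract rests on a standard SVM classifier. As a rough illustration of the idea (not the authors' pipeline), the sketch below trains a minimal linear SVM by hinge-loss subgradient descent on synthetic two-feature "healthy vs faulty" vibration data; the feature names and values are invented for the example, and a real multi-class system would wrap several such classifiers one-vs-rest.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM: hinge loss + L2 penalty, subgradient descent.
    Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                         # hinge active: push the point out
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                                  # only weight decay
                w = (1 - lr * lam) * w
    return w, b

# Synthetic health-state features (e.g. RMS level and kurtosis of vibration);
# the cluster centres and spread are arbitrary assumptions.
rng = np.random.default_rng(1)
healthy = rng.normal([0.0, 0.0], 0.3, size=(50, 2))
faulty = rng.normal([2.0, 2.0], 0.3, size=(50, 2))
X = np.vstack([healthy, faulty])
y = np.array([-1] * 50 + [1] * 50)

w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
accuracy = (pred == y).mean()
```

On well-separated synthetic clusters like these the classifier separates the classes cleanly; real pump data would of course need kernels and careful feature extraction.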

Relevance: 10.00%

Abstract:

Purpose - The paper describes a workforce-planning model developed in-house in an Australian university library, based on rigorous environmental scanning of the institution, the profession and the sector.
Design/methodology/approach - The paper uses a case study that describes the stages of the planning process undertaken to develop the Library's Workforce Plan and the documentation produced.
Findings - While the process has had successful and productive outcomes, workforce planning is an ongoing process. To remain effective, the workforce plan needs to be reviewed annually in the context of the library's overall planning program. This is imperative if the plan is to remain current and to be regarded as a living document that will continue to guide library practice.
Research limitations/implications - Although a single case study, the work has been contextualized within the wider research into workforce planning.
Practical implications - The paper provides a model that can easily be deployed within a library without external or specialist consultant skills and, due to its scalability, can be applied at department or wider level.
Originality/value - The paper identifies the trends impacting on, and the emerging opportunities for, university libraries and provides a model for workforce planning that recognizes the context and culture of the organization as key drivers in determining workforce planning.
Keywords - Australia, University libraries, Academic libraries, Change management, Manpower planning
Paper type - Case study

Relevance: 10.00%

Abstract:

The purpose of this study is to investigate how secondary school media educators might best meet the needs of students who prefer practical production work to theory work in media studies classrooms. This is a significant problem for a curriculum area that claims to develop students' media literacies by providing them with critical frameworks and a metalanguage for thinking about the media. It is a problem that seems to have become more urgent with the availability of new media technologies and forms like video games. The study is located in the field of media education, which tends to draw on structuralist understandings of the relationships between young people and media and suggests that students can be empowered to resist media's persuasive discourses. Recent theoretical developments suggest too little emphasis has been placed on the participatory aspects of young people playing with, creating and gaining pleasure from media. This study contributes to this participatory approach by bringing post-structuralist perspectives to the field, which have been absent from studies of secondary school media education. I suggest theories of media learning must take account of the ongoing formation of students' subjectivities as they negotiate social, cultural and educational norms. Michel Foucault's theory of technologies of the self and Judith Butler's theories of performativity and recognition are used to develop an argument that media learning occurs in the context of students negotiating various ethical systems as they establish their social viability through achieving recognition within communities of practice. The concept of ethical systems has been developed for this study by drawing on Foucault's theories of discourse and truth regimes and Butler's updating of Althusser's theory of interpellation.
This post-structuralist approach makes it possible to investigate the ways in which students productively repeat and vary norms to creatively do and undo the various media learning activities with which they are required to engage. The study focuses on a group of Year 10 students in an all-boys Catholic urban school in Australia who undertook learning about video games in a three-week intensive immersion program. The analysis examines the ethical systems operating in the classroom, including formal systems of schooling, informal systems of popular cultural practice and systems of masculinity. It also examines the students' use of semiotic resources to repeat and/or vary norms while reflecting on, discussing, designing and producing video games. The key findings of the study are that students are motivated to learn technology skills and production processes rather than theory work. This motivation stems from the students' desire to become recognisable in communities of technological and masculine practice. However, student agency is possible not only through critical responses to media, but also through performative variation of norms through creative ethical practices as students participate with new media technologies. Therefore, opportunities exist for media educators to create the conditions for variation of norms through production activities. The study offers several implications for media education theory and practice, including: the productive possibilities of post-structuralism for informing ways of doing media education; the importance of media teachers having the autonomy to creatively plan curriculum; the advantages of media and technology teachers collaborating to draw on a broad range of resources to develop curriculum; the benefits of placing more emphasis on students' creative uses of media; and the advantages of blending formal classroom approaches to media education with less formal out-of-school experiences.

Relevance: 10.00%

Abstract:

We propose an efficient, low-complexity scheme for estimating and compensating clipping noise in OFDMA systems. Conventional clipping noise estimation schemes, which need all demodulated data symbols, may become infeasible in OFDMA systems, where a specific user may only know his own modulation scheme. The proposed scheme first uses the equalized output to identify a limited number of candidate clips, and then exploits the information on known subcarriers to reconstruct the clipped signal. Simulation results show that the proposed scheme can significantly improve system performance.
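For readers unfamiliar with the underlying problem: amplitude-clipping a high-peak OFDM time-domain signal injects distortion that spreads across all subcarriers, which is what a receiver-side estimator must undo. The sketch below only reproduces this effect on a synthetic QPSK/OFDM symbol; it does not implement the paper's identification-and-reconstruction scheme, and the subcarrier count and clipping ratio are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                                            # number of subcarriers
# Unit-power QPSK symbols on every subcarrier.
bits = rng.integers(0, 4, N)
X = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

# Time-domain OFDM symbol, scaled to unit average power.
x = np.fft.ifft(X) * np.sqrt(N)

# Hard amplitude clipping at 1.2x the RMS level (a fairly aggressive ratio).
clip_level = 1.2 * np.sqrt(np.mean(np.abs(x) ** 2))
mag = np.abs(x)
x_clipped = np.where(mag > clip_level,
                     clip_level * x / np.maximum(mag, 1e-12), x)

# Frequency-domain clipping noise: the distortion seen on each subcarrier.
D = np.fft.fft(x_clipped - x) / np.sqrt(N)

num_clips = int(np.sum(mag > clip_level))          # how many samples were clipped
evm = np.sqrt(np.mean(np.abs(D) ** 2))             # distortion spread over all carriers
```

Even though only a few dozen time-domain samples are clipped, `D` is nonzero on essentially every subcarrier, which is why per-user estimation from a limited set of known subcarriers is the interesting problem.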

Relevance: 10.00%

Abstract:

Emissions from airport operations are of significant concern because of their potential impact on local air quality and human health. The currently limited scientific knowledge of aircraft emissions is an important issue worldwide when considering air pollution associated with airport operation, and this is especially so for ultrafine particles. This limited knowledge is due to scientific complexities associated with measuring aircraft emissions during normal operations on the ground. In particular, this type of research has required the development of novel sampling techniques which must take into account aircraft plume dispersion and dilution, as well as the various particle dynamics that can affect measurements of the engine plume from an operational aircraft. To address this scientific problem, a novel mobile emission measurement method called the Plume Capture and Analysis System (PCAS) was developed and tested. The PCAS permits the capture and analysis of aircraft exhaust during ground-level operations including landing, taxiing, takeoff and idle. The PCAS uses a sampling bag to temporarily store a sample, providing sufficient time for sensitive but slow instrumental techniques to be employed to measure gas and particle emissions simultaneously and to record detailed particle size distributions. The challenges in relation to the development of the technique include complexities associated with the assessment of the various particle loss and deposition mechanisms which are active during storage in the PCAS. Laboratory-based assessment of the method showed that the bag sampling technique can be used to accurately measure particle emissions (e.g. particle number, mass and size distribution) from a moving aircraft or vehicle. Further assessment of the sensitivity of PCAS results to distance from the source and plume concentration was conducted in the airfield with taxiing aircraft.
The results showed that the PCAS is a robust method capable of capturing the plume in only 10 seconds. The PCAS is able to account for aircraft plume dispersion and dilution at distances of 60 to 180 meters downwind of a moving aircraft, along with particle deposition loss mechanisms during the measurements. Characterization of the plume in terms of particle number, mass (PM2.5), gaseous emissions and particle size distribution takes only 5 minutes, allowing large numbers of tests to be completed in a short time. The results were broadly consistent and compared well with the available data. Comprehensive measurements and analyses of the aircraft plumes during various modes of the landing and takeoff (LTO) cycle (e.g. idle, taxi, landing and takeoff) were conducted at Brisbane Airport (BNE). Gaseous (NOx, CO2) emission factors, particle number and mass (PM2.5) emission factors and size distributions were determined for a range of Boeing and Airbus aircraft, as a function of aircraft type and engine thrust level. The scientific complexities, including the analysis of the often multimodal particle size distributions to describe the contributions of different particle source processes during the various stages of aircraft operation, were addressed through comprehensive data analysis and interpretation. The measurement results were used to develop an inventory of aircraft emissions at BNE, including all modes of the aircraft LTO cycle and ground running procedures (GRP). Measurement of the actual duration of aircraft activity in each mode of operation (time-in-mode) and compilation of a comprehensive matrix of gas and particle emission rates as a function of aircraft type and engine thrust level for real-world situations were crucial for developing the inventory. The significance of the resulting matrix of emission rates in this study lies in the estimate it provides of the annual particle emissions due to aircraft operations, especially in terms of particle number.
In summary, this PhD thesis presents for the first time a comprehensive study of the particle and NOx emission factors and rates along with the particle size distributions from aircraft operations and provides a basis for estimating such emissions at other airports. This is a significant addition to the scientific knowledge in terms of particle emissions from aircraft operations, since the standard particle number emissions rates are not currently available for aircraft activities.

Relevance: 10.00%

Abstract:

We consider one-round key exchange protocols secure in the standard model. The security analysis uses the powerful security model of Canetti and Krawczyk and a natural extension of it to the ID-based setting. It is shown how KEMs can be used in a generic way to obtain two different protocol designs with progressively stronger security guarantees. A detailed analysis of the performance of the protocols is included; surprisingly, when instantiated with specific KEM constructions, the resulting protocols are competitive with the best previous schemes that have proofs only in the random oracle model.
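The generic "KEM to one-round key exchange" composition the abstract refers to can be sketched as follows: each party encapsulates to the other's long-term public key, the two messages cross in a single round, and the session key is derived from both shared secrets. The toy DH-based KEM below uses a textbook group (p = 23, g = 5) and is deliberately insecure; it illustrates only the composition, not the paper's actual protocols or their security model.

```python
import hashlib
import secrets

# Toy DH-based KEM over a tiny textbook group -- INSECURE, illustration only.
P, G = 23, 5

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return sk, pk

def kem_encap(pk):
    """Encapsulate: ciphertext c = g^r, shared key K = H(pk^r)."""
    r = secrets.randbelow(P - 2) + 1
    c = pow(G, r, P)
    k = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()
    return c, k

def kem_decap(sk, c):
    """Decapsulate: K = H(c^sk), which equals H(pk^r)."""
    return hashlib.sha256(str(pow(c, sk, P)).encode()).digest()

# One-round protocol: both encapsulations can be sent simultaneously,
# since each only needs the peer's long-term public key.
sk_a, pk_a = kem_keygen()
sk_b, pk_b = kem_keygen()

c_to_b, k1_a = kem_encap(pk_b)      # A -> B
c_to_a, k2_b = kem_encap(pk_a)      # B -> A

k2_a = kem_decap(sk_a, c_to_a)      # A recovers B's contribution
k1_b = kem_decap(sk_b, c_to_b)      # B recovers A's contribution

# Session key derived from both shared secrets.
session_a = hashlib.sha256(k1_a + k2_a).hexdigest()
session_b = hashlib.sha256(k1_b + k2_b).hexdigest()
```

Both parties derive the same session key because each decapsulation recovers exactly the key its peer encapsulated; real instantiations would use a CCA-secure KEM and bind identities and transcripts into the key derivation.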

Relevance: 10.00%

Abstract:

This thesis investigates the problem of robot navigation using only landmark bearings. The proposed system allows a robot to move to a ground target location specified by the sensor values observed at this ground target position. The control actions are computed based on the difference between the current landmark bearings and the target landmark bearings. No Cartesian coordinates with respect to the ground are computed by the control system. The robot navigates using solely information from the bearing sensor space. Most existing robot navigation systems require a ground frame (2D Cartesian coordinate system) in order to navigate from a ground point A to a ground point B. The commonly used sensors such as laser range scanner, sonar, infrared, and vision do not directly provide the 2D ground coordinates of the robot. The existing systems use the sensor measurements to localise the robot with respect to a map, a set of 2D coordinates of the objects of interest. It is more natural to navigate between the points in the sensor space corresponding to A and B without requiring the Cartesian map and the localisation process. Research on animals has revealed how insects are able to exploit very limited computational and memory resources to successfully navigate to a desired destination without computing Cartesian positions. For example, a honeybee balances the left and right optical flows to navigate in a narrow corridor. Unlike many other ants, Cataglyphis bicolor does not secrete pheromone trails in order to find its way home but instead uses the sun as a compass to keep track of its home direction vector. The home vector can be inaccurate, so the ant also uses landmark recognition. More precisely, it takes snapshots and compass headings of some landmarks. To return home, the ant tries to line up the landmarks exactly as they were before it started wandering. This thesis introduces a navigation method based on reflex actions in sensor space.
The sensor vector is made of the bearings of some landmarks, and the reflex action is a gradient descent with respect to the distance in sensor space between the current sensor vector and the target sensor vector. Our theoretical analysis shows that, except for some fully characterized pathological cases, any point is reachable from any other point by reflex action in the bearing sensor space provided the environment contains three landmarks and is free of obstacles. The trajectories of a robot using reflex navigation, like other image-based visual control strategies, do not necessarily correspond to the shortest paths on the ground, because it is the sensor error that is minimized, not the moving distance on the ground. However, we show that the use of a sequence of waypoints in sensor space can address this problem. In order to identify relevant waypoints, we train a Self Organising Map (SOM) from a set of observations uniformly distributed with respect to the ground. This SOM provides a sense of location to the robot, and allows a form of path planning in sensor space. The proposed navigation system is analysed theoretically, and evaluated both in simulation and with experiments on a real robot.
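The reflex action described above, gradient descent on the distance in bearing space, can be sketched numerically. The simulation below uses three assumed landmark positions, a finite-difference gradient, and a shrinking step size with backtracking; it illustrates the idea only and is not the thesis implementation.

```python
import numpy as np

# Three fixed landmarks in the plane (assumed positions, illustration only).
LANDMARKS = np.array([[0.0, 10.0], [10.0, 0.0], [-8.0, -6.0]])

def bearings(pos):
    """Bearing angle from `pos` to each landmark."""
    d = LANDMARKS - pos
    return np.arctan2(d[:, 1], d[:, 0])

def angle_diff(a, b):
    """Angular differences wrapped into (-pi, pi]."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

def sensor_error(pos, target_b):
    """Squared distance in bearing (sensor) space -- no ground coordinates used."""
    return float(np.sum(angle_diff(bearings(pos), target_b) ** 2))

def reflex_navigate(start, target, step=0.5, max_iter=500):
    """Descend the sensor-space error via a finite-difference gradient."""
    target_b = bearings(np.asarray(target, float))
    pos = np.asarray(start, float)
    err = sensor_error(pos, target_b)
    eps = 1e-5
    for _ in range(max_iter):
        grad = np.array([
            (sensor_error(pos + [eps, 0], target_b)
             - sensor_error(pos - [eps, 0], target_b)) / (2 * eps),
            (sensor_error(pos + [0, eps], target_b)
             - sensor_error(pos - [0, eps], target_b)) / (2 * eps),
        ])
        gn = np.linalg.norm(grad)
        if gn < 1e-9:
            break
        trial = pos - step * grad / gn            # unit step along -gradient
        trial_err = sensor_error(trial, target_b)
        if trial_err < err:
            pos, err = trial, trial_err
        else:
            step *= 0.5                            # backtrack: shrink the reflex step
            if step < 1e-4:
                break
    return pos

final = reflex_navigate(start=(4.0, 5.0), target=(1.0, 2.0))
```

With three bearings the error is generically zero only at the target itself, so driving the sensor error down steers the robot home, even though the resulting ground path need not be the shortest one.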

Relevance: 10.00%

Abstract:

An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the user's needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness. Term-based approaches are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches have to deal with low-frequency pattern issues. The measures used by data mining techniques (for example, support and confidence) to learn the profile have turned out to be unsuitable for filtering; they can lead to a mismatch problem. This thesis uses rough set-based reasoning (term-based) and a pattern mining approach as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of threshold setting have been developed by using rough set decision theory.
The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented. A new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, so that the most likely relevant documents are assigned higher scores. Because a relatively small number of documents is left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system is improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on the well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model and the state-of-the-art term-based models, including BM25, Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
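The two-stage structure (coarse topic filtering, then precision-oriented ranking of the survivors) can be illustrated with a deliberately tiny toy, in which stage one thresholds a cheap term-overlap score against the profile and stage two re-ranks the survivors with a frequency-weighted score. The profile terms, documents and threshold are all invented; the thesis's rough-set threshold model and pattern-taxonomy ranking are far more elaborate.

```python
# Toy two-stage filter over a hypothetical user profile.
profile = {"pattern", "mining", "filtering", "taxonomy"}

docs = {
    "d1": "pattern mining for information filtering with a pattern taxonomy",
    "d2": "a history of medieval castles",
    "d3": "data mining and text filtering techniques",
}

def topic_score(text):
    """Stage 1: cheap set-overlap score between document and profile."""
    return len(set(text.split()) & profile) / len(profile)

def rank_score(text):
    """Stage 2: finer, frequency-weighted score for the surviving documents."""
    words = text.split()
    return sum(words.count(t) for t in profile) / len(words)

THRESHOLD = 0.5                                   # assumed cut-off for stage 1

survivors = {d: t for d, t in docs.items() if topic_score(t) >= THRESHOLD}
ranked = sorted(survivors, key=lambda d: rank_score(survivors[d]), reverse=True)
```

Stage one discards the clearly irrelevant document cheaply, so the more expensive stage-two scoring only runs on the small surviving set, which mirrors the efficiency argument made in the abstract.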

Relevance: 10.00%

Abstract:

The CDIO Initiative has been globally recognised as an enabler for engineering education reform. With the CDIO process, the CDIO Standards and the CDIO Syllabus, many scholarly contributions have been made around cultural change, curriculum reform and learning environments. In the Australasian region, reform is gaining significant momentum within the engineering education community, the profession, and higher education institutions. This paper presents the CDIO Syllabus cast into the Australian context by mapping it to the Engineers Australia Graduate Attributes, the Washington Accord Graduate Attributes and the Queensland University of Technology Graduate Capabilities. Furthermore, in recognition that many secondary schools and technical training institutions offer introductory engineering technology subjects, this paper presents an extended self-rating framework suited to recognising developing levels of proficiency at a preparatory level. The framework is consistent with conventional application to undergraduate programs and professional practice, but adapted for the preparatory context. As with the original CDIO framework with proficiency levels, this extended framework is informed by Bloom's taxonomy of educational objectives. A proficiency evaluation of the Queensland Studies Authority's Engineering Technology senior syllabus is demonstrated, indicating the proficiency levels embedded within this secondary school subject within a preparatory scope.
Through this extended CDIO framework, students and faculty have greater awareness of and access to tools to promote (i) student engagement in their own graduate capability development; (ii) faculty engagement in course and program design, through greater transparency and utility of the continuum of graduate capability development, with associated levels of proficiency, and the context in which they exist in terms of pre-tertiary engineering studies; and (iii) course maintenance and quality audit methodology for the purpose of continuous improvement processes and program accreditation.

Relevance: 10.00%

Abstract:

Sandy soils have low water and nutrient retention capabilities, so zeolite soil amendments are used for high-value land uses, including turf and horticulture, to reduce leaching losses of NH4+ fertilisers. MesoLite is a zeolitic material made by caustic treatment of kaolin at 80–95 °C. It has a moderately low surface area (9–12 m²/g) and a very high cation exchange capacity (494 cmol(+)/kg). Laboratory column experiments showed that an addition of 0.4% MesoLite to a sandy soil greatly (90%) reduced leaching of added NH4+ compared to an unamended soil, and that MesoLite is 11 times more efficient in retaining NH4+ than natural zeolite. Furthermore, NH4+-MesoLite slowly releases NH4+ to the soil solution and is likely to be an effective slow-release fertiliser.
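To put the reported cation exchange capacity in context, here is a back-of-envelope calculation of how much NH4+-N a 0.4% MesoLite amendment could in principle hold per kilogram of soil, under the optimistic assumption that the full CEC is available for NH4+ exchange (real availability would be lower):

```python
# Back-of-envelope NH4+ retention added by a 0.4% MesoLite amendment.
# Assumption: the entire CEC participates in NH4+ exchange.
CEC = 494.0          # cmol(+)/kg MesoLite, from the abstract
amendment = 0.004    # 0.4 % amendment by mass
M_N = 14.0           # g/mol, molar mass of nitrogen

added_cec = amendment * CEC            # cmol(+) per kg of amended soil
mol_nh4 = added_cec / 100.0            # NH4+ is monovalent: 1 cmol(+) = 0.01 mol
n_retained_mg = mol_nh4 * M_N * 1000.0 # mg NH4+-N per kg of amended soil
```

This works out to roughly 277 mg N per kg of soil, a useful order-of-magnitude check on why such a small (0.4%) addition can dominate NH4+ retention in a sandy soil.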

Relevance: 10.00%

Abstract:

We developed orthogonal least-squares techniques for fitting crystalline lens shapes, and used the bootstrap method to determine the uncertainties associated with the estimated vertex radii of curvature and asphericities of five different models. Three existing models were investigated, including one that uses two separate conics for the anterior and posterior surfaces, and two whole-lens models based on a modulated hyperbolic cosine function and on a generalized conic function. Two new models were proposed: one that uses two interdependent conics, and a polynomial-based whole-lens model. The models were used to describe the in vitro shape for a data set of twenty human lenses with ages 7–82 years. The two-conic-surface model (7 mm zone diameter) and the interdependent-surfaces model had significantly lower merit functions than the other three models for the data set, indicating that they can most likely describe human lens shape over a wide age range better than the other models (although the two-conic-surfaces model is unable to describe the lens equatorial region). Considerable differences were found between some models regarding estimates of radii of curvature and surface asphericities. The hyperbolic cosine model and the new polynomial-based whole-lens model had the best precision in determining the radii of curvature and surface asphericities across the five considered models. Most models found a significant increase in anterior, but not posterior, radius of curvature with age. Most models found a wide scatter of asphericities, with the asphericities usually being positive and not significantly related to age. As the interdependent-surfaces model had a lower merit function than the three whole-lens models, there is further scope to develop an accurate model of the complete shape of human lenses of all ages. The results highlight the continued difficulty in selecting an appropriate model for crystalline lens shape.
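The conic sagitta equation underlying the two-surface models can be fitted to profile data quite simply. The sketch below generates a synthetic surface from assumed parameters (R = 10 mm, Q = -0.5, both invented for the example) and recovers them by ordinary least squares over a parameter grid; note that the study itself minimises orthogonal (perpendicular) residuals, which this simplified vertical-residual stand-in does not.

```python
import numpy as np

def conic_sag(y, R, Q):
    """Sagitta of a conic surface:
    z = y^2 / (R * (1 + sqrt(1 - (1 + Q) * y^2 / R^2)))."""
    return y ** 2 / (R * (1 + np.sqrt(1 - (1 + Q) * y ** 2 / R ** 2)))

# Synthetic surface profile with assumed vertex radius and asphericity,
# plus a little measurement noise.
rng = np.random.default_rng(0)
y = np.linspace(-3, 3, 61)                 # mm, within a 6 mm zone
z = conic_sag(y, 10.0, -0.5) + rng.normal(0, 1e-4, y.size)

def fit_conic(y, z):
    """Ordinary least squares over a grid of (R, Q) candidates."""
    best = (None, None, np.inf)
    for R in np.linspace(8.0, 12.0, 81):       # 0.05 mm steps
        for Q in np.linspace(-1.0, 0.5, 151):  # 0.01 steps
            sse = float(np.sum((z - conic_sag(y, R, Q)) ** 2))
            if sse < best[2]:
                best = (R, Q, sse)
    return best

R_fit, Q_fit, sse = fit_conic(y, z)
```

Even this crude grid search recovers the generating parameters on clean data; the orthogonal-distance formulation matters most when the surface is steep, where vertical residuals systematically understate the true misfit.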