510 results for embedded Systems
Abstract:
An emergent form of political economy, facilitated by information and communication technologies (ICTs), is widely propagated as the apotheosis of unmitigated social, economic, and technological progress. Meanwhile, throughout the world, social degradation and economic inequality are increasing exponentially. Valued categories of thought are, axiomatically, the basic commodities of the “knowledge economy”. Language is its means of exchange. This paper proposes a sociolinguistic method with which to critically engage the hyperbole of the “Information Age”. The method is grounded in a systemic social theory that synthesises aspects of autopoiesis and Marxist political economy. A trade policy statement is analysed to exemplify the sociolinguistically created aberrations that are today most often construed as social and political determinants.
Abstract:
In this chapter I propose a theoretical framework for understanding the role of mediation processes in the inculcation, maintenance, and change of evaluative meaning systems, or axiologies, and how such a perspective can provide a useful and complementary dimension to analysis for SFL and CDA. I argue that an understanding of mediation—the movement of meaning across time and space—is essential for the analysis of meaning. Using two related texts as examples, I show how an understanding of mediation can aid SFL and CDA practitioners in the analysis of social change.
Abstract:
The title of this book, Hard Lessons: Reflections on Crime Control in Late Modernity, contains a number of clues about its general theoretical direction. It is a book concerned, first and foremost, with the vagaries of crime control in western neo-liberal and English-speaking countries. More specifically, Hard Lessons draws attention to a number of examples in which discrete populations – those who have in one way or another offended against the criminal law – have become the subjects of various forms of state intervention, regulation and control. We are concerned most of all with the ways in which recent criminal justice policies and practices have resulted in what are variously described as unintended consequences, unforeseen outcomes, unanticipated results, counter-productive effects or negative side effects. At their simplest, such terms refer to the apparent gulf between intention and outcome; they often form the basis for a considerable amount of policy reappraisal, soul searching and even nihilistic despair among the mandarins of crime control. Unintended consequences can, of course, be both positive and negative. Occasionally, crime control measures may result in beneficial outcomes, such as the use of DNA to acquit wrongly convicted prisoners. Generally, however, unforeseen effects tend to be negative, even entirely counterproductive, and/or directly opposite to what was originally intended. All this, of course, presupposes the sort of rational, well-meaning and transparent policy-making process so beloved by liberal social policy theorists. Yet, as Judith Bessant points out in her chapter, this view of policy formulation tends to obscure the often covert, regulatory and downright malevolent intentions contained in many government policies and practices. Indeed, history is replete with examples of governments seeking to mask their real aims from a prying public eye. Denials and various sorts of ‘techniques of neutralisation’ serve to cloak the real or ‘underlying’ aims of the powerful (Cohen 2000). The latest crop of ‘spin doctors’ and ‘official spokespersons’ has ensured that the process of governmental obfuscation, distortion and concealment remains deeply embedded in neo-liberal forms of governance. There is little new or surprising in this; nor should we be shocked when things ‘go wrong’ in the domain of crime control, since many unintended consequences are, more often than not, quite predictable. Prison riots, high rates of recidivism and breaches of supervision orders, expansion rather than contraction of control systems, laws that create the opposite of what was intended – all these are normative features of western crime control. Indeed, without the deep fault lines running between policy and outcome it would be hard to imagine what many policy makers, administrators and practitioners would do: their day-to-day work practices (and incomes) are directly dependent upon emergent ‘service delivery’ problems. Despite recurrent howls of official anguish and occasional despondency, it is apparent that those involved in propping up the apparatus of crime control have a vested interest in ensuring that policies and practices remain in an enduring state of review and reform.
Abstract:
Context: The School of Information Technology at QUT has recently undertaken a major restructuring of its Bachelor of Information Technology (BIT) course. The aims of this restructuring include reducing first-year attrition and providing an attractive degree course that meets both student and industry expectations. Emphasis has been placed on the first semester, in the context of retaining students, by introducing a set of four units that complement one another and provide introductory material on technology, programming and related skills, and generic skills that will aid the students throughout their undergraduate course and in their careers. This discussion relates to one of these four first-semester units, namely Building IT Systems. The aim of this unit is for students to create small Information Technology (IT) systems that use programming or scripting and databases, as either standalone applications or web applications. In the prior history of teaching introductory computer programming at QUT, programming was taught as a standalone subject, and integration of computer applications with other systems such as databases and networks was not undertaken until students had been given a thorough grounding in those topics as well. Feedback has indicated that students do not believe that working with a database requires programming skills. In effect, the building blocks of computer applications have been compartmentalised and taught in isolation from each other. The teaching of introductory computer programming has been an industry requirement of IT degree courses, as many jobs require at least some knowledge of the topic. Yet computer programming is not a skill that all students have equal capabilities of learning (Bruce et al., 2004), as is clearly shown by the volume of publications dedicated to this topic in the literature over a broad period of time (Eckerdal & Berglund, 2005; Mayer, 1981; Winslow, 1996). This introductory material has been taught in much the same way over the past thirty years. During the period in which introductory computer programming courses have been taught at QUT, a number of different programming languages and programming paradigms have been used, and different approaches to teaching and learning have been attempted in an effort to find the golden thread that would allow students to learn this complex topic. Unfortunately, computer programming is not a skill that can be learnt in one semester. Some basics can be learnt, but it can take many years to master (Norvig, 2001). Faculty data has typically shown a bimodal distribution of results for students undertaking introductory programming courses, with a high proportion of students receiving a high mark and a high proportion receiving a low or failing mark. This indicates that there are students who understand and excel with the introductory material, while another group struggles to understand the concepts and practices required to translate a specification or problem statement into a computer program that achieves what is being requested. The consequence of a large group of students failing the introductory programming course has been a high level of attrition amongst first-year students. This attrition does not provide good continuity in student numbers in later years of the degree program, and the current approach is not seen as sustainable.
Abstract:
The purpose of this study was to identify the pedagogical knowledge relevant to the successful completion of a pie chart item. This purpose was achieved through the identification of the essential fluencies that 12–13-year-olds required for the successful solution of a pie chart item. Fluency relates to ease of solution and is particularly important in mathematics because it impacts on performance. Although the majority of students were successful on this multiple-choice item, there was considerable divergence in the strategies they employed. Approximately two-thirds of the students employed efficient multiplicative strategies, which recognised and capitalised on the pie chart as a proportional representation. In contrast, the remaining one-third of students used a less efficient additive strategy that failed to capitalise on the representation of the pie chart. The results of our investigation of students’ performance on the pie chart item during individual interviews revealed that five distinct fluencies were involved in the solution process: conceptual (understanding the question), linguistic (keywords), retrieval (strategy selection), perceptual (orientation of a segment of the pie chart) and graphical (recognising the pie chart as a proportional representation). In addition, some students exhibited mild disfluencies corresponding to the five fluencies identified above. Three major outcomes emerged from the study. First, a model of knowledge of content and students for pie charts was developed. This model can be used to inform instruction about the pie chart and guide strategic support for students. Second, perceptual and graphical fluency were identified as two aspects of the curriculum that should receive greater emphasis in the primary years, owing to their importance in interpreting pie charts. Finally, a working definition of fluency in mathematics was derived from students’ responses to the pie chart item.
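To make the contrast between the two strategy types concrete (a hypothetical item, not the instrument used in the study): suppose a pie chart represents 120 students and the question asks how many students a quarter segment stands for. A multiplicative strategy exploits the proportional structure directly, whereas an additive strategy builds the answer up through repeated partitioning:

\[
\text{multiplicative: } \tfrac{1}{4} \times 120 = 30
\qquad\text{vs.}\qquad
\text{additive: } 120 = 60 + 60,\;\; 60 = 30 + 30 \;\Rightarrow\; 30 .
\]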
Abstract:
In condition-based maintenance (CBM), effective diagnostics and prognostics are essential tools for maintenance engineers to identify imminent faults and to predict the remaining useful life before the components finally fail. This enables remedial actions to be taken in advance and production to be rescheduled if necessary. This paper presents a technique for accurate assessment of the remnant life of machines based on historical failure knowledge embedded in a closed-loop diagnostic and prognostic system. The technique uses the Support Vector Machine (SVM) classifier for both fault diagnosis and evaluation of the health stages of machine degradation. To validate the feasibility of the proposed model, data at five different severity levels for four typical faults from High Pressure Liquefied Natural Gas (HP-LNG) pumps were used for multi-class fault diagnosis. In addition, two sets of impeller-rub data were analysed and employed to predict the remnant life of the pump based on estimation of its health state. The results obtained were very encouraging and showed that the proposed prognosis system has the potential to be used as an estimation tool for machine remnant life prediction in real-life industrial applications.
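As a rough illustration of this diagnosis-plus-health-staging pattern (a minimal sketch using scikit-learn, not the authors' implementation), the snippet below trains one SVM for multi-class fault diagnosis and another for health-stage classification. The feature matrix and labels are hypothetical stand-ins for features that would be extracted from pump vibration data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-ins: rows are feature vectors extracted from pump
# vibration signals (e.g. RMS, kurtosis, band energies); labels mark one
# of four fault classes and one of five degradation stages.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))
y_fault = rng.integers(0, 4, size=200)     # fault classes 0..3
y_stage = rng.integers(0, 5, size=200)     # health stages 0..4

# One SVM for multi-class fault diagnosis ...
diagnoser = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
diagnoser.fit(X_train, y_fault)

# ... and a second for locating the machine on the degradation scale.
stager = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
stager.fit(X_train, y_stage)

# Diagnose a new observation and estimate its health stage; in a real CBM
# pipeline the stage estimate would drive a remnant-life model.
x_new = rng.normal(size=(1, 8))
print("fault class:", diagnoser.predict(x_new)[0])
print("health stage:", stager.predict(x_new)[0])
```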
Abstract:
Crash risk is the statistical probability of a crash. Its assessment can be performed through ex post statistical analysis or in real time with on-vehicle systems, which can be cooperative. Cooperative Vehicle-Infrastructure Systems (CVIS) are a developing research avenue in the automotive industry worldwide. This paper provides a survey of existing CVIS and of methods to assess crash risk with them, and describes the advantages of cooperative systems over non-cooperative systems. A sample of cooperative crash risk assessment systems is analysed to extract vulnerabilities according to three criteria: market penetration, over-reliance on GPS and broadcasting issues. The survey shows that cooperative risk assessment systems are still in their infancy and require further development to provide their full benefits to road users.
Abstract:
We propose an efficient, low-complexity scheme for estimating and compensating clipping noise in OFDMA systems. Conventional clipping noise estimation schemes, which need all demodulated data symbols, may become infeasible in OFDMA systems, where a specific user may only know its own modulation scheme. The proposed scheme first uses the equalized output to identify a limited number of candidate clips, and then exploits the information on known subcarriers to reconstruct the clipped signal. Simulation results show that the proposed scheme can significantly improve system performance.
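To make the signal model concrete (a minimal sketch of clipping and its frequency-domain noise, not the proposed estimator itself), the snippet below clips one OFDM symbol and shows the clipping noise observable on known subcarriers. The subcarrier count, clipping level and pilot positions are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                    # number of subcarriers (assumed)

# QPSK symbols on every subcarrier; in OFDMA each user would only know
# the modulation on its own subcarrier block.
bits = rng.integers(0, 2, size=(N, 2))
X = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# OFDM modulation, then soft amplitude clipping at threshold A.
x = np.fft.ifft(X) * np.sqrt(N)
A = 0.8 * np.max(np.abs(x))               # clipping level (assumed)
mag = np.abs(x)
x_clip = np.where(mag > A, A * x / np.maximum(mag, 1e-12), x)

# Frequency-domain clipping noise D = X_clip - X. On subcarriers whose
# symbols the receiver already knows (e.g. pilots), D is observable and
# can anchor the reconstruction of the clipped signal.
X_clip = np.fft.fft(x_clip) / np.sqrt(N)
D = X_clip - X
known = np.arange(0, N, 8)                # hypothetical known subcarriers
print("clipping noise on known subcarriers:\n", np.round(D[known], 3))
```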
Abstract:
The explosive growth of the World Wide Web and the emergence of ecommerce are the two major factors that have led to the development of recommender systems (Resnick and Varian, 1997). The main task of recommender systems is to learn from users and recommend items (e.g. information, products or books) that match the users’ personal preferences. Recommender systems have been an active research area for more than a decade, and many different techniques and systems with distinct strengths have been developed to generate better quality recommendations. One of the main factors affecting recommendation quality is the amount of information resources available to the recommender. The main feature of recommender systems is their ability to make personalised recommendations for different individuals. However, many ecommerce sites find it difficult to obtain sufficient knowledge about their users, and hence the recommendations they provide are often poor and not personalised. This information insufficiency problem is commonly referred to as the cold-start problem. Most existing research on recommender systems focuses on developing techniques to better utilise the available information resources to achieve better recommendation quality. However, while the amount of available data and information remains insufficient, these techniques can only provide limited improvements to the overall recommendation quality. In this thesis, a novel and intuitive approach towards improving recommendation quality and alleviating the cold-start problem is attempted: enriching the information resources. It can easily be observed that, when there is a sufficient information and knowledge base to support recommendation making, even the simplest recommender systems can outperform sophisticated ones with limited information resources. Two strategies are suggested in this thesis to achieve the proposed information enrichment for recommenders:
• The first strategy suggests that information resources can be enriched by considering other information or data facets. Specifically, a taxonomy-based recommender, Hybrid Taxonomy Recommender (HTR), is presented in this thesis. HTR exploits the relationship between users’ taxonomic preferences and item preferences from the combination of widely available product taxonomic information and existing user rating data, and then utilises this relation between taxonomic preferences and item preferences to generate high quality recommendations (a toy sketch of this idea follows below).
• The second strategy suggests that information resources can be enriched simply by obtaining information resources from other parties. In this thesis, a distributed recommender framework, Ecommerce-oriented Distributed Recommender System (EDRS), is proposed. The proposed EDRS allows multiple recommenders from different parties (i.e. organisations or ecommerce sites) to share recommendations and information resources with each other in order to improve their recommendation quality.
Based on the results obtained from the experiments conducted in this thesis, the proposed systems and techniques achieve substantial improvements both in making quality recommendations and in alleviating the cold-start problem.
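To illustrate the core idea behind the first strategy (a toy sketch only, not the HTR algorithm; all item names, categories and ratings are invented), the snippet below derives a user's taxonomic preferences from rating data and then scores unrated items through the taxonomy:

```python
# Hypothetical catalogue: each item is tagged with taxonomy categories.
item_categories = {
    "item_a": {"books/scifi"},
    "item_b": {"books/scifi", "books/classics"},
    "item_c": {"books/cooking"},
    "item_d": {"books/classics"},
}
ratings = {"item_a": 5.0, "item_c": 1.0}    # one user's known ratings

# Step 1: derive the user's taxonomic preferences from rated items,
# centring ratings on a neutral midpoint of 3.
pref = {}
for item, r in ratings.items():
    for cat in item_categories[item]:
        pref[cat] = pref.get(cat, 0.0) + (r - 3.0)

# Step 2: score unrated items by the mean preference of their categories.
scores = {
    item: sum(pref.get(c, 0.0) for c in cats) / len(cats)
    for item, cats in item_categories.items()
    if item not in ratings
}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
# item_b and item_d inherit the positive "scifi"/neutral "classics" signal
# even though the user never rated them: a cold-start mitigation.
```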
Abstract:
There is a growing need for international transparency of engineering qualifications, and mechanisms to support and facilitate student mobility. In response, there are a number of global initiatives attempting to address these needs, particularly in Europe, North America and Australia. The Conceive-Design-Implement-Operate (CDIO) Initiative has a set of standards, competencies, and proficiency levels developed through a global community of practice. It is a well-structured framework in which best-practice internationalisation and student mobility can be embedded. However, the current 12 CDIO Standards do not address international qualifications or student mobility. Based on an environmental scan of global activities, the underpinning principles of best practice are identified and form the basis of the proposed 13th CDIO Standard — “Internationalization and Mobility”.
Abstract:
The issue of what an effective high-quality / high-equity education system might look like remains contested. Indeed, there is more educational commentary on those systems that do not achieve this goal (see, for example, Luke & Woods, 2009 for a detailed review of the No Child Left Behind policy initiatives put forward in the United States under the Bush Administration) than there is detailed consideration of what such a system might enact and represent. A long-held critique of socio-cultural and critical perspectives in education has been their focus on deconstruction to the supposed detriment of reconstructive work. This critique is less warranted in recent times based on work in the field, especially the plethora of qualitative research focusing on case studies of ‘best practice’. However, it certainly remains the case that there is more work to be done in investigating the characteristics of a socially just system. This issue of Point and Counterpoint aims to progress such a discussion. Several of the authors call for a reconfiguration of the use of large-scale comparative assessment measures, and all suggest new ways of thinking about quality and equity for school systems. Each of the papers tackles different aspects of the problematic of how to achieve high equity without compromising quality within a large education system. They each take a reconstructive focus, highlighting ways forward for education systems in Australia and beyond. While each paper investigates different aspects of the issue, the clearly stated objective of seeking to delineate and articulate the characteristics of socially just education is consistent throughout the issue.
Abstract:
The CDIO Initiative has been globally recognised as an enabler for engineering education reform. With the CDIO process, the CDIO Standards and the CDIO Syllabus, many scholarly contributions have been made around cultural change, curriculum reform and learning environments. In the Australasian region, reform is gaining significant momentum within the engineering education community, the profession, and higher education institutions. This paper presents the CDIO Syllabus cast into the Australian context by mapping it to the Engineers Australia Graduate Attributes, the Washington Accord Graduate Attributes and the Queensland University of Technology Graduate Capabilities. Furthermore, in recognition that many secondary schools and technical training institutions offer introductory engineering technology subjects, this paper presents an extended self-rating framework suited to recognising developing levels of proficiency at a preparatory level. The framework is consistent with its conventional application to undergraduate programs and professional practice, but adapted for the preparatory context. As with the original CDIO framework with proficiency levels, this extended framework is informed by Bloom’s taxonomy of educational objectives. A proficiency evaluation of the Queensland Studies Authority’s Engineering Technology senior syllabus is demonstrated, indicating the proficiency levels embedded within this secondary school subject at a preparatory scope. Through this extended CDIO framework, students and faculty have greater awareness of, and access to, tools that promote (i) student engagement in their own graduate capability development; (ii) faculty engagement in course and program design, through greater transparency and utility of the continuum of graduate capability development, its associated levels of proficiency, and the context in which they exist in terms of pre-tertiary engineering studies; and (iii) course maintenance and quality audit methodology for the purpose of continuous improvement processes and program accreditation.
Abstract:
Participatory design has the moral and pragmatic tenet of including those who will be most affected by a design in the design process. However, good participation is hard to achieve, and results linking project success and degree of participation are inconsistent. Through three case studies examining some of the challenges that different properties of knowledge – novelty, difference, dependence – can impose on the participatory endeavour, we examine some of the consequences for the participatory process of failing to bridge across knowledge boundaries – syntactic, semantic, and pragmatic. One pragmatic consequence, disruption of the user’s feeling of involvement in the project, has been suggested as a possible explanation for the inconsistent results linking participation and project success. To aid in addressing these issues, a new form of participatory research, called embedded research, is proposed and examined within the framework of the case studies and the knowledge framework, with a call for future research into its possibilities.
Abstract:
The aim of this paper is to show how principles of ecological psychology and dynamical systems theory can underpin a philosophy of coaching practice in a nonlinear pedagogy. Nonlinear pedagogy is based on a view of the human movement system as a nonlinear dynamical system. We demonstrate how this perspective of the human movement system can aid understanding of skill acquisition processes and underpin practice for sports coaches. We provide a description of nonlinear pedagogy followed by a consideration of some of the fundamental principles of ecological psychology and dynamical systems theory that underpin it as a coaching philosophy. We illustrate how each principle impacts on nonlinear pedagogical coaching practice, demonstrating how each principle can substantiate a framework for the coaching process.
Abstract:
This study considers the solution of a class of linear systems related to the fractional Poisson equation (FPE) $(-\nabla^2)^{\alpha/2}\varphi = g(x,y)$ with nonhomogeneous boundary conditions on a bounded domain. A numerical approximation to the FPE is derived using a matrix representation of the Laplacian to generate a linear system of equations whose matrix A is raised to the fractional power α/2. The solution of the linear system then requires the action of the matrix function $f(A) = A^{-\alpha/2}$ on a vector b. For large, sparse, symmetric positive definite matrices, the Lanczos approximation generates $f(A)b \approx \beta_0 V_m f(T_m) e_1$. This method works well when both the analytic grade of A with respect to b and the residual for the linear system are sufficiently small. Memory constraints often require restarting the Lanczos decomposition; however, this is not straightforward in the context of matrix function approximation. In this paper, we use the ideas of thick restarting and adaptive preconditioning for solving linear systems to improve the convergence of the Lanczos approximation. We give an error bound for the new method and illustrate its role in solving the FPE. Numerical results are provided to gauge the performance of the proposed method relative to exact analytic solutions.
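For orientation, here is a minimal sketch of the basic (unrestarted) Lanczos approximation $f(A)b \approx \beta_0 V_m f(T_m) e_1$ for $f(A) = A^{-\alpha/2}$. The matrix, right-hand side, fractional order and step count are arbitrary assumptions, and the thick-restart and adaptive-preconditioning refinements that the paper contributes are deliberately omitted:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_fAb(A, b, m, alpha):
    """Approximate A**(-alpha/2) @ b with m Lanczos steps (no restarts,
    no reorthogonalisation; for illustration only)."""
    n = len(b)
    V = np.zeros((n, m))
    diag, off = np.zeros(m), np.zeros(m - 1)
    beta0 = np.linalg.norm(b)
    V[:, 0] = b / beta0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= off[j - 1] * V[:, j - 1]
        diag[j] = V[:, j] @ w
        w -= diag[j] * V[:, j]
        if j < m - 1:
            off[j] = np.linalg.norm(w)
            V[:, j + 1] = w / off[j]
    # f(T_m) e_1 via the spectral decomposition of the tridiagonal T_m:
    # f(T_m) e_1 = S f(diag(theta)) S^T e_1, with S^T e_1 = S[0, :].
    theta, S = eigh_tridiagonal(diag, off)
    fTm_e1 = S @ (theta ** (-alpha / 2) * S[0, :])
    return beta0 * (V @ fTm_e1)

# SPD test matrix: standard 1D Laplacian stencil (an assumption for demo).
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = lanczos_fAb(A, b, m=40, alpha=1.0)    # approximates A**(-1/2) @ b
```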