858 results for Games in literature.
Abstract:
This report reviews the selection, design, and installation of fiber reinforced polymer (FRP) systems for the strengthening of reinforced concrete and pre-stressed concrete bridges and other structures. The report is based on the knowledge gained from worldwide experimental research, analytical work, and field applications of FRP systems used to strengthen concrete structures. Information on the material properties, design, and installation methods of FRP systems used as external reinforcement is presented. This information can be used to select an FRP system for increasing the strength and stiffness of reinforced concrete beams, increasing the ductility of columns, and other applications. Based on the available research, design considerations and concepts are covered in this report; in the next stage of the project, these will be further developed into design tools. It is important to note, however, that the design concepts proposed in the literature have in many cases not been thoroughly developed and proven. Therefore, a considerable amount of research will be required before the design concepts can be developed into practical design tools, which is a major goal of the current research project. The durability and long-term performance of FRP materials have been the subject of much research, which is still ongoing. Long-term field data are not currently available, and it remains difficult to accurately predict the service life of FRP strengthening systems. The report also briefly addresses environmental degradation and long-term durability issues. A general overview of using FRP bars as primary reinforcement of concrete structures is presented in Chapter 8. Chapter 9 presents a summary of the strengthening techniques identified as part of this initial stage of the research project and the issues that require careful consideration prior to their practical implementation.
Abstract:
Key topics: Since the birth of the Open Source movement in the mid-1980s, open source software has become more and more widespread. Amongst others, the Linux operating system, the Apache web server and the Firefox web browser have taken substantial market share from their proprietary competitors. Open source software is governed by particular types of licenses. Whereas proprietary licenses only allow use of the software in exchange for a fee, open source licenses grant users additional rights such as free use, copying, modification and distribution of the software, as well as free access to the source code. This new phenomenon has raised many managerial questions: organizational issues related to the systems of governance that underlie such open source communities (Raymond, 1999a; Lerner and Tirole, 2002; Lee and Cole, 2003; Mockus et al., 2000; Tuomi, 2000; Demil and Lecocq, 2006; O'Mahony and Ferraro, 2007; Fleming and Waguespack, 2007), collaborative innovation issues (Von Hippel, 2003; Von Krogh et al., 2003; Von Hippel and Von Krogh, 2003; Dahlander, 2005; Osterloh, 2007; David, 2008), issues related to the nature as well as the motivations of developers (Lerner and Tirole, 2002; Hertel, 2003; Dahlander and McKelvey, 2005; Jeppesen and Frederiksen, 2006), public policy and innovation issues (Jullien and Zimmermann, 2005; Lee, 2006), technological competition issues related to standard battles between proprietary and open source software (Bonaccorsi and Rossi, 2003; Bonaccorsi et al., 2004; Economides and Katsamakas, 2005; Chen, 2007), and intellectual property rights and licensing issues (Laat, 2005; Lerner and Tirole, 2005; Gambardella, 2006; Determann et al., 2007). A major unresolved issue concerns open source business models and revenue capture, given that open source licenses imply no fee for users.
On this topic, articles show that a commercial activity based on open source software is possible, as they describe different possible ways of doing business around open source (Raymond, 1999; Dahlander, 2004; Daffara, 2007; Bonaccorsi and Merito, 2007). These studies usually look at open source-based companies, which encompass a wide range of firms with different categories of activities: providers of packaged open source solutions, IT Services & Software Engineering firms, and open source software publishers. However, the business model implications differ for each of these categories: the activities of providers of packaged solutions and IT Services & Software Engineering firms are based on software developed outside their boundaries, whereas commercial software publishers sponsor the development of the open source software. This paper focuses on open source software publishers' business models, as this issue is even more crucial for this category of firms, which take the risk of investing in the development of the software. The literature so far identifies and depicts only two generic types of business models for open source software publishers: the ''bundling'' business model (Pal and Madanmohan, 2002; Dahlander, 2004) and the dual licensing business model (Välimäki, 2003; Comino and Manenti, 2007). Nevertheless, these business models are not applicable in all circumstances. Methodology: The objectives of this paper are: (1) to explore in which contexts the two generic business models described in the literature can be implemented successfully, and (2) to depict an additional business model for open source software publishers which can be used in a different context. To do so, this paper draws upon an explorative case study of IdealX, a French open source security software publisher. The case study consists of a series of three interviews conducted between February 2005 and April 2006 with the co-founder and the business manager.
It aims at depicting the process of IdealX's search for an appropriate business model between its creation in 2000 and 2006. This software publisher tried both generic types of open source software publishers' business models before designing its own. Consequently, through IdealX's trials and errors, I investigate the conditions under which these generic business models can be effective. Moreover, this study describes the business model finally designed and adopted by IdealX: an additional open source software publisher's business model based on the principle of ''mutualisation'', which is applicable in a different context. Results and implications: This article contributes to ongoing empirical work within entrepreneurship and strategic management on open source software publishers' business models: it provides the characteristics of three generic business models (the bundling business model, the dual licensing business model and the mutualisation business model) as well as the conditions under which each can be successfully implemented (regarding the type of product developed and the competencies of the firm). This paper also goes beyond the traditional concept of business model used by scholars in the open source literature. In this article, a business model is not only considered as a way of generating revenue (a ''revenue model'' (Amit and Zott, 2001)), but rather as the necessary conjunction of value creation and value capture, in line with the recent literature on business models (Amit and Zott, 2001; Chesbrough and Rosenblum, 2002; Teece, 2007). Consequently, this paper analyses the business models from the standpoint of these two components.
Abstract:
The multi-level current reinjection concept described in the literature is well known to produce high-quality AC current waveforms in high-power, high-voltage self-commutating current source converters. This paper proposes a novel reinjection circuit capable of producing a 7-level reinjection current. It is shown that this reinjection current effectively increases the pulse number of the converter to 72. PSCAD/EMTDC simulation validates the functionality of the proposed concept, illustrating its effectiveness on both the AC and DC sides of the converter.
Abstract:
Machine downtime, whether planned or unplanned, is intuitively costly to manufacturing organisations, but is often very difficult to quantify. The available literature showed that costing processes are rarely undertaken within manufacturing organisations. Where cost analyses have been undertaken, they have generally valued only a small proportion of the affected costs, leading to an overly conservative estimate. This thesis aimed to develop a cost of downtime model, with particular emphasis on its application to Australia Post's Flat Mail Optical Character Reader (FMOCR). The costing analysis determined a cost of downtime of $5,700,000 per annum, or an average cost of $138 per operational hour. The second section of this work focused on using the cost of downtime to objectively determine areas of opportunity for cost reduction on the FMOCR. This was the first time within Post that maintenance costs were considered alongside downtime for determining machine performance. Because of this, the results of the analysis revealed areas which have historically not been targeted for cost reduction. Further exploratory work was undertaken on the Flats Lift Module (FLM) and Auto Induction Station (AIS) Deceleration Belts through comparison of the results against two additional FMOCR analysis programs. This research has demonstrated the development of a methodical and quantifiable cost of downtime for the FMOCR. It was the first time that Post had endeavoured to examine the cost of downtime, and it is also one of the very few methodologies for valuing downtime costs that has been proposed in the literature. The work undertaken has also demonstrated how the cost of downtime can be incorporated into machine performance analysis, with specific application to identifying high-cost modules. The outcomes of this research are both a methodology for costing downtime and a list of areas for cost reduction.
In doing so, this thesis has outlined the two key deliverables presented at the outset of the research.
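The two headline figures in the abstract above can be cross-checked with simple arithmetic. A minimal sketch, using only the two quoted numbers; the implied operational hours are an inference for illustration, not a figure taken from the thesis:

```python
# Back-of-envelope check of the downtime figures quoted above.
# Both inputs come from the abstract; the implied operational hours
# are an inference for illustration, not a number from the thesis.
annual_downtime_cost = 5_700_000   # dollars per annum
cost_per_operational_hour = 138    # dollars per operational hour

implied_operational_hours = annual_downtime_cost / cost_per_operational_hour
# ≈ 41,304 operational hours per annum across the FMOCR fleet
```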
Abstract:
The purpose of this study is to investigate how secondary school media educators might best meet the needs of students who prefer practical production work to ‘theory’ work in media studies classrooms. This is a significant problem for a curriculum area that claims to develop students’ media literacies by providing them with critical frameworks and a metalanguage for thinking about the media. It is a problem that seems to have become more urgent with the availability of new media technologies and forms like video games. The study is located in the field of media education, which tends to draw on structuralist understandings of the relationships between young people and media and suggests that students can be empowered to resist media’s persuasive discourses. Recent theoretical developments suggest too little emphasis has been placed on the participatory aspects of young people playing with, creating and gaining pleasure from media. This study contributes to this ‘participatory’ approach by bringing post-structuralist perspectives to the field, which have been absent from studies of secondary school media education. I suggest theories of media learning must take account of the ongoing formation of students’ subjectivities as they negotiate social, cultural and educational norms. Michel Foucault’s theory of ‘technologies of the self’ and Judith Butler’s theories of performativity and recognition are used to develop an argument that media learning occurs in the context of students negotiating various ‘ethical systems’ as they establish their social viability through achieving recognition within communities of practice. The concept of ‘ethical systems’ has been developed for this study by drawing on Foucault’s theories of discourse and ‘truth regimes’ and Butler’s updating of Althusser’s theory of interpellation.
This post-structuralist approach makes it possible to investigate the ways in which students productively repeat and vary norms to creatively ‘do’ and ‘undo’ the various media learning activities with which they are required to engage. The study focuses on a group of year ten students in an all-boys Catholic urban school in Australia who undertook learning about video games in a three-week intensive ‘immersion’ program. The analysis examines the ethical systems operating in the classroom, including formal systems of schooling, informal systems of popular cultural practice and systems of masculinity. It also examines the students’ use of semiotic resources to repeat and/or vary norms while reflecting on, discussing, designing and producing video games. The key findings of the study are that students are motivated to learn technology skills and production processes rather than ‘theory’ work. This motivation stems from the students’ desire to become recognisable in communities of technological and masculine practice. However, student agency is possible not only through critical responses to media, but also through performative variation of norms through creative ethical practices as students participate with new media technologies. Therefore, opportunities exist for media educators to create the conditions for variation of norms through production activities. The study offers several implications for media education theory and practice, including: the productive possibilities of post-structuralism for informing ways of doing media education; the importance of media teachers having the autonomy to creatively plan curriculum; the advantages of media and technology teachers collaborating to draw on a broad range of resources to develop curriculum; the benefits of placing more emphasis on students’ creative uses of media; and the advantages of blending formal classroom approaches to media education with less formal out-of-school experiences.
Abstract:
A deconvolution method combining nanoindentation and finite element analysis was developed to determine the elastic modulus of a thin coating layer in a coating-substrate bilayer system. In this method, nanoindentation experiments were conducted to obtain the moduli of both the bilayer system and the substrate. Finite element analysis was then applied to deconvolve the elastic modulus of the coating. The results demonstrated that the elastic modulus obtained using the developed method was in good agreement with that reported in the literature.
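The inversion at the heart of such a deconvolution can be illustrated in a few lines. A minimal sketch, in which a hypothetical depth-weighted mixing rule stands in for the paper's finite element model; the function names, the 0.3 weighting and the moduli values are illustrative assumptions, not from the report:

```python
# Sketch of the deconvolution step: given measured moduli for the bilayer
# system and the substrate, solve for the coating modulus by bisection.
# `bilayer_modulus` is a hypothetical stand-in for the finite element model:
# a simple depth-weighted mixing rule, NOT the FEA used in the paper.

def bilayer_modulus(e_coating, e_substrate, weight=0.3):
    # weight = fraction of the indentation response governed by the coating
    return weight * e_coating + (1 - weight) * e_substrate

def deconvolve_coating_modulus(e_system, e_substrate, lo=1.0, hi=1000.0, tol=1e-6):
    # Bisection: find e_coating such that the model reproduces the
    # measured system modulus (the model is monotone in e_coating).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bilayer_modulus(mid, e_substrate) < e_system:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Example: measured system modulus 120 GPa on a 150 GPa substrate
e_coating = deconvolve_coating_modulus(e_system=120.0, e_substrate=150.0)
```

With a real FEA in place of the mixing rule, each bisection step would rerun the simulation with the trial coating modulus and compare the simulated indentation response against the measurement.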
Abstract:
Thirteen papers examine Asian and European experiences with developing national and city policy agendas around cultural and creative industries. Papers discuss policy transfer and the field of the cultural and creative industries--what can be learned from Europe; creative industries across cultural borders--the case of video games in Asia; spaces of culture and economy--mapping the cultural-creative cluster landscape; beyond networks and relations--toward rethinking creative cluster theory; the capital complex--Beijing's new creative clusters; the European creative class and regional development--how relevant Richard Florida's theory is for Europe; getting out of place--the mobile creative class taking on the local--a U.K. perspective on the creative class; Asian cities and limits to creative capital theory; the creative industries, governance, and economic development--a U.K. perspective; Shanghai's emergence into the global creative economy; Shanghai moderne--creative economy in a creative city?; urbanity as a political project--toward post-national European cities; and alternative policies in urban innovation. Contributors include economists. Kong is with the Department of Geography at the National University of Singapore. O'Connor is at Queensland University of Technology. Index.
Abstract:
John Hartley uses the 1956 Olympic Games in Melbourne to discuss the notions of a history of TV and TV History and concludes that the internet offers entirely new possibilities for TV as History.
Abstract:
This paper presents a retrospective view of a game design practice that recently switched from the development of complex learning games to the development of simple authoring tools with which students design their own learning games for each other. We introduce how our ‘10% Rule’, a premise that only 10% of what is learnt during a game design process is ultimately appreciated by the player, became a major contributor to the evolving practice. We use this rule primarily as an analytical and illustrative tool to discuss the learning involved in designing and playing learning games, rather than as a scientifically and empirically proven rule. The 10% Rule was prompted by our experience as designers and allows us to explore the often overlooked and valuable learning processes involved in designing learning games, and mobile games in particular. This discussion highlights that in designing mobile learning games, students are not only reflecting on their own learning processes by setting up structures for others to enquire and investigate; they are also engaging in high levels of independent inquiry and critical analysis in authentic learning settings. We conclude the paper with a discussion of the importance of these types of learning processes and skills of enquiry in 21st-century learning.
Abstract:
Introduction: The core business of public health is to protect and promote health in the population. Public health planning is the means to maximise these aspirations. Health professionals develop plans to address contemporary health priorities as the evidence about changing patterns of mortality and morbidity is presented. Officials are also alert to international trends in patterns of disease that have the potential to affect the health of Australians. Integrated planning and preparation is currently underway involving all emergency health services, hospitals and population health units to ensure Australia's quick and efficient response to any major infectious disease outbreak, such as avian influenza (bird flu). Public health planning for the preparations for the Sydney Olympics and Paralympic Games in 2000 took almost three years. ‘Its major components included increased surveillance of communicable disease; presentations to sentinel emergency departments; medical encounters at Olympic venues; cruise ship surveillance; environmental and food safety inspections; bioterrorism surveillance and global epidemic intelligence’ (Jorm et al 2003, 102). In other words, the public health plan was developed to ensure food safety, hospital capacity, safe crowd control, protection against infectious diseases, and an integrated emergency and disaster plan. We have national and state plans for vaccinating children against infectious diseases in childhood; plans to promote dental health for children in schools; and screening programs for cervical, breast and prostate cancer. An effective public health response to a change in the distribution of morbidity and mortality requires planning. All levels of government plan for the public’s health. Local governments (councils) ensure healthy local environments to protect the public’s health. 
They plan parks for recreation, construct traffic-calming devices near schools to prevent childhood accidents, build shade structures and walking paths, and even embed draughts/chess squares in tables for people to sit and play. Environmental Health officers ensure food safety in restaurants and measure water quality. These public health measures attempt to promote the quality of life of residents. Australian and state governments produce plans that protect and promote health through various policy and program initiatives and innovations. To be effective, program plans need to be evaluated. However, building an integrated evaluation plan into a program plan is often forgotten, as planning and evaluation are seen as two distinct entities. Consequently, it is virtually impossible to measure, with any confidence, the extent to which a program has achieved its goals and objectives. This chapter introduces you to the concepts of public health program planning and evaluation. Case studies and reflection questions are presented to illustrate key points. As various authors use different terminology to describe the same concepts/actions of planning and evaluation, the glossary at the back of this book will help you to clarify the terms used in this chapter.
Abstract:
Trace concerns writing-walking and walking-writing. The multiple voices of both novel and exegesis assemble a rhizomic map of a walk and create a never-entirely-certain wandering look upon a woman walking, rather than a single cocksure gaze. Trace explores the aesthetics of Western walking literature and the various nostalgias inherent in that tradition. Trace wonders how lost a character can become on a walk and whether a walk is itself a kind of becoming. In the undefined liminal space where the urban bleeds into the rural, Trace challenges the singular perspective of the dominating gaze with a wandering look, which aims to make an original contribution to both the walk in literature and to exegetical form.
Abstract:
Gabor representations have been widely used in facial analysis (face recognition, face detection and facial expression detection) due to their biological relevance and computational properties. Two popular Gabor representations used in the literature are: 1) Log-Gabor filters and 2) Gabor energy filters. Even though these representations are somewhat similar, they also have distinct differences: the Log-Gabor filters mimic the simple cells in the visual cortex, while the Gabor energy filters emulate the complex cells, which causes subtle differences in the responses. In this paper, we analyze the difference between these two Gabor representations and quantify these differences on the task of facial action unit (AU) detection. In our experiments conducted on the Cohn-Kanade dataset, we report an average area underneath the ROC curve (A′) of 92.60% across 17 AUs for the Gabor energy filters, while the Log-Gabor representation achieved an average A′ of 96.11%. This result suggests that the small spatial differences that the Log-Gabor filters pick up on are more useful for AU detection than the differences in contours and edges that the Gabor energy filters extract.
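The Log-Gabor representation referred to above is conventionally defined in the frequency domain, as a Gaussian profile on a log-frequency axis with zero response at DC. A minimal one-dimensional sketch of that standard construction; the centre frequency and bandwidth ratio are illustrative values, not parameters from the paper:

```python
import numpy as np

# Radial frequency response of a 1-D Log-Gabor filter: a Gaussian on a
# log-frequency axis, with the DC term explicitly zeroed (log is undefined
# at f = 0). f0 and sigma_ratio below are illustrative choices.

def log_gabor(freqs, f0=0.25, sigma_ratio=0.55):
    freqs = np.asarray(freqs, dtype=float)
    response = np.zeros_like(freqs)
    nonzero = freqs > 0
    response[nonzero] = np.exp(
        -np.log(freqs[nonzero] / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2)
    )
    return response

freqs = np.linspace(0.0, 0.5, 101)  # normalised frequency, 0 .. Nyquist
g = log_gabor(freqs)
# Peak response of 1.0 at the centre frequency f0; zero response at DC
```

The zero DC response and the log-frequency symmetry are what distinguish this construction from ordinary Gabor (and Gabor energy) filters, which have a nonzero DC component unless explicitly corrected.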