918 results for Conventional ovens
Abstract:
The majority of the world’s citizens now live in cities. Although urban planning can thus be thought of as a field with significant ramifications for the human condition, many practitioners feel that it has reached a crossroads in thought leadership between traditional practice and a new, more participatory and open approach. Conventional ways to engage people in participatory planning exercises are limited in reach and scope. At the same time, socio-cultural trends and technological innovation offer opportunities to rethink the status quo in urban planning. Neogeography introduces tools and services that allow non-geographers to use advanced geographical information systems. Similarly, is there potential for the emergence of a neo-planning paradigm in which urban planning is carried out through active civic engagement aided by Web 2.0 and new media technologies, thus redefining the role of practising planners? This paper traces a number of evolving links between urban planning, neogeography, and information and communication technology. Two significant trends – participation and visualisation – with direct implications for urban planning are discussed. Combining advanced participation and visualisation features, the popular virtual reality environment Second Life is then introduced as a test bed to explore a planning workshop and an integrated software event framework to assist narrative generation. We discuss an approach to harness and analyse narratives using virtual reality logging to make transparent how users understand and interpret proposed urban designs.
Abstract:
Shared Services (SS) involves the convergence and streamlining of an organisation’s functions to ensure timely service delivery as effectively and efficiently as possible. As a management structure designed to promote value generation, cost savings and improved service delivery by leveraging economies of scale, the idea of SS is driven by cost reduction and improvements in the quality and efficiency of services. Current conventional wisdom is that the potential for SS is increasing because of the rising costs organisations face in changing systems, meeting business requirements, and implementing and running information systems. In addition, owing to the commoditisation of large information systems such as enterprise systems, many common supporting functions across organisations are becoming more similar than not, leading to an increasing overlap in processes and fuelling the notion that organisations can derive benefits from collaborating and sharing their common services through an inter-organisational shared services (IOSS) arrangement. While there is some research on traditional SS, very little research has been done on IOSS. In particular, it is unclear what the potential drivers and inhibitors of IOSS are. As the concepts of IOSS and SS are closely related to that of outsourcing, and their distinction is sometimes blurred, the first objective of this research is to seek a clear conceptual understanding of the differences between SS and outsourcing (in motivators, arrangements, benefits, disadvantages, etc.). Based on this conceptual understanding, the second objective is to develop a decision model (the Shared Services Potential model) to aid organisations in deciding which arrangement is more appropriate for them to adopt in pursuit of process improvements for their operations.
As the context of the study is universities in higher education sharing administrative services common to or across them, with the assumption that such services are homogeneous in nature, this thesis also reports on a case study. The case study involved face-to-face interviews with representatives of an Australian university to explore the potential for IOSS. Our key findings suggest that it is possible for universities to share common services, as most were already using the same systems, albeit independently.
Abstract:
Medeleven was a practice-based research work that challenged conventional notions of how audiences ‘experience’ contemporary dance. It resulted in a 40-minute ‘experiential’ performance that inverted the traditional ‘passive’ presentation paradigm by situating the audience centrally within the creative process. The ‘traditional presentation paradigm’ was inverted in numerous ways, including asking the audience onstage with the performers, placing swings onstage for the audience to play on during the performance, and guiding the audience through backstage corridors before entering onstage – all of which added elements of physicality, agency and liminality to the performance not usually available to audience members of contemporary dance. Five dancers moved throughout the space, allowing the audience to choose where and how they engaged with the work, and the swings were utilised both as a performance space and for audience seating. In addition to these spatial variations, the quadraphonic sound score created distinct ‘environments’ throughout the stage space that varied the possibilities of individual experience. By positing performance as an experiential phenomenon, the pivotal objective of this work was to create a live-art experience that extended its performativity to include audience members as active meaning-makers, challenging both their role within this paradigm and their expectations of contemporary dance. The work produced strong responses from the audience, with surveys indicating that the presentation format, as well as the construction of the work, changed audience experience and the ability to connect with the dance work. The research suggested that, in addition to barriers identified in existing research on dance audiences, barriers to engagement with contemporary dance may include how the art is constructed and where the audience is positioned within that creative framework.
The research builds upon artistic practices undertaken throughout the world that challenge existing ‘passive’ performance paradigms via creative ‘engagement’ with audiences.
Abstract:
Urban infrastructure development in Korea has recently shifted from an old paradigm of conventional infrastructure planning to a new paradigm of intelligent infrastructure provision. This new paradigm, so-called ubiquitous infrastructure, is based on a combination of urban infrastructure, information and communication technologies, and digital networks. Ubiquitous infrastructure basically refers to an urban infrastructure in which any citizen can access any infrastructure and services via any electronic device, regardless of time and location. This paper introduces this new paradigm of intelligent infrastructure planning and its design schemes. The paper also examines ubiquitous infrastructure development in Korea and discusses the positive effects of ubiquitous infrastructure on sustainable urban development.
Abstract:
Mechanical harmonic transmissions are a relatively new kind of drive with several unusual features. For example, they can provide reduction ratios of up to 500:1 in a single stage, have a very small tooth module compared with conventional drives, and carry a very large number of teeth (up to 1000) on a flexible gear. While manufacturing methods for conventional drives are well developed, fabrication of large harmonic drives presents a challenge. For example, how does one fabricate a thin shell 1.7 m in diameter with a wall thickness of 30 mm, having high-precision external teeth at one end and internal splines at the other? It is so flexible that conventional fabrication methods become unsuitable. In this paper, special fabrication methods that can be used for manufacturing large harmonic drive components are discussed. They include electro-slag welding and refining, the use of special expandable devices to locate and hold a flexible gear, welding the peripheral parts of disks with wear-resistant materials with subsequent machining, and others. These fabrication methods proved to be effective, and harmonic drives built with these innovative technologies have been installed on heavy metallurgical equipment and successfully tested.
Abstract:
In this paper, we provide specific examples of the educational promises and problems that arise as multiliteracies pedagogical initiatives encounter conventional institutional beliefs and practices in mainstream schooling. This paper documents and characterizes the ways in which two specific digital learning initiatives were played out in two distinctive traditional schooling contexts, as experienced by two different student groups: one comprising an elite mainstream and the other an excluded minority. By learning from the instructive complications that arose out of attempts by innovative and well-meaning educators to provide students with more relevant learning experiences than currently exist in mainstream schooling, this paper contributes fresh perspectives and more nuanced understandings of how diverse learners and their teachers negotiate the opportunities and challenges of the New London Group's vision of a multiliteracies approach to literacy and learning. We conclude by arguing that, where multiliteracies are understood as “garnish” to the “pedagogical roast” of traditional code-based and print-based academic literacies, they will continue to work on the sidelines of mainstream schooling and be seen only as either useful extensions or helpful interventions for high-performing and at-risk students respectively.
Abstract:
The central thesis of the article is that the venture creation process differs for innovative versus imitative ventures. This holds up: the pace of the process differs by type of venture, as do, in line with theory-based hypotheses, the effects of certain human capital (HC) and social capital (SC) predictors. Importantly, and somewhat unexpectedly, the theoretically derived models using HC, SC and certain controls are relatively successful in explaining progress in the creation process for the minority of innovative ventures, but achieve very limited success for the imitative majority. This may be due to a rationalistic bias in conventional theorizing and suggests a need for considerable theoretical development regarding the important phenomenon of new venture creation processes. Another important result is that the building up of instrumental social capital, which we assess comprehensively and as a time-variant construct, is important for making progress with both types of ventures, and increasingly so as the process progresses. This result corroborates, with stronger operationalization and a more appropriate analysis method, what previously published research has only been able to hint at.
Abstract:
Background The accurate measurement of cardiac output (CO) is vital in guiding the treatment of critically ill patients. Invasive or minimally invasive measurement of CO is not without inherent risks to the patient. Skilled Intensive Care Unit (ICU) nursing staff are in an ideal position to assess changes in CO following therapeutic measures. The USCOM (Ultrasonic Cardiac Output Monitor) device is a non-invasive CO monitor whose clinical utility and ease of use require testing. Objectives To compare cardiac output measurement using a non-invasive ultrasonic device (USCOM) operated by a non-echocardiographically trained ICU Registered Nurse (RN) with the conventional pulmonary artery catheter (PAC), using both thermodilution and Fick methods. Design Prospective observational study. Setting and participants Between April 2006 and March 2007, we evaluated 30 spontaneously breathing patients requiring PAC for assessment of heart failure and/or pulmonary hypertension at a tertiary-level cardiothoracic hospital. Methods USCOM CO was compared with thermodilution measurements via PAC and with CO estimated using a modified Fick equation. The catheter was inserted by a medical officer, and all USCOM measurements were performed by a senior ICU nurse. Mean values, bias and precision, and mean percentage difference between measures were determined to compare methods. The intra-class correlation statistic was also used to assess agreement. The USCOM time to measure was recorded to assess the learning curve for USCOM use by an ICU RN, and a line of best fit was used to describe the operator learning curve. Results In 24 of 30 (80%) patients studied, CO measures were obtained. In 6 of 30 (20%) patients, an adequate USCOM signal was not achieved. The mean differences (± standard deviation) between USCOM and PAC, USCOM and Fick, and Fick and PAC CO were small: −0.34 ± 0.52 L/min, −0.33 ± 0.90 L/min and −0.25 ± 0.63 L/min, respectively, across a range of outputs from 2.6 L/min to 7.2 L/min.
The percentage limits of agreement (LOA) were −34.6% to 17.8% for USCOM and PAC, −49.8% to 34.1% for USCOM and Fick, and −36.4% to 23.7% for PAC and Fick. Signal acquisition time decreased on average by 0.6 min per measure, to less than 10 min at the end of the study. Conclusions In 80% of our cohort, USCOM, PAC and Fick measures of CO showed clinically acceptable agreement, and the learning curve for operation of the non-invasive USCOM device by an ICU RN was satisfactorily short. Further work is required in patients receiving positive pressure ventilation.
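The bias and limits-of-agreement figures quoted above follow the standard Bland–Altman approach for comparing two measurement methods: the bias is the mean of the paired differences, and the 95% limits of agreement are the bias ± 1.96 standard deviations. A minimal sketch of that calculation, using hypothetical paired CO readings rather than the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired CO readings (L/min); illustrative only.
uscom = [4.1, 5.2, 3.0, 6.8, 4.9]
pac   = [4.5, 5.4, 3.5, 7.0, 5.3]
bias, (lo, hi) = bland_altman(uscom, pac)  # bias → -0.34 L/min
```

A negative bias, as reported in the study, simply means the first method reads lower on average than the second.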
Abstract:
In this paper, a new power sharing control method for a microgrid with several distributed generation units is proposed. The presence of both inertial and non-inertial sources with different power ratings, maximum power point tracking, and various types of loads poses a great challenge for power sharing and system stability. The conventional droop control method is modified to achieve the desired power sharing while ensuring system stability in a highly resistive network. A transformation matrix is formed to derive the equivalent real and reactive power output of the converter and the equivalent feedback gain matrix for the modified droop equation. The proposed control strategy, aimed at the prototype microgrid planned at Queensland University of Technology, is validated through extensive simulation results using PSCAD/EMTDC software.
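For context, the conventional droop law that the paper modifies ties each converter's frequency to its real power output and its voltage to its reactive power output, so units share load without communication. A minimal sketch of that baseline (the coefficients and setpoints below are illustrative assumptions, not the paper's values; the paper's transformation-matrix modification for resistive networks is not reproduced here):

```python
def droop_setpoints(P, Q, f0=50.0, V0=1.0, m=0.01, n=0.05):
    """Conventional droop: frequency falls with real power P (kW),
    voltage with reactive power Q (kvar); f0, V0 are no-load setpoints."""
    f = f0 - m * P   # P-f droop
    V = V0 - n * Q   # Q-V droop
    return f, V

f, V = droop_setpoints(P=10.0, Q=1.0)  # → (49.9, 0.95)
```

In a highly resistive low-voltage network the P–f / Q–V coupling assumed by this baseline weakens, which is what motivates the paper's modified droop equation.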
Abstract:
This article reports an enhanced solvent casting/particulate (salt) leaching (SCPL) method developed for preparing three-dimensional porous polyurethane (PU) scaffolds for cardiac tissue engineering. The solvent used for preparation of the PU scaffolds was a mixture of dimethylformamide (DMF) and tetrahydrofuran (THF). The enhanced method combined a conventional SCPL method with a centrifugation step, with the centrifugation employed to improve the pore uniformity and pore interconnectivity of the scaffolds. Highly porous three-dimensional scaffolds with a well-interconnected porous structure could be achieved at polymer solution concentrations of up to 20%, with air or vacuum drying used to remove the solvent. When salt particle sizes of 212-295, 295-425, or 425-531 µm and a 15% w/v polymer solution concentration were used, the porosity of the scaffolds was between 83% and 92%, and the compression moduli of the scaffolds were between 13 kPa and 28 kPa. Type I collagen acidic solution was introduced into the pores of a PU scaffold to coat collagen onto the pore walls throughout the whole scaffold. Human aortic endothelial cells (HAECs) cultured in the collagen-coated PU scaffold for 2 weeks were observed by scanning electron microscopy (SEM). It was shown that the enhanced SCPL method and the collagen coating resulted in a spatially uniform distribution of cells throughout the collagen-coated PU scaffold.
Abstract:
The quality of office indoor environments is considered to consist of those factors that affect occupants' health, well-being and, by consequence, their productivity. Indoor Environment Quality (IEQ) can be characterised by four indicators:
• Indoor air quality indicators
• Thermal comfort indicators
• Lighting indicators
• Noise indicators
Within each indicator, there are specific metrics that can be utilised in determining an acceptable quality of an indoor environment based on existing knowledge and best practice. Examples of these metrics are: indoor air levels of pollutants or odorants; operative temperature and its control; radiant asymmetry; task lighting; glare; ambient noise. The way in which these metrics impact occupants is not fully understood, especially when multiple metrics may interact in their impacts. While the potential cost of lost productivity from poor IEQ has been estimated to exceed building operation costs, the level of impact and the relative significance of the above four indicators are largely unknown. However, they are key factors in the sustainable operation or refurbishment of office buildings. This paper presents a methodology for assessing indoor environment quality (IEQ) in office buildings, together with indicators and related metrics for high performance and occupant comfort. These are intended for integration into the specification of sustainable office buildings as key factors to ensure a high degree of occupant habitability, without this being impaired by other sustainability factors. The assessment methodology was applied in a case study on IEQ in Australia’s first ‘six star’ sustainable office building, Council House 2 (CH2), located in the centre of Melbourne. The CH2 building was designed and built with a specific focus on sustainability and the provision of a high-quality indoor environment for occupants. Actual IEQ performance was assessed in this study by field assessment after construction and occupancy.
For comparison, the methodology was applied to a 30-year-old conventional building adjacent to CH2, which housed the same or similar occupants and activities. The impact of IEQ on occupant productivity will be reported in a separate future paper.
Abstract:
Decentralized and regional load-frequency control of power systems operating in normal and near-normal conditions has been well studied, and several analysis/synthesis approaches have been developed during the last few decades. In contingency and off-normal conditions, however, existing emergency control plans, such as under-frequency load shedding, are usually applied in a centralized structure using a different analysis model. This paper discusses the feasibility of using frequency-based emergency control schemes based on tie-line measurements and local information available within a control area. The conventional load-frequency control model is generalized by considering the dynamics of emergency control/protection schemes, and an analytic approach to analyzing the regional frequency response under normal and emergency conditions is presented.
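Under-frequency load shedding of the kind mentioned above is typically staged: successive blocks of load are dropped as frequency crosses descending thresholds. A minimal sketch of that staging logic, with illustrative thresholds and shed fractions (not values from the paper, and ignoring the relay time delays a real scheme would include):

```python
def ufls_stage(freq_hz, stages=((49.0, 0.10), (48.8, 0.15), (48.6, 0.20))):
    """Cumulative fraction of load shed for a given system frequency.
    Each (threshold_hz, fraction) stage trips once frequency falls to it."""
    return sum(frac for thresh, frac in stages if freq_hz <= thresh)

shed = ufls_stage(48.7)  # stages 1 and 2 have tripped → 0.25
```

A frequency-based regional scheme like the one the paper analyzes would drive such stages from local and tie-line measurements rather than a central controller.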
Abstract:
Introduction: Bone mineral density (BMD) is currently the preferred surrogate for bone strength in clinical practice. Finite element analysis (FEA) is a computer simulation technique that can predict the deformation of a structure when a load is applied, providing a measure of stiffness (N mm−1). Finite element analysis of X-ray images (3D-FEXI) is an FEA technique whose analysis is derived from a single 2D radiographic image. Methods: 18 excised human femora had previously been scanned with quantitative computed tomography, from which 2D BMD-equivalent radiographic images were derived, and mechanically tested to failure in a stance-loading configuration. A 3D proximal femur shape was generated from each 2D radiographic image and used to construct 3D-FEA models. Results: The coefficient of determination (R², %) for predicting failure load was 54.5% for BMD and 80.4% for 3D-FEXI. Conclusions: This ex vivo study demonstrates that 3D-FEXI, derived from a conventional 2D radiographic image, has the potential to significantly increase the accuracy of failure load assessment of the proximal femur compared with that currently achieved with BMD. This approach may readily be extended to routine clinical BMD images derived by dual energy X-ray absorptiometry. Crown Copyright © 2009 Published by Elsevier Ltd on behalf of IPEM. All rights reserved.
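The R² figures above quantify how much of the variance in measured failure load each predictor explains. A minimal sketch of the calculation, using hypothetical failure loads rather than the study's measurements:

```python
def r_squared(actual, predicted):
    """Coefficient of determination: fraction of variance in `actual`
    explained by `predicted` (1.0 = perfect prediction)."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Hypothetical femoral failure loads (kN) and model predictions.
measured  = [3.2, 4.1, 5.0, 6.3, 7.1]
predicted = [3.0, 4.4, 4.8, 6.5, 7.0]
r2 = r_squared(measured, predicted)
```

On this basis, moving from R² = 54.5% to 80.4% means the unexplained variance in failure load is roughly halved.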
Abstract:
Summary Generalized Procrustes analysis and thin plate splines were employed to create an average 3D shape template of the proximal femur, which was warped to the size and shape of a single 2D radiographic image of a subject. Mean absolute depth errors are comparable with previous approaches utilising multiple 2D input projections. Introduction Several approaches have been adopted to derive volumetric density (g cm−3) from a conventional 2D representation of areal bone mineral density (BMD, g cm−2). Such approaches have generally aimed at deriving an average depth across the areal projection rather than creating a formal 3D shape of the bone. Methods Generalized Procrustes analysis and thin plate splines were employed to create an average 3D shape template of the proximal femur that was subsequently warped to suit the size and shape of a single 2D radiographic image of a subject. CT scans of excised human femora, 18 and 24 scanned at pixel resolutions of 1.08 mm and 0.674 mm, respectively, were equally split into a training cohort (used to create the 3D shape template) and a test cohort. Results The mean absolute depth errors of 3.4 mm and 1.73 mm, respectively, for the two CT pixel sizes are comparable with previous approaches based upon multiple 2D input projections. Conclusions This technique has the potential to derive volumetric density from BMD and to facilitate 3D finite element analysis for prediction of the mechanical integrity of the proximal femur. It may further be applied to other anatomical bone sites such as the distal radius and lumbar spine.
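Generalized Procrustes analysis builds its average shape template by repeatedly aligning each landmark set to a running mean shape. A minimal sketch of the single-pair building block (the similarity transform that best maps one landmark set onto another; illustrative only, not the code used in the study):

```python
import numpy as np

def procrustes_align(X, Y):
    """Align landmark set Y to X via rotation, uniform scale and
    translation, minimising the summed squared landmark error.
    Note: this sketch does not exclude reflections."""
    Xc = X - X.mean(axis=0)          # centre both landmark sets
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt                       # optimal rotation (orthogonal Procrustes)
    scale = s.sum() / (Yc ** 2).sum()
    return scale * Yc @ R + X.mean(axis=0)
```

In the generalized procedure this alignment is iterated against a re-estimated mean shape until convergence; the thin-plate-spline warp of the template to a subject's radiograph is a separate, subsequent step not shown here.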
Abstract:
Focusing primarily on Anglophone countries, this article begins by looking at the changing environment of foundations, the pressures on foundations and some responses to those pressures. It then focuses on the potential of a structural change approach - often known as 'social change' or 'social justice' grant-making - as a solution to some of the modern dilemmas of foundations, and considers why this approach has, with some exceptions, gained relatively little support. This raises the wider issues of why and how resource-independent, endowed foundations change when conventional explanations of organisational change do not easily apply. Researching a 'lack' is clearly difficult; this article adopts an analytic perspective, examining the characteristics of the structural change approach as a mimetic model, and draws on the work of Rogers (2003) on the characteristics required for the successful diffusion of innovations. It suggests that the structural change approach suffers from some fundamental weaknesses as a mimetic model, failing to meet some key characteristics for the diffusion of innovations. In conclusion, the article looks at conditions under which these weaknesses may be overcome.