742 results for Tracking technology
Abstract:
The purpose of this study is, first, to demonstrate the appropriateness of “Japanese Manufacturing Management” (JMM) strategies in the Asian, ASEAN and Australasian automotive sectors. Second, the study assessed JMM as a prompt, effective and efficient global manufacturing management practice from which automotive manufacturing companies can learn: benchmarking for best practice, acquiring product and process innovation, and enhancing their capabilities and capacities. The philosophies, systems and tools adopted in various automotive manufacturing assembly plants and their tier 1 suppliers across the three regions were examined. Top and middle managers at companies in Thailand, Indonesia, Malaysia, Singapore, the Philippines, Viet Nam and Australia were interviewed using a qualitative methodology. The results confirmed that the six pillars of JMM (culture change, quality at the shop floor, consensus, incremental continual improvement, benchmarking, and backward-forward integration) are key enablers of success in adopting JMM in both automotive and other manufacturing sectors in the three regions. The analysis and on-site interviews yielded a number of recommendations that the automotive manufacturing companies’ managers validated as the most functional JMM strategies.
Abstract:
This paper presents research findings about the use of remote desktop applications to teach music sequencing software. It highlights the successes, shortcomings and interactive issues encountered during a pilot project with a theoretical focus on a specific interactive bottleneck. The paper proposes a new delivery and partnership model to widen this bottleneck, which currently hinders interactions between the technical support, education and professional development communities in music technology.
Abstract:
In this article I outline and demonstrate a synthesis of the methods developed by Lemke (1998) and Martin (2000) for analyzing evaluations in English. I demonstrate the synthesis using examples from a 1.3-million-word technology policy corpus drawn from institutions at the local, state, national, and supranational levels. Lemke's (1998) critical model is organized around the broad 'evaluative dimensions' that are deployed to evaluate propositions and proposals in English. Martin's (2000) model is organized with a more overtly systemic-functional orientation around the concept of 'encoded feeling'. In applying both these models at different times, whilst recognizing their individual usefulness and complementarity, I found specific limitations that led me to work towards a synthesis of the two approaches. I also argue for the need to consider genre, media, and institutional aspects more explicitly when claiming intertextual and heteroglossic relations as the basis for inferred evaluations. A basic assertion made in this article is that the perceived Desirability of a process, person, circumstance, or thing is identical to its 'value'. But the Desirability of anything is a socially and thus historically conditioned attribution that requires significant amounts of institutional inculcation of other 'types' of value: appropriateness, importance, beauty, power, and so on. I therefore propose a method informed by critical discourse analysis (CDA) that sees evaluation as happening on at least four interdependent levels of abstraction.
Abstract:
In architecture courses, instilling a wider understanding of the industry-specific representations practised in the building industry is normally done under the auspices of Technology and Science subjects. Traditionally, building industry professionals communicated their design intentions using industry-specific representations. Originally, these mainly two-dimensional representations, such as plans, sections, elevations and schedules, were produced manually using a drawing board. This manual process has since been digitised in the form of Computer Aided Design and Drafting (CADD), or ubiquitously simply CAD. While CAD has significant productivity and accuracy advantages over the earlier manual method, it still only produces industry-specific representations of the design intent; essentially, CAD is a digital version of the drawing board. CAD remains the main tool used in industry for producing these representations, and this is also the approach taken in most traditional university courses, mirroring the reality of the situation in the building industry. A successor to CAD, in the form of Building Information Modelling (BIM), is presently evolving in the construction industry. CAD is mostly a technical tool that conforms to existing industry practices; BIM, on the other hand, is revolutionary both as a technical tool and as an industry practice. Rather than producing representations of design intent, BIM produces an exact virtual prototype of any building that, in an ideal situation, is centrally stored and freely exchanged between the project team. Essentially, BIM builds any building twice: once in the virtual world, where any faults are resolved, and then in the real world. There is, however, no established model for learning through the use of this technology in architecture courses.
Queensland University of Technology (QUT), a tertiary institution that maintains close links with industry, recognises the importance of equipping its graduates with skills that are relevant to industry. BIM skills are in increasing demand throughout the construction industry as industry practices evolve. Accordingly, during the second half of 2008, QUT fourth-year architectural students were formally introduced to BIM for the first time, as both a technology and an industry practice. This paper outlines the teaching team’s experiences and methodologies in offering a BIM unit (Architectural Technology and Science IV) at QUT for the first time and provides a description of the learning model. The paper presents the results of a survey on the learners’ perspectives of both BIM and their learning experiences as they learn about and through this technology.
Abstract:
Principal Topic: In this paper we seek to highlight the important intermediate role that the gestation process plays in entrepreneurship by examining its key antecedents and its consequences for new venture emergence. In doing so we take a behavioural perspective and argue that it is not only what a nascent venture is that matters, but what it does (Katz & Gartner, 1988; Shane & Delmar, 2004; Reynolds, 2007) and when it does it during start-up (Reynolds & Miller, 1992; Lichtenstein, Carter, Dooley & Gartner, 2007). To extend an analogy from biological development, we suggest that the way a new venture is nurtured is just as fundamental as its nature. Much prior research has focused on the nature of new ventures and attempted to attribute variations in outcomes directly to the impact of resource endowments and investments. While there is little doubt that venture resource attributes such as human capital, and specifically prior entrepreneurial experience (Alsos & Kolvereid, 1998), and access to social (Davidsson & Honig, 2003) and financial capital have an influence, resource attributes themselves are distal from successful start-up endeavours and remain inanimate but for the actions of the nascent venture. The key contribution we make is to shift focus from whether or not actions are taken to when these actions happen and how they are situated in the overall gestation process. Thus, we suggest that it is gestation process dynamics, or when gestation actions occur, that are more proximal to venture outcomes, and we focus on this. Recently scholars have highlighted the complexity that exists in the start-up or gestation process, be it temporal or contextual (Liao, Welsch & Tan, 2005; Lichtenstein et al. 2007).
There is great variation in how long a start-up process might take (Reynolds & Miller, 1992), some processes require less action than others (Carter, Gartner & Reynolds, 1996), and the overall intensity of the start-up effort is also deemed important (Reynolds, 2007). And, despite some evidence that particular activities are more influential than others (Delmar & Shane, 2003), the order in which events may happen has, until now, been largely indeterminate as regards its influence on success (Liao & Welsch, 2008). We suggest that it is this complexity of the intervening gestation process that attenuates the effect of resource endowment and has resulted in mixed findings in previous research. Thus, in order to reduce complexity, we take a holistic view of the gestation process and argue that it is its dynamic properties that determine nascent venture attempt outcomes. Importantly, we acknowledge that particular gestation processes would not of themselves guarantee successful start-up; it is more correctly the fit between the process dynamics and the venture's attributes (Davidsson, 2005) that is influential. So we aim to examine process dynamics by comparing sub-groups of venture types by resource attributes. Thus, as an initial step toward unpacking the complexity of the gestation process, this paper aims to establish the importance of its role as an intermediary between attributes of the nascent venture and the emergence of that venture. Here, we make a contribution by empirically examining gestation process dynamics and their fit with venture attributes. We do this by, firstly, examining the nature of the influence that venture attributes such as human and social capital have on the dynamics of the gestation process, and secondly, by investigating the effect that gestation process dynamics have on venture creation outcomes.
Methodology and Propositions: In order to explore the importance that gestation process dynamics have in nascent entrepreneurship, we conduct an empirical study of venture start-ups. Data are drawn from a screened random sample of 625 Australian nascent business ventures prior to their achieving consistent outcomes in the market. These data were collected during 2007/8 and 2008/9 as part of the Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE) project (Davidsson et al., 2008). CAUSEE is a longitudinal panel study conducted over four years, sourcing information from annually administered telephone surveys. Importantly for our study, this methodology allows for the capture and tracking of active nascent venture creation as it happens, thus reducing hindsight and selection biases. In addition, improved tests of causality may be made given that outcome measures are temporally removed from preceding events. The data analysed in this paper represent the first two of these four years and, for the first time, include follow-up outcome measures for these venture attempts: 260 were successful, 126 were abandoned, and 191 are still in progress. With regard to venture attributes as gestation process antecedents, we examine specific human capital, measured as successful prior experience in entrepreneurship, and direct social capital of the venture, as 'team start-ups'. In assessing gestation process dynamics we follow Lichtenstein et al. (2007) in suggesting that the rate, concentration and timing of gestation activities may be used to summarise the complexity dynamics of that process. In addition, we extend this set of measures to include the interaction of discovery and exploitation by way of changes made to the venture idea. Those ventures with successful prior experience, or those which conduct symbiotic parallel start-up attempts, may be able to, or be forced to, leave their gestation action until later and still derive a successful outcome.
In addition, access to direct social capital may provide the support upon which the venture may draw in order to persevere in the face of adversity, turning a seemingly futile start-up attempt into a success. On the other hand, prior experience may engender the foresight to terminate a venture attempt early should it be seen to be going nowhere. The temporal nature of these conjectures highlights the importance that process dynamics play and will be examined in this research. Statistical models are developed to examine gestation process dynamics. We use multivariate general linear modelling to analyse how human and social capital factors influence gestation process dynamics. In turn, we use event history models and stratified Cox regression to assess the influence that gestation process dynamics have on venture outcomes. Results and Implications: What entrepreneurs do is of interest to both scholars and practitioners alike. Thus the results of this research are important since they focus on nascent behaviour and its outcomes. While venture attributes themselves may be influential, this is of little actionable assistance to practitioners. For example, it is unhelpful to say to the prospective first-time entrepreneur “you'll be more successful if you have lots of prior experience in firm start-ups”. This research attempts to close this relevance gap by addressing what gestation behaviours might be appropriate, when actions are best focused, and, most importantly, in what circumstances. Further, we make a contribution to the entrepreneurship literature by examining the role that gestation process dynamics play in outcomes and specifically attributing these to the nature of the venture itself. This extension is, to the best of our knowledge, new to the research field.
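The abstract above, following Lichtenstein et al. (2007), summarises gestation process dynamics by the rate, concentration and timing of gestation activities. A minimal sketch of one possible operationalization of those three summaries is below; the function name, the example data, and the variance-based concentration measure are illustrative assumptions, not the study's actual measures:

```python
from statistics import mean, pstdev

def gestation_dynamics(activity_months, window_months):
    """Summarise a gestation process from the months (since start-up
    began) at which gestation activities occurred.

    Illustrative operationalization only:
      rate          - activities completed per month of observation
      timing        - mean month of activity (early vs. late action)
      concentration - inverse temporal spread; higher values mean
                      activities are bunched closely together
    """
    rate = len(activity_months) / window_months
    timing = mean(activity_months)
    spread = pstdev(activity_months)
    concentration = 1.0 / (1.0 + spread)  # bounded in (0, 1]
    return rate, timing, concentration

# A venture that acted early and intensively...
early = gestation_dynamics([1, 1, 2, 2, 3], window_months=12)
# ...versus one whose actions were sparse and late.
late = gestation_dynamics([6, 9, 12], window_months=12)
```

On these made-up profiles the early, intensive venture scores higher on rate and concentration and lower (earlier) on timing, which is the kind of contrast the study relates to venture outcomes.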
Abstract:
Organisations are increasingly investing in complex technological innovations such as enterprise information systems with the aim of improving business operations and, in this way, gaining competitive advantage. However, the implementation of technological innovations tends to focus excessively on either technology innovation effectiveness (also known as system effectiveness) or the resulting operational effectiveness; focusing on either one alone is detrimental to long-term enterprise benefits through failure to achieve the real value of technological innovations. The lack of research on the dimensions and performance objectives that organisations must focus on is the main reason for this misalignment. This research uses a combined qualitative and quantitative, three-stage methodological approach. Initial findings suggest that factors such as quality of information (from technology innovation effectiveness) and quality and speed (from operational effectiveness) are important and significantly correlated factors that promote the alignment between technology innovation effectiveness and operational effectiveness.
Abstract:
This chapter reports on Australian and Swedish experiences in the iterative design, development, and ongoing use of interactive educational systems we call ‘Media Maps.’ Like maps in general, Media Maps are usefully understood as complex cultural technologies; that is, they are not only physical objects, tools and artefacts, but also information creation and distribution technologies, the use and development of which are embedded in systems of knowledge and social meaning. Drawing upon Australian and Swedish experiences with one Media Map technology, this chapter illustrates this three-layered approach to the development of media mapping. It shows how media mapping is being used to create authentic learning experiences for students preparing for work in the rapidly evolving media and communication industries. We also contextualise media mapping as a response to various challenges for curriculum and learning design in Media and Communication Studies that arise from shifts in tertiary education policy in a global knowledge economy.
Abstract:
Various reasons have been proffered for female under-representation in tertiary information technology (IT) courses and the IT industry, with most relating to cultural mores. The 2006 Geek Goddess calendar was designed to alter IT’s “geeky image”, and the term is used here to represent young women enrolled in pre-service IT teaching courses. Their special mix of IT and teaching draws on conflicting stereotypes and represents a micro-climate which is typically lost in studies of IT occupations because of the aggregation of all IT roles. This paper reports on a small-scale investigation of female students (N=25) at a university in Queensland (Australia) studying to become teachers of secondary IT subjects. They are entering the IT industry, gendered as a “male” occupation, through the safe space of teaching, a discipline allied to feminine qualities of nurturing. They are “geek goddesses” who, perhaps to balance the masculine and feminine of these occupations, have decided to go to school rather than into corporations or government.
Abstract:
This article reports on the impact on students' personal creativity of a longitudinal study whose major goal was the creation of a unique intervention program for elementary students. The intervention was based on the National Profile and Statement (Curriculum Corporation, 1994a, 1994b) for the curriculum area of technology. The intervention program comprised thematically based units of work that integrated all eight Australian Key Learning Areas (KLA). A pretest/posttest control group design investigation (Campbell & Stanley, 1963) was undertaken with 580 students from 7 schools and 24 class groups that were randomly divided into 3 treatment groups. One group (10 classes) formed the control group. Another 7 classes received the year-long intervention program, while the remaining 7 classes received the intervention but with the added seamless integration of their available classroom computer technologies. The effect of the intervention on the personal creativity characteristics of the students involved in the study was assessed using the Creativity Checklist, an instrument that was developed during the study. The results suggest that the purposeful integration of computer technology with the intervention program positively affects the personal creativity characteristics of students.
Abstract:
Principal Topic: High technology consumer products such as notebooks, digital cameras and DVD players are not introduced into a vacuum. Consumer experience with related earlier-generation technologies, such as PCs, film cameras and VCRs, and the installed base of these products strongly impact the market diffusion of the new generation products. Yet technology substitution has received only sparse attention in the diffusion of innovation literature. Research for consumer durables has been dominated by studies of (first purchase) adoption (cf. Bass 1969) which do not explicitly consider the presence of an existing product/technology. More recently, considerable attention has also been given to replacement purchases (cf. Kamakura and Balasubramanian 1987). Only a handful of papers explicitly deal with the diffusion of technology/product substitutes (e.g. Norton and Bass, 1987; Bass and Bass, 2004). They propose diffusion-type aggregate-level sales models that are used to forecast the overall sales for successive generations. Lacking household data, these aggregate models are unable to give insights into the decisions by individual households: whether to adopt generation II, and if so, when and why. This paper makes two contributions. It is the first large-scale empirical study that collects household data for successive generations of technologies in an effort to understand the drivers of adoption. Second, in comparison to traditional analysis that evaluates technology substitution as an ''adoption of innovation'' type process, we propose that from a consumer's perspective, technology substitution combines elements of both adoption (adopting the new generation technology) and replacement (replacing the generation I product with generation II). Based on this proposition, we develop and test a number of hypotheses.
Methodology/Key Propositions: In some cases, successive generations are clear ''substitutes'' for the earlier generation, in that they have almost identical functionality: for example, successive generations of PCs (Pentium I to II to III), or flat-screen TVs substituting for colour TVs. More commonly, however, the new technology (generation II) is a ''partial substitute'' for the existing technology (generation I). For example, digital cameras substitute for film-based cameras in the sense that they perform the same core function of taking photographs. They have the additional attributes of easier copying and sharing of images; however, the attribute of image quality is inferior. In cases of partial substitution, some consumers will purchase generation II products as substitutes for their generation I product, while other consumers will purchase generation II products as additional products to be used alongside their generation I product. We propose that substitute generation II purchases combine elements of both adoption and replacement, but that additional generation II purchases are a solely adoption-driven process. Extensive research on innovation adoption has consistently shown consumer innovativeness to be the most important consumer characteristic driving adoption timing (Goldsmith et al. 1995; Gielens and Steenkamp 2007). Hence, we expect consumer innovativeness also to influence both additional and substitute generation II purchases. Hypothesis 1a) More innovative households will make additional generation II purchases earlier. 1b) More innovative households will make substitute generation II purchases earlier. 1c) Consumer innovativeness will have a stronger impact on additional generation II purchases than on substitute generation II purchases. As outlined above, substitute generation II purchases act, in part, like a replacement purchase for the generation I product.
Prior research (Bayus 1991; Grewal et al 2004) identified product age as the most dominant factor influencing replacements. Hence, we hypothesise that: Hypothesis 2: Households with older generation I products will make substitute generation II purchases earlier. Our survey of 8,077 households investigates their adoption of two new generation products: notebooks as a technology change to PCs, and DVD players as a technology shift from VCRs. We employ Cox hazard modelling to study factors influencing the timing of a household's adoption of generation II products. We determine whether this is an additional or substitute purchase by asking whether the generation I product is still used. A separate hazard model is conducted for additional and substitute purchases. Consumer innovativeness is measured as domain innovativeness adapted from the scales of Goldsmith and Hofacker (1991) and Flynn et al. (1996). The age of the generation I product is calculated based on the most recent household purchase of that product. Control variables include age, size and income of household, and age and education of the primary decision-maker. Results and Implications: Our preliminary results confirm both our hypotheses. Consumer innovativeness has a strong influence on both additional purchases (exp = 1.11) and substitute purchases (exp = 1.09). Exp is interpreted as the increased probability of purchase for an increase of 1.0 on a 7-point innovativeness scale. Also consistent with our hypotheses, the age of the generation I product has a dramatic influence for substitute purchases of VCR/DVD (exp = 2.92) and a strong influence for PCs/notebooks (exp = 1.30). Exp is interpreted as the increased probability of purchase for an increase of 10 years in the age of the generation I product. Yet, also as hypothesised, there was no influence on additional purchases. The results lead to two key implications.
First, there is a clear distinction between additional and substitute purchases of generation II products, each with different drivers. Treating these as a single process will mask the true drivers of adoption. Second, for substitute purchases, product age is a key driver. Hence, marketers of high technology products can utilise data on generation I product age (e.g. from warranty or loyalty programs) to target customers who are more likely to make a purchase.
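As a check on how the reported hazard multipliers read: each exp value above is quoted per stated increment (one point of innovativeness, or ten years of product age), so under the usual proportional-hazards reading with a linear age term, the implied per-year multiplier is the tenth root of the ten-year figure. A hedged back-of-envelope conversion (the linearity assumption is ours, not stated in the abstract):

```python
# Hazard multipliers quoted in the abstract, per 10 years of
# generation I product age.
HR_AGE_10YR_DVD = 2.92   # VCR -> DVD substitute purchases
HR_AGE_10YR_PC = 1.30    # PC -> notebook substitute purchases

# If the 10-year multiplier is exp(10*b) for a linear age term b,
# the per-year multiplier is exp(b) = (10-year multiplier)^(1/10).
per_year_dvd = HR_AGE_10YR_DVD ** (1 / 10)   # ~1.11 per year
per_year_pc = HR_AGE_10YR_PC ** (1 / 10)     # ~1.03 per year
```

So each extra year of VCR age raises the modelled substitution hazard by roughly 11%, versus under 3% per year of PC age, which makes the contrast between the two categories concrete.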
Abstract:
In this paper, a new power sharing control method for a microgrid with several distributed generation units is proposed. The presence of both inertial and noninertial sources with different power ratings, maximum power point tracking, and various types of loads poses a great challenge for power sharing and system stability. The conventional droop control method is modified to achieve the desired power sharing while ensuring system stability in a highly resistive network. A transformation matrix is formed to derive the equivalent real and reactive power output of the converter and the equivalent feedback gain matrix for the modified droop equation. The proposed control strategy, intended for the prototype microgrid planned at Queensland University of Technology, is validated through extensive simulation results using PSCAD/EMTDC software.
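The baseline the abstract modifies is the textbook frequency-real-power droop, under which parallel units settle at one common frequency and share load in proportion to their ratings when droop gains are chosen inversely proportional to rating. The sketch below shows only that conventional mechanism with invented gains and ratings; it does not reproduce the paper's modified droop or transformation matrix:

```python
# Textbook P-f droop: f_i = f0 - m_i * P_i. In steady state every
# unit sees the same frequency, so m1*P1 = m2*P2; picking m_i
# inversely proportional to rating yields rating-proportional
# sharing. All numbers below are illustrative assumptions.
F0 = 50.0                       # no-load frequency (Hz)
ratings = [10e3, 20e3]          # converter ratings (W)
m = [0.5 / s for s in ratings]  # droop gains (Hz/W), inverse to rating

def share_load(p_total):
    """Steady-state dispatch under conventional P-f droop.

    Solves sum_i (F0 - f)/m_i = p_total for the common frequency f,
    then back-substitutes for each unit's real power output.
    """
    inv_sum = sum(1.0 / mi for mi in m)
    f = F0 - p_total / inv_sum
    return f, [(F0 - f) / mi for mi in m]

f, powers = share_load(30e3)  # 30 kW total microgrid load
```

With these gains the 20 kW converter picks up twice the load of the 10 kW one; the point of the paper's modification is that this frequency-based mechanism alone is inadequate in a highly resistive network, where real power couples more strongly to voltage than to frequency.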
Abstract:
The challenges of maintaining a building such as the Sydney Opera House are immense and depend upon a vast array of information. The value of information is enhanced by its currency, accessibility and the ability to correlate data sets (integration of information sources). A building information model correlated with various information sources related to the facility serves as the definition of a digital facility model. Such a digital facility model would give transparent and integrated access to an array of datasets and would clearly support Facility Management processes. In order to construct such a digital facility model, two state-of-the-art Information and Communication technologies are considered: an internationally standardized building information model called the Industry Foundation Classes (IFC), and a variety of advanced communication and integration technologies often referred to as the Semantic Web, such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL). This paper reports on some technical aspects of developing a digital facility model, focusing on the Sydney Opera House. The proposed digital facility model enables IFC data to participate in an ontology-driven, service-oriented software environment. A proof-of-concept prototype has been developed demonstrating the usability of IFC information in collaborating with the Sydney Opera House’s specific data sources using semantic web ontologies.
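The core idea, correlating IFC building elements with facility-management datasets through semantic-web-style statements, can be illustrated with a toy subject-predicate-object store. Every identifier, predicate and record below is invented for illustration; a real implementation would use RDF/OWL tooling against actual IFC GUIDs:

```python
# Minimal triple store: each fact is a (subject, predicate, object)
# statement, the basic unit of RDF. All names here are hypothetical.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A wall element (made-up IFC GUID) is typed, then correlated with a
# facility-management work order via an ontology-style predicate.
add("ifc:2O2Fr$t4X", "rdf:type", "ifc:IfcWallStandardCase")
add("ifc:2O2Fr$t4X", "fm:hasWorkOrder", "fm:WO-1042")
add("fm:WO-1042", "fm:status", "closed")

# "Which work orders touch this wall?" - the kind of cross-dataset
# question an integrated digital facility model makes answerable.
orders = [o for _, _, o in query(s="ifc:2O2Fr$t4X", p="fm:hasWorkOrder")]
```

The value of the real system lies in exactly this kind of join: the building geometry lives in IFC, the maintenance history in a separate dataset, and the ontology layer supplies the shared vocabulary that links them.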
Abstract:
The validation of Computed Tomography (CT) based 3D models is an integral part of studies involving 3D models of bones. This is of particular importance when such models are used for Finite Element studies. The validation of 3D models typically involves the generation of a reference model representing the bone’s outer surface. Several different devices have been utilised for digitising a bone’s outer surface, such as mechanical 3D digitising arms, mechanical 3D contact scanners, electro-magnetic tracking devices and 3D laser scanners. However, none of these devices is capable of digitising a bone’s internal surfaces, such as the medullary canal of a long bone. Therefore, this study investigated the use of a 3D contact scanner, in conjunction with a microCT scanner, for generating a reference standard for validating the internal and external surfaces of a CT-based 3D model of an ovine femur. One fresh ovine limb was scanned using a clinical CT scanner (Philips, Brilliance 64) with a pixel size of 0.4 mm2 and slice spacing of 0.5 mm. The limb was then dissected to obtain the soft-tissue-free bone, while care was taken to protect the bone’s surface. A desktop mechanical 3D contact scanner (Roland DG Corporation, MDX 20, Japan) was used to digitise the surface of the denuded bone at a resolution of 0.3 × 0.3 × 0.025 mm. The digitised surfaces were reconstructed into a 3D model using reverse engineering techniques in Rapidform (INUS Technology, Korea). After digitisation, the distal and proximal parts of the bone were removed so that the shaft could be scanned with a microCT scanner (µCT40, Scanco Medical, Switzerland). The shaft, with the bone marrow removed, was immersed in water and scanned with a voxel size of 0.03 mm3. The bone contours were extracted from the image data utilising the Canny edge filter in Matlab (The MathWorks). The extracted bone contours were reconstructed into 3D models using Amira 5.1 (Visage Imaging, Germany).
The 3D models of the bone’s outer surface reconstructed from CT and microCT data were compared against the 3D model generated using the contact scanner. The 3D model of the inner canal reconstructed from the microCT data was compared against the 3D models reconstructed from the clinical CT scanner data. The disparity between the surface geometries of two models was calculated in Rapidform and recorded as an average distance with standard deviation. The comparison of the 3D model of the whole bone generated from the clinical CT data with the reference model yielded a mean error of 0.19±0.16 mm, with the shaft more accurate (0.08±0.06 mm) than the proximal (0.26±0.18 mm) and distal (0.22±0.16 mm) parts. The comparison between the outer 3D model generated from the microCT data and the contact scanner model yielded a mean error of 0.10±0.03 mm, indicating that microCT-generated models are sufficiently accurate for validating 3D models generated by other methods. The comparison of the inner models generated from microCT data with those from the clinical CT data yielded an error of 0.09±0.07 mm. Utilising a mechanical contact scanner in conjunction with a microCT scanner thus made it possible to validate both the outer surface of a CT-based 3D model of an ovine femur and the surface of the model’s medullary canal.
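The model-to-reference comparisons above boil down to point-to-surface distances summarised as mean ± standard deviation. A minimal nearest-neighbour version of that computation is sketched below on made-up point clouds; it is a brute-force illustration of the metric, not the Rapidform shell-deviation computation the study actually used:

```python
import math
from statistics import mean, pstdev

def nearest_distance(p, cloud):
    """Distance from point p to its nearest neighbour in cloud."""
    return min(math.dist(p, q) for q in cloud)

def surface_deviation(model_pts, reference_pts):
    """Mean and SD of nearest-neighbour distances from each model
    vertex to the reference surface (sampled as a point cloud)."""
    d = [nearest_distance(p, reference_pts) for p in model_pts]
    return mean(d), pstdev(d)

# Illustrative clouds: a flat 'reference' patch sampled on a 0.5 mm
# grid, and a 'model' offset 0.1 mm from it along the normal.
reference = [(x * 0.5, y * 0.5, 0.0) for x in range(5) for y in range(5)]
model = [(x, y, 0.1) for x, y, _ in reference]
avg, sd = surface_deviation(model, reference)
```

With a uniform 0.1 mm offset the mean deviation is 0.1 mm with zero spread; real bone comparisons, as in the abstract, produce a distribution of distances whose mean and SD are reported (e.g. 0.19±0.16 mm).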