906 results for software creation methodology
Abstract:
The paper discusses an aspect of reading research methodology as represented by papers published in the Reading Research Quarterly from the beginning of 1989 (Volume 24, Number 1) to the end of 1993 (Volume 28, Number 4). The discussion suggests some points of departure between this research community and an Australian community broadly defined as poststructural. A focus for this investigation is the function of “gender” within the methodological approaches of the two communities. Suggestions are made regarding some potentially productive points of intersection between the work of American and Australian reading researchers.
Abstract:
Safety at Railway Level Crossings (RLXs) is an important issue within the Australian transport system. Crashes at RLXs involving road vehicles in Australia are estimated to cost $10 million each year. Such crashes are mainly due to human factors; unintentional errors contribute to 46% of all fatal collisions and are far more common than deliberate violations. This suggests that innovative interventions targeting drivers are particularly promising for improving RLX safety. In recent years there has been rapid development of a variety of affordable technologies which can be used to increase drivers' risk awareness around crossings. To date, no research has evaluated the potential effects of such technologies at RLXs in terms of safety, traffic and acceptance of the technology. Integrating driving and traffic simulations is a safe and affordable approach for evaluating these effects. This methodology will be implemented in a driving simulator, where we recreated realistic driving scenarios with typical road environments and realistic traffic. This paper presents a methodology for comprehensively evaluating the potential benefits and negative effects of such interventions: it assesses driver awareness at RLXs, driver distraction and driver workload when using the technology. Subjective assessments of the perceived usefulness and ease of use of the technology are obtained from standard questionnaires. Driving simulation will provide a model of driving behaviour at RLXs, which will be used to estimate the effects of such new technology on a road network featuring RLXs, for different market penetration rates, using a traffic simulation. This methodology can assist in evaluating future safety interventions at RLXs.
Abstract:
There is consistent evidence showing that driver behaviour contributes to crashes and near-miss incidents at railway level crossings (RLXs). The development of emerging Vehicle-to-Vehicle and Vehicle-to-Infrastructure technologies is a highly promising approach to improving RLX safety. To date, research has not comprehensively evaluated the potential effects of such technologies on driving behaviour at RLXs. This paper presents an ongoing research programme assessing the impacts of such new technologies on human factors and drivers' situational awareness at RLXs. Additionally, requirements for the design of such promising technologies and ways to display safety information to drivers were systematically reviewed. Finally, a methodology which comprehensively assesses the effects of in-vehicle and road-based interventions warning the driver of incoming trains at RLXs is discussed, with a focus on both benefits and potential negative behavioural adaptations. The methodology is designed for implementation in a driving simulator and covers compliance, control of the vehicle, distraction, mental workload and drivers' acceptance. This study has the potential to provide a broad understanding of the effects of deploying new in-vehicle and road-based technologies at RLXs and hence inform policy makers when planning safety improvements for RLXs.
Abstract:
The most common software analysis tools available for measuring fluorescence images are designed for two-dimensional (2D) data and rely on manual settings for inclusion and exclusion of data points, and on computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks, providing a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed beyond the approximations and assumptions of the original model-based stereology (1), even in complex tissue sections (2). Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of the complex changes in cell morphology, protein localization and receptor trafficking. Current tools available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements to measure a line distance between two objects or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures. Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are composed of dendrites, axons and spines (a tree-like structure). This module has been ingeniously utilized to make morphological measurements of non-neuronal cells (3); however, the output data describe an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable for biological applications, Imaris developed Imaris Cell. This was a scientific project with the Eidgenössische Technische Hochschule, developed to calculate the relationship between cells and organelles. While the software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be utilized to analyze fluorescence data that are not continuous, because it builds the cell surface without void spaces. To our knowledge, no user-modifiable automated approach has yet been developed that provides morphometric information from 3D fluorescence images and captures cellular spatial information for an undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.).
These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extensive expertise in biological systems, but not with computer applications, to perform quantification of morphological changes in cell dynamics.
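A minimal illustration of the kind of automated, shape-free 3D morphometric quantification described above, written in Python with scikit-image rather than the Imaris XT/MATLAB platform the authors developed; the file name, voxel dimensions and size threshold are assumptions made only for this sketch.

from skimage import io, filters, measure

stack = io.imread("confocal_stack.tif")          # hypothetical 3D stack, axes (z, y, x)
threshold = filters.threshold_otsu(stack)        # simple global threshold as a stand-in
binary = stack > threshold                       # voxels treated as fluorescent signal

labels = measure.label(binary, connectivity=3)   # 26-connected 3D components, no shape model assumed
voxel_volume_um3 = 0.2 * 0.1 * 0.1               # assumed voxel dimensions in micrometres (z, y, x)

for region in measure.regionprops(labels):
    if region.area < 50:                         # drop tiny objects likely to be noise
        continue
    volume = region.area * voxel_volume_um3      # morphometric output: object volume
    print(f"object {region.label}: volume={volume:.2f} um^3, centroid={region.centroid}")

Each labelled object is reported with its volume and centroid, the sort of spatial measurement of amorphous structures that the platform automates at much larger scale.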
Abstract:
Starting from the vantage point that explaining success at creating a venture should be the unique contribution—or at least one unique contribution—of entrepreneurship research, we argue that this success construct has not yet been adequately defined and operationalized. We thus offer suggestions for more precise conceptualization and measurement of this central construct. Rather than regarding the various success proxies used in prior research as poor operationalizations of success, we argue that they represent other important aspects of the venture creation process: engagement, persistence and progress. We hold that in order to attain a better understanding of venture creation, these constructs also need to be theoretically defined. Further, their respective drivers need to be theorized and tested separately. We suggest theoretical definitions of each. We then develop and test hypotheses concerning how human capital, venture idea novelty and business planning have different impacts on the different assessments of the process represented by engagement, persistence, progress and success. The results largely confirm the stated hypotheses, suggesting that the conceptual and empirical approach we propose is a path towards improved understanding of the central entrepreneurship phenomenon of new venture creation.
Abstract:
IT-supported field data management benefits on-site construction management by improving accessibility to information and promoting efficient communication between project team members. However, most on-site safety inspections still rely heavily on subjective judgment and manual reporting processes, and observers' experience therefore often determines the quality of risk identification and control. This study aims to develop a methodology to efficiently retrieve safety-related information so that safety inspectors can easily access the relevant site safety information for safer decision making. The proposed methodology consists of three stages: (1) development of a comprehensive safety database which contains information on risk factors, accident types, accident impacts and safety regulations; (2) identification of relationships among different risk factors based on statistical analysis methods; and (3) user-specified information retrieval using data mining techniques for safety management. This paper presents the overall methodology and preliminary results of the first-stage research, conducted with 101 accident investigation reports.
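As a hedged sketch of stages (2) and (3), the Python fragment below tests for an association between a recorded risk factor and the accident type across investigation reports, then retrieves reports matching a user-specified risk factor. The CSV file, column names and example query value are assumptions, not the paper's actual database schema or analysis methods.

import pandas as pd
from scipy.stats import chi2_contingency

reports = pd.read_csv("accident_reports.csv")     # hypothetical export of the safety database
table = pd.crosstab(reports["risk_factor"], reports["accident_type"])

chi2, p_value, dof, _ = chi2_contingency(table)   # stage (2): association between factors
print(f"chi-square={chi2:.2f}, dof={dof}, p={p_value:.4f}")

# Stage (3) stand-in: user-specified retrieval of matching reports.
query = reports[reports["risk_factor"] == "working at height"]
print(query[["accident_type", "impact", "regulation"]].head())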
Abstract:
A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic, readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the units from performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate, wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised distributed image acquisition system over a large geographic area; a real-world application of this functionality is the creation of a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A 'best-practice' design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for wirelessly synchronising the clocks between the boards makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement; the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of the Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below the target of 1 ms.
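The abstract does not detail the firmware, but the offset estimate a Slave derives from a Sync/Delay_Req exchange in IEEE-1588-style protocols follows a standard calculation, sketched below in Python; the timestamp values are purely illustrative.

def ptp_offset_and_delay(t1, t2, t3, t4):
    # t1: master sends Sync; t2: slave receives Sync;
    # t3: slave sends Delay_Req; t4: master receives Delay_Req (all in seconds).
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay, assumed symmetric
    return offset, delay

offset, delay = ptp_offset_and_delay(t1=0.000000, t2=0.000150, t3=0.000400, t4=0.000548)
print(f"estimated offset: {offset * 1e6:.1f} us, path delay: {delay * 1e6:.1f} us")

With these example timestamps the Slave would correct its clock by roughly one microsecond, the order of accuracy the BabelFuse tests report.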
Abstract:
One of the next great challenges of cell biology is the determination of the enormous number of protein structures encoded in genomes. In recent years, advances in electron cryo-microscopy and high-resolution single particle analysis have developed to the point where they now provide a methodology for high-resolution structure determination. Using this approach, images of randomly oriented single particles are aligned computationally to reconstruct 3-D structures of proteins and even whole viruses. One of the limiting factors in obtaining high-resolution reconstructions is obtaining a large enough representative dataset (>100,000 particles). Traditionally, particles have been picked manually, which is an extremely labour-intensive process. The problem is made especially difficult by the low signal-to-noise ratio of the images. This paper describes the development of automatic particle picking software, which has been tested with both negatively stained and cryo-electron micrographs. The algorithm has been shown to be capable of selecting most of the particles, with few false positives. Further work will involve extending the software to detect differently shaped and oriented particles.
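The abstract does not name the picking algorithm, so the sketch below shows only one common baseline for the task: normalised cross-correlation of the micrograph against a reference template followed by peak detection, in Python with scikit-image. The file names, minimum peak separation and correlation threshold are assumptions.

from skimage import io
from skimage.feature import match_template, peak_local_max

micrograph = io.imread("micrograph.tif")          # hypothetical low-SNR micrograph
template = io.imread("particle_template.tif")     # hypothetical reference particle image

correlation = match_template(micrograph, template, pad_input=True)

# Keep well-separated correlation peaks above a tuned threshold.
coordinates = peak_local_max(correlation,
                             min_distance=template.shape[0] // 2,
                             threshold_abs=0.35)
print(f"picked {len(coordinates)} candidate particles")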
Abstract:
This project investigates musicalisation and intermediality in the writing and devising of composed theatre. Its research question asks “How does the narrative of a musical play differ when it emerges from a setlist of original songs?”, the aim being to create a performance event that is neither music nor theatre. This involves the composition of lyrics, music, action, spoken text and projected image, gathered in a script and presented in performance. Scholars such as Kulezic-Wilson (in Kendrick, L. and Roesner, D. 2011: 34) outline the acoustic dimension of the ‘performative turn’ (Mungen, Ernst and Bentzweizer, 2012) as heralding “…a shift of emphasis on how meaning is created (and veiled) and how the spectrum of theatrical creation and reception is widened.” Rebstock and Roesner (2012) capture similar approaches, building on Lehmann’s work on the post-dramatic, under the new term ‘composed theatre’. This practice-led research draws influence from these new theoretical frames, pushing beyond ‘the musical’. Springing from a set of original songs in dialogue with performed narrative, Bear with Me is a 45-minute music-driven work for children, involving projected image and participatory action. Bear with Me’s intermedial hybrid of theatrical, screen and concert presentations shows that a simple setlist of original songs can be the starting point for the structure of a complex intermedial performance. Bear with Me was programmed into the Queensland Performing Arts Centre’s Out of the Box Festival. It was first performed in the Tony Gould Gallery at the Queensland Performing Arts Centre in June 2012. The season sold out. A masterclass on my playwriting methodology was presented at the Connecting The Dots Symposium, which ran alongside the festival.
Abstract:
Nurse researchers are increasingly adopting qualitative methodologies for research practice and theory development. These approaches to research are, in many cases, more appropriate for the field of nursing inquiry than the previously dominant techno-rational methods. However, there remains the issue of adapting methodologies developed in other academic disciplines to the nursing research context. This paper draws upon my own experience with interpretive research to raise questions about the issue of nursing research within a social science research framework. The paper argues that by integrating the characteristics of nursing practice with the characteristics of research practice, the researcher can develop a 'nursing lens', an approach to qualitative research that brings an added dimension to social science methodologies in the nursing research context. Attention is drawn to the unique nature of the nurse-patient relationship, and the ways in which this aspect of nursing practice can enhance nursing research. Examples are given from interview transcripts to support this position.
Abstract:
Building Information Modelling (BIM) appears to be the next evolutionary link in project delivery within the AEC (Architecture, Engineering and Construction) industry. There have been several surveys of implementation at the local level, but to date little is known of the international context. This paper is a preliminary report of a large-scale electronic survey of the implementation of BIM and its impact on AEC project delivery and project stakeholders in Australia and internationally. National and regional patterns of BIM usage will be identified. These patterns will include disciplinary users, project lifecycle stages, technology integration (including software compatibility) and organisational issues such as human resources and interoperability. Also considered is the current status of the inclusion of BIM within tertiary-level curricula and the potential for the creation of a new discipline.
Abstract:
Ethnography is now a well-established research methodology for virtual environments, and the vast majority of accounts have one aspect in common, whether textual or graphic environments – that of the embodied avatar. In this article, I first discuss the applicability of such a methodology to non-avatar environments such as Eve Online, considering where the methodology works and the issues that arise in its implementation – particularly for the consideration of sub-communities within the virtual environment. Second, I consider what alternative means exist for getting at the information that is obtained through an ethnographic study of the virtual environment. To that end, I consider the practical and ethical implications of utilizing existing accounts, the importance of the meta-game discourse, including those sources outside of the control of the environment developer, and finally the utility in combining personal observations with accounts of other ethnographers, both within and between environments.
Abstract:
Since 2007 the Kite Arts Education Program (KITE), based at the Queensland Performing Arts Centre (QPAC), has been engaged in delivering a series of theatre-based experiences for children in low socio-economic primary schools in Queensland. The artist-in-residence (AIR) project titled Yonder includes performances developed by the children, with the support and leadership of teacher artists from KITE, for their community and parents/carers, supported by a peak community cultural institution. In 2009, the Queensland Performing Arts Centre partnered with the Queensland University of Technology (QUT) Creative Industries Faculty (Drama) to conduct a three-year evaluation of the Yonder project to understand the operational dynamics, artistic outputs and educational benefits of the project. This paper outlines the research findings for children engaged in the Yonder project in the interrelated areas of literacy development and social competencies. Findings are drawn from six iterations of the project in suburban locations on the edge of Brisbane city and in regional Queensland.
Abstract:
Whole-body computer control interfaces present new opportunities to engage children with games for learning. Stomp is a suite of educational games that uses such a technology, allowing young children to use their whole body to interact with a digital environment projected on the floor. To maximise the effectiveness of this technology, tenets of self-determination theory (SDT) are applied to the design of Stomp experiences. By meeting user needs for competence, autonomy and relatedness, our aim is to increase children's engagement with the Stomp learning platform. Analysis of Stomp's design suggests that these tenets are met. Observations from a case study of Stomp being used by young children show that they were highly engaged and motivated by Stomp. This analysis demonstrates that continued application of SDT to Stomp will further enhance user engagement. It is also suggested that SDT, when applied more widely to other whole-body multi-user interfaces, could instil similar positive effects.
Abstract:
This research examines the entrepreneurship phenomenon, and the question: why are some venture attempts more successful than others? This question is not a new one. Prior research has answered it by describing those who engage in nascent entrepreneurship. Yet this approach has yielded little consensus and offers little comfort for those newly considering venture creation (Gartner, 1988). Rather, this research considers the process of venture creation by focusing on the actions of nascent entrepreneurs. However, the venture creation process is complex (Liao, Welsch, & Tan, 2005) and multi-dimensional (Davidsson, 2004). The process can vary in the amount of action engaged in by the entrepreneur; the temporal dynamics of how action is enacted (Lichtenstein, Carter, Dooley, & Gartner, 2007); or the sequence in which actions are undertaken. Little is known about whether any, or all three, of these dimensions matter. Further, there exists scant general knowledge about how the venture creation process influences venture creation outcomes (Gartner & Shaver, 2011). Therefore, this research conducts a systematic study of what entrepreneurs do as they create a new venture. The primary goal is to develop general principles so that advice may be offered on how to 'proceed', rather than how to 'be'. Three integrated empirical studies were conducted that separately focus on each of the interrelated dimensions. The basis for this was a randomly sampled, longitudinal panel of nascent ventures. Upon recruitment these ventures were in the process of being created, but were yet to be established as new businesses. The ventures were tracked one year later to follow up on outcomes. Accordingly, this research makes the following original contributions to knowledge. First, the findings suggest that all three of the dimensions play an important role: action, dynamics and sequence. This implies that future research should take a multi-dimensional view of the venture creation process; failing to do so can only result in a limited understanding of a complex phenomenon. Second, action is the fundamental means through which venture creation is achieved. Simply put, more active venture creation efforts are more likely to be successful. Further, action is the medium through which resource endowments have their effect upon venture outcomes. Third, the dynamics of how venture creation plays out over time are also influential. Here, a process with a high rate of action which increases in intensity is more likely to achieve positive outcomes. Fourth, sequence analysis suggests that the order in which actions are taken also drives outcomes. Although venture creation generally flows in sequence from discovery toward exploitation (Shane & Venkataraman, 2000; Eckhardt & Shane, 2003; Shane, 2003), processes that actually proceed in this way are less likely to be realized. Instead, processes which specifically intertwine discovery and exploitation actions together in symbiosis are more likely to achieve better outcomes (Sarasvathy, 2001; Baker, Miner, & Eesley, 2003). Further, an optimal venture creation order exists somewhere between these sequential and symbiotic process archetypes. A process which starts out as symbiotic discovery and exploitation, but switches to focus exclusively on exploitation later on, is most likely to achieve venture creation. These sequence findings are unique, and suggest future integration between opposing theories of order in venture creation.