426 results for software creation methodology
Abstract:
IT-supported field data management benefits on-site construction management by improving access to information and promoting efficient communication between project team members. However, most on-site safety inspections still rely heavily on subjective judgment and manual reporting processes, so observers' experience often determines the quality of risk identification and control. This study aims to develop a methodology for efficiently retrieving safety-related information so that safety inspectors can easily access relevant site safety information for safer decision making. The proposed methodology consists of three stages: (1) development of a comprehensive safety database containing information on risk factors, accident types, accident impacts and safety regulations; (2) identification of relationships among different risk factors using statistical analysis methods; and (3) user-specified information retrieval using data mining techniques for safety management. This paper presents the overall methodology and preliminary results of the first-stage research, conducted with 101 accident investigation reports.
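The abstract does not name the statistical tools used in stage (2). Purely as a hedged illustration, the sketch below shows one common way pairwise associations between categorical risk factors could be tested over coded accident reports; all column names and values are hypothetical, not taken from the study.

```python
# Illustrative sketch only: one plausible stage-(2) analysis, testing whether
# two categorical risk factors recorded in accident reports are associated.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical coded records; real fields would come from the stage-(1) database.
reports = pd.DataFrame({
    "work_type":     ["roofing", "excavation", "roofing",
                      "scaffolding", "excavation", "roofing"],
    "accident_type": ["fall", "collapse", "fall",
                      "fall", "collapse", "struck-by"],
})

# Cross-tabulate the two factors and run a chi-square test of independence.
table = pd.crosstab(reports["work_type"], reports["accident_type"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")  # a small p suggests the factors are related
```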
Abstract:
Software as a Service (SaaS) is a promising approach for Small and Medium Enterprises (SMEs), in particular those focused on growing fast and leveraging new technology, owing to the potential benefits arising from its inherent scalability, reduced total cost of ownership and ease of access to global innovations. This paper proposes a dynamic perspective on IS capabilities to understand and explain how SMEs source and leverage SaaS. The model is derived by combining the IS capabilities of Feeny and Willcocks (1998) with the dynamic capabilities of Teece (2007) and contextualizing them for SMEs and SaaS. We conclude that SMEs sourcing and leveraging SaaS require leadership, business systems thinking and informed buying for sensing and seizing SaaS opportunities, and require leadership and vendor development for transforming, in terms of aligning and realigning specific tangible and intangible assets.
Abstract:
Fundamental tooling is required to apply USDL in practical settings. This chapter discusses three fundamental types of tools for USDL. First, USDL editors have been developed for expert and casual users, respectively. Second, several USDL repositories have been built to allow editors to access and store USDL descriptions. Third, our generic USDL marketplace allows providers to describe their services once and potentially trade them anywhere; in addition, it addresses the idiosyncrasies of service trading as opposed to the simpler case of product trading. The chapter also presents several deployment scenarios of such tools to foster individual value chains and support new business models across organizational boundaries. We close the chapter with an application of USDL in the context of service engineering.
Abstract:
As the service-oriented architecture (SOA) paradigm has become ever more popular, different standardization efforts have been proposed by various consortia to enable interaction among heterogeneous environments through this paradigm. This chapter will give an overview of the most prevalent of these SOA approaches. It will first show how technical services can be described, how they can interact with each other, and how they can be discovered by users. Next, the chapter will present different standards that facilitate service composition and the design of service-oriented environments in light of a universal understanding of service orientation. The chapter will conclude with a summary and a discussion of the limitations of the reviewed standards regarding their ability to describe service properties. This paves the way for the following chapters, in which the USDL standard, which aims to lift such limitations, will be presented.
Abstract:
Enabling web-based service networks and ecosystems requires a way of describing services by a "commercial envelope", as discussed in Chapter 1. A uniform conception of services across all walks of life (including technical services) is required, one that captures business, operational and technical aspects. Therefore, our proposed Unified Service Description Language (USDL) particularly draws from and generalizes the best-of-breed approaches presented in Part I. The following chapter presents the design rationale of USDL, where the different aspects are put into a framework of description requirements. This is followed by the subsequent chapters of this part, which provide details on specific aspects such as pricing or legal issues.
Abstract:
Over the last twenty years, the use of open content licences has become increasingly and surprisingly popular. The use of such licences challenges the traditional incentive-based model of exclusive rights under copyright. Instead of providing a means to charge for the use of particular works, what seems important is mitigating potential personal harm to the author and, in some cases, preventing non-consensual commercial exploitation. It is interesting in this context to observe the primacy of what are essentially moral rights over the exclusionary economic rights. The core elements of common open content licences map somewhat closely to continental conceptions of the moral rights of authorship. Most obviously, almost all free software and free culture licences require attribution of authorship. More interestingly, there is a tension between social norms developed in free software communities and those that have emerged in the creative arts over integrity and commercial exploitation. For programmers interested in free software, licence terms that prohibit commercial use or modification are almost completely inconsistent with the ideological and utilitarian values that underpin the movement. For those in the creative industries, on the other hand, non-commercial terms and, to a lesser extent, terms that prohibit all but verbatim distribution continue to play an extremely important role in the sharing of copyright material. While prohibitions on commercial use often serve an economic imperative, there is also a certain personal interest for many creators in avoiding harmful exploitation of their expression – an interest that has sometimes been recognised as forming a component of the moral right of integrity. One particular continental moral right – the right of withdrawal – is present neither in Australian law nor in any of the common open content licences. Despite some marked differences, both free software and free culture participants are using contractual methods to articulate the norms of permissible sharing. Legal enforcement is rare and often prohibitively expensive, and the various communities accordingly rely upon shared understandings of acceptable behaviour. The licences that are commonly used represent a formalised expression of these community norms and provide the theoretically enforceable legal baseline that lends them legitimacy. The core terms of these licences are designed primarily to alleviate risk and minimise transaction costs in sharing and using copyright expression. Importantly, however, the range of available licences reflects different optional balances in the norms of creating and sharing material. Generally, it is possible to see that, stemming particularly from the US, open content licences are fundamentally important in providing a set of normatively accepted copyright balances that reflect the interests sought to be protected through moral rights regimes. As the cost of creation, distribution, storage and processing of expression continues to fall towards zero, there are increasing incentives to adopt open content licences to facilitate wide distribution and reuse of creative expression. Thinking of these protocols not only as reducing transaction costs but as setting normative principles of participation assists in conceptualising the role of open content licences and the continuing tensions that permeate modern copyright law.
Abstract:
The paper discusses an aspect of reading research methodology as represented by papers published in Reading Research Quarterly from the beginning of 1989 (Volume 24, Number 1) to the end of 1993 (Volume 28, Number 4). The discussion suggests some points of departure between this research community and an Australian community broadly defined as poststructural. A focus of this investigation is the function of "gender" within the methodological approaches of the two communities. Suggestions are made regarding some potentially productive points of intersection between the work of American and Australian reading researchers.
Abstract:
Safety at Railway Level Crossings (RLXs) is an important issue within the Australian transport system. Crashes at RLXs involving road vehicles in Australia are estimated to cost $10 million each year. Such crashes are mainly due to human factors; unintentional errors contribute to 46% of all fatal collisions and are far more common than deliberate violations. This suggests that innovative interventions targeting drivers are particularly promising for improving RLX safety. In recent years there has been rapid development of a variety of affordable technologies which can be used to increase drivers' risk awareness around crossings. To date, no research has evaluated the potential effects of such technologies at RLXs in terms of safety, traffic and acceptance of the technology. Integrating driving and traffic simulations is a safe and affordable approach for evaluating these effects. This methodology will be implemented in a driving simulator, in which we recreated realistic driving scenarios with typical road environments and realistic traffic. This paper presents a methodology for comprehensively evaluating the potential benefits and negative effects of such interventions: it evaluates driver awareness at RLXs, as well as driver distraction and workload when using the technology. Subjective assessments of the perceived usefulness and ease of use of the technology are obtained from standard questionnaires. Driving simulation will provide a model of driving behaviour at RLXs, which will be used to estimate the effects of such new technology on a road network featuring RLXs for different market penetrations using a traffic simulation. This methodology can assist in evaluating future safety interventions at RLXs.
Abstract:
There is consistent evidence showing that driver behaviour contributes to crashes and near-miss incidents at railway level crossings (RLXs). The development of emerging Vehicle-to-Vehicle and Vehicle-to-Infrastructure technologies is a highly promising approach to improving RLX safety. To date, research has not comprehensively evaluated the potential effects of such technologies on driving behaviour at RLXs. This paper presents an ongoing research programme assessing the impacts of such new technologies on human factors and drivers' situational awareness at RLXs. Additionally, requirements for the design of such promising technologies and ways to display safety information to drivers were systematically reviewed. Finally, a methodology which comprehensively assesses the effects of in-vehicle and road-based interventions warning the driver of incoming trains at RLXs is discussed, with a focus on both benefits and potential negative behavioural adaptations. The methodology is designed for implementation in a driving simulator and covers compliance, control of the vehicle, distraction, mental workload and drivers' acceptance. This study has the potential to provide a broad understanding of the effects of deploying new in-vehicle and road-based technologies at RLXs and hence to inform policy makers when planning safety improvements for RLXs.
Abstract:
The most common software analysis tools available for measuring fluorescence images handle two-dimensional (2D) data, rely on manual settings for inclusion and exclusion of data points, and use computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks, providing a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed beyond the approximations and assumptions of the original model-based stereology(1), even in complex tissue sections(2). Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of the complex changes in cell morphology, protein localization and receptor trafficking. Current techniques available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements to measure a line distance between two objects or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures. Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are comprised of dendrites, axons and spines (a tree-like structure). This module has been ingeniously utilized to make morphological measurements of non-neuronal cells(3); however, the output data describe an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable to a biological application, Imaris developed Imaris Cell. This was a scientific project with the Eidgenössische Technische Hochschule, developed to calculate the relationship between cells and organelles. While the software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be utilized to analyze fluorescence data that are not continuous, because it ideally builds the cell surface without void spaces. To our knowledge, at present no user-modifiable automated approach has been developed that provides morphometric information from 3D fluorescence images and achieves cellular spatial information of an undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.).
These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extensive expertise in biological systems, but no familiarity with computer applications, to perform quantification of morphological changes in cell dynamics.
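The authors' platform is built on Imaris XT and MATLAB and is not reproduced here. As a minimal open-source sketch of the same idea (shape-free 3D segmentation followed by per-object morphometry), assuming scikit-image is available and using a synthetic volume in place of a real confocal stack:

```python
# Sketch only: an open-source analogue of shape-free 3D measurement,
# not the authors' Imaris XT / MATLAB platform.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def measure_cells_3d(volume: np.ndarray):
    """Segment a fluorescence volume (z, y, x) with no pre-defined cell
    shape and report each object's label, voxel volume and centroid."""
    mask = volume > threshold_otsu(volume)   # global intensity threshold
    labelled = label(mask)                   # fully-connected 3D components
    return [(r.label, r.area, r.centroid) for r in regionprops(labelled)]

# Synthetic demonstration; a real input would be a confocal stack.
rng = np.random.default_rng(0)
vol = rng.random((32, 64, 64))
vol[10:20, 20:40, 20:40] += 2.0              # one bright amorphous "cell"
print(measure_cells_3d(vol))
```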
Abstract:
Starting from the vantage point that explaining success at creating a venture should be the unique contribution—or at least one unique contribution—of entrepreneurship research, we argue that this success construct has not yet been adequately defined and operationalized. We thus offer suggestions for more precise conceptualization and measurement of this central construct. Rather than regarding the various success proxies used in prior research as poor operationalizations of success, we argue that they represent other important aspects of the venture creation process: engagement, persistence and progress. We hold that, in order to attain a better understanding of venture creation, these constructs also need to be theoretically defined. Further, their respective drivers need to be theorized and tested separately. We suggest theoretical definitions of each. We then develop and test hypotheses concerning how human capital, venture idea novelty and business planning have different impacts on the different assessments of the process represented by engagement, persistence, progress and success. The results largely confirm the stated hypotheses, suggesting that the conceptual and empirical approach we propose is a path towards improved understanding of the central entrepreneurship phenomenon of new venture creation.
Abstract:
A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment by the use of non-deterministic, readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the units from performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also to timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate, wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised distributed image acquisition system over a large geographic area; a real-world application of this functionality is the creation of a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A 'best-practice' design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for wirelessly synchronising the clocks between the boards makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement: the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of the Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below the target of 1 ms.
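For readers unfamiliar with IEEE-1588, the offset and path-delay arithmetic at the heart of a PTP-style exchange is compact. The sketch below is a generic illustration with invented timestamps; it does not model BabelFuse's packet formats or its hardware-assisted wireless timestamping:

```python
# Generic PTP-style two-way exchange, assuming a symmetric path delay.
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """t1: master sends Sync; t2: slave receives it;
    t3: slave sends Delay_Req; t4: master receives it."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay
    return offset, delay

# Invented timestamps (seconds): slave runs 1.5 ms ahead, 0.5 ms path delay.
offset, delay = ptp_offset_and_delay(t1=10.000000, t2=10.002000,
                                     t3=10.010000, t4=10.009000)
print(f"offset={offset*1e3:.3f} ms, delay={delay*1e3:.3f} ms")
```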
Abstract:
One of the next great challenges of cell biology is the determination of the enormous number of protein structures encoded in genomes. In recent years, advances in electron cryo-microscopy and high-resolution single particle analysis have reached the point where they now provide a methodology for high-resolution structure determination. Using this approach, images of randomly oriented single particles are aligned computationally to reconstruct 3-D structures of proteins and even whole viruses. One of the limiting factors in obtaining high-resolution reconstructions is acquiring a large enough representative dataset (>100,000 particles). Traditionally, particles have been picked manually, which is an extremely labour-intensive process. The problem is made especially difficult by the low signal-to-noise ratio of the images. This paper describes the development of automatic particle picking software, which has been tested with both negatively stained and cryo-electron micrographs. The algorithm has been shown to be capable of selecting most of the particles, with few false positives. Further work will involve extending the software to detect differently shaped and oriented particles.
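The abstract does not reveal the picking algorithm itself. As a generic illustration of the task, template matching by normalised cross-correlation is one common baseline for particle picking; all names, sizes and thresholds below are invented for the example and the synthetic image is far cleaner than a real micrograph:

```python
# Sketch of a generic template-matching particle picker,
# not the algorithm described in the paper.
import numpy as np
from skimage.feature import match_template, peak_local_max

def pick_particles(micrograph: np.ndarray, template: np.ndarray,
                   threshold: float = 0.4) -> np.ndarray:
    """Return (row, col) candidate particle centres found by normalised
    cross-correlation against a reference particle image."""
    ncc = match_template(micrograph, template, pad_input=True)
    # Keep well-separated correlation peaks above the score threshold.
    return peak_local_max(ncc, min_distance=template.shape[0] // 2,
                          threshold_abs=threshold)

# Synthetic demonstration with one embedded Gaussian-blob "particle".
yy, xx = np.mgrid[-8:8, -8:8]
tmpl = np.exp(-(xx**2 + yy**2) / 32.0)
rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.2, (256, 256))
img[40:56, 60:76] += tmpl
print(pick_particles(img, tmpl))   # expected: one peak near (48, 68)
```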