479 results for Interoperability
Abstract:
There is a need for decision support tools that integrate energy simulation into early design in the context of Australian practice. Despite the proliferation of simulation programs in the last decade, there are no ready-to-use applications that cater specifically for the Australian climate and regulations. Furthermore, the majority of existing tools focus on achieving interaction with the design domain through model-based interoperability, and largely overlook the issue of process integration. This paper proposes an energy-oriented design environment that both accommodates the Australian context and provides interactive and iterative information exchanges that facilitate feedback between domains. It then presents the structure for DEEPA, an openly customisable system that couples parametric modelling and energy simulation software as a means of developing a decision support tool to allow designers to rapidly and flexibly assess the performance of early design alternatives. Finally, it discusses the benefits of developing a dynamic and concurrent performance evaluation process that parallels the characteristics and relationships of the design process.
Abstract:
High Speed Rail (HSR) is rapidly gaining popularity worldwide as a safe and efficient transport option for long-distance travel. Designed to win market share from air transport, HSR systems optimise their productivity between increasing speeds and station spacing to offer high-quality service and gain ridership. Recent studies have investigated the effects that the deployment of HSR infrastructure has on spatial distribution and the economic development of cities and regions. Findings appear mostly positive at higher geographical scales, where HSR links connect major urban centres several hundred kilometres apart that are already well positioned within a national or international context. At the urban level, studies have also shown regeneration and concentration effects around HSR station areas, with positive returns on a city's image and economy. However, doubts persist about the effects of HSR at an intermediate scale, where the accessibility trade-off on station spacing limits access for many small and medium agglomerations. As a result, their ability to participate in the development opportunities facilitated by HSR infrastructure is significantly reduced. The locational advantages deriving from transport improvements appear especially uneven in regions that tend to have a polycentric structure, where cities may present great accessibility disparities between those served by HSR and those left behind. This thesis is situated in this context, where intermediate and regional cities have an existing or planned HSR corridor nearby but do not directly enjoy the presence of an HSR station. With the aim of understanding whether there might be a solution to this apparent incongruity, the research investigates strategies to integrate HSR accessibility at the regional level.
While the current literature recommends committing to ancillary investments for the uplift of station areas and the renewal of feeder systems, I hypothesised interoperability between the HSR and conventional networks in order to explore the possibilities offered by mixed traffic and infrastructure sharing. I then developed a methodology to quantify the exchange of benefits deriving from this synergistic interaction. In this way, it was possible to understand which level of service quality offered by alternative transit strategies best facilitates the distribution of accessibility benefits to areas far from actual HSR stations. Strategies were therefore selected for offering a type of service capable of regional extensions and urban penetrations, while incorporating a combination of specific advantages (e.g. speed, sub-urbanity, capacity, frequency and automation) in order to emulate HSR quality with increasingly efficient services. The North-eastern Italian macro-region was selected as the case study to ground the research, as it concurrently offers a peripheral polycentric metropolitan form, a planned HSR corridor with some portions of HSR infrastructure implemented, and a project to develop a regionally extended suburban rail service. Results show significant distributive potential, in terms of network effects produced in relation to HSR, in increasing proportions across all the strategies considered: a regional metro rail (RMR) strategy, a regional high speed rail (RHSR) strategy, a regional light rail transit (LRT) strategy, and a non-stopping continuous railway system (CRS) strategy. The provision of additional tools to value HSR infrastructure against its accessibility benefits, and their regional distribution through alternative strategies beyond the actual HSR stations, would have great implications, both politically and technically, in moving towards new dimensions of HSR evaluation and development.
Abstract:
Preservation and enhancement of transportation infrastructure is critical to continued economic development in Australia. Road infrastructure assets are of particular importance, due to their high setup costs and their social and economic impact on the national economy. Continuous availability of road assets, however, is contingent upon their effective design, condition monitoring, maintenance, renovation and upgrading. Achieving this requires data exchange, integration, and interoperability across municipal boundaries, yet there are no agreed reference frameworks that consistently describe road infrastructure assets. As a consequence, the specifications and technical solutions chosen to manage road assets do not provide adequate detail and quality of information to support asset lifecycle management processes, and decisions are taken based on perception rather than reality. This paper presents a road asset information model, which works as a reference framework to link other kinds of information with asset information, integrate different data suppliers, and provide a foundation for a service-driven integrated information framework for community infrastructure and asset management.
Abstract:
The digital humanities are growing rapidly in response to a rise in Internet use. What humanists mostly work on, and what forms much of the content of our growing repositories, are digital surrogates of originally analog artefacts. But is the data model upon which many of those surrogates are based, embedded markup, adequate for the task? Or does it in fact inhibit reusability and flexibility? To enhance interoperability of resources and tools, some changes to the standard markup model are needed. Markup could be removed from the text and stored in standoff form. The versions of which many cultural heritage texts are composed could also be represented externally, and computed automatically. These changes would not disrupt existing data representations, which could be imported without significant data loss. They would also enhance automation and ease the increasing burden on the modern digital humanist.
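As an illustration of the standoff approach this abstract advocates, here is a minimal sketch in which markup is kept outside the text as character-offset ranges and rendered to embedded form on demand. The tag names, offsets and function names are hypothetical, not drawn from any particular digital-humanities tool.

```python
# Standoff form: the text stays plain; markup lives in a separate list of
# (start, end, tag) ranges referencing character offsets into the text.
text = "The quick brown fox"
annotations = [(4, 9, "em"), (10, 15, "strong")]

def to_embedded(text, annotations):
    """Render standoff annotations as embedded (inline) markup."""
    # Process ranges right to left so earlier offsets remain valid.
    out = text
    for start, end, tag in sorted(annotations, reverse=True):
        out = out[:start] + f"<{tag}>" + out[start:end] + f"</{tag}>" + out[end:]
    return out

print(to_embedded(text, annotations))
# "The <em>quick</em> <strong>brown</strong> fox"

# Unlike embedded XML, the standoff list can also store overlapping ranges,
# e.g. (7, 12, "note"), without any well-formedness conflict in the source.
```

The original text is never modified, so the same plain text can carry several independent annotation sets, which is the reusability argument the abstract makes.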
Abstract:
Building Information Modelling (BIM) appears to be the next evolutionary link in project delivery within the AEC (Architecture, Engineering and Construction) Industry. There have been several surveys of implementation at the local level, but to date little is known of the international context. This paper is a preliminary report of a large scale electronic survey of the implementation of BIM and its impact on AEC project delivery and project stakeholders in Australia and internationally. National and regional patterns of BIM usage will be identified. These patterns will include disciplinary users, project lifecycle stages, technology integration (including software compatibility) and organisational issues such as human resources and interoperability. Also considered are the current status of the inclusion of BIM within tertiary level curricula and the potential for the creation of a new discipline.
Abstract:
Flexible information exchange is critical to successful design-analysis integration, but current top-down, standards-based and model-oriented strategies impose restrictions that contradict this flexibility. In this article we present a bottom-up, user-controlled and process-oriented approach to linking design and analysis applications that is more responsive to the varied needs of designers and design teams. Drawing on research into scientific workflows, we present a framework for integration that capitalises on advances in cloud computing to connect discrete tools via flexible and distributed process networks. We then discuss how a shared mapping process that is flexible and user friendly supports non-programmers in creating these custom connections. Adopting a services-oriented system architecture, we propose a web-based platform that enables data, semantics and models to be shared on the fly. We then discuss potential challenges and opportunities for its development as a flexible, visual, collaborative, scalable and open system.
Abstract:
Flexible information exchange is critical to successful design integration, but current top-down, standards-based and model-oriented strategies impose restrictions that contradict this flexibility. In this paper we present a bottom-up, user-controlled and process-oriented approach to linking design and analysis applications that is more responsive to the varied needs of designers and design teams. Drawing on research into scientific workflows, we present a framework for integration that capitalises on advances in cloud computing to connect discrete tools via flexible and distributed process networks. Adopting a services-oriented system architecture, we propose a web-based platform that enables data, semantics and models to be shared on the fly. We discuss potential challenges and opportunities for its development as a flexible, visual, collaborative, scalable and open system.
Abstract:
This paper describes the use of property graphs for mapping data between AEC software tools that are not linked by common data formats or other interoperability measures. The intention of introducing this in practice, education and research is to facilitate the use of diverse, non-integrated design and analysis applications by a variety of users who need to create customised digital workflows, including those who are not expert programmers. Data model types are examined to support the choice of directed, attributed, multi-relational graphs for such data transformation tasks. A brief exemplar design scenario is also presented to illustrate the concepts and methods proposed, and conclusions are drawn regarding the feasibility of this approach and directions for further research.
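To make the chosen data model concrete, the following is a minimal sketch of a directed, attributed, multi-relational graph used to map an element between two tools. The class, node identifiers and attribute names are illustrative assumptions, not the paper's implementation.

```python
# Minimal directed, attributed, multi-relational property graph: nodes and
# edges both carry attribute dicts, and edges are typed by a relation label.
class PropertyGraph:
    def __init__(self):
        self.nodes = {}   # node_id -> attribute dict
        self.edges = []   # (source, relation, target, attribute dict)

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, source, relation, target, **attrs):
        self.edges.append((source, relation, target, attrs))

    def neighbours(self, node_id, relation):
        """Follow edges of one relation type, the basis of a mapping query."""
        return [t for s, r, t, _ in self.edges if s == node_id and r == relation]

# A wall modelled in a design tool, mapped to a surface in an analysis tool.
g = PropertyGraph()
g.add_node("wall_01", tool="modeller", height_m=3.0)
g.add_node("surface_A", tool="analysis", u_value=0.35)
g.add_edge("wall_01", "maps_to", "surface_A", checked=False)

print(g.neighbours("wall_01", "maps_to"))  # ['surface_A']
```

Because relations are labelled, the same node pair can carry several distinct mappings (geometry, material, thermal), which is what distinguishes a multi-relational graph from a plain directed graph.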
Abstract:
The rapid growth of services available on the Internet and exploited through ever-globalizing business networks poses new challenges for service interoperability. New services, from consumer "apps" to enterprise suites and platform and infrastructure resources, are vying for demand with quickly evolving and overlapping capabilities, and shorter cycles of extending service access from user interfaces to software interfaces. Services drawn from a wider global setting are subject to greater change and heterogeneity, creating new requirements for structural and behavioral interface adaptation. In this paper, we analyze service interoperability scenarios in global business networks and propose new patterns for service interactions, beyond those proposed over the last ten years through the development of Web service standards and process choreography languages. In contrast to those, we reduce the design-time knowledge assumed in adapting services, giving way to run-time mismatch resolution; extend the focus from bilateral to multilateral messaging interactions; and propose declarative ways in which services and interactions take part in long-running conversations via the explicit use of state.
Abstract:
Widespread adoption by electricity utilities of Non-Conventional Instrument Transformers, such as optical or capacitive transducers, has been limited due to the lack of a standardised interface and multi-vendor interoperability. Low power analogue interfaces are being replaced by IEC 61850-9-2 and IEC 61869-9 digital interfaces that use Ethernet networks for communication. These 'process bus' connections achieve significant cost savings by simplifying connections between switchyards and control rooms; however, the in-service performance when these standards are employed is largely unknown. The performance of real-time Ethernet networks and time synchronisation was assessed using a scale model of a substation automation system. The test bed was constructed from commercially available timing and protection equipment supplied by a range of vendors. Test protocols have been developed to thoroughly evaluate the performance of Ethernet networks and network-based time synchronisation. The suitability of IEEE Std 1588 Precision Time Protocol (PTP) as a synchronising system for sampled values was tested in the steady state and under transient conditions. Similarly, the performance of hardened Ethernet switches designed for substation use was assessed under a range of network operating conditions. This paper presents test methods that use a precision Ethernet capture card to accurately measure PTP and network performance. These methods can be used for product selection and to assess ongoing system performance as substations age. Key findings on the behaviour of multi-function process bus networks are presented. System-level tests were performed using a Real Time Digital Simulator and a transformer protection relay with sampled value and Generic Object Oriented Substation Events (GOOSE) capability. These include the interactions between sampled values, PTP and GOOSE messages.
Our research has demonstrated that several protocols can be used on a shared process bus, even with very high network loads. This should provide confidence that this technology is suitable for transmission substations.
Abstract:
Parametric and generative modelling methods are ways of making computer models more flexible and of formalising domain-specific knowledge. At present, no open standard exists for the interchange of parametric and generative information. The Industry Foundation Classes (IFC), an open standard for interoperability in building information models, are presented as the base for an open standard in parametric modelling. The advantage of allowing parametric and generative representations is that the early design process can accommodate more iteration, and changes can be implemented more quickly than with traditional models. This paper begins with a formal definition of what constitutes parametric and generative modelling methods and then proceeds to describe an open standard in which the interchange of components could be implemented. As an illustrative example of generative design, Frazer's 'Reptiles' project from 1968 is reinterpreted.
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control ambiguity resolution quality. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational test method is to determine the criterion according to the underlying model and user requirements. Miss-detected incorrect integers lead to a hazardous result, which should be strictly controlled; in ambiguity resolution, the miss-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, a criteria table for the ratio test is computed based on extensive data simulations, and real-time users can determine the ratio test criterion by looking it up in the table. This method has been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis test theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined.
Finally, the factors that influence the fixed failure rate ratio test threshold are discussed based on extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method given a proper stochastic model.
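The distinction the abstract draws, a fixed empirical threshold versus a threshold looked up from a model-dependent criteria table, can be sketched as follows. The threshold values and table keys below are placeholders, not the paper's computed criteria.

```python
# Illustrative sketch of the ratio test for GNSS ambiguity validation.
def ratio_test(best_residual, second_residual, threshold):
    """Accept the fixed integer ambiguities if the second-best candidate's
    squared residual norm exceeds the best candidate's by the threshold factor."""
    return second_residual / best_residual >= threshold

# Fixed empirical criterion (e.g. 2.0) applied regardless of model strength:
print(ratio_test(best_residual=0.8, second_residual=2.0, threshold=2.0))  # True

# Fixed failure-rate approach: the threshold is instead looked up from a
# precomputed table indexed by model strength (hypothetical values here).
criteria_table = {"weak_model": 3.0, "strong_model": 1.5}
print(ratio_test(0.8, 2.0, criteria_table["weak_model"]))  # False
```

The same measurement is accepted under the fixed criterion but rejected under the weak-model entry, which is the abstract's point: a single empirical threshold cannot control the failure rate across models of different strength.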
Abstract:
Dedicated Short Range Communication (DSRC) is the emerging key technology supporting cooperative road safety systems within Intelligent Transportation Systems (ITS). The DSRC protocol stack includes a variety of standards such as IEEE 802.11p and SAE J2735. The effectiveness of the DSRC technology depends not only on the interoperable cooperation of these standards, but also on the interoperability of DSRC devices from various manufacturers. To address the second constraint, the SAE defines a message set dictionary under the J2735 standard for the construction of device-independent messages. This paper focuses on the deficiencies of the SAE J2735 standard being developed for deployment in Vehicular Ad-hoc Networks (VANET). In this regard, the paper discusses how a Basic Safety Message (BSM), the fundamental message type defined in SAE J2735, is constructed, sent and received by safety communication platforms to provide a comprehensive device-independent solution for Cooperative ITS (C-ITS). This provides some insight into the technical knowledge behind the construction and exchange of BSMs within VANET. A series of real-world DSRC data collection experiments was conducted. The results demonstrate that the reliability and throughput of DSRC depend highly on the applications utilizing the medium. Therefore, an active application-dependent medium control measure, using a novel message-dissemination frequency controller, is introduced. This application-level message handler improves the reliability of both BSM transmissions/receptions and Application-layer error handling, which is vital to decentralized congestion control (DCC) mechanisms.
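To illustrate the general idea of an application-level dissemination-frequency controller of the kind this abstract describes, here is a minimal sketch that adapts the BSM send interval to a channel-load feedback signal. The proportional rule, constants and function name are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch: adapt the BSM send interval to measured channel load.
def next_interval_ms(current_interval_ms, channel_busy_ratio,
                     target_busy=0.6, min_ms=100, max_ms=1000):
    """Lengthen the BSM send interval when the channel is congested and
    shorten it when there is headroom (simple proportional rule),
    clamped between a 10 Hz ceiling (100 ms) and a 1 Hz floor (1000 ms)."""
    scale = channel_busy_ratio / target_busy
    proposed = current_interval_ms * scale
    return max(min_ms, min(max_ms, proposed))

# Congested channel: back off from the nominal 10 Hz (100 ms) rate.
print(next_interval_ms(100, channel_busy_ratio=0.9))  # 150.0
# Lightly loaded channel: stay at the 100 ms minimum interval.
print(next_interval_ms(100, channel_busy_ratio=0.3))  # 100
```

Shedding transmissions at the application layer, rather than letting the MAC layer drop them under load, is what lets the handler coordinate with DCC mechanisms as the abstract argues.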
Abstract:
This chapter presents an historical narrative on the recent evolution of information and communications technology (ICT) that has been, and is, utilized for purposes of learning. In other words, it presents an account of the development of e-learning supported through the Web and other similar virtual environments. It does not attempt to present a definitive account, as such an exercise is fraught with assumptions, contextual bias, and probable conjecture. The concern here is more with contextualizing the role of inquiry in learning and the evolving digital tools that enable interfaces that promote and support it. In tracking this evolution, both multi-disciplinary and trans-disciplinary research has been pursued. Key historical developments are identified, as well as interpretations of the key drivers of e-learning over time and into what might be better described as digital learning. Innovations in the development of digital tools are described as dynamic and emergent, evolving as a consequence of multiple, sometimes hidden drivers of change. Yet conflating advancements in learning technologies with e-learning seems pervasive, as does the push for the "open" agenda: a growing number of initiatives and movements dominated by themes associated with access, intellectual property, public benefit, sharing and technical interoperability. Openness is also explored in this chapter, though more in terms of what it means when associated with inquiry. By investigating opportunities for the stimulation and support of questioning online, in particular why-questioning, this chapter is focused on "opening" content, not just for access but for inquiry and deeper learning.