418 results for Digital material representation
Abstract:
As a functioning performing arts centre, commercial enterprise, tourist attraction and major national asset, Sydney Opera House must continue to demonstrate the optimal use and effectiveness of its facilities management (FM) to provide value for its stakeholders. To better achieve this, the Cooperative Research Centre for Construction Innovation focussed on the following three themes for investigation in the FM Exemplar Project — Sydney Opera House:
1. Digital modelling: developing a building information model capable of integrating information from disparate software systems and hard copy, and combining this with a spatial 3D computer-aided design (CAD)/geographic information system (GIS) platform. This model offers a visual representation of the building and its component elements in 3D, and provides comprehensive information on each element. The model can work collaboratively through an open data exchange standard (common to all compliant software) in order to mine the data required to further FM objectives (such as maintenance) more efficiently and effectively.
2. Services procurement: developing a multi-criteria, performance-based procurement framework aligned with organisational objectives for FM service delivery.
3. Performance benchmarking: developing an FM benchmarking framework that enables facilities and organisations to develop key performance indicators (KPIs) to identify better practice and improvement strategies.
These three research stream outcomes were then aligned within the broader context of Sydney Opera House’s Total Asset Management (TAM) Plan and Strategic Asset Maintenance (SAM) Plan, arriving at a business framework aligned with, and in support of, organisational objectives. The Sydney Opera House is managed by the Sydney Opera House Trust on behalf of the Government of the State of New South Wales.
Within the framework of the TAM Plan prepared in accordance with NSW Treasury Guidelines, the assimilation of these three themes provides an integrated FM solution capable of supporting Sydney Opera House’s business objectives and functional requirements. FM as a business enabler showcases innovative methods for improving FM performance, better aligns service and performance objectives, and provides a better-practice model to support the business enterprise.
Abstract:
Since 1995 the buildingSMART International Alliance for Interoperability (buildingSMART) has developed a robust standard called the Industry Foundation Classes (IFC). IFC is an object-oriented data model with a related file format that has facilitated the efficient exchange of data in the development of building information models (BIM). The Cooperative Research Centre for Construction Innovation has contributed to the international effort in the development of the IFC standard, and specifically to the reinforced concrete part of the latest IFC 2x3 release. The Industry Foundation Classes have been endorsed by the International Organization for Standardization as a Publicly Available Specification (PAS) under the label ISO/PAS 16739. For more details, go to http://www.tc184-sc4.org/About_TC184-SC4/About_SC4_Standards/ The current IFC model covers the building itself to a useful level of detail. The next stage of development for the IFC standard is where the building meets the ground (terrain), and civil and external works such as pavements, retaining walls, bridges, tunnels etc. With the current focus in Australia on infrastructure projects over the next 20 years, a logical extension to this standard was in the area of site and civil works. This proposal recognises that there is an existing body of work on the specification of road representation data. In particular, LandXML is recognised, as are TransXML in the broader context of transportation and CityGML in the common interfacing of city maps, buildings and roads. Examination of interfaces between IFC and these specifications is therefore within the scope of this project. That such interfaces can be developed has already been demonstrated in principle within the IFC for Geographic Information Systems (GIS) project. National road standards that are already in use should be carefully analysed and contacts established in order to gain from this knowledge.
The Object Catalogue for the Road Transport Sector (OKSTRA) should be noted as an example. It is also noted that buildingSMART Norway has submitted a proposal.
Abstract:
BIM (Building Information Modelling) is an approach that involves applying and maintaining an integral digital representation of all building information for different phases of the project lifecycle. This paper presents an analysis of the current state of BIM in the industry and a re-assessment of its role and potential contribution in the near future, given the apparent slow rate of adoption by the industry. The paper analyses the readiness of the building industry with respect to the product, processes and people to present an argument on where the expectations from BIM and its adoption may have been misplaced. This paper reports on the findings from: (1) a critical review of latest BIM literature and commercial applications, and (2) workshops with focus groups on changing work-practice, role of technology, current perceptions and expectations of BIM.
Abstract:
These National Guidelines and Case Studies for Digital Modelling are the outcomes from one of a number of Building Information Modelling (BIM)-related projects undertaken by the CRC for Construction Innovation. Since the CRC opened its doors in 2001, the industry has seen a rapid increase in interest in BIM, and widening adoption. These guidelines and case studies are thus very timely, as the industry moves to model-based working and starts to share models in a new context called integrated practice. Governments, both federal and state, and in New Zealand, are starting to outline the role they might take so that, in contrast to the adoption of 2D CAD in the early 1990s, a national, industry-wide benefit results from this new paradigm of working. Section 1 of the guidelines gives an overview of BIM: how it affects our current mode of working, and what we need to do to move to fully collaborative, model-based facility development. The role of open standards such as IFC is described as a mechanism to support new processes and to make the extensive design and construction information available to asset operators and managers. Digital collaboration modes, types of models, levels of detail, object properties and model management complete this section. It will be relevant for owners, managers and project leaders as well as direct users of BIM. Section 2 provides recommendations and guides for key areas of model creation and development, and the move to simulation and performance measurement. These are the more practical parts of the guidelines, developed for design professionals, BIM managers, technical staff and ‘in the field’ workers. The guidelines are supported by six case studies, including a summary of lessons learnt about implementing BIM in Australian building projects.
A key aspect of these publications is the identification of a number of important industry actions: the need for BIM-compatible product information and a national context for classifying product data; the need for an industry agreement and standard process for process definition; and finally, the need to ensure a national standard for sharing data among all of the participants in the facility-development process.
Abstract:
This paper examines three functions of music technology in the study of music: firstly as a tool, secondly as an instrument and, lastly, as a medium for thinking. As our societies become increasingly immersed in digital media for representation and communication, our philosophies of music education need to adapt to integrate these developments while maintaining the essence of music. The foundation of music technology in the 1990s is the digital representation of sound. It is this fundamental shift to a new medium with which to represent sound that carries with it the challenge to address digital technology and its multiple effects on music creation and presentation. In this paper I suggest that music institutions should take a broad and integrated approach to the place of music technology in their courses, based on an understanding of the digital representation of sound and the three functions it can serve. Educators should reconsider digital technologies such as synthesizers and computers as musical instruments and cognitive amplifiers, not simply as efficient tools.
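The "digital representation of sound" underpinning this argument can be illustrated with a minimal sketch: sampling a sine tone and quantizing it to 16-bit PCM. The sample rate, duration and frequency below are illustrative values, not drawn from the paper.

```python
import numpy as np

# Sample a 440 Hz sine tone at 44.1 kHz and quantize to 16-bit PCM.
sample_rate = 44_100                 # samples per second (CD-quality rate)
duration_s = 0.01                    # a 10 ms snippet
freq_hz = 440.0                      # concert A

t = np.arange(int(sample_rate * duration_s)) / sample_rate
signal = np.sin(2 * np.pi * freq_hz * t)            # amplitude in [-1, 1]
pcm16 = np.round(signal * 32767).astype(np.int16)   # 16-bit integer samples
```

Every manipulation a synthesizer or computer performs on audio ultimately operates on arrays of numbers like `pcm16`; this is the shift of medium the paper refers to.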
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification.
The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification in which compressed speech is involved. Examples include mobile communications, where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, that of maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
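As a rough illustration of the basic VQ building block described above (plain VQ with a generalized Lloyd, i.e. k-means, trained codebook, not the product-code structure or the specific training algorithms evaluated in the thesis), the following sketch trains a codebook and encodes vectors as codebook indices. All function names and parameters are illustrative.

```python
import numpy as np

def train_codebook(vectors, codebook_size, iters=20, seed=0):
    """Train a VQ codebook with the generalized Lloyd (k-means) algorithm."""
    rng = np.random.default_rng(seed)
    # Initialise codewords from randomly chosen training vectors.
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
    for _ in range(iters):
        # Nearest-codeword assignment under squared Euclidean distortion.
        dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # Centroid update; keep the old codeword if a cell is empty.
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Encode each vector as the index of its nearest codeword."""
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)
```

With a 32-entry codebook, each block is coded with 5 bits; product-code VQ applies the same idea to sub-vectors of the spectral vector with separate codebooks, which is what tames the encoder complexity the abstract mentions.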
Abstract:
This research used the Queensland Police Service, Australia, as a major case study. Information on the principles, techniques and processes used, and the reasons for the recording, storing and release of audit information for evidentiary purposes, is reported. It is shown that law enforcement agencies have a two-fold interest in, and legal obligation pertaining to, audit trails. The first interest relates to situations where audit trails are actually used by criminals in the commission of crime, and the second to where audit trails are generated by the information systems used by the police themselves in support of the recording and investigation of crime. Eleven court cases involving Queensland Police Service audit trails used in evidence in Queensland courts were selected for further analysis. It is shown that, of the cases studied, none of the evidence presented was rejected or seriously challenged from a technical perspective. These results were further analysed and related to normal requirements for trusted maintenance of audit trail information in sensitive environments, with discussion of the ability and/or willingness of courts to fully challenge, assess or value audit evidence presented. Managerial and technical frameworks are proposed for, firstly, what may be considered an environment in which a computer system is operating “properly” and, secondly, what education, training, qualifications, expertise and the like may be considered appropriate for the persons responsible within that environment. Analysis was undertaken to determine whether audit and control of information in a high-security environment, such as law enforcement, could be judged as having improved, or not, in the transition from manual to electronic processes.
Information collection, control of processing and audit in the manual processes used by the Queensland Police Service, Australia, in the period 1940 to 1980 were assessed against the current electronic systems essentially introduced to policing in the 1980s and 1990s. Results show that electronic systems do provide for faster communications, with centrally controlled and updated information readily available for use by large numbers of users connected across significant geographical distances. However, it is clearly evident that the price paid for this is a lack of ability and/or reluctance to provide improved audit and control processes. To compare the information systems audit and control arrangements of the Queensland Police Service with those of other government departments and agencies, an Australia-wide survey was conducted. Results were contrasted with those of a survey conducted four years previously by the Australian Commonwealth Privacy Commission, which showed that security in relation to the recording of activity against access to information held on Australian government computer systems had been poor and a cause for concern. However, within this four-year period there is evidence to suggest that government organisations are increasingly more inclined to generate audit trails. An attack on the overall security of audit trails in computer operating systems was initiated to further investigate findings reported in relation to the government systems survey. The survey showed that information systems audit trails in Microsoft Corporation's “Windows” operating system environments are relied on quite heavily. An audit of the security of audit trails generated, stored and managed in the Microsoft “Windows 2000” operating system environment was undertaken and compared and contrasted with similar audit trail schemes in the “UNIX” and “Linux” operating systems.
Strength of passwords and exploitation of any security problems in access control were targeted using software tools that are freely available in the public domain. Results showed that such security for the “Windows 2000” system is seriously flawed, and the integrity of audit trails stored within these environments cannot be relied upon. A framework and set of guidelines for use by expert witnesses in the information technology (IT) profession is then proposed. This is achieved by examining the current rules and guidelines related to the provision of expert evidence in a court environment, by analysing the rationale for the separation of distinct disciplines and corresponding bodies of knowledge used by the medical profession and forensic science, and then by analysing the bodies of knowledge within the discipline of IT itself. It is demonstrated that the accepted processes and procedures relevant to expert witnessing in a court environment are transferable to the IT sector. However, unlike some discipline areas, this analysis has clearly identified two distinct aspects of the matter which appear particularly relevant to IT. These two areas are: expertise gained through the application of IT to information needs in a particular public or private enterprise; and expertise gained through accepted and verifiable education, training and experience in fundamental IT products and systems.
Abstract:
With the advances in computer hardware and software development techniques over the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies. Simulation is now proven to be the cheapest means to carry out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solution and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common. Most applications focused on isolated parts of the railway system. It is more appropriate to regard those applications as primarily mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their own special features in different railway systems. To further complicate the simulation requirements, constraints like track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system.
In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Advanced software design not only greatly enhances the applicability of the simulators; it also encourages maintainability and modularity for easy understanding and further development, and portability across hardware platforms. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is, in particular, given to models for train movement, power supply systems and traction drives. These models have been successfully used to resolve various ‘what-if’ issues effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
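The simplest kind of train-movement model the paper reviews can be sketched as a point-mass train with power-limited tractive effort and a Davis-type running resistance, integrated with a fixed time step. The model form is standard; the function name and every parameter value below are hypothetical, not taken from the paper.

```python
def simulate_run(mass_kg, power_w, a, b, c, v_max, distance_m, dt=0.5):
    """Point-mass train movement with power-limited tractive effort and
    Davis-type resistance R(v) = a + b*v + c*v**2 (all values hypothetical).
    Returns (run time [s], coarse traction energy estimate [J])."""
    v = x = t = energy = 0.0
    while x < distance_m:
        resistance = a + b * v + c * v * v            # running resistance [N]
        # Effort needed to hold v_max this step; caps effort while cruising.
        needed = resistance + mass_kg * (v_max - v) / dt
        effort = min(power_w / max(v, 1.0), needed)   # power-limited effort [N]
        accel = (effort - resistance) / mass_kg
        v = max(0.0, v + accel * dt)
        x += v * dt
        t += dt
        energy += effort * v * dt                     # mechanical work at wheel
    return t, energy
```

A 'what-if' study then becomes a parameter sweep: rerunning the loop with a different speed limit or train mass directly yields the changed run time and energy consumption.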
Abstract:
Instrumental music performance is a well-established case of real-time interaction with technology and, when extended to ensembles, of interaction with others. However, these interactions are fleeting and the opportunities to reflect on action are limited, even though audio and video recording has recently provided important opportunities in this regard. In this paper we report on research to further extend these reflective opportunities through the capture and visualization of gestural data collected during collaborative virtual performances, specifically using the digital media instrument Jam2jam AV and the purpose-built visualization software Jam2jam AV Visualize. We discuss how such visualization may assist performance development and understanding. The discussion engages with issues of representation, authenticity of virtual experiences, intersubjectivity and wordless collaboration, and creativity support. Two usage scenarios are described showing that collaborative intent is evident in the data visualizations more clearly than in audio-visual recordings alone, indicating that the visualization of performance gestures can be an efficient way of identifying deliberate and co-operative performance behaviours.
Abstract:
This paper presents a robust stochastic framework for the incorporation of visual observations into conventional estimation, data fusion, navigation and control algorithms. The representation combines Isomap, a non-linear dimensionality reduction algorithm, with expectation maximization, a statistical learning scheme. The joint probability distribution of this representation is computed offline based on existing training data. The training phase of the algorithm results in a non-linear and non-Gaussian likelihood model of natural features conditioned on the underlying visual states. This generative model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The instantiated likelihoods are expressed as a Gaussian mixture model and are conveniently integrated within existing non-linear filtering algorithms. Example applications based on real visual data from heterogeneous, unstructured environments demonstrate the versatility of the generative models.
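The offline/online split described above can be sketched with off-the-shelf components: scikit-learn's Isomap for the non-linear dimensionality reduction and a Gaussian mixture fitted by expectation maximization. This is a simplified stand-in for the paper's model: random vectors replace real visual descriptors, and the mixture here is fitted over the embedded states rather than the full joint distribution the authors compute.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 20))   # stand-in for high-dimensional visual descriptors

# Offline training phase: learn the low-dimensional manifold, then fit a
# Gaussian mixture model to the embedded "visual states" via EM.
isomap = Isomap(n_neighbors=10, n_components=3)
states = isomap.fit_transform(features)
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(states)

# Online phase: embed a new observation and instantiate its likelihood
# from the mixture model (per-sample log-likelihood).
new_obs = rng.normal(size=(1, 20))
new_state = isomap.transform(new_obs)
log_lik = gmm.score_samples(new_state)
```

The mixture's compact parameterisation (weights, means, covariances) is what makes such likelihoods convenient to plug into non-linear filters such as particle filters.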
Abstract:
This paper presents a robust stochastic model for the incorporation of natural features within data fusion algorithms. The representation combines Isomap, a non-linear manifold learning algorithm, with Expectation Maximization, a statistical learning scheme. The representation is computed offline and results in a non-linear, non-Gaussian likelihood model relating visual observations such as color and texture to the underlying visual states. The likelihood model can be used online to instantiate likelihoods corresponding to observed visual features in real-time. The likelihoods are expressed as a Gaussian Mixture Model so as to permit convenient integration within existing nonlinear filtering algorithms. The resulting compactness of the representation is especially suitable to decentralized sensor networks. Real visual data consisting of natural imagery acquired from an Unmanned Aerial Vehicle is used to demonstrate the versatility of the feature representation.
Abstract:
Mainstream representations of trans people typically run the gamut from victim to mentally ill, and are almost always articulated by non-trans voices. The era of user-generated digital content and participatory culture has heralded unprecedented opportunities for trans people who wish to speak their own stories in public spaces. Digital Storytelling, as an easily accessible autobiographical audio-visual form, offers scope to play with multi-dimensional and ambiguous representations of identity that contest mainstream assumptions of what it is to be ‘male’ or ‘female’. Also, unlike mainstream media forms, online and viral distribution of Digital Stories offers the potential to reach a wide range of audiences, which is appealing to activist-oriented storytellers who wish to confront social prejudices. However, with these newfound possibilities come concerns regarding visibility and privacy, especially for storytellers who are all too aware of the risks of being ‘out’ as trans. This paper explores these issues from the perspective of three trans storytellers, with reference to the Digital Stories they have created and shared online and on DVD. These exemplars are contextualised with some popular and scholarly perspectives on trans representation, in particular embodied and performed identity. It is contended that trans Digital Stories, while appearing in some ways to be quite conventional, actually challenge common notions of gender identity in ways that are both radical and transformative.
Abstract:
Women and Representation in Local Government opens up an opportunity to critique and move beyond suppositions and labels in relation to women in local government. Presenting a wealth of new empirical material, this book brings together international experts to examine and compare the presence of women at this level, and features case studies on the US, UK, France, Germany, Spain, Finland, Uganda, China, Australia and New Zealand. The book is divided into four main sections, each exploring a key theme related to women and representation in local government and engaging with contemporary gender theory and the broader literature on women and politics. The contributors explore local government as a gendered environment, critique strategies to address the limited number of elected female members in local government, and examine the impact of significant recent changes on local government through a gender lens. Addressing key questions of how gender equality can be achieved in this sector, the book will be of strong interest to students and academics working in the fields of gender studies, local government and international politics.
Abstract:
New technologies have the potential both to expose children to, and to protect them from, television news footage likely to disturb or frighten. The advent of cheap, portable and widely available digital technology has vastly increased the possibility of violent news events being captured and potentially broadcast. This material has the potential to be particularly disturbing and harmful to young children. On the flipside, available digital technology could be used to build in protection for young viewers, especially when it comes to preserving scheduled television programming and guarding against violent content being broadcast during live crosses from known trouble spots. Based on interviews with news directors and parents, and a review of published material, two recommendations are put forward:
1. Digital television technology should be employed to prevent news events "overtaking" scheduled children's programming and to protect safe harbours placed in the classification zones to protect children.
2. Broadcasters should regain control of the images that go to air during "live" feeds from obviously volatile situations by building in short delays in G classification zones.