952 results for orthonormal basis functions (OBF)
Abstract:
A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic, readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the units from performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also to timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate, wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised, distributed image acquisition system over a large geographic area; a real-world application for this functionality is a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A ‘best-practice’ design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse System comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for wirelessly synchronising the clocks between the boards makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement; the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below the target of 1 ms.
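The abstract does not spell out the synchronisation arithmetic, but the IEEE-1588 PTP exchange it builds on is standard. The following minimal Python sketch (variable names hypothetical, not from the thesis) shows the classic two-timestamp offset/delay computation a Slave could perform against the Grand Master, assuming a symmetric path delay as PTP does:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic IEEE-1588 offset/delay calculation.

    t1: Sync departure time (Grand Master clock)
    t2: Sync arrival time (Slave clock)
    t3: Delay_Req departure time (Slave clock)
    t4: Delay_Req arrival time (Grand Master clock)
    Assumes the path delay is symmetric, as PTP does.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # Slave clock minus Master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay estimate
    return offset, delay

# Example: a Slave running 250 us fast over a 1.2 ms wireless path.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=0.00145, t3=0.002, t4=0.00295)
print(offset, delay)  # ~0.00025 s, ~0.0012 s
```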
Abstract:
This paper is concerned with recent advances in the development of near wall-normal-free Reynolds-stress models, whose single-point closure formulation, based on the inhomogeneity direction concept, is completely independent of the distance from the wall and of the wall-normal direction. In the present approach the direction of the inhomogeneity unit vector is decoupled from the coefficient functions of the inhomogeneous terms. A study of the relative influence of the particular closures used for the rapid redistribution terms and for the turbulent diffusion is undertaken, through comparison with measurements and with a baseline Reynolds-stress model (RSM) using geometric wall normals. It is shown that wall-normal-free RSMs can be reformulated as a projection on a tensorial basis that includes the inhomogeneity direction unit vector, suggesting that the theory of the redistribution tensor closure should be revised by taking into account inhomogeneity effects in the tensorial integrity basis used for its representation.
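As a schematic illustration only (the exact basis and coefficient functions are model-specific and not given in the abstract), such a reformulation writes the redistribution tensor as a projection on an integrity basis extended with the inhomogeneity direction unit vector, where that unit vector is built from flow quantities rather than wall geometry; one choice used in this literature derives it from the gradient of a turbulence length scale:

```latex
% Schematic only: T^{(n)}_{ij} is a tensorial integrity basis built from the
% anisotropy b_{kl}, mean strain S_{kl}, rotation W_{kl}, and n_k n_l.
\phi_{ij} = \sum_{n} c_n(A_2, A_3, \dots)\,
            T^{(n)}_{ij}\big(b_{kl},\, S_{kl},\, W_{kl},\, n_k n_l\big),
\qquad
n_k = \frac{\partial_k \ell}{\lVert \nabla \ell \rVert},
\quad \ell = \frac{k^{3/2}}{\varepsilon}.
```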
Abstract:
The research described in this paper forms part of an in-depth investigation of safety culture in one of Australia’s largest construction companies. The research builds on a previous qualitative study with organisational safety leaders and further investigates how safety culture is perceived and experienced by organisational members, as well as how this relates to their safety behaviour and related outcomes at work. Participants were 2273 employees of the case study organisation, with 689 from the Construction function and 1584 from the Resources function. The results of several analyses revealed some interesting organisational variance on key measures. Specifically, the Construction function scored significantly higher on all key measures: safety climate, safety motivation, safety compliance, and safety participation. The results are discussed in terms of relevance in an applied research context.
Abstract:
Urban design that harnesses natural features (such as green roofs and green walls) to improve design outcomes is gaining significant interest, particularly as there is growing evidence of links between human health and wellbeing, and contact with nature. The use of such natural features can provide many significant benefits, such as reduced urban heat island effects, reduced peak energy demand for building cooling, enhanced stormwater attenuation and management, and reduced air pollution and greenhouse gas emissions. The principle of harnessing natural features as functional design elements, particularly in buildings, is becoming known as ‘biophilic urbanism’. Given the potential for global application and benefits for cities from biophilic urbanism, and the growing number of successful examples of this, it is timely to develop enabling policies that help overcome current barriers to implementation. This paper describes a basis for inquiry into policy considerations related to increasing the application of biophilic urbanism. The paper draws on research undertaken as part of the Sustainable Built Environment National Research Centre (SBEnrc) in Australia in partnership with the Western Australian Department of Finance, Parsons Brinckerhoff, Green Roofs Australasia, and Townsville City Council (CitySolar Program). The paper discusses the emergence of a qualitative, mixed-method approach that combines an extensive literature review, stakeholder workshops and interviews, and a detailed study of leading case studies. It highlights the importance of experiential and contextual learnings to inform biophilic urbanism and provides a structure to distil such learnings to benefit other applications.
Abstract:
In 1980 Alltop produced a family of cubic phase sequences that nearly meets the Welch bound for maximum non-peak correlation magnitude. This family of sequences was shown by Wootters and Fields to be useful for quantum state tomography. Alltop’s construction used a function that is not planar, but whose difference function is planar. In this paper we show that Alltop-type functions cannot exist in fields of characteristic 3 and that, for a known class of planar functions, x^3 is the only Alltop-type function.
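For readers unfamiliar with the construction, a minimal Python sketch (not from the paper) builds the p cyclic translates of Alltop's cubic phase sequence over a prime p >= 5 and checks that distinct members correlate with magnitude exactly 1/sqrt(p), which nearly meets the Welch bound:

```python
import numpy as np

def alltop_family(p):
    """The p cyclic translates s_m(k) = exp(2*pi*i*(k+m)^3/p)/sqrt(p), p prime >= 5."""
    k = np.arange(p)
    return np.array([np.exp(2j * np.pi * ((k + m) ** 3 % p) / p) / np.sqrt(p)
                     for m in range(p)])

p = 7
S = alltop_family(p)
gram = np.abs(S @ S.conj().T)              # pairwise correlation magnitudes
print(gram[~np.eye(p, dtype=bool)].max())  # ~0.378: max off-peak correlation
print(1 / np.sqrt(p))                      # ~0.378: the 1/sqrt(p) level
```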
Abstract:
Communication processes are vital in the lifecycle of BPM projects. With this in mind, much research has been performed into facilitating this key component between stakeholders. Amongst the methods used to support this process are personalised process visualisations. In this paper, we review the development of this visualisation trend and then propose a theoretical analysis framework based upon communication theory. We use this framework to provide theoretical support to the conjecture that 3D virtual worlds are powerful tools for communicating personalised visualisations of processes within a workplace. Meta-requirements are then derived and applied, via 3D virtual world functionalities, to generate example visualisations containing personalised aspects, which we believe enhance the process of communication between analysts and stakeholders in BPM process (re)design activities.
Abstract:
This paper provides a commentary on the contribution by Dr Chow, who questioned whether the functions of learning are general across all categories of tasks or whether there are some task-particular aspects to the functions of learning in relation to task type. Specifically, he queried whether principles and practice for the acquisition of sport skills differ from those for musical, industrial, military and human factors skills. In this commentary we argue that ecological dynamics contains general principles of motor learning that can be instantiated in specific performance contexts to underpin learning design. In this proposal, we highlight the importance of conducting skill acquisition research in sport, rather than relying on empirical outcomes of research from a variety of different performance contexts. Here we discuss how the task constraints of different performance contexts (sport, industry, military, music) provide different specific information sources to which individuals couple their actions when performing and acquiring skills. We conclude by suggesting that this relationship between performance task constraints and learning processes might help explain the traditional emphasis on performance curves and performance outcomes to infer motor learning.
Abstract:
This report looks at opportunities in relation to what is either already available or starting to take off in Information and Communication Technology (ICT). ICT focuses on the entire system of information, communication, processes and knowledge within an organisation, and on how technology can be implemented to serve the information and communication needs of people and organisations. An ICT system involves a combination of work practices, information, people, and a range of technologies and applications organised to make the business or organisation fully functional and efficient, and to accomplish its goals. Our focus is on vocational, work-based education in New Zealand. The report is not about eLearning, although we briefly touch on the topic. We provide a background on vocational education in New Zealand, cover what we consider to be the key trends impacting work-based vocational education and training (VET), and offer practical suggestions for leveraging better value from ICT initiatives across the main activities of an Industry Training Organisation (ITO). We use a learning value chain approach to demonstrate the main functions ITOs engage in, and also use this approach as the basis for developing and prioritising an ICT strategy. Much of what we consider in this report is applicable to the wider tertiary education sector as it relates to life-long learning. We consider ICT as an enabler that: a) connects education businesses (of all types, including tertiary education institutions) to learners, their career decisions and their learning, and b) enables those same businesses to run more efficiently. We suggest that these two sets of activities be considered as interconnected parts of the same education or training business ICT strategy.
Abstract:
The current study examined the structure of the volunteer functions inventory within a sample of older individuals (N = 187). The career items were replaced with items examining the concept of continuity of work, a potentially more useful and relevant concept for this population. Factor analysis supported a four-factor solution, with values, social and continuity emerging as single factors, and enhancement and protective items loading together on a single factor. Understanding items did not load highly on any factor. The values and continuity functions were the only dimensions to emerge as predictors of intention to volunteer. This research has important implications for understanding the motivation of older adults to engage in contemporary volunteering settings.
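As an illustration only (the study's data and item wording are not reproduced here), an exploratory factor analysis of the kind described could be run in Python with scikit-learn; the 24-item response matrix below is a random stand-in for real inventory scores:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(187, 24))   # stand-in for respondents x items (Likert scores)

fa = FactorAnalysis(n_components=4, rotation="varimax")  # four-factor solution
fa.fit(X)
loadings = fa.components_.T      # items x factors: inspect which items load where
print(np.round(loadings, 2))
```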
Abstract:
In this study, the nature of the coupling interactions between copper and uracil, as well as several of its derivatives, has been systematically investigated employing the atoms in molecules (AIM) theory and energy decomposition analyses. The whole interaction process has been investigated through analyses of the radial distribution functions of the Cu⋯X (X = S and O) contacts on the basis of ab initio molecular dynamics. No direct relationship between the adsorption strengths and the inhibition efficiencies of the inhibitors has been observed. Additionally, the possibility of the methyl-substituted dithiouracil species acting as copper corrosion inhibitors has been tested.
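The abstract gives no analysis details; as a generic sketch (names hypothetical), a radial distribution function for the Cu⋯X contact can be accumulated from MD frames roughly as follows, assuming an isotropic system with one Cu reference site per frame:

```python
import numpy as np

def rdf(distances, n_frames, rho, r_max=6.0, n_bins=120):
    """Bin Cu...X distances gathered over all frames into g(r).

    distances: 1-D array of Cu...X separations from every analysed frame
    rho: average number density of X sites (normalises g(r) -> 1 at large r)
    """
    counts, edges = np.histogram(distances, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])                 # bin centres
    shell_vol = 4.0 * np.pi * r**2 * (edges[1] - edges[0])  # spherical shell volumes
    return r, counts / (n_frames * rho * shell_vol)
```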
Abstract:
Universities are increasingly challenged by the emerging global higher education market, facilitated by advances in Information and Communication Technologies (ICT). This requires them to reconsider their mission and direction in order to function effectively and efficiently, and to be responsive to changes in their environment. In the face of increasing demands and competitive pressures, universities, like other organisations, seek to continuously innovate and improve their performance. Universities are considering co-operating or sharing, both internally and externally, in a wide range of areas to achieve cost effectiveness and improvements in performance. Shared services are an effective model for re-organizing to reduce costs, increase quality and create new capabilities. Shared services are not limited to the Higher Education (HE) sector. Organizations across different sectors are adopting shared services, in particular for support functions such as Finance, Accounting, Human Resources and Information Technology. While shared services have been around for more than three decades, commencing in the 1970s in the banking sector and then being adopted by other sectors, they remain an under-researched domain, with little consensus on even the most fundamental issues, as basic as defining what shared services are. Moreover, the interest in shared services within Higher Education is a global phenomenon. This study on shared services is situated within the Higher Education sector of Malaysia, and originated as an outcome of a national project (2005–2007) conducted by the Ministry of Higher Education (MOHE) entitled "Knowledge, Information Communication Technology Strategic Plan (KICTSP) for Malaysian Public Higher Education", where progress towards more collaboration via shared services was a key recommendation. The study’s primary objective was to understand the nature and potential of ICT shared services, in particular in the Malaysian HE sector, by laying a foundation in terms of definition, typologies and a research agenda, and by deriving theoretically based conceptualisations of the potential benefits of shared services, success factors, and issues of pursuing shared services. The study embarked on this objective with a literature review and a pilot case study as a means to further define the context of the study, given the under-researched status of ICT shared services and of shared services in Higher Education. This context definition phase illustrated a range of unaddressed issues, including a lack of common understanding of what shared services are, how they are formed, what objectives they fulfil, who is involved, and so on. The study thus embarked on a further investigation of a more foundational nature, with an exploratory phase that aimed to address these gaps, where a detailed archival analysis of shared services literature within the IS context was conducted to better understand shared services from an IS perspective. The IS literature on shared services was analysed in depth to report on the current status of shared services research in the IS domain; in particular, definitions, objectives, stakeholders, the notion of sharing, theories used, and research methods applied were analysed, which provided a firmer base for this study’s design. The study also conducted a detailed content analysis of 36 cases (globally) of shared services implementations in the HE sector to better understand how shared services are structured within the HE sector and what is being shared.
The results of the context definition phase and exploratory phase formed a firm basis for the multiple case studies phase, which was designed to address the primary goals of this study (as presented above). Three case sites within the Malaysian HE sector were included in this analysis, resulting in empirically supported theoretical conceptualisations of shared services success factors, issues and benefits. A range of contributions are made through this study. First, the detailed archival analysis of shared services in Information Systems (IS) demonstrated the dearth of research on shared services within Information Systems. While the existing literature was synthesised to contribute towards an improved understanding of shared services in the IS domain, the areas that remain under-developed and require further exploration are identified and presented as a proposed research agenda for the field. This study also provides theoretical considerations and methodological guidelines to support the research agenda and the conduct of better empirical research in this domain. A number of literature-based a priori frameworks (e.g. on the forms of sharing, shared services stakeholders, etc.) are derived in this phase, contributing to practice and research with early conceptualisations of critical aspects of shared services. Furthermore, the comprehensive archival analysis design presented and executed here exemplifies a systematic, pre-defined and tool-supported method to extract, analyse and report literature, and is documented as guidelines that can be applied to other similar literature analyses, with particular attention to supporting novice researchers. Second, the content analysis of 36 shared services initiatives in the Higher Education sector yielded eight different types of structural arrangements for shared services, as observed in practice, and the salient dimensions along which those types can be usefully differentiated. Each of the eight structural arrangement types is defined and demonstrated through case examples, with further descriptive details and insights into what is shared and how the sharing occurs. This typology, grounded in secondary empirical evidence, can serve as a useful analytical tool for researchers investigating the shared services phenomenon further, and for practitioners considering the introduction or further development of shared services. Finally, the multiple case studies conducted in the Malaysian Higher Education sector provided a further empirical basis to instantiate the conceptual frameworks and typology derived from the prior phases and to develop an empirically supported: (i) framework of issues and challenges, (ii) preliminary theory of shared services success, and (iii) benefits framework, for shared services in the Higher Education sector.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a one-bit or one-character change can be significant, so the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training on both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
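The pipeline described above (feature extraction, key-dependent randomization, threshold learning, binarisation) can be sketched generically in Python. This is a schematic of the common linear baseline the dissertation critiques, not its HOS/Radon method, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)            # the seed plays the role of a secret key

def train_thresholds(projected_features):
    """Learn per-bit quantization thresholds from training data (medians here).
    The dissertation's point: this training step affects accuracy AND security."""
    return np.median(projected_features, axis=0)

def robust_hash(feature, proj, thresholds):
    """Linear randomization (compressive projection), then 1-bit quantization."""
    v = proj @ feature                     # key-dependent random projection
    return (v > thresholds).astype(np.uint8)

# Toy usage: 64-d image features hashed to 16 bits.
train = rng.normal(size=(1000, 64))        # stand-in extracted features
proj = rng.normal(size=(16, 64))           # random projection matrix
thresholds = train_thresholds(train @ proj.T)
print(robust_hash(rng.normal(size=64), proj, thresholds))
```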
Abstract:
This thesis investigated the viability of using Frequency Response Functions in combination with the Artificial Neural Network technique for damage assessment of building structures. The proposed approach can help overcome some of the limitations associated with previously developed vibration-based methods and assist in delivering more accurate and robust damage identification results. Excellent results are obtained for damage identification in the case studies, demonstrating that the proposed approach has been developed successfully.
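The abstract gives no implementation details; as a minimal, hypothetical sketch of the general idea, a neural network can be trained to map FRF samples to damage classes. The random data below is a stand-in; a real study would use measured or simulated FRFs with labelled damage cases:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))   # stand-in: |FRF| sampled at 200 frequencies per case
y = rng.integers(0, 4, size=500)  # stand-in: damage location/severity class labels

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
clf.fit(X, y)                     # the network learns the FRF -> damage mapping
print(clf.score(X, y))            # training accuracy (illustrative only)
```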
Abstract:
Parliamentary committees fulfil several important functions within the Parliament, one of these being the oversight of various agencies, including those designed to reduce corruption within the police service and other public sector agencies. The cross-party nature of committees, combined with the protections of Parliament, makes them powerful agencies. Prenzler & Faulkner (2010) suggest that the ideal system of oversight for a public sector integrity commission should include monitoring by a parliamentary committee, with an inspector attached to the committee. This occurs in Queensland, New South Wales and Western Australia. There has been very little research conducted on the role of parliamentary committees with oversight responsibilities for public sector integrity agencies. This paper will address this gap by examining the relationship between a parliamentary committee, a parliamentary inspector and a corruption commission. Queensland’s Parliamentary Crime and Misconduct Committee (PCMC, the Committee) and the Parliamentary Crime and Misconduct Commissioner (the Commissioner) provide oversight of the Crime and Misconduct Commission (CMC). By focussing on the PCMC and the Commissioner, the paper will examine the legislative basis for the Committee and the Commissioner and their respective roles in providing oversight of the CMC. One key method by which the PCMC provides oversight of the CMC is to conduct and publish a review of the CMC every three years. Additionally, the paper will identify some of the similarities and differences between the PCMC and other committees that operate within the Queensland Parliament. By doing so, the paper will provide insights into the relationships that exist between corruption commissions, parliamentary committees and parliamentary inspectors, and demonstrate the important role of the parliamentary committee in preventing instances of public sector corruption.