956 results for open quantum system
Abstract:
Enterprise resource planning (ERP) systems are rapidly being combined with “big data” analytics processes and publicly available “open data sets”, which usually originate outside the enterprise, to expand activity through better service to current clients and to identify new opportunities. Moreover, these activities are now largely based around relevant software systems hosted in a “cloud computing” environment. The over 50-year-old phrase expressing mistrust in computer systems, “garbage in, garbage out” or “GIGO”, is used to describe the problems of unqualified and unquestioning dependency on information systems. However, a more relevant GIGO interpretation arose sometime later, namely “garbage in, gospel out”, signifying that with large-scale information systems based around ERP, open datasets and “big data” analytics, particularly in a cloud environment, verifying the authenticity and integrity of the data sets used may be almost impossible. In turn, this may easily result in decision making based upon questionable and unverifiable results. Illicit “impersonation” of, and modification to, legitimate data sets may become a reality, while at the same time the ability to audit any derived results of analysis may be an important requirement, particularly in the public sector. This paper discusses the pressing need for enhanced identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment. Some current and appropriate technologies being offered are also examined. However, severe limitations in addressing the identified problems are found, and the paper proposes further necessary research work for the area.
(Note: This paper is based on an earlier unpublished paper/presentation “Identity, Addressing, Authenticity and Audit Requirements for Trust in ERP, Analytics and Big/Open Data in a ‘Cloud’ Computing Environment: A Review and Proposal” presented to the Department of Accounting and IT, College of Management, National Chung Chen University, 20 November 2013.)
Abstract:
Objective: While many jurisdictions internationally now require learner drivers to complete a specified number of hours of supervised driving practice before being able to drive unaccompanied, very few require learner drivers to complete a log book to record this practice and then present it to the licensing authority. Learner drivers in most Australian jurisdictions must complete a log book that records their practice, thereby confirming to the licensing authority that they have met the mandated hours of practice requirement. These log books facilitate the management and enforcement of minimum supervised hours of driving requirements. Method: Parents of learner drivers in two Australian states, Queensland and New South Wales, completed an online survey assessing a range of factors, including their perceptions of the accuracy of their child’s learner log book and the effectiveness of the log book system. Results: The study indicates that the large majority of parents believe that their child’s learner log book is accurate. However, they generally report that the log book system is only moderately effective as a means of measuring the number of hours of supervised practice a learner driver has completed. Conclusions: The results of this study suggest a paradox: many parents may believe that others are not as diligent in their use of log books as they are, or that the system is too open to misuse. Given that many parents report that their child’s log book is accurate, this study has important implications for the development and ongoing monitoring of hours-of-practice requirements in graduated driver licensing systems.
Abstract:
A growing number of online journals and academic platforms are adopting light peer review or 'publish then filter' models of scholarly communication. These approaches have the advantage of enabling instant exchanges of knowledge between academics and are part of a wider search for alternatives to traditional peer review and certification processes in scholarly publishing. However, establishing credibility and identifying the correct balance between communication and scholarly rigour remains an important challenge for digital communication platforms targeting academic communities. This paper looks at a highly influential, government-backed, open publishing platform in China: Science Paper Online, which is using transparent post-publication peer-review processes to encourage innovation and address systemic problems in China's traditional academic publishing system. There can be little doubt that the Chinese academic publishing landscape differs in important ways from counterparts in the United States and Western Europe. However, this article suggests that developments in China also provide important lessons about the potential of digital technology and government policy to facilitate a large-scale shift towards more open and networked models of scholarly communication.
Abstract:
Accurate three-dimensional representations of cultural heritage sites are highly valuable for scientific study, conservation, and educational purposes. In addition to their use for archival purposes, 3D models enable efficient and precise measurement of relevant natural and architectural features. Many cultural heritage sites are large and complex, consisting of multiple structures spatially distributed over tens of thousands of square metres. The process of effectively digitising such geometrically complex locations requires measurements to be acquired from a variety of viewpoints. While several technologies exist for capturing the 3D structure of objects and environments, none are ideally suited to complex, large-scale sites, mainly due to their limited coverage or acquisition efficiency. We explore the use of a recently developed handheld mobile mapping system called Zebedee in cultural heritage applications. The Zebedee system is capable of efficiently mapping an environment in three dimensions by continually acquiring data as an operator holding the device traverses through the site. The system was deployed at the former Peel Island Lazaret, a culturally significant site in Queensland, Australia, consisting of dozens of buildings of various sizes spread across an area of approximately 400 × 250 m. With the Zebedee system, the site was scanned in half a day, and a detailed 3D point cloud model (with over 520 million points) was generated from the 3.6 hours of acquired data in 2.6 hours. We present results demonstrating that Zebedee was able to accurately capture both site context and building detail comparable in accuracy to manual measurement techniques, and at a greatly increased level of efficiency and scope. The scan allowed us to record derelict buildings that previously could not be measured because of the scale and complexity of the site. The resulting 3D model captures both interior and exterior features of buildings, including structure, materials, and the contents of rooms.
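The acquisition and processing figures quoted above imply rates that can be checked with a quick back-of-envelope calculation (the input numbers are taken from the abstract; the derived rates are illustrative only):

```python
# Back-of-envelope rates implied by the Peel Island scan figures.
points = 520_000_000   # points in the final cloud
acq_hours = 3.6        # hours of data acquisition on site
proc_hours = 2.6       # hours to generate the point cloud

points_per_second = points / (acq_hours * 3600)
print(round(points_per_second))          # ~40,000 points captured per second
print(round(proc_hours / acq_hours, 2))  # processing hours per hour of data
```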
Abstract:
A system is something that can be separated from its surrounds, but this definition leaves much scope for refinement. Starting with the notion of measurement, we explore increasingly contextual system behaviour, and identify three major forms of contextuality that might be exhibited by a system: (a) between components; (b) between system and experimental method; and (c) between a system and its environment. Quantum Theory is shown to provide a highly useful formalism from which all three forms of contextuality can be analysed, offering numerous tests for contextual behaviour, as well as modelling possibilities for systems that do indeed display it. I conclude with the introduction of a Contextualised General Systems Theory based upon an extension of this formalism.
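One standard quantum test for contextual correlations between components is the CHSH inequality; the sketch below is textbook material (not drawn from the paper) showing that singlet-state correlations exceed the classical bound of 2, reaching the Tsirelson value of 2√2:

```python
import math

def singlet_correlation(a, b):
    # Quantum prediction for the spin correlation of a singlet pair
    # measured along directions at angles a and b: E = -cos(a - b).
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    # CHSH combination; any non-contextual (local classical) model
    # satisfies |S| <= 2, while quantum theory reaches 2*sqrt(2).
    E = singlet_correlation
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Measurement angles that maximise the quantum violation.
S = chsh(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(round(S, 4))  # 2.8284, i.e. 2*sqrt(2) > 2
```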
Abstract:
Unified Communication (UC) is the integration of two or more real-time communication systems into one platform. Integrating core communication systems into one overall enterprise-level system delivers more than just cost savings. These real-time interactive communication services and applications over Internet Protocol (IP) have become critical in boosting employee accessibility and efficiency, improving customer support and fostering business agility. However, some small and medium-sized businesses (SMBs) are far from implementing this solution due to the high cost of initial deployment and ongoing support. In this paper, we discuss and demonstrate an open source UC solution, viz. “Asterisk”, for use by SMBs, and report on some performance tests using SIPp. The contribution of this research is technical advice for SMBs on deploying UC in a way that is manageable in terms of cost, ease of deployment and support.
Abstract:
This paper details the initial design and planning of a Field Programmable Gate Array (FPGA) implemented control system that will enable a path planner to interact with a MAVLink-based flight computer. The design is aimed at small Unmanned Aerial Vehicles (UAVs) under autonomous operation, which are typically subject to constraints arising from limited on-board processing capabilities, power and size. An FPGA implementation for the design is chosen for its potential to address such limitations through low-power and high-speed in-hardware computation. The MAVLink protocol offers a low-bandwidth interface for the FPGA-implemented path planner to communicate with an on-board flight computer. A control system plan is presented that is capable of accepting a string of GPS waypoints generated on-board from a previously developed in-hardware Genetic Algorithm (GA) path planner and feeding them to the open source PX4 autopilot, while simultaneously responding with flight status information.
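As a rough illustration of the compact waypoint exchange MAVLink enables, the sketch below packs a GPS waypoint the way MAVLink's MISSION_ITEM_INT message carries coordinates (latitude/longitude as signed 32-bit integers in degrees × 1e7, altitude as a 32-bit float). This is a simplified sketch only, not real MAVLink framing, which adds message headers, sequence bytes and a CRC:

```python
import struct

def pack_waypoint(seq, lat_deg, lon_deg, alt_m):
    # Little-endian: uint16 sequence, int32 lat/lon in degE7, float32 altitude.
    return struct.pack('<Hiif', seq,
                       round(lat_deg * 1e7), round(lon_deg * 1e7), alt_m)

def unpack_waypoint(frame):
    seq, lat, lon, alt = struct.unpack('<Hiif', frame)
    return seq, lat / 1e7, lon / 1e7, alt

# Hypothetical waypoint near Brisbane, 120 m altitude.
frame = pack_waypoint(0, -27.4698, 153.0251, 120.0)
print(len(frame), unpack_waypoint(frame))  # 14-byte payload round-trips
```

The fixed-point integer encoding keeps the payload small and deterministic, which is what makes the protocol attractive over a low-bandwidth FPGA-to-autopilot link.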
Abstract:
Weta possess typical Ensifera ears. Each ear comprises three functional parts: two equally sized tympanal membranes, an underlying system of modified tracheal chambers, and the auditory sensory organ, the crista acustica. This organ sits within an enclosed fluid-filled channel, previously presumed to contain hemolymph. The role this channel plays in insect hearing is unknown. We discovered that the fluid within the channel is not actually hemolymph, but a medium composed principally of a lipid from a new class. Three-dimensional imaging of this lipid channel revealed a previously undescribed tissue structure within the channel, which we refer to as the olivarius organ. Investigations into the function of the olivarius reveal de novo lipid synthesis, indicating that it is producing these lipids in situ from acetate. The auditory role of this lipid channel was investigated using Laser Doppler vibrometry of the tympanal membrane, which shows that the displacement of the membrane is significantly increased when the lipid is removed from the auditory system. Neural sensitivity of the system, however, decreased upon removal of the lipid, a surprising result considering that in a typical auditory system mechanical and neural sensitivity are positively correlated. These two results, coupled with 3D modelling of the auditory system, lead us to hypothesize a model for weta audition that relies strongly on the presence of the lipid channel. This is the first instance of lipids being associated with an auditory system outside of the odontocete cetaceans, demonstrating convergence in the use of lipids for hearing.
Abstract:
Speech recognition in car environments has been identified as a valuable means for reducing driver distraction when operating noncritical in-car systems. Under such conditions, however, speech recognition accuracy degrades significantly, and techniques such as speech enhancement are required to improve these accuracies. Likelihood-maximizing (LIMA) frameworks optimize speech enhancement algorithms based on recognized state sequences rather than traditional signal-level criteria such as maximizing signal-to-noise ratio. LIMA frameworks typically require calibration utterances to generate optimized enhancement parameters that are then used for all subsequent utterances. Under such a scheme, suboptimal recognition performance occurs in noise conditions that are significantly different from those present during the calibration session – a serious problem in rapidly changing noise environments out on the open road. In this chapter, we propose a dialog-based design that allows regular optimization iterations in order to track the ever-changing noise conditions. Experiments using Mel-filterbank noise subtraction (MFNS) are performed to determine the optimization requirements for vehicular environments and show that minimal optimization is required to improve speech recognition, avoid over-optimization, and ultimately assist with semireal-time operation. It is also shown that the proposed design provides improved recognition performance over frameworks incorporating a calibration session only.
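The core operation the chapter builds on can be sketched as filterbank-domain noise subtraction: subtract a noise estimate from each filterbank energy and floor the result so energies never go negative. The subtraction factor and floor below are generic placeholders, not the chapter's LIMA-optimized parameters:

```python
def noise_subtract(frame_energies, noise_estimate, alpha=1.0, floor=0.01):
    # Subtract a (scaled) noise estimate from each Mel-filterbank energy;
    # floor the result to a small fraction of the original energy so that
    # over-subtraction cannot produce negative energies.
    return [max(e - alpha * n, floor * e)
            for e, n in zip(frame_energies, noise_estimate)]

# Toy frame: the third band is dominated by noise and gets floored.
clean = noise_subtract([4.0, 2.0, 0.5], [1.0, 1.0, 1.0])
print(clean)  # [3.0, 1.0, 0.005]
```

In a LIMA framework, `alpha` is the kind of parameter that would be re-optimized against recognized state sequences at each dialog turn rather than fixed at calibration time.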
Abstract:
This chapter analyses the copyright law framework needed to ensure open access to outputs of the Australian academic and research sector such as journal articles and theses. It overviews the new knowledge landscape, the principles of copyright law, the concept of open access to knowledge, the recently developed open content models of copyright licensing and the challenges faced in providing greater access to knowledge and research outputs.
Abstract:
Particle swarm optimization (PSO), a population-based algorithm, has recently been applied to multi-robot systems. Although the algorithm has been used to solve many optimization problems as well as multi-robot systems, it has some drawbacks when applied to multi-robot search for a target in a space containing big static obstacles. One of these defects is premature convergence: a property of basic PSO is that particles spread across a search space tend, over time, to converge into a small area. This shortcoming is also evident in a multi-robot search system, particularly when big static obstacles prevent the robots from finding the target easily; as time increases the robots converge to a small area that may not contain the target and become entrapped there. Another shortcoming is that basic PSO cannot guarantee global convergence. In other words, particles initially explore different areas, but in some cases they are not good at exploiting promising areas, which increases the search time. This study proposes a method based on the PSO technique for a multi-robot system to find a target in a search space containing big static obstacles. This method not only overcomes the premature convergence problem but also establishes an efficient balance between exploration and exploitation and guarantees global convergence, reducing the search time by combining with a local search method such as A-star. To validate the effectiveness and usefulness of the algorithms, a simulation environment was developed for conducting simulation-based experiments in different scenarios and for reporting experimental results. These experimental results demonstrate that the proposed method overcomes the premature convergence problem and guarantees global convergence.
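The velocity/position update that basic PSO iterates, and on which the proposed method builds, can be sketched as follows. This is a toy 2-D example with textbook parameter values (inertia `w`, cognitive/social weights `c1`, `c2`), not the paper's multi-robot variant:

```python
import random

def pso(objective, dim=2, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    random.seed(1)  # deterministic toy run
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    gbest = min(pbest, key=objective)[:]         # swarm's best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda x: sum(v * v for v in x))  # minimise the sphere function
print(best)  # near the optimum [0, 0]
```

The pull toward `gbest` in the velocity update is exactly the mechanism behind the premature-convergence problem the abstract describes: on an obstacle-free sphere function it is a virtue, but with large obstacles the whole swarm can collapse onto a region that does not contain the target.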
No specific role for the manual motor system in processing the meanings of words related to the hand
Abstract:
The present study explored whether semantic and motor systems are functionally interwoven via the use of a dual-task paradigm. According to embodied language accounts that propose an automatic and necessary involvement of the motor system in conceptual processing, concurrent processing of hand-related information should interfere more with hand movements than processing of unrelated body-part (i.e., foot, mouth) information. Across three experiments, 100 right-handed participants performed left- or right-hand tapping movements while repeatedly reading action words related to different body-parts, or different body-part names, in both aloud and silent conditions. Concurrent reading of single words related to specific body-parts, or the same words embedded in sentences differing in syntactic and phonological complexity (to manipulate context-relevant processing), and reading while viewing videos of the actions and body-parts described by the target words (to elicit visuomotor associations) all interfered with right-hand but not left-hand tapping rate. However, this motor interference was not affected differentially by hand-related stimuli. Thus, the results provide no support for proposals that body-part specific resources in cortical motor systems are shared between overt manual movements and meaning-related processing of words related to the hand.
Abstract:
The 3D Water Chemistry Atlas is an intuitive, open source, Web-based system that enables the three-dimensional (3D) sub-surface visualization of ground water monitoring data, overlaid on the local geological model (formation and aquifer strata). This paper firstly describes the results of evaluating existing virtual globe technologies, which led to the decision to use the Cesium open source WebGL Virtual Globe and Map Engine as the underlying platform. Next it describes the backend database and search, filtering, browse and analysis tools that were developed to enable users to interactively explore the groundwater monitoring data and interpret it spatially and temporally relative to the local geological formations and aquifers via the Cesium interface. The result is an integrated 3D visualization system that enables environmental managers and regulators to assess groundwater conditions, identify inconsistencies in the data, manage impacts and risks and make more informed decisions about coal seam gas extraction, waste water extraction, and water reuse.
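A minimal sketch of the kind of depth/time filtering such a backend exposes over groundwater monitoring records. The field names and sample records below are invented for illustration; the actual system uses its own database schema behind the Cesium interface:

```python
# Hypothetical monitoring records: bore ID, sample depth, year, and
# electrical conductivity (a common groundwater quality indicator).
samples = [
    {'bore': 'B1', 'depth_m': 12.0, 'year': 2012, 'ec_us_cm': 850},
    {'bore': 'B1', 'depth_m': 55.0, 'year': 2013, 'ec_us_cm': 1420},
    {'bore': 'B2', 'depth_m': 18.5, 'year': 2014, 'ec_us_cm': 910},
]

def filter_samples(samples, max_depth=None, year_range=None):
    # Apply optional depth and time-window filters, as a 3D atlas UI
    # would when a user restricts the view to one aquifer and period.
    out = samples
    if max_depth is not None:
        out = [s for s in out if s['depth_m'] <= max_depth]
    if year_range is not None:
        lo, hi = year_range
        out = [s for s in out if lo <= s['year'] <= hi]
    return out

shallow_recent = filter_samples(samples, max_depth=20, year_range=(2013, 2014))
print([s['bore'] for s in shallow_recent])  # ['B2']
```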
Abstract:
Research on the development of efficient passivation materials for high-performance, stable quantum dot sensitized solar cells (QDSCs) is highly important. While ZnS is one of the most widely used passivation materials in QDSCs, an alternative material based on ZnSe, deposited on a CdS/CdSe/TiO2 photoanode to form a semi-core/shell structure, was found in this work to be more efficient at reducing electron recombination in QDSCs. The solar cell efficiency was improved from 1.86% for ZnSe0 (without coating) to 3.99% using two layers of ZnSe coating (ZnSe2) deposited by the successive ionic layer adsorption and reaction (SILAR) method. The short circuit current density (Jsc) nearly doubled (from 7.25 mA/cm2 to 13.4 mA/cm2), and the open circuit voltage (Voc) was enhanced by 100 mV with the ZnSe2 passivation layer compared to ZnSe0. Studies of the light harvesting efficiency (ηLHE) and the absorbed photon-to-current conversion efficiency (APCE) revealed that the ZnSe coating layer enhanced ηLHE at wavelengths beyond 500 nm and significantly increased the APCE over the 400−550 nm range. A nearly 100% APCE was obtained with ZnSe2, indicating excellent charge injection and collection in the device. Investigation of charge transport and recombination indicated that enhanced electron collection efficiency and reduced electron recombination are responsible for the improved Jsc and Voc of the QDSCs. The effective electron lifetime of the device with ZnSe2 was nearly 6 times that of ZnSe0, while the electron diffusion coefficient was largely unaffected by the coating. Study of the regeneration of QDs after photoinduced excitation indicated that hole transport from the QDs to the reduced species (S2−) in the electrolyte was very efficient even when the QDs were coated with a thick ZnSe shell (three layers). For comparison, a ZnS-coated CdS/CdSe sensitized solar cell with optimum shell thickness was also fabricated; it yielded a lower energy conversion efficiency (η = 3.43%) than its ZnSe-based QDSC counterpart due to lower Voc and FF. This study suggests that ZnSe may be a more efficient passivation layer than ZnS, attributable to the type II energy band alignment between the core (CdS/CdSe quantum dots) and the passivation shell (ZnSe), which leads to more efficient electron−hole separation and slower electron recombination.
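The efficiency figures quoted above follow the standard relation η = Jsc · Voc · FF / Pin (with Pin = 100 mW/cm² under 1-sun illumination). The check below uses the reported Jsc together with hypothetical Voc and fill-factor values chosen only to show the numbers are of the right order; they are not measurements from the paper:

```python
def efficiency(jsc_ma_cm2, voc_v, ff, p_in_mw_cm2=100.0):
    # Standard solar-cell efficiency: eta = Jsc * Voc * FF / P_in,
    # returned as a percentage for 1-sun input power (100 mW/cm2).
    return jsc_ma_cm2 * voc_v * ff / p_in_mw_cm2 * 100.0

# Reported Jsc = 13.4 mA/cm2; Voc = 0.55 V and FF = 0.54 are assumed
# illustrative values, giving an efficiency near the reported 3.99%.
eta = efficiency(13.4, 0.55, 0.54)
print(round(eta, 2))  # 3.98 (percent)
```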
Abstract:
In the mining optimisation literature, most researchers have focused on two open-pit mine optimisation problems at the strategic and tactical levels, termed the ultimate pit limit (UPIT) and constrained pit limit (CPIT) problems respectively. However, many researchers note that the substantial numbers of variables and constraints in real-world instances (e.g., with 50–1000 thousand blocks) make the CPIT’s mixed integer programming (MIP) model intractable in practice. It is therefore a considerable challenge to solve large-scale CPIT instances without relying on an exact MIP optimiser or on complicated MIP relaxation/decomposition methods. To address this challenge, two new graph-based algorithms based on network flow graphs and conjunctive graph theory are developed, taking advantage of problem properties. The performance of the proposed algorithms is validated on the large-scale benchmark UPIT and CPIT instances from the 2013 MineLib datasets. In comparison to the best known results from MineLib, the proposed algorithms outperform other CPIT solution approaches in the literature. The proposed graph-based algorithms lead to a more capable mine scheduling optimisation expert system because a third-party MIP optimiser is no longer indispensable and random neighbourhood search is not necessary.
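The UPIT problem that such graph-based methods target can be stated as a maximum-closure problem over the block precedence graph: choose the set of blocks with maximum total value such that every chosen block's overlying predecessor blocks are also chosen. The brute-force toy below is illustrative only; the block values and precedences are made up, and real solvers use network-flow/min-cut algorithms rather than enumeration:

```python
from itertools import combinations

# Three-block toy model: ore block 'c' sits under waste blocks 'a' and 'b'.
values = {'a': -1, 'b': -1, 'c': 5}
preds = {'a': set(), 'b': set(), 'c': {'a', 'b'}}  # c requires a and b removed

def best_closure(values, preds):
    # Enumerate all block subsets and keep the best one that is "closed",
    # i.e. contains every predecessor of each of its blocks.
    blocks = list(values)
    best_value, best_set = 0, frozenset()
    for r in range(1, len(blocks) + 1):
        for subset in combinations(blocks, r):
            s = set(subset)
            if all(preds[b] <= s for b in s):   # precedence feasibility
                v = sum(values[b] for b in s)
                if v > best_value:
                    best_value, best_set = v, frozenset(s)
    return best_value, best_set

print(best_closure(values, preds))  # value 3: mine waste a, b to reach ore c
```

Enumeration is exponential in the number of blocks, which is precisely why 50–1000 thousand-block instances demand the network-flow formulations the abstract describes.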