845 results for Computer aided analysis, Machine vision, Video surveillance
Abstract:
Data structures such as k-D trees and hierarchical k-means trees perform very well in approximate k nearest neighbour matching, but are only marginally more effective than linear search when performing exact matching in high-dimensional image descriptor data. This paper presents several improvements to linear search that allow it to outperform existing methods, and recommends two approaches to exact matching. The first method reduces the number of operations by evaluating the distance measure in order of significance of the query dimensions and terminating when the partial distance exceeds the search threshold. This method does not require preprocessing and significantly outperforms existing methods. The second method improves query speed further by presorting the data using a data structure called d-D sort. The order information is used as a priority queue to reduce the time taken to find the exact match and to restrict the range of data searched. Construction of the d-D sort structure is very simple to implement, does not require any parameter tuning, requires significantly less time than the best-performing tree structure, and allows data to be added relatively efficiently.
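The first method above (early termination of the distance computation) can be sketched as follows. This is an illustrative Python sketch rather than the paper's implementation; in particular, the significance ordering used here, which sorts dimensions by the query's deviation from the data mean, is an assumed stand-in for the paper's query-dimension ordering:

```python
import numpy as np

def partial_distance_nn(query, data, order=None):
    """Exact 1-NN by linear search with early termination: accumulate
    the squared distance dimension by dimension and abandon a candidate
    as soon as the running sum exceeds the best distance found so far."""
    if order is None:
        # Assumed significance ordering: dimensions where the query
        # deviates most from the data mean are evaluated first.
        order = np.argsort(-np.abs(query - data.mean(axis=0)))
    best_idx, best_dist = -1, np.inf
    for i, point in enumerate(data):
        acc = 0.0
        for d in order:
            diff = query[d] - point[d]
            acc += diff * diff
            if acc >= best_dist:  # partial distance already too large
                break
        else:
            # Loop ran to completion: acc is the full (smaller) distance.
            best_idx, best_dist = i, acc
    return best_idx, best_dist
```

Because the accumulated partial distance can only grow, abandoning a candidate once it exceeds the best full distance found so far never discards the true nearest neighbour, so the search remains exact.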
Abstract:
An evolution in the use of digital modelling has occurred in the Queensland Department of Public Works Division of Project Services over the last 20 years from: the initial implementation of computer aided design and documentation (CADD); to experimentation with building information modelling (BIM); to embedding integrated practice (IP); to current steps towards integrated project delivery (IPD) including the active involvement of consultants and contractors in the design/delivery process. This case study is one of three undertaken through the Australian Sustainable Built Environment National Research Centre investigating past R&D investment. The intent of these cases is to inform the development of policy guidelines for future investment in the construction industry in Australia. This research is informing the activities of CIB Task Group 85 R&D Investment and Impact. The uptake of digital modelling by Project Services has been approached through an incremental learning approach. This has been driven by a strong and clear vision with a focus on developing more efficient delivery mechanisms through the use of new technology coupled with process change. Findings reveal an organisational focus on several areas including: (i) strategic decision making including the empowerment of innovation leaders and champions; (ii) the acquisition and exploitation of knowledge; (iii) product and process development (with a focus on efficiency and productivity); (iv) organisational learning; (v) maximising the use of technology; and (vi) supply chain integration. Key elements of this approach include pilot projects, researcher engagement, industry partnerships and leadership.
Abstract:
This paper considers the design of a radial flux permanent magnet ironless-core brushless DC motor for use in an electric wheel drive with an integrated epicyclic gear reduction. The motor has been designed for a continuous output torque of 30 Nm and a peak rating of 60 Nm, with a maximum operating speed of 7000 RPM. In the design of brushless DC motors with a toothed iron stator, the peak air-gap magnetic flux density is typically chosen to be close to the remanence value of the magnets used. This paper demonstrates that for an ironless motor the optimal peak air-gap flux density is closer to that at the maximum energy product of the magnets used. The use of a radial flux topology allows for high frequency operation and can be shown to give high specific power output while maintaining a relatively low magnet mass. Two-dimensional finite element analysis is used to predict the air-gap flux density. The motor design is based around commonly available NdFeB bar magnet sizes.
Abstract:
Successful anatomic fitting of a total artificial heart (TAH) is vital to achieve optimal pump hemodynamics after device implantation. Although many anatomic fitting studies have been completed in humans prior to clinical trials, few reports exist that detail the experience in animals for in vivo device evaluation. Optimal hemodynamics are crucial throughout the in vivo phase to direct design iterations and ultimately validate device performance prior to pivotal human trials. In vivo evaluation in a sheep model allows a realistically sized representation of a smaller patient, whom smaller third-generation TAHs have the potential to treat. Our study aimed to assess the anatomic fit of a single-device rotary TAH in sheep prior to animal trials and to use the data to develop a three-dimensional, computer-aided design (CAD)-operated anatomic fitting tool for future TAH development. Following excision of the native ventricles above the atrio-ventricular groove, a prototype TAH was inserted within the chest cavity of six sheep (28–40 kg). Adjustable rods representing inlet and outlet conduits were oriented toward the center of each atrial chamber and the great vessels, with conduit lengths and angles recorded for future analysis. A three-dimensional, CAD-operated anatomic fitting tool was then developed, based on the results of this study, and used to determine the inflow and outflow conduit orientation of the TAH. The mean diameters of the sheep left atrium, right atrium, aorta, and pulmonary artery were 39, 33, 12, and 11 mm, respectively. The center-to-center distance and outer-edge-to-outer-edge distance between the atria, found to be 39 ± 9 mm and 72 ± 17 mm in this study, were identified as the most critical geometries for successful TAH connection. This geometric constraint restricts the maximum separation allowable between the left and right inlet ports of a TAH to ensure successful alignment within the available atrial circumference.
Abstract:
Digital Human Models (DHM) have been used for over 25 years. They have evolved from simple drawing templates, which are nowadays still used in architecture, to complex and Computer Aided Engineering (CAE) integrated design and analysis tools for various ergonomic tasks. DHM are most frequently used for applications in product design and production planning, with many successful implementations documented. DHM from other domains, such as computer user interfaces, artificial intelligence, training and education, or the entertainment industry, show that there is also an ongoing development towards a comprehensive understanding and holistic modeling of human behavior. While the development of DHM for the game sector has seen significant progress in recent years, advances of DHM in the area of ergonomics have been comparatively modest. As a consequence, we need to question whether current DHM systems are fit for the design of future mobile work systems. So far it appears that DHM in ergonomics are rather limited to some traditional applications. According to Dul et al. (2012), future characteristics of Human Factors and Ergonomics (HFE) can be assigned to six main trends: (1) global change of work systems, (2) cultural diversity, (3) ageing, (4) information and communication technology (ICT), (5) enhanced competitiveness and the need for innovation, and (6) sustainability and corporate social responsibility. Based on a literature review, we systematically investigate the capabilities of current ergonomic DHM systems against the 'Future of Ergonomics' requirements. It is found that DHM already provide broad functionality in support of trends (1) and (2), and more limited options with regard to trend (3). Today's DHM provide access to a broad range of national and international databases for correct differentiation and characterization of anthropometry for global populations. Some DHM explicitly address social and cultural modeling of groups of people.
In comparison, the trends of the growing importance of ICT (4), the need for innovation (5) and sustainability (6) are addressed primarily from a hardware-oriented and engineering perspective and are not reflected in DHM. This reflects a persistent separation between hardware design (engineering) and software design (information technology) in the view of DHM – a disconnection which needs to be urgently overcome in the era of software-defined user interfaces and mobile devices. The design of a mobile ICT device is discussed to exemplify the need for a comprehensive future DHM solution. Designing such mobile devices requires an approach that includes organizational aspects as well as technical and cognitive ergonomics. Multiple interrelationships between the different aspects result in a challenging setting for future DHM. In conclusion, the 'Future of Ergonomics' poses particular challenges for DHM with regard to the design of mobile work systems and, moreover, mobile information access.
Abstract:
Bundle adjustment is one of the essential components of the computer vision toolbox. This paper revisits the resection-intersection approach, which has previously been shown to have inferior convergence properties. Modifications are proposed that greatly improve the performance of this method, resulting in a fast and accurate approach. Firstly, a linear triangulation step is added to the intersection stage, yielding higher accuracy and improved convergence rate. Secondly, the effect of parameter updates is tracked in order to reduce wasteful computation; only variables coupled to significantly changing variables are updated. This leads to significant improvements in computation time, at the cost of a small, controllable increase in error. Loop closures are handled effectively without the need for additional network modelling. The proposed approach is shown experimentally to yield comparable accuracy to a full sparse bundle adjustment (20% error increase) while computation time scales much better with the number of variables. Experiments on a progressive reconstruction system show the proposed method to be more efficient by a factor of 65 to 177, and 4.5 times more accurate (increasing over time) than a localised sparse bundle adjustment approach.
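The linear triangulation step added to the intersection stage can be illustrated with the standard direct linear transform (DLT) formulation. This is a generic sketch under the usual two-view pinhole model, not the authors' exact implementation:

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points.
    Builds the homogeneous system A X = 0 from the cross-product
    constraint x × (P X) = 0 and solves for X as the right singular
    vector associated with the smallest singular value."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenise
```

Each intersection update then reduces to one small SVD per point, which is what makes the triangulation stage cheap relative to a full joint optimisation.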
Abstract:
Using Media-Access-Control (MAC) addresses for data collection and tracking is a capable and cost-effective approach, as traditional methods such as surveys and video surveillance have numerous drawbacks and limitations. Positioning cell-phones via the Global System for Mobile communication has been considered an attack on people's privacy. A MAC address, by contrast, is merely the unique identifier a WiFi- or Bluetooth-enabled device uses to connect to another device, and carries no such potential for privacy infringement. This paper presents the use of a MAC address data collection approach for analysing the spatio-temporal dynamics of humans in terms of shared space utilisation. The paper first discusses the critical challenges and key benefits of MAC address data as a tracking technology for monitoring human movement. Proximity-based MAC address tracking is postulated as an effective methodology for analysing the complex spatio-temporal dynamics of human movement in shared zones such as lounge and office areas. A case study of a university staff lounge area is described in detail, and the results indicate a significant added value of the methodology for human movement tracking. By analysing the MAC address data in the study area, clear statistics such as staff utilisation frequency, utilisation peak periods, and staff time spent are obtained. The analyses also reveal staff socialising profiles in terms of group and solo gathering. The paper concludes with a discussion of why MAC address tracking offers significant advantages for tracking human behaviour in terms of shared space utilisation with respect to other, more prominent technologies, and outlines some of its remaining deficiencies.
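As a sketch of the kind of utilisation statistics described (visit frequency and time spent per device), the following Python fragment derives them from timestamped MAC sightings. The input format and the 10-minute visit-merging gap are illustrative assumptions, not the paper's method:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def utilisation_stats(sightings, gap=timedelta(minutes=10)):
    """Derive per-device visit counts and total dwell time from
    (mac, timestamp) sighting records.  Consecutive sightings of the
    same MAC closer together than `gap` are merged into one visit."""
    by_mac = defaultdict(list)
    for mac, ts in sightings:
        by_mac[mac].append(ts)
    stats = {}
    for mac, times in by_mac.items():
        times.sort()
        visits, dwell = 0, timedelta(0)
        start = last = times[0]
        for ts in times[1:]:
            if ts - last > gap:      # a long gap ends the current visit
                visits += 1
                dwell += last - start
                start = ts
            last = ts
        visits += 1                  # close the final visit
        dwell += last - start
        stats[mac] = {"visits": visits, "dwell": dwell}
    return stats
```

Peak periods could be obtained the same way by bucketing sighting timestamps into hourly bins before counting.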
Abstract:
Lean construction and building information modeling (BIM) are quite different initiatives, but both are having profound impacts on the construction industry. A rigorous analysis of the myriad specific interactions between them indicates that a synergy exists which, if properly understood in theoretical terms, can be exploited to improve construction processes beyond the degree to which it might be improved by application of either of these paradigms independently. Using a matrix that juxtaposes BIM functionalities with prescriptive lean construction principles, 56 interactions have been identified, all but four of which represent constructive interaction. Although evidence for the majority of these has been found, the matrix is not considered complete but rather a framework for research to explore the degree of validity of the interactions. Construction executives, managers, designers, and developers of information technology systems for construction can also benefit from the framework as an aid to recognizing the potential synergies when planning their lean and BIM adoption strategies.
Abstract:
CIB is developing a priority theme, now termed Improving Construction and Use through Integrated Design & Delivery Solutions (IDDS). The IDDS working group for this theme adopted the following definition: Integrated Design and Delivery Solutions use collaborative work processes and enhanced skills, with integrated data, information, and knowledge management to minimize structural and process inefficiencies and to enhance the value delivered during design, build, and operation, and across projects. The design, construction, and commissioning sectors have been repeatedly analysed as inefficient, and may or may not be quite as bad as portrayed; however, there is unquestionably significant scope for IDDS to improve the delivery of value to clients, stakeholders (including occupants), and society in general, simultaneously driving down cost and time to deliver operational constructed facilities. Although various initiatives developed from computer‐aided design and manufacturing technologies, lean construction, modularization, prefabrication and integrated project delivery are currently being adopted by some sectors and specialisations in construction, IDDS provides the vision for a more holistic future transformation. Successful use of IDDS requires improvements in work processes, technology, and people's capabilities to span the entire construction lifecycle from conception through design, construction, commissioning, operation, refurbishment/retrofit and recycling, considering the building's interaction with its environment. This vision extends beyond new buildings to encompass modifications and upgrades, particularly those aimed at improved local and area sustainability goals. IDDS will facilitate greater flexibility of design options, work packaging strategies and collaboration with suppliers and trades, which will be essential to meet evolving sustainability targets.
As knowledge capture and reuse become prevalent, IDDS best practice should become the norm, rather than the exception.
Abstract:
This paper describes a novel system for automatic classification of images obtained from Anti-Nuclear Antibody (ANA) pathology tests on Human Epithelial type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol. The IIF protocol on HEp-2 cells has been the hallmark method to identify the presence of ANAs, due to its high sensitivity and the large range of antigens that can be detected. However, it suffers from numerous shortcomings, such as being subjective as well as time and labour intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems, which automatically classify a HEp-2 cell image into one of its known patterns (e.g., speckled, homogeneous). Most of the existing CAD systems use handpicked features to represent a HEp-2 cell image, which may only work in limited scenarios. We propose a novel automatic cell image classification method termed Cell Pyramid Matching (CPM), which comprises regional histograms of visual words coupled with the Multiple Kernel Learning framework. We present a study of several variations of generating histograms and show the efficacy of the system on two publicly available datasets: the ICPR HEp-2 cell classification contest dataset and the SNPHEp-2 dataset.
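The regional histograms of visual words underlying CPM follow the spatial-pyramid idea: a bag-of-words histogram is computed per cell at several grid resolutions and the results are concatenated. A minimal sketch follows; the two pyramid levels and cell layout are illustrative assumptions, and the paper's CPM additionally combines the regional histograms via Multiple Kernel Learning rather than plain concatenation:

```python
import numpy as np

def pyramid_histogram(coords, words, vocab_size, img_shape, levels=2):
    """Spatial pyramid of visual-word histograms.  At level l the image
    is split into 2^l x 2^l cells and a histogram of quantised visual
    words is built per cell; all histograms are concatenated.
    `coords` are (row, col) descriptor positions, `words` their
    visual-word indices."""
    h, w = img_shape
    parts = []
    for level in range(levels + 1):
        n = 2 ** level
        for cell_r in range(n):
            for cell_c in range(n):
                hist = np.zeros(vocab_size)
                for (r, c), wd in zip(coords, words):
                    # Assign each descriptor to its grid cell at this level.
                    if int(r * n / h) == cell_r and int(c * n / w) == cell_c:
                        hist[wd] += 1
                parts.append(hist)
    return np.concatenate(parts)
```

Level 0 reproduces a plain bag-of-words histogram; the finer levels add the regional (spatial) information that distinguishes patterns with similar overall word statistics.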
Abstract:
Firstly, we would like to thank Ms. Alison Brough and her colleagues for their positive commentary on our published work [1] and their appraisal of our utility of the “off-set plane” protocol for anthropometric analysis. The standardized protocols described in our manuscript have wide applications, ranging from forensic anthropology and paleodemographic research to clinical settings such as paediatric practice and orthopaedic surgical design. We affirm that the use of geometrically based reference tools commonly found in computer aided design (CAD) programs such as Geomagic Design X® are imperative for more automated and precise measurement protocols for quantitative skeletal analysis. Therefore we stand by our recommendation of the use of software such as Amira and Geomagic Design X® in the contexts described in our manuscript...
Abstract:
Background: A major challenge for assessing students' conceptual understanding of STEM subjects is the capacity of assessment tools to reliably and robustly evaluate student thinking and reasoning. Multiple-choice tests are typically used to assess student learning and are designed to include distractors that can indicate students' incomplete understanding of a topic or concept based on which distractor the student selects. However, these tests fail to provide the critical information uncovering the how and why of students' reasoning for their multiple-choice selections. Open-ended or structured response questions are one method for capturing higher level thinking, but are often costly in terms of time and attention to properly assess student responses. Purpose: The goal of this study is to evaluate methods for automatically assessing open-ended responses, e.g., students' written explanations and reasoning for multiple-choice selections. Design/Method: We incorporated an open response component for an online signals and systems multiple-choice test to capture written explanations of students' selections. The effectiveness of an automated approach for identifying and assessing student conceptual understanding was evaluated by comparing results of lexical analysis software packages (Leximancer and NVivo) to expert human analysis of student responses. In order to understand and delineate the process for effectively analysing text provided by students, the researchers evaluated strengths and weaknesses of both the human and automated approaches. Results: Human and automated analyses revealed both correct and incorrect associations for certain conceptual areas. For some questions, associations emerged that were not anticipated or included in the distractor selections, showing how multiple-choice questions alone fail to capture a comprehensive picture of student understanding.
The comparison of textual analysis methods revealed the capability of automated lexical analysis software to assist in the identification of concepts and their relationships for large textual data sets. We also identified several challenges to using automated analysis, as well as to manual and computer-assisted analysis. Conclusions: This study highlighted the usefulness of incorporating and analysing students' reasoning or explanations in understanding how students think about certain conceptual ideas. The ultimate value of automating the evaluation of written explanations is that it can be applied more frequently and at various stages of instruction to formatively evaluate conceptual understanding and engage students in reflective learning.
Abstract:
We describe an investigation into how Massey University's Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide based pollen samples. Given the laboriousness of purely manual image acquisition and identification it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University's pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real world case study where we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder's native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
Abstract:
The standard method for deciding bit-vector constraints is via eager reduction to propositional logic. This is usually done after first applying powerful rewrite techniques. While often efficient in practice, this method does not scale on problems for which top-level rewrites cannot reduce the problem size sufficiently. A lazy solver can target such problems by doing many satisfiability checks, each of which only reasons about a small subset of the problem. In addition, the lazy approach enables a wide range of optimization techniques that are not available to the eager approach. In this paper we describe the architecture and features of our lazy solver (LBV). We provide a comparative analysis of the eager and lazy approaches, and show how they are complementary in terms of the types of problems they can efficiently solve. For this reason, we propose a portfolio approach that runs a lazy and eager solver in parallel. Our empirical evaluation shows that the lazy solver can solve problems none of the eager solvers can and that the portfolio solver outperforms other solvers both in terms of total number of problems solved and the time taken to solve them.
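The portfolio idea, running a lazy and an eager solver on the same problem in parallel and taking whichever finishes first, can be sketched with Python threads. This is a minimal illustration; real portfolio solvers typically run separate solver processes and kill the losers, and the solver functions in the usage below are hypothetical placeholders:

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def portfolio_solve(problem, solvers):
    """Run every solver in `solvers` (a name -> callable mapping) on
    the same problem concurrently and return (solver_name, result)
    for the first one to finish."""
    with ThreadPoolExecutor(max_workers=len(solvers)) as pool:
        futures = {pool.submit(fn, problem): name
                   for name, fn in solvers.items()}
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:
            f.cancel()  # best effort: already-running threads finish quietly
        winner = next(iter(done))
        return futures[winner], winner.result()
```

With complementary solvers the portfolio's runtime on each problem is the minimum of the two, which is what drives the reported gains in both problems solved and total solving time.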
Abstract:
The research reported here addresses the problem of detecting and tracking independently moving objects from a moving observer in real time, using corners as object tokens. Local image-plane constraints are employed to solve the correspondence problem, removing the need for a 3D motion model. The approach relaxes the restrictive static-world assumption conventionally made, and is therefore capable of tracking independently moving and deformable objects. The technique is novel in that feature detection and tracking is restricted to areas likely to contain meaningful image structure. Feature instantiation regions are defined from a combination of odometry information and a limited knowledge of the operating scenario. The algorithms developed have been tested on real image sequences taken from typical driving scenarios. Preliminary experiments on a parallel (transputer) architecture indicate that real-time operation is achievable.