111 results for Computer-aided instruction


Relevance: 80.00%

Abstract:

As buildings have become more advanced and complex, our ability to understand how they are operated and managed has diminished. Modern technologies have given us systems to look after us, but they appear to have taken away our say in how we would like our environment to be managed. The aim of this paper is to discuss our research concerning spaces that are sensitive to changing needs and allow building users a certain level of freedom to understand and control their environment. We discuss why what we call the Active Layer is needed in modern buildings, how building inhabitants are to interact with it, and the development of interface prototypes to test the consequences of having the Active Layer in our environment.

Relevance: 80.00%

Abstract:

Successful anatomic fitting of a total artificial heart (TAH) is vital to achieve optimal pump hemodynamics after device implantation. Although many anatomic fitting studies have been completed in humans prior to clinical trials, few reports detail the corresponding experience in animals for in vivo device evaluation. Optimal hemodynamics are crucial throughout the in vivo phase to direct design iterations and ultimately validate device performance prior to pivotal human trials. In vivo evaluation in a sheep model provides a realistically sized representation of a smaller patient, whom smaller third-generation TAHs have the potential to treat. Our study aimed to assess the anatomic fit of a single-device rotary TAH in sheep prior to animal trials and to use the data to develop a three-dimensional, computer-aided design (CAD)-operated anatomic fitting tool for future TAH development. Following excision of the native ventricles above the atrio-ventricular groove, a prototype TAH was inserted within the chest cavity of six sheep (28–40 kg). Adjustable rods representing inlet and outlet conduits were oriented toward the center of each atrial chamber and the great vessels, with conduit lengths and angles recorded for later analysis. A three-dimensional, CAD-operated anatomic fitting tool was then developed, based on the results of this study, and used to determine the inflow and outflow conduit orientation of the TAH. The mean diameters of the sheep left atrium, right atrium, aorta, and pulmonary artery were 39, 33, 12, and 11 mm, respectively. The center-to-center distance and outer-edge-to-outer-edge distance between the atria, found to be 39 ± 9 mm and 72 ± 17 mm in this study, were identified as the most critical geometries for successful TAH connection. This geometric constraint restricts the maximum separation allowable between the left and right inlet ports of a TAH to ensure successful alignment within the available atrial circumference.
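
The atrial spacing reported above lends itself to a simple feasibility check. Below is a minimal Python sketch of how the measured center-to-center distance could bound the inlet port separation of a candidate TAH; the device values, tolerance parameter, and helper name are hypothetical illustrations, not from the study.

```python
# Minimal sketch: checks a candidate TAH inlet-port separation against
# the atrial geometry reported in the study. The device values and the
# tolerance parameter are hypothetical illustrations.

CENTER_TO_CENTER_MM = 39.0   # mean atrial center-to-center distance
CENTER_TO_CENTER_SD = 9.0    # reported standard deviation

def fits_atrial_geometry(port_separation_mm: float,
                         tolerance_mm: float = 5.0) -> bool:
    """True if the left-to-right inlet port separation stays within the
    worst-case (mean minus one SD) atrial spacing plus a small
    alignment tolerance."""
    worst_case_mm = CENTER_TO_CENTER_MM - CENTER_TO_CENTER_SD
    return port_separation_mm <= worst_case_mm + tolerance_mm

print(fits_atrial_geometry(32.0))  # True: within worst-case spacing
print(fits_atrial_geometry(45.0))  # False: ports spaced too widely
```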

Relevance: 80.00%

Abstract:

Digital Human Models (DHM) have been used for over 25 years. They have evolved from simple drawing templates, which are still used in architecture today, to complex design and analysis tools for various ergonomic tasks, integrated with Computer Aided Engineering (CAE). DHM are most frequently used for applications in product design and production planning, with many successful implementations documented. DHM from other domains, such as computer user interfaces, artificial intelligence, training and education, or the entertainment industry, show that there is also an ongoing development towards a comprehensive understanding and holistic modeling of human behavior. While the development of DHM for the games sector has seen significant progress in recent years, advances of DHM in the area of ergonomics have been comparatively modest. As a consequence, we need to question whether current DHM systems are fit for the design of future mobile work systems. So far it appears that DHM in ergonomics are largely limited to some traditional applications. According to Dul et al. (2012), future characteristics of Human Factors and Ergonomics (HFE) can be assigned to six main trends: (1) global change of work systems; (2) cultural diversity; (3) ageing; (4) information and communication technology (ICT); (5) enhanced competitiveness and the need for innovation; and (6) sustainability and corporate social responsibility. Based on a literature review, we systematically investigate the capabilities of current ergonomic DHM systems against these 'Future of Ergonomics' requirements. It is found that DHM already provide broad functionality in support of trends (1) and (2), and more limited options with regard to trend (3). Today's DHM provide access to a broad range of national and international databases for correct differentiation and characterization of anthropometry for global populations. Some DHM explicitly address social and cultural modeling of groups of people. In comparison, the trends of the growing importance of ICT (4), the need for innovation (5), and sustainability (6) are addressed primarily from a hardware-oriented engineering perspective and are not reflected in DHM. This reveals a persistent separation between hardware design (engineering) and software design (information technology) in the view of DHM, a disconnection which urgently needs to be overcome in the era of software-defined user interfaces and mobile devices. The design of a mobile ICT device is discussed to exemplify the need for a comprehensive future DHM solution. Designing such mobile devices requires an approach that includes organizational aspects as well as technical and cognitive ergonomics. Multiple interrelationships between the different aspects result in a challenging setting for future DHM. In conclusion, the 'Future of Ergonomics' poses particular challenges for DHM with regard to the design of mobile work systems and, moreover, mobile information access.

Relevance: 80.00%

Abstract:

This research explored the metaphor of biological evolution as a way of solving architectural design problems. Drawing on fields such as language grammars, algorithms, and cellular biology, the thesis examined ways of encoding design information for processing. The aim of this work was to support the building of software that assists the architectural design process and allows designers to examine more variations.
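
As a generic illustration of the evolutionary metaphor (not the thesis's actual encoding scheme), a minimal Python sketch follows: a design is encoded as a list of parameters, and a population of candidate designs is evolved by selection and mutation toward a toy fitness goal.

```python
# Minimal sketch of an evolutionary search over encoded designs.
# The genome encoding and fitness function are toy illustrations,
# not the thesis's actual design representation.
import random

def fitness(genome: list) -> float:
    """Toy objective: prefer designs whose parameters sum to 10."""
    return -abs(sum(genome) - 10.0)

def mutate(genome: list, rate: float = 0.2) -> list:
    """Randomly perturb some parameters of a design."""
    return [g + random.gauss(0, 0.5) if random.random() < rate else g
            for g in genome]

random.seed(0)
population = [[random.uniform(0, 5) for _ in range(4)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]    # variation
print(max(population, key=fitness))
```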

Relevance: 80.00%

Abstract:

This article introduces a novel platform for conducting controlled, risk-free driving and traveling behavior studies, called the Cyber-Physical System Simulator (CPSS). The key features of CPSS are: (1) simulation of multiuser immersive driving in a three-dimensional (3D) virtual environment; (2) integration of traffic and communication simulators with human driving based on dedicated middleware; and (3) accessibility of the multiuser driving simulator on popular software and hardware platforms. This combination of features allows us to easily collect large-scale data on interesting phenomena in the interaction between multiple user drivers, which is not possible with current single-user driving simulators. The core original contribution of this article is threefold: (1) we introduce a multiuser driving simulator based on DiVE, our original massively multiuser networked 3D virtual environment; (2) we introduce OpenV2X, a middleware for simulating vehicle-to-vehicle and vehicle-to-infrastructure communication; and (3) we present two experiments based on our CPSS platform. The first experiment investigates the “rubbernecking” phenomenon, where a platoon of four user drivers experiences an accident in the oncoming direction of traffic. The second is a pilot study on the effectiveness of a Cooperative Intelligent Transport Systems advisory system.
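
As an illustration of the middleware role such a platform plays, here is a minimal publish/subscribe sketch in Python; the class, topic, and message names are hypothetical and do not reflect the actual OpenV2X API.

```python
# Hedged sketch of a V2X-style message bus, loosely inspired by the
# middleware role OpenV2X plays in CPSS; all names are hypothetical.
from collections import defaultdict
from typing import Callable

class V2XBus:
    """Minimal publish/subscribe bus for vehicle-to-vehicle (V2V) and
    vehicle-to-infrastructure (V2I) messages."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]):
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict):
        for handler in self._subscribers[topic]:
            handler(message)

bus = V2XBus()
bus.subscribe("v2v/hazard", lambda m: print("brake advisory:", m))
# A simulated vehicle broadcasts a hazard it has detected ahead.
bus.publish("v2v/hazard", {"vehicle_id": 7, "position_m": 1250.0})
```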

Relevance: 80.00%

Abstract:

Lean construction and building information modeling (BIM) are quite different initiatives, but both are having profound impacts on the construction industry. A rigorous analysis of the myriad specific interactions between them indicates that a synergy exists which, if properly understood in theoretical terms, can be exploited to improve construction processes beyond the degree to which they might be improved by applying either of these paradigms independently. Using a matrix that juxtaposes BIM functionalities with prescriptive lean construction principles, 56 interactions have been identified, all but four of which represent constructive interaction. Although evidence has been found for the majority of these, the matrix is not considered complete but rather a framework for research to explore the degree of validity of the interactions. Construction executives, managers, designers, and developers of information technology systems for construction can also benefit from the framework as an aid to recognizing the potential synergies when planning their lean and BIM adoption strategies.

Relevance: 80.00%

Abstract:

CIB is developing a priority theme, now termed Improving Construction and Use through Integrated Design & Delivery Solutions (IDDS). The IDDS working group for this theme adopted the following definition: Integrated Design and Delivery Solutions use collaborative work processes and enhanced skills, with integrated data, information, and knowledge management, to minimize structural and process inefficiencies and to enhance the value delivered during design, build, and operation, and across projects. The design, construction, and commissioning sectors have repeatedly been analysed as inefficient, and may or may not be quite as bad as portrayed; however, there is unquestionably significant scope for IDDS to improve the delivery of value to clients, stakeholders (including occupants), and society in general, while simultaneously driving down the cost and time needed to deliver operational constructed facilities. Although various initiatives developed from computer-aided design and manufacturing technologies, lean construction, modularization, prefabrication, and integrated project delivery are currently being adopted by some sectors and specialisations in construction, IDDS provides the vision for a more holistic future transformation. Successful use of IDDS requires improvements in work processes, technology, and people's capabilities to span the entire construction lifecycle, from conception through design, construction, commissioning, operation, refurbishment/retrofit, and recycling, while considering the building's interaction with its environment. This vision extends beyond new buildings to encompass modifications and upgrades, particularly those aimed at improving local and area sustainability goals. IDDS will facilitate greater flexibility of design options, work packaging strategies, and collaboration with suppliers and trades, which will be essential to meet evolving sustainability targets. As knowledge capture and reuse become prevalent, IDDS best practice should become the norm rather than the exception.

Relevance: 80.00%

Abstract:

This paper describes a novel system for automatic classification of images obtained from Anti-Nuclear Antibody (ANA) pathology tests on Human Epithelial type 2 (HEp-2) cells using the Indirect Immunofluorescence (IIF) protocol. The IIF protocol on HEp-2 cells has been the hallmark method to identify the presence of ANAs, due to its high sensitivity and the large range of antigens that can be detected. However, it suffers from numerous shortcomings, such as being subjective as well as time- and labour-intensive. Computer Aided Diagnostic (CAD) systems have been developed to address these problems; they automatically classify a HEp-2 cell image into one of its known patterns (e.g., speckled, homogeneous). Most existing CAD systems use hand-picked features to represent a HEp-2 cell image, which may only work in limited scenarios. We propose a novel automatic cell image classification method termed Cell Pyramid Matching (CPM), which comprises regional histograms of visual words coupled with the Multiple Kernel Learning framework. We present a study of several variations of generating histograms and show the efficacy of the system on two publicly available datasets: the ICPR HEp-2 cell classification contest dataset and the SNPHEp-2 dataset.
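
The regional histogram-of-visual-words idea can be illustrated with a generic bag-of-visual-words sketch in Python with scikit-learn; this is a simplified stand-in rather than the authors' CPM implementation, and the patch and codebook sizes are arbitrary.

```python
# Hedged sketch of the regional histogram-of-visual-words idea behind
# CPM: a generic bag-of-visual-words pipeline, not the authors' code.
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image: np.ndarray, size: int = 8, step: int = 4):
    """Densely sample square patches from a greyscale cell image."""
    h, w = image.shape
    return np.array([image[y:y + size, x:x + size].ravel()
                     for y in range(0, h - size + 1, step)
                     for x in range(0, w - size + 1, step)])

def region_histogram(patches: np.ndarray, codebook: KMeans) -> np.ndarray:
    """Quantise patches against a learned codebook and build a
    normalised histogram of visual-word occurrences."""
    words = codebook.predict(patches)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Learn a small codebook from training patches (toy data shown here),
# then describe each image region by its visual-word histogram; in a
# CPM-style system, histograms from several cell regions would be
# concatenated and fed to a kernel classifier (MKL combines kernels).
rng = np.random.default_rng(0)
train_patches = rng.random((500, 64))
codebook = KMeans(n_clusters=32, n_init=3, random_state=0).fit(train_patches)
image = rng.random((64, 64))
print(region_histogram(extract_patches(image), codebook).shape)  # (32,)
```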

Relevance: 80.00%

Abstract:

Recurrent congestion caused by high commuter traffic is an irritation to motorway users. Ramp metering (RM) is the most effective motorway control measure for significantly reducing motorway congestion (Papageorgiou & Kotsialos, 2002). However, given field constraints (e.g., limited ramp space and maximum ramp waiting time), RM cannot eliminate recurrent congestion during increasingly long peak periods. This paper therefore focuses on rapid congestion recovery to further improve RM systems: that is, quickly clearing congestion during recovery periods. The feasibility of using RM for recovery is analyzed, and a zone recovery strategy (ZRS) for RM is proposed. Note that this study assumes no incidents and no demand management, i.e., no re-routing behavior or strategy is considered. The strategy is modeled, calibrated, and tested for a recurrent congestion scenario in a micro-simulation model of the northbound Pacific Motorway, Brisbane, Australia, and the evaluation results demonstrate its effectiveness.
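
The abstract does not specify the ZRS control law, but the feedback principle behind many RM systems can be sketched with the classic ALINEA law of Papageorgiou et al. Below is a minimal Python sketch; the gain, target occupancy, and rate bounds are illustrative values, not the paper's calibrated ones.

```python
# Minimal sketch of the ALINEA local ramp-metering feedback law
# (Papageorgiou et al.), shown as general RM background; this is not
# the paper's zone recovery strategy, and all parameter values are
# illustrative rather than calibrated.

def alinea_rate(prev_rate_vph: float, occupancy_pct: float,
                target_occupancy_pct: float = 18.0, gain_vph: float = 70.0,
                min_rate_vph: float = 200.0,
                max_rate_vph: float = 1800.0) -> float:
    """r(k) = r(k-1) + K_R * (o_target - o_measured), clamped to the
    ramp's feasible metering range (vehicles per hour)."""
    rate = prev_rate_vph + gain_vph * (target_occupancy_pct - occupancy_pct)
    return min(max(rate, min_rate_vph), max_rate_vph)

rate = 900.0
for occ in (15.0, 22.0, 25.0, 19.0):   # measured downstream occupancy (%)
    rate = alinea_rate(rate, occ)
    print(f"occupancy {occ:4.1f}% -> metering rate {rate:6.0f} veh/h")
```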

Relevance: 80.00%

Abstract:

Age-related macular degeneration (AMD) is one of the major causes of vision loss and blindness in the ageing population. Currently there is no cure for AMD; however, early detection and subsequent treatment may prevent severe vision loss or slow the progression of the disease. AMD can be classified into two types: dry and wet AMD. People with macular degeneration are mostly affected by dry AMD. Early signs of AMD are the formation of drusen and yellow pigmentation. These lesions are identified by manual inspection of fundus images by ophthalmologists. This is a time-consuming, tiresome process, and hence an automated AMD screening tool can aid clinicians significantly in their diagnosis. This study proposes an automated dry AMD detection system using various entropies (Shannon, Kapur, Renyi and Yager), Higher Order Spectra (HOS) bispectra features, Fractal Dimension (FD), and Gabor wavelet features extracted from greyscale fundus images. The features are ranked using t-test, Kullback–Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance (CBBD), Receiver Operating Characteristics (ROC) curve-based and Wilcoxon ranking methods in order to select the optimum features, and classified into normal and AMD classes using Naive Bayes (NB), k-Nearest Neighbour (k-NN), Probabilistic Neural Network (PNN), Decision Tree (DT) and Support Vector Machine (SVM) classifiers. The performance of the proposed system is evaluated using a private dataset (Kasturba Medical Hospital, Manipal, India) and the Automated Retinal Image Analysis (ARIA) and STructured Analysis of the Retina (STARE) datasets. The proposed system yielded the highest average classification accuracies of 90.19%, 95.07% and 95% with 42, 54 and 38 optimally ranked features using the SVM classifier for the private, ARIA and STARE datasets, respectively. This automated AMD detection system can be used for mass fundus image screening, allowing clinicians to concentrate their expertise on the selected images that require further examination.
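
Two steps of this pipeline, a Shannon-entropy feature computed from a greyscale image and t-test-based feature ranking, can be sketched generically in Python; the parameters and toy data below are illustrative, not the paper's implementation.

```python
# Hedged sketch of two pipeline steps: a Shannon-entropy feature from
# a greyscale fundus image and t-test feature ranking. Generic
# illustration; parameters and data are not from the paper.
import numpy as np
from scipy import stats

def shannon_entropy(image: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the grey-level histogram: -sum(p log2 p)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def rank_by_ttest(features_normal: np.ndarray,
                  features_amd: np.ndarray) -> np.ndarray:
    """Rank feature columns by the magnitude of the two-sample
    t-statistic between normal and AMD classes (largest first)."""
    t, _ = stats.ttest_ind(features_normal, features_amd, axis=0)
    return np.argsort(-np.abs(t))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(128, 128))
print(f"entropy = {shannon_entropy(img):.2f} bits")
print(rank_by_ttest(rng.random((40, 5)), rng.random((42, 5)) + 0.1))
```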

Relevance: 80.00%

Abstract:

Age-related macular degeneration (AMD) affects the central vision and may subsequently lead to vision loss in people over 60 years of age. There is no permanent cure for AMD, but early detection and successive treatment may improve visual acuity. AMD is mainly classified into dry and wet types; dry AMD is more common in the ageing population. AMD is characterized by drusen, yellow pigmentation, and neovascularization. These lesions are examined through visual inspection of retinal fundus images by ophthalmologists, which is laborious, time-consuming, and resource-intensive. Hence, in this study, we propose an automated AMD detection system using the discrete wavelet transform (DWT) and feature ranking strategies. The first four statistical moments (mean, variance, skewness, and kurtosis), energy, entropy, and Gini index-based features are extracted from the DWT coefficients. We used five feature ranking strategies (t-test, Kullback–Leibler Divergence (KLD), Chernoff Bound and Bhattacharyya Distance, receiver operating characteristic curve-based, and Wilcoxon) to identify the optimal feature set. A set of supervised classifiers, namely support vector machine (SVM), decision tree, k-nearest neighbor (k-NN), Naive Bayes, and probabilistic neural network, was used to achieve the highest performance measure with the minimum number of features in classifying normal and dry AMD classes. The proposed framework obtained an average accuracy of 93.70%, sensitivity of 91.11%, and specificity of 96.30% using KLD ranking and the SVM classifier. We also formulated an AMD Risk Index using the selected features to classify the normal and dry AMD classes with a single number. The proposed system can be used to assist clinicians and in mass AMD screening programs.
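
The DWT feature extraction step can be sketched generically with PyWavelets; this is a simplified illustration in which the wavelet choice and feature details are assumptions, and the Gini index feature is omitted.

```python
# Hedged sketch of DWT-based feature extraction in the spirit of the
# pipeline above; generic illustration using PyWavelets, not the
# authors' implementation.
import numpy as np
import pywt
from scipy import stats

def dwt_features(image: np.ndarray, wavelet: str = "db4") -> list:
    """First four statistical moments plus energy and entropy of each
    2-D DWT subband (LL, LH, HL, HH)."""
    ll, (lh, hl, hh) = pywt.dwt2(image, wavelet)
    feats = []
    for band in (ll, lh, hl, hh):
        c = band.ravel()
        p = np.abs(c) / max(np.abs(c).sum(), 1e-12)
        feats += [c.mean(), c.var(),
                  float(stats.skew(c)), float(stats.kurtosis(c)),
                  float(np.sum(c ** 2)),                    # energy
                  float(-np.sum(p * np.log2(p + 1e-12)))]   # entropy
    return feats

rng = np.random.default_rng(2)
fundus = rng.random((256, 256))
print(len(dwt_features(fundus)))  # 4 subbands x 6 features = 24
```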

Relevance: 80.00%

Abstract:

We present CHURNs, a method for providing freshness and authentication assurances to human users. In computer-to-computer protocols, it has long been accepted that assurances of freshness, such as random nonces, are required to prevent replay attacks. Typically, no such assurance of freshness is presented to a human in a human-and-computer protocol. A Computer–HUman Recognisable Nonce (CHURN) is a computer-aided random sequence over which the human has a measure of control and input. Our approach overcomes limitations such as ‘humans cannot do random’ and the tendency of humans to follow the easiest path. Our findings show that CHURNs are significantly more random than values produced by unaided humans; that humans may be used as a second source of randomness, with measurements of how much randomness can be gained from humans using our approach; and that our CHURN generator makes the user feel more in control, thus removing the need for complete trust in devices and underlying protocols. We give an example of how a CHURN may be used to provide assurances of freshness and authentication for humans in a widely used protocol.
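
As a loose illustration of the general idea of a computer-aided nonce that a human has input into, a minimal Python sketch follows; this hash-mixing construction is our own illustrative stand-in, not the CHURN design evaluated in the paper.

```python
# Hedged sketch of a computer-aided nonce: device randomness and
# human choices are mixed through a hash, so the human contributes
# input without having to 'do random' unaided. Illustrative only;
# this is not the CHURN construction from the paper.
import hashlib
import secrets

def aided_nonce(human_choices: str, n_bytes: int = 16) -> str:
    """Combine device randomness with human-supplied choices."""
    device_part = secrets.token_bytes(n_bytes)
    digest = hashlib.sha256(device_part + human_choices.encode()).digest()
    return digest[:n_bytes].hex()

# The human steers part of the value (e.g. picks symbols on screen);
# the device guarantees a baseline of unpredictability.
print(aided_nonce("red-7-owl-3"))
```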

Relevance: 80.00%

Abstract:

This thesis addressed issues that have prevented qualitative researchers from using thematic discovery algorithms. The central hypothesis was that allowing qualitative researchers to interact with thematic discovery algorithms and incorporate domain knowledge improves their ability to address research questions and trust the derived themes. Non-negative Matrix Factorisation and Latent Dirichlet Allocation find latent themes within document collections, but these algorithms are rarely used because qualitative researchers do not trust, and cannot interact with, the automatically generated themes. The research determined the types of interactivity that qualitative researchers require and then evaluated interactive algorithms matching these requirements. Theoretical contributions include the articulation of design guidelines for interactive thematic discovery algorithms, the development of an Evaluation Model, and a Conceptual Framework for Interactive Content Analysis.
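
The kind of thematic discovery the thesis builds on can be illustrated with a plain, non-interactive NMF baseline in Python using scikit-learn; the toy documents below are invented, and the interactive extensions evaluated in the thesis are not shown.

```python
# Hedged sketch of thematic discovery with Non-negative Matrix
# Factorisation (NMF) via scikit-learn; a generic baseline, not the
# interactive algorithms evaluated in the thesis.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "participants described feeling isolated while working remotely",
    "remote work made collaboration with the team harder",
    "the new interface made data entry faster for nurses",
    "nurses reported the interface reduced time spent on records",
]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

# Factorise the document-term matrix into themes x terms and print
# the top terms of each discovered theme.
nmf = NMF(n_components=2, init="nndsvda", random_state=0).fit(X)
terms = tfidf.get_feature_names_out()
for k, topic in enumerate(nmf.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:4]]
    print(f"theme {k}: {', '.join(top)}")
```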

Relevance: 80.00%

Abstract:

Firstly, we would like to thank Ms. Alison Brough and her colleagues for their positive commentary on our published work [1] and their appraisal of the utility of our “off-set plane” protocol for anthropometric analysis. The standardized protocols described in our manuscript have wide applications, ranging from forensic anthropology and paleodemographic research to clinical settings such as paediatric practice and orthopaedic surgical design. We affirm that the use of the geometrically based reference tools commonly found in computer-aided design (CAD) programs, such as Geomagic Design X®, is imperative for more automated and precise measurement protocols for quantitative skeletal analysis. We therefore stand by our recommendation of software such as Amira and Geomagic Design X® in the contexts described in our manuscript...

Relevance: 80.00%

Abstract:

The standard method for deciding bit-vector constraints is eager reduction to propositional logic, usually after first applying powerful rewrite techniques. While often efficient in practice, this method does not scale on problems for which top-level rewrites cannot sufficiently reduce the problem size. A lazy solver can target such problems by performing many satisfiability checks, each of which reasons about only a small subset of the problem. In addition, the lazy approach enables a wide range of optimization techniques that are not available to the eager approach. In this paper we describe the architecture and features of our lazy solver (LBV). We provide a comparative analysis of the eager and lazy approaches and show how they are complementary in the types of problems they can efficiently solve. For this reason, we propose a portfolio approach that runs a lazy and an eager solver in parallel. Our empirical evaluation shows that the lazy solver can solve problems that none of the eager solvers can, and that the portfolio solver outperforms the other solvers both in the total number of problems solved and in the time taken to solve them.
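
The portfolio idea can be sketched in a few lines of Python: run both solvers on the same input in parallel and report the first answer. The solver command names below are hypothetical placeholders, not the actual binaries from the paper.

```python
# Minimal sketch of the portfolio approach: run an eager and a lazy
# bit-vector solver on the same problem in parallel and report the
# first answer. The solver command names are hypothetical placeholders.
import subprocess
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def run_solver(cmd: str, problem: str) -> str:
    result = subprocess.run([cmd, problem], capture_output=True, text=True)
    return f"{cmd}: {result.stdout.strip()}"   # e.g. "sat" / "unsat"

def portfolio(problem: str) -> str:
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(run_solver, cmd, problem)
                   for cmd in ("eager_bv_solver", "lazy_bv_solver")]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:   # best effort only; a real portfolio would
            f.cancel()      # also kill the losing solver's process
        return next(iter(done)).result()

print(portfolio("benchmark.smt2"))
```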