827 results for 3D user interface
Abstract:
Real-world data mining applications generally do not end with the creation of the models. The use of the model is the ultimate purpose, especially in prediction tasks. A problem arises when the model is built on much more information than the user can provide when applying it. As a result, the model's performance degrades drastically because of the many missing attribute values. This paper develops a new learning system framework, called the User Query-Based Learning System (UQBLS), for building data mining models best suited to users' needs. We demonstrate its deployment in a real-world application: the lifetime prediction of metallic components in buildings.
Abstract:
This paper deals with the problem of using data mining models in a real-world situation where the user cannot provide all the inputs with which the predictive model was built. A learning system framework, the Query-Based Learning System (QBLS), is developed to improve the performance of predictive models in practice, where not all inputs are available for querying the system. An automatic feature selection algorithm, Query-Based Feature Selection (QBFS), is developed to select features that balance a relatively small feature subset against relatively high classification accuracy. The performance of the QBLS system and the QBFS algorithm is successfully demonstrated in a real-world application.
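The abstract names QBFS only by its objective: a small feature subset with high classification accuracy. A generic greedy wrapper in that spirit can be sketched as follows; the scoring function, stopping threshold, and feature names are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch of wrapper-style forward feature selection in the
# spirit of QBFS: greedily add the feature that most improves accuracy,
# stopping when the gain no longer justifies a larger feature subset.
# The scorer below is a stand-in, not the paper's classifier.

def forward_select(features, score, min_gain=0.01):
    """Greedy forward selection over a list of feature names.

    `score(subset)` returns a classification accuracy in [0, 1].
    Selection stops when the best single-feature addition improves
    accuracy by less than `min_gain`, trading accuracy for subset size.
    """
    selected = []
    best = score(selected)
    while True:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        new_best, f = max((score(selected + [f]), f) for f in candidates)
        if new_best - best < min_gain:
            break
        selected.append(f)
        best = new_best
    return selected, best

# Toy scorer: accuracy saturates once the two informative features are in.
def toy_score(subset):
    informative = {"age", "material"}
    return 0.5 + 0.2 * len(informative & set(subset))

subset, acc = forward_select(["age", "material", "colour", "id"], toy_score)
```

In a real wrapper, `score` would run cross-validated classification on the selected columns; `min_gain` is the knob that trades accuracy against subset size.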
Abstract:
The indoor air quality (IAQ) in buildings is currently assessed by measuring pollutants during building operation and comparing them with air quality standards. Current practice at the design stage tries to minimise the potential indoor air quality impacts of new building materials and contents by selecting low-emission materials. However, low-emission materials are not always available, and even when they are used, the aggregated pollutant concentrations from such materials are generally overlooked. This paper presents an innovative tool for estimating indoor air pollutant concentrations at the design stage, based on emissions over time from large-area building materials, furniture and office equipment. The estimator considers volatile organic compounds, formaldehyde and airborne particles from indoor materials and office equipment, as well as the contribution of outdoor urban air pollutants as affected by urban location and ventilation system filtration. The estimated pollutants are for a single, fully mixed and ventilated zone in an office building, with acceptable levels derived from Australian and international health-based standards. The model acquires its dimensional data for the indoor spaces from a 3D CAD model via IFC files, and the emission data from a building products/contents emissions database. This paper describes the underlying approach to estimating indoor air quality and discusses the benefits of such an approach for designers and the occupants of buildings.
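For a single, fully mixed, ventilated zone, the steady-state indoor concentration follows a standard mass balance; the sketch below shows the kind of calculation such an estimator could perform. The symbols, values, and filtration model are assumptions for illustration, not parameters from the paper or its emissions database.

```python
# Illustrative steady-state mass balance for a single, fully mixed,
# ventilated zone: indoor concentration when generation (indoor
# emissions plus filtered outdoor air) balances removal by ventilation.
# All values are made up for illustration.

def steady_state_concentration(emission_rate_ug_h, ventilation_m3_h,
                               outdoor_ug_m3, filter_efficiency):
    """Indoor concentration (ug/m3) at steady state:
    C = C_out * (1 - eta) + E / Q
    where E is the indoor emission rate, Q the outdoor air flow,
    and eta the ventilation-system filter efficiency."""
    return outdoor_ug_m3 * (1.0 - filter_efficiency) \
        + emission_rate_ug_h / ventilation_m3_h

# Example: 500 ug/h of VOCs from materials, 250 m3/h outdoor air,
# 20 ug/m3 outdoors, 50% filtration.
c = steady_state_concentration(500.0, 250.0, 20.0, 0.5)  # 12.0 ug/m3
```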
Abstract:
Large design projects, such as those in the AEC domain, involve collaboration among a number of design disciplines, often in separate locations. With the increase in CAD usage in design offices, interest has grown in collaborating through the electronic medium, both synchronously and asynchronously. The use of a single shared database representing a single model of a building has been widely advocated, but this paper argues that this does not take into account the different representations required by each discipline. This paper puts forward an environment that provides real-time multi-user collaboration in a 3D virtual world for designers in different locations. Agent technology is used to manage the different views, the creation and modification of objects in the 3D virtual world, and the necessary relationships with the database(s) belonging to each discipline.
Abstract:
Alvin Toffler’s image of the prosumer (1970, 1980, 1990) continues to influence in a significant way our understanding of the user-led, collaborative processes of content creation which are today labelled “social media” or “Web 2.0”. A closer look at Toffler’s own description of his prosumer model reveals, however, that it remains firmly grounded in the mass media age: the prosumer is clearly not the self-motivated creative originator and developer of new content which can today be observed in projects ranging from open source software through Wikipedia to Second Life, but simply a particularly well-informed, and therefore both particularly critical and particularly active, consumer. The highly specialised, high end consumers which exist in areas such as hi-fi or car culture are far more representative of the ideal prosumer than the participants in non-commercial (or as yet non-commercial) collaborative projects. And to expect Toffler’s 1970s model of the prosumer to describe these 21st-century phenomena was always an unrealistic expectation, of course. To describe the creative and collaborative participation which today characterises user-led projects such as Wikipedia, terms such as ‘production’ and ‘consumption’ are no longer particularly useful – even in laboured constructions such as ‘commons-based peer-production’ (Benkler 2006) or ‘p2p production’ (Bauwens 2005). In the user communities participating in such forms of content creation, roles as consumers and users have long begun to be inextricably interwoven with those as producer and creator: users are always already also able to be producers of the shared information collection, regardless of whether they are aware of that fact – they have taken on a new, hybrid role which may be best described as that of a produser (Bruns 2008). 
Projects which build on such produsage can be found in areas from open source software development through citizen journalism to Wikipedia, and beyond this also in multi-user online computer games, filesharing, and even in communities collaborating on the design of material goods. While addressing a range of different challenges, they nonetheless build on a small number of universal key principles. This paper documents these principles and indicates the possible implications of this transition from production and prosumption to produsage.
Abstract:
This paper summarises findings from a survey of user behaviours and intentions towards digital media and information in Australia. It was undertaken in the first quarter of 2009 by the Queensland University of Technology Creative Industries Faculty and was funded by the Smart Services Cooperative Research Centre. The survey targeted users of two news and information sites that are available online only. Findings highlighted differences between the 18-24 year age segment and older users. Social networks (specifically friends and family) were rated as the least reliable, relevant and accurate sources of news. Other findings indicate that online news sources associated with an established newspaper are highly valued as reliable, relevant and accurate news sources by most people. While most people prefer to use online news sources, there is a great deal of variation in how people actually use online news. From a total of 524 survey respondents it was possible to identify three main types of online news consumers: convenience, loyal and customising users.
Abstract:
Properly designed decision support environments encourage proactive and objective decision making. The work presented in this paper addresses the development of a decision support environment and a tool to facilitate objective decision making in dealing with road traffic noise. The decision support methodology incorporates traffic noise amelioration strategies both within and outside the road reserve. The project is funded by the CRC for Construction Innovation and conducted jointly by RMIT University and the Queensland Department of Main Roads (MR) in collaboration with the Queensland Department of Public Works, Arup Pty Ltd and the Queensland University of Technology. In this paper, the proposed decision support framework is presented as a flowchart, which enabled the development of the decision support tool (DST). The underpinning concept is to establish and maintain an information warehouse for each critical road segment (noise corridor) for a given planning horizon. It is understood that, in current practice, some components of the approach described are already in place, but they are not fully integrated and supported. The tool provides an integrated, user-friendly interface between traffic noise modelling software, noise management criteria and cost databases.
Abstract:
The road and transport industry in Australia and overseas has come a long way toward understanding the impact of road traffic noise on the urban environment. Most road authorities now have guidelines to help assess and manage the impact of road traffic noise on noise-sensitive areas and developments. While several economic studies across Australia and overseas have tried to value the impact of noise on property prices, decision-makers investing in road traffic noise management strategies have relatively limited historical data and case studies to go on. The perceived success of a noise management strategy currently relies largely on community expectations at a given time, and is not necessarily based on an analysis of the costs and benefits, or of the long-term viability and value to the community of the proposed treatment options. With changing trends in urban design, it is essential that the 'whole-of-life' costs and benefits of noise ameliorative treatment options and strategies be identified and made available to decision-makers for future investment considerations. For this reason, CRC for Construction Innovation Australia funded a research project, Noise Management in Urban Environments, to help decision-makers with future road traffic noise management investment decisions. RMIT University and the Queensland Department of Main Roads (QDMR) have conducted the research work in collaboration with the Queensland Department of Public Works, ARUP Pty Ltd and the Queensland University of Technology. The research has formed the basis for the development of a decision-support software tool, and has helped collate technical and costing data for known noise amelioration treatment options. We intend that the decision support software tool (DST) should help an investment decision-maker to be better informed of suitable noise ameliorative treatment options on a project-by-project basis and to identify the likely costs and benefits associated with each of those options.
This handbook has been prepared as a procedural guide for conducting a comparative assessment of noise ameliorative options. The handbook outlines the methodology and assumptions adopted in the decision-support framework for the investment decision-maker and user of the DST. The DST has been developed to provide an integrated user-friendly interface between road traffic noise modelling software, the relevant assessment criteria and the options analysis process. A user guide for the DST is incorporated in this handbook.
Abstract:
The rise of videosharing and self-(re)broadcasting Web services is posing new threats to a television industry already struggling with the impact of filesharing networks. This paper outlines these threats, focussing especially on the DIY re-broadcasting of live sports using Websites such as Justin.tv and a range of streaming media networks built on peer-to-peer filesharing technology.
Abstract:
Driven and supported by Web 2.0 technologies, there is today a trend towards combining the use and production of content as produsage. To ensure the quality of the content created and sustainable participation by its users, four fundamental principles must be observed: * The greatest possible openness. * Seeding the community with content and tools. * Supporting group dynamics and devolving responsibility. * No exploitation of the community and its work.
Abstract:
Introduction: Bone mineral density (BMD) is currently the preferred surrogate for bone strength in clinical practice. Finite element analysis (FEA) is a computer simulation technique that can predict the deformation of a structure when a load is applied, providing a measure of stiffness (N mm−1). Finite element analysis of X-ray images (3D-FEXI) is a FEA technique whose analysis is derived from a single 2D radiographic image. Methods: 18 excised human femora had previously been scanned by quantitative computed tomography, from which 2D BMD-equivalent radiographic images were derived, and mechanically tested to failure in a stance-loading configuration. A 3D proximal femur shape was generated from each 2D radiographic image and used to construct 3D-FEA models. Results: The coefficient of determination (R², %) for predicting failure load was 54.5% for BMD and 80.4% for 3D-FEXI. Conclusions: This ex vivo study demonstrates that 3D-FEXI derived from a conventional 2D radiographic image has the potential to significantly increase the accuracy of failure load assessment of the proximal femur compared with that currently achieved with BMD. This approach may be readily extended to routine clinical BMD images derived by dual-energy X-ray absorptiometry. Crown Copyright © 2009 Published by Elsevier Ltd on behalf of IPEM. All rights reserved.
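The R² figures above quantify how much of the variance in measured failure load each predictor explains. A minimal computation on made-up data (not the study's measurements) looks like this:

```python
# Coefficient of determination R^2: the fraction of variance in the
# observed values explained by a predictor.  Data below are invented
# for illustration, not the study's failure-load measurements.

def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

loads = [4.0, 5.0, 6.0, 7.0]       # hypothetical measured failure loads (kN)
predicted = [4.2, 4.8, 6.1, 6.9]   # hypothetical model predictions
r2 = r_squared(loads, predicted)   # close to 1: most variance explained
```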
Abstract:
Summary: Generalized Procrustes analysis and thin plate splines were employed to create an average 3D shape template of the proximal femur that was warped to the size and shape of a single 2D radiographic image of a subject. Mean absolute depth errors are comparable with previous approaches utilising multiple 2D input projections. Introduction: Several approaches have been adopted to derive volumetric density (g cm−3) from a conventional 2D representation of areal bone mineral density (BMD, g cm−2). Such approaches have generally aimed at deriving an average depth across the areal projection rather than creating a formal 3D shape of the bone. Methods: Generalized Procrustes analysis and thin plate splines were employed to create an average 3D shape template of the proximal femur that was subsequently warped to suit the size and shape of a single 2D radiographic image of a subject. CT scans of excised human femora, 18 and 24 scanned at pixel resolutions of 1.08 mm and 0.674 mm, respectively, were equally split into a training cohort (which created the 3D shape template) and a test cohort. Results: The mean absolute depth errors of 3.4 mm and 1.73 mm, respectively, for the two CT pixel sizes are comparable with previous approaches based upon multiple 2D input projections. Conclusions: This technique has the potential to derive volumetric density from BMD and to facilitate 3D finite element analysis for prediction of the mechanical integrity of the proximal femur. It may further be applied to other anatomical bone sites such as the distal radius and lumbar spine.
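As a rough illustration of the Procrustes machinery mentioned above: in 2D, ordinary (two-shape) Procrustes superimposition reduces to complex arithmetic when each landmark is written as x + yi. The toy landmarks below are assumptions for illustration; the paper's method is generalized Procrustes analysis over many 3D shapes followed by thin plate spline warping, which this sketch does not reproduce.

```python
# Minimal ordinary (two-shape) Procrustes superimposition in 2D.
# Landmarks are complex numbers; translation, uniform scaling and
# rotation then reduce to complex centering and one complex factor.

def procrustes_align(source, target):
    """Translate, scale and rotate `source` onto `target` (equal-length
    lists of complex landmarks); returns aligned landmarks and the
    residual sum of squared distances."""
    n = len(source)
    cs = sum(source) / n
    ct = sum(target) / n
    s = [z - cs for z in source]          # center both shapes
    t = [z - ct for z in target]
    # Optimal complex scale+rotation is a least-squares regression:
    a = sum(ti * si.conjugate() for si, ti in zip(s, t)) \
        / sum(abs(si) ** 2 for si in s)
    aligned = [a * si + ct for si in s]
    err = sum(abs(ai - ti) ** 2 for ai, ti in zip(aligned, target))
    return aligned, err

# A unit square rotated 90 degrees and shifted aligns exactly (err ~ 0).
square = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
moved = [(z * 1j) + (3 + 2j) for z in square]
aligned, err = procrustes_align(square, moved)
```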
Abstract:
The validation of computed tomography (CT) based 3D models is an integral part of studies involving 3D models of bones. This is of particular importance when such models are used for finite element studies. The validation of 3D models typically involves the generation of a reference model representing the bone's outer surface. Several different devices have been utilised for digitising a bone's outer surface, such as mechanical 3D digitising arms, mechanical 3D contact scanners, electro-magnetic tracking devices and 3D laser scanners. However, none of these devices is capable of digitising a bone's internal surfaces, such as the medullary canal of a long bone. Therefore, this study investigated the use of a 3D contact scanner, in conjunction with a microCT scanner, for generating a reference standard for validating the internal and external surfaces of a CT-based 3D model of an ovine femur. One fresh ovine limb was scanned using a clinical CT scanner (Philips Brilliance 64) with a pixel size of 0.4 mm2 and a slice spacing of 0.5 mm. The limb was then dissected to obtain the soft-tissue-free bone, while care was taken to protect the bone's surface. A desktop mechanical 3D contact scanner (Roland DG Corporation, MDX 20, Japan) was used to digitise the surface of the denuded bone at a resolution of 0.3 × 0.3 × 0.025 mm. The digitised surfaces were reconstructed into a 3D model using reverse engineering techniques in Rapidform (Inus Technology, Korea). After digitisation, the distal and proximal parts of the bone were removed so that the shaft could be scanned with a microCT scanner (µCT40, Scanco Medical, Switzerland). The shaft, with the bone marrow removed, was immersed in water and scanned with a voxel size of 0.03 mm3. The bone contours were extracted from the image data utilising the Canny edge filter in Matlab (The MathWorks). The extracted bone contours were reconstructed into 3D models using Amira 5.1 (Visage Imaging, Germany).
The 3D models of the bone's outer surface reconstructed from the CT and microCT data were compared against the 3D model generated using the contact scanner. The 3D model of the inner canal reconstructed from the microCT data was compared against the 3D models reconstructed from the clinical CT scanner data. The disparity between the surface geometries of two models was calculated in Rapidform and recorded as an average distance with standard deviation. The comparison of the 3D model of the whole bone generated from the clinical CT data with the reference model gave a mean error of 0.19±0.16 mm, while the shaft was more accurate (0.08±0.06 mm) than the proximal (0.26±0.18 mm) and distal (0.22±0.16 mm) parts. The comparison between the outer 3D model generated from the microCT data and the contact scanner model gave a mean error of 0.10±0.03 mm, indicating that the microCT-generated models are sufficiently accurate for validating 3D models generated by other methods. The comparison of the inner models generated from the microCT data with those from the clinical CT data gave an error of 0.09±0.07 mm. Utilising a mechanical contact scanner in conjunction with a microCT scanner thus enabled validation of the outer surface of a CT-based 3D model of an ovine femur as well as the surface of the model's medullary canal.
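The mean ± standard deviation surface errors reported above can be sketched as a nearest-neighbour comparison between two digitised surfaces, here reduced to tiny point clouds. The brute-force search and the toy coordinates are assumptions standing in for Rapidform's surface comparison.

```python
# Sketch of a surface-disparity measurement: for every point on the
# model surface, find the distance to the nearest reference point,
# then report mean and standard deviation.  Toy point clouds only;
# a real comparison would use dense meshes and a spatial index.

import math
import statistics

def surface_deviation(model_points, reference_points):
    """Return (mean, population std dev) of nearest-point distances
    from each model point to the reference point cloud."""
    nearest = [min(math.dist(p, q) for q in reference_points)
               for p in model_points]
    return statistics.mean(nearest), statistics.pstdev(nearest)

reference = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
model = [(0.1, 0.0, 0.0), (1.0, 0.1, 0.0), (0.0, 1.0, 0.1)]
mean_err, sd = surface_deviation(model, reference)  # each point is 0.1 away
```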
Abstract:
Interactive educational courseware has been adopted in diverse education sectors such as primary, secondary and tertiary education and vocational and professional training. In the Malaysian educational context, the Ministry of Education has implemented the Smart School Project, which aims to raise academic achievement in primary and secondary schools by using interactive educational courseware. However, many researchers have reported that much courseware fails to accommodate learner and teacher needs. In particular, the interface is often not designed with the quality of learning in mind. This paper reviews the educational courseware development process in terms of defining the quality of interface design, and suggests a conceptual model of interface design that integrates design components and the interactive learning experience into the development process. As a result, it defines the concept of the interactive learning experience in more practical terms, so that each stage of the development process can be implemented in a seamless and integrated way.
Abstract:
Synthetic polymers have attracted much attention in tissue engineering due to their ability to modulate biomechanical properties. This study investigated the feasibility of processing poly(ε-caprolactone) (PCL) homopolymer, PCL-poly(ethylene glycol) (PEG) diblock, and PCL-PEG-PCL triblock copolymers into three-dimensional porous scaffolds. Properties of the various polymers were investigated by dynamic thermal analysis. The scaffolds were manufactured using the desktop robot-based rapid prototyping technique. Gross morphology and internal three-dimensional structure of scaffolds were identified by scanning electron microscopy and micro-computed tomography, which showed excellent fusion at the filament junctions, high uniformity, and complete interconnectivity of pore networks. The influences of process parameters on scaffolds' morphological and mechanical characteristics were studied. Data confirmed that the process parameters directly influenced the pore size, porosity, and, consequently, the mechanical properties of the scaffolds. The in vitro cell culture study was performed to investigate the influence of polymer nature and scaffold architecture on the adhesion of the cells onto the scaffolds using rabbit smooth muscle cells. Light, scanning electron, and confocal laser microscopy showed cell adhesion, proliferation, and extracellular matrix formation on the surface as well as inside the structure of both scaffold groups. The completely interconnected and highly regular honeycomb-like pore morphology supported bridging of the pores via cell-to-cell contact as well as production of extracellular matrix at later time points. The results indicated that the incorporation of hydrophilic PEG into hydrophobic PCL enhanced the overall hydrophilicity and cell culture performance of PCL-PEG copolymer. However, the scaffold architecture did not significantly influence the cell culture performance in this study.