785 results for interface engineering
Abstract:
Design for Manufacturing (DFM) is a highly integral methodology in product development, starting from the concept development phase, with the aim of improving manufacturing productivity. It is used to reduce manufacturing costs in complex production environments while maintaining product quality. While Design for Assembly (DFA) focuses on eliminating parts or combining them with other components, which in most cases means performing a function and a manufacturing operation in a simpler way, DFM follows a more holistic approach. Common considerations for DFM are standard components, manufacturing tool inventory and capability, material compatibility with the production process, part handling, logistics, tool wear and process optimization, quality control complexity, and Poka-Yoke design. During DFM, the considerable background work required in the conceptual phase is compensated for by a shortening of later development phases. Current DFM projects normally apply an iterative, step-by-step approach whose results are eventually transferred to the development team. The study introduces a new, knowledge-based approach to DFM that eliminates steps of the DFM process and shows its implications for the work process. Furthermore, a concurrent engineering process via a transparent interface between the manufacturing engineering and product development systems is put forward.
Abstract:
Digital human modelling (DHM) has today matured from research into industrial application. In the automotive domain, DHM has become a commonly used tool in virtual prototyping and human-centred product design. While this generation of DHM supports the ergonomic evaluation of new vehicle design during early design stages of the product, by modelling anthropometry, posture, motion or predicting discomfort, the future of DHM will be dominated by CAE methods, realistic 3D design, and musculoskeletal and soft tissue modelling down to the micro-scale of molecular activity within single muscle fibres. As a driving force for DHM development, the automotive industry has traditionally used human models in the manufacturing sector (production ergonomics, e.g. assembly) and the engineering sector (product ergonomics, e.g. safety, packaging). In product ergonomics applications, DHM share many common characteristics, creating a unique subset of DHM. These models are optimised for a seated posture, interface to a vehicle seat through standardised methods and provide linkages to vehicle controls. As a tool, they need to interface with other analytic instruments and integrate into complex CAD/CAE environments. Important aspects of current DHM research are functional analysis, model integration and task simulation. Digital (virtual, analytic) prototypes or digital mock-ups (DMU) provide expanded support for testing and verification and consider task-dependent performance and motion. Beyond rigid body mechanics, soft tissue modelling is evolving to become standard in future DHM. When addressing advanced issues beyond the physical domain, for example anthropometry and biomechanics, modelling of human behaviours and skills is also integrated into DHM. Latest developments include a more comprehensive approach through implementing perceptual, cognitive and performance models, representing human behaviour on a non-physiologic level. Through integration of algorithms from the artificial intelligence domain, a vision of the virtual human is emerging.
Abstract:
The purpose of this paper is to provide some insights about P2M and, more specifically, to develop some thoughts about Project Management seen as a Mirror, a place for reflection… between the Mission of an organisation and its actual creation of Values (with an s: a source of value for people, organisations and society). This place is the realm of complexity, of interactions between multiple variables, each of them having a specific time horizon, occupying a specific place and playing a specific role. Before developing this paper I would like to borrow from my colleague and friend, Professor Ohara, the following passages, part of a paper to be presented at the IPMA World Congress in New Delhi later this year, in November 2005. “P2M is the Japanese version of project & program management, which is the first standard guide for education and certification developed in 2001. A specific finding of P2M is characterized by “mission driven management of projects” or a program which harness complexity of problem solving observed in the interface between technical system and business model.” (Ohara, 2005, IPMA Conference, New Delhi) “The term of “mission” is a key word in the field of corporate strategy, where it expresses raison d’être or “value of business”. It is more specifically used for expressing “the client needs” in terms of a strategic business unit. The concept of mission is deemed to be a useful tool to share essential content of value and needs in message for complex project.” (Ohara, 2005, IPMA Conference, New Delhi) “Mission is considered as a significant “metamodel representation” by several reasons. First, it represents multiple values for aspiration. The central objective of mission initiative is profiling of ideality in the future from reality, which all stakeholders are glad to accept and share. Second, it shall be within a stretch of efforts, and not beyond or outside of the realization. Though it looks like unique, it has to depict a solid foundation. The pragmatic sense of equilibrium between innovation and adaptation is required for the mission. Third, it shall imply a rough sketch for solution to critical issues for problems in reality.” (Ohara, 2005, IPMA Conference, New Delhi) The “project modeling” idea has been introduced in P2M program management: a package of three project models, “scheme”, “system” and “service”, is given as a reference-type program (Ohara, 2005, IPMA Conference, New Delhi). If these quotes apply to P2M, they are fully congruent with the results of the research undertaken, and the resulting meta-model and meta-method developed, by the CIMAP, the ESC Lille Research Centre in Project & Program Management, since the 1980s. The paper starts by questioning the common Project Management (PM) paradigm. Then, discussing the concept of Project, it argues that an alternative epistemological position should be taken to capture the very nature of the PM field. On this basis, a development of “the need of modelling to understand” is proposed, grounded in two theoretical roots. This leads to the conclusion that, in order to enable this modelling, a standard approach is necessary, but it should be understood from the perspective of the Theory of Convention in order to facilitate a situational and contextual application.
Abstract:
Here we present a sequential Monte Carlo (SMC) algorithm that can be used for any one-at-a-time Bayesian sequential design problem in the presence of model uncertainty where discrete data are encountered. Our focus is on adaptive design for model discrimination but the methodology is applicable if one has a different design objective such as parameter estimation or prediction. An SMC algorithm is run in parallel for each model and the algorithm relies on a convenient estimator of the evidence of each model which is essentially a function of importance sampling weights. Other methods for this task such as quadrature, often used in design, suffer from the curse of dimensionality. Approximating posterior model probabilities in this way allows us to use model discrimination utility functions derived from information theory that were previously difficult to compute except for conjugate models. A major benefit of the algorithm is that it requires very little problem specific tuning. We demonstrate the methodology on three applications, including discriminating between models for decline in motor neuron numbers in patients suffering from neurological diseases such as Motor Neuron disease.
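The evidence estimator referred to above, a function of the importance sampling weights, can be sketched briefly (a minimal Python/NumPy sketch with illustrative names; the resample-move steps of a full SMC sampler are omitted): for each candidate model, the incremental evidence of a new observation is the weighted average of its likelihood over that model's current particle set, and the cumulative log-evidences combined with the model priors give the posterior model probabilities that feed the discrimination utility.

    import numpy as np

    def smc_reweight(particles, weights, log_lik_fn, y_new):
        """One reweighting step of the SMC run attached to a single model.
        particles  : (N, d) array of parameter samples for this model
        weights    : (N,) normalised importance weights
        log_lik_fn : log p(y | theta) for this model (user-supplied, hypothetical)
        y_new      : newly observed data point
        Returns the updated weights and log p(y_new | past data, model),
        estimated as a weighted average of the new likelihood over the particles.
        """
        log_lik = np.array([log_lik_fn(y_new, theta) for theta in particles])
        a = np.log(weights) + log_lik
        m = a.max()
        log_incremental_evidence = m + np.log(np.exp(a - m).sum())  # log-sum-exp
        new_weights = np.exp(a - a.max())
        new_weights /= new_weights.sum()
        return new_weights, log_incremental_evidence

    def posterior_model_probabilities(cumulative_log_evidences, log_prior_probs):
        """Normalise per-model cumulative log-evidences into posterior model probabilities."""
        a = np.asarray(cumulative_log_evidences) + np.asarray(log_prior_probs)
        a = a - a.max()                  # guard against numerical overflow
        p = np.exp(a)
        return p / p.sum()

Running one such sampler per model in parallel and renormalising after each new design point yields the posterior model probabilities used to evaluate the information-theoretic discrimination utilities.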
Abstract:
Radial Hele-Shaw flows are treated analytically using conformal mapping techniques. The geometry of interest has a doubly-connected annular region of viscous fluid surrounding an inviscid bubble that is either expanding or contracting due to a pressure difference caused by injection or suction of the inviscid fluid. The zero-surface-tension problem is ill-posed for both bubble expansion and contraction, as both scenarios involve viscous fluid displacing inviscid fluid. Exact solutions are derived by tracking the location of singularities and critical points in the analytic continuation of the mapping function. We show that by treating the critical points, it is easy to observe finite-time blow-up, and the evolution equations may be written in exact form using complex residues. We present solutions that start with cusps on one interface and end with cusps on the other, as well as solutions that have the bubble contracting to a point. For the latter solutions, the bubble approaches an ellipse in shape at extinction.
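For context, the governing equations in the zero-surface-tension limit take the standard Hele-Shaw form (a generic statement of the problem class; the specific mapping functions, singularity tracking and residue calculations are those derived in the paper and are not reproduced here), written in LaTeX:

    \mathbf{u} = -\frac{b^2}{12\mu}\,\nabla p, \qquad \nabla^2 p = 0 \quad \text{in the annular fluid region},

    v_n = -\frac{b^2}{12\mu}\,\frac{\partial p}{\partial n}, \qquad p = \text{const} \quad \text{on each interface (zero surface tension)},

where b is the plate gap, \mu the viscosity, and the doubly-connected fluid region is the image of an annulus \rho(t) < |\zeta| < 1 under a time-dependent conformal map z = f(\zeta, t).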
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers that generally represent the over-60 age group. Thus, despite the fact that half of the seriously injured population comes from the age group of 30 years and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by using higher field 3T MRI imaging. However, a quantification of the signal to noise ratio (SNR) gain at the bone-soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are more prone to motion artefacts due to random movements of the subject's limbs. One of the artefacts observed is the step artefact, believed to result from random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmenting MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve on the poor contrast in articular regions and the long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of the 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using mechanical contact scans of the denuded bone. The second aim was achieved by segmenting CT and MRI scans of five ovine femora using the multilevel threshold method.
A surface geometric comparison was conducted between the CT-based, MRI-based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images obtained using identical protocols were compared by means of the SNR and contrast-to-noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and then corrected using an iterative closest point (ICP) based alignment method. The present study demonstrated that the multilevel threshold approach in combination with the threshold selection method can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single-threshold method was 0.24 mm. There was a statistically significant difference between the accuracies of the models generated by the two methods. In comparison, the Canny edge detection method generated an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm, compared with 0.18 mm for CT-based models. The differences were not statistically significant. 3T MRI improved the contrast at the bone-muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies caused by poor contrast in the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, with errors of 0.32 ± 0.02 mm relative to the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard, CT imaging.
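A minimal sketch of the multilevel thresholding and surface extraction pipeline described above (Python with NumPy/scikit-image; the three-region split along the bone axis, the threshold values and the marching-cubes surface step are illustrative assumptions rather than the thesis's exact protocol):

    import numpy as np
    from skimage import feature, measure

    def multilevel_threshold_segmentation(volume, region_slices, thresholds):
        """Segment a bone volume with a different intensity threshold per region.
        volume        : 3D array of CT/MRI intensities
        region_slices : slices along the long axis, e.g. proximal, diaphyseal
                        and distal portions (illustrative split)
        thresholds    : one intensity threshold per region
        """
        mask = np.zeros(volume.shape, dtype=bool)
        for sl, t in zip(region_slices, thresholds):
            mask[sl] = volume[sl] >= t
        return mask

    def surface_from_mask(mask, voxel_spacing):
        """Extract a triangulated bone surface from the binary mask (marching cubes)."""
        verts, faces, _, _ = measure.marching_cubes(mask.astype(np.uint8),
                                                    level=0.5, spacing=voxel_spacing)
        return verts, faces

    def canny_slice_contours(volume, sigma=2.0):
        """Alternative contour-based route: Canny edges on each 2D slice,
        to be combined into outer/inner contours for 3D reconstruction."""
        return [feature.canny(image, sigma=sigma) for image in volume]

The resulting triangulated surfaces can then be compared with a reference model, for example by nearest-neighbour surface deviation, in the same way the average deviations are reported above.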
Abstract:
Effective digital human model (DHM) simulation of automotive driver packaging ergonomics, safety and comfort depends on accurate modelling of occupant posture, which is strongly related to the mechanical interaction between human body soft tissue and flexible seat components. This paper comprises: a study investigating the component mechanical behaviour of a spring-suspended, production-level seat when indented by an SAE J826 type hard shell representing the human thigh and buttock; a model of seated human buttock shape for improved indenter design, using a multivariate representation of Australian population thigh-buttock anthropometry; and a finite-element study, based on seated MRI data, simulating the deflection of human buttock and thigh soft tissue when seated. The results of the three studies provide a description of the mechanical properties of the driver-seat interface and allow validation of future dynamic simulations involving multi-body and finite-element (FE) DHM in virtual ergonomic studies.
Abstract:
The QUT Extreme Science and Engineering program provides free hands-on workshops to schools, presented by scientists and engineers to students from prep to year 12 in their own classrooms. The workshops are tied to the school curriculum and give students access to professional quality instruments, helping to stimulate their interest in science and engineering, with the aim of generating a greater take-up of STEM-related subjects in the senior high school years. In addition to engaging students in activities, workshop presenters provide role models of both genders, helping to break down preconceived ideas of the type of person who becomes a scientist or engineer and demystifying the university experience. The Extreme Science and Engineering vans have been running for 10 years and as such demonstrate a sustainable and reproducible model for schools engagement. With funding provided through QUT’s Widening Participation Equity initiative (HEPPP funded), the vans, which averaged 120 school visits each year, increased to 150+ visits in 2010. Additionally, 100+ workshops (hands-on and career focused) were presented to students from low socio-economic status schools on the three QUT campuses in 2011. While this is designed as a long-term initiative, the short-term results have been very promising, with 3000 students attending the workshops in the first six months, and teacher and student feedback has been overwhelmingly positive.
Abstract:
Sustainability has emerged as a primary context for engineering education in the 21st Century, particularly in the sub-discipline of chemical engineering. However, there is confusion over how to go about integrating sustainability knowledge and skills systemically within bachelor degrees. This paper addresses this challenge, using a case study of an Australian chemical engineering degree to highlight important practical considerations for embedding sustainability at the core of the curriculum. The paper begins with context for considering a systematic process for rapid curriculum renewal. The authors then summarise a 2-year federally funded project, which comprised piloting a model for rapid curriculum renewal led by the chemical engineering staff. Model elements contributing to the renewal of this engineering degree and described in this paper include: industry outreach; staff professional development; attribute identification and alignment; program mapping; and curriculum and teaching resource development. Personal reflections on the progress and process of rapid curriculum renewal in sustainability, by the authors and participating engineering staff, will be presented as a means to discuss and identify methodological improvements, as well as to highlight barriers to project implementation. It is hoped that this paper will provide an example of a formalised methodology on which program reform and curriculum renewal for sustainability can be built in other higher education institutions.
Abstract:
Individual-based models describing the migration and proliferation of a population of cells frequently restrict the cells to a predefined lattice. An implicit assumption of this type of lattice based model is that a proliferative population will always eventually fill the lattice. Here we develop a new lattice-free individual-based model that incorporates cell-to-cell crowding effects. We also derive approximate mean-field descriptions for the lattice-free model in two special cases motivated by commonly used experimental setups. Lattice-free simulation results are compared to these mean-field descriptions and to a corresponding lattice-based model. Data from a proliferation experiment is used to estimate the parameters for the new model, including the cell proliferation rate, showing that the model fits the data well. An important aspect of the lattice-free model is that the confluent cell density is not predefined, as with lattice-based models, but an emergent model property. As a consequence of the more realistic, irregular configuration of cells in the lattice-free model, the population growth rate is much slower at high cell densities and the population cannot reach the same confluent density as an equivalent lattice-based model.
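The cell-to-cell crowding mechanism can be illustrated with a minimal lattice-free proliferation sweep (Python/NumPy; the hard-core exclusion rule, parameter names and update schedule are generic assumptions rather than the paper's exact algorithm): a daughter is placed one cell diameter from its parent in a uniformly random direction, and the division is aborted if the daughter would lie within one diameter of any other cell.

    import numpy as np

    def proliferation_sweep(cells, diameter, prolif_prob, rng):
        """One proliferation sweep for a lattice-free population of point cells.
        cells       : (N, 2) array of cell centre coordinates
        diameter    : exclusion distance; daughters closer than this to any
                      other cell are aborted (cell-to-cell crowding)
        prolif_prob : per-cell proliferation probability per time step
        """
        accepted = []
        for i, parent in enumerate(cells):
            if rng.random() >= prolif_prob:
                continue
            angle = rng.uniform(0.0, 2.0 * np.pi)
            daughter = parent + diameter * np.array([np.cos(angle), np.sin(angle)])
            others = np.delete(cells, i, axis=0)        # exclude the parent itself
            crowded = (others.size > 0 and
                       np.linalg.norm(others - daughter, axis=1).min() < diameter)
            crowded = crowded or any(np.linalg.norm(daughter - d) < diameter
                                     for d in accepted)
            if not crowded:
                accepted.append(daughter)
        return np.vstack([cells] + accepted) if accepted else cells

    # Usage sketch: rng = np.random.default_rng(0); cells = rng.uniform(0, 100, (50, 2));
    # then repeatedly call proliferation_sweep(cells, diameter=20.0, prolif_prob=0.05, rng=rng).

Because aborted divisions become more frequent as the local density rises, the population growth rate slows at high density and the confluent density emerges from the exclusion rule rather than being imposed by a lattice.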
Abstract:
Rapid prototyping environments can speed up research on visual control algorithms. We have designed and implemented a software framework for fast prototyping of visual control algorithms for Micro Aerial Vehicles (MAVs). We have applied a combination of a proxy-based network communication architecture and a custom Application Programming Interface (API). This allows multiple experimental configurations, such as drone swarms or distributed processing of a drone's video stream. Currently, the framework supports a low-cost MAV, the Parrot AR.Drone. Real tests have been performed on this platform, and the results show comparatively low figures for the extra communication delay introduced by the framework, while it adds new functionality and flexibility to the selected drone. The implementation is open-source and can be downloaded from www.vision4uav.com/?q=VC4MAV-FW
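As a rough illustration of the proxy idea only (not the actual VC4MAV-FW API; all class names, ports and message formats below are hypothetical), a proxy process can own the single link to the vehicle while any number of controller or logging processes connect to it, which is what enables configurations such as swarms or distributed processing of the video stream:

    import asyncio

    class DroneProxy:
        """Fans one vehicle link out to many clients (hypothetical sketch)."""

        def __init__(self):
            self.clients = set()

        async def handle_client(self, reader, writer):
            self.clients.add(writer)
            try:
                while line := await reader.readline():
                    await self.send_to_drone(line)        # forward command upstream
            finally:
                self.clients.discard(writer)

        async def send_to_drone(self, command: bytes):
            # placeholder for the real vehicle link (e.g. UDP to the AR.Drone)
            print("to drone:", command.decode().strip())

        async def broadcast_telemetry(self, message: bytes):
            # push the latest telemetry to every connected client
            for w in list(self.clients):
                w.write(message)
                await w.drain()

    async def main():
        proxy = DroneProxy()
        server = await asyncio.start_server(proxy.handle_client, "127.0.0.1", 9000)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())

An extra hop of this kind is what introduces the additional communication delay that the reported tests quantify.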
Abstract:
This study describes the design of a biphasic scaffold composed of a Fused Deposition Modeling scaffold (bone compartment) and an electrospun membrane (periodontal compartment) for periodontal regeneration. In order to achieve simultaneous alveolar bone and periodontal ligament regeneration, a cell-based strategy was carried out by combining osteoblast culture in the bone compartment with the placement of multiple periodontal ligament (PDL) cell sheets on the electrospun membrane. In vitro data showed that the osteoblasts formed mineralized matrix in the bone compartment after 21 days in culture and that the PDL cell sheet harvesting did not induce significant cell death. The cell-seeded biphasic scaffolds were placed onto a dentin block and implanted for 8 weeks in an athymic rat subcutaneous model. The scaffolds were analyzed by μCT, immunohistochemistry and histology. In the bone compartment, more intense ALP staining was obtained following seeding with osteoblasts, confirming the μCT results, which showed higher mineralization density for these scaffolds. A thin mineralized cementum-like tissue was deposited on the dentin surface for the scaffolds incorporating the multiple PDL cell sheets, as observed by H&E and Azan staining. These scaffolds also demonstrated better attachment to the dentin surface, in contrast to the lack of attachment when no cell sheets were used. In addition, immunohistochemistry revealed the presence of CEMP1 protein at the interface with the dentin. These results demonstrated that the combination of multiple PDL cell sheets and a biphasic scaffold allows the simultaneous delivery of the cells necessary for in vivo regeneration of alveolar bone, periodontal ligament and cementum.
Abstract:
Process-aware information systems, ranging from generic workflow systems to dedicated enterprise information systems, use work-lists to offer so-called work items to users. In real scenarios, users can be confronted with a very large number of work items that stem from multiple cases of different processes. In this jungle of work items, users may find it hard to choose the right item to work on next. The system cannot autonomously decide which is the right work item, since the decision is also dependent on conditions that are somehow outside the system. For instance, what is “best” for an organisation should be mediated with what is “best” for its employees. Current work-list handlers show work items as a simple sorted list and therefore do not provide much decision support for choosing the right work item. Since the work-list handler is the dominant interface between the system and its users, it is worthwhile to provide an intuitive graphical interface that uses contextual information about work items and users to provide suggestions about prioritisation of work items. This paper uses the so-called map metaphor to visualise work items and resources (e.g., users) in a sophisticated manner. Moreover, based on distance notions, the work-list handler can suggest the next work item by considering different perspectives. For example, urgent work items of a type that suits the user may be highlighted. The underlying map and distance notions may be of a geographical nature (e.g., a map of a city or office building), but may also be based on process designs, organisational structures, social networks, due dates, calendars, etc. The framework proposed in this paper is generic and can be applied to any process-aware information system. Moreover, in order to show its practical feasibility, the paper discusses a full-fledged implementation developed in the context of the open-source workflow environment YAWL, together with two real examples stemming from two very different scenarios. The results of an initial usability evaluation of the implementation are also presented, which provide a first indication of the validity of the approach.
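The distance-based suggestion mechanism can be sketched generically (Python; the helper names, the user object and the linear weighting are illustrative assumptions, not the YAWL implementation): each distance notion maps a (work item, user) pair to a non-negative number, and the handler recommends the item with the smallest weighted combination.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Callable, List

    @dataclass
    class WorkItem:
        item_id: str
        task_type: str
        due: datetime

    Distance = Callable[[WorkItem, object], float]

    def suggest_next(items: List[WorkItem], user,
                     distances: List[Distance], weights: List[float]) -> WorkItem:
        """Rank work items for a user by a weighted sum of distance notions.
        A distance notion may be geographical, process-based, organisational,
        social or temporal (e.g. time until the due date); a smaller combined
        distance means a higher priority.
        """
        def score(item: WorkItem) -> float:
            return sum(w * d(item, user) for w, d in zip(weights, distances))
        return min(items, key=score)

    # Example distance notions (illustrative; assume `user` has a `skills` set):
    hours_to_due = lambda item, user: max(
        (item.due - datetime.now()).total_seconds(), 0.0) / 3600.0
    skill_gap = lambda item, user: 0.0 if item.task_type in user.skills else 10.0

In the map metaphor, the same scores can instead be rendered visually, for example as the size or colour of a work item dot on the chosen map, leaving the final choice to the user.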
Abstract:
To achieve the ultimate goal of periodontal tissue engineering, it is of great importance to develop bioactive scaffolds that can stimulate the osteogenic/cementogenic differentiation of periodontal ligament cells (PDLCs) for the favorable regeneration of alveolar bone, root cementum and periodontal ligament. Strontium (Sr) and Sr-containing biomaterials have been found to induce osteoblast activity. However, there is no systematic report on the interaction between Sr or Sr-containing biomaterials and PDLCs for periodontal tissue engineering. The aims of this study were to prepare Sr-containing mesoporous bioactive glass (Sr-MBG) scaffolds and to investigate whether the addition of Sr could stimulate the osteogenic/cementogenic differentiation of PDLCs in a tissue engineering scaffold system. The composition, microstructure and mesopore properties (specific surface area, nano-pore volume and nano-pore distribution) of Sr-MBG scaffolds were characterized. The proliferation, alkaline phosphatase (ALP) activity and osteogenesis/cementogenesis-related gene expression (ALP, Runx2, Col I, OPN and CEMP1) of PDLCs on different kinds of Sr-MBG scaffolds were systematically investigated. The results show that Sr plays an important role in influencing the mesoporous structure of MBG scaffolds: high Sr contents decreased the ordering of the mesopores as well as their surface area and pore volume. Sr2+ ions could be released from Sr-MBG scaffolds in a controlled way. The incorporation of Sr into MBG scaffolds significantly stimulated the ALP activity and osteogenesis/cementogenesis-related gene expression of PDLCs. Furthermore, Sr-MBG scaffolds in a simulated body fluid environment still maintained excellent apatite-mineralization ability. The study suggests that the incorporation of Sr into MBG scaffolds is a viable way to stimulate the biological response of PDLCs. Sr-MBG scaffolds are a promising bioactive material for periodontal tissue engineering applications.