921 results for PACS: C6170K knowledge engineering techniques
Abstract:
With the growth of the multinational corporation (MNC) has come the need to understand how parent companies transfer knowledge to, and manage the operations of, their subsidiaries. This is of particular interest to manufacturing companies transferring their operations overseas. Japanese companies in particular have been pioneering in the development of techniques such as Kaizen, and elements of the Toyota Production System (TPS) such as Kanban, which can be useful tools for transferring the ethos of Japanese manufacturing and maintaining quality and control in overseas subsidiaries. Much has been written about the process of transferring Japanese manufacturing techniques, but much less is understood about how the subsidiaries themselves – which are required to make use of such techniques – actually acquire and incorporate them into their operations. This research therefore takes the perspective of the subsidiary in examining how knowledge of manufacturing techniques is transferred from the parent company to the subsidiary. There is clearly a need to take a practice-based view to understand how local managers and operatives incorporate this knowledge into their working practices. A particularly relevant theme is how subsidiaries both replicate and adapt knowledge from their parents, and the circumstances in which replication or adaptation occurs. However, there is a lack of research that takes an in-depth look at these processes from the perspective of the participants themselves. This is particularly important as much of the knowledge literature argues that knowledge is best viewed as enacted and learned in practice – and therefore transferred in person – rather than by the transfer of abstract and de-contextualised information. What is needed, therefore, is further research that makes an in-depth examination of what happens at the subsidiary level for this transfer process to occur.
In-depth qualitative research was therefore conducted in the subsidiary of a Japanese multinational, Gambatte Corporation, involving three main manufacturing initiatives (or philosophies), namely ‘TPS’, ‘TPM’ and ‘TS’. The case data were derived from 52 in-depth interviews with project members, moderate-participant observations and documentation, and were presented and analysed in episode format. This study contributes to our understanding of knowledge transfer in relation to the approaches and circumstances of adaptation and replication of knowledge within the subsidiary, how the whole process is developed, and also how ‘innovation’ takes place. It further shows that the process of knowledge transfer can be explained as a process of Reciprocal Provider-Learner Exchange, which can be linked to Experiential Learning Theory.
Abstract:
We consider the problem of stable determination of a harmonic function from knowledge of the solution and its normal derivative on a part of the boundary of the (bounded) solution domain. The alternating method is a procedure to generate an approximation to the harmonic function from such Cauchy data and we investigate a numerical implementation of this procedure based on Fredholm integral equations and Nyström discretization schemes, which makes it possible to perform a large number of iterations (millions) with minor computational cost (seconds) and high accuracy. Moreover, the original problem is rewritten as a fixed point equation on the boundary, and various other direct regularization techniques are discussed to solve that equation. We also discuss how knowledge of the smoothness of the data can be used to further improve the accuracy. Numerical examples are presented showing that accurate approximations of both the solution and its normal derivative can be obtained with much less computational time than in previous works.
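The Nyström approach mentioned above can be illustrated on a toy problem (in Python, not from the paper): the second-kind Fredholm equation u(x) = x + ∫₀¹ x·t·u(t) dt, whose exact solution is u(x) = 1.5x. The kernel, right-hand side and node count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Nystrom discretization of a second-kind Fredholm equation
#   u(x) = f(x) + int_0^1 K(x,t) u(t) dt
# Toy problem: K(x,t) = x*t, f(x) = x, exact solution u(x) = 1.5*x.
n = 8
nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes on [-1, 1]
t = 0.5 * (nodes + 1.0)                               # map to [0, 1]
w = 0.5 * weights

K = np.outer(t, t)                 # K(x_i, t_j) = x_i * t_j
A = np.eye(n) - K * w              # discrete system (I - K W) u = f
u = np.linalg.solve(A, t)          # f(x_i) = x_i

print(np.max(np.abs(u - 1.5 * t)))  # error at the nodes, ~ machine precision
```

Because the quadrature is exact for the (polynomial) integrand here, the discrete solution matches the exact one to machine precision; for general kernels the error decays with the quadrature order, which is what makes millions of cheap, accurate iterations feasible.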
Abstract:
With the growth of the multinational corporation (MNC) has come the need to understand how parent companies transfer knowledge to, and manage the operations of, their subsidiaries. This is of particular interest to manufacturing companies transferring their operations overseas. Japanese companies in particular have been pioneering in this regard, with techniques such as the Toyota Production System (TPS) for transferring the ethos of Japanese manufacturing and maintaining quality and control in overseas subsidiaries. A great deal has been written about the process of transferring Japanese manufacturing techniques, but much less is understood about how the subsidiaries themselves, which are required to make use of such techniques, actually acquire and incorporate them into their operations. The research on which this paper is based therefore examines how, from the perspective of the subsidiary, knowledge of manufacturing techniques is transferred from the parent company. There is clearly a need to take a practice-based view to understand how local managers and operatives incorporate knowledge about manufacturing techniques into their working practices. In-depth qualitative research was therefore conducted in the subsidiary of a Japanese multinational, Denso Corporation, involving three main manufacturing initiatives (or philosophies), namely ‘TPS’, ‘TPM’ and ‘TS’. The case data were derived from 52 in-depth interviews with project members, moderate-participant observations and documentation. The aim of this paper is to present the preliminary findings from the case analyses. The research contributes to our understanding of knowledge transfer in relation to the circumstances under which the subsidiary selects between adaptation and replication of knowledge from its parent.
In particular, this understanding relates to transfer across different flows and levels in the organisational hierarchy, how the whole process is managed, and also how modification takes place.
Abstract:
When faced with the task of designing and implementing a new self-aware and self-expressive computing system, researchers and practitioners need a set of guidelines on how to use the concepts and foundations developed in the Engineering Proprioception in Computing Systems (EPiCS) project. This report provides such guidelines on how to design self-aware and self-expressive computing systems in a principled way. We have documented different categories of self-awareness and self-expression levels using architectural patterns. We have also documented common architectural primitives, their possible candidate techniques, and attributes for architecting self-aware and self-expressive systems. Drawing on the knowledge obtained from these investigations, we propose a pattern-driven methodology for engineering self-aware and self-expressive systems that assists in utilising the patterns and primitives during design. The methodology contains detailed guidance for making decisions with respect to the possible design alternatives, providing a systematic way to build self-aware and self-expressive systems. We then evaluated the methodology qualitatively and quantitatively using two case studies. The results reveal that our pattern-driven methodology covers the main aspects of engineering self-aware and self-expressive systems, and that the resulting systems perform significantly better than non-self-aware systems.
Abstract:
An operating model of knowledge quantum engineering for identification and prognostic decision-making under conditions of α-indeterminacy is proposed in this article. The synthesized operating model solves three basic tasks: the At-task, to formalize tk-knowledge; the Bt-task, to recognize (identify) objects according to observed results; and the Ct-task, to extrapolate (prognosticate) the observed results. Operative derivation of identification and prognostic decisions using authentic different-level algorithmic knowledge quanta (the tRAKZ-method) assumes the synthesis of an authentic knowledge quantum database (BtkZ) using an induction operator as a system of implicative laws; a deduction operator is then applied to the observed tk-knowledge and the BtkZ to derive identification or prognostic decisions in the form of new tk-knowledge.
Abstract:
Many engineers currently in professional practice will have gained a degree-level qualification which involved studying a curriculum heavy with mathematics and engineering science. While this knowledge is vital to the engineering design process, so too is manufacturing knowledge if the resulting designs are to be both technically and commercially viable.
The methodology advanced by the CDIO Initiative aims to improve engineering education by teaching in the context of Conceiving, Designing, Implementing and Operating products, processes or systems. A key element of this approach is the use of Design-Build-Test (DBT) projects as the core of an integrated curriculum. This approach facilitates the development of professional skills as well as the application of technical knowledge and skills developed in other parts of the degree programme. It also changes the role of the lecturer to that of facilitator/coach in an active learning environment in which students gain concrete experiences that support their development.
The case study herein describes Mechanical Engineering undergraduate student involvement in the manufacture and assembly of concept and functional prototypes of a folding bicycle.
Abstract:
This keynote presentation will report some of our research work and experience on the development and applications of relevant methods, models, systems and simulation techniques in support of different types and various levels of decision making for business, management and engineering. In particular, the following topics will be covered:
- Modelling, multi-agent-based simulation and analysis of the allocation management of carbon dioxide emission permits in China (Nanfeng Liu & Shuliang Li)
- Agent-based simulation of the dynamic evolution of enterprise carbon assets (Yin Zeng & Shuliang Li)
- A framework & system for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps: a big data perspective (Jin Xu, Zheng Li, Shuliang Li & Yanyan Zhang)
- Open innovation: intelligent model, social media & complex adaptive system simulation (Shuliang Li & Jim Zheng Li)
- A framework, model and software prototype for modelling and simulation of deshopping behaviour and how companies respond (Shawkat Rahman & Shuliang Li)
- Integrating multiple agents, simulation, knowledge bases and fuzzy logic for international marketing decision making (Shuliang Li & Jim Zheng Li)
- A Web-based hybrid intelligent system for combined conventional, digital, mobile, social media and mobile marketing strategy formulation (Shuliang Li & Jim Zheng Li)
- A hybrid intelligent model for Web & social media dynamics, and evolutionary and adaptive branding (Shuliang Li)
- A hybrid paradigm for modelling, simulation and analysis of brand virality in social media (Shuliang Li & Jim Zheng Li)
- Network configuration management: attack paradigms and architectures for computer network survivability (Tero Karvinen & Shuliang Li)
Abstract:
One of the biggest challenges that contaminant hydrogeology is facing is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretative error, calibration accuracy, parameter sensitivity and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We provide a quantification of the environmental impact and, given the incomplete knowledge of hydrogeological parameters, we evaluate which are the most influential and therefore require greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures and are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of uncertain parameters, and achieve considerable savings in computational time while maintaining acceptable accuracy.
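A minimal sketch of variance-based GSA with Monte Carlo Sobol indices is shown below, using the standard Ishigami benchmark as a stand-in for the transport model. The function, sample size and Saltelli-style pick-freeze estimator are illustrative assumptions; the study itself uses meta-models to reduce the cost of each model evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ishigami(X, a=7.0, b=0.1):
    # Standard GSA benchmark; stands in here for the transport model.
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent input samples
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))   # total output variance

S = []
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                   # "pick-freeze": swap column i
    # Saltelli (2010) first-order Sobol index estimator
    S.append(np.mean(fB * (ishigami(AB) - fA)) / var)

print(np.round(S, 3))  # analytic first-order indices: 0.314, 0.442, 0.000
```

Replacing `ishigami` with a trained meta-model of the transport simulator would give the cost reduction the abstract describes, since each additional Monte Carlo batch then costs only a surrogate evaluation.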
Abstract:
In recent decades, the possibility of generating plasma at atmospheric pressure gave rise to a new emerging field called plasma medicine, which deals with the application of cold atmospheric pressure plasmas (CAPs) or plasma-activated solutions on or in the human body for therapeutic effects. Thanks to a blend of synergistic biologically active agents and biocompatible temperatures, different CAP sources have been successfully employed in many different biomedical applications such as dentistry, dermatology, wound healing, cancer treatment and blood coagulation. Although their effectiveness has been verified in the above-mentioned biomedical applications, over the years researchers throughout the world have described numerous CAP sources which remain laboratory devices, not optimized for the specific application. In this perspective, the aim of this dissertation was the development and optimization of techniques and design parameters for the engineering of CAP sources for different plasma medicine applications, among them cancer treatment, dentistry and bioaerosol decontamination. In the first section, the discharge electrical parameters, the behavior of the plasma streamers, and the liquid- and gas-phase chemistry of a multiwire device for the treatment of liquids were characterized. Moreover, two different plasma-activated liquids were used for the treatment of Epithelial Ovarian Cancer cells and fibroblasts to assess their selectivity. In the second section, in accordance with the most important standard regulations for medical devices, the realization steps of an easy-to-handle Plasma Gun, expected to be mounted on a tabletop device for dental clinical applications, are reported. In the third section, in relation to the current COVID-19 pandemic, the first steps in the design, realization and optimization of a dielectric barrier discharge source suitable for the treatment of different types of bioaerosol are reported.
Abstract:
Sandy coasts represent vital areas whose preservation and maintenance also involve economic and tourist interests. Moreover, these dynamic environments undergo erosion at different levels depending on their specific characteristics. For this reason, defence interventions are commonly realized by combining engineering solutions with management policies to evaluate their effects over time. Monitoring activities represent the fundamental instrument for obtaining deep knowledge of the investigated phenomenon. Thanks to technological development, several possibilities are available both in terms of geomatic surveying techniques and processing tools, allowing high performance and accuracy to be reached. Nevertheless, when the littoral definition includes both emerged and submerged beaches, several issues have to be considered. Therefore, the geomatic surveys and all the following steps need to be calibrated according to the individual application, with the reference system, accuracy and spatial resolution as primary aspects. This study provides an evaluation of the available geomatic techniques, processing approaches and derived products, aiming at optimising the entire workflow of coastal monitoring by adopting an accuracy-efficiency trade-off. The presented analyses highlight the balance point at which an increase in performance becomes an additional value for the obtained products while ensuring proper data management. This perspective can represent a helpful instrument for properly planning monitoring activities according to the specific purposes of the analysis. Finally, the primary uses of the acquired and processed data in monitoring contexts are presented, also considering possible applications of numerical modelling as a supporting tool.
Moreover, the theme of coastal monitoring has been addressed throughout this thesis from a practical point of view, linking it to the activities performed by Arpae (the regional agency for prevention, environment and energy of Emilia-Romagna). Indeed, the Adriatic coast of Emilia-Romagna, where sandy beaches particularly exposed to erosion are present, has been chosen as the case study for all the analyses and considerations.
Abstract:
The project aims to gather an understanding of additive manufacturing (AM) and other manufacturing 4.0 techniques with an eye toward industrialization. First, the internal material anisotropy of elements created with the most economically feasible technique, fused deposition modelling (FDM), was established. An understanding of the main drivers of variability in AM was developed, with a focus on achieving internal material isotropy. Subsequently, a technique for deposition parameter optimization was presented, and the procedure was further tested on other polymeric materials and composites. A replicability assessment by means of technology 4.0 was proposed, and subsequent industry findings revealed the need to develop a process that demonstrates how to re-engineer designs in order to obtain the best results with AM processing. The final study applies the Industrial Design and Structure Method (IDES), together with all the knowledge previously gathered, to fully re-engineer a product using tools from the 4.0 era, from product feasibility studies through CAE (FEM) analysis and CAM (DfAM). These results would help make AM and FDM processes a viable option, combined with composite technologies, to achieve a reliable, cost-effective manufacturing method that could also be used for mass-market industry applications.
Abstract:
Nowadays, digital computer systems and networks are the main engineering tools, being used in the planning, design, operation and control of all sizes of building, transportation, machinery, business and life-maintaining devices. Consequently, computer viruses have become one of the most important sources of uncertainty, contributing to decreased reliability of vital activities. Many antivirus programs have been developed, but they are limited to detecting and removing infections based on previous knowledge of the virus code. In spite of having good adaptation capability, these programs work just as vaccines do against diseases and are not able to prevent new infections based on the network state. Here, an attempt to model computer virus propagation dynamics relates it to other notable events occurring in the network, permitting preventive policies to be established for network management. Data from three different viruses were collected on the Internet, and two different identification techniques, autoregressive and Fourier analyses, were applied, showing that it is possible to forecast the dynamics of a new virus propagation by using data collected from other viruses that formerly infected the network. Copyright (c) 2008 J. R. C. Piqueira and F. B. Cesar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
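The autoregressive identification step can be sketched as follows. The synthetic "infection count" series (a damped oscillation plus noise, qualitatively like an outbreak wave), the model order and the forecast horizon are illustrative assumptions, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a collected virus time series:
# a damped outbreak wave plus measurement noise.
t = np.arange(300)
y = np.exp(-t / 120) * np.sin(2 * np.pi * t / 50) + 0.02 * rng.standard_normal(300)

# Fit an AR(p) model by least squares: y[t] ~ sum_k c[k] * y[t-1-k].
p = 10
X = np.column_stack([y[p - 1 - k: len(y) - 1 - k] for k in range(p)])
coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)

# Iterated one-step-ahead forecast for the next 20 samples,
# feeding each prediction back into the history.
hist = list(y)
forecast = []
for _ in range(20):
    nxt = float(np.dot(coef, hist[-1:-p - 1:-1]))  # most recent lag first
    forecast.append(nxt)
    hist.append(nxt)
```

The same fitting step applied to data from a previously observed virus yields coefficients that can be reused to forecast a new propagation wave, which is the reuse idea the abstract describes.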
Abstract:
Age-related changes in running kinematics have been reported in the literature using classical inferential statistics. However, this approach has been hampered by the increased number of biomechanical gait variables reported and, subsequently, by the lack of differences presented in these studies. Data mining techniques have been applied in recent biomedical studies to solve this problem using a more general approach. In the present work, we re-analyzed lower extremity running kinematic data of 17 young and 17 elderly male runners using the Support Vector Machine (SVM) classification approach. In total, 31 kinematic variables were extracted to train the classification algorithm and test the generalized performance. The results revealed different accuracy rates across the three kernel methods adopted in the classifier, with the linear kernel performing best. A subsequent forward feature selection algorithm demonstrated that, with only six features, the linear-kernel SVM achieved a 100% classification rate, showing that these features provide powerful combined information to distinguish the age groups. The results of the present work demonstrate the potential of applying this approach to improve knowledge about age-related differences in running gait biomechanics and encourage the use of the SVM in other clinical contexts. (C) 2010 Elsevier Ltd. All rights reserved.
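The linear-kernel SVM with forward feature selection can be sketched as below, on synthetic data shaped like the study's 34 runners with 31 variables. This is a hedged illustration: scikit-learn's `SequentialFeatureSelector` stands in for the authors' selection algorithm, and the data, seed and hyperparameters are assumptions, not the study's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 34 "runners", 31 "kinematic variables",
# two "age groups", 6 truly informative features.
X, y = make_classification(n_samples=34, n_features=31, n_informative=6,
                           n_redundant=0, random_state=0)

clf = SVC(kernel="linear", C=1.0)

# Greedy forward selection down to six features, scored by cross-validation.
selector = SequentialFeatureSelector(clf, n_features_to_select=6,
                                     direction="forward", cv=5)
selector.fit(X, y)
X_sel = selector.transform(X)

acc = cross_val_score(clf, X_sel, y, cv=5).mean()
print(sorted(np.flatnonzero(selector.get_support())), round(acc, 2))
```

On the study's real data this procedure reportedly reached 100% classification with six features; on synthetic data the exact accuracy will vary with the random seed.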
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, the proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor-controlled series capacitors placed in the New England/New York benchmark test system, aiming at improving the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
Abstract:
Recently, semi-empirical models to estimate the flow boiling heat transfer coefficient, saturated CHF and pressure drop in micro-scale channels have been proposed. Most of the models were developed based on elongated bubbles and annular flows, in view of the fact that these flow patterns are predominant in smaller channels. In these models, the liquid film thickness plays an important role, a fact which emphasizes that accurate measurement of the liquid film thickness is key to validating them. On the other hand, several techniques have been successfully applied to measure liquid film thicknesses during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, it seems that there is no conclusive result describing a successful technique capable of measuring dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described and the main difficulties related to their use in micro-scale systems are identified. Based on this discussion, the most promising methods for measuring dynamic liquid film thickness in micro-scale channels are identified. (C) 2009 Elsevier Inc. All rights reserved.