958 results for Design and manufacturing integration
Abstract:
Design for Manufacturing (DFM) is an integral methodology in product development, applied from the concept development phase onward with the aim of improving manufacturing productivity while maintaining product quality. Whereas Design for Assembly (DFA) focuses on eliminating parts or combining them with other components (Boothroyd, Dewhurst and Knight, 2002), which in most cases means performing a function and a manufacturing operation in a simpler way, DFM follows a more holistic approach. In DFM, the considerable background work required during the conceptual phase is compensated for by a shortening of later development phases. Current DFM projects normally apply an iterative, step-by-step approach and eventually transfer results to the development team. Although DFM has been a well-established methodology for about 30 years, a Fraunhofer IAO study from 2009 found that it was still one of the key challenges for the German manufacturing industry. A new, knowledge-based approach to DFM, which eliminates steps of the conventional DFM process, was introduced in Paul and Al-Dirini (2009). The concept focuses on a concurrent engineering process between the manufacturing engineering and product development systems, whereas current product realization cycles depend on a rigorous back-and-forth examine-and-correct approach to ensure that any proposed design complies with the DFM rules and guidelines adopted by the company. The key to achieving reductions is to incorporate DFM considerations into the early stages of the design process. A case study of DFM application in an automotive powertrain engineering environment is presented. It is argued that a DFM database needs to be interfaced with the CAD/CAM software, which will constrain designers to the DFM criteria; consequently, a notable reduction in development cycles can be achieved. The case study follows the hypothesis that current DFM methods do not improve product design in the manner claimed by the DFM method.
The critical task was to identify DFA/DFM recommendations or program actions that appear repeatedly in different sources. Repetitive DFM measures are identified and analyzed, and it is shown how a modified DFM process can mitigate the shortcomings of a not fully integrated DFM approach.
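The proposed coupling of a DFM database to the CAD/CAM software can be pictured with a small sketch. The rule names, limits, and feature representation below are illustrative assumptions, not taken from the case study:

```python
# Hypothetical sketch of a DFM rule base queried at design time, so that a
# CAD/CAM front end can flag features violating the company's DFM criteria.
# Rule names, limits, and the feature dictionary are illustrative assumptions.

DFM_RULES = {
    "min_wall_thickness_mm": 2.0,
    "max_hole_depth_to_diameter": 4.0,   # for drilled holes
    "min_draft_angle_deg": 1.0,          # for cast parts
}

def check_feature(feature):
    """Return the names of DFM rules violated by one design feature."""
    violations = []
    if feature.get("wall_thickness_mm", float("inf")) < DFM_RULES["min_wall_thickness_mm"]:
        violations.append("min_wall_thickness_mm")
    hole = feature.get("hole")
    if hole and hole["depth_mm"] / hole["diameter_mm"] > DFM_RULES["max_hole_depth_to_diameter"]:
        violations.append("max_hole_depth_to_diameter")
    if feature.get("draft_angle_deg", 90.0) < DFM_RULES["min_draft_angle_deg"]:
        violations.append("min_draft_angle_deg")
    return violations

part = {"wall_thickness_mm": 1.5, "hole": {"depth_mm": 50.0, "diameter_mm": 10.0}}
print(check_feature(part))  # ['min_wall_thickness_mm', 'max_hole_depth_to_diameter']
```

Running such checks inside the design environment, rather than in a later review loop, is what replaces the back-and-forth examine-and-correct cycle described above.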
Abstract:
Creative Industries in China provides a fresh account of China’s emerging commercial cultural sector. The author shows how developments in Chinese art, design and media industries are reflected in policy, in market activity, and in grassroots participation. Never has the attraction of being a media producer, an artist, or a designer in China been so enticing. National and regional governments offer financial incentives; consumption of cultural goods and services has increased; creative workers from Europe, North America and Asia are moving to Chinese cities; culture is increasingly positioned as a pillar industry. But what does this mean for our understanding of Chinese society? Can culture be industrialised following the low-cost model of China’s manufacturing economy? Is the national government really committed to social liberalisation? This engaging book is a valuable resource for students and scholars interested in social change in China. It draws on leading Chinese scholarship together with insights from global media studies, economic geography and cultural studies.
Abstract:
A comprehensive one-dimensional meanline design approach for radial inflow turbines is described in the present work. An original code was developed in Python that takes a novel approach to the automatic selection of feasible machines based on pre-defined performance or geometry characteristics for a given application. It comprises a brute-force search algorithm that traverses the entire search space defined by key non-dimensional parameters and rotational speed. In this study, an in-depth analysis and subsequent implementation of relevant loss models, as well as selection criteria for radial inflow turbines, is addressed. Comparisons with previously published designs and other available codes, for both real and theoretical test cases, showed good agreement. The presented approach was found to be valid, and the model proved a useful tool for the preliminary design and performance estimation of radial inflow turbines, enabling its integration with other thermodynamic cycle analysis and three-dimensional blade design codes.
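The brute-force traversal of the non-dimensional search space can be sketched as follows. The efficiency surrogate, parameter names (flow coefficient, loading coefficient), grids, and feasibility threshold are toy placeholders standing in for the code's actual meanline loss models and selection criteria:

```python
import itertools

def efficiency_estimate(phi, psi):
    """Toy surrogate for total-to-static efficiency as a function of flow
    coefficient phi and loading coefficient psi. Illustrative only: the
    actual code evaluates full meanline loss models at each design point."""
    return 0.9 - 2.0 * (phi - 0.25) ** 2 - 1.5 * (psi - 0.9) ** 2

def brute_force_search(phi_grid, psi_grid, min_eta=0.85):
    """Traverse the whole grid and keep designs meeting the efficiency floor."""
    feasible = []
    for phi, psi in itertools.product(phi_grid, psi_grid):
        eta = efficiency_estimate(phi, psi)
        if eta >= min_eta:
            feasible.append((phi, psi, round(eta, 4)))
    return feasible

phis = [0.15 + 0.05 * i for i in range(5)]   # flow coefficients 0.15 .. 0.35
psis = [0.7 + 0.1 * j for j in range(5)]     # loading coefficients 0.7 .. 1.1
designs = brute_force_search(phis, psis)
best = max(designs, key=lambda d: d[2])
print(best)
```

The real search would add rotational speed as a third axis and reject geometrically infeasible machines, but the exhaustive-sweep-then-filter structure is the same.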
Abstract:
The main objective of this paper is to describe the development of a remote-sensing airborne air sampling system for Unmanned Aerial Systems (UAS), providing the capability to detect particle and gas concentrations in real time over remote locations. The design of the air sampling methodology started by defining the system architecture and then selecting and integrating each subsystem. A multifunctional air sampling instrument, capable of simultaneous measurement of particle and gas concentrations, was modified and integrated with ARCAA’s Flamingo UAS platform and communications protocols. As a result of the integration process, a system capable of both real-time geo-location monitoring and indexed-link sampling was obtained. Wind tunnel tests were conducted to evaluate the performance of the air sampling instrument under controlled non-stationary conditions at the typical operational velocities of the UAS platform. Once the fully operational remote air sampling system was obtained, the problem of mission design was analyzed through the simulation of different scenarios. Finally, flight tests of the complete air sampling system were conducted to check the dynamic characteristics of the UAS carrying the air sampling system and to prove its capability to perform an air sampling mission following a specific flight path.
Abstract:
Wireless networked control systems (WNCSs) have been widely used in manufacturing and industrial processing over the last few years. They provide real-time control with a unique characteristic: periodic traffic. These systems have time-critical requirements. Due to current wireless mechanisms, WNCS performance suffers from long, time-varying delays, packet dropout, and inefficient channel utilization. Current wirelessly networked applications such as WNCSs are designed on the basis of a layered architecture, whose features constrain the performance of these demanding applications. Numerous efforts have attempted to use cross-layer design (CLD) approaches to improve the performance of various networked applications. However, existing research rarely considers large-scale networks and congested network conditions in WNCSs, and there is a lack of discussion on how to apply CLD approaches in WNCSs. This thesis proposes a cross-layer design methodology to address the timeliness of periodic traffic and to improve the efficiency of channel utilization in WNCSs. The proposed CLD is characterized by the measurement of the underlying network condition, the classification of the network state, and the adjustment of the sampling period between sensors and controllers. This period adjustment maintains the minimum allowable sampling period while maximizing control performance. Extensive simulations are conducted using the network simulator NS-2 to evaluate the performance of the proposed CLD. The comparative studies cover communications both with and without the proposed CLD. The results show that the proposed CLD is capable of fulfilling the timeliness requirement under congested network conditions, and also improves channel utilization efficiency and the proportion of effective data in WNCSs.
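The three steps of the proposed CLD (measure the network condition, classify the state, adjust the sampling period) can be sketched as follows. The metrics, thresholds, and period values are assumptions for illustration, not parameters from the thesis:

```python
# Minimal sketch of cross-layer sampling-period adaptation: link-layer
# measurements feed a coarse state classifier, which in turn selects the
# application-layer sampling period. All numeric values are assumed.

T_MIN, T_MAX = 0.01, 0.10   # seconds: fastest / slowest allowed sampling

def classify(loss_rate, mean_delay):
    """Map measured channel metrics to a coarse network state."""
    if loss_rate < 0.01 and mean_delay < 0.005:
        return "idle"
    if loss_rate < 0.05:
        return "moderate"
    return "congested"

def next_period(state):
    """Shorter period (better control) when idle; back off when congested."""
    return {"idle": T_MIN, "moderate": 0.05, "congested": T_MAX}[state]

print(next_period(classify(0.002, 0.003)))  # 0.01
print(next_period(classify(0.08, 0.02)))    # 0.1
```

The point of the cross-layer structure is that the classifier reads metrics that a strictly layered stack would hide from the control application.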
Abstract:
Aim The aim of this paper is to offer an alternative knowing-how knowing-that framework of nursing knowledge, which in the past has been accepted as the provenance of advanced practice. Background The concept of advancing practice is central to the development of nursing practice and has taken many different forms depending on its use in context. To many it has become synonymous with the work of the advanced or expert practitioner; others have viewed it as a process of continuing professional development and skills acquisition. Moreover, it is becoming closely linked with practice development. However, there is much discussion as to what constitutes the knowledge necessary for advancing and advanced practice, and it has been suggested that theoretical and practical knowledge form the cornerstone of advanced knowledge. Design This article takes a discursive approach to the meaning and integration of knowledge within the context of advancing nursing practice. Method A thematic analysis of the current discourse on knowledge integration models in the advancing and advanced practice arena was used to identify recurrent themes relating to the knowing-how knowing-that framework commonly used to classify the knowledge necessary for advanced nursing practice. Conclusion There is a dichotomy as to what constitutes knowledge for advanced and advancing practice. Several authors have offered a variety of differing models, yet it is the application and integration of theoretical and practical knowledge that defines and develops the advancement of nursing practice. The alternative framework offered here may allow differences in the way that nursing knowledge important for advancing practice is perceived, developed and coordinated.
Relevance to clinical practice What has inevitably been neglected is that there are various other variables which, when transposed into the existing knowing-how knowing-that framework, allow advanced knowledge to be better defined. One of the more notable variables is pattern recognition, which became the focus of Benner’s work on expert practice. If this is included in the knowing-how knowing-that framework, knowing-how becomes the knowledge that contributes to advancing and advanced practice, and knowing-that becomes the governing action based on a deeper understanding of the problem or issue.
Abstract:
Significant attention has been given in the urban policy literature to the integration of land-use and transport planning and policies, with a view to curbing sprawling urban form and diminishing the externalities associated with car-dependent travel patterns. By taking land-use and transport interaction into account, this debate mainly focuses on how successful integration can contribute to societal well-being, providing efficient and balanced economic growth while accomplishing the goal of developing sustainable urban environments and communities. Integration is also a focal theme of contemporary urban development models, such as smart growth, liveable neighbourhoods, and new urbanism. Even though available planning policy options for ameliorating urban form and transport-related externalities have matured, owing to growing research and practice worldwide, there remains a lack of suitable evaluation models to reflect on the current status of urban form and travel problems or on the success of implemented integration policies. In this study we explore the applicability of indicator-based spatial indexing to assess land-use and transport integration at the neighbourhood level. For this, a spatial index is developed from a number of indicators compiled from international studies and trialled on the Gold Coast, Queensland, Australia. The results of this modelling study reveal that composite indicator methodology can yield an effective metric for determining the success of city plans in terms of their sustainability performance. The model proved useful in demarcating areas where planning intervention is applicable, and in identifying the most suitable locations for future urban development and plan amendments. Lastly, we integrate variance-based sensitivity analysis with the spatial indexing method, and discuss the applicability of the model in other urban contexts.
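The composite indicator methodology behind such a spatial index can be sketched in a few lines: normalize each indicator across neighbourhoods, then take a weighted sum per neighbourhood. The indicators, data, and weights below are invented for illustration, not drawn from the Gold Coast study:

```python
def min_max_normalize(values):
    """Rescale a list of indicator values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicator_matrix, weights):
    """One row per neighbourhood, one column per indicator; returns a
    weighted composite score per neighbourhood."""
    cols = list(zip(*indicator_matrix))
    norm_cols = [min_max_normalize(c) for c in cols]
    norm_rows = list(zip(*norm_cols))
    return [sum(w * v for w, v in zip(weights, row)) for row in norm_rows]

# Three neighbourhoods; invented indicators: density, transit stops,
# mixed-use share (weights must sum to 1)
data = [[30, 5, 0.2], [80, 20, 0.6], [55, 12, 0.4]]
weights = [0.4, 0.3, 0.3]
scores = composite_index(data, weights)
print(scores)
```

A real index would also need the variance-based sensitivity analysis the abstract mentions, to show how robust the ranking is to the choice of weights.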
Abstract:
Two lecture notes describe in detail recent developments in evolutionary multi-objective optimization (MO) techniques and their advantages and drawbacks compared to traditional deterministic optimizers. The role of game strategies (GS), such as Pareto, Nash or Stackelberg games, as companions or pre-conditioners of multi-objective optimizers is presented and discussed on simple mathematical functions in Part I, and their implementation on simple aeronautical model optimisation problems within a user-friendly design framework is covered in Part II. Real-life (robust) design applications dealing with UAV systems or civil aircraft, combining the EA and game-strategy material of Parts I and II, are solved and discussed in Part III, providing the designer with new compromise solutions useful for digital aircraft design and manufacturing. Many details related to the lecture notes of Parts I, II and III can be found in [68].
Abstract:
As the number of Uninhabited Airborne Systems (UAS) in civil applications proliferates, industry is increasingly putting pressure on regulatory authorities to provide a path for certification and to allow UAS integration into regulated airspace. The success of this integration depends on developments in improved UAS reliability and safety, regulations for certification, and technologies for operational performance and safety assessment. This paper focuses on the last topic and describes a framework for quantifying the robust autonomy of UAS, that is, the system's ability either to continue operating in the presence of faults or to shut down safely. Two figures of merit are used to evaluate vehicle performance relative to mission requirements and the consequences of autonomous decision making in motion control and guidance systems. These figures of merit are interpreted within a probabilistic framework, which extends previous work in the literature. The figures of merit can be evaluated using stochastic simulation scenarios during both the vehicle development and certification stages, with different degrees of integration of hardware-in-the-loop simulation technology. The objective of the proposed framework is to aid decision making about the suitability of a vehicle with respect to safety and reliability relative to mission requirements.
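Evaluating such a probabilistic figure of merit via stochastic simulation can be sketched as a Monte Carlo estimate of the probability of mission success. The fault and recovery probabilities, and the success criterion itself, are placeholders rather than values from the paper:

```python
import random

def simulate_mission(fault_prob, rng):
    """Toy stochastic scenario: the mission succeeds unless a fault occurs
    and the autonomous fault handling fails to recover. Both probabilities
    are illustrative placeholders."""
    fault = rng.random() < fault_prob
    recovered = rng.random() < 0.8   # assumed recovery capability
    return (not fault) or recovered

def figure_of_merit(fault_prob=0.1, n=100_000, seed=42):
    """Estimate P(mission success) over many simulated scenarios."""
    rng = random.Random(seed)
    successes = sum(simulate_mission(fault_prob, rng) for _ in range(n))
    return successes / n

fom = figure_of_merit()
print(round(fom, 3))   # close to 1 - 0.1 * 0.2 = 0.98
```

In the framework described above, the same scenario loop could wrap a full vehicle simulation (or hardware-in-the-loop runs), with the figure of merit compared against a mission-specific acceptance threshold.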
Abstract:
Decision-making is such an integral aspect of routine health care that the ability to make the right decisions at crucial moments can lead to improvements in patient health. Evidence-based practice (EBP), the paradigm used to make those informed decisions, relies on the use of the current best evidence from systematic research such as randomized controlled trials (RCTs). Limitations of the outcomes from RCTs, such as the quantity and quality of the evidence generated, have lowered healthcare professionals’ confidence in using EBP. An alternative paradigm, practice-based evidence, has evolved, its key feature being evidence drawn from practice settings. Through the use of health information technology, electronic health records (EHRs) capture relevant clinical practice evidence. A data-driven approach is proposed to capitalize on the benefits of EHRs. The issues of data privacy, security and integrity are mitigated by an information accountability concept. A data warehouse architecture completes the data-driven approach by integrating health data from the multi-source systems unique to the healthcare environment.
Abstract:
We describe a design and fabrication method that enables simpler manufacturing of more efficient organic solar cell modules using a modified flat-panel deposition technique. Many mini-cell pixels are connected in parallel to form a macro-scale solar cell array. The pixel size of each array is optimized experimentally to maximize the efficiency of the whole array. We demonstrate that integrated organic solar cell modules with a scalable current output can be fabricated in this fashion, and that the modules can also be connected in series to generate a scalable voltage output.
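The scaling behaviour described above (parallel pixels add current, series-connected modules add voltage) reduces to simple arithmetic. The per-pixel current and per-module voltage below are assumed values for illustration, not the paper's measured device parameters:

```python
# Worked arithmetic for the parallel/series scaling of the module design.
# Both device constants are assumptions, not reported values.

I_PIXEL = 0.002   # A, current contribution of one mini-cell pixel (assumed)
V_MODULE = 0.8    # V, operating voltage of one parallel-connected module (assumed)

def array_output(pixels_in_parallel, modules_in_series):
    """Parallel pixels add current; series modules add voltage."""
    current = pixels_in_parallel * I_PIXEL
    voltage = modules_in_series * V_MODULE
    return current, voltage

i, v = array_output(pixels_in_parallel=50, modules_in_series=4)
print(i, v)   # roughly 0.1 A and 3.2 V with these assumed constants
```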
Abstract:
This paper proposes and explores the Deep Customer Insight Innovation Framework in order to develop an understanding of how design can be integrated within existing innovation processes. The framework synthesises the work of Beckman and Barry (2007) as a theoretical foundation and is explored within a case study of Australian Airport Corporation seeking to drive airport innovations in operations and retail performance. The integration of a deep customer insight approach develops customer-centric and highly integrated solutions through concentrated problem exploration and design-led idea generation. Businesses facing complex innovation challenges or seeking to make sense of future opportunities will be able to integrate design into existing innovation processes, anchoring the new approach between existing market research and business development activities. This paper contributes a framework and a novel understanding of how design methods can be integrated into existing innovation processes for operationalization within industry.
Abstract:
This research deals with the development of a solar-powered UAV designed for remote sensing, in particular the development of the autopilot subsystem and path planning. The design of the solar-powered UAS followed a systems engineering methodology, first defining the system architecture and then selecting each subsystem. Validation tests and integration of the autopilot are performed in order to evaluate the performance of each subsystem and to obtain a globally operational system for data collection missions. Flight test planning and simulation results are also explored in order to verify the mission capabilities of the UAS under autopilot control. The important aspect of this research is the development of a solar-powered UAS for data collection and video monitoring, especially data and images of the ground, which are transmitted to the Ground Station (GS), segmented, and afterwards analyzed with MATLAB code.
Abstract:
Wave pipelining is a design technique for increasing the throughput of a digital circuit or system without introducing pipelining registers between adjacent combinational logic blocks. However, it requires balancing the delays along all paths from input to output, which stands in the way of its implementation. Static CMOS is inherently susceptible to delay variation with input data and hence receives a low priority for wave-pipelined digital design. On the other hand, ECL and CML, which are amenable to wave pipelining, lack the compactness and low-power attributes of CMOS. In this paper we attempt to exploit wave pipelining in CMOS technology. We use a single generic building block in Normal Process Complementary Pass Transistor Logic (NPCPL), modeled after CPL, to achieve equal delay along all propagation paths in the logic structure. An 8×8-bit multiplier is designed using this logic in a 0.8 µm technology. The carry-save multiplier architecture is modified to support wave pipelining; that is, the logic depth of all paths is made identical. The 1 mm × 0.6 mm multiplier core supports a throughput of 400 MHz and dissipates a total power of 0.6 W. We develop simple enhancements to the NPCPL building blocks that allow the multiplier to sustain throughputs in excess of 600 MHz. The methodology can be extended to introduce wave pipelining in other circuits as well.
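The equal-logic-depth requirement for wave pipelining can be illustrated with a small structural check on a combinational circuit graph. The example netlists below are invented, and gate delay is idealised as one unit per logic level:

```python
# Illustrative sketch (not from the paper): wave pipelining requires every
# input-to-output path to traverse the same number of gate delays. Here we
# check that property on a small combinational DAG.

def path_depths(graph, node, memo=None):
    """Return the set of logic depths from primary inputs to `node`."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    preds = graph.get(node, [])
    if not preds:                      # primary input
        memo[node] = {0}
    else:
        memo[node] = {d + 1 for p in preds for d in path_depths(graph, p, memo)}
    return memo[node]

def is_wave_pipelinable(graph, output):
    """All paths to the output must have identical logic depth."""
    return len(path_depths(graph, output)) == 1

# node -> list of predecessor nodes (gates or inputs feeding it)
balanced   = {"g1": ["a", "b"], "g2": ["c", "d"], "out": ["g1", "g2"]}
unbalanced = {"g1": ["a", "b"], "out": ["g1", "c"]}   # 'c' skips a logic level
print(is_wave_pipelinable(balanced, "out"))    # True
print(is_wave_pipelinable(unbalanced, "out"))  # False
```

The paper's modification of the carry-save architecture serves exactly this purpose: inserting logic so that every path through the multiplier has the same depth, as in the `balanced` example.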