4 results for Engineering design--Data processing
Abstract:
Many engineers currently in professional practice hold a degree-level qualification gained from a curriculum heavy with mathematics and engineering science. While this knowledge is vital to the engineering design process, manufacturing knowledge is equally important if the resulting designs are to be both technically and commercially viable.
The methodology advanced by the CDIO Initiative aims to improve engineering education by teaching in the context of Conceiving, Designing, Implementing and Operating products, processes or systems. A key element of this approach is the use of Design-Build-Test (DBT) projects as the core of an integrated curriculum. This approach facilitates the development of professional skills as well as the application of technical knowledge and skills developed in other parts of the degree programme. It also changes the role of the lecturer to that of a facilitator and coach in an active learning environment in which students gain concrete experiences that support their development.
The case study herein describes Mechanical Engineering undergraduate student involvement in the manufacture and assembly of concept and functional prototypes of a folding bicycle.
Abstract:
This paper is part of a special issue of Applied Geochemistry focusing on reliable applications of compositional multivariate statistical methods. This study outlines the application of compositional data analysis (CoDa) to the calibration of geochemical data and to multivariate statistical modelling of geochemistry and grain-size data from a set of Holocene sedimentary cores from the Ganges-Brahmaputra (G-B) delta. Over the last two decades, understanding near-continuous records of sedimentary sequences has required the use of core-scanning X-ray fluorescence (XRF) spectrometry, for both terrestrial and marine sedimentary sequences. Initial XRF data are generally unusable in raw form, requiring data processing to remove instrument bias, as well as informed sequence interpretation. The applicability of conventional calibration equations to core-scanning XRF data is further limited by the constraints posed by unknown measurement geometry and specimen homogeneity, as well as by matrix effects. Log-ratio based calibration schemes have been developed and applied to clastic sedimentary sequences, focusing mainly on energy-dispersive XRF (ED-XRF) core-scanning. This study applied high-resolution core-scanning XRF to Holocene sedimentary sequences from the tide-dominated Indian Sundarbans (Ganges-Brahmaputra delta plain). The Log-Ratio Calibration Equation (LRCE) was applied to a sub-set of core-scan and conventional ED-XRF data to quantify elemental composition, providing a robust calibration scheme using reduced major axis regression of log-ratio transformed geochemical data. Through partial least squares (PLS) modelling of geochemical and grain-size data, it is possible to derive robust proxy information for the Sundarbans depositional environment. The application of these techniques to Holocene sedimentary data offers an improved methodological framework for unravelling Holocene sedimentation patterns.
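The calibration step described in this abstract can be sketched as follows: counts from the core scanner and concentrations from conventional ED-XRF are each expressed as log-ratios against a reference element, and a reduced major axis (RMA) line is fitted between the two. This is a minimal illustration of the general log-ratio calibration idea, assuming noise-free synthetic inputs; the function names are hypothetical and this is not the authors' LRCE implementation.

```python
import numpy as np

def rma_fit(x, y):
    """Reduced major axis regression: slope = sign(r) * sd(y) / sd(x)."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

def log_ratio_calibrate(scan_counts, scan_ref, conc, conc_ref):
    """Fit conventional-XRF log-ratios against core-scanner log-ratios.

    scan_counts / scan_ref: scanner intensities for an element and a
    reference element; conc / conc_ref: the matching conventional
    ED-XRF concentrations. Returns the (slope, intercept) of the
    calibration line in log-ratio space.
    """
    x = np.log(scan_counts / scan_ref)  # scanner log-ratio
    y = np.log(conc / conc_ref)         # conventional log-ratio
    return rma_fit(x, y)
```

Working in log-ratio space sidesteps the closure problem of compositional data: ratios are unchanged by the unknown total, which is why this family of schemes tolerates the variable measurement geometry of core scanners.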
Abstract:
Field-programmable gate arrays are ideal hosts to custom accelerators for signal, image, and data processing but demand manual register transfer level design if high performance and low cost are desired. High-level synthesis reduces this design burden but requires manual design of complex on-chip and off-chip memory architectures, a major limitation in applications such as video processing. This paper presents an approach to resolve this shortcoming. A constructive process is described that can derive such accelerators, including on- and off-chip memory storage, from a C description such that a user-defined throughput constraint is met. By employing a novel statement-oriented approach, dataflow intermediate models are derived and used to support simple approaches for on-/off-chip buffer partitioning, derivation of custom on-chip memory hierarchies and architecture transformation to ensure user-defined throughput constraints are met with minimum cost. When applied to accelerators for full search motion estimation, matrix multiplication, Sobel edge detection, and fast Fourier transform, it is shown how real-time performance up to an order of magnitude in advance of existing commercial HLS tools is enabled whilst including all requisite memory infrastructure. Further, optimizations are presented that reduce the on-chip buffer capacity and physical resource cost by up to 96% and 75%, respectively, whilst maintaining real-time performance.
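For a concrete point of reference on the benchmark kernels named above, the sketch below gives the Sobel edge-detection computation in plain loop-based form (written here in Python for readability; the HLS flow described in the abstract consumes an equivalent C description). The nested sliding-window loops over a 2-D frame are exactly the access pattern whose on-chip buffering such tools must derive. Function name and border handling are assumptions.

```python
import numpy as np

def sobel(img):
    """Sobel edge magnitude (|Gx| + |Gy|) for a 2-D grayscale array.

    Border pixels are left at zero; each interior output reads a
    3x3 window of the input, the reuse pattern that motivates
    line-buffer memory hierarchies in hardware.
    """
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    gy = gx.T  # transpose of Gx gives the vertical-gradient kernel
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = abs((gx * win).sum()) + abs((gy * win).sum())
    return out
```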
Abstract:
Permanent magnet synchronous motors (PMSMs) provide a competitive technology for EV traction drives owing to their high power density and high efficiency. In this paper, three types of interior PMSMs with different PM arrangements are modeled by the finite element method (FEM). For a given amount of permanent magnet material, the V-shape interior PMSM is found to be better than the U-shape and conventional rotor topologies for EV traction drives. The V-shape interior PMSM is then further analyzed with respect to the effects of stator slot opening and permanent magnet pole chamfering on cogging torque and output torque performance. A vector-controlled flux-weakening method is developed and simulated in Matlab to expand the motor speed range for the EV drive system. The results show good dynamic and steady-state performance, with the capability of expanding speed up to four times the rated value. A prototype of the V-shape interior PMSM is also manufactured and tested to validate the numerical models built by the FEM.
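A common steady-state formulation of vector-controlled flux weakening, as used above base speed, is to inject negative d-axis current so that the stator voltage stays within the inverter limit as electrical speed rises. The sketch below shows this textbook calculation; it is a generic illustration rather than the control law of this paper, and all parameters (v_max, psi_m, L_d, L_q) and the function name are assumptions.

```python
import math

def flux_weakening_id(omega_e, iq, v_max, psi_m, L_d, L_q):
    """Steady-state d-axis current demand under a voltage limit.

    Solves |v| ~= omega_e * sqrt((psi_m + L_d*id)^2 + (L_q*iq)^2) <= v_max
    for id, neglecting stator resistance. Below base speed the
    unconstrained solution is positive, so it is clamped to zero
    (no weakening needed); above base speed id goes negative,
    opposing the magnet flux.
    """
    v_over_w = v_max / omega_e
    term = v_over_w ** 2 - (L_q * iq) ** 2
    if term < 0.0:
        term = 0.0  # q-axis drop alone exceeds the limit
    id_demand = (math.sqrt(term) - psi_m) / L_d
    return min(id_demand, 0.0)  # only demagnetising (negative) id
```

In a drive, this demand would feed the d-axis current loop, and iq would be reduced in tandem to respect the current limit; such a scheme is one way the roughly four-times-rated speed range reported above can be reached.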