915 results for Engineering design--Data processing
Abstract:
This dissertation documents the results of a theoretical and numerical study of time-dependent storage of energy by melting a phase-change material. The heating is provided along invading lines, which change from single-line invasion to tree-shaped invasion. Chapter 2 identifies the special design feature of distributing energy storage in time-dependent fashion on a territory, when the energy flows by fluid flow from a concentrated source to points (users) distributed equidistantly on the area. The challenge in this chapter is to determine the architecture of distributed energy storage. The chief conclusion is that the finite amount of storage material should be distributed in proportion to the distribution of the flow rate of heating agent arriving on the area. The total time needed by the source stream to ‘invade’ the area is cumulative (the sum of the storage times required at each storage site), and depends on the energy distribution paths and the sequence in which the users are served by the source stream. Chapter 3 shows theoretically that the melting process consists of two phases: “invasion” thermal diffusion along the invading line, followed by “consolidation” as heat diffuses perpendicular to the invading line. This chapter also reports the duration of both phases and the evolution of the melt layer around the invading line during two-dimensional and three-dimensional invasion. It also shows that the amount of melted material increases in time along an S-shaped curve. These theoretical predictions are validated by means of numerical simulations in Chapter 4. This chapter also shows that the heat transfer rate density increases (i.e., the S curve becomes steeper) as the complexity and number of degrees of freedom of the structure are increased, in accord with the constructal law. The optimal geometric features of the tree structure are detailed in this chapter. Chapter 5 documents a numerical study of time-dependent melting where the heat transfer is convection-dominated, unlike in Chapters 3 and 4, where the melting is ruled by pure conduction. In accord with constructal design, the search is for effective heat-flow architectures. The volume-constrained improvement of the designs for heat flow begins by assuming the simplest structure, in which a single line serves as the heat source. Next, the heat source is endowed with freedom to change its shape as it grows. The objective of the numerical simulations is to discover the geometric features that lead to the fastest melting process. The results show that the heat transfer rate density increases as the complexity and number of degrees of freedom of the structure are increased. Furthermore, the angles between heat invasion lines have a minor effect on the global performance compared to other degrees of freedom: number of branching levels, stem length, and branch lengths. The effect of natural convection in the melt zone is documented.
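[Editor's note] As a hedged aside for readers new to this topic, the two-phase picture can be framed with textbook conduction scales; the expressions below are standard diffusion scalings, not the dissertation's exact results, and H denotes an assumed half-distance between neighbouring invading lines:

```latex
% Standard conduction scalings (assumed for illustration, not quoted from
% the dissertation): the melt layer around an invading line thickens
% diffusively, and "consolidation" ends roughly when neighbouring melt
% layers, spaced 2H apart, merge. \alpha is the liquid thermal diffusivity.
\[
  \delta(t) \sim \sqrt{\alpha t},
  \qquad
  t_{\text{consolidation}} \sim \frac{H^{2}}{\alpha}
\]
```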
Abstract:
A substantial amount of the information on the Internet is present in the form of text. The value of this semi-structured and unstructured data has been widely acknowledged, with consequent scientific and commercial exploitation. The ever-increasing data production, however, pushes data analytic platforms to their limits. This thesis proposes techniques for more efficient textual big data analysis suitable for the Hadoop analytic platform. This research explores the direct processing of compressed textual data. The focus is on developing novel compression methods with a number of desirable properties to support text-based big data analysis in distributed environments. The novel contributions of this work include the following. Firstly, a Content-aware Partial Compression (CaPC) scheme is developed. CaPC distinguishes informational from functional content and compresses only the informational content. Thus, the compressed data remains transparent to existing software libraries, which often rely on functional content to work. Secondly, a context-free, bit-oriented compression scheme (Approximated Huffman Compression) based on the Huffman algorithm is developed. This uses a hybrid data structure that allows pattern searching in compressed data in linear time. Thirdly, several modern compression schemes have been extended so that the compressed data can be safely split with respect to logical data records in distributed file systems. Furthermore, an innovative two-layer compression architecture is used, in which each compression layer is appropriate for the corresponding stage of data processing. Peripheral libraries are developed that seamlessly link the proposed compression schemes to existing analytic platforms and computational frameworks, and also make the use of the compressed data transparent to developers. The compression schemes have been evaluated for a number of standard MapReduce analysis tasks using a collection of real-world datasets. In comparison with existing solutions, they show substantial improvement in performance and significant reduction in system resource requirements.
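[Editor's note] The abstract does not spell out CaPC's encoding, so the following is only a minimal Python sketch of the informational/functional split it describes: record and field separators (assumed here to be newlines and commas) pass through untouched, so line- and field-oriented tooling still sees the expected structure, while informational tokens are compressed.

```python
# Toy sketch of content-aware partial compression (the real CaPC scheme is
# not reproduced here). Functional content (separators) is kept as plain
# text; informational content (tokens) is compressed.
import zlib

FUNCTIONAL = {"\n", ","}  # assumed functional characters for this toy format

def capc_encode(text: str) -> list:
    """Return a stream of compressed tokens interleaved with raw separators."""
    out, token = [], []
    for ch in text:
        if ch in FUNCTIONAL:
            if token:
                out.append(zlib.compress("".join(token).encode()))
                token = []
            out.append(ch)  # separator survives verbatim, keeping records splittable
        else:
            token.append(ch)
    if token:
        out.append(zlib.compress("".join(token).encode()))
    return out
```

Because the separators survive verbatim, a distributed file system can still split the data at record boundaries, which is the property the thesis's splittable-compression contribution generalizes.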
Abstract:
Cloud computing offers massive scalability and elasticity required by many scientific and commercial applications. Combining the computational and data handling capabilities of clouds with parallel processing also has the potential to tackle Big Data problems efficiently. Science gateway frameworks and workflow systems enable application developers to implement complex applications and make these available for end-users via simple graphical user interfaces. The integration of such frameworks with Big Data processing tools on the cloud opens new opportunities for application developers. This paper investigates how workflow systems and science gateways can be extended with Big Data processing capabilities. A generic approach based on infrastructure aware workflows is suggested and a proof of concept is implemented based on the WS-PGRADE/gUSE science gateway framework and its integration with the Hadoop parallel data processing solution based on the MapReduce paradigm in the cloud. The provided analysis demonstrates that the methods described to integrate Big Data processing with workflows and science gateways work well in different cloud infrastructures and application scenarios, and can be used to create massively parallel applications for scientific analysis of Big Data.
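[Editor's note] For readers unfamiliar with the Hadoop side of this integration, a minimal Hadoop Streaming job of the kind a workflow node might submit is sketched below (the canonical word count; the WS-PGRADE/gUSE orchestration itself is a framework concern and is not shown):

```python
#!/usr/bin/env python3
# Minimal Hadoop Streaming mapper/reducer pair (word count). Hadoop sorts
# the mapper output by key before the reducer sees it, which the reducer
# relies on below.
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current and current is not None:
            print(f"{current}\t{count}")
            count = 0
        current = word
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

Such a script would typically be shipped to the cluster with the streaming jar's -files option and wired to its -mapper and -reducer arguments.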
Abstract:
This study used a mixed-methods approach to develop a broad and deep understanding of students’ perceptions of creativity in engineering education. Studies have shown that students’ attitudes can have an impact on their motivation to engage in creative behavior. Using an ex-post facto independent factorial design, attitudes of value towards creativity, time for creativity, and creativity stereotypes were measured and compared across gender, year of study, engineering discipline, preference for open-ended problem solving, and confidence in creative abilities. Participants were undergraduate engineering students at Queen’s University from all years of study. A qualitative phenomenological methodology was adopted to study students’ understandings and experiences with engineering creativity. Eleven students participated in one-on-one interviews that provided depth and insight into how students experience and define engineering creativity, and the survey included open-ended items developed using the 10 Maxims of Creativity in Education as a guiding framework. The findings from the survey suggested that students placed a high value on creativity; however, students in fourth year or higher valued it less than those in other years. Those with a preference for open-ended problem solving and high confidence valued creativity more than their counterparts. Students who preferred open-ended problem solving and students with high confidence reported that time was less of a hindrance to their creativity. Males identified more with creativity stereotypes than females, although identification was low overall for both. Open-ended survey and interview results indicated that students felt they experienced creativity in engineering design activities. Engineering creativity definitions had two elements: creative action and creative characteristic. Creative actions were associated with designing, and creative characteristics were predominantly associated with novelty. Other barriers that emerged from the qualitative analysis were lack of opportunity, lack of assessment, and discomfort with creativity. It was concluded that a universal definition is required to establish clear and aligned understandings of engineering creativity. Instructors may want to consider demonstrating value by assessing creativity and establishing clear criteria in design projects. It is recommended that students be given more opportunities for practice through design activities and that they be introduced to design and creative thinking concepts early in their engineering education.
Abstract:
This paper is based on the novel use of a very high-fidelity decimation filter chain for electrocardiogram (ECG) signal acquisition and data conversion. The multiplier-free, multi-stage structure of the proposed filters lowers power dissipation while minimizing circuit area, both crucial design constraints for wireless, noninvasive, wearable health-monitoring products, given the scarce operational resources of their electronic implementation. The presented filter has a decimation ratio of 128 and works in tandem with a 1-bit, 3rd-order Sigma-Delta (ΣΔ) modulator, achieving 0.04 dB passband ripple and -74 dB stopband attenuation. The work reported here investigates the non-linear phase effects of the proposed decimation filters on the ECG signal by carrying out a comparative study after phase correction. It concludes that enhanced phase linearity is not crucial for ECG acquisition and data conversion applications, since the distortion of the acquired signal due to phase non-linearity is insignificant for both the original and the phase-compensated filters. Freedom from signal distortion remains essential, since, as stated in the state of the art, distortion might lead to misdiagnosis. This article demonstrates that, with their minimal power consumption and minimal signal distortion, the proposed decimation filters can effectively be employed in biosignal data processing units.
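[Editor's note] The abstract identifies the structure only as multiplier-free and multi-stage; a cascaded integrator-comb (CIC) decimator is the classic filter of that kind, so the sketch below (not the paper's actual design) shows why such a chain is cheap: it needs only additions and subtractions.

```python
# Hedged illustration of a multiplier-free, multi-stage decimator: an
# N-stage CIC filter with decimation ratio R (the paper's actual filter is
# not reproduced from the abstract). Integrators run at the input rate,
# combs at the decimated rate; only adds/subtracts are required.
import numpy as np

def cic_decimate(x, R=128, N=3):
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):                         # cascaded integrators
        y = np.cumsum(y)
    y = y[::R]                                 # decimate by R
    for _ in range(N):                         # cascaded combs: y[n] - y[n-1]
        y = np.concatenate(([y[0]], np.diff(y)))
    return y / float(R) ** N                   # remove the DC gain of R**N

# e.g. a +/-1 bitstream, as a 1-bit sigma-delta modulator would produce:
bits = np.random.randint(0, 2, 1 << 17) * 2 - 1
pcm = cic_decimate(bits)                       # multi-bit samples at fs/128
```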
Abstract:
This paper is part of a special issue of Applied Geochemistry focusing on reliable applications of compositional multivariate statistical methods. This study outlines the application of compositional data analysis (CoDa) to calibration of geochemical data and multivariate statistical modelling of geochemistry and grain-size data from a set of Holocene sedimentary cores from the Ganges-Brahmaputra (G-B) delta. Over the last two decades, understanding near-continuous records of sedimentary sequences has required the use of core-scanning X-ray fluorescence (XRF) spectrometry, for both terrestrial and marine sedimentary sequences. Initial XRF data are generally unusable in ‘raw’ format, requiring data processing in order to remove instrument bias, as well as informed sequence interpretation. The applicability of conventional calibration equations to core-scanning XRF data is further limited by the constraints posed by unknown measurement geometry and specimen homogeneity, as well as matrix effects. Log-ratio-based calibration schemes have been developed and applied to clastic sedimentary sequences, focusing mainly on energy dispersive-XRF (ED-XRF) core-scanning. This study has applied high-resolution core-scanning XRF to Holocene sedimentary sequences from the tide-dominated Indian Sundarbans (Ganges-Brahmaputra delta plain). The Log-Ratio Calibration Equation (LRCE) was applied to a subset of core-scan and conventional ED-XRF data to quantify elemental composition. This provides a robust calibration scheme using reduced major axis regression of log-ratio-transformed geochemical data. Through partial least squares (PLS) modelling of geochemical and grain-size data, it is possible to derive robust proxy information for the Sundarbans depositional environment. The application of these techniques to Holocene sedimentary data offers an improved methodological framework for unravelling Holocene sedimentation patterns.
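[Editor's note] The key numerical step, reduced major axis regression of log-ratio-transformed data, is compact enough to sketch; the snippet below uses the standard RMA estimator (slope = sign(r)·sd(y)/sd(x)) with illustrative variable names, not the paper's code.

```python
# Sketch of a log-ratio calibration in the spirit of the LRCE: regress
# log-ratios of scanner intensities (x) against log-ratios of
# conventionally measured concentrations (y) by RMA regression.
import numpy as np

def rma_fit(x, y):
    """RMA estimator: slope = sign(r) * sd(y) / sd(x)."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    return slope, np.mean(y) - slope * np.mean(x)

def calibrate(I_i, I_j, C_i, C_j):
    """Calibrate element i against reference element j on paired samples."""
    x = np.log(np.asarray(I_i) / np.asarray(I_j))  # core-scan log-ratio
    y = np.log(np.asarray(C_i) / np.asarray(C_j))  # conventional ED-XRF log-ratio
    return rma_fit(x, y)                           # predict y_hat = a*x + b
```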
Abstract:
Field-programmable gate arrays are ideal hosts for custom accelerators for signal, image, and data processing but demand manual register transfer level design if high performance and low cost are desired. High-level synthesis reduces this design burden but requires manual design of complex on-chip and off-chip memory architectures, a major limitation in applications such as video processing. This paper presents an approach to resolve this shortcoming. A constructive process is described that can derive such accelerators, including on- and off-chip memory storage, from a C description such that a user-defined throughput constraint is met. By employing a novel statement-oriented approach, dataflow intermediate models are derived and used to support simple approaches for on-/off-chip buffer partitioning, derivation of custom on-chip memory hierarchies, and architecture transformation to ensure user-defined throughput constraints are met with minimum cost. When applied to accelerators for full search motion estimation, matrix multiplication, Sobel edge detection, and fast Fourier transform, real-time performance up to an order of magnitude beyond that of existing commercial HLS tools is enabled, whilst including all requisite memory infrastructure. Further, optimizations are presented that reduce the on-chip buffer capacity and physical resource cost by up to 96% and 75%, respectively, whilst maintaining real-time performance.
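[Editor's note] As a rough illustration of the throughput reasoning such a flow must perform (numbers and names are illustrative, not the paper's), the required datapath/bank parallelism follows directly from the user's throughput constraint and the clock rate:

```python
# Back-of-envelope sketch (illustrative only): how many parallel datapath
# copies / memory banks a throughput constraint implies.
import math

def required_parallelism(samples_per_frame, fps, f_clk_hz, samples_per_cycle=1):
    needed = samples_per_frame * fps        # samples/s the user demands
    single = f_clk_hz * samples_per_cycle   # samples/s one copy sustains
    return math.ceil(needed / single)

# 1080p video at 60 fps on a 150 MHz accelerator consuming 1 pixel/cycle:
print(required_parallelism(1920 * 1080, 60, 150e6))  # -> 1
# Kernels with many memory accesses per output (e.g. full-search motion
# estimation) inflate samples_per_frame and hence the banking requirement.
```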
Abstract:
Permanent magnet synchronous motors (PMSMs) provide a competitive technology for EV traction drives owing to their high power density and high efficiency. In this paper, three types of interior PMSMs with different PM arrangements are modeled by the finite element method (FEM). For a given amount of permanent magnet material, the V-shape interior PMSM is found to be better than the U-shape and conventional rotor topologies for EV traction drives. The V-shape interior PMSM is then further analyzed to assess the effects of stator slot opening and permanent magnet pole chamfering on cogging torque and output torque performance. A vector-controlled flux-weakening method is developed and simulated in Matlab to expand the motor speed range for the EV drive system. The results show good dynamic and steady-state performance, with the capability of expanding the speed up to four times the rated value. A prototype of the V-shape interior PMSM is also manufactured and tested to validate the numerical models built by the FEM.
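[Editor's note] For context, the voltage-limit calculation at the heart of any PMSM flux-weakening scheme is shown below using the textbook steady-state equations; the parameters are placeholders, not the prototype's data.

```python
# Hedged sketch of textbook flux weakening for an interior PMSM: above base
# speed the d-axis current is driven negative so the stator voltage stays on
# the limit ellipse (psi_m + L_d*i_d)**2 + (L_q*i_q)**2 <= (v_max/omega_e)**2.
import math

def flux_weakening_id(omega_e, v_max, psi_m, L_d, L_q, i_q):
    margin = (v_max / omega_e) ** 2 - (L_q * i_q) ** 2
    if margin < 0:
        raise ValueError("i_q infeasible at this speed and voltage limit")
    return (math.sqrt(margin) - psi_m) / L_d    # < 0 above base speed

# Placeholder parameters (not the prototype's): 300 V limit, 0.1 Wb magnet
# flux, L_d = 0.5 mH, L_q = 1.2 mH, i_q = 50 A at 4000 rad/s electrical:
print(flux_weakening_id(4000.0, 300.0, 0.1, 0.5e-3, 1.2e-3, 50.0))  # ~ -110 A
```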
Abstract:
Wireless sensor networks (WSNs) differ from conventional distributed systems in many aspects. The resource limitation of sensor nodes, the ad-hoc communication and topology of the network, coupled with an unpredictable deployment environment, are difficult non-functional constraints that must be carefully taken into account when developing software systems for a WSN. Thus, more research needs to be done on designing, implementing and maintaining software for WSNs. This thesis aims to contribute to research in this area by presenting an approach to WSN application development that improves the reusability, flexibility, and maintainability of the software. Firstly, we present a programming model and software architecture aimed at describing WSN applications independently of the underlying operating system and hardware. The proposed architecture is described and realized using the Model-Driven Architecture (MDA) standard in order to achieve satisfactory levels of encapsulation and abstraction when programming sensor nodes. In addition, we study different non-functional constraints of WSN applications and propose two approaches to optimizing the application to satisfy these constraints. A real prototype framework was built to demonstrate the solutions developed in the thesis. The framework implements the programming model and the multi-layered software architecture as components. A graphical interface, code generation components and supporting tools are also included to help developers design, implement, optimize, and test WSN software. Finally, we evaluate and critically assess the proposed concepts. Two case studies are provided to support the evaluation. The first case study, a framework evaluation, is designed to assess the ease with which novice and intermediate users can develop correct and power-efficient WSN applications, the portability achieved by developing applications at a high level of abstraction, and the estimated overhead due to usage of the framework in terms of the footprint and executable code size of the application. In the second case study, we discuss the design, implementation and optimization of a real-world application named TempSense, in which a sensor network is used to monitor the temperature within an area.
Abstract:
Using product and system design to influence user behaviour offers potential for improving performance and reducing user error, yet little guidance is available at the concept generation stage for design teams briefed with influencing user behaviour. This article presents the Design with Intent Method, an innovation tool for designers working in this area, illustrated via application to an everyday human–technology interaction problem: reducing the likelihood of a customer leaving his or her card in an automatic teller machine. The example application results in a range of feasible design concepts which are comparable to existing developments in ATM design, demonstrating that the method has potential for development and application as part of a user-centred design process.
Abstract:
User behaviour is a significant determinant of a product’s environmental impact; while engineering advances permit increased efficiency of product operation, the user’s decisions and habits ultimately have a major effect on the energy or other resources used by the product. There is thus a need to change users’ behaviour. A range of design techniques developed in diverse contexts suggest opportunities for engineers, designers and other stakeholders working in the field of sustainable innovation to affect users’ behaviour at the point of interaction with the product or system, in effect ‘making the user more efficient’. Approaches to changing users’ behaviour from a number of fields are reviewed and discussed, including: strategic design of affordances and behaviour-shaping constraints to control or affect energy- or other resource-using interactions; the use of different kinds of feedback and persuasive technology techniques to encourage or guide users to reduce their environmental impact; and context-based systems which use feedback to adjust their behaviour to run at optimum efficiency and reduce the opportunity for user-affected inefficiency. Example implementations in the sustainable engineering and ecodesign field are suggested and discussed.
Abstract:
Design can enable sustainable behaviour by understanding everyday needs rather than treating people as the problem.
Abstract:
By understanding how everyday devices work, individuals can – with the help of a growing online community – enjoy extending the life of products and drive socially responsible design.
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
Internship report presented to the Escola Superior de Educação de Paula Frassinetti for the degree of Master in Pre-School Education and Teaching in the 1st Cycle of Basic Education.