934 results for "early design"
Abstract:
Analytical calculation methods for all the major components of the synchronous inductance of tooth-coil permanent-magnet synchronous machines are reevaluated in this paper. Inductance estimation in a tooth-coil machine differs from that in a traditional rotating-field winding machine. The accuracy of the analytical torque calculation depends strongly on the estimated synchronous inductance. Despite powerful finite element method (FEM) tools, an accurate and fast analytical method is needed at the early design stage to find an initial machine design with the desired performance. The results of the analytical inductance calculation are verified and assessed for accuracy against FEM simulation results and prototype measurement results.
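As a hedged illustration of the component-wise estimate such abstracts describe, the sketch below sums the usual synchronous-inductance contributions (magnetizing plus slot, tooth-tip, end-winding and air-gap-harmonic leakage, the last being characteristically large in tooth-coil windings). All component values are invented placeholders, not figures from the paper.

```python
# Minimal sketch: synchronous inductance as a sum of analytically
# estimated components, as is common for tooth-coil PM machines.
# All numbers are illustrative placeholders, not values from the paper.

def synchronous_inductance(L_m, L_slot, L_tooth_tip, L_end, L_harm):
    """Total synchronous inductance from its major components (H)."""
    return L_m + L_slot + L_tooth_tip + L_end + L_harm

components = {
    "magnetizing":         2.1e-3,  # air-gap (fundamental) inductance
    "slot leakage":        0.9e-3,
    "tooth-tip leakage":   0.3e-3,
    "end-winding leakage": 0.2e-3,
    "air-gap harmonics":   1.5e-3,  # large for tooth-coil windings
}

L_s = synchronous_inductance(*components.values())
print(f"L_s = {L_s * 1e3:.2f} mH")
for name, value in components.items():
    print(f"  {name:>20}: {100 * value / L_s:.0f} % of total")
```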
Abstract:
The paper investigates the exchange of ideas and information between an architect and building users in the early stages of the building design process, before the design brief or any drawings have been produced. The purpose of the research is to gain insight into the type of information users exchange with architects in early design conversations and to better understand the influence that the format of design interactions and interactional behaviours have on the exchange of information. We report an empirical study of pre-briefing conversations in which the overwhelming majority of the exchanges were about the functional or structural attributes of space; discussion that touched on the phenomenological, perceptual and symbolic meanings of space was rare. We explore the contextual features of meetings, the conversational strategies the architect used to prompt the users for information, and the influence these had on the information provided. Recommendations are made on the format and structure of pre-briefing conversations and on designers' strategies for raising the level of information provided by the user beyond the functional or structural attributes of space.
Abstract:
Techniques for modelling urban microclimates and the surface temperatures of urban blocks are desired by urban planners and architects for strategic urban design at the early design stages. This paper introduces a simplified mathematical model for urban simulations (UMsim), covering urban surface temperatures and microclimates. The nodal network model has been developed by integrating coupled thermal and airflow models. Direct solar radiation, diffuse radiation, reflected radiation, long-wave radiation, heat convection in air, and heat transfer in the exterior walls and ground within the complex are taken into account. The governing equations are solved using the finite difference method on the Matlab platform. Comparisons have been conducted between the simulation output and data from an urban experimental study carried out in a real architectural complex on the campus of Chongqing University, China, in July 2005 and January 2006. The results show a satisfactory agreement between the two sets of data. UMsim can be used to simulate microclimates, in particular the surface temperatures of urban blocks, and can therefore be used to assess the impact of urban surface properties on urban microclimates. UMsim will be able to produce robust data and images of urban environments for sustainable urban design.
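As a hedged sketch of the finite-difference nodal approach the abstract outlines, the fragment below advances a single exterior-surface temperature node one explicit time step under a simple energy balance (absorbed shortwave, long-wave exchange with the sky, convection, conduction into the wall). The coefficients and weather values are invented placeholders; the actual UMsim equations are more detailed.

```python
# Minimal sketch of one explicit finite-difference update for an
# exterior-surface temperature node; all coefficients are placeholders.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

def surface_node_step(T_s, T_air, T_sky, T_wall, dt,
                      alpha=0.7, I_solar=600.0, eps=0.9,
                      h_c=15.0, U_wall=2.0, C_s=5.0e4):
    """Advance surface temperature T_s (K) by one time step dt (s)."""
    q = (alpha * I_solar                          # absorbed solar
         + eps * SIGMA * (T_sky**4 - T_s**4)      # long-wave exchange
         + h_c * (T_air - T_s)                    # convection
         + U_wall * (T_wall - T_s))               # conduction into wall
    return T_s + dt * q / C_s                     # lumped surface capacity

T_s = 300.0
for _ in range(3600 // 60):                       # one hour, 60 s steps
    T_s = surface_node_step(T_s, T_air=303.0, T_sky=285.0,
                            T_wall=299.0, dt=60.0)
print(f"surface temperature after 1 h: {T_s - 273.15:.1f} °C")
```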
Abstract:
A variety of physical and behavioral factors determine the energy-demand load profile. Attaining the optimum mix of measures and renewable energy system deployment requires a simple method suitable for use at the early design stage. A simple method of formulating load profiles (SMLP) for UK domestic buildings is presented in this paper. Domestic space-heating load profiles for different types of houses have been produced using a thermal dynamic model developed with the thermal resistance network method. The daily breakdown of the energy-demand load profile for appliances, domestic hot water and space heating can be predicted with this method. The method can produce daily load profiles from an individual house up to an urban community, and is suitable for use at the strategic design stage of renewable energy systems.
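To make the thermal-resistance-network idea concrete, here is a hedged one-node (1R1C) sketch that turns an hourly outdoor temperature profile into a space-heating load profile. The resistance, capacitance, setpoint and weather are invented placeholders, not the SMLP parameters.

```python
# Minimal 1R1C sketch: hourly space-heating load from a thermal
# resistance network; all parameters are illustrative placeholders.
import math

R = 0.005     # envelope resistance, K/W (placeholder)
C = 2.0e7     # lumped thermal capacitance, J/K (placeholder)
T_set = 20.0  # heating setpoint, degC
dt = 3600.0   # one-hour step, s

T_in = 18.0
loads = []
for hour in range(24):
    # crude sinusoidal outdoor temperature: 5 degC mean, 5 K swing
    T_out = 5.0 + 5.0 * math.sin(2 * math.pi * (hour - 9) / 24)
    loss = (T_in - T_out) / R                  # W lost through envelope
    heating = max(0.0, loss + C * (T_set - T_in) / dt)
    T_in += dt * (heating - loss) / C          # node energy balance
    loads.append(heating / 1000.0)

print(" hour  load/kW")
for hour, q in enumerate(loads):
    print(f"{hour:5d}  {q:7.2f}")
```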
Abstract:
This paper describes a simulation program which uses Trengenza's average room illuminance method, in conjunction with hourly solar irradiance and luminous efficacy, to predict the potential lighting energy saving for a side-lit room. Two lighting control algorithms, photoelectric switching (on/off) and photoelectric dimming (top-up), have been coded in the program. A simulation for a typical UK office room has been conducted, and the results show that the energy saving due to daylight depends on various factors such as orientation, control method, building depth, glazing area and shading type. This simple tool can be used for estimating the potential lighting energy saving of windows with various shading devices at the early design stage.
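A hedged sketch of the two control algorithms named in the abstract: given an hourly daylight illuminance series, photoelectric switching turns the lights fully on whenever daylight falls below the setpoint, while photoelectric dimming tops up only the shortfall. The illuminance series, setpoint and installed power are invented placeholders.

```python
# Minimal sketch of photoelectric switching (on/off) vs. photoelectric
# dimming (top-up) lighting control; all numbers are placeholders.
E_SET = 500.0     # design illuminance, lux
P_INST = 400.0    # installed lighting power, W

# hypothetical hourly daylight illuminance on the working plane, lux
daylight = [0, 0, 50, 200, 450, 700, 900, 950, 800, 550, 300, 80]

def on_off(E):
    """Lights fully on whenever daylight alone is insufficient."""
    return P_INST if E < E_SET else 0.0

def dimming(E):
    """Lights dimmed linearly to supply only the shortfall."""
    return P_INST * max(0.0, (E_SET - E) / E_SET)

for name, ctrl in (("on/off", on_off), ("dimming", dimming)):
    energy_wh = sum(ctrl(E) for E in daylight)   # 1 h per sample
    print(f"{name:8}: {energy_wh / 1000:.2f} kWh over the period")
```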
Abstract:
In the early design stage of an environmentally conscious heating, ventilating and air-conditioning (HVAC) system, it is necessary to minimize the environmental impact and to use natural resources in a sustainable and efficient manner. Energy supply options play a significant role in the total environmental load of HVAC systems. To assess the environmental impact of different energy options, a new method based on Emergy Analysis is proposed. Emergy Accounting was first developed and widely used in ecological engineering, but this is the first time it has been used in building services engineering. The environmental impacts of the energy options are divided into four categories under the Emergy framework: depletion of natural resources, the greenhouse effect (carbon dioxide equivalents), acid rain (sulphur dioxide equivalents), and anthropogenic heat release. The depletion of non-renewable natural resources is indicated by the Environmental Load Ratio, and an environmental carrying capacity is developed to represent the environmental service needed to dilute the pollutants and the anthropogenic heat released. This Emergy evaluation method provides a new way to integrate different environmental impacts under one framework and thus facilitates better system choices. A case study of six energy options, comprising renewable and non-renewable energy, was performed using Emergy theory, and their relative environmental impacts were compared. The results show that the method of electricity generation, especially for electricity-powered systems, is the most important factor in overall environmental performance. The direct-fired lithium-bromide absorption type consumes more non-renewable energy and contributes more to the urban heat island effect than the other options with the same electricity supply. Using Emergy Analysis, designers and clients can make better-informed, environmentally conscious selections of HVAC systems.
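As a hedged numeric illustration of the Emergy indicator named in the abstract, the sketch below computes the Environmental Load Ratio, conventionally defined in Emergy Accounting as ELR = (N + F) / R, where N is non-renewable local emergy, F is purchased (feedback) emergy and R is renewable emergy, all in solar emjoules (seJ). The flows for the two example options are invented placeholders, not the paper's case-study data.

```python
# Minimal sketch: Environmental Load Ratio (ELR) from emergy flows,
# ELR = (N + F) / R (Odum's emergy accounting). Flows are placeholders
# in solar emjoules (seJ), not values from the paper's case study.

def environmental_load_ratio(R, N, F):
    """R: renewable, N: non-renewable local, F: purchased emergy."""
    return (N + F) / R

options = {
    "ground-source heat pump": dict(R=6.0e15, N=1.0e15, F=3.0e15),
    "direct-fired absorption":  dict(R=1.0e15, N=5.0e15, F=4.0e15),
}

for name, flows in options.items():
    elr = environmental_load_ratio(**flows)
    print(f"{name:26}: ELR = {elr:.1f}  (higher = larger load)")
```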
Abstract:
PEDRINI, Aldomar; SZOKOLAY, Steven. Recomendações para o desenvolvimento de uma ferramenta de suporte às primeiras decisões projetuais visando ao desempenho energético de edificações de escritório em clima quente [Recommendations for the development of a tool to support early design decisions aimed at the energy performance of office buildings in hot climates]. Ambiente Construído, Porto Alegre, v. 5, n. 1, p. 39-54, Jan./Mar. 2005. Quarterly. Available at:
Abstract:
Building design is an effective way to reduce HVAC energy consumption. However, this potential is often neglected by architects owing to the lack of references to support design decisions. This work proposes architectural design guidelines for the energy efficiency and thermal performance of UFRN Campus buildings. The guidelines are based on the results of computer simulations using the software DesignBuilder. The definition of the simulation models began with envelope variables, partly drawn from a field study of thirteen buildings on the UFRN Campus. This field study indicated some basic envelope patterns that were applied in the simulation models. Occupation variables were identified by monitoring temperature and energy consumption and by surveying lighting and equipment power, both carried out at the Campus administration building. Three simulation models were proposed, corresponding to different design phases and decisions. The first model represents early design decisions, simulating the combination of different types of geometry with three levels of envelope thermal performance. The second model, still within the early design phase, analyses the thermal exchanges between circulation halls (lateral and central) and office rooms, as well as the heat fluxes and monthly temperatures in each circulation hall. The third model analyses the influence of mid-design and detail-design decisions on energy consumption and thermal performance; in this model, different solutions for roofs, shading devices, walls and external colors were simulated. The results of all the simulation models suggest a strong influence of the thermal loads due to solar radiation incident on windows and surfaces, which highlights the importance of window shading devices, office room orientation, and the absorptance of roof and wall surfaces.
Abstract:
Ensuring dependability requirements is essential for industrial applications, since faults may cause failures whose consequences include economic losses, environmental damage or harm to people. Given the relevance of the topic, this thesis proposes a methodology for the dependability evaluation of industrial wireless networks (WirelessHART, ISA100.11a, WIA-PA) at the early design phase; the proposal can also be easily adapted to the maintenance and expansion stages of a network. The proposal uses graph theory and the fault-tree formalism to create, automatically, an analytical model from a given industrial wireless network topology, on which dependability can be evaluated. The supported evaluation metrics are reliability, availability, MTTF (mean time to failure), importance measures of devices, redundancy aspects and common-cause failures. It must be emphasized that the proposal is independent of any particular tool for quantitatively evaluating the target metrics; however, for validation purposes, a tool widely accepted in academia (SHARPE) was used. In addition, an algorithm to generate the minimal cut sets, originally applied in graph theory, was adapted to the fault-tree formalism to guarantee the scalability of the methodology in industrial wireless network environments (< 100 devices). Finally, the proposed methodology was validated on typical scenarios found in industrial environments, such as star, line, cluster and mesh topologies. Scenarios with common-cause failures were also evaluated, along with best practices to guide the design of an industrial wireless network. To verify the scalability requirements, the performance of the methodology was analysed in different scenarios, and the results show the applicability of the proposal to networks typically found in industrial environments.
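A hedged sketch of the fault-tree step the abstract describes: given the minimal cut sets of a network (each a set of devices whose joint failure disconnects a field device from the gateway) and per-device failure probabilities, system unreliability can be approximated by the rare-event bound, the sum over cut sets of the product of their device failure probabilities. The topology and probabilities below are invented placeholders.

```python
# Minimal sketch: network unreliability from minimal cut sets using the
# rare-event (first-order) approximation; all data are placeholders.
from math import prod

# failure probability of each device over the mission time (placeholders)
p_fail = {"A": 0.01, "B": 0.01, "C": 0.02, "GW": 0.001}

# hypothetical minimal cut sets: any one of these sets failing jointly
# disconnects the monitored field device from the gateway
min_cut_sets = [{"GW"}, {"A", "B"}, {"A", "C"}, {"B", "C"}]

def unreliability(cut_sets, p):
    """Rare-event approximation: sum of cut-set failure probabilities."""
    return sum(prod(p[d] for d in cs) for cs in cut_sets)

Q = unreliability(min_cut_sets, p_fail)
print(f"system unreliability ≈ {Q:.5f}, reliability ≈ {1 - Q:.5f}")

# crude importance measure: each device's contribution via its cut sets
for dev in p_fail:
    contrib = sum(prod(p_fail[d] for d in cs)
                  for cs in min_cut_sets if dev in cs)
    print(f"  {dev:>2}: contribution {contrib:.5f}")
```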
Abstract:
This paper evaluates the thermal and luminous performance of different louver configurations for an office room model located in Maceió-AL (Brazil), ranking the alternatives in a way that leads to choices with a balanced overall performance. Parametric analyses were carried out, based on computer simulations with the software Troplux 5 and DesignBuilder 2. The variables examined were the number of slats, slat slope and slat reflectance, considering the window facing North, South, East and West and a fixed shading mask for each orientation. The results refer to the internal average illuminance and the solar heat gains through the windows. It was observed that shading-device configurations with the same shading mask may have different luminous and thermal performance. The alternatives were ranked, so the information produced here can support decisions on designing shading devices in practice.
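As a hedged sketch of the two-criterion ranking the abstract mentions, the fragment below normalizes average illuminance (higher is better, up to a target) and solar heat gain (lower is better) and sorts louver alternatives by a weighted score. The alternatives, values and weights are invented placeholders, not the paper's results.

```python
# Minimal sketch: ranking louver alternatives on two normalized
# criteria; all values and weights are illustrative placeholders.

# (name, average illuminance in lux, solar heat gain in kWh/month)
alternatives = [
    ("4 slats, 0 deg,  70% refl.", 620, 38),
    ("6 slats, 15 deg, 50% refl.", 480, 26),
    ("8 slats, 30 deg, 50% refl.", 350, 19),
]
W_LIGHT, W_HEAT = 0.5, 0.5      # criterion weights (placeholders)
E_TARGET = 500.0                # design illuminance, lux

def score(illum, gain, gain_max):
    light = min(illum / E_TARGET, 1.0)   # saturate above the target
    heat = 1.0 - gain / gain_max         # lower gain scores higher
    return W_LIGHT * light + W_HEAT * heat

g_max = max(g for _, _, g in alternatives)
ranked = sorted(alternatives,
                key=lambda a: score(a[1], a[2], g_max), reverse=True)
for name, e, g in ranked:
    print(f"{name}: score {score(e, g, g_max):.2f}")
```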
Abstract:
The use of numerical simulation in the design and evaluation of product performance is ever increasing, and such estimates are increasingly needed at an early design stage, when physical prototypes are not available. When dealing with vibro-acoustic models, known to be computationally expensive, a question remains concerning the accuracy of such models in view of the well-known variability inherent in mass-production manufacturing techniques. In addition, both academia and industry have recently realized the importance of actually listening to a product's sound, either through measurements or through virtual sound synthesis, in order to assess its performance. In this work, the effect of significant parameter variations in a simplified vehicle vibro-acoustic model on loudness metrics is quantified using Monte Carlo analysis. The mapping from the system parameters to the sound quality metrics is performed by a fully coupled vibro-acoustic finite element model. Different loudness metrics are used, including the overall sound pressure level expressed in dB and the Specific Loudness in sones. Sound-quality equivalent sources are used to excite this model, and the sound pressure at the driver's head position is acquired and evaluated according to the sound quality metrics. No significant variation was observed when evaluating the system using the regular sound pressure level expressed in dB and dB(A); this happens because the third-octave filters average the results over frequency bands. On the other hand, Zwicker Loudness shows important variations, arguably due to masking effects.
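A hedged sketch of the Monte Carlo step described above: sample uncertain model parameters, map each sample through a frequency-response model to a pressure spectrum, and collect the scatter of the overall level in dB. A one-mode oscillator stands in for the fully coupled FE model, which would be far too large to reproduce here; all numbers are placeholders.

```python
# Minimal sketch: Monte Carlo scatter of an overall SPL metric under
# parameter variation. A 1-DOF oscillator stands in for the full
# vibro-acoustic FE model; all parameters are placeholders.
import random, math

P_REF = 2e-5                             # reference pressure, Pa
freqs = list(range(20, 501, 5))          # Hz

def spl_overall(m, k, c, F=1.0):
    """Overall SPL (dB) of a 1-DOF stand-in response at the 'ear'."""
    p_sq = 0.0
    for f in freqs:
        w = 2 * math.pi * f
        H = F / math.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)
        p_sq += (1e3 * H) ** 2           # crude pressure per band, Pa^2
    return 10 * math.log10(p_sq / P_REF**2)

random.seed(1)
samples = []
for _ in range(500):                     # Monte Carlo loop
    m = random.gauss(1.0, 0.05)          # 5 % scatter on mass
    k = random.gauss(2.0e5, 1.0e4)       # and on stiffness
    samples.append(spl_overall(m, k, c=40.0))

mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
print(f"overall SPL: {mean:.1f} dB ± {std:.2f} dB over 500 samples")
```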
Abstract:
Self-organisation is increasingly regarded as an effective approach to tackling the complexity of modern systems. The self-organisation approach allows the development of systems that exhibit complex dynamics and adapt to environmental perturbations without requiring complete knowledge of the future surrounding conditions. However, the development of self-organising systems (SOS) is driven by principles different from those of traditional software engineering. For instance, engineers typically design systems by combining smaller elements, where the composition rules depend on the reference paradigm but typically produce predictable results. Conversely, SOS display non-linear dynamics, which can hardly be captured by deterministic models, and, although robust with respect to external perturbations, are quite sensitive to changes in their inner working parameters. In this thesis, we describe methodological aspects concerning the early design stage of SOS built on the multiagent paradigm: in particular, we refer to the A&A metamodel, where MAS are composed of agents and artefacts, i.e. environmental resources. We then describe an architectural pattern extracted from a recurrent solution in designing self-organising systems: the pattern is based on a MAS environment formed by artefacts, modelling non-proactive resources, and by environmental agents acting on the artefacts so as to enable self-organising mechanisms. In this context, we propose a scientific approach for the early design stage of the engineering of self-organising systems: the process is iterative, and each cycle is articulated in four stages: modelling, simulation, formal verification, and tuning. During the modelling phase we mainly rely on the existence of a self-organising strategy observed in nature and, ideally, encoded as a design pattern. Simulations of an abstract system model are used to drive design choices until the required quality properties are obtained, thus providing guarantees that the subsequent design steps will lead to a correct implementation. However, system analysis based exclusively on simulation results does not provide sound guarantees for the engineering of complex systems: to this purpose, we envision the application of formal verification techniques, specifically model checking, in order to characterise the system behaviours exactly. During the tuning stage, parameters are tweaked in order to meet the target global dynamics and feasibility constraints. In order to evaluate the methodology, we analysed several systems: in this thesis we describe only three of them, the most representative ones from each of the three years of the PhD course. We analyse each case study using the presented method and describe the formal tools and techniques employed.
Abstract:
The development of the digital electronics market is founded on the continuous reduction of transistor size, to reduce the area, power and cost and to increase the computational performance of integrated circuits. This trend, known as technology scaling, is approaching nanometre dimensions. The lithographic process in the manufacturing stage is becoming more uncertain as transistor sizes scale down, resulting in larger parameter variation in future technology generations. Furthermore, the exponential relationship between the leakage current and the threshold voltage is limiting the scaling of the threshold and supply voltages, increasing the power density and creating local thermal issues such as hot spots, thermal runaway and thermal cycles. In addition, the introduction of new materials and the smaller device dimensions are reducing transistor robustness, which, combined with high temperatures and frequent thermal cycles, speeds up wear-out processes. These effects can no longer be addressed at the process level alone. Consequently, deep sub-micron devices will require solutions spanning several design levels, such as system and logic, and new approaches called Design for Manufacturability (DFM) and Design for Reliability. The purpose of these approaches is to bring awareness of device reliability and manufacturability into the early design stages, in order to introduce logic and systems able to cope with yield and reliability loss. The ITRS roadmap suggests the following research steps to integrate design for manufacturability and reliability into the standard automated CAD design flow: i) the implementation of new analysis algorithms able to predict the thermal behaviour of the system and its impact on power and speed performance; ii) high-level wear-out models able to predict the mean time to failure (MTTF) of the system; iii) statistical performance analysis able to predict the impact of process variation, both random and systematic. The new analysis tools have to be developed alongside new logic and system strategies to cope with the future challenges, for instance: i) thermal management strategies that increase the reliability and lifetime of devices by acting on tunable parameters such as the supply voltage or body bias; ii) error-detection logic able to interact with compensation techniques such as Adaptive Supply Voltage (ASV), Adaptive Body Bias (ABB) and error recovery, in order to increase yield and reliability; iii) architectures that are fundamentally resistant to variability, including locally asynchronous designs, redundancy, and error-correcting signal encodings (ECC). The literature already features works addressing the prediction of the MTTF, papers focusing on thermal management in general-purpose chips, and publications on statistical performance analysis. In my PhD research activity, I investigated the need for thermal management in future embedded low-power Network-on-Chip (NoC) devices. I developed a thermal analysis library, which has been integrated into a cycle-accurate NoC simulator and into an FPGA-based NoC simulator. The results have shown that an accurate layout distribution can avoid the onset of hot spots in a NoC chip. Furthermore, the application of thermal management can reduce the temperature and the number of thermal cycles, increasing system reliability. The thesis therefore advocates the need to integrate thermal analysis into the first design stages of embedded NoC design.

Later on, I focused my research on the development of a statistical process variation analysis tool able to address both random and systematic variations. The tool was used to analyse the impact of self-timed asynchronous logic stages in an embedded microprocessor. As a result, we confirmed the capability of self-timed logic to increase manufacturability and reliability. Furthermore, we used the tool to investigate the suitability of low-swing techniques for NoC system communication under process variations. In this case we discovered the superior robustness of low-swing links to systematic process variation, together with their good response to compensation techniques such as ASV and ABB. Hence low-swing signalling is a good alternative to standard CMOS communication in terms of power, speed, reliability and manufacturability. In summary, my work proves the advantage of integrating a statistical process variation analysis tool into the first stages of the design flow.
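A hedged sketch of the kind of statistical analysis described above: decompose threshold-voltage variation into a systematic component (shared across a die) and a random per-gate component, then propagate samples through a simple alpha-power-law delay model to obtain a worst-gate delay distribution. The delay model and every parameter value are illustrative placeholders, not those of the thesis tool.

```python
# Minimal sketch: random + systematic Vth variation propagated to gate
# delay via an alpha-power-law model; all numbers are placeholders.
import random, math

VDD, VTH0, ALPHA = 1.0, 0.3, 1.3      # nominal values (placeholders)
SIGMA_SYS, SIGMA_RND = 0.015, 0.02    # V, systematic / random sigma

def delay(vth, vdd=VDD, k=1.0):
    """Alpha-power-law gate delay (arbitrary units)."""
    return k * vdd / (vdd - vth) ** ALPHA

random.seed(7)
delays = []
for _ in range(200):                  # 200 sampled dies
    vth_sys = random.gauss(0.0, SIGMA_SYS)   # shared on this die
    # worst of 1000 gates on the die, each with its own random part
    delays.append(max(delay(VTH0 + vth_sys + random.gauss(0.0, SIGMA_RND))
                      for _ in range(1000)))

mean = sum(delays) / len(delays)
std = math.sqrt(sum((d - mean) ** 2 for d in delays) / len(delays))
print(f"worst-gate delay: {mean:.3f} ± {std:.3f} "
      f"(nominal {delay(VTH0):.3f})")
```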
Abstract:
Virtual machines emulating hardware devices are generally implemented in low-level languages and in a low-level style for performance reasons. This trend results in systems that are largely difficult to understand, difficult to extend and unmaintainable. As new general techniques for virtual machines arise, it becomes harder to incorporate or test these techniques because of early design and optimization decisions. In this paper we show how such decisions can be postponed to later phases by separating virtual machine implementation issues from the high-level machine-specific model. We construct compact models of whole-system VMs in a high-level language, which exclude all low-level implementation details. We use the pluggable translation toolchain PyPy to translate those models to executables. During the translation process, the toolchain reintroduces the VM implementation and optimization details for specific target platforms. As a case study we implement an executable model of a hardware gaming device. We show that our approach to VM building increases understandability, maintainability and extensibility while preserving performance.
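To illustrate the style of high-level executable model the abstract describes, here is a hedged sketch of a few-opcode fetch-decode-execute loop written as plain Python, with no low-level implementation detail; a toolchain like PyPy could in principle translate such a model while reintroducing optimizations. The instruction set is invented for the example and is not the paper's gaming-device model.

```python
# Minimal sketch: a high-level executable model of a tiny CPU,
# free of low-level implementation detail. The opcode set is an
# invented placeholder, not the paper's hardware gaming device.

LOADI, ADD, JNZ, HALT = range(4)

def run(program):
    """Fetch-decode-execute loop over (opcode, reg, arg) triples."""
    regs = [0] * 4
    pc = 0
    while True:
        op, reg, arg = program[pc]
        pc += 1
        if op == LOADI:            # regs[reg] <- immediate
            regs[reg] = arg
        elif op == ADD:            # regs[reg] += regs[arg]
            regs[reg] += regs[arg]
        elif op == JNZ:            # branch if regs[reg] != 0
            if regs[reg] != 0:
                pc = arg
        elif op == HALT:
            return regs

# sum 5 + 4 + ... + 1 by counting r0 down and accumulating in r1
program = [
    (LOADI, 0, 5), (LOADI, 1, 0), (LOADI, 2, -1),
    (ADD, 1, 0),                   # r1 += r0
    (ADD, 0, 2),                   # r0 -= 1
    (JNZ, 0, 3),                   # loop while r0 != 0
    (HALT, 0, 0),
]
print(run(program))                # -> [0, 15, -1, 0]
```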
Abstract:
In recent years there has been continued growth in the number of offshore operations for handling large equipment and objects, with emphasis on the installation and maintenance of devices for exploiting marine renewable energy, such as generators harnessing wind, wave and current energy. Considering the behaviour of these devices during manoeuvres, and owing to their size and their interaction with the surrounding fluid, the effect of inertial forces and torques is very important, which requires specific modelling. This paper discusses in particular the problem of modelling masses and moments of inertia, with the aim of using it in the simulation of the complex manoeuvres of these devices and in the automatic control systems designed for their offshore operations. Given the importance and complexity of added-mass modelling, a method for its identification at the early design stage, developed by the R&D Group on Marine Renewable Energy Technologies of the UPM (GITERM), is presented, together with its use in special cases such as emersion manoeuvres of devices from underwater to the surface.
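As a hedged illustration of why added mass matters in such manoeuvres, the sketch below uses the classical potential-flow results for two simple shapes (a sphere's added mass is half the displaced fluid mass; a long circular cylinder in crossflow adds roughly the displaced mass per unit length) to compare the force needed to accelerate a body in air and submerged. The geometry and acceleration are invented placeholders; the paper's identification method for real devices is more elaborate.

```python
# Minimal sketch: classical potential-flow added-mass estimates and
# their effect on the force needed for a manoeuvre. The geometry and
# acceleration are placeholders, not a real device from the paper.
import math

RHO_SEA = 1025.0   # seawater density, kg/m^3

def added_mass_sphere(radius):
    """Potential-flow added mass of a sphere: half the displaced mass."""
    return 0.5 * RHO_SEA * (4.0 / 3.0) * math.pi * radius**3

def added_mass_cylinder(radius, length):
    """Circular cylinder in crossflow: ~displaced mass per unit length."""
    return RHO_SEA * math.pi * radius**2 * length

m_body = 2.0e4                 # structural mass, kg (placeholder)
m_a = added_mass_sphere(radius=2.0)
accel = 0.5                    # manoeuvre acceleration, m/s^2

F_dry = m_body * accel
F_wet = (m_body + m_a) * accel
print(f"added mass: {m_a / 1e3:.1f} t")
print(f"force in air: {F_dry / 1e3:.1f} kN, "
      f"submerged: {F_wet / 1e3:.1f} kN")
```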