Abstract:
Energy auditing can make an important contribution to the identification and assessment of energy conservation measures (ECMs) in buildings. Numerous tools and software packages have been developed, with varying degrees of precision and complexity and different areas of use. This paper evaluates PHPP as a versatile, easy-to-use energy auditing tool and gives examples of how it has been compared with a dynamic simulation tool within the EU project iNSPiRe. PHPP is a monthly energy balance calculation tool based on EN 13790. It is intended to assist the design of Passive Houses and energy renovation projects and to guide the choice of appropriate ECMs. PHPP was compared against the transient simulation software TRNSYS for a single-family house and a multi-family house. It should be noted that dynamic building simulations can depend strongly on model assumptions and simplifications relative to reality, such as ideal heating versus a real heat emission system. With common boundary conditions set for both PHPP and TRNSYS, the ideal heating and cooling loads and demands were compared on a monthly and annual basis for seven European locations and for buildings with different floor areas, S/V ratios, U-values and glazed areas of the external walls. The results show that PHPP can be used to assess the heating demand of single-zone buildings, and the reduction of heating demand with ECMs, with good precision. The estimation of cooling demand is also acceptable if an appropriate shading factor is applied in PHPP. In general, PHPP intentionally overestimates heating and cooling loads, to be on the safe side for system sizing. Overall, the agreement with TRNSYS is better in cases with a higher-quality envelope, as in cold climates and for good energy standards. As an energy auditing tool intended for pre-design, it is a good, versatile and easy-to-use alternative to more complex simulation tools.
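The monthly balance method that PHPP builds on can be sketched in a few lines. The following is a minimal illustration of the EN 13790-style heating demand for one month (losses minus utilised gains); all numerical inputs are invented example values, not PHPP data.

```python
# Illustrative monthly heat balance in the spirit of EN 13790 (the method
# PHPP builds on). All numbers below are made-up example inputs.

def monthly_heating_demand(q_loss, q_gain, a=2.0):
    """Heating demand for one month [kWh]: losses minus usable gains.
    The gain-utilisation factor eta follows the EN 13790 form."""
    if q_loss <= 0:
        return 0.0
    gamma = q_gain / q_loss          # gain/loss ratio
    if abs(gamma - 1.0) < 1e-9:
        eta = a / (a + 1.0)          # limit case gamma -> 1
    else:
        eta = (1 - gamma**a) / (1 - gamma**(a + 1))
    return max(q_loss - eta * q_gain, 0.0)

# Example: January losses 1200 kWh, solar + internal gains 400 kWh
demand = monthly_heating_demand(1200.0, 400.0)
```

Summing such monthly demands over the year gives the annual heating demand that the paper compares against TRNSYS.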
Abstract:
Despite the growing number of sensors in the fields of chemistry and biology, the complexity of the interactions between the different molecules present during detection at the solid-liquid interface still remains to be studied in depth. In this context, it is of great interest to combine different detection methods in order to obtain complementary information. The main objective of this study is to design, fabricate and characterise a glass-integrated optical detector based on surface plasmon resonance, ultimately intended to be combined with other detection techniques, including a microcalorimeter. Surface plasmon resonance is a technique recognised for its sensitivity, well suited to surface detection; it has the advantage of being label-free and provides real-time monitoring of reaction kinetics. The main advantage of this sensor is that it was designed for a wide range of analyte refractive indices, from 1.33 to 1.48. These values cover most biological entities together with their binding layers, including the polymer matrices presented in this work. Since many biological studies require a measurement to be compared with a reference or with another measurement, the second objective of the project is to study the potential of the glass-integrated SPR system for multi-analyte detection. The first three chapters focus on the main objective of the project. The design of the device is presented, based on two different models combined with several analytical and numerical calculation tools. The first model, based on the weak-interaction approximation, provides most of the information needed to design the device. The second model, without approximation, validates the first, approximate model and completes and refines the design.
The fabrication process of the optical chip on glass is then described, together with the characterisation instruments and protocols. The resulting device exhibits bulk sensitivities between 1000 nm/RIU and 6000 nm/RIU depending on the refractive index of the analyte. The 3D integration of the waveguide, selectively buried in the glass, makes the device highly compact and therefore well suited to co-integration, with a microcalorimeter in particular. The last chapter of the thesis presents the study of several spectral multiplexing techniques suited to an integrated SPR system, exploiting the glass technology in particular. The objective is to provide at least two simultaneous detections. In this context, several solutions are proposed and the associated devices are designed, fabricated and tested.
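The meaning of a bulk sensitivity figure can be made concrete with a short sketch. The 1000-6000 nm/RIU range over indices 1.33-1.48 comes from the abstract; the linear interpolation between those endpoints is an assumption made purely for illustration, not the device's measured response curve.

```python
# Hypothetical illustration of how a bulk sensitivity (nm/RIU) converts a
# refractive-index change into a resonance-wavelength shift. The sensitivity
# range 1000-6000 nm/RIU is taken from the abstract; the linear interpolation
# over the 1.33-1.48 index range is an assumption for illustration only.

def sensitivity_nm_per_riu(n, n_lo=1.33, n_hi=1.48, s_lo=1000.0, s_hi=6000.0):
    """Linearly interpolated bulk sensitivity over the design index range."""
    t = (n - n_lo) / (n_hi - n_lo)
    return s_lo + t * (s_hi - s_lo)

def resonance_shift_nm(n, delta_n):
    """Resonance-wavelength shift for a small index change delta_n."""
    return sensitivity_nm_per_riu(n) * delta_n

# A 1e-3 RIU change near the index of water (n ~ 1.33)
shift = resonance_shift_nm(1.33, 1e-3)
```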
Abstract:
This thesis aims to describe and demonstrate a concept developed to facilitate the use of thermal simulation tools during the building design process. Despite the impact of architectural elements on the performance of buildings, some influential decisions are frequently based solely on qualitative information. Even though such design support is adequate for most decisions, the designer will eventually have doubts concerning the performance of some design decisions. These situations require some kind of additional knowledge to be properly approached. The concept of designerly ways of simulating focuses on the formulation and solution of design dilemmas, which are doubts about the design that cannot be fully understood or solved without using quantitative information. The concept intends to combine the power of analysis of computer simulation tools with the capacity for synthesis of architects. Three types of simulation tools are considered: solar analysis, thermal/energy simulation and CFD. Design dilemmas are formulated and framed according to the architect's process of reflection on performance aspects. Throughout the thesis, the problem is investigated in three fields: professional, technical and theoretical. This approach to distinct parts of the problem aimed to i) characterize different professional categories with regard to their design practice and use of tools, ii) review previous research on the use of simulation tools and iii) draw analogies between the proposed concept and concepts developed or described in previous works on design theory. The proposed concept was tested on eight design dilemmas extracted from three case studies in the Netherlands. The three investigated processes are houses designed by Dutch architectural firms. Relevant information and criteria for each case study were obtained through interviews and conversations with the architects involved.
The practical application, despite its success in the research context, allowed the identification of some limitations on the applicability of the concept, concerning the architects' need for technical knowledge and the current stage of evolution of simulation tools.
Abstract:
This work presents a fast, high-order model capable of representing a rotor configuration with a full cage or a grid, of reproducing the bar currents, and of accounting for space harmonics. The model uses a combined finite-element and coupled-circuit approach: the inductances are computed with finite elements, which gives the model high accuracy. This method offers a substantial saving in computation time over finite elements for transient simulations. Two simulation tools are developed: one in the time domain for dynamic solutions, and one in the phasor domain, for which an application to standstill frequency response (SSFR) tests is also presented. The construction of the model is described in detail, as is the procedure for modelling the rotor cage. The model is validated on synchronous machines: a 5.4 kVA laboratory machine and a large 109 MVA generator, whose experimental measurements are compared with the model's simulation results for tests such as no-load tests, three-phase and two-phase short circuits, and a load test.
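At its core, a phasor-domain coupled-circuit model solves a complex linear system built from the (here, finite-element-computed) inductances. The following is a minimal sketch of that step for two magnetically coupled R-L branches; all parameter values are illustrative, not the machines studied in the work.

```python
# Minimal sketch of a phasor-domain solve of two magnetically coupled R-L
# branches, the kind of linear system a coupled-circuit machine model
# assembles from finite-element-computed inductances. Values are
# illustrative only.
import math

def solve_2x2(z11, z12, z21, z22, v1, v2):
    """Cramer's rule for the 2x2 complex system Z I = V."""
    det = z11 * z22 - z12 * z21
    i1 = (v1 * z22 - v2 * z12) / det
    i2 = (z11 * v2 - z21 * v1) / det
    return i1, i2

f = 60.0                      # supply frequency [Hz]
w = 2 * math.pi * f           # angular frequency [rad/s]
R1, L1 = 0.5, 5e-3            # stator branch (example values)
R2, L2 = 0.3, 4e-3            # rotor bar loop (example values)
M = 3e-3                      # mutual inductance, e.g. from FE (example)
V1, V2 = 120.0 + 0j, 0j       # rotor loop short-circuited

i1, i2 = solve_2x2(R1 + 1j * w * L1, 1j * w * M,
                   1j * w * M,       R2 + 1j * w * L2,
                   V1, V2)
```

A full model does the same with much larger matrices, one row per stator circuit and per rotor bar loop.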
Abstract:
Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns of piracy, overproduction and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality, in order to increase the complexity and cost of reverse engineering. Most existing circuit obfuscation methods are based on the insertion of additional logic (called "key gates") or on camouflaging existing gates, to make it difficult for a malicious user to obtain the complete layout information without extensive computations to determine key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, he/she can use advanced logic analysis, circuit simulation tools and Boolean SAT solvers to reveal the unknown gate-level information without exhaustively trying all input vectors, thus bringing down the complexity of reverse engineering. To counter this problem, some 'provably secure' logic encryption algorithms that emphasize methodical selection of camouflaged gates have been proposed previously in the literature [1,2,3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don't-care conditions. We also present a proof of concept of a new functional or logic obfuscation technique that not only conceals, but modifies the circuit functionality in addition to the gate-level description, and can be implemented automatically during the design process. Our layout obfuscation technique utilizes don't-care conditions (namely, Observability and Satisfiability Don't Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification.
Here, camouflaging or obfuscating a gate means replacing the candidate gate with a 4-to-1 multiplexer, which can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. As such, we propose a method of camouflaged-gate selection based on the intersection of output logic cones. By choosing these candidate gates methodically, the complexity of reverse engineering (RE) can be made exponential, thus making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the RE complexity based on don't-care-based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end-users is achieved. It also makes it significantly harder for rogue elements in the supply chain to use, copy or replicate the same design with different logic. We analyze the reverse engineering complexity by applying our obfuscation algorithm to ISCAS-85 benchmarks. Our experimental results indicate that significant reverse engineering complexity can be achieved at minimal design overhead (average area overhead for the proposed layout obfuscation methods is 5.51% and average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future.
References: [1] R. Chakraborty and S. Bhunia, "HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009. [2] J. A. Roy, F. Koushanfar, and I. L. Markov, "EPIC: Ending Piracy of Integrated Circuits," in Design, Automation and Test in Europe (DATE), 2008, pp. 1069–1074. [3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, "Security Analysis of Integrated Circuit Camouflaging," in ACM Conference on Computer and Communications Security, 2013. [4] B. Liu and B. Wang, "Embedded reconfigurable logic for ASIC design obfuscation against supply chain attacks," in Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1–6.
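The camouflaging primitive cited above (a gate replaced by a 4-to-1 multiplexer acting as a 2-input lookup table) can be sketched in a few lines. The key names and configurations below are illustrative, not taken from the paper's benchmarks.

```python
# Sketch of the camouflaging primitive described above: a candidate gate is
# replaced by a 4-to-1 multiplexer whose four configuration bits act as a
# lookup table, so one cell can realize any 2-input/1-output Boolean
# function. The key values below are illustrative.

def mux4(config, a, b):
    """4-to-1 MUX used as a 2-input LUT: inputs (a, b) select one config bit."""
    return config[(a << 1) | b]

# Config bits ordered for inputs (a, b) = 00, 01, 10, 11
AND_KEY = (0, 0, 0, 1)
XOR_KEY = (0, 1, 1, 0)
NOR_KEY = (1, 0, 0, 0)

# The same physical cell computes different functions depending on the
# hidden key, which is what an attacker must recover for every such cell.
xor_truth_table = [mux4(XOR_KEY, a, b) for a in (0, 1) for b in (0, 1)]
```

With n camouflaged cells and 4 configurations each, a naive brute-force attack faces 4^n candidate keys, which is the exponential growth the gate-selection heuristics aim to enforce.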
Abstract:
Determining effective hydraulic, thermal, mechanical and electrical properties of porous materials by means of classical physical experiments is often time-consuming and expensive. Thus, accurate numerical calculations of material properties are of increasing interest in geophysical, manufacturing, bio-mechanical and environmental applications, among other fields. Characteristic material properties (e.g. intrinsic permeability, thermal conductivity and elastic moduli) depend on morphological details at the pore scale, such as the shape and size of pores and pore throats or cracks. To obtain reliable predictions of these properties, it is necessary to perform numerical analyses of sufficiently large unit cells. Such representative volume elements require optimized numerical simulation techniques. Current state-of-the-art simulation tools for calculating effective permeabilities of porous materials are based on various methods, e.g. lattice Boltzmann, finite volume or explicit jump Stokes methods. All of these approaches still have limitations in the maximum size of the simulation domain. In response to these deficits of the well-established methods, we propose an efficient and reliable numerical method that allows intrinsic permeabilities to be calculated directly from voxel-based data obtained from 3D imaging techniques such as X-ray microtomography. We present a modelling framework based on a parallel finite-difference solver, allowing the calculation of large domains with relatively low computing requirements (i.e. desktop computers). The presented method is validated on a diverse selection of materials, obtaining accurate results for a large range of porosities, wider than the ranges previously reported. Ongoing work includes the estimation of other effective properties of porous media.
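To make the voxel-to-property pipeline concrete, here is a back-of-envelope sketch. This is NOT the finite-difference solver the abstract describes; it only shows how a segmented voxel image maps to a porosity, together with a classical Kozeny-Carman permeability estimate. The toy geometry and grain size are invented.

```python
# Back-of-envelope sketch: porosity from voxel data plus a Kozeny-Carman
# permeability estimate. This is not the finite-difference solver described
# above -- only an illustration of how a voxel image maps to a scalar
# property. The 8x8x8 toy geometry and grain diameter are made up.

def porosity(voxels):
    """Fraction of pore voxels (True = pore) in a nested 3D list."""
    flat = [v for plane in voxels for row in plane for v in row]
    return sum(flat) / len(flat)

def kozeny_carman(phi, d_grain, c=180.0):
    """Classical Kozeny-Carman estimate of intrinsic permeability [m^2]."""
    return (phi**3 * d_grain**2) / (c * (1.0 - phi)**2)

n = 8
# Toy checkerboard geometry: pore space where x + y + z is even (phi = 0.5)
voxels = [[[(x + y + z) % 2 == 0 for z in range(n)]
           for y in range(n)] for x in range(n)]
phi = porosity(voxels)
k = kozeny_carman(phi, d_grain=100e-6)   # 100-micron grains (assumed)
```

A solver of the kind described above instead computes the velocity field through the actual pore geometry and recovers permeability from Darcy's law, which is what makes large domains computationally demanding.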
Abstract:
Several modern-day cooling applications require the incorporation of mini/micro-channel shear-driven flow condensers. There are several design challenges that need to be overcome in order to meet those requirements. The difficulty in developing effective design tools for shear-driven flow condensers is exacerbated by the lack of a bridge between the physics-based modelling of condensing flows and the current, popular approach based on semi-empirical heat transfer correlations. One of the primary contributors to this disconnect is that typical heat transfer correlations eliminate the dependence of the heat transfer coefficient on the method of cooling employed on the condenser surface, when this may very well not be justified. This is in direct contrast to direct physics-based modeling approaches, where the thermal boundary conditions have a direct and substantial impact on the heat transfer coefficient values. Typical heat transfer correlations instead introduce vapor quality as one of the variables on which the heat transfer coefficient depends. This study shows how, under certain conditions, a heat transfer correlation from direct physics-based modeling can be equivalent to typical engineering heat transfer correlations without making the same a priori assumptions. Another major factor that raises doubts about the validity of heat transfer correlations is the opacity associated with the application of flow regime maps for internal condensing flows. It is well known that flow regimes strongly influence heat transfer rates. However, several heat transfer correlations ignore flow regimes entirely and present a single correlation for all flow regimes. This is believed to be inaccurate, since one would expect significant differences in the heat transfer correlations for different flow regimes.
Several other studies present a heat transfer correlation for a particular flow regime; however, they ignore the method by which the extent of that flow regime is established. This thesis provides a definitive answer (in the context of stratified/annular flows) to: (i) whether a heat transfer correlation can always be independent of the thermal boundary condition and represented as a function of vapor quality, and (ii) whether a heat transfer correlation can be independently obtained for a flow regime without knowing the flow regime boundary (even if the flow regime boundary is represented through a separate and independent correlation). To obtain the results required to answer these questions, this study uses two numerical simulation tools: the approximate but highly efficient Quasi-1D simulation tool and the exact but more expensive 2D Steady Simulation tool. Using these tools and approximate values of the flow regime transitions, a deeper understanding of the current state of knowledge in flow regime maps and heat transfer correlations for shear-driven internal condensing flows is obtained. The ideas presented here can be extended to other flow regimes of shear-driven flows as well. Analogous correlations can also be obtained for internal condensers in gravity-driven and mixed-driven configurations.
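To illustrate the kind of engineering correlation under discussion, here is a Shah-type condensation correlation, a widely quoted form in which the two-phase coefficient depends on vapor quality and reduced pressure but not on the thermal boundary condition. It is shown only to make the structure concrete; it is not a result of this thesis, and the inputs are illustrative.

```python
# Illustration of a typical engineering correlation of the kind discussed
# above: a Shah-type condensation correlation where h depends on vapor
# quality x and reduced pressure, but not on the thermal boundary condition.
# Not a result of this thesis; inputs are illustrative values.

def shah_condensation_h(h_lo, x, p_reduced):
    """Two-phase heat transfer coefficient from the liquid-only value h_lo."""
    return h_lo * ((1 - x)**0.8
                   + 3.8 * x**0.76 * (1 - x)**0.04 / p_reduced**0.38)

# Sweep vapor quality at a fixed reduced pressure (illustrative values)
h_lo, p_r = 1500.0, 0.1    # liquid-only coefficient [W/m^2K], reduced pressure
profile = [shah_condensation_h(h_lo, x, p_r) for x in (0.1, 0.5, 0.9)]
```

Note that x and p_r are the only flow inputs: a physics-based model, by contrast, would also need the wall cooling condition, which is exactly the disconnect the thesis examines.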
Abstract:
Developing Cyber-Physical Systems requires methods and tools to support simulation and verification of hybrid (both continuous and discrete) models. The Acumen modeling and simulation language is an open source testbed for exploring the design space of what rigorous-but-practical next-generation tools can deliver to developers of Cyber-Physical Systems. Like verification tools, a design goal for Acumen is to provide rigorous results. Like simulation tools, it aims to be intuitive, practical, and scalable. However, it is far from evident whether these two goals can be achieved simultaneously. This paper explains the primary design goals for Acumen, the core challenges that must be addressed in order to achieve these goals, the “agile research method” taken by the project, the steps taken to realize these goals, the key lessons learned, and the emerging language design.
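A minimal example of the hybrid models in question is the classic bouncing ball: continuous dynamics (gravity) punctuated by a discrete event (impact with restitution). The sketch below uses fixed-step Euler integration for brevity; it is not Acumen code, and a rigorous tool would use validated or interval-based methods rather than this naive scheme.

```python
# Tiny hybrid-system simulation in the spirit of what Acumen models: a
# bouncing ball with continuous dynamics (gravity) plus a discrete event
# (impact with restitution). Fixed-step Euler for brevity; not Acumen code.

def simulate_bounce(h0=1.0, v0=0.0, g=9.81, e=0.8, dt=1e-4, t_end=2.0):
    """Return (time, height, velocity) samples; impacts reset v := -e*v."""
    t, h, v = 0.0, h0, v0
    trace = [(t, h, v)]
    while t < t_end:
        h += v * dt                   # continuous flow: dh/dt = v
        v -= g * dt                   # continuous flow: dv/dt = -g
        if h <= 0.0 and v < 0.0:      # discrete event: impact detected
            h, v = 0.0, -e * v        # restitution reset (guard + reset map)
        t += dt
        trace.append((t, h, v))
    return trace

trace = simulate_bounce()
peak_after_first_second = max(h for _, h, _ in trace[len(trace) // 2:])
```

The interplay of the guard condition and the reset map is exactly where naive simulators lose rigor (events can be missed between steps), which motivates Acumen's goal of combining simulation-style usability with verification-style guarantees.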
Abstract:
Radiation dose in x-ray computed tomography (CT) has become a topic of great interest due to the increasing number of CT examinations performed worldwide. In fact, CT scans are responsible for significant doses delivered to patients, much larger than the doses from the most common radiographic procedures. This thesis work, carried out at the Laboratory of Medical Technology (LTM) of the Rizzoli Orthopaedic Institute (IOR, Bologna), focuses on two primary objectives: the dosimetric characterization of the tomograph present at the IOR and the optimization of the clinical protocol for hip arthroplasty. In particular, after having verified the reliability of the dose estimates provided by the system, we compared the estimated doses delivered to 10 patients undergoing CT examination for the pre-operative planning of hip replacement with the Diagnostic Reference Level (DRL) for an osseous pelvis examination. Of the 10 patients considered, the doses were lower than the DRL for only 3. This highlighted the need to optimize the clinical protocol. The optimization was investigated using a human femur from a cadaver. Quantitative analysis and comparison of 3D reconstructions were carried out after manual segmentation of the femur from different CT acquisitions. Dosimetric simulations of the CT acquisitions on the femur were also performed and associated with the accuracy of the 3D reconstructions, to identify the optimal combination of CT acquisition parameters. The study showed that protocol optimization, both in terms of Hausdorff distance and in terms of effective dose (ED) to the patient, may be achieved simply by modifying the pitch value in the protocol, choosing between 0.98 and 1.37.
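The reconstruction-accuracy metric used above, the Hausdorff distance, can be sketched in pure Python for finite point sets. The toy coordinates are invented; in practice the metric would be evaluated on the vertices of the segmented femur surfaces.

```python
# Pure-Python sketch of the symmetric Hausdorff distance used above to
# compare 3D surface reconstructions. The point sets are toy data; real use
# would run on the vertices of segmented surface meshes.
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two finite 3D point sets."""
    def directed(src, dst):
        # Worst-case distance from any point of src to its nearest dst point
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))

ref  = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
test = [(0, 0, 0.1), (1, 0, 0.1), (0, 1, 0.1)]   # shifted 0.1 along z
dist = hausdorff(ref, test)
```

A small Hausdorff distance between the reconstruction from a low-dose acquisition and the reference reconstruction indicates that the dose reduction did not degrade the geometry needed for surgical planning.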
Abstract:
In almost all industrialized countries, the energy sector has undergone a severe restructuring that has produced greater complexity in market players' interactions. The complexity that these changes brought made way for the creation of decision support tools that facilitate the study and understanding of these markets. MASCEM – "Multiagent Simulator for Competitive Electricity Markets" – arose in this context, providing a framework for evaluating new rules, new behaviour and new participants in deregulated electricity markets. MASCEM uses game theory, machine learning techniques, scenario analysis and optimisation techniques to model market agents and to provide them with decision support. ALBidS is a multiagent system created to provide decision support to market negotiating players. Fully integrated with MASCEM, it considers several different methodologies based on very distinct approaches. The Six Thinking Hats is a powerful technique used to look at decisions from different perspectives. Its goal is to force the thinker to move outside their habitual thinking style. It was developed to be used mainly at meetings, in order to "run better meetings, make faster decisions". This dissertation presents a study of the applicability of the Six Thinking Hats technique in decision support systems, particularly within the multiagent paradigm of the MASCEM simulator. This work therefore proposes a new agent: a meta-learner based on the Six Thinking Hats (STH) technique that organizes several different ALBidS strategies and combines their distinct answers into a single one that is expected to outperform any of them.
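The meta-learner idea can be sketched as a weighted combination of the answers proposed by several strategies. The strategy names, the error-based weighting and all numbers below are invented for illustration; ALBidS' actual strategies and the STH-based organization are considerably more elaborate.

```python
# Sketch of the meta-learner idea described above: combine the bid prices
# proposed by several strategies into one answer, weighting each strategy
# by its recent accuracy. Strategy names, weights and numbers are invented;
# the actual ALBidS strategies and STH organization are more elaborate.

def combine_bids(proposals, errors):
    """Weighted average of strategy proposals; weight = 1 / (error + eps)."""
    eps = 1e-9  # guards against division by zero for a perfect strategy
    weights = {s: 1.0 / (errors[s] + eps) for s in proposals}
    total = sum(weights.values())
    return sum(proposals[s] * weights[s] for s in proposals) / total

proposals = {"game_theory": 42.0, "neural_net": 45.0, "scenario": 40.0}
errors    = {"game_theory": 1.0,  "neural_net": 4.0,  "scenario": 2.0}
bid = combine_bids(proposals, errors)   # leans toward the most accurate one
```

The weighting plays the role of the meta-learner's judgment: strategies with a better recent track record pull the combined answer toward their proposal.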
Abstract:
This technical report describes the output data files generated by the Repeater-Based Hybrid Wired/Wireless Network Simulator and the Bridge-Based Hybrid Wired/Wireless Network Simulator, together with the tools used to validate these files and extract information from them.
Abstract:
Due to the increasing acceptance of BPM, BPM tools are nowadays extensively used in organizations. Core to BPM are the process modeling languages, of which BPMN is currently the one receiving the most attention. Once a business process is described using BPMN, one can use a process simulation approach to find its optimized form. In this context, the simulation of business processes, such as those defined in BPMN, appears as an obvious way of improving processes. This paper analyzes the business process modeling and simulation areas, identifying the elements that must be present in the BPMN language in order to allow processes described in BPMN to be simulated. During this analysis, a set of existing BPM tools that support BPMN are compared regarding their limitations in terms of simulation support.
Abstract:
The last decade has shown that the global paper industry needs new processes and products in order to reassert its position. As the paper markets in Western Europe and North America have stabilized, competition has tightened. Along with the development of more cost-effective processes and products, new process design methods are also required to break the old molds and create new ideas. This thesis discusses the development of a process design methodology based on simulation and optimization methods. A bi-level optimization problem and a solution procedure for it are formulated and illustrated. Computational models and simulation are used to capture the phenomena inside a real process, and mathematical optimization is exploited to find the best process structures and control principles for the process. Dynamic process models are used inside the bi-level optimization problem, which is assumed to be dynamic and multiobjective due to the nature of papermaking processes. The numerical experiments show that the bi-level optimization approach is useful for different kinds of problems related to process design and optimization. Here, the design methodology is applied to a constrained process area of a papermaking line. However, the same methodology is applicable to all types of industrial processes, e.g. the design of biorefineries, because the methodology is fully generalized and can easily be modified.
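The bi-level structure can be illustrated with a toy example: the outer level chooses a design parameter, the inner level finds the best control for that design, and the outer objective is evaluated at the inner optimum. The quadratic cost models and the grids below are invented stand-ins for the dynamic process simulations used in the thesis.

```python
# Toy bi-level optimization in the spirit of the methodology above. The
# quadratic cost models and the 0.7 "quality target" are invented stand-ins
# for the dynamic papermaking-process simulations used in the thesis.

def inner_best_control(design):
    """Inner level: pick the control minimizing operating cost for a design."""
    candidates = [c / 100.0 for c in range(0, 101)]        # control in [0, 1]
    cost = lambda c: (c - 0.5 * design)**2 + 0.1 * c
    return min(candidates, key=cost)

def outer_objective(design):
    """Outer level: quality penalty plus operating cost at the inner optimum."""
    c = inner_best_control(design)
    return (design - 0.7)**2 + (c - 0.5 * design)**2 + 0.1 * c

designs = [d / 10.0 for d in range(0, 11)]                 # design in [0, 1]
best_design = min(designs, key=outer_objective)
```

The nesting is the essential point: every outer evaluation triggers a full inner optimization, which is why efficient simulation models matter so much at the inner level.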
Abstract:
This paper completes the comparative analysis of the investment demand behaviour of a sample of specialised arable crop farms, for farm buildings and for machinery and equipment, as a function of the different types and levels of Common Agricultural Policy support, in selected European Union Member States. This contribution focuses on their quantitative interdependence, calculating the relevant elasticity measures. In turn, these constitute the methodological tool to simulate the expected percentage change in average net investment levels associated with the implementation of the recently proposed, and currently debated, reductions in the Pillar I Direct Payments disbursed under the Common Agricultural Policy. Evidence suggests a statistically significant elastic and inelastic relationship between both types of subsidies and the investment levels for both asset classes in Germany and Italy, respectively. An elastic dependence of investment in farm buildings on decoupled subsidies exists in Hungary, while changes in the level of coupled payments appear to translate into less than proportional changes in the demand for both farm buildings and machinery and equipment in France. Coupled payments appear to influence the UK demand for both asset classes in an elastic manner, while decoupled support seems to induce a similar effect on investment in machinery and equipment. Since the currently discussed Common Agricultural Policy reform options imply, almost exclusively, a reduction in the level of support granted through Direct Payments, the simulated effects were expected to reveal a worsening of the farm investment prospects for both asset types (i.e., a larger negative investment or a smaller positive one). The actual evidence largely matches this expectation, with the sole exception of investment in machinery and equipment in France and Italy, which reaches smaller negative or larger positive levels irrespective of the magnitude of the implemented cuts in Direct Payments.
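The simulation step described above reduces to simple elasticity arithmetic: the expected percentage change in net investment is the estimated elasticity times the percentage change in Direct Payments. The elasticity value and payment cut below are invented for illustration, not the paper's estimates.

```python
# Sketch of the elasticity-based simulation described above: expected %
# change in average net investment = estimated elasticity x % change in
# Direct Payments. The elasticity and the cut are invented illustrative
# values, not the paper's estimates.

def simulated_investment_change(elasticity, payment_change_pct):
    """Expected % change in net investment for a % change in subsidies."""
    return elasticity * payment_change_pct

# Hypothetical: elastic response of buildings investment to decoupled payments
elasticity = 1.4                      # |elasticity| > 1 means elastic
cut = -10.0                           # a 10% cut in Direct Payments
effect = simulated_investment_change(elasticity, cut)
```

With an elastic response (|elasticity| > 1), a given percentage cut in payments translates into a more than proportional fall in investment, which is the mechanism behind the worsened investment prospects discussed above.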