967 results for Concept Design
Abstract:
Sensors and actuators based on laminated piezocomposite shells are in increasing demand in the field of smart structures. The distribution of piezoelectric material within the material layers affects the performance of these structures; therefore, its amount, shape, size, placement, and polarization should be considered simultaneously in an optimization problem. In addition, previous works suggest that the concept of a laminated piezocomposite structure including a fiber-reinforced composite layer can increase the performance of these piezoelectric transducers; however, the design optimization of these devices has not yet been fully explored. Thus, this work aims at developing a methodology, based on topology optimization techniques, for the static design of laminated piezocomposite shell structures that considers the optimization of the piezoelectric material and polarization distributions together with the optimization of the fiber angles of the composite orthotropic layers, which are free to assume different values along the same composite layer. The finite element model is based on laminated piezoelectric shell theory, using the degenerate three-dimensional solid approach and first-order shell theory kinematics that account for transverse shear deformation and rotary inertia effects. The topology optimization formulation combines the piezoelectric material with penalization and polarization (PEMAP-P) model, whose design variables describe the amount of piezoelectric material and the polarization sign at each finite element, with discrete material optimization (DMO) for the fiber angles. Three different objective functions are formulated for the design of actuators, sensors, and energy harvesters. Results for laminated piezocomposite shell transducers are presented to illustrate the method. Copyright (C) 2012 John Wiley & Sons, Ltd.
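The PEMAP-P interpolation mentioned in the abstract can be pictured with a small sketch. Assuming a SIMP-like power-law penalization of the material amount and a signed polarization factor (the exponents and coefficient value below are illustrative, not taken from the paper):

```python
import numpy as np

def pemap_p(x_mat, x_pol, e0, p=3, q=1):
    """PEMAP-P-style interpolation (illustrative sketch, not the paper's code).

    x_mat : pseudo-density in [0, 1] -- amount of piezoelectric material
    x_pol : polarization variable in [0, 1] -- 0.5 maps to zero, extremes to -/+
    e0    : nominal piezoelectric coefficient of the solid material
    p, q  : penalization exponents pushing the design toward discrete values
    """
    # Penalized material amount (SIMP-like) times a signed polarization factor
    return (x_mat ** p) * ((2.0 * x_pol - 1.0) ** q) * e0

# Example: one (material, polarization) design-variable pair per finite element
x_mat = np.array([1.0, 0.8, 0.2])
x_pol = np.array([1.0, 0.0, 0.5])      # positive, negative, unpolarized
print(pemap_p(x_mat, x_pol, e0=-9.0))  # e.g. an e31-type coefficient in C/m^2
```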
Abstract:
The Cherenkov Telescope Array (CTA) is a new observatory for very high-energy (VHE) gamma rays. CTA has ambitious science goals, which require full-sky coverage, a sensitivity improved by about an order of magnitude over existing VHE gamma-ray observatories, coverage of about four decades in energy, from a few tens of GeV to above 100 TeV, and enhanced angular and energy resolutions. An international collaboration has formed with more than 1000 members from 27 countries in Europe, Asia, Africa and North and South America. In 2010 the CTA Consortium completed a Design Study and started a three-year Preparatory Phase leading to production readiness of CTA in 2014. In this paper we introduce the science goals and the concept of CTA, and provide an overview of the project.
Abstract:
In fluid dynamics research, pressure measurements are of great importance in defining the flow field acting on aerodynamic surfaces. The experimental approach is, in fact, fundamental to avoid the complexity of the mathematical models used for predicting fluid phenomena. It is important to note that, when using in-situ sensors to monitor pressure over large domains with highly unsteady flows, several problems are encountered with classical techniques, owing to transducer cost, intrusiveness, time response and operating range. An interesting approach to satisfying these sensor requirements is to implement a sensor network capable of acquiring pressure data on an aerodynamic surface through a wireless communication system, collecting the data with the lowest possible level of environmental intrusion. In this thesis a wireless sensor network for fluid-field pressure measurements has been designed, built and tested. To develop the system, a capacitive pressure sensor based on a polymeric membrane and readout circuitry based on a microcontroller have been designed, built and tested. The wireless communication has been implemented on the Zensys Z-WAVE platform, and network and data management have been implemented. Finally, the full embedded system with antenna has been created. As a proof of concept, monitoring the pressure on the top of the mainsail of a sailboat was chosen as a working example.
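The thesis's actual sensor calibration is not given here; as a generic illustration, a capacitive readout under a parallel-plate, small-deflection assumption might convert capacitance to pressure as follows (all constants are hypothetical calibration values):

```python
# Illustrative conversion from measured capacitance to pressure for a
# polymer-membrane capacitive sensor, assuming a parallel-plate model with a
# pressure-proportional gap change: C(P) = eps0*A / (d0 - k*P).
# The constants below are hypothetical, not taken from the thesis.

EPS0 = 8.854e-12          # vacuum permittivity, F/m
AREA = 25e-6              # electrode area, m^2 (assumed)
D0 = 50e-6                # rest gap, m (assumed)
K = 1.0e-10               # membrane compliance, m/Pa (assumed)

def capacitance(pressure_pa: float) -> float:
    """Ideal capacitance for a given applied pressure (parallel-plate model)."""
    return EPS0 * AREA / (D0 - K * pressure_pa)

def pressure_from_capacitance(c_meas: float) -> float:
    """Invert the model: recover pressure from a measured capacitance."""
    return (D0 - EPS0 * AREA / c_meas) / K

c = capacitance(1000.0)               # 1 kPa applied
print(pressure_from_capacitance(c))   # ~1000.0 Pa
```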
Abstract:
In recent years, an ever increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs in design, realization and maintenance. This trend toward more complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy products packaged in boxes, such as food or cigarettes. Their complexity is also reflected in the fact that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical and electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activities inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different production needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, as a support to the maintenance operations of the machine. The kinds of facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured", way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been receiving this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMS contain, together with reliable mechanical elements, an increasing number of electronic devices, which are by their own nature more vulnerable. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function with the desired level of reliability and safety; the next step is to prevent faults and eventually reconfigure the control system so that faults are tolerated. On this topic, an important contribution to formal verification of logic control, fault diagnosis and fault tolerant control comes from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey on the state of the art of software engineering paradigms applied to industrial automation is given. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems that should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
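The Discrete Event Systems flavor of such verification lends itself to a small illustration. The sketch below models a plant component as a finite automaton and checks reachability of a fault state by breadth-first search; the states and events are invented for illustration and are not the thesis's actual Generalized Actuator models.

```python
from collections import deque

# Minimal discrete-event model: a finite automaton as {state: {event: next_state}}.
# States and events here are illustrative only.
actuator = {
    "idle":    {"start": "moving"},
    "moving":  {"done": "idle", "fault": "failed"},
    "failed":  {"reset": "idle"},
}

def reachable(automaton, init):
    """Breadth-first search over the event transitions (basic reachability check)."""
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        for nxt in automaton.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A safety-style query: is the 'failed' state reachable, and is it recoverable?
print("failed" in reachable(actuator, "idle"))   # True: faults can occur
print("idle" in reachable(actuator, "failed"))   # True: recovery path exists
```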
Abstract:
The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of this pavement was compared to that of the same pavement structure with different kinds of asphalt concrete as the surface layer. Three eco-friendly materials, two warm mix asphalts and a rubberized asphalt concrete, were analyzed in comparison to a conventional asphalt concrete. The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. In Chapter I the problem of eco-compatible asphalt pavement design is introduced; the low environmental impact materials, Warm Mix Asphalt and Rubberized Asphalt Concrete, are described in detail, and the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a thorough laboratory characterization based on appropriate materials selection and performance evaluation. In Chapter III, CalME is introduced through an explanation of the design approaches it provides, with particular attention to the I-R procedure. In Chapter IV, the experimental program is presented, with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are shown in Chapters V and VI, respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rut depth in each bound layer were analyzed.
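The incremental-recursive idea is compact enough to sketch: each increment's output (accumulated damage, hence a reduced layer modulus) becomes the input of the next increment. The loop below uses a toy damage law with invented parameters; it illustrates the I-R structure only, not CalME's actual fatigue and shear models.

```python
# Sketch of an incremental-recursive (I-R) simulation loop: the output of one
# time increment (accumulated damage, reduced modulus) is the input to the next.
# The damage law and all parameters are illustrative placeholders.

E0 = 10_000.0        # undamaged surface-layer modulus, MPa (assumed)
E_MIN = 1_000.0      # fully damaged modulus floor, MPa (assumed)
damage = 0.0

def damage_increment(modulus, load_reps):
    """Toy fatigue-damage law: softer (more damaged) layers damage faster."""
    return min(1.0, load_reps * 1e-7 * (E0 / modulus))

for month in range(1, 121):                        # 10 years, monthly increments
    modulus = E0 * (1 - damage) + E_MIN * damage   # damaged modulus feeds next step
    damage = min(1.0, damage + damage_increment(modulus, load_reps=50_000))
    if month % 24 == 0:
        print(f"year {month // 12}: damage={damage:.3f}, E={modulus:.0f} MPa")
```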
Abstract:
Motivated by the need to understand the underlying forces that trigger network evolution, we develop a multilevel, theoretically grounded and empirically testable model to examine the relationship between changes in the external environment and network change. We refer to network change as the dissolution or replacement of an interorganizational tie, adding also the case of the formation of new ties with new or preexisting partners. Previous research has paid scant attention to the organizational consequences of quantum change enveloping entire industries, favoring instead an emphasis on continuous change. To highlight radical change we introduce the concept of the environmental jolt. The September 11 terrorist attacks provide us with a natural experiment to test our hypotheses on the antecedents and consequences of network change. Since network change can be explained at multiple levels, we incorporate firm-level variables as moderators. The empirical setting is the global airline industry, which can be regarded as a constantly changing network of alliances. The study reveals that firms react to environmental jolts by forming homophilous ties and transitive triads, as opposed to their behavior in non-jolt periods. Moreover, we find that, all else being equal, firms that adopt a brokerage posture obtain positive returns. However, in the face of an environmental jolt, brokerage relates negatively to firm performance, and this negative relationship is more pronounced for larger firms. Our findings suggest that jolts are an important predictor of network change, that they significantly affect operational returns, and that they should thus be incorporated in studies of network dynamics.
Abstract:
The research activities described in the present thesis have been oriented to the design and development of components and technological processes aimed at optimizing the performance of plasma sources in advanced material treatments. Consumable components for high definition plasma arc cutting (PAC) torches were studied and developed. Experimental activities focussed in particular on modifications of the emissive insert, with respect to the standard electrode configuration comprising a press-fit hafnium insert in a copper body holder, to improve its durability. Based on a deep analysis of both the scientific and patent literature, different solutions were proposed and tested. First, the behaviour of Hf cathodes operating at high current levels (250 A) in an oxidizing atmosphere was experimentally investigated, optimizing the initial shape of the electrode emissive surface with respect to expected service life. Moreover, the microstructural modifications of the Hf insert in PAC electrodes were experimentally investigated during the first cycles, in order to understand the phenomena occurring on and under the Hf emissive surface that are involved in the electrode erosion process. Thereafter, the research activity focussed on producing, characterizing and testing prototypes of composite inserts combining powders of high thermal conductivity (Cu, Ag) and high thermionic emissivity (Hf, Zr) materials. The complexity of the thermal plasma torch environment required an integrated approach also involving physical modelling. Accordingly, a detailed line-by-line method was developed to compute the net emission coefficient of Ar plasmas at temperatures ranging from 3000 K to 25000 K and pressures ranging from 50 kPa to 200 kPa, for optically thin and partially self-absorbed plasmas. Finally, prototypal electrodes were studied and realized for a newly developed plasma source, based on the plasma needle concept and devoted to the generation of atmospheric pressure non-thermal plasmas for biomedical applications.
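As a rough illustration of what a line-by-line computation involves, the toy sketch below sums broadened line contributions to a plasma emission coefficient on a wavelength grid. The line strengths and widths are invented placeholders; a real computation uses tabulated transition probabilities, level populations at the plasma temperature, and pressure broadening, none of which is reproduced here.

```python
import numpy as np

# Toy line-by-line summation: the emission coefficient at each wavelength is
# the sum of Gaussian-broadened contributions from individual atomic lines.
# Line strengths and widths are invented; only the line centers correspond to
# well-known Ar I wavelengths.

wavelengths = np.linspace(400e-9, 900e-9, 200_001)   # wavelength grid, m

lines = [  # (center wavelength [m], integrated strength [W/m^3/sr], sigma [m])
    (696.5e-9, 1.0e6, 0.5e-9),
    (763.5e-9, 2.5e6, 0.5e-9),
    (811.5e-9, 3.0e6, 0.5e-9),
]

def emission_coefficient(grid, line_list):
    """Sum normalized Gaussian line profiles over the wavelength grid."""
    eps = np.zeros_like(grid)
    for lam0, strength, sigma in line_list:
        profile = np.exp(-0.5 * ((grid - lam0) / sigma) ** 2)
        profile /= sigma * np.sqrt(2 * np.pi)   # normalize each line to unit area
        eps += strength * profile
    return eps

eps = emission_coefficient(wavelengths, lines)
total = np.trapz(eps, wavelengths)              # integrate over the spectrum
print(f"integrated emission ~ {total:.3e} W/m^3/sr")
```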
Abstract:
Through the use of the Cloud Foundry "stack" concept, a new form of isolation is provided to applications running on the PaaS, together with a new deployment feature that can easily scale on distributed systems, on both public and private clouds.
Abstract:
With this dissertation research we investigate intersections between design and marketing and, in this respect, which factors contribute to a product design becoming brand formative. We have developed a Brand Formative Design (BFD) framework, which investigates individual design features in a holistic, comparable, brand-relevant, and consumer-specific context. We discuss what kinds of characteristics contribute to BFD, illuminate how they should be applied, and examine: a holistic framework leading to Brand Formative Design; the identification and assessment of BFD drivers; the dissection of products into three Distinctive Design Levels; the detection of surprising design preferences; the appropriate degree of scheme deviation with evolutionary design; simulated BFD development processes with three different products and the integration of consumers; future-oriented objectification, comparability and assessment of design; and recommendations for the management of design in a brand-specific context. Design is a product feature that contributes significantly to the success of products. However, the development of new design involves challenges. Design can hardly be objectified; many people have an opinion concerning the attractiveness of new products but cannot formulate their future preferences. Product design is widely developed based on intuition, which can make the management of design difficult. Here the concept of Brand Formative Design provides a framework that helps to structure, objectify, develop and assess new evolutionary design in brand-relevant and future-relevant contexts, while also integrating consumers and their preferences without overly restricting creativity.
Abstract:
Cloud services are becoming ever more important in everyone's life. Cloud storage? Web mail? We don't need to be working in big IT companies to be surrounded by cloud services. Another thing that is growing in importance, or at least that should be considered ever more important, is the concept of privacy. The more we rely on services about which we know close to nothing, the more we should worry about our privacy. In this work, I analyze a prototype software system based on a peer-to-peer architecture for offering cloud services, to see whether it is possible to make it completely anonymous: not only will the users be anonymous, but the peers composing the system will not know each other's real identities either. To make this possible, I make use of anonymizing networks such as Tor. I start by studying the state of the art of cloud computing, looking at some real examples, and then analyze the architecture of the prototype, trying to highlight the differences between its distributed nature and the somewhat centralized solutions offered by the famous vendors. After that, I go as deep as possible into the working principles of anonymizing networks, because they are not something that can simply be 'applied' mindlessly; some de-anonymization techniques are very subtle, so things must be studied carefully. I then implement the required changes and test the new anonymized prototype to see how its performance differs from that of the standard one. The prototype is run on many machines, orchestrated by a tester script that automatically starts and stops them and makes all the required API calls. As for where to find all these machines, I make use of Amazon EC2 cloud services and their on-demand instances.
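As an illustration of the orchestration step, a tester script might provision the machines with boto3 roughly as follows; the AMI ID, instance type and count are placeholders, and the prototype-specific start/stop and API calls are deliberately elided.

```python
import boto3

# Sketch of a tester script that provisions on-demand EC2 instances for the
# prototype's peers, waits for them to boot, and tears them down afterwards.
# ImageId and InstanceType are placeholders, not values from the thesis.

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI with the prototype installed
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=5,                        # one machine per peer
)

for inst in instances:
    inst.wait_until_running()          # block until each instance is up

try:
    pass  # here the tester would start the peers and drive the prototype's API
finally:
    for inst in instances:
        inst.terminate()               # always release the on-demand instances
```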
Abstract:
BACKGROUND Students frequently hold a number of misconceptions related to temperature, heat and energy. There is currently no concept inventory with sufficiently high internal reliability to assess these concept areas for research purposes. Consequently, there is little data on the prevalence of these misconceptions amongst undergraduate engineering students. PURPOSE (HYPOTHESIS) This work presents the Heat and Energy Concept Inventory (HECI) to assess prevalent misconceptions related to: (1) temperature vs. energy, (2) temperature vs. perceptions of hot and cold, (3) factors that affect the rate vs. amount of heat transfer, and (4) thermal radiation. The HECI is also used to document the prevalence of misconceptions amongst undergraduate engineering students. DESIGN/METHOD Item analysis, guided by classical test theory, was used to refine individual questions on the HECI. The HECI was used in a one-group, pre-test/post-test design to assess the prevalence and persistence of the targeted misconceptions in a population of undergraduate engineering students at diverse institutions. RESULTS Internal consistency reliability was assessed using the Kuder-Richardson Formula 20; it was 0.85 for the entire instrument and ranged from 0.59 to 0.76 for the four subcategories of the HECI. Student performance on the HECI went from 49.2% to 54.5% after instruction. Gains on each of the individual subscales of the HECI, while generally statistically significant, were similarly modest. CONCLUSIONS The HECI provides sufficiently high estimates of internal consistency reliability to be used as a research tool to assess students' understanding of the targeted concepts. Use of the instrument demonstrates that student misconceptions are both prevalent and resistant to change through standard instruction.
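For reference, the KR-20 statistic reported above can be computed from a 0/1 response matrix in a few lines. The data below are made up, and conventions differ on whether the total-score variance uses N or N-1 in the denominator; the sketch uses N-1.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson Formula 20 for dichotomous (0/1) item responses.

    responses: students x items matrix of 0/1 scores.
    """
    k = responses.shape[1]                         # number of items
    p = responses.mean(axis=0)                     # proportion correct per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Tiny made-up example: 4 students x 3 items
data = np.array([[1, 1, 1],
                 [1, 0, 1],
                 [0, 0, 1],
                 [0, 0, 0]])
print(round(kr20(data), 3))   # 0.938 for this toy data
```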
Abstract:
We describe and analyze the efficiency of a new solar-thermochemical reactor concept, which employs a moving packed bed of reactive particles to produce H2 or CO from solar energy and H2O or CO2. The packed bed reactor incorporates several features essential to achieving high efficiency: spatial separation of pressures, temperatures, and reaction products in the reactor; solid-solid sensible heat recovery between reaction steps; continuous on-sun operation; and direct solar illumination of the working material. Our efficiency analysis includes material thermodynamics and a detailed accounting of energy losses, and demonstrates that vacuum pumping, made possible by the innovative pressure separation approach in our reactor, has a decisive efficiency advantage over inert gas sweeping. We show that in a fully developed system, using CeO2 as the reactive material, the conversion efficiency of solar energy into H2 and CO at the design point can exceed 30%. The reactor's operational flexibility makes it suitable for a wide range of operating conditions, allowing for high efficiency on an annual average basis. The mixture of H2 and CO, known as synthesis gas, is not only usable as a fuel but is also a universal starting point for the production of synthetic fuels compatible with the existing energy infrastructure. This would make it possible to replace the petroleum derivatives used for transportation in the U.S. using less than 0.7% of the U.S. land area, a roughly two-orders-of-magnitude improvement over mature biofuel approaches. In addition, the packed bed reactor design is flexible and can be adapted to new, better-performing reactive materials.
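At its core, the headline figure is the chemical energy stored in the fuel divided by the solar energy spent per cycle. The sketch below shows that accounting with invented loss terms; the numbers are placeholders, not results from the paper's detailed thermodynamic analysis.

```python
# Back-of-envelope solar-to-fuel efficiency for a two-step thermochemical
# cycle: eta = (HHV of fuel produced) / (total solar input per cycle).
# All energy figures below are illustrative placeholders.

HHV_H2 = 286e3            # J/mol, higher heating value of H2

def solar_to_fuel_efficiency(q_reduction, q_sensible_loss, q_pumping, n_h2):
    """Efficiency = chemical energy stored in H2 / solar energy spent."""
    q_solar = q_reduction + q_sensible_loss + q_pumping   # J per cycle
    return n_h2 * HHV_H2 / q_solar

eta = solar_to_fuel_efficiency(
    q_reduction=500e3,        # endothermic reduction of the oxide (assumed)
    q_sensible_loss=200e3,    # sensible heat not recovered between steps (assumed)
    q_pumping=50e3,           # solar-equivalent work for vacuum pumping (assumed)
    n_h2=1.0,                 # mol H2 produced per cycle (assumed)
)
print(f"eta = {eta:.1%}")     # ~38% under these made-up numbers
```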
Abstract:
The Gracias Laboratory at Johns Hopkins University has developed microgrippers that utilize chemically actuated joints for use in micro-surgery. These grippers, however, take up to thirty minutes to close fully when activated by biochemicals in the human body. This is very problematic and could limit the use of the devices in surgery. The goal of this research is to develop a gripper that uses the Gracias Laboratory's existing joints in conjunction with mechanical components to decrease the closing time. The purpose of including the mechanical components is to induce a state of instability, at which point a small perturbation would cause the joint to close fully. The main concept of the research was to exploit the lateral buckling of a triangular gripper geometry, using a toggle mechanism to decrease the closure time of the device. This would create a snap-action device mimicking the quick closure of a Venus flytrap. All developed geometries were tested using finite element analysis to determine whether the loading conditions produced the desired buckled shape. This research examines lateral buckling on the micro-scale and the possibility of using this phenomenon in a micro-gripper. Although a final geometry with the required deformed shape was not found, this document contains suggestions for future geometries that may produce the correct deformed shape. It was determined through this work that, in order to obtain the desired deformed shape, polymeric sections need to be added to the geometry. This simplifies the analysis and allows the triangular structure to buckle in the appropriate way thanks to the added joints. Future work for this project will be completed by undergraduate students at Bucknell University. Fabrication and testing of devices will be done at Johns Hopkins University in the Gracias Laboratory.
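For a sense of scale, the classical Euler formula P_cr = pi^2 * E * I / (K*L)^2 shows how small critical buckling loads become for micro-scale members. The dimensions and modulus below are hypothetical, and the actual triangular gripper geometry requires finite element analysis, as described above.

```python
import math

# Euler buckling estimate for a slender rectangular strut, just to show the
# magnitude of critical loads for micro-sized members. All values are assumed.

E = 80e9                    # Young's modulus, Pa (e.g., a thin metal film)
b, h = 10e-6, 1e-6          # cross-section width and thickness, m
L = 300e-6                  # member length, m
K = 1.0                     # effective-length factor (pinned-pinned)

I = b * h**3 / 12           # second moment of area, m^4
P_cr = math.pi**2 * E * I / (K * L)**2
print(f"P_cr = {P_cr * 1e6:.2f} uN")   # micro-newton scale critical load
```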
Design and construction of a new Drosophila species, D. synthetica, by synthetic regulatory evolution
Abstract:
Here, I merge the principles of synthetic biology [1,2] and regulatory evolution [3-11] to create a new species [12-15] with a minimal set of known elements. Using preexisting transgenes and recessive mutations of Drosophila melanogaster, I derive a transgenic population with small eyes and a different venation pattern that fulfills the criteria of a new species according to Mayr's "Biological Species Concept" [7,10]. The genetic circuit entails the loss of a non-essential transcription factor and the introduction of cryptic enhancers. Subsequent activation of those enhancers causes hybrid lethality. The transition from "transgenic organisms" towards "synthetic species", such as Drosophila synthetica, constitutes a safety mechanism to avoid hybridization with wild type populations and preserve natural biodiversity [16-18]. Drosophila synthetica is the first transgenic organism that cannot hybridize with the original wild type population but remains fertile when crossed with other transgenic animals.