989 results for integrated solutions
Abstract:
This thesis studies an integrated approach to scheduling and service network design for rail freight transportation. Rail transportation is organized around a two-level consolidation structure in which the assignment of cars to blocks, and of blocks to services, constitutes decisions that greatly complicate the management of operations. In this thesis, the two consolidation processes and the operating schedule are studied simultaneously. Solving this problem yields a cost-effective operating plan comprising the blocking policies, the routing and scheduling of trains, the train make-up, and the traffic distribution. To describe the various rail activities at the tactical level, we extend the physical network and build a three-layer time-space network structure in which the time dimension captures the temporal impact on operations; train, block, and car operations are each described by a different layer. Based on this network structure, we model the rail planning problem as a service network design problem. The proposed model is formulated as a mixed-integer mathematical program, which proves very difficult to solve because of the large size of the instances considered and its intrinsic complexity. Three versions are studied: the simplified model (with direct services only), the complete model (with direct and multi-stop services), and a very large-scale complete model. Several heuristics are developed to obtain good solutions within reasonable computing times. First, a special case with direct services is analyzed. By exploiting a specific characteristic of the direct-service network design problem, we develop a new tabu search algorithm. A cycle-based neighborhood is used for this purpose, built on redistributing the flow circulating on the blocks along cycles of the residual network. A slope-scaling algorithm is developed for the complete model, and we propose a new method, called ellipsoidal search, to further improve solution quality. Ellipsoidal search combines the good feasible solutions generated by the slope-scaling algorithm and gathers the features of these good solutions to create an elite problem, which is solved exactly with a commercial solver. The heuristic thus benefits both from the convergence speed of the slope-scaling algorithm and from the solution quality of the ellipsoidal search. Numerical tests illustrate the efficiency of the proposed heuristic; moreover, the algorithm is an interesting alternative for solving the simplified problem. Finally, we study the very large-scale complete model. A hybrid heuristic is developed by combining the ideas of the previous algorithm with column generation. We propose a new slope-scaling procedure in which, compared with the previous one, only the approximation of the service-related costs is considered. The new slope-scaling approach thus separates the block and service decisions, providing a natural decomposition of the problem. The numerical results show that the algorithm can identify good-quality solutions in a setting aimed at solving real-life instances.
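The elite-problem idea behind the ellipsoidal search can be sketched as follows; this is a minimal Python illustration, not the thesis's implementation: the cost function and instance data are invented, and a brute-force enumeration stands in for the commercial MIP solver.

```python
from itertools import product

def ellipsoidal_search(elite, cost):
    """Combine elite solutions: fix the variables on which all elite
    solutions agree, then solve the small residual problem exactly."""
    n = len(elite[0])
    # Variables where every elite solution takes the same value are fixed.
    fixed = {i: elite[0][i] for i in range(n)
             if all(sol[i] == elite[0][i] for sol in elite)}
    free = [i for i in range(n) if i not in fixed]
    best, best_cost = None, float("inf")
    # Brute-force enumeration plays the role of the exact (MIP) solver
    # on the restricted "elite" problem over the free variables only.
    for values in product((0, 1), repeat=len(free)):
        x = [0] * n
        for i, v in fixed.items():
            x[i] = v
        for i, v in zip(free, values):
            x[i] = v
        c = cost(x)
        if c < best_cost:
            best, best_cost = x, c
    return best, best_cost

# Toy cost: distance to a hidden target plan (purely hypothetical).
target = [1, 0, 1, 1, 0, 0]
cost = lambda x: sum(a != b for a, b in zip(x, target))
elite = [[1, 0, 1, 0, 0, 0], [1, 0, 0, 1, 0, 0]]  # good feasible solutions
sol, c = ellipsoidal_search(elite, cost)
print(sol, c)
```

Only the two variables on which the elite solutions disagree remain free, so the restricted problem is tiny; this is what makes solving the elite problem exactly affordable.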
Abstract:
The transportation problem in Bogotá keeps growing, as the current measures and the future plans for the development of an integrated transport system appear insufficient for the population size of the Colombian capital. Likewise, fares are high and represent a burden for citizens, since the share of residents who can afford a ticket on the current Transmilenio system keeps shrinking because of the steep annual increases in its fare. For this reason, throughout this paper we justify the reasons indicating that the plans applied, and yet to be applied, by the district are not sufficient to fill the gap that exists in Bogotá with respect to an integrated public transport system.
Abstract:
The realisation that much of conventional, modern architecture is not sustainable over the long term is not new. Typical approaches aim at using energy and materials more efficiently. However, by clearly understanding natural processes and their interactions with human needs, designers can create buildings that are delightful, functional, productive and regenerative by design. The paper aims to review the biomimetics literature that is relevant to building materials and design. Biomimetics is the abstraction of good design from Nature, an enabling interdisciplinary science particularly interested in the emergent properties of materials and structures that result from their hierarchical organisation. Biomimetics provides ideas relevant to: graded functionality of materials (nano-scale); adaptive response (nano-, micro-, and macro-scales); integrated intelligence (sensing and actuation at all scales); architecture; and additional functionality. There are many examples in biology where the emergent response of plants and animals to temperature, humidity and other changes in their physical environments is based on relatively simple physical principles. However, it is the implementation of design solutions which exploit these principles that should inspire man-made structures. We analyse specific examples of sustainability from Nature and the benefits or value that these solutions have brought to different creatures. In doing so, we appreciate how the natural world fits into the world of sustainable buildings and how, as building engineers, we can value its true application in delivering sustainable building.
Abstract:
Modal filtering is based on the capability of single-mode waveguides to transmit only one complex amplitude function, eliminating virtually any perturbation of the interfering wavefronts and thus making very high rejection ratios possible in a nulling interferometer. In the present paper we focus on the progress of Integrated Optics in the thermal infrared (6-20 µm) range, one of the two candidate technologies for the fabrication of Modal Filters, together with fiber optics. In conclusion of the European Space Agency's (ESA) "Integrated Optics for Darwin" activity, etched layers of chalcogenide material deposited on chalcogenide glass substrates were selected among four candidates as the technology with the best potential to simultaneously meet the filtering efficiency, absolute and spectral transmission, and beam coupling requirements. ESA's new "Integrated Optics" activity started in mid-2007 with the purpose of improving the technology until compliant prototypes can be manufactured and validated, expected by the end of 2009. The present paper aims at introducing the project and the components' requirements and functions. The selected materials and preliminary designs, as well as the experimental validation logic and test benches, are presented. More details are provided on the progress of the main technology: vacuum deposition in the co-evaporation mode and subsequent etching of chalcogenide layers. In addition, preliminary investigations of an alternative technology based on burying a chalcogenide optical fiber core into a chalcogenide substrate are presented. Specific developments of anti-reflective solutions designed for the mitigation of Fresnel losses at the input and output surfaces of the components are also introduced.
Abstract:
Innovative, low carbon technologies are already available for use in the construction of buildings, but the impact of their specification on construction projects is unclear. This exploratory research identifies issues which arise following the specification of Building Integrated PhotoVoltaic (BIPV) technology in non-residential construction projects. Rather than treating the inclusion of a new technology as a technical problem, the research explores the issue from a socio-technical perspective to understand the accommodations which the project team makes and their effect on the building and the technology. The paper is part of a larger research project which uses a Social Construction of Technology (SCOT) approach to explore the accommodations made to working practices and design when BIPV is introduced. The approach explores how the requirements placed on the technology by different groups of actors (Relevant Social Groups, or RSGs) give rise to problems and create solutions. As such it rejects the notion of a rational, linear view of innovation diffusion; instead it suggests that the variety and composition of the Relevant Social Groups set the agenda for problem solving and solutions as the project progresses. The research explores the experiences of three people with extensive histories of involvement with BIPV in construction, looks at how SCOT can inform our understanding of the issues involved, and identifies themes and issues in the specification of BIPV on construction projects. A key finding concerns the alignment of inflection points at which interviewees have found themselves changing from one RSG to another as new problems and solutions are identified. The points at which they change RSG often mirror conventional construction categories (project specification, tender, design and construction).
Abstract:
The work described in this thesis aims to support the distributed design of integrated systems, with specific attention to the need for collaborative interaction among designers. Particular emphasis was given to issues only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymous technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment which aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we propose an object-oriented framework which includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD Framework, where it plays important roles in typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration.
The implemented CAD Framework, named Cave2, followed the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the object-oriented framework foundations allowed a series of improvements not available in previous approaches:
- Object-oriented frameworks are extensible by design, so this also holds for the implemented sets of design data primitives and design tool building blocks. Both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and such extensions and adaptations still inherit the architectural and functional aspects implemented in the object-oriented framework foundation.
- The design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows different visualization strategies for a given design data set, giving collaborating parties the flexibility to choose individual visualization settings.
- The control of consistency between semantics and visualization, a particularly important issue in a design environment with multiple views of a single design, is also included in the foundations of the object-oriented framework. The mechanism is generic enough to be used by further extensions of the design data model, as it is based on the inversion of control between view and semantics: the view receives the user input and propagates the event to the semantic model, which evaluates whether a state change is possible and, if so, triggers the change of state of both semantics and view. Our approach took advantage of this inversion of control and included a layer between semantics and view to account for multi-view consistency.
- To optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her design views. The information about each interaction is encapsulated in an event object, which may be propagated to the design semantics, and thus to other possible views, according to the consistency policy in use. Furthermore, the use of event pools allows late synchronization between view and semantics when no network connection is available between them.
- The use of proxy objects significantly raised the level of abstraction of the integration of design automation resources, as remote and local tools and services alike are accessed through method calls on a local object. Connecting to remote tools and services through a look-up protocol also completely abstracted the network location of such resources, allowing resource addition and removal at runtime.
- The implemented CAD Framework is entirely based on Java technology, relying on the Java Virtual Machine as the layer that grants independence between the CAD Framework and the operating system.
All these improvements contributed to a higher abstraction in the distribution of design automation resources and introduced a new paradigm for remote interaction between designers. The resulting CAD Framework is able to support fine-grained, event-based collaboration, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential compared to previously proposed file-based or record-based approaches. Three case studies were conducted to validate the proposed approach, each focusing on a subset of the contributions of this thesis. The first uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second extends the foundations of the implemented object-oriented framework to support interface-based design; these extensions, design representation primitives and tool blocks, are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study concerns the integration of multimedia metadata into the design data model, explored in the frame of an online educational and training platform.
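The inversion of control between view and semantics described above can be sketched as follows. This is a minimal Python illustration of the pattern only: the class names and the validity rule are hypothetical, and the original system is implemented in Java.

```python
class SemanticModel:
    """Holds design state and validates state changes requested by views."""
    def __init__(self):
        self.state = {}
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def handle(self, event):
        # Inversion of control: the view forwards the raw event here,
        # and only the semantics decides whether the change is legal.
        key, value = event
        if self._is_valid(value):
            self.state[key] = value
            for view in self.views:        # propagate to every attached view
                view.refresh(key, value)
            return True
        return False

    def _is_valid(self, value):
        return value is not None           # hypothetical validity rule

class View:
    """A visualization of the design; keeps only its own display copy."""
    def __init__(self, model):
        self.display = {}
        self.model = model
        model.attach(self)

    def user_input(self, key, value):
        # The view never mutates design state directly: it emits an event.
        return self.model.handle((key, value))

    def refresh(self, key, value):
        self.display[key] = value

model = SemanticModel()
v1, v2 = View(model), View(model)
v1.user_input("gate_A", "AND")   # accepted; both views are refreshed
v2.user_input("gate_B", None)    # rejected by the semantics; no view changes
print(v1.display, v2.display)
```

Because every view goes through the same `handle` path, multi-view consistency falls out of the design: an accepted change reaches all views, and a rejected one reaches none.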
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A method is proposed for the simultaneous determination of Al, As, Cu, Fe, Mn, and Ni in fuel ethanol by electrothermal atomic absorption spectrometry (ETAAS) using a W-Rh permanent modifier together with the conventional Pd(NO3)2 + Mg(NO3)2 modifier. The integrated platform of a transversely heated graphite atomizer (THGA) was treated with tungsten, followed by rhodium, forming a deposit containing 250 µg W + 200 µg Rh. A 500-µL volume of fuel ethanol was diluted with 500 µL of 0.14 mol L-1 HNO3 in an autosampler cup of the spectrometer. Then, 20 µL of the diluted ethanol was introduced into the pretreated graphite platform, followed by the introduction of 5 µg Pd(NO3)2 + 3 µg Mg(NO3)2. The injection of this modifier was required to improve arsenic and iron recoveries in fuel ethanol. Calibrations were carried out using multi-element reference solutions prepared in diluted ethanol (1 + 1, v/v) acidified to 0.14 mol L-1 HNO3. The pyrolysis and atomization temperatures of the heating program were 1200 °C and 2200 °C, respectively, which were obtained with multi-element reference solutions in acidic diluted ethanol (1 + 1, v/v; 0.14 mol L-1 HNO3). The characteristic masses for the simultaneous determination in fuel ethanol were 78 pg Al, 33 pg As, 10 pg Cu, 14 pg Fe, 7 pg Mn, and 24 pg Ni. The lifetime of the pretreated tube was about 700 firings. The detection limits were 1.9 µg L-1 Al, 2.9 µg L-1 As, 0.57 µg L-1 Cu, 1.3 µg L-1 Fe, 0.40 µg L-1 Mn, and 1.3 µg L-1 Ni. The relative standard deviations (n = 12) were 4%, 4%, 3%, 1.5%, 1.2%, and 2.2% for Al, As, Cu, Fe, Mn, and Ni, respectively. The recoveries of Al, As, Cu, Fe, Mn, and Ni added to the fuel ethanol samples varied from 81% to 95%, 80% to 98%, 97% to 109%, 85% to 107%, 98% to 106%, and 97% to 103%, respectively. Accuracy was checked for the Al, As, Cu, Fe, Mn, and Ni determination in 10 samples purchased at a local gas station in Araraquara, SP, Brazil.
A paired t-test showed that at the 95% confidence level the results were in agreement with those obtained by single-element ETAAS.
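The paired t-test used for the method comparison can be reproduced as follows; this sketch uses made-up concentration values, not the paper's data. For n = 10 pairs (df = 9) the two-sided 95% critical value is 2.262.

```python
import math

def paired_t(a, b):
    """Paired t statistic: mean of the differences over its standard error."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Illustrative concentrations (µg/L) by the two methods -- invented data.
simultaneous = [1.9, 2.8, 3.1, 1.5, 2.2, 2.0, 1.8, 2.5, 2.9, 2.4]
single       = [2.0, 2.7, 3.0, 1.6, 2.1, 2.1, 1.9, 2.4, 2.8, 2.5]

t = paired_t(simultaneous, single)
t_crit = 2.262  # two-sided, 95% confidence, df = 9
print(abs(t) < t_crit)  # |t| below the critical value: methods agree
```

If |t| stays below the critical value, the two methods are statistically indistinguishable at the 95% confidence level, which is the conclusion reported above.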
Abstract:
This work presents challenges and solutions for the teaching and learning of automation applied to integrated manufacturing, by means of a methodological approach based on techniques, tools and industrial equipment directly applicable in industry. The approach was implemented in a control and automation engineering course divided into lecture and laboratory classes. Since the success of the approach stems mainly from the practical activities, the article focuses more on the activities developed in the laboratory than on the lectures. Copyright © 2007 IFAC.
Abstract:
In this work, a heuristic model for the integrated planning of primary distribution networks and secondary distribution circuits is proposed. A Tabu Search (TS) algorithm is employed to solve the planning of primary distribution networks, while Evolutionary Algorithms (EA) are used to solve the planning model of secondary networks. The integration of both plans is carried out by means of a constructive heuristic that takes into account a set of integration alternatives between these networks, treated in a hierarchical way. The planning of primary networks and secondary distribution circuits is based on assessing the effects of the alternative solutions on the expansion costs of both networks simultaneously. To evaluate this methodology, tests were performed on a real-life distribution system taking into account the primary and secondary networks.
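The constructive integration step described above can be sketched as follows. This is a minimal Python illustration in which the alternative names and cost figures are invented, and the TS/EA planners are replaced by simple cost lookups.

```python
def plan_integrated(alternatives, primary_cost, secondary_cost):
    """Evaluate each integration alternative by its simultaneous effect
    on the expansion costs of both networks, in hierarchical order."""
    best, best_cost = None, float("inf")
    for alt in alternatives:        # hierarchy: most preferred alternative first
        total = primary_cost(alt) + secondary_cost(alt)
        if total < best_cost:       # strict '<': ties keep the higher-priority alt
            best, best_cost = alt, total
    return best, best_cost

# Hypothetical expansion costs (arbitrary monetary units) per alternative;
# in the paper these would come from the TS and EA planning runs.
primary = {"feeder_A": 120.0, "feeder_B": 95.0, "feeder_C": 110.0}
secondary = {"feeder_A": 40.0, "feeder_B": 80.0, "feeder_C": 30.0}
alts = ["feeder_A", "feeder_B", "feeder_C"]

best, cost = plan_integrated(alts, primary.__getitem__, secondary.__getitem__)
print(best, cost)  # feeder_C 140.0 -- cheapest combined expansion
```

Note that the cheapest primary plan (feeder_B) is not chosen: evaluating both networks simultaneously is what steers the heuristic to the globally cheapest alternative.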
Abstract:
This work presents stage integration in power electronics converters as a suitable solution for solar photovoltaic inverters. The rated voltages of Photovoltaic (PV) modules are usually too low for applications such as regulated output voltages in stand-alone or grid-connected configurations. In these cases, a boost stage or a transformer is necessary. Transformers have low efficiency and heavy weight and are used only when galvanic isolation is mandatory; furthermore, high-frequency transformers increase converter complexity. Therefore, the most usual topologies cascade a boost stage and an inverter stage. However, complexity, size, weight, cost and lifetime might be improved by integrating both stages, and these are the features expected to make this kind of integrated structure attractive. Some integrated converters are therefore analyzed and compared in this paper in order to support future evaluations and trends for low-power single-phase inverters for PV systems. © 2011 IEEE.
Abstract:
Research on control for power electronics has looked for original solutions to advance the feasibility of renewable resources, especially photovoltaics (PV). In this context, compact, high-efficiency, low-cost and reliable converters are very attractive for PV energy sources. Two improved simplified converters, namely the Tri-state Boost and Tri-state Buck-Boost integrated single-phase inverters, are obtained with the presented Tri-state modulation and control schemes, which guarantee input-to-output power decoupling control. This feature enhances the field of single-phase PV inverters, since the energy storage is mainly inductive. The main features of the proposal are confirmed by simulations and experimental results. © 2012 IEEE.
Abstract:
This article describes a real-world production planning and scheduling problem occurring at an integrated pulp and paper (P&P) mill which manufactures paper for cardboard out of the pulp it produces. During the cooking of wood chips in the digester, two by-products are produced: the pulp itself (virgin fibers) and a waste stream known as black liquor. The former is mixed with recycled fibers and processed in a paper machine; here, due to significant sequence-dependent setups in paper-type changeovers, sizing and sequencing of lots have to be decided simultaneously in order to use capacity efficiently. The latter is converted into electrical energy using a set of evaporators, recovery boilers and counter-pressure turbines. The planning challenge is then to synchronize the material flow as it moves through the pulp mill, the paper mill and the energy plant, maximizing fulfilled customer demand (as backlogging is allowed) and minimizing operating costs. Due to the capital-intensive nature of P&P, the output of the digester must be maximized. As the production bottleneck is not fixed, to tackle this problem we propose a new model that, for the first time, integrates the critical production units associated with the pulp mill, the paper mill and the energy plant. Simple local search heuristics based on stochastic mixed integer programming are developed to obtain good feasible solutions for the problem. The benefits of integrating the three stages are discussed, and the proposed approaches are tested on real-world data. Our work may help P&P companies to increase their competitiveness and responsiveness in dealing with demand pattern oscillations. (C) 2012 Elsevier Ltd. All rights reserved.
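The sequence-dependent setup aspect of the lot-sequencing decision above can be illustrated with a tiny local search; the paper-grade names and the setup matrix are invented, and the stochastic MIP components are omitted.

```python
def sequence_cost(seq, setup):
    """Total sequence-dependent setup cost of a paper-type changeover order."""
    return sum(setup[a][b] for a, b in zip(seq, seq[1:]))

def local_search(seq, setup):
    """Improve the sequence with pairwise swaps until no swap helps."""
    best = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for j in range(i + 1, len(best)):
                cand = list(best)
                cand[i], cand[j] = cand[j], cand[i]
                if sequence_cost(cand, setup) < sequence_cost(best, setup):
                    best, improved = cand, True
    return best

# Hypothetical changeover costs between three paper grades: adjacent
# grades are cheap to switch between, distant ones expensive.
setup = {"light":  {"light": 0, "medium": 2, "heavy": 9},
         "medium": {"light": 2, "medium": 0, "heavy": 3},
         "heavy":  {"light": 9, "medium": 3, "heavy": 0}}

seq = ["light", "heavy", "medium"]          # naive order, setup cost 12
best = local_search(seq, setup)
print(best, sequence_cost(best, setup))     # improved order, setup cost 5
```

Ordering the lots so that similar grades are adjacent is exactly the capacity argument made above: every unit of setup time saved is a unit of digester and paper-machine output recovered.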
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called System-on-Chip (SoC) or Multi-Processor System-on-Chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With a number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how to best provide on-chip communication resources is clearly felt. Networks-on-Chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
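Part of the topology design-space exploration mentioned above amounts to comparing candidate interconnects on simple metrics. Below is a minimal sketch, assuming average hop count as the metric; the topologies and their sizes are illustrative and not taken from SunFloor.

```python
from collections import deque
from itertools import combinations

def avg_hops(nodes, edges):
    """Average shortest-path hop count over all node pairs, via BFS."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    total, pairs = 0, 0
    for src, dst in combinations(nodes, 2):
        dist = {src: 0}
        q = deque([src])
        while q:                     # breadth-first search from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total, pairs = total + dist[dst], pairs + 1
    return total / pairs

# Two candidate 4-block topologies: a ring and a star around block 0.
nodes = list(range(4))
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
star = [(0, 1), (0, 2), (0, 3)]
print(avg_hops(nodes, ring), avg_hops(nodes, star))
```

A real flow would weigh such latency proxies against link count (wiring area) and router degree (power), which is precisely the performance/area/power tradeoff named in the list above.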