846 results for Flexible Design Framework for Airport (FlexDFA)
Abstract:
The development of robots has proven to be a highly complex interdisciplinary research field. The predominant development procedure of recent decades rests on the assumption that each robot is a fully customized project, with hardware and software technologies embedded directly in robot parts with no level of abstraction. Although this methodology has brought countless benefits to robotics research, it has also imposed major drawbacks: (i) the difficulty of reusing hardware and software parts in new robots or new versions; (ii) the difficulty of comparing the performance of different robot parts; and (iii) the difficulty of adapting development needs, at the hardware and software levels, to the expertise of local groups. Large advances might be achieved, for example, if the physical parts of a robot could be reused in a different robot built with other technologies by another researcher or group. This paper proposes a framework for robots, TORP (The Open Robot Project), that aims to put forward a standardization of all dimensions (electrical, mechanical and computational) of a shared robot development model. This architecture is based on the dissociation between the robot and its parts, and between the robot parts and their technologies. In this paper, the first specification for a TORP family and the first humanoid robot constructed following the TORP specification set are presented, together with the advances proposed for their improvement.
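The dissociation between a robot and its parts can be illustrated with a minimal sketch; the interface and class names below are hypothetical illustrations of the idea, not TORP's actual specification.

```python
# Hypothetical sketch of the part/technology dissociation described above:
# the robot addresses parts through a uniform interface, so a part can be
# re-implemented with different hardware without touching the robot.
from abc import ABC, abstractmethod

class RobotPart(ABC):
    """Technology-agnostic contract for any robot part."""

    @abstractmethod
    def read_state(self) -> dict:
        """Return the part's current state (e.g. joint angles)."""

    @abstractmethod
    def send_command(self, command: dict) -> None:
        """Apply a command expressed in technology-neutral units."""

class DynamixelArm(RobotPart):
    """One possible implementation; another group could swap in a
    pneumatic arm exposing the same interface."""

    def read_state(self) -> dict:
        return {"joint_angles_deg": [0.0, 45.0, 90.0]}

    def send_command(self, command: dict) -> None:
        print(f"driving servos to {command}")

class Robot:
    """The robot only knows the interface, never the technology."""

    def __init__(self, parts: dict[str, RobotPart]):
        self.parts = parts

robot = Robot({"left_arm": DynamixelArm()})
robot.parts["left_arm"].send_command({"joint_angles_deg": [10.0, 30.0, 85.0]})
```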
Abstract:
We propose a general framework for the analysis of animal telemetry data through the use of weighted distributions. It is shown that several interpretations of resource selection functions arise when they are constructed from the ratio of a use distribution to an availability distribution. Within the proposed framework, several popular resource selection models emerge as special cases of the general model under particular assumptions about animal movement and behavior. The weighted distribution framework is easily extended to account for telemetry data that are highly autocorrelated, as is typical of animal relocations collected with newer technology such as global positioning systems (GPS). An analysis of simulated data using several models constructed within the proposed framework is also presented to illustrate the possible gains from the flexible modeling framework. The proposed model is applied to a brown bear data set from southeast Alaska.
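The ratio construction at the heart of this framework can be written compactly. A sketch of the standard weighted-distribution form, with notation assumed here rather than taken from the paper:

```latex
% Use distribution as a weighted availability distribution:
f_u(\mathbf{x}) \;=\; \frac{w(\mathbf{x})\, f_a(\mathbf{x})}
                           {\int w(\mathbf{s})\, f_a(\mathbf{s})\, d\mathbf{s}},
% where f_a is the availability distribution, w(\cdot) is the resource
% selection function, and the denominator normalizes the product to a density.
```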
Abstract:
This thesis presents a new approach to the design and fabrication of bond wire magnetics for power converter applications, using standard IC gold bonding wires and micro-machined magnetic cores. It presents a systematic design and characterization study of bond wire transformers with toroidal and race-track cores on both PCB and silicon substrates. Measurement results show that the use of ferrite cores increases the secondary self-inductance up to 315 µH with a Q-factor up to 24.5 at 100 kHz. Measurements on an LTCC core show an enhancement of the secondary self-inductance up to 23 µH with a Q-factor up to 10.5 at 1.4 MHz. A resonant DC-DC converter for energy harvesting (EH) applications is designed in 0.32 µm BCD6s technology at STMicroelectronics, with a depletion NMOSFET and a bond wire micro-transformer. Measurements show that the circuit begins to oscillate at a TEG voltage of 280 mV, starts to convert at inputs as low as 330 mV, and delivers a rectified output of 0.8 V at an input of 400 mV. Bond wire magnetics is a cost-effective approach that enables flexible design of inductors and transformers with high inductance and high turns ratio. Additionally, it supports the fabrication of magnetics on top of the IC active circuitry for package- and wafer-level integration, thus enabling the design of high-density power components. This makes possible the evolution of PwrSiP and PwrSoC with reliable, highly efficient magnetics.
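As a consistency check on the figures above, the standard series-loss definition of an inductor's quality factor ties the reported inductance and Q at 100 kHz to an equivalent series resistance (the series-loss model is an assumption here, not stated in the abstract):

```latex
% Series-loss quality factor of an inductance L at frequency f:
Q \;=\; \frac{\omega L}{R_s} \;=\; \frac{2\pi f L}{R_s}
\quad\Rightarrow\quad
R_s \;=\; \frac{2\pi \cdot 10^{5}\,\mathrm{Hz} \cdot 315\,\mu\mathrm{H}}{24.5}
\;\approx\; 8.1\,\Omega .
```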
Abstract:
ONTOLOGIES AND METHODS FOR INTEROPERABILITY OF ENGINEERING ANALYSIS MODELS (EAMS) IN AN E-DESIGN ENVIRONMENT. September 2007. Neelima Kanuri, B.S., Birla Institute of Technology and Science, Pilani, India; M.S., University of Massachusetts Amherst. Directed by: Professor Ian Grosse.
Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support the integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge, whose instances are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a proof-of-concept test bed. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent knowledge-base tool was developed and implemented in FiPER. This tool reasons about the modeling knowledge to intelligently shift between the beam and shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method automatically generates technical reports, converting the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge sharing method that allocates permissions to portions of knowledge in order to control knowledge access and sharing. Acting together, these methods enable recipient-specific, fine-grained, controlled knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD, and they play an efficient role in reducing the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
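The abstract does not spell out the switching rules, but the idea of shifting between the beam and shell models can be sketched as follows; the rule, names and stand-in models are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical sketch of fidelity switching during optimization: use the cheap
# beam-element model early, and the accurate shell-element model once the
# optimizer is close to converging (rule and names are illustrative only).
from dataclasses import dataclass
from typing import Callable

@dataclass
class AnalysisModel:
    name: str
    cost: float                                  # relative CPU cost per run
    evaluate: Callable[[list[float]], float]

def beam_stress(design: list[float]) -> float:
    return 100.0 / (design[0] * design[1])       # stand-in for a beam FE model

def shell_stress(design: list[float]) -> float:
    return 103.0 / (design[0] * design[1])       # stand-in for a shell FE model

BEAM = AnalysisModel("beam", cost=1.0, evaluate=beam_stress)
SHELL = AnalysisModel("shell", cost=50.0, evaluate=shell_stress)

def select_model(relative_design_change: float, tol: float = 0.05) -> AnalysisModel:
    """Switch to high fidelity once successive designs change by < tol."""
    return SHELL if relative_design_change < tol else BEAM

model = select_model(relative_design_change=0.20)
print(model.name, model.evaluate([2.0, 4.0]))    # -> beam 12.5
```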
Abstract:
The purpose is to present the main urban territorial-planning strategies put into practice over the last fifteen years in Greater La Plata, which have affected the cultural landscape. Both comprehensive and sectoral policy approaches are observed: those that incorporate innovations in territorial planning and propose new and/or renewed cultural landscapes, and the "central" policies driven by the municipal administrations. Their contributions and weaknesses, and the incompatibilities among them, are discussed within the framework of sustainable development. The methodological strategy is qualitative and exploratory, with a design of a flexible nature. In the case study, the modes of intervention are identified in terms of the resulting landscape transformations and their management. The work has a strong interpretive orientation, and the overall strategy is aimed at gaining familiarity with facts not yet sufficiently understood, in order to generate new ideas that allow new questions and hypotheses to be posed. Within this framework the policies become contradictory: although they have managed to modify some micro-spaces, they amount more to the cropping and freezing/restoration of the previous landscape than to the creation of renewed, new and/or better landscapes with updated social values. Environmentally, they have not been accompanied by structuring strategies such as urban tree planting and municipal solid waste disposal.
Abstract:
Environmental conservation activities must continue to become more efficient and effective, especially in Africa, where development and population growth pressures continue to escalate. Recently, prioritization of conservation resources has focused on explicitly incorporating the economic costs of conservation along with better defining the outcomes of these expenditures. We demonstrate how new global and continental data spanning social, economic, and ecological sectors create an opportunity to incorporate return-on-investment (ROI) principles into conservation priority setting for Africa. We suggest that combining conservation priorities that factor in biodiversity value, habitat quality, and conservation management investments across terrestrial, freshwater, and coastal marine environments provides a new lens for setting global conservation priorities. Using this approach we identified seven regions, capturing interior and coastal resources, whose high ROI values support further investment. We illustrate how a spatially explicit yet flexible ROI analysis can help to better address uncertainty, risk, and opportunities for conservation, while making the values that guide prioritization more transparent. In one case the results of this prioritization process were used to support new conservation investments. Acknowledging a clear research need for improved cost information, we propose that adopting a flexible ROI framework to set conservation priorities in Africa has multiple potential benefits.
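The prioritization step reduces to ranking candidate regions by expected conservation return per unit cost. A toy sketch under assumed, made-up figures (none of the paper's actual data are reproduced here):

```python
# Toy return-on-investment ranking for candidate conservation regions.
regions = [
    # (name, biodiversity benefit score, habitat quality 0-1, cost in $M)
    ("Region A", 80.0, 0.9, 12.0),
    ("Region B", 95.0, 0.5, 30.0),
    ("Region C", 60.0, 0.8, 6.0),
]

def roi(benefit: float, quality: float, cost: float) -> float:
    """Expected conservation outcome per dollar: (benefit x quality) / cost."""
    return benefit * quality / cost

ranked = sorted(regions, key=lambda r: roi(r[1], r[2], r[3]), reverse=True)
for name, benefit, quality, cost in ranked:
    print(f"{name}: ROI = {roi(benefit, quality, cost):.2f} per $M")
```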
Abstract:
In the linear world, classical microwave circuit design relies on s-parameters because of their capability to successfully characterize the behavior of any linear circuit. The direct use of s-parameters in measurement systems and in linear simulation tools has facilitated their extensive use and success in the design and characterization of microwave circuits and subsystems. Nevertheless, despite the great success of s-parameters in the microwave community, the main drawback of this formulation is its limitation in predicting the behavior of real non-linear systems. Nowadays, the challenge for microwave designers is the development of an analogous framework that integrates non-linear modeling, large-signal measurement hardware and non-linear simulation environments, in order to extend s-parameter capabilities to the non-linear regime and thus provide an infrastructure for non-linear design and test in a reliable and efficient way. Recently, different attempts to provide this common platform have been introduced, such as the Cardiff approach and the Agilent X-parameters. Hence, this Thesis aims to demonstrate the capability of X-parameters to provide this non-linear design and test framework in a CAD-based oscillator context. Furthermore, the classical analysis and design of linear microwave transistor-based circuits builds on simple analytical approaches, involving the transistor s-parameters, that quickly provide an analytical solution for the input/output transistor loading conditions and analytically determine fundamental parameters such as the stability factor, the power gain contours, and the input/output match. Hence, the development of similar analytical design tools that extend the small-signal capabilities of s-parameters to non-linear applications poses a new challenge that is faced in the present work. Therefore, the development of an analytical design framework based on load-independent X-parameters constitutes the core of this Thesis. These analytical non-linear design approaches would significantly improve current large-signal design processes and dramatically decrease the required design time, yielding much more efficient approaches.
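For context, the X-parameter model referred to above is a spectral map linearized around a large-signal operating point; a sketch of the canonical form as it commonly appears in the X-parameter literature (notation assumed here, not taken from this thesis):

```latex
% Scattered wave B_{pm} (port p, harmonic m) under a large drive A_{11}:
B_{pm} \;=\; X^{F}_{pm}\!\left(|A_{11}|\right) P^{m}
\;+\; \sum_{q,n} X^{S}_{pm,qn}\!\left(|A_{11}|\right) P^{\,m-n} A_{qn}
\;+\; \sum_{q,n} X^{T}_{pm,qn}\!\left(|A_{11}|\right) P^{\,m+n} A^{*}_{qn},
% with P = e^{j\varphi(A_{11})} the phase of the large-signal drive. The
% X^{S} and X^{T} terms capture the linearized response to small mismatch
% waves A_{qn}, and the map reduces to classical s-parameters at small drive.
```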
Abstract:
This work attempts to create a systemic design framework for man-machine interfaces which is self-consistent, compatible with other concepts, and applicable to real situations. This is tackled by examining the current architecture of computer applications packages. The treatment is in the main philosophical and theoretical, analysing the origins, assumptions and current practice of the design of applications packages. It proposes that the present form of packages is fundamentally contradictory to the notion of packaging itself, because as an indivisible ready-to-implement solution, current package architecture displays the following major disadvantages. First, it creates problems as a result of user-package interactions, in which the designer tries to mould all potential individual users, no matter how diverse they are, into one model; this is worsened by the scant provision, if any, of important properties such as flexibility, independence and impartiality. Second, it displays a rigid structure that reduces the variety and/or multi-use of the component parts of such a package. Third, it dictates specific hardware and software configurations, which tends to reduce the user's degrees of freedom. Fourth, it increases the dependence of the user upon the supplier through inadequate documentation and understanding of the package. Fifth, it tends to cause a degeneration of the design expertise of data processing practitioners. In view of this understanding, an alternative methodological design framework is proposed, consistent both with the systems approach and with the role of a package in its likely context. The proposition is based upon an extension of the identified concept of the hierarchy of holons, which facilitates the examination of the complex relationships of a package with its two principal environments: first, the user's characteristics and decision-making practices and procedures, implying an examination of the user's M.I.S. network; second, the software environment and its influence upon a package regarding its support, control and operation. The framework is built gradually as the discussion advances around the central theme of a compatible M.I.S., software and model design. This leads to the formation of an alternative package architecture based upon the design of a number of independent, self-contained small parts. This is believed to constitute a nucleus around which not only packages can be designed more effectively, but one that is also applicable to the design of many man-machine systems.
Abstract:
The Teallach project has adapted model-based user-interface development techniques to the systematic creation of user-interfaces for object-oriented database applications. Model-based approaches aim to give designers a more principled way of developing user-interfaces, using a variety of underlying models and tools that manipulate them. Here we present the results of the Teallach project, describing the tools developed and the flexible design method supported. Distinctive features of the Teallach system include the provision of database-specific constructs, comprehensive facilities for relating the different models, and support for a flexible design method in which models can be constructed and related by designers in different orders and in different ways, to suit their particular design rationales. The system then creates the desired user-interface as an independent, fully functional Java application, with automatically generated help facilities.
Abstract:
This research aimed at developing a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. This study views an enterprise as a system that creates value for its customers; developing the framework therefore drew on systems theory and IDEF methodologies. The study defined ESE as an engineering discipline that develops and applies systems theory and engineering techniques to the specification, analysis, design, and implementation of an enterprise over its life cycle. The proposed ESE classification scheme breaks an enterprise system down into four elements: work, resources, decision, and information. Each enterprise element is specified with four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation, to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. This framework is intended for identifying research voids in the ESE discipline; it also helps to apply engineering and systems tools to this emerging field, harnesses the relationships among various enterprise aspects, and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic. It consists of a hierarchy of engineering activities presented in an IDEF0 model, in which each activity is defined with its inputs, outputs, constraints, and mechanisms. The output of an ESE effort can be a partial or whole enterprise system design covering its physical, managerial, and/or informational layers. The proposed ESE process is applicable to a new enterprise system design or to an engineering change in an existing system. The long-term goal of this study is the development of a scientific foundation for ESE research and development.
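The element-facet-process structure described above is easy to enumerate; a minimal sketch using the labels from the abstract (the code structure itself is an illustrative assumption):

```python
# Enumerate the ESE classification scheme: each enterprise element is crossed
# with each system facet, and every combination passes through the same
# four-step engineering process.
from itertools import product

ELEMENTS = ["work", "resources", "decision", "information"]
FACETS = ["strategy", "competency", "capacity", "structure"]
PROCESS = ["specification", "analysis", "design", "implementation"]

for element, facet in product(ELEMENTS, FACETS):
    # Each of the 16 element-facet cells is engineered end to end.
    print(f"{element}/{facet}: {' -> '.join(PROCESS)}")
```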
Abstract:
Pipelines extend thousands of kilometers across wide geographic areas as a network providing essential services for modern life. It is inevitable that pipelines must pass through unfavorable ground conditions that are susceptible to natural disasters. This thesis investigates the behaviour of buried pressure pipelines experiencing ground distortions induced by normal faulting. A recent large database of physical modelling observations on buried pipes of different stiffness relative to the surrounding soil, subjected to normal faults, provided a unique opportunity to calibrate numerical tools. Three-dimensional finite element models were developed to further the understanding of the complex soil-structure interaction phenomena, especially gap formation beneath the pipe and the trench effect associated with the interaction between backfill and native soils. The benchmarked numerical tools were then used to perform parametric analyses of project geometry, backfill material, relative pipe-soil stiffness and pipe diameter. Seismic loading produces a soil displacement profile that can be expressed by i_soil, the distance between the peak curvature and the point of contraflexure. A simplified design framework based on this length scale (the Kappa method) was developed, which estimates the longitudinal bending moments of buried pipes using a characteristic length, i_pipe, the distance from peak to zero curvature. Recent studies indicated that empirical soil springs calibrated against rigid pipes are not suitable for analyzing flexible pipes, since they lead to excessive conservatism in design. A large-scale split-box normal fault simulator was therefore assembled to produce experimental data on the response of flexible PVC pipes to a normal fault. Digital image correlation (DIC) was employed to analyze the soil displacement field, and both optical fibres and conventional strain gauges were used to measure pipe strains. A refinement to the Kappa method was introduced to enable the calculation of axial strains as a function of the pipe elongation induced by flexure and an approximation of the longitudinal ground deformations. A closed-form Winkler solution for the flexural response was also derived to account for the distributed normal fault pattern. Finally, these two analytical solutions were evaluated against the pipe responses observed in the large-scale laboratory tests.
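The closed-form Winkler solution mentioned above builds on the classical beam-on-elastic-foundation idealization of a buried pipe; its standard governing equation (generic form, not the thesis's specific derivation) is:

```latex
% Beam-on-elastic-foundation (Winkler) equation for pipe deflection w(x):
E_p I_p \frac{d^4 w}{dx^4} \;+\; k\, w \;=\; q(x),
% where E_p I_p is the pipe's flexural rigidity, k is the soil (Winkler)
% spring stiffness per unit length, and q(x) is the loading imposed by the
% fault-induced ground displacement profile.
```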
Abstract:
Networked learning happens naturally within the social systems of which we are all part. However, in certain circumstances individuals may want to take the initiative to interact with others with whom they are not yet in regular exchange, for instance when external influences and societal changes require innovation of existing practices. This paper proposes a framework with relevant dimensions providing insight into the precipitated characteristics of designed as well as 'fostered or grown' networked learning initiatives. Networked learning initiatives are characterized as "goal-directed, interest- or needs-based activities of a group of (at least three) individuals that initiate interaction across the boundaries of their regular social systems". The proposed framework is based on two existing research traditions, namely 'networked learning' and 'learning networks', comparing, integrating and building upon knowledge from both perspectives. We uncover some interesting differences between definitions, but also similarities in the way they describe what 'networked' means and how learning is conceptualized. We think it is productive to combine both research perspectives, since they both study the process of learning in networks extensively, albeit from different points of view, and their combination can provide valuable insights into networked learning initiatives. We uncover important features of networked learning initiatives, characterize the actors and connections of which they are comprised, and identify the conditions which facilitate and support them. The resulting framework can be used both for analytic purposes and (partly) as a design framework. The framework acknowledges that not all successful networks have the same characteristics: there is no standard 'constellation' of people, roles, rules, tools and artefacts, although there are indications that some network structures work better than others. The interactions of individuals can be designed and fostered only to a certain degree: the type of network and its 'growth' (e.g. in terms of the quantity of people involved, or the quality and relevance of co-created concepts, ideas, artefacts and solutions to its 'inhabitants') is in the hands of the people involved. Therefore, the framework consists of dimensions on a sliding scale. It introduces a structured and analytic way to look at the precipitation of networked learning initiatives: learning networks. Further research on the application of this framework, together with feedback from the networked learning community, is needed to validate its usability and value to both research and practice.
Abstract:
Interest in renewable energy has increased considerably in recent years due to concerns over the environmental impact of conventional energy sources and their price volatility. In particular, wind power has enjoyed dramatic global growth in installed capacity over the past few decades. Nowadays, the advancement of the wind turbine industry represents a challenge for several engineering areas, including materials science, computer science, aerodynamics, analytical design and analysis methods, testing and monitoring, and power electronics. In particular, the technological improvement of wind turbines is currently tied to the use of advanced design methodologies that allow designers to develop new and more efficient design concepts. Integrating mathematical optimization techniques into the multidisciplinary design of wind turbines constitutes a promising way to enhance the profitability of these devices. In the literature, wind turbine design optimization is typically performed deterministically. Deterministic optimizations do not consider any degree of randomness affecting the inputs of the system under consideration and therefore result in a unique set of outputs. However, given the stochastic nature of the wind and the uncertainties associated, for instance, with wind turbine operating conditions or geometric tolerances, deterministically optimized designs may be inefficient. Therefore, one way to further improve the design of modern wind turbines is to take these sources of uncertainty into account in the optimization process, achieving robust configurations with minimal performance sensitivity to the factors causing variability. The research work presented in this thesis deals with the development of a novel integrated multidisciplinary design framework for the robust aeroservoelastic design optimization of multi-megawatt horizontal axis wind turbine (HAWT) rotors, accounting for the stochastic variability of the input variables. The design system is based on a multidisciplinary analysis module integrating the several simulation tools needed to characterize the aeroservoelastic behavior of wind turbines and to determine their economic performance by means of the levelized cost of energy (LCOE). The design framework is portable and modular, in that any of its analysis modules can be replaced with counterparts of user-selected fidelity. The presented technology is applied to the design of a 5-MW HAWT rotor to be used at sites of wind power density class 3 to 7, where the mean wind speed at 50 m above the ground ranges from 6.4 to 11.9 m/s. Assuming the mean wind speed to vary stochastically in this range, the rotor design is optimized by minimizing the mean and standard deviation of the LCOE. Airfoil shapes, spanwise distributions of blade chord and twist, internal structural layup and rotor speed are optimized concurrently, subject to an extensive set of structural and aeroelastic constraints. The effectiveness of the multidisciplinary and robust design framework is demonstrated by showing that the probabilistically designed turbine achieves more favorable probabilistic performance than both the initial baseline turbine and a deterministically designed turbine.
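The robust objective can be sketched as a Monte Carlo loop over the stochastic mean wind speed; the lcoe() model and the mean-plus-weighted-standard-deviation combination below are placeholders, not the thesis's aeroservoelastic chain:

```python
# Sketch of a robust objective: minimize mean + weighted std of LCOE under a
# stochastic mean wind speed drawn from the 6.4-11.9 m/s range cited above.
import random
import statistics

def lcoe(design: list[float], mean_wind_speed: float) -> float:
    """Placeholder cost model; real use would call the aeroservoelastic chain."""
    rotor_radius, tip_speed_ratio = design
    energy = 0.4 * mean_wind_speed**3 * rotor_radius**2 / tip_speed_ratio
    capital = 900.0 + 15.0 * rotor_radius**2
    return capital / energy

def robust_objective(design: list[float], weight: float = 1.0,
                     n_samples: int = 1000, seed: int = 0) -> float:
    """Mean + weight * std of LCOE over sampled mean wind speeds."""
    rng = random.Random(seed)
    samples = [lcoe(design, rng.uniform(6.4, 11.9)) for _ in range(n_samples)]
    return statistics.mean(samples) + weight * statistics.pstdev(samples)

print(robust_objective([60.0, 7.5]))
```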