860 results for Engineering, Civil|Engineering, Industrial|Computer Science
Abstract:
In the design of lattice domes, design engineers need expertise in areas such as configuration processing, nonlinear analysis, and optimization. These are extensive numerical, iterative, and time-consuming processes that are prone to error without an integrated design tool. This article presents the application of a knowledge-based system to solving lattice-dome design problems. An operational prototype knowledge-based system, LADOME, has been developed using a combined knowledge-representation approach that employs rules, procedural methods, and an object-oriented blackboard concept. The system's objective is to assist engineers in lattice-dome design by integrating all design tasks into a single computer-aided environment with implementation of the knowledge-based system approach. For system verification, results from design examples are presented.
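LADOME itself is not publicly documented beyond this abstract. Purely as a minimal sketch of the rules-plus-blackboard control idea it describes, the fragment below lets knowledge sources fire against a shared blackboard until no rule applies; all rule names, heuristics and numbers are hypothetical.

```python
# Minimal sketch of a rule-based blackboard, loosely illustrating the
# combined knowledge-representation approach (rules + procedural methods +
# an object-oriented blackboard). Rules and values are invented, not LADOME's.

class Blackboard(dict):
    """Shared working memory that knowledge sources read and write."""

def rule_select_dome_type(bb):
    if "span_m" in bb and "dome_type" not in bb:
        bb["dome_type"] = "schwedler" if bb["span_m"] > 40 else "lamella"
        return True
    return False

def rule_estimate_rise(bb):
    if "dome_type" in bb and "rise_m" not in bb:
        bb["rise_m"] = round(bb["span_m"] / 6.0, 2)  # hypothetical rise/span heuristic
        return True
    return False

RULES = [rule_select_dome_type, rule_estimate_rise]

def control_loop(bb):
    # Fire applicable rules until no rule changes the blackboard.
    changed = True
    while changed:
        changed = any(rule(bb) for rule in RULES)

bb = Blackboard(span_m=50.0)
control_loop(bb)
print(bb)  # {'span_m': 50.0, 'dome_type': 'schwedler', 'rise_m': 8.33}
```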
Abstract:
In this work, we present a systematic approach to the representation of modelling assumptions. Modelling assumptions form the fundamental basis for the mathematical description of a process system. These assumptions can be translated into either additional mathematical relationships or constraints between model variables, equations, balance volumes or parameters. In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. The smallest indivisible syntactical element, the so-called assumption atom, has been identified as a triplet. With this syntax, a modelling assumption can be described as an elementary assumption, i.e. an assumption consisting of a single assumption atom, or as a composite assumption consisting of a conjunction of elementary assumptions. This syntax enables us to represent modelling assumptions as transformations acting on the set of model equations. The notions of syntactical correctness and semantic consistency of sets of modelling assumptions are defined, and necessary conditions for checking them are given. These transformations can be used in several ways and their implications can be analysed by formal methods. The modelling assumptions define model hierarchies, that is, a series of model families each belonging to a particular equivalence class. These model equivalence classes can be related to primal assumptions regarding the definition of mass, energy and momentum balance volumes, and to secondary and tertiary assumptions regarding the presence or absence and the form of mechanisms within the system. Within equivalence classes there are many model members, related by algebraic model transformations for the particular model. We show how these model hierarchies are driven by the underlying assumption structure and indicate some implications for system dynamics and complexity issues. (C) 2001 Elsevier Science Ltd. All rights reserved.
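As a loose illustration of the syntax described above, the sketch below encodes an assumption atom as a triplet and applies a composite assumption as a transformation on a toy equation set. The field names, the triplet interpretation and the textual substitution are illustrative assumptions, not the paper's formal notation.

```python
# Sketch: assumption atoms as triplets; a composite assumption is a
# conjunction of atoms acting as a transformation on the model equations.

from dataclasses import dataclass

@dataclass(frozen=True)
class AssumptionAtom:
    subject: str   # model element the assumption refers to (hypothetical field)
    keyword: str   # kind of assumption, e.g. "steady-state", "negligible"
    value: str     # the relationship it induces, e.g. which term vanishes

# A composite assumption: a conjunction (here, a tuple) of elementary atoms.
STEADY_NO_LOSS = (
    AssumptionAtom("energy-balance", "steady-state", "dT/dt = 0"),
    AssumptionAtom("heat-loss-term", "negligible", "Q_loss = 0"),
)

def apply_assumptions(equations, assumptions):
    """Treat the assumptions as transformations on the equation set by
    substituting each induced relationship into every equation."""
    subs = {}
    for atom in assumptions:
        lhs, rhs = (s.strip() for s in atom.value.split("="))
        subs[lhs] = rhs
    out = []
    for eq in equations:
        # naive textual substitution; a real tool would act on parsed equations
        for old, new in subs.items():
            eq = eq.replace(old, new)
        out.append(eq)
    return out

model = ["m*c*dT/dt = Q_in - Q_out - Q_loss"]
print(apply_assumptions(model, STEADY_NO_LOSS))
# ['m*c*0 = Q_in - Q_out - 0'], i.e. Q_in = Q_out at steady state
```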
Abstract:
A combination of modelling and analysis techniques was used to design a six-component force balance. The balance was designed specifically for the measurement of impulsive aerodynamic forces and moments characteristic of hypervelocity shock tunnel testing using the stress wave force measurement technique. Aerodynamic modelling was used to estimate the magnitude and distribution of forces, and finite element modelling to determine the mechanical response of proposed balance designs. Simulation of balance performance was based on aerodynamic loads and mechanical responses using convolution techniques. Deconvolution was then used to assess balance performance and to guide further design modifications leading to the final balance design. (C) 2001 Elsevier Science Ltd. All rights reserved.
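The convolution/deconvolution step the abstract mentions can be made concrete with a short numerical sketch: a gauge signal is modelled as the load convolved with the balance's impulse response, and the load is recovered by regularized FFT deconvolution. The impulse response, load shape and damping constant below are synthetic stand-ins, not the paper's data.

```python
# Numerical sketch of stress-wave force recovery by deconvolution.
import numpy as np

n, dt = 512, 1e-5                       # samples, time step (s)
t = np.arange(n) * dt

# Synthetic impulse response of one balance channel (decaying oscillation)
h = np.exp(-t / 2e-4) * np.sin(2 * np.pi * 5e3 * t)

# Synthetic step-like aerodynamic load (N)
u = np.where(t > 5e-4, 100.0, 0.0)

# Forward problem: measured signal = load convolved with impulse response
y = np.convolve(u, h)[:n] * dt

# Inverse problem: recover the load by FFT division with regularization
H = np.fft.rfft(h) * dt
Y = np.fft.rfft(y)
eps = 1e-3 * np.max(np.abs(H)) ** 2     # Tikhonov-style damping term
u_hat = np.fft.irfft(Y * np.conj(H) / (np.abs(H) ** 2 + eps), n)

print(float(u_hat[t > 1e-3].mean()))    # close to the true 100 N step
```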
Abstract:
Design of liquid retaining structures involves many decisions to be made by the designer based on rules of thumb, heuristics, judgment, codes of practice and previous experience. Design parameters to be chosen include configuration, material, loading, etc. A novice engineer may face many difficulties in the design process. Recent developments in artificial intelligence and the emerging field of knowledge-based systems (KBS) have found widespread application in different fields. However, no attempt has been made to apply this intelligent-system approach to the design of liquid retaining structures. The objective of this study is thus to develop a KBS able to assist engineers in the preliminary design of liquid retaining structures. Moreover, it can provide expert advice to the user on the selection of design criteria, design parameters and the optimum configuration based on minimum cost. The development of a prototype KBS for the design of liquid retaining structures (LIQUID), using a blackboard architecture with hybrid knowledge-representation techniques including a production-rule system and an object-oriented approach, is presented in this paper. An expert system shell, Visual Rule Studio, is employed to facilitate the development of this prototype system. (C) 2002 Elsevier Science Ltd. All rights reserved.
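Complementing the blackboard sketch given earlier, the fragment below illustrates only the "optimum configuration based on minimum cost" idea: production rules veto infeasible candidates and the cheapest survivor is recommended. Candidates, rules and costs are invented placeholders, not LIQUID's knowledge base (which was built in Visual Rule Studio).

```python
# Hypothetical rule-based screening plus minimum-cost selection.
CANDIDATES = [
    {"shape": "circular",    "material": "RC",    "cost": 120_000},
    {"shape": "rectangular", "material": "RC",    "cost": 105_000},
    {"shape": "rectangular", "material": "steel", "cost": 140_000},
]

# Production rules: each returns False to veto a candidate for this site.
RULES = [
    lambda c, site: not (site["aggressive_soil"] and c["material"] == "steel"),
    lambda c, site: not (site["capacity_m3"] > 5000 and c["shape"] == "rectangular"),
]

def recommend(site):
    feasible = [c for c in CANDIDATES
                if all(rule(c, site) for rule in RULES)]
    return min(feasible, key=lambda c: c["cost"]) if feasible else None

print(recommend({"aggressive_soil": True, "capacity_m3": 8000}))
# -> the circular RC tank, the cheapest candidate passing both rules
```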
Abstract:
This paper presents the design of a mobile cockpit system (MCS) for smartphones, which assists electric bicycle (EB) cyclists in smart-city environments. The system introduces a mobile application (MCS App) whose goal is to provide useful personalized information to the cyclist about the EB's use, including EB range prediction for the intended path, management of the cycling effort performed by the cyclist, handling of the battery-charging process, and provision of information on available public transport. This work also introduces the EB cyclist profile concept, which is based on the analysis of historical data previously stored in a database and collected from mobile devices' sensors. The tests performed show the importance of route guidance for energy savings, and that range predictions change significantly with the user and the route taken. Notably, the proposed system can also be used with bicycles in general.
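As a hedged sketch of the profile-based range prediction the abstract describes, the snippet below divides remaining battery energy by an expected per-kilometre consumption derived from a cyclist profile and the intended route. The profile fields, route factors and assist model are hypothetical, not the MCS App's actual algorithm.

```python
# Toy range prediction from a cyclist profile and route characteristics.
from dataclasses import dataclass

@dataclass
class CyclistProfile:
    base_wh_per_km: float        # learned from the rider's historical trips

ROUTE_FACTORS = {"flat": 1.0, "rolling": 1.2, "hilly": 1.5}  # assumed values

def predict_range_km(battery_wh_remaining, profile, route_kind, assist_level):
    """Remaining range = remaining energy / expected consumption per km."""
    wh_per_km = (profile.base_wh_per_km
                 * ROUTE_FACTORS[route_kind]
                 * (0.7 + 0.3 * assist_level))  # higher assist, higher draw
    return battery_wh_remaining / wh_per_km

rider = CyclistProfile(base_wh_per_km=7.5)      # e.g. fitted from past rides
print(predict_range_km(250.0, rider, "hilly", assist_level=1.0))  # ~22.2 km
```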
Abstract:
Evolutionary Computation falls within the field of Artificial Intelligence and is a branch of computer science that has been applied to problem solving in several areas of Engineering. This work presents the state of the art of Evolutionary Computation, as well as some of its applications in electronics, known as Evolutionary Electronics (or Evolvable Hardware), with emphasis on the synthesis of combinational digital circuits. Artificial Intelligence is presented first, followed by Evolutionary Computation in its main strands: Evolutionary Algorithms, based on Charles Darwin's theory of the evolution of species, and Swarm Intelligence, based on the collective behaviour of certain animals. Regarding Evolutionary Algorithms, evolution strategies, genetic programming, evolutionary programming and, with greater emphasis, Genetic Algorithms are described. Regarding Swarm Intelligence, ant colony optimization and particle swarm optimization are described. A study of Evolutionary Electronics was also carried out, briefly explaining some of its application areas, among them robotics, FPGAs, printed circuit board routing, synthesis of digital and analog circuits, telecommunications, and controllers. To give concrete form to the study, a case study of the application of genetic algorithms to the synthesis of combinational digital circuits is presented, based on the analysis and comparison of three references by different authors. This study made it possible to compare not only the results obtained by each author, but also the way the genetic algorithms were implemented, namely with respect to the parameters, the genetic operators used, the fitness function, the hardware implementation, and the type of circuit encoding.
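In the spirit of the case study above, the sketch below shows one common shape of a genetic algorithm for combinational circuit synthesis: a chromosome encodes a small feed-forward netlist of two-input gates, and fitness counts matched truth-table rows. The encoding, operators and parameters are deliberately simplified assumptions, not any of the compared authors' setups.

```python
# Toy GA that evolves a gate netlist implementing XOR.
import random

GATES = {0: lambda a, b: a & b, 1: lambda a, b: a | b,
         2: lambda a, b: a ^ b, 3: lambda a, b: 1 - (a & b)}  # AND OR XOR NAND

N_INPUTS, N_CELLS = 2, 4
TARGET = {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}      # truth table

def random_cell(pos):
    # each cell: (gate, src1, src2); sources are inputs or earlier cells
    pool = N_INPUTS + pos
    return (random.randrange(4), random.randrange(pool), random.randrange(pool))

def evaluate(chrom, inputs):
    signals = list(inputs)
    for gate, s1, s2 in chrom:
        signals.append(GATES[gate](signals[s1], signals[s2]))
    return signals[-1]                   # last cell drives the output

def fitness(chrom):
    return sum(evaluate(chrom, k) == v for k, v in TARGET.items())

def mutate(chrom):
    i = random.randrange(N_CELLS)
    return chrom[:i] + [random_cell(i)] + chrom[i + 1:]

pop = [[random_cell(i) for i in range(N_CELLS)] for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):   # all truth-table rows matched
        break
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]
print(fitness(pop[0]), pop[0])
```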
Abstract:
This case study illustrates the application of the Value Creation Radar (VCR) to SenSyF, an Earth Observation (EO) system developed by Deimos Engenharia S.A. (DME), the Portuguese affiliate of Elecnor Deimos. It describes how a team of consultants adopted the VCR to find new market applications for SenSyF, selected the one with the highest potential, and defined a path to guarantee a sustainable market launch. This case study highlights the main challenges of bringing a technology-driven company closer to the market in the pursuit of long-term sustainability, while not compromising its technological capabilities.
Abstract:
Under the framework of constraint-based modeling, genome-scale metabolic models (GSMMs) have been used for several tasks, such as metabolic engineering and phenotype prediction. More recently, their application in health-related research has spanned drug discovery, biomarker identification and host-pathogen interactions, targeting diseases such as cancer, Alzheimer's disease, obesity and diabetes. In recent years, the development of novel techniques for genome sequencing and other high-throughput methods, together with advances in Bioinformatics, has allowed the reconstruction of GSMMs for human cells. Considering the diversity of cell types and tissues present in the human body, it is imperative to develop tissue-specific metabolic models. Methods to generate these models automatically, based on generic human metabolic models and a plethora of omics data, have been proposed. However, their results have not yet been adequately and critically evaluated and compared. This work presents a survey of the most important tissue- or cell-type-specific metabolic model reconstruction methods, which use literature, transcriptomics, proteomics and metabolomics data together with a global template model. As a case study, we analyzed the consistency between several omics data sources and reconstructed distinct metabolic models of hepatocytes using different methods and data sources as inputs. The results show that omics data sources overlap poorly and, in some cases, are even contradictory. Additionally, the generated hepatocyte metabolic models are in many cases unable to perform metabolic functions known to be present in liver tissue. We conclude that reliable methods for a priori omics data integration are required to support the reconstruction of complex models of human cells.
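The consistency check described above can be illustrated by comparing the gene sets that different omics sources report as active, e.g. via Jaccard similarity. The three sets below are toy stand-ins for real transcriptomics/proteomics/literature evidence, not the study's data.

```python
# Pairwise Jaccard overlap between evidence sets from different omics sources.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

sources = {
    "transcriptomics": {"HK1", "PCK1", "CYP3A4", "G6PC"},
    "proteomics":      {"HK1", "CYP3A4", "ALB"},
    "literature":      {"PCK1", "G6PC", "ALB", "CPS1"},
}

names = list(sources)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        print(f"{x} vs {y}: {jaccard(sources[x], sources[y]):.2f}")
# Low pairwise scores would mirror the poor overlap reported above.
```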
Abstract:
Resilience is the property of a system to remain trustworthy despite changes. Changes of various natures, whether due to failures of system components or to varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error-detection capabilities to recognise changes, and sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of system reconfiguration mechanisms are challenging and error-prone engineering tasks. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature: to ensure the system's functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on properties such as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness. Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature, industrial-strength tool support: the Rodin platform. Proof-based verification, as well as the reliance on abstraction and decomposition adopted in Event-B, provides designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, to achieve resilience we also need to analyse a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with integrating techniques such as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect overall system resilience. The approach proposed in this thesis is validated by a number of case studies from areas such as robotics, space, healthcare and the cloud domain.
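The quantitative side of the framework can be hinted at with a tiny SimPy discrete-event simulation: a component fails at random times, a reconfiguration mechanism restores service after a fixed delay, and long-run availability is estimated. The rates and the single-component architecture are assumptions for illustration, not a model from the thesis.

```python
# Discrete-event availability sketch in SimPy (failure + reconfiguration).
import random
import simpy

MTTF, RECONF_TIME, HORIZON = 100.0, 5.0, 100_000.0
downtime = 0.0

def component(env):
    global downtime
    while True:
        yield env.timeout(random.expovariate(1.0 / MTTF))  # time to failure
        start = env.now
        # monitoring detects the failure; reconfiguration switches to a spare
        yield env.timeout(RECONF_TIME)
        downtime += env.now - start

env = simpy.Environment()
env.process(component(env))
env.run(until=HORIZON)
print(f"estimated availability: {1 - downtime / HORIZON:.4f}")
# Expected roughly MTTF / (MTTF + RECONF_TIME) = 0.9524
```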
Abstract:
A principal goal of software engineering is to be able to produce complex, large-scale, reliable software within a reasonable time. Object-oriented (OO) technology has provided sound concepts as well as modelling and programming techniques that have made it possible to develop complex applications in both academia and industry. This experience, however, has also revealed weaknesses of the object paradigm (for example, code scattering and the traceability problem). Aspect-oriented (AO) programming offers a simple solution to the limitations of OO programming, such as the problem of crosscutting concerns. Crosscutting concerns manifest themselves as the same code scattered across several modules of the system, or as several pieces of code tangled within a single module. This new way of programming makes it possible to implement each concern independently of the others and then assemble them according to well-defined rules. AO programming thus promises better productivity, better code reuse, and better adaptability of code to change. This new approach quickly spread across the entire software development process, with the goal of preserving modularity and traceability, two important properties of high-quality software. However, AO technology presents many challenges. Reasoning about, specifying, and verifying AO programs is difficult, all the more so as these programs evolve over time. Modular reasoning about these programs is therefore required; otherwise they would have to be re-examined in full each time a component is changed or added. It is well known in the literature, however, that modular reasoning about AO programs is difficult, since the applied aspects often change the behaviour of their base components [47]. The same difficulties arise in the specification and verification phases of the software development process. To the best of our knowledge, modular specification and modular verification are poorly covered and constitute a very interesting field of research. Likewise, interaction between aspects is a serious problem in the aspects community. To address these problems, we chose to use category theory and algebraic specification techniques. To provide a solution to the problems cited above, we used the work of Wiels [110] and other contributions such as those described in the book [25]. We assume that the system under development is already decomposed into aspects and classes. The first contribution of this thesis is the extension of algebraic specification techniques to the notion of aspect. Second, we define a logic, LA, which is used in the body of specifications to describe the behaviour of these components. The third contribution is the definition of the weaving operator, which corresponds to the interconnection relation between aspect modules and class modules. The fourth contribution concerns the development of a prevention mechanism that makes it possible to prevent undesirable interactions in aspect-oriented systems.
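The thesis defines weaving categorically over algebraic specifications; purely as a loose operational illustration of what a weaving operator interconnects, the sketch below "weaves" an aspect (logging advice) into a class module by wrapping the methods a pointcut selects. All names and the pointcut convention are invented for the example.

```python
# Toy aspect weaving: wrap matching methods of a class with advice.
import functools

def weave(cls, pointcut, advice):
    """Weaving operator: interconnect an aspect (advice) with a class module
    by replacing every method whose name matches the pointcut."""
    for name in dir(cls):
        if pointcut(name) and callable(getattr(cls, name)):
            setattr(cls, name, advice(getattr(cls, name)))
    return cls

def log_advice(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        print(f"before {method.__name__}")   # before-advice
        result = method(*args, **kwargs)
        print(f"after {method.__name__}")    # after-advice
        return result
    return wrapper

class Account:
    def __init__(self): self.balance = 0
    def deposit(self, amount): self.balance += amount

weave(Account, pointcut=lambda n: n == "deposit", advice=log_advice)
a = Account(); a.deposit(10)                 # advice fires around deposit
print(a.balance)                             # 10
```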
Abstract:
The Vehicle Routing Problem (VRP) is an important key to managing logistics systems efficiently, which can lead to improved customer satisfaction by serving more customers in a shorter time. In general terms, it involves planning the routes of a fleet of vehicles of given capacity based at one or more depots. The goal is to deliver or collect a certain quantity of goods to a set of geographically dispersed customers while respecting the capacity constraints of the vehicles. The VRP, as a class of discrete optimization problems of great complexity, has been studied by many researchers over recent decades. Given its practical importance, researchers in computer science, operations research, and industrial engineering have developed highly efficient algorithms, of exact or heuristic nature, to deal with the different types of VRP. However, the approaches proposed for the VRP have often been criticized for being too focused on simplistic versions of the vehicle routing problems encountered in real applications. Consequently, researchers have recently turned to variants of the VRP that were previously considered too hard to solve. These variants include the complex attributes and constraints observed in real cases and provide solutions that are executable in practice. These extensions of the VRP are called Multi-Attribute Vehicle Routing Problems (MAVRP). The main goal of this thesis is to study the practical aspects of three types of multi-attribute vehicle routing problems, which are modelled herein. Moreover, since for the VRP, as for most NP-complete problems, it is hard to solve large instances optimally within a reasonable running time, we turn to approximate methods based on heuristics.
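As a hedged sketch of the heuristic route the abstract points to, the snippet below builds routes for a basic capacitated VRP with a nearest-neighbour heuristic, opening a new route whenever vehicle capacity would be exceeded. Coordinates and demands are toy data; real MAVRP solvers handle many more attributes.

```python
# Nearest-neighbour construction heuristic for a capacitated VRP.
import math

DEPOT = (0.0, 0.0)
CUSTOMERS = {1: ((2, 3), 4), 2: ((5, 1), 3), 3: ((6, 6), 5),
             4: ((1, 7), 2), 5: ((8, 3), 6)}    # id: (xy, demand)
CAPACITY = 10

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_routes():
    unserved, routes = set(CUSTOMERS), []
    while unserved:
        route, load, pos = [], 0, DEPOT
        while True:
            feasible = [c for c in unserved
                        if load + CUSTOMERS[c][1] <= CAPACITY]
            if not feasible:                     # capacity exhausted: close route
                break
            nxt = min(feasible, key=lambda c: dist(pos, CUSTOMERS[c][0]))
            route.append(nxt)
            load += CUSTOMERS[nxt][1]
            pos = CUSTOMERS[nxt][0]
            unserved.remove(nxt)
        routes.append(route)
    return routes

print(nearest_neighbour_routes())   # -> [[1, 2, 4], [3], [5]] on these toy data
```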
Abstract:
The administrative and purchasing process at OPL Carga has several shortcomings, among them: communication failures among operational staff; internal calls are not made, with email used so frequently that requests pile up and end up unresolved; regarding roles, there is no process focus, since the tasks of each position are not clear; in addition, the subprocesses are unclear, harming the process through increased costs and lost time; staff responsibilities are not always carried out within the assigned time; and shared leadership presents ambiguities. Objectives: To define teamwork in the administrative and purchasing process at OPL Carga in Bucaramanga. The research to be carried out is descriptive; it seeks to uncover the shortcomings or characteristics that make it possible to design and develop a solution model for the problems of the OPL Carga S.A.S. team. Materials and methods: The research carried out is descriptive; the objective is to define the teamwork model and describe the shortcomings in the administrative and purchasing process at OPL Carga in Bucaramanga, in order to obtain a comprehensive diagnosis leading to the implementation of solution strategies. Results: Shortcomings were identified in the following aspects: the variables of communication, performance, complementary skills, meaningful purpose, and specific goals of the staff in the administrative section of OPL Carga. Conclusions: The teamwork model that OPL applies is hierarchical; it offers stability and security, and decisions are made in a pyramidal fashion, through task planning, collaboration, equality, and respect for members, working towards problem solving. A conceptual map was built that allowed the student to set out her interpretation of the theories, research, and background relevant to understanding the problem investigated. Communication area: coordinate actions so that staff answer work-related emails on time. Working-conditions area: clarify and design the rules of behaviour within work teams that lead to their improvement and the search for timely solutions. Specific-goals area: use audits to ensure fulfilment of the goals and objectives proposed by each work team.
Abstract:
Information technologies have begun to be an important factor to take into account in each of the processes carried out along the supply chain. Their implementation and correct use give companies advantages that favour operational performance throughout the chain. The development and application of software have contributed to the integration of the different members of the chain, so that from suppliers to the final customer, benefits are perceived in operational performance variables and in the level of satisfaction, respectively. On the other hand, it is important to consider that their implementation does not always yield positive results; on the contrary, the implementation process can be seriously affected by barriers that prevent maximizing the benefits that ICT provide.
Abstract:
This document focuses on presenting and analysing information to establish how companies in the natural gas extraction and gas-fired power generation sector make investment decisions, concentrating on the logic they use when undertaking this process. This responds to the constant need to establish processes that allow sounder decisions to be made, drawing on every available tool. Logic is one of these tools, since it makes it possible to link factors together in order to obtain positive results. For this reason, it is important to understand the use of this tool, taking into account how and in what contexts it is used. For greater focus, this study concentrates on a specific sector: oil and natural gas extraction. This reflects the existing need for a theoretical foundation that clearly establishes the appropriate way to make decisions in a sector as diverse and complex as this one. The current business context demands a global vision, not one based on the linear causal logic that serves as today's reference. The oil and natural gas extraction sector is a particular example of how investment decisions are made, since most of its companies are capital-intensive and maintain a high flow of monetary resources.