980 results for Engineering problems


Relevance: 30.00%

Abstract:

Research project HR-234A was sponsored by the Iowa Highway Research Board and the Iowa Department of Transportation. In preparing this compilation of the highway and street laws of Iowa, an attempt has been made to include those sections of the Iowa Code Annotated and Iowa Digest to which reference is frequently required by the Department of Transportation, counties, cities and towns in their conduct of highway and street administration, construction and maintenance. This publication is offered with the hope and belief that it will prove to be of value and assistance to those concerned with the problems of establishing, maintaining and administering a highway and street program. Because of the broad scope of highway and street work, the many interrelated provisions of Iowa law, and the need to keep the volume a usable size, some Code provisions which are insignificant to the principal subject were omitted out of necessity; others were omitted to avoid repetition. A general index is provided at the end of the text of this volume. Each major topic is divided into subtopics and is accompanied by the appropriate Code sections. Specific section numbers as they appear in the Code are included.

Relevance: 30.00%

Abstract:

Excessive speed on State and County highways is recognized as a serious problem by many Iowans. Speed increases both the risk and the severity of accidents. Studies conducted by the FHWA and NHTSA have concluded that if average speeds were increased by five MPH, fatalities would increase by at least 2,200 annually. Along with the safety problems associated with excessive speed there are important energy considerations. When the national speed limit was lowered to 55 MPH in 1974, tremendous fuel savings were realized. The estimated actual savings for automobiles amounted to 2.2 billion gallons, an average of 20.75 gallons for each of the 106 million automobiles registered in 1975. These benefits prompted the Federal-Aid Amendment of 1974, requiring annual State enforcement certification as a prerequisite for approval of Federal-aid highway projects. In 1978, the United States D.O.T. recommended to Congress significant changes in speed limit legislation designed to increase compliance with the national speed limit. The Highway Safety Act of 1978 provides both for withholding Federal-aid highway funds and for awarding incentive grants based on speed compliance data submitted annually. The objective of this study was to develop and make operational an automatic speed monitoring system with the flexibility to collect accurate speed data on all road systems in Iowa. It was concluded that the Automatic Speed Monitoring Program in Iowa has been successful and that the needed data are being collected in the most economical manner possible.
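The per-vehicle figure quoted above follows directly from the two totals (a quick arithmetic check):

$$ \frac{2.2 \times 10^{9}\ \text{gallons}}{106 \times 10^{6}\ \text{automobiles}} \approx 20.75\ \text{gallons per automobile.} $$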

Relevance: 30.00%

Abstract:

In Iowa it is normal procedure to use either partial- or full-depth patching to repair deteriorated areas of pavement prior to resurfacing. The Owens/Corning Corporation introduced a repair system to replace the patching process. Their Roadglas repair system was used in this research project on US 30 in Story County. It was installed in 1985 and has been observed annually since that time. There were some construction problems with slippage as the roller crossed the abundant Roadglas binder. It appears the Roadglas system has helped to control reflective cracking in the research areas. Since this project was completed, it has been reported that Owens/Corning has discontinued production of the Roadglas system.

Relevance: 30.00%

Abstract:

Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) in the solution, where the generation of spurious oscillations or smearing should be precluded. This work is devoted to the development of an efficient numerical technique to deal with pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method developed in this study is based on meshless numerical particles which carry the solution along the characteristics defining the convective transport. The resolution of steep fronts in the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. For convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has second-order spatial accuracy and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in regions of steep fronts in the solution. Moreover, the particle transport method can be successfully applied to the numerical simulation of real-life problems in, for example, chemical engineering.
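As a toy illustration of the semi-Lagrangian particle idea for pure 1-D linear convection u_t + a u_x = 0 (an assumed simplification for illustration: the thesis method adds spatial adaptivity and a monotone projection, neither of which is sketched here):

```python
import numpy as np

def particle_convection(u0, a, dt, steps, x):
    """Advect particle-carried values exactly along characteristics,
    then project back onto the fixed Eulerian grid by interpolation."""
    L = x[-1] - x[0]                      # periodic domain length
    xp, up = x.copy(), u0.copy()          # particles start at the grid nodes
    for _ in range(steps):
        xp += a * dt                      # exact transport: dx/dt = a
        xp = x[0] + (xp - x[0]) % L       # wrap particles into the domain
    # The thesis uses a fast monotone projection; plain linear
    # interpolation stands in for it in this sketch.
    order = np.argsort(xp)
    return np.interp(x, xp[order], up[order], period=L)

x = np.linspace(0.0, 1.0, 101)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)      # a narrow pulse (steep fronts)
u = particle_convection(u0, a=1.0, dt=0.01, steps=50, x=x)
# After t = 0.5 the pulse should sit near x = 0.8, essentially undamped:
# particles carry exact values, so no numerical diffusion is introduced.
```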

Relevance: 30.00%

Abstract:

This master's thesis surveys the literature on how evolutionary algorithms are used to solve different search and optimization problems in the area of software engineering. Evolutionary algorithms are methods that imitate the natural evolution process: an artificial evolution process evaluates the fitness of each individual, the individuals being solution candidates, and the next population of candidate solutions is formed from the good properties of the current population by applying different mutation and crossover operations. Evolutionary algorithm applications related to software engineering were searched for in the literature, then classified and presented, together with the necessary basics of evolutionary algorithms. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing; for example, there were applications for classifying software production data, project scheduling, static task scheduling for parallel computing, allocating modules to subsystems, N-version programming, test data generation, and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer Aided Software Engineering tools based on evolutionary algorithms.
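As a minimal generic sketch of the evolutionary loop described above (the bit-string encoding, the parameters and the toy "OneMax" fitness are illustrative assumptions, not taken from the thesis):

```python
import random

def evolve(fitness, length=20, pop_size=40, generations=100, p_mut=0.02, seed=0):
    """Minimal genetic algorithm over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: keep the fitter of two random individuals.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with per-gene probability p_mut.
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy fitness: "OneMax" counts the ones; the optimum is the all-ones string.
best = evolve(fitness=sum)
print(sum(best), best)
```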

Relevance: 30.00%

Abstract:

This thesis studies the problems a software architect faces in his work, and their causes. The purpose of the study is to identify potential factors causing problems in system integration and software engineering, with special interest in non-technical factors. The thesis was carried out by interviewing professionals who took part in an e-commerce project at a corporation. The interviewed professionals consisted of architects from the technical implementation projects, the leader of the corporation's architect team, different kinds of project managers, and a CRM manager. A specific theme list was used to guide the interviews; the recorded interviews were transcribed and then coded using the ATLAS.ti software. The basics of e-commerce, software engineering and system integration are also described: the differences between e-commerce, e-business and traditional business, the basic types of e-commerce, and, on the software engineering side, the software life span and the general problems of software engineering and software design. In addition, the general problems of system integration and the special requirements set by e-commerce are described. The thesis ends with a description of the problems found in the study and of some areas of software engineering that could be developed so that similar problems could be avoided in the future.

Relevance: 30.00%

Abstract:

TRIZ is a well-known tool for creative problem solving based on analytical methods. This thesis suggests an adapted version of the contradiction matrix, a powerful TRIZ tool, together with a few principles based on the concepts of original TRIZ. It is believed that the proposed version will aid problem solving, especially for problems encountered in the unit operations of the chemical process industries. In addition, the thesis should help fresh process engineers recognize the importance of the various available methods for creative problem solving and learn the TRIZ method. The work mainly provides an idea of how to modify a TRIZ-based method according to one's requirements, so as to fit a particular niche area and solve problems efficiently and creatively. In this case, the contradiction matrix developed is based on a review of common problems encountered in the chemical process industry, particularly in unit operations, and the resolutions are based on approaches used in the past to handle those issues.
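To show how an adapted contradiction matrix is used mechanically, here is a hypothetical fragment (the feature pairs, principle numbers and names below are placeholders, not the matrix actually developed in the thesis):

```python
# Hypothetical fragment of an adapted contradiction matrix for unit
# operations: (improving feature, worsening feature) -> suggested
# inventive principles (numbers index a principle catalogue).
MATRIX = {
    ("separation efficiency", "energy consumption"): [2, 15, 35],
    ("throughput", "product purity"): [1, 10, 28],
    ("heat transfer rate", "fouling"): [15, 18, 35],
}

PRINCIPLES = {
    1: "Segmentation", 2: "Taking out", 10: "Preliminary action",
    15: "Dynamics", 18: "Mechanical vibration",
    28: "Mechanics substitution", 35: "Parameter changes",
}

def suggest(improving, worsening):
    """Look up inventive principles for a stated engineering contradiction."""
    for n in MATRIX.get((improving, worsening), []):
        print(f"Principle {n}: {PRINCIPLES[n]}")

suggest("heat transfer rate", "fouling")
```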

Relevance: 30.00%

Abstract:

This doctoral thesis describes the development work performed on the leach and purification sections of the electrolytic zinc plant in Kokkola to increase the efficiency of these two stages, and thus the competitiveness of the plant. Since metallic zinc is a typical bulk product, improving the competitiveness of a plant is mostly a question of decreasing unit costs. The problems in leaching were the low recovery of valuable metals from raw materials and the fact that the available technology offered only complicated and expensive processes to overcome this. In the purification, the main problem was the consumption of zinc powder, up to four to six times the stoichiometric demand. This reduced the capacity of the plant, as this zinc is re-circulated through the electrolysis, which is the absolute bottleneck in a zinc plant. Low selectivity gave low-grade, low-value precipitates for further processing to metallic copper, cadmium, cobalt and nickel. Knowledge of the underlying chemistry was poor, and process interruptions causing losses of zinc production were frequent. The studies on leaching comprised the kinetics of ferrite leaching and jarosite precipitation, as well as the stability of jarosite in acidic plant solutions. A breakthrough came with the finding that jarosite could precipitate under conditions where ferrite would leach satisfactorily. Based on this discovery, a one-step process for the treatment of ferrite was developed. In the plant, the new process almost doubled the recovery of zinc from ferrite in the same equipment in which the two-step jarosite process had been operated. In a later expansion of the plant, the investment savings were substantial compared to the other technologies available. In the solution purification, the key finding was that Co, Ni and Cu form specific arsenides in the "hot arsenic zinc dust" step. This was utilized in the development of a three-step purification stage based on fluidized bed technology in all three steps, i.e. the removal of Cu, Co and Cd. Both precipitation rates and selectivity increased, which strongly decreased the zinc powder consumption through a substantially suppressed hydrogen gas evolution. Better selectivity also improved the value of the precipitates: cadmium, which caused environmental problems in the copper smelter, was reduced from the normally reported 1-3% down to 0.05%, and a cobalt cake with 15% Co was easily produced in laboratory experiments on cobalt removal. The zinc powder consumption in the plant for a solution containing Cu, Co, Ni and Cd (1000, 25, 30 and 350 mg/l, respectively) was around 1.8 g/l, i.e. only 1.4 times the stoichiometric demand, or about a 60% saving in powder consumption. Two processes for direct leaching of the concentrate under atmospheric conditions were developed, one of which was implemented in the Kokkola zinc plant. Compared to the existing pressure leach technology, savings were obtained mostly in investment. The scientific basis for the most important processes and process improvements is given in the thesis, including mathematical modeling and thermodynamic evaluation of the experimental results and the hypotheses developed. Five of the processes developed in this research and development program were implemented in the plant and are still in operation.
Even though these processes were developed with a focus on the plant in Kokkola, they can also be implemented at low cost in most zinc plants globally, and thus have great significance for the development of the electrolytic zinc process in general.
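The quoted powder factor can be checked with a back-of-the-envelope cementation stoichiometry estimate (a sketch assuming 1:1 molar displacement of each divalent impurity by zinc; the molar masses are standard values, not from the thesis):

```python
# Impurity concentrations in mg/l and molar masses in g/mol.
impurities = {"Cu": (1000, 63.55), "Co": (25, 58.93),
              "Ni": (30, 58.69), "Cd": (350, 112.41)}
M_ZN = 65.38

# Each divalent impurity ion displaces one zinc atom: Zn + Me2+ -> Zn2+ + Me.
stoich = sum(conc / mm for conc, mm in impurities.values()) * M_ZN  # mg/l
print(f"stoichiometric zinc demand: {stoich / 1000:.2f} g/l")       # ~1.29 g/l
print(f"powder factor at 1.8 g/l:   {1800 / stoich:.2f}x")          # ~1.4x
```

The result, about 1.29 g/l of zinc and a factor of roughly 1.4 at the reported 1.8 g/l consumption, is consistent with the figures quoted in the abstract.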

Relevance: 30.00%

Abstract:

In today's world, because of the rapid advancement of technology and business, requirements are often unclear and change continuously during the development process, and these changes make software development very difficult. Using traditional software development methods such as the waterfall method is then not a good option, as traditional methods are not flexible to changing requirements and the software can end up late and over budget. To develop high-quality software that satisfies the customer, organizations can instead use agile methods, which accommodate changing requirements at any stage of the development process. Agile methods are iterative and incremental, accelerating the delivery of initial business value through continuous planning and feedback, with close communication between the customer and the developers. The main purpose of this thesis is to identify the problems of traditional software development and to show how agile methods reduce them. The study also covers the success factors of agile methods, the success rate of agile projects, and a comparison between traditional and agile software development.

Relevance: 30.00%

Abstract:

Recombinant human adenovirus (Ad) vectors are being extensively explored for use in gene therapy and recombinant vaccines. Ad vectors are attractive for many reasons: (1) they are relatively safe, based on their use as live oral vaccines; (2) they can accept large transgene inserts; (3) they can infect dividing and postmitotic cells; and (4) they can be produced to high titers. However, there are also a number of major problems associated with Ad vectors, including transient foreign gene expression due to host cellular immune responses, problems with humoral immunity, and the creation of replication competent adenoviruses (RCA). Most Ad vectors contain deletions in the E1 region that allow for insertion of a transgene. However, the E1 gene products are required for replication and thus must be supplied in trans by a helper cell line that allows for the growth and packaging of the defective virus. For this purpose the 293 cell line (Graham et al., 1977) is used most often; however, homologous recombination between the vector and the cell line often results in the generation of RCA. The presence of RCA in batches of adenoviral vectors for clinical use is a safety risk because they may result in the mobilization and spread of the replication-defective vector viruses, and in significant tissue damage and pathogenicity. The present research focused on altering the 293 cell line so that RCA formation can be eliminated. The strategy to modify the 293 cells involved the removal of the first 380 bp of the adenovirus genome through homologous recombination. The first step towards this goal involved identifying and cloning the left-end cellular-viral junction from 293 cells to assemble the sequences required for homologous recombination. The polymerase chain reaction (PCR) was performed to clone the junction, and the clone was verified through sequencing. The plasmid pAM2 was then constructed to serve as the targeting cassette used to modify the 293 cells. The cassette consisted of (1) the cellular-viral junction as the left-end region of homology, (2) the neo gene for positive selection upon transfection into 293 cells, (3) the adenoviral genome from bp 380 to bp 3438 as the right-end region of homology, and (4) the HSV-tk gene for negative selection. The plasmid pAM2 was linearized to produce a double-strand break outside the region of homology and transfected into 293 cells using the calcium-phosphate technique. Cells were first selected for resistance to the drug G418, and subsequently for resistance to the drug ganciclovir (GANC). From 17 transfections, 100 pools of G418-resistant and GANC-resistant cells were picked using cloning rings and expanded for screening. Genomic DNA was isolated from the pools and screened for the presence of the 380 bp using PCR. Ten of the most promising pools were diluted to single cells and expanded in order to isolate homogeneous cell lines. From these, an additional 100 G418-resistant and GANC-resistant foci were screened. These preliminary screening results appear promising for the detection of the desired cell line. Future work would include further cloning and purification of the promising cell lines that have potentially undergone homologous recombination, in order to isolate a homogeneous cell line of interest.

Relevance: 30.00%

Abstract:

The inverse problem in electroencephalography (EEG) is the localization of current sources in the brain using the surface potentials on the scalp generated by those sources. An inverse solution typically involves multiple calculations of scalp surface potentials, i.e. the EEG forward problem. To solve the forward problem, models are required for both the underlying source configuration, the source model, and the surrounding tissues, the head model. This thesis treats two quite distinct approaches for solving the forward and inverse EEG problems using the boundary element method (BEM): the conventional approach and the reciprocal approach. The conventional approach to the forward problem computes the scalp potentials starting from dipolar current sources. The reciprocal approach, on the other hand, first determines the electric field at the dipole source sites when the surface electrodes are used to inject and withdraw a unit current; the scalar product of this electric field with the dipole sources then yields the scalp potentials. The reciprocal approach promises a number of advantages over the conventional approach, including the possibility of increasing the accuracy of the scalp potentials and of reducing the computational requirements of inverse solutions. In this thesis, the BEM equations for the conventional and reciprocal approaches are developed using a common formulation, the weighted residual method. The numerical implementation of both approaches for the forward problem is described for a single-dipole source model. A three-layer concentric sphere head model, for which analytical solutions are available, is used. Scalp potentials are computed at either the centroids or the vertices of the BEM discretization elements. The performance of the conventional and reciprocal approaches for the forward problem is evaluated for radial and tangential dipoles of varying eccentricity and for two very different skull conductivity values. We then determine whether the potential advantages of the reciprocal approach suggested by the forward-problem simulations can be exploited to yield more accurate inverse solutions. Single-dipole inverse solutions are obtained by simplex minimization for both the conventional and reciprocal approaches, each with centroid and vertex variants. Again, numerical simulations are performed on a three-layer concentric sphere model for radial and tangential dipoles of varying eccentricity. The accuracy of the inverse solutions of the two approaches is compared for the two different skull conductivities, and their relative sensitivities to skull conductivity errors and to noise are evaluated. While the conventional vertex approach yields the most accurate forward solutions for the presumably more realistic skull conductivity, both the conventional and reciprocal approaches produce large errors in the scalp potentials for highly eccentric dipoles. The reciprocal approaches show the least variation in forward-solution accuracy across skull conductivity values.
In terms of single-dipole inverse solutions, the conventional and reciprocal approaches are of comparable accuracy. Localization errors are small, even for highly eccentric dipoles that produce large errors in the scalp potentials, owing to the nonlinear nature of single-dipole inverse solutions. Both approaches also proved equally robust to skull conductivity errors in the presence of noise. Finally, a more realistic head model is obtained from magnetic resonance images (MRI), from which the scalp, skull and brain/cerebrospinal fluid (CSF) surfaces are extracted. Both approaches are validated on this type of model using real somatosensory evoked potentials recorded following median nerve stimulation in healthy subjects. The accuracy of the inverse solutions for the conventional and reciprocal approaches and their variants, compared against known anatomical sites on MRI, is again evaluated for the two different skull conductivities. Their advantages and disadvantages, including their computational requirements, are also assessed. Once again, the conventional and reciprocal approaches produce small dipole position errors. Indeed, the position errors of single-dipole inverse solutions are inherently robust to inaccuracies in the forward solutions, but depend on the superposed activity of other neural sources. Contrary to expectations, the reciprocal approaches do not improve dipole position accuracy compared with the conventional approaches. However, reduced computational requirements in time and memory are the main advantages of the reciprocal approaches. This type of localization is potentially useful in planning neurosurgical interventions, for example in patients with refractory focal epilepsy who have often already undergone EEG and MRI.
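The reciprocal approach rests on a compact relation; as a sketch of the standard reciprocity result (the notation here is assumed, not taken from the thesis):

$$ V_a - V_b \;=\; \frac{\mathbf{E}_{ab}(\mathbf{r}) \cdot \mathbf{p}}{I_{ab}} $$

where $\mathbf{E}_{ab}(\mathbf{r})$ is the electric field at the dipole location $\mathbf{r}$ when a current $I_{ab}$ is injected at scalp electrode $a$ and withdrawn at electrode $b$, $\mathbf{p}$ is the dipole moment, and $V_a - V_b$ is the potential difference the same electrode pair would measure from that dipole. The field computation is done once per electrode pair, independently of the dipole, which is the source of the reduced computational requirements for inverse solutions mentioned above.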

Relevance: 30.00%

Abstract:

The Vehicle Routing Problem (VRP) is an important key to managing logistics systems efficiently, which can lead to improved customer satisfaction by serving more customers in a shorter time. In general terms, it involves planning the routes of a fleet of vehicles of given capacity based at one or more depots. The goal is to deliver or collect a certain quantity of goods to a set of geographically dispersed customers while respecting the vehicle capacity constraints. The VRP, as a class of discrete optimization problems of great complexity, has been studied by many researchers over recent decades. Given its practical importance, researchers in computer science, operations research and industrial engineering have developed very efficient algorithms, exact or heuristic in nature, to deal with the different types of VRP. However, the approaches proposed for the VRP have often been criticized for being too focused on simplistic versions of the vehicle routing problems encountered in real applications. Consequently, researchers have recently turned to variants of the VRP that were previously considered too difficult to solve. These variants include the complex attributes and constraints observed in real cases and provide solutions that are executable in practice. These extensions of the VRP are called Multi-Attribute Vehicle Routing Problems (MAVRP). The main purpose of this thesis is to study the practical aspects of three types of multi-attribute vehicle routing problems, which are modeled herein. In addition, since for the VRP, as for most NP-complete problems, it is difficult to solve large instances optimally within a reasonable computation time, we turn to approximate, heuristic-based methods.
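As a hedged illustration of the baseline capacitated VRP that these multi-attribute variants extend (the instance data and function below are hypothetical, not from the thesis), a minimal nearest-neighbour construction heuristic looks like this:

```python
import math

def nearest_neighbor_cvrp(depot, customers, demands, capacity):
    """Greedy construction heuristic for the capacitated VRP (CVRP).

    depot     -- (x, y) coordinates of the single depot
    customers -- dict: customer id -> (x, y)
    demands   -- dict: customer id -> demand (each assumed <= capacity)
    capacity  -- vehicle capacity, in the same units as demands
    Returns a list of routes; each route implicitly starts and ends at the depot.
    """
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unvisited = set(customers)
    routes = []
    while unvisited:
        route, load, pos = [], 0.0, depot
        while True:
            # Nearest unvisited customer that still fits in the vehicle.
            feasible = [c for c in unvisited if load + demands[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(pos, customers[c]))
            route.append(nxt)
            load += demands[nxt]
            pos = customers[nxt]
            unvisited.remove(nxt)
        routes.append(route)
    return routes

# Tiny hypothetical instance: 5 customers, vehicle capacity 10.
customers = {1: (2, 3), 2: (5, 1), 3: (6, 6), 4: (1, 7), 5: (8, 3)}
demands = {1: 4, 2: 3, 3: 5, 4: 2, 5: 6}
print(nearest_neighbor_cvrp((0, 0), customers, demands, capacity=10))
```

Heuristics of this constructive kind are typically only starting points; the meta-heuristics the thesis turns to would then improve such routes by local search.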

Relevance: 30.00%

Abstract:

This thesis aims to improve automation in Model Driven Engineering (MDE). MDE is a paradigm that promises to reduce software complexity through the intensive use of models and of automatic model transformations (MT). Put simply, in the MDE vision, specialists use several models to represent a piece of software, and they produce the source code by automatically transforming these models. Automation is therefore a key factor and a founding principle of MDE. Beyond MT, other activities also need automation, e.g. the definition of modeling languages and software migration. In this context, the main contribution of this thesis is to propose a general approach for improving MDE automation. Our approach is based on meta-heuristic search guided by examples. We apply this approach to two important MDE problems: (1) model transformation and (2) the precise definition of modeling languages. For the first problem, we distinguish between transformation in the context of migration and general model-to-model transformations. In the case of migration, we propose a software clustering method based on a meta-heuristic guided by clustering examples. Likewise, for general transformations, we learn model transformations using a genetic programming algorithm that draws on examples of past transformations. For the precise definition of modeling languages, we propose a method based on meta-heuristic search that derives well-formedness rules for meta-models, with the objective of discriminating well between valid and invalid models. The empirical studies we conducted show that the proposed approaches obtain good results, both quantitative and qualitative, which allows us to conclude that improving MDE automation using meta-heuristic search methods and examples can contribute to the wider adoption of MDE in industry in the years to come.
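To make "search guided by examples" concrete, here is a minimal hypothetical sketch (none of these names or representations come from the thesis) of a fitness function that scores a candidate transformation against recorded source/target example pairs, of the kind a genetic programming search would maximize:

```python
def fitness(candidate_transform, examples):
    """Score a candidate transformation against known example pairs.

    candidate_transform -- callable: source model -> predicted target model
    examples            -- list of (source_model, expected_target_model)
    Each model is represented here as a set of (element, property) facts.
    Returns the average Jaccard similarity between predicted and expected facts.
    """
    scores = []
    for source, expected in examples:
        predicted = candidate_transform(source)
        union = predicted | expected
        scores.append(len(predicted & expected) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

# Hypothetical usage: learn to transform "class" facts into "table" facts.
examples = [
    ({("Order", "class")}, {("ORDER", "table")}),
    ({("Customer", "class")}, {("CUSTOMER", "table")}),
]
candidate = lambda src: {(name.upper(), "table") for (name, _) in src}
print(fitness(candidate, examples))  # 1.0 for this perfect candidate
```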

Relevance: 30.00%

Abstract:

Construction activities in the coastal belt of our country often demand deep foundations because of the poor engineering properties of, and the related problems arising from, weak soil at shallow depths. The soil profile in coastal areas often consists of very loose sandy soils extending to a depth of 3 to 4 m from the ground level, underlain by clayey soils of medium consistency. The very low shearing resistance of the foundation bed causes local as well as punching shear failure, so structures built on these soils may suffer excessive settlement. This type of soil profile is very common in the coastal areas of Kerala, especially in Cochin. Further, the high water table and the limited depth of the top sandy layer in these areas restrict the depth of foundation, thereby further reducing the safe bearing capacity.
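For context, safe bearing capacity estimates of the kind referred to here conventionally start from Terzaghi's ultimate bearing capacity relation for a shallow strip footing (a standard geotechnical formula, not taken from this abstract):

$$ q_u = c\,N_c + \gamma D_f\,N_q + \tfrac{1}{2}\,\gamma B\,N_\gamma $$

where $c$ is the soil cohesion, $D_f$ the founding depth, $\gamma$ the unit weight of the soil, $B$ the footing width, and $N_c$, $N_q$, $N_\gamma$ dimensionless bearing capacity factors that depend on the friction angle. A high water table reduces the effective unit weight $\gamma$ toward its submerged value, which is one reason the safe bearing capacity drops in profiles like the one described.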

Relevance: 30.00%

Abstract:

This thesis deals with the use of simulation as a problem-solving tool for a number of logistic-system-related problems, more specifically studies on transport terminals. Transport terminals are key elements in the supply chains of industrial systems. One of the problems related to the use of simulation is the multiplicity of models needed to study different problems; there is a need for conceptual modelling methodologies that help reduce the number of models needed. Three different logistic terminal systems, viz. a railway yard, the container terminal of a port, and an airport terminal, were selected as cases for this study. The standard methodology for simulation development was carried out: system study and data collection, conceptual model design, detailed model design and development, model verification and validation, experimentation, analysis of results, and reporting of findings. We found that the models could be classified into tightly pre-scheduled, moderately pre-scheduled and unscheduled systems. Three types of simulation models (called TYPE 1, TYPE 2 and TYPE 3) of various terminal operations were developed in the simulation package Extend; all were discrete-event simulation models. The simulation models were successfully used to help solve strategic, tactical and operational problems related to the three logistic terminals, as set out in our objectives. As a contribution to conceptual modelling, we have demonstrated that grouping problems into operational, tactical and strategic classes, and matching them with tightly pre-scheduled, moderately pre-scheduled and unscheduled systems, is a workable approach that reduces the number of models needed to study different terminal-related problems.
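None of the three Extend models is reproduced here; as a generic, hypothetical illustration of the discrete-event mechanics that all such terminal models share (an event list ordered by time, with state updated event by event), consider a single-gate truck queue:

```python
import heapq
import random

def terminal_des(n_trucks=1000, mean_arrival=5.0, mean_service=4.0, seed=1):
    """Minimal discrete-event simulation of a single-gate terminal queue.

    Trucks arrive with exponential interarrival times and are served one
    at a time, first in, first out. Returns the mean wait in the queue.
    """
    rng = random.Random(seed)
    t, events = 0.0, []
    # Pre-generate the arrival events as (time, truck id) on an event list.
    for i in range(n_trucks):
        t += rng.expovariate(1.0 / mean_arrival)
        heapq.heappush(events, (t, i))
    gate_free_at, total_wait = 0.0, 0.0
    while events:
        arrival, _ = heapq.heappop(events)     # next event in time order
        start = max(arrival, gate_free_at)     # wait if the gate is busy
        total_wait += start - arrival
        gate_free_at = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_trucks

print(f"mean wait: {terminal_des():.2f} time units")
```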