926 results for Complex engineering problems
Abstract:
This thesis studies the problems a software architect faces in their work, and the reasons behind them. The purpose of the study is to identify potential factors causing problems in system integration and software engineering, with particular interest in non-technical factors. The thesis was carried out by interviewing professionals who took part in an e-commerce project at a corporation. The interviewees comprised architects from technical implementation projects, the leader of the corporation's architecture team, several kinds of project managers, and a CRM manager. A specific list of themes was used to guide the interviews. The recorded interviews were transcribed and then classified using the ATLAS.ti software. The basics of e-commerce, software engineering and system integration are also described. The differences between e-commerce, e-business and traditional business are presented, as are the basic types of e-commerce. Regarding software engineering, the software life span and the general problems of software engineering and software design are covered. In addition, the general problems of system integration and the special requirements set by e-commerce are described. The thesis ends with a description of the problems found in the study and of areas of software engineering where development could help avoid similar problems in the future.
Abstract:
The digestive tract is colonized from birth by a bacterial population, the microbiota, which influences the development of the immune system. Modifications in its composition are associated with pathologies such as obesity and inflammatory bowel disease. Antibiotics are known to influence the intestinal microbiota, but other environmental factors such as cigarette smoking also seem to have an impact on its composition. This influence might partly explain the weight gain observed after smoking cessation: the gut microbiota shifts toward a composition similar to that of obese people, with a microbial profile that is more efficient at extracting calories from ingested food. These new findings open new fields of diagnostic and therapeutic approaches through the regulation of the microbiota.
Abstract:
This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of spatial, temporal and spatio-temporal clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Environmental phenomena can normally be modelled as stochastic point processes, where each event, e.g. a forest fire ignition point, is characterised by its spatial location and its occurrence in time. Additionally, information such as burned area, ignition causes, land use, and topographic, climatic and meteorological features can be used to characterise the studied phenomenon. The space-time pattern characterisation thereby represents a powerful tool to understand the distribution and behaviour of the events and their correlation with underlying processes, for instance socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita Index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. Scan Statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are of global character and do not consider complex spatial constraints or the high variability and multivariate nature of the events.
Therefore, we propose a statistical framework that takes into account the complexities of the geographical space in which phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data with differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, specifically for the forest fire case, this research proposes two new methodologies: one for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructure, and one for predicting fire ignition susceptibility. In this regard, the main objective of this thesis was to carry out basic statistical/geospatial research with a strong applied component, to analyse and describe complex phenomena, and to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular forest fire occurrences. Thus, this thesis responds to the increasing demand for environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in five scientific journals. National and international collaborations were also established and successfully accomplished.
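Global clustering measures like those applied in the thesis can be estimated directly from a point set. A minimal sketch of a naive Ripley's K estimate in Python (no edge correction, toy coordinates; illustrative, not the thesis implementation):

```python
import math

def ripley_k(points, r, area):
    """Naive Ripley's K(r): average number of neighbours within
    distance r of an event, scaled by intensity (no edge correction)."""
    n = len(points)
    lam = n / area  # intensity: events per unit area
    pairs = 0
    for i in range(n):
        for j in range(n):
            if i != j:
                dx = points[i][0] - points[j][0]
                dy = points[i][1] - points[j][1]
                if math.hypot(dx, dy) <= r:
                    pairs += 1
    return pairs / (n * lam)

# Under complete spatial randomness K(r) ~ pi*r^2; clustered data exceeds it.
clustered = [(0.10, 0.10), (0.12, 0.11), (0.11, 0.13), (0.90, 0.90), (0.91, 0.88)]
k = ripley_k(clustered, r=0.05, area=1.0)
```

Comparing the estimate against the pi*r^2 reference is the usual way to flag clustering at scale r.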
Abstract:
Early warning systems (EWSs) rely on the capacity to forecast a dangerous event with a certain lead time by defining warning criteria on which the safety of the population will depend. Monitoring of landslides is facilitated by new technologies, decreasing prices and easier data processing. At the same time, predicting the onset of a rapid failure, or the sudden transition from slow to rapid failure and subsequent collapse, and its consequences, is challenging for scientists, who must deal with uncertainties and have limited tools to do so. Furthermore, EWSs and warning criteria are becoming more and more a subject of concern among technical experts, researchers, stakeholders and decision makers responsible for the activation, enforcement and approval of civil protection actions. EWSs also imply a sharing of responsibilities that technical staff, managers of technical offices and governing institutions are often reluctant to accept. We organized the First International Workshop on Warning Criteria for Active Slides (IWWCAS) to promote sharing and networking among members of specialized institutions and relevant EWS experts. In this paper, we summarize the event to stimulate discussion and collaboration between organizations dealing with the complex task of managing hazard and risk related to active slides.
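Warning criteria of the kind debated at the workshop often reduce to thresholds on a monitored quantity such as displacement rate. A hypothetical minimal sketch (the threshold values and smoothing window are illustrative placeholders, not recommendations from the workshop):

```python
def warning_level(rates_mm_per_day, attention=2.0, alarm=10.0):
    """Map the latest smoothed displacement rate to a warning level.
    Thresholds are illustrative placeholders, not recommended values."""
    # smooth over the last three readings to damp sensor noise
    recent = rates_mm_per_day[-3:]
    rate = sum(recent) / len(recent)
    if rate >= alarm:
        return "alarm"
    if rate >= attention:
        return "attention"
    return "normal"

# e.g. an accelerating slide crosses into the 'alarm' band
level = warning_level([1.0, 4.0, 6.0, 15.0, 21.0])
```

Real criteria additionally weigh uncertainty, sensor redundancy and the cost of false alarms, which is exactly the difficulty the abstract describes.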
Abstract:
The objective of this Master's thesis is to explore the possibilities of the mathematical program MATLAB and its Graphical User Interface Development Environment (GUIDE) tool by developing a program for analysing images of metallographic specimens, to be used in laboratory sessions of the Materials Technology course of the Bachelor's degree in Mechatronics Engineering taught at the Universitat de Vic. The areas of interest of the work are virtual instrumentation, MATLAB programming and metallographic image-analysis techniques. The report places special emphasis on the design of the interface and of the measurement procedures. The final result is a program that satisfies all the requirements set out in the initial proposal. The program's interface is clear and clean, devoting most of the space to the image under analysis. The structure and layout of the menus and controls make the program easy and intuitive to use. The program is structured so that it can easily be extended with other measurement routines, or with the automation of the existing ones. Since the program works as a measuring instrument, an entire chapter of the report is devoted to the procedure for calculating the errors that arise during its use, in order to know their order of magnitude and to be able to recalculate them if the conditions of use change. As for the programming, although MATLAB is not a classical programming environment, it does include tools for building moderately complex applications, basically oriented to graphics or images. The GUIDE tool simplifies the creation of the user interface, although it has trouble handling somewhat complex designs.
On the other hand, the code generated by GUIDE is not accessible, which makes it impossible to modify the interface manually in those cases where GUIDE has problems. Despite these small problems, MATLAB's computing power amply compensates for these deficiencies.
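A typical metallographic measurement of the kind such a program performs, e.g. the area fraction of a phase, starts from intensity thresholding. An illustrative sketch in Python (the thesis tool itself is written in MATLAB with GUIDE; the toy image and threshold are assumptions):

```python
def phase_area_fraction(image, threshold):
    """Fraction of pixels darker than `threshold`, i.e. the area
    fraction of a dark phase in a grayscale image (values 0-255)."""
    dark = sum(1 for row in image for px in row if px < threshold)
    total = sum(len(row) for row in image)
    return dark / total

# 2x4 toy image: three dark pixels out of eight
img = [[10, 200, 30, 220],
       [240, 15, 250, 230]]
frac = phase_area_fraction(img, threshold=128)  # 3/8 = 0.375
```

Error calculation, as the report stresses, would then propagate the uncertainty of the threshold choice and the pixel calibration into this fraction.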
Abstract:
This literature and theory review, carried out between 2006 and 2010 on commission from a machine-shop industry system supplier operating in Central Finland, aimed to form an overall picture of the broad subject area of production planning and control. The basic research questions concerned the so-called MPC (manufacturing planning and control) system, meaning that questions of production planning and control must always take into account the whole formed by people, the organization, technologies and processes. The task of operations management is to balance demand for and supply of the company's products so that as few resources as possible are used and needed to meet demand while taking customer requirements into account. Based on the production strategy, it must be possible to build an MPC system with which, and by developing which, production can reach the performance targets set for it with respect to cost, quality, speed, reliability and productivity development. Through a general three-level framework, the work examined, among the "traditional basic MPC-system solutions", hierarchical, planning- and computation-intensive MRP-based methods as well as JIT/Lean methods based on simplification and speed. This framework comprises: 1) demand and resource management, 2) more detailed capacity and materials management, and 3) more precise production and procurement control together with the shop-floor level of production. As "new waves and perspectives" in management and MPC-system development, the report also discussed different schools of management thought and the information systems required on the basis of the above framework. The most essential conclusion was that, in addition to MRP-based solutions, companies in the discrete manufacturing industry, especially those producing complex products to order, may also need to utilize more advanced planning and control systems.
In addition, it was noted that alongside the "traditional strategies", companies must also elevate information and communication technology strategies. It is important to understand that the perfect MPC system has not yet been invented: it remains each company's task and responsibility to form its "own truth" and to build its system on that basis.
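The MRP-based planning logic discussed above nets gross requirements against projected inventory period by period. A minimal sketch of that netting step (illustrative quantities, not from the report):

```python
def mrp_net_requirements(gross, on_hand, scheduled_receipts):
    """Period-by-period MRP netting: projected inventory absorbs demand;
    any shortfall becomes a net requirement (a planned order)."""
    net = []
    inventory = on_hand
    for demand, receipt in zip(gross, scheduled_receipts):
        inventory += receipt
        if inventory >= demand:
            inventory -= demand
            net.append(0)
        else:
            net.append(demand - inventory)
            inventory = 0
    return net

# e.g. 40 units on hand and one scheduled receipt of 20 in period 2
nets = mrp_net_requirements([30, 40, 50], on_hand=40, scheduled_receipts=[0, 20, 0])
```

Lot-sizing, lead-time offsetting and capacity checks would then be layered on top of this netting, which is where the more advanced systems mentioned in the conclusion come in.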
Abstract:
The computer is a useful tool in the teaching of upper secondary school physics and should not have a subordinate role in students' learning process. However, computers and computer-based tools are often not available when they could serve their purpose best in the ongoing teaching. Another problem is the fact that commercially available tools are not usable in the way the teacher wants. The aim of this thesis was to try out a novel teaching scenario in a complicated subject in physics, electrodynamics. The didactic engineering of the thesis consisted of developing a computer-based simulation and training material, implementing the tool in physics teaching and investigating its effectiveness in the learning process. The design-based research method, didactic engineering (Artigue, 1994), which is based on the theory of didactical situations (Brousseau, 1997), was used as a frame of reference for the design of this type of teaching product. In designing the simulation tool, a general spreadsheet program was used. The design was based on parallel, dynamic representations of the physics behind the function of an AC series circuit, in both graphical and numerical form. The tool, which provided possibilities to control the representations interactively, was hypothesized to activate the students and promote the effectiveness of their learning. An effect variable was constructed in order to measure the students' and teachers' conceptions of learning effectiveness. The empirical study was twofold. Twelve physics students, who attended a course in electrodynamics in an upper secondary school, participated in a class experiment with the computer-based tool implemented in three modes of didactical situations: practice, concept introduction and assessment. The main goal of the didactical situations was to have students solve problems and study the function of AC series circuits, taking responsibility for their own learning process.
In the teacher study, eighteen Swedish-speaking physics teachers evaluated the didactic potential of the computer-based tool and the accompanying paper-based material without using them in their physics teaching. Quantitative and qualitative data were collected using questionnaires, observations and interviews. The results of the studies showed that both the group of students and the teachers had generally positive conceptions of learning effectiveness. The students' conceptions were more positive in the practice situation than in the concept introduction situation, a setting that was more explorative. However, it turned out that the students' conceptions were also positive in the more complex assessment situation. This had not been hypothesized. A deeper analysis of data from observations and interviews showed that one of the students in each pair was more active than the other, taking more initiative and more responsibility for the student-student and student-computer interaction. These active students had strong, positive conceptions of learning effectiveness in each of the three didactical situations. The group of less active students had a weak but positive conception in the first two situations, but a negative conception in the assessment situation, thus corroborating the ad hoc hypothesis. The teacher study revealed that computers were seldom used in physics teaching and that computer programs were in short supply. The use of a computer was considered time-consuming. As long as physics teaching with computer-based tools has to take place in special computer rooms, the use of such tools will remain limited. The affordance is enhanced when the physical dimensions as well as the performance of the computer are optimised. As a consequence, the computer then becomes a real learning tool for each pair of students, smoothly integrated into the ongoing teaching in the same space where teaching normally takes place.
With more interactive support from the teacher, the computer-based parallel, dynamic representations will be efficient in promoting the learning process of the students with focus on qualitative reasoning - an often neglected part of the learning process of the students in upper secondary school physics.
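The physics represented in the simulation tool, the behaviour of an AC series circuit, follows from complex impedances. A minimal numerical sketch (in Python rather than the spreadsheet used in the thesis; component values are illustrative):

```python
import cmath
import math

def series_rlc(R, L, C, f, U):
    """Complex current in a series RLC circuit driven at frequency f (Hz)
    by a voltage of amplitude U, via Z = R + j(wL - 1/(wC))."""
    w = 2 * math.pi * f
    Z = complex(R, w * L - 1 / (w * C))
    I = U / Z
    return abs(I), math.degrees(cmath.phase(I))  # amplitude, phase shift

# At resonance f0 = 1/(2*pi*sqrt(L*C)) the reactances cancel: I = U/R, phase 0.
f0 = 1 / (2 * math.pi * math.sqrt(0.1 * 1e-6))
amp, phase = series_rlc(R=100.0, L=0.1, C=1e-6, f=f0, U=10.0)
```

Plotting amplitude and phase against frequency reproduces the kind of parallel graphical/numerical representation the tool was built around.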
Abstract:
Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology, which is to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike “traditional” biology, focuses on high-level concepts such as: network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the performed research is set in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton.
The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques and model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
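Much of the computational modelling referred to above amounts to integrating mass-action kinetics as ordinary differential equations. A generic Euler-integration sketch for a toy reaction A -> B (an illustration of the technique, not one of the case-study models):

```python
def simulate_decay(a0, k, dt, steps):
    """Euler integration of A -> B mass-action kinetics: da/dt = -k*a.
    Returns the final concentrations of A and B."""
    a, b = a0, 0.0
    for _ in range(steps):
        rate = k * a          # mass-action rate for the single reaction
        a -= rate * dt
        b += rate * dt        # everything leaving A arrives in B
    return a, b

# A decays toward zero while total mass is conserved
a, b = simulate_decay(a0=1.0, k=0.5, dt=0.01, steps=1000)
```

Model identifiability, one of the recurring problems named above, asks whether parameters like k can be uniquely recovered from measurements of such trajectories.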
Abstract:
The aim of this master's thesis is to study how an Agile method (Scrum) and open source software are utilized to produce software for a flagship product in a complex production environment. The empirical case and the artefacts used are taken from the Nokia MeeGo N9 product program and from the related software program, called Harmattan. The single research case is analysed using a qualitative method. Grounded Theory principles are utilized, first, to identify all the related concepts in the artefacts; second, these concepts are analysed and finally categorized into a core category and six supporting categories. The result is formulated as a description of how the software practices can operate in circumstances where the accountable software development teams and the surrounding context accept an open source software nature as part of the business vision and the whole organization supports the Agile methods.
Abstract:
Products developed at industries, institutes and research centers are expected to have a high level of quality and performance with minimum waste, which requires efficient and robust tools to numerically simulate stringent project conditions with great reliability. In this context, Computational Fluid Dynamics (CFD) plays an important role, and the present work shows two numerical algorithms that are used in the CFD community to solve the Euler and Navier-Stokes equations applied to typical aerospace and aeronautical problems. In particular, unstructured discretization of the spatial domain has gained special attention in the international community due to its ease in discretizing complex spatial domains. The main objective of this work is to illustrate some advantages and disadvantages of numerical algorithms using structured and unstructured spatial discretization of the flow governing equations. The numerical methods use a finite volume formulation, and the Euler and Navier-Stokes equations are applied to solve a transonic nozzle problem, a low supersonic airfoil problem and a hypersonic inlet problem. In the structured context, these problems are solved using MacCormack's implicit algorithm with Steger and Warming's flux vector splitting technique, while in the unstructured context, Jameson and Mavriplis' explicit algorithm is used. Convergence acceleration is obtained using a spatially variable time-stepping procedure.
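The spatially variable time stepping used for convergence acceleration assigns each cell its own time step from a CFL-type condition. A minimal sketch under that assumption (1-D, illustrative values; not the solver code from the work):

```python
def local_time_steps(dx, u, c, cfl=0.8):
    """Local time step per cell from a CFL condition:
    dt_i = CFL * dx_i / (|u_i| + c_i), with u the local velocity
    and c the local speed of sound."""
    return [cfl * dxi / (abs(ui) + ci) for dxi, ui, ci in zip(dx, u, c)]

# a coarse, slow cell can march with a larger step than a fine, fast one
dts = local_time_steps(dx=[0.1, 0.01], u=[50.0, 300.0], c=[340.0, 340.0])
```

For steady-state problems, letting each cell advance at its own maximum stable step accelerates convergence without changing the converged solution.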
Abstract:
TRIZ is one of the well-known tools for creative problem solving based on analytical methods. This thesis suggests an adapted version of the contradiction matrix, a powerful tool of TRIZ, together with a few principles based on the concepts of original TRIZ. It is believed that the proposed version would aid in problem solving, especially for problems encountered in chemical process industries with unit operations. In addition, this thesis should help new process engineers to recognize the importance of the various available methods for creative problem solving and to learn the TRIZ method. This work mainly provides an idea of how to modify the TRIZ-based method according to one's requirements so that it fits a particular niche area and solves problems efficiently in a creative way. In this case, the contradiction matrix developed is based on a review of common problems encountered in the chemical process industry, particularly in unit operations, and the resolutions are based on approaches used in the past to handle those issues.
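At its core, a contradiction matrix is a lookup from a pair of (improving, worsening) parameters to suggested inventive principles. A sketch of how an adapted matrix for unit operations could be represented (the parameters and entries below are illustrative placeholders, not the matrix from the thesis):

```python
# keys: (improving parameter, worsening parameter); values: principle names.
# Entries are illustrative placeholders, not the thesis matrix.
matrix = {
    ("separation efficiency", "energy use"): ["segmentation", "periodic action"],
    ("throughput", "product purity"): ["preliminary action", "intermediary"],
}

def suggest_principles(improving, worsening):
    """Return candidate inventive principles for a stated contradiction."""
    return matrix.get((improving, worsening),
                      ["no specific entry; consult the general principles"])

hints = suggest_principles("throughput", "product purity")
```

The adaptation work described in the abstract is essentially about choosing domain-relevant parameters for the rows and columns and populating the cells from past industrial resolutions.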
Improving the competitiveness of electrolytic Zinc process by chemical reaction engineering approach
Abstract:
This doctoral thesis describes the development work performed on the leach and purification sections of the electrolytic zinc plant in Kokkola to increase the efficiency of these two stages, and thus the competitiveness of the plant. Since metallic zinc is a typical bulk product, improving the competitiveness of a plant is mostly a matter of decreasing unit costs. The problems in the leaching were the low recovery of valuable metals from raw materials and the fact that the available technology offered only complicated and expensive processes to overcome this. In the purification, the main problem was the consumption of zinc powder - up to four to six times the stoichiometric demand. This reduced the capacity of the plant, as this zinc is re-circulated through the electrolysis, which is the absolute bottleneck in a zinc plant. Low selectivity gave low-grade and low-value precipitates for further processing to metallic copper, cadmium, cobalt and nickel. Knowledge of the underlying chemistry was poor, and process interruptions causing losses of zinc production were frequent. The studies on leaching comprised the kinetics of ferrite leaching and jarosite precipitation, as well as the stability of jarosite in acidic plant solutions. A breakthrough came with the finding that jarosite could precipitate under conditions where ferrite would leach satisfactorily. Based on this discovery, a one-step process for the treatment of ferrite was developed. In the plant, the new process almost doubled the recovery of zinc from ferrite in the same equipment in which the two-step jarosite process had been operated until then. In a later expansion of the plant, the investment savings were substantial compared to other available technologies. In the solution purification, the key finding was that Co, Ni and Cu formed specific arsenides in the “hot arsenic zinc dust” step. This was utilized in the development of a three-step purification stage based on fluidized bed technology in all three steps, i.e.
removal of Cu, Co and Cd. Both precipitation rates and selectivity increased, which strongly decreased the zinc powder consumption through a substantially suppressed hydrogen gas evolution. Better selectivity improved the value of the precipitates: cadmium, which caused environmental problems in the copper smelter, was reduced from the normally reported 1-3% down to 0.05%, and a cobalt cake with 15% Co was easily produced in laboratory cobalt-removal experiments. The zinc powder consumption in the plant for a solution containing Cu, Co, Ni and Cd (1000, 25, 30 and 350 mg/l, respectively) was around 1.8 g/l, i.e. only 1.4 times the stoichiometric demand - or about a 60% saving in powder consumption. Two processes for direct leaching of the concentrate under atmospheric conditions were developed, one of which was implemented in the Kokkola zinc plant. Compared to the existing pressure leach technology, savings were obtained mostly in investment. The scientific basis for the most important processes and process improvements is given in the doctoral thesis. This includes mathematical modeling and thermodynamic evaluation of the experimental results and of the hypotheses developed. Five of the processes developed in this research and development program were implemented in the plant and are still in operation. Even though these processes were developed with a focus on the plant in Kokkola, they can also be implemented at low cost in most zinc plants globally, and thus have great significance for the development of the electrolytic zinc process in general.
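The quoted 1.4 times stoichiometric demand can be checked from the given concentrations, assuming one mole of zinc dissolves per mole of divalent impurity cemented out (e.g. Zn + Cu2+ -> Zn2+ + Cu):

```python
ZN = 65.38  # molar mass of zinc, g/mol
MM = {"Cu": 63.55, "Co": 58.93, "Ni": 58.69, "Cd": 112.41}  # g/mol

def stoichiometric_zn(conc_mg_l):
    """Zinc (mg/l) needed to cement out divalent impurities,
    one mole of Zn per mole of impurity metal."""
    return sum(c / MM[m] * ZN for m, c in conc_mg_l.items())

# solution from the text: Cu 1000, Co 25, Ni 30, Cd 350 mg/l; 1.8 g/l powder used
need = stoichiometric_zn({"Cu": 1000, "Co": 25, "Ni": 30, "Cd": 350})  # ~1.29 g/l
ratio = 1800 / need  # ~1.4 times the stoichiometric demand
```

The result, roughly 1.29 g/l of stoichiometric zinc against 1.8 g/l consumed, reproduces the 1.4x figure given above.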
Abstract:
This study combines several projects related to flows in vessels with complex shapes representing different chemical apparatuses. Three major cases were studied. The first is a two-phase plate reactor with a complex structure of intersecting micro-channels engraved on one plate, which is covered by another plain plate. The second case is a tubular microreactor, consisting of two subcases. The first subcase is a multi-channel two-component commercial micromixer (slit interdigital) used to mix two liquid reagents before they enter the reactor. The second subcase is a micro-tube, in which the distribution of the heat generated by the reaction was studied. The third case is a conventionally packed column. Here, however, flow, reactions and mass transfer were not modeled. Instead, the research focused on how to describe mathematically the realistic geometry of the column packing, which is rather random and cannot be created using conventional computer-aided design or engineering (CAD/CAE) methods. Several modeling approaches were used to describe the performance of the processes in the considered vessels. Computational fluid dynamics (CFD) was used to describe the details of the flow in the plate microreactor and the micromixer. A space-averaged mass transfer model based on Fick's law was used to describe the exchange of species through the gas-liquid interface in the microreactor. This model utilized data, namely the values of the interfacial area, obtained with the corresponding CFD model. A common heat transfer model was used to find the heat distribution in the micro-tube. To generate the column packing, an additional multibody dynamic model was implemented. An auxiliary simulation was carried out to determine the position and orientation of every packing element in the column. These data were then exported into a CAD system to generate the desired geometry, which could further be used for CFD simulations.
The results demonstrated that the CFD model of the microreactor predicted the flow pattern well and agreed with experiments. The mass transfer model allowed the mass transfer coefficient to be estimated. Modeling of the second case showed that the flow in the micromixer and the heat transfer in the tube could be excluded from the larger model describing the chemical kinetics in the reactor. The results of the third case demonstrated that the auxiliary simulation could successfully generate complex random packing, not only for the column but also for other similar cases.
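The space-averaged mass transfer model mentioned above combines a film-theory driving force with an interfacial area taken from CFD. A generic sketch of such a term (film theory with illustrative values; the actual model details are in the thesis):

```python
def absorption_rate(kL, a, c_sat, c_bulk):
    """Volumetric gas-liquid absorption rate N = kL * a * (c* - c):
    kL is the liquid-side mass transfer coefficient (m/s), a the
    interfacial area per unit volume (1/m), concentrations in mol/m^3."""
    return kL * a * (c_sat - c_bulk)

# e.g. kL = 1e-4 m/s with a = 500 1/m taken from a CFD interface estimate
rate = absorption_rate(kL=1e-4, a=500.0, c_sat=1.0, c_bulk=0.2)  # mol/(m^3*s)
```

Supplying the interfacial area a from a resolved CFD interface, rather than a correlation, is the coupling the abstract describes between the two models.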