940 results for An eddy-resolving ocean model simulation


Relevance: 100.00%

Abstract:

The aim of this thesis was to implement a simulation model for studying the effects of the torque ripple produced by a permanent magnet synchronous machine on the mechanics connected to the electric motor. A further objective was to determine how such a simulation model can be built with modern simulation software. The correctness of the simulation results was verified with a test rig constructed for this work. The structure under study consisted of a shaft to which an eccentric rod was attached. A mass whose position could be varied was fixed to the eccentric rod; by changing the position of the mass, different natural frequencies were obtained for the structure. The eccentric rod was modelled as flexible using the finite element method. The mechanics were modelled in the ADAMS dynamics simulation software, into which the flexible eccentric rod was imported from the ANSYS finite element program. The mechanical model was then transferred to SIMULINK, where the electric drive representing the permanent magnet synchronous machine was also modelled. The equations of the permanent magnet synchronous machine are based on linear differential equations, to which the effect of the cogging torque is added as a disturbance signal. The electric drive model produces the torque that is fed to the mechanics modelled in ADAMS, and the angular acceleration of the rotor is fed back from the mechanical model to the electric motor model. The result is a combined simulation consisting of the electric drive and the mechanics. Based on the results, the combined simulation of electric drives and mechanics is feasible with the chosen methods, and the simulated results correspond well with the measured ones.
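The coupling described above, drive torque passed one way and the rotor state fed back the other, can be illustrated with a small sketch. Everything here is a toy stand-in under stated assumptions: the SIMULINK drive side is reduced to a first-order torque response plus a sinusoidal cogging disturbance, the ADAMS mechanics side to a one-mass rotational model, and all parameter values are hypothetical.

```python
import numpy as np

# Toy co-simulation loop: drive model and mechanics model exchange signals
# once per time step, as in the combined SIMULINK/ADAMS simulation above.
dt, t_end = 1e-4, 1.0
J, b = 0.01, 0.002           # inertia [kg m^2], viscous damping (assumed)
tau_ref, tau_c = 5.0, 0.02   # torque reference [N m], drive time constant [s] (assumed)
cog_amp, n_slots = 0.3, 24   # cogging amplitude [N m], slot count (assumed)

tau_em = omega = theta = alpha = 0.0
for _ in range(int(t_end / dt)):
    # Drive step: linear torque dynamics plus a cogging disturbance that
    # depends on the rotor angle fed back from the mechanics (the thesis
    # feeds back the angular acceleration instead).
    tau_em += dt / tau_c * (tau_ref - tau_em)
    tau_airgap = tau_em + cog_amp * np.sin(n_slots * theta)

    # Mechanics step: one-mass model standing in for the ADAMS model.
    alpha = (tau_airgap - b * omega) / J
    omega += dt * alpha
    theta += dt * omega

print(f"rotor speed after {t_end} s: {omega:.0f} rad/s")
```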

Relevance: 100.00%

Abstract:

"How old is this fingermark?" This question is raised relatively often in trials when suspects admit that they left their fingermarks at a crime scene but allege that the contact occurred at a time different from that of the crime and for legitimate reasons. However, no answer can currently be given to this question, because no fingermark dating methodology has been validated and accepted by the forensic community as a whole. Nevertheless, a review of past American cases showed that experts have in fact given courtroom testimony about the age of fingermarks, even though such testimony was mostly based on subjective and poorly documented parameters. (Fully documented American cases were relatively easy to access, which explains the origin of the examples.) Fingermark dating issues are nonetheless encountered worldwide, and the lack of consensus among the answers given underlines the need for research on the subject.

The present work therefore aims at studying the possibility of developing an objective fingermark dating method. As the questions surrounding the development of dating procedures are not new, several attempts have already been described in the literature. This research reviews these attempts critically and highlights that most of the reported methodologies still suffer from limitations preventing their use in practice. Nevertheless, some approaches based on the evolution over time of intrinsic compounds detected in fingermark residue appear promising. An exhaustive review of the literature was thus conducted to identify the compounds available in fingermark residue and the analytical techniques capable of analysing them. The work concentrated on sebaceous compounds analysed using gas chromatography coupled with mass spectrometry (GC/MS) or Fourier transform infrared spectroscopy (FTIR). GC/MS analyses were conducted to characterize the initial variability of target lipids among fresh fingermarks of the same donor (intra-variability) and between fingermarks of different donors (inter-variability). As a result, many molecules were identified and quantified for the first time in fingermark residue. Furthermore, it was determined that the intra-variability of fingermark residue was significantly lower than the inter-variability, and that both kinds of variability could be reduced using different statistical pre-treatments inspired by the drug profiling field. It was also possible to propose an objective donor classification model that groups donors into two main classes based on the initial lipid composition of their fingermarks. These classes correspond to what are currently, and rather subjectively, called "good" and "bad" donors. The potential of such a model is high for fingermark research, as it allows the selection of representative donors based on compounds of interest.

Using GC/MS and FTIR, an in-depth study was conducted on the effects of different influence factors on the initial composition and aging of target lipid molecules found in fingermark residue. It was determined that univariate and multivariate models could be built to describe the aging of target compounds (transformed into aging parameters through pre-processing), but that some influence factors affected these models more seriously than others. In fact, the donor, the substrate and the application of enhancement techniques seemed to hinder the construction of reproducible models. The other tested factors (deposition moment, pressure, temperature and illumination) also affected the residue and its aging, but models combining different values of these factors still proved robust in well-defined situations. Furthermore, test fingermarks were analysed by GC/MS in order to be dated using some of the constructed models. Correct age estimations were obtained for more than 60% of the dated test fingermarks, and for up to 100% when the storage conditions were known. These results are encouraging, but further research is needed to evaluate whether these models could be applied in uncontrolled casework conditions.

From a more fundamental perspective, a pilot study was also conducted on the use of infrared spectroscopy combined with chemical imaging (FTIR-CI) to gain information about fingermark composition and aging. More precisely, the ability of this technique to highlight aging and the effects of influence factors over large fingermark areas was investigated, and the resulting information was compared with that given by single FTIR spectra. It was concluded that while FTIR-CI is a powerful tool, its use for studying natural fingermark residue for forensic purposes has clear limits: in this study, the technique did not yield more information than traditional FTIR spectra and also suffered from major drawbacks, such as long analysis and processing times, particularly when large fingermark areas must be covered. Finally, the results obtained in this research allowed the proposal and discussion of a formal, pragmatic framework for approaching fingermark dating questions. The framework identifies the type of information the scientist is currently able to provide to investigators and/or the courts, and describes the iterative development steps that research should follow to achieve the validation of an objective fingermark dating methodology whose capacities and limits are known and properly documented.
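As a hedged illustration of the univariate aging models mentioned above, the sketch below turns a target lipid signal into an aging parameter by normalising it against a more stable compound (a pre-treatment of the kind borrowed from drug profiling), fits a decay curve on marks of known age, and inverts the fit to date a questioned mark. The compound names, values and exponential form are illustrative assumptions, not the thesis' actual model.

```python
import numpy as np

# Hypothetical GC/MS peak areas for marks of known age (illustrative data).
ages_days = np.array([0, 3, 7, 14, 21, 28])           # known storage times
squalene = np.array([9.1, 6.0, 4.1, 2.0, 1.1, 0.6])   # degrading target lipid
cholesterol = np.array([1.0, 1.0, 0.9, 1.0, 1.1, 1.0])  # stable reference

ratio = squalene / cholesterol                    # aging parameter (pre-treatment)
k, ln_r0 = np.polyfit(ages_days, np.log(ratio), 1)  # log-linear decay fit

def estimate_age(sq, chol):
    """Invert the fitted decay ratio(t) = r0 * exp(k t) for a questioned mark."""
    return (np.log(sq / chol) - ln_r0) / k

print(f"estimated age: {estimate_age(3.0, 1.0):.1f} days")
```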

Relevance: 100.00%

Abstract:

BACKGROUND: An inverse correlation between expression of aldehyde dehydrogenase 1 subfamily A2 (ALDH1A2) and gene promoter methylation has been identified as a common feature of oropharyngeal squamous cell carcinoma (OPSCC). Moreover, low ALDH1A2 expression was associated with an unfavorable prognosis for OPSCC patients; however, the causal link between reduced ALDH1A2 function and treatment failure had not been addressed. METHODS: Serial sections from tissue microarrays of patients with primary OPSCC (n = 101) were stained by immunohistochemistry for key regulators of retinoic acid (RA) signaling, including ALDH1A2. Survival with respect to these regulators was investigated by univariate Kaplan-Meier analysis and multivariate Cox proportional hazards regression models. The impact of ALDH1A2-RAR signaling on tumor-relevant processes was addressed in established tumor cell lines and in an orthotopic mouse xenograft model. RESULTS: Immunohistochemical analysis showed an improved prognosis of ALDH1A2(high) OPSCC only in the presence of CRABP2, an intracellular RA transporter. Moreover, an ALDH1A2(high)CRABP2(high) staining pattern served as an independent predictor of progression-free (HR: 0.395, p = 0.007) and overall survival (HR: 0.303, p = 0.002), suggesting a critical impact of RA metabolism and signaling on clinical outcome. Functionally, ALDH1A2 expression and activity in tumor cell lines were related to RA levels. While administration of retinoids inhibited clonogenic growth and proliferation, pharmacological inhibition of ALDH1A2-RAR signaling resulted in loss of cell-cell adhesion and a mesenchymal-like phenotype. Xenograft tumors derived from FaDu cells with stable silencing of ALDH1A2, as well as primary tumors from OPSCC patients with low ALDH1A2 expression, exhibited a mesenchymal-like phenotype characterized by vimentin expression. CONCLUSIONS: This study unravels a critical role of ALDH1A2-RAR signaling in the pathogenesis of head and neck cancer, and our data indicate that patients with ALDH1A2(low) tumors might benefit from adjuvant treatment with retinoids.
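The survival workflow named in the methods (univariate Kaplan-Meier analysis plus a multivariate Cox proportional hazards model) can be sketched with the lifelines library. The bundled Rossi recidivism dataset stands in for the OPSCC cohort here, and the binary grouping column plays the role of the ALDH1A2/CRABP2 staining status; nothing below is the study's actual data.

```python
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.datasets import load_rossi

df = load_rossi()   # columns include: week (time), arrest (event), fin, age, ...

# Univariate view: a Kaplan-Meier estimate for each level of a binary marker.
for label, grp in df.groupby("fin"):
    KaplanMeierFitter().fit(grp["week"], grp["arrest"], label=f"fin={label}")

# Multivariate view: a Cox proportional hazards model over all covariates,
# analogous to the progression-free / overall survival analysis above.
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.summary[["exp(coef)", "p"]])   # hazard ratios with p-values
```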

Relevance: 100.00%

Abstract:

In the implementation of CLIL (Content and Language Integrated Learning) in higher education, beyond studies of student proficiency and teacher availability and the development of interdisciplinary teaching materials, the current challenge is to get content teachers from a wide range of disciplines actively involved in CLIL. This paper presents the foundations of a model for a CLIL system that draws on Newtonian dynamics. It can be an interesting and plausible model in a scientific and technological university context, where CLIL has so far been implemented only to a limited extent.

Relevance: 100.00%

Abstract:

We examine the scale invariants in the preparation of highly concentrated w/o emulsions at different scales and in varying conditions. The emulsions are characterized using rheological parameters, owing to their highly elastic behavior. We first construct and validate empirical models to describe the rheological properties. These models yield a reasonable prediction of experimental data. We then build an empirical scale-up model to predict the preparation and composition conditions that have to be kept constant at each scale to prepare the same emulsion. For this purpose, three preparation scales with geometric similarity are used. The parameter N·D^α, as a function of the stirring rate N, the scale (D, impeller diameter) and the exponent α (calculated empirically from the regression of all the experiments at the three scales), is defined as the scale invariant that needs to be optimized once the dispersed phase of the emulsion, the surfactant concentration, and the dispersed phase addition time are set. As far as we know, no other study has obtained a scale-invariant factor N·D^α for the preparation of highly concentrated emulsions prepared at three different scales, covering all three scales, different addition times and surfactant concentrations. The power-law exponent obtained seems to indicate that the scale-up criterion for this system is the power input per unit volume (P/V).
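As a worked illustration of this criterion, consider transferring a stirring rate between two geometrically similar scales while holding N·D^α constant. With α = 2/3, constant N·D^α is equivalent to constant power input per unit volume for turbulent stirring (P ∝ N³D⁵ and V ∝ D³, so P/V ∝ N³D²), which matches the criterion suggested above. The exponent and the numbers below are illustrative assumptions, not the study's fitted values.

```python
# Scale-up of the stirring rate under the invariant N1 * D1**alpha = N2 * D2**alpha.
alpha = 2 / 3            # assumed exponent corresponding to constant P/V
N1, D1 = 700.0, 0.05     # lab scale: stirring rate [rpm], impeller diameter [m]
D2 = 0.15                # pilot-scale impeller diameter [m]

N2 = N1 * (D1 / D2) ** alpha   # solve the invariant for the new stirring rate
print(f"pilot-scale stirring rate: {N2:.0f} rpm")
```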

Relevance: 100.00%

Abstract:

APROS (Advanced Process Simulation Environment) is a computer simulation program developed to simulate thermal hydraulic processes in nuclear and conventional power plants. Earlier research at VTT Technical Research Centre of Finland had found the current version of APROS to produce inaccurate simulation results for a certain case of loop seal clearing. The objective of this Master's thesis is to find and implement an alternative method for calculating the rate of stratification in APROS, which was found to be the reason for the inaccuracies. A brief literature study was performed, and a promising candidate for the new method was found. The new method was implemented in APROS and tested against experiments and simulations from two test facilities, as well as against the current version of APROS. Simulation results with the new version were partially conflicting: in some cases the new method was more accurate than the current one, in others the current method was better. Overall, the new method can be assessed as an improvement.

Relevance: 100.00%

Abstract:

ABSTRACT Given the need for systems to better control the broiler production environment, we performed an experiment with broilers from 1 to 21 days of age, which were submitted to different air temperature intensities and durations in conditioned wind tunnels, and the results were used for the validation of a fuzzy model. The model was developed using as input variables the duration of heat stress (days) and the dry bulb air temperature (°C), and as output variables feed intake (g), weight gain (g) and feed conversion (g·g-1). The inference method used was Mamdani, 20 rules were prepared, and the defuzzification technique used was the Center of Gravity. A satisfactory efficiency in determining productive responses is evidenced by the results obtained in the model simulation when compared with the experimental data: the R2 values calculated for feed intake, weight gain and feed conversion were 0.998, 0.981 and 0.980, respectively.
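A minimal Mamdani sketch of the model type described above: triangular membership functions, min implication, max aggregation and centroid (Center of Gravity) defuzzification, shown for a single temperature input and the feed intake output. The membership ranges and the two toy rules are illustrative assumptions; the actual model used twenty rules over both input variables and three outputs.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

temp = 27.0                       # dry bulb air temperature input [degC]
y = np.linspace(500, 1500, 501)   # feed intake universe [g] (assumed range)

# Rule 1: IF temperature is comfortable THEN feed intake is high.
# Rule 2: IF temperature is hot         THEN feed intake is low.
w1 = tri(temp, 16, 22, 28)                        # firing strength of rule 1
w2 = tri(temp, 26, 34, 42)                        # firing strength of rule 2
out1 = np.minimum(w1, tri(y, 1000, 1300, 1500))   # min implication
out2 = np.minimum(w2, tri(y, 500, 700, 1000))
agg = np.maximum(out1, out2)                      # max aggregation

intake = float((agg * y).sum() / agg.sum())       # centroid defuzzification
print(f"predicted feed intake at {temp} degC: {intake:.0f} g")
```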

Relevance: 100.00%

Abstract:

The focus of the present work was on 10- to 12-year-old elementary school students’ conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine whether it would be more beneficial to combine simulation and laboratory activities in science teaching than to contrast them. It was argued that the status quo, in which laboratories and simulations are seen as alternative or competing methods in science teaching, is hardly an optimal solution for promoting students’ learning and understanding in various science domains, and it was hypothesized that it would make more sense, and be more productive, to combine laboratories and simulations. Several explanations and examples were provided to back up the hypothesis. In order to test whether learning with the combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and simulation-laboratory combination (real and virtual circuits used simultaneously). In order to measure and compare how these environments affected students’ conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments are presented in four empirical studies; three of them focus on learning outcomes between the conditions and one on learning processes. Study I analyzed the learning outcomes from Experiment I. The aim of the study was to investigate whether it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students’ conceptual understanding more than studying circuits in the simulation or laboratory environment alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students’ conceptual understanding remained at an elementary level after the intervention. Study II analyzed the learning outcomes from Experiment II. The aim of the study was to investigate if and how learning outcomes in the simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions.
The results showed that when the students were working with the simulation alone, they gained a significantly greater amount of subject knowledge when they received metacognitive support for the discovery process (explicit instruction; SE) than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE). A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment. In the combination environment, explicit instruction (CE) did not seem to elicit much additional gain in students’ understanding of electric circuits compared to implicit instruction (CI); instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III analyzed, from video data, the learning processes of the 50 students that participated in Experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find possible explanations for the success of the combination condition in Experiments I and II. The video data provided clear evidence of the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other; that is, one representation helped students to interpret and understand the outcomes they received from the other representation. However, there were also instances in which analogical encoding took place, that is, situations in which slightly discrepant results between the representations ‘forced’ students to focus on those features that could be generalised across the two representations. No statistical differences were found in the amount of experienced cognitive conflicts and self-explanations between the simulation and combination conditions, though for self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the amount of self-explanations. Overall, the amount of cognitive conflicts and self-explanations was very low. The aim of Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of Experiments I and II, and the secondary aim was to explore the relationship between the learning environments and students’ prior domain knowledge (low and high) in the experiments. The aggregated results of Experiments I and II showed that, on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them also scored above the average of the simulation environment. Seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that overall, students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge; that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge.
However, a more detailed inspection of the results showed that there were considerable differences between the experiments regarding how students with low and high prior knowledge benefited from the combination: in Experiment I, especially students with low prior knowledge benefited from the combination as compared to those students that used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between the simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulations along with laboratory activities to promote students’ conceptual understanding of electricity. It can be concluded that when teaching students about electricity, they can gain a better understanding when they have an opportunity to use the simulation and the real circuits in parallel than if they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with explicit instruction. The outcomes of the empirical studies can be considered the first unambiguous evidence of the (additional) benefits of combining laboratory and simulation activities in science education, as compared to learning with laboratories or simulations alone.
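The matched-trio assignment used in Study I can be sketched as follows: rank students by pre-test score, let each consecutive triple form a matched trio, and randomly spread the trio's members over the three conditions. The scores and student labels here are illustrative.

```python
import random
from collections import Counter

random.seed(1)
pretest = {f"s{i}": random.randint(0, 30) for i in range(66)}  # 66 students
conditions = ["laboratory", "simulation", "combination"]

ranked = sorted(pretest, key=pretest.get, reverse=True)  # rank by pre-test
assignment = {}
for i in range(0, len(ranked), 3):
    trio = ranked[i:i + 3]           # three students with adjacent scores
    random.shuffle(trio)             # random order within the matched trio
    for student, cond in zip(trio, conditions):
        assignment[student] = cond

print(Counter(assignment.values()))  # 22 students per condition
```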

Relevance: 100.00%

Abstract:

Over the past decade, organizations worldwide have begun to widely adopt agile software development practices, which offer greater flexibility in the face of frequently changing business requirements, better cost effectiveness through the minimization of waste, faster time-to-market, and closer collaboration between business and IT. At the same time, IT services continue to be increasingly outsourced to third parties, providing organizations with the ability to focus on their core capabilities as well as to take advantage of better demand scalability, access to specialized skills, and cost benefits. An output-based pricing model, in which customers pay directly for the functionality that was delivered rather than for the effort spent, is quickly becoming a new trend in IT outsourcing, allowing the risk to be transferred away from the customer while offering much better incentives for the supplier to optimize processes and improve efficiency, and consequently producing a true win-win outcome. Despite the widespread adoption of both agile practices and output-based outsourcing, there is little formal research available on how the two can be effectively combined in practice. Moreover, little practical guidance exists on how companies can measure the performance of agile projects delivered in an output-based outsourced environment. This research attempted to shed light on this issue by developing a practical project monitoring framework which may be readily applied by organizations to monitor the performance of agile projects in an output-based outsourcing context, thus taking advantage of the combined benefits of such an arrangement. Adapted from the action research approach, this research was divided into two cycles, each consisting of the Identification, Analysis, Verification, and Conclusion phases. During Cycle 1, a list of six Key Performance Indicators (KPIs) was proposed and accepted by the professionals in the studied multinational organization; this list formed the core of the proposed framework and answered the first research sub-question of what needs to be measured. In Cycle 2, a more in-depth analysis was provided for each of the suggested KPIs, including the techniques for capturing, calculating, and evaluating the information provided by each KPI. In the course of Cycle 2, the second research sub-question was answered, clarifying how the data for each KPI needed to be measured, interpreted, and acted upon. Consequently, after two incremental research cycles, the primary research question was answered, describing the practical framework that may be used for monitoring the performance of agile IT projects delivered in an output-based outsourcing context. This framework was evaluated by professionals within the context of the studied organization and received positive feedback across all four evaluation criteria set forth in this research: the low overhead of data collection, the high value of the provided information, the ease of understanding of the metric dashboard, and the high generalizability of the proposed framework.

Relevance: 100.00%

Abstract:

In this paper we present an algorithm for the numerical simulation of cavitation in the hydrodynamic lubrication of journal bearings. Although this physical process is usually modelled as a free boundary problem, we adopt the equivalent variational inequality formulation. We propose a two-level iterative algorithm, where the outer iteration is associated with the penalty method, used to transform the variational inequality into a variational equation, and the inner iteration is associated with the conjugate gradient method, used to solve the linear system generated by applying the finite element method to the variational equation. The inner part was implemented using the element-by-element strategy, which is easily parallelized. We analyse the behaviour of two physical parameters and discuss some numerical results, as well as results related to the performance of a parallel implementation of the algorithm.
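A hedged sketch of the two-level iteration, on a one-dimensional obstacle problem standing in for the lubrication inequality: the outer loop applies the penalty that turns the variational inequality into a variational equation, and every inner solve uses the conjugate gradient method (here SciPy's solver, rather than a parallel element-by-element version). The mesh, load and penalty schedule are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)                 # interior nodes of (0, 1)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2  # -d2/dx2, SPD
b = 100.0 * np.sin(2.0 * np.pi * x)            # load, negative on half the domain

p = np.zeros(n)
for eps in (1e-2, 1e-4, 1e-6):                 # outer loop: tighten the penalty
    for _ in range(30):                        # fixed-point update of active set
        active = (p < 0.0).astype(float)       # nodes violating p >= 0
        M = A + diags(active / eps)            # penalised operator (still SPD)
        p_new, info = cg(M, b, x0=p)           # inner loop: CG solve
        converged = np.linalg.norm(p_new - p) < 1e-10
        p = p_new
        if converged:
            break

# The constrained region is pushed to ~0 (up to the penalty error).
print(f"pressurised fraction: {(p > 1e-9).mean():.2f}, min(p) = {p.min():.1e}")
```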

Relevance: 100.00%

Abstract:

This study combines several projects related to flows in vessels with complex shapes representing different chemical apparatuses. Three major cases were studied. The first is a two-phase plate reactor with a complex structure of intersecting microchannels engraved on one plate, which is covered by another, plain plate. The second case is a tubular microreactor, consisting of two subcases: the first is a multi-channel two-component commercial micromixer (slit interdigital) used to mix two liquid reagents before they enter the reactor, and the second is a micro-tube, in which the distribution of the heat generated by the reaction was studied. The third case is a conventionally packed column. Here, however, flow, reactions and mass transfer were not modeled; instead, the research focused on how to describe mathematically the realistic geometry of the column packing, which is rather random and cannot be created using conventional computer-aided design or engineering (CAD/CAE) methods. Several modeling approaches were used to describe the performance of the processes in the considered vessels. Computational fluid dynamics (CFD) was used to describe the details of the flow in the plate microreactor and the micromixer. A space-averaged mass transfer model based on Fick’s law was used to describe the exchange of species through the gas-liquid interface in the microreactor; this model utilized data, namely the values of the interfacial area, obtained from the corresponding CFD model. A standard heat transfer model was used to find the heat distribution in the micro-tube. To generate the column packing, an additional multibody dynamics model was implemented: an auxiliary simulation was carried out to determine the position and orientation of every packing element in the column, and these data were then exported into a CAD system to generate the desired geometry, which could further be used for CFD simulations. The results demonstrated that the CFD model of the microreactor predicted the flow pattern well and agreed with experiments. The mass transfer model made it possible to estimate the mass transfer coefficient. Modeling of the second case showed that the flow in the micromixer and the heat transfer in the tube could be excluded from the larger model that describes the chemical kinetics in the reactor. The results of the third case demonstrated that the auxiliary simulation could successfully generate complex random packings, not only for the column but also for other similar cases.
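The space-averaged, Fick's-law mass transfer model mentioned above can be reduced to a single ordinary differential equation for the bulk liquid concentration, with the specific interfacial area supplied by the CFD model. The sketch below integrates it explicitly; every parameter value is an illustrative assumption.

```python
# dc/dt = kL * a * (c_sat - c): gas dissolving across the gas-liquid interface.
kL = 2e-4        # liquid-side mass transfer coefficient [m/s] (assumed)
a = 5000.0       # specific interfacial area [1/m], taken from CFD (assumed)
c_sat = 1.2      # saturation concentration [mol/m^3] (assumed)

dt, t_end = 1e-3, 5.0
c = 0.0                                # initial bulk concentration [mol/m^3]
for _ in range(int(t_end / dt)):
    c += dt * kL * a * (c_sat - c)     # explicit Euler step

print(f"bulk concentration after {t_end} s: {c:.3f} mol/m^3 (approaches {c_sat})")
```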

Relevance: 100.00%

Abstract:

Rolling element bearings are essential components of rotating machinery. The spherical roller bearing (SRB) is one variant seeing increasing use, because it is self-aligning and can support high loads. It is becoming increasingly important to understand how the SRB responds dynamically under a variety of conditions. This doctoral dissertation introduces a computationally efficient, three-degree-of-freedom SRB model developed to predict the transient dynamic behavior of a rotor-SRB system. In the model, bearing forces and deflections are calculated as a function of contact deformation and bearing geometry parameters according to nonlinear Hertzian contact theory. The results reveal how some of the more important parameters, such as diametral clearance, the number of rollers, and the osculation number, influence ultimate bearing performance. Distributed defects, such as waviness of the inner and outer rings, and localized defects, such as inner and outer ring defects, are taken into consideration in the proposed model. Simulation results were verified against the analytical formula for spherical roller bearing radial deflection and against commercial bearing analysis software. Following model verification, a numerical simulation was carried out for a full rotor-bearing system to demonstrate the application of the newly developed SRB model in a typical real-world analysis. The accuracy of the model was verified by comparing measured and predicted behaviors for equivalent systems.
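The nonlinear Hertzian load summation underlying bearing models of this type can be sketched as follows: each roller's compression follows from the shaft-centre displacement and the diametral clearance, and every compressed roller contributes a contact force K·δ^(10/9), the classical line-contact exponent for roller bearings. The stiffness coefficient, clearance and displacements below are illustrative assumptions, and effects from the full model such as osculation, ring waviness and defects are omitted.

```python
import numpy as np

K = 5.0e9          # load-deflection coefficient [N/m^(10/9)] (assumed)
n_rollers = 20
clearance = 40e-6  # diametral clearance [m] (assumed)
x, y = 30e-6, 10e-6  # shaft-centre displacement [m] (assumed)

Fx = Fy = 0.0
for j in range(n_rollers):
    psi = 2 * np.pi * j / n_rollers                 # roller azimuth angle
    delta = x * np.cos(psi) + y * np.sin(psi) - clearance / 2
    if delta > 0:                                   # only compressed rollers load
        f = K * delta ** (10 / 9)                   # Hertzian line-contact force
        Fx += f * np.cos(psi)
        Fy += f * np.sin(psi)

print(f"bearing reaction: Fx = {Fx:.0f} N, Fy = {Fy:.0f} N")
```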

Relevance: 100.00%

Abstract:

The atrioventricular (AV) node is permanently damaged in approximately 3% of congenital heart surgery operations, requiring implantation of a permanent pacemaker. Improvements in pacemaker design and in alternative treatment modalities require an effective in vivo model of complete heart block (CHB) before testing can be performed in humans. Such a model should enable accurate, reliable, and detectable induction of the surgical pathology. Building on our laboratory’s efforts to develop a tissue engineering therapy for CHB, we describe here an improved in vivo model for inducing chronic AV block. The method employs a right thoracotomy in the adult rabbit, from which the right atrial appendage may be retracted to expose an access channel to the AV node. A novel injection device was designed that both physically restricts needle depth and provides electrical information via an electrocardiogram (ECG) interface. This combination of features gives the researcher real-time guidance for confirming contact with the AV node, and documents its ablation upon formalin injection. While acute AV block could be induced in all animals tested, those with ECG guidance were more likely to maintain chronic heart block for more than 12 h. Our model enables the researcher to reproduce both CHB and the associated peripheral fibrosis that would be present in an open congenital heart surgery, and which would inevitably affect the design and utility of a tissue-engineered AV node replacement.

Relevance: 100.00%

Abstract:

In this Master’s thesis, a saturated-steam-producing Biograte boiler plant delivered by KPA Unicon is modelled with the Apros simulation software. The scope of the work covers the water-steam circuit from the feedwater tank to the steam leaving for the process. The flue gas side is modelled from the fuel and combustion air feed to the stack, but flue gas cleaning is excluded from the simulation model. The thesis gives a general overview of biofuels, boiler plants and furnace designs, and describes the simulated boiler plant and its control system in more detail. Simulation and its possibilities are discussed in general, after which the constructed simulation model is presented. The simulation results are compared with the boiler’s design values, and the changes in the most important process variables are studied in load-change situations. Finally, the results are summarized and plans for further work are presented. The simulated boiler plant produces the desired amount of saturated steam at the correct pressure and temperature. The boiler’s process variables correspond fairly well to the design values, and the simulation model also operates stably in load-change situations. The biggest compromises and simplifications were made in the modelling of the furnace and the fuel feed; by developing these areas, the accuracy of the simulation could be improved further. In the future, the simulation model is intended to be extended to cover the entire secondary side of the plant as well. Based on the results, the simulation can be considered a successful model of the Biograte boiler plant.

Relevance: 100.00%

Abstract:

The importance of industrial maintenance has been emphasized during the last decades; it is no longer a mere cost item but one of the mainstays of business. Market conditions have worsened lately, investments in production assets have decreased, and at the same time competition has shifted from taking place between companies to taking place between networks. Companies have focused on their core functions and outsourced support services such as maintenance, above all to decrease costs. This phenomenon has led to the increasing formation of business networks and, as a result, to a growing need for new kinds of tools for managing these networks effectively. Maintenance costs are usually a notable part of the life-cycle costs of an item, and it is important to be able to plan future maintenance operations for the strategic period of the company or for the whole life-cycle of the item. This thesis introduces an item-level life-cycle model (LCM) for industrial maintenance networks. The term item is used as a common term for a part, a component, a piece of equipment, and so on. The constructed LCM is a working tool for a maintenance network consisting of customer companies that buy maintenance services and various supplier companies. Each network member can input its own cost and profit data related to the maintenance services of one item; the model then calculates the net present values of maintenance costs and profits and presents them from the points of view of all the network members. The thesis shows that previous LCMs for calculating maintenance costs have often been very case-specific, suitable only for the item in question, and constructed for the needs of a single company without the network perspective. The developed LCM is a proper tool for decision making on maintenance services in a network environment: it enables analysing the past and building scenarios for the future, and it offers choices between alternative maintenance operations. The LCM is also suitable for small companies building active networks to offer outsourcing services to large companies. The research also introduces a five-step process for designing a life-cycle costing model in a network environment; this process defines the model components and structure through iteration and the exploitation of user feedback, and the same method can be followed to develop other models. The thesis further contributes to the literature on the value and value elements of maintenance services: it examines the value of maintenance services from the perspectives of different maintenance network members and presents established value element lists for the customer and the service provider. These lists make value visible in the maintenance operations of a networked business. Combined with value thinking, the LCM promotes the notion of maintenance shifting from a "cost maker" towards a "value creator".
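The core calculation of the model, discounting each member's maintenance-related cash flows for one item to net present values, can be sketched as follows. The member names, discount rate and cash flows are illustrative.

```python
# Net present value of yearly cash flows: NPV = sum(CF_t / (1 + r)**t).
def npv(rate, cash_flows):
    """Discount yearly cash flows (years 1..n) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

discount_rate = 0.08  # assumed cost of capital
network = {
    # member: (yearly maintenance costs, yearly profits related to the item)
    "customer":         ([-12000, -9000, -15000], [30000, 30000, 30000]),
    "service_provider": ([-5000, -5000, -5000],   [8000, 8000, 8000]),
}

# Present the discounted costs and profits from each member's point of view.
for member, (costs, profits) in network.items():
    print(f"{member}: NPV costs {npv(discount_rate, costs):,.0f}, "
          f"NPV profits {npv(discount_rate, profits):,.0f}")
```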