767 results for real world learning


Relevance: 80.00%

Publisher:

Abstract:

The aim of this work was to find an accurate method for calculating the risk of crankshaft damage by comparing different calculation methods. Finally, a multibody system simulation was carried out using elastic models of the real structures. The simulation software used was Excite, developed by AVL.

Relevance: 80.00%

Publisher:

Abstract:

Engineered nanomaterials (ENMs) exhibit special physicochemical properties and thus are finding their way into an increasing number of industries, enabling products with improved properties. Their increased use brings a greater likelihood of exposure to the nanoparticles (NPs) that could be released during the life cycle of nano-enabled products. The field of nanotoxicology has emerged as a consequence of the development of these novel materials, and it has gained ever more attention due to the urgent need to gather information on exposure to them and to understand the potential hazards they engender. However, current studies on nanotoxicity tend to focus on pristine ENMs, and they use these toxicity results to generalize risk assessments on human exposure to NPs. ENMs released into the environment can interact with their surroundings, change characteristics and exhibit toxicity effects distinct from those of pristine ENMs. Furthermore, NPs' large surface areas provide extra-large potential interfaces, thus promoting more significant interactions between NPs and other co-existing species. In such processes, other species can attach to a NP's surface and modify its surface functionality, in addition to the toxicity it normally exhibits. One particular occupational health scenario involves NPs and low-volatile organic compounds (LVOC), a common type of pollutant existing around many potential sources of NPs. LVOC can coat a NP's surface and then dominate its toxicity. One important mechanism in nanotoxicology is the creation of reactive oxygen species (ROS) on a NP's surface; LVOC can modify the production of these ROS. In summary, nanotoxicity research should not be limited to the toxicity of pristine NPs, nor use their toxicity to evaluate the health effects of exposure to environmental NPs. Instead, the interactions which NPs have with other environmental species should also be considered and researched.
The potential health effects of exposure to NPs should be derived from these real-world NPs, with characteristics modified by the environment, and their distinct toxicity. Failure to suitably address toxicity results could lead to an inappropriate treatment of nano-release, affect the environment and public health and put a blemish on the development of sustainable nanotechnologies as a whole. The main objective of this thesis is to demonstrate a process for coating NP surfaces with LVOC using a well-controlled laboratory design and, with regard to these NPs' capacity to generate ROS, explore the consequences of changing particle toxicity. The dynamic coating system developed yielded stable and replicable coating performance, simulating an important realistic scenario. Clear changes in the size distribution of airborne NPs were observed using a scanning mobility particle sizer, were confirmed using both liquid nanotracking analyses and transmission electron microscopy (TEM) imaging, and were verified to originate from the LVOC coating. Coating thicknesses corresponded to the amount of coating material used and were controlled using the parameters of the LVOC generator. The capacity of pristine silver NPs (Ag NPs) to generate ROS was reduced when they were given a passive coating of inert paraffin: this coating blocked the reactive zones on the particle surfaces. In contrast, a coating of active reduced anthraquinone contributed to redox reactions and generated ROS itself, despite the fact that ROS generation due to oxidation by the Ag NPs themselves was quenched. Further objectives of this thesis included the development of ROS methodology and the analysis of ROS case studies. Since the capacity of NPs to create ROS is an important effect in nanotoxicity, we attempted to refine and standardize the use of 2',7'-dichlorodihydrofluorescein (DCFH) as a chemical tailored for the characterization of NPs' capacity for ROS generation.
Previous studies had reported a wide variety of results, due to a number of insufficiently well-controlled factors. We therefore cross-compared chemicals and concentrations, explored ways of dispersing NP samples in liquid solutions, identified sources of contradictions in the literature and investigated ways of reducing artifactual results. The most robust results were obtained by sonicating an optimal sample of NPs in a DCFH-HRP solution made of 5 µM DCFH and 0.5 unit/ml horseradish peroxidase (HRP). Our findings explained how the major reasons for previously conflicting results were the different experimental approaches used and the potential artifacts appearing when using high sample concentrations. Applying our advanced DCFH protocol together with other physicochemical characterizations and biological analyses, we conducted several case studies characterizing aerosols and NP samples. Exposure to aged brake wear dust engenders a risk of potential deleterious health effects in occupational scenarios. We performed microscopy and elemental analyses, as well as ROS measurements, with acellular and cellular DCFH assays. TEM images revealed the samples to be heterogeneous mixtures with few particles in the nano-scale. Metallic and non-metallic elements were identified, primarily iron, carbon and oxygen. Moderate amounts of ROS were detected in the cell-free fluorescent tests; however, exposed cells were not dramatically activated. The reason the aged brake wear samples caused less oxidative stress than fresh brake wear samples may be their larger size, and thus smaller relative reactive surface area, in addition to their highly aged state due to oxidation. Other case studies involving welding fumes and differently charged NPs confirmed the performance of our DCFH assay and found ROS generation linked to varying characteristics, especially the surface functionality of the samples.
Les nanomatériaux manufacturés (ENM) présentent des propriétés physico-chimiques particulières et ont donc trouvé des applications dans un nombre croissant de secteurs, permettant de réaliser des produits ayant des propriétés améliorées. Leur utilisation accrue engendre un plus grand risque pour les êtres humains d'être exposés à des nanoparticules (NP) qui sont libérées au long de leur cycle de vie. En conséquence, la nanotoxicologie a émergé et gagné de plus en plus d'attention, en raison de la nécessité de recueillir les renseignements nécessaires sur l'exposition et les risques associés à ces nouveaux matériaux. Cependant, les études actuelles sur la nanotoxicité ont tendance à se concentrer sur les ENM et à utiliser ces résultats toxicologiques pour généraliser l'évaluation des risques sur l'exposition humaine aux NP. Les ENM libérés dans l'environnement peuvent interagir avec l'environnement, changeant leurs caractéristiques, et montrer des effets de toxicité distincts par rapport aux ENM originaux. Par ailleurs, la grande surface des NP fournit une grande interface avec l'extérieur, favorisant les interactions entre les NP et les autres espèces présentes. Dans ce processus, d'autres espèces peuvent s'attacher à la surface des NP et modifier leur fonctionnalité de surface ainsi que leur toxicité. Un scénario d'exposition professionnel particulier implique à la fois des NP et des composés organiques peu volatils (LVOC), un type commun de polluant associé à de nombreuses sources de NP. Les LVOC peuvent se déposer sur la surface des NP et donc dominer la toxicité globale de la particule. Un mécanisme important en nanotoxicologie est la création d'espèces réactives d'oxygène (ROS) sur la surface des particules, et les LVOC peuvent modifier cette production de ROS.
En résumé, la recherche en nanotoxicité ne devrait pas être limitée à la toxicité des ENM originaux, ni utiliser leur toxicité pour évaluer les effets sur la santé de l'exposition aux NP de l'environnement; les interactions que les NP ont avec d'autres espèces environnementales doivent plutôt être envisagées et étudiées. Les effets possibles sur la santé de l'exposition aux NP devraient être dérivés de ces NP aux caractéristiques modifiées et à la toxicité distincte. L'utilisation de résultats de toxicité inappropriés peut conduire à une mauvaise prise en charge de l'exposition aux NP, détériorer l'environnement et la santé publique et entraver le développement durable des industries de la nanotechnologie dans leur ensemble. L'objectif principal de cette thèse est de démontrer le processus de déposition des LVOC sur la surface des NP en utilisant un environnement de laboratoire bien contrôlé et d'explorer les conséquences du changement de toxicité des particules sur leur capacité à générer des ROS. Le système de déposition dynamique développé a abouti à des performances de revêtement stables et reproductibles, en simulant des scénarios réalistes importants. Des changements clairs dans la distribution de taille des NP en suspension ont été observés par spectrométrie de mobilité électrique des particules, confirmés à la fois par la méthode dite liquid nanotracking analysis et par microscopie électronique à transmission (MET), et vérifiés comme provenant du revêtement par LVOC. La correspondance entre l'épaisseur de revêtement et la quantité de matériau de revêtement disponible a été démontrée et a pu être contrôlée par les paramètres du générateur de LVOC. La génération de ROS due aux NP d'argent (Ag NP) a été diminuée par un revêtement passif de paraffine inerte bloquant les zones réactives à la surface des particules.
Au contraire, le revêtement actif d'anthraquinone réduite a contribué aux réactions redox et a généré des ROS, même lorsque la production de ROS par oxydation des Ag NP avec l'oxygène a été désactivée. Les objectifs associés comprennent le développement de la méthodologie et des études de cas spécifiques aux ROS. Étant donné que la capacité des NP à générer des ROS contribue grandement à la nanotoxicité, nous avons tenté de définir un standard pour l'utilisation de la 2',7'-dichlorodihydrofluorescéine (DCFH) adaptée pour caractériser la génération de ROS par les NP. Des études antérieures ont rapporté une grande variété de résultats différents, ce qui était dû à un contrôle insuffisant de plusieurs facteurs. Nous avons donc comparé les produits chimiques et les concentrations utilisés, exploré les moyens de dispersion des échantillons de NP en solution liquide, investigué les sources de conflits identifiées dans la littérature et étudié les moyens de réduire les résultats artificiels. De très bons résultats ont été obtenus par sonication d'une quantité optimale d'échantillons de NP en solution dans du DCFH-HRP, fait de 5 µM de DCFH et de 0,5 unité/ml de peroxydase de raifort (HRP). Notre étude a démontré que les principales raisons causant les conflits entre les études précédemment conduites dans la littérature étaient dues aux différentes approches expérimentales et à des artefacts potentiels dus à des concentrations élevées de NP dans les échantillons. Utilisant notre protocole DCFH avancé avec d'autres caractérisations physico-chimiques et analyses biologiques, nous avons mené plusieurs études de cas, caractérisant les échantillons d'aérosols et les NP. La vieille poussière de frein en particulier présente un risque élevé d'exposition dans les scénarios professionnels, avec des effets potentiels néfastes sur la santé. Nous avons effectué des analyses d'éléments et de microscopie ainsi que la mesure de ROS avec DCFH cellulaire et acellulaire.
Les résultats de MET ont révélé que les échantillons se présentent sous la forme de mélanges de particules hétérogènes, dont une faible proportion se trouve à l'échelle nano. Des éléments métalliques et non métalliques ont été identifiés, principalement du fer, du carbone et de l'oxygène. Une quantité modérée de ROS a été détectée dans le test fluorescent acellulaire; cependant, les cellules exposées n'ont pas été très fortement activées. La raison pour laquelle les échantillons de vieille poussière de frein causent un stress oxydatif inférieur par rapport à la poussière de frein nouvelle peut être leur plus grande taille, engendrant une surface réactive proportionnellement plus petite, ainsi que leur état d'oxydation avancé diminuant la réactivité. D'autres études de cas sur les fumées de soudage et sur des NP différemment chargées ont confirmé la performance de notre test DCFH et ont trouvé que la génération de ROS est liée à certaines caractéristiques, notamment la fonctionnalité de surface des échantillons.
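The abstract's observation that coating thickness tracked the amount of coating material can be illustrated with a simple volume-balance sketch. This is not the thesis's actual calculation; the function name and the numbers below are hypothetical, and a uniform spherical shell around a spherical core is assumed.

```python
import math

def coated_diameter_nm(core_d_nm, coat_mass_fg, coat_density_g_cm3):
    """Diameter of a spherical particle after adding a uniform coating shell.

    core_d_nm: core particle diameter in nm
    coat_mass_fg: coating mass per particle in femtograms (hypothetical value)
    coat_density_g_cm3: density of the coating material in g/cm^3
    """
    core_vol_nm3 = math.pi / 6 * core_d_nm ** 3
    # 1 fg = 1e-15 g and 1 nm^3 = 1e-21 cm^3, so vol[nm^3] = m[fg] / rho * 1e6
    coat_vol_nm3 = coat_mass_fg / coat_density_g_cm3 * 1e6
    total_vol_nm3 = core_vol_nm3 + coat_vol_nm3
    return (6 * total_vol_nm3 / math.pi) ** (1 / 3)
```

More coating mass per particle yields a monotonically larger mobility diameter, which is the qualitative trend the SMPS measurements in the abstract report.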

Relevance: 80.00%

Publisher:

Abstract:

Markets, in the real world, are not efficient zero-sum games in which the hypotheses of the CAPM are fulfilled. It is therefore easy to conclude that the market portfolio is not located on Markowitz's efficient frontier, and that passive investments (and indexing) are not optimal but biased. In this paper, we define and analyze the biases suffered by passive investors: the sample, construction, efficiency and active biases and tracking error are presented. We propose Minimum Risk Indices (MRI) as an alternative for dealing with market-index biases and for providing investors with portfolios closer to the efficient frontier, that is, more optimal investment possibilities. MRI (using a parametric Value-at-Risk minimization approach) are calculated for three stock markets, achieving interesting results. Our indices are less risky and more profitable than current market indices in the Argentinean and Spanish markets, thus challenging the Efficient Market Hypothesis. Two innovations must be outlined: an error dimension has been included in the backtesting, and the Sharpe ratio has been used to select the "best" MRI.
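A minimal sketch of the parametric Value-at-Risk minimization the paper mentions, reduced to the simplest two-asset, long-only case (function names and numbers are illustrative; the paper's optimization over full index constituents is more involved). Under a Gaussian assumption, the 95% VaR of a portfolio is z·σ_p − μ_p with z ≈ 1.645:

```python
import math

def parametric_var(w, mu, sigma, rho, z=1.645):
    """95% Gaussian VaR of a two-asset portfolio with weight w on asset 1."""
    m = w * mu[0] + (1 - w) * mu[1]
    v = ((w * sigma[0]) ** 2 + ((1 - w) * sigma[1]) ** 2
         + 2 * w * (1 - w) * rho * sigma[0] * sigma[1])
    return z * math.sqrt(v) - m  # loss at the 5% quantile

def min_var_weight(mu, sigma, rho, steps=1000):
    """Grid-search the long-only weight minimising parametric VaR."""
    return min((i / steps for i in range(steps + 1)),
               key=lambda w: parametric_var(w, mu, sigma, rho))
```

With identical assets the VaR-minimal index splits weight evenly; raising one asset's expected return tilts the minimum-risk index toward it, which is the mechanism by which an MRI can sit closer to the efficient frontier than a capitalization-weighted index.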

Relevance: 80.00%

Publisher:

Abstract:

BACKGROUND: Recently, it has been suggested that the type of stent used in primary percutaneous coronary interventions (pPCI) might impact the outcomes of patients with acute myocardial infarction (AMI). Indeed, drug-eluting stents (DES) reduce neointimal hyperplasia compared to bare-metal stents (BMS). Moreover, later-generation DES, due to their biocompatible polymer coatings and stent design, allow for greater deliverability, improved endothelial healing and therefore less restenosis and thrombus generation. However, data on the safety and performance of DES in large cohorts of AMI patients are still limited. AIM: To compare the early outcomes of DES vs. BMS in AMI patients. METHODS: This was a prospective, multicentre analysis of patients from 64 hospitals in Switzerland with AMI undergoing pPCI between 2005 and 2013. The primary endpoint was in-hospital all-cause death, whereas the secondary endpoint was a composite measure of major adverse cardiac and cerebrovascular events (MACCE): death, reinfarction and cerebrovascular event. RESULTS: Of 20,464 patients with a primary diagnosis of AMI enrolled in the AMIS Plus registry, 15,026 were referred for pPCI and 13,442 received stent implantation; 10,094 patients were implanted with DES and 2,260 with BMS. The overall in-hospital mortality was significantly lower in patients with DES compared to those with BMS implantation (2.6% vs. 7.1%, p < 0.001). The overall in-hospital MACCE rate after DES was similarly lower compared to BMS (3.5% vs. 7.6%, p < 0.001). After adjusting for all confounding covariables, DES remained an independent predictor of lower in-hospital mortality (OR 0.51, 95% CI 0.40-0.67, p < 0.001). Since the groups differed with regard to baseline characteristics and pharmacological treatment, we performed propensity score matching (PSM) to limit potential biases.
Even after the PSM, DES implantation remained independently associated with a reduced risk of in-hospital mortality (adjusted OR 0.54, 95% CI 0.39-0.76, p < 0.001). CONCLUSIONS: In unselected patients from a nationwide, real-world cohort, we found that DES, compared to BMS, were associated with lower in-hospital mortality and MACCE. The identification of optimal treatment strategies for patients with AMI needs further randomised evaluation; however, our findings suggest a potential benefit with DES.
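The odds ratios quoted above follow from a standard 2x2-table calculation. The sketch below is not the study's adjusted analysis; the counts in the test are hypothetical, back-calculated from the reported crude mortality rates (2.6% of 10,094 DES patients, 7.1% of 2,260 BMS patients), and give a crude OR of about 0.35, whereas the abstract's OR of 0.51 additionally adjusts for confounders.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = deaths with DES, b = survivors with DES,
    c = deaths with BMS, d = survivors with BMS.
    """
    or_ = (a / b) / (c / d)
    # Standard error of log(OR) from the usual 1/n sum over the four cells.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```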

Relevance: 80.00%

Publisher:

Abstract:

Les tâches nécessitant des manipulations et des transformations de figures géométriques et de formes, comme les tâches de rotation mentale, donnent lieu à des différences de performance entre hommes et femmes qui restent intrigantes. Plusieurs hypothèses ont été proposées pour expliquer ces différences. La plus récurrente porte sur les différences de stratégie globale vs locale utilisées pour traiter l'information. Bien que cette conjecture soit intéressante, elle reste difficile à opérationnaliser car elle englobe tous les mécanismes cognitifs (acquisition, conservation et récupération de l'information). Ce travail prend la forme d'un retour aux sources dans la mesure où il se base sur des recherches anciennes qui ont montré que les hommes perçoivent significativement mieux que les femmes la verticale et l'horizontale. Il teste l'hypothèse selon laquelle les hommes, comparativement aux femmes, présentent une plus forte indépendance au champ perceptif visuel et sont donc plus susceptibles d'utiliser la verticalité et l'horizontalité pour résoudre une tâche de rotation mentale. Une première série d'expériences s'est penchée sur la perception spatiale pour évaluer son impact sur la résolution d'une tâche impliquant la rotation mentale. Les résultats ont montré que seuls les hommes se référaient à la verticalité et à l'horizontalité pour résoudre la tâche. Une seconde série d'expériences a investigué l'effet de la présence, ou absence, d'axes directionnels directement liés à une tâche de rotation mentale. Elle a été menée également en environnement réel afin d'évaluer comment le déplacement actif ou passif, correspondant à un changement de perspective en lieu et place d'une rotation mentale, module la performance des hommes et des femmes. Les résultats n'ont pas mis en évidence de différence sexuelle.
Notre hypothèse est vérifiée puisque c'est uniquement lorsque la tâche ne présente pas d'axes orthogonaux évidents mais implicites que seuls les hommes, plus indépendants au champ perceptif visuel que les femmes, utilisent la perception de la verticalité et de l'horizontalité pour améliorer leur compétence en rotation mentale. -- Tasks that require the manipulation and transformation of geometric shapes and forms, like mental rotation tasks, give rise to differences in performance between men and women that remain intriguing. Several hypotheses have been proposed to explain these differences. The most recurring hypothesis addresses differences in global versus local strategies for processing information. While this conjecture is interesting, it remains difficult to study because it encompasses all the cognitive mechanisms (acquisition, retention and retrieval of information). This work returns to the sources, building on earlier research showing that men perceive verticality and horizontality significantly better than women. It tests the hypothesis that men, as compared to women, exhibit a greater independence from the visual perceptual field and are therefore more likely to use verticality and horizontality to solve a mental rotation task. A first set of experiments examined spatial perception in order to assess its impact on the resolution of a task involving mental rotation. The results showed that only men referred to verticality and horizontality to solve the task. A second series of experiments investigated the effect of the presence or absence of directional axes directly tied to a mental rotation task. It was also conducted in a real-world environment to evaluate how active or passive displacement, corresponding to a change in perspective instead of a mental rotation, modulates the performance of men and women. The results did not show sex differences. 
Our hypothesis is verified: it is only when the task presents no obvious but only implicit orthogonal axes that men, who exhibit a greater independence from the visual perceptual field than women, use the perception of verticality and horizontality to improve their competence in mental rotation.

Relevance: 80.00%

Publisher:

Abstract:

Fixed Mobile Convergence is the recent buzz in the field of telecommunication technology, and Unlicensed Mobile Access (UMA) technology is a realistic implementation of it. UMA involves communication between different types of networks, and handover is a very important issue in UMA. This study is an analysis of the theoretical handover mechanism and of practical test results. It includes a new proposal for a handover performance test in UMA, and it also provides an overview of basic handover operation in different UMA scenarios. The practical test involves an experiment on handover performance using network analyzers. The new proposal provides a different approach to an experimental setting for handover performance testing without using network analyzers. This approach was not implemented because of a technical problem in a piece of network equipment in the UMA setup. The analysis of the test results reveals that the handover time between UMA and the Global System for Mobile Communications (GSM) network is similar to the time of inter base station controller (inter-BSC) handover in GSM networks. The new approach is simple and provides measurement at the endpoint communicating entities. The study gives a general understanding of handover operation, an analysis of handover performance in UMA and, specifically, a new approach useful for further study of handover in different real-world environments and scenarios.

Relevance: 80.00%

Publisher:

Abstract:

We propose a task for eliciting attitudes toward risk that is close to real-world risky decisions, which typically involve gains and losses. The task consists of accepting or rejecting gambles that provide a gain with probability p and a loss with probability 1−p. We employ finite mixture models to uncover heterogeneity in risk preferences and find that (i) behavior is heterogeneous, with one half of the subjects behaving as expected utility maximizers; (ii) for the others, reference-dependent models perform better than those where subjects derive utility from final outcomes; (iii) models with sign-dependent decision weights perform better than those without; and (iv) there is no evidence for loss aversion. The procedure is sufficiently simple that it can be easily used in field or lab experiments where risk elicitation is not the main experiment.
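The kind of reference-dependent, sign-dependent model the mixture analysis compares against expected utility can be sketched with Tversky-Kahneman-style value and weighting functions. The parameter values below are common illustrative defaults, not the paper's estimates; expected utility over changes is nested as lam = 1, alpha = 1, gamma = 1.

```python
def gamble_value(gain, loss, p, lam=1.0, alpha=0.88, gamma=0.61):
    """Reference-dependent value of a binary gamble: win `gain` with
    probability p, lose `loss` with probability 1 - p.
    lam: loss-aversion coefficient, alpha: value curvature,
    gamma: probability-weighting curvature."""
    def w(q):
        # Tversky-Kahneman inverse-S probability weighting function.
        return q ** gamma / (q ** gamma + (1 - q) ** gamma) ** (1 / gamma)
    return w(p) * gain ** alpha - w(1 - p) * lam * abs(loss) ** alpha

def accept(gain, loss, p, **kw):
    """Accept the gamble when its reference-dependent value is non-negative."""
    return gamble_value(gain, loss, p, **kw) >= 0
```

A subject with lam > 1 (loss averse) rejects symmetric gambles unless the win probability is high enough, which is exactly the accept/reject pattern the task records.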

Relevance: 80.00%

Publisher:

Abstract:

This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of space, time and space-time clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Normally, environmental phenomena can be modelled as stochastic point processes where each event, e.g. the forest fire ignition point, is characterised by its spatial location and occurrence in time. Additionally, information such as burned area, ignition causes, land use, topographic, climatic and meteorological features, etc., can also be used to characterise the studied phenomenon. The space-time pattern characterisation thereby represents a powerful tool to understand the distribution and behaviour of the events and their correlation with underlying processes, for instance, socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. scan statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are of global character and do not consider complex spatial constraints, high variability and the multivariate nature of the events.
Therefore, we proposed a statistical framework that takes into account the complexities of the geographical space where phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data with differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, exclusively for the forest fire case, this research proposes two new methodologies for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructures, and for predicting fire ignition susceptibility. In this regard, the main objective of this thesis was to carry out basic statistical/geospatial research with a strong applied component, to analyse and describe complex phenomena as well as to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular forest fire occurrences. Thus, this thesis provides a response to the increasing demand for environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished. -- Cette thèse développe une méthodologie statistique complète et flexible pour l'analyse et la détection des structures spatiales, temporelles et spatio-temporelles de données environnementales représentées comme des semis de points. Les méthodes ici développées ont été appliquées à des jeux de données simulés autant qu'à des phénomènes environnementaux réels; nonobstant, seuls le cas des feux de forêt dans le Canton du Tessin (Suisse) et celui du Portugal sont expliqués dans ce document.
Normalement, les phénomènes environnementaux peuvent être modélisés comme des processus ponctuels stochastiques où chaque événement, par ex. le point d'ignition d'un feu de forêt, est déterminé par son emplacement spatial et son occurrence dans le temps. De plus, des informations telles que la surface brûlée, les causes d'ignition, l'utilisation du sol, les caractéristiques topographiques, climatiques et météorologiques, etc., peuvent aussi être utilisées pour caractériser le phénomène étudié. Par conséquent, la définition de la structure spatio-temporelle représente un outil puissant pour comprendre la distribution du phénomène et sa corrélation avec des processus sous-jacents tels que les facteurs socio-économiques, environnementaux et météorologiques. De ce fait, nous proposons une méthodologie basée sur l'adaptation et l'application de mesures statistiques et fractales des processus ponctuels, d'analyse globale (par ex. l'indice de Morisita, la dimension fractale par comptage de boîtes, le formalisme multifractal et la fonction K de Ripley) et locale (par ex. la statistique de scan). De nombreuses mesures décrivant les structures spatio-temporelles de phénomènes environnementaux peuvent être trouvées dans la littérature. Néanmoins, la plupart de ces mesures sont de caractère global et ne considèrent pas les contraintes spatiales complexes, ainsi que la haute variabilité et la nature multivariée des événements. À cet effet, la méthodologie ici proposée prend en compte les complexités de l'espace géographique où le phénomène a lieu, à travers l'introduction du concept de Domaine de Validité et l'application des mesures d'analyse spatiale à des données présentant différentes contraintes géographiques. Cela permet l'évaluation du degré relatif d'agrégation spatiale/temporelle des structures du phénomène observé.
En plus, exclusivement pour le cas des feux de forêt, cette recherche propose aussi deux nouvelles méthodologies pour la définition et la cartographie des zones périurbaines, décrites comme des espaces anthropogéniques à proximité de la végétation sauvage ou de la forêt, et pour la prédiction de la susceptibilité à l'ignition de feu. À cet égard, l'objectif principal de cette thèse a été d'effectuer une recherche statistique/géospatiale avec une forte application dans des cas réels, pour analyser et décrire des phénomènes environnementaux complexes aussi bien que pour surmonter des problèmes méthodologiques non résolus relatifs à la caractérisation des structures spatio-temporelles, particulièrement celles des occurrences de feux de forêt. Ainsi, cette thèse fournit une réponse à la demande croissante de la gestion et du monitoring environnemental pour le déploiement d'outils d'évaluation des risques et des dangers naturels et anthropogéniques. Les contributions majeures de ce travail ont été présentées aux conférences nationales et internationales, et ont aussi été publiées dans 5 revues internationales avec comité de lecture. Des collaborations nationales et internationales ont aussi été établies et accomplies avec succès.
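One of the global measures named above, the Morisita index, is easy to sketch: partition the study area into Q quadrats and compare the observed co-occurrence of points within quadrats to what randomness would give. The minimal version below is for points in the unit square (values near 1 indicate a random pattern, larger values clustering, values below 1 regularity); it is an illustration, not the thesis's implementation.

```python
def morisita_index(points, q):
    """Morisita index on the unit square split into a q x q grid of quadrats.

    points: iterable of (x, y) pairs with coordinates in [0, 1]
    q: number of quadrats per side (Q = q * q quadrats in total)
    """
    counts = [0] * (q * q)
    for x, y in points:
        i = min(int(x * q), q - 1)  # clamp points on the upper boundary
        j = min(int(y * q), q - 1)
        counts[i * q + j] += 1
    n = len(points)
    pairs_within = sum(c * (c - 1) for c in counts)
    return q * q * pairs_within / (n * (n - 1))
```

A tightly clustered pattern (all points in one quadrat) yields the maximum value Q, while one point per quadrat yields 0, matching the index's intended reading.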

Relevance: 80.00%

Publisher:

Abstract:

The activated sludge process - the main biological technology usually applied to wastewater treatment plants (WWTP) - directly depends on live beings (microorganisms), and therefore on unforeseen changes produced by them. It could be possible to get a good plant operation if the supervisory control system is able to react to the changes and deviations in the system and can take the necessary actions to restore the system's performance. These decisions are often based both on physical, chemical and microbiological principles (suitable to be modelled by conventional control algorithms) and on some knowledge (suitable to be modelled by knowledge-based systems). But one of the key problems in knowledge-based control systems design is the development of an architecture able to manage efficiently the different elements of the process (integrated architecture), to learn from previous cases (specific experimental knowledge) and to acquire the domain knowledge (general expert knowledge). These problems increase when the process belongs to an ill-structured domain and is composed of several complex operational units. Therefore, an integrated and distributed AI architecture seems to be a good choice. This paper proposes an integrated and distributed supervisory multi-level architecture for the supervision of WWTP that overcomes some of the main troubles of classical control techniques and those of knowledge-based systems applied to real-world systems.
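The knowledge-based layer of such an architecture can be sketched as a small rule base evaluated against the current plant state. The sensor names, thresholds and recommended actions below are hypothetical illustrations of the if-then expert knowledge the paper describes, not the paper's actual rules.

```python
# Hypothetical expert rule base for a WWTP supervisory layer.
# Each rule pairs a condition on the sensor readings with an action.
RULES = [
    (lambda s: s["DO"] < 1.0, "increase aeration"),
    (lambda s: s["SVI"] > 150, "possible bulking: check filamentous bacteria"),
    (lambda s: s["NH4_out"] > 5.0, "nitrification problem: raise sludge age"),
]

def supervise(sensors):
    """Return every action whose triggering condition holds for the
    current sensor dictionary."""
    return [action for condition, action in RULES if condition(sensors)]
```

In the integrated architecture this rule layer would sit above the conventional numeric control loops and alongside a case library of previous operating episodes.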

Relevance: 80.00%

Publisher:

Abstract:

There is an increasing reliance on computers to solve complex engineering problems. This is because computers, in addition to supporting the development and implementation of adequate and clear models, can especially minimize the financial support required. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the equations of an object in a flow exactly. Approximate solutions can be obtained by the construction and measurement of prototypes placed in a flow, or by use of a numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process. In this case the simulation setup and parameters can be altered much more easily than one could with a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), and also for fluid flow through porous media. The models have merit as a scientific tool and also have practical applications in industry. Most of the numerical simulations were done with the commercial software Fluent, and user-defined functions were added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions can elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole through a venule via a capillary showed that the model based on VOF can successfully predict the deformation and flow of RBCs in an arteriole. Furthermore, the result corresponds to experimental observations illustrating that the RBC is deformed during its movement.
The concluding remarks provide a sound methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even stronger than the conventional one, and that its magnitude depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to cases where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (the Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase make the largest and smallest contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) over the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
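The Macdonald et al. (1979) correlation mentioned above is an Ergun-type fit for pressure drop in packed beds. A minimal sketch of that calculation follows, assuming the published constants 180 (viscous term) and about 1.8 (inertial term, smooth particles); the parameter values in the example are purely illustrative and not taken from the thesis:

```python
def macdonald_pressure_drop(u, d_p, eps, rho, mu, L, B=1.8):
    """Pressure drop (Pa) over a packed bed of length L via the
    Ergun-type correlation of Macdonald et al. (1979).

    u    superficial velocity (m/s)      d_p  particle diameter (m)
    eps  bed porosity (-)                rho  fluid density (kg/m^3)
    mu   dynamic viscosity (Pa s)        B    inertial constant (~1.8 smooth)
    """
    Re = rho * u * d_p / mu                              # particle Reynolds number
    term_viscous = 180.0 * (1 - eps) ** 2 / (eps ** 3 * Re)
    term_inertial = B * (1 - eps) / eps ** 3
    # dimensionless pressure drop, rescaled to Pa
    return (term_viscous + term_inertial) * rho * u ** 2 * L / d_p

# Illustrative values: water through a 10 cm bed of 1 mm particles
dp = macdonald_pressure_drop(u=0.01, d_p=1e-3, eps=0.4,
                             rho=1000.0, mu=1e-3, L=0.1)
```

At low Reynolds numbers the viscous term dominates and the correlation reduces to Darcy-like behavior; at high Reynolds numbers the inertial term takes over.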

Relevância:

80.00%

Publicador:

Resumo:

Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of a problem are considered, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating simulations. The success of parallel simulation depends heavily on the combination of the simulation application, the algorithm, and the execution environment. In this thesis a conservative parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. The thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially demanding for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. The novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified using the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the simulation time. The null message cancellation technique reduces the processing time of null messages, as an arriving null message cancels other unprocessed null messages. Multiple message simulation forms groups of messages, simulating several messages before releasing the newly created ones.
If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for taking the simulation application into account is also presented: performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, in which the properties of the application alone determine which connections are necessary when a certain accuracy of the simulation results is required. Distributed simulation is also analyzed in order to determine the effect of the different elements in the implemented simulation environment. This analysis is performed using critical path analysis, which allows the determination of a lower bound for the simulation time. In this thesis, critical times are computed for both sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
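The null-message mechanism and its cancellation optimization described above can be sketched as follows. This is a minimal illustration of the technique, not the Diworse implementation; the class and method names are invented for this example:

```python
from collections import defaultdict

class LogicalProcess:
    """Toy conservative logical process (Chandy-Misra style).

    Each input channel keeps a FIFO queue and a channel clock (the
    timestamp of the last message received on it). A null message
    carries no payload; it only advances the channel clock so the
    receiver can safely process events up to the minimum channel
    clock. Null message cancellation: an arriving null message
    discards older, still-unprocessed null messages on the same
    channel, so only the newest one is ever processed.
    """

    def __init__(self, name, lookahead):
        self.name = name
        self.lookahead = lookahead                 # minimum service delay
        self.channel_clock = defaultdict(float)    # last timestamp per channel
        self.queues = defaultdict(list)            # per-channel [(time, payload)]

    def receive(self, channel, t, payload=None):
        if payload is None:                        # null message arriving
            # cancellation: drop older queued nulls from this channel
            self.queues[channel] = [(ts, p) for ts, p in self.queues[channel]
                                    if p is not None or ts > t]
        self.queues[channel].append((t, payload))
        self.channel_clock[channel] = max(self.channel_clock[channel], t)

    def safe_time(self):
        # events are safe to process up to the minimum channel clock
        return min(self.channel_clock.values()) if self.channel_clock else 0.0

    def null_timestamp(self):
        # null message sent to neighbors: local safe time plus lookahead
        return self.safe_time() + self.lookahead
```

The lookahead added in `null_timestamp` is what lets neighbors make progress without deadlock; cancellation keeps the null-message queues from growing when one channel is much faster than the others.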

Relevância:

80.00%

Publicador:

Resumo:

The role of transport in the economy is twofold: as a sector of economic activity it contributes a share of national income, while improvements in transport infrastructure create room for accelerated economic growth. To support railways as a safe and environmentally friendly transportation mode, EU legislation has required domestic railway freight to be opened to competition from the beginning of 2007. Railways have been an important mode of transport in Finland, where a larger share of freight has been carried on rails than in Europe on average. This thesis claims that the efficiency of goods transport can be enhanced by service-specific investments. Furthermore, it stresses that simulation can and should be used to evaluate the cost-efficiency of transport systems at the operational level, as well as to assess transportation infrastructure investments. Notable efficiency improvements were found in all the studied cases. In distribution, for example, home delivery of groceries can be almost twice as cost-efficient as the current practice of visiting the store. The majority of the cases concentrated on railway freight. In timber transportation, the item with the largest annual transport volume in Finnish domestic railway freight, the transportation cost could be reduced most substantially. In international timber procurement, too, the utilization of railway wagons could be improved by combining complementary flows. The efficiency improvements also have positive environmental effects, as a large part of road transit could be moved to rails annually. If the impacts of freight transport are included in the cost-benefit analysis of railway investments, an increase of up to 50% in the net benefits of the evaluated alternatives can result, avoiding a possible built-in bias in the assessment framework and thus increasing the efficiency of national investments in transportation infrastructure.
Transportation systems are a typical example of complex real-world systems that cannot be analysed realistically by analytical methods, whereas simulation allows the inclusion of dynamics and the required level of detail. The view of simulation as a viable tool for assessing the efficiency of transportation systems also finds support in the international survey conducted among railway freight operators: operators use operations research methods widely for planning purposes, while simulation is applied only by the larger operators.
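The cost-benefit mechanism referred to above can be made concrete with a minimal net-present-value sketch. All figures below (benefits, costs, discount rate, horizon) are hypothetical and not taken from the thesis; the point is only that adding freight-transport benefits to the discounted stream raises the net benefit of an alternative:

```python
def net_present_value(annual_benefits, annual_costs, discount_rate, years, investment):
    """NPV of an infrastructure investment: the discounted stream of
    annual net benefits minus the up-front investment (same money units)."""
    npv = -investment
    for year in range(1, years + 1):
        npv += (annual_benefits - annual_costs) / (1 + discount_rate) ** year
    return npv

# Hypothetical railway investment appraised over 30 years at 5%
base = net_present_value(annual_benefits=10.0, annual_costs=2.0,
                         discount_rate=0.05, years=30, investment=80.0)

# Same project, with (hypothetical) freight-transport impacts included
with_freight = net_present_value(annual_benefits=13.0, annual_costs=2.0,
                                 discount_rate=0.05, years=30, investment=80.0)
```

Leaving freight impacts out of the benefit stream systematically understates the NPV of freight-heavy alternatives, which is the built-in bias the abstract warns about.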

Relevância:

80.00%

Publicador:

Resumo:

The provision of Internet access to large numbers of users has traditionally been under the control of operators, who have built closed access networks to connect customers. As the access network (i.e. the last mile to the customer) is generally the most expensive part of the network because of the vast amount of cable required, many operators have been reluctant to build access networks in rural areas. There are problems in urban areas as well, where incumbent operators may use various tactics to make it difficult for competitors to enter the market. Open access networking, in which the goal is to connect multiple operators and other types of service provider to a shared network, changes the way in which networks are used. This change in network structure dismantles vertical integration in service provision and enables true competition, as no service provider can prevent others from competing in the open access network. This thesis describes the development from traditional closed access networks towards open access networking and analyses different types of open access solution. It introduces a new open access network approach (The Lappeenranta Model) in greater detail and compares it to other types of open access network. The thesis shows that end users and service providers see local open access and services as beneficial. In addition, the thesis discusses open access networking in a multidisciplinary fashion, focusing on the real-world challenges of open access networks.

Relevância:

80.00%

Publicador:

Resumo:

This paper describes the use of the primary chemistry literature in a fifth-semester physical chemistry course for undergraduate chemistry students. The main goal is to expose students to the primary literature of physical chemistry and to demonstrate how they can benefit from using it. The assignment addresses issues in chemical education such as scientific writing, relating lecture material to the real world, and conducting literature searches. The student evaluation of this assignment, consisting of two surveys and one focus group, showed its usefulness; details of the evaluation instruments and their results are provided. Of the 45 students enrolled in the course, 30 (67%) completed the assignment.