896 results for Simulation and modeling applications
Abstract:
The focus of the present work was on 10- to 12-year-old elementary school students' conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine whether it would be more beneficial to combine than to contrast simulation and laboratory activities in science teaching. It was argued that the status quo, where laboratories and simulations are seen as alternative or competing methods in science teaching, is hardly an optimal solution for promoting students' learning and understanding in various science domains. It was hypothesized that it would make more sense and be more productive to combine laboratories and simulations. Several explanations and examples were provided to back up the hypothesis. In order to test whether learning with a combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and simulation-laboratory combination (real and virtual circuits used simultaneously). In order to measure and compare how these environments affected students' conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments were presented in four empirical studies. Three of the studies focused on learning outcomes between the conditions and one on learning processes. Study I analyzed learning outcomes from experiment I. The aim of the study was to investigate whether it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students' conceptual understanding more than studying circuits in the simulation or laboratory environment alone. Although there were no statistically significant differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students' conceptual understanding remained at an elementary level after the intervention. Study II analyzed learning outcomes from experiment II. The aim of the study was to investigate if and how learning outcomes in simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions.
The results showed that when the students were working with the simulation alone, they were able to gain a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE). A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment. In the combination environment, explicit instruction (CE) did not seem to elicit much additional gain in students' understanding of electric circuits compared to implicit instruction (CI). Instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III used video data to analyze the learning processes of the 50 students who participated in experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to identify possible explanations for the success of the combination condition in Experiments I and II. The video data provided clear evidence of the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). For the most part the representations complemented each other, that is, one representation helped students to interpret and understand the outcomes they received from the other representation. However, there were also instances in which analogical encoding took place, that is, situations in which slightly discrepant results between the representations 'forced' students to focus on those features that could be generalised across the two representations. No statistical differences were found in the amount of experienced cognitive conflicts and self-explanations between the simulation and combination conditions, though in self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the amount of self-explanations. Overall, the amount of cognitive conflicts and self-explanations was very low. The aim of Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of experiments I and II; the secondary aim was to explore the relationship between the learning environments and students' prior domain knowledge (low and high) in the experiments. Aggregated results of experiments I and II showed that, on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them also scored above the average of the simulation environment. Seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that overall students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge, that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge.
However, a more detailed inspection of the results showed that there were considerable differences between the experiments regarding how students with low and high prior knowledge benefitted from the combination: in Experiment I, especially students with low prior knowledge benefitted from the combination as compared to those who used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between the simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulation along with laboratory activities to promote students' conceptual understanding of electricity. It can be concluded that, when teaching students about electricity, students gain a better understanding when they have an opportunity to use the simulation and the real circuits in parallel than if they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with explicit instruction. The outcomes of the empirical studies can be considered the first unambiguous evidence of the (additional) benefits of combining laboratory and simulation activities in science education, as compared to learning with laboratories or simulations alone.
Abstract:
Modern machine structures are often fabricated by welding. From a fatigue point of view, structural details, and welded details in particular, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already an established method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. Also, to take operator influence into account, real-time models, i.e., simplified and computationally efficient models, are required. This, in turn, requires stress computation to be efficient if it is to be performed during dynamic simulation. The research revisits the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts to form a new approach for efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the relevant fatigue assessment stress history for a structural detail during or after dynamic simulation. In this work, numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
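As a concrete illustration of the idea, the sketch below recovers a stress history by modal superposition: a detailed FE sub-model is assumed to supply a modal stress matrix for the detail, and the multibody simulation supplies the modal coordinates q(t). All names, sizes and values are hypothetical, not taken from the thesis.

```python
import numpy as np

# Hypothetical modal stress recovery for one structural detail.
# S[i, j] = stress component i at the detail per unit of modal coordinate j,
# precomputed once from a detailed FE sub-model (unit-mode static analyses).
n_modes, n_steps = 4, 1000
rng = np.random.default_rng(0)

S = rng.normal(size=(1, n_modes))       # modal stress matrix (1 stress component)
t = np.linspace(0.0, 10.0, n_steps)     # simulation time grid
# q(t): modal coordinates; here a stand-in signal, in practice they come
# from the floating-frame-of-reference multibody solver.
q = np.sin(np.outer(np.arange(1, n_modes + 1), t))

# Stress history by modal superposition: sigma(t) = S @ q(t)
sigma = S @ q                            # shape (1, n_steps)

# The resulting time series would feed a fatigue assessment,
# e.g. rainflow counting and a damage sum.
print(sigma.shape, sigma[0, :5])
```

The point of the construction is that the expensive FE work is done once, offline; during simulation the stress history costs only a small matrix-vector product per time step.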
Abstract:
In ship and offshore terminal construction, welded cross sections are thick and the number of welds very high. Consequently, two aspects are of great importance: cost and heat input. Reducing the welding operation time decreases workforce costs and avoids excessive heat, preventing distortion and other weld defects. The need to increase productivity while using a single wire in the GMAW process has led to the use of a high current and voltage to improve the melting rate. Unfortunately, this also increases the heat input. Innovative GMAW processes, mostly implemented for sheet plate sections, have shown a significant reduction in heat input (Q), low distortion and an increase in welding speed. The aim of this study is to investigate adaptive pulsed GMAW processes and assess relevant applications in the high power range, considering possible benefits when welding thicker sections and high yield strength steel. The study experimentally tests the usability of adaptive welding processes and evaluates their effects on weld properties, penetration and the shape of the weld bead. The study first briefly reviews adaptive GMAW to evaluate different approaches and their applications and to identify the benefits of adaptive pulsed processes. Experiments are then performed using the Synergic Pulsed GMAW, WiseFusion™ and Synergic GMAW processes to weld a T-joint in a horizontal position (PB). The air gap between the parts ranges from 0 to 2.5 mm. The base material is structural steel grade S355MC and the filler material G3Si1. The experiment investigates the heat input, mechanical properties and microstructure of the welded joint. Analysis of the literature reveals that different approaches have been suggested using advanced digital power sources with accurate waveform, current, voltage, and feedback control. In addition, studies have clearly indicated the efficiency of lower-energy welding processes. Interest in the high power range is growing and a number of different approaches have been suggested. The welding experiments in this study reveal a significant reduction of heat input and a weld microstructure with the presence of acicular ferrite (AF), beneficial for resistance to crack propagation. The WiseFusion bead had higher dilution, due to the weld bead shape, and few defects. Adaptive pulsed GMAW processes can be a favoured choice when welding structures with many welded joints. The total heat reduction mitigates residual stresses and the bead shape allows a higher amperage limit. The arc remains stable and virtually spatter-free throughout the process and allows an increase in welding speed.
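For orientation, heat input in arc welding is conventionally computed as Q = k·U·I/v (kJ/mm), with k the thermal efficiency factor (about 0.8 for GMAW, following the EN 1011-1 convention). The sketch below uses illustrative parameter values, not the measurements of this study.

```python
# Heat input for an arc welding pass, Q = k * U * I / v (kJ/mm),
# following the common EN 1011-1 convention; values are illustrative.

def heat_input_kj_per_mm(voltage_v: float, current_a: float,
                         travel_speed_mm_min: float, k: float = 0.8) -> float:
    """k is the thermal efficiency factor (~0.8 for GMAW)."""
    travel_speed_mm_s = travel_speed_mm_min / 60.0
    return k * voltage_v * current_a / travel_speed_mm_s / 1000.0

# Example: a conventional pass vs. a faster pass at the same arc power.
print(heat_input_kj_per_mm(28.0, 280.0, 400.0))   # slower travel -> higher Q
print(heat_input_kj_per_mm(28.0, 280.0, 600.0))   # faster travel -> lower Q
```

The comparison shows the basic lever the adaptive processes exploit: holding arc power while increasing travel speed lowers Q.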
Abstract:
This thesis presents different IPR risk mitigation actions as well as enforcement practices and evaluates their usability in different situations. The focus is on pending patent applications, where the right has not yet been officially recognized or established, but some references are made to granted patents as well. The thesis presents the different aspects to consider when assessing the risk level created by patents and pending applications. Throughout, it compares the patent law of the United States and the European Patent Convention. Occasionally references are made to national law, where the European Patent Convention cannot be applied. The thesis presents two case examples, which bring the risk mitigation actions and enforcement practices closer to practice.
Abstract:
The increasing complexity of the controller systems applied in modern passenger cars requires adequate simulation tools. The toolset FASIM_C++, described in the following, uses complex vehicle models in three-dimensional vehicle dynamics simulation. The structure of the implemented dynamic models and the generation of the equations of motion by the method of kinematic differentials are explained briefly. After a short introduction to methods of event handling, several vehicle models and applications such as controller development, roll-over simulation and real-time simulation are described. Finally, some simulation results are presented.
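The event handling mentioned above can be illustrated with a generic integrator-level sketch (this is not FASIM_C++'s actual interface, which the abstract does not describe): an ODE solver monitors a scalar event function and terminates or switches models at its zero crossing, for example a roll-angle limit in a roll-over study. All dynamics and constants below are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch of event handling: a single-degree-of-freedom roll model
# (hypothetical linear dynamics, not the FASIM_C++ vehicle models).
def roll_dynamics(t, y, moment):
    phi, phi_dot = y
    # I*phi_ddot = M - c*phi_dot - k*phi  (illustrative constants)
    I, c, k = 600.0, 400.0, 3000.0
    return [phi_dot, (moment - c * phi_dot - k * phi) / I]

def rollover_event(t, y, moment):
    return y[0] - np.radians(30.0)    # zero crossing at 30 deg roll angle
rollover_event.terminal = True         # stop integration at the event
rollover_event.direction = 1           # only while the angle is increasing

sol = solve_ivp(roll_dynamics, (0.0, 10.0), [0.0, 0.0],
                args=(2500.0,), events=rollover_event, max_step=0.01)
print("roll-over limit reached at t =", sol.t_events[0])
```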
Abstract:
The iron and steelmaking industry is among the major contributors to the anthropogenic emissions of carbon dioxide in the world. The rising levels of CO2 in the atmosphere and the global concern about the greenhouse effect and climate change have brought about considerable investigations into how to reduce the energy intensity and CO2 emissions of this industrial sector. In this thesis the problem is tackled by mathematical modeling and optimization using three different approaches. The possibility of using biomass in the integrated steel plant, particularly as an auxiliary reductant in the blast furnace, is investigated. By pre-processing the biomass, its heating value and carbon content can be increased at the same time as its oxygen content is decreased. As the compression strength of the pre-processed biomass is lower than that of coke, it is not suitable for replacing a major part of the coke in the blast furnace burden. Therefore, the biomass is assumed to be injected at the tuyere level of the blast furnace. Carbon capture and storage (CCS) is nowadays mostly associated with power plants, but it can also be used to reduce the CO2 emissions of an integrated steel plant. In the case of a blast furnace, the effect of CCS can be further increased by recycling the CO2-stripped top gas back into the process. However, this affects the economy of the integrated steel plant, as the amount of top gas available, e.g., for power and heat production is decreased. High-quality raw materials are a prerequisite for smooth blast furnace operation. High-quality coal is especially needed to produce coke with sufficient properties to ensure proper gas permeability and smooth burden descent. Lower-quality coals, as well as natural gas, which some countries have in great volumes, can be utilized with various direct and smelting reduction processes. The DRI produced with a direct reduction process can be utilized as a feed material for the blast furnace, basic oxygen furnace or electric arc furnace. The liquid hot metal from a smelting reduction process can in turn be used in the basic oxygen furnace or electric arc furnace. The unit sizes and investment costs of an alternative ironmaking process are also lower than those of a blast furnace. In this study, the economy of an integrated steel plant is investigated by simulation and optimization. The studied system consists of linearly described unit processes from the coke plant to the steelmaking units, with a more detailed thermodynamic model of the blast furnace. The results from blast furnace operation with biomass injection revealed the importance of proper pre-processing of the raw biomass, as the composition, heating value and yield of the biomass are all affected by the pyrolysis temperature. As for recycling of CO2-stripped blast furnace top gas, substantial reductions in the emission rates are achieved if the stripped CO2 can be stored. However, the optimal recycling degree, together with the other operating conditions, is heavily dependent on the cost structure of CO2 emissions and stripping/storage. The economic feasibility of using DRI in the blast furnace depends on the price ratio between DRI pellets and BF pellets. The high amount of energy needed in the rotary hearth furnace to reduce the iron ore leads to increased CO2 emissions.
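The flavor of optimizing linearly described unit processes can be conveyed with a toy linear program; every coefficient, bound and variable below is invented solely to show the structure of such a model, not the thesis's plant data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP in the spirit of linearly modeled unit processes: choose coke rate,
# tuyere biomass injection and CO2 storage level to minimize total cost.
# x = [coke_t, biomass_t, co2_stored_t] per tonne of hot metal; all made up.
cost = np.array([300.0, 150.0, 40.0])          # unit costs incl. CO2 handling

# Emission balance: 3.0*coke + 1.8*biomass - 1.0*stored <= cap (t CO2 / t HM)
A_ub = np.array([[3.0, 1.8, -1.0]])
b_ub = np.array([1.2])

# Reductant requirement: 0.9*coke + 0.6*biomass >= 0.45  (negated for <=)
A_ub = np.vstack([A_ub, [-0.9, -0.6, 0.0]])
b_ub = np.append(b_ub, -0.45)

bounds = [(0.25, 0.6),    # coke cannot be fully replaced (burden strength)
          (0.0, 0.15),    # tuyere biomass injection limit
          (0.0, 1.5)]     # CO2 capture capacity

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)
```

Even this caricature reproduces the trade-off the abstract describes: the optimal mix of coke, biomass and storage shifts with the assumed cost structure of CO2 emissions and stripping/storage.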
Abstract:
Gasification of biomass is an efficient process for producing liquid fuels, heat and electricity. It is especially interesting for the Nordic countries, where raw material for the processes is readily available. The thermal reactions of light hydrocarbons are a major challenge for industrial applications. At elevated temperatures, light hydrocarbons react spontaneously to form higher molecular weight compounds. In this thesis, this phenomenon was studied through a literature survey, experimental work and a modeling effort. The literature survey revealed that the change in tar composition is likely caused by kinetic entropy. The role of the surface material is deemed to be an important factor in the reactivity of the system. The experimental results were in accordance with previous publications on the subject. The novelty of the experimental work lies in the time interval used for measurements combined with an industrially relevant temperature interval. The aspects covered in the modeling include screening of possible numerical approaches, testing of optimization methods and kinetic modeling. No significant numerical issues were observed, so the calculation routines used are adequate for the task. Evolutionary algorithms gave better performance, combined with a better fit, than conventional iterative methods such as the Simplex and Levenberg-Marquardt methods. Three models were fitted to the experimental data. The LLNL model was used as a reference model to which the two other models were compared. A compact model which included all the observed species was developed. The parameter estimation performed on that model gave a slightly poorer fit to the experimental data than the LLNL model, but the difference was barely significant. The third tested model concentrated on the decomposition of hydrocarbons and included a theoretical description of the formation of a carbon layer on the reactor walls. Its fit to the experimental data was extremely good. Based on the simulation results and literature findings, it is likely that the surface coverage of carbonaceous deposits is a major factor in thermal reactions.
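The optimization-method comparison can be sketched as follows: fitting an Arrhenius rate constant to synthetic first-order decay data with an evolutionary algorithm. The model, data and bounds are invented for illustration and are unrelated to the LLNL model or the thesis's measurements.

```python
import numpy as np
from scipy.optimize import differential_evolution

R = 8.314  # J/(mol K)

def rate_constant(A, Ea, T):
    """Arrhenius rate constant k(T) = A * exp(-Ea / (R*T))."""
    return A * np.exp(-Ea / (R * T))

# Synthetic "measurements": first-order decay C = C0 * exp(-k(T) * t)
t = np.linspace(0.0, 2.0, 20)                  # s
T = 1100.0                                     # K, industrially relevant range
true_A, true_Ea = 5.0e8, 2.0e5                 # invented parameters
C_obs = np.exp(-rate_constant(true_A, true_Ea, T) * t)
C_obs += np.random.default_rng(1).normal(0.0, 0.01, t.size)

def sse(params):
    log_A, Ea = params
    C_model = np.exp(-rate_constant(10.0 ** log_A, Ea, T) * t)
    return np.sum((C_model - C_obs) ** 2)

# Evolutionary search over wide bounds on (log10 A, Ea).
result = differential_evolution(sse, bounds=[(4, 12), (1.0e5, 3.5e5)], seed=0)
print(result.x, result.fun)
```

The log10 parametrization of the pre-exponential factor is a common trick: it tames the strong A/Ea correlation that makes local gradient methods such as Levenberg-Marquardt sensitive to their starting point, which is one plausible reason the evolutionary search fares better here.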
Abstract:
In this Master's Thesis the characteristics of the chosen fractal microstrip antennas are investigated. The structure of square Sierpinski fractal curves was used for the modeling. During the elaboration of this Master's Thesis the following steps were undertaken: 1) calculation and simulation of a square microstrip antenna, 2) optimization to obtain the required characteristics at a frequency of 2.5 GHz, 3) simulation and calculation of the second and third iterations of the Sierpinski fractal curves, 4) radiation patterns and intensity distributions of these antennas. The search for the optimal position of the port and of the fractal elements was also conducted. These structures can potentially be used to create antennas that operate simultaneously in different frequency ranges.
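The square Sierpinski geometry underlying the patch (the Sierpinski carpet) is generated by a simple recursion; a sketch of an iteration-mask generator, usable as a starting point for exporting patch geometry to an EM solver, is given below. It is a generic illustration, not the thesis's simulation setup.

```python
import numpy as np

def sierpinski_carpet(iterations: int) -> np.ndarray:
    """Boolean mask of the square Sierpinski (carpet) fractal:
    True = metallized patch cell, False = etched-out cell."""
    mask = np.ones((1, 1), dtype=bool)
    for _ in range(iterations):
        # Tile 3x3 copies and knock out the center block.
        mask = np.block([[mask, mask, mask],
                         [mask, np.zeros_like(mask), mask],
                         [mask, mask, mask]])
    return mask

# Second and third iterations, the cases simulated in the thesis.
for n in (2, 3):
    m = sierpinski_carpet(n)
    print(f"iteration {n}: {m.shape[0]}x{m.shape[1]} cells, "
          f"metal fill {m.mean():.3f}")   # fill factor (8/9)^n
```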
Abstract:
The monitoring and control of hydrogen sulfide (H2S) levels is of great interest for a wide range of application areas including food quality control, defense and antiterrorist applications, and air quality monitoring, e.g. in mines. H2S is a very poisonous and flammable gas. Exposure to low concentrations of H2S can result in eye irritation, a sore throat and cough, shortness of breath, and fluid retention in the lungs. These symptoms usually disappear in a few weeks. Long-term, low-level exposure may result in fatigue, loss of appetite, headache, irritability, poor memory, and dizziness. Concentrations of 700-800 ppm tend to be fatal. H2S has a characteristic smell of rotten eggs. However, because of temporary paralysis of the olfactory nerves, the ability to smell it at concentrations higher than 100 ppm is severely compromised. In addition, volatile H2S is one of the main products of the spoilage of poultry meat under anaerobic conditions. Currently, no commercial H2S sensor is available which can operate under anaerobic conditions and can be easily integrated into food packaging. This thesis presents step-wise progress in the development of printed H2S gas sensors. Efforts were made in the formulation, characterization and optimization of functional printable inks and coating pastes based on composites of a polymer and a metal salt, as well as a composite of a metal salt and an organic acid. Different processing techniques, including inkjet printing, flexographic printing, screen printing and spray coating, were utilized in the fabrication of the H2S sensors. The dispersions were characterized by measuring turbidity, surface tension, viscosity and particle size. The sensing films were characterized using X-ray photoelectron spectroscopy, X-ray diffraction, atomic force microscopy and an electrical multimeter. Thin and thick printed or coated films were developed for gas sensing applications with the aim of monitoring H2S concentrations in real-life applications. Initially, an H2S gas sensor based on a composite of polyaniline and a metal salt was developed. Both aqueous and solvent-based dispersions were developed and characterized. These dispersions were then utilized in the fabrication of roll-to-roll printed H2S gas sensors. However, the humidity background, long-term instability and comparatively poor detection limit made these sensors less favourable for practical applications. To overcome these problems, copper acetate based sensors were developed for H2S gas sensing. Stable inks with excellent printability were developed by tuning the surface tension, viscosity and particle size. This enabled the formation of inkjet-printed, high-quality copper acetate films with excellent sensitivity towards H2S. Furthermore, these sensors showed negligible humidity effects and improved selectivity, response time, lower limit of detection and coefficient of variation. The lower limit of detection of the copper acetate based sensors was further improved to sub-ppm level by the incorporation of catalytic gold nanoparticles and subsequent plasma treatment of the sensing film. These sensors were further integrated into an inexpensive, wirelessly readable RLC circuit (where R is a resistor, L an inductor and C a capacitor). The performance of these sensors towards biogenic H2S produced during the spoilage of poultry meat in a modified atmosphere package was also demonstrated in this thesis. This serves as a proof of concept that these sensors can be utilized in real-life applications.
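The wireless readout exploits the resonance of the tag, f0 = 1/(2π√(LC)), with the sensing film acting as the resistive element so that H2S exposure shifts the measured response. The sketch below assumes a series RLC topology and made-up component values; neither is specified in the abstract.

```python
import math

def resonant_frequency(L_h: float, C_f: float) -> float:
    """Resonant frequency of an LC tank, f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_h * C_f))

def quality_factor(R_ohm: float, L_h: float, C_f: float) -> float:
    """Q of a series RLC circuit: Q = (1/R) * sqrt(L/C)."""
    return math.sqrt(L_h / C_f) / R_ohm

L, C = 2.2e-6, 100e-12            # assumed tag inductance and capacitance
print(f"f0 = {resonant_frequency(L, C) / 1e6:.2f} MHz")

# Assumed mechanism: the sensing film's resistance changes on H2S exposure,
# which shows up as a change in the resonance quality factor.
for R in (2000.0, 200.0, 20.0):
    print(f"R = {R:7.1f} ohm -> Q = {quality_factor(R, L, C):8.2f}")
```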
Abstract:
The last decade has seen growing interest in the problems posed by weak instrumental variables in the econometric literature, that is, situations in which the instruments are weakly correlated with the variable to be instrumented. Indeed, it is well known that when instruments are weak, the distributions of the Student, Wald, likelihood-ratio and Lagrange-multiplier statistics are no longer standard and often depend on nuisance parameters. Several empirical studies, notably on models of returns to education [Angrist and Krueger (1991, 1995), Angrist et al. (1999), Bound et al. (1995), Dufour and Taamouti (2007)] and on asset pricing (C-CAPM) [Hansen and Singleton (1982, 1983), Stock and Wright (2000)], where the instrumental variables are weakly correlated with the variable to be instrumented, have shown that the use of these statistics often leads to unreliable results. One remedy for this problem is the use of identification-robust tests [Anderson and Rubin (1949), Moreira (2002), Kleibergen (2003), Dufour and Taamouti (2007)]. However, there is no econometric literature on the quality of identification-robust procedures when the available instruments are endogenous, or both endogenous and weak. This raises the question of what happens to identification-robust inference procedures when some instrumental variables that are assumed to be exogenous are in fact not. More precisely, what happens if an invalid instrumental variable is added to a set of valid instruments? Do these procedures behave differently? And if the endogeneity of instrumental variables poses major difficulties for statistical inference, can test procedures be proposed that select the instruments that are both strong and valid? Is it possible to propose instrument-selection procedures that remain valid even in the presence of weak identification? This thesis focuses on structural models (simultaneous-equations models) and answers these questions through four essays. The first essay was published in the Journal of Statistical Planning and Inference 138 (2008), 2649-2661. In this essay, we analyze the effects of instrument endogeneity on two identification-robust test statistics: the Anderson and Rubin statistic (AR, 1949) and the Kleibergen statistic (K, 2003), with or without weak instruments. First, when the parameter controlling instrument endogeneity is fixed (does not depend on the sample size), we show that all these procedures are in general consistent against the presence of invalid instruments (that is, they detect the presence of invalid instruments) regardless of instrument quality (strong or weak). We also describe cases in which this consistency may fail, while the asymptotic distribution is modified in a way that could lead to size distortions even in large samples. This includes, in particular, cases where the two-stage least-squares estimator remains consistent but the tests are asymptotically invalid.
Second, when the instruments are locally exogenous (that is, the endogeneity parameter converges to zero as the sample size increases), we show that these tests converge to noncentral chi-squared distributions, whether the instruments are strong or weak. We also characterize the situations in which the noncentrality parameter is zero and the asymptotic distribution of the statistics remains the same as with valid instruments (despite the presence of invalid instruments). The second essay studies the impact of weak instruments on specification tests of the Durbin-Wu-Hausman (DWH) type as well as on the Revankar and Hartley (1973) test. We provide a finite-sample and large-sample analysis of the distribution of these tests under the null hypothesis (size) and under the alternative (power), including cases where identification is deficient or weak (weak instruments). Our finite-sample analysis offers several insights as well as extensions of earlier procedures. In particular, the characterization of the finite-sample distribution of these statistics allows the construction of exact Monte Carlo tests of exogeneity even with non-Gaussian errors. We show that these tests are typically robust to weak instruments (size is controlled). Moreover, we provide a characterization of the power of the tests that clearly exhibits the factors determining power. We show that the tests have no power when all the instruments are weak [similar to Guggenberger (2008)]. However, power exists as long as at least one instrument is strong. Guggenberger's (2008) conclusion concerns the case where all instruments are weak (a case of minor practical interest). Our asymptotic theory under weakened assumptions confirms the finite-sample theory. Furthermore, we present a Monte Carlo analysis indicating that: (1) the ordinary least-squares estimator is more efficient than two-stage least squares when the instruments are weak and the endogeneity is moderate [a conclusion similar to that of Kiviet and Niemczyk (2007)]; (2) pre-test estimators based on exogeneity tests perform very well compared with two-stage least squares. This suggests that the instrumental-variables method should be applied only when one is confident of having strong instruments. Hence, the conclusions of Guggenberger (2008) are mixed and could be misleading. We illustrate our theoretical results through simulation experiments and two empirical applications: the relationship between trade openness and economic growth, and the well-known problem of returns to education. The third essay extends the Wald-type exogeneity test proposed by Dufour (1987) to cases where the regression errors have a non-normal distribution. We propose a new version of the earlier test that is valid even in the presence of non-Gaussian errors. Unlike the usual exogeneity test procedures (the Durbin-Wu-Hausman and Revankar-Hartley tests), the Wald test makes it possible to address a problem that is common in empirical work, namely testing the partial exogeneity of a subset of variables.
We propose two new pre-test estimators based on the Wald test that perform better (in terms of mean squared error) than the usual IV estimator when the instrumental variables are weak and the endogeneity is moderate. We also show that this test can serve as an instrumental-variable selection procedure. We illustrate the theoretical results with two empirical applications: the well-known wage-equation model [Angrist and Krueger (1991, 1999)] and returns to scale [Nerlove (1963)]. Our results suggest that a mother's education explains her son's dropping out of school, that output is an endogenous variable in the estimation of the firm's cost, and that the price of fuel is a valid instrument for output. The fourth essay solves two very important problems in the econometric literature. First, although the original or extended Wald test makes it possible to construct confidence regions and to test linear restrictions on covariances, it assumes that the model parameters are identified. When identification is weak (instruments weakly correlated with the variable to be instrumented), this test is in general no longer valid. This essay develops an identification-robust (weak-instrument-robust) inference procedure for constructing confidence regions for the covariance matrix between the regression errors and the (possibly endogenous) explanatory variables. We provide analytical expressions for the confidence regions and characterize the necessary and sufficient conditions under which they are bounded. The proposed procedure remains valid even in small samples and is also asymptotically robust to heteroskedasticity and autocorrelation in the errors. The results are then used to develop identification-robust tests of partial exogeneity. Monte Carlo simulations indicate that these tests control size and have power even when the instruments are weak. This allows us to propose a valid instrumental-variable selection procedure even when there is an identification problem. The instrument-selection procedure is based on two new pre-test estimators that combine the usual IV estimator with partial IV estimators. Our simulations show that: (1) just like the ordinary least-squares estimator, the partial IV estimators are more efficient than the usual IV estimator when the instruments are weak and the endogeneity is moderate; (2) the pre-test estimators have overall excellent performance compared with the usual IV estimator. We illustrate our theoretical results with two empirical applications: the relationship between trade openness and economic growth, and the returns-to-education model. In the first application, earlier studies concluded that the instruments were not too weak [Dufour and Taamouti (2007)], whereas they are very weak in the second [Bound (1995), Doko and Dufour (2009)]. In line with our theoretical results, we find unbounded confidence regions for the covariance in the case where the instruments are quite weak.
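To make the identification-robust machinery concrete: the Anderson-Rubin (1949) statistic tests H0: beta = beta0 in y = X beta + u with instruments Z by regressing y - X beta0 on Z; under H0 (with Gaussian errors) it follows an F(k, n-k) distribution regardless of instrument strength. A minimal sketch on simulated data (not the thesis's empirical applications):

```python
import numpy as np
from scipy import stats

def anderson_rubin_test(y, X, Z, beta0):
    """AR statistic for H0: beta = beta0 in y = X @ beta + u,
    robust to weak instruments: F(k, n-k) under H0 with Gaussian errors."""
    n, k = Z.shape
    u0 = y - X @ beta0
    Pu = Z @ np.linalg.lstsq(Z, u0, rcond=None)[0]   # projection of u0 onto Z
    rss_fitted = Pu @ Pu                              # u0' P_Z u0
    rss_resid = u0 @ u0 - rss_fitted                  # u0' M_Z u0
    ar = (rss_fitted / k) / (rss_resid / (n - k))
    return ar, stats.f.sf(ar, k, n - k)

# Simulated just-identified design with a deliberately weak instrument.
rng = np.random.default_rng(42)
n = 500
z = rng.normal(size=(n, 1))
v = rng.normal(size=n)
x = 0.05 * z[:, 0] + v                    # weak first stage
u = 0.8 * v + rng.normal(size=n)          # endogeneity of x
y = 1.0 * x + u
ar, pval = anderson_rubin_test(y, x[:, None], z, np.array([1.0]))
print(f"AR = {ar:.3f}, p = {pval:.3f}")   # size stays controlled at the true beta
```

Inverting this test over a grid of beta0 values yields the identification-robust confidence regions discussed above, which may be unbounded when the instruments are very weak.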
Abstract:
This paper provides an overview of work done in recent years by our research group to fuse multimodal images of the trunk of patients with Adolescent Idiopathic Scoliosis (AIS) treated at Sainte-Justine University Hospital Center (CHU). We first describe our surface acquisition system and introduce a set of clinical measurements (indices) based on the trunk's external shape to quantify its degree of asymmetry. We then describe our 3D reconstruction system of the spine and rib cage from biplanar radiographs and present our methodology for multimodal fusion of MRI, X-ray and external surface images of the trunk. We finally present a physical model of the human trunk, including bone and soft tissue, for the simulation of the surgical outcome on the external trunk shape in AIS.
Abstract:
The present work focuses on the modification of the commonly used thermoplastics polypropylene and polystyrene using nanosilica prepared from a cheap source, sodium silicate. The melt compounding technique has been used for nanocomposite preparation as it is simple and suited to injection moulding. Nanosilica in a polymer matrix provides significant enhancement in strength, stiffness and impact strength. Incorporation of silica particles in a polymer also improves its thermal stability. To achieve better dispersion of fillers in the polymer matrices, the mixing was done at different shear rates. The enhancement in material properties indicates that at higher shear rates there is greater interaction between the particles and the matrix, and that it depends on the filler concentration and the type of polymer used. Nanosilica is a useful filler in thermoplastic polymers and has been applied in automotive applications, electronic appliances and consumer goods. This thesis is divided into six chapters. A general introduction to the topic is given in chapter 1. Salient features of polymer nanocomposites, their synthesis, properties and applications are presented. A review of relevant literature and the scope and objectives are also mentioned in this chapter. The materials used and the various experimental methods and techniques employed in the study are described in chapter 2. Preparation of nanocomposites by melt blending using a Thermo Haake Rheocord, preparation of samples, evaluation of mechanical and thermal properties using UTM and impact testing, characterization using DMA, TGA and DSC, and morphology by SEM are described. The preparation of nanosilica from laboratory scale to pilot plant scale is described in chapter 3. Generation of surface-modified silica, evaluation of kinetic parameters of the synthesis reaction, scale-up of the reactor and modeling of the reactor are also dealt with in this chapter. The modification of the commodity thermoplastic polypropylene using nanosilica is described in chapter 4. Preparation of PP/silica nanocomposites, evaluation of mechanical properties, thermal and crystallization characteristics, water absorption and ageing resistance studies are presented. The modification of polystyrene using the synthesized nanosilica is described in chapter 5. The method of preparation of PS/silica nanocomposites, evaluation of mechanical properties (static and dynamic), thermal properties, melt flow characteristics using the Haake Rheocord, and water absorption and ageing resistance of these nanocomposites are studied.
Abstract:
An asymmetric coplanar strip (ACS) fed dual band F-shaped antenna covering the 2.4/5.2 GHz WLAN bands is presented. The optimized dimensions of the proposed uniplanar antenna are 21 mm × 19 mm when printed on a substrate of dielectric constant 4.4 and height 1.6 mm. The dual band nature of the antenna is brought about by the various current paths in the F-shaped structure and the ground plane. The antenna exhibits nearly omnidirectional radiation characteristics and moderate gain in both the operating bands. Details of the antenna design, simulation, and experimental results are presented and discussed.
Abstract:
A novel fixed-frequency beam-scanning microstrip leaky-wave antenna is reported. Beam scanning at a fixed frequency is achieved by reactive loading. Simulated and measured results show frequency scanning over 80° as well as fixed-frequency beam steering of 68° over the −10 dB impedance band of 4.56–5.06 GHz.
Abstract:
In this context, in search of new materials based on chalcogenide glasses, we have developed a novel technique for the fabrication of chalcogenide nanocomposites, which is presented in this thesis. The technique involves the dissolution of bulk chalcogenide glasses in an amine solvent. This solution casting method retains the attractive optical properties of chalcogenide glasses, enabling new fabrication routes for the realization of large-area thick and thin films at lower cost. Chalcogenide glass fiber geometry opens new possibilities for a large number of applications in optics, like remote temperature measurement, CO2 laser power delivery, optical sensing and single-mode propagation of IR light. We have fabricated new optical polymer fibers doped with chalcogenide glasses which can be used for many optical applications. The present thesis also describes the structural, thermal and optical characterization of certain chalcogenide-based materials prepared by different methods, and their applications.