958 results for Modeling process
Abstract:
The term "Brain Imaging" identi�es a set of techniques to analyze the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are largely used in the study of brain activity. In addition to clinical usage, analysis of brain activity is gaining popularity in others recent �fields, i.e. Brain Computer Interfaces (BCI) and the study of cognitive processes. In this context, usage of classical solutions (e.g. f MRI, PET-CT) could be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons alternative low cost techniques are object of research, typically based on simple recording hardware and on intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where electric potential at the patient's scalp is recorded by high impedance electrodes. In EEG potentials are directly generated from neuronal activity, while in EIT by the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from measurements, EIT and EEG relies on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric �field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a tradeo�ff between physical accuracy and technical feasibility, which currently severely limits the capabilities of these techniques. Moreover elaboration of data recorded requires usage of regularization techniques computationally intensive, which influences the application with heavy temporal constraints (such as BCI). This work focuses on the parallel implementation of a work-flow for EEG and EIT data processing. The resulting software is accelerated using multi-core GPUs, in order to provide solution in reasonable times and address requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
Abstract:
Design parameters, process flows, electro-thermal-fluidic simulations and experimental characterizations of Micro-Electro-Mechanical-Systems (MEMS) suited for gas-chromatographic (GC) applications are presented and thoroughly described in this thesis. Its topic belongs to the research activities in which the Institute for Microelectronics and Microsystems (IMM)-Bologna has been involved for several years: the development of micro-systems for chemical analysis, based on silicon micro-machining techniques and able to analyze complex gaseous mixtures, especially in the field of environmental monitoring. In this regard, attention has been focused on the development of micro-fabricated devices to be employed in a portable mini-GC system for the analysis of aromatic Volatile Organic Compounds (VOC) like Benzene, Toluene, Ethyl-benzene and Xylene (BTEX), i.e. chemical compounds which can significantly affect the environment and human health because of their demonstrated carcinogenicity (benzene) or toxicity (toluene, xylene) even at parts-per-billion (ppb) concentrations. The most significant results achieved through the laboratory functional characterization of the mini-GC system are reported, together with the results of in-field analyses carried out in a station of the Bologna air monitoring network and compared with those provided by a commercial GC system. The development of more advanced prototypes of micro-fabricated devices specifically suited for FAST-GC (silicon capillary columns, an Ultra-Low-Power (ULP) Metal OXide (MOX) sensor, a Thermal Conductivity Detector (TCD)) is also presented, together with the technological processes for their fabrication. The experimentally demonstrated very high sensitivity of ULP-MOX sensors to VOCs, coupled with their extremely low power consumption, makes the developed ULP-MOX sensor the best-performing metal oxide sensor reported in the literature to date, while preliminary test results proved that the developed silicon capillary columns achieve performance comparable to that of the best fused-silica capillary columns. Finally, the development and validation of a coupled electro-thermal Finite Element Model suited for both steady-state and transient analysis of the micro-devices is described; the model was subsequently extended with a fluidic part to investigate device behaviour in the presence of a gas flowing at given volumetric flow rates.
Abstract:
Extrusion is a process used to form long products of constant cross section, in a wide variety of shapes, from simple billets. Aluminum alloys are the materials most commonly processed in the extrusion industry, owing to their deformability and a field of applications that ranges from buildings to aerospace and from design to the automotive industry. These diverse applications imply different requirements, which can be fulfilled by the wide range of available alloys and treatments, from critical structural applications to high-quality surfaces and aesthetic aspects. Whether one or the other is the critical aspect, both depend directly on microstructure. The extrusion process is moreover marked by severe deformations and complex strain gradients, which make controlling microstructure evolution difficult; such control has not yet been fully achieved. Nevertheless, Finite Element modeling has reached a maturity at which it can begin to be used as a tool for investigating and predicting microstructure evolution. This thesis analyzes and models the evolution of microstructure throughout the entire extrusion process for 6XXX-series aluminum alloys. The core of the work was the development of specific tests to investigate microstructure evolution and to validate the model implemented in a commercial FE code. Alongside this, two essential activities were carried out to calibrate the model correctly, going beyond a simple search for fitting parameters, thus leading to an understanding and control of both the code and the process. In this direction, work was also done to build critical know-how in the interpretation of microstructure and extrusion phenomena. It is believed, in fact, that analyzing microstructure evolution without regard to its relevance to the technological aspects of the process would be of little use to industry, as well as ineffective for the interpretation of the results.
Abstract:
Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. Knowledge of the spatial and temporal distribution of CCN in the atmosphere is essential to understand and describe the effects of aerosols in meteorological models. In this study, CCN properties were measured in polluted and pristine air of different continental regions, and the results were parameterized for efficient prediction of CCN concentrations. The continuous-flow CCN counter used for size-resolved measurements of CCN efficiency spectra (activation curves) was calibrated with ammonium sulfate and sodium chloride aerosols for a wide range of water vapor supersaturations (S = 0.068% to 1.27%). A comprehensive uncertainty analysis showed that the instrument calibration depends strongly on the applied particle generation techniques, Köhler model calculations, and water activity parameterizations (relative deviations in S up to 25%). Laboratory experiments and a comparison with other CCN instruments confirmed the high accuracy and precision of the calibration and measurement procedures developed and applied in this study. The mean CCN number concentrations (N_CCN,S) observed in polluted mega-city air and biomass burning smoke (Beijing and Pearl River Delta, China) ranged from 1000 cm⁻³ at S = 0.068% to 16,000 cm⁻³ at S = 1.27%, which is about two orders of magnitude higher than in pristine air at remote continental sites (Swiss Alps, Amazonian rainforest). Effective average hygroscopicity parameters, κ, describing the influence of chemical composition on the CCN activity of aerosol particles were derived from the measurement data. They varied in the range of 0.3 ± 0.2, were size-dependent, and could be parameterized as a function of organic and inorganic aerosol mass fractions. At low S (≤ 0.27%), substantial portions of externally mixed CCN-inactive particles with much lower hygroscopicity were observed in polluted air (fresh soot particles with κ ≈ 0.01). Thus, the aerosol particle mixing state needs to be known for highly accurate predictions of N_CCN,S. Nevertheless, the observed CCN number concentrations could be efficiently approximated using measured aerosol particle number size distributions and a simple κ-Köhler model with a single proxy for the effective average particle hygroscopicity. The relative deviations between observations and model predictions were on average less than 20% when a constant average value of κ = 0.3 was used in conjunction with variable size distribution data. With a constant average size distribution, however, the deviations increased up to 100% and more. The measurement and model results demonstrate that aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the measurement results and parameterizations presented in this study can be directly implemented in detailed process models as well as in large-scale atmospheric and climate models for an efficient description of the CCN activity of atmospheric aerosols.
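The hygroscopicity parameterization referred to above follows single-parameter κ-Köhler theory (Petters and Kreidenweis, 2007); in standard notation (not quoted from the thesis), the equilibrium supersaturation of a droplet of diameter D grown on a dry particle of diameter D_d is

```latex
S(D) = \frac{D^3 - D_d^3}{D^3 - D_d^3\,(1-\kappa)}
       \exp\!\left( \frac{4\,\sigma_{s/a}\, M_w}{R\,T\,\rho_w\, D} \right)
```

where σ_s/a is the surface tension of the solution/air interface, M_w and ρ_w are the molar mass and density of water, R is the gas constant, and T the temperature; κ = 0 recovers the insoluble, wettable-particle limit.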
Abstract:
The formation of a market price for an asset can be understood as a superposition of the individual actions of the market participants, which cumulatively generate supply and demand. This is comparable, in statistical physics, to the emergence of macroscopic properties produced by microscopic interactions between the system components involved. The distribution of price changes in financial markets differs markedly from a Gaussian distribution. This leads to empirical peculiarities of the price process, which include, besides scaling behavior, non-trivial correlation functions and temporally clustered volatility. This thesis focuses on the analysis of financial market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations of a time series is developed. With this methodology, significant evidence is found that typical behavioral patterns of financial market participants manifest themselves on short time scales; that is, the reaction to a given price history is not purely random, but rather similar price histories provoke similar reactions. Starting from the investigation of complex correlations in financial market time series, the question is addressed of which properties change at the transition from a positive to a negative trend. An empirical quantification via rescaling yields the result that, independent of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular. Over nine orders of magnitude in time, these properties are also independent of the market analyzed: trends that persist only for seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial market bubbles and their collapses, since trends on small time scales occur far more frequently. In addition, a Monte Carlo-based simulation of the financial market is analyzed and extended in order to reproduce the empirical properties and to gain insight into their origins, which are to be sought partly in market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive methods, a substantial reduction in computing time is achieved through parallelization on a graphics card architecture. To demonstrate the wide range of application areas of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant runtime advantages. Partial results of this work are published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
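For orientation, a minimal CPU sketch (NumPy; not the ported GPU code) of the checkerboard Metropolis update that makes the Ising model parallelizable on graphics cards, since the two sublattices can be updated independently:

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One checkerboard Metropolis sweep of the 2D Ising model.

    Updating the two sublattices alternately keeps each batch of
    updates independent -- the same decomposition a GPU kernel exploits.
    """
    ii, jj = np.indices(spins.shape)
    for parity in (0, 1):
        mask = (ii + jj) % 2 == parity
        # Sum of the four nearest neighbors with periodic boundaries.
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
               np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr          # energy cost of flipping each spin
        flip = mask & (rng.random(spins.shape) < np.exp(-beta * dE))
        spins[flip] *= -1
    return spins

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(64, 64))
for _ in range(100):
    metropolis_sweep(spins, beta=0.44, rng=rng)   # near the critical point
print("magnetization per spin:", spins.mean())
```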
Abstract:
In recent years, attention to energy efficiency in historic buildings has grown, as several research projects have taken place across Europe. The effort to combine the need to preserve buildings, their value and their character with the need to reduce energy consumption and improve indoor comfort brings together two points of view that are usually in contradiction: building engineering and conservation institutions. The results are encouraging, because common ground is emerging, although the respective demands still need to be balanced. From this experience it is clear that many questions remain for the building physicist regarding correct assessment: of the energy consumption of this class of buildings, of the effectiveness of the measures that could be adopted, and more. This thesis contributes to answering these questions by developing a procedure for analyzing historic buildings. The procedure provides a guideline for the energy audit of historic buildings, including the experimental activities needed to deal with the uncertainty in estimating the energy balance. It offers a procedure for simulating the energy balance of a building with a validated dynamic model, together with a calibration procedure to increase the accuracy of the model. An approach to designing energy efficiency measures through an optimization that considers different aspects is also presented. The whole process is applied to a real case study to give the reader a practical understanding.
Abstract:
Parkinson’s disease is a neurodegenerative disorder caused by the death of the dopaminergic neurons of the substantia nigra in the basal ganglia. The process that leads to these neural alterations is still unknown. Parkinson’s disease affects above all the motor sphere, with a wide array of impairments such as bradykinesia, akinesia, tremor, postural instability and singular phenomena such as freezing of gait. Moreover, in the last few years increasing attention has been paid to the fact that degeneration in the basal ganglia circuitry induces not only motor but also cognitive alterations, not necessarily implying dementia, and that dopamine loss has further implications through dopamine-driven synaptic plasticity. At present, no neuroprotective treatment is available, and even if dopamine-replacement therapies as well as electrical deep brain stimulation can improve patients' living conditions, they often present side effects in the long term and cannot recover the neural loss, which instead continues to advance. In the present thesis both motor and cognitive aspects of Parkinson’s disease and the basal ganglia circuitry were investigated: first focusing on sensory and balance issues in Parkinson’s disease by means of a new instrumented method, based on inertial sensors, that provides further information about postural control and the postural strategies used to maintain balance; then applying this newly developed approach to assess balance control in mild and severe patients, both ON and OFF levodopa replacement. Given the inability of levodopa to resolve balance issues, and new physiological findings underlining the importance of non-dopaminergic neurotransmitters in Parkinson’s disease, an original computational model was then developed focusing on acetylcholine, the most promising neurotransmitter according to physiology, and its role in synaptic plasticity. The rationale of this thesis is that a multidisciplinary approach can yield insight into features of Parkinson’s disease that are still unresolved.
Abstract:
In the oil and gas industry, pore-scale imaging and simulation are on the verge of becoming routine applications. Their further potential lies in the environmental field, e.g. for the transport and fate of contaminants in the subsurface, the storage of carbon dioxide, and the natural attenuation of contaminants in soils. X-ray computed tomography (XCT) is a non-destructive 3D imaging technique that is frequently used to investigate the internal structure of geological samples. The first goal of this dissertation was the implementation of an image processing technique that removes the beam hardening of X-ray computed tomography and simplifies the segmentation of its data. The second goal of this work was to investigate the combined effects of pore space characteristics and pore tortuosity, together with flow simulation and transport modeling in pore spaces using the lattice Boltzmann method. In a cylindrical geological sample, the position of each phase could be extracted based on the observation that beam hardening in the reconstructed images is a radial function from the sample edge to the center, and the different phases could be segmented automatically. Furthermore, beam hardening effects of arbitrarily shaped objects were corrected by a surface-fitting algorithm. The least squares support vector machine (LSSVM) method is characterized by a modular structure and is very well suited to pattern recognition and classification. For this reason, the LSSVM method was implemented as a pixel-based classification method. This algorithm is able to classify complex geological samples correctly, but in such cases requires longer computing times, so that multidimensional training data sets must be used. The dynamics of the immiscible phases air and water were investigated by a combination of pore morphology and the lattice Boltzmann method for drainage and imbibition processes in 3D data sets of soils obtained by synchrotron-based XCT. Although pore morphology is a simple method of fitting spheres into the available pore space, it can nevertheless explain the complex capillary hysteresis as a function of water saturation. Hysteresis was observed for the capillary pressure and the hydraulic conductivity, caused mainly by the connected pore networks and the available pore size distribution. The hydraulic conductivity is a function of the water saturation level and was compared with macroscopic calculations from empirical models. The data agree well, especially for high water saturations. In order to predict the presence of pathogens in groundwater and wastewater, the influence of grain size, pore geometry and fluid flow velocity was studied in a soil aggregate, e.g. with the microorganism Escherichia coli. The asymmetric and long-tailed breakthrough curves, especially at higher water saturations, were caused by dispersive transport due to the connected pore network and the heterogeneity of the flow field. It was observed that the biocolloid residence time is a function of the pressure gradient as well as of the colloid size.
Our modeling results agree very well with previously published data.
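A minimal sketch of the kind of radial beam-hardening correction described above, assuming a reconstructed 2D slice of a cylindrical sample (the function name, polynomial order, and synthetic data are illustrative, not from the dissertation):

```python
import numpy as np

def correct_beam_hardening(slice2d, order=4):
    """Remove the radial brightening (cupping) typical of beam hardening.

    Fits a polynomial to the gray value as a function of distance from
    the sample center and subtracts the fitted radial trend.
    """
    ny, nx = slice2d.shape
    yy, xx = np.indices((ny, nx))
    r = np.hypot(yy - ny / 2.0, xx - nx / 2.0).ravel()
    g = slice2d.ravel().astype(float)
    coeffs = np.polyfit(r, g, order)          # radial intensity trend
    trend = np.polyval(coeffs, r).reshape(ny, nx)
    return slice2d - trend + trend.mean()     # keep the overall gray level

# Illustrative use on a synthetic slice with an artificial cupping artifact.
yy, xx = np.indices((256, 256))
r = np.hypot(yy - 128, xx - 128)
slice2d = 100.0 + 0.002 * r**2 + np.random.default_rng(2).normal(0, 1, r.shape)
flat = correct_beam_hardening(slice2d)
```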
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flowrates, while the second mode is driven by high engine ΔP and high EGR flowrates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
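As a sketch of the kind of transport-delay compensation involved in processing transient data (illustrative only; the thesis's actual delay and lag methods are not reproduced here), a cross-correlation alignment of a delayed analyzer signal against a reference might look like:

```python
import numpy as np

def align_by_cross_correlation(reference, lagged, max_lag):
    """Estimate the transport delay of one transient signal relative to
    another and shift it into alignment (a first-order stand-in for
    delay/lag compensation)."""
    lags = np.arange(-max_lag, max_lag + 1)
    ref = reference - reference.mean()
    sig = lagged - lagged.mean()
    # Correlate ref[t] with sig[t + k] for each candidate lag k.
    corr = [np.dot(ref[max(0, -k):len(ref) - max(0, k)],
                   sig[max(0, k):len(sig) - max(0, -k)]) for k in lags]
    best = lags[int(np.argmax(corr))]
    return np.roll(lagged, -best), best

# Illustrative use: an emission trace delayed by 25 samples in the line.
t = np.linspace(0, 10, 1000)
nox = np.sin(t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
delayed = np.roll(nox, 25)                  # e.g. analyzer transport delay
aligned, delay = align_by_cross_correlation(nox, delayed, max_lag=50)
```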
Abstract:
Fuel cells are currently a topic of high interest in the scientific community because of their ability to efficiently convert chemical energy into electrical energy. This thesis focuses on solid oxide fuel cells (SOFCs) because of their fuel flexibility, and is specifically concerned with the anode properties of SOFCs. The anodes are composed of a ceramic material (yttrium-stabilized zirconia, or YSZ) and a conducting material. Recent research has shown that an infiltrated anode may offer better performance at a lower cost. This thesis focuses on the creation of a model of an infiltrated anode that mimics the underlying physics of the production process. Using the model, several key parameters for anode performance are considered: the initial volume fraction of YSZ in the slurry before sintering, the final porosity of the composite anode after sintering, and the size of the YSZ and conducting particles in the composite. The performance measures of the anode, namely percolation threshold and effective conductivity, are analyzed as functions of these input parameters. Simple two- and three-dimensional percolation models are used to determine the conditions under which the full infiltrated anode should be investigated. These simpler models showed that the aspect ratio of the anode has no effect on the threshold or effective conductivity, and that cell sizes of 30³ are needed to obtain accurate conductivity values. The full model of the infiltrated anode is able to predict the performance of SOFC anodes, and it shows that increasing the size of the YSZ particles decreases the percolation threshold and increases the effective conductivity at low conductor loadings. Similar trends are seen for a decrease in final porosity and a decrease in the initial volume fraction of YSZ.
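The infiltrated-anode model itself is not reproduced here, but the percolation-threshold measurement it relies on can be sketched generically for site percolation on a cubic lattice (lattice size and occupancy values are illustrative):

```python
import numpy as np
from scipy import ndimage

def spans(lattice):
    """Check whether occupied sites form a connected path between the
    top and bottom faces of a cubic lattice (site percolation)."""
    labels, _ = ndimage.label(lattice)        # 6-connected clusters
    return bool(np.intersect1d(labels[0][labels[0] > 0],
                               labels[-1][labels[-1] > 0]).size)

def percolation_probability(p, n=30, trials=20, seed=0):
    """Fraction of random n^3 lattices at occupancy p that percolate."""
    rng = np.random.default_rng(seed)
    return np.mean([spans(rng.random((n, n, n)) < p) for _ in range(trials)])

# The site percolation threshold on a simple cubic lattice is near p = 0.31.
for p in (0.25, 0.30, 0.35):
    print(p, percolation_probability(p))
```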
Abstract:
Despite the impact of red blood cell (RBC) life-spans in disease areas such as diabetes or anemia of chronic kidney disease, there is no consensus on how best to describe the process quantitatively. Several models have been proposed to explain the elimination process of RBCs: a random destruction process, a homogeneous life-span model, or a series of four transit compartments. The aim of this work was to explore the different models that have been proposed in the literature, and modifications to them. The impact of model choice on the prediction of future outcomes in the above-mentioned areas was also investigated. Data from both indirect (clinical data) and direct (biotin-labeled data) life-span measurement methods were analyzed using non-linear mixed effects models. The analysis showed that: (1) predictions from non-steady-state data will depend on the RBC model chosen; (2) the transit compartment model, which allows for variation in life-span across the RBC population, describes RBC survival data better than the random destruction or homogeneous life-span models; and (3) additionally incorporating random destruction patterns, although improving the description of the RBC survival data, does not appear to provide a marked improvement when describing clinical data.
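A minimal sketch of a transit-compartment life-span model of the kind compared above (parameter values are illustrative): cells mature through a chain of compartments, so individual life-spans follow a gamma distribution rather than being identical (homogeneous model) or memoryless (random destruction):

```python
import numpy as np
from scipy.integrate import solve_ivp

def transit_rbc(t, x, k_in, ktr, n):
    """n-transit-compartment RBC model: production k_in feeds the first
    compartment, and cells pass through the chain at rate ktr, giving a
    gamma-distributed life-span with mean n / ktr."""
    dx = np.empty(n)
    dx[0] = k_in - ktr * x[0]
    dx[1:] = ktr * (x[:-1] - x[1:])
    return dx

n, ktr = 4, 4.0 / 120.0        # mean life-span of 120 days over 4 stages
k_in = 1.0                     # illustrative production rate
x0 = np.full(n, k_in / ktr)    # steady-state initial condition
sol = solve_ivp(transit_rbc, (0, 300), x0, args=(k_in, ktr, n), max_step=1.0)
total_rbc = sol.y.sum(axis=0)  # total circulating RBC signal over time
```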
Abstract:
We propose a novel class of models for functional data exhibiting skewness or other shape characteristics that vary with spatial or temporal location. We use copulas so that the marginal distributions and the dependence structure can be modeled independently. Dependence is modeled with a Gaussian or t-copula, so that there is an underlying latent Gaussian process. We model the marginal distributions using the skew t family. The mean, variance, and shape parameters are modeled nonparametrically as functions of location. A computationally tractable inferential framework for estimating heterogeneous asymmetric or heavy-tailed marginal distributions is introduced. This framework provides a new set of tools for increasingly complex data collected in medical and public health studies. Our methods were motivated by and are illustrated with a state-of-the-art study of neuronal tracts in multiple sclerosis patients and healthy controls. Using the tools we have developed, we were able to find those locations along the tract most affected by the disease. However, our methods are general and highly relevant to many functional data sets. In addition to the application to one-dimensional tract profiles illustrated here, higher-dimensional extensions of the methodology could have direct applications to other biological data including functional and structural MRI.
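A minimal sketch of the copula construction described above (with skew-normal margins standing in for the skew-t family, and an exponential latent correlation kernel assumed purely for illustration):

```python
import numpy as np
from scipy import stats

def sample_copula_curves(n_curves, locations, mu, sigma, alpha,
                         length_scale=0.1):
    """Draw functional data with a Gaussian copula over locations and
    location-varying skewed margins (skew-normal as a stand-in for the
    skew-t family used in the paper)."""
    p = len(locations)
    # Latent Gaussian process: exponential correlation over location.
    corr = np.exp(-np.abs(np.subtract.outer(locations, locations))
                  / length_scale)
    z = stats.multivariate_normal.rvs(mean=np.zeros(p), cov=corr,
                                      size=n_curves)
    u = stats.norm.cdf(z)                     # copula step: uniform margins
    # Each location gets its own marginal mean, scale, and skewness.
    return stats.skewnorm.ppf(u, a=alpha, loc=mu, scale=sigma)

loc = np.linspace(0.0, 1.0, 50)               # positions along the tract
curves = sample_copula_curves(
    200, loc,
    mu=np.sin(2 * np.pi * loc),               # location-varying mean
    sigma=0.2 + 0.1 * loc,                    # location-varying scale
    alpha=4.0 * loc)                          # skewness grows along the tract
```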
Abstract:
Our dynamic capillary electrophoresis model, which uses material-specific input data for the estimation of electroosmosis, was applied to investigate fundamental aspects of isoelectric focusing (IEF) in capillaries or microchannels made from bare fused silica (FS), FS coated with a sulfonated polymer, polymethylmethacrylate (PMMA) and poly(dimethylsiloxane) (PDMS). Input data were generated via determination of the electroosmotic flow (EOF) using buffers of varying pH and ionic strength. Two models are distinguished: one that neglects changes of ionic strength and one that includes the dependence between electroosmotic mobility and ionic strength. For each configuration, the models provide insight into the magnitude and dynamics of electroosmosis. The contribution of each electrophoretic zone to the net EOF is thereby visualized, and the amount of EOF required for the detection of the zone structures at a particular location along the capillary, including at its end for MS detection, is predicted. For bare FS, PDMS and PMMA, simulations reveal that the EOF decreases with time and that the entire IEF process is characterized by the asymptotic formation of a stationary steady-state zone configuration in which electrophoretic transport and electroosmotic zone displacement are opposite and of equal magnitude. The location at which the boundary between anolyte and the most acidic carrier ampholyte becomes immobilized depends on the EOF, i.e. on the capillary material and the anolyte. The overall time intervals for reaching this state in microchannels made of PDMS and PMMA are predicted to be similar and about twice as long as in uncoated FS. Additional mobilization is required for the detection of the entire pH gradient at the capillary end. Concomitant electrophoretic mobilization, with an acid as coanion in the catholyte, is shown to provide sufficient additional cathodic transport for that purpose. FS capillaries dynamically double-coated with polybrene and poly(vinylsulfonate) are predicted to provide sufficient electroosmotic pumping for detection of the entire IEF gradient at the cathodic column end.
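The zone-wise EOF contribution mentioned above can be summarized, in the usual thin-double-layer approximation, by noting that flow continuity forces a single bulk velocity equal to the length average of the local electroosmotic slip (standard notation, not quoted from the abstract):

```latex
v_{EOF}(t) \;=\; \frac{1}{L}\int_0^L \mu_{eo}(x,t)\, E(x,t)\,dx
```

where L is the capillary length, μ_eo(x,t) the local electroosmotic mobility (set by the wall material and the local buffer composition), and E(x,t) the local electric field; each electrophoretic zone thus contributes in proportion to the capillary length it occupies.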
Abstract:
EPON 862 is an epoxy resin which is cured with the hardening agent DETDA to form a crosslinked epoxy polymer used as a component in modern aircraft structures. These crosslinked polymers are often exposed to prolonged periods at temperatures below the glass transition range, which cause physical aging to occur. Because physical aging can compromise the performance of epoxies and their composites, and because experimental techniques cannot provide all of the physical insight needed to fully understand physical aging, efficient computational approaches to predict the effects of physical aging on thermo-mechanical properties are needed. In this study, Molecular Dynamics and Molecular Minimization simulations are used to establish well-equilibrated, validated molecular models of the EPON 862-DETDA epoxy system with a range of crosslink densities, using a united-atom force field. These simulations are subsequently used to predict the glass transition temperature, thermal expansion coefficients, and elastic properties of each of the crosslinked systems for validation of the modeling techniques. The results indicate that the glass transition temperature and the elastic properties increase with increasing crosslink density, while the thermal expansion coefficient decreases with crosslink density, both above and below the glass transition temperature. The results also indicate that there may be an upper limit to the crosslink density that can realistically be achieved in epoxy systems. After evaluation of the thermo-mechanical properties, a method is developed to efficiently establish molecular models of epoxy resins that represent the corresponding real molecular structure at specific aging times. Although this approach does not model the physical aging process itself, it is useful for establishing a molecular model that resembles the physically-aged state, for further use in predicting thermo-mechanical properties as a function of aging time. Based on the results, an equation has been derived that directly correlates aging time with the aged volume of the molecular model. This equation can be helpful for modelers who want to study properties of epoxy resins at different levels of aging but have little information about the volume shrinkage occurring during physical aging.
Abstract:
Intraneural ganglion cysts expand within a nerve, causing neurological deficits in afflicted patients. Modeling the propagation of these cysts, which originate in the articular branch and then expand radially outward, will help validate the articular theory and ultimately allow for more purposeful treatment of this condition. In Finite Element Analysis, traditional Lagrangian meshing methods fail to model the excessive deformation that occurs during the propagation of these cysts. This report explores manual adaptive remeshing as a way to retain Lagrangian meshing while circumventing the severe mesh distortions typical of a Lagrangian mesh under large deformation. Manual adaptive remeshing is the process of remeshing a deformed meshed part and then reapplying the loads, in order to achieve a larger deformation than a single mesh can sustain without excessive distortion. The methods of manual adaptive remeshing described in this Master’s Report are sufficient for modeling large deformations.
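The report's FE workflow is tool-specific and not reproduced here, but the remesh-and-reload loop it describes can be illustrated with a self-contained 1D toy (all names and numbers are illustrative): deformation is applied in increments, and whenever element distortion exceeds a tolerance the deformed domain is remeshed and the state is mapped onto the fresh mesh before loading continues.

```python
import numpy as np

def remesh(nodes, field, n_el):
    """Create a fresh uniform mesh over the current (deformed) domain and
    interpolate the nodal field onto it -- the manual remeshing step."""
    new_nodes = np.linspace(nodes[0], nodes[-1], n_el + 1)
    return new_nodes, np.interp(new_nodes, nodes, field)

n_el = 10
nodes = np.linspace(0.0, 1.0, n_el + 1)
u = np.zeros_like(nodes)                     # accumulated displacement
for increment in range(20):
    du = 0.05 * nodes**2                     # nonuniform deformation step
    nodes, u = nodes + du, u + du
    lengths = np.diff(nodes)
    if lengths.max() / lengths.min() > 1.5:  # mesh distortion check
        nodes, u = remesh(nodes, u, n_el)    # reset mesh, keep the state
print("final domain length:", nodes[-1], "max |u|:", np.abs(u).max())
```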