22 results for Ligand-steered Modeling Method

in Digital Commons - Michigan Tech


Relevance: 100.00%

Abstract:

Materials are inherently multi-scale in nature, exhibiting distinct characteristics at length scales ranging from individual atoms to the bulk material. There is no widely accepted predictive multi-scale modeling technique that spans from the atomic level to the bulk and relates the effects of structure at the nanometer (10⁻⁹ m) scale to macro-scale properties. Traditional engineering treats matter as a continuum with no internal structure. In contrast, physicists have dealt with matter in its discrete structure at small length scales to understand the fundamental behavior of materials. Multiscale modeling is of great scientific and technical importance because it can aid in designing novel materials, such as multi-functional materials, whose properties are tailored to a specific application. Polymer nanocomposites have the potential to provide significant increases in mechanical properties relative to current polymers used for structural applications. Nanoscale reinforcements can increase the effective interface between the reinforcement and the matrix by orders of magnitude for a given reinforcement volume fraction relative to traditional micro- or macro-scale reinforcements. To facilitate the development of polymer nanocomposite materials, constitutive relationships must be established that predict the bulk mechanical properties of these materials as a function of the molecular structure. A computational hierarchical multiscale modeling technique is developed to study the bulk-level constitutive behavior of polymeric materials as a function of their molecular chemistry. Parameters and modeling techniques spanning computational chemistry to continuum mechanics are utilized in the current modeling method, and the cause-and-effect relationships among the parameters are studied to establish an efficient modeling framework. The proposed methodology is applied to three different polymers and validated using experimental data available in the literature.
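
The abstract does not spell out the scale-bridging step, so the following is only a minimal illustrative sketch of the simplest kind of micro-to-macro homogenization such a hierarchy ultimately feeds into: a Voigt/Reuss rule-of-mixtures estimate of composite stiffness from phase properties. The moduli and volume fraction used here are hypothetical placeholders, not values from the thesis.

```python
def voigt_reuss_bounds(E_matrix, E_reinforcement, vf_reinforcement):
    """Upper (Voigt) and lower (Reuss) bounds on the effective Young's modulus
    of a two-phase composite, given phase moduli and reinforcement volume fraction."""
    vf_m = 1.0 - vf_reinforcement
    E_voigt = vf_m * E_matrix + vf_reinforcement * E_reinforcement          # iso-strain bound
    E_reuss = 1.0 / (vf_m / E_matrix + vf_reinforcement / E_reinforcement)  # iso-stress bound
    return E_voigt, E_reuss

# Hypothetical inputs: a 3 GPa polymer matrix with 5 vol% of a 1 TPa nanoscale reinforcement.
upper, lower = voigt_reuss_bounds(E_matrix=3.0e9, E_reinforcement=1.0e12, vf_reinforcement=0.05)
print(f"Effective modulus bounds: {lower/1e9:.2f} - {upper/1e9:.2f} GPa")
```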

Relevance: 100.00%

Abstract:

The Modeling method of teaching has demonstrated well-documented success in the improvement of student learning. The teacher/researcher in this study was introduced to Modeling through the use of a technique called White Boarding. Without formal training, the researcher began using the White Boarding technique for a limited number of laboratory experiences with his high school physics classes. The question that arose and was investigated in this study is “What specific aspects of the White Boarding process support student understanding?” For the purposes of this study, the White Boarding process was broken down into three aspects – the Analysis of data through the use of Logger Pro software, the Preparation of White Boards, and the Presentations each group gave about their specific lab data. The lab used in this study, an Acceleration of Gravity Lab, was chosen because of the documented difficulties students experience in the graphing of motion. In the lab, students filmed a given motion, utilized Logger Pro software to analyze the motion, prepared a White Board that described the motion with position-time and velocity-time graphs, and then presented their findings to the rest of the class. The Presentation included a class discussion with minimal contribution from the teacher. The three different aspects of the White Boarding experience – Analysis, Preparation, and Presentation – were compared through the use of student learning logs, video analysis of the Presentations, and follow-up interviews with participants. The information and observations gathered were used to determine the level of understanding of each participant during each phase of the lab. The researcher then looked for improvement in the level of student understanding, the number of “aha” moments students had, and the students’ perceptions about which phase was most important to their learning. The results suggest that while all three phases of the White Boarding experience play a part in the learning process for students, the Presentations provided the most significant changes. The implications for instruction are discussed.

Relevance: 100.00%

Abstract:

Engine manufacturers need computationally efficient and accurate predictive combustion modeling tools that can be integrated into engine simulation software for the assessment of combustion system hardware designs and early development of engine calibrations. This thesis discusses the process for developing and validating, from experimental data, a combustion modeling tool for a gasoline direct-injection, spark-ignition engine with variable valve timing, lift, and duration valvetrain hardware. Data were correlated and regressed using accepted methods for calculating the turbulent flow and flame propagation characteristics of an internal combustion engine. A non-linear regression modeling method was used to develop a combustion model that determines the fuel mass burn rate at multiple points during the combustion process. The computational fluid dynamics software Converge© was used to simulate the 3-D combustion system, port, and piston geometry and correlate it to the turbulent flow development within the cylinder, so that the experimentally derived turbulent flow parameters through the intake, compression, and expansion processes are properly predicted. The engine simulation software GT-Power© is then used to determine the 1-D flow characteristics of the engine hardware being tested and to correlate the regressed combustion modeling tool to experimental data to assess its accuracy. The results of the combustion modeling tool show accurate trends, capturing the combustion sensitivities to turbulent flow, thermodynamic, and internal residual effects with changes in intake and exhaust valve timing, lift, and duration.
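
As an illustration of the kind of non-linear regression step described above, the sketch below fits a hypothetical power-law correlation for a burn duration against two non-dimensional predictors using scipy. The functional form, the predictor names, and the data are placeholders, not the thesis's actual correlation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical correlation form: burn duration as a power law in two
# non-dimensional groups (e.g., a turbulence ratio and a dilution ratio).
def burn_duration_model(X, c0, c1, c2):
    pi_turb, pi_dilution = X
    return c0 * pi_turb**c1 * pi_dilution**c2

rng = np.random.default_rng(0)
pi_turb = rng.uniform(0.5, 3.0, 200)          # placeholder operating-condition data
pi_dilution = rng.uniform(0.8, 1.5, 200)
true = burn_duration_model((pi_turb, pi_dilution), 20.0, -0.4, 1.2)
measured = true + rng.normal(0.0, 0.5, 200)   # synthetic "experimental" burn durations (deg CA)

coeffs, _ = curve_fit(burn_duration_model, (pi_turb, pi_dilution), measured, p0=[10.0, -0.5, 1.0])
print("Fitted coefficients:", coeffs)
```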

Relevance: 40.00%

Abstract:

The study of volcano deformation data can provide information on magma processes and help assess the potential for future eruptions. In employing inverse deformation modeling on these data, we attempt to characterize the geometry, location, and volume/pressure change of a deformation source. Techniques currently used to model sheet intrusions (e.g., dikes and sills) often require significant a priori assumptions about source geometry and can require testing a large number of parameters. Moreover, surface deformations are a non-linear function of the source geometry and location, which requires the use of Monte Carlo inversion techniques and leads to long computation times. Recently, ‘displacement tomography’ models have been used to characterize magma reservoirs by inverting surface deformation data for volume changes using a grid of point sources in the subsurface. The computations involved in these models are less intensive because no assumptions are made about the source geometry and location, and the relationship between the point sources and the surface deformation is linear. In this project, seeking a less computationally intensive technique for fracture sources, we tested whether this displacement tomography method for reservoirs could be used for sheet intrusions. We began by simulating the opening of three synthetic dikes of known geometry and location using an established deformation model for fracture sources. We then sought to reproduce the displacements and volume changes undergone by the fractures using the sources employed in the tomography methodology. Results of this validation indicate that the volumetric point sources are not appropriate for locating fracture sources; however, they may provide useful qualitative information on the volume changes occurring in the surrounding rock and may therefore indirectly indicate the source location.
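
As a rough illustration of the linear ‘displacement tomography’ idea described above, the sketch below builds a Green's-function matrix of vertical surface displacements for a grid of volumetric (Mogi-type) point sources and solves for their volume changes by least squares. The Mogi half-space expression and the geometry here are textbook placeholders, not the specific models used in the project.

```python
import numpy as np

NU = 0.25  # Poisson's ratio of the elastic half-space

def mogi_uz(x_obs, y_obs, x_src, y_src, depth, dV=1.0):
    """Vertical surface displacement of a volumetric point (Mogi) source
    in an elastic half-space, per volume change dV."""
    r2 = (x_obs - x_src)**2 + (y_obs - y_src)**2
    R3 = (r2 + depth**2)**1.5
    return (1.0 - NU) / np.pi * dV * depth / R3

# Observation points on the surface and a coarse grid of candidate point sources.
xo, yo = np.meshgrid(np.linspace(-5e3, 5e3, 15), np.linspace(-5e3, 5e3, 15))
xo, yo = xo.ravel(), yo.ravel()
xs, ys = np.meshgrid(np.linspace(-3e3, 3e3, 7), np.linspace(-3e3, 3e3, 7))
xs, ys = xs.ravel(), ys.ravel()
depth = 2e3  # m; single source depth kept fixed for simplicity

# Linear forward problem: d = G m, with m the volume change of each point source.
G = np.column_stack([mogi_uz(xo, yo, xsi, ysi, depth) for xsi, ysi in zip(xs, ys)])
d = mogi_uz(xo, yo, 500.0, -250.0, depth, dV=1e6)   # synthetic data from one "true" source

m, *_ = np.linalg.lstsq(G, d, rcond=None)           # least-squares volume changes
print("Recovered total volume change:", m.sum())
```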

Relevance: 30.00%

Abstract:

EPON 862 is an epoxy resin that is cured with the hardening agent DETDA to form a crosslinked epoxy polymer and is used as a component in modern aircraft structures. These crosslinked polymers are often exposed to prolonged periods at temperatures below the glass transition range, which causes physical aging to occur. Because physical aging can compromise the performance of epoxies and their composites, and because experimental techniques cannot provide all of the physical insight needed to fully understand physical aging, efficient computational approaches to predict the effects of physical aging on thermo-mechanical properties are needed. In this study, Molecular Dynamics and Molecular Minimization simulations are used to establish well-equilibrated, validated molecular models of the EPON 862-DETDA epoxy system with a range of crosslink densities using a united-atom force field. These simulations are subsequently used to predict the glass transition temperature, thermal expansion coefficients, and elastic properties of each of the crosslinked systems for validation of the modeling techniques. The results indicate that the glass transition temperature and elastic properties increase with increasing crosslink density, while the thermal expansion coefficient decreases with crosslink density, both above and below the glass transition temperature. The results also indicate that there may be an upper limit to the crosslink density that can be realistically achieved in epoxy systems. After evaluation of the thermo-mechanical properties, a method is developed to efficiently establish molecular models of epoxy resins that represent the corresponding real molecular structure at specific aging times. Although this approach does not model the physical aging process itself, it is useful for establishing a molecular model that resembles the physically aged state for further use in predicting thermo-mechanical properties as a function of aging time. Based on the results, an equation is proposed that directly correlates aging time with the aged volume of the molecular model. This equation can be helpful for modelers who want to study properties of epoxy resins at different levels of aging but have little information about the volume shrinkage that occurs during physical aging.
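
One standard way such simulations extract the glass transition temperature and thermal expansion coefficients is to fit separate lines to specific volume versus temperature data above and below the transition and take their intersection as Tg; the slopes give the volumetric CTE in each regime. The sketch below shows that analysis on synthetic data; the break point and coefficients are placeholders, not results from this study.

```python
import numpy as np

# Synthetic specific-volume vs. temperature data with a slope change near 400 K.
T = np.linspace(250, 550, 61)
v = np.where(T < 400, 0.90 + 2.0e-4 * (T - 250), 0.93 + 5.0e-4 * (T - 400))

def fit_tg(T, v, T_split=400.0):
    """Fit straight lines below/above a trial split and return their intersection (Tg)
    and the volumetric thermal expansion coefficients in each regime."""
    glassy = T < T_split
    m1, b1 = np.polyfit(T[glassy], v[glassy], 1)
    m2, b2 = np.polyfit(T[~glassy], v[~glassy], 1)
    Tg = (b2 - b1) / (m1 - m2)
    v_Tg = m1 * Tg + b1
    return Tg, m1 / v_Tg, m2 / v_Tg   # CTE = (1/v) dv/dT

Tg, cte_glassy, cte_rubbery = fit_tg(T, v)
print(f"Tg ~ {Tg:.0f} K, CTE below/above: {cte_glassy:.2e}, {cte_rubbery:.2e} 1/K")
```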

Relevance: 30.00%

Abstract:

Intraneural ganglion cysts expand within a nerve, causing neurological deficits in afflicted patients. Modeling the propagation of these cysts, which originate in the articular branch and then expand radially outward, will help support the articular theory and ultimately allow for more purposeful treatment of this condition. In finite element analysis, traditional Lagrangian meshing methods fail to model the excessive deformation that occurs during the propagation of these cysts. This report explores manual adaptive remeshing as a way to retain Lagrangian meshing while circumventing the severe mesh distortions that typically accompany large deformations on a Lagrangian mesh. Manual adaptive remeshing is the process of remeshing a deformed meshed part and then reapplying loads in order to achieve a larger deformation than a single mesh can reach without excessive distortion. The methods of manual adaptive remeshing described in this Master’s Report are sufficient for modeling large deformations.

Relevance: 30.00%

Abstract:

Embedded siloxane polymer waveguides have shown promising results for use in optical backplanes. They exhibit high temperature stability and low optical absorption, and they require only common processing techniques. A challenging aspect of this technology is out-of-plane coupling of the waveguides. A multi-software approach to modeling an optical vertical interconnect (via) is proposed. This approach uses the beam propagation method to generate varied modal field distributions, which are then propagated through a via model using the angular spectrum propagation technique. Simulation results show average losses between 2.5 and 4.5 dB for different initial input conditions. Certain configurations show losses of less than 3 dB, and it is shown that in an input/output pair of vias the average loss per via may be lower than the targeted 3 dB.
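
The angular spectrum technique mentioned above propagates a sampled field by transforming it to the spatial-frequency domain, multiplying by a propagation phase factor, and transforming back. The sketch below is a generic free-space implementation on a hypothetical Gaussian input, not the specific via model used in this work.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z, n=1.0):
    """Propagate a sampled complex field u0 a distance z using the angular spectrum method."""
    k = 2 * np.pi * n / wavelength
    N = u0.shape[0]
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0).astype(complex))
    H = np.exp(1j * kz * z)
    H[kz_sq < 0] = 0.0                                     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Hypothetical input: a Gaussian mode-like field sampled on a 1 um grid, propagated 50 um.
N, dx, wavelength = 256, 1e-6, 1.55e-6
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X**2 + Y**2) / (5e-6)**2)
u1 = angular_spectrum_propagate(u0, wavelength, dx, z=50e-6)
print("Power before/after:", np.sum(np.abs(u0)**2), np.sum(np.abs(u1)**2))
```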

Relevance: 30.00%

Abstract:

One-dimensional magnetic photonic crystals (1D-MPCs) are promising structures for integrated optical isolator applications. Rare-earth-substituted garnet thin films with the proper Faraday rotation are required to fabricate planar 1D-MPCs. In this thesis, a flat-top-response 1D-MPC is proposed, and its spectral response and Faraday rotation are modeled. Bismuth-substituted iron garnet films were fabricated by RF magnetron sputtering, and their structure, composition, birefringence, and magnetooptical properties were studied. Double-layer structures for single-mode propagation were also fabricated by sputtering for the first time. Multilayer stacks with multiple defects (phase shifts) composed of Ce-YIG and GGG quarter-wave plates were simulated by the transfer matrix method, and their transmission and Faraday rotation characteristics were studied theoretically. It is found that a flat-top response with 100% transmission and near-45° rotation is achievable by adjusting the inter-defect spacing for film structures as thin as 30 to 35 μm. This is a better-than-3-fold reduction in length compared to the best Ce-YIG films for comparable rotations, and it thus allows a considerable reduction in the size of manufactured optical isolators. Transmission bands as wide as 7 nm were predicted, a considerable improvement over two-defect structures. The effects of the repetition number and ratio factor on the transmission and Faraday rotation ripple factors are discussed for three- and four-defect structures. Diffraction across the structure corresponds to a longer optical path length, so guided optics must be used to minimize insertion losses in integrated devices. This part is discussed in chapter 2 of this thesis. Bismuth-substituted iron garnet thin films were prepared by RF magnetron sputtering. We investigated the optimization of the deposition parameters and measured the crystallinity, surface morphology, composition, and magnetic and magnetooptical properties of the films. Very high crystalline quality garnet films with smooth surfaces were grown heteroepitaxially on (111) GGG substrates for films thinner than 1 μm. Dual-layer structures with two distinct XRD peaks (within a single sputtered film) start to develop when films exceed this thickness. The development of the dual-layer structure is explained by a compositional gradient across the film thickness, rather than by the strain gradient proposed by other authors. A lower DC self-bias or a higher substrate temperature is found to delay the appearance of the second layer. The deposited films show in-plane magnetization, which is advantageous for waveguide device applications. Propagation losses of the fabricated waveguides can be decreased from 25 dB/cm to 10 dB/cm by annealing in an oxygen atmosphere. The Faraday rotation at λ = 1.55 μm was also measured for the waveguides; it is small (10° for a 3 mm long waveguide) due to the presence of linear birefringence. This part is covered in chapter 4. We also investigated the elimination of linear birefringence by the thickness-tuning method for our sputtered films. We examined compressively and tensilely strained films and analyzed the photoelastic response of the sputter-deposited garnet films. It is found that the net birefringence can be eliminated under planar compressive strain conditions by sputtering. A bi-layer of GGG on garnet thin film yields a reduced birefringence. Temperature control during the sputter deposition of the GGG cover layer is critical and strongly influences the magnetization and birefringence level in the waveguide. High-temperature deposition lowers the magnetization and increases the linear birefringence in the garnet films. Double-layer single-mode structures fabricated by sputtering were also studied. The double layer, which shows an in-plane magnetization, has an increased RMS roughness upon upper-layer deposition. The single-mode characteristic was confirmed by prism coupler measurement. This part is discussed in chapter 5.
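
For reference, the transfer matrix method mentioned above chains a 2x2 characteristic matrix per layer to obtain the stack transmission. The sketch below is a generic, normal-incidence, lossless implementation for a quarter-wave stack; the refractive indices and layer count are illustrative placeholders, not the Ce-YIG/GGG designs simulated in the thesis, and the magnetooptical coupling needed for Faraday rotation is ignored.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of a homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / wavelength
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(layers, wavelength, n_in=1.0, n_out=1.0):
    """Intensity transmittance of a multilayer stack; layers = [(n, thickness), ...]."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wavelength)
    (m11, m12), (m21, m22) = M
    t = 2 * n_in / (n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22)
    return (n_out / n_in) * abs(t)**2

# Illustrative quarter-wave stack at a 1550 nm design wavelength with hypothetical indices.
lam0 = 1.55e-6
nH, nL = 2.2, 1.95                      # placeholder high/low indices
stack = [(nH, lam0 / (4 * nH)), (nL, lam0 / (4 * nL))] * 10
for lam in (1.50e-6, 1.55e-6, 1.60e-6):
    print(f"T({lam*1e9:.0f} nm) = {transmittance(stack, lam):.3f}")
```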

Relevance: 30.00%

Abstract:

Proteins are linear chain molecules made of amino acids; only when they fold into their native states do they become functional. This dissertation aims to model the solvent (environment) effect and to develop and implement enhanced sampling methods that enable a reliable study of the protein folding problem in silico. We have developed an enhanced solvation model, based on the solution of the Poisson-Boltzmann equation, to describe the solvent effect. Following the quantum mechanical Polarizable Continuum Model (PCM), we decomposed the net solvation free energy into three physical terms: polarization, dispersion, and cavitation. Each term was implemented, analyzed, and parametrized individually to obtain a high level of accuracy. To describe the thermodynamics of proteins, their conformational space needs to be sampled thoroughly. Simulations of proteins are hampered by slow relaxation due to their rugged free-energy landscape, with the barriers between minima being higher than the thermal energy at physiological temperatures. To overcome this problem, a number of approaches have been proposed, of which the replica exchange method (REM) is the most popular. In this dissertation we describe a new variant of the canonical replica exchange method in the context of molecular dynamics simulation. The advantage of this new method is its easily tunable, high acceptance rate for replica exchanges. We call our method Microcanonical Replica Exchange Molecular Dynamics (MREMD). We describe the theoretical framework, comment on its actual implementation, and present its application to the Trp-cage mini-protein in implicit solvent. Using this approach, we have been able to correctly predict the folding thermodynamics of this protein.
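
For context, the canonical (temperature) replica exchange scheme that MREMD builds on accepts a swap between neighboring replicas with a Metropolis probability based on their potential energies and inverse temperatures. The sketch below shows only that standard acceptance step on hypothetical energies; it is not the microcanonical variant developed in the dissertation.

```python
import numpy as np

K_B = 0.0019872041  # Boltzmann constant, kcal/(mol*K)

def exchange_accepted(E_i, E_j, T_i, T_j, rng):
    """Metropolis criterion for swapping configurations between replicas i and j
    in canonical (temperature) replica exchange."""
    delta = (1.0 / (K_B * T_i) - 1.0 / (K_B * T_j)) * (E_i - E_j)
    return rng.random() < min(1.0, np.exp(delta))

rng = np.random.default_rng(1)
# Hypothetical instantaneous potential energies (kcal/mol) of two neighboring replicas.
accepted = sum(exchange_accepted(-210.0 + rng.normal(0, 3),
                                 -205.0 + rng.normal(0, 3),
                                 300.0, 320.0, rng) for _ in range(1000))
print(f"Acceptance rate ~ {accepted / 1000:.2f}")
```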

Relevance: 30.00%

Abstract:

Ethanol-gasoline fuel blends are increasingly being used in spark ignition (SI) engines due to the continued growth of renewable fuels under a growing renewable portfolio standard (RPS). This creates the need for a simple and accurate ethanol-gasoline blend combustion model that is applicable to one-dimensional engine simulation. A parametric combustion model has been developed, integrated into an engine simulation tool, and validated using SI engine experimental data. The parametric combustion model was built inside a user compound in GT-Power. In this model, selected burn durations are computed using correlations that are functions of physically based non-dimensional groups, developed from an experimental engine database covering a wide range of ethanol-gasoline blends, engine geometries, and operating conditions. A correlation for the coefficient of variation (COV) of gross indicated mean effective pressure (IMEP) was also added to the parametric combustion model; this correlation enables modeling of cycle-to-cycle combustion variation as a function of engine geometry and operating conditions. The computed burn durations are then used to fit single and double Wiebe functions. The single-Wiebe parametric combustion compound uses the least-squares method to compute the single-Wiebe parameters, while the double-Wiebe parametric combustion compound uses an analytical solution to compute the double-Wiebe parameters. These compounds were then integrated into the engine model in GT-Power through the multi-Wiebe combustion template, in which the values of the Wiebe parameters (single-Wiebe or double-Wiebe) are sensed via RLT dependence. The parametric combustion models were validated by overlaying the simulated pressure traces from GT-Power onto experimentally measured pressure traces. A thermodynamic engine model was also developed to study the effect of fuel blends, engine geometries, and operating conditions on both the burn durations and the COV of gross IMEP simulation results.
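
As a reference for the Wiebe-function step described above, the sketch below evaluates a single Wiebe mass-fraction-burned profile and recovers its parameters by least squares, in the spirit of the single-Wiebe fit. The parameter values and synthetic data are illustrative, not those used in the model.

```python
import numpy as np
from scipy.optimize import curve_fit

def wiebe(theta, theta_soc, duration, m, a=6.908):
    """Single Wiebe mass-fraction-burned profile; a = 6.908 gives 99.9% burned
    at theta_soc + duration (crank-angle degrees)."""
    x = np.clip((theta - theta_soc) / duration, 0.0, None)
    return 1.0 - np.exp(-a * x**(m + 1.0))

# Illustrative burn profile (placeholder parameters) with a little noise added,
# standing in for the burn points computed by the parametric correlations.
theta = np.linspace(-20.0, 60.0, 200)
rng = np.random.default_rng(0)
mfb = wiebe(theta, theta_soc=-5.0, duration=45.0, m=2.0) + rng.normal(0.0, 0.01, theta.size)

# Least-squares recovery of the single-Wiebe parameters.
popt, _ = curve_fit(wiebe, theta, mfb, p0=[0.0, 40.0, 1.5])
print("Fitted start of combustion, duration, exponent m:", popt)
```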

Relevance: 30.00%

Abstract:

A Mobile Mesh Network based In-Transit Visibility (MMN-ITV) system provides global real-time tracking capability for logistics systems. In-transit containers form a multi-hop mesh network to forward tracking information to nearby sinks, which further deliver the information to the remote control center via satellite. The fundamental challenge to the MMN-ITV system is the energy constraint of the battery-operated containers. Coupled with the unique mobility pattern, the cross-MMN behavior, and the large spanned area, this makes it necessary to investigate energy-efficient communication in the MMN-ITV system thoroughly. First, this dissertation models energy-efficient routing under the unique pattern of the cross-MMN behavior. A new modeling approach, the pseudo-dynamic modeling approach, is proposed to measure the energy efficiency of routing methods in the presence of the cross-MMN behavior. With this approach, it is identified that shortest-path routing is energy-efficient in mobile networks, while load-balanced routing is energy-efficient in static networks. For the MMN-ITV system, which contains both mobile and static MMNs, an energy-efficient routing method, energy-threshold routing, is proposed to achieve the best tradeoff between the two. Second, due to the cross-MMN behavior, neighbor discovery is executed frequently to help new containers join the MMN and hence consumes an amount of energy similar to that of the data communication. By exploiting the unique pattern of the cross-MMN behavior, this dissertation proposes energy-efficient neighbor-discovery wakeup schedules that save up to 60% of the energy used for neighbor discovery.

Inter-vehicle communication based on Vehicular Ad Hoc Networks (VANETs) is now widely believed to enhance traffic safety and transportation management at low cost. The end-to-end delay is critical for time-sensitive safety applications in VANETs and can be a decisive performance metric for them. This dissertation presents a complete analytical model to evaluate the end-to-end delay against the transmission range and the packet arrival rate. The model shows a significant increase in end-to-end delay from non-saturated to saturated networks. It therefore suggests that distributed power control and admission control protocols for VANETs should aim at improving the real-time capacity (the maximum packet generation rate that does not cause saturation) rather than the delay itself. Based on this model, it is determined that adopting a uniform transmission range for every vehicle may hinder delay improvement, since it does not allow short path lengths and low interference to coexist. Clusters are proposed to configure non-uniform transmission ranges for the vehicles. Analysis and simulation confirm that such a configuration can enhance the real-time capacity and provides an improved tradeoff between the end-to-end delay and the network capacity. A distributed clustering protocol with minimal message overhead is proposed, which achieves a low convergence time.

Relevance: 30.00%

Abstract:

PMR-15 polyimide is a polymer used as a matrix in composites. Composites with PMR-15 matrices are advanced polymer matrix composites, used abundantly in the aerospace and electronics industries because of their high-temperature resistance. Apart from high-temperature capability, PMR-15 composites also display good thermal-oxidative stability, mechanical properties, processability, and low cost, which makes the material suitable for manufacturing aircraft structures. PMR-15 crosslinks via the reverse Diels-Alder (RDA) mechanism, which provides the groundwork for its distinctive thermal stability and a use-temperature range of 280-300 degrees Centigrade. Regardless of such desirable properties, this material has a number of limitations that compromise its application on a large scale. PMR-15 composites are known to be very vulnerable to inter- and intra-laminar micro-cracking. But the major factor that hinders its adoption is PMR-15's carcinogenic constituent, methylene dianiline (MDA), which is also a liver toxin. The necessity of providing a safe working environment during production adds to the cost of this material. In this study, Molecular Dynamics and Energy Minimization techniques are utilized to simulate a structure of PMR-15 at a given density of 1.324 g/cc, in an attempt to recreate the polyimide computationally and thereby reduce the amount of experimental testing, and hence the health hazards as well as the cost involved in its production. Even though this study does not validate any mechanical properties of the model, the model could be used in the future for the validation of such properties and for further testing of behaviors such as aging, micro-cracking, and creep.

Relevance: 30.00%

Abstract:

For half a century, the integrated circuits (ICs) that make up the heart of electronic devices have been steadily improving by shrinking at an exponential rate. However, as the current crop of ICs gets smaller and the insulating layers involved become thinner, electrons leak through due to quantum mechanical tunneling. This is one of several issues that will bring an end to this incredible streak of exponential improvement of this type of transistor device, after which future improvements will have to come from employing fundamentally different transistor architectures rather than fine-tuning and miniaturizing the metal-oxide-semiconductor field-effect transistors (MOSFETs) in use today. Several new transistor designs, some designed and built here at Michigan Tech, involve electrons tunneling their way through arrays of nanoparticles. We use a multi-scale approach to model these devices and study their behavior. For investigating the tunneling characteristics of the individual junctions, we use a first-principles approach to model conduction between sub-nanometer gold particles. To estimate the change in energy due to the movement of individual electrons, we use the finite element method to calculate electrostatic capacitances. The kinetic Monte Carlo method allows us to use our knowledge of these details to simulate the dynamics of an entire device, sometimes consisting of hundreds of individual particles, and to watch as the device ‘turns on’ and starts conducting an electric current. Scanning tunneling microscopy (STM) and the closely related scanning tunneling spectroscopy (STS) are a family of powerful experimental techniques that allow for the probing and imaging of surfaces and molecules at atomic resolution. However, interpretation of the results often requires comparison with theoretical and computational models. We have developed a new method for calculating STM topographs and STS spectra. This method combines an established method for approximating the geometric variation of the electronic density of states with a modern method for calculating spin-dependent tunneling currents, offering a unique balance between accuracy and accessibility.
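
For illustration, the kinetic Monte Carlo step described above repeatedly selects the next event with probability proportional to its rate and advances time by an exponentially distributed increment. The sketch below runs that selection loop on a small set of hypothetical, fixed hop rates; in a real device model the rates would be recomputed after every hop from the electrostatic energy changes.

```python
import numpy as np

def kmc_run(rates, n_steps, rng):
    """Basic kinetic Monte Carlo loop: pick an event proportional to its rate,
    advance the clock by an exponential waiting time, and record what happened."""
    rates = np.asarray(rates, dtype=float)
    t, events = 0.0, []
    for _ in range(n_steps):
        total = rates.sum()
        event = rng.choice(len(rates), p=rates / total)   # which hop occurs
        t += rng.exponential(1.0 / total)                 # time until that hop
        events.append(event)
        # In a full device simulation, `rates` would be updated here based on the
        # new charge configuration (e.g., capacitance-derived energy changes).
    return t, events

rng = np.random.default_rng(42)
hop_rates = [2.0e9, 5.0e8, 1.0e9]   # hypothetical electron hop rates (1/s) for three junctions
elapsed, events = kmc_run(hop_rates, n_steps=1000, rng=rng)
print(f"Simulated {len(events)} hops in {elapsed*1e9:.1f} ns")
```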

Relevance: 30.00%

Abstract:

Over the past several decades, it has become apparent that anthropogenic activities have resulted in the large-scale enhancement of the levels of many trace gases throughout the troposphere. More recently, attention has been given to the transport pathway taken by these emissions as they are dispersed throughout the atmosphere. The transport pathway determines the physical characteristics of emission plumes and therefore plays an important role in the chemical transformations that can occur downwind of source regions. For example, the production of ozone (O3) is strongly dependent upon the transport its precursors undergo. O3 can initially be formed within air masses while they are still over polluted source regions. These polluted air masses can experience continued O3 production or O3 destruction downwind, depending on each air mass's chemical and transport characteristics. At present, however, there are a number of uncertainties in the relationships between transport and O3 production in the North Atlantic lower free troposphere. The first phase of the study presented here used measurements made at the Pico Mountain observatory and model simulations to determine transport pathways for US emissions to the observatory. The Pico Mountain observatory was established in the summer of 2001 in order to address the need to understand the relationships between transport and O3 production. Measurements from the observatory were analyzed in conjunction with simulations from the Lagrangian particle dispersion model (LPDM) FLEXPART in order to determine the transport pathway for events observed at the Pico Mountain observatory during July 2003. A total of 16 events were observed, 4 of which were analyzed in detail. The transport time for these 16 events varied from 4.5 to 7 days, while the transport altitudes over the ocean ranged from 2 to 8 km but were typically less than 3 km. In three of the case studies, eastward advection and transport in a weak warm conveyor belt (WCB) airflow were responsible for the export of North American emissions into the FT, while transport in the FT was governed by easterly winds driven by the Azores/Bermuda High (ABH) and transient northerly lows. In the fourth case study, North American emissions were lofted to 6-8 km in a WCB before being entrained in the same cyclone's dry airstream and transported down to the observatory. The results of this study show that the lower marine FT may provide an important transport environment where O3 production may continue, in contrast to transport in the marine boundary layer, where O3 destruction is believed to dominate. The second phase of the study presented here focused on improving the analysis methods available with LPDMs. While LPDMs are popular and useful for the analysis of atmospheric trace gas measurements, identifying the transport pathway of emissions from their source to a receptor (the Pico Mountain observatory in our case) using the standard gridded model output can be difficult or impossible, particularly during complex meteorological scenarios. The transport study in phase 1 was limited to only 1 month out of more than 3 years of available data, and it included only 4 case studies out of the 16 events, specifically because of this confounding factor.
The second phase of this study addressed this difficulty by presenting a method to clearly and easily identify the pathway taken by only those emissions that arrive at a receptor at a particular time, by combining the standard gridded output from forward (i.e., concentration) and backward (i.e., residence time) LPDM simulations, greatly simplifying such analyses. The ability of the method to successfully determine the source-to-receptor pathway, restoring the Lagrangian information that is lost when the data are gridded, is demonstrated by comparing the pathway determined from this method with the particle trajectories from both the forward and backward models. A sample analysis is also presented, demonstrating that this method is more accurate and easier to use than existing methods based on standard LPDM products. Finally, we discuss potential future work that would be possible by combining the backward LPDM simulation with gridded data from other sources (e.g., chemical transport models) to obtain a Lagrangian sampling of the air that will eventually arrive at a receptor.
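
A minimal sketch of the kind of combination the method relies on is shown below: where an emission plume's forward-simulated concentration field and the receptor's backward-simulated residence-time field overlap on a common grid, their product highlights the cells through which the sampled air traveled. The array shapes and normalization here are assumptions made for illustration, not FLEXPART's actual output format.

```python
import numpy as np

def source_to_receptor_pathway(forward_conc, backward_restime):
    """Element-wise overlap of a forward plume (concentration) and a backward
    retroplume (residence time) on a common lon x lat x level x time grid,
    normalized so the strongest overlap is 1."""
    overlap = forward_conc * backward_restime
    peak = overlap.max()
    return overlap / peak if peak > 0 else overlap

# Hypothetical gridded fields: 60 x 40 horizontal cells, 10 levels, 24 output times.
rng = np.random.default_rng(7)
forward_conc = rng.gamma(0.2, 1.0, size=(60, 40, 10, 24))      # stand-in forward plume
backward_restime = rng.gamma(0.2, 1.0, size=(60, 40, 10, 24))  # stand-in retroplume

pathway = source_to_receptor_pathway(forward_conc, backward_restime)
# Collapse the vertical and time dimensions to map the horizontal footprint of the pathway.
footprint = pathway.sum(axis=(2, 3))
print("Strongest horizontal cell:", np.unravel_index(footprint.argmax(), footprint.shape))
```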

Relevance: 30.00%

Abstract:

Principal Component Analysis (PCA) is a popular method for dimension reduction that can be used in many fields, including data compression, image processing, and exploratory data analysis. However, the traditional PCA method has several drawbacks: it is not efficient for dealing with high-dimensional data, and it cannot compute sufficiently accurate principal components when a relatively large portion of the data is missing. In this report, we propose to use the EM-PCA method for dimension reduction of power system measurement data with missing values, and we provide a comparative study of the traditional PCA and EM-PCA methods. Our extensive experimental results show that EM-PCA is more effective and more accurate than traditional PCA for dimension reduction of power system measurement data when a large portion of the data set is missing.
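
For reference, a common formulation of EM-PCA alternates between filling the missing entries with their reconstruction from the current low-rank model (E-step) and re-estimating the principal components from the completed matrix (M-step). The sketch below is a simple SVD-based version of that iteration on synthetic data; it is an assumption about the general technique, not the specific implementation evaluated in the report.

```python
import numpy as np

def em_pca(X, n_components, n_iter=50):
    """EM-style PCA for a data matrix X (samples x features) containing NaNs:
    iteratively reconstruct missing entries from a rank-k model."""
    missing = np.isnan(X)
    Xf = np.where(missing, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
        recon = mu + (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
        Xf[missing] = recon[missing]                    # E-step: impute from current model
    return Vt[:n_components], Xf                        # components and completed data

# Synthetic low-rank "measurement" data with 20% of entries removed.
rng = np.random.default_rng(3)
true = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 12))
X = true + 0.05 * rng.normal(size=true.shape)
X[rng.random(X.shape) < 0.2] = np.nan

components, completed = em_pca(X, n_components=2)
print("RMS imputation error:", np.sqrt(np.mean((completed[np.isnan(X)] - true[np.isnan(X)])**2)))
```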