8 results for Computer models

in Digital Commons - Michigan Tech


Relevance:

70.00%

Publisher:

Abstract:

As an important civil engineering material, asphalt concrete (AC) is commonly used to build road surfaces, airports, and parking lots. With traditional laboratory tests and theoretical equations, it is challenging to fully understand such a random composite material. Based on the discrete element method (DEM), this research seeks to develop and implement computer models as research approaches for improving understanding of AC microstructure-based mechanics. In this research, three categories of approaches were developed or employed to simulate the microstructures of AC materials: randomly-generated models, idealized models, and image-based models. The image-based models were recommended for accurately predicting AC performance, while the other models were recommended as research tools for gaining deeper insight into AC microstructure-based mechanics. A viscoelastic micromechanical model was developed to capture viscoelastic interactions within the AC microstructure. Four types of constitutive models were built to address the four categories of interactions within an AC specimen. Each of the constitutive models consists of three parts representing three different interaction behaviors: a stiffness model (force-displacement relation), a bonding model (shear and tensile strengths), and a slip model (frictional property). Three techniques were developed to reduce the computational time for AC viscoelastic simulations; for typical three-dimensional models, the computational time was reduced from years or months to days or hours. Dynamic modulus and creep stiffness tests were simulated, and methodologies were developed to determine the viscoelastic parameters. It was found that the DE models could successfully predict dynamic modulus, phase angles, and creep stiffness over a wide range of frequencies, temperatures, and time spans. Mineral aggregate morphology characteristics (sphericity, orientation, and angularity) were studied to investigate their impacts on AC creep stiffness; it was found that these characteristics significantly affect creep stiffness. Pavement responses and pavement-vehicle interactions were investigated by simulating pavement sections under a rolling wheel. It was found that wheel acceleration, steady motion, and deceleration significantly affect contact forces. Finally, a summary and recommendations were provided in the last chapter, and part of the computer programming code was provided in the appendices.
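The three-part contact law described above can be illustrated compactly. Below is a minimal, hypothetical sketch (in Python rather than the thesis code, and with a simple elastic stiffness model standing in for the viscoelastic one used in the research); all names, sign conventions, and parameters are assumptions:

```python
def contact_force(overlap, tangential_disp, kn, kt, mu,
                  tensile_strength, shear_strength, area):
    """One DEM contact evaluation: stiffness, bonding, and slip models.

    overlap         -- normal overlap (m); positive = compression
    tangential_disp -- accumulated tangential displacement (m)
    kn, kt          -- normal / tangential stiffnesses (N/m)
    mu              -- friction coefficient (slip model)
    tensile_strength, shear_strength -- bond strengths (Pa)
    area            -- nominal contact area for force-to-stress conversion (m^2)
    """
    # Stiffness model: linear force-displacement relation
    fn = kn * overlap              # normal force; negative when in tension
    ft = kt * tangential_disp      # tangential (shear) force

    # Bonding model: the bond breaks if tensile or shear strength is exceeded
    bonded = (-fn / area <= tensile_strength) and (abs(ft) / area <= shear_strength)
    if not bonded:
        if overlap <= 0:
            return 0.0, 0.0, False          # separated and unbonded: no force
        # Slip model: an unbonded contact transmits only limited friction
        ft = max(-mu * fn, min(mu * fn, ft))
    return fn, ft, bonded
```

In a full simulation this evaluation would run once per contact per time step, with a viscoelastic element (e.g., a Burgers-type model) replacing the linear spring in the stiffness part.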

Relevance:

60.00%

Publisher:

Abstract:

As environmental problems become more complex, policy and regulatory decisions become far more difficult to make. The use of science has become an important practice in the decision-making process of many federal agencies. Many different types of scientific information are used to make decisions within the EPA, with computer models becoming especially important. Environmental models are used throughout the EPA in a variety of contexts, and their predictive capacity has become highly valued in decision making. The main focus of this research is to examine the EPA's Council for Regulatory Environmental Modeling (CREM) as a case study in addressing science issues, particularly models, in government agencies. Specifically, the goal was to answer the following questions: What is the history of the CREM, and how can this information shed light on the process of science policy implementation? What were the goals of implementing the CREM? Were these goals reached, and how have they changed? What impediments has the CREM faced, and why did these impediments occur? The three main sources of information for this research were observations during summer employment with the CREM, document review, and supplemental interviews with CREM participants and other members of the modeling community. Examining the history of modeling at the EPA, as well as the history of the CREM, provides insight into the many challenges faced when implementing science policy and science policy programs. After examining the many impediments that the CREM has faced in implementing modeling policies, it became clear that they fall into two separate categories: classic and paradoxical. The classic impediments include the more standard impediments to science policy implementation that might be found in any regulatory environment, such as lack of resources and changes in administration. Paradoxical impediments are cyclical in nature, with no clear solution, such as balancing top-down versus bottom-up initiatives and coping with differing perceptions. These impediments, when not properly addressed, severely hinder the ability of organizations to successfully implement science policy.

Relevance:

60.00%

Publisher:

Abstract:

A diesel oxidation catalyst (DOC) with a catalyzed diesel particulate filter (CPF) is an effective exhaust aftertreatment device that reduces particulate emissions from diesel engines, and properly designed DOC-CPF systems provide passive regeneration of the filter by oxidizing PM via thermal and NO2/temperature-assisted means under various vehicle duty cycles. However, controlling the backpressure that the CPF adds to the exhaust system requires a good understanding of the filtration and oxidation processes taking place inside the filter, since the deposition and oxidation of solid particulate matter (PM) change as functions of loading time. In order to understand the solid PM loading characteristics in the CPF, an experimental and modeling study was conducted using emissions data measured from the exhaust of a John Deere 6.8-liter, turbocharged and after-cooled engine with a low-pressure-loop EGR system and a DOC-CPF system (or a CCRT® - Catalyzed Continuously Regenerating Trap®, as named by Johnson Matthey) in the exhaust system. A series of experiments was conducted to evaluate the performance of the DOC-only, CPF-only, and DOC-CPF configurations at two engine speeds (2200 and 1650 rpm) and various engine loads ranging from 5 to 100% of maximum torque at both speeds. Pressure drop across the DOC and CPF, mass deposited in the CPF at the end of loading, upstream and downstream gaseous and particulate emissions, and particle size distributions were measured at different times during the experiments to characterize the pressure drop and filtration efficiency of the DOC-CPF system as functions of loading time. Pressure drop characteristics measured experimentally across the DOC-CPF system showed a distinct deep-bed filtration region characterized by a non-linear rise in pressure drop, followed by a transition region, and then by a cake-filtration region with steadily increasing pressure drop over loading time, for engine load cases with CPF inlet temperatures below 325 °C. For engine load cases with CPF inlet temperatures above 360 °C, the deep-bed filtration region showed a steep rise in pressure drop followed by a decrease in pressure drop (due to wall PM oxidation) in the cake-filtration region. Filtration efficiencies observed during PM cake filtration were greater than 90% in all engine load cases. Two computer models, the MTU 1-D DOC model and the MTU 1-D 2-layer CPF model, were developed and/or improved from existing models as part of this research and calibrated using the data obtained from these experiments. The 1-D DOC model employs a three-way catalytic reaction scheme for CO, HC, and NO oxidation, and is used to predict CO, HC, NO, and NO2 concentrations downstream of the DOC. Calibration results from the 1-D DOC model against experimental data at 2200 and 1650 rpm are presented. The 1-D 2-layer CPF model uses a '2-filters in series' approach for filtration, PM deposition and oxidation in the PM cake and substrate wall via thermal (O2) and NO2/temperature-assisted mechanisms, and production of NO2 as the exhaust gas mixture passes through the CPF catalyst washcoat. Calibration results from the 1-D 2-layer CPF model against experimental data at 2200 rpm are presented. Comparisons of the filtration and oxidation behavior of the CPF at sample load cases in both configurations are also presented.
The input parameters and selected results are also compared with those of a similar research study that used an earlier version of the CCRT®, in order to explain differences in the fundamental behavior of the CCRT® between the two studies. An analysis of the results from the calibrated CPF model suggests that pressure drop across the CPF depends mainly on PM loading and oxidation in the substrate wall, and that the substrate wall initiates PM filtration and helps form a PM cake layer on the wall. After formation of a PM cake layer about 1-2 µm thick on the wall, the cake becomes the primary filter and performs 98-99% of PM filtration. In all load cases, most of the deposited PM mass was in the PM cake layer, and PM oxidation in the cake layer accounted for 95-99% of the total PM mass oxidized during loading. The overall PM oxidation efficiency of the DOC-CPF device increased with increasing CPF inlet temperatures and NO2 flow rates, and was higher in the CCRT® configuration than in the CPF-only configuration due to higher CPF inlet NO2 concentrations. Filtration efficiencies greater than 90% were observed within 90-100 minutes of loading time (starting with a clean filter) in all load cases, because the PM cake on the substrate wall forms a very efficient filter. A good strategy for maintaining high filtration efficiency and low pressure drop while performing active regeneration would be to clean the PM cake filter only partially (i.e., retaining a cake layer of 1-2 µm thickness on the substrate wall) and to completely oxidize the PM deposited in the substrate wall. The data presented support this strategy.
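As a point of reference, the pressure drop behavior described above is commonly modeled with Darcy-type flow resistances for the PM cake and the substrate wall in series, matching the '2-filters in series' idea. The following is a minimal sketch of that calculation, not the MTU 1-D 2-layer model itself; all parameter names and values are hypothetical:

```python
def cpf_pressure_drop(Q, mu_gas, A_filt, w_cake, k_cake, w_wall, k_wall):
    """Darcy pressure drop (Pa) across the PM cake and substrate wall in series.

    Q              -- volumetric exhaust flow through the wall (m^3/s)
    mu_gas         -- exhaust gas dynamic viscosity (Pa*s)
    A_filt         -- total filtration area (m^2)
    w_cake, k_cake -- PM cake thickness (m) and permeability (m^2)
    w_wall, k_wall -- substrate wall thickness (m) and permeability (m^2)
    """
    u = Q / A_filt                           # superficial wall velocity (m/s)
    dp_cake = mu_gas * u * w_cake / k_cake   # resistance of the PM cake layer
    dp_wall = mu_gas * u * w_wall / k_wall   # resistance of the loaded wall
    return dp_cake + dp_wall
```

As PM deposits, w_cake grows and the effective wall permeability falls, reproducing the qualitative loading behavior reported above; channel and contraction/expansion losses are omitted for brevity.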

Relevance:

60.00%

Publisher:

Abstract:

Biofuels are an increasingly important component of the worldwide energy supply. This research aims to understand the pathways and impacts of biofuels production, and to improve these processes to make them more efficient. In Chapter 2, a life cycle assessment (LCA) is presented for cellulosic ethanol production from five potential feedstocks of regional importance to the upper Midwest - hybrid poplar, hybrid willow, switchgrass, diverse prairie grasses, and logging residues - according to the requirements of the Renewable Fuel Standard (RFS). Direct land use change emissions are included for the conversion of abandoned agricultural land to feedstock production, and computer models of the conversion process are used to determine the effect of varying biomass composition on overall life cycle impacts. All scenarios analyzed here result in greater than a 60% reduction in greenhouse gas emissions relative to petroleum gasoline. Land use change effects were found to contribute significantly to overall emissions for the first 20 years after plantation establishment. Chapter 3 is an investigation of the effects of biomass mixtures on overall sugar recovery from the combined processes of dilute acid pretreatment and enzymatic hydrolysis. The biomass species studied were aspen, a hardwood species well suited to biochemical processing; balsam, a high-lignin softwood species; and switchgrass, an herbaceous energy crop with high ash content. A matrix of three dilute acid pretreatment severities and three enzyme loading levels was used to characterize interactions between pretreatment and enzymatic hydrolysis. The maximum glucose yield for any species was 70% of theoretical, for switchgrass, and the maximum xylose yield was 99.7% of theoretical, for aspen. Supplemental β-glucosidase increased glucose yield from enzymatic hydrolysis by an average of 15%, and total sugar recoveries for mixtures could be predicted to within 4% by linear interpolation of the pure-species results. Chapter 4 is an evaluation of the potential for producing Trichoderma reesei cellulose hydrolases in the Kluyveromyces lactis yeast expression system. The exoglucanases Cel6A and Cel7A and the endoglucanase Cel7B were inserted separately into K. lactis, and the enzymes were analyzed for activity on various substrates. Recombinant Cel7B was found to be active on carboxymethyl cellulose and Avicel powdered cellulose substrates. Recombinant Cel6A was also found to be active on Avicel. Recombinant Cel7A was produced, but no enzymatic activity was detected on any substrate. Chapter 5 presents a new method for enzyme improvement studies using enzyme co-expression and yeast growth rate measurements as a potential high-throughput expression and screening system in K. lactis. Two K. lactis strains were evaluated for their usefulness in growth screening studies: one wild-type strain and one strain in which the main galactose metabolic pathway has been disabled. Sequential transformation and co-expression of the exoglucanase Cel6A and the endoglucanase Cel7B were performed, and improved hydrolysis rates on Avicel were detectable in the cell culture supernatant. Future work should focus on hydrolysis of natural substrates, developing the growth screening method, and utilizing the K. lactis expression system for directed evolution of enzymes.
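The mixture prediction result in Chapter 3 amounts to a mass-weighted average of the pure-species measurements. A minimal sketch (hypothetical numbers, not data from the thesis):

```python
def mixture_sugar_recovery(fractions, pure_recoveries):
    """Predict total sugar recovery of a biomass blend by linear
    interpolation (mass-weighted average) of pure-species results.

    fractions       -- mass fractions summing to 1, e.g. {"aspen": 0.5, ...}
    pure_recoveries -- measured pure-species recoveries (% of theoretical)
    """
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(f * pure_recoveries[sp] for sp, f in fractions.items())

# Illustrative values only; the thesis reports agreement within 4%:
blend = {"aspen": 0.5, "switchgrass": 0.5}
pure = {"aspen": 65.0, "switchgrass": 70.0}
print(mixture_sugar_recovery(blend, pure))  # 67.5
```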

Relevance:

30.00%

Publisher:

Abstract:

It is an important and difficult challenge to protect modern interconnected power systems from blackouts. Applying advanced power system protection techniques and increasing power system stability are ways to improve the reliability and security of power systems. Phasor-domain software packages such as the Power System Simulator for Engineers (PSS/E) can be used to study large power systems but cannot be used for transient analysis. In order to observe both power system stability and the transient behavior of the system during disturbances, modeling has to be done in the time domain. This work focuses on the modeling of power systems and various control systems in the Alternative Transients Program (ATP). ATP is time-domain power system modeling software in which all power system components can be modeled in detail. Models are implemented with attention to component representation and parameters. The synchronous machine model includes the saturation characteristics and a control interface. Transient Analysis of Control Systems (TACS) is used to model the excitation control system, the power system stabilizer, and the turbine governor system of the synchronous machine. Several base cases of a single-machine system are modeled and benchmarked against PSS/E. A two-area system is modeled, and inter-area and intra-area oscillations are observed. The two-area system is then reduced to a two-machine system using reduced dynamic equivalencing, and the original and reduced systems are benchmarked against PSS/E. This work also includes the simulation of single-pole tripping using one of the base case models; the advantages of single-pole tripping and a comparison of system behavior against three-pole tripping are studied. Results indicate that the built-in control system models in PSS/E can be effectively reproduced in ATP, and the benchmarked models correctly simulate the power system dynamics. The successful implementation of a dynamically reduced system in ATP shows promise for studying a small sub-system of a large system without losing the dynamic behaviors. Other aspects, such as relaying, can be investigated using the benchmarked models. It is expected that this work will provide guidance in modeling different control systems for the synchronous machine and in representing dynamic equivalents of large power systems.
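The rotor-angle oscillations benchmarked above are governed, in their simplest classical form, by the swing equation. The sketch below (plain Python with NumPy, not an ATP or PSS/E model; all machine constants are hypothetical) integrates a single-machine-infinite-bus case through a brief fault to show the kind of time-domain behavior involved:

```python
import numpy as np

# Classical swing equation:  M * dw/dt = Pm - Pmax*sin(delta) - D*w
M, D = 0.1, 0.02            # inertia and damping constants (pu, hypothetical)
Pm, Pmax = 0.8, 1.5         # mechanical input and max electrical power (pu)
Pmax_fault = 0.4            # reduced transfer capability during the fault

delta = np.arcsin(Pm / Pmax)   # start at the stable equilibrium angle
omega, dt = 0.0, 0.001

for step in range(int(5.0 / dt)):               # 5 s of simulated time
    t = step * dt
    p = Pmax_fault if 1.0 <= t < 1.1 else Pmax  # 100 ms fault applied at t = 1 s
    acc = (Pm - p * np.sin(delta) - D * omega) / M
    omega += acc * dt                           # semi-implicit Euler update
    delta += omega * dt
print(f"final rotor angle: {np.degrees(delta):.1f} deg")
```

A full ATP model adds the machine's electrical dynamics, saturation, exciter, stabilizer, and governor on top of this mechanical core.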

Relevance:

30.00%

Publisher:

Abstract:

Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology to be used as a detection and surveillance paradigm for many real-world applications. Individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks have a few physical limitations that can prevent them from performing at their maximum potential: individual sensors have a limited power supply; the wireless band can become very cluttered when multiple sensors try to transmit at the same time; and individual sensors have a limited communication range, so the network may not have a 1-hop communication topology and routing can be a problem in many cases. Carefully designed algorithms can alleviate these physical limitations and allow sensor networks to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application of sensor networks: detecting and tracking targets. It develops feasible inference techniques for sensor networks using statistical graphical-model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings. Finally, the system was tested in both simulation and real-world environments. The simulations were performed on various network topologies, from regularly distributed to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall-detection system was simulated with real-world settings: it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI eZ430-RF2500 boards, scanning a typical 800 sq ft apartment. The Bumblebee radars were calibrated to detect the falling of a human body, and the two-tier tracking algorithm was used on the ultrasonic sensors to track the location of elderly occupants.
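The two-tier strategy described above (binary data for a rough global inference, then a dynamic cluster for detailed computation) can be sketched in a few lines. This is a hypothetical illustration, not the thesis implementation:

```python
import numpy as np

def rough_estimate(positions, detections):
    """Tier 1: average the positions of sensors reporting a binary detection."""
    hits = positions[detections.astype(bool)]
    return hits.mean(axis=0) if len(hits) else None

def form_cluster(positions, center, radius):
    """Tier 2: dynamically cluster sensors near the rough estimate."""
    dists = np.linalg.norm(positions - center, axis=1)
    return np.flatnonzero(dists <= radius)     # indices of cluster members

rng = np.random.default_rng(0)
positions = rng.random((30, 2)) * 20.0         # 30 sensors in a 20 m x 20 m area
target = np.array([12.0, 7.5])                 # unknown target location
detections = (np.linalg.norm(positions - target, axis=1) < 5.0).astype(int)

center = rough_estimate(positions, detections)
if center is not None:
    cluster = form_cluster(positions, center, radius=5.0)
    print("rough estimate:", center, "| cluster members:", cluster)
```

Only the sensors in the returned cluster would then exchange detailed measurements, conserving power and bandwidth elsewhere in the network.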

Relevance:

30.00%

Publisher:

Abstract:

The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition, and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition, and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. The MATLAB® software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data in a way that highlights their similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because a graphical representation of the data is impossible; PCA is therefore a powerful method for analyzing such data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency-plane filter for coherent optical systems. The IPCA algorithm generally behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it achieves higher compression, more accurate reconstruction, and faster processing with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and reasonable speed. Finally, in detection and recognition, the digital model performs better than the optical model.
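For concreteness, here is a minimal PCA image-compression sketch (in Python/NumPy rather than the MATLAB implementation described above); the number of retained components is an illustrative choice, and the random matrix stands in for a real image:

```python
import numpy as np

def pca_compress(X, k):
    """Project the rows of X onto the top-k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Right singular vectors of the centered data are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]                     # k x d projection matrix
    return Xc @ W.T, W, mean       # n x k scores, basis, and mean

def pca_reconstruct(scores, W, mean):
    """Approximate the original data from its compressed representation."""
    return scores @ W + mean

img = np.random.rand(128, 128)                 # stand-in for a real image
scores, W, mean = pca_compress(img, k=16)      # 128 -> 16 values per row
recon = pca_reconstruct(scores, W, mean)
print("reconstruction MSE:", np.mean((img - recon) ** 2))
```

Per the abstract, IPCA improves on this scheme using information theory; the sketch shows only the baseline PCA path.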

Relevance:

30.00%

Publisher:

Abstract:

Computer simulation programs are essential tools for scientists and engineers to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that are discovered by developers in an iterative process. To manage such complexity, advanced verification techniques that continually match the intended model to the implemented model are necessary. Therefore, the main goal of this research is to design a useful verification and validation framework that can identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is the First-Order Logic Constraint Specification Language (FOLCSL), which enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part consists of mining the temporal flow of events using a newly developed representation called the State Flow Temporal Analysis Graph (SFTAG). While the first part seeks assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. This work improves computer architecture research and verification processes, as shown by the case studies and experiments that were conducted.
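The first part of the framework can be illustrated with a tiny trace checker. The sketch below is hypothetical (FOLCSL synthesizes such checkers automatically from first-order logic specifications; the trace format and the invariant here are invented for illustration):

```python
# Each trace event is (timestamp, name, payload). The invariant checked:
# "every request is eventually matched by a response with the same id".
def check_invariant(trace):
    pending = set()
    for t, name, payload in trace:
        if name == "request":
            pending.add(payload["id"])
        elif name == "response":
            if payload["id"] not in pending:
                return False, f"unmatched response {payload['id']} at t={t}"
            pending.discard(payload["id"])
    if pending:
        return False, f"requests never answered: {sorted(pending)}"
    return True, "all invariants respected"

trace = [(0, "request", {"id": 1}), (3, "response", {"id": 1})]
print(check_invariant(trace))   # (True, 'all invariants respected')
```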