12 results for Statistical modeling technique

in Digital Commons - Michigan Tech


Relevance:

100.00%

Publisher:

Abstract:

Embedded siloxane polymer waveguides have shown promising results for use in optical backplanes. They exhibit high temperature stability and low optical absorption, and require only common processing techniques. A challenging aspect of this technology is out-of-plane coupling of the waveguides. A multi-software approach to modeling an optical vertical interconnect (via) is proposed. This approach uses the beam propagation method to generate varied modal field distributions, which are then propagated through a via model using the angular spectrum propagation technique. Simulation results show average losses between 2.5 and 4.5 dB for different initial input conditions. Certain configurations show losses of less than 3 dB, and it is shown that, for an input/output pair of vias, the average loss per via may be lower than the targeted 3 dB.
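The angular spectrum propagation step can be sketched numerically. This is an illustrative free-space propagator, not the multi-software tool described in the abstract; the grid size, wavelength, and mode width are hypothetical values.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, distance):
    """Propagate a complex optical field by `distance` using the
    angular spectrum method (FFT-based free-space propagation)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies, cycles/m
    fxx, fyy = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * fxx)**2 - (2 * np.pi * fyy)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    # Propagating components get a phase shift; evanescent ones are dropped.
    transfer = np.exp(1j * kz * distance) * (kz_sq > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Hypothetical example: a 1550 nm Gaussian mode propagated 50 um.
n, dx = 128, 0.5e-6
x = (np.arange(n) - n // 2) * dx
xx, yy = np.meshgrid(x, x)
mode = np.exp(-(xx**2 + yy**2) / (4e-6)**2)
out = angular_spectrum_propagate(mode, 1.55e-6, dx, 50e-6)
```

Because the transfer function is unit-magnitude on propagating components, total power is conserved for a well-sampled mode.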

Relevance:

90.00%

Publisher:

Abstract:

Materials are inherently multi-scale in nature, with distinct characteristics at length scales ranging from atoms to the bulk material. There are no widely accepted predictive multi-scale modeling techniques that span from the atomic level to the bulk and relate the effects of structure at the nanometer scale (10⁻⁹ m) to macro-scale properties. Traditional engineering treats matter as continuous, with no internal structure. In contrast, physicists have dealt with matter in its discrete structure at small length scales to understand the fundamental behavior of materials. Multiscale modeling is of great scientific and technical importance because it can aid in designing novel materials whose properties are tailored to a specific application, such as multi-functional materials. Polymer nanocomposite materials have the potential to provide significant increases in mechanical properties relative to current polymers used for structural applications. For a given reinforcement volume fraction, nanoscale reinforcements can increase the effective interface between the reinforcement and the matrix by orders of magnitude relative to traditional micro- or macro-scale reinforcements. To facilitate the development of polymer nanocomposite materials, constitutive relationships must be established that predict the bulk mechanical properties of the materials as a function of their molecular structure. A computational hierarchical multiscale modeling technique is developed to study the bulk-level constitutive behavior of polymeric materials as a function of their molecular chemistry. Parameters and modeling techniques from computational chemistry through continuum mechanics are utilized in the current modeling method. The cause-and-effect relationships among the parameters are studied to establish an efficient modeling framework.
The proposed methodology is applied to three different polymers and validated using experimental data available in the literature.
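As a minimal illustration of passing information between scales, bulk composite stiffness can be bounded from constituent properties. This is a textbook Voigt/Reuss sketch with hypothetical moduli, not the hierarchical method developed in the work.

```python
def rule_of_mixtures(e_fiber, e_matrix, vf):
    """Voigt (upper) and Reuss (lower) bounds on the effective
    Young's modulus of a two-phase composite with fiber fraction vf."""
    upper = vf * e_fiber + (1 - vf) * e_matrix
    lower = 1.0 / (vf / e_fiber + (1 - vf) / e_matrix)
    return upper, lower

# Hypothetical values: a nanoscale simulation yields an effective
# reinforcement modulus of 500 GPa; the bulk polymer matrix is 3 GPa.
upper, lower = rule_of_mixtures(500.0, 3.0, 0.05)
```

Any homogenized bulk estimate produced by a hierarchical scheme should fall between these two bounds.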

Relevance:

80.00%

Publisher:

Abstract:

The goal of this research is to provide a framework for the vibro-acoustic analysis and design of multiple-layer constrained-damping structures. Existing research on damping and viscoelastic damping mechanisms is limited to four mainstream approaches: modeling techniques for damping treatments/materials; control through the electro-mechanical effect using a piezoelectric layer; optimization by adjusting the parameters of the structure to meet design requirements; and identification of the damping material's properties through the response of the structure. This research proposes a systematic design methodology for multiple-layer constrained damping beams that takes vibro-acoustics into consideration. A modeling technique for studying the vibro-acoustics of multiple-layered viscoelastic laminated beams using the Biot damping model is presented, based on a hybrid numerical model: the boundary element method (BEM) is used to model the acoustical cavity, whereas the finite element method (FEM) is the basis for the vibration analysis of the multiple-layered beam structure. Through the proposed procedure, the analysis can easily be extended to other complex geometries with arbitrary boundary conditions. The nonlinear behavior of viscoelastic damping materials is represented by the Biot damping model, taking into account the effects of frequency, temperature, and different damping materials for individual layers. A curve-fitting procedure used to obtain the Biot constants for different damping materials at each temperature is explained. The results of the structural vibration analysis for selected beams agree with published closed-form results, and the radiated noise for a sample beam structure obtained using commercial BEM software is compared with the acoustical results of the same beam using the Biot damping model.
The extension of the Biot damping model to the MDOF (multiple-degrees-of-freedom) dynamics equations of a discrete system is demonstrated in order to introduce different types of viscoelastic damping materials. The mechanical properties of viscoelastic damping materials, such as shear modulus and loss factor, change with ambient temperature and frequency. The application of multiple-layer treatment increases the damping characteristics of the structure significantly and thus helps to attenuate vibration and noise over a broad range of frequencies and temperatures. The main contributions of this dissertation comprise three major tasks: 1) study of the viscoelastic damping mechanism and the dynamics equation of a multilayer damped system incorporating the Biot damping model; 2) building the finite element method (FEM) model of the multiple-layer constrained viscoelastic damping beam and conducting the vibration analysis; and 3) extending the vibration problem to the boundary element method (BEM) based acoustical problem and comparing the results with commercial simulation software.
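The curve-fitting step for the Biot constants can be sketched as follows. The sketch assumes a one-term mini-oscillator form of the Biot model and synthetic "measured" storage-modulus data; the moduli and constants are hypothetical, not values from the dissertation.

```python
import numpy as np
from scipy.optimize import curve_fit

def biot_storage(omega, g0, a, b):
    """Storage modulus of a one-term Biot model
    G(s) = g0 * (1 + a*s/(s + b)), evaluated on s = i*omega:
    Re{G} = g0 * (1 + a*omega^2 / (b^2 + omega^2))."""
    return g0 * (1 + a * omega**2 / (b**2 + omega**2))

# Synthetic "measured" storage-modulus data at one temperature.
omega = np.linspace(1.0, 1000.0, 50)              # rad/s
measured = biot_storage(omega, 2.0e6, 1.5, 200.0)  # Pa (illustrative)
params, _ = curve_fit(biot_storage, omega, measured, p0=[1.0e6, 1.0, 100.0])
```

In practice the fit is repeated per damping material and per temperature, and more mini-oscillator terms are added until the frequency dependence is captured.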

Relevance:

80.00%

Publisher:

Abstract:

This dissertation discusses structural-electrostatic modeling techniques, genetic-algorithm-based optimization, and control design for electrostatic micro devices. First, an alternative modeling technique for electrostatic micro devices, the interpolated force model, is discussed. The method provides improved computational efficiency relative to a benchmark model, as well as improved accuracy for irregular electrode configurations relative to a common approximate model, the parallel-plate approximation model. For the configuration most similar to two parallel plates, expected to be the best-case scenario for the approximate model, both the parallel-plate approximation model and the interpolated force model maintained less than 2.2% error in static deflection compared to the benchmark model. For the configuration expected to be the worst-case scenario for the parallel-plate approximation model, the interpolated force model maintained less than 2.9% error in static deflection, while the parallel-plate approximation model was incapable of handling the configuration. Second, genetic-algorithm-based optimization is shown to improve the design of an electrostatic micro sensor. The design space is enlarged from published design spaces to include the configuration of both sensing and actuation electrodes, material distribution, actuation voltage, and other geometric dimensions. For a small population, the design was improved by approximately a factor of 6 over 15 generations, to a fitness value of 3.2 fF. For a larger population seeded with the best configurations from the previous optimization, the design was improved by another 7% in 5 generations, to a fitness value of 3.0 fF. Third, a learning control algorithm is presented that reduces the closing time of a radio-frequency microelectromechanical systems (RF MEMS) switch by minimizing bounce while maintaining robustness to fabrication variability.
Electrostatic actuation of the plate causes pull-in with high impact velocities, which are difficult to control due to parameter variations from part to part. A single-degree-of-freedom model was used to design a learning control algorithm that shapes the actuation voltage based on the open/closed state of the switch. Experiments on three test switches show that after 5-10 iterations, the learning algorithm lands the switch with an impact velocity not exceeding 0.2 m/s, eliminating bounce.
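A genetic-algorithm loop of the kind used for the design optimization can be sketched generically. The fitness function below is a toy stand-in (distance from a target design vector), not the capacitance-based fitness of the dissertation, and all GA settings are hypothetical.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=40,
                     mutation=0.1, seed=0):
    """Minimal real-coded GA: elitism, blend crossover, Gaussian
    mutation. Minimizes `fitness`; returns the best design vector."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # blend crossover
            child = [min(max(g + rng.gauss(0, mutation * (hi - lo)), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]    # mutate, clamp
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Toy stand-in fitness: squared distance from the target design (3, -1).
best = genetic_optimize(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2,
                        bounds=[(-5, 5), (-5, 5)])
```

Seeding a larger population with the best individuals from a previous run, as described above, amounts to replacing the random initial `pop` with saved designs.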

Relevance:

40.00%

Publisher:

Abstract:

The South Florida Water Management District (SFWMD) manages and operates numerous water control structures that are subject to scour. In an effort to reduce scour downstream of these gated structures, laboratory experiments were performed to investigate the effect of active air injection downstream of the terminal structure of a gated spillway on the depth of the scour hole. A literature review of similar research revealed significant variables such as the ratio of headwater to tailwater depth, the diffuser angle, sediment uniformity, and the ratio of air to water volumetric discharge. The experimental design was based on the analysis of several of these non-dimensional parameters. Bed scouring in stilling basins downstream of gated spillways has been identified as posing a serious risk to a spillway's structural stability. Although this type of scour has been studied in the past, it continues to represent a real threat to water control structures and requires additional attention. A hydraulic scour channel comprising a head tank, flow-straightening section, gated spillway, stilling basin, scour section, sediment trap, and tail tank was used to further this analysis. Experiments were performed in a laboratory channel containing a 1:30 scale model of the SFWMD S65E spillway structure. To ascertain the feasibility of air injection for scour reduction, a proof-of-concept study was performed. Experiments were conducted without air entrainment and with high, medium, and low air entrainment rates for high and low headwater conditions. For the cases with no air entrainment, it was found that there was excessive scour downstream of the structure due to a downward roller formed upon exiting the downstream sill of the stilling basin.
When air was introduced vertically just downstream of, and at the same level as, the stilling basin sill, it was found that air entrainment does reduce scour depth, by up to 58% depending on the air flow rate, but shifts the deepest scour location to the sides of the channel bed instead of the center. Various hydraulic flow conditions were tested without air injection to determine which scenario caused the most scour. That scenario, uncontrolled free flow, in which water does not contact the gate and the water elevation in the stilling basin is lower than the spillway crest, was used for the remainder of the experiments testing air injection. Various air flow rates, diffuser elevations, air hole diameters, air hole spacings, diffuser angles, and widths were tested in over 120 experiments. The optimal parameters include air injection at a rate that results in a water-to-air ratio of 0.28, air holes 1.016 mm in diameter spanning the entire width of the stilling basin, and a vertically oriented injection pattern. Detailed flow measurements were collected for one case with air injection and one without. An identical flow scenario was used for each experiment, namely a high flow rate and upstream headwater depth with a low tailwater depth. Equilibrium bed scour and velocity measurements were taken with an acoustic Doppler velocimeter at nearly 3000 points. The velocity data were used to construct a vector plot in order to identify which flow components contribute to the scour hole. Additionally, turbulence parameters were calculated in an effort to understand why air injection reduced bed scour. Turbulence intensity, normalized mean flow, normalized kinetic energy, and anisotropy-of-turbulence plots were constructed. A clear trend emerged showing that air injection reduces turbulence near the bed and therefore reduces scour potential.
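The turbulence intensity used in such plots is the RMS of the fluctuating velocity component over the mean. A minimal sketch on a synthetic ADV-like series; the mean velocity and noise level are hypothetical, not values from the experiments.

```python
import numpy as np

def turbulence_stats(u):
    """Mean velocity, RMS fluctuation, and turbulence intensity
    for a single-component ADV velocity time series."""
    u = np.asarray(u, dtype=float)
    u_mean = u.mean()
    u_rms = np.sqrt(np.mean((u - u_mean) ** 2))   # RMS of u' = u - <u>
    return u_mean, u_rms, u_rms / abs(u_mean)

# Illustrative record: 0.8 m/s mean flow with synthetic fluctuations.
rng = np.random.default_rng(1)
series = 0.8 + 0.05 * rng.standard_normal(5000)
mean, rms, ti = turbulence_stats(series)
```

Repeating this at each of the ~3000 measurement points, for each velocity component, yields the turbulence-intensity fields that were contoured.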

Relevance:

30.00%

Publisher:

Abstract:

Riparian zones are dynamic, transitional ecosystems between aquatic and terrestrial ecosystems with well-defined vegetation and soil characteristics. Because of their high variability, developing an all-encompassing definition for riparian ecotones is challenging. However, all riparian ecotones depend on two primary factors: the watercourse and its associated floodplain. Previous approaches to riparian boundary delineation have used fixed-width buffers, but this methodology has proven inadequate, as it takes only the watercourse into consideration and ignores critical geomorphology and the associated vegetation and soil characteristics. Our approach offers advantages over previously used methods by utilizing: the geospatial modeling capabilities of ArcMap GIS; a better sampling technique along the watercourse that can distinguish the 50-year floodplain, which is the optimal hydrologic descriptor of riparian ecotones; the Soil Survey Geographic (SSURGO) and National Wetland Inventory (NWI) databases to distinguish contiguous areas beyond the 50-year floodplain; and land use/cover characteristics associated with the delineated riparian zones. The model uses spatial data readily available from federal and state agencies and geospatial clearinghouses. An accuracy assessment was performed to assess the impact of varying the 50-year flood height, changing the DEM spatial resolution (1, 3, 5, and 10 m), and positional inaccuracies in the National Hydrography Dataset (NHD) streams layer on the boundary placement of the delineated variable-width riparian ecotones. The result of this study is a robust, automated GIS-based model, attached to ESRI ArcMap software, that delineates and classifies variable-width riparian ecotones.
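The core of a flood-height-based delineation can be illustrated with a simple elevation threshold. This planar-flood sketch on a toy valley cross-section ignores hydraulic connectivity and the soils/wetlands/land-cover layers, and is not the ArcMap model itself; all elevations are hypothetical.

```python
import numpy as np

def flood_extent(dem, stream_elevation, flood_height):
    """Boolean mask of cells inundated when the stream surface rises
    `flood_height` above its base elevation (simple planar flood)."""
    return dem <= stream_elevation + flood_height

# Toy 1-D valley cross-section: channel at 100 m, banks rising outward.
dem = np.array([104.0, 102.0, 100.5, 100.0, 100.6, 101.8, 103.5])
mask = flood_extent(dem, stream_elevation=100.0, flood_height=1.5)
```

Because the mask width varies with local topography, the resulting riparian boundary is variable-width by construction, unlike a fixed buffer.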

Relevance:

30.00%

Publisher:

Abstract:

Lava flow modeling can be a powerful tool in hazard assessment; however, the ability to produce accurate models is usually limited by a lack of high-resolution, up-to-date digital elevation models (DEMs). This is especially apparent in places such as Kilauea Volcano (Hawaii), where active lava flows frequently alter the terrain. In this study, we use a new technique to create high-resolution DEMs of Kilauea from synthetic aperture radar (SAR) data acquired by the TanDEM-X (TDX) satellite. We convert raw TDX SAR data into a geocoded DEM using GAMMA software [Werner et al., 2000]. This process can be completed in several hours and permits creation of updated DEMs as soon as new TDX data are available. To test the DEMs, we use the Harris and Rowland [2001] FLOWGO lava flow model combined with the Favalli et al. [2005] DOWNFLOW model to simulate the 3-15 August 2011 eruption on Kilauea's East Rift Zone. Results were compared with simulations using the older, lower-resolution 2000 SRTM DEM of Hawaii. Effusion rates used in the model are derived from MODIS thermal infrared satellite imagery. FLOWGO simulations using the TDX DEM produced a single flow line that matched the August 2011 flow almost perfectly but could not recreate the entire flow field due to the relatively high noise level of the DEM. The issues with short model flow lengths can be resolved by filtering noise from the DEM. Model simulations using the outdated SRTM DEM produced a flow field that followed a different trajectory from the one observed: numerous lava flows have been emplaced at Kilauea since the creation of the SRTM DEM, leading the model to project flow lines into areas that have since been covered by fresh lava. These results show that DEMs can quickly become outdated on active volcanoes, but our new technique offers the potential to produce accurate, updated DEMs for modeling lava flow hazards.
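DEM noise filtering of the kind mentioned above can be sketched with a median filter, which suppresses speckle-like spikes while preserving terrain edges better than a mean filter. The toy DEM and spike are illustrative; the study does not specify this particular filter.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise_dem(dem, size=3):
    """Suppress isolated speckle-like noise in a SAR-derived DEM
    with a moving-window median filter."""
    return median_filter(dem, size=size)

# Toy DEM: a smooth slope plus one spurious spike.
dem = np.tile(np.linspace(0.0, 10.0, 11), (11, 1))
dem[5, 5] += 50.0            # noise spike, e.g. phase-unwrapping artifact
clean = denoise_dem(dem)
```

On the filtered surface, a steepest-descent path model such as DOWNFLOW is no longer trapped or diverted by isolated spikes.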

Relevance:

30.00%

Publisher:

Abstract:

The municipality of San Juan La Laguna, Guatemala, is home to approximately 5,200 people and is located on the western side of the Lake Atitlán caldera. Steep slopes surround all but the eastern side of San Juan. The Lake Atitlán watershed is susceptible to many natural hazards, but the most predictable are the landslides that can occur each rainy season, especially during high-intensity events. Hurricane Stan hit Guatemala in October 2005; the resulting flooding and landslides devastated the Atitlán region. Locations of landslide and non-landslide points were obtained from field observations and from orthophotos taken following Hurricane Stan. This study used data for multiple attributes at every landslide and non-landslide point and applied different multivariate analyses to optimize a model for landslide prediction during high-intensity precipitation events like Hurricane Stan. The attributes considered in this study are: geology, geomorphology, distance to faults and streams, land use, slope, aspect, curvature, plan curvature, profile curvature, and topographic wetness index. The attributes were pre-evaluated for their ability to predict landslides using four different attribute evaluators, all available in the open-source data mining software Weka: filtered subset, information gain, gain ratio, and chi-squared. Three multivariate algorithms (the J48 decision tree, logistic regression, and BayesNet) were optimized for landslide prediction using different attributes. The following statistical parameters were used to evaluate model accuracy: precision, recall, F-measure, and area under the receiver operating characteristic (ROC) curve. The BayesNet algorithm yielded the most accurate model and was used to build a probability map of landslide initiation points. The probability map developed in this study was also compared to the results of a bivariate landslide susceptibility analysis conducted for the watershed encompassing Lake Atitlán and San Juan.
Landslides from Tropical Storm Agatha (2010) were used to independently validate both this study's multivariate model and the bivariate model. The ultimate aim of this study is to share the methodology and results with municipal contacts from the author's time as a U.S. Peace Corps volunteer, to facilitate more effective landslide hazard planning and mitigation in the future.
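The evaluation metrics named above (precision, recall, F-measure) follow directly from the confusion counts of a binary landslide / no-landslide classification. A minimal sketch with toy predictions, not the study's Weka output:

```python
def classification_metrics(y_true, y_pred):
    """Precision, recall, and F-measure for binary labels
    (1 = landslide, 0 = non-landslide)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Toy labels at 8 sampled points (illustrative only).
truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred = [1, 1, 1, 0, 1, 0, 0, 0]
p, r, f = classification_metrics(truth, pred)
```

The ROC AUC additionally requires the classifier's predicted probabilities, sweeping the decision threshold rather than using a single set of hard labels.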

Relevance:

30.00%

Publisher:

Abstract:

For the past three decades, the automotive industry has faced two main conflicting challenges: improving fuel economy and meeting emissions standards. This has driven engineers and researchers around the world to develop engines and powertrains that can meet these two daunting challenges. Focusing on internal combustion engines, there are very few options to enhance their performance beyond the current standards without increasing the price considerably. Homogeneous charge compression ignition (HCCI) is one combustion technique with the potential to partially meet the current critical challenges, including CAFE standards and stringent EPA emissions standards. HCCI works on very lean mixtures compared to current SI engines, resulting in very low combustion temperatures and ultra-low NOx emissions. When controlled accurately, these engines also produce ultra-low soot. On the other hand, HCCI engines suffer from high unburnt hydrocarbon and carbon monoxide emissions. The technology also faces an acute combustion control problem which, if not dealt with properly, yields highly unfavorable operating conditions and exhaust emissions. This thesis contains two main parts: one deals with developing an HCCI experimental setup, and the other focuses on developing a grey-box modeling technique to control HCCI exhaust gas emissions. The experimental part gives complete details of the modifications made to the stock engine to run it in HCCI mode. This part also includes details and specifications of all the sensors, actuators, and other auxiliary parts attached to the conventional SI engine in order to run and monitor the engine in SI mode and in future SI-HCCI mode-switching studies. In the latter part, around 600 data points from two different HCCI setups for two different engines are studied, and a grey-box model for emission prediction is developed.
The grey-box model is trained on 75% of the data, and the remaining data are used for validation. An average 70% increase in accuracy in predicting engine performance was found using the grey-box model over an empirical (black-box) model during this study. The grey-box model offers a solution to the difficulty of real-time control of an HCCI engine, and the model in this thesis is the first control-oriented model in the literature for predicting HCCI engine emissions.
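A grey-box structure pairs a physics-based baseline with a data-driven correction. The sketch below fits a linear correction to the residual of a stand-in physics model; the functional forms and data are hypothetical, not the thesis's emission model.

```python
import numpy as np

def fit_grey_box(x, y_physics, y_measured):
    """Grey-box sketch: keep the physics baseline and fit a linear
    black-box correction to its residual, y ~ y_physics + [x, 1] @ w."""
    X = np.column_stack([x, np.ones_like(x)])
    w, *_ = np.linalg.lstsq(X, y_measured - y_physics, rcond=None)
    return w

def predict_grey_box(x, y_physics, w):
    X = np.column_stack([x, np.ones_like(x)])
    return y_physics + X @ w

# Synthetic example: the physics model misses a linear trend in the data.
x = np.linspace(0.0, 1.0, 50)
y_phys = 2.0 * x**2                       # stand-in physics model
y_meas = 2.0 * x**2 + 0.5 * x + 0.1       # "measured" output
w = fit_grey_box(x, y_phys, y_meas)
y_hat = predict_grey_box(x, y_phys, w)
```

In a 75/25 split, `fit_grey_box` would see only the training portion, with the held-out quarter used to check that the correction generalizes.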

Relevance:

30.00%

Publisher:

Abstract:

Determination of combustion metrics for a diesel engine has the potential to provide feedback for closed-loop combustion phasing control to meet current and upcoming emission and fuel consumption regulations. This thesis focused on the estimation of combustion metrics including start of combustion (SOC), crank angle location of 50% cumulative heat release (CA50), peak pressure crank angle location (PPCL), peak pressure amplitude (PPA), peak apparent heat release rate crank angle location (PACL), mean absolute pressure error (MAPE), and peak apparent heat release rate amplitude (PAA). In-cylinder pressure has been used in the laboratory as the primary mechanism for characterizing combustion rates, and more recently it has been used in series production vehicles for feedback control. However, the intrusive measurement with an in-cylinder pressure sensor is expensive and requires a special mounting process and engine structure modification. As an alternative, this work investigated block-mounted accelerometers for estimating combustion metrics in a 9 L I6 diesel engine. The transfer path between the accelerometer signal and the in-cylinder pressure signal therefore needs to be modeled; given the transfer path, the in-cylinder pressure signal and the combustion metrics can be accurately estimated (recovered) from accelerometer signals. The method for determining the transfer path, and its applicability, is critical in utilizing accelerometers for feedback. The single-input single-output (SISO) frequency response function (FRF) is the most common transfer path model; however, it is shown here to have low robustness to varying engine operating conditions. This thesis examines mechanisms to improve the robustness of the FRF for combustion metrics estimation. First, an adaptation process based on the particle swarm optimization algorithm was developed and added to the single-input single-output model.
Second, a multiple-input single-output (MISO) FRF model coupled with principal component analysis and an offset compensation process was investigated and applied. Improvement in FRF robustness was achieved with both approaches. Furthermore, a neural network was investigated as a nonlinear model of the transfer path between the accelerometer signal and the apparent heat release rate. The transfer path between acoustical emissions and the in-cylinder pressure signal was also investigated in this dissertation, on a high pressure common rail (HPCR) 1.9 L TDI diesel engine. Acoustical emissions are an important factor in the powertrain development process. In this part of the research, a transfer path was developed between the two and then used to predict the engine noise level with the measured in-cylinder pressure as the input. Three methods for transfer path modeling were applied, and the method based on the cepstral smoothing technique led to the most accurate results, with an average estimation error of 2 dBA and a root mean square error of 1.5 dBA. Finally, a linear model for engine noise level estimation was proposed with the in-cylinder pressure signal and the engine speed as components.
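A SISO FRF transfer-path model is commonly estimated with the standard H1 estimator, H1(f) = Sxy(f)/Sxx(f). The sketch below uses synthetic signals, with a pure gain standing in for the accelerometer-to-pressure path; the sampling rate and segment length are hypothetical.

```python
import numpy as np
from scipy.signal import csd, welch

def h1_frf(x, y, fs, nperseg=256):
    """H1 frequency response function estimate between input x
    (e.g., block accelerometer) and output y (e.g., cylinder pressure):
    H1(f) = Sxy(f) / Sxx(f)."""
    f, s_xy = csd(x, y, fs=fs, nperseg=nperseg)    # cross-spectral density
    _, s_xx = welch(x, fs=fs, nperseg=nperseg)     # input auto-spectrum
    return f, s_xy / s_xx

# Synthetic check: y is x scaled by 2, so |H1| should be 2 at all bins.
rng = np.random.default_rng(0)
x = rng.standard_normal(8192)
y = 2.0 * x
f, H = h1_frf(x, y, fs=1000.0)
```

With a bank of such FRFs, an estimated pressure trace is recovered by multiplying the accelerometer spectrum by H1 and inverse-transforming; the adaptation and MISO extensions in the thesis address how H1 drifts with operating condition.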

Relevance:

30.00%

Publisher:

Abstract:

The study of volcano deformation data can provide information on magma processes and help assess the potential for future eruptions. In employing inverse deformation modeling on these data, we attempt to characterize the geometry, location, and volume/pressure change of a deformation source. Techniques currently used to model sheet intrusions (e.g., dikes and sills) often require significant a priori assumptions about source geometry and can require testing a large number of parameters. Moreover, surface deformation is a non-linear function of the source geometry and location, which requires the use of Monte Carlo inversion techniques and leads to long computation times. Recently, ‘displacement tomography’ models have been used to characterize magma reservoirs by inverting source deformation data for volume changes using a grid of point sources in the subsurface. The computations involved in these models are less intensive, as no assumptions are made about the source geometry and location, and the relationship between the point sources and the surface deformation is linear. In this project, seeking a less computationally intensive technique for fracture sources, we tested whether this displacement tomography method for reservoirs could be used for sheet intrusions. We began by simulating the opening of three synthetic dikes of known geometry and location using an established deformation model for fracture sources. We then sought to reproduce the displacements and volume changes undergone by the fractures using the sources employed in the tomography methodology. The results of this validation indicate that the volumetric point sources are not appropriate for locating fracture sources; however, they may provide useful qualitative information on volume changes occurring in the surrounding rock, and thereby indirectly indicate the source location.
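The linearity that makes displacement tomography attractive can be shown in one horizontal dimension: surface displacement is a linear combination of unit-volume-change point sources (here via the classic Mogi point-source kernel), so the inversion reduces to a least-squares solve. The geometry and values below are illustrative, not those of the project.

```python
import numpy as np

def mogi_uz(x_src, depth, x_obs, nu=0.25):
    """Surface vertical displacement per unit volume change for a
    Mogi point source at `depth`, along a profile of observation
    points x_obs: uz = (1-nu)/pi * d / (r^2 + d^2)^(3/2)."""
    r = np.hypot(x_obs - x_src, depth)
    return (1 - nu) / np.pi * depth / r**3

# Grid of candidate point sources; invert surface data for volume changes.
x_obs = np.linspace(-10.0, 10.0, 41)                 # observation profile
x_srcs = np.linspace(-5.0, 5.0, 11)                  # source grid at 2 km depth
G = np.column_stack([mogi_uz(xs, 2.0, x_obs) for xs in x_srcs])
true_dv = np.zeros(11)
true_dv[5] = 1.0                                     # one inflating source
data = G @ true_dv                                   # synthetic "observed" uplift
m, *_ = np.linalg.lstsq(G, data, rcond=None)         # linear inversion
```

No Monte Carlo search is needed: the design matrix `G` is built once from the fixed source grid, which is precisely why these inversions are fast compared with nonlinear geometry fitting.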

Relevance:

30.00%

Publisher:

Abstract:

Implementation of stable aeroelastic models with the ability to capture the complex features of multi-concept smart blades is a prime step in reducing the uncertainties that come along with blade dynamics. Numerical simulations of fluid-structure interaction can thus be used to test realistic scenarios comprising full-scale blades at a reasonably low computational cost. A code combining two advanced numerical models was designed and run on a parallel HPC supercomputer platform. The first model is based on a variation of the dimensional reduction technique proposed by Hodges and Yu, and it captures the structural response of heterogeneous composite blades. This technique reduces the geometrical complexity of the heterogeneous blade section to a stiffness matrix for an equivalent beam. The derived equivalent 1-D strain energy matrix is similar to the actual 3-D strain energy matrix in an asymptotic sense. Because this 1-D matrix allows the blade structure to be accurately modeled as a 1-D finite element problem, it substantially reduces the computational effort, and consequently the computational cost, required to model the structural dynamics at each step. The second model is an implementation of blade element momentum theory. In this approach, all the velocities and forces are mapped with orthogonal matrices that capture the large deformations and the effects of rotations in calculating the aerodynamic forces, which ultimately accounts for the complex flexo-torsional deformations. In this thesis we have successfully tested these computational tools, developed by MTU's research team, for the aeroelastic analysis of wind turbine blades. The validation in this thesis is largely based on several experiments performed on the NREL 5-MW blade, as this is widely accepted as a benchmark blade in the wind industry.
Along with the use of this innovative model, the internal blade structure was also modified to build on the existing benefits of the already advanced numerical models.
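A single-section sketch of the blade element momentum iteration: the inflow angle and the axial/tangential induction factors are updated until mutually consistent. This is lift-only with no tip-loss or drag correction, and the section values (local tip-speed ratio, solidity, lift coefficient) are hypothetical, not taken from the NREL 5-MW blade.

```python
import math

def bem_induction(tsr_local, sigma, cl, iterations=200):
    """Fixed-point iteration for the axial (a) and tangential (ap)
    induction factors of blade element momentum theory at one
    blade section (lift only, no tip-loss correction)."""
    a, ap = 0.0, 0.0
    for _ in range(iterations):
        phi = math.atan2(1 - a, (1 + ap) * tsr_local)   # inflow angle
        cn = cl * math.cos(phi)                          # normal force coeff.
        ct = cl * math.sin(phi)                          # tangential force coeff.
        a = 1.0 / (4 * math.sin(phi) ** 2 / (sigma * cn) + 1)
        ap = 1.0 / (4 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1)
    return a, ap

# Illustrative section: local tip-speed ratio 6, solidity 0.02, Cl = 1.0.
a, ap = bem_induction(6.0, 0.02, 1.0)
```

In a full aeroelastic loop, the converged section forces feed the 1-D beam model, whose deformed geometry in turn updates the local inflow for the next time step.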