996 results for Michigan Tech


Relevance: 60.00%

Abstract:

Studies suggest that hurricane hazard patterns (e.g., intensity and frequency) may change as a consequence of the changing global climate, and as hurricane patterns change, hurricane damage risks and costs can be expected to change as well. This indicates the need to develop hurricane risk assessment models that are capable of accounting for changing hurricane hazard patterns, and to develop hurricane mitigation and climate adaptation strategies. This thesis proposes a comprehensive hurricane risk assessment and mitigation framework that accounts for a changing global climate and that can be adapted to various types of infrastructure, including residential buildings and power distribution poles. The framework includes hurricane wind field models, hurricane surge height models and hurricane vulnerability models to estimate damage risks due to hurricane wind speed, hurricane frequency, and hurricane-induced storm surge, and it accounts for the time-dependent properties of these parameters as a result of climate change. The research then incorporates median insured house values, discount rates, housing inventory, and related data to estimate hurricane damage costs to residential construction. The framework was also adapted to timber distribution poles to assess the impacts climate change may have on timber distribution pole failure. This research finds that climate change may have a significant impact on the hurricane damage risks and damage costs of residential construction and timber distribution poles. In an effort to reduce damage costs, this research develops mitigation/adaptation strategies for residential construction and timber distribution poles. The cost-effectiveness of these adaptation/mitigation strategies is evaluated through the use of a Life-Cycle Cost (LCC) analysis. In addition, a scenario-based analysis of mitigation strategies for timber distribution poles is included.
For both residential construction and timber distribution poles, adaptation/mitigation measures were found to reduce damage costs. Finally, the research develops the Coastal Community Social Vulnerability Index (CCSVI) to include the social vulnerability of a region to hurricane hazards within this hurricane risk assessment. This index quantifies the social vulnerability of a region by combining various social characteristics of a region with time-dependent parameters of hurricanes (i.e., hurricane wind and hurricane-induced storm surge). Climate change was found to have an impact on the CCSVI (i.e., climate change may have an impact on the social vulnerability of hurricane-prone regions).
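The Life-Cycle Cost comparison used to judge cost-effectiveness can be sketched as a discounted sum of annual damage costs with and without a mitigation measure. This is a minimal illustration, not the thesis's model: the costs, discount rate, and analysis period below are hypothetical.

```python
# Hypothetical life-cycle cost (LCC) comparison for a mitigation measure.
# All dollar amounts, the discount rate, and the horizon are illustrative.

def present_value(annual_cost, discount_rate, years):
    """Discounted sum of a constant annual cost over the analysis period."""
    return sum(annual_cost / (1 + discount_rate) ** t for t in range(1, years + 1))

def lcc(initial_cost, annual_damage, discount_rate=0.03, years=50):
    """Life-cycle cost = up-front cost + present value of expected annual damage."""
    return initial_cost + present_value(annual_damage, discount_rate, years)

baseline  = lcc(initial_cost=0.0,     annual_damage=2000.0)  # do nothing
mitigated = lcc(initial_cost=15000.0, annual_damage=500.0)   # e.g., strengthened roof

cost_effective = mitigated < baseline  # measure pays for itself over the horizon
```

A measure is judged cost-effective when its up-front cost plus the discounted residual damage falls below the do-nothing life-cycle cost.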

Abstract:

Mount Etna, Italy, is one of the most active volcanoes in the world, and is also regarded as one of the strongest volcanic sources of sulfur dioxide (SO2) emissions to the atmosphere. Since October 2004, an automated ultraviolet (UV) spectrometer network (FLAME) has made ground-based SO2 measurements with high temporal resolution, providing an opportunity to validate satellite SO2 measurements at Etna. The Ozone Monitoring Instrument (OMI) on the NASA Aura satellite, which makes global daily measurements of trace gases in the atmosphere, was used to quantify the SO2 amounts released by the volcano during paroxysmal lava-fountaining events from 2004 to the present. We present the first comparison between SO2 emission rates and SO2 burdens obtained by the OMI transect technique and OMI Normalized Cloud-Mass (NCM) technique and the ground-based FLAME Mini-DOAS measurements. In spite of a good data set from the FLAME network, finding coincident OMI and FLAME measurements proved challenging and only one paroxysmal event provided a good validation for OMI. Another goal of this work was to assess the efficacy of the FLAME network in capturing paroxysmal SO2 emissions from Etna, given that the FLAME network is only operational during daylight hours and some paroxysms occur at night. OMI measurements are advantageous since SO2 emissions from nighttime paroxysms can often be quantified on the following day, providing improved constraints on Etna's SO2 budget.
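The transect-style emission-rate calculation underlying both the DOAS and OMI estimates can be sketched as follows: integrate the SO2 column amounts across the plume and multiply by the plume speed. The column amounts, transect spacing, and plume speed below are made-up illustrative values, not FLAME or OMI data.

```python
# Sketch of a plume-transect SO2 emission-rate calculation (illustrative values).

def emission_rate_kg_s(columns_molec_cm2, step_m, plume_speed_ms):
    """SO2 emission rate from column amounts sampled across a plume transect.

    Integrating the columns across the transect gives molecules per metre of
    along-plume distance; multiplying by the plume speed gives molecules/s,
    which is then converted to kg/s.
    """
    MOLAR_MASS_SO2 = 64.066e-3   # kg/mol
    AVOGADRO = 6.022e23          # molecules/mol
    molec_per_m = sum(c * 1e4 for c in columns_molec_cm2) * step_m  # 1e4: cm^-2 -> m^-2
    molec_per_s = molec_per_m * plume_speed_ms
    return molec_per_s * MOLAR_MASS_SO2 / AVOGADRO

# hypothetical transect: background, rising through the plume core, back to background
columns = [0.0, 1e17, 5e17, 1e18, 5e17, 1e17, 0.0]   # molecules/cm^2
rate = emission_rate_kg_s(columns, step_m=50.0, plume_speed_ms=5.0)
```

The same geometry applies whether the "transect" is a traverse under the plume or a row of satellite pixels, which is what makes the two data sets comparable.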

Abstract:

Experimental warming provides a method to determine how an ecosystem will respond to increased temperatures. Northern peatland ecosystems, sensitive to changing climates, provide an excellent setting for experimental warming. Storing great quantities of carbon, northern peatlands play a critical role in regulating global temperatures. Two of the most common methods of experimental warming include open top chambers (OTCs) and infrared (IR) lamps. These warming systems have been used in many ecosystems throughout the world, yet their efficacy in creating a warmer environment is variable and has not been widely studied. To date, there has not been a direct, experimentally controlled comparison of OTCs and IR lamps. As a result, a factorial study was implemented to compare the warming efficacy of OTCs and IR lamps and to examine the resulting carbon dioxide (CO2) and methane (CH4) flux rates in a Lake Superior peatland. IR lamps warmed the ecosystem on average by 1-2 °C, with the majority of warming occurring during nighttime hours. OTCs did not provide any long-term warming above control plots, which is contrary to similar OTC studies at high latitudes. By investigating diurnal heating patterns and micrometeorological variables, we were able to conclude that OTCs were not achieving strong daytime heating peaks and were often cooler than control plots during nighttime hours. Temperate day-length, cloudy and humid conditions, and latent heat loss were factors that inhibited OTC warming. There were no changes in CO2 flux between warming treatments in lawn plots. Gross ecosystem production was significantly greater in IR lamp-hummock plots, while ecosystem respiration was not affected. CH4 flux was not significantly affected by warming treatment. Minimal daytime heating differences, high ambient temperatures, decay-resistant substrate, as well as other factors suppressed significant gas flux responses from warming treatments.

Abstract:

Experimental studies on epoxies report that the microstructure consists of highly crosslinked localized regions connected with a dispersed phase of low crosslink density. The various thermo-mechanical properties of epoxies may be affected by this crosslink distribution, but because experiments cannot report the exact number of crosslinked covalent bonds present in the structure, molecular dynamics is used in this work to determine the influence of crosslink distribution on thermo-mechanical properties. Molecular dynamics and molecular mechanics simulations are used to establish well-equilibrated molecular models of the EPON 862-DETDA epoxy system with a range of crosslink densities and various crosslink distributions. Crosslink distributions are varied by forming differently crosslinked localized clusters and then forming different numbers of crosslinks interconnecting the clusters. Simulations are subsequently used to predict the volume shrinkage, thermal expansion coefficients, and elastic properties of each of the crosslinked systems. The results indicate that elastic properties increase with increasing levels of overall crosslink density and the thermal expansion coefficient decreases with overall crosslink density, both above and below the glass transition temperature. Elastic moduli and coefficients of linear thermal expansion were found to differ between systems with the same overall crosslink density but different crosslink distributions, indicating an effect of the epoxy nanostructure on physical properties. The values of the thermo-mechanical properties for all the crosslinked systems are within the range of values reported in the literature.
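The thermal expansion coefficients mentioned above are typically extracted from simulated volume-temperature data by a linear fit within one regime (glassy or rubbery). A minimal post-processing sketch, with made-up V(T) data rather than the thesis's simulation output, looks like this:

```python
# Extracting a volumetric thermal expansion coefficient from (hypothetical)
# MD volume-temperature data: CTE = slope of V(T) / reference volume.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# made-up glassy-regime data: temperature [K] vs simulation-cell volume [nm^3]
T = [300, 320, 340, 360, 380]
V = [101.0, 101.4, 101.8, 102.2, 102.6]

slope, intercept = linear_fit(T, V)
V_ref = intercept + slope * 300       # fitted volume at the reference temperature
alpha_v = slope / V_ref               # volumetric CTE [1/K]
alpha_l = alpha_v / 3                 # linear CTE, assuming an isotropic solid
```

The glass transition temperature is then commonly located where the fits from the glassy and rubbery regimes intersect.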

Abstract:

Hooked reinforcing bars (rebar) are used frequently to carry the tension forces developed in beams and transferred to columns. Research into epoxy-coated hooked bars has been minimal, and no research has incorporated the coating process found in ASTM A934. This research program compares hooked rebar that are uncoated, coated per ASTM A775, and coated per ASTM A934. In total, forty-two full-size beam-column specimens were created, instrumented and tested to failure. The program was carried out in three phases. The first phase was used to refine the test setup and procedures. Phase two explored the spacing of column ties within the joint region. Phase three explored the three coating types noted above. Each specimen included two hooked rebar which were loaded and measured independently for relative rebar slip. The load and displacement of the hooked rebar were analyzed, focusing on behavior at the levels of 30 ksi, 42 ksi and 60 ksi of rebar stress. Statistical and general comparisons were made using the coating types, tie spacing, and rebar stress level. Many of the parameters composing the rebar and concrete were also tested to characterize the components and specimens. All rebar tested met ASTM standards for tensile strength, but the newer ASTM A934 method seemed to produce slightly lower yield strengths. The A934 method also produced coating thicknesses that were very inconsistent and were higher than ASTM maximum limits in many locations. Continuity of coating surfaces was found to be less than 100% for both A775 and A934 rebar, but for different reasons. The many comparisons made did not always produce clear conclusions. The data suggest that the ACI Code (318-05) factor of 1.2 for epoxy coating on hooked rebar may need to be raised, possibly to 2.5, but more testing needs to be performed before such a large change is set forth.
This is particularly important because variables were identified which may have a larger influence on rebar capacity than the development length, which the current 1.2 factor modifies. Many suggestions for future work are included throughout the thesis to help guide other researchers in carrying out successful and productive programs which will further the highly understudied topic of hooked rebar.
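For orientation, the role of the 1.2 factor can be sketched with the ACI 318-05 hooked-bar development-length expression. This is a simplified illustration (normalweight concrete, no cover or tie modification factors, illustrative bar size and material strengths), not the thesis's analysis.

```python
# Sketch: how the epoxy-coating factor scales the ACI 318-05 hooked-bar
# development length ldh = 0.02 * psi_e * fy / sqrt(f'c) * db (simplified).

import math

def hook_development_length(db_in, fy_psi=60000.0, fc_psi=4000.0, epoxy_factor=1.0):
    """Basic hooked-bar development length [in], with the 8*db and 6 in minimums.
    Other ACI modification factors (cover, ties, lightweight concrete) are omitted."""
    ldh = 0.02 * epoxy_factor * fy_psi / math.sqrt(fc_psi) * db_in
    return max(ldh, 8.0 * db_in, 6.0)

uncoated = hook_development_length(1.0)                    # #8 bar, no coating
coated   = hook_development_length(1.0, epoxy_factor=1.2)  # current code factor
proposed = hook_development_length(1.0, epoxy_factor=2.5)  # value the data suggest
```

Raising the factor from 1.2 to 2.5 roughly doubles the required embedment for coated hooks, which is why the abstract calls for more testing before such a change.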

Abstract:

The emissions, filtration and oxidation characteristics of a diesel oxidation catalyst (DOC) and a catalyzed particulate filter (CPF) in a Johnson Matthey catalyzed continuously regenerating trap (CCRT®) were studied by using computational models. Experimental data needed to calibrate the models were obtained by characterization experiments with raw exhaust sampling from a Cummins ISM 2002 engine with variable geometry turbocharging (VGT) and programmed exhaust gas recirculation (EGR). The experiments were performed at 20, 40, 60 and 75% of full load (1120 Nm) at rated speed (2100 rpm), with and without the DOC upstream of the CPF. This was done to study the effect of temperature and CPF-inlet NO2 concentrations on particulate matter oxidation in the CCRT®. A previously developed computational model was used to determine the kinetic parameters describing the oxidation characteristics of HCs, CO and NO in the DOC and the pressure drop across it. The model was calibrated at five temperatures in the range of 280-465 °C and exhaust volumetric flow rates of 0.447-0.843 act-m3/sec. The downstream HCs, CO and NO concentrations were predicted by the DOC model to within ±3 ppm. The HCs and CO oxidation kinetics in the temperature range of 280-465 °C and an exhaust volumetric flow rate of 0.447-0.843 act-m3/sec can be represented by one 'apparent' activation energy and pre-exponential factor. The NO oxidation kinetics in the same temperature and exhaust flow rate range can be represented by 'apparent' activation energies and pre-exponential factors in two regimes. The DOC pressure drop was always predicted within 0.5 kPa by the model. The MTU 1-D 2-layer CPF model was enhanced in several ways to better model the performance of the CCRT®. A model to simulate the oxidation of particulate inside the filter wall was developed.
A particulate cake layer filtration model which describes particle filtration in terms of more fundamental parameters was developed and coupled to the wall oxidation model. To better model the particulate oxidation kinetics, a model was developed to take into account the NO2 produced in the washcoat of the CPF. The overall 1-D 2-layer model can be used to predict the pressure drop of the exhaust gas across the filter, the evolution of particulate mass inside the filter, the particulate mass oxidized, the filtration efficiency and the particle number distribution downstream of the CPF. The model was used to better understand the internal performance of the CCRT® by determining the components of the total pressure drop across the filter, by classifying the total particulate matter into layer I, layer II and the filter wall, and by the means of oxidation, i.e., by O2, by NO2 entering the filter and by NO2 produced in the filter. The CPF model was calibrated at four temperatures in the range of 280-465 °C and exhaust volumetric flow rates of 0.447-0.843 act-m3/sec, in CPF-only and CCRT® (DOC+CPF) configurations. The clean filter wall permeability was determined to be 2.00E-13 m2, which is in agreement with values in the literature for cordierite filters. The particulate packing density in the filter wall had values between 2.92 kg/m3 and 3.95 kg/m3 for all the loads. The mean pore size of the catalyst-loaded filter wall was found to be 11.0 µm. The particulate cake packing densities and permeabilities ranged from 131 kg/m3 to 134 kg/m3 and from 0.42E-14 m2 to 2.00E-14 m2, respectively, and are in agreement with the Peclet number correlations in the literature. Particulate cake layer porosities determined from the particulate cake layer filtration model ranged from 0.841 to 0.814, decreasing with load; these values are about 0.1 lower than those from experiments and from more complex discrete-particle simulations in the literature. The thickness of layer I was kept constant at 20 µm.
The model kinetics in the CPF-only and CCRT® configurations showed that no 'catalyst effect' with O2 was present. The kinetic parameters for the NO2-assisted oxidation of particulate in the CPF were determined from the simulation of transient temperature-programmed oxidation data in the literature. It was determined that the thermal and NO2 kinetic parameters do not change with temperature, exhaust flow rate or NO2 concentration. However, different kinetic parameters are used for particulate oxidation in the wall and on the wall. Model results showed that oxidation of particulate in the pores of the filter wall can cause disproportionate decreases in the filter pressure drop with respect to particulate mass. The wall oxidation model, along with the particulate cake filtration model, was developed to model the sudden and rapid decreases in pressure drop across the CPF. The combined particulate cake and wall filtration models result in higher particulate filtration efficiencies than the wall filtration model alone, with overall filtration efficiencies of 98-99% being predicted by the model. The pre-exponential factors for oxidation by NO2 did not change with temperature or NO2 concentrations because of the NO2 wall production model. In both CPF-only and CCRT® configurations, the model showed NO2 and layer I to be the dominant means and the dominant physical location of particulate oxidation, respectively. However, at temperatures of 280 °C, NO2 is not a significant oxidizer of particulate matter, which is in agreement with studies in the literature. The model showed that 8.6 and 81.6% of the CPF-inlet particulate matter was oxidized after 5 hours at 20 and 75% load in the CCRT® configuration. In CPF-only configuration at the same loads, the model showed that after 5 hours, 4.4 and 64.8% of the inlet particulate matter was oxidized.
The increase in NO2 concentrations across the DOC contributes significantly to the oxidation of particulate in the CPF and is supplemented by the oxidation of NO to NO2 by the catalyst in the CPF, which increases the particulate oxidation rates. From the model, it was determined that the catalyst in the CPF modestly increases the particulate oxidation rates, in the range of 4.5-8.3%, in the CCRT® configuration. Hence, the catalyst loading in the CPF of the CCRT® could possibly be reduced without significantly decreasing particulate oxidation rates, leading to catalyst cost savings and better engine performance due to lower exhaust backpressure.
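The 'apparent' activation energies and pre-exponential factors discussed above come from fitting rate data to the Arrhenius form k = A·exp(-Ea/RT). A two-point version of that fit can be sketched as follows; the rate constants and temperatures are hypothetical, not the calibrated thesis values.

```python
# Sketch: recovering an 'apparent' activation energy and pre-exponential
# factor from rate constants at two temperatures (illustrative numbers).

import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_fit(T1, k1, T2, k2):
    """Solve the two-point Arrhenius system k = A*exp(-Ea/(R*T)) for (A, Ea)."""
    Ea = R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
    A = k1 * math.exp(Ea / (R * T1))
    return A, Ea

# hypothetical rate constants at two catalyst temperatures (280 and 465 °C)
A, Ea = arrhenius_fit(T1=553.0, k1=0.8, T2=738.0, k2=12.0)

# a valid fit must reproduce the calibration points
k_553 = A * math.exp(-Ea / (R * 553.0))
k_738 = A * math.exp(-Ea / (R * 738.0))
```

With more than two calibration temperatures, the same parameters come from a least-squares fit of ln k against 1/T, and a two-regime behavior like that reported for NO shows up as a change of slope in that plot.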

Abstract:

Water-saturated debris flows are among the most destructive mass movements. Their complex nature presents a challenge for quantitative description and modeling. In order to improve understanding of the dynamics of these flows, it is important to seek a simplified dynamic system underlying their behavior. Models currently in use to describe the motion of debris flows employ depth-averaged equations of motion, typically assuming negligible effects from vertical acceleration. However, in many cases debris flows experience significant vertical acceleration as they move across irregular surfaces, and it has been proposed that friction associated with vertical forces and liquefaction merit inclusion in any comprehensive mechanical model. The intent of this work is to determine the effect of vertical acceleration through a series of laboratory experiments designed to simulate debris flows, testing a recent model for debris flows experimentally. In the experiments, a mass of water-saturated sediment is released suddenly from a holding container, and parameters including rate of collapse, pore-fluid pressure, and bed load are monitored. Experiments are simplified to axial geometry so that variables act solely in the vertical dimension. Steady-state equations to infer the motion of the moving sediment mass are not sufficient to accurately model the independent solid and fluid constituents in these experiments. The model developed in this work more accurately predicts the bed-normal stress of a saturated sediment mass in motion and illustrates the importance of acceleration and deceleration.
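The basic idea that vertical acceleration modulates bed-normal stress can be sketched with a one-dimensional column: the total stress scales with (g + a_z), and the Terzaghi effective stress subtracts pore-fluid pressure. This is a minimal illustration of the concept, with made-up densities and depths, not the model developed in the thesis.

```python
# Sketch: bed-normal stress of a saturated sediment column under vertical
# acceleration, sigma = rho * h * (g + a_z). Values are illustrative.

def bed_normal_stress(rho, h, a_z, g=9.81):
    """Total bed-normal stress [Pa] for a column of bulk density rho [kg/m^3]
    and thickness h [m] undergoing vertical acceleration a_z [m/s^2] (positive up)."""
    return rho * h * (g + a_z)

def effective_stress(total_stress, pore_pressure):
    """Terzaghi effective stress: frictional resistance vanishes as pore
    pressure approaches total stress (liquefaction)."""
    return total_stress - pore_pressure

rho, h = 2000.0, 0.5                               # saturated debris, 0.5 m deep
static  = bed_normal_stress(rho, h, a_z=0.0)       # at rest
falling = bed_normal_stress(rho, h, a_z=-9.81)     # free fall: bed stress vanishes
```

The free-fall case shows why passage over irregular surfaces matters: while the mass accelerates downward, bed-normal stress and hence basal friction drop, and elevated pore pressure can suppress what little friction remains.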

Abstract:

In an increasingly interconnected world characterized by the accelerating interplay of cultural, linguistic, and national difference, the ability to negotiate that difference in an equitable and ethical manner is a crucial skill for both individuals and larger social groups. This dissertation, Writing Center Handbooks and Travel Guidebooks: Redesigning Instructional Texts for Multicultural, Multilingual, and Multinational Contexts, considers how instructional texts that ostensibly support the negotiation of difference (i.e., accepting and learning from difference) actually promote the management of difference (i.e., rejecting, assimilating, and erasing difference). As a corrective to this focus on managing difference, chapter two constructs a theoretical framework that facilitates the redesign of handbooks, guidebooks, and similar instructional texts. This framework centers on reflexive design practices and is informed by literacy theory (Gee; New London Group; Street), social learning theory (Wenger), globalization theory (Nederveen Pieterse), and composition theory (Canagarajah; Horner and Trimbur; Lu; Matsuda; Pratt). By implementing reflexive design practices in the redesign of instructional texts, this dissertation argues that instructional texts can promote the negotiation of difference and a multicultural/multilingual sensibility that accounts for twenty-first century linguistic and cultural realities. Informed by the theoretical framework of chapter two, chapters three and four conduct a rhetorical analysis of two forms of instructional text that are representative of the larger genre: writing center coach handbooks and travel guidebooks to Hong Kong. This rhetorical analysis reveals how both forms of text employ rhetorical strategies that uphold dominant monolingual and monocultural assumptions. 
Alternative rhetorical strategies are then proposed that can be used to redesign these two forms of instructional texts in a manner that aligns with multicultural and multilingual assumptions. These chapters draw on the work of scholars in Writing Center Studies (Boquet and Lerner; Carino; DiPardo; Grimm; North; Severino) and Technical Communication (Barton and Barton; Dilger; Johnson; Kimball; Slack), respectively. Chapter five explores how the redesign of coach handbooks and travel guidebooks proposed in this dissertation can be conceptualized as a political act. Ultimately, this dissertation argues that instructional texts are powerful heuristic tools that can enact social change if they are redesigned to foster the negotiation of difference and to promote multicultural/multilingual world views.

Abstract:

The exotic emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), was first discovered in North America in southeastern Michigan, USA, and Windsor, Ontario, Canada in 2002. Significant ash (Fraxinus spp.) mortality has been caused in areas where this insect has become well established, and new infestations continue to be discovered in several states in the United States and in Canada. This beetle is difficult to detect when it invades new areas or occurs at low density. Girdled trap tree and ground surveys have been important tools for detecting emerald ash borer populations, and more recently, purple baited prism traps have been used in detection efforts. Girdled trap trees were found to be more effective than purple prism traps at detecting emerald ash borer as they acted as sinks for larvae in an area of known low density emerald ash borer infestation. The canopy condition of the trap trees was not predictive of whether they were infested or not, indicating that ground surveys may not be effective for detection in an area of low density emerald ash borer population. When landing rates of low density emerald ash borer populations were monitored on non-girdled ash trees, landing rates were higher on larger, open grown trees with canopies that contain a few dead branches. As a result of these studies, we suggest that the threshold for emerald ash borer detection using baited purple prism traps hung at the canopy base of trees is higher than for girdled trap trees. In addition, detection of developing populations of EAB may be possible by selectively placing sticky trapping surfaces on non-girdled trap trees that are the larger and more open grown trees at a site.

Abstract:

High flexural strength and stiffness can be achieved by forming a thin panel into a wave shape perpendicular to the bending direction. The use of corrugated shapes to gain flexural strength and stiffness is common in metal and reinforced plastic products. However, there is no commercial production of corrugated wood composite panels. This research focuses on the application of corrugated shapes to wood strand composite panels. Beam theory, classical plate theory and finite element models were used to analyze the bending behavior of corrugated panels. The most promising shallow corrugated panel configuration was identified based on structural performance and compatibility with construction practices. The corrugation profile selected has a wavelength equal to 8”, a channel depth equal to ¾”, a sidewall angle equal to 45 degrees and a panel thickness equal to 3/8”. 16”x16” panels were produced using random mats and 3-layer aligned mats with surface flakes parallel to the channels. Strong axis and weak axis bending tests were conducted. The test results indicate that flake orientation has little effect on the strong axis bending stiffness. The 3/8” thick random mat corrugated panels exhibit bending stiffness (400,000 lbs-in2/ft) and bending strength (3,000 in-lbs/ft) higher than 23/32” or 3/4” thick APA Rated Sturd-I-Floor with a 24” o.c. span rating. Shear and bearing test results show that the corrugated panel can withstand more than 50 psf of uniform load at 48” joist spacings. Molding trials on 16”x16” panels provided data for full size panel production. Full size 4’x8’ shallow corrugated panels were produced with only minor changes to the current oriented strandboard manufacturing process. Panel testing was done to simulate floor loading during construction, without a top underlayment layer, and during occupancy, with an underlayment over the panel to form a composite deck. 
Flexural tests were performed in single-span and two-span bending with line loads applied at mid-span. The average strong axis bending stiffness and bending strength of the full size corrugated panels (without the underlayment) were over 400,000 lbs-in2/ft and 3,000 in-lbs/ft, respectively. The composite deck system, which consisted of an OSB sheathing (15/32” thick) nail-glued (using 3d ring-shank nails and AFG-01 subfloor adhesive) to the corrugated subfloor, achieved about 60% of the full composite stiffness, resulting in about 3 times the bending stiffness of the corrugated subfloor (1,250,000 lbs-in2/ft). Based on the LRFD design criteria, the corrugated composite floor system can carry 40 psf of unfactored uniform load, limited by the L/480 deflection limit state, at 48” joist spacings. Four 10-ft long composite T-beam specimens were built and tested for the composite action and the load sharing between a 24” wide corrugated deck system and the supporting I-joist. The average bending stiffness of the composite T-beam was 1.6 times higher than the bending stiffness of the I-joist. An 8-ft x 12-ft mock-up floor was built to evaluate construction procedures. The assembly of the composite floor system is relatively simple. The corrugated composite floor system might be able to offset its higher labor costs, relative to the single-layer Sturd-I-Floor, through material savings. However, no conclusive result can be drawn, in terms of construction costs, at this point without an in-depth cost analysis of the two systems. The shallow corrugated composite floor system might be a potential alternative to the Sturd-I-Floor in the near future because of the excellent flexural stiffness it provides.
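The "60% of full composite stiffness" figure can be illustrated with the linear interpolation commonly used for partially composite sections. The subfloor and deck stiffnesses below are taken from the abstract; the non-composite deck stiffness is a hypothetical assumption, so the implied fully composite value is illustrative only.

```python
# Sketch: partial composite action as interpolation between non-composite
# and fully composite bending stiffness (per foot of deck width).

def partial_composite_EI(EI_noncomposite, EI_full, efficiency):
    """Linear interpolation between no shear connection and full connection."""
    return EI_noncomposite + efficiency * (EI_full - EI_noncomposite)

EI_sub  = 400_000.0     # corrugated subfloor alone [lb-in^2/ft], from the abstract
EI_deck = 1_250_000.0   # measured composite deck stiffness, from the abstract
eff = 0.60              # reported fraction of full composite action

# hypothetical non-composite deck stiffness (subfloor plus loose sheathing):
EI_nc = 450_000.0
# back-calculated fully composite stiffness implied by the 60% efficiency:
EI_full = EI_nc + (EI_deck - EI_nc) / eff

stiffness_gain = EI_deck / EI_sub   # roughly the 3x gain reported
```

The back-calculation shows why even partial composite action is attractive: 60% efficiency already triples the subfloor stiffness, while chasing the last 40% would require a much stiffer shear connection.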

Abstract:

Geospatial information systems are used to analyze spatial data to provide decision makers with relevant, up-to-date information. The processing time required for this information is a critical component of response time. Despite advances in algorithms and processing power, many “human-in-the-loop” factors remain. Given the limited number of geospatial professionals, it is very important that analysts use their time effectively. Automating common tasks and providing faster human-computer interactions that do not disrupt the analyst's workflow or attention are therefore very desirable. The following research describes a novel approach to increasing productivity with a wireless, wearable electroencephalograph (EEG) headset within the geospatial workflow.

Abstract:

Most research on carbon content of trees has focused on temperate tree species, with little information existing on the carbon content of tropical tree species. This study investigated the variation in carbon content of selected tropical tree species and compared the carbon content of Khaya spp. from two ecozones in Ghana. Allometric equations developed for mixed-plantation stands in the wet evergreen forest verified the expected strong relationship between tree volume and dbh (r2>0.93) and between volume and dbh2×height (r2>0.97). Carbon concentration, wood density and carbon content differed significantly among species. Volume at age 12 ranged from 0.01 to 1.04 m3 per tree, and wood density was highly variable among species, ranging from 0.27 to 0.76 g cm-3. This suggests that species-specific density data are critical for accurate conversion of volumes derived from allometric relationships into carbon contents. Significant differences in the density of Khaya spp. existed between the wet evergreen and moist semi-deciduous ecozones. The baseline species-level information from this study will be useful for carbon accounting and the development of carbon sequestration strategies in Ghana and other tropical African countries.
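The carbon-accounting chain described above — allometric volume, species-specific density, carbon fraction — can be sketched as follows. The allometric coefficients and the input tree are hypothetical; the thesis fits its own equations per species.

```python
# Sketch of a volume -> biomass -> carbon conversion. The allometric
# coefficients a and b and the example inputs are illustrative only.

def tree_volume(dbh_cm, height_m, a=4.0e-5, b=0.95):
    """Hypothetical allometric form V = a * (dbh^2 * height)^b, volume in m^3."""
    return a * (dbh_cm ** 2 * height_m) ** b

def carbon_content(volume_m3, wood_density_g_cm3, carbon_fraction=0.48):
    """Biomass = volume * density; carbon = biomass * carbon fraction. Returns kg C."""
    density_kg_m3 = wood_density_g_cm3 * 1000.0
    return volume_m3 * density_kg_m3 * carbon_fraction

v = tree_volume(dbh_cm=25, height_m=18)          # a 12-year-old plantation tree
c = carbon_content(v, wood_density_g_cm3=0.55)   # mid-range density from the abstract
```

Because density spans 0.27-0.76 g cm-3 across species, using a single generic density rather than a species-specific value can nearly triple or halve the carbon estimate for the same volume, which is the point the abstract makes.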

Abstract:

The reserves of gasoline and diesel fuels are ever decreasing, which plays an important role in the technological development of automobiles. Numerous countries, especially the United States, wish to gradually decrease their fuel dependence on other countries by producing renewable fuels such as biodiesel or ethanol domestically. Therefore, new automobile engines have to run successfully on a variety of fuels without significant changes to their designs. The current study focuses on assessing the potential of ethanol fuels to improve the performance of 'flex-fuel SI engines,' which literally means 'engines that are flexible in their fuel requirement.' Another important area within spark ignition (SI) engine research is the implementation of new technologies like Variable Valve Timing (VVT) or Variable Compression Ratio (VCR) to improve engine performance. These technologies add more complexity to the original system by adding extra degrees of freedom. Therefore, the potential of these technologies has to be evaluated before they are installed in any SI engine. The current study evaluates the advantages and drawbacks of these technologies, primarily from an engine brake efficiency perspective. The results show a significant improvement in engine efficiency with the use of VVT and VCR together. Spark ignition engines always operate at a lower compression ratio than compression ignition (CI) engines, primarily due to knock constraints. Therefore, even if the use of a higher compression ratio would result in a significant improvement in SI engine efficiency, the engine may still operate at a lower compression ratio due to knock limitations. Ethanol fuels extend the knock limit, making the use of higher compression ratios possible. Hence, the current study focuses on using VVT, VCR, and ethanol-gasoline blends to improve overall engine performance.
The results show that these technologies promise definite engine performance improvements provided both their positive and negative potentials have been evaluated prior to installation.
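The link between knock limit and efficiency can be illustrated with the ideal Otto-cycle relation eta = 1 - r^(1-gamma), which rises with compression ratio r. Real brake efficiencies are much lower and the compression ratios below are illustrative; this only shows the trend the study exploits when ethanol's knock resistance allows a higher r.

```python
# Ideal Otto-cycle thermal efficiency vs compression ratio (illustrative only;
# the compression ratios and gamma below are assumptions, not thesis values).

def otto_efficiency(r, gamma=1.35):
    """Air-standard Otto-cycle efficiency: eta = 1 - r**(1 - gamma)."""
    return 1.0 - r ** (1.0 - gamma)

eta_gasoline = otto_efficiency(10.0)  # hypothetical knock-limited r on gasoline
eta_ethanol  = otto_efficiency(13.0)  # hypothetical higher r enabled by ethanol
```

Even this idealized comparison shows a gain of a few efficiency points from raising the compression ratio, which is why a VCR engine running ethanol blends can be tuned to a higher-efficiency operating point than the same engine knock-limited on gasoline.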

Abstract:

Target localization has a wide range of military and civilian applications in wireless mobile networks. Examples include battle-field surveillance, emergency 911 (E911), traffc alert, habitat monitoring, resource allocation, routing, and disaster mitigation. Basic localization techniques include time-of-arrival (TOA), direction-of-arrival (DOA) and received-signal strength (RSS) estimation. Techniques that are proposed based on TOA and DOA are very sensitive to the availability of Line-of-sight (LOS) which is the direct path between the transmitter and the receiver. If LOS is not available, TOA and DOA estimation errors create a large localization error. In order to reduce NLOS localization error, NLOS identifcation, mitigation, and localization techniques have been proposed. This research investigates NLOS identifcation for multiple antennas radio systems. The techniques proposed in the literature mainly use one antenna element to enable NLOS identifcation. When a single antenna is utilized, limited features of the wireless channel can be exploited to identify NLOS situations. However, in DOA-based wireless localization systems, multiple antenna elements are available. In addition, multiple antenna technology has been adopted in many widely used wireless systems such as wireless LAN 802.11n and WiMAX 802.16e which are good candidates for localization based services. In this work, the potential of spatial channel information for high performance NLOS identifcation is investigated. Considering narrowband multiple antenna wireless systems, two xvNLOS identifcation techniques are proposed. Here, the implementation of spatial correlation of channel coeffcients across antenna elements as a metric for NLOS identifcation is proposed. In order to obtain the spatial correlation, a new multi-input multi-output (MIMO) channel model based on rough surface theory is proposed. 
This model can be used to compute the spatial correlation between any pair of antennas, whatever their separation. Second, a new NLOS identification technique that exploits the statistics of the phase difference across two antenna elements is proposed. This technique assumes that the phases received across the two antenna elements are uncorrelated, an assumption validated using the well-known circular and elliptic scattering models. It is then proved that the channel Rician K-factor is a function of the phase-difference variance. Exploiting the Rician K-factor, techniques to identify NLOS scenarios are proposed. For wideband multiple-antenna wireless systems that use MIMO orthogonal frequency-division multiplexing (OFDM) signaling, space-time-frequency channel correlation is exploited to attain NLOS identification in time-varying, frequency-selective, and space-selective radio channels. Novel NLOS identification measures based on space, time, and frequency channel correlation are proposed and their performance is evaluated. These measures achieve better NLOS identification performance than measures that use only space, time, or frequency correlation.
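The phase-difference idea above can be illustrated numerically: when the LOS component is weak (low Rician K-factor), the phase difference across two antenna elements is nearly uniform over (-π, π] and its variance is large, while a strong LOS component concentrates the phases and shrinks that variance. The Monte-Carlo sketch below is only an illustration of this dependence, not the identification algorithm from the thesis; the channel model, the sample size, and the decision threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rician_samples(k_factor, n=20000):
    """Draw complex channel coefficients with a given Rician K-factor.

    K = (LOS power) / (scattered power); K -> 0 approaches Rayleigh (NLOS-like).
    """
    los = np.sqrt(k_factor / (k_factor + 1))                # deterministic LOS part
    scat = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    scat *= np.sqrt(1.0 / (2.0 * (k_factor + 1)))           # diffuse (scattered) part
    return los + scat

def phase_diff_variance(k_factor):
    """Variance of the phase difference seen across two antenna elements,
    assuming the diffuse components at the two elements are uncorrelated."""
    h1 = rician_samples(k_factor)
    h2 = rician_samples(k_factor)
    dphi = np.angle(h1 * np.conj(h2))                       # wrapped phase difference
    return float(np.var(dphi))

# Phase-difference variance shrinks as the LOS component strengthens,
# so a simple threshold separates LOS-like from NLOS-like conditions.
var_nlos = phase_diff_variance(0.01)   # near-Rayleigh channel: NLOS-like
var_los = phase_diff_variance(10.0)    # strong LOS component
THRESHOLD = 1.0                        # hypothetical decision threshold (rad^2)
print(var_nlos > THRESHOLD, var_los < THRESHOLD)
```

For a near-Rayleigh channel the wrapped phase difference is close to uniform, giving a variance near π²/3 ≈ 3.3 rad²; with K = 10 the variance drops well below 1 rad², so the two cases separate cleanly.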

Relevância:

60.00%

Publicador:

Resumo:

Single-screw extrusion is one of the most widely used processing methods in the plastics industry, which was the third-largest manufacturing industry in the United States in 2007 [5]. In order to optimize the single-screw extrusion process, tremendous effort has been devoted over the last fifty years to the development of accurate models, especially for polymer melting in screw extruders. This has led to a good qualitative understanding of the melting process; however, quantitative predictions of melting from the various models often show large errors when compared with experimental data. Thus, even today, the process parameters and the geometry of the extruder channel for single-screw extrusion are determined by trial and error. Since new polymers are developed frequently, finding the optimum parameters to extrude them by trial and error is costly and time-consuming. In order to reduce the time and experimental work required to optimize the process parameters and the extruder-channel geometry for a given polymer, the main goal of this research was to perform a coordinated experimental and numerical investigation of melting in screw extrusion. In this work, a full three-dimensional finite element simulation of the two-phase flow in the melting and metering zones of a single-screw extruder was performed by solving the conservation equations for mass, momentum, and energy. The only previous attempt at such a three-dimensional simulation of melting in a screw extruder was made more than twenty years earlier, and it had only limited success because of the computing power and numerical algorithms available at the time. The dramatic improvement in computational power and numerical methods now makes it possible to run full 3-D simulations of two-phase flow in single-screw extruders on a desktop PC. 
In order to verify the numerical predictions of the full 3-D simulations of two-phase flow in single-screw extruders, a detailed experimental study was performed. This study included Maddock screw-freezing experiments, Screw Simulator experiments, and material-characterization experiments. The Maddock screw-freezing experiments were performed to visualize the melting profile along the single-screw extruder channel for different screw-geometry configurations; these melting profiles were compared with the simulation results. The Screw Simulator experiments were performed to collect shear-stress and melting-flux data for various polymers. Cone-and-plate viscometer experiments were performed to obtain the shear-viscosity data needed in the simulations. Finally, an optimization code was developed to optimize two screw-geometry parameters, namely the screw lead (pitch) and the channel depth in the metering section of a single-screw extruder, such that the output rate of the extruder was maximized without exceeding the maximum temperature specified at the extruder exit. This optimization code used a mesh-partitioning technique to obtain the flow domain, and the simulations in this flow domain were performed using the code developed to simulate the two-phase flow in single-screw extruders.
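The geometry optimization described above can be pictured as a constrained search over screw lead and metering-channel depth: maximize output rate subject to an exit-temperature limit. In the sketch below the expensive two-phase flow simulation is replaced by hypothetical closed-form surrogates (their coefficients, ranges, and the temperature limit are invented for illustration); the thesis's actual code evaluates each candidate geometry with the full 3-D simulation instead.

```python
import numpy as np

# Hypothetical surrogates standing in for the 3-D two-phase flow simulation:
# drag-flow throughput grows with lead and depth, while the exit temperature
# rises with lead and falls with depth. Coefficients are illustrative only.
def output_rate(lead_mm, depth_mm):
    return 0.8 * lead_mm * depth_mm                   # kg/h, drag-flow-like scaling

def exit_temperature(lead_mm, depth_mm):
    return 180.0 + 0.6 * lead_mm + 4.0 / depth_mm     # deg C, illustrative model

T_MAX = 230.0  # assumed maximum allowed melt temperature at the extruder exit

best = None
for lead in np.linspace(20.0, 80.0, 25):              # candidate screw leads (pitch)
    for depth in np.linspace(2.0, 8.0, 25):           # candidate metering depths
        if exit_temperature(lead, depth) > T_MAX:
            continue                                  # discard infeasible geometries
        rate = output_rate(lead, depth)
        if best is None or rate > best[0]:
            best = (rate, lead, depth)                # keep the best feasible design

rate, lead, depth = best
print(f"lead={lead:.1f} mm, depth={depth:.1f} mm, rate={rate:.1f} kg/h")
```

A brute-force grid search is used here purely for clarity; because every candidate requires one (simulated) extruder evaluation, any real implementation would want a coarse-to-fine search or a gradient-free optimizer to keep the number of 3-D simulations manageable.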