878 results for High-tech companies


Relevance: 30.00%

Abstract:

Prediction of radiated fields from transmission lines has not previously been studied from a panoptical power system perspective. The application of BPL (broadband over power line) technologies to overhead transmission lines would benefit greatly from the ability to simulate real power system environments, not limited to the transmission lines themselves. Presently, the circuit-based transmission line models used by EMTP-type programs rely on Carson's formula for a waveguide parallel to an interface. This formula is not valid for calculations at high frequencies, where the effects of earth return currents must be considered. This thesis explains the challenges of developing such improved models, explores an approach that combines circuit-based and electromagnetics modeling to predict radiated fields from transmission lines, exposes inadequacies of existing simulation tools, and suggests methods of extending the validity of transmission line models into very high frequency ranges. Electromagnetics programs are commonly used to study radiated fields from transmission lines; however, the approach proposed here can also incorporate the components of a power system through the combined use of EMTP-type models. Carson's formulas address the series impedance of electrical conductors above and parallel to the earth. These equations have been analyzed to expose their inherent assumptions and the implications of those assumptions. Additionally, their lack of validity at higher frequencies has been demonstrated, showing the need to replace Carson's formulas for these types of studies. This body of work leads to several conclusions about the relatively new study of BPL. Foremost, there is a gap in modeling capabilities which has been bridged through the integration of circuit-based and electromagnetics modeling, allowing more realistic prediction of BPL performance and radiated fields. The proposed approach is limited in its scope of validity by the formulas used by EMTP-type software. To extend the range of validity, a new set of equations must be identified and implemented in the approach. Several potential methods of implementation have been explored. Though an appropriate set of equations has not yet been identified, further research in this area will benefit from a clear depiction of the next important steps and how they can be accomplished.
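The abstract refers to Carson's formula without reproducing it. As an illustration only, the complex-depth (Deri) approximation, a closed-form stand-in for Carson's infinite-series earth-return correction, can be sketched as follows; the function name, the default earth conductivity, and the example geometry are assumptions, not values from the thesis:

```python
import cmath
import math

MU0 = 4e-7 * math.pi  # permeability of free space, H/m

def earth_return_impedance(freq_hz, height_m, radius_m, sigma_earth=0.01):
    """Per-unit-length series impedance (ohm/m) of a conductor at height
    height_m above a lossy earth, via the complex-depth approximation
    often substituted for Carson's series correction."""
    w = 2 * math.pi * freq_hz
    # complex penetration depth of the earth-return current
    p = 1 / cmath.sqrt(1j * w * MU0 * sigma_earth)
    # self impedance with the ground plane displaced by the complex depth p
    return 1j * w * MU0 / (2 * math.pi) * cmath.log(2 * (height_m + p) / radius_m)

# the earth-return resistance is small at 60 Hz and grows with frequency,
# which is the regime where Carson's assumptions break down for BPL studies
z_power = earth_return_impedance(60.0, 10.0, 0.01)
z_bpl = earth_return_impedance(1e6, 10.0, 0.01)
```

At 60 Hz this reproduces the familiar fraction-of-an-ohm-per-km earth-return resistance; pushing the frequency toward the BPL band shows the impedance growing sharply, which is why a high-frequency replacement for Carson's formulas is needed.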

Relevance: 30.00%

Abstract:

A building energy meter network based on per-appliance monitoring will be an important part of the Advanced Metering Infrastructure. Two key issues exist when designing such networks. One is the network structure to be used. The other is the implementation of that structure on a large number of small, low-power devices, and the maintenance of high-quality communication when the devices are electrically connected to the high-voltage AC line. Recent advances in low-power wireless communication make it the right candidate for house and building energy networks. Among the available wireless solutions, the low-speed but highly reliable 802.15.4 radio was chosen for this design. While many network-layer solutions have been built on top of 802.15.4, an IPv6-based method is used here. 6LoWPAN is the protocol that adapts IP to low-power personal area network radios. To extend the network to building scale, a specific network-layer routing mechanism, RPL, is included in the design. The fundamental unit of the building energy monitoring system is a smart wall plug. It consists of an electricity energy meter, an RF communication module, and a low-power CPU. The real challenge in designing such a device is its network firmware. In this design, IPv6 is implemented through the Contiki operating system. Custom hardware drivers and the meter application program were developed on top of the Contiki OS. Experiments were conducted to demonstrate the networking capability of this system.
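The Contiki firmware itself is not reproduced in the abstract. As a rough host-side sketch of the UDP-over-IPv6 exchange pattern that 6LoWPAN compresses on the radio link, the following fragment sends a meter-style reading over the IPv6 loopback; the port number and payload format are arbitrary assumptions:

```python
import socket

def meter_roundtrip(payload: bytes, port: int = 49152) -> bytes:
    """Send one meter-style datagram over UDP/IPv6 loopback and read it
    back; 6LoWPAN compresses exactly this kind of UDP/IPv6 traffic."""
    server = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    client = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        server.bind(("::1", port))
        client.sendto(payload, ("::1", port))
        data, _addr = server.recvfrom(1024)
        return data
    finally:
        client.close()
        server.close()

reading = b'{"plug_id": 7, "watts": 61.5}'  # hypothetical payload format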

Relevance: 30.00%

Abstract:

Ultra-high performance fiber reinforced concrete (UHPFRC) has arisen from the implementation of a variety of concrete engineering and materials science concepts developed over the last century. This material offers superior strength, serviceability, and durability over its conventional counterparts. One of the most important advantages of UHPFRC over other concrete materials is its ability to resist fracture through the use of randomly dispersed discontinuous fibers and improvements to the fiber-matrix bond. Of particular interest are the material's ability to carry higher loads after first crack and its high fracture toughness. In this research, a study of the fracture behavior of UHPFRC with steel fibers was conducted to examine the effect of several parameters on the fracture behavior and to develop a fracture model based on a non-linear curve fit of the data. A series of three-point bending tests were performed on various single edge notched prisms (SENPs), with compression tests performed for quality assurance. Testing was conducted on specimens of different cross-sections, span/depth (S/D) ratios, curing regimes, ages, and fiber contents. By comparing the results from prisms of different sizes, this study examines the weakening mechanism due to the size effect. Furthermore, by employing the concept of fracture energy it was possible to compare fracture toughness and ductility. The model was determined from a fit to P-w fracture curves and cross-referenced against the results for comparability. Once obtained, the model was compared to the model proposed by the AFGC in 2003 and to the ACI 544 model for conventional fiber reinforced concretes.
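The abstract uses the fracture energy concept without stating the formula. A minimal sketch, assuming the usual work-of-fracture definition (area under the load versus crack-opening, P-w, curve divided by the ligament area); all names are illustrative:

```python
def fracture_energy(loads_n, openings_m, ligament_area_m2):
    """Work-of-fracture G_F (J/m^2): trapezoidal area under the
    load vs. crack-opening (P-w) curve divided by the ligament area."""
    work_j = 0.0
    for i in range(1, len(loads_n)):
        dw = openings_m[i] - openings_m[i - 1]
        work_j += 0.5 * (loads_n[i] + loads_n[i - 1]) * dw
    return work_j / ligament_area_m2
```

A higher G_F for fiber-reinforced specimens reflects the post-crack load capacity the fibers provide, which is the ductility comparison the study draws between specimen sizes.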

Relevance: 30.00%

Abstract:

The numerical solution of the incompressible Navier-Stokes equations offers an effective alternative to the experimental analysis of fluid-structure interaction (FSI), i.e., the dynamic coupling between a fluid and a solid, which is otherwise very complex, time consuming, and expensive to study. A method that can accurately model these mechanical systems numerically is therefore a great option, and its advantages are even more obvious for huge structures like bridges, high-rise buildings, or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation, the KLE, for solving the incompressible Navier-Stokes equations in such FSI problems. This scheme allows the implementation of robust adaptive ODE time-integration schemes and thus makes it possible to treat the various multiphysics problems as separate modules. The current algorithm for the KLE employs a structured or unstructured mesh for spatial discretization and allows the use of a self-adaptive or fixed-time-step ODE solver for unsteady problems. This research analyzes the effects of the Courant-Friedrichs-Lewy (CFL) condition on the KLE when applied to the unsteady Stokes problem. The objective is to conduct a numerical analysis for stability and, hence, for convergence. Our results confirm that the time step Δt is constrained by the CFL-like condition Δt ≤ const · h^α, where h denotes the spatial discretization size.
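The CFL-like bound Δt ≤ const · h^α can be illustrated with a short sketch; the helper for recovering α from two observed stability thresholds is an assumption about how such a numerical study might post-process its data, not the thesis's method:

```python
import math

def max_stable_dt(h, alpha, const):
    """CFL-like bound: largest admissible time step for mesh size h."""
    return const * h ** alpha

def estimate_alpha(h1, dt1, h2, dt2):
    """Recover the exponent alpha from two observed stability thresholds
    (the slope of the threshold on a log-log plot)."""
    return math.log(dt1 / dt2) / math.log(h1 / h2)
```

With α = 2, halving the mesh size h quarters the admissible time step, which is the kind of constraint an adaptive ODE solver must respect when coupled to the spatial discretization.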

Relevance: 30.00%

Abstract:

Small clusters of gallium oxide, a technologically important high-temperature ceramic, together with the interaction of nucleic acid bases with graphene and small-diameter carbon nanotubes, are the focus of the first principles calculations in this work. A high performance parallel computing platform was also developed to perform these calculations at Michigan Tech. The first principles calculations are based on density functional theory, employing either the local density or gradient-corrected approximation together with plane wave and Gaussian basis sets. Bulk Ga2O3 is known to be a very good candidate for fabricating electronic devices that operate at high temperatures. To explore the properties of Ga2O3 at the nanoscale, we performed a systematic theoretical study of small polyatomic gallium oxide clusters. The calculated results show that all lowest energy isomers of GamOn clusters are dominated by Ga-O bonds over metal-metal or oxygen-oxygen bonds. Analysis of atomic charges suggests the clusters are highly ionic, similar to bulk Ga2O3. In the study of the sequential oxidation of these clusters starting from Ga2O, it was found that the most stable isomers display up to four different backbones of constituent atoms. Furthermore, the predicted configuration of the ground state of Ga2O was recently confirmed by the experimental results of Neumark's group. Guided by the gallium oxide cluster calculations, the performance-related challenge of computational simulations, namely producing high performance computing platforms, was addressed. Several engineering aspects were thoroughly studied during the design, development, and implementation of the high performance parallel computing platform, rama, at Michigan Tech. In an attempt to stay true to the principles of the Beowulf revolution, the rama cluster was extensively customized to make it easy to understand and use, for administrators as well as end-users. Following the results of benchmark calculations, and to keep up with the complexity of the systems under study, rama was expanded to a total of sixty-four processors. Interest in the non-covalent interaction of DNA with carbon nanotubes has steadily increased over the past several years. This hybrid system, at the junction of the biological regime and the nanomaterials world, possesses features which make it very attractive for a wide range of applications. Using the in-house computational power available, we studied the details of the interaction of nucleic acid bases with a graphene sheet as well as with a high-curvature, small-diameter carbon nanotube. The calculated trend in the binding energies strongly suggests that the polarizability of the base molecules determines the interaction strength of the nucleic acid bases with graphene. Comparing the results obtained for physisorption on the small-diameter nanotube with those from the study on graphene, the interaction strength of the nucleic acid bases is smaller for the tube. Thus, the effect of introducing curvature is to reduce the binding energy. The binding energies for the two extreme cases of negligible curvature (a flat graphene sheet) and very high curvature (a small-diameter nanotube) may be considered as upper and lower bounds. This finding represents an important step towards a better understanding of the experimentally observed sequence-dependent interaction of DNA with carbon nanotubes.

Relevance: 30.00%

Abstract:

After teaching regular education secondary mathematics for seven years, I accepted a position in an alternative education high school. Over the next four years, the State of Michigan adopted new graduation requirements, phasing in a mandate for all students to complete Geometry and Algebra 2 courses. Since many of my students were already struggling in Algebra 1, getting them through Geometry and Algebra 2 seemed like a daunting task. To better instruct my students, I wanted to know how other teachers in similar situations were addressing the new High School Content Expectations (HSCEs) in upper level mathematics. This study examines how thoroughly alternative education teachers in Michigan are addressing the HSCEs in their courses, what approaches they have found most effective, and what issues are preventing teachers and schools from successfully implementing the HSCEs. Twenty-six alternative high school educators completed an online survey that included a variety of questions regarding school characteristics, curriculum alignment, implementation approaches, and issues. Follow-up phone interviews were conducted with four of these participants. The survey responses were used to categorize schools as successful, unsuccessful, or neutral in terms of meeting the HSCEs. Responses from schools in each category were compared to identify common approaches and issues and to identify significant differences between school groups. Data analysis showed that successful schools taught more of the HSCEs through a variety of instructional approaches, with an emphasis on varying the ways students learned the material. Individualized instruction was frequently mentioned by successful schools and was strikingly absent from unsuccessful school responses. The main obstacle to successful implementation of the HSCEs identified in the study was gaps in student knowledge, which in turn made the pace of instruction a significant issue. School representatives were fairly united in the belief that the Algebra 2 graduation requirement was not appropriate for all alternative education students. Possible implications of these findings are discussed.

Relevance: 30.00%

Abstract:

Spray characterization under flash boiling conditions was investigated using a symmetric multi-hole injector applicable to gasoline direct injection (GDI) engines. Tests were performed in a constant volume combustion vessel using high-speed schlieren and Mie-scattering imaging systems. Four fuels were considered: n-heptane, pure ethanol, ethanol blended with 15% iso-octane by volume, and test-grade E85. Experimental conditions covered a range of ambient pressures, fuel temperatures, and fuel injection pressures. Visualization of the vaporizing spray development was acquired using schlieren and laser-based Mie-scattering techniques. Time-evolved spray tip penetration, spray angle, and the ratio of the vapor to liquid region were analyzed using digital image processing techniques in MATLAB. This research outlines spray characteristics at flash boiling and non-flash boiling conditions. At flash boiling conditions it was observed that individual plumes merge together, leading to a significant contraction in spray angle compared to non-flash boiling conditions. The results indicate that at flash boiling conditions, spray formation and the expansion of the vapor region depend on the momentum exchange offered by the ambient gas. A relation between momentum exchange and the resulting liquid spray angle was also observed.
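The MATLAB image processing routines are not reproduced in the abstract. As a simplified Python sketch of the two headline measurements, tip penetration and spray angle from a thresholded frame, assuming the injector sits at the top row of the image and an arbitrary pixel scale:

```python
import numpy as np

def tip_penetration_mm(binary_frame, nozzle_row=0, mm_per_px=0.1):
    """Distance from the nozzle row to the farthest row containing spray
    in a thresholded (0/1) image, with the spray moving down-image."""
    rows = np.flatnonzero(binary_frame.any(axis=1))
    return 0.0 if rows.size == 0 else (rows.max() - nozzle_row) * mm_per_px

def spray_angle_deg(binary_frame, mm_per_px=0.1, depth_px=20):
    """Full cone angle from the spray width over the first depth_px rows
    below the nozzle; plume merging at flash boiling shrinks this value."""
    near = binary_frame[:depth_px]
    cols = np.flatnonzero(near.any(axis=0))
    width_mm = (cols.max() - cols.min() + 1) * mm_per_px if cols.size else 0.0
    depth_mm = depth_px * mm_per_px
    return 2.0 * np.degrees(np.arctan2(width_mm / 2.0, depth_mm))
```

Applied frame by frame to a schlieren or Mie-scattering sequence, these two quantities give the time-evolved penetration and angle curves the study analyzes.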

Relevance: 30.00%

Abstract:

Gas sensors are used widely in important areas including industrial control, environmental monitoring, counter-terrorism, and chemical production. Micro-fabrication offers a promising way to achieve sensitive and inexpensive gas sensors. Over the years, various MEMS gas sensors have been investigated and fabricated. One significant type of MEMS gas sensor is based on mass-change detection and integration with a specific polymer. This dissertation aims to contribute to the design and fabrication of MEMS resonant mass sensors with capacitive actuation and sensing that lead to improved sensitivity. To accomplish this goal, the research has several objectives: (1) define an effective measure for evaluating the sensitivity of resonant mass devices; (2) model the effects of air damping on microcantilevers and validate the models using a laser measurement system; (3) develop design guidelines for improving sensitivity in the presence of air damping; and (4) characterize the degree of uncertainty in performance arising from fabrication variation for one or more process sequences, and establish design guidelines for improved robustness. Work has been completed toward these objectives. An evaluation measure has been developed and compared to an RMS-based measure. Analytic models of air damping for parallel plates that include holes are compared with a COMSOL model. The models have been used to identify cantilever design parameters that maximize sensitivity. Additional designs have been modeled with COMSOL, and an analytical model for fixed-free cantilever geometries with holes has been developed. Two process flows have been implemented and compared. A number of cantilever designs have been fabricated, the process uncertainty has been investigated, and the variability arising from processing has been evaluated and characterized.
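For a lumped resonant mass sensor, the sensitivity the dissertation evaluates can be illustrated with the textbook relation df/dm ≈ -f0/(2m); this sketch is a generic spring-mass idealization, not the dissertation's own evaluation measure:

```python
import math

def resonant_frequency_hz(k_n_per_m, m_kg):
    """Natural frequency of a lumped spring-mass resonator."""
    return math.sqrt(k_n_per_m / m_kg) / (2.0 * math.pi)

def mass_sensitivity(k_n_per_m, m_kg):
    """df/dm = -f0 / (2 m): frequency shift per unit of adsorbed mass.
    Lighter resonators shift more, which is why MEMS scaling helps; air
    damping then limits how sharply that shift can be resolved."""
    return -resonant_frequency_hz(k_n_per_m, m_kg) / (2.0 * m_kg)
```

The negative sign reflects that adsorbed analyte mass lowers the resonant frequency, which is the signal a polymer-coated resonant gas sensor reads out.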

Relevance: 30.00%

Abstract:

Dolomite [CaMg(CO3)2] is an intolerable impurity in phosphate ores due to its MgO content. Traditionally, the Florida phosphate industry has avoided mining high-MgO phosphate reserves due to the lack of an economically viable process for removing dolomite. However, as the high grade phosphate reserves become depleted, more emphasis is being placed on the development of a cost effective method for separating dolomite from high-MgO phosphate ores. In general, the phosphate industry demands a phosphate concentrate containing less than 1% MgO. Dolomite impurities have mineralogical properties that are very similar to those of the desired phosphate mineral (francolite), making the separation of the two minerals very difficult. Magnesium is primarily found as distinct dolomite-rich pebbles, as very fine dolomite inclusions in predominantly francolite pebbles, and as magnesium substituted into the francolite structure. Jigging is a gravity separation process that takes advantage of the density difference between the dolomite and francolite pebbles. A unique laboratory scale jig was designed and built at Michigan Tech for this study. Through a series of tests it was found that a pulsation rate of 200 pulses/minute, a stroke length of 1 inch, a water addition rate of 0.5 gpm, and alumina ragging balls were optimal for this study. To investigate the feasibility of jigging for the removal of dolomite from phosphate ore, two high-MgO phosphate ores were tested using the optimized jigging parameters: (1) Plant #1 was sized to 4.00x0.85 mm and contained 1.55% MgO; (2) Plant #2 was sized to 3.40x0.85 mm and contained 3.07% MgO. A sample from each plant was visually separated by hand into dolomite-rich and francolite-rich fractions, which were then analyzed to determine the minimum achievable MgO levels. For the Plant #1 phosphate ore, a concentrate containing 0.89% MgO was achieved at a recovery of 32.0% BPL. For Plant #2, a phosphate concentrate containing 1.38% MgO was achieved at a recovery of 74.7% BPL. The minimum achievable MgO levels were determined to be 0.53% MgO for Plant #1 and 1.15% MgO for Plant #2.
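The recoveries quoted above relate assays and mass splits through the standard two-product formula of mineral processing; this generic sketch is illustrative (the study's actual BPL assays are not given in the abstract):

```python
def two_product(feed_assay, conc_assay, tail_assay):
    """Two-product formula: fraction of feed mass reporting to the
    concentrate, and recovery of the assayed component to it, from
    feed, concentrate, and tailings assays of the same component."""
    mass_pull = (feed_assay - tail_assay) / (conc_assay - tail_assay)
    recovery = mass_pull * conc_assay / feed_assay
    return mass_pull, recovery
```

Grade and recovery trade off against each other: pushing the concentrate below 1% MgO (the industry target) typically sacrifices BPL recovery, which is the tension visible in the Plant #1 versus Plant #2 results.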

Relevance: 30.00%

Abstract:

Range estimation is the core of many positioning systems such as radar and Wireless Local Positioning Systems (WLPS). Range is estimated by estimating Time-of-Arrival (TOA), the signal propagation delay between a transmitter and a receiver. Thus, error in TOA estimation degrades range estimation performance. In wireless environments, noise, multipath, and limited bandwidth reduce TOA estimation performance. TOA estimation algorithms designed for wireless environments aim to improve performance by mitigating the effect of closely spaced paths in practical (positive) signal-to-noise ratio (SNR) regions. Limited bandwidth prevents the discrimination of closely spaced paths, which reduces TOA estimation performance. TOA estimation methods are evaluated as a function of SNR, bandwidth, and the number of reflections in multipath wireless environments, as well as their complexity. In this research, a TOA estimation technique based on Blind Signal Separation (BSS) is proposed. This frequency domain method estimates TOA in wireless multipath environments for a given signal bandwidth. The structure of the proposed technique is presented and its complexity and performance are theoretically evaluated. It is shown that the proposed method is not sensitive to SNR, the number of reflections, or bandwidth. In general, as bandwidth increases, TOA estimation performance improves. However, spectrum is the most valuable resource in wireless systems, and a large portion of spectrum to support high performance TOA estimation is usually not available. In addition, the radio frequency (RF) components of wideband systems suffer from high cost and complexity. Thus, a novel multiband positioning structure is proposed. The proposed technique uses the available (non-contiguous) bands to support high performance TOA estimation. This system incorporates the capabilities of cognitive radio (CR) systems to sense the available spectrum (also called white space) and to use that white space for high-performance localization. First, contiguous bands divided into several unequal narrow sub-bands with the same SNR are concatenated to attain an accuracy corresponding to the equivalent full band. Two radio architectures are proposed and investigated: the signal is transmitted over the available spectrum either simultaneously (parallel concatenation) or sequentially (serial concatenation). Low complexity radio designs that handle the concatenation process sequentially and in parallel are introduced. Different TOA estimation algorithms applicable to multiband scenarios are studied, and their performance is theoretically evaluated and compared to simulations. Next, the results are extended to non-contiguous, unequal sub-bands with the same SNR, which are more realistic assumptions in practical systems. The performance and complexity of the proposed technique are investigated as well. This study's results show that positioning accuracy can be adapted by selecting the bandwidth, center frequency, and SNR level of each sub-band.
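As a baseline against which multipath-aware methods like the proposed BSS technique are compared, the classic matched-filter (cross-correlation) TOA estimator can be sketched in a few lines; the probe length, delay, and sampling rate are arbitrary assumptions:

```python
import numpy as np

def estimate_toa(tx, rx, fs_hz):
    """Matched-filter TOA: the delay (in seconds) at the peak of the
    cross-correlation between transmitted and received waveforms."""
    corr = np.correlate(rx, tx, mode="full")
    # index len(tx)-1 corresponds to zero lag in 'full' mode
    lag = int(np.argmax(corr)) - (len(tx) - 1)
    return lag / fs_hz

rng = np.random.default_rng(0)
tx = rng.standard_normal(128)            # known probe waveform
rx = np.concatenate([np.zeros(25), tx])  # received 25 samples later
```

In a multipath channel the correlation peak from a strong late reflection can mask the first-arriving path, and the correlator cannot resolve paths closer together than roughly the inverse bandwidth; both limitations motivate the BSS-based and multiband approaches above.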

Relevance: 30.00%

Abstract:

The Michigan Department of Transportation (MDOT) is evaluating upgrading its portion of the Wolverine Line between Chicago and Detroit to accommodate high speed rail. This will entail upgrading the track to allow trains to run at speeds in excess of 110 miles per hour (mph). An important component of this upgrade will be to assess the ballast material requirements for high speed rail. In the event that the existing ballast materials do not meet the specifications for higher speed trains, additional ballast will be required. The purpose of this study, therefore, is to review the current MDOT railroad ballast quality specifications and compare them to national and international specifications for use on high speed rail lines. The study found that while MDOT has quality specifications for railroad ballast, it has none specific to high speed rail. Likewise, the American Railway Engineering and Maintenance-of-Way Association (AREMA) has specifications for railroad ballast but none specific to high speed rail lines. The AREMA aggregate specifications for ballast include the following tests: (1) LA Abrasion, (2) Percent Moisture Absorption, (3) Flat and Elongated Particles, and (4) Sulfate Soundness. Internationally, some countries do require a higher standard for high speed rail, such as the Los Angeles (LA) Abrasion test run to a more demanding performance threshold and the Micro-Deval test, which is used to determine the maximum speed at which a high speed train can operate. Since there are no existing MDOT ballast specifications for high speed rail, it is assumed that the aggregate ballast specifications for the Wolverine Line will follow the stricter international specifications. The Wolverine Line, however, is located in southern Michigan, a region of sedimentary rocks which generally do not meet the existing MDOT ballast specifications. The investigation found only 12 quarries in Michigan that meet the MDOT specification. Of these 12 quarries, six were igneous or metamorphic rock quarries and six were carbonate quarries. Of the six carbonate quarries, four were located in the Lower Peninsula and two in the Upper Peninsula. Two of the carbonate quarries were located in close proximity to the Wolverine Line, while the remaining quarries were at a significant haulage distance. In either case, the cost of haulage becomes an important consideration. In this regard, four of the quarries have lake terminals allowing water transportation to downstate ports. The Upper Peninsula also has a significant amount of metal mining in both igneous and metamorphic rock that generates large quantities of waste rock that could be used as ballast material. The main drawback, however, is the distance to the Wolverine Line. One potential source is Cliffs Natural Resources, which operates two large surface mines in the Marquette area with rail and water transportation to both Lake Superior and Lake Michigan. Both mines extract rock with a compressive strength far in excess of most ballast materials used in the United States, which would make excellent ballast material. Discussions with Cliffs, however, indicated that due to environmental concerns they would most likely not be interested in producing a ballast material. In the United States, carbonate aggregates, while used for ballast, often do not meet the ballast specifications, in addition to the problem of particle degradation that can lead to fouling and cementation issues. Thus, many carbonate aggregate quarries in close proximity to railroads are not used. Since Michigan has a significant number of carbonate quarries, the research also investigated using the dynamic properties of aggregate as a possible additional test of aggregate ballast quality. The dynamic strength of a material can be assessed using a split Hopkinson pressure bar (SHPB). The SHPB has traditionally been used to assess the dynamic properties of metals, but over the past 20 years it has also been used to assess the dynamic properties of brittle materials such as ceramics and rock. In addition, the wear properties of metals have been related to their dynamic properties. Wear, or breakdown, of railroad ballast is one of the main problems with ballast material due to the dynamic loading generated by trains, and this loading will be significantly higher for high speed rail. Previous research has indicated that the Port Inland quarry along Lake Michigan in the southern Upper Peninsula produces stone with dynamic properties that might make it usable as an aggregate for high speed rail. The dynamic strength testing conducted in this research indicates that the Port Inland limestone in fact has a dynamic strength close to that of igneous rocks and much higher than that of other carbonate rocks in the Great Lakes region. It is recommended that further research be conducted to investigate the Port Inland limestone as a high speed rail ballast material.
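Abrasion-type quality tests such as LA Abrasion and Micro-Deval ultimately reduce to a percent-loss figure; a trivial sketch (the actual AREMA procedures, charge counts, and acceptance thresholds are not reproduced here):

```python
def percent_loss(initial_mass_g, retained_mass_g):
    """Mass lost to wear in an abrasion-type test, as a percentage of
    the starting sample; lower loss indicates tougher ballast stone."""
    return 100.0 * (initial_mass_g - retained_mass_g) / initial_mass_g
```

A stricter high speed rail specification amounts to demanding a lower percent-loss ceiling from the same test, which is exactly what disqualifies many softer carbonate aggregates.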

Relevance: 30.00%

Abstract:

Power transformers are key components of the power grid and are among the components most exposed to a variety of power system transients. The failure of a large transformer can cause severe monetary losses to a utility, so adequate protection schemes are of great importance to avoid transformer damage and maximize continuity of service. Computer modeling can be an efficient tool for improving the reliability of a transformer protective relay application. Unfortunately, the transformer models presently available in commercial software lack completeness in the representation of several aspects, such as internal winding faults, which are a common cause of transformer failure. It is also important to adequately represent the transformer at frequencies above the power frequency for more accurate simulation of switching transients, since these are a well known cause of unwanted tripping of protective relays. This work develops new capabilities for the Hybrid Transformer Model (XFMR) implemented in ATPDraw to allow the representation of internal winding faults and slow-front transients up to 10 kHz. The new model can be built from either of two sources of information: (1) test report data or (2) design data. When only test report data is available, a higher-order leakage inductance matrix is created from standard measurements. If design information is available, a finite element model (FEM) is created to calculate the leakage parameters for the higher-order model; an analytical model is also implemented as an alternative to FEM modeling. Measurements on 15-kVA 240Δ/208Y V and 500-kVA 11430Y/235Y V distribution transformers were performed to validate the model. A transformer model valid for simulations above the power frequency was developed by further dividing the windings into multiple sections and including a higher-order capacitance matrix. Frequency-scan laboratory measurements were used to benchmark the simulations. Finally, a stability analysis of the higher-order model was made by analyzing the trapezoidal rule for numerical integration as used in ATP. Numerical damping was also added to suppress oscillations locally when discontinuities occurred in the solution. A maximum error magnitude of 7.84% was encountered in the simulated currents for different turn-to-ground and turn-to-turn faults. The FEM approach provided the most accurate means of determining the leakage parameters for the ATP model. The higher-order model was found to reproduce the short-circuit impedance acceptably up to about 10 kHz, and the behavior at the first anti-resonant frequency was better matched with the measurements.
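The trapezoidal-rule oscillation and its suppression by numerical damping can be illustrated on the scalar stiff test equation; this theta-method sketch is a generic illustration of the phenomenon, not the ATP implementation:

```python
def theta_series(lam, y0, dt, steps, alpha=0.0):
    """Theta-method solution of the stiff test equation y' = lam * y.
    alpha = 0 is the trapezoidal rule used by EMTP/ATP-type programs;
    alpha = 1 is backward Euler, which damps the alternating-sign
    numerical oscillation trapezoidal integration shows after a
    discontinuity (modeled here as a sudden value on a fast mode)."""
    theta = 0.5 + 0.5 * alpha  # implicitness weight
    y, out = y0, [y0]
    for _ in range(steps):
        # closed-form update: y_{n+1}(1 - dt*lam*theta) = y_n(1 + dt*lam*(1-theta))
        y = (1.0 + dt * lam * (1.0 - theta)) * y / (1.0 - dt * lam * theta)
        out.append(y)
    return out
```

With dt · lam = -10 the trapezoidal amplification factor is -2/3, so the solution rings with alternating sign instead of decaying monotonically; blending toward backward Euler at discontinuities, as done in this work, removes the ringing locally at the cost of some added damping.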

Relevância:

30.00%

Publicador:

Resumo:

The need for a stronger and more durable building material is becoming more important as the structural engineering field expands and challenges the behavioral limits of current materials. One of the demands for stronger material is rooted in the effects that dynamic loading has on a structure. High strain rates on the order of 10¹ s⁻¹ to 10³ s⁻¹, though only a small part of the full range of loading rates a structure may see over its life (roughly 10⁻⁸ s⁻¹ to 10⁴ s⁻¹), have very important effects when considering dynamic loading on a structure. Strain rates this high can cause the material and structure to behave differently than at slower rates, necessitating testing of materials under such loading to understand their behavior. Ultra-high performance concrete (UHPC), a relatively new material in the U.S. construction industry, exhibits many enhanced strength and durability properties compared to standard normal-strength concrete. However, the use of this material for high strain rate applications requires an understanding of UHPC's dynamic properties under corresponding loads. One such dynamic property is the increase in compressive strength under high strain rate load conditions, quantified as the dynamic increase factor (DIF). This factor allows a designer to relate the dynamic compressive strength back to the static compressive strength, which is generally a well-established property. Previous research has established relationships for applying the DIF concept in design. The generally accepted methodology for obtaining high strain rates to study the enhanced compressive strength of materials is the split Hopkinson pressure bar (SHPB). In this research, 83 Cor-Tuf UHPC specimens were tested in dynamic compression using a SHPB at Michigan Technological University.
The specimens were separated into two categories, ambient cured and thermally treated, with aspect ratios of 0.5:1, 1:1, and 2:1 within each category. There was no statistically significant difference in mean DIF among the aspect ratios and cure regimes considered in this study; DIFs ranged from 1.85 to 2.09. Failure modes were observed to be mostly Type 2, Type 4, or combinations thereof for all specimen aspect ratios when classified according to ASTM C39 fracture pattern guidelines. The Comité Euro-International du Béton (CEB) model for DIF versus strain rate does not accurately predict the DIF for the UHPC data gathered in this study. Additionally, a measurement system analysis was conducted to quantify variance within the measurement system, and a general linear model analysis was performed to examine the interaction and main effects that aspect ratio, cannon pressure, and cure method have on the maximum dynamic stress.
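For readers unfamiliar with the CEB model mentioned above, the compression DIF is usually written as a two-branch power law in strain rate. The sketch below uses the constants commonly quoted for CEB-FIP Model Code 1990; the exact coefficients should be checked against the code text, and the 150 MPa static strength is an assumed, illustrative UHPC-range value.

```python
import math

def ceb_dif_compression(strain_rate, f_c, f_co=10.0, eps_s=30e-6):
    """Dynamic increase factor for concrete in compression, as the
    CEB-FIP Model Code 1990 formulation is commonly cited (verify the
    coefficients against the code before any design use).
    strain_rate : dynamic strain rate (1/s)
    f_c         : static compressive strength (MPa)
    """
    alpha = 1.0 / (5.0 + 9.0 * f_c / f_co)
    if strain_rate <= 30.0:
        # low-rate branch: gentle power law
        return (strain_rate / eps_s) ** (1.026 * alpha)
    # high-rate branch: cube-root law with matching constant gamma
    gamma = 10.0 ** (6.156 * alpha - 2.0)
    return gamma * (strain_rate / eps_s) ** (1.0 / 3.0)

# Assumed UHPC-range static strength of 150 MPa at a strain rate of 200 1/s
print(ceb_dif_compression(200.0, 150.0))
```

A designer would compare such a predicted curve against measured DIF data across strain rates; the study above found that the CEB curve does not track the measured UHPC data accurately.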

Relevância:

30.00%

Publicador:

Resumo:

This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation in localization systems. It proposes highly parallel algorithms for subspace decomposition and polynomial rooting, steps that are traditionally implemented with sequential algorithms. The proposed algorithms address the emerging need for real-time localization in a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms grows, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that offer considerable improvement over traditional ones, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field-programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware, or application-specific integrated circuits (ASICs), which offer large numbers of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable, and straightforward to implement. First, the thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations required to converge to the correct singular values, bringing performance closer to real time. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented.
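To make the parallelism argument concrete, here is a standard one-sided Jacobi SVD in Python. This is a classic baseline, not the thesis's proposed fast-converging algorithm: its appeal for FPGA/SIMD implementation is that the plane rotations for disjoint column pairs within a sweep are independent and can execute concurrently on separate processing elements.

```python
import numpy as np

def jacobi_svd_singular_values(A, sweeps=10, tol=1e-12):
    """Singular values via one-sided (Hestenes) Jacobi rotations.
    Each sweep orthogonalizes every pair of columns; at convergence
    the column norms are the singular values."""
    U = A.astype(float).copy()
    n = U.shape[1]
    for _ in range(sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                # 2x2 Gram sub-problem for columns p and q
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                off = max(off, abs(gamma))
                if abs(gamma) < tol:
                    continue
                zeta = (beta - alpha) / (2.0 * gamma)
                if zeta == 0.0:
                    t = 1.0              # equal norms: rotate by 45 degrees
                else:
                    t = np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                # plane rotation orthogonalizes the column pair
                Up, Uq = U[:, p].copy(), U[:, q].copy()
                U[:, p] = c * Up - s * Uq
                U[:, q] = s * Up + c * Uq
        if off < tol:                    # all pairs already orthogonal
            break
    return np.sort(np.linalg.norm(U, axis=0))[::-1]

A = np.array([[3.0, 0.0], [4.0, 5.0]])
print(jacobi_svd_singular_values(A))     # compare with np.linalg.svd(A)[1]
```

The sequential double loop above is where a hardware design departs from software: within a sweep, non-overlapping (p, q) pairs can be rotated simultaneously, which is one reason Jacobi-type methods map well onto arrays of processing elements.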
The FPGA design is pipelined to the maximum extent to increase the maximum achievable operating frequency. The system was developed with the objective of achieving high throughput, and the modern cores available in FPGAs were used to maximize performance; these modules are described in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable specifically to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this rooting method. The technique exhibits parallelism and converges to the desired roots within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time the complex dynamics of the root-MUSIC polynomial have been analyzed to derive such an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
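The thesis's root-MUSIC-specific derivation is not reproduced in the abstract, but the general shape of a parallel Newton rooting scheme can be sketched. In the toy below (all parameter choices are illustrative, not from the thesis), Newton iterations are launched from points spread around the unit circle, where the signal roots of a root-MUSIC polynomial lie; each start point iterates independently, which is what makes the approach hardware-friendly.

```python
import numpy as np

def newton_roots_from_circle(coeffs, n_starts=32, iters=50, tol=1e-10):
    """Newton's method launched in parallel from points on the unit
    circle -- a generic sketch of the idea, NOT the thesis's
    root-MUSIC-specific derivation. `coeffs` is high-to-low order."""
    p = np.polynomial.Polynomial(coeffs[::-1])   # Polynomial wants low-to-high
    dp = p.deriv()
    starts = np.exp(2j * np.pi * np.arange(n_starts) / n_starts)
    roots = []
    for z in starts:                     # each start is independent: parallel
        for _ in range(iters):
            fz = p(z)
            if abs(fz) < tol:
                break
            dz = dp(z)
            if dz == 0:                  # stationary point: abandon this start
                break
            z = z - fz / dz
        # keep only converged roots, de-duplicated
        if abs(p(z)) < tol and not any(abs(z - r) < 1e-6 for r in roots):
            roots.append(z)
    return roots

# z^2 - 1 has roots at +1 and -1 (on the unit circle, like the signal
# roots of a root-MUSIC polynomial)
print(sorted(r.real for r in newton_roots_from_circle([1.0, 0.0, -1.0])))
```

In hardware, the outer loop over start points becomes an array of identical Newton units running concurrently, each bounded by the same fixed iteration count.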

Relevância:

30.00%

Publicador:

Resumo:

Lava flow modeling can be a powerful tool in hazard assessments; however, the ability to produce accurate models is usually limited by a lack of high-resolution, up-to-date Digital Elevation Models (DEMs). This is especially apparent in places such as Kilauea Volcano (Hawaii), where active lava flows frequently alter the terrain. In this study, we use a new technique to create high-resolution DEMs of Kilauea using synthetic aperture radar (SAR) data from the TanDEM-X (TDX) satellite. We convert raw TDX SAR data into a geocoded DEM using GAMMA software [Werner et al., 2000]. This process can be completed in several hours and permits creation of updated DEMs as soon as new TDX data are available. To test the DEMs, we use the Harris and Rowland [2001] FLOWGO lava flow model combined with the Favalli et al. [2005] DOWNFLOW model to simulate the 3-15 August 2011 eruption on Kilauea's East Rift Zone. Results were compared with simulations using the older, lower-resolution 2000 SRTM DEM of Hawaii. Effusion rates used in the model are derived from MODIS thermal infrared satellite imagery. FLOWGO simulations using the TDX DEM produced a single flow line that matched the August 2011 flow almost perfectly, but could not recreate the entire flow field due to the relatively high noise level of the DEM. The resulting short model flow lengths can be remedied by filtering noise from the DEM. Model simulations using the outdated SRTM DEM produced a flow field that followed a different trajectory from the one observed: numerous lava flows have been emplaced at Kilauea since the creation of the SRTM DEM, leading the model to project flow lines into areas that have since been covered by fresh lava. These results show that DEMs can quickly become outdated on active volcanoes, but our new technique offers the potential to produce accurate, updated DEMs for modeling lava flow hazards.
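The role the DEM plays in such models can be illustrated with a minimal steepest-descent path tracer, a drastic simplification of DOWNFLOW (which adds stochastic elevation perturbations to build a whole flow field rather than one line). It also shows why DEM noise matters: a single spuriously low cell creates an artificial pit that terminates the flow line early.

```python
import numpy as np

def steepest_descent_path(dem, start, max_steps=1000):
    """Trace a flow line as the path of steepest descent on a DEM grid.
    `dem` is a 2-D array of elevations; `start` is a (row, col) tuple."""
    rows, cols = dem.shape
    r, c = start
    path = [start]
    for _ in range(max_steps):
        # examine the 8 neighbours and move to the lowest one
        best, best_z = None, dem[r, c]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    if dem[rr, cc] < best_z:
                        best, best_z = (rr, cc), dem[rr, cc]
        if best is None:          # local minimum / pit: the flow line stops
            break
        r, c = best
        path.append(best)
    return path

# Tiny synthetic DEM: a plane tilted down toward the south-east corner
y, x = np.mgrid[0:5, 0:5]
dem = 10.0 - x - y
print(steepest_descent_path(dem, (0, 0)))   # runs down the diagonal
```

On a noisy DEM, random low cells act as pits along the way, truncating the path; smoothing or filtering the DEM before routing, as suggested above, removes these artificial sinks.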