9 results for 100602 Input Output and Data Devices

in Digital Commons - Michigan Tech


Relevance: 100.00%

Abstract:

The primary challenge in groundwater and contaminant transport modeling is obtaining the data needed for constructing, calibrating and testing the models. Large amounts of data are necessary for describing the hydrostratigraphy in areas with complex geology. Increasingly, states are making spatial data available that can be used for input to groundwater flow models. The appropriateness of these data for large-scale flow systems has not been tested. This study focuses on modeling a plume of 1,4-dioxane in a heterogeneous aquifer system in Scio Township, Washtenaw County, Michigan. The analysis consisted of: (1) characterization of the hydrogeology of the area and construction of a conceptual model based on publicly available spatial data, (2) development and calibration of a regional flow model for the site, (3) conversion of the regional model to a more highly resolved local model, (4) simulation of the dioxane plume, and (5) evaluation of the model's ability to simulate field data and estimation of the possible dioxane sources and subsequent migration until maximum concentrations are at or below the Michigan Department of Environmental Quality's residential cleanup standard for groundwater (85 ppb). The MODFLOW-2000 and MT3D programs were used to simulate the groundwater flow and the development and movement of the 1,4-dioxane plume, respectively. MODFLOW simulates transient groundwater flow in a quasi-3-dimensional sense, subject to a variety of boundary conditions that can simulate recharge, pumping, and surface-water/groundwater interactions. MT3D simulates solute advection with groundwater flow (using the flow solution from MODFLOW), dispersion, source/sink mixing, and chemical reaction of contaminants. This modeling approach successfully simulated the groundwater flows by calibrating recharge and hydraulic conductivities.
The plume transport was adequately simulated using literature dispersivity and sorption coefficients, although the plume geometries were not well constrained.
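As a rough illustration of the transport physics MT3D solves (advection plus dispersion along a flow path), the sketch below advances a 1-D concentration pulse with an explicit upwind scheme. It is only a sketch under hypothetical parameters, not MT3D or the calibrated Scio Township model.

```python
# 1-D advection-dispersion sketch (explicit upwind finite differences).
# Illustrative only: MT3D solves the full 3-D problem with sorption and
# reactions; v, d, and the 1000 ppb source pulse are hypothetical values.

def transport_step(conc, v, d, dx, dt):
    """Advance concentrations one step: upwind advection + central dispersion."""
    new = conc[:]
    for i in range(1, len(conc) - 1):
        adv = -v * (conc[i] - conc[i - 1]) / dx
        disp = d * (conc[i + 1] - 2 * conc[i] + conc[i - 1]) / dx ** 2
        new[i] = conc[i] + dt * (adv + disp)
    return new

cells = 60
plume = [0.0] * cells
plume[1] = 1000.0                      # ppb pulse at a hypothetical source cell
for _ in range(100):                   # Courant number v*dt/dx = 0.25 (stable)
    plume = transport_step(plume, v=0.5, d=0.1, dx=1.0, dt=0.5)
peak = max(plume)
```

Dispersion attenuates the pulse as it advects downstream; in the study, this attenuation is what allows maximum concentrations to decay toward the 85 ppb cleanup standard.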

Relevance: 100.00%

Abstract:

Sustainable yields from water wells in hard-rock aquifers are achieved when the well bore intersects fracture networks. Fracture networks are often not readily discernible at the surface. Lineament analysis using remotely sensed satellite imagery has been employed to identify surface expressions of fracturing, and a variety of image-analysis techniques have been successfully applied in “ideal” settings. An ideal setting for lineament detection is where the influences of human development, vegetation, and climatic situations are minimal and hydrogeological conditions and geologic structure are known. There is not yet a well-accepted protocol for mapping lineaments, nor have different approaches been compared in non-ideal settings. A new approach for image-processing/synthesis was developed to identify successful satellite imagery types for lineament analysis in non-ideal terrain. Four satellite sensors (ASTER, Landsat7 ETM+, QuickBird, RADARSAT-1) and a digital elevation model were evaluated for lineament analysis in Boaco, Nicaragua, where the landscape is subject to varied vegetative cover, a plethora of anthropogenic features, and frequent cloud cover that limit the availability of optical satellite data. A variety of digital image processing techniques were employed and lineament interpretations were performed to obtain 12 complementary image products that were evaluated subjectively to identify lineaments. The 12 lineament interpretations were synthesized to create a raster image of lineament zone coincidence that shows the level of agreement among the 12 interpretations. A composite lineament interpretation was made using the coincidence raster to restrict lineament observations to areas where multiple interpretations (at least 4) agree. Nine of the 11 previously mapped faults were identified from the coincidence raster. An additional 26 lineaments were identified from the coincidence raster, and the locations of 10 were confirmed by field observation.
Four manual pumping tests suggest that well productivity is higher for wells proximal to lineament features. Interpretations from RADARSAT-1 products were superior to interpretations from other sensor products, suggesting that quality lineament interpretation in this region requires anthropogenic features to be minimized and topographic expressions to be maximized. The approach developed in this study has the potential to improve the siting of wells in non-ideal regions.
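The synthesis step described above can be sketched as follows: count, cell by cell, how many of the binary interpretations flag a lineament, then keep only cells meeting the agreement threshold (at least 4 of 12 in the study). The toy 1x2 grids below stand in for the real rasters.

```python
# Sketch of the coincidence-raster synthesis; grids and votes are toy data.

def coincidence_raster(interpretations):
    """Cell-wise count of how many binary interpretations flag a lineament."""
    rows, cols = len(interpretations[0]), len(interpretations[0][0])
    return [[sum(interp[r][c] for interp in interpretations)
             for c in range(cols)] for r in range(rows)]

def composite(coincidence, min_agree=4):
    """Composite interpretation: 1 where enough interpretations coincide."""
    return [[1 if v >= min_agree else 0 for v in row] for row in coincidence]

# One cell flagged by 5 of 12 interpretations, the other by only 3.
interps = [[[1 if k < 5 else 0, 1 if k < 3 else 0]] for k in range(12)]
agreement = coincidence_raster(interps)   # [[5, 3]]
kept = composite(agreement)               # [[1, 0]] -- only the 5-vote cell
```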

Relevance: 100.00%

Abstract:

New designs of user input systems have resulted from developing technologies and specialized user demands. Conventional keyboard and mouse input devices still dominate in input speed, but other input mechanisms are demanded in special application scenarios. Touch screen and stylus input methods have been widely adopted by PDAs and smartphones. Reduced keypads are necessary for mobile phones. A new design trend explores the design space of applications requiring single-handed input, even eyes-free operation on small mobile devices. This requires as few keys as possible on the input device so that it remains feasible to operate. But representing many characters with fewer keys can make the input ambiguous. Accelerometers embedded in mobile devices provide opportunities to combine device movements with keys for input signal disambiguation. Recent research has explored this design space for text input. In this dissertation, an accelerometer-assisted single-key positioning input system is developed. It utilizes input device tilt directions as input signals and maps their sequences to output characters and functions. A generic positioning model is developed as a guideline for designing positioning input systems. A calculator prototype and a text input prototype on the 4+1 (5 positions) positioning input system and the 8+1 (9 positions) positioning input system are implemented using accelerometer readings on a smartphone. Users operate with a single physical key, and feedback is audible. Controlled experiments are conducted to evaluate the feasibility, learnability, and design space of the accelerometer-assisted single-key positioning input system. This research can provide inspiration and reference for researchers and practitioners in positioning input design, applications of accelerometer readings, and the development of standard machine-readable sign languages.
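A minimal sketch of how a 4+1 positioning scheme might classify tilt and decode position sequences is shown below; the tilt threshold and the sequence-to-character table are hypothetical illustrations, not the dissertation's actual mapping.

```python
# Hedged sketch of a 4+1 positioning input scheme: the device tilt
# (accelerometer x/y readings) selects one of five positions -- center
# plus four directions -- and a short sequence of positions selects a
# character, confirmed with the single physical key.
import math

def tilt_position(ax, ay, threshold=0.3):
    """Classify an accelerometer reading into one of 5 positions."""
    if math.hypot(ax, ay) < threshold:
        return "center"
    if abs(ax) >= abs(ay):
        return "right" if ax > 0 else "left"
    return "up" if ay > 0 else "down"

# Hypothetical two-tilt sequences -> characters.
SEQUENCES = {("up", "center"): "a", ("up", "right"): "b", ("left", "down"): "c"}

def decode(seq):
    """Map a position sequence to a character; '?' marks an unmapped sequence."""
    return SEQUENCES.get(tuple(seq), "?")
```

With 5 positions and two-tilt sequences, 25 codes are available, which is why fewer keys need not mean ambiguous input.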

Relevance: 100.00%

Abstract:

A diesel oxidation catalyst (DOC) with a catalyzed diesel particulate filter (CPF) is an effective exhaust aftertreatment device that reduces particulate emissions from diesel engines, and properly designed DOC-CPF systems provide passive regeneration of the filter by the oxidation of PM via thermal and NO2/temperature-assisted means under various vehicle duty cycles. However, controlling the backpressure on engines caused by the addition of the CPF to the exhaust system requires a good understanding of the filtration and oxidation processes taking place inside the filter, as the deposition and oxidation of solid particulate matter (PM) change as functions of loading time. In order to understand the solid PM loading characteristics in the CPF, an experimental and modeling study was conducted using emissions data measured from the exhaust of a John Deere 6.8 liter, turbocharged and after-cooled engine with a low-pressure loop EGR system and a DOC-CPF system (or a CCRT® - Catalyzed Continuously Regenerating Trap®, as named by Johnson Matthey) in the exhaust system. A series of experiments were conducted to evaluate the performance of the DOC-only, CPF-only and DOC-CPF configurations at two engine speeds (2200 and 1650 rpm) and various loads on the engine ranging from 5 to 100% of maximum torque at both speeds. Pressure drop across the DOC and CPF, mass deposited in the CPF at the end of loading, upstream and downstream gaseous and particulate emissions, and particle size distributions were measured at different times during the experiments to characterize the pressure drop and filtration efficiency of the DOC-CPF system as functions of loading time.
Pressure drop characteristics measured experimentally across the DOC-CPF system showed a distinct deep-bed filtration region characterized by a non-linear pressure drop rise, followed by a transition region, and then by a cake-filtration region with steadily increasing pressure drop with loading time at engine load cases with CPF inlet temperatures less than 325 °C. At the engine load cases with CPF inlet temperatures greater than 360 °C, the deep-bed filtration region had a steep rise in pressure drop followed by a decrease in pressure drop (due to wall PM oxidation) in the cake filtration region. Filtration efficiencies observed during PM cake filtration were greater than 90% in all engine load cases. Two computer models, i.e., the MTU 1-D DOC model and the MTU 1-D 2-layer CPF model, were developed and/or improved from existing models as part of this research and calibrated using the data obtained from these experiments. The 1-D DOC model employs a three-way catalytic reaction scheme for CO, HC and NO oxidation, and is used to predict CO, HC, NO and NO2 concentrations downstream of the DOC. Calibration results from the 1-D DOC model to experimental data at 2200 and 1650 rpm are presented. The 1-D 2-layer CPF model uses a ‘2-filters in series approach’ for filtration, PM deposition and oxidation in the PM cake and substrate wall via thermal (O2) and NO2/temperature-assisted mechanisms, and production of NO2 as the exhaust gas mixture passes through the CPF catalyst washcoat. Calibration results from the 1-D 2-layer CPF model to experimental data at 2200 rpm are presented. Comparisons of filtration and oxidation behavior of the CPF at sample load-cases in both configurations are also presented. The input parameters and selected results are also compared with a similar research study that used an earlier version of the CCRT®, to compare and explain differences in the fundamental behavior of the CCRT® used in these two research studies.
An analysis of the results from the calibrated CPF model suggests that pressure drop across the CPF depends mainly on PM loading and oxidation in the substrate wall, and also that the substrate wall initiates PM filtration and helps in forming a PM cake layer on the wall. After formation of a PM cake layer of about 1-2 µm on the wall, the PM cake becomes the primary filter and performs 98-99% of PM filtration. In all load cases, most of the PM mass deposited was in the PM cake layer, and PM oxidation in the PM cake layer accounted for 95-99% of total PM mass oxidized during loading. Overall PM oxidation efficiency of the DOC-CPF device increased with increasing CPF inlet temperatures and NO2 flow rates, and was higher in the CCRT® configuration compared to the CPF-only configuration due to higher CPF inlet NO2 concentrations. Filtration efficiencies greater than 90% were observed within 90-100 minutes of loading time (starting with a clean filter) in all load cases, because the PM cake on the substrate wall forms a very efficient filter. A good strategy for maintaining high filtration efficiency and low pressure drop of the device while performing active regeneration would be to clean the PM cake filter partially (i.e., by retaining a cake layer of 1-2 µm thickness on the substrate wall) and to completely oxidize the PM deposited in the substrate wall. The data presented support this strategy.
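The dependence of pressure drop on the thin PM cake versus the substrate wall can be illustrated with a Darcy-law estimate for two porous layers in series. This is a gross simplification of the MTU 1-D 2-layer CPF model, and every parameter value below is a hypothetical placeholder, not a calibrated value from the study.

```python
# Darcy-law sketch of the two series contributions to CPF pressure drop:
# the PM cake layer and the substrate wall. All values are hypothetical.

def darcy_dp(viscosity, velocity, thickness, permeability):
    """Pressure drop across one porous layer: dp = mu * u * L / k (Pa)."""
    return viscosity * velocity * thickness / permeability

mu = 3.0e-5          # Pa*s, hypothetical exhaust gas viscosity
u = 0.02             # m/s, hypothetical through-wall velocity
dp_cake = darcy_dp(mu, u, thickness=2e-6, permeability=1e-14)   # ~2 um cake
dp_wall = darcy_dp(mu, u, thickness=4e-4, permeability=1e-12)   # substrate wall
total = dp_cake + dp_wall
```

Because the cake is thin but of low permeability, oxidizing PM stored in the wall (as in the partial-cleaning strategy above) can cut pressure drop while the cake keeps filtration efficiency high.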

Relevance: 100.00%

Abstract:

As the performance gap between microprocessors and memory continues to increase, main memory accesses result in long latencies which become a factor limiting system performance. Previous studies show that main memory access streams contain significant locality and that SDRAM devices provide parallelism through multiple banks and channels. This locality and parallelism have not been thoroughly exploited by conventional memory controllers. In this thesis, SDRAM address mapping techniques and memory access reordering mechanisms are studied and applied to memory controller design with the goal of reducing observed main memory access latency. The proposed bit-reversal address mapping attempts to distribute main memory accesses evenly in the SDRAM address space to enable bank parallelism. As memory accesses to unique banks are interleaved, the access latencies are partially hidden and therefore reduced. With the consideration of cache conflict misses, bit-reversal address mapping is able to direct potential row conflicts to different banks, further improving the performance. The proposed burst scheduling is a novel access reordering mechanism, which creates bursts by clustering accesses directed to the same rows of the same banks. Subject to a threshold, reads are allowed to preempt writes, and qualified writes are piggybacked at the end of the bursts. A sophisticated access scheduler selects accesses based on priorities and interleaves accesses to maximize SDRAM data bus utilization. Consequently, burst scheduling reduces the row conflict rate, increasing and exploiting the available row locality. Using revised SimpleScalar and M5 simulators, both techniques are evaluated and compared with existing academic and industrial solutions. With SPEC CPU2000 benchmarks, bit-reversal reduces the execution time by 14% on average over traditional page interleaving address mapping.
Burst scheduling also achieves a 15% reduction in execution time over conventional in-order bank scheduling. Working constructively together, bit-reversal and burst scheduling achieve a 19% speedup across the simulated benchmarks.
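The bit-reversal idea can be sketched in a few lines: reversing the bank-index bits extracted from a physical address scatters nearby rows across different banks so their accesses can proceed in parallel. The field widths below are illustrative, not those of the evaluated memory controllers.

```python
# Sketch of bit-reversal address mapping. Field widths (2 bank bits,
# 4 column bits) are hypothetical, chosen small for readability.

def reverse_bits(value, width):
    """Reverse the lowest `width` bits of `value`."""
    out = 0
    for _ in range(width):
        out = (out << 1) | (value & 1)
        value >>= 1
    return out

def map_address(addr, bank_bits=2, col_bits=4):
    """Split an address into (bank, row, column), bit-reversing the bank index."""
    col = addr & ((1 << col_bits) - 1)
    bank = reverse_bits((addr >> col_bits) & ((1 << bank_bits) - 1), bank_bits)
    row = addr >> (col_bits + bank_bits)
    return bank, row, col
```

With this mapping, addresses that would collide in one bank under straight page interleaving land in different banks, which is how potential row conflicts become bank-parallel accesses.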

Relevance: 100.00%

Abstract:

As water quality interventions are scaled up to meet the Millennium Development Goal of halving the proportion of the population without access to safe drinking water by 2015, there has been much discussion on the merits of household- and source-level interventions. This study furthers the discussion by examining specific interventions through the use of embodied human and material energy. Embodied energy quantifies the total energy required to produce and use an intervention, including all upstream energy transactions. This model uses material quantities and prices to calculate embodied energy using national economic input/output-based models from China, the United States and Mali. Embodied energy is a measure of the aggregate environmental impacts of the interventions. Human energy quantifies the caloric expenditure associated with the installation and operation of an intervention and is calculated using physical activity ratios (PARs) and basal metabolic rates (BMRs). Human energy is a measure of the aggregate social impacts of an intervention. A total of four household treatment interventions – biosand filtration, chlorination, ceramic filtration and boiling – and four water source-level interventions – an improved well, a rope pump, a hand pump and a solar pump – are evaluated in the context of Mali, West Africa. Source-level interventions slightly outperform household-level interventions in terms of having less total embodied energy. Human energy, typically assumed to be a negligible portion of total embodied energy, is shown to be significant in all eight interventions, contributing over half of total embodied energy in four of them. Traditional gender roles in Mali dictate the types of work performed by men and women. When the human energy is disaggregated by gender, it is seen that women perform over 99% of the work associated with seven of the eight interventions.
This has profound implications for gender equality in the context of water quality interventions, and may justify investment in interventions that reduce human energy burdens.
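The human-energy accounting described above (caloric expenditure derived from PARs and BMRs) can be sketched as below; the BMR, PAR, and task-duration values are illustrative placeholders, not the study's Mali-specific figures.

```python
# Sketch of PAR/BMR human-energy accounting. All numbers are hypothetical.

def human_energy_kcal(bmr_kcal_per_day, par, hours):
    """Energy spent on a task: basal rate scaled by activity ratio and time."""
    return bmr_kcal_per_day / 24.0 * par * hours

# Hypothetical example: hauling water 2 hours per day over a year.
daily = human_energy_kcal(bmr_kcal_per_day=1400, par=4.0, hours=2.0)
annual = daily * 365
```

Summing such task-level figures over an intervention's installation and operating life is what lets human energy be compared directly against material embodied energy.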

Relevance: 100.00%

Abstract:

The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) has been used to quantify SO2 emissions from passively degassing volcanoes. This dissertation explores ASTER’s capability to detect SO2 through satellite validation, enhancement techniques and extensive processing of images at a variety of volcanoes. ASTER is compared to the Mini UV Spectrometer (MUSe), a ground-based instrument, to determine if reasonable SO2 fluxes can be quantified from a plume emitted from Lascar, Chile. The two sensors were in good agreement, with ASTER proving to be a reliable detector of SO2. ASTER illustrated the advantages of imaging a plume in 2D, with better temporal resolution than the MUSe. SO2 plumes in ASTER imagery are not always discernible in the raw TIR data. Principal Component Analysis (PCA) and Decorrelation Stretch (DCS) enhancement techniques were compared to determine how well they highlight a variety of volcanic plumes. DCS produced a consistent output, and the composition of the plumes was easy to identify for explosive eruptions. As the plumes became smaller and lower in altitude, they became harder to distinguish using DCS. PCA proved to be better at identifying smaller, low-altitude plumes. ASTER was used to investigate SO2 emissions at Lascar, Chile. Activity at Lascar has been characterized by cyclic behavior and persistent degassing (Matthews et al. 1997). Previous studies at Lascar have primarily focused on changes in thermal infrared anomalies, neglecting gas emissions. Using the SO2 data along with changes in thermal anomalies and visual observations, it is evident that Lascar is at the end of an eruptive cycle that began in 1993. Declining gas emissions and crater temperatures suggest that the conduit is sealing. ASTER and the Ozone Monitoring Instrument (OMI) were used to determine the annual contribution of SO2 to the troposphere from the Central and South American volcanic arcs between 2000 and 2011.
Fluxes of 3.4 Tg/a for Central America and 3.7 Tg/a for South America were calculated. The detection limits of ASTER were also explored. The results proved interesting: plumes from many of the high-emitting volcanoes, such as Villarrica, Chile, were not detected by ASTER.

Relevance: 100.00%

Abstract:

Civil infrastructure provides essential services for the development of both society and economy. It is very important to manage these systems efficiently to ensure sound performance. However, there are challenges in extracting information from available data, which necessitates the establishment of methodologies and frameworks to assist stakeholders in the decision-making process. This research proposes methodologies to evaluate system performance by maximizing the use of available information, in an effort to build and maintain sustainable systems. Under the guidance of problem formulation from a holistic view proposed by Mukherjee and Muga, this research specifically investigates problem-solving methods that measure and analyze metrics to support decision making. Failures are inevitable in system management. A methodology is developed to describe the arrival pattern of failures in order to assist engineers in failure rescues and budget prioritization, especially when funding is limited. It reveals that blockage arrivals are not totally random; smaller, meaningful subsets show good random behavior. The failure rate over time is further analyzed by applying existing reliability models and non-parametric approaches. A scheme is then proposed to depict rates over the lifetime of a given facility system. Further analysis of data subsets is also performed, with a discussion of context reduction. Infrastructure condition is another important indicator of system performance. The challenges in predicting facility condition are the transition probability estimates and model sensitivity analysis. Methods are proposed to estimate transition probabilities by investigating the long-term behavior of the model and the relationship between transition rates and probabilities. To integrate heterogeneity, model sensitivity analysis is performed for the application of a non-homogeneous Markov chain model.
Scenarios are investigated by assuming transition probabilities follow a Weibull-regressed function and fall within an interval estimate. For each scenario, multiple cases are simulated using Monte Carlo simulation. Results show that variations in the outputs are sensitive to the probability regression, while for the interval estimate, the outputs have variations similar to the inputs. Life cycle cost analysis and life cycle assessment of a sewer system are performed comparing three pipe types: reinforced concrete pipe (RCP), non-reinforced concrete pipe (NRCP), and vitrified clay pipe (VCP). Life cycle cost analysis is performed for the material extraction, construction and rehabilitation phases. In the rehabilitation phase, a Markov chain model is applied to support the rehabilitation strategy. In the life cycle assessment, the Economic Input-Output Life Cycle Assessment (EIO-LCA) tools are used to estimate environmental emissions for all three phases. Emissions are then compared quantitatively among alternatives to support decision making.
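The condition-prediction core, propagating a condition-state distribution through a sequence of (possibly year-specific) transition matrices as in a non-homogeneous Markov chain, can be sketched as follows; the 3-state deterioration matrix is hypothetical.

```python
# Markov chain condition prediction sketch. The transition matrix is a
# hypothetical good -> fair -> poor deterioration model, not study data.

def step(dist, matrix):
    """One transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(matrix)
    return [sum(dist[i] * matrix[i][j] for i in range(n)) for j in range(n)]

def predict(dist, matrices):
    """Apply a (possibly year-varying) sequence of transition matrices."""
    for m in matrices:
        dist = step(dist, m)
    return dist

P = [[0.8, 0.2, 0.0],     # good stays good 80% of the time
     [0.0, 0.7, 0.3],     # fair deteriorates to poor 30% of the time
     [0.0, 0.0, 1.0]]     # poor is absorbing until rehabilitation
dist10 = predict([1.0, 0.0, 0.0], [P] * 10)   # condition mix after 10 years
```

A non-homogeneous chain simply passes a different matrix per year; a Monte Carlo run would redraw the matrix entries (e.g., from the Weibull regression or interval estimate) before each `predict` call.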

Relevance: 100.00%

Abstract:

Disturbances in power systems may lead to electromagnetic transient oscillations due to a mismatch of mechanical input power and electrical output power. Out-of-step conditions in power systems are common after disturbances in which the continuing oscillations do not damp out and the system becomes unstable. Existing out-of-step detection methods are system-specific, as extensive off-line studies are required for the setting of relays. Most existing algorithms also require network reduction techniques to be applied in multi-machine power systems. To overcome these issues, this research applies Phasor Measurement Unit (PMU) data and Zubov’s approximation stability boundary method, a modification of Lyapunov’s direct method, to develop a novel out-of-step detection algorithm. The proposed out-of-step detection algorithm is tested on a Single Machine Infinite Bus system and the IEEE 3-machine 9-bus and IEEE 10-machine 39-bus systems. Simulation results show that the proposed algorithm is capable of detecting out-of-step conditions in multi-machine power systems without using network reduction techniques, and a comparative study with an existing blinder method demonstrates that the decision times are faster. The simulation case studies also demonstrate that the proposed algorithm does not depend on power system parameters, so it avoids the extensive off-line system studies needed by other algorithms.
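For intuition only, the toy single-machine-infinite-bus simulation below integrates the classical swing equation and shows how a sustained fault drives the rotor angle past the stability boundary, which is the out-of-step condition the algorithm detects. It is not the proposed Zubov-based method, and all per-unit parameters are hypothetical.

```python
# Toy SMIB swing simulation illustrating the out-of-step phenomenon.
# NOT the Zubov/PMU algorithm; parameters (Pm, Pmax, H) are hypothetical.
import math

def swing(pm, pmax, h, t_clear, dt=0.001, t_end=2.0, f0=60.0):
    """Semi-implicit Euler integration of d2(delta)/dt2 = (pi*f0/H)*(Pm - Pe)."""
    delta, omega = math.asin(pm / pmax), 0.0   # start at steady-state angle
    t = 0.0
    while t < t_end:
        pe = 0.0 if t < t_clear else pmax * math.sin(delta)  # Pe = 0 during fault
        omega += dt * math.pi * f0 / h * (pm - pe)
        delta += dt * omega
        t += dt
    return delta

# A slowly cleared fault loses synchronism; a fast-cleared one stays stable.
unstable = swing(pm=0.8, pmax=1.5, h=5.0, t_clear=0.5)
stable = swing(pm=0.8, pmax=1.5, h=5.0, t_clear=0.05)
```

A blinder scheme watches trajectories like these cross fixed impedance/angle thresholds; the PMU-based method in the abstract instead tests them against an approximated stability boundary, which is what makes it less dependent on system-specific settings.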