1000 results for Michigan Tech
Abstract:
Writing centers work with writers; traditionally, services have been focused on undergraduates taking composition classes. More recently, centers have started to attract a wider client base, including students taking labs that require writing, graduate students, and ESL students learning the conventions of U.S. communication. There are very few centers, however, that identify themselves as open to working with all members of the campus community. Michigan Technological University has one such center. In the Michigan Tech writing center, doors are open to “all students, faculty and staff.” While graduate students, postdocs, and professors preparing articles for publication have used the center, in the summer of 2008, for the first time in the collective memory of the center, UAW staff members requested center appointments. These working-class employees were in the process of filling out a work-related document, the UAW Position Audit, an approximately seven-page form. This form was their one avenue for requesting a review of the job they were doing; the review was the first step in requesting a raise in job level and pay. This study grew out of the realization that differing implicit literacy expectations between working-class United Auto Workers (UAW) staff and professional-class staff were complicating the filling out and filing of the position audit form. Professional-class supervisors had designed the form as a measure of fairness, in that each UAW employee on campus was responding to the same set of questions about their work. However, the implicit literacy expectations of supervisors were different from those of many of the employees who were to fill out the form. As a result, questions that were meant to be straightforward to answer were, in the eyes of the employees filling out the form, complex.
Before coming to the writing center, UAW staff had spent months writing out responses to the form; they expressed concerns that their responses still would not meet audience expectations. These writers recognized that they did not yet know exactly what the audience was expecting. The results of this study include a framework for planning writing center sessions that facilitate the acquisition of literacy practices which are new to the user. One important realization from this dissertation is that the social nature of literacy must be kept in the forefront, both when planning sessions and when educating tutors to lead these sessions. Literacy scholars such as James Paul Gee, Brian Street, and Shirley Brice Heath are used to show that a person can only know those literacy practices that they have previously acquired. In order to acquire new literacy practices, a person must have social opportunities for hands-on practice and mentoring from someone with experience. The writing center can adapt theory and practices from this dissertation to facilitate sessions for a range of writers wishing to learn “new” literacy practices. This study also calls for specific changes to writing center tutor education.
Abstract:
The patterning of photoactive purple membrane (PM) films onto electronic substrates to create a biologically based light detection device was investigated. This research is part of a larger collaborative effort to develop a miniaturized toxin detection platform. This platform will utilize PM films containing the photoactive protein bacteriorhodopsin to convert light energy to electrical energy. Following an effort to pattern PM films using focused ion beam machining, the photolithography-based bacteriorhodopsin patterning technique (PBBPT) was developed. This technique utilizes conventional photolithography techniques to pattern oriented PM films onto flat substrates. After the basic patterning process was developed, studies were conducted that confirmed the photoelectric functionality of the PM films after patterning. Several process variables were studied and optimized in order to increase the pattern quality of the PM films. Optical microscopy, scanning electron microscopy, and interferometric microscopy were used to evaluate the PM films produced by the patterning technique. Patterned PM films with lateral dimensions of 15 μm have been demonstrated using this technique. Unlike other patterning techniques, the PBBPT uses standard photolithographic processes that make its integration with conventional semiconductor fabrication feasible. The final effort of this research involved integrating PM films patterned using the PBBPT with PMOS transistors. An indirect integration of PM films with PMOS transistors was successfully demonstrated. This indirect integration used the voltage produced by a patterned PM film under light exposure to modulate the gate of a PMOS transistor, activating the transistor. Following this success, a study was conducted to investigate how this PM-based light detection system responded to variations in the light intensity supplied to the PM film.
This work provides a successful proof of concept for a portion of the toxin detection platform currently under development.
Abstract:
Transformers are very important elements of any power system. Unfortunately, they are subjected to through-faults and abnormal operating conditions which can affect not only the transformer itself but also other equipment connected to the transformer. Thus, it is essential to provide sufficient protection for transformers as well as the best possible selectivity and sensitivity of the protection. Nowadays, microprocessor-based relays are widely used to protect power equipment. Current differential and voltage protection strategies are used in transformer protection applications and provide fast and sensitive multi-level protection and monitoring. The elements responsible for detecting turn-to-turn and turn-to-ground faults are the negative-sequence percentage differential element and the restricted earth-fault (REF) element, respectively. During severe internal faults, current transformers can saturate and slow down the speed of relay operation, which affects the degree of equipment damage. The scope of this work is to develop a modeling methodology to perform simulations and laboratory tests for internal faults such as turn-to-turn and turn-to-ground for two step-down power transformers with capacity ratings of 11.2 MVA and 290 MVA. The simulated current waveforms are injected into a microprocessor relay to check its sensitivity to these internal faults. Saturation of current transformers is also studied in this work. All simulations are performed with the Alternative Transients Program (ATP) utilizing the internal fault model for three-phase two-winding transformers. The tested microprocessor relay is the SEL-487E current differential and voltage protection relay. The results showed that the ATP internal fault model can be used for testing microprocessor relays for any percentage of turns involved in an internal fault.
An interesting observation from the experiments was that the SEL-487E relay is more sensitive to turn-to-turn faults than advertised for the transformers studied. The sensitivity of the restricted earth-fault element was confirmed. CT saturation cases showed that low-accuracy CTs can be saturated by a high percentage of turn-to-turn faults, where the CT burden will affect the extent of saturation. Recommendations for future work include more accurate simulation of internal faults, transformer energization inrush, and other scenarios involving core saturation, using the newest version of the internal fault model. The SEL-487E relay or other microprocessor relays should again be tested for performance. Also, application of a grounding bank to the delta-connected side of a transformer will increase the zone of protection, and relay performance can be tested for internal ground faults on both sides of a transformer.
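The percentage differential principle that underlies relays of this type can be illustrated with a short sketch. This is a generic textbook differential element, not the SEL-487E's actual algorithm, and the slope and pickup settings are illustrative assumptions:

```python
# Generic percentage differential element (illustrative sketch only; the
# slope and pickup values are assumptions, not SEL-487E settings).
def percentage_differential(i_winding1, i_winding2, slope=0.3, pickup=0.1):
    """Return True if the element would operate (trip).

    i_winding1, i_winding2: per-unit phasor currents (complex), both
    referenced into the protected zone so they sum to ~0 for load and
    through-fault conditions.
    """
    i_op = abs(i_winding1 + i_winding2)              # operate (differential) current
    i_res = (abs(i_winding1) + abs(i_winding2)) / 2  # restraint current
    return i_op > max(pickup, slope * i_res)

# Through-fault: currents cancel, so the element restrains.
print(percentage_differential(1.0 + 0j, -1.0 + 0j))  # False
# Internal fault: currents add, so the element operates.
print(percentage_differential(1.0 + 0j, 0.8 + 0j))   # True
```

Applied to negative-sequence quantities, the same comparison yields the turn-to-turn fault sensitivity discussed above, since a turn-to-turn fault produces negative-sequence current that does not balance across the windings.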
Abstract:
A Reynolds-Stress Turbulence Model has been incorporated with success into the KIVA code, a computational fluid dynamics hydrocode for three-dimensional simulation of fluid flow in engines. The newly implemented Reynolds-stress turbulence model greatly improves the robustness of KIVA, which in its original version has only eddy-viscosity turbulence models. Validation of the Reynolds-stress turbulence model is accomplished by conducting pipe-flow and channel-flow simulations, and comparing the computed results with experimental and direct numerical simulation data. Flows in engines of various geometry and operating conditions are calculated using the model, to study the complex flow fields as well as confirm the model’s validity. Results show that the Reynolds-stress turbulence model is able to resolve flow details such as swirl and recirculation bubbles. The model is proven to be an appropriate choice for engine simulations, with consistency and robustness, while requiring relatively low computational effort.
Abstract:
Due to their high thermal efficiency, diesel engines have excellent fuel economy and have been widely used as a power source for many vehicles. Diesel engines emit less greenhouse gas (carbon dioxide) than gasoline engines. However, diesel engines emit large amounts of particulate matter (PM), which can imperil human health. The best way to reduce the particulate matter is by using the Diesel Particulate Filter (DPF) system, which consists of a wall-flow monolith that traps particulates; the DPF can be periodically regenerated to remove the collected particulates. Estimating the PM mass accumulated in the DPF and the total pressure drop across the filter is very important in order to determine when to carry out active regeneration of the DPF. In this project, by developing a filtration model and a pressure drop model, we can estimate the PM mass and the total pressure drop; these two models can then be linked with a previously developed regeneration model to predict when to regenerate the filter. The results of this project were: 1. A filtration model was reproduced and the filtration process simulated. By studying deep-bed filtration and cake filtration, the stages and quantity of mass accumulated in the DPF can be estimated. It was found that the filtration efficiency increases faster during deep-bed filtration than during cake filtration. A “unit collector” theory was used in our filtration model, which explains the mechanism of filtration well. 2. A parametric study was performed on the pressure drop model for changes in engine exhaust flow rate, deposit layer thickness, and inlet temperature. It was found that five primary variables impact the pressure drop in the DPF: the temperature gradient along the channel, deposit layer thickness, deposit layer permeability, wall thickness, and wall permeability. 3. The filtration model and the pressure drop model were linked with the regeneration model to determine when to carry out regeneration of the DPF. It was found that regeneration should be initiated when the cake layer is at a certain thickness, since a cake layer with either too large or too small an amount of particulates will need more thermal energy to reach a higher regeneration efficiency. 4. Diesel particulate trap regeneration strategies were formulated for real-world driving conditions to find the most desirable conditions for DPF regeneration. It was found that regeneration should be initiated when the vehicle's speed is high and during a period in which the vehicle makes no stops. Moreover, the regeneration duration is about 120 seconds and the inlet temperature for regeneration is 710 K.
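The Darcy-law contributions to the filter pressure drop discussed above can be sketched briefly. This is a minimal illustration, not the project's full model (the channel-flow and temperature-gradient terms are omitted), and every numeric value below is an assumed placeholder:

```python
# Hedged sketch of Darcy-law pressure drop contributions in a wall-flow DPF.
# Parameter values are illustrative placeholders, not data from this project.
def darcy_dp(mu, u_wall, thickness, permeability):
    """Pressure drop (Pa) across a porous layer: dP = mu * u * w / k."""
    return mu * u_wall * thickness / permeability

mu = 3.2e-5    # exhaust dynamic viscosity near 700 K, Pa*s (assumed)
u_wall = 0.02  # superficial through-wall velocity, m/s (assumed)

dp_cake = darcy_dp(mu, u_wall, 100e-6, 1e-14)  # soot cake layer (assumed w, k)
dp_wall = darcy_dp(mu, u_wall, 400e-6, 5e-13)  # substrate wall (assumed w, k)
dp_total = dp_cake + dp_wall
```

With these assumed values the low-permeability soot cake dominates the total, which is consistent with the abstract's finding that deposit layer thickness and permeability are primary variables.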
Abstract:
The electrical power source is a critical component of the scoping level study, as the source affects both the project economics and the timeline. This paper proposes a systematic approach to selecting an electrical power source for a new mine. Orvana Minerals' Copperwood project is used as a case study. The Copperwood results show that the proposed scoping level approach is consistent with the subsequent, much more detailed feasibility study.
Abstract:
Onondaga Lake has received the municipal effluent and industrial waste from the city of Syracuse for more than a century. Historically, 75 metric tons of mercury were discharged to the lake by chlor-alkali facilities. These legacy deposits of mercury now exist primarily in the lake sediments. Under anoxic conditions, methylmercury is produced in the sediments and can be released to the overlying water. Natural sedimentation processes are continuously burying the mercury deeper into the sediments. Eventually, the mercury will be buried to a depth where it no longer has an impact on the overlying water. In the interim, electron acceptor amendment systems can be installed to retard these chemical releases while the lake naturally recovers. Electron acceptor amendment systems are designed to meet the sediment oxygen demand and maintain manageable hypolimnion oxygen concentrations. Historically, these systems have been underdesigned, resulting in failure. This stems from a mischaracterization of the sediment oxygen demand. Turbulence at the sediment-water interface has been shown to impact sediment oxygen demand. The turbulence introduced by the electron acceptor amendment system can thus increase the sediment oxygen demand, resulting in system failure if turbulence is not factored into the design. Sediment cores were gathered and operated to steady state under several well-characterized turbulence conditions. The relationship between sediment oxygen/nitrate demand and turbulence was then quantified and plotted. A maximum demand was exhibited at or above a fluid velocity of 2.0 mm·s⁻¹. Below this velocity, demand decreased rapidly with decreasing fluid velocity as zero velocity was approached. Similar relationships were displayed by both oxygen and nitrate cores.
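The measured demand-velocity relationship can be captured in a simple piecewise function: maximum demand at or above 2.0 mm/s, falling toward zero at low velocity. The linear ramp below the plateau is an assumed functional form for illustration; the study reports only the qualitative shape of the decline:

```python
# Illustrative piecewise model of the demand-velocity relationship described
# above. The plateau velocity comes from the text; the linear ramp below it
# is an assumption for the sketch, not a fitted result.
V_PLATEAU = 2.0  # mm/s, velocity at or above which maximum demand was observed

def sediment_demand(velocity_mm_s, max_demand):
    """Return sediment oxygen/nitrate demand for a given near-bed velocity."""
    if velocity_mm_s >= V_PLATEAU:
        return max_demand
    return max_demand * velocity_mm_s / V_PLATEAU  # assumed linear ramp to zero
```

A curve of this shape is what makes the design point matter: sizing an amendment system from a quiescent-core demand measurement underestimates the demand the system itself induces.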
Abstract:
Electrospinning uses electrostatic forces to create nanofibers far smaller than those produced by conventional fiber spinning processes. Chitosan nanofibers were created, and techniques to control fiber diameter were developed. However, the adsorption of porcine parvovirus (PPV) was low. PPV is a small, nonenveloped virus that is difficult to remove due to its size, 18-26 nm in diameter, and its chemical stability. To improve virus adsorption, we functionalized the nanofibers with a quaternized amine, forming N-[(2-hydroxy-3-trimethylammonium) propyl] chitosan chloride (HTCC). This was blended with additives to increase the ability to form HTCC nanofibers. The additives changed the viscosity and conductivity of the electrospinning solution. We have successfully synthesized and functionalized HTCC nanofibers that adsorb PPV. HTCC blended with graphene can remove a minimum of 99% of the PPV present in solution.
Abstract:
Molecules are the smallest possible elements for electronic devices, with active elements for such devices typically a few Angstroms in footprint area. Owing to the possibility of producing ultrahigh density devices, tremendous effort has been invested in producing electronic junctions using various types of molecules. The major issues for molecular electronics include (1) developing an effective scheme to connect molecules with present micro- and nano-technology, (2) increasing the lifetime and stability of the devices, and (3) increasing their performance in comparison to state-of-the-art devices. In this work, we attempt to use carbon nanotubes (CNTs) as the interconnecting nanoelectrodes between molecules and microelectrodes. The ultimate goal is to use two individual CNTs to sandwich molecules in a cross-bar configuration while having these CNTs connected with microelectrodes such that the junction displays the electronic character of the molecule chosen. We have successfully developed an effective scheme to connect molecules with CNTs, which is scalable to arrays of molecular electronic devices. To realize this far-reaching goal, the following technical topics have been investigated. 1. Synthesis of multi-walled carbon nanotubes (MWCNTs) by thermal chemical vapor deposition (T-CVD) and plasma-enhanced chemical vapor deposition (PECVD) techniques (Chapter 3). We have evaluated the potential use of tubular and bamboo-like MWCNTs grown by T-CVD and PECVD in terms of their structural properties. 2. Horizontal dispersion of MWCNTs with and without surfactants, and the integration of MWCNTs to microelectrodes using deposition by dielectrophoresis (DEP) (Chapter 4). We have systematically studied the use of surfactant molecules to disperse and horizontally align MWCNTs on substrates. In addition, DEP is shown to produce impurity-free placement of MWCNTs, forming connections between microelectrodes.
We demonstrate the deposition density is tunable by both AC field strength and AC field frequency. 3. Etching of MWCNTs for the impurity-free nanoelectrodes (Chapter 5). We show that the residual Ni catalyst on MWCNTs can be removed by acid etching; the tip removal and collapsing of tubes into pyramids enhances the stability of field emission from the tube arrays. The acid-etching process can be used to functionalize the MWCNTs, which was used to make our initial CNT-nanoelectrode glucose sensors. Finally, lessons learned trying to perform spectroscopic analysis of the functionalized MWCNTs were vital for designing our final devices. 4. Molecular junction design and electrochemical synthesis of biphenyl molecules on carbon microelectrodes for all-carbon molecular devices (Chapter 6). Utilizing the experience gained on the work done so far, our final device design is described. We demonstrate the capability of preparing patterned glassy carbon films to serve as the bottom electrode in the new geometry. However, the molecular switching behavior of biphenyl was not observed by scanning tunneling microscopy (STM), mercury drop or fabricated glassy carbon/biphenyl/MWCNT junctions. Either the density of these molecules is not optimum for effective integration of devices using MWCNTs as the nanoelectrodes, or an electroactive contaminant was reduced instead of the ionic biphenyl species. 5. Self-assembly of octadecanethiol (ODT) molecules on gold microelectrodes for functional molecular devices (Chapter 7). We have realized an effective scheme to produce Au/ODT/MWCNT junctions by spanning MWCNTs across ODT-functionalized microelectrodes. A percentage of the resulting junctions retain the expected character of an ODT monolayer. While the process is not yet optimized, our successful junctions show that molecular electronic devices can be fabricated using simple processes such as photolithography, self-assembled monolayers and dielectrophoresis.
Abstract:
Phosphomolybdic acid (H3PMo12O40) along with niobium-, pyridine-, and niobium/pyridine-exchanged phosphomolybdic acid catalysts were prepared. Ammonia adsorption microcalorimetry and methanol oxidation studies were carried out to investigate the acid site strength and acid/base/redox properties of each catalyst. The addition of niobium, pyridine, or both increased the ammonia heat of adsorption and the total uptake. The catalyst with both niobium and pyridine demonstrated the largest number of strong sites. For the parent H3PMo12O40 catalyst, methanol oxidation favors the redox product. Incorporation of niobium results in similar selectivity to redox products but also results in no catalyst deactivation. Incorporation of pyridine instead changes the selectivity to favor the acidic product. Finally, the inclusion of both niobium and pyridine results in strong selectivity to the acidic product while also showing no catalyst deactivation. Thus the presence of pyridine appears to enhance the acid properties of the catalyst while niobium appears to stabilize the active site.
Abstract:
Autonomous system applications are typically limited by the power supply operational lifetime when battery replacement is difficult or costly. A trade-off between battery size and battery life is usually calculated to determine the device capability and lifespan. As a result, energy harvesting research has gained importance as society searches for alternative energy sources for power generation. For instance, energy harvesting has been a proven alternative for powering solar-based calculators and self-winding wristwatches. Thus, the use of energy harvesting technology can make it possible to assist or replace batteries for portable, wearable, or surgically-implantable autonomous systems. Applications such as cardiac pacemakers or electrical stimulation can benefit from this approach since the number of surgeries for battery replacement can be reduced or eliminated. Energy scavenging from body motion has been investigated to evaluate the feasibility of powering wearable or implantable systems. Energy from walking has previously been extracted using generators placed on shoes, backpacks, and knee braces while producing power levels ranging from milliwatts to watts. The research presented in this paper examines the available power from walking and running at several body locations. The ankle, knee, hip, chest, wrist, elbow, upper arm, side of the head, and back of the head were the chosen target locations. Joints were preferred since they experience the most drastic acceleration changes. For this, a motor-driven treadmill test was performed on 11 healthy individuals at several walking (1-4 mph) and running (2-5 mph) speeds. The treadmill test provided the acceleration magnitudes from the listed body locations. Power can be estimated from the treadmill evaluation since it is proportional to the acceleration and frequency of occurrence.
Available power output from walking was determined to be greater than 1 mW/cm³ for most body locations, and over 10 mW/cm³ at the foot and ankle locations. Available power from running was found to be almost 10 times higher than that from walking. Most energy harvester topologies use linear generator approaches that are well suited to fixed-frequency vibrations with sub-millimeter amplitude oscillations. In contrast, body motion is characterized by a wide frequency spectrum and larger amplitudes. A generator prototype based on self-winding wristwatches is deemed appropriate for harvesting body motion since it is not limited to operating at fixed frequencies or restricted displacements. Electromagnetic generation is typically favored because of its slightly higher power output per unit volume. Accordingly, a nonharmonic oscillating rotational energy scavenger prototype is proposed to harness body motion. The electromagnetic generator follows the approach of small wind turbine designs that overcome the lack of a gearbox by using a larger number of coil and magnet arrangements. The device presented here is composed of a rotor with multiple-pole permanent magnets having an eccentric weight and a stator composed of stacked planar coils. The rotor oscillations induce a voltage on the planar coil due to the eccentric mass unbalance produced by body motion. A meso-scale prototype device was then built and evaluated for energy generation. The meso-scale casing and rotor were constructed of PMMA using a CNC mill. Commercially available discrete magnets were encased in a 25 mm rotor. Commercial copper-coated polyimide film was employed to manufacture the planar coils using MEMS fabrication processes. Jewel bearings were used to finalize the arrangement. The prototypes were also tested at the listed body locations.
A meso-scale generator with a 2-layer coil was capable of extracting up to 234 µW of power at the ankle while walking at 3 mph with a 2 cm³ prototype, for a power density of 117 µW/cm³. This dissertation presents the analysis of available power from walking and running at different speeds and the development of an unobtrusive miniature energy harvesting generator for body motion. The power generation results indicate the possibility of powering devices by extracting energy from body motion.
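As a rough sketch of the power-estimation step, an acceleration record from one body location can be reduced to a proportional figure of merit: power scales with the variance of the zero-mean acceleration, which reflects both the amplitude of the excursions and how often they occur. The constant k stands in for harvester-specific factors (proof mass, coupling, volume) and is a placeholder, not a value from the study:

```python
import math

# Hedged sketch of estimating relative available power from a treadmill
# accelerometer trace. The proportionality constant k is a placeholder for
# harvester-specific factors; the result is in arbitrary units.
def available_power(accel_samples, k=1.0):
    """Relative available power from an acceleration trace (m/s^2)."""
    n = len(accel_samples)
    mean = sum(accel_samples) / n       # remove the static (gravity) component
    # Variance of the zero-mean acceleration captures both amplitude and
    # frequency of occurrence of large excursions over the record.
    variance = sum((a - mean) ** 2 for a in accel_samples) / n
    return k * variance
```

Comparing this figure across the listed body locations would reproduce the kind of ranking reported above, with the ankle and foot showing the largest values during walking.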
Abstract:
Metals price risk management is a key issue related to financial risk in metal markets because of the uncertainty of commodity price fluctuations, exchange rates, and interest rate changes, and the huge price risk to either metals' producers or consumers. Thus, it is taken into account by all participants in metal markets, including metals' producers, consumers, merchants, banks, investment funds, speculators, and traders. Managing price risk provides stable income for both metals' producers and consumers, so it increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps, and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years. Nowadays, they are widely used by financial institutions, corporations, professional investors, and individuals. This project is focused on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, benefits and costs of risk management, and the risks and rewards of positions in the derivative markets. The second part considers valuations of commodity derivatives.
In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper because it is important to understand how Asian options are valued and to compare theoretical values of the options with their market observed values. Predicting future trends of copper prices is important and would be essential to managing market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

Q_t^D = e^{-5.0485} \cdot P_{t-1}^{-0.1868} \cdot GDP_t^{1.7151} \cdot e^{0.0158 \cdot IP_t}

Q_t^S = e^{-3.0785} \cdot P_{t-1}^{0.5960} \cdot T_t^{0.1408} \cdot P_{OIL(t)}^{-0.1559} \cdot USDI_t^{1.2432} \cdot LIBOR_{t-6}^{-0.0561}

Q_t^D = Q_t^S

with the reduced-form price equation obtained by equating supply and demand:

P_{t-1}^{CU} = e^{-2.5165} \cdot GDP_t^{2.1910} \cdot e^{0.0202 \cdot IP_t} \cdot T_t^{-0.1799} \cdot P_{OIL(t)}^{0.1991} \cdot USDI_t^{-1.5881} \cdot LIBOR_{t-6}^{0.0717}

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity. In addition, industrial production should be considered here, so global industrial production growth, noted as IP_t, is included in the model. T_t is the time variable, which is a useful proxy for technological change. A proxy variable for the cost of energy in producing copper is the price of oil at time t, noted as P_{OIL(t)}. USDI_t is the U.S.
dollar index variable at time t, which is an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the 6-month lagged one-year London Interbank Offered Rate of interest. Although the model can be applied to other base metals' industries, omitted exogenous variables such as the price of a substitute, or a combined variable related to the prices of substitutes, have not been considered in this study. Based on this econometric model and using a Monte-Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will be greater than a specific strike price of an option are determined. The final part evaluates risk management strategies, including options strategies, metal swaps, and simple options, in relation to the simulation results. The basic options strategies, such as bull spreads, bear spreads, and butterfly spreads, which are created by using both call and put options in 2006 and 2007, are evaluated. Consequently, each risk management strategy in 2006 and 2007 is analyzed based on the day's data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
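The two simulation uses described above can be sketched together in a minimal Monte-Carlo routine: estimating the probability that the monthly average price exceeds a strike, and valuing an arithmetic-average Asian call. The sketch assumes a plain geometric Brownian motion for the monthly price rather than the structural econometric model, and all inputs are illustrative:

```python
import math
import random

# Hedged Monte-Carlo sketch: GBM price paths stand in for the structural
# model's price forecasts; s0, r, sigma, and strike are illustrative inputs.
def asian_mc(s0, strike, r, sigma, months, n_paths=20000, seed=1):
    """Return (P(average price > strike), arithmetic Asian call value)."""
    random.seed(seed)
    dt = 1.0 / 12.0
    payoffs, exceed = [], 0
    for _ in range(n_paths):
        s, total = s0, 0.0
        for _ in range(months):
            z = random.gauss(0.0, 1.0)
            s *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
            total += s
        avg = total / months
        exceed += avg > strike
        payoffs.append(max(avg - strike, 0.0))
    t = months * dt
    value = math.exp(-r * t) * sum(payoffs) / n_paths
    return exceed / n_paths, value
```

The same machinery, driven by price draws from the fitted structural model instead of GBM, yields the exceedance probabilities used to evaluate the spread strategies.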
Abstract:
From the customer satisfaction point of view, the sound quality of any product has become one of the important factors these days. The primary objective of this research is to determine the factors which affect the acceptability of impulse noise. Though the analysis is based on a sample impulse sound file from a commercial printer, the results can be applied to other, similar impulsive noise. It is assumed that impulsive noise can be tuned to meet the acceptable criteria. Thus it is necessary to find the most significant factors, which can then be controlled physically. This analysis is based on a single impulse. A sample impulsive sound file is tweaked for different amplitudes, background noise, attack time, release time, and spectral content. A two-level factorial design of experiments (DOE) is applied to study the significant effects and interactions. For each impulse file modified as per the DOE, the magnitude of perceived annoyance is calculated from the objective metric developed recently at Michigan Technological University. This metric is based on psychoacoustic criteria such as loudness, sharpness, roughness, and loudness-based impulsiveness. Software called ‘Artemis V11.2’, developed by HEAD Acoustics, is used to calculate these psychoacoustic terms. As a result of the two-level factorial analyses, a new objective model of perceived annoyance is developed in terms of the above-mentioned physical parameters: amplitude, background noise, impulse attack time, impulse release time, and spectral content. The effects of the significant individual factors as well as two-level interactions are also studied. The results show that all five factors significantly affect the annoyance level of an impulsive sound. Thus the annoyance level can be brought within the criteria by optimizing the factor levels. Additionally, an analysis is done to study the effect of these five significant parameters on the individual psychoacoustic metrics.
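The two-level factorial design described above can be generated mechanically. A sketch with the five factors named in the abstract, using coded -1/+1 levels (the physical low/high settings are not given in the text):

```python
from itertools import product

# Factor names follow the abstract; levels are coded -1 (low) / +1 (high)
# because the physical settings are not specified in the text.
FACTORS = ["amplitude", "background_noise", "attack_time",
           "release_time", "spectral_content"]

def full_factorial(factors):
    """Return all 2**k runs, each a dict mapping factor -> coded level."""
    return [dict(zip(factors, levels))
            for levels in product((-1, +1), repeat=len(factors))]

runs = full_factorial(FACTORS)  # 32 runs for 5 factors
```

Each of the 32 runs corresponds to one modified impulse file; regressing the computed annoyance on the coded levels and their products yields the main effects and two-factor interactions the study reports.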
Abstract:
Since the introduction of the rope-pump in Nicaragua in the 1990s, the dependence on wells in rural areas has grown steadily. However, little or no attention is paid to rope-pump well performance after installation. Due to financial constraints, groundwater resource monitoring using conventional testing methods is too costly and out of reach for rural municipalities. Nonetheless, there is widespread agreement that without a way to quantify changes in well performance over time, prioritizing regulatory actions is impossible. A manual pumping test method is presented which, at a fraction of the cost of a conventional pumping test, measures the specific capacity of rope-pump wells. The method requires only slight modifications to the well and reasonable limitations on well usage prior to testing. The pumping test was performed a minimum of 33 times in three wells over an eight-month period in a small rural community in Chontales, Nicaragua. Data were used to measure seasonal variations in specific well capacity for three rope-pump wells completed in fractured crystalline basalt. Data collected from the tests were analyzed using four methods (equilibrium approximation, time-drawdown during pumping, time-drawdown during recovery, and time-drawdown during late-time recovery) to determine the best data-analysis method. One conventional pumping test was performed to aid in evaluating the manual method. The equilibrium approximation can be performed in the field with only a calculator and is the most technologically appropriate method for analyzing data. Results from this method overestimate specific capacity by 41% when compared to results from the conventional pumping test. The other analysis methods, requiring more sophisticated tools and higher-level interpretation skills, yielded results that agree to within 14% (pumping phase), 31% (recovery phase), and 133% (late-time recovery) of the conventional test productivity value.
The wide variability in accuracy results principally from difficulties in achieving an equilibrated pumping level and from casing-storage effects in the pumping/recovery data. Decreases in well productivity resulting from naturally occurring seasonal water-table drops varied from insignificant in two wells to 80% in the third. Despite practical and theoretical limitations of the method, the collected data may be useful for municipal institutions to track changes in well behavior, eventually yielding a database for planning future groundwater development projects. Furthermore, the data could improve well users’ ability to self-regulate well usage without expensive aquifer characterization.
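The field-ready "equilibrium approximation" mentioned above reduces to one division: specific capacity is the sustained pumping rate over the drawdown once the pumping water level stabilizes. A minimal sketch, with all readings being hypothetical examples rather than study data:

```python
def specific_capacity(pump_rate_lpm, static_level_m, equilibrated_level_m):
    """Specific capacity in L/min per metre of drawdown.

    Levels are depths to water measured from a fixed reference
    (e.g. top of casing), so the pumping level is the larger number.
    """
    drawdown = equilibrated_level_m - static_level_m
    if drawdown <= 0:
        raise ValueError("equilibrated pumping level must be below static level")
    return pump_rate_lpm / drawdown

# Hypothetical example: 20 L/min sustained at the rope pump, static water
# level at 6.0 m depth, level stabilizing at 8.5 m during pumping.
sc = specific_capacity(20.0, 6.0, 8.5)
print(f"specific capacity = {sc:.1f} L/min per m of drawdown")
```

Because this needs nothing beyond a tape, a bucket, a watch, and a calculator, it matches the abstract's point about technological appropriateness, even though it overestimated capacity relative to the conventional test.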
Resumo:
Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border-security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction methods to compensate for turbulence effects. While many image reconstruction methods have been proposed, their suitability for use in man-portable embedded systems is uncertain. To be effective, these systems must operate over significant variations in turbulence conditions while subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with a minimum level of computational complexity. Speckle imaging methods have recently been proposed as well suited for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. Design parameters are selected by parametric evaluation of system performance as factors external to the system are varied. The precise control necessary for such an evaluation is made possible using image sets of turbulence-degraded imagery developed using a novel technique for simulating anisoplanatic image formation over long horizontal paths. System performance is statistically evaluated over multiple reconstructions using the Mean Squared Error (MSE) to assess reconstruction quality. In addition to the more general design parameters, the relative performance of the bispectrum and Knox-Thompson phase recovery methods is also compared. 
As an outcome of this work, it can be concluded that speckle-imaging techniques are robust to the variations in turbulence conditions and user-controlled parameters expected when operating during the day over long horizontal paths. Speckle-imaging systems that incorporate 15 or more image frames and 4 estimates of the object phase per reconstruction provide up to a 45% reduction in MSE and a 68% reduction in its deviation. In addition, the Knox-Thompson phase recovery method is shown to produce images in half the time required by the bispectrum, while the quality of images reconstructed by the two methods is found to be nearly identical. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
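The MSE-based evaluation used throughout the study can be sketched simply: score each reconstruction against a reference frame, then summarize the mean and spread over many reconstructions. The tiny arrays below are hypothetical stand-ins for real imagery, not data from the study.

```python
def mse(reference, reconstruction):
    """Mean squared error between two equal-sized images (lists of rows)."""
    total, n = 0.0, 0
    for ref_row, rec_row in zip(reference, reconstruction):
        for r, x in zip(ref_row, rec_row):
            total += (r - x) ** 2
            n += 1
    return total / n

# Hypothetical 2x2 "truth" frame and two reconstructions of it.
reference = [[0.0, 1.0], [1.0, 0.0]]
reconstructions = [
    [[0.1, 0.9], [1.0, 0.0]],  # close to the reference
    [[0.3, 0.7], [0.8, 0.2]],  # further from the reference
]

scores = [mse(reference, rec) for rec in reconstructions]
mean_mse = sum(scores) / len(scores)
print(f"per-reconstruction MSE: {scores}")
print(f"mean MSE: {mean_mse:.4f}")
```

In field scenarios no reference frame exists, which is exactly why the abstract's blind (no-reference) quality metrics matter: they must rank reconstructions consistently with the MSE without access to the truth image.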