Abstract:
Due to their high thermal efficiency, diesel engines have excellent fuel economy and have been widely used as a power source for many vehicles. Diesel engines also emit less greenhouse gas (carbon dioxide) than gasoline engines. However, they emit large amounts of particulate matter (PM), which can imperil human health. The most effective way to reduce particulate matter is the Diesel Particulate Filter (DPF) system, which consists of a wall-flow monolith that traps particulates; the DPF can be periodically regenerated to remove the collected particulates. Estimating the PM mass accumulated in the DPF and the total pressure drop across the filter is very important for determining when to carry out active regeneration of the DPF. In this project, by developing a filtration model and a pressure drop model, we can estimate the PM mass and the total pressure drop; these two models can then be linked with a previously developed regeneration model to predict when to regenerate the filter. The results of this project were:
1. Reproduce a filtration model and simulate the filtration process. By studying deep-bed filtration and cake filtration, the stages and quantity of mass accumulated in the DPF can be estimated. It was found that the filtration efficiency increases faster during deep-bed filtration than during cake filtration. A "unit collector" theory was used in our filtration model, which explains the mechanism of filtration very well.
2. Perform a parametric study on the pressure drop model for changes in engine exhaust flow rate, deposit layer thickness, and inlet temperature. It was found that five primary variables impact the pressure drop in the DPF: the temperature gradient along the channel, deposit layer thickness, deposit layer permeability, wall thickness, and wall permeability.
3. Link the filtration model and the pressure drop model with the regeneration model to determine when to carry out regeneration of the DPF. It was found that regeneration should be initiated when the cake layer is at a certain thickness, since a cake layer with either too large or too small an amount of particulates will need more thermal energy to reach a high regeneration efficiency.
4. Formulate diesel particulate trap regeneration strategies for real-world driving conditions to find the most desirable conditions for DPF regeneration. It was found that regeneration should be initiated when the vehicle's speed is high and that the vehicle should not stop during regeneration. Moreover, the regeneration duration is about 120 seconds and the inlet temperature for regeneration is 710 K.
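The parametric study above identifies deposit layer thickness and permeability, wall thickness and permeability, and temperature as the primary drivers of pressure drop. A minimal sketch of how these variables combine, assuming a simple Darcy-law model for flow through the soot cake and porous wall (the function, parameter values, and Sutherland viscosity correlation are illustrative assumptions, not the thesis's actual model):

```python
def dpf_pressure_drop(Q, T, w_cake, k_cake, w_wall, k_wall, A_filter):
    """Illustrative total pressure drop [Pa] across cake and wall layers.

    Q        exhaust volumetric flow rate [m^3/s]
    T        inlet gas temperature [K]
    w_cake   deposit (soot cake) layer thickness [m]
    k_cake   deposit layer permeability [m^2]
    w_wall   substrate wall thickness [m]
    k_wall   wall permeability [m^2]
    A_filter total filtration area [m^2]
    """
    # Gas dynamic viscosity from Sutherland's law for air (assumed correlation).
    mu = 1.716e-5 * (T / 273.15) ** 1.5 * (273.15 + 110.4) / (T + 110.4)
    u = Q / A_filter                      # superficial wall velocity
    dp_cake = mu * u * w_cake / k_cake    # Darcy drop across the soot cake
    dp_wall = mu * u * w_wall / k_wall    # Darcy drop across the porous wall
    return dp_cake + dp_wall
```

Raising the inlet temperature increases the gas viscosity and hence the pressure drop, which is one route by which the temperature gradient along the channel enters the model.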
Abstract:
The electrical power source is a critical component of a scoping-level study because the source affects both the project economics and the timeline. This paper proposes a systematic approach to selecting an electrical power source for a new mine. Orvana Minerals' Copperwood project is used as a case study. The Copperwood results show that the proposed scoping-level approach is consistent with the subsequent, much more detailed feasibility study.
Abstract:
Onondaga Lake has received the municipal effluent and industrial waste of the city of Syracuse for more than a century. Historically, 75 metric tons of mercury were discharged to the lake by chlor-alkali facilities. These legacy deposits of mercury now exist primarily in the lake sediments. Under anoxic conditions, methylmercury is produced in the sediments and can be released to the overlying water. Natural sedimentation processes are continuously burying the mercury deeper into the sediments. Eventually, the mercury will be buried to a depth where it no longer affects the overlying water. In the interim, electron-acceptor amendment systems can be installed to retard these chemical releases while the lake naturally recovers. Electron-acceptor amendment systems are designed to meet the sediment oxygen demand and maintain manageable hypolimnion oxygen concentrations. Historically, these systems have been underdesigned, resulting in failure. This stems from a mischaracterization of the sediment oxygen demand. Turbulence at the sediment-water interface has been shown to affect sediment oxygen demand. The turbulence introduced by the electron-acceptor amendment system can thus increase the sediment oxygen demand, resulting in system failure if turbulence is not factored into the design. Sediment cores were collected and operated to steady state under several well-characterized turbulence conditions. The relationship between sediment oxygen/nitrate demand and turbulence was then quantified and plotted. A maximum demand was exhibited at or above a fluid velocity of 2.0 mm/s. Below this velocity, demand decreased rapidly with fluid velocity as zero velocity was approached. Similar relationships were displayed by both oxygen and nitrate cores.
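The reported demand-velocity relationship (a plateau at or above 2.0 mm/s and a rapid decrease toward zero velocity) can be sketched as a simple saturating function. The plateau velocity comes from the abstract; the linear rise and the sod_max placeholder are illustrative assumptions, not the curve fitted to the sediment-core data:

```python
def sediment_oxygen_demand(v_mm_s, sod_max=1.0, v_sat=2.0):
    """Hypothetical demand-vs-velocity curve (illustrative only).

    v_mm_s   near-bed fluid velocity [mm/s]
    sod_max  plateau demand reached at saturation (placeholder units)
    v_sat    velocity at which demand saturates, 2.0 mm/s per the abstract
    """
    if v_mm_s >= v_sat:
        return sod_max                    # maximum demand at or above 2.0 mm/s
    return sod_max * v_mm_s / v_sat       # demand falls toward zero velocity
```

A design sized at the plateau demand, rather than at a low-turbulence laboratory value, avoids the underdesign failure mode described above.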
Abstract:
Electrospinning uses electrostatic forces to create nanofibers that are far smaller than those produced by conventional fiber-spinning processes. Nanofibers made with chitosan were created, and techniques to control fiber diameter were well developed. However, the adsorption of porcine parvovirus (PPV) was low. PPV is a small, nonenveloped virus that is difficult to remove due to its size, 18-26 nm in diameter, and its chemical stability. To improve virus adsorption, we functionalized the nanofibers with a quaternized amine, forming N-[(2-hydroxy-3-trimethylammonium) propyl] chitosan chloride (HTCC). This was blended with additives to increase the ability to form HTCC nanofibers. The additives changed the viscosity and conductivity of the electrospinning solution. We have successfully synthesized and functionalized HTCC nanofibers that adsorb PPV. An HTCC blend with graphene has the ability to remove a minimum of 99% of the PPV present in solution.
Abstract:
Molecules are the smallest possible elements for electronic devices, with active elements for such devices typically a few angstroms in footprint area. Owing to the possibility of producing ultrahigh-density devices, tremendous effort has been invested in producing electronic junctions using various types of molecules. The major issues for molecular electronics include (1) developing an effective scheme to connect molecules with present micro- and nano-technology, (2) increasing the lifetime and stability of the devices, and (3) increasing their performance in comparison to state-of-the-art devices. In this work, we attempt to use carbon nanotubes (CNTs) as the interconnecting nanoelectrodes between molecules and microelectrodes. The ultimate goal is to use two individual CNTs to sandwich molecules in a cross-bar configuration while having these CNTs connected with microelectrodes such that the junction displays the electronic character of the chosen molecule. We have successfully developed an effective scheme to connect molecules with CNTs that is scalable to arrays of molecular electronic devices. To realize this far-reaching goal, the following technical topics have been investigated:
1. Synthesis of multi-walled carbon nanotubes (MWCNTs) by thermal chemical vapor deposition (T-CVD) and plasma-enhanced chemical vapor deposition (PE-CVD) techniques (Chapter 3). We have evaluated the potential use of tubular and bamboo-like MWCNTs grown by T-CVD and PE-CVD in terms of their structural properties.
2. Horizontal dispersion of MWCNTs with and without surfactants, and the integration of MWCNTs with microelectrodes using deposition by dielectrophoresis (DEP) (Chapter 4). We have systematically studied the use of surfactant molecules to disperse and horizontally align MWCNTs on substrates. In addition, DEP is shown to produce impurity-free placement of MWCNTs, forming connections between microelectrodes. We demonstrate that the deposition density is tunable by both AC field strength and AC field frequency.
3. Etching of MWCNTs for impurity-free nanoelectrodes (Chapter 5). We show that the residual Ni catalyst on MWCNTs can be removed by acid etching; the tip removal and collapsing of tubes into pyramids enhance the stability of field emission from the tube arrays. The acid-etching process can be used to functionalize the MWCNTs, which was used to make our initial CNT-nanoelectrode glucose sensors. Finally, lessons learned while trying to perform spectroscopic analysis of the functionalized MWCNTs were vital for designing our final devices.
4. Molecular junction design and electrochemical synthesis of biphenyl molecules on carbon microelectrodes for all-carbon molecular devices (Chapter 6). Utilizing the experience gained in the work done so far, our final device design is described. We demonstrate the capability of preparing patterned glassy carbon films to serve as the bottom electrode in the new geometry. However, the molecular switching behavior of biphenyl was not observed by scanning tunneling microscopy (STM), mercury drop, or fabricated glassy carbon/biphenyl/MWCNT junctions. Either the density of these molecules is not optimal for effective integration of devices using MWCNTs as the nanoelectrodes, or an electroactive contaminant was reduced instead of the ionic biphenyl species.
5. Self-assembly of octadecanethiol (ODT) molecules on gold microelectrodes for functional molecular devices (Chapter 7). We have realized an effective scheme to produce Au/ODT/MWCNT junctions by spanning MWCNTs across ODT-functionalized microelectrodes. A percentage of the resulting junctions retain the expected character of an ODT monolayer. While the process is not yet optimized, our successful junctions show that molecular electronic devices can be fabricated using simple processes such as photolithography, self-assembled monolayers, and dielectrophoresis.
Abstract:
Phosphomolybdic acid (H3PMo12O40) and its niobium-exchanged, pyridine-exchanged, and niobium/pyridine-exchanged variants were prepared. Ammonia adsorption microcalorimetry and methanol oxidation studies were carried out to investigate the acid site strength and acid/base/redox properties of each catalyst. The addition of niobium, pyridine, or both increased the ammonia heat of adsorption and the total uptake. The catalyst with both niobium and pyridine demonstrated the largest number of strong sites. For the parent H3PMo12O40 catalyst, methanol oxidation favors the redox product. Incorporation of niobium results in similar selectivity to redox products but no catalyst deactivation. Incorporation of pyridine instead changes the selectivity to favor the acidic product. Finally, the inclusion of both niobium and pyridine results in strong selectivity to the acidic product while also showing no catalyst deactivation. Thus the presence of pyridine appears to enhance the acid properties of the catalyst, while niobium appears to stabilize the active site.
Abstract:
Autonomous system applications are typically limited by the power supply's operational lifetime when battery replacement is difficult or costly. A trade-off between battery size and battery life is usually calculated to determine the device's capability and lifespan. As a result, energy harvesting research has gained importance as society searches for alternative energy sources for power generation. For instance, energy harvesting has been a proven alternative for powering solar-based calculators and self-winding wristwatches. Thus, the use of energy harvesting technology can make it possible to assist or replace batteries for portable, wearable, or surgically implantable autonomous systems. Applications such as cardiac pacemakers or electrical stimulation can benefit from this approach, since the number of surgeries for battery replacement can be reduced or eliminated. Energy scavenging from body motion has been investigated to evaluate the feasibility of powering wearable or implantable systems. Energy from walking has previously been extracted using generators placed on shoes, backpacks, and knee braces, producing power levels ranging from milliwatts to watts. The research presented here examines the available power from walking and running at several body locations. The ankle, knee, hip, chest, wrist, elbow, upper arm, side of the head, and back of the head were the chosen target locations. Joints were preferred since they experience the most drastic acceleration changes. A motor-driven treadmill test was performed on 11 healthy individuals at several walking (1-4 mph) and running (2-5 mph) speeds. The treadmill test provided the acceleration magnitudes at the listed body locations. Power can be estimated from the treadmill evaluation since it is proportional to the acceleration and frequency of occurrence.
Available power output from walking was determined to be greater than 1 mW/cm³ for most body locations and over 10 mW/cm³ at the foot and ankle locations. Available power from running was found to be almost 10 times higher than that from walking. Most energy-harvester topologies use linear generator approaches that are well suited to fixed-frequency vibrations with sub-millimeter amplitude oscillations. In contrast, body motion is characterized by a wide frequency spectrum and larger amplitudes. A generator prototype based on self-winding wristwatches is deemed appropriate for harvesting body motion since it is not limited to operating at fixed frequencies or restricted displacements. Electromagnetic generation is typically favored because of its slightly higher power output per unit volume. A nonharmonic oscillating rotational energy scavenger prototype is therefore proposed to harness body motion. The electromagnetic generator follows the approach of small wind turbine designs, which overcome the lack of a gearbox by using a larger number of coil and magnet arrangements. The device presented here is composed of a rotor with multiple-pole permanent magnets carrying an eccentric weight and a stator composed of stacked planar coils. The rotor oscillations, driven by body motion acting on the eccentric mass unbalance, induce a voltage on the planar coils. A meso-scale prototype device was then built and evaluated for energy generation. The meso-scale casing and rotor were machined from PMMA with a CNC mill. Commercially available discrete magnets were encased in a 25 mm rotor. Commercial copper-coated polyimide film was employed to manufacture the planar coils using MEMS fabrication processes. Jewel bearings were used to finalize the arrangement. The prototypes were also tested at the listed body locations. A meso-scale generator with a 2-layer coil was capable of extracting up to 234 µW of power at the ankle while walking at 3 mph with a 2 cm³ prototype, for a power density of 117 µW/cm³. This dissertation presents the analysis of available power from walking and running at different speeds and the development of an unobtrusive miniature energy-harvesting generator for body motion. The power generated indicates the possibility of powering devices by extracting energy from body motion.
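As a quick arithmetic check of the reported figures, the quoted power density follows directly from the extracted power and the prototype volume:

```python
# 234 uW extracted at the ankle with a 2 cm^3 prototype (values from the text).
power_uW = 234.0
volume_cm3 = 2.0
power_density_uW_per_cm3 = power_uW / volume_cm3  # 117 uW/cm^3, as reported
```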
Abstract:
Metals price risk management is a key issue related to financial risk in metal markets because of uncertain commodity price fluctuations, exchange rates, and interest rate changes, and the huge price risk faced by both metals producers and consumers. Thus, it is taken into account by all participants in metal markets, including metals producers, consumers, merchants, banks, investment funds, speculators, and traders. Managing price risk provides stable income for both metals producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps, and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risk. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years. Nowadays, they are widely used by financial institutions, corporations, professional investors, and individuals. This project is focused on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in the derivative markets. The second part considers the valuation of commodity derivatives.
In this part, the options-pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical values of the options with their market-observed values. Predicting future trends of copper prices is important and essential to managing market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

  Q_t^D = e^(-5.0485) · P_(t-1)^(-0.1868) · GDP_t^(1.7151) · e^(0.0158·IP_t)
  Q_t^S = e^(-3.0785) · P_(t-1)^(0.5960) · T_t^(0.1408) · P_OIL(t)^(-0.1559) · USDI_t^(1.2432) · LIBOR_(t-6)^(-0.0561)
  Q_t^D = Q_t^S

with the reduced-form price equation:

  P_(t-1)^CU = e^(-2.5165) · GDP_t^(2.1910) · e^(0.0202·IP_t) · T_t^(-0.1799) · P_OIL(t)^(0.1991) · USDI_t^(-1.5881) · LIBOR_(t-6)^(0.0717)

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_(t-1) is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity. In addition, industrial production should be considered here, so global industrial production growth, noted as IP_t, is included in the model. T_t is the time variable, a useful proxy for technological change. A proxy variable for the cost of energy in producing copper is the price of oil at time t, noted as P_OIL(t). USDI_t is the U.S. dollar index variable at time t, an important variable for explaining copper supply and copper prices. Finally, LIBOR_(t-6) is the 6-month-lagged 1-year London Interbank Offered Rate of interest. Although the model can be applicable to other base metals' industries, omitted exogenous variables, such as the price of a substitute or a combined variable related to the prices of substitutes, have not been considered in this study. Based on this econometric model and using a Monte Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will be greater than a specific strike price of an option are defined. The final part evaluates risk management strategies, including options strategies, metal swaps, and simple options, in relation to the simulation results. Basic options strategies, such as bull spreads, bear spreads, and butterfly spreads, created using both call and put options in 2006 and 2007, are evaluated. Consequently, each risk management strategy in 2006 and 2007 is analyzed based on the day's data and the price-prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
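The reduced-form price equation can be evaluated directly once the exogenous variables are known. A minimal sketch (the exponents come from the abstract's fitted model; the input values, units, and scalings are placeholders, since the abstract does not specify them):

```python
import math

def copper_price(GDP, IP, T, P_oil, USDI, LIBOR_lag6):
    """Reduced-form LME copper price P_(t-1)^CU from the fitted model.

    Exponents are taken from the abstract; all inputs are placeholder
    quantities whose units and scales the abstract does not define.
    """
    return (math.exp(-2.5165)
            * GDP ** 2.1910          # aggregate economic activity
            * math.exp(0.0202 * IP)  # industrial production growth
            * T ** -0.1799           # time trend (technology proxy)
            * P_oil ** 0.1991        # energy cost proxy (oil price)
            * USDI ** -1.5881        # U.S. dollar index
            * LIBOR_lag6 ** 0.0717)  # 6-month-lagged 1-year LIBOR
```

The signs of the exponents carry the economic story: the price rises with GDP and the oil price (cost-push) and falls as the dollar strengthens.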
Abstract:
From the customer-satisfaction point of view, the sound quality of any product has become an important factor these days. The primary objective of this research is to determine the factors that affect the acceptability of impulse noise. Though the analysis is based on a sample impulse sound file from a commercial printer, the results can be applied to other similar impulsive noises. It is assumed that impulsive noise can be tuned to meet the acceptability criteria; thus, it is necessary to find the most significant factors that can be controlled physically. This analysis is based on a single impulse. A sample impulsive sound file was modified for different amplitudes, background noise, attack time, release time, and spectral content. A two-level factorial design of experiments (DOE) was applied to study the significant effects and interactions. For each impulse file modified per the DOE, the magnitude of perceived annoyance was calculated from an objective metric developed recently at Michigan Technological University. This metric is based on psychoacoustic criteria such as loudness, sharpness, roughness, and loudness-based impulsiveness. Software called 'Artemis V11.2', developed by HEAD Acoustics, was used to calculate these psychoacoustic terms. As a result of the two-level factorial analyses, a new objective model of perceived annoyance was developed in terms of the above-mentioned physical parameters: amplitude, background noise, impulse attack time, impulse release time, and spectral content. The effects of the significant individual factors as well as two-level interactions were also studied. The results show that all five factors significantly affect the annoyance level of an impulsive sound; thus, the annoyance level can be brought under the criteria by optimizing the factor levels. An additional analysis was done to study the effect of these five significant parameters on the individual psychoacoustic metrics.
Abstract:
Since the introduction of the rope-pump in Nicaragua in the 1990s, the dependence on wells in rural areas has grown steadily. However, little or no attention is paid to rope-pump well performance after installation. Due to financial constraints, groundwater resource monitoring using conventional testing methods is too costly and out of reach of rural municipalities. Nonetheless, there is widespread agreement that without a way to quantify changes in well performance over time, prioritizing regulatory actions is impossible. A manual pumping-test method is presented which, at a fraction of the cost of a conventional pumping test, measures the specific capacity of rope-pump wells. The method requires only slight modifications to the well and reasonable limitations on well usage prior to testing. The pumping test was performed a minimum of 33 times in three wells over an eight-month period in a small rural community in Chontales, Nicaragua. Data were used to measure seasonal variations in specific capacity for three rope-pump wells completed in fractured crystalline basalt. Data collected from the tests were analyzed using four methods (equilibrium approximation, time-drawdown during pumping, time-drawdown during recovery, and time-drawdown during late-time recovery) to determine the best data-analysis method. One conventional pumping test was performed to aid in evaluating the manual method. The equilibrium approximation can be performed in the field with only a calculator and is the most technologically appropriate method for analyzing the data. Results from this method overestimate specific capacity by 41% when compared to results from the conventional pumping test. The other analysis methods, requiring more sophisticated tools and higher-level interpretation skills, yielded results that agree to within 14% (pumping phase), 31% (recovery phase), and 133% (late-time recovery) of the conventional test productivity value. The wide variability in accuracy results principally from difficulties in achieving an equilibrated pumping level and from casing-storage effects in the pumping/recovery data. Decreases in well productivity resulting from naturally occurring seasonal water-table drops varied from insignificant in two wells to 80% in the third. Despite practical and theoretical limitations of the method, the collected data may be useful for municipal institutions to track changes in well behavior, eventually developing a database for planning future groundwater development projects. Furthermore, the data could improve well-users' ability to self-regulate well usage without expensive aquifer characterization.
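The quantity the manual test estimates is the specific capacity, discharge divided by drawdown. A minimal sketch of the equilibrium-approximation calculation (the function and values are illustrative; the thesis's field procedure defines how the equilibrated pumping level is actually obtained):

```python
def specific_capacity(pumping_rate_L_min, static_level_m, pumping_level_m):
    """Specific capacity [L/min per m of drawdown], equilibrium approximation.

    pumping_rate_L_min  sustained discharge during the test [L/min]
    static_level_m      depth to water before pumping [m below reference]
    pumping_level_m     equilibrated depth to water while pumping [m]
    """
    drawdown_m = pumping_level_m - static_level_m
    if drawdown_m <= 0:
        raise ValueError("pumping level must lie below the static level")
    return pumping_rate_L_min / drawdown_m
```

Because the field analysis needs only this ratio, it can be done with a calculator, which is what makes the method appropriate for rural municipalities.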
Abstract:
Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction methods to compensate for turbulence effects. While many image reconstruction methods have been proposed, their suitability for use in man-portable embedded systems is uncertain. To be effective, these systems must operate over significant variations in turbulence conditions while subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for systems with a minimum level of computational complexity. Speckle imaging methods have recently been proposed as well suited for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. Design parameters are selected by parametric evaluation of system performance as factors external to the system are varied. The precise control necessary for such an evaluation is made possible using image sets of turbulence-degraded imagery developed using a novel technique for simulating anisoplanatic image formation over long horizontal paths. System performance is statistically evaluated over multiple reconstructions using the mean squared error (MSE) to evaluate reconstruction quality. In addition to the more general design parameters, the relative performance of the bispectrum and Knox-Thompson phase recovery methods is also compared. As an outcome of this work, it can be concluded that speckle imaging techniques are robust to the variations in turbulence conditions and user-controlled parameters expected when operating during the day over long horizontal paths. Speckle imaging systems that incorporate 15 or more image frames and 4 estimates of the object phase per reconstruction provide up to a 45% reduction in MSE and a 68% reduction in its deviation. In addition, the Knox-Thompson phase recovery method is shown to produce images in half the time required by the bispectrum. The quality of images reconstructed using the Knox-Thompson and bispectrum methods is found to be nearly identical. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.
Abstract:
Reducing noise and vibration has long been a goal in major industries: automotive, aerospace, and marine, to name a few. Products must be tested and pass certain federally regulated standards before entering the market. Vibration measurements are commonly acquired using accelerometers; however, limitations of this method create a need for alternative solutions. Two methods for non-contact vibration measurement are compared: laser vibrometry, which directly measures the surface velocity of an aluminum plate, and nearfield acoustic holography (NAH), which measures sound pressure in the nearfield and, using Green's functions, reconstructs the surface velocity at the plate. The surface velocity from each method is then used in modal analysis to determine the comparability of frequency, damping, and mode shapes. Frequencies and mode shapes are also compared to an FEA model. Laser vibrometry is a proven, direct method for determining surface velocity and subsequently calculating modal analysis results. NAH is an effective method for locating noise sources, especially those that are not well separated spatially. Little work has been done on incorporating NAH into modal analysis.
Abstract:
Polymers are typically electrically and thermally insulating materials. The electrical and thermal conductivities of polymers can be increased by the addition of conductive fillers such as carbons. Once polymer composites have been made electrically and thermally conductive, they can be used in applications where these conductivities are desired, such as electromagnetic shielding and static dissipation. In this project, three carbon nanomaterials were added to polycarbonate to enhance the electrical and thermal conductivity of the resulting composite. Hyperion Catalysis FIBRILs carbon nanotubes were added to a maximum loading of 8 wt%. Ketjenblack EC-600 JD carbon black was added to a maximum loading of 10 wt%. XG Sciences xGnP™ graphene nanoplatelets were added to a maximum loading of 15 wt%. These three materials have drastically different morphologies and have varying effects on the properties of polycarbonate composites. It was determined that carbon nanotubes have the largest effect on electrical conductivity, with an 8 wt% carbon nanotube in polycarbonate composite having an electrical conductivity of 0.128 S/cm (versus a pure polycarbonate value of 10^-17 S/cm). Carbon black has the next largest effect, with an 8 wt% carbon black in polycarbonate composite having an electrical conductivity of 0.008 S/cm. Graphene nanoplatelets have the least effect, with an 8 wt% graphene nanoplatelet in polycarbonate having an electrical conductivity of 2.53 × 10^-8 S/cm. Graphene nanoplatelets show a significantly stronger effect on thermal conductivity than either carbon nanotubes or carbon black. Mechanically, all three materials have similar effects, with graphene nanoplatelets being somewhat more effective at increasing the tensile modulus of the composite than the other fillers. Carbon black and graphene nanoplatelets show standard carbon-filler rheology, where the addition of filler increases the viscosity of the resulting composite.
Carbon nanotubes, on the other hand, show an unexpected rheology. As carbon nanotubes are added to polycarbonate, the viscosity of the composite is reduced below that of the original polycarbonate. It was also seen that the addition of carbon nanotubes offsets the increased viscosity from a second filler, such as carbon black or graphene nanoplatelets.
Measuring energy spectra of TeV gamma-ray emission from the Cygnus region of our galaxy with Milagro
Abstract:
High energy gamma rays can provide fundamental clues to the origins of cosmic rays. In this thesis, TeV gamma-ray emission from the Cygnus region is studied. Previously, the Milagro experiment detected five TeV gamma-ray sources in this region and a significant excess of TeV gamma rays whose origin is still unclear. To better understand the diffuse excess, the separation of sources and diffuse emission is studied using the latest and most sensitive data set of the Milagro experiment. In addition, a newly developed technique is applied that allows the energy spectrum of the TeV gamma rays to be reconstructed from Milagro data. No conclusive statement can be made about the spectrum of the diffuse emission from the Cygnus region because of its low significance of 2.2 σ above the background in the studied data sample. The emission from the entire Cygnus region is best fit by a power law with a spectral index of α = 2.40 (68% confidence interval: 1.35-2.92) and an exponential cutoff energy of 31.6 TeV (10.0-251.2 TeV). Assuming a simple power law without a cutoff energy, the best fit yields a spectral index of α = 2.97 (68% confidence interval: 2.83-3.10). Neither of these best fits is in good agreement with the data. The best spectral fit to the TeV emission from MGRO J2019+37, the brightest source in the Cygnus region, yields a spectral index of α = 2.30 (68% confidence interval: 1.40-2.70) with a cutoff energy of 50.1 TeV (68% confidence interval: 17.8-251.2 TeV), and a spectral index of α = 2.75 (68% confidence interval: 2.65-2.85) when no exponential cutoff energy is assumed. According to the present analysis, MGRO J2019+37 contributes 25% of the differential flux from the entire Cygnus region at 15 TeV.
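The spectral forms fitted above can be written compactly as a power law with an optional exponential cutoff. A minimal sketch using the best-fit parameters quoted for the entire Cygnus region (the normalization is an arbitrary placeholder, since the abstract does not quote one):

```python
import math

def differential_flux(E_TeV, norm=1.0, alpha=2.40, E_cut_TeV=31.6):
    """dN/dE = norm * E^(-alpha) * exp(-E / E_cut).

    alpha = 2.40 and E_cut = 31.6 TeV are the quoted best-fit values for
    the whole Cygnus region; norm is a placeholder, not a measured flux.
    """
    return norm * E_TeV ** -alpha * math.exp(-E_TeV / E_cut_TeV)
```

Taking E_cut to infinity recovers the simple power-law case (α = 2.97 in the no-cutoff fit quoted above).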
Abstract:
Cochlear implants have been of great benefit in restoring auditory function to individuals with profound bilateral sensorineural deafness. The implants directly stimulate auditory nerves and send a signal to the brain that is then interpreted as sound. This project focuses on the development of a surgical positioning tool to accurately and effectively place an array of stimulating electrodes deep within the cochlea. This will lead to improved efficiency and performance of the stimulating electrodes, reduced surgical trauma to the cochlea, and, as a result, improved overall performance for the implant recipient. The positioning tool reported here consists of multiple fluidic chambers providing localized curvature control along the length of the attached silicon electrode array. The chambers consist of 200 μm inner diameter PET (polyethylene terephthalate) tubes with 4 μm wall thickness. The chambers are molded in a tapered helical configuration to correspond to the cochlear shape upon relaxation of the actuators. This ensures that the optimal electrode placement within the cochlea is retained after the positioning tool becomes dormant (for chronic implants). Actuation is achieved by injecting fluid into the PET chambers and regulating the fluidic pressure. The chambers are arranged in a stacked, overlapping design to provide fluid connectivity with the non-implantable pressure controller and to allow local curvature control of the device. The stacked-tube configuration allows for localized curvature control of various areas along the length of the electrode and provides additional stiffening and actuating power toward the base. Curvature is affected along the entire length of a chamber, and the result is cumulative in sections with multiple chambers. The actuating chambers are bonded to the back of a silicon electrode array.