12 results for Mobile application testing
in Digital Commons - Michigan Tech
Abstract:
In this thesis, an image enhancement application is developed for low-vision patients who use iPhones to view images and watch videos. The thesis makes two contributions. The first is a new image enhancement algorithm that incorporates human visual features. It is a modification of a wavelet-transform-based image enhancement algorithm developed by Dr. Jinshan Tang; unlike the original, it incorporates human visual features, which makes it more effective. Experimental simulation results show that the proposed algorithm produces better visual results than the algorithm without visual features. The second contribution is the development of a mobile image enhancement application, with which low-vision users can see clearer images on an iPhone on which the application is installed.
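The wavelet-based enhancement idea can be illustrated with a generic one-level Haar sketch in Python (numpy only). The fixed `gain` stands in for the visual-feature-dependent gain of the thesis algorithm, which is not reproduced here; this is a minimal sketch, not Dr. Tang's method:

```python
import numpy as np

def haar_enhance(img, gain=1.5):
    """One-level Haar wavelet enhancement: boost the detail coefficients.

    `gain` is a placeholder for a contrast gain; the thesis modulates it
    with human-visual-system features. Assumes even image dimensions.
    """
    img = img.astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation band
    lh = (a - b + c - d) / 4.0   # detail band
    hl = (a + b - c - d) / 4.0   # detail band
    hh = (a - b - c + d) / 4.0   # detail band
    lh, hl, hh = gain * lh, gain * hl, gain * hh  # amplify edges/contrast
    # Inverse Haar transform to reconstruct the enhanced image
    out = np.empty_like(img)
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return np.clip(out, 0, 255)
```

With `gain=1.0` the transform round-trips exactly, and a flat image is unchanged for any gain since its detail bands are zero.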
Abstract:
With the introduction of mid-level ethanol blend gasoline for commercial sale, the compatibility of different off-road engines needs to be assessed. This report details a test study of one mid-level ethanol fuel in a two-stroke hand-held gasoline engine used to power line trimmers. The study, sponsored by E3, tests the effectiveness of an aftermarket spark plug from E3 Spark Plug when using a mid-level ethanol blend gasoline. A 15% ethanol-by-volume blend (E15) was the test fuel, and a 10% ethanol-by-volume blend (E10) was the baseline. The testing comprises running the engine at different load points and throttle positions to evaluate cylinder head temperature, exhaust temperature, and engine speed. Raw gas emissions were also measured to determine the impact of the performance spark plug. The lower calorific value of the E15 fuel decreased engine speed along with fuel consumption and exhaust gas temperature. HC emissions for the E15 fuel and E3 spark plug increased relative to the baseline in most cases, and NO formation was dependent on cylinder head temperature. The E3 spark plug tended to increase cylinder head temperature irrespective of fuel type while reducing engine speed.
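The drop in engine speed attributed to E15's lower calorific value follows from simple volumetric blending. A rough Python estimate using nominal literature heating values (the MJ/L numbers below are assumptions, not values from this report):

```python
def blend_lhv(ethanol_frac, lhv_gas=32.0, lhv_eth=21.2):
    """Volumetric lower heating value (MJ/L) of an ethanol-gasoline blend.

    lhv_gas and lhv_eth are nominal textbook values (assumptions):
    the blend LHV is the volume-weighted average of the two components.
    """
    return ethanol_frac * lhv_eth + (1 - ethanol_frac) * lhv_gas

e10 = blend_lhv(0.10)                 # baseline fuel energy density
e15 = blend_lhv(0.15)                 # test fuel energy density
drop_pct = 100 * (e10 - e15) / e10    # ~2% less energy per liter
```

The roughly 2% lower volumetric energy content of E15 versus E10 is consistent with the small reductions in speed and exhaust temperature reported.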
Abstract:
Transformers are very important elements of any power system. Unfortunately, they are subjected to through-faults and abnormal operating conditions which can affect not only the transformer itself but also other equipment connected to it. Thus, it is essential to provide sufficient protection for transformers as well as the best possible selectivity and sensitivity of that protection. Nowadays, microprocessor-based relays are widely used to protect power equipment. Current differential and voltage protection strategies are used in transformer protection applications and provide fast and sensitive multi-level protection and monitoring. The elements responsible for detecting turn-to-turn and turn-to-ground faults are the negative-sequence percentage differential element and the restricted earth-fault (REF) element, respectively. During severe internal faults, current transformers can saturate and slow down relay operation, which affects the degree of equipment damage. The scope of this work is to develop a modeling methodology to perform simulations and laboratory tests for internal faults such as turn-to-turn and turn-to-ground faults for two step-down power transformers with capacity ratings of 11.2 MVA and 290 MVA. The simulated current waveforms are injected into a microprocessor relay to check its sensitivity to these internal faults. Saturation of current transformers is also studied in this work. All simulations are performed with the Alternative Transients Program (ATP) utilizing the internal fault model for three-phase two-winding transformers. The tested microprocessor relay is the SEL-487E current differential and voltage protection relay. The results showed that the ATP internal fault model can be used for testing microprocessor relays for any percentage of turns involved in an internal fault.
An interesting observation from the experiments was that the SEL-487E relay is more sensitive to turn-to-turn faults than advertised for the transformers studied. The sensitivity of the restricted earth-fault element was confirmed. CT saturation cases showed that low-accuracy CTs can saturate at a high percentage of turn-to-turn faults, with the CT burden affecting the extent of saturation. Recommendations for future work include more accurate simulation of internal faults, transformer energization inrush, and other scenarios involving core saturation, using the newest version of the internal fault model; the SEL-487E relay or other microprocessor relays should then be retested for performance. Also, applying a grounding bank to the delta-connected side of a transformer will increase the zone of protection, and relay performance can then be tested for internal ground faults on both sides of the transformer.
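The negative-sequence quantity that the percentage differential element acts on comes from the standard Fortescue decomposition of the three phase currents. A minimal Python sketch of that textbook identity (this is not the SEL-487E's internal implementation):

```python
import cmath
import math

# 120-degree rotation operator used in symmetrical-component analysis
A = cmath.exp(2j * math.pi / 3)

def sequence_components(ia, ib, ic):
    """Fortescue decomposition of phasor currents (complex numbers)
    into zero-, positive-, and negative-sequence components."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + A * ib + A**2 * ic) / 3
    i2 = (ia + A**2 * ib + A * ic) / 3
    return i0, i1, i2
```

For a perfectly balanced positive-sequence set the negative-sequence component `i2` is zero; a turn-to-turn fault unbalances the currents and produces a nonzero `i2` for the differential element to detect.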
Abstract:
Electrochemical capacitors (ECs), also known as supercapacitors or ultracapacitors, are energy storage devices with properties between those of batteries and conventional capacitors. ECs have evolved through several generations. The current trend is to combine a double-layer electrode with a battery-type electrode in an asymmetric capacitor configuration. The double-layer electrode is usually activated carbon (AC), since it has high surface area, good conductivity, and relatively low cost. The battery-type electrode usually consists of PbO2 or Ni(OH)2. In this research, a graphitic carbon foam was impregnated with Co-substituted Ni(OH)2 by electrochemical deposition to serve as the positive electrode in the asymmetric capacitor. The purpose was to reduce the cost and weight of the ECs while maintaining or increasing capacitance and gravimetric energy storage density. The XRD results indicated that the nickel-carbon foam electrode was a typical α-Ni(OH)2. The specific capacitance of the nickel-carbon foam electrode was 2641 F/g at 5 mA/cm2, higher than the previously reported value of 2080 F/g for a 7.5% Al-substituted α-Ni(OH)2 electrode. Three different ACs (RP-20, YP-50F, and Ketjenblack EC-600JD) were evaluated by their morphology and electrochemical performance to determine their suitability for use in ECs. The study indicated that YP-50F demonstrated the best overall performance because of its combination of micropore and mesopore structures. Therefore, YP-50F was chosen to pair with the nickel-carbon foam electrode for further evaluation. Six cells with different ratios of negative to positive active mass were fabricated to study the electrochemical performance. Among them, the asymmetric capacitor with a mass ratio of 3.71 gave the highest specific energy and specific power, 24.5 Wh/kg and 498 W/kg, respectively.
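The specific-capacitance and electrode mass-ratio figures above follow from the usual galvanostatic relations. A small Python sketch with the standard formulas; the numeric inputs are hypothetical, not measurements from this work:

```python
def specific_capacitance(current_a, dt_s, mass_g, dv_v):
    """Galvanostatic-discharge estimate C = I * t / (m * dV), in F/g."""
    return current_a * dt_s / (mass_g * dv_v)

def neg_to_pos_mass_ratio(c_neg, dv_neg, c_pos, dv_pos):
    """Negative/positive active-mass ratio that balances stored charge,
    from q = C * m * dV equal on both electrodes."""
    return (c_pos * dv_pos) / (c_neg * dv_neg)

# Illustrative numbers only: 5 mA discharge over 100 s, 1 mg electrode,
# 0.5 V window -> 1000 F/g
c_example = specific_capacitance(0.005, 100.0, 0.001, 0.5)
```

Because the battery-type positive electrode stores far more charge per gram than the activated carbon, charge balancing demands several times more carbon mass, consistent with the optimal ratio of 3.71 reported here.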
Abstract:
Target localization has a wide range of military and civilian applications in wireless mobile networks. Examples include battlefield surveillance, emergency 911 (E911), traffic alert, habitat monitoring, resource allocation, routing, and disaster mitigation. Basic localization techniques include time-of-arrival (TOA), direction-of-arrival (DOA), and received-signal-strength (RSS) estimation. Techniques based on TOA and DOA are very sensitive to the availability of line-of-sight (LOS), the direct path between the transmitter and the receiver. If LOS is not available, TOA and DOA estimation errors create a large localization error. In order to reduce NLOS localization error, NLOS identification, mitigation, and localization techniques have been proposed. This research investigates NLOS identification for multiple-antenna radio systems. The techniques proposed in the literature mainly use one antenna element to enable NLOS identification. When a single antenna is utilized, only limited features of the wireless channel can be exploited to identify NLOS situations. However, in DOA-based wireless localization systems, multiple antenna elements are available. In addition, multiple-antenna technology has been adopted in many widely used wireless systems, such as wireless LAN 802.11n and WiMAX 802.16e, which are good candidates for localization-based services. In this work, the potential of spatial channel information for high-performance NLOS identification is investigated. Considering narrowband multiple-antenna wireless systems, two NLOS identification techniques are proposed. First, the spatial correlation of channel coefficients across antenna elements is proposed as a metric for NLOS identification. In order to obtain the spatial correlation, a new multiple-input multiple-output (MIMO) channel model based on rough surface theory is proposed.
This model can be used to compute the spatial correlation between any antenna pair separated by an arbitrary distance. In addition, a new NLOS identification technique that exploits the statistics of the phase difference across two antenna elements is proposed. This technique assumes the phases received across two antenna elements are uncorrelated, an assumption validated using the well-known circular and elliptic scattering models. Next, it is proved that the channel Rician K-factor is a function of the phase-difference variance. Exploiting the Rician K-factor, techniques to identify NLOS scenarios are proposed. Considering wideband multiple-antenna wireless systems that use MIMO orthogonal frequency division multiplexing (OFDM) signaling, space-time-frequency channel correlation is exploited to attain NLOS identification in time-varying, frequency-selective, and space-selective radio channels. Novel NLOS identification measures based on space, time, and frequency channel correlation are proposed and their performance is evaluated. These measures deliver better NLOS identification performance than those that use only space, time, or frequency.
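The phase-difference idea can be illustrated with a small Python sketch: a strong LOS component (high Rician K) keeps the phase difference between two antennas nearly constant, while rich NLOS scattering spreads it out. The circular-variance metric and the threshold below are illustrative assumptions, not the dissertation's K-factor formula:

```python
import numpy as np

def phase_diff_variance(h1, h2):
    """Circular variance of the phase difference between two sequences
    of channel coefficients (one per antenna). Near 0 for a dominant
    LOS path; near 1 for uniformly scattered phase (rich NLOS)."""
    dphi = np.angle(h1 * np.conj(h2))
    # 1 - mean resultant length: robust to phase wrapping
    return 1.0 - np.abs(np.mean(np.exp(1j * dphi)))

def is_nlos(h1, h2, threshold=0.5):
    """Flag NLOS when phase-difference variance exceeds a threshold.
    The threshold value is illustrative, not from the dissertation."""
    return phase_diff_variance(h1, h2) > threshold
```

A fixed phase offset between the antennas (LOS-like) yields variance near zero, while independent complex-Gaussian fading on each antenna drives it toward one.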
Abstract:
With energy demands and costs growing every day, improving the energy efficiency of electrical devices has become very important. Research into methods of improving efficiency for all electrical components will be key to meeting future energy needs. This report documents the design, construction, and testing of a research-quality electric machine dynamometer and test bed. This test cell system can be used for research in several areas, including electric drive systems, electric vehicle propulsion systems, power electronic converters, and load/source elements in an AC microgrid, among others. The test cell design criteria and decisions are discussed with reference to user functionality and flexibility. The individual power components are discussed in detail as they relate to the project, highlighting any features used in operation of the test cell. A project timeline is presented, clearly stating the work done by the different individuals involved in the project. In addition, the system is parameterized, and benchmark data are used to demonstrate the functional operation of the system.
Abstract:
There has been a continuous evolutionary process in asphalt pavement design. In the beginning it was crude and based on past experience. Through research, empirical methods were developed based on material response to specific loading at the AASHO Road Test. Today, pavement design has progressed to a mechanistic-empirical method. This methodology takes into account the mechanical properties of the individual layers and uses empirical relationships to relate them to performance. The mechanical tests used as part of this methodology include dynamic modulus and flow number, which have been shown to correlate with field pavement performance. This thesis was based on a portion of a research project conducted at Michigan Technological University (MTU) for the Wisconsin Department of Transportation (WisDOT). The global scope of the project was the development of a library of values for the mechanical properties of the asphalt pavement mixtures paved in Wisconsin. Additionally, a comparison of the current associated pavement design with that of the new AASHTO Design Guide was conducted. This thesis describes the development of the current pavement design methodology as well as the associated tests as part of a literature review. It also details the materials that were sampled from field operations around the state of Wisconsin and their testing preparation and procedures. Testing was conducted on available round-robin and three Wisconsin mixtures, and the main results of the research were the following: The test history of the Superpave SPT (fatigue and permanent deformation dynamic modulus) does not affect the mean response for either dynamic modulus or flow number, but does increase the variability in the flow number test results.
The method of specimen preparation, compacting to test geometry versus sawing/coring to test geometry, does not statistically appear to affect the intermediate- and high-temperature dynamic modulus and flow number test results. The 2002 AASHTO Design Guide simulations support the findings of the statistical analyses that the method of specimen preparation did not impact the performance of the HMA as a structural layer as predicted by the Design Guide software. The methodologies for determining the temperature-viscosity relationship as stipulated by Witczak are sensitive to the viscosity test temperatures employed. An increase in asphalt binder content of 0.3% was found to actually increase the dynamic modulus at the intermediate and high test temperatures as well as the flow number. This result was based on the testing that was conducted and contradicts previous research and the hypothesis put forth for this thesis; it should be used with caution and requires further review. Based on the limited results presented herein, the asphalt binder grade appears to have a greater impact on performance in the Superpave SPT than aggregate angularity. Dynamic modulus and flow number were shown to increase with traffic level (which requires an increase in aggregate angularity) and with a decrease in air voids, confirming the hypotheses regarding these two factors. Accumulated micro-strain at flow number, as opposed to flow number itself, appeared to be a promising measure for comparing the quality of specimens within a specific mixture. At the current time, the Design Guide and its associated software need further improvement prior to implementation by owner agencies.
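The dynamic modulus |E*| behind these findings is the ratio of stress to strain amplitude under sinusoidal loading, with the lag between the two giving the phase angle. A generic signal-processing sketch in Python (a minimal illustration, not the AASHTO test procedure):

```python
import numpy as np

def dynamic_modulus(stress, strain, freq_hz, dt):
    """|E*| and phase angle (degrees) from steady-state sinusoidal
    stress/strain records sampled at interval dt. Amplitude and phase
    are taken from the Fourier component at the loading frequency;
    assumes the record spans an integer number of cycles."""
    n = len(stress)
    t = np.arange(n) * dt
    basis = np.exp(-2j * np.pi * freq_hz * t)
    s = 2 / n * np.sum(stress * basis)   # complex stress amplitude
    e = 2 / n * np.sum(strain * basis)   # complex strain amplitude
    e_star = abs(s) / abs(e)             # |E*| = stress amp / strain amp
    phase_deg = np.degrees(np.angle(s) - np.angle(e)) % 360
    return e_star, phase_deg
```

For example, a 600 kPa stress amplitude driving a 100 microstrain response lagging by 30 degrees gives |E*| = 6 GPa (in consistent units) and a 30-degree phase angle.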
Abstract:
This dissertation has three separate parts: the first deals with general pedigree association testing incorporating continuous covariates; the second deals with association tests under population stratification using conditional likelihood tests; the third deals with genome-wide association studies based on the real rheumatoid arthritis (RA) disease data sets from Genetic Analysis Workshop 16 (GAW16) problem 1. Many statistical tests have been developed to test linkage and association using either case-control status or phenotype covariates for family data structures, separately. Such univariate analyses may not use all the information coming from the family members in practical studies. On the other hand, human complex diseases do not have a clear inheritance pattern; the underlying genes may interact or act independently. In part I, the proposed approach, MPDT, focuses on using both the case-control information and the phenotype covariates, and can be applied to detect multiple marker effects. Based on two existing popular statistics in family studies, for case-control and quantitative traits respectively, the new approach can be used on simple family structures as well as general pedigrees. The combined statistic is calculated from the two component statistics, and a permutation procedure is applied to assess the p-value, with a Bonferroni adjustment for the multiple markers. We use simulation studies to evaluate the type I error rates and the power of the proposed approach. Our results show that the combined test using both case-control information and phenotype covariates not only has the correct type I error rates but is also more powerful than the other existing methods. For multiple marker interactions, our proposed method is also very powerful.
Selective genotyping is an economical strategy for detecting and mapping quantitative trait loci in the genetic dissection of complex disease. When the samples arise from different ethnic groups or an admixed population, all the existing selective genotyping methods may produce spurious association due to different ancestry distributions. The problem can be more serious when the sample size is large, a general requirement for sufficient power to detect modest genetic effects for most complex traits. In part II, I describe a useful strategy for selective genotyping when population stratification is present. Our procedure uses a principal-component-based approach to eliminate any effect of population stratification. We evaluate the performance of the procedure using both simulated data from an earlier study and HapMap data sets in a variety of population admixture models generated from empirical data. The rheumatoid arthritis data set of Problem 1 in GAW16 contains one binary trait and two continuous traits: RA status, anti-CCP, and IgM. To allow multiple traits, we propose a set of SNP-level F statistics based on the concept of multiple correlation to measure the genetic association between multiple trait values and SNP-specific genotypic scores, and we obtain their null distributions. We then perform six genome-wide association analyses using novel one- and two-stage approaches based on single, double, and triple traits. Combining all six analyses, we successfully validate the SNPs that have been identified as responsible for rheumatoid arthritis in the literature and detect additional disease susceptibility SNPs for future follow-up studies. Except for chromosomes 13 and 18, every chromosome is found to harbour susceptible genetic regions for rheumatoid arthritis or related diseases, e.g., lupus erythematosus. This topic is discussed in part III.
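The permutation-with-Bonferroni step used in part I can be sketched generically in Python. Function and variable names are illustrative only; this is not the MPDT implementation, just the standard recipe of permuting phenotype labels and adjusting the empirical p-values for the number of markers:

```python
import numpy as np

def permutation_pvalues(stats_obs, stat_fn, data, labels,
                        n_perm=1000, seed=0):
    """Bonferroni-adjusted permutation p-values for per-marker statistics.

    stat_fn(data, labels) must return one statistic per marker; labels
    are permuted to break the genotype-phenotype link under the null.
    """
    rng = np.random.default_rng(seed)
    exceed = np.zeros_like(stats_obs)
    for _ in range(n_perm):
        perm = rng.permutation(labels)
        exceed += stat_fn(data, perm) >= stats_obs
    pvals = (exceed + 1) / (n_perm + 1)          # empirical p-values
    return np.minimum(pvals * len(stats_obs), 1.0)  # Bonferroni
```

With a simple mean-difference statistic, a marker carrying a strong simulated effect gets a small adjusted p-value while null markers do not.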
Abstract:
With the development of genotyping and next-generation sequencing technologies, multi-marker testing in genome-wide association studies and rare variant association studies has become an active research area in statistical genetics. This dissertation presents three methodologies for association studies that explore different genetic data features and demonstrates how to use these methods to test genetic association hypotheses. The methods fall into three scenarios: 1) multi-marker testing for strong linkage disequilibrium regions, 2) multi-marker testing for family-based association studies, and 3) multi-marker testing for rare variant association studies. I also discuss the advantages of these methods and demonstrate their power through simulation studies and applications to real genetic data.
Abstract:
In the realm of computer programming, the experience of writing a program is used to reinforce concepts and evaluate ability. This research uses three case studies to evaluate the introduction of testing through Kolb's Experiential Learning Model (ELM). We then analyze the impact of those testing experiences to determine methods for improving future courses. The first testing experience that students encounter is unit test reports in their early courses. This course demonstrates that automating and improving feedback can provide more ELM iterations. The JUnit Generation (JUG) tool also provided a positive experience for the instructor by reducing the overall workload. Later, undergraduate and graduate students have the opportunity to work together in a multi-role Human-Computer Interaction (HCI) course. The interactions use usability analysis techniques, with graduate students as usability experts and undergraduate students as design engineers. Students gain experience testing the user experience of their product prototypes using methods ranging from heuristic analysis to user testing. From this course, we learned the importance of the instructor's role in the ELM. As more roles were added to the HCI course, a desire arose to provide more complete, quality-assured software. This inspired the addition of unit testing experiences to the course. However, we learned that significant preparations must be made to apply the ELM when students are resistant. The research presented through these courses was driven by the recognition of a need for testing in a Computer Science curriculum. Our understanding of the ELM suggests the need for student experience when introducing testing concepts. We learned that experiential learning, when appropriately implemented, can benefit the Computer Science classroom. When examined together, these course-based research projects provided insight into building strong testing practices into a curriculum.
Abstract:
In this thesis, I study skin lesion detection and its applications to skin cancer diagnosis. A skin lesion detection algorithm is proposed, based on color information and thresholding. Several color spaces are studied and the detection results compared; experimental results show that the YUV color space achieves the best performance. In addition, I develop a distance-histogram-based threshold selection method, which proves better than other adaptive threshold selection methods for color detection. Beyond the detection algorithms, I also investigate GPU speed-up techniques for skin lesion extraction; the results show that GPUs have potential for accelerating skin lesion extraction. Based on the proposed skin lesion detection algorithms, I developed a mobile skin cancer diagnosis application, with which a user can employ an iPhone running the application as a diagnostic tool to find potential skin lesions on a person's skin and compare them with the skin lesions stored in a database on a remote server.
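The color-space step can be illustrated in Python: a standard BT.601 RGB-to-YUV conversion followed by a fixed threshold on one chrominance channel. The channel choice and threshold value below are illustrative assumptions; the thesis selects its threshold from a distance histogram, which is not reproduced here:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """BT.601 RGB -> YUV conversion (standard coefficients)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return rgb @ m.T

def lesion_mask(rgb, v_threshold=20.0):
    """Flag pixels whose V (red-difference) channel exceeds a threshold.

    v_threshold is an illustrative fixed value, standing in for the
    distance-histogram-based selection of the thesis.
    """
    yuv = rgb_to_yuv(rgb.astype(float))
    return yuv[..., 2] > v_threshold
```

A reddish pixel has a large positive V component and is flagged, while a neutral gray pixel has V near zero and is not.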
Abstract:
Increasing fuel prices, together with the depletion and instability of foreign oil imports, have driven the importance of using alternative and renewable fuels. Alternative fuels such as ethanol, methanol, butyl alcohol, and natural gas are of interest for relieving some of the dependence on oil for transportation. The renewable fuel ethanol, which is made from the sugars of corn, has been used widely in vehicle fuel in the United States because of its unique qualities. As with any renewable fuel, ethanol has many advantages but also disadvantages. Cold startability of engines is one area of concern when using ethanol-blended fuel. This research was focused on the cold startability of snowmobiles at ambient temperatures of 20 °F, 0 °F, and -20 °F. The tests were performed in a modified 48-foot refrigerated trailer which was retrofitted for the purpose of cold-start tests. Pure gasoline (E0) was used as the baseline. A splash-blended ethanol and gasoline mixture (E15, 15% ethanol and 85% gasoline by volume) was then tested and compared to the E0 fuel. Four different types of snowmobiles were used for the testing: a Yamaha FX Nytro RTX four-stroke, a Ski-doo MX Z TNT 600 E-TEC direct-injected two-stroke, a Polaris 800 Rush semi-direct-injected two-stroke, and an Arctic Cat F570 carbureted two-stroke. All of the snowmobiles operate on open-loop systems, which means there was no compensation for the change in fuel properties. Emissions were sampled using a Sensors Inc. Semtech DS five-gas emissions analyzer, and engine data were recorded using AIM Racing Data Power EVO3 Pro and EVO4 systems. The recorded raw exhaust emissions included carbon monoxide (CO), carbon dioxide (CO2), total hydrocarbons (THC), and oxygen (O2). To help explain the trends in the emissions data, engine parameters were also recorded.
The EVO equipment was installed on each vehicle to record the following parameters: engine speed, exhaust gas temperature, head temperature, coolant temperature, and test cell air temperature. At least three consistent tests were taken at each fuel and temperature combination to ensure repeatability, so a total of 18 valid tests were taken on each snowmobile. The snowmobiles were run at operating temperature to clear any excess fuel from the engine crankcase before each cold-start test. The trends from switching from E0 to E15 were different for each snowmobile, as they all employ different engine technologies. The Yamaha snowmobile (four-stroke EFI) achieved higher levels of CO2 with lower CO and THC emissions on E15. Engine speeds were fairly consistent between fuels, but the average engine speeds increased as the temperatures decreased. The average exhaust gas temperature increased by 1.3-1.8% for E15 compared to E0 due to enleanment. For the Ski-doo snowmobile (direct-injected two-stroke), only slight differences were noted when switching from E0 to E15. This could possibly be due to the lean-of-stoichiometric operation of the engine at idle. The CO2 emissions decreased slightly at 20 °F and 0 °F for E15 fuel, with a small difference at -20 °F. Almost no change in CO or THC emissions was noted at any temperature. The only significant difference observed in the engine data was the exhaust gas temperature, which decreased with E15. The Polaris snowmobile (semi-direct-injected two-stroke) had similar raw exhaust emissions for the two fuels. This was probably due to changing a resistor when using E15, which changed the fuel map for an ethanol mixture (E10 vs. E0). This snowmobile operates at a rich condition, which caused the engine to emit higher values of CO than CO2 and to exceed the THC analyzer range at idle. The engine parameters and emissions did not increase or decrease significantly with decreasing temperature.
The average idle engine speed did increase as the ambient temperature decreased. The Arctic Cat snowmobile (carbureted two-stroke) was equipped with a choke lever to assist cold starts. The choke was operated in the same manner for both fuels. Lower levels of CO emissions were observed with E15 fuel, yet the THC emissions exceeded the analyzer range. The engine ran at a slightly lower speed with E15.
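The enleanment reported for these open-loop machines follows from ethanol's lower stoichiometric air-fuel requirement: the calibration keeps delivering the same fuel mass, so an ethanol blend runs lean of stoichiometric. A rough Python estimate with nominal textbook values (the AFR numbers and the E15 mass fraction are assumptions, not measurements from this study):

```python
def stoich_afr(ethanol_mass_frac, afr_gas=14.7, afr_eth=9.0):
    """Stoichiometric air-fuel ratio of a gasoline-ethanol blend,
    mass-weighted; 14.7 and 9.0 are nominal textbook values."""
    return (1 - ethanol_mass_frac) * afr_gas + ethanol_mass_frac * afr_eth

# On an open-loop fuel system the delivered fuel and air are unchanged,
# so the mixture runs at the gasoline AFR while the blend needs less air:
# lambda > 1 means lean. E15 by volume is roughly 15.7% ethanol by mass
# (an approximation from nominal fuel densities).
lambda_e15 = stoich_afr(0.0) / stoich_afr(0.157)
```

The resulting relative air-fuel ratio of roughly 1.06 is consistent with the higher exhaust temperatures and lower CO observed on E15 for the unmodified calibrations.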