10 results for comparison method
in Digital Commons - Michigan Tech
Abstract:
In this report, we attempt to define the capabilities of the infrared satellite remote sensor on the Multifunctional Transport Satellite-2 (MTSAT-2), a geosynchronous instrument, in characterizing volcanic eruptive behavior in the highly active region of Indonesia. Sulfur dioxide data from NASA's Ozone Monitoring Instrument (OMI), a polar-orbiting instrument, are presented here to validate the processes interpreted from the thermal infrared datasets. Data from two case studies are analyzed specifically for eruptive products producing large thermal anomalies (i.e. lava flows, lava domes, etc.), volcanic ash, and SO2 clouds: three distinctly characteristic and abundant volcanic emissions. Two primary methods for the detection of heat signatures are used and compared in this report: single-channel thermal radiance (4 µm) and the normalized thermal index (NTI) algorithm. For automated purposes, fixed thresholds must be determined for these methods. A base minimum detection limit (MDL) of 2.30 × 10⁵ W m⁻² sr⁻¹ m⁻¹ for single-channel thermal radiance and -0.925 for NTI generates false alarm rates of 35.78% and 34.16%, respectively. A spatial comparison method, developed here specifically for use in Indonesia and used as a second detection parameter, is implemented to address the high false alarm rate. For the single-channel thermal radiance method, the spatial comparison method eliminated 100% of the false alarms while retaining every true anomaly. The NTI algorithm showed similar results, with only 2 false alarms remaining. No definitive difference is observed between the two thermal detection methods for automated use; however, the single-channel thermal radiance method coupled with the SO2 mass abundance data can be used to interpret volcanic processes, including the identification of lava dome activity at Sinabung as well as the mechanism of the dome emplacement (i.e. endogenous or exogenous). Only one technique, the brightness temperature difference (BTD) method, is used for the detection of ash. Trends in ash area, water/ice area, and their respective concentrations yield interpretations of increased ice formation, aggregation, and sedimentation processes that only a high-temporal-resolution instrument like MTSAT-2 can analyze. A conceptual model of a secondary zone of aggregation occurring in the migrating Kelut ash cloud, which decreases the distal fine-ash component and the hazards to flight paths, is presented in this report. Unfortunately, the SO2 data were unable to definitively reinforce the concept of a secondary zone of aggregation because their temporal resolution was insufficient. However, a detailed study of the Kelut SO2 cloud determined that this eruption generated no climatic impacts, given the atmospheric residence times and e-folding rate of ~14 days for the SO2. This report applies the complementary assets of a high-temporal-resolution and a high-spatial-resolution satellite, and it demonstrates that these two instruments can provide unparalleled observations of dynamic volcanic processes.
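Both heat-detection methods described above reduce to per-pixel threshold tests. The sketch below is a minimal illustration, not the thesis code: the NTI formula follows the standard definition from the MODVOLC literature, NTI = (R4µm − R12µm)/(R4µm + R12µm); the thresholds are the MDL values quoted above; the 2 × 2 scene values are invented; and the thesis's spatial comparison step is not implemented here.

```python
import numpy as np

# Fixed thresholds quoted in the abstract (units as reported there).
RADIANCE_MDL = 2.30e5   # 4-um spectral radiance MDL, W m^-2 sr^-1 m^-1
NTI_MDL = -0.925        # normalized thermal index MDL

def normalized_thermal_index(rad_4um, rad_12um):
    """Standard NTI: (R_4um - R_12um) / (R_4um + R_12um)."""
    return (rad_4um - rad_12um) / (rad_4um + rad_12um)

def flag_thermal_anomalies(rad_4um, rad_12um):
    """Boolean masks for the two fixed-threshold detectors."""
    single_channel_hits = rad_4um > RADIANCE_MDL
    nti_hits = normalized_thermal_index(rad_4um, rad_12um) > NTI_MDL
    return single_channel_hits, nti_hits

# Hypothetical 2 x 2 scene: one hot pixel (top-left), three background pixels.
r4 = np.array([[3.1e5, 0.30e5], [0.25e5, 0.30e5]])
r12 = np.array([[8.0e5, 9.0e5], [9.2e5, 8.8e5]])
sc_hits, nti_hits = flag_thermal_anomalies(r4, r12)
```

With fixed thresholds alone, ambiguous background pixels can still exceed the MDLs, which is why the abstract layers a spatial comparison test on top of these detectors.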
Abstract:
The amount and type of ground cover is an important characteristic to measure when collecting soil disturbance monitoring data after a timber harvest. Estimates of ground cover and bare soil can be used to track changes in invasive species, plant growth and regeneration, woody debris loadings, and the risk of surface water runoff and soil erosion. A new method of assessing ground cover and soil disturbance, the Forest Soil Disturbance Monitoring Protocol (FSDMP), was recently published by the U.S. Forest Service. This protocol uses the frequency of cover types in small circular (15 cm) plots to compare the ground surface in pre- and post-harvest conditions. While both frequency and percent cover are common methods of describing vegetation, frequency has rarely been used to measure ground surface cover. In this study, three methods for assessing ground cover percent (step-point, 15 cm dia. circular plot, and 1 × 5 m visual plot estimates) were compared to the FSDMP frequency method. Results show that the FSDMP method provides significantly higher estimates of ground surface condition for most soil cover types, except coarse wood. The three cover methods produced similar estimates for most cover values. The FSDMP method also produced the highest value when bare soil estimates were used to model erosion risk. In a person-hour analysis, estimating ground cover percent in 15 cm dia. plots required the least sampling time and provided standard errors similar to the other cover estimates, even at low sampling intensities (n=18). If ground cover estimates are desired in soil monitoring, a small plot size (15 cm dia. circle) or a step-point method can provide a more accurate estimate in less time than the current FSDMP method.
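The distinction between a frequency protocol and a percent-cover estimate can be stated concretely. The following minimal sketch, with invented per-plot data, contrasts the two summary statistics; it illustrates the concepts only and is not the FSDMP procedure itself.

```python
import numpy as np

# Hypothetical per-plot observations for one cover type ("bare soil"):
# percent cover estimated in each of n small circular plots.
percent_cover = np.array([0.0, 5.0, 0.0, 40.0, 10.0, 0.0, 15.0, 0.0, 60.0, 0.0])

# Percent-cover estimate: mean of the plot estimates.
mean_cover = percent_cover.mean()             # 13.0 %

# Frequency estimate (FSDMP-style): fraction of plots where the
# cover type is present at all, regardless of how much.
frequency = (percent_cover > 0).mean() * 100  # 50.0 %

# Presence/absence saturates quickly, which is one reason a frequency
# protocol can report higher values than direct cover estimates.
print(f"mean cover {mean_cover:.1f}%, frequency {frequency:.1f}%")
```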
Abstract:
Reducing noise and vibration has long been a goal in major industries: automotive, aerospace, and marine, to name a few. Products must be tested and must pass certain federally regulated standards before entering the market. Vibration measurements are commonly acquired using accelerometers; however, the limitations of this method create a need for alternative solutions. Two methods for non-contact vibration measurement are compared: laser vibrometry, which directly measures the surface velocity of the aluminum plate, and nearfield acoustic holography (NAH), which measures sound pressure in the nearfield and, using Green's functions, reconstructs the surface velocity at the plate. The surface velocity from each method is then used in modal analysis to determine the comparability of frequency, damping, and mode shapes. Frequencies and mode shapes are also compared to an FEA model. Laser vibrometry is a proven, direct method for determining surface velocity and subsequently calculating modal analysis results. NAH is an effective method for locating noise sources, especially those that are not well separated spatially. Little work has been done on incorporating NAH into modal analysis.
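A standard way to quantify how comparable two sets of mode shapes are is the Modal Assurance Criterion (MAC). The abstract does not name its comparison metric, so the sketch below is only an assumed illustration, with invented mode-shape vectors standing in for the laser vibrometry and NAH results.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors:
    MAC = |phi_a^H phi_b|^2 / ((phi_a^H phi_a)(phi_b^H phi_b)).
    1.0 means identical shapes up to scale; ~0 means uncorrelated."""
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return num / den

# Hypothetical first bending mode sampled at five grid points:
# one vector from laser vibrometry, one reconstructed by NAH.
phi_lv  = np.array([0.00, 0.71, 1.00, 0.70, 0.00])
phi_nah = np.array([0.02, 0.69, 0.98, 0.73, 0.01])
print(f"MAC = {mac(phi_lv, phi_nah):.3f}")  # close to 1 -> comparable shapes
```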
Abstract:
Experimental warming provides a method to determine how an ecosystem will respond to increased temperatures. Northern peatland ecosystems, sensitive to changing climates, provide an excellent setting for experimental warming. Storing great quantities of carbon, northern peatlands play a critical role in regulating global temperatures. Two of the most common methods of experimental warming are open top chambers (OTCs) and infrared (IR) lamps. These warming systems have been used in many ecosystems throughout the world, yet their efficacy in creating a warmer environment is variable and has not been widely studied. To date, there has not been a direct, experimentally controlled comparison of OTCs and IR lamps. As a result, a factorial study was implemented to compare the warming efficacy of OTCs and IR lamps and to examine the resulting carbon dioxide (CO2) and methane (CH4) flux rates in a Lake Superior peatland. IR lamps warmed the ecosystem on average by 1-2 °C, with the majority of warming occurring during nighttime hours. OTCs did not provide any long-term warming above control plots, which is contrary to similar OTC studies at high latitudes. By investigating diurnal heating patterns and micrometeorological variables, we were able to conclude that OTCs were not achieving strong daytime heating peaks and were often cooler than control plots during nighttime hours. Temperate day length, cloudy and humid conditions, and latent heat loss were factors that inhibited OTC warming. There were no changes in CO2 flux between warming treatments in lawn plots. Gross ecosystem production was significantly greater in IR lamp-hummock plots, while ecosystem respiration was not affected. CH4 flux was not significantly affected by warming treatment. Minimal daytime heating differences, high ambient temperatures, decay-resistant substrate, and other factors suppressed significant gas flux responses to the warming treatments.
Abstract:
The reported research project studied how teaching science using demonstrations, inquiry-based cooperative learning groups, or a combination of the two methods affected sixth grade students' understanding of air pressure and density. Three different groups of students were each taught the two units using different teaching methods. Group one learned about the topics through both demonstrations and inquiry-based cooperative learning, whereas group two only viewed demonstrations, and group three only participated in inquiry-based learning in cooperative learning groups. The study was designed to answer the following two questions: 1. Which teaching strategy works best for supporting student understanding of air pressure and density: demonstrations, inquiry-based labs in cooperative learning groups, or a combination of the two? 2. What effect does the time spent engaging in a particular learning experience (demonstrations or labs) have on student learning? Overall, the data did not provide sufficient evidence that one method of learning was more effective than the others. The results also suggested that spending more time on a unit does not necessarily equate to a better understanding of the concepts by the students. Implications for science instruction are discussed.
Abstract:
A significant cost for foundations is the design and installation of piles when they are required due to poor ground conditions. Not only must piles be designed properly, but the installation equipment and total cost must also be evaluated. To assist in the evaluation of piles, a number of methods have been developed. In this research, three of these methods were investigated: those developed by the Federal Highway Administration (FHWA), the US Army Corps of Engineers, and the American Petroleum Institute (API). The results from these methods were entered into the program GRLWEAP™ to assess pile drivability and to provide a standard basis for comparing the three methods. An additional element of this research was to develop Excel spreadsheets implementing the three methods. Currently, the Army Corps and API methods have no publicly available software and must be performed manually, which requires reading data off figures and tables and can introduce error into the prediction of pile capacities. Following their development, the Excel spreadsheets were validated with both manual calculations and existing data sets to ensure that the output is correct. To evaluate the three pile capacity methods, data were utilized from four project sites in North America. The data included site geotechnical data along with field-determined pile capacities. To achieve a standard comparison of the data, the pile capacities and geotechnical data from the three methods were entered into GRLWEAP™. The sites consisted of both cohesive and cohesionless soils: one site was primarily cohesive, one was primarily cohesionless, and the other two consisted of inter-bedded cohesive and cohesionless soils. Based on this limited set of data, the results indicated that the US Army Corps of Engineers method compared most closely with the field test data, followed by the API method to a lesser degree. The DRIVEN program compared favorably in cohesive soils but over-predicted in cohesionless material.
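As one example of what such a spreadsheet implements, the API method computes unit skin friction in cohesive soil from the α-factor of API RP 2A. The sketch below shows that single ingredient in Python with hypothetical soil values; it is not the thesis's spreadsheet, which also covers cohesionless layers, end bearing, and GRLWEAP input.

```python
def api_alpha(su, p0):
    """API RP 2A alpha factor for cohesive soil.
    su: undrained shear strength; p0: effective overburden stress
    (same units). alpha = 0.5*psi^-0.5 for psi <= 1, else 0.5*psi^-0.25,
    capped at 1.0, where psi = su/p0."""
    psi = su / p0
    alpha = 0.5 * psi ** (-0.5 if psi <= 1.0 else -0.25)
    return min(alpha, 1.0)

def unit_skin_friction(su, p0):
    """Unit skin friction f = alpha * su (API method, cohesive soil)."""
    return api_alpha(su, p0) * su

# Hypothetical clay layer: su = 50 kPa, p'0 = 100 kPa.
f = unit_skin_friction(50.0, 100.0)  # alpha ~ 0.707 -> f ~ 35.4 kPa
print(f"unit skin friction ~ {f:.1f} kPa")
```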
Abstract:
Nanoparticles are fascinating because their physical and optical properties depend on size. Highly controllable synthesis methods and nanoparticle assembly are essential [6] for highly innovative technological applications. Among nanoparticles, nonhomogeneous core-shell nanoparticles (CSnp) exhibit new properties that arise when the relative dimensions of the core and the shell are varied. The CSnp structure enables various optical resonances and engineered energy barriers, in addition to a high charge-to-surface ratio. Assembly of homogeneous nanoparticles into functional structures has become ubiquitous in biosensors (i.e. optical labeling) [7, 8], nanocoatings [9-13], and electrical circuits [14, 15]. Nonhomogeneous nanoparticle assembly, however, has been explored only to a limited extent. Many conventional nanoparticle assembly methods exist, but this work explores dielectrophoresis (DEP) as a new method. DEP is the polarization of particles suspended in conductive fluids by non-uniform electric fields. Most prior DEP efforts involve microscale particles. Prior work on core-shell nanoparticle assemblies and, separately, on nanoparticle characterization with dielectrophoresis and electrorotation [2-5] did not systematically explore particle size, dielectric properties (permittivity and electrical conductivity), shell thickness, particle concentration, medium conductivity, and frequency. This work is the first, to the best of our knowledge, to systematically examine these dielectrophoretic properties for core-shell nanoparticles. Further, we conduct a parametric fitting to traditional core-shell models. These biocompatible core-shell nanoparticles were studied to fill a knowledge gap in the DEP field. Experimental results (chapter 5) first examine the medium conductivity, size, and shell material dependencies of the dielectrophoretic behaviors of spherical CSnp assembled into 2D and 3D particle assemblies. Chitosan (amino sugar) and poly-L-lysine (amino acid, PLL) CSnp shell materials were custom synthesized around a hollow (gas) core by utilizing a phospholipid micelle around a volatile fluid that templates the shell material; this approach proves to be novel and distinct from conventional core-shell models wherein a conductive core is coated with an insulative shell. Experiments were conducted within a 100 nl chamber housing 100 µm wide Ti/Au quadrupole electrodes spaced 25 µm apart. Frequencies from 100 kHz to 80 MHz at a fixed local field of 5 Vpp were tested at 10⁻⁵ and 10⁻³ S/m medium conductivities for 25 seconds. Dielectrophoretic responses of ~220 and ~340 (or ~400) nm chitosan or PLL CSnp were compiled as a function of medium conductivity, size, and shell material.
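For context, dielectrophoretic response is conventionally modeled through the Clausius-Mossotti factor of a single-shell sphere, the "traditional core-shell model" the abstract fits against. The sketch below implements that textbook model; the gas-core/polymer-shell parameter values are assumptions loosely inspired by, not taken from, the abstract.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def complex_perm(eps_r, sigma, omega):
    """Complex permittivity: eps_r*eps0 - j*sigma/omega."""
    return eps_r * EPS0 - 1j * sigma / omega

def shelled_sphere_perm(eps_core, eps_shell, r_core, r_outer):
    """Effective complex permittivity of a core-shell sphere
    (standard single-shell model)."""
    gamma = (r_outer / r_core) ** 3
    k = (eps_core - eps_shell) / (eps_core + 2 * eps_shell)
    return eps_shell * (gamma + 2 * k) / (gamma - k)

def cm_factor(eps_p, eps_m):
    """Clausius-Mossotti factor; Re[K] > 0 -> positive DEP."""
    return (eps_p - eps_m) / (eps_p + 2 * eps_m)

# Hypothetical parameters: ~340 nm particle with a gas core and a
# polymer (chitosan/PLL-like) shell, aqueous medium at 1e-3 S/m, 1 MHz.
omega = 2 * np.pi * 1e6
core = complex_perm(1.0, 1e-12, omega)    # gas core: eps_r ~ 1, ~nonconductive
shell = complex_perm(60.0, 1e-3, omega)   # assumed polymer shell values
medium = complex_perm(78.0, 1e-3, omega)  # aqueous medium
eps_p = shelled_sphere_perm(core, shell, r_core=150e-9, r_outer=170e-9)
print("Re[K] =", cm_factor(eps_p, medium).real)
```

The time-averaged DEP force then follows as F = 2π·ε_m·r³·Re[K]·∇|E|², so the sign of Re[K] across frequency and medium conductivity predicts whether particles collect at or move away from the electrode edges.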
Abstract:
Data on the strength of Earth's magnetic field (paleointensity) in the geological past are crucial for understanding the geodynamo. Conventional paleointensity determination methods require heating a sample to a high temperature in one or more steps. Consequently, many rocks are unsuitable for these methods due to heating-induced experimental alteration. Alternative non-heating paleointensity methods are investigated to assess their effectiveness and reliability, using both natural samples from Lemptégy Volcano, France, and synthetic samples. Paleointensity was measured from the natural and synthetic samples using the Pseudo-Thellier, ARM, REM, REMc, REM', and Preisach methods. For the natural samples, only the Pseudo-Thellier method produced a reasonable paleointensity estimate consistent with previous paleointensity data. The synthetic samples yielded more successful estimates across all the methods, with the Pseudo-Thellier and ARM methods producing the most accurate results. The Pseudo-Thellier method appears to be the best alternative to the heating-based paleointensity methods.
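Of the methods listed, the Pseudo-Thellier method estimates paleointensity from the slope of NRM lost versus ARM gained over matched alternating-field steps, scaled by the laboratory bias field. The sketch below shows that slope-based estimate in outline only; the data and the unit calibration factor are invented, since real Pseudo-Thellier work requires a sample-dependent ARM-to-TRM calibration.

```python
import numpy as np

def pseudo_thellier_estimate(nrm_remaining, arm_acquired, b_lab, calibration=1.0):
    """Paleointensity from the best-fit slope of an NRM-demagnetization
    vs. ARM-acquisition plot. `calibration` maps the ARM bias field to an
    equivalent ancient field; it is sample-dependent and set to 1.0 here
    purely for illustration."""
    slope, _ = np.polyfit(arm_acquired, nrm_remaining, 1)
    return abs(slope) * b_lab * calibration

# Hypothetical demagnetization/acquisition data at matched AF steps.
arm = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # ARM gained (arbitrary units)
nrm = np.array([10.0, 8.1, 6.0, 3.9, 2.1])  # NRM remaining at same steps
b_est = pseudo_thellier_estimate(nrm, arm, b_lab=50.0)  # b_lab in microtesla
print(f"paleointensity ~ {b_est:.1f} uT")   # slope ~ -1.0 -> ~50 uT
```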
Abstract:
The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition, and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition, and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. The MATLAB software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data in a way that highlights their similarities and differences. Identifying patterns in data of high dimension (more than three dimensions) is difficult because graphical representation of such data is impossible; PCA is therefore a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The joint transform correlator (JTC) is an optical correlator used to synthesize a frequency plane filter for coherent optical systems. The IPCA algorithm generally behaves better than the PCA algorithm in most of the applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and reasonable speed. Finally, in detection and recognition, the digital model performs better than the optical model.
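The PCA stage of such a pipeline is compact enough to sketch. The code below is a generic PCA compression/reconstruction via the SVD, with random arrays standing in for images; it illustrates only the baseline PCA, not the thesis's IPCA refinement or the optical JTC model.

```python
import numpy as np

def pca_compress(X, k):
    """Project rows of X (flattened images) onto the top-k principal
    components and reconstruct; returns the reconstruction and basis."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD gives the principal axes without forming the covariance matrix.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]              # top-k principal components (k x n_pixels)
    codes = Xc @ W.T        # compressed representation (n_images x k)
    X_hat = codes @ W + mu  # reconstruction from k numbers per image
    return X_hat, W

# Hypothetical data: 100 "images" of 64 pixels each, compressed to 8 numbers.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
X_hat, W = pca_compress(X, k=8)
mse = np.mean((X - X_hat) ** 2)  # reconstruction error vs. compression ratio
print(f"reconstruction MSE = {mse:.3f}")
```

The compression/recognition trade-off the abstract weighs comes down to how much variance the top-k components capture for a given k.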
Abstract:
With recent advances in remote sensing processing technology, it has become more feasible to begin analysis of the enormous historical archive of remotely sensed data. This historical data provides valuable information on a wide variety of topics that can influence the lives of millions of people if processed correctly and in a timely manner. One such field of benefit is landslide mapping and inventory. This data provides a historical reference for those who live near high-risk areas, so that future disasters may be avoided. In order to properly map landslides remotely, an optimum method must first be determined. Historically, mapping has been attempted using pixel-based methods such as unsupervised and supervised classification. These methods are limited in that they characterize an image only spectrally, based on single pixel values; the result is prone to false positives and often lacks meaningful objects. Recently, several reliable methods of Object Oriented Analysis (OOA) have been developed which utilize a full range of spectral, spatial, textural, and contextual parameters to delineate regions of interest. A comparison of these two approaches on a historical dataset of the landslide-affected city of San Juan La Laguna, Guatemala, has demonstrated the benefits of OOA methods over unsupervised classification. Overall accuracies of 96.5% and 94.3% and F-scores of 84.3% and 77.9% were achieved for the OOA and unsupervised classification methods, respectively. The larger difference in F-score results from the low precision of unsupervised classification, caused by poor false positive removal, the greatest shortcoming of this method.
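The accuracy/F-score gap reported above follows directly from the definitions. The short sketch below computes the F-score from hypothetical pixel counts and shows how low precision (poor false positive removal) depresses F even when overall accuracy stays high; the counts are invented, not the study's.

```python
def f_score(tp, fp, fn):
    """F1 = 2PR/(P+R) from true positives, false positives, and
    false negatives. Precision P = tp/(tp+fp); recall R = tp/(tp+fn).
    Many false positives drag P, and hence F, down even when the
    (overwhelmingly non-landslide) background keeps overall accuracy high."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical pixel counts for a landslide map.
print(f"F = {f_score(tp=800, fp=150, fn=120):.3f}")  # ~0.856
```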