16 results for LIDAR (light detection and ranging)

in Digital Commons at Florida International University


Relevance:

100.00%

Publisher:

Abstract:

Airborne LIDAR (Light Detection and Ranging) is a relatively new technique that rapidly and accurately measures micro-topographic features. This study compares topography derived from LIDAR with subsurface karst structures mapped in three dimensions with ground penetrating radar (GPR). Over 500 km of LIDAR data were collected in 1995 by the NASA ATM instrument. The LIDAR data were processed and analyzed to identify closed depressions. A GPR survey was then conducted at a 200 by 600 m site to determine whether the target features are associated with buried karst structures. The GPR survey resolved two major depressions in the top of a clay-rich layer at ~10 m depth. These features are interpreted as buried dolines and are associated spatially with subtle (<1 m) trough-like depressions in the topography resolved from the LIDAR data. This suggests that airborne LIDAR may be a useful tool for indirectly detecting subsurface features associated with sinkhole hazards.
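The closed-depression step the abstract describes can be sketched with a standard DEM operation: fill depressions by morphological reconstruction, then difference against the original surface. This is a hedged illustration only; the toy DEM, the pit depth, and the simple iterative filler below are invented, not the processing applied to the NASA ATM data.

```python
import numpy as np

def fill_depressions(dem):
    """Fill closed depressions (sketch of reconstruction by erosion).

    Start from a surface pinned to the DEM at the border and set to the
    maximum elevation inside, then repeatedly lower each interior cell
    to the higher of its DEM elevation and its lowest neighbour.
    """
    filled = np.full_like(dem, dem.max(), dtype=float)
    filled[0, :], filled[-1, :] = dem[0, :], dem[-1, :]
    filled[:, 0], filled[:, -1] = dem[:, 0], dem[:, -1]
    while True:
        nbr = np.minimum.reduce([          # lowest 4-neighbour
            filled[:-2, 1:-1], filled[2:, 1:-1],
            filled[1:-1, :-2], filled[1:-1, 2:],
        ])
        new = np.maximum(dem[1:-1, 1:-1], nbr)
        if np.array_equal(new, filled[1:-1, 1:-1]):
            break
        filled[1:-1, 1:-1] = new
    return filled

dem = np.ones((5, 5))        # flat 1 m surface
dem[2, 2] = 0.0              # one closed pit
depth = fill_depressions(dem) - dem
print(depth[2, 2])           # 1.0
```

Cells where `depth` exceeds a noise floor would then be reported as candidate closed depressions for follow-up survey.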

Relevance:

100.00%

Publisher:

Abstract:

Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about many kinds of geometric objects, such as high-resolution digital terrain models (DTMs), buildings, and trees. In the past decade, LIDAR has attracted increasing interest from researchers in remote sensing and GIS. Compared to traditional data sources such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometric information, because many raster image processing techniques cannot be applied directly to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to automatically extract information about different kinds of geometric objects, such as terrain and buildings, from LIDAR data. These objects are essential to numerous applications such as flood modeling, landslide prediction, and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region-growing algorithm based on a plane-fitting technique.
Raw footprints of the segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations that remove the noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then adjusted. Since the adjusting operations designed for simple building models do not work well on general 2D topology, a 2D snake algorithm is proposed for topology adjustment. The 2D snake algorithm consists of newly defined energy functions for topology adjustment and a linear algorithm to find the minimal energy value of 2D snake problems. Data sets from urbanized areas including large institutional, commercial, and small residential buildings were used to test the proposed framework. The results demonstrate that the framework performs very well.
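The progressive morphological filter described above can be sketched on a gridded surface: each pass opens the surface with a growing window, and points standing above the opened surface by more than that pass's threshold are flagged as non-ground. The window sizes, thresholds, and toy building below are illustrative assumptions, not parameters from the dissertation.

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(grid, windows=(3, 5, 9),
                                     thresholds=(0.5, 1.0, 2.0)):
    """Flag non-ground cells in a gridded LIDAR surface (sketch).

    Each pass opens the surface with a larger window; cells standing
    more than that pass's threshold above the opened surface are
    flagged as non-ground (vehicles, vegetation, buildings).
    """
    surface = grid.astype(float)
    nonground = np.zeros(grid.shape, dtype=bool)
    for w, dh in zip(windows, thresholds):
        opened = grey_opening(surface, size=(w, w))
        nonground |= (surface - opened) > dh
        surface = opened
    return nonground

# 1 m grid: flat ground at 10 m with a 4 x 4 cell, 5 m tall building
grid = np.full((20, 20), 10.0)
grid[8:12, 8:12] = 15.0
mask = progressive_morphological_filter(grid)
print(int(mask.sum()))   # 16 building cells flagged, ground kept
```

The growing window is what lets the filter remove small objects first and large buildings later without cutting into the terrain.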

Relevance:

100.00%

Publisher:

Abstract:

Airborne Light Detection and Ranging (LIDAR) technology has become the primary method of deriving high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating the ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying LIDAR data at the point level, linear computational complexity, and preservation of the geometric shapes of terrain features. The filter works well in an urban setting with gentle slopes and a mixture of vegetation and buildings. However, the PM filter often incorrectly removes ground measurements in topographically high areas, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for areas with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes in topographic slope and the characteristics of non-terrain objects. A comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the "cut-off" points removed incorrectly by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that it reduces the filtering error by 20% on average compared with the method used by the popular commercial software TerraScan.
The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to identify ground measurements effectively and efficiently for complex terrain in a large LIDAR dataset. The GAPM filter is highly automatic and requires little human input; it can therefore significantly reduce the effort of manually processing voluminous LIDAR measurements.
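The slope-adaptive thresholding idea can be illustrated in miniature: estimate the terrain trend, then scale the elevation-difference threshold with it instead of holding it constant. The profile-based trend fit, the base threshold `dh0`, and the cap below are assumptions made for this sketch, not the GAPM formulation itself.

```python
import numpy as np

def adaptive_elevation_threshold(profile, cell, window, dh0=0.3, dh_max=3.0):
    """Scale a PM-style elevation threshold with the fitted terrain slope.

    The slope comes from a least-squares line through a 1-D elevation
    profile; the threshold grows with slope times window size, capped
    at dh_max so steep terrain cannot disable the filter entirely.
    """
    x = np.arange(len(profile)) * cell
    slope = abs(np.polyfit(x, profile, 1)[0])   # fitted terrain trend
    return min(dh0 + slope * window * cell, dh_max)

flat = np.full(50, 12.0)                    # level terrace
steep = 12.0 + 0.3 * np.arange(50.0)        # 30% grade hillside
t_flat = adaptive_elevation_threshold(flat, 1.0, 9)
t_steep = adaptive_elevation_threshold(steep, 1.0, 9)
print(round(t_flat, 2), round(t_steep, 2))  # 0.3 3.0
```

On the steep profile the threshold relaxes, so genuine hilltop ground points are less likely to be "cut off" as non-ground.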

Relevance:

100.00%

Publisher:

Abstract:

Seedlings of the red mangrove, Rhizophora mangle L., were grown under light conditions differing in both photosynthetic photon flux density (PPFD) and spectral quality (red:far-red ratio, R:FR). During the first 8 mo of development, parameters of stem, leaf, and root growth were affected by PPFD. Significant responses to lowered R:FR, however, were limited to internode extension. The results are moderately indicative of a strategy to persist in shade, but illustrate the complexity of light responses and suggest that precise categorization as shade-tolerant or -intolerant may be unbefitting for this species at this particular stage of development.

Relevance:

100.00%

Publisher:

Abstract:

Choosing between Light Rail Transit (LRT) and Bus Rapid Transit (BRT) systems is often controversial and not an easy task for transportation planners contemplating an upgrade of their public transportation services. These two transit systems provide comparable services for medium-sized cities, from suburban neighborhoods to the Central Business District (CBD), and utilize similar right-of-way (ROW) categories. This research aims to develop a method to assist transportation planners and decision makers in determining the more feasible of the two systems. Cost estimation is a major factor when evaluating a transit system. Typically, LRT is more expensive to build and implement than BRT but has significantly lower Operating and Maintenance (OM) costs. This dissertation examines the factors impacting capacity and costs and develops capacity-based cost models for the LRT and BRT systems. Various ROW categories and alignment configurations are also considered in the models. Kikuchi's fleet size model (1985) and a cost allocation method are used to estimate capacity and costs. Comparisons between LRT and BRT are complicated by the many possible transportation planning and operation scenarios. Finally, a user-friendly computer interface integrating the capacity-based cost models, the LRT and BRT Cost Estimator (LBCostor), was developed in Microsoft Visual Basic to facilitate the process and guide users through the comparison. The cost models and LBCostor can be used to analyze transit volumes, alignments, ROW configurations, numbers of stops and stations, headway, vehicle size, and traffic signal timing at intersections. Planners can make changes and adjustments to match their operating practices.
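The fleet-size component of such cost models reduces, at its core, to round-trip cycle time divided by headway. The sketch below shows only that basic relationship; the route numbers are invented, and Kikuchi's full model includes refinements not reproduced here.

```python
import math

def fleet_size(route_km, speed_kmh, dwell_min, stops, headway_min,
               layover_min=5.0):
    """Vehicles needed to hold a headway on a two-way route.

    fleet = ceil(round-trip cycle time / headway); cycle time is the
    two-way running time plus dwell at stops plus terminal layover.
    """
    one_way = 60.0 * route_km / speed_kmh + dwell_min * stops
    cycle = 2.0 * (one_way + layover_min)
    return math.ceil(cycle / headway_min)

# hypothetical 12 km line, 25 km/h average speed, 20 s dwell at each of
# 15 stations, 10-minute headway
n = fleet_size(12, 25, 1 / 3, 15, 10)
print(n)   # 8
```

Because fleet size drives both capital and OM costs, a capacity-based comparison of LRT and BRT hinges on inputs like these.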

Relevance:

100.00%

Publisher:

Abstract:

The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the data that must be processed. With expanding resolutions and evolving compression standards, high performance with flexible architectures is needed to allow quick upgrades. Technology continues to advance in display resolution, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, with tradeoffs among processing performance (achieving specified frame rates on large image data sets), power, and cost constraints. New architectures are needed to keep pace with the rapid innovations in video and imaging. This dissertation contains dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The contributions of the dissertation are as follows. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe distance factor and develop an algorithm for detecting occlusion occurrence during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analyzed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient threshold approach for accurate classification.
(4) We design a gesture recognition system using a hardware/software co-simulation neural network for the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum-distance algorithm that uses only 0.39% of the image as the feature vector to train and test the system design. The gestures involved in different applications may vary; therefore, it is essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
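Contribution (1) can be illustrated with a generic running-average background subtractor whose global threshold is itself a running average of the mean frame difference. The update rules, constants, and synthetic frames below are assumptions for the sketch; the dissertation's exact RAMT formulation and its FPGA design are not reproduced.

```python
import numpy as np

def ramt_step(frame, background, threshold, alpha=0.05, beta=0.05, k=3.0):
    """One frame of running-average background subtraction (sketch).

    The background is a running average of the frames; the single
    global threshold is a running average of the mean frame difference,
    so it adapts to the scene instead of being hand-tuned.
    """
    diff = np.abs(frame - background)
    mask = diff > k * threshold                    # foreground pixels
    background = (1 - alpha) * background + alpha * frame
    threshold = (1 - beta) * threshold + beta * diff.mean()
    return mask, background, threshold

rng = np.random.default_rng(0)
bg, thr = np.full((40, 40), 100.0), 1.0
for _ in range(30):                                # burn in on empty scene
    _, bg, thr = ramt_step(100 + rng.normal(0, 2, (40, 40)), bg, thr)
frame = 100 + rng.normal(0, 2, (40, 40))
frame[10:18, 10:18] += 60                          # bright 8 x 8 target
mask, bg, thr = ramt_step(frame, bg, thr)
print(bool(mask[10:18, 10:18].all()))              # True
```

Because the threshold tracks the scene's own noise level, the same loop works indoors and outdoors without retuning, which is the property the abstract highlights.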

Relevance:

100.00%

Publisher:

Abstract:

The presence of inhibitory substances in biological forensic samples has affected, and continues to affect, the quality of the data generated by DNA typing processes. Although the chemistries used during these procedures have been enhanced to mitigate the effects of these deleterious compounds, some challenges remain. Inhibitors can be components of the samples, of the substrate where the samples were deposited, or chemicals associated with the DNA purification step. Therefore, a thorough understanding of the extraction processes and their ability to handle the various types of inhibitory substances can help define the best analytical processing for any given sample. A series of experiments was conducted to establish the inhibition tolerance of quantification and amplification kits using common inhibitory substances, in order to determine whether current laboratory practices are optimal for identifying potential problems associated with inhibition. DART mass spectrometry was used to determine the amount of inhibitor carried over after sample purification, its correlation with the initial inhibitor input in the sample, and the overall effect on the results. Finally, a novel alternative for gathering investigative leads from samples that would otherwise be ineffective for DNA typing, due to large amounts of inhibitory substances and/or environmental degradation, was tested. This included generating data associated with microbial peak signatures to identify the locations of clandestine human graves. Results demonstrate that the current methods for assessing inhibition are not necessarily accurate, as samples that appear inhibited in the quantification process can yield full DNA profiles, while those that do not indicate inhibition may suffer from lowered amplification efficiency or PCR artifacts.
The extraction methods tested were able to remove >90% of the inhibitors from all samples, with the exception of phenol, which was present in variable amounts whenever the organic extraction approach was used. Although the results suggest that most inhibitors have minimal effect on downstream applications, analysts should exercise caution when selecting the best extraction method for particular samples, as casework DNA samples are often present in small quantities and can contain an overwhelming amount of inhibitory substances.

Relevance:

100.00%

Publisher:

Abstract:

In the last decade, a large number of social media services have emerged and become widely used in people's daily lives as important tools for sharing and acquiring information. With a substantial amount of user-contributed text data on social media, it has become necessary to develop methods and tools for analyzing this emerging type of text, in order to better utilize it to deliver meaningful information to users. Previous work on text analytics over the last several decades focused mainly on traditional types of text such as emails, news, and academic literature, and several issues critical to text data on social media have not been well explored: 1) how to detect sentiment in text on social media; 2) how to make use of social media's real-time nature; 3) how to address information overload for flexible information needs. In this dissertation, we focus on these three problems. First, to detect sentiment in text on social media, we propose a non-negative matrix tri-factorization (tri-NMF) based dual active supervision method to minimize human labeling efforts for the new type of data. Second, to make use of social media's real-time nature, we propose approaches to detect events in text streams on social media. Third, to address information overload for flexible information needs, we propose two summarization frameworks: a dominating-set-based summarization framework and a learning-to-rank-based summarization framework. The dominating-set-based framework can be applied to different types of summarization problems, while the learning-to-rank-based framework utilizes existing training data to guide new summarization tasks. In addition, we integrate these techniques in an application study of event summarization for sports games as an example of how to better utilize social media data.
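The dominating-set-based summarization idea can be sketched directly: treat sentences as graph nodes, connect pairs whose similarity exceeds a threshold, and greedily pick sentences until every sentence is covered by something in the summary. The similarity matrix, threshold, and greedy selection below are toy assumptions for illustration, not the dissertation's exact algorithm.

```python
import numpy as np

def dominating_set_summary(sim, tau=0.3):
    """Greedy dominating-set sentence selection (sketch of the idea).

    Connect sentence pairs whose similarity reaches tau, then greedily
    pick the sentence covering the most uncovered sentences until all
    sentences are covered by something in the summary.
    """
    n = sim.shape[0]
    adj = (sim >= tau) | np.eye(n, dtype=bool)
    covered = np.zeros(n, dtype=bool)
    picked = []
    while not covered.all():
        best = int((adj & ~covered).sum(axis=1).argmax())
        picked.append(best)
        covered |= adj[best]
    return picked

# toy similarities: sentences 0-2 paraphrase each other, as do 3-4
sim = np.array([[1.0, 0.8, 0.7, 0.0, 0.1],
                [0.8, 1.0, 0.6, 0.1, 0.0],
                [0.7, 0.6, 1.0, 0.0, 0.0],
                [0.0, 0.1, 0.0, 1.0, 0.9],
                [0.1, 0.0, 0.0, 0.9, 1.0]])
summary = dominating_set_summary(sim)
print(summary)   # [0, 3] -- one sentence per topic cluster
```

A dominating set guarantees every sentence is "near" something in the summary, which is why the framework generalizes across summarization tasks.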

Relevance:

100.00%

Publisher:

Abstract:

The objective of this study is the design and development of an enzyme-linked biosensor for the detection and quantification of phosphate species. Various concentrations of phosphate species were tested in this study. Phosphate is a vital nutrient for all living organisms. Phosphate compounds are found in nature (e.g., water sediments), often in an inorganic form. The amount of phosphate in the environment strongly influences the functioning of living organisms. An excess of phosphate in the environment causes eutrophication, which in turn creates an oxygen deficit for other living organisms; fish kills and habitat degradation in the water result. In contrast, low phosphate concentrations cause the death of vegetation, since plants use inorganic phosphate for photosynthesis, respiration, and the regulation of enzymes. Therefore, the phosphate quantity in lakes and rivers must be monitored. Results demonstrated that phosphate species could be detected at various concentrations with the enzyme-linked biosensor developed in this research.
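Quantification with such a biosensor typically rests on a calibration curve fit to standards of known concentration. The sketch below shows that generic step with invented numbers; it is not data or the calibration procedure from this study.

```python
import numpy as np

# hypothetical calibration: biosensor signal vs. phosphate standards
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])      # standards, in uM
signal = np.array([0.10, 0.62, 1.11, 2.08, 4.12])  # sensor response

slope, intercept = np.polyfit(conc, signal, 1)     # linear calibration fit

def quantify(reading):
    """Invert the calibration line to estimate concentration (uM)."""
    return (reading - intercept) / slope

est = quantify(1.6)
print(round(est, 1))   # close to 15 uM
```

An unknown water sample's reading is then converted to a concentration by inverting the fitted line, which is how monitoring of lakes and rivers would use the sensor.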

Relevance:

100.00%

Publisher:

Abstract:

Classification procedures, including atmospheric correction of satellite images and classification performance using calibration and validation at different levels, were investigated in the context of a coarse land-cover classification scheme for the Pachitea Basin. Two correction methods were tested against no correction in terms of correcting reflectance toward a common response for pseudo-invariant features (PIFs). The accuracy of classifications derived from each of the three methods was then assessed in a discriminant analysis using cross-validation at the pixel, polygon, region, and image levels. Results indicate that only regression-adjusted images using PIFs show no significant between-image difference in any of the bands. A comparison of classifications at different levels suggests, though, that at the pixel, polygon, and region levels the accuracy of the classifications does not differ significantly between corrected and uncorrected images. Spatial patterns of land cover were analyzed in terms of colonization history, infrastructure, land suitability, and landownership. The actual use of the land is driven mainly by the ability to access the land and markets, as is evident in the distribution of land cover as a function of distance to rivers and roads. When all rivers and roads are considered, the threshold distance at which agro-pastoral land cover switches from over-represented to under-represented is about 1 km. Suggested best land uses seem not to affect the choice of land use. Differences in the abundance of land cover between watersheds are more pronounced than differences between colonist and indigenous groups.
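The regression adjustment using PIFs amounts to fitting a per-band gain and offset that map the subject image's PIF reflectances onto the reference image's, then applying that correction to the whole band. The reflectance values below are invented for illustration, not measurements from the Pachitea Basin scenes.

```python
import numpy as np

def pif_normalize(band, ref_pif, subj_pif):
    """Regression-adjust a band using pseudo-invariant features.

    Fit gain and offset so the subject scene's PIF reflectances match
    the reference scene's, then apply the correction to every pixel.
    """
    gain, offset = np.polyfit(subj_pif, ref_pif, 1)
    return gain * band + offset

# invented PIF reflectances (e.g., roads, bare rock) in two scenes
ref_pif = np.array([0.10, 0.20, 0.30, 0.40])
subj_pif = np.array([0.16, 0.28, 0.40, 0.52])   # gain/haze-shifted scene
band = np.array([0.16, 0.34, 0.52])
corrected = pif_normalize(band, ref_pif, subj_pif)
print(np.round(corrected, 2))   # back on the reference scale
```

Because only relative consistency between scenes matters for multi-date classification, this relative normalization can substitute for a full atmospheric correction, which is consistent with the study's finding.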

Relevance:

100.00%

Publisher:

Abstract:

Gunshot residue (GSR) is the term used to describe the particles originating from different parts of the firearm and ammunition during discharge. A fast and practical field tool to detect the presence of GSR can assist law enforcement in the accurate identification of subjects. A novel field sampling device is presented for the first time for the fast detection and quantitation of volatile organic compounds (VOCs). Capillary microextraction of volatiles (CMV) is a headspace sampling technique that provides fast results (<2 min sampling time) and is reported to be a versatile, high-efficiency sampling tool. The CMV device can be coupled to a Gas Chromatography-Mass Spectrometry (GC-MS) instrument by installing a thermal separation probe in the injection port of the GC. An analytical method using the CMV device was developed for the detection of 17 compounds commonly found in polluted environments. The acceptability of the CMV as a field sampling method for the detection of VOCs is demonstrated by following the criteria established in the Environmental Protection Agency (EPA) compendium method TO-17. The CMV device was used, for the first time, for the detection of VOCs on swabs from the hands of shooters and non-shooters, and on spent cartridges from different types of ammunition (i.e., pistol, rifle, and shotgun). The proposed method consists of the headspace extraction of VOCs from the smokeless powder present in the propellant of the ammunition. The sensitivity of this method was demonstrated with method detection limits (MDLs) of 4-26 ng for diphenylamine (DPA), nitroglycerine (NG), 2,4-dinitrotoluene (2,4-DNT), and ethyl centralite (EC). In addition, a fast method was developed for the detection of the inorganic components (i.e., Ba, Pb, and Sb) characteristic of GSR by Laser-Induced Breakdown Spectroscopy (LIBS).
Advantages of LIBS include fast analysis (~12 seconds per sample) and good sensitivity, with expected MDLs in the range of 0.1-20 ng for the target elements. Statistical analysis of the results from both techniques was performed to determine any correlation between the variables analyzed. This work demonstrates that the information collected from the analysis of organic components has the potential to improve the detection of GSR.
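MDL figures like those quoted above are conventionally computed from replicate low-level measurements as the Student's t value (n−1 degrees of freedom, 99% confidence) times their standard deviation, per EPA practice. The replicate values below are hypothetical, not the dissertation's data.

```python
import statistics
from scipy import stats

def method_detection_limit(replicates, alpha=0.01):
    """EPA-style MDL: t(n-1 df, 99%) times the replicate standard deviation."""
    t = stats.t.ppf(1 - alpha, len(replicates) - 1)
    return t * statistics.stdev(replicates)

# seven hypothetical spiked replicates of DPA, in ng
reps = [10.2, 9.8, 10.5, 9.9, 10.1, 10.4, 9.7]
mdl = method_detection_limit(reps)
print(round(mdl, 2))   # 0.95
```

Seven replicates is the traditional minimum for this calculation, which is why MDLs are usually reported alongside the replicate count.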

Relevance:

100.00%

Publisher:

Abstract:

Cetaceans are aquatic mammals that rely primarily on sound for most daily tasks. A compendium of sounds is emitted for orientation, prey detection, predator avoidance, and communication. Communicative sounds are among the most studied cetacean signals, particularly those referred to as tonal sounds. Because tonal sounds have been studied especially well in social dolphins, it has been assumed that these sounds evolved as a social adaptation. However, whistles have been reported in ‘solitary’ species and have been secondarily lost three times in social lineages. It is therefore necessary to examine closely the association, if any, between whistles and sociality instead of merely assuming it. Several hypotheses have been proposed to explain the evolutionary history of cetacean tonal sounds. The main goal of this dissertation is to cast light on that evolutionary history by testing these hypotheses with a combination of comparative phylogenetic and field methods. This dissertation provides the first species-level phylogeny of Cetacea and the first phylogenetic tests of evolutionary hypotheses about cetacean communicative signals. Tonal sound evolution is complex in that it has likely been shaped by a combination of factors that influence different aspects of acoustic structure. At the interspecific level, the results suggest that only the minimum frequency of tonal sounds is constrained by body size. Group size also influences minimum frequency: species that live in large groups tend to produce higher-frequency tonal sounds. The evolutionary histories of tonal sounds and sociality may be intertwined, but in a complex manner that rejects simplistic views such as the hypothesis that tonal sounds evolved ‘for’ social communication in dolphins. Levels of social and tonal sound complexity nevertheless correlate, indicating the importance of tonal sounds in social communication.
At the intraspecific level, variation in the frequency and temporal parameters of tonal sounds may be a product of genetic isolation and local levels of underwater noise. This dissertation provides one of the first insights into the evolution of cetacean tonal sounds in a phylogenetic context, and points out key species where future studies would be valuable for enriching our understanding of the other factors that play a role in tonal sound evolution.

Relevance:

100.00%

Publisher:

Abstract:

Zinc oxide and graphene nanostructures are important technological materials because of their unique properties and potential applications in future generations of electronic and sensing devices. This dissertation presents strategies for growing zinc oxide nanostructures (thin films and nanowires) and graphene, and their applications as enhanced field-effect transistors, chemical sensors, and transparent flexible electrodes. Nanostructured zinc oxide (ZnO) and low-gallium-doped zinc oxide (GZO) thin films were synthesized by a magnetron sputtering process. Zinc oxide nanowires (ZNWs) were grown by a chemical vapor deposition method. Field-effect transistors (FETs) of ZnO and GZO thin films and ZNWs were fabricated by standard photo- and electron-beam lithography processes. The electrical characteristics of these devices were investigated with nondestructive surface cleaning and ultraviolet irradiation treatment at high temperature and under vacuum. GZO thin-film transistors showed a mobility of ∼5.7 cm²/V·s at a low operating voltage of <5 V and a low turn-on voltage of ∼0.5 V, with a subthreshold swing of ∼85 mV/decade. Bottom-gated FETs fabricated from ZNWs exhibited a very high on-to-off ratio (∼10⁶) and mobility (∼28 cm²/V·s). A bottom-gated FET showed a large hysteresis of ∼5.0 to 8.0 V, which was significantly reduced to ∼1.0 V by the surface treatment process. The results demonstrate that charge transport in ZnO nanostructures depends strongly on surface environmental conditions and can be explained by the formation of a depletion layer at the surface by various surface states. A nitric oxide (NO) gas sensor using a single ZNW functionalized with Cr nanoparticles was developed. The sensor exhibited an average sensitivity of ∼46% and a minimum detection limit of ∼1.5 ppm for NO gas. The sensor is also selective toward NO gas, as demonstrated by a cross-sensitivity test with N₂, CO, and CO₂ gases.
Graphene film on copper foil was synthesized by a chemical vapor deposition method. A hot-press lamination process was developed for transferring the graphene film to a flexible polymer substrate. The graphene/polymer film exhibited a high-quality, flexible, transparent conductive structure with unique electrical-mechanical properties: ∼88.80% light transmittance and ∼1.174 kΩ/sq sheet resistance. The application of a graphene/polymer film as a flexible and transparent electrode for field emission displays was demonstrated.
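For context on the mobility figures quoted above: in the linear regime, field-effect mobility is commonly extracted from transconductance as μ = gm·L/(W·C_ox·V_DS). The device numbers below are illustrative assumptions, not measurements from this work.

```python
def fet_mobility(gm, length_cm, width_cm, c_ox, v_ds):
    """Linear-regime field-effect mobility from transconductance.

    mu = gm * L / (W * C_ox * V_DS), in cm^2/V.s when lengths are in
    cm and the gate capacitance per area is in F/cm^2.
    """
    return gm * length_cm / (width_cm * c_ox * v_ds)

# illustrative device: gm = 2 uS, L = 10 um, W = 100 um,
# C_ox = 34.5 nF/cm^2, V_DS = 1 V
mu = fet_mobility(2e-6, 10e-4, 100e-4, 34.5e-9, 1.0)
print(round(mu, 1))   # 5.8 cm^2/V.s
```

The same extraction applies to nanowire FETs once the gate capacitance is replaced with an appropriate cylinder-on-plane model.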