11 results for Staff detection and removal

at Digital Commons at Florida International University


Relevance:

100.00%

Publisher:

Abstract:

Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometric objects, such as high-resolution digital terrain models (DTMs), buildings, and trees. In the past decade, LIDAR has attracted growing interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometric information from LIDAR measurements, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points.

In this dissertation, a framework is proposed to automatically extract information about different kinds of geometric objects, such as terrain and buildings, from LIDAR measurements; this information is essential to numerous applications such as flood modeling, landslide prediction, and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region-growing algorithm based on a plane-fitting technique. Raw footprints for the segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove the noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then further adjusted. Since the adjusting operations for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed to adjust it. The 2D snake algorithm consists of newly defined energy functions for topology adjustment and a linear algorithm that finds the minimal energy value of the 2D snake problem. Data sets from urbanized areas including large institutional, commercial, and small residential buildings were employed to test the proposed framework. The results demonstrate that the proposed framework achieves very good performance.
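
The progressive morphological filter described above lends itself to a short illustration. The sketch below is not the dissertation's implementation; it assumes the LIDAR points have already been gridded into a minimum-elevation raster and uses SciPy's grey-scale morphological opening as the filtering primitive, with hypothetical parameter defaults.

```python
# Illustrative sketch of a progressive morphological ground filter,
# assuming the point cloud has been gridded into a min-elevation raster.
import numpy as np
from scipy.ndimage import grey_opening

def progressive_morphological_filter(grid, cell_size=1.0, max_window=20,
                                      slope=0.3, dh0=0.5, dh_max=3.0):
    """Flag grid cells as non-ground by comparing each cell to a
    morphological opening computed with progressively larger windows."""
    surface = grid.astype(float).copy()
    non_ground = np.zeros(grid.shape, dtype=bool)
    window = 3
    while window <= max_window:
        opened = grey_opening(surface, size=(window, window))
        # The elevation-difference threshold grows with the window size,
        # so large buildings are removed while steep terrain is preserved.
        dh = min(dh0 + slope * (window - 1) * cell_size, dh_max)
        non_ground |= (surface - opened) > dh
        surface = opened
        window = 2 * window - 1  # gradually increase the window size
    return non_ground
```

Cells whose elevation drops by more than the window-dependent threshold after opening are flagged as non-ground (vehicles, vegetation, buildings), mirroring the gradual window and threshold increase described in the abstract.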

Relevance:

100.00%

Publisher:

Abstract:

The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures that allow for quick upgradability. Technology continues to advance in image display resolutions, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, but with tradeoffs among processing performance (achieving specified frame rates on large image data sets), power, and cost constraints. New architectures are needed to keep pace with the fast innovations in video and imaging. This dissertation therefore includes dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance.

The contributions of the dissertation are as follows. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe distance factor and develop an algorithm for detecting occlusion occurrence during target tracking; a novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure, and the method is analyzed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient-threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulation neural network for the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system. Because the gestures involved in different applications may vary, it is essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
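
The abstract does not give the exact RAMT formulation, so the following is only a hedged sketch of the general idea: a running-average background model whose subtraction threshold is a single global value adapted from the mean frame difference. The function and parameter names (`detect_targets`, `alpha`, `k`) are illustrative assumptions, and a real deployment would target the FPGA rather than NumPy.

```python
# Sketch of background subtraction with a running-average model and a
# globally adapted threshold (a simplification of the RAMT idea; the
# exact formulation in the dissertation may differ).
import numpy as np

def detect_targets(frames, alpha=0.05, k=2.5):
    """Yield a boolean foreground mask for each frame after the first."""
    background = frames[0].astype(np.float32)
    for frame in frames[1:]:
        frame = frame.astype(np.float32)
        diff = np.abs(frame - background)
        # Global threshold adapted from the running mean of the difference,
        # so the same code can work indoors and outdoors.
        threshold = k * diff.mean()
        mask = diff > threshold
        # Update the background only where no target was detected.
        background[~mask] = (1 - alpha) * background[~mask] + alpha * frame[~mask]
        yield mask
```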

Relevance:

100.00%

Publisher:

Abstract:

Airborne LIDAR (Light Detection and Ranging) is a relatively new technique that rapidly and accurately measures micro-topographic features. This study compares topography derived from LIDAR with subsurface karst structures mapped in three dimensions with ground penetrating radar (GPR). Over 500 km of LIDAR data were collected in 1995 by the NASA ATM instrument. The LIDAR data were processed and analyzed to identify closed depressions. A GPR survey was then conducted at a 200 by 600 m site to determine if the target features are associated with buried karst structures. The GPR survey resolved two major depressions in the top of a clay-rich layer at ~10 m depth. These features are interpreted as buried dolines and are associated spatially with subtle (<1 m) trough-like depressions in the topography resolved from the LIDAR data. This suggests that airborne LIDAR may be a useful tool for indirectly detecting subsurface features associated with sinkhole hazards.
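
As a rough illustration of how closed depressions can be identified from gridded LIDAR topography, the sketch below fills sinks by morphological reconstruction and keeps cells whose fill depth exceeds a small threshold. This is a common generic approach, not necessarily the processing chain used in the study; the prior DEM gridding step and the `min_depth` value are assumptions.

```python
# Sketch of closed-depression detection on a gridded LIDAR DEM using
# sink filling by morphological reconstruction (one common approach;
# the study's actual processing chain is not detailed in the abstract).
import numpy as np
from skimage.morphology import reconstruction

def closed_depressions(dem, min_depth=0.2):
    """Return a boolean mask of cells inside closed depressions
    deeper than min_depth (same units as the DEM)."""
    dem = dem.astype(float)
    seed = dem.copy()
    seed[1:-1, 1:-1] = dem.max()           # raise the interior of the seed
    filled = reconstruction(seed, dem, method='erosion')  # fill all sinks
    return (filled - dem) > min_depth       # depth of each filled sink
```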

Relevance:

100.00%

Publisher:

Abstract:

The presence of inhibitory substances in biological forensic samples has affected, and continues to affect, the quality of the data generated following DNA typing processes. Although the chemistries used during these procedures have been enhanced to mitigate the effects of such deleterious compounds, some challenges remain. Inhibitors can be components of the samples themselves, of the substrate where samples were deposited, or of the chemical(s) associated with the DNA purification step. Therefore, a thorough understanding of the extraction processes and their ability to handle the various types of inhibitory substances can help define the best analytical processing for any given sample. A series of experiments was conducted to establish the inhibition tolerance of quantification and amplification kits using common inhibitory substances, in order to determine whether current laboratory practices are optimal for identifying potential problems associated with inhibition. DART mass spectrometry was used to determine the amount of inhibitor carryover after sample purification, its correlation to the initial inhibitor input in the sample, and the overall effect on the results. Finally, a novel alternative for gathering investigative leads from samples that would otherwise be ineffective for DNA typing, due to large amounts of inhibitory substances and/or environmental degradation, was tested; this included generating data associated with microbial peak signatures to identify locations of clandestine human graves. Results demonstrate that the current methods for assessing inhibition are not necessarily accurate, as samples that appear inhibited in the quantification process can yield full DNA profiles, while those that do not indicate inhibition may suffer from lowered amplification efficiency or PCR artifacts. The extraction methods tested were able to remove >90% of the inhibitors from all samples, with the exception of phenol, which was present in variable amounts whenever the organic extraction approach was used. Although the results suggest that most inhibitors have minimal effect on downstream applications, analysts should exercise caution when selecting the best extraction method for particular samples, as casework DNA samples are often present in small quantities and can contain an overwhelming amount of inhibitory substances.

Relevance:

100.00%

Publisher:

Abstract:

In the last decade, large numbers of social media services have emerged and been widely adopted in people's daily life as important tools for sharing and acquiring information. With a substantial amount of user-contributed text data on social media, it becomes necessary to develop methods and tools for analyzing this emerging type of text, in order to better utilize it to deliver meaningful information to users.

Previous work on text analytics over the last several decades has mainly focused on traditional types of text, such as emails, news, and academic literature, and several issues critical to text data on social media have not been well explored: 1) how to detect sentiment from text on social media; 2) how to make use of social media's real-time nature; and 3) how to address information overload for flexible information needs.

In this dissertation, we focus on these three problems. First, to detect sentiment in text on social media, we propose a non-negative matrix tri-factorization (tri-NMF) based dual active supervision method to minimize human labeling effort for this new type of data. Second, to make use of social media's real-time nature, we propose approaches to detect events from text streams on social media. Third, to address information overload for flexible information needs, we propose two summarization frameworks: a dominating-set-based summarization framework and a learning-to-rank-based summarization framework. The dominating-set-based framework can be applied to different types of summarization problems, while the learning-to-rank-based framework utilizes existing training data to guide new summarization tasks. In addition, we integrate these techniques in an application study of event summarization for sports games as an example of how to better utilize social media data.
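
To make the dominating-set idea concrete, here is a minimal greedy sketch: posts are nodes in a similarity graph, and posts are selected until every remaining post is similar to (or is) a selected one, which greedily approximates a minimum dominating set. The TF-IDF similarity, the threshold, and the function name are assumptions for illustration; the dissertation's exact formulation may differ.

```python
# Greedy sketch of dominating-set style summarization: build a similarity
# graph over posts and pick posts until every post is adjacent to (or is)
# a selected one. Illustrative only; similarity measure and threshold
# are assumptions, not the dissertation's formulation.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(posts, sim_threshold=0.3):
    tfidf = TfidfVectorizer(stop_words='english').fit_transform(posts)
    adj = cosine_similarity(tfidf) >= sim_threshold   # similarity graph edges
    uncovered = set(range(len(posts)))
    summary = []
    while uncovered:
        # Pick the post that covers the most still-uncovered posts
        # (greedy approximation of a minimum dominating set).
        best = max(uncovered, key=lambda i: sum(adj[i, j] for j in uncovered))
        summary.append(posts[best])
        uncovered -= {j for j in range(len(posts)) if adj[best, j]}
    return summary
```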

Relevance:

100.00%

Publisher:

Abstract:

The objective of this study is the design and development of an enzyme-linked biosensor for the detection and quantification of phosphate species; various concentrations of phosphate species were tested in this study. Phosphate is one of the vital nutrients for all living organisms. Phosphate compounds can be found in nature (e.g., in water sediments), and they often exist in an inorganic form. The amount of phosphate in the environment strongly influences the functioning of living organisms. An excess of phosphate in the environment causes eutrophication, which in turn causes an oxygen deficit for other living organisms; fish die and aquatic habitat degrades as a result. In contrast, low phosphate concentrations cause the death of vegetation, since plants utilize inorganic phosphate for photosynthesis, respiration, and the regulation of enzymes. Therefore, the phosphate concentration in lakes and rivers must be monitored. Results demonstrated that phosphate species could be detected in various organisms via the enzyme-linked biosensor developed in this research.

Relevance:

100.00%

Publisher:

Abstract:

Classification procedures, including atmospheric correction of satellite images as well as classification performance using calibration and validation at different levels, have been investigated in the context of a coarse land-cover classification scheme for the Pachitea Basin. Two different correction methods were tested against no correction in terms of correcting reflectance towards a common response for pseudo-invariant features (PIFs). The accuracy of classifications derived from each of the three methods was then assessed in a discriminant analysis using cross-validation at the pixel, polygon, region, and image levels. Results indicate that only regression-adjusted images using PIFs show no significant difference between images in any of the bands. A comparison of classifications at different levels suggests, though, that at the pixel, polygon, and region levels the accuracy of the classifications does not differ significantly between corrected and uncorrected images. Spatial patterns of land cover were analyzed in terms of colonization history, infrastructure, suitability of the land, and land ownership. The actual use of the land is driven mainly by the ability to access the land and markets, as is evident in the distribution of land cover as a function of distance to rivers and roads. When all rivers and roads are considered, the threshold distance at which agro-pastoral land cover switches from over-represented to under-represented is about 1 km. Best land-use recommendations seem not to affect the choice of land use. Differences in the abundance of land cover between watersheds are more pronounced than differences between colonist and indigenous groups.
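
The regression adjustment using PIFs can be sketched as a per-band linear mapping: fit a least-squares line between the subject and reference images over the PIF pixels and apply it to the whole band. The sketch below assumes co-registered arrays and an existing boolean PIF mask; it illustrates the general relative-normalization technique, not the study's exact procedure.

```python
# Sketch of relative radiometric normalization with pseudo-invariant
# features (PIFs): fit a per-band linear regression between the subject
# and reference images over the PIF pixels and apply it to the full band.
# Assumes co-registered arrays of shape (bands, rows, cols) and a 2D
# boolean PIF mask; these inputs are assumptions for illustration.
import numpy as np

def pif_normalize(subject, reference, pif_mask):
    normalized = np.empty_like(subject, dtype=np.float64)
    for b in range(subject.shape[0]):
        x = subject[b][pif_mask].ravel()
        y = reference[b][pif_mask].ravel()
        gain, offset = np.polyfit(x, y, 1)   # least-squares line y = gain*x + offset
        normalized[b] = gain * subject[b] + offset
    return normalized
```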

Relevance:

100.00%

Publisher:

Abstract:

Gunshot residue (GSR) is the term used to describe the particles originating from different parts of the firearm and ammunition during discharge. A fast and practical field tool to detect the presence of GSR can assist law enforcement in the accurate identification of subjects. A novel field sampling device is presented for the first time for the fast detection and quantitation of volatile organic compounds (VOCs). Capillary microextraction of volatiles (CMV) is a headspace sampling technique that provides fast results (<2 min sampling time) and is reported as a versatile and high-efficiency sampling tool. The CMV device can be coupled to a Gas Chromatography-Mass Spectrometry (GC-MS) instrument by installing a thermal separation probe in the injection port of the GC. An analytical method using the CMV device was developed for the detection of 17 compounds commonly found in polluted environments. The acceptability of the CMV as a field sampling method for the detection of VOCs is demonstrated by following the criteria established by the Environmental Protection Agency (EPA) compendium method TO-17. The CMV device was used, for the first time, for the detection of VOCs on swabs from the hands of shooters and non-shooters, and on spent cartridges from different types of ammunition (i.e., pistol, rifle, and shotgun). The proposed method consists of the headspace extraction of VOCs from the smokeless powders present in the propellant of ammunition. The sensitivity of this method was demonstrated with method detection limits (MDLs) of 4-26 ng for diphenylamine (DPA), nitroglycerine (NG), 2,4-dinitrotoluene (2,4-DNT), and ethyl centralite (EC). In addition, a fast method was developed for the detection of the inorganic components (i.e., Ba, Pb, and Sb) characteristic of GSR by Laser Induced Breakdown Spectroscopy (LIBS). Advantages of LIBS include fast analysis (~12 seconds per sample) and good sensitivity, with expected MDLs in the range of 0.1-20 ng for the target elements. Statistical analysis of the results from both techniques was performed to determine any correlation between the variables analyzed. This work demonstrates that the information collected from the analysis of organic components has the potential to improve the detection of GSR.
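
The method detection limits quoted above (4-26 ng) are conventionally estimated from replicate low-level spike measurements as a Student-t multiple of their standard deviation, in the spirit of EPA 40 CFR Part 136 Appendix B. The sketch below shows that conventional calculation with hypothetical replicate values; it is not necessarily the exact procedure the authors used.

```python
# Sketch of a conventional method detection limit (MDL) estimate from
# replicate low-level spike measurements: MDL = t(n-1, 99%) * s.
# Mirrors the EPA 40 CFR Part 136 Appendix B definition; the study's
# exact calculation may differ.
import numpy as np
from scipy import stats

def method_detection_limit(replicates):
    replicates = np.asarray(replicates, dtype=float)
    s = replicates.std(ddof=1)                      # sample standard deviation
    t = stats.t.ppf(0.99, df=len(replicates) - 1)   # one-sided 99% Student-t value
    return t * s

# Example: seven replicate low-level spikes (hypothetical values, ng)
print(method_detection_limit([5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 4.9]))
```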

Relevance:

100.00%

Publisher:

Abstract:

Context: Due to a unique combination of factors, outdoor athletes in the Southeastern United States are at high risk of lightning deaths and injuries. Lightning detection methods are available to minimize the number of lightning strike victims. Objective: Becoming aware of the risk factors that predispose athletes to lightning strikes, and determining the most reliable detection method for hazardous weather, will enable Certified Athletic Trainers to develop protocols that protect athletes from injury. Data Sources: A comprehensive literature review of Medline and PubMed was completed using the key words: lightning, lightning risk factors, lightning safety, lightning detection, and athletic trainers and lightning. Data Synthesis: Factors predisposing athletes to lightning death or injury include time of year, time of day, the athlete's age, geographical location, physical location, sex, perspiration level, and lack of education and preparedness by athletes and staff. Although handheld lightning detectors have become widely accessible for detecting lightning strikes, their performance has not been independently or objectively confirmed; there is evidence that these detectors inaccurately detect strike locations by recording false strikes and missing actual strikes. Conclusions: Lightning education and preparation are two factors that can be controlled. Measures need to be taken by Certified Athletic Trainers to ensure the safety of athletes during outdoor athletics. It is critical for athletic trainers and supervising staff members to become fully aware of the risks of lightning strikes in order to protect everyone under their supervision most effectively. Even though lightning detectors have been manufactured in an attempt to minimize deaths and injuries due to lightning strikes, none of the detectors has been proven to be 100% effective. Educating coaches, athletes, and parents on the risks of lightning and the detection methods available, while implementing an emergency action plan for lightning safety, is crucial to ensure the well-being of the student-athlete population.