12 results for High-speed digital imaging

in Digital Commons at Florida International University


Relevance: 100.00%

Abstract:

The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the amount of data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high-performance, flexible architectures that allow quick upgradability. Technology continues to advance in image display resolution, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, but with tradeoffs among processing performance (achieving specified frame rates on large image data sets), power, and cost constraints. New architectures are needed to keep pace with the rapid innovations in video and imaging. This dissertation implements the pixel- and frame-rate processes in dedicated hardware on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The contributions of the dissertation are as follows. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe distance factor and develop an algorithm for detecting the occurrence of occlusion during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analyzed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulated neural network for the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gestures involved in different applications may vary, so it is essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
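
The running-average-with-global-mean-threshold idea can be illustrated in a few lines of NumPy. The sketch below is a simplified software analogue of the FPGA background-subtraction pipeline described above; the update rate alpha and the frame source are assumed placeholders, not parameters taken from the dissertation.

```python
import numpy as np

def ramt_foreground(frames, alpha=0.05):
    """Simplified running-average background subtraction with a
    globally adapted mean threshold (illustrative sketch only)."""
    background = None
    for frame in frames:                      # frames: iterable of 2-D grayscale arrays
        frame = frame.astype(np.float32)
        if background is None:
            background = frame.copy()         # first frame seeds the background model
            continue
        diff = np.abs(frame - background)     # per-pixel deviation from the running background
        threshold = diff.mean()               # global threshold adapted from the current mean deviation
        mask = diff > threshold               # foreground where deviation exceeds the global threshold
        background = (1 - alpha) * background + alpha * frame  # running-average update
        yield mask
```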

Relevance: 100.00%

Abstract:

A study was conducted to investigate the effectiveness, as measured by performance on course posttests, of mindmapping versus traditional notetaking in a corporate training class. The purpose of this study was to increase knowledge concerning the effectiveness of mindmapping as an information-encoding tool to enhance learning. Corporations invest billions of dollars annually in training programs. Given this increased demand for effective and efficient workplace learning, continued reliance on traditional notetaking is questionable for the high-speed, continual learning required of workers. An experimental, posttest-only control group design was used to test the following hypotheses: (1) there is no significant difference in posttest scores on an achievement test, administered immediately after the course, between adult learners using mindmapping and those using traditional notetaking in a training lecture, and (2) there is no significant difference in posttest scores on an achievement test administered 30 days after the course between the same two groups. After 1.5 hours of instruction on mindmapping, the treatment group used mindmapping throughout the course; the control group used traditional notetaking. T-tests were used to determine whether there were significant differences between the mean posttest scores of the two groups. In addition, an attitudinal survey, a brain hemisphere dominance survey, course dynamics observations, and course evaluations were used to investigate preference for mindmapping, its perceived effect on test performance, and the effectiveness of the mindmapping instruction. The study's principal finding was that although the mindmapping group did not perform significantly higher than the traditional notetaking group on the posttests administered immediately and 30 days after the course, the mindmapping group did score higher on both posttests and rated the course higher on every evaluation criterion. Less-educated, right-brain-dominant learners reported a significantly positive learning experience. These results suggest that mindmapping enhances and reinforces the preconditions of learning. Recommendations for future study are provided.
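
As a concrete illustration of the comparison used in the study, the snippet below runs an independent-samples t-test on two hypothetical arrays of posttest scores; the score values are invented for illustration and are not the study's data.

```python
from scipy import stats

# Hypothetical posttest scores (not the study's data)
mindmapping = [82, 78, 91, 85, 88, 79, 84, 90]
notetaking  = [80, 75, 86, 83, 84, 77, 82, 85]

t_stat, p_value = stats.ttest_ind(mindmapping, notetaking)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would mirror the study's finding of no significant difference.
```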

Relevance: 100.00%

Abstract:

Combustion-generated carbon black nanoparticles, or soot, have both positive and negative effects depending on the application. On the positive side, soot is used as a reinforcing agent in tires, as a black pigment in inks, and in surface coatings. On the negative side, it degrades the performance and durability of many combustion systems, is a major contributor to global warming, and is linked to respiratory illness and cancer. Laser-Induced Incandescence (LII) was used in this study to measure soot volume fractions in four steady and twenty-eight pulsed ethylene diffusion flames burning at atmospheric pressure. A laminar coflow diffusion burner combined with a very-high-speed solenoid valve and control circuit provided unsteady flows by forcing the fuel flow at frequencies between 10 Hz and 200 Hz. Periodic flame oscillations were captured by two-dimensional phase-locked LII images and broadband luminosity images for eight phases (0°–360°) covering each period. A comparison between the steady and pulsed flames and the effect of the pulsation frequency on soot volume fraction in the flame region and the post-flame region are presented. The most significant effect of pulsing frequency was observed at 10 Hz. At this frequency, the flame with the lowest mean flow rate showed a 1.77-fold enhancement in peak soot volume fraction and a 1.2-fold enhancement in total soot volume fraction, whereas the flame with the highest mean flow rate showed no significant change in peak soot volume fraction and a 1.4-fold reduction in total soot volume fraction. A correlation of the form f_v·Re⁻¹ = a + b·Str for the total soot volume fraction in the flame region of the unsteady laminar ethylene flames was obtained for pulsation frequencies between 10 Hz and 200 Hz and Reynolds numbers between 37 and 55. The soot primary particle size in steady and unsteady flames was measured using Time-Resolved Laser-Induced Incandescence (TIRE-LII) and the double-exponential fit method. At the maximum frequency (200 Hz), the soot particles were 15% smaller than in the steady case for the flame with the highest mean flow rate.
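
The reported correlation is a linear fit of the Reynolds-normalized total soot volume fraction against the Strouhal number. A minimal sketch of how such coefficients could be obtained with a least-squares fit is shown below, using made-up data points in place of the measured LII values.

```python
import numpy as np

# Hypothetical (Str, f_v / Re) pairs standing in for the measured data
strouhal = np.array([0.2, 0.5, 1.0, 2.0, 4.0])
fv_over_re = np.array([0.011, 0.013, 0.016, 0.022, 0.035])

# Least-squares fit of f_v * Re**-1 = a + b * Str (polyfit returns slope first)
b, a = np.polyfit(strouhal, fv_over_re, 1)
print(f"a = {a:.4f}, b = {b:.4f}")
```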

Relevance: 100.00%

Abstract:

Buffered crossbar switches have recently attracted considerable attention as the next generation of high-speed interconnects. They are a special type of crossbar switch with an exclusive buffer at each crosspoint of the crossbar. They demonstrate unique advantages over traditional unbuffered crossbar switches, such as high throughput, low latency, and asynchronous packet scheduling. However, since crosspoint buffers are expensive on-chip memories, it is desirable that each crosspoint have only a small buffer. This dissertation proposes a series of practical algorithms and techniques for efficient packet scheduling in buffered crossbar switches. To reduce the hardware cost of such switches and make them scalable, we considered partially buffered crossbars, whose crosspoint buffers can be of arbitrarily small size. First, we introduced a hybrid scheme called the Packet-mode Asynchronous Scheduling Algorithm (PASA) to schedule best-effort traffic. PASA combines the features of both distributed and centralized scheduling algorithms and can directly handle variable-length packets without Segmentation And Reassembly (SAR). We showed by theoretical analysis that it achieves 100% throughput for any admissible traffic in a crossbar with a speedup of two. Moreover, outputs in PASA have a high probability of avoiding the more time-consuming centralized scheduling process and thus of making fast scheduling decisions. Second, we proposed the Fair Asynchronous Segment Scheduling (FASS) algorithm to handle guaranteed-performance traffic with explicit flow rates. FASS reduces the crosspoint buffer size by dividing packets into shorter segments before transmission. It also provides tight, constant performance guarantees by emulating the ideal Generalized Processor Sharing (GPS) model. Furthermore, FASS requires no speedup for the crossbar, lowering the hardware cost and improving the switch capacity. Third, we presented a bandwidth allocation scheme called Queue Length Proportional (QLP) to apply FASS to best-effort traffic. QLP dynamically obtains a feasible bandwidth allocation matrix based on queue length information and thus helps the crossbar switch be more work-conserving. The feasibility and stability of QLP were proved for both uniform and non-uniform traffic distributions. Hence, based on the bandwidth allocation of QLP, FASS can also achieve 100% throughput for best-effort traffic in a crossbar without speedup.
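
A rough sketch in the spirit of QLP (not the dissertation's exact formulation): each crosspoint receives bandwidth proportional to its virtual-output-queue backlog, scaled so that no input row or output column of the allocation matrix is oversubscribed, which keeps the allocation feasible.

```python
import numpy as np

def qlp_allocation(queue_lengths):
    """Queue-Length-Proportional bandwidth allocation sketch.
    queue_lengths[i][j] = backlog from input i to output j.
    Returns a rate matrix whose row and column sums do not exceed 1."""
    q = np.asarray(queue_lengths, dtype=float)
    rates = np.zeros_like(q)
    row_tot = q.sum(axis=1, keepdims=True)          # total backlog per input
    col_tot = q.sum(axis=0, keepdims=True)          # total backlog per output
    # Scale each entry by the larger of its row and column load so that
    # both per-input and per-output rate sums stay at or below 1.
    scale = np.maximum(row_tot, col_tot)
    nonzero = q > 0
    rates[nonzero] = q[nonzero] / scale[nonzero]
    return rates

print(qlp_allocation([[4, 1], [0, 3]]))
```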

Relevance: 100.00%

Abstract:

Integrated on-chip optical platforms enable high performance in applications such as high-speed all-optical or electro-optical switching, wide-range multi-wavelength on-chip lasing for communication, and lab-on-chip optical sensing. Integrated optical resonators with high quality factors are a fundamental component in these applications. Periodic photonic structures (photonic crystals) exhibit a photonic band gap, which can be used to manipulate photons in a way similar to the control of electrons in semiconductor circuits. This makes it possible to create structures with radically improved optical properties. Compared to silicon, polymers offer a potentially inexpensive material platform with ease of fabrication at low temperatures and a wide range of material properties when doped with nanocrystals and other molecules. In this research, several polymer periodic photonic structures are proposed and investigated to improve optical confinement and optical sensing. We developed a fast numerical method for calculating the quality factor of a photonic crystal slab (PhCS) cavity. The calculation is implemented via a 2D-FDTD method followed by post-processing of the radiative energy loss through the cavity surface. Computational time is saved, and good accuracy is demonstrated in comparison with other published methods. We also proposed a novel slot-PhCS concept, which enhances the energy density twenty-fold compared to a traditional PhCS. It combines the advantages of the slot waveguide and the photonic crystal to localize a high energy density in the low-index material. This property could increase the interaction between light and material embedded with nanoparticles such as quantum dots for active device development. We further demonstrated a wide bandgap based on a one-dimensional waveguide distributed Bragg reflector with high coupling to optical waveguides, enabling easy integration with other optical components on the chip. A flexible polymer (SU8) grating waveguide is proposed as a force sensor; the proposed sensor can monitor nN-range forces through its spectral shift. Finally, quantum-dot-doped SU8 polymer structures are demonstrated by optimizing spin coating and UV exposure. Clear patterns with strong emission spectra proved the compatibility of the fabrication process for applications in optical amplification and lasing.
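
The quality factor extracted from a time-domain simulation is essentially a measure of how slowly the stored energy decays. The toy example below estimates Q from an exponentially decaying energy record, which is the general idea behind post-processing FDTD output; it is a sketch, not the dissertation's 2D-FDTD code, and the resonance frequency and Q value are invented.

```python
import numpy as np

def quality_factor(t, energy, f0):
    """Estimate Q from an exponential energy decay U(t) = U0 * exp(-w0 * t / Q)."""
    w0 = 2 * np.pi * f0
    # The slope of ln(U) versus t equals -w0 / Q
    slope, _ = np.polyfit(t, np.log(energy), 1)
    return -w0 / slope

# Synthetic ring-down: resonance near 193 THz (1550 nm), Q = 5000
f0 = 193e12
t = np.linspace(0, 50e-12, 500)
energy = np.exp(-2 * np.pi * f0 * t / 5000)
print(f"Estimated Q ~ {quality_factor(t, energy, f0):.0f}")
```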

Relevance: 100.00%

Abstract:

Global connectivity for anyone, at any place, at any time, providing high-speed, high-quality, and reliable communication channels for mobile devices, is now becoming a reality. The credit mainly goes to recent technological advances in wireless communications, which comprise a wide range of technologies, services, and applications to fulfill the particular needs of end-users in different deployment scenarios (Wi-Fi, WiMAX, and 3G/4G cellular systems). In such a heterogeneous wireless environment, one of the key ingredients for providing efficient ubiquitous computing with guaranteed quality and continuity of service is the design of intelligent handoff algorithms. Traditional single-metric handoff decision algorithms, such as those based on Received Signal Strength (RSS), are not efficient or intelligent enough to minimize the number of unnecessary handoffs, decision delays, and call-dropping and blocking probabilities. This research presents a novel approach for the design and implementation of a multi-criteria vertical handoff algorithm for heterogeneous wireless networks. Several parallel Fuzzy Logic Controllers were utilized in combination with different types of ranking algorithms and metric weighting schemes to implement two major modules: the first module estimates the necessity of handoff, and the second selects the best network as the handoff target. Simulations based on different traffic classes and various types of wireless networks were carried out by implementing a wireless test-bed inspired by the concept of the Rudimentary Network Emulator (RUNE). Simulation results indicated that the proposed scheme provides better performance in terms of minimizing unnecessary handoffs and the call-dropping, call-blocking, and handoff-blocking probabilities. When subjected to conversational traffic and compared against the RSS-based reference algorithm, the proposed scheme, utilizing the FTOPSIS ranking algorithm, reduced the average outage probability of MSs moving at high speed by 17%, the new-call blocking probability by 22%, the handoff blocking probability by 16%, and the average handoff rate by 40%. The significant reduction in handoff rate gives the MS more efficient power consumption and longer battery life. These figures indicate a higher probability of guaranteed session continuity and quality of the currently utilized service, resulting in higher user satisfaction.
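
A plain (non-fuzzy) TOPSIS ranking of candidate networks over a few example metrics gives a flavor of the network-selection module. The metric names, weights, and values below are illustrative assumptions, not the parameters used in the dissertation's FTOPSIS implementation.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with classic TOPSIS.
    benefit[j] is True if larger values of criterion j are better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector-normalize each criterion
    v = norm * np.asarray(weights, dtype=float)   # apply criterion weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)     # distance to the ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)      # distance to the anti-ideal solution
    return d_neg / (d_pos + d_neg)                # relative closeness: higher is better

# Candidates: [bandwidth (Mbps), delay (ms), cost (relative)] for Wi-Fi, WiMAX, cellular
networks = [[54, 30, 1], [30, 60, 3], [10, 80, 5]]
scores = topsis(networks, weights=[0.5, 0.3, 0.2], benefit=[True, False, False])
print(scores)  # the highest score marks the preferred handoff target
```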

Relevance: 100.00%

Abstract:

There are situations in which it is very important to identify an individual quickly and positively. Examples include suspects detained in the neighborhood of a bombing or terrorist incident, individuals detained while attempting to enter or leave the country, and victims of mass disasters. Systems used for these purposes must be fast, portable, and easy to maintain. The goal of this project was to develop an ultra-fast, direct PCR method for forensic genotyping of oral swabs. The procedure developed eliminates the need for separate cellular digestion and extraction of the sample by performing those steps in the PCR tube itself. Special high-speed polymerases are then added, which are capable of amplifying a newly developed 7-locus multiplex in under 16 minutes. Following the amplification, a postage-stamp-sized microfluidic device equipped with a specially designed entangled-polymer separation matrix yields a complete genotype in 80 seconds. The entire process is rapid and reliable, reducing the time from sample to genotype from 1-2 days to under 20 minutes. Operation requires minimal equipment and can easily be performed with a small high-speed thermal cycler, reagents, a microfluidic device, and a laptop. The system was optimized and validated using a number of test parameters and a small test population. The overall precision was better than 0.17 bp, and the power of discrimination was greater than 1 in 10⁶. The small footprint and ease of use will permit this system to serve as an effective tool for quickly screening and identifying individuals detained at ports of entry, police stations, and remote locations. The system is robust and portable and demonstrates to the forensic community a simple solution to the problem of rapid determination of genetic identity.
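
For context on the "1 in 10⁶" figure: the power of discrimination of a multiplex can be estimated as the complement of the product of per-locus random-match probabilities. The arithmetic sketch below uses hypothetical per-locus values, not the panel's actual allele frequencies, to show how seven loci can reach that level.

```python
# Hypothetical per-locus random-match probabilities (not the panel's actual values)
locus_match_prob = [0.10, 0.12, 0.08, 0.15, 0.11, 0.09, 0.13]

combined_match_prob = 1.0
for p in locus_match_prob:
    combined_match_prob *= p          # loci assumed statistically independent

power_of_discrimination = 1 - combined_match_prob
print(f"Combined match probability ~ 1 in {1 / combined_match_prob:,.0f}")
```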

Relevance: 100.00%

Abstract:

An assessment of how hotel guests view in-room entertainment-technology amenities was conducted to compare the importance of these technologies with how they performed. In-room entertainment technology continues to evolve in the hotel industry. However, given the multitude of entertainment products available in the marketplace today, hoteliers have little understanding of guests' expectations and of which in-room entertainment-technology amenities will drive guest satisfaction and increase loyalty to the hotel brand. Given that technology is integral to a hotel stay, this study evaluates the importance and performance of in-room entertainment-technology amenities. Findings indicate that free-to-guest television (FTG TV) and high-speed Internet access were the two most important in-room entertainment-technology amenities in the selection of a hotel for both leisure and business travelers. The Importance/Satisfaction Matrix presented in the current study showed that many of the in-room entertainment-technology amenities are currently a low priority for guests.

Keywords: importance-performance analysis, hotel, in-room entertainment technologies
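
An Importance/Satisfaction (importance-performance) matrix places each amenity in a quadrant relative to the mean importance and mean satisfaction ratings. The snippet below classifies a few hypothetical amenity ratings into the standard quadrants to illustrate the idea; the numbers are not the study's data.

```python
def ipa_quadrant(importance, performance, imp_mean, perf_mean):
    """Classify an item into the standard importance-performance quadrants."""
    if importance >= imp_mean:
        return "Keep up the good work" if performance >= perf_mean else "Concentrate here"
    return "Possible overkill" if performance >= perf_mean else "Low priority"

# Hypothetical guest ratings on a 1-5 scale: amenity -> (importance, satisfaction)
ratings = {"High-speed Internet": (4.8, 4.1), "Free-to-guest TV": (4.5, 4.4),
           "Gaming console": (2.1, 3.0), "Streaming device": (3.0, 2.4)}
imp_mean = sum(i for i, _ in ratings.values()) / len(ratings)
perf_mean = sum(p for _, p in ratings.values()) / len(ratings)
for name, (imp, perf) in ratings.items():
    print(name, "->", ipa_quadrant(imp, perf, imp_mean, perf_mean))
```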

Relevance: 100.00%

Abstract:

Electronic noise has been investigated in AlxGa1−xN/GaN Modulation-Doped Field Effect Transistors (MODFETs) of submicron dimensions, grown for us by Molecular Beam Epitaxy (MBE) at Virginia Commonwealth University by Dr. H. Morkoç and coworkers. Some 20 devices were grown on a GaN substrate, four of which have leads bonded to the source (S), drain (D), and gate (G) pads, respectively. Conduction takes place in the quasi-2D layer of the junction (xy plane), which is perpendicular to the quantum well (z-direction) of average triangular width ∼3 nm. A non-doped intrinsic buffer layer of ∼5 nm separates the Si-doped donors in the AlxGa1−xN layer from the 2D transistor plane, which affords a very high electron mobility and thus enables high-speed devices. Since all contacts (S, D, and G) must reach through the AlxGa1−xN layer to connect internally to the 2D plane, parallel conduction through this layer is a feature of all modulation-doped devices. While this shunting effect may account for no more than a few percent of the current IDS, it is responsible for most of the excess noise, over and above the thermal noise of the device. The excess noise has been analyzed as a sum of Lorentzian spectra and 1/f noise. The Lorentzian noise has been ascribed to trapping of carriers in the AlxGa1−xN layer. A detailed multitrapping generation-recombination noise theory is presented, which shows that an exponential relationship exists for the time constants obtained from the spectral components as a function of 1/kT. The trap depths have been obtained from Arrhenius plots of log(τT²) vs. 1000/T. Comparison with previous noise results for GaAs devices shows that (a) many more trapping levels are present in these nitride-based devices, and (b) the traps are deeper (farther below the conduction band) than for GaAs. Furthermore, the magnitude of the noise is strongly dependent on the level of depletion of the AlxGa1−xN donor layer, which can be altered by a negative or positive gate bias VGS. Altogether, these frontier nitride-based devices are promising for blue-light optoelectronic devices and lasers; however, the noise, though well understood, indicates that the purity of the constituent layers should be greatly improved for future technological applications.
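
The trap activation energy follows from the slope of an Arrhenius plot of ln(τT²) versus 1/kT. The sketch below performs that fit on synthetic (T, τ) pairs, since the measured time constants are not reproduced here; the 0.35 eV trap depth is an assumed value used only to check the extraction.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def trap_depth(temperatures, time_constants):
    """Activation energy (eV) from the Arrhenius slope of ln(tau * T^2) vs 1/(kT)."""
    T = np.asarray(temperatures, dtype=float)
    tau = np.asarray(time_constants, dtype=float)
    x = 1.0 / (K_B * T)
    y = np.log(tau * T**2)
    slope, _ = np.polyfit(x, y, 1)   # ln(tau*T^2) = ln(tau0) + E_t / (kT)
    return slope

# Synthetic data generated with a 0.35 eV trap, to verify the extraction
T = np.array([200.0, 250.0, 300.0, 350.0])
tau = 1e-9 * np.exp(0.35 / (K_B * T)) / T**2
print(f"Extracted trap depth ~ {trap_depth(T, tau):.2f} eV")
```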

Relevance: 100.00%

Abstract:

Engineering analysis of geometric models has been the main, if not the only, credible tool used by engineers and scientists to resolve physical boundary problems. New high-speed computers have improved the accuracy and validation of the expected results. In practice, an engineering analysis is composed of two parts: the design of the model and the analysis of the geometry with the boundary conditions and constraints imposed on it. Numerical methods are used to resolve a large number of physical boundary problems independent of the model geometry. The time expended in the computational process is related to the imposed boundary conditions and to how well conformed the geometry is. Any geometric model that contains gaps or open lines is considered an imperfect geometric model, and major commercial solver packages are incapable of handling such inputs. Other packages apply methods such as patching or zippering to resolve these problems, but the final resolved geometry may differ from the original geometry, and the changes may be unacceptable. The study proposed in this dissertation is based on a new technique for processing models with geometric imperfections without the need to repair or change the original geometry. An algorithm is presented that is able to analyze the imperfect geometric model with the imposed boundary conditions using a meshfree method and a distance-field approximation to the boundaries. Experiments are proposed to analyze the convergence of the algorithm on imperfect model geometries, and the results will be compared with those for the same models with perfect geometries. Plotted results will be presented to support further analysis and conclusions about the algorithm's convergence.
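
The core of such a meshfree approach is an approximate distance field to the (possibly imperfect) boundary. A minimal illustration, under the assumption that the boundary is given only as a sampled point set, is a nearest-point distance field: it tolerates gaps in the curve because it never requires a closed, watertight model. This is a sketch of the general idea, not the dissertation's algorithm.

```python
import numpy as np

def distance_field(boundary_points, query_points):
    """Unsigned distance from each query point to the nearest sampled boundary point.
    Works even if the sampled boundary has gaps (an 'imperfect' geometry)."""
    b = np.asarray(boundary_points, dtype=float)   # (M, 2) boundary samples
    q = np.asarray(query_points, dtype=float)      # (N, 2) evaluation points
    # Pairwise distances, then the minimum over boundary samples for each query point
    d = np.linalg.norm(q[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1)

# Boundary: unit circle sampled with a gap between 300 and 360 degrees
theta = np.radians(np.arange(0, 300, 5))
circle = np.column_stack([np.cos(theta), np.sin(theta)])
print(distance_field(circle, [[0.0, 0.0], [2.0, 0.0]]))  # roughly [1.0, 1.0]
```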

Relevance: 100.00%

Abstract:

Global connectivity is on the verge of becoming a reality, providing high-speed, high-quality, and reliable communication channels for mobile devices at any time, anywhere in the world. In a heterogeneous wireless environment, one of the key ingredients for providing efficient and ubiquitous computing with guaranteed quality and continuity of service is the design of intelligent handoff algorithms. Traditional single-metric handoff decision algorithms, such as those based on Received Signal Strength (RSS), are not efficient or intelligent enough to minimize the number of unnecessary handoffs, decision delays, and call-dropping and blocking probabilities. This research presents a Multi Attribute Decision Making (MADM) model based on an integrated fuzzy approach for target network selection.
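
As a minimal stand-in for the fuzzy MADM model that this abstract introduces, the snippet below ranks candidate networks with a simple additive weighting over min-max-normalized attributes. The attribute names, weights, and values are illustrative assumptions, not the dissertation's actual model.

```python
def saw_score(alternatives, weights, benefit):
    """Simple additive weighting over min-max-normalized criteria.
    alternatives: {name: [criterion values]}; benefit[j] is True if larger is better."""
    cols = list(zip(*alternatives.values()))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    scores = {}
    for name, vals in alternatives.items():
        s = 0.0
        for j, v in enumerate(vals):
            norm = (v - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 1.0
            if not benefit[j]:
                norm = 1.0 - norm            # invert cost-type criteria
            s += weights[j] * norm
        scores[name] = s
    return scores

# Candidate networks rated on [RSS (dBm), bandwidth (Mbps), monetary cost]
nets = {"Wi-Fi": [-60, 54, 1], "WiMAX": [-75, 30, 3], "UMTS": [-85, 10, 5]}
print(saw_score(nets, weights=[0.4, 0.4, 0.2], benefit=[True, True, False]))
```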