15 results for Small-signal transfer functions
in Digital Commons at Florida International University
Abstract:
Digital systems can generate left and right audio channels that create the effect of virtual sound source placement (spatialization) by processing an audio signal through pairs of Head-Related Transfer Functions (HRTFs) or, equivalently, Head-Related Impulse Responses (HRIRs). The spatialization effect is better when individually measured HRTFs or HRIRs are used than when generic ones (e.g., from a mannequin) are used. However, the measurement process is not available to the majority of users. There is ongoing interest in finding mechanisms to customize HRTFs or HRIRs to a specific user, in order to achieve an improved spatialization effect for that subject. Unfortunately, the current models used for HRTFs and HRIRs contain over a hundred parameters, and none of those parameters can be easily related to the characteristics of the subject. This dissertation proposes an alternative model for the representation of HRTFs, which contains at most 30 parameters, all of which have a defined functional significance. It also presents methods to obtain the values of the parameters in the model that make it approximately equivalent to an individually measured HRTF. This conversion is achieved by the systematic deconstruction of HRIR sequences through an augmented version of the Hankel Total Least Squares (HTLS) decomposition approach. An average 95% match (fit) was observed between the original HRIRs and those reconstructed from the Damped and Delayed Sinusoids (DDSs) found by the decomposition process, for ipsilateral source locations. The dissertation also introduces and evaluates an HRIR customization procedure, based on a multilinear model implemented through a 3-mode tensor, for mapping anatomical data from the subjects to the HRIR sequences at different sound source locations. This model uses the Higher-Order Singular Value Decomposition (HOSVD) method to represent the HRIRs and is capable of generating customized HRIRs from easily attainable anatomical measurements of a new intended user of the system. Listening tests were performed to compare the spatialization performance of customized, generic, and individually measured HRIRs when they are used for synthesized spatial audio. Statistical analysis of the results confirms that the type of HRIRs used for spatialization is a significant factor in spatialization success, with the customized HRIRs yielding better results than generic HRIRs.
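For reference, a damped and delayed sinusoid expansion of an HRIR is commonly written in the generic form below; the symbols (amplitudes, dampings, frequencies, phases, and onset delays) are illustrative and are not taken from the dissertation's specific model:

```latex
% Generic DDS expansion of an HRIR: K components, each with amplitude a_k,
% damping sigma_k, frequency f_k, phase phi_k, and onset delay tau_k;
% u(.) is the unit step that switches each component on at its delay.
h(t) \approx \sum_{k=1}^{K} a_k \, e^{-\sigma_k (t-\tau_k)}
      \cos\!\bigl(2\pi f_k (t-\tau_k) + \phi_k\bigr)\, u(t-\tau_k)
```

A handful of such components, each carrying a few parameters with clear functional meaning, is consistent with the at-most-30-parameter budget described above.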
Abstract:
One of the most popular techniques for creating spatialized virtual sounds is based on the use of Head-Related Transfer Functions (HRTFs). HRTFs are signal processing models that represent the modifications undergone by the acoustic signal as it travels from a sound source to each of the listener's eardrums. These modifications are due to the interaction of the acoustic waves with the listener's torso, shoulders, head, and pinnae, or outer ears. As such, HRTFs are somewhat different for each listener. For a listener to perceive synthesized 3-D sound cues correctly, the synthesized cues must be similar to the listener's own HRTFs. One can measure individual HRTFs using specialized recording systems; however, these systems are prohibitively expensive and restrict the portability of the 3-D sound system. HRTF-based systems also face several computational challenges. This dissertation presents an alternative method for the synthesis of binaural spatialized sounds. The sound entering the pinna undergoes several reflective, diffractive, and resonant phenomena, which determine the HRTF. Using signal processing tools, such as Prony's signal modeling method, an appropriate set of time delays and a resonant frequency were used to approximate the measured Head-Related Impulse Responses (HRIRs). Statistical analysis was used to derive empirical equations describing how the reflections and resonances are determined by the shape and size of the pinna features obtained from 3D images of the 15 experimental subjects modeled in the project. These equations were used to yield “Model HRTFs” that can create elevation effects. Listening tests conducted on 10 subjects show that these model HRTFs are 5% more effective than generic HRTFs at localizing sounds in the frontal plane. The number of reversals (perception of the sound source above the horizontal plane when it is actually below the plane, and vice versa) was also reduced by 5.7%, showing the perceptual effectiveness of this approach. The model is simple, yet versatile, because it relies on easy-to-measure parameters to create an individualized HRTF. This low-order parameterized model also reduces the computational and storage demands, while maintaining a sufficient number of perceptually relevant spectral cues.
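For readers unfamiliar with Prony's method named above, a minimal numpy sketch of the textbook procedure is given below (linear prediction for the poles, then a least-squares fit for the amplitudes); the function name and interface are illustrative and do not reproduce the dissertation's code:

```python
import numpy as np

def prony(x, p):
    """Textbook Prony fit: x[n] ~ sum_k a[k] * z[k]**n with p complex poles."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Linear prediction step: x[n] = -(c1*x[n-1] + ... + cp*x[n-p]) for n >= p
    A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    c, *_ = np.linalg.lstsq(A, -x[p:N], rcond=None)
    # Poles are the roots of the prediction-error polynomial
    z = np.roots(np.concatenate(([1.0], c)))
    # Amplitudes from a least-squares fit of the exponential model to x
    V = z[np.newaxis, :] ** np.arange(N)[:, np.newaxis]
    a, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return a, z
```

Each pole encodes a damping and a resonant frequency through z_k = exp((-sigma_k + j*2*pi*f_k) * Ts), which is how a fitted pole pair maps back to the physical resonance discussed above.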
Abstract:
Despite their sensitivity to climate variability, few of the abundant sinkhole lakes of Florida have been the subject of paleolimnological studies to discern patterns of change in aquatic communities and link them to climate drivers. However, deep sinkhole lakes can contain highly resolved paleolimnological records that can be used to track long-term climate variability and its interaction with the effects of land-use change. In order to understand how limnological changes were regulated by regional climate variability and further modified by local land-use change in south Florida, we explored diatom assemblage variability over centennial and semi-decadal time scales in an ~11,000-yr and a ~150-yr sediment core extracted from a 21-m-deep sinkhole lake, Lake Annie, on the protected property of Archbold Biological Station. We linked variance in diatom assemblage structure to changes in water total phosphorus, color, and pH using diatom-based transfer functions. Reconstructions suggest the sinkhole depression contained a small, acidic, oligotrophic pond ~11,000–7,000 cal yr BP that gradually deepened to form a humic lake by ~4,000 cal yr BP, coinciding with the onset of modern precipitation regimes and the stabilization of sea level indicated by corresponding palynological records. The lake then contained stable, acidophilous planktonic and benthic algal communities for several thousand years. In the early AD 1900s, that community shifted to one diagnostic of an even lower pH (~5.6), likely resulting from acid precipitation. Further transitions over the past 25 yr reflect recovery from acidification and intensified sensitivity to climate variability caused by enhanced watershed runoff from small drainage ditches dug during the mid-twentieth century on the surrounding property.
Abstract:
Inverters play key roles in connecting sustainable energy (SE) sources to local loads and the ac grid. Although there has been a rapid expansion in the use of renewable sources in recent years, fundamental research on the design of inverters that are specialized for use in these systems is still needed. Recent advances in power electronics have led to new topologies and switching patterns for single-stage power conversion, which are appropriate for SE sources and energy storage devices. The current source inverter (CSI) topology, along with a newly proposed switching pattern, is capable of converting a low dc voltage to line ac in only one stage. Simple implementation and high reliability, together with the potential advantages of higher efficiency and lower cost, turn the so-called single-stage boost inverter (SSBI) into a viable competitor to existing SE-based power conversion technologies. The dynamic model is one of the most essential requirements for performance analysis and control design of any engineering system. Thus, in order to have satisfactory operation, it is necessary to derive a dynamic model for the SSBI system. However, because of the switching behavior and nonlinear elements involved, analysis of the SSBI is a complicated task. This research applies the state-space averaging technique to the SSBI to develop state-space-averaged models of the SSBI under stand-alone and grid-connected modes of operation. Then, a small-signal model is derived by means of the perturbation and linearization method. An experimental hardware set-up, including a laboratory-scale prototype SSBI, is built, and the validity of the obtained models is verified through simulation and experiments. Finally, an eigenvalue sensitivity analysis is performed to investigate the stability and dynamic behavior of the SSBI system over a typical range of operation.
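For context, the state-space averaging and perturbation-and-linearization steps named above take the following generic form for a converter that alternates between two switch configurations; the matrices and the duty ratio d are placeholders, not the SSBI's actual model:

```latex
% Average the two switched-state models over one switching period,
% weighted by the duty ratio d:
\dot{x} = \bigl[d\,A_1 + (1-d)\,A_2\bigr]\,x + \bigl[d\,B_1 + (1-d)\,B_2\bigr]\,u
% Perturb about the operating point (x = X + \hat{x}, d = D + \hat{d},
% u = U + \hat{u}) and drop products of perturbations to obtain the
% linear small-signal model:
\dot{\hat{x}} = A\,\hat{x} + B\,\hat{u}
              + \bigl[(A_1 - A_2)X + (B_1 - B_2)U\bigr]\,\hat{d}
```

Here A = D A_1 + (1-D) A_2 and B = D B_1 + (1-D) B_2 are evaluated at the operating point; the small-signal transfer functions then follow from standard expressions of the form C(sI - A)^{-1}B.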
Abstract:
Limestone-based (karstic) freshwater wetlands of the Everglades, Belize, Mexico, and Jamaica are distinctive in having a high biomass of CaCO₃-rich periphyton mats. Diatoms are common components of these mats and show predictable responses to environmental variation, making them good candidates for assessing nutrient enrichment in these naturally ultraoligotrophic wetlands. However, aside from the Everglades of southern Florida, very little research has been done to document the diatoms and their environmental preferences in karstic Caribbean wetlands, which are increasingly threatened by eutrophication. We identified diatoms in periphyton mats collected during wet and dry periods from the Everglades and similar freshwater karstic wetlands in Belize, Mexico, and Jamaica. We compared diatom assemblage composition and diversity among locations and periods, and the effect of the limiting nutrient, P, on species composition among locations. We used periphyton-mat total P (TP) as a metric of availability. A total of 176 diatom species in 45 genera were recorded from the 4 locations. Twenty-three of these species, including 9 that are considered indicative of the Everglades diatom flora, were found in all 4 locations. In Everglades and Caribbean sites, we identified assemblages and indicator species associated with low and high periphyton-mat TP and calculated TP optima and tolerances for each indicator species. TP optima and tolerances of indicator species differed between the Everglades and the Caribbean, but weighted averaging models predicted periphyton-mat TP concentrations from diatom assemblages at Everglades (R² = 0.56) and Caribbean (R² = 0.85) locations. These results show that diatoms can be effective indicators of water quality in karstic wetlands of the Caribbean, but application of regionally generated transfer functions to distant sites provides less reliable estimates than locally developed functions.
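The weighted averaging (WA) calculations named above follow a standard form; with y_ik the abundance of species k in training sample i and x_i the observed TP of that sample, the species optimum and tolerance, and the WA reconstruction for a new assemblage y_0k, are:

```latex
% Species TP optimum and tolerance from the training set:
\hat{u}_k = \frac{\sum_i y_{ik}\,x_i}{\sum_i y_{ik}}, \qquad
\hat{t}_k = \sqrt{\frac{\sum_i y_{ik}\,(x_i - \hat{u}_k)^2}{\sum_i y_{ik}}}
% WA reconstruction of TP for a new sample with abundances y_{0k}:
\hat{x}_0 = \frac{\sum_k y_{0k}\,\hat{u}_k}{\sum_k y_{0k}}
```

(A deshrinking regression is normally applied to the raw WA estimate afterward; the exact variant used in this study is not specified here.)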
Abstract:
Previous results in our laboratory suggest that (CG)4 segments, whether present in a right-handed or a left-handed conformation, form distinctive junctions with adjacent random sequences. These junctions and their associated sequences have unique structural and thermodynamic properties that may be recognized by DNA-binding molecules. This study probes these sequences by using the following small ligands: actinomycin D, 1,4-bis(((di(aminoethyl)amino)ethyl)amino)anthracene-9,10-dione, ametantrone, and tris(phenanthroline)ruthenium(II). These ligands may recognize the distinctive features associated with the (CG)4 segment and its junctions and thus interact preferentially near these sequences. Restriction enzyme inhibition assays were used to determine whether or not binding interactions took place, and to approximate the locations of these interactions. These binding studies were first carried out using two small synthetic oligomers, BZ-III and BZ-IV. The (5meCG)4 segment present in BZ-III adopts the Z-conformation in the presence of 50 mM Co(NH₃)₆³⁺. In BZ-IV, the unmethylated (CG)4 segment changes to a non-B conformation in the presence of 50 mM Co(NH₃)₆³⁺. BZ-IV, containing the (CG)4 segment, was inserted into a cloning plasmid and then digested with the restriction enzyme Hinf I to produce a larger fragment that contains the (CG)4 segment. The results obtained on the small oligomers and on the larger fragment for the restriction enzyme Mbo I indicate that 1,4-bis(((di(aminoethyl)amino)ethyl)amino)anthracene-9,10-dione binds more efficiently at or near the (CG)4 segment. Restriction enzymes EcoRV, Sac I, and Not I, with cleavage sites upstream and downstream of the (CG)4 insert, were used to further localize binding interactions in the vicinity of the (CG)4 insert. RNA polymerase activity was studied in a plasmid which contained the (CG)4 insert downstream from the promoter sites of the SP6 and T7 RNA polymerases. The activities of these two polymerases were studied in the presence of each one of the ligands used throughout the study. Only actinomycin D and "spider" (the anthracene-9,10-dione ligand), which bind at or near the (CG)4 segment, alter the activities of the SP6 and T7 RNA polymerases. Surprisingly, enhancement of polymerase activity was observed in the presence of very low concentrations of actinomycin D. These results suggest that the conformational features of (CG) segments may serve in regulatory functions of DNA.
Abstract:
A high resolution study of the quasielastic ²H(e, e'p)n reaction was performed in Hall A at the Thomas Jefferson National Accelerator Facility in Newport News, Virginia. The measurements were performed at a central momentum transfer of |q| ∼ 2400 MeV/c and at a central energy transfer of ω ∼ 1500 MeV, corresponding to a four-momentum transfer Q² = 3.5 (GeV/c)², covering missing momenta from 0 to 0.5 GeV/c. The majority of the measurements were performed at Φ = 180°, and a small set of measurements were done at Φ = 0°. The two Hall A High Resolution Spectrometers (HRS) were used to detect the coincident electrons and protons. Absolute ²H(e, e'p)n cross sections were obtained as a function of the recoiling neutron's scattering angle with respect to the momentum transfer. The experimental results were compared to a Plane Wave Impulse Approximation (PWIA) model and to a calculation that includes Final State Interaction (FSI) effects. Experimental ²H(e, e'p)n cross sections were determined with an estimated systematic uncertainty of 7%. The general features of the measured cross sections are reproduced by Glauber-based calculations that take the motion of the bound nucleons into account (GEA). Final State Interaction (FSI) contributions were found to depend strongly on the angle of the recoiling neutron with respect to the momentum transfer and on the missing momentum. We found a systematic deviation of the theoretical prediction of about 30%: at small θnq (θnq < 60°) the theory overpredicts the cross section, while at large θnq (θnq > 80°) the theory underestimates it. We observed an enhancement of the cross section, due to FSI, of about 240% as compared to PWIA, for a missing momentum of 0.4 GeV/c at an angle of 75°. For a missing momentum of 0.5 GeV/c, the enhancement of the cross section due to the same FSI effects was about 270%. This is in agreement with GEA. Standard Glauber calculations predict this large contribution to occur at an angle of 90°. Our results show that GEA better describes the ²H(e, e'p)n reaction.
Abstract:
QCD predicts Color Transparency (CT), which refers to the nuclear medium becoming transparent to a small color-neutral object produced in high momentum transfer reactions, due to the reduced strong interaction. Despite several studies at BNL, SLAC, FNAL, DESY, and Jefferson Lab, a definitive signal for CT still remains elusive. In this dissertation, we present the results of a new study at Jefferson Lab motivated by theoretical calculations that suggest fully exclusive measurement of coherent rho meson electroproduction off the deuteron is a favorable channel for studying CT. Vector meson production has a large cross section at high energies, and the deuteron is the best understood and simplest nuclear system. Exclusivity allows the production and propagation to be controlled separately by controlling Q², lf (the formation length), lc (the coherence length), and t. This control is important, as the rapid expansion of small objects increases their interaction probability and masks CT. The CT signal is investigated in a ratio of cross sections at high t (where re-scattering is significant) to low t (where single-nucleon reactions dominate). The results are presented over a Q² range of 1 to 3 GeV² based on data taken with a beam energy of 6 GeV.
Abstract:
This study examines how public management practitioners in small and medium-sized Florida cities perceive globalization and its impact on public management practice. Using qualitative analysis, descriptive statistics and factor analysis methods, data obtained from a survey and semi-structured interviews were studied to comprehend how public managers view the management and control of their municipalities in a time of globalization. The study shows that the public managers’ perceptions of globalization and its impact on public management in Florida’s small-medium cities are nuanced. Whereas some public managers feel that globalization has significant impacts on municipalities’ viability, others opine that globalization has no local impact. The study further finds that globalization processes are perceived as altering the public management functions of decision-making, economic development and service delivery in some small-medium cities in Florida as a result of transnational shifts, rapidly changing technologies, and municipalities’ heightened involvement in the global economy. The study concludes that the globalization discourse does not resonate among some public managers in Florida’s small-medium cities in ways implied in extant literature.
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware. Thus, the process of improving the system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved within constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can bring significant performance improvement for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the element that incurs performance overheads. The concepts mentioned in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified by using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on Field-Programmable Gate Arrays (FPGAs) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) A system verification platform is designed based on the Integrated Circuit (IC) workflow. Hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
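A useful sanity check on hotspot-driven speedup figures like those above is Amdahl's law: the overall speedup is bounded by the fraction of runtime the accelerated hotspot accounts for. The sketch below is illustrative arithmetic, not a measurement from this work:

```python
def amdahl_speedup(hot_fraction, accel):
    """Overall speedup when a hotspot taking `hot_fraction` of total
    runtime is accelerated by a factor `accel` (Amdahl's law)."""
    return 1.0 / ((1.0 - hot_fraction) + hot_fraction / accel)

# E.g., a hotspot consuming 80% of runtime, accelerated 10x in hardware,
# yields only ~3.6x overall -- profiling the right function matters.
print(amdahl_speedup(0.80, 10.0))  # ~3.57
```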
Abstract:
Communication has become an essential function in our civilization. With the increasing demand for communication channels, it is now necessary to find ways to optimize the use of their bandwidth. One way to achieve this is by transforming the information before it is transmitted. This transformation can be performed by several techniques. One of the newest of these techniques is the use of wavelets. Wavelet transformation refers to the act of breaking down a signal into components, called details and trends, by using small waveforms that have a zero average in the time domain. After this transformation, the data can be compressed by discarding the details and transmitting only the trends. At the receiving end, the trends are used to reconstruct the image. In this work, the wavelet used for the transformation of an image is selected from a library of available bases. The accuracy of the reconstruction, after the details are discarded, depends on the wavelets chosen from the wavelet basis library. The system developed in this thesis takes a 2-D image and decomposes it using a wavelet bank. A digital signal processor is used to achieve near real-time performance in this transformation task. A contribution of this thesis project is the development of a DSP-based test bed for the future development of new real-time wavelet transformation algorithms.
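The decompose/discard/reconstruct cycle described above, including selection from a small library of bases, can be sketched with the PyWavelets package; the candidate list and error metric are illustrative choices, and the thesis's DSP implementation is not reproduced here:

```python
import numpy as np
import pywt

def best_wavelet(image, candidates=('haar', 'db2', 'db4', 'sym4', 'coif1')):
    """Return the basis whose trend-only reconstruction best matches `image`."""
    image = np.asarray(image, dtype=float)
    best_name, best_err = None, np.inf
    for name in candidates:
        cA, _details = pywt.dwt2(image, name)           # trends + details
        # Discard the details (None -> zeros) and rebuild from trends alone
        rec = pywt.idwt2((cA, (None, None, None)), name)
        rec = rec[:image.shape[0], :image.shape[1]]     # trim boundary padding
        err = np.sqrt(np.mean((image - rec) ** 2))      # RMS reconstruction error
        if err < best_err:
            best_name, best_err = name, err
    return best_name, best_err
```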
Abstract:
Many U.S. students do not perform well on mathematics assessments with respect to algebra topics such as linear functions, a building block for other functions. The poor achievement of U.S. middle school students in this topic is a problem. U.S. eighth graders have had average mathematics scores on international comparison tests such as the Third International Mathematics and Science Study, later known as the Trends in International Mathematics and Science Study (TIMSS), in 1995, 1999, and 2003, while Singapore students have had the highest average scores. U.S. eighth grade average mathematics scores improved on TIMSS-2007 and held steady on TIMSS-2011. Results from international and national assessments, PISA 2009 and 2012 and the National Assessment of Educational Progress of 2007, 2009, and 2013, showed a lack of proficiency in algebra. Results of curriculum studies involving nations in TIMSS suggest that elementary and middle grades textbooks in high-scoring countries differed from U.S. textbooks with respect to general features. The purpose of this study was to compare treatments of linear functions in Singapore and U.S. middle grades mathematics textbooks. Results revealed features currently in textbooks, and the findings should be valuable to constituencies who wish to improve U.S. mathematics achievement. Portions of eight Singapore and nine U.S. middle school student texts pertaining to linear functions were compared with respect to 22 features in three categories: (a) background features, (b) general features of problems, and (c) specific characterizations of problem practices, problem-solving competency types, and transfer of representation. Features were coded using a codebook developed by the researcher, and tallies and percentages were reported. Welch's t-tests and chi-square tests were used, respectively, to determine whether texts differed significantly on the features and whether codes were independent of country. U.S. and Singapore textbooks differed in page appearance and in the number of pages, problems, and images, but were similar in problem appearance. Differences in problems related to the assessment of conceptual learning: U.S. texts contained more problems requiring (a) use of definitions, (b) single computation, (c) interpreting, and (d) multiple responses. These differences may stem from cultural differences seen in attitudes toward education. Future studies should focus on density of page, the spiral approach, and multiple-response problems.
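Both tests named above are available in scipy; the counts below are made-up placeholders standing in for per-textbook feature tallies, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical feature counts for the 9 U.S. and 8 Singapore texts
us = np.array([42, 55, 38, 61, 47, 50, 44, 58, 52])
sg = np.array([30, 28, 35, 26, 33, 31, 29, 34])

# Welch's t-test: compare means without assuming equal variances
t_stat, p_val = stats.ttest_ind(us, sg, equal_var=False)

# Chi-square test of independence between a feature code and country
table = np.array([[120, 80],   # hypothetical yes/no tallies, U.S.
                  [ 60, 90]])  # hypothetical yes/no tallies, Singapore
chi2, p2, dof, expected = stats.chi2_contingency(table)
```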
Abstract:
Group testing has long been considered a safe and sensible alternative to one-at-a-time testing in applications where the prevalence rate p is small. In this thesis, we applied the Bayes approach to estimate p using a Beta-type prior distribution. First, we derived two Bayes estimators of p from a prior on p, based on two different loss functions. Second, we presented two more Bayes estimators of p from a prior on π, according to two loss functions. We also displayed credible and HPD intervals for p. In addition, we carried out intensive numerical studies. All results showed that the Bayes estimator was preferred over the usual maximum likelihood estimator (MLE) for small p. We also presented the optimal β for different p, m, and k.
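To make the setup concrete: if each of m pools combines k samples, a pool tests positive with probability π = 1 − (1 − p)^k, and a Beta prior on π yields a closed-form Beta posterior. The sketch below shows one common construction under squared-error loss; it does not reproduce the thesis's particular priors or loss functions:

```python
def bayes_p_from_pools(x, m, k, a=1.0, b=1.0):
    """Prevalence estimate from group testing: x positive pools out of m,
    each pooling k samples. With pi = 1 - (1 - p)**k and a Beta(a, b)
    prior on pi, the posterior is Beta(a + x, b + m - x); here the
    posterior mean of pi is inverted to estimate p."""
    pi_hat = (a + x) / (a + b + m)   # posterior mean of pi
    return 1.0 - (1.0 - pi_hat) ** (1.0 / k)

def mle_p_from_pools(x, m, k):
    """Usual MLE for comparison: invert the empirical positivity rate."""
    return 1.0 - (1.0 - x / m) ** (1.0 / k)

# E.g., 3 positive pools out of 25 pools of size 10: the Bayes estimate
# shrinks toward the prior and remains well-behaved even when x = 0.
print(bayes_p_from_pools(3, 25, 10), mle_p_from_pools(3, 25, 10))
```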
Abstract:
In Enterobacteriaceae, the transcriptional regulator AmpR, a member of the LysR family, regulates the expression of the chromosomal β-lactamase AmpC. The regulatory repertoire of AmpR is broader in Pseudomonas aeruginosa, an opportunistic pathogen responsible for numerous acute and chronic infections, including those in cystic fibrosis. Previous studies showed that, in addition to regulating ampC, P. aeruginosa AmpR regulates the sigma factor AlgT/U and the production of some quorum sensing (QS)-regulated virulence factors. In order to better understand the ampR regulon, transcriptional profiles of the prototypic P. aeruginosa PAO1 strain and its isogenic ampR deletion mutant, PAOΔampR, were generated using DNA microarrays and RNA-Seq and analyzed. Transcriptome analysis demonstrates that the AmpR regulon is much more extensive than previously thought, influencing the differential expression of over 500 genes. In addition to regulating resistance to β-lactam antibiotics via AmpC, AmpR also regulates non-β-lactam antibiotic resistance by modulating the MexEF-OprN efflux pump. Virulence mechanisms, including biofilm formation and QS-regulated acute virulence, and diverse physiological processes, such as the oxidative stress response, the heat-shock response, and iron uptake, are AmpR-regulated. Real-time PCR and phenotypic assays confirmed the transcriptome data. Further, a Caenorhabditis elegans model demonstrates that a functional AmpR is required for the full pathogenicity of P. aeruginosa. AmpR, a member of the core genome, also regulates genes in the regions of genome plasticity that are acquired by horizontal gene transfer. The AmpR regulon includes other transcriptional regulators and sigma factors, which accounts for its breadth. Gene expression studies demonstrate AmpR-dependent expression of the QS master regulator LasR, which controls the expression of many virulence factors. Using a chromosomally tagged AmpR, ChIP-Seq studies show direct AmpR binding to the lasR promoter. The data demonstrate that AmpR functions as a global regulator in P. aeruginosa and is a positive regulator of acute virulence while negatively regulating chronic infection phenotypes. In summary, my dissertation sheds light on the complex regulatory circuitry of P. aeruginosa and provides a better understanding of the bacterial response to antibiotics and of how the organism coordinately regulates a myriad of virulence factors.