16 results for Voice Digital Processing
at Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
The performance of parallel vector implementations of one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors considered in comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing the execution speed of digital signal processing algorithms by the ratios of the numbers of multiplications and additions is no longer effective for vector implementations; the structure of the algorithm must also be considered as a factor when comparing the execution speed of vectorized digital signal processing algorithms. Simulation results on the Cray X/MP are presented for the following orthogonal transforms: discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included. (34 refs)
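As an illustration of the FFT-kernel approach described above, the DHT can be obtained directly from an FFT output (a minimal sketch only, not the paper's vectorized Cray implementation):

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform via an FFT kernel:
    H[k] = Re(X[k]) - Im(X[k]), where X is the DFT of x."""
    X = np.fft.fft(x)
    return X.real - X.imag

# The DHT is its own inverse up to a factor of N:
x = np.random.rand(8)
y = dht(dht(x)) / len(x)  # recovers x
```

The identity holds because Re(X) carries the cosine sums and -Im(X) the sine sums, which together form the Hartley cas kernel.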
Digital signal processing and digital system design using discrete cosine transform [student course]
Abstract:
The discrete cosine transform (DCT) is an important functional block for image processing applications. The implementation of a DCT has been viewed as a specialized research task. We apply a micro-architecture based methodology to the hardware implementation of an efficient DCT algorithm in a digital design course. Several circuit optimization and design space exploration techniques at the register-transfer and logic levels are introduced in class for generating the final design. The students not only learn how the algorithm can be implemented, but also receive insights about how other signal processing algorithms can be translated into a hardware implementation. Since signal processing has very broad applications, the study and implementation of an extensively used signal processing algorithm in a digital design course significantly enhances the learning experience in both digital signal processing and digital design areas for the students.
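A software reference model of the kind students might check a hardware DCT implementation against could look like the following (a sketch; the course's actual algorithm and micro-architecture are not specified in the abstract):

```python
import numpy as np

def dct2_matrix(N):
    """Orthonormal DCT-II matrix: row k, column n is
    sqrt(2/N) * c_k * cos(pi * (2n + 1) * k / (2N)), with c_0 = 1/sqrt(2)."""
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / N)

C = dct2_matrix(8)
# Orthonormality check: C @ C.T should be the 8x8 identity
```

An 8-point DCT like this one is the block size used in many image codecs, which is one reason the DCT makes a natural target for a digital design course.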
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of parameters (which can be calibrated) in a methodical and optimum manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the required data for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high engine ΔP (the difference between exhaust and intake manifold pressures) during transients, it has been recommended that transient emission models should be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flowrates, while the second mode is driven by high engine ΔP and high EGR flowrates. The EGR fraction is inaccurately estimated in both modes, while EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
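The transport-delay compensation mentioned above might be sketched as a lag search that maximizes cross-correlation between a command signal and a delayed response (illustrative only; the thesis's actual processing method for transport delays and sensor lags is not given in the abstract, and the signal names here are hypothetical):

```python
import numpy as np

def align_for_delay(command, response, max_lag=50):
    """Estimate transport delay (in samples) as the lag that maximizes
    the cross-correlation, then shift the response to re-align it."""
    lags = range(0, max_lag + 1)
    best = max(lags, key=lambda d: np.dot(command[:len(command) - d],
                                          response[d:]))
    return best, response[best:]

# Synthetic example: response is the command delayed by 7 samples
cmd = np.sin(np.linspace(0, 10, 200))
resp = np.concatenate([np.zeros(7), cmd[:-7]])
delay, aligned = align_for_delay(cmd, resp)
```

Real emissions analyzers would also need a first-order sensor-lag model, which this sketch omits.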
Abstract:
Solid-state shear pulverization (SSSP) is a unique processing technique for mechanochemical modification of polymers, compatibilization of polymer blends, and exfoliation and dispersion of fillers in polymer nanocomposites. A systematic parametric study of the SSSP technique is conducted to elucidate the detailed mechanism of the process and establish the basis for a range of current and future operation scenarios. Using neat, single component polypropylene (PP) as the model material, we varied machine type, screw design, and feed rate to achieve a range of shear and compression applied to the material, which can be quantified through specific energy input (Ep). As a universal processing variable, Ep reflects the level of chain scission occurring in the material, which correlates well to the extent of the physical property changes of the processed PP. Additionally, we compared the operating cost estimates of SSSP and conventional twin screw extrusion to determine the practical viability of SSSP.
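The specific energy input Ep used as the universal processing variable above is commonly computed as net mechanical power divided by mass throughput (a sketch under that assumption; the paper's exact formula and the numbers below are illustrative, not from the study):

```python
def specific_energy_input(motor_power_kw, idle_power_kw, feed_rate_kg_per_h):
    """Ep in kJ/g: net power absorbed by the material (total minus idle)
    divided by the mass feed rate."""
    net_power_kj_per_s = motor_power_kw - idle_power_kw
    feed_rate_g_per_s = feed_rate_kg_per_h * 1000.0 / 3600.0
    return net_power_kj_per_s / feed_rate_g_per_s

# e.g. 10 kW drawn under load, 2 kW at idle, 1.5 kg/h feed
ep = specific_energy_input(10.0, 2.0, 1.5)  # kJ of mechanical work per gram
```

Defined this way, a lower feed rate at fixed screw speed raises Ep, which is consistent with the abstract's point that Ep tracks the severity of shear and compression applied to the material.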
Abstract:
We investigated the effect of level-of-processing manipulations on “remember” and “know” responses in episodic melody recognition (Experiments 1 and 2) and how this effect is modulated by item familiarity (Experiment 2). In Experiment 1, participants performed 2 conceptual and 2 perceptual orienting tasks while listening to familiar melodies: judging the mood, continuing the tune, tracing the pitch contour, and counting long notes. The conceptual mood task led to higher d' rates for “remember” but not “know” responses. In Experiment 2, participants either judged the mood or counted long notes of tunes with high and low familiarity. A level-of-processing effect emerged again in participants’ “remember” d' rates regardless of melody familiarity. Results are discussed within the distinctive processing framework.
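The d' rates reported above come from standard signal-detection analysis, where sensitivity is the difference between the z-transformed hit and false-alarm rates (the formula is standard; the rates shown are made up for illustration, not the experiments' data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# e.g. 80% hits ("old" melody correctly recognized), 20% false alarms
sensitivity = d_prime(0.80, 0.20)
```

Computed separately for "remember" and "know" responses, this is how a level-of-processing effect can appear in one response class but not the other.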
Abstract:
We describe a recent offering of a linear systems and signal processing course for third-year electrical and computer engineering students. This course is a prerequisite for our first digital signal processing course. Students have traditionally viewed linear systems courses as mathematical and extremely difficult. Without compromising the rigor of the required concepts, we strived to make the course fun, with application-based hands-on laboratory projects. These projects can be modified easily to meet specific instructors' preferences. © 2011 IEEE. (17 refs)
Abstract:
The fracture properties of high-strength spray-formed Al alloys were investigated, with consideration of the effects of elemental additions such as zinc, manganese, and chromium and the influence of the addition of SiC particulate. Fracture resistance values between 13.6 and 25.6 MPa·m^1/2 were obtained for the monolithic alloys in the T6 and T7 conditions, respectively. The alloys with SiC particulate compared well and achieved fracture resistance values between 18.7 and 25.6 MPa·m^1/2. The spray-formed materials exhibited a loss in fracture resistance (KI) compared to ingot metallurgy 7075 alloys but had an improved performance compared to high-solute powder metallurgy alloys of similar composition. Characterization of the fracture surfaces indicated predominantly intergranular decohesion, possibly facilitated by the presence of incoherent particles at the grain boundary regions and by the large strength differential between the matrix and precipitate zone. It is believed that at the slip band-grain boundary intersection, particularly in the presence of large dispersoids and/or inclusions, microvoid nucleation would be significantly enhanced. Differences in fracture surfaces between the alloys in the T6 and T7 conditions were observed and are attributed to inhomogeneous slip distribution, which results in strain localization at grain boundaries. The best overall combination of fracture resistance properties was obtained for alloys with minimum amounts of chromium and manganese additions.
Abstract:
This study investigates multiple processing parameters of SSSP (solid-state shear pulverization), including polymer type, filler type, processing technique, severity of SSSP processing, and postprocessing. HDPE and LLDPE polymers with pristine clay and organo-clay samples are explored. Effects on crystallization, high-temperature behavior, mechanical properties, and gas barrier properties are examined. Thermal, mechanical, and morphological characterization is conducted to determine polymer/filler compatibility and superior processing methods for the polymer-clay nanocomposites.
Abstract:
Biodegradable nanoparticles are at the forefront of drug delivery research, as they provide numerous advantages over traditional drug delivery methods. An important factor affecting the ability of nanoparticles to circulate within the blood stream and interact with cells is their morphology. In this study, a novel processing method, confined impinging jet mixing, was used to form poly(lactic acid) nanoparticles through a solvent-diffusion process, with Pluronic F-127 used as a stabilizing agent. This study focused on the effects of Reynolds number (flow rate), surfactant presence in mixing, and polymer concentration on the morphology of poly(lactic acid) nanoparticles. In addition to examining the parameters affecting poly(lactic acid) morphology, this study attempted to improve nanoparticle isolation and purification methods to increase nanoparticle yield and ensure specific morphologies were not being excluded during isolation and purification. The isolation and purification methods used in this study were centrifugation and a stir cell. This study successfully produced particles having pyramidal and cubic morphologies. Despite successful production of these morphologies, the yield of non-spherical particles was very low, and great variability existed between redundant trials. Surfactant was determined to be very important for the stabilization of nanoparticles in solution but appears to be unnecessary for the formation of nanoparticles. Isolation and purification methods that produce a high yield of surfactant-free particles have still not been perfected, and additional testing will be necessary for improvement.
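The Reynolds number used above as a flow-rate proxy follows the standard definition Re = ρvD/μ (the mixer geometry and solvent properties below are hypothetical placeholders, not values from the study):

```python
def jet_reynolds_number(density_kg_m3, velocity_m_s, jet_diameter_m,
                        viscosity_pa_s):
    """Jet Reynolds number Re = rho * v * D / mu for a confined
    impinging jet mixer inlet."""
    return density_kg_m3 * velocity_m_s * jet_diameter_m / viscosity_pa_s

# e.g. a 1 mm jet of an acetone-like solvent stream at 2 m/s
re = jet_reynolds_number(790.0, 2.0, 1e-3, 3.2e-4)
```

Sweeping the jet velocity while holding geometry fixed is one way a study like this could vary Re to probe its effect on particle morphology.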
Abstract:
This thesis presents two frameworks, a software framework and a hardware core manager framework, which together can be used to develop a processing platform using a distributed system of field-programmable gate array (FPGA) boards. The software framework provides users with the ability to easily develop applications that exploit the processing power of FPGAs, while the hardware core manager framework gives users the ability to configure and interact with multiple FPGA boards and/or hardware cores. This thesis describes the design and development of these frameworks and analyzes the performance of a system that was constructed using them. The performance analysis included measuring the effect of incorporating additional hardware components into the system and comparing the system to a software-only implementation. This work draws conclusions based on the results of the performance analysis and offers suggestions for future work.
Abstract:
Biodegradable polymer/clay nanocomposites were prepared with pristine and organically modified montmorillonite in polylactic acid (PLA) and polycaprolactone (PCL) polymer matrices. Nanocomposites were fabricated using extrusion and SSSP to compare the effects of melt-state and solid-state processing on the morphology of the final nanocomposite. Characterization of various material properties was performed on prepared biodegradable polymer/clay nanocomposites to evaluate property enhancements from different clays and/or processing methods.