12 results for Speaker Recognition, Text-constrained, Multilingual, Speaker Verification, HMMs
in Digital Commons - Michigan Tech
Abstract:
David Salmela is the special guest speaker for the opening reception.
Abstract:
Susan Martin, professor emerita in the Department of Social Sciences at Michigan Technological University, will welcome attendees to the speaker series.
Abstract:
A light breakfast is included for attendees who preregister for the speaker series. A registration table will also be available for registrants who wish to pick up event materials and for walk-in registrants. Walk-in registrants are welcome, but meal tickets may not be available.
Abstract:
Retrospection & Respect: The 1913-1914 Mining/Labor Strike Symposium of 2014 and FinnForumX will host a dinner to conclude the events of the weekend, featuring keynote speaker Arnold Alanen and entertainment by the 1913 Singers. Separate registration by April 9, 2014 is required.
Abstract:
Lunch is included for attendees who preregistered for the speaker series by April 9, 2014.
Abstract:
Attendees who preregistered for the speaker series are invited to the opening reception, featuring guest speaker David Salmela and remarks from Scott See, Executive Director of the Keweenaw National Historical Park Advisory Commission. A registration table will also be available for registrants who wish to pick up event materials.
Abstract:
The goal of this research is to provide a framework for the vibro-acoustical analysis and design of a multiple-layer constrained damping structure. The existing research on damping and the viscoelastic damping mechanism is limited to four mainstream approaches: modeling techniques for damping treatments/materials; control through the electrical-mechanical effect using a piezoelectric layer; optimization by adjusting the parameters of the structure to meet design requirements; and identification of the damping material's properties through the response of the structure. This research proposes a systematic design methodology for the multiple-layer constrained damping beam that gives consideration to vibro-acoustics. A modeling technique for studying the vibro-acoustics of multiple-layered viscoelastic laminated beams with the Biot damping model is presented using a hybrid numerical model. The boundary element method (BEM) is used to model the acoustical cavity, whereas the finite element method (FEM) is the basis for the vibration analysis of the multiple-layered beam structure. Through the proposed procedure, the analysis can easily be extended to other complex geometries with arbitrary boundary conditions. The nonlinear behavior of viscoelastic damping materials is represented by the Biot damping model, taking into account the effects of frequency, temperature, and different damping materials for individual layers. A curve-fitting procedure used to obtain the Biot constants for each damping material at each temperature is explained. The results from the structural vibration analysis of selected beams agree with published closed-form results, and the radiated noise predicted for a sample beam structure using commercial BEM software is compared with the acoustical results for the same beam using the Biot damping model.
The extension of the Biot damping model to the MDOF (multiple-degrees-of-freedom) dynamics equations of a discrete system is demonstrated in order to introduce different types of viscoelastic damping materials. The mechanical properties of viscoelastic damping materials, such as shear modulus and loss factor, change with ambient temperature and frequency. The application of a multiple-layer treatment increases the damping characteristics of the structure significantly and thus helps to attenuate vibration and noise over a broad range of frequencies and temperatures. The main contributions of this dissertation comprise three major tasks: 1) studying the viscoelastic damping mechanism and the dynamics equation of a multilayer damped system incorporating the Biot damping model; 2) building the finite element method (FEM) model of the multiple-layer constrained viscoelastic damping beam and conducting the vibration analysis; and 3) extending the vibration problem to the boundary element method (BEM) based acoustical problem and comparing the results with commercial simulation software.
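The abstract does not reproduce the Biot model's functional form, so the sketch below uses a frequency-domain representation commonly associated with it; treat both the form and every numeric constant as illustrative assumptions, not values from the dissertation. It shows the frequency dependence of complex shear modulus and loss factor that the curve-fitting step must capture for each material:

```python
from math import pi

def biot_complex_modulus(omega, G_inf, a, b):
    """Complex shear modulus under a Biot damping model, using one common
    frequency-domain form: G*(w) = G_inf * (1 + sum_k a_k * iw / (iw + b_k)).
    a_k (dimensionless) and b_k (rad/s) are per-material Biot constants of
    the kind a curve-fitting procedure would identify from test data."""
    s = 1j * omega
    return G_inf * (1 + sum(ak * s / (s + bk) for ak, bk in zip(a, b)))

def loss_factor(G_star):
    """Material loss factor: ratio of loss modulus to storage modulus."""
    return G_star.imag / G_star.real

# Hypothetical constants for one damping material at one temperature
G_inf = 0.5e6          # Pa, long-term (relaxed) shear modulus
a = [1.2, 3.5]         # Biot weights (illustrative, not fitted values)
b = [80.0, 2000.0]     # Biot relaxation rates in rad/s (illustrative)

for f_hz in (10, 100, 1000):
    G = biot_complex_modulus(2 * pi * f_hz, G_inf, a, b)
    print(f"{f_hz:5d} Hz  |G*| = {abs(G):.3e} Pa  eta = {loss_factor(G):.3f}")
```

Temperature dependence, per the abstract, would be handled by fitting a separate set of constants for each temperature of interest.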
Abstract:
In an increasingly interconnected world characterized by the accelerating interplay of cultural, linguistic, and national difference, the ability to negotiate that difference in an equitable and ethical manner is a crucial skill for both individuals and larger social groups. This dissertation, Writing Center Handbooks and Travel Guidebooks: Redesigning Instructional Texts for Multicultural, Multilingual, and Multinational Contexts, considers how instructional texts that ostensibly support the negotiation of difference (i.e., accepting and learning from difference) actually promote the management of difference (i.e., rejecting, assimilating, and erasing difference). As a corrective to this focus on managing difference, chapter two constructs a theoretical framework that facilitates the redesign of handbooks, guidebooks, and similar instructional texts. This framework centers on reflexive design practices and is informed by literacy theory (Gee; New London Group; Street), social learning theory (Wenger), globalization theory (Nederveen Pieterse), and composition theory (Canagarajah; Horner and Trimbur; Lu; Matsuda; Pratt). By implementing reflexive design practices in the redesign of instructional texts, this dissertation argues that instructional texts can promote the negotiation of difference and a multicultural/multilingual sensibility that accounts for twenty-first century linguistic and cultural realities. Informed by the theoretical framework of chapter two, chapters three and four conduct a rhetorical analysis of two forms of instructional text that are representative of the larger genre: writing center coach handbooks and travel guidebooks to Hong Kong. This rhetorical analysis reveals how both forms of text employ rhetorical strategies that uphold dominant monolingual and monocultural assumptions. 
Alternative rhetorical strategies are then proposed that can be used to redesign these two forms of instructional texts in a manner that aligns with multicultural and multilingual assumptions. These chapters draw on the work of scholars in Writing Center Studies (Boquet and Lerner; Carino; DiPardo; Grimm; North; Severino) and Technical Communication (Barton and Barton; Dilger; Johnson; Kimball; Slack), respectively. Chapter five explores how the redesign of coach handbooks and travel guidebooks proposed in this dissertation can be conceptualized as a political act. Ultimately, this dissertation argues that instructional texts are powerful heuristic tools that can enact social change if they are redesigned to foster the negotiation of difference and to promote multicultural/multilingual world views.
Abstract:
Groundwater pumping from aquifers in hydraulic connection with nearby streams is known to cause adverse impacts by decreasing flows to levels below those necessary to maintain aquatic ecosystems. The recent passage of the Great Lakes--St. Lawrence River Basin Water Resources Compact has brought attention to this issue in the Great Lakes region. In particular, the legislation requires the Great Lakes states to enact measures for limiting water withdrawals that can cause adverse ecosystem impacts. This study explores how both hydrogeologic and environmental flow limitations constrain groundwater availability in the Great Lakes Basin. A methodology for calculating maximum allowable pumping rates is presented. Groundwater availability across the basin is shown to be constrained by a combination of hydrogeologic yield and environmental flow limitations varying over both local and regional scales. The results are sensitive to factors such as pumping time and streamflow depletion limits as well as streambed conductance. Understanding how these restrictions constrain groundwater usage and which hydrogeologic characteristics and spatial variables have the most influence on potential streamflow depletions has important water resources policy and management implications.
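The abstract does not spell out the depletion formula behind the maximum-allowable-pumping calculation; as a hedged stand-in, the classical Glover analytical solution (idealized homogeneous aquifer, fully penetrating stream) illustrates how a streamflow-depletion limit converts into a cap on pumping rate, and why the result is sensitive to pumping time. All parameter names and values here are illustrative assumptions:

```python
from math import erfc, sqrt

def depletion_fraction(d, T, S, t):
    """Glover solution: fraction of the pumping rate captured from the
    stream after pumping time t, for an idealized homogeneous aquifer
    and a fully penetrating stream.
    d: well-to-stream distance [m]; T: transmissivity [m^2/day];
    S: storativity [-]; t: pumping time [days]."""
    return erfc(sqrt(d * d * S / (4.0 * T * t)))

def max_allowable_pumping(q_limit, d, T, S, t):
    """Largest pumping rate [m^3/day] whose predicted depletion at time t
    stays within a streamflow-depletion limit q_limit [m^3/day]."""
    return q_limit / depletion_fraction(d, T, S, t)

# Illustrative numbers: a well 500 m from a stream, 30-day vs. 1-year horizon.
# Longer pumping times capture more streamflow, so the allowable rate shrinks.
for t_days in (30, 365):
    q_max = max_allowable_pumping(q_limit=100.0, d=500.0, T=500.0, S=0.2, t=t_days)
    print(f"t = {t_days:4d} d: max allowable pumping = {q_max:.1f} m^3/day")
```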
Abstract:
The fields of Rhetoric and Communication usually assume a competent speaker who is able to speak well with conscious intent; however, what happens when intent and comprehension are intact but communicative facilities are impaired (e.g., by stroke or traumatic brain injury)? What might a focus on communicative success be able to tell us in those instances? This project considers this question by examining communication disorders through identifying and analyzing patterns of (dis)fluent speech between 10 aphasic and 10 non-aphasic adults. The analysis in this report is centered on a collection of data provided by the AphasiaBank database. The database's collection protocol guides aphasic and non-aphasic participants through a series of language assessments, and for my re-analysis of the database's transcripts I consider what communicative success is and how it is demonstrated during a re-telling of the Cinderella narrative. I conducted a thorough examination of a set of participant transcripts to understand the contexts in which speech errors occur and how (dis)fluencies may follow from aphasic and non-aphasic participants' speech patterns. An inductive mixed-methods approach, informed by grounded theory and by qualitative and linguistic analyses of the transcripts, functioned as a means to balance the classification of data, providing a foundation for all sampling decisions. A close examination of the transcripts and the codes of the AphasiaBank database suggests that while the coding is abundant and detailed, further levels of coding and analysis may be needed to reveal underlying similarities and differences in aphasic vs. non-aphasic linguistic behavior. Through four successive levels of increasingly detailed analysis, I found that patterns of repair by aphasics and non-aphasics differed primarily in degree rather than kind. This finding may have therapeutic impact by reassuring aphasics that they are on the right track to achieving communicative fluency.
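AphasiaBank transcripts use the CHAT transcription format, whose standard codes mark disfluencies such as retraced repetitions ([/]), self-corrections ([//]), and filled pauses (&-um). A minimal sketch of the kind of marker counting that underlies this sort of re-analysis is below; the sample utterance lines are invented for illustration, not drawn from the database:

```python
import re
from collections import Counter

# Invented CHAT-style lines: *PAR is the participant, *INV the investigator.
transcript = [
    "*PAR:\tand then the &-um the [/] the prince found the slipper .",
    "*PAR:\tshe went [//] ran to the ball .",
    "*INV:\ttell me more about what happened at the ball .",
]

def count_disfluencies(lines, speaker="*PAR"):
    """Tally common CHAT disfluency markers in one speaker's utterances."""
    counts = Counter()
    for line in lines:
        if not line.startswith(speaker):
            continue  # skip other speakers' turns
        counts["repetition [/]"] += len(re.findall(r"\[/\]", line))
        counts["retracing [//]"] += len(re.findall(r"\[//\]", line))
        counts["filler &-"] += len(re.findall(r"&-\w+", line))
    return counts

print(count_disfluencies(transcript))
```

Comparing such tallies between the aphasic and non-aphasic groups is one simple way to quantify "degree rather than kind" differences in repair patterns.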
Abstract:
The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition, and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition, and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. MATLAB software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight any similarities or differences. Identifying patterns in data of high dimensions (more than three) is difficult because graphical representation of the data is impossible; PCA is therefore a powerful method for analyzing such data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve on PCA. The joint transform correlator (JTC) is an optical correlator used to synthesize a frequency-plane filter for coherent optical systems. The IPCA algorithm generally behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed.
On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and a reasonable speed. Finally, in detection and recognition, the performance of the digital model is better than the performance of the optical model.
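The thesis's IPCA algorithm is not described in the abstract, but the compress-then-reconstruct idea behind PCA can be shown in miniature. The sketch below projects 2-D points onto their first principal component (one number per point instead of two) and reconstructs them; image compression applies the same idea with pixel vectors in place of 2-D points. The closed-form 2x2 eigen-solution is used purely to keep the example self-contained:

```python
from math import sqrt

def pca_1d(points):
    """Project 2-D points onto their first principal component and
    reconstruct them: PCA compression in miniature."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Entries of the 2x2 covariance matrix [[a, b], [b, c]]
    a = sum((p[0] - mx) ** 2 for p in points) / n
    c = sum((p[1] - my) ** 2 for p in points) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Largest eigenvalue in closed form (symmetric 2x2 matrix)
    lam = (a + c) / 2 + sqrt(((a - c) / 2) ** 2 + b ** 2)
    # Corresponding unit eigenvector: from b*vx + (c - lam)*vy = 0
    vx, vy = ((lam - c, b) if abs(b) > 1e-12 else (1.0, 0.0))
    norm = sqrt(vx * vx + vy * vy)
    vx, vy = vx / norm, vy / norm
    # Compress: one score per point; reconstruct from mean + score * direction
    scores = [(p[0] - mx) * vx + (p[1] - my) * vy for p in points]
    recon = [(mx + s * vx, my + s * vy) for s in scores]
    return scores, recon

pts = [(0.0, 0.1), (1.0, 2.2), (2.0, 3.9), (3.0, 6.1), (4.0, 8.0)]
scores, recon = pca_1d(pts)
print(scores)
print(recon)
```

When the data truly lie near a line, the reconstruction error is small, which is why PCA achieves high compression on highly correlated data such as face images.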
MINING AND VERIFICATION OF TEMPORAL EVENTS WITH APPLICATIONS IN COMPUTER MICRO-ARCHITECTURE RESEARCH
Abstract:
Computer simulation programs are essential tools for scientists and engineers to understand a particular system of interest. As expected, the complexity of the software increases with the depth of the model used. In addition to the exigent demands of software engineering, verification of simulation programs is especially challenging because the models represented are complex and ridden with unknowns that will be discovered by developers in an iterative process. To manage such complexity, advanced verification techniques for continually matching the intended model to the implemented model are necessary. Therefore, the main goal of this research work is to design a useful verification and validation framework that is able to identify model representation errors and is applicable to generic simulators. The framework that was developed and implemented consists of two parts. The first part is First-Order Logic Constraint Specification Language (FOLCSL) that enables users to specify the invariants of a model under consideration. From the first-order logic specification, the FOLCSL translator automatically synthesizes a verification program that reads the event trace generated by a simulator and signals whether all invariants are respected. The second part consists of mining the temporal flow of events using a newly developed representation called State Flow Temporal Analysis Graph (SFTAG). While the first part seeks an assurance of implementation correctness by checking that the model invariants hold, the second part derives an extended model of the implementation and hence enables a deeper understanding of what was implemented. The main application studied in this work is the validation of the timing behavior of micro-architecture simulators. The study includes SFTAGs generated for a wide set of benchmark programs and their analysis using several artificial intelligence algorithms. 
This work improves the computer architecture research and verification processes as shown by the case studies and experiments that have been conducted.
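FOLCSL's syntax and translator are not shown in the abstract; as an illustration of the kind of checker such a tool would synthesize from a first-order invariant, the sketch below hand-codes one invariant ("every acquire of a resource is matched by a later release, and nothing is released or re-acquired while in the wrong state") and scans an event trace for violations. The event names and trace are hypothetical:

```python
def check_invariant(trace, open_ev="acquire", close_ev="release"):
    """Scan an event trace (list of (event, resource) pairs) and report
    violations of a simple matched-pair invariant. Illustrative stand-in
    for a checker synthesized from a first-order logic specification."""
    held = set()
    violations = []
    for i, (ev, res) in enumerate(trace):
        if ev == open_ev:
            if res in held:
                violations.append((i, f"double {open_ev} of {res}"))
            held.add(res)
        elif ev == close_ev:
            if res not in held:
                violations.append((i, f"{close_ev} of unheld {res}"))
            held.discard(res)
    for res in sorted(held):  # anything still held at trace end
        violations.append((len(trace), f"{res} never released"))
    return violations

good = [("acquire", "bus"), ("release", "bus")]
bad = [("acquire", "bus"), ("acquire", "bus"), ("release", "cache")]
print(check_invariant(good))
print(check_invariant(bad))
```

A simulator that emits its event trace in this form can be checked continually as the model evolves, which is the "continually matching the intended model to the implemented model" idea in the abstract.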