832 results for SPEECH-AID PROSTHESIS
Abstract:
Since 1986, the Canadian Public Administration has been required to analyze the socio-economic impact of new regulatory requirements or regulatory changes. To report on its analysis, a Regulatory Impact Analysis Statement (RIAS) is produced and published in the Canada Gazette with the proposed regulation to which it pertains, for notice to, and comments by, interested parties. After the allocated time for comments has elapsed, the regulation is adopted with a final version of the RIAS. Both documents are again published in the Canada Gazette. As a result, the RIAS acquires the status of an official public document of the Government of Canada, and its content can be argued in court as an extrinsic aid to the interpretation of a regulation. In this paper, an analysis of empirical findings on the uses of this interpretative tool by the Federal Court of Canada is made. A sample of decisions classified as unorthodox shows that judges are making determinations on the basis of two distinct sets of arguments built from the information found in a RIAS, which the author calls "technocratic" and "democratic". The author argues that these uses raise the general question "What makes law possible in our contemporary legal systems?", for they underline enduring legal problems pertaining to the knowledge and the acceptance of the law by the governed. She concludes that this new interpretive trend of making technocratic and democratic uses of a RIAS in case law should be monitored closely, as it may signal a greater change than foreseen, and perhaps an unwanted one, regarding the relationship between the government and the judiciary.
Abstract:
Thesis written under the co-mentorship of Richard Chase Smith, Ph.D., of El Instituto del Bien Comun (IBC) in Peru. The attached file is a PDF created in Word; the PDF format preserves the accuracy of the many linguistic symbols found in the text.
Abstract:
This paper examines the ethics of refugee aid, attempting to answer the question "Why do States engage in refugee aid?" Moving beyond the simplistic answer based on the notion of charity, which demonstrably fits ill with the essentially positivist methodology of conducting refugee aid, an ethical model is constructed based on the Weberian concept of action as an instrument of rationality. This is supported with critical readings from Hannah Arendt, amongst others, and by my own experiences as a former UNHCR aid worker. However, although this model better captures ground realities, it negates the individuality and humanity of refugees. Refugee aid is therefore finally presented as a form of global, transnational justice, based on readings from Amartya Sen.
Abstract:
The protein AID (activation-induced deaminase) plays a central role in the adaptive immune response. By deaminating deoxycytidines into deoxyuridines at the immunoglobulin genes, it initiates somatic hypermutation (SHM), immunoglobulin gene conversion (iGC) and class-switch recombination (CSR). It is essential to an efficient humoral response, contributing to antibody affinity maturation and isotype switching. However, its mutagenic activity can be oncogenic and cause genomic instability conducive to the development of cancers and autoimmune diseases. It is therefore critical to regulate AID, in particular its protein levels, in order to generate an efficient immune response while minimizing the risks of cancer and autoimmunity. One element of regulation is that AID shuttles from the cytoplasm to the nucleus but remains mostly cytoplasmic at steady state. AID is also more stable in the cytoplasm than in the nucleus, which helps limit its presence near the DNA. The goal of this thesis was to identify new partners and determinants of AID that regulate its stability and its biological functions. First, we identified AID as a new client protein of HSP90. We showed that HSP90 interacts with AID in the cytoplasm, which prevents the poly-ubiquitination of AID and its degradation by the proteasome. Consequently, inhibition of HSP90 results in a significant decrease in endogenous AID levels and correlates with a proportional reduction of its biological functions in antibody diversification, but also in the introduction of aberrant mutations. Second, we showed that the initial step in AID stabilization by the HSP90 chaperoning pathway depends on HSP40 and HSP70. In particular, DnaJa1, a member of the HSP40 family of proteins, is limiting for the stabilization of AID in the cytoplasm. Farnesylation of DnaJa1 is important for the interaction between DnaJa1 and AID, and modulating DnaJa1 levels or its farnesylation state affects both endogenous AID levels and antibody diversification. DNAJA1-/- mice show a compromised immune response upon immunization, due to reduced AID levels and a class-switching defect. Third, we showed that the AID protein is intrinsically more unstable than its APOBEC paralogs. We identified the aspartic acid at the second position of AID, as well as a PEST-like motif, as modulators of AID stability. Modifying these motifs increases AID stability and results in more efficient antibody diversification. In conclusion, the intrinsic instability of AID is an element of regulation of antibody diversification. This instability is partly compensated in the cytoplasm by the protective action of the DnaJa1-HSP90 chaperoning pathway. Moreover, the use of HSP90 or farnesyltransferase inhibitors could be an interesting tool for indirectly modulating AID levels and for treating AID-driven lymphomas/leukemias and autoimmune diseases.
Abstract:
This exploratory research aims to document, from the practitioners' point of view, the conditions necessary for setting up projects that use digital storytelling tools, as well as the main contributions of these tools to intervention. These tools may be digital stories, which are short videos (two to five minutes) combining images, music, text, voice and animation, or short audio files, also known as podcasts. They may also be interactive video games or a video montage assembled from excerpts of testimonies. In a context where intervention practices, particularly in public services, are increasingly standardized and regulated, research that explores intervention tools drawing on creativity is highly relevant. Moreover, this field has so far been only scarcely explored in social work. Semi-structured interviews were conducted with eight practitioners who had used these tools in their practice. The analysis of their accounts first highlights the conditions necessary for carrying out this type of project, as well as the ethical questions that accompany them. As for the main contributions of these tools, they lie, on the one hand, in the collaborative creative process, which enriches intervention by providing a freer space for expression in which practitioners and service users build relationships that modify the hierarchical relation between helper and helped. On the other hand, the professional attention given to producing and disseminating the resulting works helps give greater visibility to people who are often excluded from the public sphere. Thus, in addition to exploring what an artistic tool contributes to intervention, this research also makes it possible to analyze the issues of visibility and recognition associated with the use of participatory media.
Abstract:
The influence of partisan politics on public policy is a much debated issue in political science. With respect to foreign policy, often considered to stand above party politics, the question appears even more problematic. This comparison of foreign aid policies in 16 OECD countries develops a structural equation model and uses LISREL analysis to demonstrate that parties do matter, even in international affairs. Social-democratic parties have an effect on a country's level of development assistance. This effect, however, is neither immediate nor direct. First, it appears only in the long run. Second, the relationship between leftist partisan strength and foreign aid works through welfare state institutions and social spending. Our findings indicate how domestic politics shapes foreign conduct. We confirm the empirical relevance of cumulative partisan scores and show how the influence of parties is mediated by other political determinants.
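The mediated pathway summarized above (partisan strength → welfare spending → aid effort) can be illustrated with a simple two-stage regression. This is only a minimal sketch of the mediation logic, not the authors' LISREL structural equation model; the variable names and the synthetic data are assumptions made purely for illustration.

```python
# Illustrative sketch of the mediation idea: left-party strength affecting
# foreign aid indirectly through welfare-state spending. NOT the paper's
# LISREL model; variables and synthetic data are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 16 * 30  # hypothetical country-year observations

left_strength = rng.normal(size=n)                         # cumulative left-party score
social_spending = 0.6 * left_strength + rng.normal(size=n) # mediator
aid_effort = 0.5 * social_spending + rng.normal(size=n)    # ODA as a share of GNI

# Stage 1: does partisan strength predict the mediator (welfare spending)?
m1 = sm.OLS(social_spending, sm.add_constant(left_strength)).fit()

# Stage 2: does the mediator carry the effect on aid, controlling for parties?
X = sm.add_constant(np.column_stack([left_strength, social_spending]))
m2 = sm.OLS(aid_effort, X).fit()

print(m1.params, m2.params)
# A mediated effect shows up as: a significant stage-1 coefficient, a significant
# spending coefficient in stage 2, and a small direct party effect in stage 2.
```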
Abstract:
Medical fields require fast, simple and noninvasive diagnostic techniques. Several such methods have become possible because of the growth of technology that provides the means of collecting and processing signals. The present thesis details work done in the field of voice signals. New methods of analysis, such as nonlinear dynamics, have been developed to understand the complexity of voice signals and to explore their dynamic nature. The purpose of this thesis is to characterize the complexity of pathological voice signals relative to healthy signals and to differentiate stuttering signals from healthy signals. The efficiency of various acoustic as well as nonlinear time-series methods is analysed. Three groups of samples are used: healthy individuals, subjects with vocal pathologies, and stuttering subjects. Individual vowels and continuous speech data for the utterance of the Malayalam sentence "iruvarum changatimaranu" (in English, "Both are good friends") are recorded using a microphone. The recorded audio is converted to digital signals and subjected to analysis. Acoustic perturbation measures such as fundamental frequency (F0), jitter, shimmer and zero crossing rate (ZCR) are computed, and nonlinear measures such as the maximum Lyapunov exponent (lambda_max), correlation dimension (D2), Kolmogorov entropy (K2) and a newer measure of entropy, permutation entropy (PE), are evaluated for all three groups of subjects. Permutation entropy is a nonlinear complexity measure which can efficiently distinguish regular from complex behaviour in a signal and extract information about changes in the dynamics of the process by indicating sudden changes in its value. The results show that nonlinear dynamical methods seem to be a suitable technique for voice signal analysis, due to the chaotic component of the human voice. Permutation entropy is well suited because of its sensitivity to uncertainty, since the pathologies are characterized by an increase in signal complexity and unpredictability. Pathological groups have higher entropy values compared to the normal group, while stuttering signals have lower entropy values compared to normal signals. PE is effective in characterizing the level of improvement after two weeks of speech therapy in the case of stuttering subjects, and in characterizing the dynamical difference between healthy and pathological subjects. This suggests that PE can improve and complement the voice analysis methods currently available to clinicians. The work establishes the application of the simple, inexpensive and fast PE algorithm for diagnosis in vocal disorders and stuttering subjects.
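Since permutation entropy is the central measure in this work, a minimal sketch of the normalized PE computation is given below; the embedding order m = 3 and delay tau = 1 are illustrative choices, not necessarily the thesis's settings.

```python
# Minimal sketch of normalized permutation entropy (PE); m and tau are
# illustrative defaults, not the thesis's exact parameters.
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy of a 1-D signal (0 = regular, 1 = random)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    # Ordinal patterns: the rank order of each embedded vector of length m.
    patterns = [tuple(np.argsort(x[i:i + m * tau:tau])) for i in range(n)]
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    H = -np.sum(p * np.log2(p))
    return H / np.log2(factorial(m))  # normalize by log2(m!)

# Example: white noise scores much higher PE than a pure sine of the same length.
t = np.linspace(0, 1, 2000)
print(permutation_entropy(np.sin(2 * np.pi * 50 * t)))                   # low
print(permutation_entropy(np.random.default_rng(1).normal(size=2000)))   # near 1
```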
Abstract:
This thesis investigates the potential use of zero-crossing information for speech sample estimation. It provides a new method to estimate speech samples using composite zero-crossings. A simple linear interpolation technique is developed for this purpose. By using this method, the A/D converter can be avoided in a speech coder. The newly proposed zero-crossing sampling theory is supported with results of computer simulations using real speech data. The thesis also presents two methods for voiced/unvoiced classification. One of these methods is based on a distance measure which is a function of the short-time zero-crossing rate and the short-time energy of the signal. The other is based on the attractor dimension and entropy of the signal. Of the two methods, the first is simple and requires only very few computations compared to the other; this method is used in a later chapter to design an enhanced Adaptive Transform Coder. The later part of the thesis addresses a few problems in Adaptive Transform Coding and presents an improved ATC. The transform coefficient with maximum amplitude is treated as 'side information', which enables more accurate bit assignment and step-size computation. A new bit reassignment scheme is also introduced in this work. Finally, an ATC which switches between the Discrete Cosine Transform and the Discrete Walsh-Hadamard Transform for voiced and unvoiced speech segments respectively is presented. Simulation results are provided to show the improved performance of the coder.
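The first voiced/unvoiced method described above relies on a distance measure built from the short-time zero-crossing rate and short-time energy. A minimal sketch of that idea follows; the frame length, reference values and the exact form of the distance are illustrative assumptions, not the thesis's calibrated parameters.

```python
# Sketch of a voiced/unvoiced decision from short-time energy and zero-crossing
# rate. Reference points and weights are illustrative assumptions.
import numpy as np

def frame_features(frame):
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0  # crossings per sample
    return energy, zcr

def classify_frame(frame, e_voiced=1e-2, e_unvoiced=1e-4,
                   z_voiced=0.05, z_unvoiced=0.3):
    """Label a frame 'voiced' or 'unvoiced' by its distance to two reference points."""
    e, z = frame_features(frame)
    d_v = (np.log10(e + 1e-12) - np.log10(e_voiced)) ** 2 + (z - z_voiced) ** 2
    d_u = (np.log10(e + 1e-12) - np.log10(e_unvoiced)) ** 2 + (z - z_unvoiced) ** 2
    return "voiced" if d_v < d_u else "unvoiced"

# Example: a 100 Hz sine frame at 8 kHz behaves like voiced speech.
fs = 8000
t = np.arange(240) / fs
print(classify_frame(0.1 * np.sin(2 * np.pi * 100 * t)))
```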
Abstract:
Biometrics deals with the physiological and behavioural characteristics of an individual to establish identity. Fingerprint-based authentication is the most advanced biometric authentication technology. The minutiae-based fingerprint identification method offers a reasonable identification rate. The minutiae feature map consists of about 70-100 minutia points, and matching accuracy drops as the size of the database grows. Hence it is essential to make the fingerprint feature code as small as possible so that identification becomes much easier. In this research, a novel global-singularity-based fingerprint representation is proposed. The fingerprint baseline, the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed from the singularities and the fingerprint baseline. The feature vector comprises the polygon's angles, sides, area and type, and the ridge counts between the singularities. A 100% recognition rate is achieved with this method. The method is compared with the conventional minutiae-based recognition method in terms of computation time, receiver operating characteristic (ROC) and feature vector length. Speech is a behavioural biometric modality and can be used for speaker identification. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm. A backpropagation-based Artificial Neural Network is trained to identify the clustered speech code. The performance of the neural network classifier is compared with a VQ-based minimum-Euclidean-distance classifier. Biometric systems that use a single modality are usually affected by problems like noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. A multi-finger, feature-level-fusion-based fingerprint recognition system is therefore developed, and its performance is measured in terms of the ROC curve. Score-level fusion of the fingerprint- and speech-based recognition systems is performed, and 100% accuracy is achieved over a considerable range of matching thresholds.
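The speaker-modeling step described above (MFCC extraction followed by k-means clustering) can be sketched as follows. librosa and scikit-learn are assumed here, since the work does not name its toolchain, and the frame settings and number of clusters are illustrative; the thesis's ANN classifier would replace the simple distance-based matching shown at the end.

```python
# Sketch of MFCC extraction + k-means codebook building for a speaker.
# Toolchain (librosa, scikit-learn) and parameters are assumptions.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def speaker_codebook(wav_path, n_mfcc=13, n_clusters=16):
    """Fit a k-means codebook on the MFCC frames of one speaker's utterance."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # frames x coefficients
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(mfcc)

def match_score(codebook, wav_path, n_mfcc=13):
    """Average distance of a test utterance's MFCC frames to the codebook centroids
    (lower = better match); the thesis's ANN classifier replaces this step."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T
    d = codebook.transform(mfcc)   # distances from each frame to each centroid
    return d.min(axis=1).mean()
```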
Abstract:
This thesis investigates the potential use of Linear Predictive Coding in speech communication applications. A Modified Block Adaptive Predictive Coder is developed which reduces the computational burden and complexity, without sacrificing speech quality, compared to the conventional adaptive predictive coding (APC) system. For this, changes in the evaluation methods have been developed. The method differs from the usual APC system in that the difference between the true and the predicted value is not transmitted. This allows the high-order predictor in the transmitter section of a predictive coding system to be replaced by a simple delay unit, which makes the transmitter quite simple. Also, the block length used in processing the speech signal is adjusted relative to the pitch period of the signal being processed, rather than choosing a constant length as hitherto done by other researchers. The efficiency of the newly proposed coder is supported with results of computer simulation using real speech data. Three methods for voiced/unvoiced/silent/transition classification are presented. The first is based on energy, zero-crossing rate and the periodicity of the waveform. The second method uses the normalised correlation coefficient as the main parameter, while the third method utilizes a pitch-dependent correlation factor. The third algorithm, which gives the minimum error probability, is chosen in a later chapter to design the modified coder. The thesis also presents a comparative study between the autocorrelation and covariance methods used in the evaluation of the predictor parameters. It is shown that the autocorrelation method is superior to the covariance method with respect to filter stability and also in an SNR sense, though the increase in gain is only small. The Modified Block Adaptive Coder switches from pitch prediction to spectrum prediction when the speech segment changes from a voiced or transition region to an unvoiced region. The experiments conducted in coding, transmission and simulation used speech samples from Malayalam and English phrases. A proposal for a speaker recognition system and a phoneme identification system is also outlined towards the end of the thesis.
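A minimal sketch of the autocorrelation method of LPC analysis referred to above, using the Levinson-Durbin recursion, is given below. The predictor order and frame handling are illustrative; this is not the thesis's modified block-adaptive coder itself.

```python
# Sketch of LPC analysis by the autocorrelation method (Levinson-Durbin).
# Order and windowing are illustrative defaults.
import numpy as np

def lpc_autocorrelation(frame, order=10):
    """Return analysis-filter coefficients a[0..order] (a[0] = 1) and the
    residual prediction-error energy for one speech frame."""
    frame = np.asarray(frame, dtype=float) * np.hamming(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # R[0], R[1], ...
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a, err

# Example: LPC-10 analysis on one synthetic voiced-like frame (8 kHz, 30 ms).
fs = 8000
t = np.arange(240) / fs
frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)
coeffs, residual = lpc_autocorrelation(frame, order=10)
print(coeffs[:4], residual)
```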
Abstract:
Speech processing and consequent recognition are important areas of Digital Signal Processing, since speech allows people to communicate more naturally and efficiently. In this work, a speech recognition system is developed for recognizing digits in Malayalam. For recognizing speech, features have to be extracted from the signal, and hence the feature extraction method plays an important role in speech recognition. Here, front-end processing for extracting the features is performed using two wavelet-based methods, namely Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). A Naive Bayes classifier is used for classification. With the Naive Bayes classifier, DWT produced a recognition accuracy of 83.5% and WPD produced an accuracy of 80.7%. This paper is intended to devise a new feature extraction method which improves the recognition accuracy. So a new method, called Discrete Wavelet Packet Decomposition (DWPD), is introduced which utilizes the hybrid features of both DWT and WPD. The performance of this new approach is evaluated, and it produced an improved recognition accuracy of 86.2% with the Naive Bayes classifier.
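A minimal sketch of a DWT-feature plus Naive Bayes pipeline of the kind described above is shown below. PyWavelets and scikit-learn are assumed (the paper does not name its tools), and the wavelet choice, decomposition level and sub-band-energy features are illustrative rather than the paper's exact configuration.

```python
# Sketch of DWT feature extraction + Naive Bayes digit classification.
# Library choices and parameters are assumptions for illustration.
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def dwt_features(signal, wavelet="db4", level=4):
    """Log sub-band energies of a multilevel DWT as a fixed-length feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

def train_digit_recognizer(signals, labels):
    """Hypothetical training step: `signals` is a list of 1-D digit utterances,
    `labels` their digit classes (0-9)."""
    X = np.vstack([dwt_features(s) for s in signals])
    return GaussianNB().fit(X, labels)
```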
Abstract:
Speech is the most natural means of communication among human beings, and speech processing and recognition have been intensive areas of research for the last five decades. Since speech recognition is a pattern recognition problem, classification is an important part of any speech recognition system. In this work, a speech recognition system is developed for recognizing speaker-independent spoken digits in Malayalam. Voice signals are sampled directly from the microphone. The proposed method is implemented for 1000 speakers uttering 10 digits each. Since the speech signals are affected by background noise, the signals are cleaned by removing the noise using a wavelet denoising method based on soft thresholding. The features are extracted from the signals using Discrete Wavelet Transforms (DWT), because DWTs are well suited to processing non-stationary signals like speech owing to their multi-resolution, multi-scale analysis characteristics. Speech recognition is a multiclass classification problem, so the resulting feature vectors are classified using three classifiers capable of handling multiple classes, namely Artificial Neural Networks (ANN), Support Vector Machines (SVM) and Naive Bayes. During the classification stage, the classifiers are trained on feature vectors from known patterns and then evaluated on a separate test data set. The performance of each classifier is evaluated in terms of recognition accuracy. All three methods produced good recognition accuracy: the DWT and ANN combination produced a recognition accuracy of 89%, the DWT and SVM combination produced 86.6%, and the DWT and Naive Bayes combination produced 83.5%. ANN is found to be the best among the three methods.
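The denoising step described above, soft thresholding of DWT detail coefficients, can be sketched as follows. PyWavelets is assumed, and the universal threshold used here is a common default rather than necessarily the thesis's exact rule.

```python
# Sketch of wavelet denoising by soft-thresholding DWT detail coefficients.
# Wavelet, level and threshold rule are illustrative assumptions.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise estimate from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))  # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:len(signal)]

# Example: denoising a noisy tone reduces the error relative to the clean signal.
fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.2 * np.random.default_rng(0).normal(size=fs)
print(np.mean((wavelet_denoise(noisy) - clean) ** 2)
      < np.mean((noisy - clean) ** 2))
```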