Abstract:
This paper reports the findings from a study of the learning of English intonation by Spanish speakers within the discourse mode of L2 oral presentation. The purpose of this experiment is, firstly, to compare four prosodic parameters before and after an L2 discourse intonation training programme and, secondly, to confirm whether subjects, after the aforementioned training, are able to match the form of these four prosodic parameters to the discourse-pragmatic function of dominance and control. The instructions and tasks were designed to create the oral and written corpora, and Brazil's Pronunciation for Advanced Learners of English was adapted to the pedagogical aims of the present study. The learners' pre- and post-tasks were acoustically analysed, and a pre-/post-questionnaire design was applied to interpret the acoustic analysis. Results indicate that most of the subjects acquired a wider choice of the four prosodic parameters, partly due to the prosodically annotated transcripts that were developed throughout the L2 discourse intonation course. However, qualitative and quantitative data reveal that most subjects failed to match the forms to their appropriate pragmatic functions for expressing dominance and control in an L2 oral presentation.
Abstract:
The interaction of ocean waves, currents, and seabed roughness is a complicated phenomenon in fluid dynamics. This paper describes the governing equations of motion for this phenomenon under viscous and inviscid conditions, and presents a study and analysis of the experimental results of a set of physical models of waves, currents, and artificial roughness. It consists of three parts. First, by establishing some typical roughness patterns, the effect of seabed roughness on a uniform current is studied, and the Manning coefficient of each type is evaluated to find the critical situation for different arrangements. Second, the effect of roughness on changes in wave parameters, such as wave height, wave length, and the wave dispersion equations, is studied. Third, the combined waves + current + roughness patterns are established in a flume equipped with a wave and current generator; at this stage, different analyses are carried out to find the governing dimensionless numbers and to present the numbers that define the conditions and formulations of this phenomenon. The first step of the model is verified against the so-called Chinese method, the second step against Kamphuis (1975), and the third step against van Rijn (1990) and Brevik and Aas (1980); in all cases, reasonable agreement has been obtained. Finally, new dimensionless parameters are presented for this complicated phenomenon.
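For reference, the Manning coefficient mentioned above enters the standard open-channel flow relation (a textbook formula in SI units, not an equation taken from this paper):

\[ V = \frac{1}{n}\, R_h^{2/3}\, S^{1/2}, \]

where \(V\) is the depth-averaged velocity, \(n\) the Manning roughness coefficient, \(R_h\) the hydraulic radius, and \(S\) the slope of the energy grade line; the different roughness arrangements studied here effectively change \(n\).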
Abstract:
The effect of swell on wind wave growth has been a topic of active research for many years, with inconsistent results. The details are often contradictory among investigations, and there remain a variety of competing theories to explain these phenomena. In this research, we consider wave, wind and temperature data from the Persian Gulf (Busher region) for the years 1995, 1996 and 1999. This study provides estimates of the wave conditions and of the atmospheric stability that influences wind waves. Results are also compared with data recorded by a buoy in the Caspian Sea (Neka region) during 1989. In the second part of this work we estimate the non-dimensional energy and non-dimensional peak frequency as functions of the non-dimensional fetch and the Bulk Richardson number for the Persian Gulf (Busher region). These results also agree well with similar results for the Caspian Sea. The acquired relations can be used to compute wind wave parameters. The results for the Persian Gulf also show that the relationship of non-dimensional energy to wave age is independent of the presence of swell. Finally, the WAM model was run for the Persian Gulf for 3-8 September 2002. The results show that swell in the Persian Gulf reduces the energy density of wind waves by up to 10%, but the growth rate at the peak frequency is reduced by only up to 4%, and the spectral peak frequency is increased by only 1%.
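For context, the non-dimensional quantities referred to above are conventionally defined as (standard definitions, not formulas quoted from this paper):

\[ \tilde{X} = \frac{gX}{U_{10}^2}, \qquad \tilde{E} = \frac{g^2 E}{U_{10}^4}, \qquad \tilde{f}_p = \frac{f_p U_{10}}{g}, \]

where \(X\) is the fetch, \(E\) the total wave energy (variance), \(f_p\) the spectral peak frequency and \(U_{10}\) the 10-m wind speed. Fetch-limited growth is usually summarized by power laws of the form \(\tilde{E} = a\,\tilde{X}^{p}\) and \(\tilde{f}_p = b\,\tilde{X}^{-q}\); the coefficients are left unspecified here because this study fits its own relations, including a dependence on the Bulk Richardson number.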
Abstract:
Eucalyptus pellita demonstrated good growth and wood quality traits in this study, with young plantation-grown timber being suitable for both solid and pulp wood products. All traits examined were under moderate levels of genetic control, with little genotype-by-environment interaction when grown on two contrasting sites in Vietnam. Eucalyptus pellita currently has a significant role in reforestation in the tropics. Research to support expanded use of this species is needed: in particular, research to better understand the genetic control of key traits will facilitate the development of genetically improved planting stock. This study aimed to provide estimates of the heritability of diameter at breast height over bark, wood basic density, Kraft pulp yield, modulus of elasticity and microfibril angle, and of the genetic correlations among these traits, and to assess the importance of genotype-by-environment interactions in Vietnam. Data for diameter and wood properties were collected from two 10-year-old, open-pollinated progeny trials of E. pellita in Vietnam that evaluated 104 families from six native-range and three orchard sources. Wood properties were estimated from wood samples using near-infrared (NIR) spectroscopy. Data were analysed using mixed linear models to estimate genetic parameters (heritability, proportion of variance between seed sources, and genetic correlations). Variation among the nine sources was small compared to the additive variance. Narrow-sense heritability and genetic correlation estimates indicated that simultaneous improvements in most traits could be achieved from selection among and within families, as the genetic correlations among traits were either favourable or close to zero. Type B genetic correlations approached one for all traits, suggesting that genotype-by-environment interactions were of little importance. These results support a breeding strategy utilizing a single breeding population advanced by selecting the best individuals across all seed sources. Both growth and wood properties have been evaluated, and multi-trait selection for growth and wood property traits will lead to populations of E. pellita with both improved productivity and improved timber and pulp properties.
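For reference, the genetic parameters named above are conventionally estimated as (standard quantitative-genetics expressions, not formulas quoted from this study):

\[ \hat{h}^2 = \frac{\hat{\sigma}_A^2}{\hat{\sigma}_A^2 + \hat{\sigma}_e^2}, \qquad r_B = \frac{\hat{\sigma}_f^2}{\hat{\sigma}_f^2 + \hat{\sigma}_{f \times e}^2}, \]

where \(\hat{\sigma}_A^2\) is the additive genetic variance (in open-pollinated trials typically obtained by scaling the family variance \(\hat{\sigma}_f^2\) by an assumed coefficient of relationship; the coefficient used in this study is not stated in the abstract), \(\hat{\sigma}_e^2\) is the residual variance, and \(\hat{\sigma}_{f \times e}^2\) is the family-by-site interaction variance. A Type B correlation \(r_B\) close to one, as reported here, indicates negligible genotype-by-environment interaction.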
Abstract:
Every year in the US and other cold-climate countries, a considerable amount of money is spent to repair structural damage in conventional bridges caused by salt corrosion at bridge expansion joints. Frequent use of deicing salt on conventional bridges with expansion joints results in corrosion and other damage to the expansion joints, steel girders, stiffeners, concrete rebar, and any structural steel members in the abutments. The best way to prevent this damage is to eliminate the expansion joints at the abutment and elsewhere and make the entire bridge abutment and deck a continuous monolithic structural system. This type of bridge, called an Integral Abutment Bridge, is now widely used in the US and other cold-climate countries. In order to provide lateral flexibility, the entire abutment is constructed on piles. Piles used in integral abutments should have enough capacity in the perpendicular direction to support the vertical forces. In addition, piles should be able to withstand corrosive environments near the ground surface and maintain their performance over the lifespan of the bridge. Fiber Reinforced Polymer (FRP) piles are a new type of pile that can not only accommodate large displacements but can also resist corrosion significantly better than traditional steel or concrete piles. The use of FRP piles extends the life of the pile, which in turn extends the life of the bridge. This dissertation studies FRP piles with elliptical cross-sections. The elliptical shape can simultaneously provide flexibility and stiffness about two perpendicular axes. Elliptical shapes can be made using the filament winding method, which is less expensive than pultrusion or other manufacturing methods. In this dissertation, a new way to construct the desired elliptical shapes with the filament winding method is introduced. Pile specifications such as dimensions, number of layers, fiber orientation angles, material, and soil stiffness are defined as parameters, and the effect of each parameter on the pile stresses and pile failure has been studied. The ANSYS software has been used to model the composite materials. More than 14,000 nonlinear finite element pile models have been created, each slightly different from the others. The outputs of the analyses have been used to generate curves, and optimum values of the parameters have been determined from these curves. The best approaches to finding the optimum shape, fiber angles, and type of composite material are discussed.
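As an illustrative sketch only, the parametric study described above amounts to enumerating a factorial design space and running one finite element model per combination; the parameter names and levels below are hypothetical placeholders, not the dissertation's actual values:

```python
from itertools import product

# Hypothetical design space (placeholder names and levels, for illustration only).
design_space = {
    "axis_ratio":      [1.5, 2.0, 2.5],          # ellipse major/minor axis ratio
    "n_layers":        [8, 12, 16],              # number of composite layers
    "fiber_angle_deg": [15, 30, 45, 60, 75],     # filament winding angle
    "material":        ["glass/epoxy", "carbon/epoxy"],
    "soil_k":          [5e3, 1e4, 2e4],          # subgrade modulus, kN/m^3
}

# Each combination would correspond to one nonlinear FE pile model.
combinations = list(product(*design_space.values()))
print(f"{len(combinations)} candidate pile models to analyse")
```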
Abstract:
Polymer aluminum electrolytic capacitors were introduced to provide an alternative to liquid electrolytic capacitors. The electrical parameters of polymer electrolytic capacitors, capacitance and ESR, are less temperature dependent than those of liquid aluminum electrolytic capacitors. Furthermore, the electrical conductivity of the polymer used in these capacitors, poly(3,4-ethylenedioxythiophene) (PEDOT), is orders of magnitude higher than that of the electrolytes used in liquid aluminum electrolytic capacitors, resulting in capacitors with much lower equivalent series resistance that are suitable for use in high ripple-current applications. The presence of the moisture-sensitive polymer PEDOT raises concerns about the reliability of polymer aluminum capacitors in high-humidity conditions. Highly accelerated stress testing (HAST) at 110 °C and 85% relative humidity, in which the parts were subjected to unbiased HAST conditions for 700 hours, was performed to understand the design factors that contribute to the susceptibility of a polymer aluminum electrolytic capacitor to degradation under HAST conditions. A large-scale study involving capacitors of different electrical ratings (2.5 V to 16 V, 100 µF to 470 µF), mounting types (surface-mount and through-hole) and manufacturers (6 different manufacturers) was conducted to determine the relationship between package geometry and reliability in high temperature-humidity conditions. A geometry-based HAST test, in which part selection limited variations between capacitor samples to geometric differences only, was conducted to analyze the effect of package geometry on humidity-driven degradation more closely. Raman spectroscopy, X-ray imaging, environmental scanning electron microscopy, and destructive analysis of the capacitors after HAST exposure were used to determine the failure mechanisms of polymer aluminum capacitors under high temperature-humidity conditions.
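For context, temperature-humidity acceleration in HAST-type testing is commonly described by Peck's model; this is a general reliability relation, not one reported in the abstract, and the constants below are typical literature values rather than values fitted for these capacitors:

\[ AF = \left(\frac{RH_{\mathrm{stress}}}{RH_{\mathrm{use}}}\right)^{n} \exp\!\left[\frac{E_a}{k_B}\left(\frac{1}{T_{\mathrm{use}}} - \frac{1}{T_{\mathrm{stress}}}\right)\right], \]

with temperatures in kelvin, a humidity exponent \(n\) of roughly 2 to 3, and an activation energy \(E_a\) of roughly 0.7 to 0.9 eV for classical humidity-driven failure mechanisms.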
Abstract:
This thesis investigates how web search evaluation can be improved using historical interaction data. Modern search engines combine offline and online evaluation approaches in a sequence of steps that a tested change needs to pass through to be accepted as an improvement and subsequently deployed. We refer to such a sequence of steps as an evaluation pipeline. In this thesis, we consider the evaluation pipeline to contain three sequential steps: an offline evaluation step, an online evaluation scheduling step, and an online evaluation step. In this thesis, we show that historical user interaction data can aid in improving the accuracy or efficiency of each of the steps of the web search evaluation pipeline. As a result of these improvements, the overall efficiency of the entire evaluation pipeline is increased. Firstly, we investigate how user interaction data can be used to build accurate offline evaluation methods for query auto-completion mechanisms. We propose a family of offline evaluation metrics for query auto-completion that represents the effort the user has to spend in order to submit their query. The parameters of our proposed metrics are trained against a set of user interactions recorded in the search engine's query logs. From our experimental study, we observe that our proposed metrics are significantly more correlated with an online user satisfaction indicator than the metrics proposed in the existing literature. Hence, fewer changes that pass the offline evaluation step will later be rejected at the online evaluation step. As a result, this would allow us to achieve a higher efficiency of the entire evaluation pipeline. Secondly, we state the problem of the optimised scheduling of online experiments. We tackle this problem by considering a greedy scheduler that prioritises the evaluation queue according to the predicted likelihood of success of a particular experiment. This predictor is trained on a set of online experiments, and uses a diverse set of features to represent an online experiment. Our study demonstrates that a higher number of successful experiments per unit of time can be achieved by deploying such a scheduler on the second step of the evaluation pipeline. Consequently, we argue that the efficiency of the evaluation pipeline can be increased. Next, to improve the efficiency of the online evaluation step, we propose the Generalised Team Draft interleaving framework. Generalised Team Draft considers both the interleaving policy (how often a particular combination of results is shown) and click scoring (how important each click is) as parameters in a data-driven optimisation of the interleaving sensitivity. Further, Generalised Team Draft is applicable beyond domains with a list-based representation of results, e.g. in domains with a grid-based representation, such as image search. Our study using datasets of interleaving experiments performed in both the document and image search domains demonstrates that Generalised Team Draft achieves the highest sensitivity. A higher sensitivity indicates that the interleaving experiments can be deployed for a shorter period of time or use a smaller sample of users. Importantly, Generalised Team Draft optimises the interleaving parameters w.r.t. historical interaction data recorded in the interleaving experiments. Finally, we propose to apply sequential testing methods to reduce the mean deployment time for the interleaving experiments. We adapt two sequential tests for interleaving experimentation.
We demonstrate that one can achieve a significant decrease in experiment duration by using such sequential testing methods. The highest efficiency is achieved by the sequential tests that adjust their stopping thresholds using historical interaction data recorded in diagnostic experiments. Our further experimental study demonstrates that cumulative gains in online experimentation efficiency can be achieved by combining the interleaving sensitivity optimisation approaches, including Generalised Team Draft, with the sequential testing approaches. Overall, the central contributions of this thesis are the proposed approaches to improve the accuracy or efficiency of the steps of the evaluation pipeline: the offline evaluation frameworks for query auto-completion, an approach for the optimised scheduling of online experiments, a general framework for efficient online interleaving evaluation, and a sequential testing approach for online search evaluation. The experiments in this thesis are based on massive real-life datasets obtained from Yandex, a leading commercial search engine. These experiments demonstrate the potential of the proposed approaches to improve the efficiency of the evaluation pipeline.
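As a minimal illustration of the interleaving idea behind this framework, the sketch below implements classic Team Draft interleaving in Python; it does not implement the Generalised Team Draft optimisation of the interleaving policy and click scoring described in the thesis, and all names are placeholders:

```python
import random

def team_draft_interleave(ranking_a, ranking_b, k=10):
    """Classic Team Draft: in each round the two rankers, in random order,
    contribute their highest-ranked result not yet shown."""
    interleaved, team_a, team_b = [], set(), set()
    while len(interleaved) < k:
        added = 0
        teams = [(ranking_a, team_a), (ranking_b, team_b)]
        random.shuffle(teams)  # coin flip deciding who picks first this round
        for ranking, team in teams:
            doc = next((d for d in ranking if d not in interleaved), None)
            if doc is not None and len(interleaved) < k:
                interleaved.append(doc)
                team.add(doc)
                added += 1
        if added == 0:  # both rankings exhausted
            break
    return interleaved, team_a, team_b

def outcome(clicked_docs, team_a, team_b):
    """Per-impression credit: +1 if ranker A's contributions received more
    clicks, -1 if ranker B's did, 0 on a tie."""
    a = sum(doc in team_a for doc in clicked_docs)
    b = sum(doc in team_b for doc in clicked_docs)
    return (a > b) - (a < b)
```

Generalised Team Draft, as described in the abstract, would additionally treat how often each assignment is shown and how much each click counts as parameters to be optimised from historical interaction data.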
Abstract:
Background: Long-term exposure to infrasound and low-frequency noise (ILFN, <500 Hz, including infrasound) can lead to the development of vibroacoustic disease (VAD). VAD is a systemic pathology characterized by the abnormal growth of extracellular matrices in the absence of inflammatory processes, namely of collagen and elastin, both of which are abundant in the basement membrane zone of the vocal folds. ILFN-exposed workers include pilots, cabin crewmembers, restaurant workers and ship machinists, and in previous studies, even though they did not present vocal symptoms, ILFN-exposed workers had significantly different voice acoustic patterns (perturbation and temporal measures) when compared with the normative population. Study Aims: The present study investigates the effects of age and years of occupational ILFN exposure on the voice acoustic parameters of 37 cabin crewmembers: 12 males and 25 females. Specifically, the goals of this study are to: 1) verify whether acoustic parameters change with age and years of ILFN exposure, and 2) determine whether there is any interaction between age and years of ILFN exposure on the voice acoustic parameters of crewmembers. Materials and Methods: Spoken phonatory tasks were recorded with a C420III PP AKG head-worn microphone and a DA-P1 Tascam DAT recorder. Acoustic analyses were performed using the KayPENTAX Computer Speech Lab and the Multi-Dimensional Voice Program. Acoustic parameters included speaking fundamental frequency, perturbation measures (jitter, shimmer and harmonics-to-noise ratio), temporal measures (maximum phonation time and s/z ratio) and voice tremor frequency. Results: One-way ANOVA revealed that, as the number of ILFN-exposure years increased, male cabin crewmembers presented significantly different shimmer values for /i/ as well as tremor frequency for /u/. Females presented significantly different jitter (%) for /i, a, O/ (p < 0.05). Lastly, two-way ANOVA revealed that, for females, there was a significant interaction between age and occupational ILFN exposure for voice acoustic parameters, namely for mean jitter for /a, O/ and mean shimmer (%) for /a, i/ (p < 0.05). Discussion and Conclusion: These perturbation-measure patterns may be indicative of histological changes within the vocal folds as a result of ILFN exposure. The results of this study suggest that voice acoustic analysis may be an important tool for confirming ILFN-induced health effects.
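For reference, the perturbation measures listed above are conventionally defined as follows (standard definitions of the kind implemented in MDVP-style analysis, not formulas given in this abstract):

\[ \mathrm{Jitter}(\%) = \frac{\frac{1}{N-1}\sum_{i=1}^{N-1} \left| T_i - T_{i+1} \right|}{\frac{1}{N}\sum_{i=1}^{N} T_i} \times 100, \]

where \(T_i\) are the durations of consecutive glottal periods; shimmer is defined analogously on the peak amplitudes of consecutive periods, and the harmonics-to-noise ratio compares the energy of the periodic component of the voice signal with that of its noise component.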
Abstract:
Safety in civil aviation is increasingly important due to the increase in flight routes and their more challenging nature. As with other important systems in aircraft, fuel level monitoring is always a technical challenge. The most frequently used level sensors in aircraft fuel systems are based on capacitive, ultrasonic and electric techniques; however, they suffer from intrinsic safety concerns in explosive environments, combined with issues relating to reliability and maintainability. In the last few years, optical fiber liquid level sensors (OFLLSs) have been reported to be safe and reliable and to present many advantages for aircraft fuel measurement. Different OFLLSs have been developed, such as the pressure type, float type, optical radar type, TIR type and side-leaking type. Among these, many types of OFLLSs based on fiber gratings have been demonstrated. However, these sensors have not been commercialized because they exhibit some drawbacks: low sensitivity, limited range, long-term instability, or limited resolution. In addition, any sensor that involves direct interaction of the optical field with the fuel (either by launching light into the fuel tank or via the evanescent field of a fiber-guided mode) must be able to cope with the potential build-up of contamination, often bacterial, on the optical surface. In this paper, a fuel level sensor based on microstructured polymer optical fiber Bragg gratings (mPOFBGs), including poly(methyl methacrylate) (PMMA) and TOPAS fibers, embedded in diaphragms is investigated in detail. The mPOFBGs are embedded in two different types of diaphragms and their performance is investigated with aviation fuel for the first time, in contrast to our previous works, where water was used. Our new system exhibits high performance when compared with other sensors previously published in the literature, making it a potentially useful tool for aircraft fuel monitoring.
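For context, the sensing principle behind the mPOFBG element is the standard Bragg condition (a textbook relation, not a formula quoted from this paper):

\[ \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda, \]

where \(n_{\mathrm{eff}}\) is the effective refractive index of the guided mode and \(\Lambda\) the grating period. In a diaphragm-based level sensor of this kind, the hydrostatic pressure of the fuel column typically deflects the diaphragm and strains the embedded grating, so the liquid level is read out as a shift of \(\lambda_B\).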
Abstract:
Support Vector Machines (SVMs) are widely used classifiers for detecting physiological patterns in Human-Computer Interaction (HCI). Their success is due to their versatility, their robustness and the wide availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameter selection are reported, making it impossible to reproduce the analysis and results of a study. In order to perform an optimized classification and report a proper description of the results, it is necessary to have a comprehensive critical overview of the application of SVM. The aim of this paper is to provide a review of the usage of SVM in the determination of brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant literature implementations. Furthermore, details concerning the reviewed papers are listed in tables, and statistics on SVM use in the literature are presented. The suitability of SVM for HCI is discussed, and critical comparisons with other classifiers are reported.
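As a minimal sketch of the kind of reproducible SVM setup the review argues for, the following Python example makes the kernel, hyperparameter grid and cross-validation scheme explicit; the feature matrix and labels are random placeholders standing in for EEG/EMG features, not data from any reviewed study:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))        # placeholder features (e.g. band powers)
y = rng.integers(0, 2, size=200)      # placeholder binary class labels

# RBF-kernel SVM with feature standardisation; every tunable choice is explicit.
pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [1e-3, 1e-2, 1e-1]}
search = GridSearchCV(
    pipeline, param_grid,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Reporting exactly these choices (kernel, C, gamma, feature scaling and the cross-validation scheme) is what allows an analysis to be reproduced.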
Abstract:
"Mixed thinking" (« pensée mixte ») is an approach to composition characterized by the interaction of three modes of thought: instrumental thinking, electroacoustic thinking and computer-based thinking. It takes the form of a network in which the composer moves back and forth between the three modes and establishes parametric equivalences. Instrumental thinking is rooted in the Western written tradition; electroacoustic thinking refers to the practices of the analogue studio and of acousmatic music; and computer-based thinking refers to the digital practices of visual programming and spectral analysis. There are common grounds where the interaction of the three modes takes place: Ivo Malec's notion of the instrumental studio, Helmut Lachenmann's notion of musique concrète instrumentale, computer-assisted composition, spectral music, the montage-based instrumental approach, acousmatic music inspired by the written musical tradition, and mixed music. These domains constitute the influences around which I composed a corpus of two cycles of works: Les Larmes du Scaphandre and Nano-Cosmos. The analysis of the works highlights the notion of "mixed thinking" by addressing electroacoustic thinking in my instrumental practice, computer-based thinking in my musical practice, and instrumental thinking in my electroacoustic practice.
Abstract:
In the 100-metre sprint and in many power sports, the acceleration phase is a major determinant of performance. However, kinetic and kinematic asymmetries can affect performance. The objective of this study was to identify interactions between various kinetic variables and angular kinematic variables of the lower limbs (LL) during a high-intensity sprint on a non-motorized resistance ergometer (NMR). After a familiarization session, 11 subjects performed 40-yard sprints. Kinetic data were obtained from force platforms integrated into the supports of the NMR ergometer at 10 Hz, and kinematic data were collected with the Optitrack system and Motive Tracker software at 120 Hz. We used linear correlation tests (Pearson's linear correlation) to determine the relationship between the kinetic and kinematic data (p < 0.05). The data analysis revealed (1) a positive correlation between the mean ankle joint range of motion and the mean peak power output (W/kg) during the maintenance phase (r = 0.62), (2) a negative correlation between the mean maximal hip extension (calculated from the smallest flexion angle) and the mean peak power output at the end of the push-off over the entire sprint and during the maintenance phase (r = -0.63 and r = -0.69, respectively), and finally (3) a negative correlation between the difference in maximal ankle dorsiflexion and the difference in peak power output between the lower limbs at foot-ground contact during the maintenance phase (r = -0.62). The results of this study will help improve the interventions of strength and conditioning coaches and the practice of power-sport athletes, as well as support the development of new technologies and complementary training tools for sprinting, particularly for the acceleration phase.
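As an illustrative sketch of the correlation analysis described above (Pearson's r between a kinematic and a kinetic variable across the 11 subjects), the Python snippet below uses made-up placeholder values, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject values, one entry per subject (n = 11).
ankle_rom_deg  = np.array([28.1, 31.4, 25.9, 33.0, 29.7, 27.2, 30.5, 26.8, 32.1, 28.9, 31.0])
peak_power_wkg = np.array([11.2, 12.5, 10.4, 13.1, 11.8, 10.9, 12.2, 10.7, 12.9, 11.5, 12.4])

r, p = stats.pearsonr(ankle_rom_deg, peak_power_wkg)
print(f"r = {r:.2f}, p = {p:.3f}")  # study's significance threshold: p < 0.05
```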
Abstract:
The Covariant Spectator Theory (CST) is used to calculate the mass spectrum and vertex functions of heavy–light and heavy mesons in Minkowski space. The covariant kernel contains Lorentz scalar, pseudoscalar, and vector contributions. The numerical calculations are performed in momentum space, where special care is taken to treat the strong singularities present in the confining kernel. The observed meson spectrum is very well reproduced after fitting a small number of model parameters. Remarkably, a fit to only a few pseudoscalar meson states, which are insensitive to spin–orbit and tensor forces and do not allow one to separate the spin–spin from the central interaction, leads to essentially the same model parameters as a more general fit. This demonstrates that the covariance of the chosen interaction kernel is responsible for the very accurate prediction of the spin-dependent quark–antiquark interactions.
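As a schematic illustration of the Lorentz structures mentioned above (not the paper's exact kernel), a covariant quark–antiquark kernel combining scalar, pseudoscalar and vector couplings can be written as

\[ \mathcal{V} = V_S\,(\mathbf{1}\otimes\mathbf{1}) + V_P\,(\gamma^5\otimes\gamma^5) + V_V\,(\gamma^\mu\otimes\gamma_\mu), \]

where each \(V_i\) is a momentum-space potential (in models of this kind, typically a linear confining part plus a short-range, one-gluon-exchange-like part) and the relative weights of the structures are among the fitted parameters; the confining part is the source of the strong momentum-space singularities whose treatment is mentioned in the abstract.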