993 results for "Typical application"
Abstract:
Software architectural evaluation is a key discipline used to identify, at early stages of real-time system (RTS) development, problems that may arise during operation. Typical mechanisms supporting concurrency, such as semaphores, mutexes or monitors, often lead to run-time concurrency problems that are difficult to identify, reproduce and solve. For this reason, it is crucial to understand the root causes of these problems and to provide support for identifying and mitigating them at early stages of the system lifecycle. This paper presents the results of a research effort oriented to the development of a tool called 'Deadlock Risk Evaluation of Architectural Models' (DREAM) to assess deadlock risk in architectural models of an RTS. A particular architectural style, Pipelines of Processes in Object-Oriented Architectures-UML (PPOOA), was used to represent platform-independent models of an RTS architecture, supported by the PPOOA-Visio tool. We validated the technique presented here by applying it to several case studies related to RTS development and comparing our results with those of other deadlock detection approaches supported by different tools. Here we present two of these case studies, one related to avionics and the other to planetary exploration robotics. Copyright © 2011 John Wiley & Sons, Ltd.
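The abstract does not detail DREAM's analysis of PPOOA models, but the classic flavor of deadlock detection it alludes to can be sketched generically: build a wait-for graph among tasks and search it for cycles. The task names and graph encoding below are made up for illustration; this is not DREAM's algorithm.

```python
# Hypothetical sketch: deadlock risk flagged as a cycle in a wait-for graph,
# where an edge task -> holder means "task waits for a resource held by holder".

def find_cycle(wait_for):
    """Return a list of tasks forming a cycle in the wait-for graph,
    or None if the graph is acyclic (no deadlock risk detected)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wait_for}

    def dfs(node, path):
        color[node] = GRAY
        path.append(node)
        for nxt in wait_for.get(node, ()):
            if color.get(nxt, WHITE) == GRAY:   # back edge: cycle found
                return path[path.index(nxt):]
            if color.get(nxt, WHITE) == WHITE:
                found = dfs(nxt, path)
                if found:
                    return found
        color[node] = BLACK
        path.pop()
        return None

    for task in list(wait_for):
        if color[task] == WHITE:
            cycle = dfs(task, [])
            if cycle:
                return cycle
    return None

# Two tasks each holding a mutex the other needs: the classic deadlock.
deadlocked = {"taskA": ["taskB"], "taskB": ["taskA"]}
safe = {"taskA": ["taskB"], "taskB": []}
print(find_cycle(deadlocked))  # ['taskA', 'taskB']
print(find_cycle(safe))        # None
```

The appeal of doing this on an architectural model, as the paper argues, is that such cycles can be found before any code runs, rather than chased through hard-to-reproduce executions.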
Abstract:
This research investigates the ultimate earthquake resistance of typical RC moment-resisting frames designed according to current standards, in terms of ultimate energy absorption/dissipation capacity. Shake table tests of a 2/5-scale model, under several intensities of ground motion, are carried out. The loading effect of the earthquake is expressed as the total energy that the quake inputs to the structure, and the seismic resistance is interpreted as the amount of energy that the structure dissipates as cumulative inelastic strain energy.
Abstract:
In this work, the power management techniques implemented in a high-performance node for Wireless Sensor Networks (WSN), based on a RAM-based FPGA, are presented. This new custom node architecture is intended for high-end WSN applications that include complex sensor management (such as video cameras), highly compute-demanding tasks (such as image encoding or robust encryption), and/or higher data bandwidth needs. For these complex processing tasks, while still meeting low-power design requirements, it can be shown that the combination of different techniques (extensive HW algorithm mapping, smart management of power islands to selectively switch components on and off, smart and low-energy partial reconfiguration, and an adequate set of energy-saving modes and wake-up options) may yield energy results that compete with and improve upon the energy usage of the typical low-power microcontrollers used in many WSN node architectures. Indeed, results show that higher-complexity tasks favor HW-based platforms, while the flexibility achieved by dynamic and partial reconfiguration techniques can be comparable to that of SW-based solutions.
Abstract:
The design of a nuclear power plant has to follow a number of regulations aimed at limiting the risks inherent in this type of installation. The goal is to prevent, and to limit the consequences of, any possible incident that might threaten the public or the environment. To verify that the safety requirements are met, a safety assessment process is followed. Safety analysis is a key component of a safety assessment, incorporating both probabilistic and deterministic approaches. The deterministic approach attempts to ensure that the various situations, and in particular the accidents, considered plausible have been taken into account, and that the monitoring systems and engineered safety and safeguard systems will be capable of ensuring the safety goals. The probabilistic approach, on the other hand, tries to demonstrate that the safety requirements are met for potential accidents both within and beyond the design basis, thus identifying vulnerabilities not necessarily accessible through deterministic safety analysis alone. Probabilistic safety assessment (PSA) methodology is widely used in the nuclear industry and is especially effective for the comprehensive assessment of the measures needed to prevent accidents with small probability but severe consequences. Still, the trend towards risk-informed regulation (RIR) has demanded a more extended use of risk assessment techniques, with a significant need to further extend the scope and quality of PSA. This is where the theory of stimulated dynamics (TSD) intervenes: it is the mathematical foundation of the integrated safety assessment (ISA) methodology developed by the Modelling and Simulation (MOSI) branch of the CSN (Consejo de Seguridad Nuclear). This methodology attempts to extend classical PSA by including accident dynamic analysis, an assessment of the damage associated with the transients, and a computation of the damage frequency.
The application of this ISA methodology requires a computational framework called SCAIS (Simulation Code System for Integrated Safety Assessment). SCAIS provides accident dynamic analysis support through the simulation of nuclear accident sequences and operating procedures. Furthermore, it includes probabilistic quantification of fault trees and sequences, and the integration and statistical treatment of risk metrics. SCAIS makes intensive use of code coupling techniques to join typical thermal-hydraulic analysis, severe accident and probability calculation codes. The integration of accident simulation into the risk assessment process, which requires the use of complex nuclear plant models, is what makes the methodology so powerful, yet at the cost of an enormous increase in complexity. As that complexity is primarily concentrated in the accident simulation codes, the question arises of whether it is possible to reduce the number of required simulations; this is the focus of the present work. This document presents the work done on investigating more efficient techniques applied to the risk assessment process within the ISA methodology. The primary goal of these techniques is therefore to decrease the number of simulations needed for an adequate estimation of the damage probability. As the methodology and tools are relatively recent, little work has been done along this line of investigation, making it a difficult but necessary task, and time limitations forced a reduction of the scope of the work. Therefore, some assumptions were made in order to work in simplified scenarios best suited to an initial approximation of the problem. The following section explains in detail the process followed to design and test the developed techniques. The next section then introduces the general concepts and formulae of the TSD theory, which are at the core of the risk assessment process.
Afterwards, a description of the simulation framework requirements and design is given, followed by an introduction to the developed techniques, with full detail of their mathematical background and procedures. Later, the test case used is described and the results from the application of the techniques are shown. Finally, the conclusions are presented and future lines of work are outlined.
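The thesis does not specify its variance-reduction techniques in the abstract, but the core idea of cutting the number of expensive simulations needed to estimate a small damage probability can be illustrated with a standard trick, importance sampling. In the toy sketch below the "plant simulation" is replaced by the indicator 1{X > 4} for a standard normal X (an assumption purely for illustration); the shifted proposal concentrates samples in the rare failure region, so far fewer runs are needed than with crude Monte Carlo.

```python
import math, random

# Crude Monte Carlo: with P ~ 3e-5, ten thousand runs will typically see
# zero "failures" and return 0.
def crude_mc(n, threshold=4.0, rng=random.Random(1)):
    hits = sum(1 for _ in range(n) if rng.gauss(0, 1) > threshold)
    return hits / n

# Importance sampling: draw from N(threshold, 1) and reweight each failure
# by the likelihood ratio phi(x) / phi(x - threshold) = exp(-t*x + t^2/2).
def importance_sampling(n, threshold=4.0, rng=random.Random(1)):
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1)
        if x > threshold:
            total += math.exp(-threshold * x + threshold ** 2 / 2)
    return total / n

exact = 0.5 * math.erfc(4.0 / math.sqrt(2))  # P(X > 4) ~ 3.17e-5
print(exact, crude_mc(10_000), importance_sampling(10_000))
```

With the same budget of 10,000 "simulations", the importance-sampling estimate lands within a few percent of the exact value while the crude estimate is essentially useless, which is the kind of efficiency gain the ISA work is after.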
Abstract:
This paper presents a comprehensive review of step-up single-phase non-isolated inverters suitable for ac-module applications. In order to compare the most feasible of the reviewed topologies, a benchmark is set. This benchmark is based on a typical ac-module application, considering the requirements of the solar panels and the grid. The selected solutions are designed and simulated in compliance with the benchmark, obtaining passive and semiconductor component ratings in order to perform a comparison in terms of size and cost. A discussion of the analyzed topologies regarding the obtained ratings, as well as ground currents, is presented. Recommendations for topological solutions complying with the application benchmark are provided.
Abstract:
After the 2010 Haiti earthquake, which hit the city of Port-au-Prince, the capital of Haiti, a multidisciplinary working group of specialists (seismologists, geologists, engineers and architects) from different Spanish universities and from Haiti joined efforts under the SISMO-HAITI project (financed by the Universidad Politecnica de Madrid) with one objective: the evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, and emergency and resource management. In this paper, as a first step towards structural damage estimation for future earthquakes in the country, a calibration of damage functions has been carried out by means of a two-stage procedure. After compiling a database of the damage observed in the city after the earthquake, the exposure model (building stock) was classified and, through an iterative two-step calibration process, a specific set of damage functions for the country has been proposed. Additionally, Next Generation Attenuation (NGA) models and Vs30 models have been analysed to choose the most appropriate ones for seismic risk estimation in the city. Finally, in a subsequent paper, these functions will be used to estimate a seismic risk scenario for a future earthquake.
Abstract:
This dissertation employs and develops Bayesian methods to be used in typical geotechnical analyses, with a particular emphasis on (i) the assessment and selection of geotechnical models based on empirical correlations, and (ii) the development of probabilistic predictions of the outcomes expected for complex geotechnical models. Examples of application to geotechnical problems are developed, as follows: (1) For intact rocks, we present a Bayesian framework for model assessment to estimate Young's moduli based on their unconfined compressive strength (UCS). Our approach provides uncertainty estimates of parameters and predictions, and can differentiate among the sources of error. We develop 'rock-specific' models for common rock types, and illustrate that such 'initial' models can be 'updated' to incorporate new project-specific information as it becomes available, reducing model uncertainties and improving their predictive capabilities. (2) For rock masses, we present an approach, based on model selection criteria, to select the most appropriate model, among a set of candidates, for estimating the deformation modulus of a rock mass given a set of observed data.
Once the most appropriate model is selected, a Bayesian framework is employed to develop predictive distributions of the deformation moduli of rock masses, and to update them with new project-specific data. Such Bayesian updating can significantly reduce the associated predictive uncertainty and, therefore, affect our computed estimates of the probability of failure, which is of significant interest to reliability-based rock engineering design. (3) In the preliminary design stage of rock engineering, information about geomechanical and geometrical parameters, in situ stresses or support parameters is often scarce or incomplete. This poses difficulties in applying traditional empirical correlations, which cannot deal with incomplete data to make predictions. Therefore, we propose the use of Bayesian Networks to deal with incomplete data; in particular, a Naïve Bayes classifier is developed to predict the probability of occurrence of tunnel squeezing based on five input parameters that are commonly available, at least partially, at the design stage.
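A Naïve Bayes classifier handles incomplete inputs gracefully because each observed feature contributes an independent likelihood factor, and a missing feature can simply be dropped from the product. The sketch below illustrates this with a Gaussian Naïve Bayes on two made-up inputs; the feature set and all class-conditional parameters are invented for the example and are not the dissertation's calibrated five-parameter model.

```python
import math

# Hypothetical priors and per-class Gaussian parameters (mean, std) for two
# illustrative inputs: tunnel depth [m] and a rock strength ratio.
PRIOR = {"squeezing": 0.3, "no_squeezing": 0.7}
PARAMS = {
    "squeezing":    {"depth": (600.0, 150.0), "strength_ratio": (0.15, 0.05)},
    "no_squeezing": {"depth": (250.0, 120.0), "strength_ratio": (0.45, 0.15)},
}

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_squeezing(observations):
    """Posterior P(squeezing | observed inputs); None values are ignored."""
    score = {}
    for cls in PRIOR:
        s = PRIOR[cls]
        for feat, value in observations.items():
            if value is None:
                continue  # incomplete data: just skip this likelihood factor
            mu, sigma = PARAMS[cls][feat]
            s *= gauss_pdf(value, mu, sigma)
        score[cls] = s
    return score["squeezing"] / (score["squeezing"] + score["no_squeezing"])

# Deep tunnel with unknown strength ratio: only depth informs the posterior.
print(round(p_squeezing({"depth": 650.0, "strength_ratio": None}), 3))
```

With no observations at all the posterior falls back to the prior, which is exactly the behavior that makes such a classifier usable at early design stages when data is partial.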
Abstract:
Reliability analyses provide an adequate tool for considering the inherent uncertainties that exist in geotechnical parameters. This dissertation develops a simple linearization-based approach, using first- or second-order approximations, to efficiently evaluate the system reliability of geotechnical problems. First, reliability methods are employed to analyze two tunnel design aspects: face stability and the performance of support systems. Several reliability approaches are employed: the first-order reliability method (FORM), the second-order reliability method (SORM), the response surface method (RSM) and importance sampling (IS). Results show that the assumed distribution types and correlation structures of the random variables have a significant effect on the reliability results.
This emphasizes the importance of an adequate characterization of geotechnical uncertainties in practical applications. Results also show that both FORM and SORM can be used to estimate the reliability of tunnel-support systems, and that SORM can outperform FORM at an acceptable additional computational cost. A linearization approach is then developed to evaluate the system reliability of series geotechnical problems. The approach only needs the information provided by FORM: the vector of reliability indices of the limit state functions (LSFs) composing the system, and their correlation matrix. Two common geotechnical problems, the stability of a slope in layered soil and a circular tunnel in rock, are employed to demonstrate the simplicity, accuracy and efficiency of the suggested procedure, and the advantages of the linearization approach with respect to alternative computational tools are discussed. It is also found that, if necessary, SORM, which approximates the true LSF better than FORM, can be employed to compute better estimates of the system reliability. Finally, a new approach using Genetic Algorithms (GAs) is presented to identify the fully specified representative slip surfaces (RSSs) of layered soil slopes; such RSSs are then employed to estimate the system reliability of slopes using the proposed linearization approach. Three typical benchmark slopes with layered soils are adopted to demonstrate the efficiency, accuracy and robustness of the suggested procedure, and the advantages of the proposed method with respect to alternative methods are discussed. Results show that the proposed approach provides reliability estimates that improve on previously published results, emphasizing the importance of finding good RSSs (and, especially, good probabilistic critical slip surfaces, which might be non-circular) to obtain good estimates of the reliability of soil slope systems.
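The first-order quantities the abstract mentions can be made concrete. FORM yields a reliability index beta for each limit state function, with component failure probability Pf = Phi(-beta); for a series system, elementary first-order bounds on the system failure probability follow directly from those component values. The beta values below are invented for illustration (they are not from the thesis' case studies), and the simple max/sum bounds shown are a coarser classical result than the linearization the thesis develops, which also uses the correlation matrix.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def component_pf(beta):
    """FORM component failure probability: Pf = Phi(-beta)."""
    return phi(-beta)

def series_system_bounds(betas):
    """Elementary bounds for a series system:
    max_i Pf_i (fully dependent LSFs) <= Pf_sys <= sum_i Pf_i (disjoint limit)."""
    pfs = [component_pf(b) for b in betas]
    return max(pfs), min(1.0, sum(pfs))

betas = [2.5, 3.0, 3.2]  # hypothetical FORM reliability indices for three LSFs
lo, hi = series_system_bounds(betas)
print(f"{lo:.2e} <= Pf_sys <= {hi:.2e}")
```

Narrowing the gap between such bounds (by exploiting the correlation matrix of the LSFs) is precisely what motivates the linearization approach described above.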
Abstract:
A Mindlin plate with periodically distributed rib patterns is analyzed using homogenization techniques based on asymptotic expansion methods. The stiffness matrix of the homogenized plate is found to depend on the geometrical characteristics of the periodic cell (its skewness, plan shape, thickness variation, etc.) and on the elastic constants of the plate material. The computation of this plate stiffness matrix is carried out by averaging, over the cell domain, the solutions of different periodic boundary value problems. These boundary value problems are defined in variational form by linear first-order differential operators on the cell domain, and the boundary conditions of the variational equation correspond to a periodic structural problem. The elements of the stiffness matrix of the homogenized plate are obtained as linear combinations of the averaged solution functions of the above-mentioned boundary value problems. Finally, an illustrative example of the application of this homogenization technique to hollowed plates and to plate structures with rib patterns regularly arranged over their area is shown. The possibility of using the present procedure in professional practice for the actual analysis of the floors of typical buildings is also emphasized.
Abstract:
This dissertation examines the role of topic knowledge (TK) in comprehension among typical readers and those with Specifically Poor Comprehension (SPC), i.e., those who demonstrate deficits in understanding what they read despite adequate decoding. Previous studies of poor comprehension have focused on weaknesses in specific skills, such as word decoding and inferencing ability, but this dissertation examined a different factor: whether deficits in availability and use of TK underlie poor comprehension. It is well known that TK tends to facilitate comprehension among typical readers, but its interaction with working memory and word decoding is unclear, particularly among participants with deficits in these skills. Across several passages, we found that SPCs do in fact have less TK to assist their interpretation of a text. However, we found no evidence that deficits in working memory or word decoding ability make it difficult for children to benefit from their TK when they have it. Instead, children across the skill spectrum are able to draw upon TK to assist their interpretation of a passage. Because TK is difficult to assess and studies vary in methodology, another goal of this dissertation was to compare two methods for measuring it. Both approaches score responses to a concept question to assess TK, but in the first, a human rater assigns a score whereas in the second, a computer algorithm, Latent Semantic Analysis (LSA; Landauer & Dumais, 1997) assigns a score. We found similar results across both methods of assessing TK, suggesting that a continuous measure is not appreciably more sensitive to variations in knowledge than discrete human ratings. This study contributes to our understanding of how best to measure TK, the factors that moderate its relationship with recall, and its role in poor comprehension. The findings suggest that teaching practices that focus on expanding TK are likely to improve comprehension across readers with a variety of abilities.
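The dissertation's second scoring method, LSA, compares a response to reference material by cosine similarity after projecting both into a latent space built by SVD over a large corpus. The bare-bones sketch below skips the SVD and compares raw bag-of-words vectors (an assumed simplification, not the LSA pipeline itself), which is enough to show how a response to a concept question can receive a continuous score rather than a discrete human rating.

```python
import math
from collections import Counter

def cosine_score(response, reference):
    """Continuous 0..1 similarity between a response and reference text,
    using raw term-count vectors (full LSA would first apply SVD)."""
    a = Counter(response.lower().split())
    b = Counter(reference.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

reference = "earthquakes shake the ground when tectonic plates slip"
print(cosine_score("tectonic plates slip and shake the ground", reference))
print(cosine_score("i like dogs", reference))  # unrelated -> 0.0
```

The finding reported above, that this kind of continuous measure tracked discrete human ratings without being appreciably more sensitive, is what makes even a simple automatic score a practical proxy for topic knowledge.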
Abstract:
Electrospinning is an efficient and versatile processing technique that produces continuous fibres with typical diameters of a few hundred nanometres by applying a high voltage to a concentrated solution of entangled polymers. The extremely fast solvent evaporation and the elongational forces involved in the formation of these fibres give them exceptional properties that are very attractive for many types of applications, but whose surface we are only beginning to scratch. Because of their small size, these materials were long studied only as bundles of thousands of fibres, using conventional techniques such as infrared spectroscopy or X-ray diffraction. Our knowledge of their behaviour therefore still comes from the convolution of the properties of the fibre bundle with the specific characteristics of each fibre composing it. Recent studies at the individual-fibre scale have revealed unusual behaviours, in particular an exponential increase of the modulus as the diameter decreases. Orientation and, more generally, the molecular structure of the fibres are likely at the origin of these properties, but in a way that is still not understood. Establishing clear structure/property relationships, and identifying the parameters that influence them, are challenges of capital importance for exploiting the very particular characteristics of electrospun fibres. To do so, it is necessary to develop more accessible methods that enable fast and thorough structural analyses of large numbers of individual fibres spanning a wide range of diameters.
In this thesis, confocal Raman spectroscopy is used to study the structural characteristics of individual electrospun fibres, such as molecular orientation, crystallinity and disentanglement. First, a new methodology for quantifying molecular orientation by Raman spectroscopy is developed theoretically, with the aims of reducing the experimental complexity of the measurement, extending the range of materials to which such analyses apply, and eliminating sources of error present in the conventional method. The validity and scope of this new method, called MPD, are then demonstrated experimentally. Next, an efficient methodology for studying structural characteristics at the individual-fibre scale by Raman spectroscopy is presented, using poly(ethylene terephthalate) as a model system. The limits of the technique are exposed and experimental strategies to circumvent them are put forward. The results reveal a large fibre-to-fibre variability in orientation and conformation, while the degree of crystallinity remains systematically low, demonstrating the importance and relevance of statistical studies of individual fibres. The presence of chains with a lower degree of entanglement in electrospun fibres than in the bulk is then demonstrated experimentally for the first time, by infrared spectroscopy on bundles of polystyrene fibres. The electrospinning conditions that promote this structural phenomenon, which is suspected to strongly influence fibre properties, are identified. Finally, all the methodologies developed are applied to individual polystyrene fibres for an in-depth study of orientation and disentanglement over a wide range of diameters and for a large number of fibres.
This last study establishes the first structure/property relationship for these materials at the individual scale, clearly showing the link between molecular orientation, disentanglement and the elastic modulus of the fibres.
Abstract:
There have been many studies pertaining to the management of herpetic meningoencephalitis (HME), but the majority of them have focussed on virologically unconfirmed cases or included only small sample sizes. We have conducted a multicentre study aimed at providing management strategies for HME. Overall, 501 adult patients with PCR-proven HME were included retrospectively from 35 referral centres in 10 countries; 496 patients were found to be eligible for the analysis. Cerebrospinal fluid (CSF) analysis using a PCR assay yielded herpes simplex virus (HSV)-1 DNA in 351 patients (70.8%), HSV-2 DNA in 83 patients (16.7%) and undefined HSV DNA type in 62 patients (12.5%). A total of 379 patients (76.4%) had at least one of the specified characteristics of encephalitis, and we placed these patients into the encephalitis presentation group. The remaining 117 patients (23.6%) had none of these findings, and these patients were placed in the nonencephalitis presentation group. Abnormalities suggestive of encephalitis were detected in magnetic resonance imaging (MRI) in 83.9% of the patients and in electroencephalography (EEG) in 91.0% of patients in the encephalitis presentation group. In the nonencephalitis presentation group, MRI and EEG data were suggestive of encephalitis in 33.3 and 61.9% of patients, respectively. However, the concomitant use of MRI and EEG indicated encephalitis in 96.3 and 87.5% of the cases with and without encephalitic clinical presentation, respectively. Considering the subtle nature of HME, CSF HSV PCR, EEG and MRI data should be collected for all patients with a central nervous system infection.
Abstract:
Thirty-nine trace elements in Song-Yuan period (960-1368 AD) porcelain bodies from the Cizhou, Jizhou and Longquanwu kilns were analyzed with ICP-MS, a technique rarely used in Chinese archaeometry, to investigate its potential application in such studies. Trace element compositions clearly reflect the distinctive raw materials and their mineralogy at the three kilns and allow their products to be distinguished. Significant chemical variations are also observed between Yuan and Song-Jin dynasty samples from Cizhou, as well as between fine and coarse porcelain bodies from Longquanwu. At Cizhou, better-quality porcelains imitating the famous Ding kiln have trace element features distinct from those of ordinary Cizhou products, indicating that geochemically distinctive raw materials were used, which possibly also underwent extra refining prior to use. The distinct trace element features of the different kilns, and of the various types of porcelain from an individual kiln, can be interpreted from a geochemical perspective. ICP-MS can provide a large amount of valuable information about ancient Chinese ceramics, as it is capable of analyzing more than 40 elements with a typical precision of <2%.