Abstract:
The three-dimensional (3D) weaving process offers the ability to tailor mechanical properties through the design of the weave architecture. One repeat of the 3D woven fabric is represented by the unit cell. The model accepts basic weaver and material-manufacturer data as inputs to calculate the geometric characteristics of the 3D woven unit cell. The specific weave architecture manufactured and subsequently modelled had an angle-interlock binding configuration. The modelled result closely approximated the experimentally measured values and highlighted the importance of representing the binder tow path.
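A rough sketch of the kind of input-to-output calculation such a unit-cell model performs: estimating the fibre volume fraction of a woven preform from weaver and material-manufacturer inputs. All numbers and the simple areal-density formula are illustrative assumptions, not values or equations from the paper.

```python
tex = 600.0          # tow linear density, g/km (assumed)
warp_per_cm = 4.0    # warp tow count (assumed)
weft_per_cm = 4.0    # weft tow count (assumed)
n_layers = 4         # woven layers in the 3D preform (assumed)
rho_fibre = 2.55     # fibre density, g/cm^3 (E-glass, assumed)
thickness_mm = 3.0   # preform thickness, mm (assumed)

# tex [g/km] x count [tows/cm] x 0.1 gives areal density in g/m^2 per layer
areal_g_m2 = n_layers * 0.1 * tex * (warp_per_cm + weft_per_cm)

# Equivalent solid-fibre thickness in mm, then fibre volume fraction
solid_mm = areal_g_m2 / rho_fibre * 1e-3
vf = solid_mm / thickness_mm
print(round(vf, 2))
```

With these assumed inputs the preform packs about a quarter of its volume with fibre, the sort of figure a weaver would then refine against measured tow geometry.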
Three dimensional morphology and compressive behaviour of sintered biodegradable composite scaffolds
Abstract:
Porous poly-L-lactic acid (PLA) scaffolds are prepared using a polymer sintering and porogen leaching method. Different weight fractions of hydroxyapatite (HA) are added to the PLA to control the acidity and degradation rate. The three-dimensional morphology and surface porosity are characterized using micro-CT, optical microscopy and scanning electron microscopy (SEM). Results indicate that the surface porosity does not change with the addition of HA. The micro-CT examinations show a slight decrease in pore size and an increase in wall thickness, accompanied by reduced anisotropy, for the scaffolds containing HA. SEM micrographs show detectable interconnected pores for the pure-PLA scaffold. Adding HA results in agglomeration of the HA, which blocks some of the pores. Compression tests of the scaffold identify three stages in the stress-strain curve. The addition of HA adversely affects the modulus of the scaffold in the first stage, but this is reversed in the second and third stages of compression. The results of these tests are compared with the cellular material model. The manufactured scaffolds have acceptable properties for a scaffold; however, improved mixing of the PLA and HA phases is required to achieve better integrity of the composite scaffolds.
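The cellular material model used for such comparisons is commonly the Gibson-Ashby open-cell scaling, in which the foam modulus grows with the square of relative density. A minimal sketch under assumed values (the study's actual moduli and densities are not given here):

```python
# Gibson-Ashby open-cell scaling: E* / E_s = C * (rho*/rho_s)^2
E_s = 3.5            # solid PLA modulus, GPa (assumed)
rel_density = 0.3    # scaffold relative density rho*/rho_s (assumed)
C = 1.0              # geometric prefactor for open-cell foams

E_scaffold = C * E_s * rel_density**2
print(round(E_scaffold, 3))   # GPa
```

The quadratic dependence is why modest porosity changes (as seen here with HA addition) can shift the first-stage modulus noticeably.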
Abstract:
The literature has difficulty explaining why the number of parties in majoritarian electoral systems often exceeds the two-party prediction associated with Duverger's Law. To understand why, I examine several party systems in Western Europe before the adoption of proportional representation. Drawing on the social cleavage approach, I argue that multiparty systems emerged because of the development of the class cleavage, which provided a base of voters sizeable enough to support third parties. However, in countries where the class cleavage became the largest cleavage, the class divide displaced other cleavages and the number of parties began to converge on two. The results show that the effect of the class cleavage was nonlinear, producing the greatest party-system fragmentation in countries where class cleavages were present – but not dominant – and smaller fragmentation in countries where class cleavages were either dominant or non-existent.
Abstract:
Objective:
The aim of this study was to identify sources of anatomical misrepresentation due to the location of camera mounting, tumour motion velocity and image processing artefacts in order to optimise the 4DCT scan protocol and improve geometrical-temporal accuracy.
Methods: A phantom with an imaging insert was driven with a sinusoidal superior-inferior motion of varying amplitude and period for 4DCT scanning. The length of a high-density cube within the insert was measured using treatment planning software to determine the accuracy of its spatial representation. Scan parameters were varied, including the tube rotation period and the cine time between reconstructed images. A CT image quality phantom was used to measure various image quality signatures under the scan parameters tested.
Results: No significant difference in spatial accuracy was found for 4DCT scans carried out using the wall-mounted or couch-mounted camera for sinusoidal target motion. Greater spatial accuracy was found for 4DCT scans carried out using a tube rotation period of 0.5 s rather than 1.0 s. The reduction in image quality when using the faster rotation was not enough to require an increase in patient dose.
Conclusions: 4DCT accuracy may be increased by optimising scan parameters, including choosing faster tube rotation speeds. Peak misidentification in the recorded breathing trace leads to spatial artefacts, and this risk can be reduced by using a couch-mounted infrared camera.
Advances in knowledge: This study explicitly shows that 4DCT scan accuracy is improved by scanning with a faster CT tube rotation speed.
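One way to see why a faster tube rotation helps: the worst-case smear of a moving target scales with its peak speed multiplied by the rotation time. A back-of-envelope sketch with assumed motion parameters (not the study's phantom settings):

```python
import math

amplitude_mm = 10.0   # assumed peak motion amplitude
period_s = 4.0        # assumed breathing period

# Peak speed of sinusoidal motion x(t) = A * sin(2*pi*t / T)
v_peak = 2 * math.pi * amplitude_mm / period_s   # mm/s

# Worst-case smear accrued during one tube rotation, per rotation time
blur = {t_rot: v_peak * t_rot for t_rot in (0.5, 1.0)}
print({k: round(v, 1) for k, v in blur.items()})
```

Halving the rotation time halves the worst-case smear, consistent with the improved spatial accuracy reported for the 0.5 s rotation.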
Abstract:
Tanpura string vibrations have been investigated previously using numerical models based on energy-conserving schemes derived from a Hamiltonian description in one-dimensional form. Such time-domain models have the property that, for the lossless case, the numerical Hamiltonian (representing the total energy of the system) can be proven to be constant from one time step to the next, irrespective of any of the system parameters; in practice the Hamiltonian can be shown to be conserved within machine precision. Models of this kind can reproduce the jvari effect, which results from the bridge-string interaction. However, the one-dimensional formulation has recently been shown to fail to replicate the jvari's strong dependence on the thread placement. As a first step towards simulations which accurately emulate this sensitivity to thread placement, a two-dimensional model is proposed, incorporating coupling of controllable level between the two string polarisations at the string termination opposite the barrier. In addition, a friction force acting when the string slides across the bridge in the horizontal direction is introduced, effecting a further damping mechanism. In this preliminary study, the string is terminated at the position of the thread. As in the one-dimensional model, an implicit scheme has to be used to solve the system, employing Newton's method to calculate the updated positions and momenta of each string segment. The two-dimensional model is proven to be energy conserving when the loss parameters are set to zero, irrespective of the coupling constant. Both frequency-dependent and frequency-independent losses are then added to the string, so that the model can be compared to real instruments. The influence of the coupling and the bridge friction is investigated.
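The machine-precision energy conservation described above can be illustrated on a much simpler linear system. The sketch below is a toy under stated assumptions, not the tanpura scheme itself: a lossless fixed-fixed mass-spring string advanced with the implicit midpoint rule, which conserves the quadratic Hamiltonian exactly in exact arithmetic.

```python
import numpy as np

N = 20                    # interior masses of the string (assumed size)
dt, steps = 1e-3, 2000
k = 1.0                   # spring stiffness (assumed units, unit masses)

# Tridiagonal stiffness matrix for a fixed-fixed mass-spring chain
K = k * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))

# First-order form z = (x, v), z' = A z
A = np.block([[np.zeros((N, N)), np.eye(N)],
              [-K,               np.zeros((N, N))]])

# Implicit midpoint propagator: z_{n+1} = (I - dt/2 A)^{-1} (I + dt/2 A) z_n
I2 = np.eye(2 * N)
step = np.linalg.solve(I2 - 0.5 * dt * A, I2 + 0.5 * dt * A)

def energy(z):
    """Hamiltonian H = (1/2) v.v + (1/2) x.K.x (kinetic + potential)."""
    x, v = z[:N], z[N:]
    return 0.5 * v @ v + 0.5 * x @ (K @ x)

# Start from the first standing-wave mode at rest
z = np.concatenate([np.sin(np.pi * np.arange(1, N + 1) / (N + 1)), np.zeros(N)])
E0 = energy(z)
drift = 0.0
for _ in range(steps):
    z = step @ z
    drift = max(drift, abs(energy(z) - E0))
print(drift)   # machine-precision scale
```

The midpoint rule conserves quadratic invariants for linear Hamiltonian systems, which is why the drift stays at rounding level regardless of `dt`; the tanpura models add the nonlinear bridge interaction, which is where Newton's method becomes necessary.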
Abstract:
Noncollinear four-wave-mixing (FWM) techniques at near-infrared (NIR), visible, and ultraviolet frequencies have been widely used to map vibrational and electronic couplings, typically in complex molecules. However, correlations between spatially localized inner-valence transitions among different sites of a molecule in the extreme ultraviolet (XUV) spectral range have not been observed yet. As an experimental step toward this goal, we perform time-resolved FWM spectroscopy with femtosecond NIR and attosecond XUV pulses. The first two pulses (XUV-NIR) coincide in time and act as coherent excitation fields, while the third pulse (NIR) acts as a probe. As a first application, we show how coupling dynamics between odd- and even-parity, inner-valence excited states of neon can be revealed using a two-dimensional spectral representation. Experimentally obtained results are found to be in good agreement with ab initio time-dependent R-matrix calculations providing the full description of multielectron interactions, as well as few-level model simulations. Future applications of this method also include site-specific probing of electronic processes in molecules.
Abstract:
The Minho River, situated 30 km south of the Rias Baixas, is the most important freshwater source flowing into the western Galician coast (NW Iberian Peninsula). This discharge is important in determining the hydrological patterns adjacent to its mouth, particularly close to the Galician coastal region. The buoyancy generated by the Minho plume can flood the Rias Baixas for long periods, reversing the normal estuarine density gradients. It therefore becomes important to analyse its dynamics, as well as the thermohaline patterns of the areas affected by the freshwater spreading. Thus, the main aim of this work was to study the propagation of the Minho estuarine plume to the Rias Baixas, establishing the conditions in which this plume affects the circulation and hydrographic features of these coastal systems, through the development and application of the numerical model MOHID. For this purpose, the hydrographic features of the Rias Baixas mouths were studied. It was observed that at the northern mouths, due to their shallowness, the heat fluxes between the atmosphere and the ocean are the major forcing, influencing the water temperature, while at the southern mouths the influence of upwelling events and the Minho River discharge was more frequent. The salinity increases from south to north, revealing that the observed low values may be caused by the Minho River freshwater discharge. An assessment of wind data along the Galician coast was carried out in order to evaluate its applicability to the study of the dispersal of the Minho estuarine plume. First, a comparative analysis between winds obtained from land meteorological stations and from the offshore QuikSCAT satellite was performed. This comparison revealed that satellite data constitute a good approach for studying wind-induced coastal phenomena.
However, since the numerical model MOHID requires wind data with high spatial and temporal resolution close to the coast, results of the WRF forecast model were added to the previous study. The analyses revealed that the WRF model data are a consistent tool for obtaining representative wind data near the coast, showing good results when compared with in situ wind observations from oceanographic buoys. To study the influence of the Minho buoyant discharge on the Rias Baixas, a set of three one-way nested models was developed and implemented using the numerical model MOHID. The first model domain is a barotropic model and includes the whole Iberian Peninsula coast. The second and third domains are baroclinic models: the second domain is a coarse representation of the Rias Baixas and the adjacent coastal area, while the third includes the same area at a higher resolution. A two-dimensional model was also implemented in the Minho estuary, in order to quantify the flow (and its properties) that the estuary injects into the ocean. The period chosen to validate the Minho estuarine plume propagation was the spring of 1998, since a high Minho River discharge was reported, the wind patterns were favourable for advecting the estuarine plume towards the Rias Baixas, and field data were available to compare with the model predictions. The results obtained show that the adopted nesting methodology was successfully implemented. Model predictions accurately reproduce the hydrodynamics and thermohaline patterns of the Minho estuary and the Rias Baixas. The importance of the Minho River discharge and the wind forcing in the event of May 1998 was also studied. The model results showed that a continuous moderate Minho River discharge combined with southerly winds is enough to reverse the Rias Baixas circulation pattern, reducing the importance of specific events of high runoff.
The conditions in which the Minho estuarine plume affects the circulation and hydrography of the Rias Baixas were evaluated. The numerical results revealed that the Minho estuarine plume responds rapidly to wind variations and is also influenced by the bathymetry and the morphology of the coastline. Without wind forcing, the plume expands offshore, creating a bulge in front of the river mouth. When the wind blows southwards, the main feature is the offshore extension of the plume. Otherwise, northward wind spreads the river plume towards the Rias Baixas. The plume is confined close to the coast, reaching the Rias Baixas after 1.5 days. For Minho River discharges higher than 800 m3 s-1, the Minho estuarine plume reverses the circulation patterns in the Rias Baixas. It was also observed that the wind stress and the Minho River discharge are the most important factors influencing the size and shape of the Minho estuarine plume. Under the same conditions, the water exchange between the Rias Baixas was analysed by following the trajectories of particles released close to the Minho River mouth. Over 5 days, under Minho River discharges higher than 2100 m3 s-1 combined with southerly winds of 6 m s-1, an intense water exchange between the Rias was observed. However, only 20% of the particles found in the Ria de Pontevedra come directly from the Minho River. In summary, the model application developed in this study contributed to the characterization and understanding of the influence of the Minho River on the circulation and hydrography of the Rias Baixas, highlighting that this methodology can be replicated in other coastal systems.
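The stated travel time implies a simple consistency check on the alongshore drift speed (an order-of-magnitude calculation from figures in the text, not a model output):

```python
# The text states the plume covers the ~30 km from the Minho mouth to the
# Rias Baixas in about 1.5 days while confined to the coast.
distance_km = 30.0
travel_days = 1.5

speed_m_s = distance_km * 1000 / (travel_days * 86400)
print(round(speed_m_s, 2))   # implied alongshore drift, m/s
```

A drift of roughly a quarter of a metre per second is a plausible wind-driven coastal-current speed, which supports the reported sensitivity of the plume to wind forcing.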
Abstract:
With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are used, this wealth of information comes with the problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help identify monetary policy shocks more accurately, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help in impulse-response analysis? Finally, can factor analysis be applied to random parameters? For instance, are there only a small number of sources of temporal instability in the coefficients of empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA.
This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the final chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Previous VAR-based studies have found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for correctly identifying monetary policy transmission, and that it helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse-response functions for every indicator in the dataset, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills and causes a recession. These shocks have an important effect on measures of real activity, price indices, leading indicators and financial indicators. Unlike other studies, our identification procedure for the structural shock does not require timing restrictions between financial and macroeconomic factors.
Moreover, it provides an interpretation of the factors without restricting their estimation. In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two parameter-reduction methods. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps forecast the major macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors yields coherent and precise results on the effects and transmission of monetary policy in the United States. Unlike the FAVAR model employed in the earlier study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters for the dynamic process of the factors. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using a structural FAVARMA model.
Within the financial-accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance-decomposition analysis reveals that this credit shock has an important effect on different sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors. The behaviour of economic agents and of the economic environment can vary over time (e.g. changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of temporal variability in the coefficients is probably very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR).
We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out on data including the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only five dynamic factors govern the temporal instability in almost 700 coefficients.
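The FAVAR-type models discussed above all begin by compressing a large panel of series into a few common factors. A minimal sketch of that extraction step, using principal components on simulated data (all sizes and the data-generating process are illustrative assumptions, not the thesis's datasets):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, r = 200, 50, 2   # periods, observed series, true factors (assumed sizes)

# Simulate a static factor model: X = F @ Lam.T + noise
F = rng.standard_normal((T, r))
Lam = rng.standard_normal((N, r))
X = F @ Lam.T + 0.1 * rng.standard_normal((T, N))

# Extract factors as the leading principal components of the standardized panel
Xs = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
Fhat = U[:, :r] * s[:r]   # estimated factors (identified only up to rotation)

# The estimated factors should span the true factor space: regress F on Fhat
beta, *_ = np.linalg.lstsq(Fhat, F, rcond=None)
R2 = 1 - ((F - Fhat @ beta) ** 2).sum() / ((F - F.mean(0)) ** 2).sum()
print(round(R2, 3))   # close to 1
```

In a FAVAR one would then stack `Fhat` with the observed policy variable and fit a VAR; the FAVARMA point above is that the extracted factors generally follow a VARMA rather than a finite-order VAR.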
Abstract:
Large-scale supervised learning of hierarchical networks is currently enjoying tremendous success. Despite this momentum, unsupervised learning remains, according to many researchers, a key element of Artificial Intelligence, where agents must learn from a potentially limited amount of data. This thesis follows that line of thought and addresses various research topics related to the density-estimation problem through Boltzmann machines (BMs), probabilistic graphical models at the heart of deep learning. Our contributions touch on sampling, partition-function estimation, optimization and the learning of invariant representations. The thesis begins by presenting a new adaptive sampling algorithm, which automatically adjusts the temperature of the simulated Markov chains so as to maintain a high convergence speed throughout learning. When used in the context of stochastic maximum-likelihood (SML) learning, our algorithm yields increased robustness to the choice of learning rate, as well as faster convergence. Our results are presented in the domain of BMs, but the method is general and applicable to the learning of any probabilistic model that relies on Markov-chain sampling. While the maximum-likelihood gradient can be approximated by sampling, evaluating the log-likelihood requires an estimate of the partition function. In contrast to traditional approaches, which treat a given model as a black box, we propose instead to exploit the dynamics of learning by estimating the successive changes in the log-partition function incurred at each parameter update.
The estimation problem is reformulated as an inference problem similar to Kalman filtering, but on a two-dimensional graph whose dimensions correspond to the time axis and to the temperature parameter. On the topic of optimization, we also present an algorithm for efficiently applying the natural gradient to Boltzmann machines with thousands of units. Until now, its adoption has been limited by its high computational cost and memory requirements. Our algorithm, Metric-Free Natural Gradient (MFNG), avoids explicitly computing the Fisher information matrix (and its inverse) by exploiting a linear solver combined with an efficient matrix-vector product. The algorithm is promising: in terms of the number of function evaluations, MFNG converges faster than SML. Its implementation nevertheless remains inefficient in computation time. This work also explores the mechanisms underlying the learning of invariant representations. To this end, we use the family of spike-and-slab restricted Boltzmann machines (ssRBM), which we modify so as to model binary and sparse distributions. The binary latent variables of the ssRBM can be made invariant to a vector subspace by associating with each of them a vector of continuous latent variables (called "slabs"). This translates into increased invariance at the representation level and a better classification rate when few labelled data are available. We conclude this thesis with an ambitious topic: learning representations that can separate the factors of variation present in the input signal. We propose a solution based on a bilinear ssRBM (with two groups of latent factors) and formulate the problem as one of "pooling" in complementary vector subspaces.
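As context for the Boltzmann-machine material above, here is a minimal sketch of block-Gibbs sampling and the free energy of a plain binary restricted Boltzmann machine. The sizes and random weights are illustrative assumptions, and this is the standard RBM rather than the thesis's adaptive-tempering or spike-and-slab variants.

```python
import numpy as np

rng = np.random.default_rng(0)
nv, nh = 6, 4                       # visible/hidden units (assumed sizes)
W = 0.1 * rng.standard_normal((nv, nh))
b, c = np.zeros(nv), np.zeros(nh)   # visible and hidden biases

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One block-Gibbs sweep v -> h -> v' for a binary RBM."""
    ph = sigmoid(c + v @ W)                    # P(h_j = 1 | v)
    h = (rng.random(nh) < ph).astype(float)
    pv = sigmoid(b + W @ h)                    # P(v_i = 1 | h)
    return (rng.random(nv) < pv).astype(float)

def free_energy(v):
    """F(v) = -b.v - sum_j log(1 + exp(c_j + (v W)_j)); Z sums exp(-F)."""
    return -(b @ v) - np.logaddexp(0.0, c + v @ W).sum()

v = rng.integers(0, 2, nv).astype(float)
for _ in range(100):   # run the Markov chain for a few sweeps
    v = gibbs_step(v)
print(v)
```

SML training alternates such sampling sweeps with gradient steps; the partition function behind `free_energy` is exactly the intractable quantity whose incremental changes the Kalman-style estimator above tracks.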
Abstract:
This research examines the translation and reception, in France, Great Britain and the United States, of contemporary Arabic-language literature written by women, in order to answer two main questions: how do women writers from Arab countries lose their agency in the processes of translation and reception? And how do the translation and reception of their texts contribute to the construction of an Arab otherness? To answer them, the author examines three novels with very different thematic and formal features, namely Fawḍā al-Ḥawāss (1997) by Ahlem Mosteghanemi, Innahā Lundun Yā ‘Azīzī (2001) by Hanan al-Shaykh, and Banāt al-Riyāḍ (2005) by Rajaa Alsanea. The analysis, based on Norman Fairclough's three-dimensional model, aims to uncover how the writers express their agency through writing, and what images they project of themselves and, more generally, of women in their respective societies. The author then turns to the English and French translations of each novel. She examines the shifts that occur mainly at the level of texture and at the pragma-semiotic level, and asks how these shifts undermine the writers' authority. Finally, a study of the reception of these translations in France, Great Britain and the United States enriches the textual analysis. At this stage, editorial and academic reviews, as well as publishers' paratextual choices, are scrutinized so as to bring to light the decision-making processes, discourses and tropes underlying the marketing and consumption of these translations. The analysis of the originals reveals, first of all, that through their texts the authors are active agents of social change. Each in her own way, they rise up against hegemonic discourses, both local and Western, and (re-)imagine their societies and their nations.
In doing so, they create their own discursive space in the public sphere. However, the thesis shows that in most of the translations the dissident discourses are neutralized, and the writers' agency and subjectivity undermined, in favour of a dominant Orientalist discourse. This same discourse seems to underlie the reception of the novels in translation. In this reifying discourse, the expression of cultural difference is inextricably interwoven with the expression of sexual difference: the "Arab woman" is the victim of an Islamic religion and an Arab culture that are essentially misogynistic and backward. The study suggests, however, that it is less the translators' interventions than the publishers' decisions, the mediating work of reviewers, and the interest (or lack of interest) of academics that most influence how these novels are marketed and received in the new contexts. The author concludes by recalling the importance of an ethics of translation that transcends any binary approach and is grounded in an ethical reading of texts that brings out the link between poetics and politics. Finally, she proposes a reading based on recognition of the situated character of both the translated text and the reading/translating subject.
Abstract:
Information display technology is a rapidly growing research and development field. Using state-of-the-art technology, optical resolution can be increased dramatically with organic light-emitting diodes, since the light-emitting layer is very thin, under 100 nm. The main question is what pixel size is technologically achievable. The next generation of displays will consider three-dimensional image display. In 2D, one considers vertical and horizontal resolution. In 3D or holographic images, there is another dimension: depth. The major requirement is high resolution in the horizontal dimension in order to sustain the third dimension using special lenticular glass or barrier masks that separate the views for each eye. A high-resolution 3D display offers hundreds of different views of objects or landscapes. OLEDs have the potential to be a key technology for information displays in the future. The display technology presented in this work promises to bring bright-colour 3D flat panel displays into use in a unique way. Unlike the conventional TFT matrix, OLED displays have constant brightness and colour, independent of the viewing angle, i.e. the observer's position in front of the screen. A sandwich (just 0.1 micron thick) of organic thin films between two conductors makes an OLED device. These special materials are called electroluminescent organic semiconductors (or organic photoconductors, OPCs). When an electrical current is applied, a bright light is emitted (electrophosphorescence) from the formed organic light-emitting diode. Usually an ITO layer is used as the transparent electrode of an OLED. Displays of this type were the first to reach volume manufacture, and only a few products are available on the market at present. The key challenges that OLED technology faces in these application areas are: producing high-quality white light, achieving low manufacturing costs, and increasing efficiency and lifetime at high brightness.
Looking towards the future, by combining OLEDs with specially constructed surface lenses and proper image-management software it will be possible to achieve 3D images.
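The horizontal-resolution requirement can be made concrete with a back-of-envelope calculation; the panel resolution and view count below are illustrative assumptions, not figures from the text:

```python
# Each lenticular view consumes a slice of the panel's native horizontal
# pixels, so per-view horizontal detail = native pixels / number of views.
panel_h_pixels = 7680   # assumed native horizontal resolution (8K-class panel)
views = 16              # assumed number of lenticular views

per_view_h = panel_h_pixels // views
print(per_view_h)       # horizontal pixels left for each view
```

Even an 8K-class panel would leave only standard-definition horizontal detail per view under these assumptions, which is why very small OLED pixels are attractive for multiview 3D.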
Abstract:
The magnetic properties of, and interactions between, transition metal (TM) impurities and clusters in low-dimensional metallic hosts are studied using a first-principles theoretical method. In the first part of this work, the effect of magnetic order in 3d-5d systems is addressed from the perspective of its influence on the enhancement of the magnetic anisotropy energy (MAE). In the second part, the possibility of using external electric fields (EFs) to control the magnetic properties and interactions between nanoparticles deposited at noble metal surfaces is investigated. The influence of 3d composition and magnetic order on the spin polarization of the substrate and its consequences on the MAE are analyzed for the case of 3d impurities in one- and two-dimensional polarizable hosts. It is shown that the MAE and easy axis of monoatomic free-standing 3d-Pt wires are mainly determined by the atomic spin-orbit (SO) coupling contributions. The competition between ferromagnetic (FM) and antiferromagnetic (AF) order in FePtn wires is studied in detail for n=1-4 as a function of the relative position of the Fe atoms. Our results show an oscillatory behavior of the magnetic polarization of the Pt atoms as a function of their distance from the magnetic impurities, which can be correlated with a long-ranged magnetic coupling of the Fe atoms. Exceptionally large variations of the induced spin and orbital moments at the Pt atoms are found as a function of concentration and magnetic order. Along with a violation of the third Hund's rule at the Fe sites, these variations result in a non-trivial behavior of the MAE. In the case of TM impurities and dimers at the Cu(111) surface, the effects of surface charging and applied EFs on the magnetic properties and substrate-mediated magnetic interactions have been investigated.
The modifications of the surface electronic structure, impurity local moments and magnetic exchange coupling resulting from the EF-induced metallic screening and charge rearrangements are analysed. In a first study, the properties of surface substitutional Co and Fe impurities are investigated as a function of the external charge per surface atom q. At large inter-impurity distances the effective magnetic exchange coupling ∆E between impurities shows RKKY-like oscillations as a function of distance which are not significantly affected by the considered values of q. For distances r < 10 Å, important modifications in the magnitude of ∆E, involving changes from FM to AF coupling, are found, depending non-monotonically on the value and polarity of q. The interaction energies are analysed from a local perspective. In a second study, the interplay between external EF effects, internal magnetic order and substrate-mediated magnetic coupling has been investigated for Mn dimers on Cu(111). Our calculations show that an EF (∼ 1 eV/Å) can induce switching from AF to FM ground-state magnetic order within single Mn dimers. The relative coupling between a pair of dimers also shows RKKY-like oscillations as a function of the inter-dimer distance. Their effective magnetic exchange interaction is found to depend significantly on the magnetic order within the Mn dimers and on their relative orientation on the surface. The dependence of the substrate-mediated interaction on the magnetic state of the dimers is qualitatively explained in terms of the differences in the scattering of surface electrons. At short inter-dimer distances, the ground-state configuration is determined by an interplay between exchange interactions and EF effects. These results demonstrate that external surface charging and applied EFs offer remarkable possibilities for manipulating the sign and strength of the magnetic coupling of surface-supported nanoparticles.
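For context, the RKKY-like oscillations mentioned above follow the standard asymptotic form for an exchange interaction mediated by a two-dimensional electron gas such as the Cu(111) surface state; this is textbook background, not a formula taken from the thesis, and the amplitude J_0 and phase φ are material-dependent:

```latex
% Asymptotic RKKY-like exchange mediated by a 2D surface state:
% k_F is the Fermi wave vector of the surface state, J_0 an amplitude,
% and phi a scattering phase shift.  The sign of Delta E alternates
% between FM and AF coupling as the separation r grows.
\Delta E(r) \;\propto\; J_0\,\frac{\cos(2 k_F r + \phi)}{r^{2}}
```

The 1/r² decay (rather than the 1/r³ of a 3D bulk host) is what makes the surface-state-mediated coupling comparatively long-ranged.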
Resumo:
The aim of this thesis is to narrow the gap between two different control techniques: continuous control and discrete event control (DES) techniques. This gap can be reduced through the study of hybrid systems, and by interpreting the majority of large-scale systems as hybrid systems. In particular, when looking deeply into a process, it is often possible to identify interaction between discrete and continuous signals. Hybrid systems are systems that have both continuous and discrete signals. Continuous signals are generally assumed continuous and differentiable in time, whereas discrete signals are neither continuous nor differentiable, owing to their abrupt changes in time. Continuous signals often represent measurements of natural physical magnitudes such as temperature and pressure, while discrete signals are normally artificial signals operated by human artefacts, such as current, voltage or light. Typical processes modelled as hybrid systems are production systems, chemical processes, or continuous production in which time and continuous measurements interact with the transport and stock-inventory system. Complex systems such as manufacturing lines are hybrid in a global sense: they can be decomposed into several subsystems and their links. Another motivation for the study of hybrid systems is the set of tools developed in other research domains. These tools benefit from the use of temporal logic for the analysis of several properties of hybrid-system models, and use it to design systems and controllers which satisfy physical or imposed restrictions. This thesis focuses on particular types of systems with discrete and continuous signals in interaction, which can model hard non-linearities, such as hysteresis, jumps in the state or limit cycles, and their possible non-deterministic future behaviour, expressed by an interpretable model description.
The hybrid systems treated in this work are systems with several discrete states, always fewer than thirty (beyond which the problem can become NP-hard), and continuous dynamics evolving according to a linear expression with constant vectors or matrices K_i ∈ R^n acting on the state vector X; in several states the continuous evolution can have K_i = 0. In this formulation the mathematics can express time-invariant linear systems. By using this expression for a local part, the combination of several local linear models makes it possible to represent non-linear systems, and through the interaction with the discrete events of the system the model can compose non-linear hybrid systems. Multistage processes with fast continuous dynamics in particular are well represented by the proposed methodology. State vectors with more than two components, as in third-order models or higher, are well approximated by the proposed approximation. Flexible belt transmissions, chemical reactions with an initial start-up, and mobile robots with significant friction are physical systems which profit from the benefits (accuracy) of the proposed methodology. The motivation of this thesis is to obtain a solution that can control and drive hybrid systems from the origin or starting point to the goal. How to obtain this solution, and which solution is best in terms of a cost function subject to the physical restrictions and control actions, is analysed. Hybrid systems that have several possible states, different ways to drive the system to the goal, and different continuous control signals are the problems that motivate this research. The requirements on the system we work with are: a model that can represent the behaviour of the non-linear system and allows prediction of the model's possible future behaviour, in order to apply a supervisor which decides the optimal and secure action to drive the system toward the goal.
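As a rough illustration of this class of models, a hybrid system with a local linear (here piecewise-affine) dynamic per discrete state and event-driven switching can be simulated by forward Euler integration. The matrices, guard thresholds and the hysteresis example below are invented for the sketch and are not taken from the thesis:

```python
# Minimal sketch of a hybrid system: each discrete state i carries its own
# local linear continuous dynamics dx/dt = A[i] @ x + K[i], and a guard on
# the continuous state triggers the discrete transitions.
def simulate(A, K, guard, x, state, dt=1e-3, steps=5000):
    """Forward-Euler integration of the hybrid trajectory."""
    for _ in range(steps):
        dx = [sum(A[state][r][c] * x[c] for c in range(len(x))) + K[state][r]
              for r in range(len(x))]
        x = [x[r] + dt * dx[r] for r in range(len(x))]
        state = guard(state, x)          # discrete event: possible switch
    return x, state

# Two discrete states: 0 drives x toward 1, 1 lets x decay toward 0,
# switching with hysteresis at 0.9 and 0.1 -- a hard non-linearity of the
# kind mentioned in the text.  All numbers are assumptions.
A = {0: [[-1.0]], 1: [[-1.0]]}
K = {0: [1.0],   1: [0.0]}

def guard(s, x):
    if s == 0 and x[0] > 0.9:
        return 1
    if s == 1 and x[0] < 0.1:
        return 0
    return s

x, s = simulate(A, K, guard, [0.0], 0)
```

The trajectory cycles between the two modes, which is the kind of limit-cycle behaviour a single linear model cannot capture.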
Specific problems that can be addressed by the use of this kind of hybrid model are: the unity of order; controlling the system along a reachable path; controlling the system along a safe path; optimising the cost function; and modularity of control. The proposed model solves the specified problems in the switching-model problem, the initial-condition calculus and the unity of the order of the models. Continuous and discrete phenomena are represented in linear hybrid models, defined with an eight-tuple of parameters to model different types of hybrid phenomena. Applying a transformation over the state vector of an LTI system, we obtain from a two-dimensional state space a single parameter, alpha, which still maintains the dynamical information. Combining this parameter with the system output, a complete description of the system is obtained in the form of a graph in polar representation. A Takagi-Sugeno type III fuzzy model includes a linear time-invariant (LTI) model for each local model; the fuzzification of the different LTI local models yields a non-linear time-invariant model. In our case the output and the alpha measure govern the membership function. Hybrid-system control is a huge task: the process needs to be guided from the starting point to the desired end point, passing through different specific states and points of the trajectory. The system can be structured in different levels of abstraction, and the control of hybrid systems in three layers, from planning the process to producing the actions: the planning, process and control layers. In this case the algorithms will be applied to robotics, a domain where improvements are well accepted; simple repetitive processes are expected, for which the extra effort in complexity can be compensated by some cost reductions. It may also be interesting to apply some control optimisation to processes such as fuel injection, DC-DC converters, etc.
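The Takagi-Sugeno blending of local LTI models mentioned above can be sketched as follows. The Gaussian memberships and the two local models are invented for the illustration; they stand in for the alpha/output-driven memberships described in the text:

```python
import math

# Sketch of Takagi-Sugeno blending: the global vector field is the
# membership-weighted average of local LTI models dx/dt = a_i * x + b_i.
# The Gaussian membership functions over the scheduling variable are
# assumptions made for this example.
def ts_dynamics(x, local_models, memberships):
    w = [m(x) for m in memberships]
    total = sum(w)
    return sum(wi * (a * x + b) for wi, (a, b) in zip(w, local_models)) / total

# two hypothetical local models: fast decay near x = 0, slow decay near x = 1
local_models = [(-2.0, 0.0), (-0.5, 0.5)]
memberships = [lambda x: math.exp(-(x - 0.0) ** 2),
               lambda x: math.exp(-(x - 1.0) ** 2)]

rate_at_origin = ts_dynamics(0.0, local_models, memberships)
```

Because the memberships overlap smoothly, the blended model is non-linear even though each local model is LTI, which is exactly how the fuzzification produces a non-linear time-invariant model.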
In order to apply the Ramadge-Wonham (RW) theory of discrete event systems to a hybrid system, we must abstract the continuous signals and project the events generated by those signals, so as to obtain new sets of observable and controllable events. Ramadge and Wonham's theory, together with the TCT software, gives a controllable sublanguage of the legal language generated by a discrete event system (DES). Continuous abstraction transforms predicates over continuous variables into controllable or uncontrollable events, and modifies the sets of uncontrollable, controllable, observable and unobservable events. Continuous signals produce virtual events in the system when they cross the bound limits. If such an event is deterministic, it can be projected; it is necessary to determine its controllability in order to assign it to the corresponding set of events: controllable, uncontrollable, observable or unobservable. Finding optimal trajectories that minimise some cost function is the goal of the modelling procedure. A mathematical model of the system allows the user to apply mathematical techniques over its expression: to minimise a specific cost function, to obtain optimal controllers, and to approximate a specific trajectory. The combination of dynamic programming with Bellman's principle of optimality gives the procedure to solve the minimum-time trajectory for hybrid systems. The problem is harder when there is interaction between adjacent states. In hybrid systems the problem is to determine the partial set points to be applied to the local models; an optimal controller can be implemented in each local model in order to assure the minimisation of the local costs. The solution of this problem must give us the trajectory the system is to follow, a trajectory marked by a set of set points which force the system to pass over them. Several ways are possible to drive the system from the starting point Xi to the end point Xf.
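On the discrete layer, the dynamic-programming step can be sketched with Bellman-style value iteration over the mode graph. The graph, transition times and goal below are invented for the example, standing in for the partial set points and local costs of the thesis:

```python
# Sketch of Bellman's principle of optimality on the discrete layer of a
# hybrid model: nodes are discrete modes, edge weights are (assumed)
# transition times, and repeated relaxation yields the minimum-time
# cost-to-go from every mode to the goal.
def min_time_to_goal(edges, goal, n_states):
    cost = {s: float("inf") for s in range(n_states)}
    cost[goal] = 0.0
    for _ in range(n_states - 1):          # Bellman-Ford style relaxation
        for s, t, w in edges:
            if cost[t] + w < cost[s]:
                cost[s] = cost[t] + w      # principle of optimality
    return cost

# hypothetical mode graph: (from_mode, to_mode, transition_time)
edges = [(0, 1, 2.0), (0, 2, 5.0), (1, 2, 1.0), (1, 3, 7.0), (2, 3, 2.0)]
cost = min_time_to_goal(edges, goal=3, n_states=4)
```

The optimal trajectory from mode 0 to the goal then follows the modes whose costs-to-go decrease along each transition, giving the set points to be passed to the local controllers.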
Different ways are interesting with respect to the dynamics, the number of states, the approximation of the set points, etc. These ways need to be safe, viable and RchW, and only one of them must be applied, normally the best one, which minimises the proposed cost function. A reachable way, that is, a way that is controllable and safe, will be evaluated in order to determine which one minimises the cost function. The contribution of this work is a complete framework for working with the majority of hybrid systems: the procedures to model, control and supervise are defined and explained, and their use is demonstrated. Also explained is the procedure for modelling the systems to be analysed by automatic verification. Great improvements were obtained by using this methodology in comparison with other piecewise-linear approximations, and it is demonstrated that in particular cases this methodology can provide the best approximation. The most important contribution of this work is the alpha approximation for non-linear systems with fast dynamics; while this kind of process is not typical, in such cases the alpha approximation is the best linear approximation to use and gives a compact representation.
Resumo:
Two wavelet-based control variable transform schemes are described and are used to model some important features of forecast error statistics for use in variational data assimilation. The first is a conventional wavelet scheme and the other is an approximation of it. Their ability to capture the position- and scale-dependent aspects of covariance structures is tested in a two-dimensional latitude-height context. This is done by comparing the covariance structures implied by the wavelet schemes with those found from the explicit forecast error covariance matrix, and with a non-wavelet-based covariance scheme used currently in an operational assimilation scheme. Qualitatively, the wavelet-based schemes show potential at modeling forecast error statistics well without giving preference to either position- or scale-dependent aspects. The degree of spectral representation can be controlled by changing the number of spectral bands in the schemes, and the smallest number of bands that achieves adequate results is found for the model domain used. Evidence is found of a trade-off between the localization of features in positional and spectral spaces when the number of bands is changed. By examining implied covariance diagnostics, the wavelet-based schemes are found, on the whole, to give results that are closer to diagnostics found from the explicit matrix than those from the non-wavelet scheme. Even though the nature of the covariances has the right qualities in spectral space, variances are found to be too low at some wavenumbers and vertical correlation length scales are found to be too long at most scales. The wavelet schemes are found to be good at resolving variations in position- and scale-dependent horizontal length scales, although the length scales reproduced are usually too short. The second of the wavelet-based schemes is often found to be better than the first in some important respects, but, unlike the first, it has no exact inverse transform.
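The notion of a covariance implied by a control variable transform can be sketched on a toy grid. If the transform U maps uncorrelated control variables to model space, the implied covariance is B = U Uᵀ; below, U scales an orthonormal Haar wavelet basis by per-coefficient standard deviations. The 4-point grid, the Haar basis and the variance values are assumptions for the illustration, not the schemes of the abstract:

```python
# Orthonormal Haar basis for a 4-point grid (rows are basis functions:
# one mean, one large-scale and two small-scale localized functions).
H = [[0.5, 0.5, 0.5, 0.5],
     [0.5, 0.5, -0.5, -0.5],
     [2 ** -0.5, -2 ** -0.5, 0.0, 0.0],
     [0.0, 0.0, 2 ** -0.5, -2 ** -0.5]]

# assumed standard deviation per wavelet coefficient: larger scales get
# larger variance, mimicking a scale-dependent error spectrum
sigma = [2.0, 1.0, 0.5, 0.5]

# implied covariance B = U U^T with U = H^T diag(sigma):
# B[i][j] = sum_k sigma_k^2 * H[k][i] * H[k][j]
n = 4
B = [[sum(sigma[k] ** 2 * H[k][i] * H[k][j] for k in range(n))
      for j in range(n)] for i in range(n)]
```

Because the Haar functions are localized in position as well as scale, varying sigma coefficient-by-coefficient produces covariances that are simultaneously position- and scale-dependent, which is the property being tested in the latitude-height experiments.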