840 results for constant work-rate
Abstract:
In this work, AISI 1010 steel samples were plasma nitrided in a 20% N₂ atmosphere (100 Pa and 400 Pa for N₂ and H₂, respectively), at temperatures of 500 and 580 °C, for 2 h. Three different cooling procedures were applied after nitriding. In the first, the sample cooled naturally, that is, it was kept on the substrate holder. In the second, the sample was pulled off the holder and cooled on a cold surface. In the third, the sample was pulled off the substrate holder and lowered into a special reservoir filled with oil held at ambient temperature. The properties of the AISI 1010 steel samples were characterized by optical and electron microscopy, X-ray diffraction, Mössbauer spectroscopy, and microhardness tests. The thermal gradient inside the sample kept on the substrate holder during cooling was measured by three thermocouples inserted at different depths. When samples were cooled rapidly, the transformation of ϵ-Fe₂₋₃N to γ′-Fe₄N was inhibited, as indicated by the high concentration of the ϵ phase in the compound zone. To obtain a solid solution of nitrogen in the diffusion zone, instead of precipitates of nitride phases, the cooling rate must exceed a critical value of about 0.95 °C/s. When this value is reached at any depth of the diffusion zone, two distinct diffusion zones appear. Temperature gradients were measured inside the samples as a consequence of the plasma treatment. This suggests the need to standardize the term "treatment temperature" for plasma treatment, because different nitrided-layer properties could be reported for the same "treatment temperature".
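The critical cooling-rate criterion above lends itself to a simple numerical check. Below is a minimal sketch, assuming hypothetical thermocouple traces sampled once per second at three depths; the 0.95 °C/s threshold is the value quoted in the abstract, and all variable names and temperature profiles are illustrative, not the study's data:

```python
import numpy as np

CRITICAL_RATE = 0.95  # °C/s, critical cooling rate quoted in the abstract

# Hypothetical temperature-time traces from three thermocouples (°C),
# sampled once per second at different depths of the sample.
t = np.arange(0, 60, 1.0)                      # s
temps = {
    "surface": 580 - 1.2 * t,                  # illustrative linear cooling
    "mid":     580 - 0.9 * t,
    "core":    580 - 0.6 * t,
}

for depth, T in temps.items():
    rate = -np.gradient(T, t)                  # instantaneous cooling rate, °C/s
    if rate.max() > CRITICAL_RATE:
        print(f"{depth}: exceeds critical rate -> nitrogen retained in solid solution")
    else:
        print(f"{depth}: below critical rate -> nitride precipitates expected")
```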
Abstract:
There is a shortage of nurses, leading to recruitment challenges in Sweden and many other countries; recruitment can be especially challenging in less populated regions. Nurses often face difficulties with work-life balance (WLB). This study aims to identify the importance of WLB opportunities and support that make a workplace attractive from the perspective of nursing students studying in Dalarna. A questionnaire was distributed via email to 525 students enrolled in the nursing bachelor program at Dalarna University. They were asked to rate the importance of 15 sub-questions regarding WLB opportunities and support, allowing the importance of 15 corresponding WLB components to be analyzed. 196 students (37 percent) answered the questionnaire. Three WLB components (working from home, childcare, and rooms for breastfeeding) were found not to be important to nursing students studying in Dalarna. This is reasonable given the nature of the nursing profession and the WLB support provided by the Swedish government. Cultural factors, such as the organization being positive towards the use of WLB opportunities and support, were more important than structural factors, such as the possibility to work part-time. Moreover, having a manager who is supportive of using WLB opportunities and support was found to be the most important factor, while practical workplace support such as childcare was found to be the least important. Furthermore, contrary to expectations, the combined importance of all relevant WLB opportunities and support was not significantly influenced by the sociodemographic variables: gender, semester of studies, age, having children, months of work experience, and work experience in the healthcare sector. However, nine individual components were found to be influenced by one or more sociodemographic variables, and some recommendations on how to target specific groups of individuals were therefore made. The overall conclusion of the study is that, regardless of sociodemographic variables and governmental support, organizations should offer new nurses opportunities and support to balance work and life, especially in terms of cultural factors.
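The significance testing described above can be sketched in a few lines. A minimal example, assuming hypothetical 1-5 importance ratings for one WLB component split by gender; a Mann-Whitney U test is one reasonable choice for ordinal ratings, though the study's actual test is not specified here:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical 1-5 importance ratings for one WLB component, split by gender.
ratings_f = rng.integers(1, 6, size=150)
ratings_m = rng.integers(1, 6, size=46)

stat, p = mannwhitneyu(ratings_f, ratings_m, alternative="two-sided")
print(f"U={stat:.1f}, p={p:.3f}")  # p >= 0.05 -> no significant gender influence
```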
Abstract:
My thesis focuses on health policies designed to encourage the supply of health care services. Access to health care is a major problem undermining the health systems of most industrialized countries. In Quebec, the median wait time between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, compared with 2.9 weeks in 1993, despite an increase in the number of physicians over the same period. For policy makers observing rising wait times for health care, it is important to understand the structure of physician labor supply and how it affects the supply of health care services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to shape the short-run supply of health care services. Second, I examine how physician productivity is affected by experience, through the mechanism of learning-by-doing, and use the estimated parameters to find the number of inexperienced physicians that must be recruited to replace an experienced physician who retires, in order to keep the supply of health care services constant. My thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (by measuring the variation of physician productivity over the course of their careers), using panel data on Quebec physicians drawn from both surveys and administrative records. The data contain information on each physician's labor supply, the different types of services provided, and their prices. They cover a period during which the Quebec government changed the relative prices of health care services. I develop and estimate a structural model of labor supply that allows physicians to be multitasking. In my model, physicians choose the number of hours worked as well as the allocation of those hours across the different services offered, while service prices are set by the government. The model generates an income equation that depends on hours worked and on a price index representing the marginal return to hours worked when those hours are allocated optimally across services. The price index depends on the prices of the services offered and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I apply the model to panel data on the remuneration of Quebec physicians merged with data on the time use of the same physicians. I use the model to examine two dimensions of the supply of health care services. First, I analyze the use of monetary incentives to induce physicians to modify their production of different services. Although previous studies have often sought to compare physician behavior across compensation systems, relatively little is known about how physicians respond to changes in the prices of health care services.
Current debates in Canadian health policy circles have focused on the importance of income effects in determining physicians' responses to increases in health service prices. My work contributes to this debate by identifying and estimating the substitution and income effects resulting from changes in the relative prices of health services. Second, I analyze how experience affects physician productivity. This has important implications for physician recruitment to meet the growing demand of an aging population, in particular when the most experienced (most productive) physicians retire. In the first essay, I estimate the income equation conditional on hours worked, using the method of instrumental variables to control for potential endogeneity of hours worked. As instruments, I use indicator variables for physician age, the marginal tax rate, the stock market return, and the square and cube of that return. I show that this yields a lower bound on the own-price elasticity, which allows testing whether physicians respond to monetary incentives. The results show that the lower bounds on the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives. A change in relative prices leads physicians to allocate more working hours to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, by analyzing variations in physicians' hours worked, the volume of services provided, and physician income. To do so, I use the simulated method of moments estimator. The results show that the own-price substitution elasticities are large and significantly positive, reflecting physicians' tendency to increase the volume of the service whose price has increased the most. The cross-price substitution elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. I use the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results show that physicians would reduce their total hours worked (mean elasticity of -0.02) as well as their clinical hours worked (mean elasticity of -0.07). They would also reduce the volume of services provided (mean elasticity of -0.05). Third, I exploit the natural link between the income of a fee-for-service physician and his productivity to establish the productivity profile of physicians. To do so, I modify the model specification to account for the relationship between a physician's productivity and his experience. I estimate the income equation using unbalanced panel data, correcting for the non-random nature of missing observations with a selection model. The results suggest that the productivity profile is an increasing and concave function of experience. Moreover, this profile is robust to using effective experience (the quantity of services produced) as a control variable and to relaxing the parametric assumptions.
In addition, if a physician's experience increases by one year, his production of services increases by 1,003 Canadian dollars. I use the estimated model parameters to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
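The replacement-ratio idea can be illustrated with a back-of-the-envelope calculation, assuming a hypothetical increasing, concave productivity profile; the 1.2 figure above comes from the estimated structural model, not from this illustrative functional form:

```python
import numpy as np

def productivity(experience_years, a=1.0, b=0.04):
    """Hypothetical increasing, concave productivity profile (arbitrary units)."""
    return a * np.log1p(b * experience_years * 25)  # illustrative shape only

retiring = productivity(30)     # an experienced physician near retirement
entrant  = productivity(1)      # an inexperienced recruit

# How many entrants supply as much service as one retiring physician?
ratio = retiring / entrant      # the thesis's structural estimate is 1.2
print(f"replacement ratio ~ {ratio:.2f} entrants per retiring physician")
```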
Abstract:
Understanding the dynamics of blood cells is crucial for discovering biological mechanisms, developing new efficient drugs, designing sophisticated microfluidic devices, and for diagnostics. In this work, we focus on the dynamics of red blood cells in microvascular flow. Microvascular blood flow resistance has a strong impact on cardiovascular function and tissue perfusion. The flow resistance in microcirculation is governed by the flow behavior of blood through a complex network of vessels, where the distribution of red blood cells across vessel cross-sections may be significantly distorted at vessel bifurcations and junctions. We investigate the development of blood flow and its resistance, starting from a dispersed configuration of red blood cells, in simulations for different hematocrits, flow rates, vessel diameters, and aggregation interactions between red blood cells. Initially dispersed red blood cells migrate toward the vessel center, leading to the formation of a cell-free layer near the wall and to a decrease in flow resistance. The development of the cell-free layer appears to be nearly universal when scaled with a characteristic shear rate of the flow, which allows an estimation of the vessel length required for full flow development, $l_c \approx 25D$, with vessel diameter $D$. Thus, the potential effect of red blood cell dispersion at vessel bifurcations and junctions on the flow resistance may be significant in vessels that are shorter than or comparable to the length $l_c$. The presence of aggregation interactions between red blood cells leads in general to a reduction of blood flow resistance. The development of the cell-free-layer thickness looks similar with and without aggregation interactions, although attractive interactions result in larger cell-free-layer plateau values. However, because the aggregation forces are short-ranged, at high enough shear rates ($\bar{\dot{\gamma}} \gtrsim 50~\text{s}^{-1}$) aggregation of red blood cells does not significantly change the blood flow properties. We also develop a simple theoretical model that describes the converged cell-free-layer thickness as a function of flow rate, assuming steady-state flow. The model is based on the balance between a lift force on red blood cells, due to cell-wall hydrodynamic interactions, and a shear-induced effective pressure, due to cell-cell interactions in flow. We expect that these results can also be used to better understand the flow behavior of other suspensions of deformable particles such as vesicles, capsules, and cells. Finally, we investigate segregation phenomena in blood as a two-component suspension under Poiseuille flow, consisting of red blood cells and target cells. The spatial distribution of particles in blood flow is very important: for example, in nanoparticle drug delivery, the particles need to come close to microvessel walls in order to adhere and bring the drug to a target position within the microvasculature. Here we consider that segregation can be described as a competition between shear-induced diffusion and the lift force that pushes every soft particle in a flow away from the wall. To investigate the segregation we use, on the one hand, 2D DPD simulations of red blood cells and target cells of different sizes and, on the other hand, the Fokker-Planck equation in steady state. For the equation, we measure the force profile, particle distribution, and diffusion constant across the channel.
We compare simulation results with those from the Fokker-Planck equation and find a very good correspondence between the two approaches. Moreover, we investigate the diffusion behavior of target particles for different hematocrit values and shear rates. Our simulation results indicate that the diffusion constant increases with increasing hematocrit and depends linearly on the shear rate. The third part of the study describes the development of a simulation model of complex vascular geometries. This model is important for reproducing the vascular systems of small pieces of tissue, which might be obtained from MRI or microscope images. The simulation model of complex vascular systems can be divided into three parts: modeling the geometry, developing in- and outflow boundary conditions, and decomposing the simulation domain for efficient computation. We have found that for the in- and outflow boundary conditions it is better to use an SDPD fluid than a DPD one, because of the density fluctuations along the channel in the latter. Even during flow in a straight channel, it is difficult to control the density of a DPD fluid. The SDPD fluid does not have that shortcoming, even in more complex channels with many branches and in- and outflows, because the force acting on particles is also calculated depending on the local density of the fluid.
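The entrance-length scaling reported above is straightforward to apply in practice. A minimal sketch using the $l_c \approx 25D$ relation from the abstract; the vessel diameters are illustrative:

```python
# Vessel length needed for full development of the cell-free layer,
# using the scaling l_c ~ 25 D reported in the abstract.
for D_um in (10, 20, 40, 100):            # illustrative vessel diameters, micrometers
    l_c = 25 * D_um
    print(f"D = {D_um:4d} um -> l_c ~ {l_c} um")
# Vessels shorter than (or comparable to) l_c never reach the fully
# developed state, so bifurcation-induced dispersion can matter there.
```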
Abstract:
In this paper we show how to construct the Evans function for traveling wave solutions of integral neural field equations when the firing rate function is a Heaviside. This allows a discussion of wave stability and bifurcation as a function of system parameters, including the speed and strength of synaptic coupling and the speed of axonal signals. The theory is illustrated with the construction and stability analysis of front solutions to a scalar neural field model, and a limiting case is shown to recover recent results of L. Zhang [On stability of traveling wave solutions in synaptically coupled neuronal networks, Differential and Integral Equations, 16 (2003), pp. 513-536]. Traveling fronts and pulses are considered in more general models possessing either a linear or piecewise constant recovery variable. We establish the stability of coexisting traveling fronts beyond a front bifurcation and consider parameter regimes that support two stable traveling fronts of different speeds. Such fronts may be connected, and depending on their relative speed the resulting region of activity can widen or contract. The conditions under which the contracting case leads to a pulse solution are established. The stability of pulses is obtained for a variety of examples, in each case confirming a previously conjectured stability result. Finally, we show how this theory may be used to describe the dynamic instability of a standing pulse that arises in a model with slow recovery. Numerical simulations show that such an instability can lead to the shedding of a pair of traveling pulses.
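In practice, wave stability in such constructions reduces to locating zeros of the Evans function in the right half of the complex plane. A generic numerical sketch, assuming a user-supplied Evans function `E` (the closed forms derived in the paper are not reproduced here); it counts zeros inside a semicircular contour via the argument principle:

```python
import numpy as np

def count_zeros(E, radius=5.0, n=4000):
    """Count zeros of an analytic Evans function E inside a semicircular
    contour in the right half plane, via the winding number of E."""
    # Semicircle: arc in Re(lambda) > 0 plus a segment of the imaginary axis.
    theta = np.linspace(-np.pi / 2, np.pi / 2, n)
    arc = radius * np.exp(1j * theta)
    seg = 1j * np.linspace(radius, -radius, n)
    contour = np.concatenate([arc, seg])
    vals = np.array([E(z) for z in contour])
    winding = np.angle(vals[1:] / vals[:-1]).sum() / (2 * np.pi)
    return int(round(winding))

# Toy example: E(lam) = lam + 1 has no zero with Re(lam) > 0 -> stable wave.
print(count_zeros(lambda lam: lam + 1))   # 0
```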
Abstract:
The oceans represent one of the greatest natural resources, holding substantial energy potential and capable of supplying part of the world's energy demand. In recent decades, several devices for converting ocean wave energy into electricity have been studied. In the present work, the operating principle of the Oscillating Water Column (OWC) converter was analyzed numerically. Waves incident on the hydro-pneumatic chamber of the OWC cause an alternating motion of the water column inside the chamber, which produces an alternating air flow through the chimney. The air flow drives a turbine, which transmits energy to an electric generator. The objective of the present study was to investigate the influence of different chamber geometries on the resulting air flow through the turbine, which affects the performance of the device. To this end, different converter geometries were analyzed using 2D and 3D computational models. A computational model developed in the GAMBIT and FLUENT software was used, in which the OWC converter was coupled to a wave tank. The Volume of Fluid (VOF) method and second-order Stokes theory were used to generate regular waves, allowing a more realistic interaction between the converter, the water, and the air. The Finite Volume Method (FVM) was used to discretize the governing equations. In this work, Constructal Design (based on Constructal Theory) was applied for the first time in three-dimensional numerical studies of the OWC in order to find the geometry that most favors device performance. The objective function was the maximization of the air mass flow rate through the chimney of the OWC device, evaluated through the root mean square (RMS) of the flow rate. The results indicated that the chamber geometry influences the conversion of wave energy into electricity. Among the analyzed chamber geometries, those with a larger wave-incidence face area (at constant height) also yielded higher OWC converter performance. The best geometry among the cases of this study provided a gain in device performance of around 30%.
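The objective function described above, the RMS of the alternating air mass flow through the chimney, is simple to compute. A sketch assuming a hypothetical mass flow time series; the signal and wave period are illustrative, not simulation output:

```python
import numpy as np

def rms(signal):
    """Root mean square of a time series; used here as the figure of merit
    for the alternating air mass flow through the OWC chimney."""
    return np.sqrt(np.mean(np.asarray(signal) ** 2))

# Hypothetical alternating mass flow rate (kg/s) over several wave periods.
t = np.linspace(0, 8, 400)                 # s, illustrative wave period of 8 s
m_dot = 0.9 * np.sin(2 * np.pi * t / 8)    # illustrative signal
print(f"RMS mass flow rate: {rms(m_dot):.3f} kg/s")  # ~0.636 for this sine
```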
Abstract:
Many of the equations describing the dynamics of neural systems are written in terms of firing rate functions, which themselves are often taken to be threshold functions of synaptic activity. Dating back to work by Hill in 1936, it has been recognized that more realistic models of neural tissue can be obtained with the introduction of state-dependent dynamic thresholds. In this paper we treat a specific phenomenological model of threshold accommodation that mimics many of the properties originally described by Hill. Importantly, we explore the consequences of this dynamic threshold at the tissue level, by modifying a standard neural field model of Wilson-Cowan type. As in the case without threshold accommodation, classical Mexican-hat connectivity is shown to allow for the existence of spatially localized states (bumps) in both one and two dimensions. An analysis of bump stability in one dimension, using recent Evans function techniques, shows that bumps may undergo instabilities leading to the emergence of both breathers and traveling waves. Moreover, a similar analysis for traveling pulses leads to the conditions necessary to observe a stable traveling breather. In the regime where a bump solution does not exist, direct numerical simulations show the possibility of self-replicating bumps via a form of bump splitting. Simulations in two space dimensions show localized and traveling solutions analogous to those seen in one dimension. Indeed, the dynamical behavior of this neural model appears reminiscent of that seen in other dissipative systems that support localized structures, in particular coupled cubic complex Ginzburg-Landau equations. Further numerical explorations illustrate that the traveling pulses in this model exhibit particle-like properties, similar to those of dispersive solitons observed in some three-component reaction-diffusion systems. A preliminary account of this work first appeared in S Coombes and M R Owen, Bumps, breathers, and waves in a neural network with spike frequency adaptation, Physical Review Letters 94 (2005), 148102(1-4).
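A minimal numerical sketch of a 1D neural field of this type is given below, assuming a simple relaxation dynamics for the accommodating threshold (the specific accommodation model treated in the paper may differ); the Mexican-hat kernel and Heaviside firing rate follow the abstract, and all parameter values are illustrative:

```python
import numpy as np

# 1D neural field u_t = -u + w * H(u - h), with an accommodating threshold
# h_t = (h0 - h + kappa * H(u - h)) / tau  -- an assumed, illustrative form.
N, L, dt, T = 512, 60.0, 0.05, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# Mexican-hat kernel (difference of exponentials), as in the abstract.
w = np.exp(-np.abs(x)) - 0.5 * np.exp(-np.abs(x) / 2)

u = np.exp(-x ** 2)                 # initial bump
h = 0.05 * np.ones(N)               # threshold field
h0, kappa, tau = 0.05, 0.4, 5.0

for _ in range(int(T / dt)):
    f = (u > h).astype(float)                       # Heaviside firing rate
    conv = dx * np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(f)))
    conv = np.fft.fftshift(conv)                    # align kernel origin
    u += dt * (-u + conv)
    h += dt * (h0 - h + kappa * f) / tau            # threshold accommodation

print("active region width:", dx * (u > h).sum())
```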
Abstract:
The value of integrating heat storage into a geothermal district heating system has been investigated. The behaviour of the system under a novel operational strategy has been simulated, focusing on the energetic, economic and environmental effects of incorporating heat storage within the system. A typical geothermal district heating system consists of several production wells, a system of pipelines for the transportation of the hot water to end-users, one or more re-injection wells, and peak-up devices (usually fossil-fuel boilers). Traditionally in these systems, the production wells change their production rate throughout the day according to heat demand, and if their maximum capacity is exceeded the peak-up devices are used to meet the balance of the heat demand. In this study, it is proposed to maintain constant geothermal production and add heat storage to the network: hot water is stored when heat demand is lower than production, and the stored hot water is released into the system to cover the peak demands (or part of them). The intention is not to phase out the peak-up devices entirely, but to decrease their use, as these will often be installed anyway for back-up purposes. The integration of heat storage into such a system and the novel operational strategy are the main novelties of this thesis. A robust algorithm for the sizing of these systems has been developed. The main inputs are the geothermal production data, the heat demand data over one year or more, and the topology of the installation. The outputs are the sizing of the whole system, including the necessary number of production wells, the size of the heat storage, and the dimensions of the pipelines, among others. The results provide several useful insights into the initial design considerations for these systems, emphasizing in particular the importance of heat losses. Simulations are carried out for three sizings of the installation (small, medium and large) to examine the influence of system scale. In the second phase of the work, two algorithms are developed which study in detail the operation of the installation over an arbitrary day and over a whole year, respectively. The first algorithm is a potentially powerful tool for the operators of the installation, who can know a priori how to operate it on any given day from the heat demand. The second algorithm is used to obtain the amount of electricity used by the pumps as well as the amount of fuel used by the peak-up boilers over a whole year. These comprise the main operational costs of the installation and are among the main inputs of the third part of the study. In the third part of the study, an integrated energetic, economic and environmental analysis of the studied installation is carried out, together with a comparison with the traditional case. The results show that by implementing heat storage under the novel operational strategy, heat is generated more cheaply, as all the financial indices improve; more geothermal energy is utilised and less fuel is used in the peak-up boilers, with subsequent environmental benefits, compared to the traditional case. Furthermore, it is shown that the most attractive sizing case is the large one, although the addition of heat storage has the greatest impact in the medium sizing case. In other words, the geothermal component of the installation should be sized as large as possible.
This analysis indicates that the proposed solution is beneficial from energetic, economic, and environmental perspectives, and the aim of the study is therefore fully achieved. Furthermore, the new models for the sizing, operation and economic/energetic/environmental analysis of these kinds of systems can be used with few adaptations for real cases, demonstrating the practical applicability of this study. With this study as a starting point, further work could include the integration of these systems with end-user demands, further analysis of component parts of the installation (such as the heat exchangers), and the integration of a heat pump to maximise the utilisation of geothermal energy.
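The core of the novel operational strategy (constant geothermal production plus a heat store, with boilers covering only the residual peak) can be sketched as a simple hourly dispatch. This is a minimal sketch assuming an illustrative demand profile and a lossless store; the thesis accounts for heat losses, which matter in practice:

```python
import numpy as np

# Hourly heat demand over one day (MW), illustrative profile.
demand = np.array([30, 28, 27, 27, 30, 38, 50, 60, 55, 48, 45, 44,
                   43, 42, 44, 48, 55, 62, 58, 50, 44, 38, 34, 31], float)

production = demand.mean()          # constant geothermal production
store_cap = 60.0                    # MWh, illustrative storage capacity
store = 0.0
boiler = np.zeros(24)

for hr, d in enumerate(demand):
    balance = production - d        # surplus charges the store, deficit drains it
    if balance >= 0:
        store = min(store + balance, store_cap)
    else:
        drawn = min(-balance, store)
        store -= drawn
        boiler[hr] = -balance - drawn   # peak-up boiler covers the rest

print(f"constant production: {production:.1f} MW")
print(f"boiler energy: {boiler.sum():.1f} MWh/day")
```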
Abstract:
Many existing encrypted Internet protocols leak information through packet sizes and timing. Though seemingly innocuous, prior work has shown that such leakage can be used to recover part or all of the plaintext being encrypted. The prevalence of encrypted protocols as the underpinning of critical services such as e-commerce, remote login, and anonymity networks, together with the increasing feasibility of attacks on these services, represents a considerable risk to communications security. Existing mechanisms for preventing traffic analysis focus on re-routing and padding. These prevention techniques have considerable resource and overhead requirements. Furthermore, padding is easily detectable and, in some cases, can introduce its own vulnerabilities. To address these shortcomings, we propose embedding real traffic in synthetically generated encrypted cover traffic. Novel to our approach is the use of realistic network protocol behavior models to generate cover traffic. The observable traffic we generate also has the benefit of being indistinguishable from other real encrypted traffic, further thwarting an adversary's ability to target attacks. In this dissertation, we introduce the design of a proxy system called TrafficMimic that implements realistic cover traffic tunneling and can be used alone or integrated with the Tor anonymity system. We describe the cover traffic generation process, including the subtleties of implementing a secure traffic generator. We show that TrafficMimic cover traffic can fool a complex protocol classification attack with 91% of the accuracy of real traffic. TrafficMimic cover traffic is also not detected by a binary classification attack specifically designed to detect TrafficMimic. We evaluate the performance of tunneling with independent cover traffic models and find that they are comparable to, and in some cases more efficient than, generic constant-rate defenses. We then use simulation and analytic modeling to understand the performance of cover traffic tunneling more deeply. We find that we can take measurements from real or simulated traffic with no tunneling and use them to estimate parameters for an accurate analytic model of the performance impact of cover traffic tunneling. Once validated, we use this model to better understand how delay, bandwidth, tunnel slowdown, and stability affect cover traffic tunneling. Finally, we take the insights from our simulation study and develop several biasing techniques that match the cover traffic to the real traffic while bounding external information leakage. We study these bias methods using simulation and evaluate their security using a Bayesian inference attack. We find that we can safely improve performance with biasing while preventing both traffic analysis and defense detection attacks. We then apply these biasing methods to the real TrafficMimic implementation and evaluate it on the Internet. We find that biasing can provide a 3-5x improvement in bandwidth for bulk transfers and a 2.5-9.5x speedup for Web browsing over tunneling without biasing.
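The embedding idea can be illustrated with a toy scheduler: real packets are carried inside a model-generated cover schedule, padding with dummy payloads when no real data is queued, so the observable pattern follows the cover model rather than the real traffic. This is a sketch of the concept only, not the TrafficMimic implementation; all names and values are hypothetical:

```python
import collections

def tunnel(real_packets, cover_schedule, cell_size=512):
    """Emit one fixed-size cell per cover-schedule slot: real data when
    available, padding otherwise, so the observable size/timing pattern
    follows the cover model rather than the real traffic."""
    queue = collections.deque(real_packets)
    out = []
    for t in cover_schedule:                 # send times from the protocol model
        if queue:
            payload = queue.popleft()[:cell_size].ljust(cell_size, b"\x00")
            out.append((t, payload, "real"))
        else:
            out.append((t, b"\x00" * cell_size, "pad"))
    return out

cells = tunnel([b"GET /", b"data"], cover_schedule=[0.00, 0.03, 0.09, 0.10])
print([(t, kind) for t, _, kind in cells])
```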
Abstract:
This thesis describes a collection of studies into the electrical response of a III-V MOS stack comprising metal/GaGdO/GaAs layers as a function of fabrication process variables, and the findings of those studies. As a result of this work, areas of improvement in the gate process module of a III-V heterostructure MOSFET were identified. Compared to a traditional bulk silicon MOSFET design, one featuring a III-V channel heterostructure with a high-dielectric-constant oxide as the gate insulator provides numerous benefits: the insulator can be made thicker for the same capacitance, the operating voltage can be made lower for the same current output, and improved output characteristics can be achieved without reducing the channel length further. Transistors composed of III-V materials are, however, highly susceptible to damage induced by radiation and plasma processing. These devices utilise sub-10 nm gate dielectric films, which are prone to contamination, degradation and damage. Therefore, throughout this work, process damage and contamination issues, as well as techniques to mitigate or prevent them, were investigated through comparative studies of III-V MOS capacitors and transistors comprising various metal gates, various thicknesses of GaGdO dielectric, and a number of GaAs-based semiconductor layer structures. Transistors fabricated before this work commenced showed problems with threshold voltage control: MOSFETs designed for normally-off (VTH > 0) operation exhibited below-zero threshold voltages. With the results obtained during this work, it was possible to understand why the transistor threshold voltage shifts as the gate length decreases, and what pulls the threshold voltage downwards, preventing normally-off device operation. Two main culprits for the negative VTH shift were found. The first was radiation damage induced by the gate metal deposition process, which can be prevented by slowing down the deposition rate. The second was the layer of gold added on top of platinum in the gate metal stack, which reduces the effective work function of the whole gate due to its electronegativity properties. Since the device was designed for a platinum-only gate, this could explain the below-zero VTH. It could be prevented either by using a platinum-only gate, or by matching the layer structure design to the actual gate metal used in future devices. Post-metallisation thermal annealing was shown to mitigate both effects. However, if post-metallisation annealing is used, care should be taken to perform it before the ohmic contacts are formed, as the thermal treatment was shown to degrade the source/drain contacts. In addition, this programme of studies found that if the gate contact is deposited before the source/drain contacts, it causes a shift in threshold voltage towards negative values as the gate length decreases, because the ohmic contact anneal process affects the properties of the underlying material differently depending on whether it is covered with the gate metal or not. Regarding surface contamination, this work found that it causes device-to-device parameter variation, and a plasma clean is therefore essential. This work also demonstrated that the parasitic capacitances in the system, namely the contact-periphery-dependent gate-ohmic capacitance, play a significant role in the total gate capacitance.
This is true to such an extent that reducing the distance between the gate and the source/drain ohmic contacts would help shift the threshold voltages closer to the designed values. The findings made available by the collection of experiments performed for this work have two major applications. Firstly, they provide useful data for the study of the phenomena taking place inside the metal/GaGdO/GaAs layers and interfaces as a result of the chemical processes applied to them. Secondly, they support recommendations on how best to approach the fabrication of devices utilising these layers.
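The work-function effect described above can be illustrated with a first-order estimate: for an ideal MOS stack, the threshold voltage shifts roughly one-for-one with the effective gate metal work function. A sketch using textbook work-function values; the actual effective work function of a Pt/Au stack depends on layer thicknesses and processing, and the design target below is hypothetical:

```python
# First-order V_TH shift from a change in effective gate work function.
# Ideal MOS approximation: delta V_TH ~ delta phi_m (in volts).
PHI_PT = 5.6   # eV, textbook work function of platinum
PHI_AU = 5.1   # eV, textbook work function of gold

v_th_design = 0.2                      # V, hypothetical normally-off target
delta = PHI_AU - PHI_PT                # effective work function lowered by Au
print(f"shift: {delta:+.1f} V -> V_TH ~ {v_th_design + delta:+.1f} V (below zero)")
```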
Abstract:
A study is presented on the detection and characterization of volcano-tectonic and long-period seismic events in seismic records generated by the Cotopaxi volcano. The proposed sequential detection structure maximizes the probability of detecting an event present in a seismic record while minimizing false detections in its absence. Detection is performed in the time domain in quasi-real time, maintaining a constant false alarm rate; the spectral content of the detected events is then studied using classical spectral estimators, such as the periodogram, and parametric estimators, such as Burg's maximum entropy method. The detected events are thereby categorized as volcano-tectonic, long-period, or "other" when they have characteristics belonging to neither of the first two types, such as lightning strikes.
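A minimal cell-averaging CFAR detector of the kind the abstract describes (time-domain, quasi-real-time, constant false alarm rate) is sketched below; window lengths, the scaling factor, and the synthetic trace are illustrative, not the study's configuration:

```python
import numpy as np

def ca_cfar(power, n_train=50, n_guard=5, scale=4.0):
    """Cell-averaging CFAR: flag samples whose power exceeds `scale` times
    the mean of the surrounding training cells (guard cells excluded)."""
    hits = np.zeros_like(power, dtype=bool)
    for i in range(n_train + n_guard, len(power) - n_train - n_guard):
        left = power[i - n_guard - n_train : i - n_guard]
        right = power[i + n_guard + 1 : i + n_guard + 1 + n_train]
        noise = np.mean(np.concatenate([left, right]))
        hits[i] = power[i] > scale * noise
    return hits

rng = np.random.default_rng(1)
trace = rng.exponential(1.0, 2000)       # background seismic noise power
trace[1000:1020] += 15.0                 # synthetic event
print("detections at samples:", np.flatnonzero(ca_cfar(trace))[:5], "...")
```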
Abstract:
Proteases account for 60-65% of the global industrial enzyme market and are used in the food industry for meat tenderizing, peptide synthesis, infant formula preparation, baking, and brewing, as well as in pharmaceuticals, medical diagnostics, as additives in the detergent industry, and in the textile industry for leather dehairing and processing. Specific proteases produced by keratinolytic microorganisms are called keratinases and are distinguished from other proteases by their greater capacity to degrade compact, insoluble substrates such as keratin. Currently, processes that make full use of raw materials without negative environmental impacts have gained prominence. Within this theme, two residues stand out: the feather meal left over from the cultivation of Bacillus sp. P45 for protease production, and residual yeast biomass; both have high protein content and can be used in the cultivation of Bacillus sp. P45 to obtain proteases. The objective of this work was to obtain purified keratinase in large quantities, characterize it, and apply it in enzymatic milk coagulation for the development of a cream cheese enriched with chia and quinoa flour, as well as to apply different co-products to the production of proteolytic and keratinolytic enzymes. This thesis was divided into four articles. In the first, purified keratinase was obtained in larger quantities, and its thermal stability parameters and the influence of chemical compounds on its activity were determined. Purification factors of 2.6-, 6.7-, and 4.0-fold were achieved for the 1st ATPS (aqueous two-phase system), the 2nd ATPS, and diafiltration, respectively. Enzyme recovery reached 75.3% in the 1st ATPS, 75.1% in the 2nd system, and 84.3% in diafiltration. A temperature of 55 °C and pH 7.5 were determined to be optimal for keratinase activity. The mean deactivation energy (Ed) was 118.0 kJ/mol, and the z- and D-values ranged from 13.6 to 18.8 °C and from 6.9 to 237.3 min, respectively. In addition, salts (CaCl2, CaO, C8H5KO4, and MgSO4) increased enzyme activity. The second article presents the application of the keratinase as a bovine milk coagulant and its use in obtaining a cream cheese enriched with chia and quinoa. The enzyme showed coagulation activity similar to a commercial coagulant at a concentration of 30 mg/mL. The purified enzyme was employed efficiently in the manufacture of the cream cheese, which had pH values of 5.3 and acidity of 0.06 to 0.1 mol/L, increasing over 25 days of storage. The third article presents the profile of the cream cheese enriched with chia and quinoa flour, which showed a high water retention index (>99.0%) and low syneresis values (<0.72%). High fiber content was found (3.0 to 5.0%), suggesting its consumption as a source of fiber. Microbiological analyses complied with current legislation. Sensory analysis showed high palate-smoothness scores, with higher consistency and spreadability scores in the samples with higher cream and quinoa concentrations. The fourth article addresses the ultrasound-assisted extraction of β-galactosidase from residual yeast biomass, and the use of residual feather meal as a substrate for obtaining proteases.
Ultrasound was efficient for cell disruption and β-galactosidase extraction, showing high activity (35.0 U/mL) and yield (876.0 U/g of biomass). The highest proteolytic activity (1300 U/mL at 32 h) and keratinolytic activity (89.2 U/mL) were obtained using residual yeast biomass and residual feather meal, respectively. The highest proteolytic productivity (40.8 U/mL/h) was observed in the medium using residual biomass as substrate, while the highest keratinolytic productivity (2.8 U/mL/h) was achieved using reused feather meal.
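The D, z, and Ed parameters quoted above follow from standard first-order thermal deactivation kinetics. A sketch with hypothetical deactivation rate constants (the thesis values are quoted above and are not reproduced by this example):

```python
import numpy as np

# Hypothetical first-order deactivation rate constants k (1/min) vs temperature.
T_C = np.array([50.0, 55.0, 60.0, 65.0])        # °C
k = np.array([0.010, 0.025, 0.060, 0.150])      # 1/min, illustrative

D = np.log(10) / k                              # D-value: time for 90% activity loss
# z-value: temperature rise for a tenfold drop in D (slope of log10 D vs T).
slope = np.polyfit(T_C, np.log10(D), 1)[0]
z = -1.0 / slope

# Arrhenius deactivation energy Ed from ln k vs 1/T (T in kelvin).
R = 8.314                                       # J/(mol K)
Ed = -R * np.polyfit(1.0 / (T_C + 273.15), np.log(k), 1)[0]

print(f"D values (min): {np.round(D, 1)}")
print(f"z = {z:.1f} °C, Ed = {Ed / 1000:.0f} kJ/mol")
```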