992 results for Distributed Dislocation Dipole Technique
Abstract:
In this work, GaN and AlGaN layers were grown by metal-organic chemical vapor deposition (MOCVD) on sapphire substrates. The research was carried out at the Micro and Nanoscience Laboratory of Helsinki University of Technology. The objective of this thesis is the study of the MOCVD technique for the growth of GaN and AlGaN films and the optimization of growth parameters in order to improve the crystal quality of the films. The widely used two-step method and a new multistep method were used for GaN and AlGaN MOCVD growth on c-plane sapphire. Properties of the GaN and AlGaN layers were studied using in-situ reflectance monitoring during MOCVD growth, atomic force microscopy and X-ray diffraction. Compared to the two-step method, the multistep method produced GaN and AlGaN layers of even better quality and a significant reduction in threading dislocation density.
Abstract:
We propose a new approach and related indicators for globally distributed software support and development based on a 3-year process improvement project in a globally distributed engineering company. The company develops, delivers and supports a complex software system with tailored hardware components and unique end-customer installations. By applying domain knowledge from operations management on lead time reduction and its multiple benefits to process performance, the workflows of the globally distributed software development and multitier support processes were measured and monitored throughout the company. The results show that global end-to-end process visibility and centrally managed reporting at all levels of the organization catalyzed a change process toward significantly better performance. With the new performance indicators, based on lead times and their variation, combined with fixed control procedures, the case company was able to report faster bug-fixing cycle times, improved response times and generally better customer satisfaction in its global operations. In all, lead times to implement new features and to respond to customer issues and requests were reduced by 50%.
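The abstract does not give the indicators' formulas; as a minimal sketch, assuming lead times are recorded per issue, indicators of this kind could combine a mean lead time with simple control limits on its variation. All names below are hypothetical illustrations, not the case company's actual procedure.

```python
from statistics import mean, stdev

def lead_time_indicators(lead_times_days):
    """Mean lead time plus simple 3-sigma control limits.

    `lead_times_days` is a list of per-issue lead times in days. This is
    an illustrative stand-in for the (unspecified) indicators described
    in the abstract, not the company's actual control procedure.
    """
    m = mean(lead_times_days)
    s = stdev(lead_times_days)
    return {
        "mean": m,
        "upper_control_limit": m + 3 * s,
        "lower_control_limit": max(0.0, m - 3 * s),  # lead times cannot go negative
    }

# Example: bug-fix lead times (days) for one reporting period
print(lead_time_indicators([12, 9, 15, 11, 30, 8, 10]))
```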
Abstract:
The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
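The well-known trusted-third-party solution that the abstract recalls can be sketched at a high level. The following is a minimal, non-cryptographic illustration with hypothetical names: the trusted process releases the items only once both deposits are present, so the exchange is all-or-nothing. Real protocols add commitments, signatures and timeouts.

```python
# Minimal sketch of fair exchange through a trusted third party (TTP):
# each party deposits its item with the TTP, and the TTP releases the
# items only when both deposits are present, so either both parties
# receive the expected item or neither learns anything.

class TrustedThirdParty:
    def __init__(self):
        self.deposits = {}  # party name -> deposited item

    def deposit(self, party, item):
        self.deposits[party] = item

    def exchange(self, party_a, party_b):
        # All-or-nothing release: deliver nothing unless both deposited.
        if party_a in self.deposits and party_b in self.deposits:
            return {party_a: self.deposits[party_b],
                    party_b: self.deposits[party_a]}
        return None  # abort: no process obtains any information

ttp = TrustedThirdParty()
ttp.deposit("alice", "item-A")
ttp.deposit("bob", "item-B")
print(ttp.exchange("alice", "bob"))  # {'alice': 'item-B', 'bob': 'item-A'}
```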
Abstract:
This work comprises two parts. The aim of the first part is to present a review of gastrostomy techniques in children. A gastrostomy is, by definition, a fistulous tract between the stomach and the abdominal wall. Its purpose is to allow gastric decompression, enteral nutrition and the administration of medication. The indications and contraindications for the creation and use of a gastrostomy are detailed in this work. Historically, the first gastrostomies were of accidental or infectious origin (gastrocutaneous fistulae) and were incompatible with life. In 1845, Sedillot described the first surgical gastrostomy without a catheter, whose drawback was the presence of leaks. Since then, techniques have multiplied, evolving towards continence and the use of catheters. In 1979, Gauderer described a percutaneous technique for the first time, performed on a 5-month-old child. This technique is called Percutaneous Endoscopic Gastrostomy (PEG). It was subsequently extended to the adult population. Currently, a great variety of techniques exists, using an open (laparotomy), laparoscopic or percutaneous (endoscopic or radiological) approach. These techniques can be combined. All of them require the intermittent or continuous presence of a device that keeps the gastrostomy open and prevents gastric leaks. These devices are numerous; initially they were rigid catheters (wood, metal, rubber). They were later made of silicone, which makes them more flexible and better tolerated by the patient. To prevent dislodgement, they have an intragastric anchoring system, such as a mushroom tip (Bard®), a balloon (Foley®, Mic-Key®) or a coiled ("pig-tail") catheter tip, as well as an extragastric anchoring system ("cross-bar"). In 1982, Gauderer created the first skin-level device: the gastrostomy button (GB). Two types currently exist: mushroom-tip (Bard®) and balloon (Mic-Key®). Several types of complications are related to the operative technique, to patient management and to the material used. A comparison of the different techniques, materials used and associated costs is detailed in this work. The second part of this work is dedicated to gastrostomy buttons, and more specifically to the balloon button (Mic-Key®). We present the different buttons and their specific techniques. The button is inserted either into a pre-formed gastrostomy or directly during the creation of a gastrostomy by laparotomy, laparoscopy or a percutaneous approach. Complications related to the button are reported, and other digestive or urological uses are described. We then present our experience with 513 balloon buttons (Mic-Key®) in a review of 73 children. Button placement was performed in a pre-formed gastrostomy without recourse to general anaesthesia. The technique chosen for creating the gastrostomy depends on the underlying pathology, the patient's general condition, the need for a concomitant operation and the anaesthetic risk. We provide details on the button, such as its size as a function of age, its lifespan, and the reasons that led to its replacement. Our results are compared with those of the literature.
Based on our experience and on a review of the specialized literature, we propose recommendations on the choice of technique and material. This work ends with a reflection on the future of gastrostomy. If the future consists of improving and innovating techniques and materials, protocols for standardizing techniques, selecting patients and teaching care should follow. The management of the child is not limited to the appropriate selection of technique and materials; it is above all a multidisciplinary approach. Collaboration between caregivers, the family and the child is essential for management to be optimal and risk-free.
Abstract:
The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each of these environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purpose of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread; thus, the scheduling does not affect the execution of the parallel application. Performance results show that MPIT achieves considerable improvements over conventional MPI applications.
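The overlap of communication and computation through a dedicated communication thread can be illustrated in spirit with plain Python threading. This is a sketch of the idea only: a queue and a stub function stand in for the MPI layer that MPIT actually uses, and all names are hypothetical.

```python
import threading
import queue

# Sketch of the MPIT idea: a dedicated communication thread drains an
# outgoing-message queue while the main thread keeps computing, so
# communication and computation overlap. A real MPIT program would make
# MPI calls here; the queue and `send_over_network` are stand-ins.

outbox = queue.Queue()

def send_over_network(msg):
    pass  # placeholder for an MPI send in the real paradigm

def communication_thread():
    while True:
        msg = outbox.get()
        if msg is None:          # sentinel: computation has finished
            break
        send_over_network(msg)   # runs concurrently with computation

comm = threading.Thread(target=communication_thread)
comm.start()

partial_sums = []
for chunk in range(8):           # the "computation": per-chunk work
    result = sum(range(chunk * 1000, (chunk + 1) * 1000))
    partial_sums.append(result)
    outbox.put(("partial_result", chunk, result))  # non-blocking hand-off

outbox.put(None)                 # tell the communication thread to stop
comm.join()
print("total:", sum(partial_sums))
```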
Abstract:
Technological development brings more and more complex systems to the consumer markets. The time required to bring a new product to market is crucial for the competitive edge of a company. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power. Distributed simulation can be used to meet these demands, but it has its own problems. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, communication protocols, partitioning of the problem, distribution of the problem, capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused capacity in the form of idle workstations. Using this computing power for distributed simulation requires the simulation to adapt to a changing load situation. This requires all or part of the simulation work to be removed from a workstation when the owner wishes to use the workstation again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied; it is shown to perform better than no load balancing, and different approaches to load balancing are discussed.
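The migration decision described above (remove work from a workstation when its owner returns or the external load rises) can be sketched as follows. The threshold, the load metric and all names are hypothetical, since the abstract does not specify Diworse's actual policy.

```python
# Sketch of the migration decision described above: simulation work is
# moved off a workstation when the owner is active or the external load
# is high. The threshold and the notion of "load" are hypothetical;
# Diworse's actual load-balancing policy is not given in the abstract.

def should_migrate(owner_active, external_load, load_threshold=0.5):
    return owner_active or external_load > load_threshold

def rebalance(workstations, work_units):
    idle = [w for w in workstations
            if not should_migrate(w["owner_active"], w["load"])]
    if not idle:
        return {}  # nowhere to place work: every host is busy or reclaimed
    # Reassign all work units round-robin over the currently idle hosts.
    return {unit: idle[i % len(idle)]["name"]
            for i, unit in enumerate(work_units)}

hosts = [{"name": "ws1", "owner_active": False, "load": 0.1},
         {"name": "ws2", "owner_active": True,  "load": 0.0},
         {"name": "ws3", "owner_active": False, "load": 0.8}]
print(rebalance(hosts, ["lp0", "lp1", "lp2"]))  # only ws1 qualifies
```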
Abstract:
The objective of this work was to introduce the emerging non-contacting spray coating process and compare it to existing coating techniques. Particular emphasis was given to the details of the spraying process of paper coating colour and the base paper requirements set by the new coating method. Spraying technology itself is nothing new, but the atomisation process of paper coating colour is quite unknown to the paper industry. The differences between the rheology of painting and coating colours make it very difficult to utilise the existing information from spray painting research. Based on the trials, some basic conclusions can be made. The results of this study suggest that the Brookfield viscosity of spray coating colour should be as low as possible; presently a level of 50 mPas is regarded as optimal. For paper quality and coater runnability, the solids level should be as high as possible. However, the graininess of the coated paper surface and nozzle wear limit the maximum solids level to 60% at the moment. Most likely due to the low solids and low viscosity of the coating colour, the low-shear Brookfield viscosity correlates very well with the paper and spray fan qualities. High-shear viscosity is also important, but less significant than the low-shear viscosity. Droplet size should be minimized; besides keeping the Brookfield viscosity low, this can be helped by using a surfactant or dispersing agent in the coating colour formula. Increasing the spraying pressure in the nozzle can also reduce the droplet size. A small droplet size also improves the coating coverage, since there is hardly any levelling taking place after the impact with the base paper. Because of the lack of shear forces after application, the pigment particles do not orient along the paper surface. Therefore, the study indicates that, based on present know-how, no quality improvements can be obtained by the use of platy pigments. Their other disadvantage is the rapid deterioration of nozzle lifetime. Further research in both coating colour rheology and nozzle design may change this in the future, but so far only round pigments, such as calcium carbonate typically is, can be used with spray coating. The low water retention characteristics of spray coating, enhanced by the low solids and low viscosity, challenge the base paper absorption properties. The filler level has to be low so as not to increase the number of small pores, which have a great influence on the absorption properties of the base paper. Hydrophobic sizing reduces this absorption and prevents binder migration efficiently. High surface roughness and especially poor formation of the base paper deteriorate the spray coated paper properties. However, pre-calendering of the base paper does not contribute anything to the finished paper quality, at least at coating colour solids levels below 60%. When targeting a standard offset LWC grade, spray coating produces similar quality to film coating, with blade coating being on a slightly better level. However, because of the savings in both investment and production costs, spray coating may have an excellent future ahead. The porous nature of the spray coated surface offers an optimum substrate for the coldset printing industry to utilise the potential of high quality papers in their business.