879 results for next generation sequencing
Abstract:
Understanding and measuring the interaction of light with sub-wavelength structures and atomically thin materials is of critical importance for the development of next-generation photonic devices. One approach to achieving the desired optical properties in a material is to manipulate its mesoscopic structure or its composition in order to affect the properties of the light-matter interaction. There has been tremendous recent interest in so-called two-dimensional materials, consisting of only a single to a few layers of atoms arranged in a planar sheet. These materials have demonstrated great promise as a platform for studying unique phenomena arising from the low dimensionality of the material and for developing new types of devices based on these effects. A thorough investigation of the optical and electronic properties of these new materials is essential to realizing their potential. In this work we present studies that explore the nonlinear optical properties and carrier dynamics in nanoporous silicon waveguides, two-dimensional graphite (graphene), and atomically thin black phosphorus. We first present an investigation of the nonlinear response of nanoporous silicon optical waveguides using a novel pump-probe method. A two-frequency heterodyne technique is developed in order to measure the pump-induced transient change in phase and intensity in a single measurement. The experimental data reveal a characteristic material response time and temporally resolved intensity and phase behavior matching a physical model dominated by free-carrier effects that are significantly stronger and faster than those observed in traditional silicon-based waveguides. These results shed light on the large optical nonlinearity observed in nanoporous silicon and demonstrate a new measurement technique for heterodyne pump-probe spectroscopy. Next, we explore the optical properties of low-doped graphene in the terahertz spectral regime, where both intraband and interband effects play a significant role. Probing the graphene at intermediate photon energies enables the investigation of its nonlinear optical properties as the electron system is heated by the intense pump pulse. By simultaneously measuring the reflected and transmitted terahertz light, a precise determination of the pump-induced change in absorption can be made. We observe that as the intensity of the terahertz radiation is increased, the optical properties of the graphene change from interband, semiconductor-like absorption to a more metallic behavior with increased intraband processes. This transition reveals itself in our measurements as an increase in the terahertz transmission through the graphene at low fluence, followed by a decrease in transmission and the onset of a large, photo-induced reflection as the fluence is increased. A hybrid optical-thermodynamic model successfully describes our observations and predicts that this transition will persist across mid- and far-infrared frequencies. This study further demonstrates the important role that reflection plays, since the absorption saturation intensity (an important figure of merit for graphene-based saturable absorbers) can be underestimated if only the transmitted light is considered. These findings are expected to contribute to the development of new optoelectronic devices designed to operate in the mid- and far-infrared frequency range.
Lastly, we discuss recent work on black phosphorus, a two-dimensional material that has attracted interest due to its high mobility and direct, configurable band gap (300 meV to 2 eV), depending on the number of atomic layers comprising the sample. In this work we examine the pump-induced change in optical transmission of mechanically exfoliated black phosphorus flakes using a two-color optical pump-probe measurement. The time-resolved data reveal a fast pump-induced transparency accompanied by a slower absorption that we attribute to Pauli blocking and free-carrier absorption, respectively. Polarization studies show that these effects are also highly anisotropic, underscoring the importance of crystal orientation in the design of optical devices based on this material. We conclude our discussion of black phosphorus with a study that employs this material as the active element in a photoconductive detector capable of gigahertz-class detection at room temperature at mid-infrared frequencies.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and making small changes to a solution is difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used to map the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not consistent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, a new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, and learning amounts to 'counting' in the case of multinomial distributions.
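To make the 'learning by counting' idea above concrete, the following minimal sketch estimates conditional rule frequencies from a set of promising rule strings and samples new strings from them. It assumes, for illustration only, a chain-structured network in which each construction step is conditioned solely on the previous one; the step count, rule count, population size and the placeholder fitness function are hypothetical and are not taken from this research.

```python
import random
from collections import Counter, defaultdict

# Hypothetical sizes: moves per schedule and candidate rules per move.
N_STEPS, N_RULES = 10, 4

def fitness(rule_string):
    # Placeholder objective. In the real algorithm the rule string would be
    # decoded into a schedule and scored against constraints and costs.
    return -sum(abs(a - b) for a, b in zip(rule_string, rule_string[1:]))

def learn_model(promising):
    """Estimate P(rule_t | rule_{t-1}) by simple counting over promising strings."""
    counts = [defaultdict(Counter) for _ in range(N_STEPS)]
    for string in promising:
        prev = None
        for t, rule in enumerate(string):
            counts[t][prev][rule] += 1
            prev = rule
    return counts

def sample(counts):
    """Generate a new rule string node by node from the learned conditionals."""
    string, prev = [], None
    for t in range(N_STEPS):
        observed = counts[t].get(prev)
        if observed:
            rules, weights = zip(*observed.items())
            rule = random.choices(rules, weights)[0]
        else:
            rule = random.randrange(N_RULES)  # unseen context: fall back to uniform
        string.append(rule)
        prev = rule
    return string

# One BOA-style loop: keep the fitter half, re-learn the model, resample the rest.
population = [[random.randrange(N_RULES) for _ in range(N_STEPS)] for _ in range(50)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    model = learn_model(population[:25])
    population[25:] = [sample(model) for _ in range(25)]
print(max(population, key=fitness))
```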
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following steps (a minimal illustrative sketch of this loop is given after the references below). The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the Roulette Wheel strategy. The next step reinforces the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber within the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.
References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
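The strength-based loop described above (constant initial strengths, roulette-wheel rule selection, reinforcement of only the rules used in the previous solution) can be sketched as follows. The stage and rule counts, the reward scheme and the `solution_quality` placeholder are illustrative assumptions, not the parameters of the proposed system.

```python
import random

N_STAGES, N_RULES = 10, 4
strengths = [[1.0] * N_RULES for _ in range(N_STAGES)]  # constant initial strength

def roulette_pick(weights):
    """Roulette-wheel selection: pick an index with probability proportional to strength."""
    return random.choices(range(len(weights)), weights)[0]

def build_solution():
    """Choose one rule per construction stage according to the current strengths."""
    return [roulette_pick(strengths[stage]) for stage in range(N_STAGES)]

def solution_quality(rules):
    # Placeholder: in the real system the chosen rules would be decoded into a
    # schedule and scored against the problem's constraints and costs.
    return random.random()

def reinforce(rules, reward):
    """Strengthen only the rules used in the previous solution; others are unchanged."""
    for stage, rule in enumerate(rules):
        strengths[stage][rule] += reward

best, best_quality = None, float("-inf")
for iteration in range(100):
    rules = build_solution()
    quality = solution_quality(rules)
    reinforce(rules, reward=quality)
    if quality > best_quality:
        best, best_quality = rules, quality
print(best, best_quality)
```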
Abstract:
We present ideas about creating a next-generation Intrusion Detection System (IDS) based on the latest immunological theories. The central challenge with computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats, in conjunction with ever larger IT systems, urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems (AIS): the Human Immune System (HIS) can detect and defend against harmful and previously unseen invaders, so can we not build a similar Intrusion Detection System (IDS) for our computers? Presumably, such systems would then have the same beneficial properties as the HIS, such as error tolerance, adaptation and self-monitoring. Current AIS have been successful on test systems, but the algorithms rely on self-nonself discrimination, as stipulated in classical immunology. However, immunologists are increasingly finding fault with traditional self-nonself thinking, and a new 'Danger Theory' (DT) is emerging. This new theory suggests that the immune system reacts to threats based on the correlation of various (danger) signals, and it provides a method of 'grounding' the immune response, i.e. linking it directly to the attacker. Little is currently understood of the precise nature and correlation of these signals, and the theory is a topic of hot debate. It is the aim of this research to investigate this correlation and to translate the DT into the realm of computer security, thereby creating AIS that are no longer limited by self-nonself discrimination. It should be noted that we do not intend to defend this controversial theory per se, although as a deliverable this project will add to the body of knowledge in this area. Rather, we are interested in its merits for scaling up AIS applications by overcoming self-nonself discrimination problems.
Abstract:
The next generation of vehicles will be equipped with automated Accident Warning Systems (AWSs) capable of warning neighbouring vehicles about hazards that might lead to accidents. The key enabling technology for these systems is the Vehicular Ad-hoc Network (VANET), but the dynamics of such networks make the crucial timely delivery of warning messages challenging. While most previously attempted implementations have used broadcast-based data dissemination schemes, these do not cope well as data traffic load or network density increases. This thesis addresses the problem of sending warning messages in a timely manner by employing a network coding technique. The proposed NETwork COded DissEmination (NETCODE) is a VANET-based AWS responsible for generating and sending warnings to the vehicles on the road. NETCODE offers an XOR-based data dissemination scheme that sends multiple warnings in a single transmission and therefore reduces the total number of transmissions required to send the same number of warnings that broadcast schemes send. Hence, it reduces contention and collisions in the network, improving the delivery time of the warnings. The first part of this research (Chapters 3 and 4) asserts that in order to build a warning system, it is necessary to ascertain the system requirements, the information to be exchanged, and the protocols best suited for communication between vehicles. Therefore, a study of these factors, along with a review of existing proposals identifying their strengths and weaknesses, is carried out. An analysis of existing broadcast-based warning schemes is then conducted, which concludes that although this is the most straightforward approach, increased loading can result in an effective collapse, with unacceptably long transmission delays. The second part of this research (Chapter 5) proposes the NETCODE design, including the main contribution of this thesis: a pair of encoding and decoding algorithms that make use of an XOR-based technique to reduce transmission overheads and thus allow warnings to be delivered in time. The final part of this research (Chapters 6-8) evaluates the performance of the proposed scheme in terms of how it reduces the number of transmissions in the network in response to growing data traffic load and network density, and investigates its capacity to detect potential accidents. The evaluations use a custom-built simulator to model real-world scenarios such as city areas, junctions, roundabouts and motorways. The study shows that the reduction in the number of transmissions significantly reduces contention in the network, and this allows vehicles to deliver warning messages more rapidly to their neighbours. It also examines the relative performance of NETCODE when handling both sudden event-driven and longer-term periodic messages in diverse scenarios under stress caused by increasing numbers of vehicles and transmissions per vehicle. This work confirms the thesis' primary contention that XOR-based network coding provides a potential solution on which a more efficient AWS data dissemination scheme can be built.
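As a minimal illustration of the XOR coding idea behind NETCODE, the sketch below combines two fixed-length warnings into a single coded broadcast that each neighbour can decode using the warning it already holds (for example, one it originated or previously overheard). The payloads, padding length and helper names are illustrative assumptions, not the encoding and decoding algorithms defined in the thesis.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length payloads."""
    return bytes(x ^ y for x, y in zip(a, b))

PAYLOAD_LEN = 32  # hypothetical fixed warning size, padded with spaces

warning_a = b"HAZARD: ice on bridge, km 12".ljust(PAYLOAD_LEN)
warning_b = b"HAZARD: stalled truck, lane 2".ljust(PAYLOAD_LEN)

# One coded transmission replaces two separate broadcasts.
coded = xor_bytes(warning_a, warning_b)

# A vehicle that already holds warning_a recovers warning_b, and vice versa.
recovered_b = xor_bytes(coded, warning_a)
recovered_a = xor_bytes(coded, warning_b)
assert recovered_a == warning_a and recovered_b == warning_b
print(recovered_b.decode().strip())
```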
Abstract:
Microorganisms play very important roles in maintaining ecosystems, which explains the enormous interest in understanding the relationships between these organisms as well as between them and the environment. It is estimated that the total number of prokaryotic cells on Earth is between 4 and 6 × 10^30, constituting an enormous biological and genetic pool to be explored. Although currently only 1% of all this wealth can be cultivated by standard laboratory techniques, metagenomic tools allow access to the genomic potential of environmental samples in a culture-independent manner, and in combination with third generation sequencing technologies, sample coverage becomes even greater. Soils, in particular, are major reservoirs of this diversity, and many important environments around us, such as the Brazilian Caatinga and Atlantic Forest biomes, are poorly studied. Thus, the genetic material from environmental soil samples of the Caatinga and Atlantic Forest biomes was extracted by direct techniques and pyrosequenced, and the sequences generated were analyzed with bioinformatics programs (MEGAN, MG-RAST and WEBCarma). Taxonomic comparative profiles of the samples showed that the phyla Proteobacteria, Actinobacteria, Acidobacteria and Planctomycetes were the most representative. In addition, fungi of the phylum Ascomycota were identified predominantly in the soil sample from the Atlantic Forest. Metabolic profiles showed that despite the existence of environmental differences, sequences from both samples were similarly distributed across the various functional subsystems, indicating no habitat-specific functions. This work, a pioneering taxonomic and metabolic comparative analysis of soil samples from Brazilian biomes, contributes to the knowledge of these complex environmental systems, so far little explored.
Abstract:
The poor heating efficiency of most reported magnetic nanoparticles (MNPs), combined with the lack of comprehensive biocompatibility and haemodynamic studies, hampers the adoption of multifunctional nanoparticles as the next generation of therapeutic bio-agents in medicine. The present work reports the synthesis and characterization, with special focus on biological/toxicological compatibility, of superparamagnetic nanoparticles with diameters around 18 nm, suitable for theranostic applications (i.e. simultaneous diagnosis and therapy of cancer). To gain more insight into the complex interaction between nanoparticles and the red blood cell (RBC) membrane, the deformability of human RBCs in contact with MNPs was assessed for the first time with a microfluidic extensional approach, and used as an indicator of haematological disorders in comparison with a conventional haematological test, i.e. haemolysis analysis. The microfluidic results highlight the potential of this microfluidic tool over traditional haemolysis analysis, by detecting small increments in the rigidity of the blood cells when traditional haemotoxicology analysis showed no significant alteration (haemolysis rates lower than 2%). The detected rigidity is predicted to be due to the wrapping of small MNPs by the bilayer membrane of the RBCs, which is directly related to MNP size, shape and composition. The proposed microfluidic tool adds a new dimension to the field of nanomedicine, allowing it to be applied as a high-sensitivity technique capable of bringing a better understanding of the biological impact of nanoparticles developed for clinical applications.
Abstract:
Across the international educational landscape, numerous higher education institutions (HEIs) offer postgraduate programmes in occupational health psychology (OHP). These seek to empower the next generation of OHP practitioners with the knowledge and skills necessary to advance the understanding and prevention of workplace illness and injury, improve working life and promote healthy work through the application of psychological principles and practices. Among the OHP curricula operated within these programmes there exists considerable variability in the topics addressed. This is due, inter alia, to the youthfulness of the discipline and the fact that the development of educational provision has been managed at the level of the HEI, where it has remained undirected by external forces such as the discipline's representative bodies. Such variability makes it difficult to discern the key characteristics of a curriculum, which is important for programme accreditation purposes, for the professional development and regulation of practitioners and, ultimately, for the long-term sustainability of the discipline. This chapter focuses on the imperative for, and development of, consensus surrounding OHP curriculum areas. It begins by examining the factors that are currently driving curriculum developments and explores some of the barriers to them. It then reviews the limited body of previous research that has attempted to discern key OHP curriculum areas. This provides a foundation upon which to describe a study conducted by the current authors that involved the elicitation of subject matter expert opinion from an international sample of academics involved in OHP-related teaching and research on the question of which topic areas might be considered important for inclusion within an OHP curriculum. The chapter closes by drawing conclusions on steps that could be taken by the discipline's representative bodies towards the consolidation and accreditation of a core curriculum.
Abstract:
Nowadays, American television series are an essential part of popular culture, to the point that several audiovisual translations coexist within the French-speaking world. In addition to the dubbing that allows their broadcast on television, they may be subtitled up to three times, in chronological order: by fans on the Internet; in Quebec, for DVD sales in North America; and in France, for DVD sales in Europe. Yet, although these three sets of subtitles are subject to the same linguistic constraints (those of the French language) and technical constraints (broadcast on the small screen), they differ in their treatment of the original dialogue. We first establish the practices at work among professionals and amateurs. Subsequently, the analysis of the translations, together with the use of a comparable corpus of French and Quebec television series, makes it possible to establish the linguistic norms (notably with regard to variety) and cultural norms applied by the different translators and, secondarily, to define what lies behind the label "Canadian French". This thesis falls within the framework of descriptive and sociological studies. We describe the professional reality of audiovisual translators and the influence that fansubbers exert not only on professional practice, but also on new methods for training the next generation of translators. Furthermore, by studying several translations of the same work, we show that the varieties of French cannot, by themselves, justify the multiplication of subtitling offerings, given the low rate of purely linguistic differences.
Abstract:
Good schools are essential for building thriving urban areas. They are important for preparing the future human resource and directly contribute to the social and economic development of a place. They not only act as magnets for prospective residents, but are also necessary for retaining the current population. As public infrastructure, schools mirror their neighborhood. "Their location, design and physical condition are important determinants of neighborhood quality, regional growth and change, and quality of life."[2] They impact housing development and utility requirements, among other things. Hence, planning for schools along with other infrastructure in an area is essential. Schools are very challenging to plan, especially in urbanizing areas with changing demographic dynamics, where the development market and housing development can shift drastically a number of times. In such places, projecting future school enrollments is very difficult, and in the case of a large population influx, school development can be unable to catch up with population growth, which results in overcrowding. The case of Arlington County, VA is typical. In the past two decades the County has changed dramatically from a collection of bedroom communities in the Washington, DC Metro Region to a thriving urban area. Its Metro-accessible urban corridors are among the most desired locations for development in the region. However, converting single-family neighborhoods into high-density areas has put a lot of pressure on its school facilities and has resulted in overcrowded schools. Its public school enrollment grew by 19% from 2009 to 2014.[3] While the percentage of the population under 5 years of age has increased in the last 10 years, the 5-19 age group has decreased.[4] Hence, there is more pressure on the elementary school facilities than on others in the County. Design-wise, elementary schools, due to their size, can be imagined as a community component. There are a number of strategies that can be used to develop elementary schools in urbanizing areas as part of the neighborhood. Experimenting with space planning and building on partnership and mixed-use opportunities can help produce better designs for new schools in the future. This thesis is an attempt to develop elementary school models for urbanizing areas of Arlington County. The school models will be designed keeping in mind the shifting nature of the population and the resulting student enrollments in these areas. They will also aim to be efficient and sustainable, and lead to next-generation design for elementary school education. The overall purpose of the project is to address barriers to elementary school development in urbanizing areas through creative design and planning strategies. To test the above-mentioned ideas, the joint-use school typology of housing + school design has been identified for elementary school development in urbanizing areas in this thesis project. The development is based on the Arlington Public Schools' program guidelines (catering to 600 students). The site selected for this project is Clarendon West (part of the Red Top Cab Properties) in Clarendon, Arlington County, VA.
Abstract:
The magnesium (Mg) battery is considered a promising candidate for next-generation battery technology that could potentially replace the current lithium (Li)-ion batteries due to the following factors. Magnesium possesses a higher volumetric capacity than commercialized Li-ion battery anode materials. Additionally, the low cost and high abundance of Mg compared to Li make Mg batteries even more attractive. Moreover, unlike metallic Li anodes, which have a tendency to develop a dendritic structure on the surface upon cycling of the battery, Mg metal is known to be free of this hazardous phenomenon. Due to these merits of Mg as an anode, the topic of rechargeable Mg batteries has attracted considerable attention among researchers in the last few decades. However, the aforementioned advantages of Mg batteries have not been fully utilized due to the serious kinetic limitation of the Mg2+ diffusion process in many host compounds, which is believed to result from a strong electrostatic interaction between divalent Mg2+ ions and the host matrix. This serious kinetic hindrance is directly related to the lack of cathode materials for Mg batteries that provide electrochemical performance comparable to that of Li-based systems. Manganese oxide (MnO2) is one of the most well-studied electrode materials due to its excellent electrochemical properties, including high Li+ ion capacity and relatively high operating voltage (i.e., ~ 4 V vs. Li/Li+ for LiMn2O4 and ~ 3.2 V vs. Mg/Mg2+). However, unlike the good electrochemical properties of MnO2 realized in Li-based systems, rather poor electrochemical performance has been reported in Mg-based systems, particularly low capacity and poor cycling performance. While the observed poor performance is believed to be due to the aforementioned strong ionic interaction between Mg2+ ions and the MnO2 lattice, resulting in limited diffusion of Mg2+ ions in MnO2, very little has been explored regarding the charge storage mechanism of MnO2 with divalent Mg2+ ions. This dissertation investigates the charge storage mechanism of MnO2, focusing on the insertion behavior of divalent Mg2+ ions and exploring the origins of the limited Mg2+ insertion behavior in MnO2. It is found that the limited Mg2+ capacity in MnO2 can be significantly improved by introducing water molecules into the Mg electrolyte system, where the water molecules effectively mitigate the kinetic hindrance of the Mg2+ insertion process. The combination of a nanostructured MnO2 electrode and the water effect is synergistic, demonstrating further enhanced Mg2+ insertion capability. Furthermore, it is demonstrated in this study that pre-cycling MnO2 electrodes in water-containing electrolyte activates the MnO2 electrode, after which the improved Mg2+ capacity is maintained in dry Mg electrolyte. Based on a series of XPS analyses, a conversion mechanism is proposed in which magnesiated MnO2 undergoes a conversion reaction to Mg(OH)2, MnOx and Mn(OH)y species in the presence of water molecules. This conversion process is believed to be the driving force that generates the improved Mg2+ capacity in MnO2, along with the water molecules' charge-screening effect. Finally, it is discussed that upon consecutive cycling of MnO2 in the water-containing Mg electrolyte, structural water is generated within the MnO2 lattice, which is thought to be the origin of the observed activation phenomenon.
The results provided in this dissertation highlight that the divalency of Mg2+ ions results in very different electrochemical behavior towards MnO2 than that of the well-studied monovalent Li+ ions.
Abstract:
The atomic-level structure and chemistry of materials ultimately dictate their observed macroscopic properties and behavior. As such, an intimate understanding of these characteristics allows for better materials engineering and improvements in the resulting devices. In our work, two material systems were investigated using advanced electron and ion microscopy techniques, relating the measured nanoscale traits to overall device performance. First, transmission electron microscopy and electron energy loss spectroscopy (TEM-EELS) were used to analyze interfacial states at the semiconductor/oxide interface in wide-bandgap SiC microelectronics. This interface contains defects that significantly diminish SiC device performance, and their fundamental nature remains generally unresolved. The impacts of various microfabrication techniques were explored, examining both current commercial and next-generation processing strategies. In further investigations, machine learning techniques were applied to the EELS data, revealing previously hidden Si, C, and O bonding states at the interface, which help explain the origins of mobility enhancement in SiC devices. Finally, the impacts of SiC bias temperature stressing on the interfacial region were explored. In the second system, focused ion beam/scanning electron microscopy (FIB/SEM) was used to reconstruct 3D models of solid oxide fuel cell (SOFC) cathodes. Since the specific degradation mechanisms of SOFC cathodes are poorly understood, FIB/SEM and TEM were used to analyze and quantify changes in the microstructure during performance degradation. Novel strategies for microstructure calculation from FIB-nanotomography data were developed and applied to LSM-YSZ and LSCF-GDC composite cathodes, aged with environmental contaminants to promote degradation. In LSM-YSZ, migration of both La and Mn cations to the grain boundaries of YSZ was observed using TEM-EELS. Few substantial changes, however, were observed in the overall microstructure of the cells, correlating with a lack of performance degradation induced by the H2O. Using similar strategies, a series of LSCF-GDC cathodes was analyzed, aged in H2O, CO2, and Cr-vapor environments. FIB/SEM observation revealed considerable formation of secondary phases within these cathodes, and quantifiable modifications of the microstructure. In particular, Cr-poisoning was observed to cause substantial byproduct formation, which was correlated with drastic reductions in cell performance.
Abstract:
Part 11: Reference and Conceptual Models
Abstract:
Today, all services converge on a Next Generation Network (NGN). At the same time, quality of service (QoS) requirements driven by user demands are becoming stricter, which makes it necessary to define QoS procedures that guarantee effective transport of the most critical and real-time services, such as voice, while reducing problems of latency, jitter, packet loss and echo. Telecommunications operators must apply the regulations issued by the Communications Regulation Commission of Colombia (CRC) and comply with Recommendations Y.1540 and Y.1541 of the International Telecommunication Union (ITU). This document presents a procedure for applying QoS mechanisms in an NGN over xDSL access, in order to maintain a level of QoS for Voice over IP (VoIP) that allows it to be provisioned, with economic and technical efficiency, for the benefit of both the customer and the telecommunications operator.
Abstract:
We are often confronted with pessimistic views on the future of the family business (FB). Contrary to these prognoses, the FB is not only present but is also improving its position in the global economy and playing a key role in the European economy too. Family businesses represent 60% of employment and more than 60 million jobs in the private sector. Among the many internal challenges facing FBs over the next five years, the importance of 'company succession' is increasing, together with renewing technology and 'attracting the right skills/talents' (Global Family Survey, 2015). This article focuses on the transfer of socio-economic wealth (SEW) as a key intangible asset within the intergenerational changes in the FB. The paper outlines the various concepts (narrow vs. broad) of SEW, and special attention is paid to risk-prone (risk-taking) and risk-averse entrepreneurial attitudes. In this context, the authors distinguish between 'opportunity' and 'necessity' entrepreneurs. Using empirical experience based on multi-site company case studies in the three INSIST project countries, the various sub-sections focus on the transfer of the following key components of SEW to the next generation: a trust-based social system, generic human values (i.e. openness, mutual respect, correctness, reliability, responsibility, etc.) and 'practice-based, embedded collective knowledge'. The key lesson of this analysis is the following: transferring physical assets in the succession process seems to us less important than transferring the intangible assets embedded in the company's culture and community. Further systematic national and international investigations, combining quantitative and qualitative research tools, are necessary to acquire a more accurate picture of the impacts of transferring both intangible and tangible assets in the succession process in the FB.