948 results for Other Materials Science and Engineering


Relevance:

100.00%

Publisher:

Abstract:

Nanostructures are highly attractive for future electrical energy storage devices because they enable large surface area and short ion transport times through thin electrode layers for high-power devices. Significant enhancement in the power density of batteries has been achieved by nano-engineered structures, particularly anode and cathode nanostructures spatially separated by a porous membrane and/or a defined electrolyte region. A self-aligned nanostructured battery fully confined within a single nanopore presents a powerful platform to determine the rate performance and cyclability limits of nanostructured storage devices. Atomic layer deposition (ALD) has enabled us to create and evaluate such structures, comprised of nanotubular electrodes and electrolyte confined within anodic aluminum oxide (AAO) nanopores. The V2O5-V2O5 symmetric nanopore battery displays exceptional power-energy performance and cyclability when tested as a massively parallel device (~2 billion/cm²), each cell with ~1 μm³ volume (~1 fL). Cycled between 0.2 V and 1.8 V, this full cell has capacity retention of 95% at a 5C rate and 46% at 150C, over more than 1000 charge/discharge cycles. These results demonstrate the promise of ultrasmall, self-aligned, regular, densely packed nanobattery structures as a testbed to study ionics and electrodics at the nanoscale with various geometrical modifications, and as a building block for high-performance energy storage systems [1, 2]. A further increase of the full cell output potential is also demonstrated in asymmetric full cell configurations with various low-voltage anode materials. The asymmetric full cell nanopore batteries comprise V2O5 as cathode and prelithiated SnO2 or anatase-phase TiO2 as anode, with integrated nanotubular metal current collectors, also enabled by ALD, underneath each nanotubular storage electrode. By controlling the amount of lithium prelithiated into the SnO2 anode, we can tune the full cell output voltage in the range of 0.3 V to 3 V.
This asymmetric nanopore battery array displays exceptional rate performance and cyclability. When cycled between 1 V and 3 V, it has capacity retention of approximately 73% at a 200C rate compared to 1C, with only 2% capacity loss after more than 500 charge/discharge cycles. With the increased full cell output potential, the asymmetric V2O5-SnO2 nanopore battery shows significantly improved energy and power density. This configuration presents a more realistic test, through its asymmetric (vs. symmetric) configuration, of performance and cyclability in a nanoconfined environment. This dissertation covers (1) ultrasmall electrochemical storage platform design and fabrication, (2) electron and ion transport in nanostructured electrodes in a half cell configuration, (3) ion transport between anode and cathode in confined nanochannels in symmetric full cells, and (4) scaling up energy and power density with geometry optimization and low-voltage anode materials in asymmetric full cell configurations. As a supplement, selective ALD growth to improve graphene conductance is also discussed [3].
References:
1. Liu, C., et al. (Invited) A Rational Design for Batteries at Nanoscale by Atomic Layer Deposition. ECS Transactions, 2015. 69(7): p. 23-30.
2. Liu, C.Y., et al. An all-in-one nanopore battery array. Nature Nanotechnology, 2014. 9(12): p. 1031-1039.
3. Liu, C., et al. Improving Graphene Conductivity through Selective Atomic Layer Deposition. ECS Transactions, 2015. 69(7): p. 133-138.
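For scale, the C-rate figures above translate directly into cycle times: an nC rate means a full charge or discharge in 1/n hours. A small sketch (illustrative only, restating the retention numbers quoted above):

```python
def charge_time_seconds(c_rate):
    """An nC rate corresponds to a full charge/discharge in 1/n hours."""
    return 3600.0 / c_rate

# Capacity retention quoted above for the symmetric V2O5-V2O5 cell.
retention = {5: 0.95, 150: 0.46}

for c_rate, frac in sorted(retention.items()):
    t = charge_time_seconds(c_rate)
    print(f"{c_rate}C: full cycle in {t:.0f} s, {frac:.0%} capacity retained")
# 5C: full cycle in 720 s, 95% capacity retained
# 150C: full cycle in 24 s, 46% capacity retained
```

Retaining 46% of capacity while cycling in 24 seconds is what makes the rate performance notable.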

Relevance:

100.00%

Publisher:

Abstract:

We have employed identical location transmission electron microscopy (IL-TEM) to study changes in the shape and morphology of faceted Pt nanoparticles as a result of electrochemical cycling, a procedure typically employed for activating platinum surfaces. We find that the shape and morphology of the as-prepared hexagonal nanoparticles are rapidly degraded as a result of potential cycling up to +1.3 V. As few as 25 potential cycles are sufficient to cause significant degradation, and after about 500-1000 cycles the particles are dramatically degraded. We also see clear evidence of particle migration during potential cycling. These findings suggest that great care must be exercised in the use and study of shaped Pt nanoparticles (and related systems) as electrocatalysts, especially for the oxygen reduction reaction, where high positive potentials are typically employed.

Relevance:

100.00%

Publisher:

Abstract:

As nuclear energy systems become more advanced, their constituent materials need to perform at higher temperatures for longer periods of time. In this Master's thesis we experiment with a recently developed oxide dispersion strengthened (ODS) austenitic steel. ODS materials have a small concentration of nano-oxide particles dispersed in their matrix and typically have higher strength and better creep resistance at extreme temperatures than ordinary steels. However, no ODS material has ever been installed in a commercial power reactor to date. Because these are newer research materials, many unanswered questions remain regarding their performance under irradiation. Furthermore, because ODS materials traditionally follow a powder metallurgy fabrication route, many processing parameters need to be optimized before a nuclear-grade material specification can be achieved. In this Master's thesis we explore the development of a novel ODS processing technology, conducted in Beijing, China, to produce solutionized bulk ODS samples with ~97% theoretical density. This is done using relatively low temperatures and ultra-high-pressure (UHP) equipment to compact the mechanically alloyed (MA) steel powder into bulk samples without any thermal phase-change influence or oxide precipitation. With solutionized bulk ODS samples, nano-oxide precipitation within the steel can be studied by transmission electron microscopy (TEM) after applying post heat treatments. These types of samples will be very useful to the science and engineering community in answering questions regarding material powder compaction, oxide synthesis, and performance. Subsequent analysis performed at Queen's University included X-ray diffraction (XRD) and inductively coupled plasma optical emission spectrometry (ICP-OES).
Additional TEM in-situ 1 MeV Kr2+ irradiation experiments, coupled with energy dispersive X-ray (EDX) techniques, were also performed on large (200 nm+) non-stoichiometric oxides embedded within the austenite steel grains, in an attempt to quantify elemental compositional changes during high-temperature (520 °C) heavy ion irradiation.

Relevance:

100.00%

Publisher:

Abstract:

Zeolites, as crystalline microporous materials, have demonstrated their potential and versatility in a very large number of applications. The unique properties of zeolites have driven researchers to constantly find new uses for them in order to make the most of these extraordinary materials. Modifying the characteristics of conventional zeolites, or combining them synergistically with other materials, are two viable approaches to finding new applications. In this doctoral work, both approaches were used separately: first, in the morphological modification of ZSM-12, and second, in the formation of core/shell materials (mesoporous silica@silicalite-1). ZSM-12 is a high-silica zeolite that has recently attracted much attention for its superior performance in adsorption and catalysis. In order to synthesize ZSM-12 with high purity and controlled morphology, the crystallization of zeolite ZSM-12 was studied in detail as a function of the available chemical reagents (structure-directing agent, silicon types, and aluminum source) and of the reaction parameters (alkalinity; ratios between Na, Al, and water). The results presented in this study showed that, in contrast to the organic structure-directing agent TEAOH, using another structure-directing agent, MTEAOH, together with Al(o-i-Pr)3, allowed the formation of monodisperse ZSM-12 single crystals in a shorter time. Alkalinity and Na content also play decisive roles in these syntheses. Core/shell structures with a polycrystalline silicalite-1 zeolite shell surrounding a core formed by a mesoporous silica microsphere (particle sizes of 1.5, 3, and 20-45 μm) were synthesized either in pure form or loaded with metallic guest species.
Zeolite nucleation techniques on the core were used to grow the shell reliably and form these materials. It is the quality of the final products, in terms of pore-network connectivity and shell integrity, that makes stereoselectivity possible. This was studied by varying the synthesis parameters, for example in pretreatments including surface modification, nucleation, calcination, and the number of secondary hydrothermal crystallization steps. Depending on the size of the mesoporous core and the incorporated guest species, the nucleation efficiency proved to be influenced by the chosen surface-modification technique. Indeed, mesoporous silica microspheres containing metallic species require an additional chemical functionalization treatment on their external surface with precursors such as (3-aminopropyl)triethoxysilane (APTES), rather than a surface modification with ionic polymers. We also showed that, depending on the core size, two to four rapid hydrothermal treatments are needed to fully envelop the core without any aggregation and without dissolving the core. Such materials with a crystalline molecular-sieve shell can be used in a wide variety of applications, in particular for adsorption and stereoselective catalysis. This type of material was studied in a series of experiments on the selective adsorption of glycerol from crude biodiesel with different compositions and at different temperatures.
The results obtained were compared with those using conventional adsorbents, for example mesoporous silica gel spheres and conventional zeolites (silicalite-1, Si-BEA, and ZSM-5(H+)) in crystal form, as well as a physical mixture of these reference materials, namely silicalite-1 mixed with silica gel spheres. Although the mesoporous silica gel spheres showed a slightly higher glycerol adsorption capacity, the study revealed that mesoporous adsorbents tend to trap a significant amount of bulkier molecules, such as fatty acid methyl esters (FAME), in their extensive pore network. In the hierarchically porous adsorbent, however, the thin microporous silicalite-1 zeolite layer acts as a membrane preventing FAME molecules from diffusing into the mesopores that make up the core of the composite adsorbent, while the mesopore volume of the core allows multilayer adsorption of glycerol. Ultimately, this feature of the core/shell material appreciably improved performance in terms of purification yield and adsorption capacity compared with other conventional adsorbents, including mesoporous silica gel and zeolites.

Relevance:

100.00%

Publisher:

Abstract:

This book publishes results of advanced applications of computational modeling and simulation of the dynamics of different flows and of heat and mass transfer in various fields of science and engineering.

Relevance:

100.00%

Publisher:

Abstract:

This paper investigates the use of iPads in the assessment of predominantly second-year Bachelor of Education (Primary/Early Childhood) pre-service teachers undertaking a physical education and health unit. Within this unit, practical assessment tasks are graded by tutors in a variety of indoor and outdoor settings. The main barriers to effective assessment in these contexts for the lecturer or tutor include limited time to assess and provide explicit feedback to large numbers of students, complex assessment procedures, overwhelming record-keeping, and assessing students without distracting from the performance being presented. The purpose of this pilot study was to investigate whether incorporating mobile technologies such as iPads to access online rubrics within the Blackboard environment would enhance and simplify the assessment process. The findings indicate that using iPads to access online rubrics was successful in streamlining the assessment process because it provided pre-service teachers with immediate and explicit feedback. In addition, tutors experienced a reduction in the time required for the same workload by using quicker forms of feedback via the iPad dictation function. These outcomes have implications and potential for mobile paperless assessment in other disciplines such as health, environmental science, and engineering.

Relevance:

100.00%

Publisher:

Abstract:

The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code, where programming languages are treated like natural languages and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis mainly focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension, and it is usually performed directly by developers. However, at large scales, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches), and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were found by manual inspection or by using automatic static and dynamic analyzers. Now, this task can be automated using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the commonest bugs and errors at different code granularity levels (file and method levels). The exploited data and model architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, while differences and similarities with respect to other related works are discussed.
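As a toy illustration of the text-based side of programming language identification (the thesis's actual models, including the image-based ones, are far richer; the keyword sets below are assumptions for demonstration only):

```python
# Toy keyword-overlap identifier; the signature sets are illustrative
# assumptions, not the feature sets used in the thesis.
SIGNATURES = {
    "python": {"def", "import", "self", "elif", "lambda"},
    "c": {"#include", "int", "void", "printf", "malloc"},
    "java": {"public", "class", "static", "void", "extends"},
}

def identify(source: str) -> str:
    """Return the language whose keyword set overlaps the snippet most."""
    tokens = source.split()
    scores = {
        lang: sum(tok in keywords for tok in tokens)
        for lang, keywords in SIGNATURES.items()
    }
    # Highest keyword overlap wins; ties resolve arbitrarily.
    return max(scores, key=scores.get)

print(identify("def main(): import sys"))  # python
```

A real classifier would learn token weights from labeled corpora rather than use hand-picked keywords, but the shape of the decision (a score per candidate language, then an argmax) is the same.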

Relevance:

100.00%

Publisher:

Abstract:

The multi-faceted evolution of network technologies ranges from big data centers to specialized network infrastructures and protocols for mission-critical operations. For instance, technologies such as Software Defined Networking (SDN) revolutionized the world of static network configuration, removing the distributed and proprietary configuration of switched networks by centralizing the control plane. While this disruptive approach is interesting from different points of view, it can introduce new, unforeseen vulnerability classes. One topic of particular interest in recent years is industrial network security, an interest which started to rise in 2016 with the introduction of the Industry 4.0 (I4.0) movement. Networks that were essentially isolated by design are now connected to the internet to collect, archive, and analyze data. While this approach gained a lot of momentum due to its predictive maintenance capabilities, these network technologies can be exploited in various ways from a cybersecurity perspective. Some of these technologies lack security measures and can introduce new families of vulnerabilities. On the other hand, these networks can be used to enable accurate monitoring, formal verification, or defenses that were not practical before. This thesis explores these two fields: by introducing monitoring, protection, and detection mechanisms where the new network technologies make them feasible, and by demonstrating attacks in practical scenarios related to emerging network infrastructures that are not sufficiently protected. The goal of this thesis is to highlight this lack of protection in terms of attacks on, and possible defenses enabled by, emerging technologies. We pursue this goal by analyzing the aforementioned technologies and by presenting three years of contributions to this field. In conclusion, we recapitulate the research questions and give answers to them.

Relevance:

100.00%

Publisher:

Abstract:

In the framework of industrial problems, Constrained Optimization is known to have very good modeling capability and performance overall, and it stands as one of the most powerful, explored, and exploited tools for addressing prescriptive tasks. The number of applications is huge, ranging from logistics to transportation, packing, production, telecommunications, scheduling, and much more. The main reason behind this success is the remarkable effort put in over the last decades by the OR community to develop realistic models and devise exact or approximate methods to solve the largest variety of constrained or combinatorial optimization problems, together with the spread of computational power and easily accessible OR software and resources. On the other hand, technological advancements have led to an unprecedented wealth of data and increasingly push towards methods able to extract useful knowledge from it; among data-driven methods, Machine Learning techniques appear to be among the most promising, thanks to their successes in domains like image recognition, natural language processing, and game playing, as well as the amount of research involved. The purpose of the present research is to study how Machine Learning and Constrained Optimization can be used together to achieve systems able to leverage the strengths of both methods: this would open the way to exploiting decades of research on resolution techniques for COPs while constructing models able to adapt and learn from available data. In the first part of this work, we survey the existing techniques and classify them according to the type, method, or scope of the integration; subsequently, we introduce Moving Target, a novel and general algorithm devised to inject knowledge into learning models through constraints. In the last part of the thesis, two applications stemming from real-world projects, done in collaboration with Optit, are presented.
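As a generic sketch of injecting constraints into a learning model (a plain penalty method, not the Moving Target algorithm introduced in the thesis), consider fitting y = w*x by gradient descent while penalizing violations of an assumed domain constraint w <= 0.5:

```python
# Penalty-method sketch: least-squares fit of y = w * x under an assumed
# domain constraint w <= 0.5. Illustrative only; this is not the thesis's
# Moving Target algorithm.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 2.1, 2.9, 4.2]  # roughly y = x, so the unconstrained fit has w ~ 1

def loss_grad(w, penalty):
    # Gradient of the mean squared error ...
    g = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    # ... plus gradient of penalty * max(0, w - 0.5)^2
    if w > 0.5:
        g += penalty * 2 * (w - 0.5)
    return g

w = 0.0
for _ in range(2000):
    w -= 0.001 * loss_grad(w, penalty=100.0)

print(round(w, 2))  # settles just above the 0.5 boundary, ~0.54
```

Raising the penalty weight pushes the solution closer to the feasible region at the cost of fit quality; the integrations surveyed in the thesis manage this trade-off in more principled ways.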

Relevance:

100.00%

Publisher:

Abstract:

Knowledge graphs and ontologies are closely related concepts in the field of knowledge representation. In recent years, knowledge graphs have gained increasing popularity and serve as essential components in many knowledge engineering projects that view them as crucial to their success. The conceptual foundation of a knowledge graph is provided by ontologies. Ontology modeling is an iterative engineering process that consists of steps such as the elicitation and formalization of requirements and the development, testing, refactoring, and release of the ontology. Testing the ontology is a crucial, and occasionally overlooked, step of the process due to the lack of integrated tools to support it. As a result of this gap in the state of the art, ontology testing is carried out manually, which requires a considerable amount of time and effort from ontology engineers. The lack of tool support is noticeable in the requirement elicitation process as well. In this respect, the rise in the adoption and accessibility of knowledge graphs allows for the development and use of automated tools to assist with the elicitation of requirements from such a complementary source of data. Therefore, this doctoral research is focused on developing methods and tools that support the requirement elicitation and testing steps of an ontology engineering process. To support ontology testing, we have developed XDTesting, a web application integrated with the GitHub platform that serves as an ontology testing manager. Concurrently, to support the elicitation and documentation of competency questions, we have defined and implemented RevOnt, a method to extract competency questions from knowledge graphs. Both methods are evaluated through their implementation, and the results are promising.

Relevance:

100.00%

Publisher:

Abstract:

We report the observation at the Relativistic Heavy Ion Collider of the suppression of back-to-back correlations in the direct photon+jet channel in Au+Au relative to p+p collisions. Two-particle correlations of direct photon triggers with associated hadrons are obtained by statistical subtraction of the decay photon-hadron (gamma-h) background. The initial momentum of the away-side parton is tightly constrained, because the parton-photon pair exactly balance in momentum at leading order in perturbative quantum chromodynamics, making such correlations a powerful probe of in-medium parton energy loss. The away-side nuclear suppression factor, I_AA, in central Au+Au collisions is 0.32 +/- 0.12 (stat) +/- 0.09 (syst) for hadrons of 3 < p_T^h < 5 GeV/c in coincidence with photons of 5 < p_T^gamma < 15 GeV/c. The suppression is comparable to that observed for high-p_T single hadrons and dihadrons. The direct photon associated yields in p+p collisions scale approximately with the momentum balance, z_T ≡ p_T^h/p_T^gamma, as expected for a measurement of the away-side parton fragmentation function. We compare to Au+Au collisions, for which the momentum-balance dependence of the nuclear modification should be sensitive to the path-length dependence of parton energy loss.
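For orientation, the kinematic cuts quoted above fix the accessible range of the momentum balance (the hadron-to-photon transverse momentum ratio); a quick check of that range:

```python
# Transverse momentum cuts quoted above (GeV/c).
pt_hadron = (3.0, 5.0)   # associated hadrons
pt_photon = (5.0, 15.0)  # direct photon triggers

# Momentum balance = p_T(hadron) / p_T(photon);
# extremes come from pairing the min/max of each range.
zt_min = pt_hadron[0] / pt_photon[1]  # softest hadron, hardest photon
zt_max = pt_hadron[1] / pt_photon[0]  # hardest hadron, softest photon

print(f"momentum balance range: {zt_min:.2f} to {zt_max:.2f}")
```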

Relevance:

100.00%

Publisher:

Abstract:

The PHENIX experiment presents results from the RHIC 2006 run with polarized p+p collisions at √s = 62.4 GeV, for inclusive π⁰ production at midrapidity. Unpolarized cross section results are measured for transverse momenta p_T = 0.5 to 7 GeV/c. Next-to-leading-order perturbative quantum chromodynamics calculations are compared with the data, and while the calculations are consistent with the measurements, next-to-leading-logarithmic corrections improve the agreement. Double helicity asymmetries A_LL are presented for p_T = 1 to 4 GeV/c and probe the higher range of Bjorken x of the gluon (x_g) with better statistical precision than our previous measurements at √s = 200 GeV. These measurements are sensitive to the gluon polarization in the proton for 0.06 < x_g < 0.4.

Relevance:

100.00%

Publisher:

Abstract:

In ultraperipheral relativistic heavy-ion collisions, a photon from the electromagnetic field of one nucleus can fluctuate to a quark-antiquark pair and scatter from the other nucleus, emerging as a ρ⁰. The ρ⁰ production occurs at two well-separated nuclei (median impact parameters of 20 and 40 fm for the cases considered here), so the system forms a two-source interferometer. At low transverse momenta, the two amplitudes interfere destructively, suppressing ρ⁰ production. Since the ρ⁰ decays before the production amplitudes from the two sources can overlap, the two-pion system can only be described with an entangled nonlocal wave function and is thus an example of the Einstein-Podolsky-Rosen paradox. We observe this suppression in 200 GeV per nucleon-pair gold-gold collisions. The interference is 87% +/- 5% (stat.) +/- 8% (syst.) of the expected level. This translates into a limit on decoherence due to wave function collapse or other factors of 23% at the 90% confidence level.

Relevance:

100.00%

Publisher:

Abstract:

A study on the use of artificial intelligence (AI) techniques for the modelling and subsequent control of an electric resistance spot welding (ERSW) process is presented. The ERSW process is characterized by the coupling of thermal, electrical, mechanical, and metallurgical phenomena. For this reason, early attempts to model it with established computational methods, such as finite differences, finite elements, and finite volumes, required simplifications that either took the resulting model far from reality or made it too computationally expensive for use in a real-time control system. In this context, the authors have developed an ERSW controller that uses fuzzy logic to adjust the energy transferred to the weld nugget. The proposed control strategies differ in the speed with which they reach convergence. Moreover, their application to quality control of spot welds through artificial neural networks (ANN) is discussed.
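A minimal sketch of the fuzzy-logic idea behind such a controller (the membership functions and rule base below are assumptions for illustration, not the authors' actual ERSW controller): a normalized nugget temperature error is mapped to an energy adjustment via triangular memberships and weighted-average defuzzification.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def energy_adjustment(error):
    """error < 0: nugget too hot; error > 0: too cold (normalized to [-1, 1])."""
    # Assumed rule base: negative error -> reduce energy, zero -> hold,
    # positive -> increase. Output singletons: -1.0, 0.0, +1.0.
    rules = [
        (tri(error, -2.0, -1.0, 0.0), -1.0),  # "too hot"   -> reduce
        (tri(error, -1.0,  0.0, 1.0),  0.0),  # "on target" -> hold
        (tri(error,  0.0,  1.0, 2.0), +1.0),  # "too cold"  -> increase
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(energy_adjustment(0.5))  # 0.5
```

Here energy_adjustment(0.5) blends the "hold" and "increase" actions equally; in a real controller the output would scale the energy delivered during the next welding interval.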

Relevance:

100.00%

Publisher:

Abstract:

The thermoelastic properties of ferropericlase Mg(1-x)Fe(x)O (x = 0.1875) throughout the iron high-to-low spin crossover have been investigated by first principles at Earth's lower mantle conditions. This crossover has important consequences for elasticity, such as an anomalous bulk modulus (K_S) reduction. At room temperature the anomaly is somewhat sharp in pressure but broadens with increasing temperature. Along a typical geotherm it occurs across most of the lower mantle, with a more significant K_S reduction at approximately 1,400-1,600 km depth. This anomaly might also cause a reduction in the effective activation energy for diffusion creep and lead to a viscosity minimum in the mid-lower mantle, in apparent agreement with results from inversion of data related to mantle convection and postglacial rebound.