992 results for bulk production
Abstract:
This work presents an assessment of the coprecipitation technique for the reliable production of high-temperature superconducting (HTS) copper-oxide powders in quantities scaled up to 1 kg. The process affords precise control of cation stoichiometry (< 4% relative), occurs rapidly (almost instantaneously) and can be developed for large-scale (e.g. tonne) manufacture of HTS materials. It is based upon simple control of the chemistry of the cation solution and precipitation with oxalic acid. The coprecipitation method is applicable to all copper oxides and has been demonstrated in this work in over thirty separate experiments for the following compositions: YBa2Cu3O7-δ, Y2BaCuO5 and YBa2Cu4O8. The precursor powders formed via this coprecipitation process are fine-grained (∼ 5-10 nm), chemically homogeneous at the nanometre scale and reactive; conversion to phase-pure HTS powders can therefore occur in minutes at appropriate firing temperatures. © 1995.
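The abstract's central claim is tight cation stoichiometry (< 4% relative) via oxalate coprecipitation. A minimal back-of-envelope sketch of the bookkeeping involved, assuming standard molar masses and the 1:2:3 Y:Ba:Cu target; the batch size, oxalic-acid excess and the 2% Ba-loss scenario are hypothetical illustrations, not values from the paper:

```python
# Illustrative stoichiometry check for a 1-2-3 oxalate coprecipitation.
# Molar masses and the 1:2:3 cation ratio are standard chemistry; the
# batch size and excess factor are hypothetical process choices.
ratio = {"Y": 1, "Ba": 2, "Cu": 3}           # YBa2Cu3O7-δ target cations

batch_mol = 1.5                              # total cation moles (hypothetical)
total = sum(ratio.values())
moles = {el: batch_mol * n / total for el, n in ratio.items()}

# One oxalate ion (C2O4^2-) balances two units of cation charge;
# Y(3+) therefore needs 1.5 oxalates, Ba(2+) and Cu(2+) one each.
charge = {"Y": 3, "Ba": 2, "Cu": 2}
oxalate_mol = sum(moles[el] * charge[el] / 2 for el in moles)
excess = 1.10                                # 10% excess, hypothetical
print(f"oxalic acid required: {oxalate_mol * excess:.3f} mol")

# Relative stoichiometry error if, say, 2% of the Ba stays in solution:
achieved = dict(moles)
achieved["Ba"] *= 0.98
err = abs(achieved["Ba"] / achieved["Y"] - 2) / 2 * 100
print(f"Ba:Y deviation: {err:.1f} % (target < 4 %)")
```

Even a 2% single-cation loss stays within the < 4% relative tolerance quoted above, which is why the solution-chemistry control step matters.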
Abstract:
Synthesis of metal borides is typically undertaken at high temperature using direct combination of elemental starting materials [1]. Techniques include carbothermal reduction using elemental carbon, metals, metal oxides and B2O3 [2], or reaction between metal chlorides and boron sources [3]. These reactions generally require temperatures greater than 1200 °C and are neither readily suitable for an industrial setting nor scalable to bulk production.
Abstract:
Free nanoparticles of iron (Fe) and their colloids with high saturation magnetization are in demand for medical and microfluidic applications. However, the oxide layer that forms during processing has made such synthesis a formidable challenge. Lowering the synthesis temperature decreases the rate of oxidation and hence provides a new way of producing, in bulk quantities, pure metallic nanoparticles that are otherwise prone to oxidation. In this paper we propose a methodology, designed around the thermodynamic imperatives of oxidation, to obtain almost oxygen-free iron nanoparticles, with or without organic capping, by controlled milling at low temperatures in a specially designed high-energy ball mill with the possibility of bulk production. The particles can be ultrasonicated to produce colloids and can be bio-capped to produce transparent solutions. The magnetic properties of these nanoparticles confirm their suitability for biomedical and other applications.
Abstract:
Nanotechnologies are rapidly expanding because of the opportunities the new materials offer in many areas, such as the manufacturing industry, food production, processing and preservation, and the pharmaceutical and cosmetic industries. The size distribution of nanoparticles determines their properties and is a fundamental parameter that needs to be monitored from small-scale synthesis up to bulk production and quality control of nanotech products on the market. As a consequence of the increasing number of applications of nanomaterials, EU regulatory authorities are introducing an obligation for companies that make use of nanomaterials to acquire analytical platforms for assessing the size parameters of those nanomaterials. In this work, Asymmetrical Flow Field-Flow Fractionation (AF4) and Hollow Fiber Flow Field-Flow Fractionation (HF5), hyphenated with Multiangle Light Scattering (MALS), are presented as tools for a deep functional characterization of nanoparticles. In particular, the applicability of AF4-MALS to the characterization of liposomes in a wide series of media is demonstrated. The technique is then used to explore the functional features of a liposomal drug vector in terms of its biological and physical interaction with blood serum components: a comprehensive approach to understanding the behavior of lipid vesicles in terms of drug release and fusion/interaction with other biological species is described, together with the weaknesses and strengths of the method. Next, the size characterization, size stability, and conjugation of azidothymidine drug molecules with a new generation of metastable drug vectors, the Metal Organic Frameworks, are discussed.
Lastly, the applicability of HF5-ICP-MS to the rapid screening of samples of relevant nanorisk is shown: rather than a deep and comprehensive characterization, a quick and smart methodology is presented that, within a few steps, provides qualitative information on the content of metallic nanoparticles in tattoo ink samples.
Abstract:
Distributed computing models typically assume that every process in the system has a distinct identifier (ID) or that each process is programmed differently; such a system is called eponymous. In this kind of distributed system, unique IDs help to solve problems: they can be incorporated into messages to make them traceable (i.e., to or from which process they are sent), facilitating message transmission; several problems (leader election, consensus, etc.) can be solved without prior knowledge of network properties if processes have unique IDs; messages in the register of one process will not be overwritten by other processes; and IDs are useful for breaking symmetry. Hence, eponymous systems have influenced the distributed computing community significantly, both in theory and in practice. However, unique IDs also have disadvantages: they can leak information about the network (e.g., its size); processes in the system have no privacy; and assigning unique IDs is costly in bulk production (e.g., of sensors). This motivates homonymous systems: a system in which some processes share the same ID and are programmed identically is called homonymous. Furthermore, a system in which all processes share the same ID, or have no ID at all, is called anonymous. In homonymous or anonymous distributed systems, the symmetry problem (i.e., how to determine which process sent a given message) is the main obstacle in the design of algorithms. This thesis proposes different symmetry-breaking methods (e.g., random functions, counting techniques, etc.) to solve agreement problems. Agreement is a fundamental problem in distributed computing comprising a family of abstractions. In this thesis, we mainly focus on the design of consensus, set agreement, and broadcast algorithms in anonymous and homonymous distributed systems.
Firstly, the fault-tolerant broadcast abstraction is studied in anonymous systems with reliable or fair-lossy communication channels, treated separately. Two classes of anonymous failure detectors, AΘ and AP∗, are proposed; both of them, together with the previously proposed failure detector ψ, are implemented and used to enrich the system model so as to implement the broadcast abstraction. Then, in the study of the consensus abstraction, it is proved that the failure detector class AΩ′ is strictly weaker than AΩ and that AΩ′ is implementable. The first implementation of consensus in anonymous asynchronous distributed systems augmented with AΩ′, and in which a majority of processes do not crash, is given. Finally, a generalization of consensus, k-set agreement, is studied together with the weakest failure detector L used to solve it, in asynchronous message-passing systems where processes may crash and recover, with homonyms (i.e., processes may have equal identities), and without complete initial knowledge of the membership.
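The "random function" symmetry-breaking idea mentioned above can be illustrated with a toy simulation: anonymous processes all run identical code, so the only source of asymmetry is randomness. This sketch assumes a synchronous, failure-free setting (a deliberate simplification; the thesis itself treats asynchrony and crashes) and uses process indices purely for simulation bookkeeping, not as IDs visible to the processes:

```python
import random

# Toy randomized symmetry breaking among anonymous processes: each round,
# every still-active process draws a random bit; if at least one process
# drew 1, only those processes stay active. Repeating shrinks the active
# set to a single survivor in O(log n) expected rounds.
def elect_leader(n, rng=random.Random(0)):
    active = list(range(n))   # indices are simulation bookkeeping only
    rounds = 0
    while len(active) > 1:
        rounds += 1
        draws = {p: rng.randint(0, 1) for p in active}
        winners = [p for p in active if draws[p] == 1]
        if winners:           # nonempty draw of 1s => shrink the active set
            active = winners
    return active[0], rounds

leader, rounds = elect_leader(8)
print(f"leader elected after {rounds} rounds")
```

Note that termination here is only probabilistic; deterministic leader election is impossible for fully anonymous systems, which is exactly why the thesis turns to failure detectors and counting techniques.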
Abstract:
The feasibility of ex vivo blood production is limited by both biological and engineering challenges. From an engineering perspective, these challenges include the significant volumes required to generate even a single unit of a blood product, as well as the correspondingly high protein consumption required for such large-volume cultures. Membrane bioreactors, such as hollow fiber bioreactors (HFBRs), enable cell densities approximately 100-fold greater than traditional culture systems and therefore may enable a significant reduction in culture working volumes. As cultured cells, and larger molecules, are retained within a fraction of the system volume via a semipermeable membrane, it may be possible to reduce protein consumption by limiting supplementation to only this fraction. Typically, HFBRs are complex perfusion systems having total volumes incompatible with bench-scale screening and optimization of stem cell-based cultures. In this article we describe the use of a simplified HFBR system to assess the feasibility of this technology to produce blood products from umbilical cord blood-derived CD34+ hematopoietic stem/progenitor cells (HSPCs). Unlike conventional HFBR systems used for protein manufacture, where cells are cultured in the extracapillary space, we have cultured cells in the intracapillary space, which is likely more compatible with the large-scale production of blood cell suspension cultures. Using this platform we direct HSPCs down the myeloid lineage, while targeting a 100-fold increase in cell density and the use of protein-free bulk medium. Our results demonstrate the potential of this system to deliver high cell densities, even in the absence of protein supplementation of the bulk medium.
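A quick sketch of why the 100-fold density increase and membrane-confined supplementation matter at blood-unit scale. All numbers below are hypothetical order-of-magnitude illustrations (cells per unit, baseline density, intracapillary fraction), not values reported in the article:

```python
# Back-of-envelope sketch of volume and protein savings in an HFBR.
# All parameter values are hypothetical illustrations.
cells_per_unit = 2e12                        # rough order for one blood unit
conventional_density = 2e6                   # cells/mL, typical suspension culture
hfbr_density = conventional_density * 100    # the 100-fold target above

v_conventional = cells_per_unit / conventional_density / 1000  # litres
v_hfbr = cells_per_unit / hfbr_density / 1000

# If protein supplementation is confined to the intracapillary fraction
# (assumed here to be 10% of system volume), protein use shrinks further:
ic_fraction = 0.10
protein_saving = v_conventional / (v_hfbr * ic_fraction)
print(f"{v_conventional:.0f} L -> {v_hfbr:.0f} L; ~{protein_saving:.0f}x less protein-supplemented volume")
```

Under these assumptions the working volume drops from the thousand-litre scale to tens of litres, and confining supplementation to the cell-side fraction multiplies the protein saving again.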
Abstract:
This paper details the implementation and trialling of a prototype in-bucket bulk density monitor on a production dragline. Bulk density information can provide feedback to mine planning and scheduling to improve blasting and consequently facilitate optimal bucket sizing. The bulk density measurement builds upon outcomes presented in the AMTC2009 paper titled 'Automatic In-Bucket Volume Estimation for Dragline Operations' and utilises payload information from a commercial dragline monitor. Whereas the previous paper explains the algorithms, the theoretical basis for the system design and the scaled model testing, this paper focuses on the full-scale implementation and the challenges involved.
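At its core, the in-bucket measurement combines the two signals named above: payload mass from the commercial dragline monitor and material volume from the in-bucket volume estimator. A minimal sketch, with hypothetical dragline-scale figures (the paper's actual sensing and filtering pipeline is far more involved):

```python
# Sketch of the in-bucket bulk-density estimate: density = payload mass
# divided by swept material volume. Payload comes from the dragline
# monitor, volume from the in-bucket estimator; values are hypothetical.
def bulk_density(payload_kg, volume_m3):
    if volume_m3 <= 0:
        raise ValueError("volume must be positive")
    return payload_kg / volume_m3

rho = bulk_density(payload_kg=52_000, volume_m3=30.5)  # plausible dragline scale
print(f"bulk density: {rho:.0f} kg/m^3")
```

Blast performance feedback then follows from trending this per-cycle estimate: well-fragmented, loosely packed material shows up as lower in-bucket bulk density.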
Abstract:
Infectious diseases such as SARS, influenza and bird flu have the potential to cause global pandemics; a key intervention will be vaccination. Hence, it is imperative to have in place the capacity to create vaccines against new diseases in the shortest time possible. In 2004, the Institute of Medicine asserted that the world is tottering on the verge of a colossal influenza outbreak, stating that an inadequate production system for influenza vaccines is a major obstruction to preparedness for influenza outbreaks. Because of production issues, the vaccine industry is facing financial and technological bottlenecks: in October 2004, the FDA was caught off guard by a shortage of flu vaccine, caused by contamination at a US-based plant (Chiron Corporation), one of only two suppliers of US flu vaccine. Due to difficulties in production and long processing times, the bulk of the world's vaccine production comes from a very small number of companies compared to the number of companies producing drugs. Conventional vaccines are made of attenuated or modified forms of viruses. Relatively high and continuous doses are administered when a non-viable vaccine is used, and the overall protective immunity obtained is ephemeral. Safety concerns over viral vaccines have propelled interest in creating a viable replacement that would be more effective and safer to use.
Abstract:
The work reported herein is part of an ongoing programme to develop a computer code which, given the geometrical, process and material parameters of the forging operation, is able to predict the die and billet cooling/heating characteristics in forging production. The code was experimentally validated earlier for a single forging cycle and is now validated for small-batch production. To facilitate a step-by-step development of the code, the billet deformation has so far been limited to its surface layers, a situation akin to coining. The code has been used here to study the effects of die preheat temperature, machine speed and rate of deformation on the cooling/heating of the billet and the dies over a small batch of 150 forgings. The study shows: that there is a preheat temperature at which the billet temperature changes little from one forging to the next; that beyond a particular number of forgings, the machine speed ceases to have any pronounced influence on the temperature characteristics of the billet; and that increasing the rate of deformation reduces the heat loss from the billet and gives the billet a stable temperature profile with respect to the number of forgings. The code, which is simple to use, is being extended to bulk-deformation problems. Given a practical range of possible machine, billet and process specifics, the code should be able to arrive at a combination of these parameters which will give the best thermal characteristics of the die-billet system. The code is also envisaged as being useful in the design of isothermal dies and processes.
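The batch stabilisation effect described above can be illustrated with a toy lumped-capacitance model: each forging transfers heat from billet to die, the die slowly warms over the batch, and the per-forging billet exit temperature converges. The contact model and every parameter value here are hypothetical simplifications, far cruder than the validated code the abstract describes:

```python
# Toy lumped-capacitance sketch of die/billet heat exchange over a small
# batch. All coefficients and temperatures are hypothetical.
def batch_temperatures(n_forgings=150, t_die=250.0, t_billet_in=1100.0,
                       k_contact=0.35, k_die_gain=0.04, t_ambient=25.0):
    billet_out = []
    for _ in range(n_forgings):
        t_b = t_billet_in - k_contact * (t_billet_in - t_die)  # billet loses heat to die
        t_die += k_die_gain * (t_b - t_die)                    # die warms a little each cycle
        t_die -= 0.01 * (t_die - t_ambient)                    # die cooling between strokes
        billet_out.append(t_b)
    return billet_out

temps = batch_temperatures()
print(f"first: {temps[0]:.0f} C, last: {temps[-1]:.0f} C")
```

Even this crude model reproduces the qualitative finding: early forgings lose much more heat than later ones, and the billet exit temperature approaches a stable value once the die reaches its quasi-steady temperature.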
Abstract:
Viscous modifications to the thermal distributions of quarks, antiquarks and gluons have been studied in a quasiparticle description of the quark-gluon-plasma medium created in relativistic heavy-ion collision experiments. The model is described in terms of quasipartons that encode the hot QCD medium effects in their respective effective fugacities. Both shear and bulk viscosities have been taken into account in the analysis, and the modifications to the thermal distributions have been obtained by modifying the energy-momentum tensor in view of the nontrivial dispersion relations for the gluons and quarks. The interactions encoded in the equation of state induce significant modifications to the thermal distributions. As an implication, the dilepton production rate in the qq̄ annihilation process has been investigated. The equation of state is found to have a significant impact on the dilepton production rate, along with the viscosities.
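For context, the leading-order viscous correction to a thermal distribution is commonly written in the Grad fourteen-moment form. Schematically (this is the generic textbook ansatz, not the paper's specific quasiparticle expressions, which further modify it through the effective-fugacity dispersion relations):

```latex
f(p) \;=\; f_{\mathrm{eq}}(p) \;+\; \delta f_{\mathrm{shear}} \;+\; \delta f_{\mathrm{bulk}},
\qquad
\delta f_{\mathrm{shear}} \;=\; f_{\mathrm{eq}}\,(1 \pm f_{\mathrm{eq}})\,
\frac{p^{\mu} p^{\nu}\, \pi_{\mu\nu}}{2\,(\varepsilon + P)\,T^{2}},
\qquad
\delta f_{\mathrm{bulk}} \;\propto\; f_{\mathrm{eq}}\,(1 \pm f_{\mathrm{eq}})\,\Pi,
```

where $\pi_{\mu\nu}$ is the shear stress tensor, $\Pi$ the bulk viscous pressure, $\varepsilon$ and $P$ the energy density and pressure, and the upper/lower sign applies to bosons/fermions; the bulk-sector coefficient is species- and scheme-dependent. The dilepton rate then follows from folding the corrected quark and antiquark distributions into the qq̄ annihilation kernel.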
Abstract:
Graphene is at the center of an ever-growing research effort due to its unique properties, interesting for both fundamental science and applications. A key requirement for applications is the development of industrial-scale, reliable, inexpensive production processes. Here we review the state of the art of graphene preparation, production, placement and handling. Graphene is just the first of a new class of two-dimensional materials, derived from layered bulk crystals. Most of the approaches used for graphene can be extended to these crystals, accelerating their journey towards applications. © 2012 Elsevier Ltd.
Abstract:
Methane hydrate was formed in a pressure vessel 38 mm in inner diameter and 500 mm in length. Experiments on gas production from the hydrate-bearing core by depressurization to 0.1, 0.93, and 1.93 MPa have been carried out. The hydrate reservoir simulator TOUGH-Fx/Hydrate was used to simulate the experimental gas production behavior, and the intrinsic hydrate dissociation constant (K₀) fitted to the experimental data was on the order of 10⁴ mol m⁻² Pa⁻¹ s⁻¹, one order of magnitude lower than that of bulk hydrate dissociation. Sensitivity analyses based on the simulator have been carried out, and the results suggested that the hydrate dissociation kinetics had a great effect on the gas production behavior of the laboratory-scale hydrate-bearing core. For a field-scale hydrate reservoir, however, the flow ability dominated the gas production behavior, and the effect of hydrate dissociation kinetics on gas production could be neglected.
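The intrinsic-kinetics term that K₀ enters is conventionally written in the Kim-Bishnoi form, rate = K₀ · exp(−Ea/RT) · A · (f_eq − f_g), driven by the fugacity difference between equilibrium and the depressurized gas phase. A minimal sketch of that rate law; all parameter values below are illustrative placeholders, not the paper's fitted numbers:

```python
import math

# Sketch of the Kim-Bishnoi intrinsic dissociation rate used by hydrate
# reservoir simulators. Parameter values are illustrative placeholders.
R = 8.314                      # J/(mol K), gas constant

def dissociation_rate(K0, Ea, T, area_m2, f_eq_Pa, f_gas_Pa):
    """Moles of CH4 released per second from the hydrate surface area."""
    k = K0 * math.exp(-Ea / (R * T))            # temperature-corrected constant
    return k * area_m2 * max(f_eq_Pa - f_gas_Pa, 0.0)  # no driving force, no rate

rate = dissociation_rate(K0=3.6e3,      # mol m^-2 Pa^-1 s^-1 (illustrative)
                         Ea=81e3,       # J/mol, literature-scale activation energy
                         T=275.0, area_m2=1.0,
                         f_eq_Pa=3.0e6, f_gas_Pa=0.93e6)
print(f"rate: {rate:.3g} mol/s")
```

The abstract's field-versus-laboratory conclusion maps directly onto this expression: at core scale the rate constant controls production, while at reservoir scale the fugacity driving force is throttled by flow ability long before the kinetics matter.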