Abstract:
Cdc48/p97 is an essential, highly abundant hexameric member of the AAA (ATPase associated with various cellular activities) family. It has been linked to a variety of processes throughout the cell, but it is best known for its role in the ubiquitin proteasome pathway. In this system it is believed that Cdc48 behaves as a segregase, transducing the chemical energy of ATP hydrolysis into mechanical force to separate ubiquitin-conjugated proteins from their tightly-bound partners.
Current models posit that Cdc48 is linked to its substrates through a variety of adaptor proteins, including a family of seven proteins (13 in humans) that contain a Cdc48-binding UBX domain. Given the complexity of the adaptor network for which it serves as the hub, Cdc48/p97 has the potential to exert a profound influence on the ubiquitin proteasome pathway. However, the number of known substrates of Cdc48/p97 remains relatively small, and smaller still is the number of substrates that have been linked to a specific UBX domain protein. The goal of this dissertation research has therefore been to discover new substrates and better understand the functions of the Cdc48 network. With this objective in mind, we established a proteomic screen to assemble a catalog of candidate substrates/targets of the Ubx adaptor system.
Here we describe the implementation and optimization of a cutting-edge quantitative mass spectrometry method to measure relative changes in the Saccharomyces cerevisiae proteome. Utilizing this technology, and in order to better understand the breadth of function of Cdc48 and its adaptors, we then performed a global screen to identify accumulating ubiquitin conjugates in cdc48-3 and ubxΔ mutants. In this screen different ubx mutants exhibited reproducible patterns of conjugate accumulation that differed greatly from each other, pointing to various unexpected functional specializations of the individual Ubx proteins.
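In practice, calling hits from such a screen comes down to simple replicate statistics; the sketch below (our illustration only; the thresholds, data layout, and function names are hypothetical, not the thesis pipeline) flags proteins whose conjugate signal reproducibly accumulates in a mutant relative to wild type:

```python
import numpy as np

def accumulating_conjugates(wt, mut, min_log2fc=1.0, max_cv=0.3):
    """Flag candidate substrates from quantitative MS intensities.

    wt, mut:    dicts mapping protein -> list of replicate intensities
    min_log2fc: minimum log2 fold change (1.0 = 2-fold accumulation)
    max_cv:     maximum coefficient of variation across mutant replicates
    """
    hits = {}
    for prot in wt:
        w = np.array(wt[prot], dtype=float)
        m = np.array(mut[prot], dtype=float)
        log2fc = np.log2(m.mean() / w.mean())
        cv = m.std(ddof=1) / m.mean()      # reproducibility check
        if log2fc >= min_log2fc and cv <= max_cv:
            hits[prot] = log2fc
    return hits
```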
As validation of our mass spectrometry findings, we then examined in detail the endoplasmic reticulum-bound transcription factor Spt23, which we identified as a putative Ubx2 substrate. In these studies ubx2Δ cells were deficient in processing of Spt23 to its active p90 form, and in localizing p90 to the nucleus. Additionally, consistent with reduced processing of Spt23, ubx2Δ cells demonstrated a defect in expression of the Spt23 target gene OLE1, which encodes a fatty acid desaturase. Overall, this work demonstrates the power of proteomics as a tool to identify new targets of various pathways and reveals Ubx2 as a key regulator of lipid membrane biosynthesis.
Abstract:
The termite hindgut microbial ecosystem functions like a miniature lignocellulose-metabolizing natural bioreactor, has significant implications for nutrient cycling in the terrestrial environment, and represents an array of microbial metabolic diversity. Deciphering the intricacies of this microbial community, to obtain as complete a picture as possible of how it functions as a whole, requires a combination of traditional and cutting-edge bioinformatic, molecular, physiological, and culturing approaches. Isolates from this ecosystem, including Treponema primitia str. ZAS-1 and ZAS-2 as well as T. azotonutricium str. ZAS-9, have been significant resources for better understanding the termite system. While not all functions predicted by the genomes of these three isolates have been demonstrated in vitro, the isolates do have the capacity for several metabolisms unique to spirochetes and critical to the termite system’s reliance upon lignocellulose. In this thesis, work culturing, enriching for, and isolating diverse microorganisms from the termite hindgut is discussed. Additionally, strategies by which members of the termite hindgut microbial community defend against O2 stress and generate acetate, the “biofuel” of the termite system, are proposed. In particular, catechol 2,3-dioxygenase and other meta-cleavage catabolic pathway genes are described in the “anaerobic” termite hindgut spirochetes T. primitia str. ZAS-1 and ZAS-2, and the first evidence for aromatic ring cleavage in the phylum (division) Spirochetes is presented. These results suggest that the potential for O2-dependent, yet nonrespiratory, metabolisms of plant-derived aromatics should be re-evaluated in termite hindgut communities. Potential future work is also outlined.
Resumo:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments in which subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to establish theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypothesis with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
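To make the selection rule concrete, here is a minimal, noiseless sketch of EC2-style test selection (our illustration, not code from the thesis; BROAD additionally handles noisy responses and uses accelerated greedy evaluation). Hypotheses that imply the same decision form equivalence classes; edges connect hypotheses in different classes, weighted by the product of their priors, and the greedy rule picks the test expected to cut the most edge weight:

```python
import itertools

def ec2_gain(test, priors, predict, klass):
    """Expected weight of equivalence-class edges cut by `test` (noiseless).

    priors:  {hypothesis: probability}
    predict: {hypothesis: {test: predicted choice}}
    klass:   {hypothesis: decision class} -- hypotheses in the same class
             need not be distinguished from one another
    """
    gain = 0.0
    for outcome in {predict[h][test] for h in priors}:
        p_out = sum(p for h, p in priors.items() if predict[h][test] == outcome)
        # an edge (a, b) is cut when this outcome rules out either endpoint
        cut = sum(priors[a] * priors[b]
                  for a, b in itertools.combinations(priors, 2)
                  if klass[a] != klass[b]
                  and (predict[a][test] != outcome or predict[b][test] != outcome))
        gain += p_out * cut
    return gain

def next_test(tests, priors, predict, klass):
    """One round of adaptive design: run the highest-gain test next."""
    return max(tests, key=lambda t: ec2_gain(t, priors, predict, klass))

def update(priors, test, outcome, predict):
    """Posterior update: eliminate inconsistent hypotheses, renormalize."""
    post = {h: p for h, p in priors.items() if predict[h][test] == outcome}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}
```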
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem and rule out strategic manipulation, both because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time exhibits positive dependence, it gives rise to hyperbolic discounting and temporal choice inconsistency.
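One minimal illustration of how a subjective clock can generate hyperbolic discounting (our worked example, with perfect positive dependence assumed for tractability; the thesis result is more general): let discounting be exponential in subjective time, $D(t) = \mathbb{E}\!\left[e^{-r\,\tau(t)}\right]$, and let the clock run at an uncertain but persistent rate, $\tau(t) = Rt$ with $R \sim \mathrm{Gamma}(k, \theta)$ (shape $k$, scale $\theta$). The Gamma moment-generating function then gives

$$D(t) = \mathbb{E}\!\left[e^{-rRt}\right] = (1 + r\theta t)^{-k},$$

the generalized-hyperbolic discount function. The implied instantaneous discount rate $-D'(t)/D(t) = rk\theta/(1 + r\theta t)$ declines with the horizon $t$, which is precisely the pattern that produces preference reversals, i.e. temporal choice inconsistency.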
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity; even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion, and strategies for competitive pricing.
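The shape of such a reference-dependent utility is easy to sketch (an illustrative specification, not the exact one estimated in the thesis; parameter values such as λ = 2.25 are conventional prospect-theory defaults used here as placeholders):

```python
import numpy as np

def loss_averse_utility(price, ref_price, quality,
                        beta_q=1.0, beta_p=-1.0, eta=0.5, lam=2.25):
    """Utility with gain/loss terms relative to a reference price.
    lam > 1 encodes loss aversion: losses loom larger than equal gains."""
    gain = np.maximum(ref_price - price, 0.0)   # price below reference
    loss = np.maximum(price - ref_price, 0.0)   # price above reference
    return beta_q * quality + beta_p * price + eta * (gain - lam * loss)

def logit_choice_probs(prices, ref_prices, qualities):
    """Multinomial-logit choice probabilities over the offered items."""
    v = loss_averse_utility(np.asarray(prices), np.asarray(ref_prices),
                            np.asarray(qualities))
    expv = np.exp(v - v.max())                  # subtract max for stability
    return expv / expv.sum()
```

Under this specification, ending a discount (raising the price back above the now-lower reference) imposes a loss term on the focal item and shifts predicted demand toward its substitute, the excess-substitution pattern described above.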
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
The Little Sea is an 80-acre, shallow freshwater lake formed about a hundred years ago by sand-dunes cutting off a sea-inlet at Studland Bay, near Swanage. This work presents a general survey of the phytoplankton in the lake from October 1990 to December 1993. Many species were present throughout the year; others showed seasonal variations. Numerically, the diatoms, Monoraphidium and sometimes Rhodomonas, were the main constituents of the phytoplankton. One species of alga in the lake of particular interest is Chrysosphaerella longispina Lauterb. which, up to 1991, had only been recorded from five localities in Britain.
Abstract:
Design of a cogeneration installation based on a natural gas engine for a heat- and surface-treatment company. To meet the plant's energy needs, the electrical power is supplied by an alternator coupled to the engine while, in turn, the enthalpy of the engine's exhaust gases is recovered to produce the steam required for the company's industrial activity. In addition, the heat that must be dissipated from the engine is recovered to heat mains water, which is used to wash cutting fluid off the treated parts.
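The heat-recovery arithmetic behind this layout is straightforward; the following sketch uses placeholder figures of our own (exhaust flow, temperatures, and boiler pressure are not given in the abstract) to show how recovered exhaust heat translates into steam production:

```python
# Illustrative energy balance for the waste-heat boiler (all values assumed).
cp_exhaust = 1.1             # kJ/(kg*K), typical specific heat of exhaust gas
m_exhaust = 2.5              # kg/s, assumed exhaust mass flow
t_in, t_out = 450.0, 160.0   # degC, exhaust in / stack out (assumed)
h_fg = 2100.0                # kJ/kg, approx. latent heat at boiler pressure

q_recovered = m_exhaust * cp_exhaust * (t_in - t_out)  # kW of recovered heat
m_steam = q_recovered / h_fg                           # kg/s of steam raised

print(f"Recovered heat: {q_recovered:.0f} kW -> steam: {m_steam*3600:.0f} kg/h")
```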
Abstract:
This short paper records some measurements made on the Little Sea, a shallow, coastal, acidic lake on Studland Heath, Dorset. The lake, formed about 100 years ago by dunes cutting off a sea inlet, has not received any input of agricultural fertilizers or other waste products for at least the last 30 years. It is a Site of Special Scientific Interest (SSSI). Samples of surface water were taken from the northern and southern ends of the lake at 3-monthly intervals, from July 1995 to April 1996. The first samples in July 1995 were taken during a period of drought; rain, sometimes very heavy, came in late September. With the exception of silicate, potassium and phosphate, there were no large changes in plant nutrient concentrations during the year. The concentration of nitrate-nitrogen was very low (close to the limits of analytical detection), but total phosphorus at ca. 30 µg per litre was similar to concentrations found in some of the Cumbrian eutrophic lakes. The large number of algal species at low cell/colony concentrations suggested that the lake is mesotrophic. Sodium, chloride and magnesium in the lake water were present in close to the same proportions as those found in sea water. Dry and wet deposition of sea-salts on the lake surface and its catchment area is probably the major source of sodium, magnesium and chloride ions in the lake, and also accounts for about half of the mean potassium and sulphate concentrations.
Abstract:
The objectives of this work are to analyze and optimize the hard turning of ASP-23 steel, looking in particular at different solutions for broaches. The project arises from the importance of reducing both the economic and the time costs of manufacturing ASP-23 steel components by hard turning, a machining process of ever-growing importance in industries such as automotive and aeronautics. It stems from the need of EKIN S. Coop, a leader in high-precision machine-tool processes for broaching, to develop a more efficient machining process for the broaches it produces. To this end, work in the machine-tool laboratory (ETSIB) sought to demonstrate the benefits of hard turning in the machining of ASP-23. Today, with the rapid development of new materials, manufacturing processes are becoming ever more complex, owing to the wide variety of machines on which processes are carried out, the variety of tool geometries and materials, the properties of the workpiece material, the wide range of cutting parameters with which the process can be implemented (depth of cut, speed, feed...), and the diversity of clamping elements used. We must also be aware that such variety implies large deformations, speeds, and temperatures. Herein lie the justification for, and the great interest of, this project. We therefore attempt to take a small step forward in understanding the hard turning of steels with low machinability, conscious of the breadth and difficulty of progress in manufacturing engineering and of how much work remains to be done.
Abstract:
Despite years of research on low-angle detachments, much about them remains enigmatic. This thesis addresses some of the uncertainty regarding two particular detachments, the Mormon Peak detachment in Nevada and the Heart Mountain detachment in Wyoming and Montana.
Constraints on the geometry and kinematics of emplacement of the Mormon Peak detachment are provided by detailed geologic mapping of the Meadow Valley Mountains, along with an analysis of structural data within the allochthon in the Mormon Mountains. Identifiable structures well suited to constrain the kinematics of the detachment include a newly mapped, Sevier-age monoclinal flexure in the hanging wall of the detachment. This flexure, including the syncline at its base and the anticline at its top, can be readily matched to the base and top of the frontal Sevier thrust ramp, which is exposed in the footwall of the detachment to the east in the Mormon Mountains and Tule Springs Hills. The ~12 km of offset of these structural markers precludes the radial sliding hypothesis for emplacement of the allochthon.
The role of fluids in slip along faults is a widely investigated topic, but the use of carbonate clumped-isotope thermometry to investigate these fluids is new. Fault rocks from within ~1 m of the Mormon Peak detachment, including veins, breccias, gouges, and host rocks, were analyzed for carbon, oxygen, and clumped-isotope compositions. The data indicate that much of the carbonate breccia and gouge material along the detachment is comminuted host rock, as expected. Measurements in vein material indicate that the fluid system is dominated by meteoric water, whose temperature indicates circulation to substantial depths (c. 4 km) in the upper crust near the fault zone.
Slip along the subhorizontal Heart Mountain detachment is particularly enigmatic, and many different mechanisms for failure have been proposed, predominantly involving catastrophic failure. Textural evidence of multiple slip events is abundant, and includes multiple brecciation events and cross-cutting clastic dikes. Footwall deformation is observed in numerous exposures of the detachment. Stylolitic surfaces and alteration textures within and around “banded grains”, previously interpreted to be an indicator of high-temperature fluidization along the fault, suggest their formation instead via low-temperature dissolution and alteration processes. There is abundant textural evidence of the significant role of fluids along the detachment via pressure solution. The process of pressure solution creep may be responsible for enabling multiple slip events on the low-angle detachment, via a local rotation of the stress field.
Clumped-isotope thermometry of fault rocks associated with the Heart Mountain detachment indicates that despite its location on the flanks of a volcano that was active during slip, the majority of carbonate along the Heart Mountain detachment does not record significant heating above ambient temperatures (c. 40-70°C). Instead, cold meteoric fluids infiltrated the detachment breccia, and carbonate precipitated under ambient temperatures controlled by structural depth. Locally, fault gouge does preserve hot temperatures (>200°C), as is observed in both the Mormon Peak detachment and Heart Mountain detachment areas. Samples with very hot temperatures attributable to frictional shear heating are present but rare. They appear to be best preserved in hanging wall structures related to the detachment, rather than along the main detachment.
Evidence is presented for the prevalence of relatively cold, meteoric fluids along both shallow crustal detachments studied, and for protracted histories of slip along both detachments. Frictional heating is evident from both areas, but is a minor component of the preserved fault rock record. Pressure solution is evident, and might play a role in initiating slip on the Heart Mountain fault, and possibly other low-angle detachments.
Abstract:
With continuing advances in CMOS technology, feature sizes of modern Silicon chip-sets have gone down drastically over the past decade. In addition to desktops and laptop processors, a vast majority of these chips are also being deployed in mobile communication devices like smart-phones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC, wireless charging, etc. While a small feature size enables higher integration levels leading to billions of transistors co-existing on a single chip, it also makes these Silicon ICs more susceptible to variations. A part of these variations can be attributed to the manufacturing process itself, particularly due to the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF or millimeter-wave communication chip-sets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high performance RF/mm-wave Silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be primarily attributed to the fact that most cutting edge processes are geared towards digital system implementation and as such there is little model-to-hardware correlation at RF frequencies.
All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error Silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique that counters the detrimental effects of these variations, thereby improving both performance and yield of chips post-fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate the system back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured across a variety of operating conditions.
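At its core, such a sense-and-actuate loop can be summarized as a greedy search over actuator settings (a schematic sketch under our own assumptions; the dissertation's on-chip algorithm, sensors, and actuators are more sophisticated, and `sense`/`actuate` here are hypothetical callbacks standing in for on-chip hardware):

```python
import random

def self_heal(sense, actuate, knobs, iters=100):
    """Greedy healing loop: perturb one actuator code at a time and keep
    changes that improve the sensed figure of merit (e.g. output power).

    sense():       returns the current figure of merit from on-chip sensors
    actuate(cfg):  applies a dict of actuator codes (e.g. bias DAC settings)
    knobs:         {knob_name: max_code} describing the actuator ranges
    """
    best = {k: knobs[k] // 2 for k in knobs}      # start mid-range
    actuate(best)
    best_fom = sense()
    for _ in range(iters):
        trial = dict(best)
        k = random.choice(list(knobs))
        trial[k] = min(max(trial[k] + random.choice((-1, 1)), 0), knobs[k])
        actuate(trial)
        fom = sense()
        if fom > best_fom:        # keep the perturbation only if it helps
            best, best_fom = trial, fom
        else:
            actuate(best)         # revert to the best-known configuration
    return best, best_fom
```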
We demonstrate a high-power mm-wave segmented power-mixer-array transmitter architecture that is capable of generating high-speed, non-constant-envelope modulations at higher efficiencies than existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully-integrated self-healing in the context of another mm-wave power amplifier, where measurements performed across several chips show significant improvements in performance as well as reduced variability in the presence of process variations, load impedance mismatch, and even catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase synthesis scheme is demonstrated in conjunction with a wide-band voltage-controlled oscillator to generate phase-shifted local oscillator (LO) signals for a phased-array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.
Abstract:
With the advent of the laser in the year 1960, the field of optics experienced a renaissance from what was considered to be a dull, solved subject to an active area of development, with applications and discoveries which are yet to be exhausted 55 years later. Light is now nearly ubiquitous not only in cutting-edge research in physics, chemistry, and biology, but also in modern technology and infrastructure. One quality of light, that of the imparted radiation pressure force upon reflection from an object, has attracted intense interest from researchers seeking to precisely monitor and control the motional degrees of freedom of an object using light. These optomechanical interactions have inspired myriad proposals, ranging from quantum memories and transducers in quantum information networks to precision metrology of classical forces. Alongside advances in micro- and nano-fabrication, the burgeoning field of optomechanics has yielded a class of highly engineered systems designed to produce strong interactions between light and motion.
Optomechanical crystals are one such system in which the patterning of periodic holes in thin dielectric films traps both light and sound waves to a micro-scale volume. These devices feature strong radiation pressure coupling between high-quality optical cavity modes and internal nanomechanical resonances. Whether for applications in the quantum or classical domain, the utility of optomechanical crystals hinges on the degree to which light radiating from the device, having interacted with mechanical motion, can be collected and detected in an experimental apparatus consisting of conventional optical components such as lenses and optical fibers. While several efficient methods of optical coupling exist to meet this task, most are unsuitable for the cryogenic or vacuum integration required for many applications. The first portion of this dissertation will detail the development of robust and efficient methods of optically coupling optomechanical resonators to optical fibers, with an emphasis on fabrication processes and optical characterization.
I will then proceed to describe a few experiments enabled by the fiber couplers. The first studies the performance of an optomechanical resonator as a precise sensor for continuous position measurement. The sensitivity of the measurement, limited by the detection efficiency of intracavity photons, is compared to the standard quantum limit imposed by the quantum properties of the laser probe light. The added noise of the measurement is seen to fall within a factor of 3 of the standard quantum limit, representing an order of magnitude improvement over previous experiments utilizing optomechanical crystals, and matching the performance of similar measurements in the microwave domain.
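For orientation, the benchmark invoked here can be stated compactly (standard continuous-measurement results, quoted as context rather than taken from this thesis): in one common convention, the measurement imprecision $S_x^{\mathrm{imp}}$ and quantum backaction force noise $S_F^{\mathrm{BA}}$ obey

$$S_x^{\mathrm{imp}}\, S_F^{\mathrm{BA}} \;\ge\; \frac{\hbar^2}{4},$$

and the total added noise $S_x^{\mathrm{add}}(\omega) = S_x^{\mathrm{imp}} + |\chi_m(\omega)|^2 S_F^{\mathrm{BA}}$, with $\chi_m$ the mechanical susceptibility, is minimized on resonance at the standard quantum limit. "Within a factor of 3 of the standard quantum limit" thus means the added noise exceeds this minimum by at most a factor of 3 at the mechanical frequency.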
The next experiment uses single photon counting to detect individual phonon emission and absorption events within the nanomechanical oscillator. The scattering of laser light from mechanical motion produces correlated photon-phonon pairs, and detection of the emitted photon corresponds to an effective phonon counting scheme. In the process of scattering, the coherence properties of the mechanical oscillation are mapped onto the reflected light. Intensity interferometry of the reflected light then allows measurement of the temporal coherence of the acoustic field. These correlations are measured for a range of experimental conditions, including the optomechanical amplification of the mechanics into a self-oscillation regime, and comparisons are drawn to a laser system for phonons. Finally, prospects for using phonon counting and intensity interferometry to produce non-classical mechanical states are detailed, following recent proposals in the literature.
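The intensity-interferometry step reduces, in software, to estimating the normalized second-order correlation $g^{(2)}(\tau)$ from photon arrival times; a thermal field gives $g^{(2)}(0) = 2$ while a coherent (laser-like) field gives $g^{(2)}(0) = 1$. A minimal binned estimator might look as follows (our own sketch of the standard procedure, not code from the thesis):

```python
import numpy as np

def g2_from_timestamps(t_clicks, bin_width, max_lag):
    """Estimate g2(tau) from photon detection times via a coincidence histogram.

    t_clicks:  1D array of detection times (seconds), assumed sorted
    bin_width: histogram bin size (seconds), << coherence time of interest
    max_lag:   largest delay tau (seconds) at which to evaluate g2
    """
    edges = np.arange(0.0, t_clicks.max() + bin_width, bin_width)
    counts, _ = np.histogram(t_clicks, bins=edges)   # photons per time bin
    lags = np.arange(1, int(max_lag / bin_width))
    mean_sq = counts.mean() ** 2                     # normalization <n>^2
    g2 = np.array([(counts[:-k] * counts[k:]).mean() / mean_sq for k in lags])
    return lags * bin_width, g2
```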
Abstract:
The aim of this work is to analyze in vitro the stress distribution in human maxillary central incisors restored with feldspathic ceramic veneers, using finite element analysis, under functional loads of mastication and food incision, as a function of three preparation types: without incisal coverage; with angled incisal coverage; and with incisal coverage with a palatal chamfer. Two-dimensional models of a maxillary central incisor and its supporting structures were used, simulating three situations: (first model) maxillary central incisor with facial reduction (window preparation); (second model) maxillary central incisor with facial reduction and inclined-plane incisal coverage; (third model) maxillary central incisor with facial reduction and incisal coverage with a palatal chamfer. A concentrated load (P = 100 N) at a 45° inclination was considered, simulating the contact region between the mandibular and maxillary central incisors during mastication, together with a load at the edge-to-edge contact region of the upper and lower incisors, simulating food incision. From the analysis of the resulting stress distributions, it can be concluded that, regarding stress dissipation across the whole proposed system under the 45° load, no changes in the stress state were observed among the three preparations. When the vertical load was applied, simulating edge-to-edge contact, the stress state varied in the tooth with the window preparation. In the veneers, under the 45° load, the window and inclined-plane incisal coverage preparations yielded similar stress values, whereas in veneers on teeth prepared with incisal coverage and a palatal chamfer the distribution was more homogeneous, with higher values, showing that wrapping the tooth reduced flexion.
Abstract:
Topological superconductors are particularly interesting in light of the active ongoing experimental efforts for realizing exotic physics such as Majorana zero modes. These systems have excitations with non-Abelian exchange statistics, which provides a path towards topological quantum information processing. Intrinsic topological superconductors are quite rare in nature. However, one can engineer topological superconductivity by inducing effective p-wave pairing in materials which can be grown in the laboratory. One possibility is to induce the proximity effect in topological insulators; another is to use hybrid structures of superconductors and semiconductors.
The proposal of interfacing s-wave superconductors with quantum spin Hall systems provides a promising route to engineered topological superconductivity. Given the exciting recent progress on the fabrication side, identifying experiments that definitively expose the topological superconducting phase (and clearly distinguish it from a trivial state) becomes an increasingly important problem. With this goal in mind, we proposed a detection scheme to obtain an unambiguous signature of topological superconductivity, even in the presence of ordinarily detrimental effects such as thermal fluctuations and quasiparticle poisoning. We considered a Josephson junction built on top of a quantum spin Hall material. This system allows the proximity effect to turn the edge states into effective topological superconductors. Such a setup is promising because experimentalists have demonstrated that supercurrents indeed flow through quantum spin Hall edges. To demonstrate the topological nature of the superconducting quantum spin Hall edges, theorists have proposed examining the periodicity of Josephson currents with respect to the phase across a Josephson junction: the periodicity of the tunneling current in a topological superconductor Josephson junction is double that of a conventional junction. In practice, this modification of periodicity is extremely difficult to observe because noise sources, such as quasiparticle poisoning, wash out the signature of topological superconductivity. For this reason, we propose a new, relatively simple DC measurement that can compellingly reveal topological superconductivity in such quantum spin Hall/superconductor heterostructures. More specifically, we develop a general framework for capturing the junction's current-voltage characteristics as a function of applied magnetic flux. Our analysis reveals sharp signatures of topological superconductivity in the field-dependent critical current, including the presence of multiple critical currents and a non-vanishing critical current at all magnetic field strengths, which together provide a reliable identification scheme.
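The periodicity contrast at stake can be stated in one line (textbook expressions, quoted for orientation): a conventional junction carries a $2\pi$-periodic supercurrent,

$$I(\phi) = I_c \sin\phi,$$

whereas a topological junction hosting Majorana bound states supports a fermion-parity-protected contribution

$$I(\phi) = \pm\, I_M \sin(\phi/2),$$

which is $4\pi$-periodic in the phase difference $\phi$. Quasiparticle poisoning randomly flips the $\pm$ parity branch and restores an apparent $2\pi$ periodicity in time-averaged measurements, which is precisely why robust DC signatures in the flux-dependent critical current are so valuable.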
This system becomes more interesting when interactions between electrons are involved. By modeling the edge states as a Luttinger liquid, we find that conductance provides universal signatures to distinguish between normal and topological superconductors. More specifically, we use renormalization group methods to extract universal transport characteristics of superconductor/quantum spin Hall heterostructures where the native edge states serve as a lead. Interestingly, arbitrarily weak interactions induce qualitative changes in the behavior relative to the free-fermion limit, leading to a sharp dichotomy in conductance between the trivial (narrow superconductor) and topological (wide superconductor) cases. Furthermore, we find that strong interactions can in principle induce parafermion excitations at a superconductor/quantum spin Hall junction.
Having identified the existence of a topological superconductor, we can take a step further: one can use a topological superconductor to realize Majorana modes by breaking time-reversal symmetry. An advantage of 2D topological insulators is that the networks required for braiding Majoranas along the edge channels can be obtained by adjoining 2D topological insulators to form corner junctions. Physically cutting quantum wells for this purpose, however, presents technical challenges. For this reason, I propose a more accessible means of forming networks that relies on dynamically manipulating the location of edge states inside a single 2D topological insulator sheet. In particular, I show that edge states can effectively be dragged into the system's interior by gating a region near the edge into a metallic regime and then removing the resulting gapless carriers via proximity-induced superconductivity. This method allows one to construct rather general quasi-1D networks along which Majorana modes can be exchanged by electrostatic means.
Apart from 2D topological insulators, Majorana fermions can also be generated in other, more accessible materials such as semiconductors. Following up on a suggestion by the experimentalist Charlie Marcus, I proposed a novel geometry for creating Majorana fermions by placing a 2D electron gas in proximity to an interdigitated superconductor-ferromagnet structure. This architecture evades several manufacturing challenges by allowing single-side fabrication and by widening the class of 2D electron gases that may be used, such as the surface states of bulk semiconductors. Furthermore, it naturally allows one to trap and manipulate Majorana fermions through the application of currents. Thus, this structure may lead to the development of a circuit that enables fully electrical manipulation of topologically-protected quantum memory. To reveal these exotic Majorana zero modes, I also proposed an interference scheme for detecting Majorana fermions that is broadly applicable to any 2D topological superconductor platform.
Abstract:
The purpose of this study was to test a barrier against microbiological contamination on contact plates used in the monitoring of cleanrooms for the manufacture of sterile pharmaceutical products. During 2007, contact tests were carried out using this barrier, and the results were compared with data from 2004, 2005, and 2006, when the barrier was not used. The test environments were two cleanrooms of a pharmaceutical plant located in Rio de Janeiro. In these environments a special garment must be worn, to prevent particles from the operators' bodies, as well as bacteria and fungi, from migrating to the outer surface of the uniform and putting the sterility of the products at risk. Accordingly, it was proposed that a T-shirt be worn directly against the operator's skin throughout 2007, so as to prevent or reduce the possibility of migration of these particles; the results were compared with those of 2004, 2005, and 2006, when the T-shirt was not used. The tests showed a reduction of about 50% in the occurrence of contaminated plates. As for the total number of colonies formed, the reduction was 75% compared with 2004 and 2005, and 50% compared with 2006.