Abstract:
Antimicrobial peptides and proteins (AMPs) are widespread in the living kingdom. They are key effectors of defense reactions and mediators of competition between organisms. They are often cationic and amphiphilic, which favors their interactions with the anionic membranes of microorganisms. Several AMP families do not directly alter membrane integrity but rather target conserved components of bacterial membranes, a process that provides them with potent and specific antimicrobial activities. Thus, lipopolysaccharides (LPS), lipoteichoic acids (LTA) and the peptidoglycan precursor Lipid II are targeted by a broad series of AMPs. Studying the functional diversity of immune effectors tells us about the essential residues involved in the AMP mechanism of action. Marine invertebrates have been found to produce a remarkable diversity of AMPs. Molluscan defensins and crustacean anti-LPS factors (ALF) are diverse in terms of amino acid sequence and show contrasting phenotypes in terms of antimicrobial activity. Their activity is directed essentially against Gram-positive or Gram-negative bacteria owing to their specific interactions with Lipid II or Lipid A, respectively. Through these examples, we discuss here how the sequence diversity generated throughout evolution informs us about the residues required for essential molecular interactions at bacterial membranes and the subsequent antibacterial activity. Through the analysis of molecular variants that have lost antibacterial activity or acquired novel functions, we also discuss the molecular bases of functional divergence in AMPs.
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. Considering the spectral properties of a frame, we present several novel results relating to its spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. With linear objectives we can encourage sparse scalings, and with barrier objective functions we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to various frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an orthonormal basis (ONB), and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. We also discuss the differences from RPCA that make theoretical guarantees difficult.
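As an illustration of the scaling problem described above (a minimal sketch, not the dissertation's algorithm), the following Python snippet checks numerically whether a small frame is scalable by solving a nonnegative least-squares problem over the rank-one outer products of the frame vectors; the Mercedes-Benz-type frame used as input is purely illustrative.

    import numpy as np
    from scipy.optimize import nnls

    def scaling_weights(F):
        """Find w >= 0 with sum_i w_i f_i f_i^T ~ I for a d x N frame matrix F."""
        d, N = F.shape
        A = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(N)])
        w, residual = nnls(A, np.eye(d).ravel())
        return w, residual

    # Example: three unit vectors at 120 degrees in R^2 form a scalable (tight) frame.
    angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    F = np.vstack([np.cos(angles), np.sin(angles)])
    w, res = scaling_weights(F)
    print(w, res)   # equal weights 2/3, residual ~ 0 => scalable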
Abstract:
Quantum mechanics, optics and indeed any wave theory exhibit the phenomenon of interference. In this thesis we present two problems investigating interference due to indistinguishable alternatives, together with a mostly unrelated investigation into the free-space propagation speed of light pulses in particular spatial modes. In chapter 1 we introduce the basic properties of the electromagnetic field needed for the subsequent chapters. In chapter 2 we review the properties of interference using the beam splitter and the Mach-Zehnder interferometer. In particular, we review what happens when one of the paths of the interferometer is marked in some way so that a particle having traversed it carries information as to which path it went down (to be followed up in chapter 3), and we review Hong-Ou-Mandel interference at a beam splitter (to be followed up in chapter 5). In chapter 3 we present the first of the interference problems. This consists of a nested Mach-Zehnder interferometer in which each of the free-space propagation segments is weakly marked by mirrors vibrating at different frequencies [1]. The original experiment drew the conclusion that the photons followed disconnected paths. We partition the description of the light in the interferometer according to the number of paths it contains which-way information about, and reinterpret the results reported in [1] in terms of the interference of paths spatially connected from source to detector. In chapter 4 we briefly review optical angular momentum, entanglement and spontaneous parametric down-conversion. These concepts feed into chapter 5, in which we present the second of the interference problems, namely Hong-Ou-Mandel interference with particles possessing two degrees of freedom. We analyse the problem in terms of exchange symmetry for both boson and fermion pairs and show that the particle statistics at a beam splitter can be controlled for suitably chosen states. We propose an experimental test of these ideas using orbital-angular-momentum-entangled photons. In chapter 6 we look at the effect that the transverse spatial structure of the mode in which a pulse of light is excited has on its group velocity. We show that the resulting group velocity is slower than the vacuum speed of light for plane waves, and that this reduction in the group velocity is related to the spread in wave vectors required to create the transverse spatial structure. We present experimental results of the measurement of this slowing using Hong-Ou-Mandel interference.
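As a pointer to the exchange-symmetry argument summarised above, the short sketch below evaluates the textbook coincidence probability at a balanced beam splitter for two particles whose internal states (standing in for a second degree of freedom such as orbital angular momentum) have a given overlap; the formula and the state vectors are standard illustrations, not the thesis's calculation.

    import numpy as np

    def coincidence_probability(a, b, bosons=True):
        """Coincidence probability after a 50:50 beam splitter: (1 -/+ |<a|b>|^2) / 2."""
        a = np.asarray(a, dtype=complex); a = a / np.linalg.norm(a)
        b = np.asarray(b, dtype=complex); b = b / np.linalg.norm(b)
        overlap = abs(np.vdot(a, b)) ** 2
        return 0.5 * (1 - overlap) if bosons else 0.5 * (1 + overlap)

    same = [1.0, 0.0]            # identical internal states
    other = [0.0, 1.0]           # orthogonal, fully distinguishable states
    print(coincidence_probability(same, same))                # 0.0: Hong-Ou-Mandel dip
    print(coincidence_probability(same, other))               # 0.5: distinguishable limit
    print(coincidence_probability(same, same, bosons=False))  # 1.0: fermions anti-bunch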
Abstract:
The Santa Eulalia plutonic complex (SEPC) is a late-Variscan granitic body emplaced in the Ossa-Morena Zone. The host rocks of the complex belong to metamorphic formations of Proterozoic to Lower Paleozoic age. The SEPC is a ring massif (ca. 400 km2 in area) composed of two main granitic facies with different colours and textures. From the rim to the core, there are (i) a peripheral pink medium- to coarse-grained granite (G0 group) enclosing large elongated masses of mafic and intermediate rocks, from gabbros to granodiorites (M group), and (ii) a central gray medium-grained granite (G1 group). The mafic to intermediate rocks (M group) are metaluminous and show a wide range of compositions: 3.34–13.51 wt% MgO; 0.70–7.20 ppm Th; 0.84–1.06 (Eu/Eu*)N (Eu* calculated between Sm and Tb); 0.23–0.97 (Nb/Nb*)N (Nb* calculated between Th and La). Although they enclose the M-group bodies and form the outer ring, the G0 granites are the most differentiated magmatic rocks of the SEPC, with a transitional character between metaluminous and peraluminous: 0.00–0.62 wt% MgO; 15.00–56.00 ppm Th; 0.19–0.42 (Eu/Eu*)N; 0.08–0.19 (Nb/Nb*)N [1][2]. The G1 group is composed of monzonitic granites with a dominantly peraluminous character and represents the most homogeneous compositional group of the SEPC: 0.65–1.02 wt% MgO; 13.00–16.95 ppm Th; 0.57–0.70 (Eu/Eu*)N; 0.14–0.16 (Nb/Nb*)N. According to the SiO2 vs. (Na2O+K2O–CaO) relationships, the M and G1 groups fall predominantly in the calc-alkaline field, while the G0 group is essentially alkali-calcic; on the basis of the SiO2 vs. FeOt/(FeOt+MgO) correlation, the SEPC should be considered a magnesian plutonic association [3]. New geochronological data (U-Pb on zircons) slightly correct the age of the SEPC previously obtained by other methods (290 Ma, [4]). They provide ages of 306 ± 2 Ma for the M group, 305 ± 6 Ma for the G1 group, and 301 ± 4 Ma for the G0 group, which confirm the late-Variscan character of the SEPC while indicating a slightly older emplacement, during the Upper Carboniferous. Recent whole-rock isotopic data show that the Rb-Sr system suffered significant post-magmatic disturbance, but reveal a consistent set of Sm-Nd results valuable for constraining the magmatic sources of this massif: M group (−2.9 < εNdi < +1.8); G1 group (−5.8 < εNdi < −4.6); G0 group (−2.2 < εNdi < −0.8). These geochemical data suggest a petrogenetic model for the SEPC involving a magmatic event developed in two stages. Initially, magmas derived from long-term depleted mantle sources (εNdi < +1.8 in the M group) were extracted to the crust, promoting its partial melting and extensive mixing and/or AFC magmatic evolution, thereby generating the G1 granites (εNdi < −4.6). Subsequently, a later extraction of similar primary magmas in the same place or nearby could have caused partial melting of some intermediate facies (e.g. diorites) of the M group, followed by magmatic differentiation processes, mainly fractional crystallization, able to produce residual liquids compositionally close to the G0 granites (εNdi < −0.8). The kinetic energy associated with the structurally controlled (cauldron subsidence type?) motion of the G0 liquids towards the periphery would have been strong enough to drag up M-group blocks such as those occurring inside the G0 granitic ring.
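For readers unfamiliar with the normalised anomalies quoted above, the following sketch computes a chondrite-normalised Eu anomaly with Eu* interpolated between Sm and Tb; the chondrite values, the geometric interpolation convention and the sample concentrations are illustrative assumptions, not the normalisation or data used in the study.

    # Assumed chondrite concentrations (ppm); replace with the reference actually used.
    CHONDRITE_PPM = {"Sm": 0.148, "Eu": 0.056, "Tb": 0.036}

    def eu_anomaly(sample_ppm):
        """(Eu/Eu*)N with Eu* interpolated geometrically between normalised Sm and Tb
        (Eu sits one third of the way from Sm to Tb in the lanthanide series)."""
        sm_n = sample_ppm["Sm"] / CHONDRITE_PPM["Sm"]
        eu_n = sample_ppm["Eu"] / CHONDRITE_PPM["Eu"]
        tb_n = sample_ppm["Tb"] / CHONDRITE_PPM["Tb"]
        eu_star = (sm_n ** 2 * tb_n) ** (1.0 / 3.0)
        return eu_n / eu_star

    # Hypothetical whole-rock analysis (ppm):
    print(round(eu_anomaly({"Sm": 4.5, "Eu": 1.4, "Tb": 0.65}), 2))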
Abstract:
In this talk, we propose an all-regime Lagrange-Projection-like numerical scheme for the gas dynamics equations. By all regime, we mean that the numerical scheme is able to compute accurate approximate solutions with an under-resolved discretization with respect to the Mach number M, i.e. such that the ratio between the Mach number M and the mesh size or the time step is small with respect to 1. The key idea is to decouple the acoustic and transport phenomena and then alter the numerical flux in the acoustic approximation to obtain a uniform truncation error in terms of M. This modified scheme is conservative and endowed with good stability properties with respect to the positivity of the density and the internal energy. A discrete entropy inequality under a condition on the modification is obtained thanks to a reinterpretation of the modified scheme in the Harten, Lax and van Leer formalism. A natural extension to multi-dimensional problems discretized over unstructured meshes is proposed. A simple and efficient semi-implicit scheme is then also proposed. The resulting scheme is stable under a CFL condition driven by the (slow) material waves and not by the (fast) acoustic waves, and so satisfies the all-regime property. Numerical evidence is presented and shows the ability of the scheme to deal with tests where the flow regime may vary from low to high Mach values.
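To make concrete why a CFL condition driven by the material waves matters at low Mach number, the toy calculation below compares the admissible time steps under material and acoustic CFL constraints; the numbers are purely illustrative and the sketch is not part of the scheme itself.

    # A CFL constraint based on the material velocity u allows time steps roughly
    # 1/M larger than one based on u + c when the Mach number M = |u|/c is small.
    def dt_material(dx, u, cfl=0.5):
        return cfl * dx / abs(u)

    def dt_acoustic(dx, u, c, cfl=0.5):
        return cfl * dx / (abs(u) + c)

    dx, u, c = 1.0e-2, 1.0, 340.0        # assumed mesh size, velocity and sound speed (M ~ 3e-3)
    print(dt_material(dx, u))            # ~5.0e-3
    print(dt_acoustic(dx, u, c))         # ~1.5e-5, i.e. a ~340 times smaller step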
Abstract:
Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more chemically complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide 2-dimensional projection (shadow) images of the 3D structure, leaving the 3-dimensional information hidden, which can lead to incomplete or erroneous characterization. One very promising inspection method is Electron Tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometer resolution in all three dimensions of the sample under investigation. However, the fidelity of the tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality-assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method is proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, the sparsity is applied to overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulated and real ET experiments on several morphologies are performed with a variety of setups. The reconstruction results validate the method's efficiency in both noiseless and noisy cases and show that it yields an improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select or whether the images used strictly follow the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). This can also avoid artifacts that can be introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation).
Moreover, this thesis shows how reliable elementally sensitive tomography is possible with the aid of both appropriate use of dual electron energy-loss spectroscopy (DualEELS) and the DLET compressed-sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss electron energy-loss spectroscopy (EELS) from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
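The patch-based sparse-coding idea behind DLET can be illustrated with off-the-shelf tools: the sketch below learns a dictionary on overlapping patches of a 2D image and re-synthesises the image from sparse codes using scikit-learn. It is a minimal stand-in for the dictionary-learning and denoising half of the alternating procedure; the tomographic data-fidelity step is omitted, and the image, patch size and sparsity level are assumptions.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))                       # stand-in for one tomogram slice
    patches = extract_patches_2d(image, (8, 8)).reshape(-1, 64)

    # Learn an adaptive dictionary and sparse-code every patch with OMP.
    dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=4, random_state=0)
    codes = dico.fit(patches).transform(patches)
    approx = (codes @ dico.components_).reshape(-1, 8, 8)

    # Average the overlapping patch approximations back into an image.
    result = reconstruct_from_patches_2d(approx, image.shape)
    print(np.abs(result - image).mean())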
Abstract:
The sea surface temperature (SST) and chlorophyll-a concentration (CHL-a) were analysed in the Gulf of Tadjourah from two sets of 8-day composite satellite data, covering 2008 to 2012 and 2005 to 2011, respectively. A singular spectrum analysis (SSA) shows that the annual cycle of SST is strong (74.3% of the variance) and consists of warming (April-October) and cooling (November-March) of about 2.5 °C relative to the long-term average. The semi-annual cycle captures only 14.6% of the temperature variance and emphasises the drop in SST during July-August. Similarly, the annual cycle of CHL-a (29.7% of the variance) depicts high CHL-a from June to October and low concentrations from November to May. In addition, the first spatial empirical orthogonal function (EOF) of SST (93% of the variance) shows that the seasonal warming/cooling is in phase across the whole study area, with the southeastern part always remaining warmer or cooler. In contrast to the SST, the first EOF of CHL-a (54.1% of the variance) indicates that the continental shelf is in phase opposition with the offshore area in winter, during which CHL-a remains sequestered in the coastal area, particularly in the south-east and in the Ghoubet Al-Kharab Bay. Conversely, during summer higher CHL-a concentrations appear in the offshore waters. In order to investigate the processes generating these patterns, a multichannel spectrum analysis was applied to a set of oceanic (SST, CHL-a) and atmospheric parameters (wind speed, air temperature and air specific humidity). This analysis shows that the SST is well correlated with the atmospheric parameters at the annual scale. The windowed cross-correlation indicates that this correlation is significant only from October to May. During this period, the warming was related to solar heating of the surface water when the wind is low (April-May and October), while the cooling (November-March) was linked to the strong and cold north-east winds and to convective mixing. The summer drop in SST, followed by a peak of CHL-a, seems strongly correlated with upwelling. The second EOF modes of SST and CHL-a explain 1.3% and 5% of the variance, respectively, and show an east-west gradient during winter that is reversed during summer. This work showed that the seasonal signals have a wide spatial influence and dominate the variability of SST and CHL-a, while the east-west gradients are specific to the Gulf of Tadjourah and seem to be induced by the local wind modulated by the topography.
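As a pointer to the decomposition techniques mentioned above, the sketch below computes empirical orthogonal functions and their explained variance from a stack of gridded maps via a singular value decomposition of the anomaly matrix; the synthetic data are placeholders for the actual SST or CHL-a composites.

    import numpy as np

    def eof_decomposition(field):
        """field: array of shape (n_times, n_lat, n_lon), e.g. 8-day composites."""
        n_t = field.shape[0]
        X = field.reshape(n_t, -1)
        X = X - X.mean(axis=0)                      # anomalies about the time mean
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        variance_fraction = s ** 2 / np.sum(s ** 2)
        pcs = U * s                                 # principal-component time series
        eofs = Vt.reshape(-1, *field.shape[1:])     # spatial patterns
        return eofs, pcs, variance_fraction

    rng = np.random.default_rng(1)
    sst = rng.normal(size=(230, 40, 60))            # synthetic stand-in for the composites
    eofs, pcs, frac = eof_decomposition(sst)
    print(frac[:3])                                 # variance explained by the leading modes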
Abstract:
Image (video) retrieval is the problem of retrieving images (videos) similar to a query. Images (videos) are represented in an input (feature) space, and similar images (videos) are obtained by finding nearest neighbors in that representation space. Numerous input representations, in both real-valued and binary spaces, have been proposed for conducting faster retrieval. In this thesis, we present techniques that obtain improved input representations for retrieval in both supervised and unsupervised settings for images and videos. Supervised retrieval is the well-known problem of retrieving images of the same class as the query. In the first part, we address the practical aspects of achieving faster retrieval with binary codes as input representations for the supervised setting, where binary codes are used as addresses into hash tables. In practice, using binary codes as addresses does not guarantee fast retrieval, as similar images are not mapped to the same binary code (address). We address this problem by presenting an efficient supervised hashing (binary encoding) method that aims to explicitly map all the images of the same class, ideally, to a unique binary code. We refer to the binary codes of the images as 'semantic binary codes' and to the unique code for all same-class images as the 'class binary code'. We also propose a new class-based Hamming metric that dramatically reduces the retrieval times for larger databases, where Hamming distances are computed only to the class binary codes. We further propose a deep semantic binary code model, obtained by replacing the output layer of a popular convolutional neural network (AlexNet) with the class binary codes, and show that the hashing functions learned in this way outperform the state of the art while providing fast retrieval times. In the second part, we also address the problem of supervised retrieval by taking into account the relationships between classes. For a given query image, we want to retrieve images that preserve the relative order, i.e. we want to retrieve all same-class images first and then images of related classes before images of different classes. We learn such relationship-aware binary codes by minimizing the difference between the inner product of the binary codes and the similarity between the classes. We calculate the similarity between classes using output embedding vectors, which are vector representations of classes. Our method deviates from other supervised binary encoding schemes in that it is the first to use output embeddings for learning hashing functions. We also introduce new performance metrics that take into account the related-class retrieval results and show significant gains over the state of the art. High-dimensional descriptors such as Fisher Vectors or Vectors of Locally Aggregated Descriptors have been shown to improve the performance of many computer vision applications, including retrieval. In the third part, we discuss an unsupervised technique for compressing high-dimensional vectors into high-dimensional binary codes to reduce storage complexity. In this approach, we deviate from adopting traditional hyperplane hashing functions and instead learn hyperspherical hashing functions. The proposed method overcomes the computational challenges of directly applying the spherical hashing algorithm, which is intractable for compressing high-dimensional vectors.
A practical hierarchical model is presented that uses divide-and-conquer techniques based on the Random Select and Adjust (RSA) procedure to compress such high-dimensional vectors. We show that our proposed high-dimensional binary codes outperform the binary codes obtained using traditional hyperplane methods at higher compression ratios. In the last part of the thesis, we propose a retrieval-based solution to the zero-shot event classification problem, a setting where no training videos are available for the event. To do this, we learn a generic set of concept detectors and represent both videos and query events in the concept space. We then compute the similarity between the query event and the videos in the concept space, and videos similar to the query event are classified as belonging to the event. We show that we significantly boost the performance by using concept features from other modalities.
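As a minimal illustration of retrieval with binary codes (not the learned hashing functions described above), the sketch below ranks database items by Hamming distance to a query code and also shows the class-based variant in which distances are computed only to per-class codes; all codes here are random placeholders.

    import numpy as np

    def hamming(a, b):
        return np.count_nonzero(a != b, axis=-1)

    rng = np.random.default_rng(0)
    n_bits, n_classes, per_class = 32, 10, 100
    class_codes = rng.integers(0, 2, size=(n_classes, n_bits), dtype=np.uint8)

    # Database items inherit their class code with a few flipped bits (placeholder for hashing).
    labels = np.repeat(np.arange(n_classes), per_class)
    flip = rng.random((labels.size, n_bits)) < 0.05
    database = np.where(flip, 1 - class_codes[labels], class_codes[labels]).astype(np.uint8)

    query = database[0]

    # Standard retrieval: rank every item by Hamming distance to the query.
    ranking = np.argsort(hamming(database, query))

    # Class-based metric: compare the query only against the n_classes class codes.
    predicted_class = np.argmin(hamming(class_codes, query))
    print(labels[ranking[:5]], predicted_class)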
Abstract:
Gap junction coupling is ubiquitous in the brain, particularly between the dendritic trees of inhibitory interneurons. Such non-synaptic interaction allows for direct electrical communication between cells. Unlike spike-time-driven synaptic neural network models, which are event-based, any model with gap junctions must necessarily involve a single-neuron model that can represent the shape of an action potential. Indeed, not only do neurons communicating via gap junctions feel super-threshold spikes, but they also experience, and respond to, sub-threshold voltage signals. In this chapter we show that the so-called absolute integrate-and-fire model is ideally suited to such studies. At the single-neuron level, voltage traces for the model may be obtained in closed form and are shown to mimic those of fast-spiking inhibitory neurons. Interestingly, in the presence of a slow spike-adaptation current the model is shown to support periodic bursting oscillations. For both tonic and bursting modes the phase response curve can be calculated in closed form. At the network level we focus on global gap junction coupling and show how to analyze the asynchronous firing state in large networks. Importantly, we are able to determine the emergence of non-trivial network rhythms due to strong coupling instabilities. To illustrate the use of our theoretical techniques (particularly the phase-density formalism used to determine stability), we focus on a spike-adaptation-induced transition from asynchronous tonic activity to synchronous bursting in a gap-junction-coupled network.
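To give a concrete feel for a single-neuron model whose voltage trace contains a spike shape, here is a minimal Euler simulation of an absolute integrate-and-fire neuron, assuming the piecewise-linear form dv/dt = |v| + I with a hard reset; the form, thresholds and parameter values are illustrative assumptions rather than the chapter's formulation.

    import numpy as np

    def simulate_aif(I=0.1, v_th=1.0, v_reset=-1.0, dt=1e-3, t_end=20.0):
        """Euler integration of dv/dt = |v| + I with reset to v_reset at threshold v_th."""
        n = int(t_end / dt)
        v = np.empty(n)
        v[0] = v_reset
        for k in range(1, n):
            v[k] = v[k - 1] + dt * (abs(v[k - 1]) + I)
            if v[k] >= v_th:    # the |v| term gives an exponential upstroke, i.e. a spike shape
                v[k] = v_reset
        return v

    trace = simulate_aif()
    spikes = np.sum((trace[:-1] > 0.5) & (trace[1:] < 0))   # count reset events
    print(spikes, "spikes in 20 time units")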
Abstract:
Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithm on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than batch approaches and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs. We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves prediction quality similar to methods that use all of the input, while incurring a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
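A toy version of the cost-accuracy trade-off described above can be written as a simple stopping problem; the accuracy curve and per-token costs below are illustrative assumptions rather than results from the dissertation, and serve only to show how a stopping point trades input cost against prediction quality.

    import numpy as np

    tokens = np.arange(1, 31)
    accuracy = 1.0 - np.exp(-tokens / 10.0)     # assumed: accuracy grows with input seen

    for cost_per_token in (0.005, 0.02, 0.05):
        reward = accuracy - cost_per_token * tokens
        best = int(np.argmax(reward))
        print(f"cost/token={cost_per_token:.3f} -> answer after {tokens[best]} tokens "
              f"(accuracy ~ {accuracy[best]:.2f})")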
Abstract:
Experiments with ultracold atoms in optical lattices have become a versatile testing ground to study diverse quantum many-body Hamiltonians. A single-band Bose-Hubbard (BH) Hamiltonian was first proposed to describe these systems in 1998, and its associated quantum phase transition was subsequently observed in 2002. Over the years, there has been rapid progress in experimental realizations of more complex lattice geometries, leading to more exotic BH Hamiltonians with contributions from excited bands and modified tunneling and interaction energies. There have also been interesting theoretical insights and experimental studies on "unconventional" Bose-Einstein condensates in optical lattices and predictions of rich orbital physics in higher bands. In this thesis, I present our results on several multi-band BH models and emergent quantum phenomena. In particular, I study optical lattices with two local minima per unit cell and show that the low-energy states of a multi-band BH Hamiltonian with only pairwise interactions are equivalent to those of an effective single-band Hamiltonian with strong three-body interactions. I also propose a second method to create three-body interactions in ultracold gases of bosonic atoms in an optical lattice. In this case, this is achieved by a careful cancellation of two contributions in the pairwise interaction between the atoms, one proportional to the zero-energy scattering length and a second proportional to the effective range. I subsequently study the physics of Bose-Einstein condensation in the second band of a double-well 2D lattice and show that the collision-aided decay rate of the condensate to the ground band is smaller than the tunneling rate between neighboring unit cells. Finally, I propose a numerical method using the discrete variable representation for constructing real-valued Wannier functions localized in a unit cell of an optical lattice. The developed numerical method is general and can be applied to a wide array of optical lattice geometries in one, two or three dimensions.
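As background to the band-structure quantities from which Wannier functions and Bose-Hubbard parameters are usually derived (not the DVR construction itself), the sketch below diagonalises the standard plane-wave Hamiltonian of a 1D optical lattice V(x) = V0 sin^2(k_L x) in recoil units; the lattice depth and basis truncation are illustrative.

    import numpy as np

    def bloch_bands(V0, q, l_max=10):
        """Band energies at quasimomentum q (units of k_L) for lattice depth V0 (units of E_R)."""
        ls = np.arange(-l_max, l_max + 1)
        H = np.diag((q + 2.0 * ls) ** 2 + V0 / 2.0)   # kinetic term + mean potential
        off = -V0 / 4.0 * np.ones(len(ls) - 1)        # sin^2 couples plane waves l and l +/- 1
        H += np.diag(off, 1) + np.diag(off, -1)
        return np.linalg.eigvalsh(H)

    qs = np.linspace(-1, 1, 51)
    bands = np.array([bloch_bands(V0=8.0, q=q)[:3] for q in qs])   # lowest three bands
    print(bands[len(qs) // 2])                                     # band energies at q = 0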
Abstract:
Ultra-slow fluctuations (0.01-0.1 Hz) are a feature of intrinsic brain activity of as yet unclear origin. We propose a candidate mechanism based on retrograde endocannabinoid signaling in a synaptically coupled network of excitatory neurons. This is known to cause depolarization-induced suppression of excitation (DISE), which we model phenomenologically. We construct emergent network oscillations in a globally coupled network and show that for strong synaptic coupling DISE can lead to a synchronized population burst at the frequencies of resting brain rhythms.
Abstract:
Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse-transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of the non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and standard deviation, and the fitting procedure is a stationary one with few degrees of freedom that is easy to implement and control. An open-source MATLAB toolbox covering this methodology has been developed and is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
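The transformed-stationary idea can be sketched in a few lines (a minimal illustration, not the tsEva toolbox): normalise the series with a slowly varying running mean and standard deviation, fit a stationary GEV to block maxima of the normalised series, and map the return level back through the time-varying transformation. The window length, block size and synthetic data below are assumptions.

    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)
    t = np.arange(40 * 365)                                       # ~40 years of daily values
    x = 1.0 + 5e-5 * t + rng.gumbel(scale=0.3, size=t.size)       # trend + noisy extremes

    def running(stat, y, win=3651):                               # ~10-year window
        pad = win // 2
        yp = np.pad(y, pad, mode="edge")
        return np.array([stat(yp[i:i + win]) for i in range(y.size)])

    mu_t, sigma_t = running(np.mean, x), running(np.std, x)
    z = (x - mu_t) / sigma_t                                      # approximately stationary series

    annual_max = z.reshape(40, 365).max(axis=1)                   # block maxima
    shape, loc, scale = genextreme.fit(annual_max)
    z100 = genextreme.isf(1.0 / 100.0, shape, loc, scale)         # stationary 100-year level

    return_level_t = mu_t + sigma_t * z100                        # time-varying return level
    print(return_level_t[0], return_level_t[-1])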
Abstract:
Free-standing diamond films were used to study the effect of diamond surface morphology and microstructure on the electrical properties of Schottky barrier diodes. By using free-standing films, both the rough top diamond surface and the very smooth bottom surface are available for subsequent metal deposition. Rectifying electrical contacts were then established either with the smooth or with the rough surface. The estimate of the doping density from the capacitance-voltage plots shows that the smooth surface has a lower doping density compared with the top layers of the same film. The results also show that surface roughness does not contribute significantly to the frequency dispersion of the small-signal capacitance. The electrical properties of an abrupt asymmetric n(+)(silicon)-p(diamond) junction have also been measured. The I-V curves exhibit a plateau near zero bias at low temperatures and show inversion of rectification. The capacitance-voltage characteristics show a capacitance minimum under forward bias, which is dependent on the environmental conditions. It is proposed that this anomalous effect arises from high-level injection of minority carriers into the bulk.
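As a generic illustration of how a doping density is estimated from capacitance-voltage data (illustrative numbers, not the measurements reported above), the sketch below applies the standard Mott-Schottky relation to a synthetic C(V) curve; the contact area, permittivity and capacitance values are assumed.

    import numpy as np

    q, eps0, eps_r = 1.602e-19, 8.854e-12, 5.7       # eps_r taken as that of diamond
    A = 1.0e-6                                       # assumed contact area in m^2

    V = np.linspace(0.0, -5.0, 11)                   # reverse-bias sweep (V)
    C = 1e-10 / np.sqrt(1.0 + np.abs(V))             # synthetic capacitance in farads

    slope = np.polyfit(V, 1.0 / C ** 2, 1)[0]        # d(1/C^2)/dV
    N = 2.0 / (q * eps_r * eps0 * A ** 2 * abs(slope))
    print(f"doping density ~ {N:.2e} m^-3")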