952 results for single channel algorithm


Relevance:

30.00%

Publisher:

Abstract:

A novel design based on electric field-free open microwell arrays for the automated continuous-flow sorting of single cells or small clusters of cells is presented. The main feature of the proposed device is the parallel analysis of cell-cell and cell-particle interactions in each microwell of the array. High-throughput sample recovery, with fast and separate transfer from the microsites to standard microtiter plates, is also possible thanks to flexible printed circuit board technology, which makes it possible to produce cost-effective, large-area arrays featuring geometries compatible with laboratory equipment. Particle isolation is performed via negative dielectrophoretic forces, which convey the particles into the microwells. Particles such as cells and beads flow in electrically active microchannels on whose substrate the electrodes are patterned. The introduction of particles into the microwells is performed automatically, with the required feedback signal generated by a microscope-based optical counting and detection routine. In order to isolate a controlled number of particles, we created two particular configurations of the electric field within the structure: the first permits their isolation, whereas the second creates a net force which repels the particles from the microwell entrance. To increase the parallelism of the cell-isolation function, a new technique based on coplanar electrodes for detecting particle presence was developed. A lock-in amplifying scheme was used to monitor the impedance of the channel as it is perturbed by particles flowing in high-conductivity suspension media. The impedance measurement module was also combined with a dielectrophoretic focusing stage situated upstream of the measurement stage, to limit the dispersion of the measured signal amplitude caused by variation of the particles' position within the microchannel. In conclusion, the designed system complies with the initial specifications, making it suitable for cellomics and biotechnology applications.
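
The coplanar-electrode detection stage relies on a lock-in scheme, which can be illustrated with a minimal sketch. The Python code below is a hypothetical, generic lock-in demodulation (reference frequency, time constant, and the simulated transit dip are assumptions), not the instrumentation software of this work: the channel signal is mixed with in-phase and quadrature references and low-pass filtered, so the impedance perturbation caused by a passing particle appears as a dip in the demodulated amplitude.

```python
import numpy as np

def lock_in_demodulate(signal, fs, f_ref, tau=1e-4):
    """Recover the amplitude/phase of `signal` at the reference frequency f_ref.

    signal : sampled channel voltage (1-D array)
    fs     : sampling rate [Hz]
    f_ref  : excitation (reference) frequency [Hz]
    tau    : output low-pass time constant [s], sets the detection bandwidth
    """
    t = np.arange(len(signal)) / fs
    # Mix with in-phase and quadrature references
    x = signal * np.cos(2 * np.pi * f_ref * t)
    y = signal * np.sin(2 * np.pi * f_ref * t)
    # Single-pole IIR low-pass filter acts as the lock-in output stage
    alpha = 1.0 / (1.0 + tau * fs)
    X, Y = np.empty_like(x), np.empty_like(y)
    X[0], Y[0] = x[0], y[0]
    for i in range(1, len(signal)):
        X[i] = X[i - 1] + alpha * (x[i] - X[i - 1])
        Y[i] = Y[i - 1] + alpha * (y[i] - Y[i - 1])
    return 2.0 * np.hypot(X, Y), np.arctan2(Y, X)   # demodulated amplitude, phase

# Example: a 1 MHz excitation whose amplitude dips briefly as a particle transits
fs, f0 = 10e6, 1e6
t = np.arange(0, 5e-3, 1 / fs)
envelope = 1.0 - 0.05 * np.exp(-((t - 2.5e-3) / 2e-4) ** 2)   # 5% transient dip
amp, _ = lock_in_demodulate(envelope * np.cos(2 * np.pi * f0 * t), fs, f0, tau=5e-5)
print("relative dip at particle transit: %.3f" % (1 - amp[len(amp) // 2] / amp[-1]))
```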

Relevance:

30.00%

Publisher:

Abstract:

In the search for the "vulnerable plaque", which carries a particularly high risk of stroke and myocardial infarction, a paradigm shift is currently taking place: instead of the classical degree of stenosis, imaging of plaque morphology is becoming increasingly important. Objective: The aim of this work is to investigate the ability of a modern 16-channel CT to resolve the interior of carotid atherosclerotic plaques and to study the halo effect in vivo. Methods: CT images were acquired from 28 patients with known symptomatic carotid stenosis prior to vascular surgery and subsequently correlated with the histology of the vessel specimens. In this way, the microscopically identified lipid cores could be outlined on the CT images and evaluated with respect to their area and density values. In a further step, two radiologists, blinded to the histological results, independently read the images and marked presumed lipid cores. In both the blinded and the histology-controlled evaluation, the plaque types were additionally classified according to the AHA classification. A third reading was performed with the aid of software developed by us that colour-codes the CT images in order to improve the detection of lipid cores. Based on the colour coding, an index value was also calculated, intended to allow an objective assignment to the AHA classification. In 6 patients an additional non-contrast CT scan was acquired and matched exactly to the contrast-enhanced series by multiplanar reconstruction. In this way the halo effect, which obscures the plaque components close to the lumen, could be quantified and characterized. Results: While the assignment to the AHA classification by both the reader and the software algorithm correlates well with histology (type IV/Va: 89%, type Vb: 70%, type Vc: 89%, type VI: 55%), the detection of lipid cores is not sufficiently good in either case and the dependence on the individual reader is too great (Cohen's kappa: 18%). An objectification of the AHA classification of plaques through index calculation after colour coding appears feasible, although it is not superior to the human reader. The fibrous cap cannot be delineated, because blooming effects of the contrast agent distort its HU values. This halo effect had a median width of 1.1 mm with a standard deviation of 0.38 mm. No dependence on the contrast-agent density in the vessel lumen could be demonstrated. The halo effect fell off at a median rate of -106 HU/mm, with a standard deviation of 33 HU/mm. Conclusion: With respect to the depiction of individual plaque components, CT is still inferior to the known capabilities of MRI, particularly regarding the fibrous cap. Its strength currently lies rather in assigning plaques to a coarse classification based on that of the AHA. The clinical relevance of this, however, remains to be investigated in larger studies. Further development of computed tomography also gives hope for higher-resolution imaging of plaque morphology in the future.
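
The colour-coding and index idea lends itself to a small illustration. The sketch below is a hypothetical simplification, not the software developed in this work: each pixel of a plaque region is assigned to a tissue class by fixed HU thresholds (the thresholds and the index definition are assumptions), and the fraction of lipid-like pixels serves as a crude index for the AHA-type assignment.

```python
import numpy as np

# Assumed HU ranges for plaque components (illustrative only)
HU_CLASSES = {
    "lipid":     (-100, 60),
    "fibrous":   (60, 130),
    "calcified": (130, 3000),
}

def colour_code_plaque(hu_region):
    """Label each pixel of a plaque ROI (2-D array of HU values) by tissue class."""
    labels = np.full(hu_region.shape, "other", dtype=object)
    for name, (lo, hi) in HU_CLASSES.items():
        labels[(hu_region >= lo) & (hu_region < hi)] = name
    return labels

def lipid_index(hu_region):
    """Crude index: fraction of pixels in the lipid HU range (stand-in for the study's index)."""
    return np.mean(colour_code_plaque(hu_region) == "lipid")

# Example with a synthetic 5x5 plaque ROI
roi = np.array([[ 20,  45, 150,  80,  30],
                [ 10, 200,  90,  40,  55],
                [ 70,  35,  25, 140, 100],
                [ 50,  65, 110,  20,  15],
                [300,  90,  40,  30,  75]])
print("lipid index: %.2f" % lipid_index(roi))
```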

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents different techniques designed to drive a swarm of robots through an a priori unknown environment in order to move the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. Each theory, from its own point of view, exploits the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm are exploited to overcome and minimize difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group over the communication topology, because it helps keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence is applied through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory is applied by exploiting Consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach was followed in order to preserve the power of PSO while controlling part of its random behaviour with a distributed control algorithm such as Consensus.
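
As a rough illustration of how the two ingredients can be combined, the hypothetical Python sketch below alternates a standard PSO velocity update (navigation toward a goal area) with a consensus step that pulls each robot toward the average of its neighbours on a ring communication graph, keeping the group cohesive. All gains, the goal position, and the topology are placeholders, and plain position consensus is used here instead of the formation control developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

n_robots, dim = 8, 2
goal = np.array([10.0, 10.0])           # assumed target area
pos = rng.uniform(0, 1, (n_robots, dim))
vel = np.zeros((n_robots, dim))
pbest = pos.copy()                       # personal best positions

# Ring communication topology -> row-stochastic consensus weight matrix
A = np.zeros((n_robots, n_robots))
for i in range(n_robots):
    A[i, (i - 1) % n_robots] = A[i, (i + 1) % n_robots] = 1.0
W = A + np.eye(n_robots)
W /= W.sum(axis=1, keepdims=True)

def cost(p):
    return np.linalg.norm(p - goal, axis=-1)   # distance to the goal

w, c1, c2, eps = 0.7, 1.5, 1.5, 0.3            # PSO gains, consensus gain

for step in range(200):
    # --- PSO navigation update ---
    gbest = pbest[np.argmin(cost(pbest))]
    r1, r2 = rng.random((2, n_robots, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    # --- consensus/agreement step keeps the swarm cohesive ---
    pos = (1 - eps) * pos + eps * (W @ pos)
    better = cost(pos) < cost(pbest)
    pbest[better] = pos[better]

print("mean distance to goal:", cost(pos).mean())
```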

Relevance:

30.00%

Publisher:

Abstract:

Since its discovery, the top quark has been one of the most intensively investigated subjects in particle physics. The aim of this thesis is the reconstruction of hadronically decaying top quarks with high transverse momentum (boosted tops) using the Template Overlap Method (TOM). Because of the high energy, the decay products of boosted tops partially or totally overlap and are therefore contained in a single large-radius jet (fat-jet). TOM compares the internal energy distribution of the candidate fat-jet to a sample of top-quark decay configurations obtained from Monte Carlo simulation (templates). The algorithm is based on the definition of an overlap function, which quantifies the level of agreement between the fat-jet and the template, allowing an efficient discrimination of signal from the background contributions. A working point was chosen to obtain a signal efficiency close to 90% with a corresponding background rejection of 70%. The performance of TOM has been tested on MC samples in the muon channel and compared with previous methods from the literature. All the methods will be merged into a multivariate analysis to give a global top tagger, which will be included in the ttbar production differential cross-section measurement performed on the data acquired in 2012 at sqrt(s) = 8 TeV in the high-pT region of phase space, where new physics processes could appear. Because its performance improves with increasing pT, the Template Overlap Method will play a crucial role in the next data taking at sqrt(s) = 13 TeV, where almost all top quarks will be produced at high energy, making the standard reconstruction methods inefficient.
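
A hedged sketch of the overlap function follows: the energy collected in small cones around each template parton direction is compared with the template parton energies through a Gaussian penalty, and the resulting score (maximized over templates) is the tagging variable. The cone size, resolution parameter, and four-vector conventions below are assumptions for illustration, not the configuration used in the analysis.

```python
import numpy as np

def overlap(fatjet_constituents, template, R=0.2, sigma_frac=0.33):
    """Toy template-overlap score Ov in [0, 1].

    fatjet_constituents : array (N, 4) of (E, px, py, pz) for the jet constituents
    template            : array (M, 4) of (E, px, py, pz) for the template partons
    R                   : cone radius around each template parton
    sigma_frac          : assumed energy resolution as a fraction of the parton energy
    """
    def eta_phi(p):
        px, py, pz = p[..., 1], p[..., 2], p[..., 3]
        pt = np.hypot(px, py)
        eta = np.arcsinh(pz / np.maximum(pt, 1e-12))
        phi = np.arctan2(py, px)
        return eta, phi

    c_eta, c_phi = eta_phi(fatjet_constituents)
    chi2 = 0.0
    for parton in template:
        p_eta, p_phi = eta_phi(parton[None, :])
        dphi = np.angle(np.exp(1j * (c_phi - p_phi)))        # wrap to (-pi, pi]
        in_cone = (c_eta - p_eta) ** 2 + dphi ** 2 < R ** 2
        e_cone = fatjet_constituents[in_cone, 0].sum()        # energy inside the cone
        sigma = sigma_frac * parton[0]
        chi2 += ((e_cone - parton[0]) / sigma) ** 2
    return np.exp(-0.5 * chi2)

# Usage: Ov = max(overlap(jet, t) for t in templates); cutting on the peak overlap
# defines a working point such as the signal efficiency / background rejection quoted above.
```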

Relevance:

30.00%

Publisher:

Abstract:

The quark model successfully describes all ground-state baryons as members of $SU(N)$ flavour multiplets. For excited baryon states the situation is totally different: far fewer states are found in experiment than are predicted by most theoretical calculations. This fact has long been known as the 'missing resonance problem'. In addition, many states found in experiments are only poorly measured up to now. Therefore, further experimental efforts are needed to clarify the situation.

At COMPASS, reactions of a 190 GeV/$c$ hadron beam impinging on a liquid hydrogen target are investigated. The hadron beam contains different species of particles ($\pi$, $K$, $p$). To distinguish these particles, two Cherenkov detectors are used. In this thesis, a new method for the identification of particles from the detector information is developed. This method is based on statistical approaches and allows a better kaon identification efficiency, with a similar purity, compared to the previously used method.

The reaction $pp \rightarrow ppX$ with $X = (\pi^0, \eta, \omega, \phi)$ is used to study different production mechanisms. A previous analysis of $\omega$ and $\phi$ mesons is extended to pseudoscalar mesons. As the resonance contributions in $p\eta$ are smaller than in $p\pi^0$, a different behaviour of these two final states is expected as a function of kinematic variables. The investigation of these differences allows different production mechanisms to be studied and the size of the resonant contribution in the different channels to be estimated.

In addition, the channel $pp \rightarrow ppX$ allows baryon resonances to be studied in the $pX$ system. In the COMPASS energy regime, the reaction is dominated by Pomeron exchange. As a Pomeron carries vacuum quantum numbers, no isospin is transferred between the target proton and the beam proton. Therefore, the $pX$ final state has isospin $\frac{1}{2}$ and all baryon resonances in this channel are $N^\ast$ baryons. This offers the opportunity to do spectroscopy without taking $\Delta$ resonances into account.

To disentangle the contributions of different resonances, a partial wave analysis (PWA) is used. Different resonances have different spin and parity $J^P$, which results in different angular distributions of the decay particles. These angular distributions can be calculated from models and then fitted to the data. From the fit, the contributions of the single resonances as well as the resonance parameters, namely the mass and the width, can be extracted. In this thesis, two different approaches for a partial wave analysis of the reaction $pp \rightarrow pp\pi^0$ are developed and tested.
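
To make the principle concrete, the toy Python fit below does what a PWA does on a much-reduced scale: a binned decay-angle distribution generated from two interfering waves with different angular dependence (a constant S-wave and a $\cos\theta$ P-wave, both assumptions for the toy) is fitted, and the fitted amplitude ratio recovers the relative wave contributions. This only illustrates the idea; it is not the PWA framework developed in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Toy 'data': decay angles drawn from |a_S + a_P * cos(theta)|^2 with a_P/a_S = 0.6
a_P_true = 0.6
grid = np.linspace(-1, 1, 2001)
pdf = (1.0 + a_P_true * grid) ** 2
cos_theta = rng.choice(grid, size=20000, p=pdf / pdf.sum())

counts, edges = np.histogram(cos_theta, bins=40, range=(-1, 1))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(x, norm, a_P):
    # S-wave amplitude fixed to 1; `norm` absorbs the overall normalization
    return norm * (1.0 + a_P * x) ** 2

popt, pcov = curve_fit(model, centers, counts, p0=(counts.mean(), 0.0),
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
print("fitted P/S amplitude ratio: %.2f +/- %.2f" % (popt[1], np.sqrt(pcov[1, 1])))
```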

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to assess a pharmacokinetic algorithm designed to predict ketamine plasma concentrations and drive a target-controlled infusion (TCI) in ponies. First, the algorithm was used to simulate the time course of ketamine enantiomer plasma concentrations after administration of an intravenous bolus in six ponies, based on individual pharmacokinetic parameters obtained from a previous experiment. Using the same pharmacokinetic parameters, a TCI of S-ketamine was then performed over 120 min to maintain a plasma concentration of 1 microg/mL. The actual plasma concentrations of S-ketamine were measured from arterial samples using capillary electrophoresis. The performance of the simulation for the administration of a single bolus was very good. During the TCI, the S-ketamine plasma concentrations were maintained within the limits of acceptance (wobble and divergence <20%) at a median of 79% (IQR, 71-90) of the peak concentration reached after the initial bolus. However, in three ponies the steady-state concentrations were significantly higher than targeted. It is hypothesized that an inaccurate estimation of the volume of the central compartment is partly responsible for this difference. The algorithm allowed good predictions for the single bolus administration and an appropriate maintenance of constant plasma concentrations.
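
For illustration, the sketch below simulates the plasma concentration profile after a single intravenous bolus for a generic two-compartment model, the kind of prediction the first part of the study evaluates. The rate constants, central volume, and dose are placeholder values, not the ponies' individual parameters.

```python
import numpy as np

def biexp_concentration(t, dose, V1, k10, k12, k21):
    """Plasma concentration after an IV bolus for a two-compartment model.

    Uses the standard bi-exponential solution C(t) = A*exp(-a*t) + B*exp(-b*t),
    with macro-constants derived from the micro rate constants.
    """
    s = k10 + k12 + k21
    disc = np.sqrt(s ** 2 - 4 * k10 * k21)
    a, b = (s + disc) / 2, (s - disc) / 2        # alpha, beta
    C0 = dose / V1
    A = C0 * (a - k21) / (a - b)
    B = C0 * (k21 - b) / (a - b)
    return A * np.exp(-a * t) + B * np.exp(-b * t)

# Placeholder parameters (illustrative only, per kg basis)
dose = 1.0                          # mg/kg S-ketamine bolus
V1 = 0.3                            # L/kg central volume
k10, k12, k21 = 0.15, 0.10, 0.08    # 1/min micro rate constants

t = np.linspace(0, 120, 601)        # minutes
C = biexp_concentration(t, dose, V1, k10, k12, k21)
print("predicted plasma concentration at 5 min: %.2f ug/mL" % C[t.searchsorted(5)])
```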

Relevance:

30.00%

Publisher:

Abstract:

In this manuscript we are concerned with functional imaging of the colon to assess the kinetics of a microbicide lubricant. The overarching goal is to understand the distribution of the lubricant in the colon; such information is crucial for understanding the potential impact of the microbicide on HIV viral transmission. The experiment was conducted by imaging a radiolabeled lubricant distributed in the subject's colon. The tracer imaging was performed via single photon emission computed tomography (SPECT), a non-invasive, in-vivo functional imaging technique. We develop a novel principal curve algorithm to construct a three-dimensional curve through the colon images. The developed algorithm is tested and debugged on several difficult two-dimensional images of familiar curves where the original principal curve algorithm does not apply. The final curve fit to the colon data is compared with experimental sigmoidoscope collection.
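
For orientation, the sketch below implements the classical Hastie-Stuetzle principal-curve idea that the new algorithm builds on and extends: alternate between projecting the data onto the current curve (here simplified to nearest-node assignment) and smoothing the coordinates as a function of arc length. The node count, bandwidth, and the synthetic helix-like point cloud are assumptions; this is not the novel algorithm developed in the manuscript.

```python
import numpy as np

def fit_principal_curve(points, n_nodes=50, bandwidth=0.05, n_iter=20):
    """Toy Hastie-Stuetzle-style principal curve for an (N, d) point cloud."""
    # Initialise the curve as a segment of the first principal component
    mean = points.mean(axis=0)
    X = points - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    t = X @ Vt[0]
    curve = mean + np.outer(np.linspace(t.min(), t.max(), n_nodes), Vt[0])

    for _ in range(n_iter):
        # Projection step (simplified): nearest curve node for every data point
        d2 = ((points[:, None, :] - curve[None, :, :]) ** 2).sum(axis=-1)
        nearest = d2.argmin(axis=1)
        # Arc-length parameter of each point via its nearest node
        seg = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(curve, axis=0), axis=1))]
        lam = seg[nearest]
        # Smoothing step: kernel-average the coordinates along arc length
        h = bandwidth * seg[-1]
        grid = np.linspace(0.0, seg[-1], n_nodes)
        w = np.exp(-0.5 * ((grid[:, None] - lam[None, :]) / h) ** 2)
        curve = (w @ points) / w.sum(axis=1, keepdims=True)
    return curve

# Example: noisy helix-like cloud as a stand-in for the tracer voxel positions
rng = np.random.default_rng(2)
s = rng.uniform(0, 4 * np.pi, 2000)
cloud = np.c_[np.cos(s), np.sin(s), 0.5 * s] + 0.05 * rng.normal(size=(2000, 3))
centreline = fit_principal_curve(cloud)
print(centreline.shape)   # (50, 3): ordered estimate of the 3-D centreline
```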

Relevance:

30.00%

Publisher:

Abstract:

RATIONALE: Olanzapine is an atypical antipsychotic drug with a more favourable safety profile than typical antipsychotics and a hitherto unknown topographic quantitative electroencephalogram (QEEG) profile. OBJECTIVES: We investigated electrical brain activity (QEEG and cognitive event-related potentials, ERPs) in healthy subjects who received olanzapine. METHODS: Vigilance-controlled, 19-channel EEG and ERPs in an auditory odd-ball paradigm were recorded before and 3 h, 6 h and 9 h after administration of either a single dose of placebo or olanzapine (2.5 mg and 5 mg) in ten healthy subjects. The QEEG was analysed by spectral analysis and evaluated in nine frequency bands. For the P300 component of the odd-ball ERP, the amplitude and latency were analysed. Statistical effects were tested using a repeated-measures analysis of variance. RESULTS: For the interaction between time and treatment, significant effects were observed for the theta, alpha-2, beta-2 and beta-4 frequency bands. The amplitude of the activity in the theta band increased most significantly 6 h after the 5-mg administration of olanzapine. A pronounced decrease of the alpha-2 activity, especially 9 h after 5 mg olanzapine administration, could be observed. In most beta frequency bands, and most significantly in the beta-4 band, a dose-dependent decrease of the activity beginning 6 h after drug administration was demonstrated. Topographic effects could be observed for the beta-2 band (occipital decrease) and, as a tendency, for the alpha-2 band (frontal increase and occipital decrease), both indicating a frontal shift of brain electrical activity. There were no significant changes in P300 amplitude or latency after drug administration. CONCLUSION: QEEG alterations after olanzapine administration were similar to the EEG effects produced by other atypical antipsychotic drugs, such as clozapine. The increase in theta activity is comparable to the frequency changes observed for thymoleptics or antipsychotics for which treatment-emergent somnolence is common, whereas the decrease in beta activity observed after olanzapine administration is not characteristic of these drugs. There were no clear signs of increased cerebral excitability after a single-dose administration of 2.5 mg or 5 mg olanzapine in healthy controls.
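
For illustration, a minimal band-power computation of the kind that underlies such a spectral QEEG analysis is sketched below. The band edges and the use of Welch's method are assumptions for the sketch, not the exact analysis pipeline of the study, which evaluated nine bands per channel.

```python
import numpy as np
from scipy.signal import welch

# Assumed frequency bands (Hz); only a subset of the nine bands is listed
BANDS = {"theta": (4, 8), "alpha-2": (10.5, 12.5),
         "beta-2": (16.5, 20), "beta-4": (26, 32)}

def band_powers(eeg, fs):
    """Absolute power per frequency band for one EEG channel (1-D array)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)     # 4-second segments
    return {name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                           freqs[(freqs >= lo) & (freqs < hi)])
            for name, (lo, hi) in BANDS.items()}

# Example with synthetic data: 60 s of noise plus a theta-range oscillation
fs = 250
t = np.arange(0, 60, 1 / fs)
eeg = np.random.default_rng(4).normal(0, 1, t.size) + 2 * np.sin(2 * np.pi * 6 * t)
print(band_powers(eeg, fs))
```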

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a novel variable decomposition approach for pose recovery of the distal locking holes using a single calibrated fluoroscopic image. The problem is formulated as a model-based optimal fitting process, in which the control variables are decomposed into two sets: (a) the angle between the nail axis and its projection on the imaging plane, and (b) the translation and rotation of the geometrical model of the distal locking hole around the nail axis. By using an iterative algorithm to find the optimal values of the latter set of variables for any given value of the former variable, we reduce the multidimensional model-based optimal fitting problem to a one-dimensional search along a finite interval. We report the results of our in vitro experiments, which demonstrate that the accuracy of our approach is adequate for successful distal locking of intramedullary nails.
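
The decomposition can be illustrated generically: an outer one-dimensional bounded search over the projection angle wraps an inner iterative optimization over the remaining translation/rotation variables. In the sketch below the fitting residual is a synthetic placeholder (the real objective would project the hole model into the fluoroscopic image and measure the mismatch), so the variable names, bounds, and optimum are assumptions.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def fitting_error(angle, pose):
    """Placeholder model-to-image fitting residual (synthetic quadratic bowl)."""
    target = np.array([0.4, -1.2, 0.8])        # assumed optimum of the inner variables
    return (angle - 0.25) ** 2 + np.sum((pose - target) ** 2)

def inner_fit(angle):
    """Optimize translation and rotation about the nail axis for a fixed angle."""
    res = minimize(lambda pose: fitting_error(angle, pose), x0=np.zeros(3))
    return res.fun, res.x

# Outer problem: one-dimensional search over the nail-axis angle on a finite interval
outer = minimize_scalar(lambda a: inner_fit(a)[0],
                        bounds=(-np.pi / 4, np.pi / 4), method="bounded")
best_angle = outer.x
_, best_pose = inner_fit(best_angle)
print("recovered angle %.3f rad, pose %s" % (best_angle, np.round(best_pose, 3)))
```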

Relevance:

30.00%

Publisher:

Abstract:

The problem of re-sampling spatially distributed data organized into regular or irregular grids to a finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represent, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in a way that conserves the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set onto a user-requested grid according to a distribution function. The distribution function can be determined from the given data by interpolation methods. In general, accurate interpolation of heavily fluctuating data with respect to multiple boundary conditions requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data; accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve is used to approximate the integrated data set. A single parameter is introduced by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms based on linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to reduce these interpolation errors significantly. The accuracy of the new algorithm was tested on a series of x-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the re-sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
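
The key trick, interpolating the integrated (cumulative) data with a shape-preserving Hermite curve and differencing it on the new grid, can be sketched as follows. The sketch uses SciPy's PCHIP interpolant as a stand-in for the parametrized Hermitian curve of this work: it conserves the integral and avoids overshoot, but it lacks the tunable overshoot parameter described above, so it only illustrates the principle.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(edges_in, values_in, edges_out):
    """Re-bin histogrammed data onto a new grid while conserving the integral.

    edges_in  : bin edges of the source grid (length n+1)
    values_in : integrated quantity per source bin (length n), e.g. mass or energy
    edges_out : bin edges of the target grid (must lie within the source range)
    """
    # The cumulative integral is monotone if the quantity is positive definite
    cumulative = np.concatenate(([0.0], np.cumsum(values_in)))
    # Shape-preserving Hermite interpolation -> no overshoot or undershoot,
    # hence no negative re-binned values for positive data
    F = PchipInterpolator(edges_in, cumulative)
    return np.diff(F(edges_out))

# Example: re-bin a coarse histogram onto a finer grid
edges_in = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
values_in = np.array([4.0, 9.0, 1.0, 6.0])
edges_out = np.linspace(0.0, 4.0, 17)
values_out = rebin_conservative(edges_in, values_in, edges_out)
print(values_out.sum(), values_in.sum())   # both 20.0 -> the integral is conserved
```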

Relevance:

30.00%

Publisher:

Abstract:

An optimizing compiler's internal representation fundamentally affects the clarity, efficiency and feasibility of the optimization algorithms employed by the compiler. Static Single Assignment (SSA), a state-of-the-art program representation, has great advantages, though it can still be improved. This dissertation explores the domain of single assignment beyond SSA and presents two novel program representations: Future Gated Single Assignment (FGSA) and Recursive Future Predicated Form (RFPF). Both FGSA and RFPF embed control flow and data flow information, enabling efficient traversal of program information and thus leading to better and simpler optimizations. We introduce the future value concept, the design basis of both FGSA and RFPF, which permits a consumer instruction to be encountered before the producer of its source operand(s) in a control flow setting. We show that FGSA is efficiently computable by using a series of T1/T2/TR transformations, yielding an expected linear-time algorithm that combines the construction of the pruned single assignment form with liveness analysis for both reducible and irreducible graphs. The approach results in an average reduction of 7.7%, with a maximum of 67%, in the number of gating functions compared to the pruned SSA form on the SPEC2000 benchmark suite. We present a solid and near-optimal framework for performing the inverse transformation from single assignment programs. We demonstrate the importance of unrestricted code motion and present RFPF. We develop algorithms which enable instruction movement in acyclic as well as cyclic regions, and show the ease of performing optimizations such as Partial Redundancy Elimination on RFPF.

Relevance:

30.00%

Publisher:

Abstract:

The demand for faster and smaller electronic devices has never slowed down, and this has always kept researchers on their toes. Following Moore's law, which states that the number of transistors in a single chip will double every 18 months, today "30 million transistors can fit into the head of a 1.5 mm diameter pin". But this miniaturization cannot continue indefinitely, due to the 'quantum leakage' limit on the thickness of the insulating layer between the gate electrode and the current-carrying channel. To bypass this limitation, scientists came up with the idea of using abundantly available organic molecules as components in an electronic device. One of the primary challenges in this field was the ability to perform conductance measurements across single molecular junctions. Once that was achieved, the focus shifted to a deeper understanding of the underlying physics behind electron transport across these molecular-scale devices. Our initial theoretical approach is based on the conventional Non-Equilibrium Green's Function (NEGF) formulation, but the self-energy of the leads is modified to include a weighting factor that ensures negligible current in the absence of a molecular pathway, as observed in a Mechanically Controlled Break Junction (MCBJ) experiment. The formulation is then made parameter-free by a more careful estimation of the self-energy of the leads. The calculated conductance turns out to be at least an order of magnitude higher than the experimental values, which is probably due to a strong chemical bond at the metal-molecule junction, unlike in the experiments. The focus then shifts to a comparative study of charge transport in molecular wires of different lengths within the same formalism. The molecular wires, composed of a series of organic molecules, are sandwiched between two gold electrodes to make a two-terminal device. The length of the wire is increased by sequentially increasing the number of molecules in the wire from 1 to 3. In the low-bias regime all the molecular devices are found to exhibit Ohmic behavior; however, the magnitude of the conductance decreases exponentially with increasing length of the wire. In the next study, the relative contributions of the 'in-phase' and 'out-of-phase' components of the total electronic current under the influence of an external bias are estimated for the wires of the three different lengths. In the low-bias regime, the 'out-of-phase' contribution to the total current is minimal and the 'in-phase' elastic tunneling of the electrons is responsible for the net electronic current. This is true irrespective of the length of the molecular spacer. In this regime, the current-voltage characteristics follow Ohm's law and the conductance of the wires is found to decrease exponentially with increasing length, which is in agreement with experimental results. However, after a certain 'off-set' voltage, the current increases non-linearly with bias and the 'out-of-phase' tunneling of electrons reduces the net current substantially. Subsequently, the interaction of the conduction electrons with the vibrational modes as a function of external bias is studied in the three different oligomers, since such interactions are one of the main sources of phase-breaking scattering. The number of vibrational modes that couple strongly with the frontier molecular orbitals is found to increase with the length of the spacer and with the external field. This is consistent with the lowest 'off-set' voltage being found for the longest wire under study.
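
As a minimal illustration of the formalism, the sketch below evaluates the coherent (elastic, 'in-phase') transmission T(E) = Tr[Gamma_L G Gamma_R G^dagger] for a single-orbital tight-binding wire coupled to two wide-band leads, and shows the roughly exponential decrease of the off-resonant transmission with the number of sites. The on-site energy, hopping, and broadening values are placeholders, and the weighting-factor modification of the lead self-energy described above is not included.

```python
import numpy as np

def transmission(E, n_sites, eps0=0.5, t_hop=-0.3, gamma=0.2):
    """Coherent NEGF transmission through an n-site tight-binding wire.

    E      : electron energy (eV), relative to the lead Fermi level
    eps0   : on-site energy of the molecular orbitals (eV)
    t_hop  : nearest-neighbour hopping (eV)
    gamma  : wide-band lead broadening on the terminal sites (eV)
    """
    H = (np.diag(np.full(n_sites, eps0))
         + np.diag(np.full(n_sites - 1, t_hop), 1)
         + np.diag(np.full(n_sites - 1, t_hop), -1))
    # Wide-band-limit self-energies couple only to the end sites
    Sigma_L = np.zeros((n_sites, n_sites), dtype=complex)
    Sigma_R = np.zeros((n_sites, n_sites), dtype=complex)
    Sigma_L[0, 0] = -0.5j * gamma
    Sigma_R[-1, -1] = -0.5j * gamma
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    G = np.linalg.inv(E * np.eye(n_sites) - H - Sigma_L - Sigma_R)   # retarded Green's function
    return np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real

# Transmission (conductance in units of G0 = 2e^2/h) at the Fermi energy E = 0
for n in (1, 2, 3):
    print("sites = %d  T(E_F) = %.3e" % (n, transmission(0.0, n)))
# Off resonance, T decays roughly exponentially with length, mirroring the
# length dependence reported above for the molecular wires.
```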

Relevance:

30.00%

Publisher:

Abstract:

This dissertation discusses structural-electrostatic modeling techniques, genetic algorithm based optimization and control design for electrostatic micro devices. First, an alternative modeling technique, the interpolated force model, for electrostatic micro devices is discussed. The method provides improved computational efficiency relative to a benchmark model, as well as improved accuracy for irregular electrode configurations relative to a common approximate model, the parallel plate approximation model. For the configuration most similar to two parallel plates, expected to be the best case scenario for the approximate model, both the parallel plate approximation model and the interpolated force model maintained less than 2.2% error in static deflection compared to the benchmark model. For the configuration expected to be the worst case scenario for the parallel plate approximation model, the interpolated force model maintained less than 2.9% error in static deflection while the parallel plate approximation model is incapable of handling the configuration. Second, genetic algorithm based optimization is shown to improve the design of an electrostatic micro sensor. The design space is enlarged from published design spaces to include the configuration of both sensing and actuation electrodes, material distribution, actuation voltage and other geometric dimensions. For a small population, the design was improved by approximately a factor of 6 over 15 generations to a fitness value of 3.2 fF. For a larger population seeded with the best configurations of the previous optimization, the design was improved by another 7% in 5 generations to a fitness value of 3.0 fF. Third, a learning control algorithm is presented that reduces the closing time of a radiofrequency microelectromechanical systems switch by minimizing bounce while maintaining robustness to fabrication variability. Electrostatic actuation of the plate causes pull-in with high impact velocities, which are difficult to control due to parameter variations from part to part. A single degree-of-freedom model was utilized to design a learning control algorithm that shapes the actuation voltage based on the open/closed state of the switch. Experiments on 3 test switches show that after 5-10 iterations, the learning algorithm lands the switch with an impact velocity not exceeding 0.2 m/s, eliminating bounce.
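
The second part, genetic-algorithm-based design optimization, follows the standard loop of selection, crossover, and mutation over a population of candidate designs. The sketch below is a generic real-coded GA with a placeholder fitness function; the design encoding, bounds, and the actual sensor objective (capacitance change in fF) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(design):
    """Placeholder objective standing in for the simulated sensor response (maximized)."""
    target = np.array([2.0, -1.0, 0.5, 3.0])        # assumed optimum
    return -np.sum((design - target) ** 2)

bounds = np.array([[-5.0, 5.0]] * 4)                 # design-variable bounds
pop_size, n_gen, mut_sigma = 30, 40, 0.2

pop = rng.uniform(bounds[:, 0], bounds[:, 1], (pop_size, bounds.shape[0]))
for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[scores.argmax()].copy()]          # elitism: keep the best design
    while len(new_pop) < pop_size:
        # Tournament selection of two parents
        i, j = rng.integers(pop_size, size=2), rng.integers(pop_size, size=2)
        p1 = pop[i[np.argmax(scores[i])]]
        p2 = pop[j[np.argmax(scores[j])]]
        # Blend crossover and Gaussian mutation
        alpha = rng.random(bounds.shape[0])
        child = alpha * p1 + (1 - alpha) * p2 + rng.normal(0.0, mut_sigma, bounds.shape[0])
        new_pop.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.array(new_pop)

best = pop[np.array([fitness(ind) for ind in pop]).argmax()]
print("best design:", np.round(best, 2), " fitness:", round(fitness(best), 3))
```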

Relevance:

30.00%

Publisher:

Abstract:

The dissipation of high heat fluxes from integrated circuit chips and the maintenance of acceptable junction temperatures in high-powered electronics require advanced cooling technologies. One such technology is two-phase cooling in microchannels under confined flow boiling conditions. In macroscale flow boiling, bubbles nucleate on the channel walls, grow, and depart from the surface. In microscale flow boiling, bubbles can fill the channel diameter before the liquid drag force has a chance to sweep them off the channel wall. As a confined bubble elongates in a microchannel, it traps thin liquid films between the heated wall and the vapor core that are subject to large temperature gradients. The thin films evaporate rapidly, sometimes faster than the incoming mass flux can replenish bulk fluid in the microchannel. When the local vapor pressure spike exceeds the inlet pressure, it forces the upstream interface to travel back into the inlet plenum and creates flow boiling instabilities. Flow boiling instabilities reduce the temperature at which the critical heat flux occurs and create channel dryout. Dryout causes high surface temperatures that can destroy the electronic circuits that use two-phase micro heat exchangers for cooling. Flow boiling instability is characterized by periodic oscillation of flow regimes, which induces oscillations in fluid temperature, wall temperatures, pressure drop, and mass flux. When nanofluids are used in flow boiling, the nanoparticles become deposited on the heated surface and change its thermal conductivity, roughness, capillarity, wettability, and nucleation site density. They also affect heat transfer by changing the bubble departure diameter, the bubble departure frequency, and the evaporation of the micro- and macrolayer beneath the growing bubbles. In this study, flow boiling was investigated using degassed, deionized water and 0.001 vol% aluminum oxide nanofluids in a single rectangular brass microchannel with a hydraulic diameter of 229 µm, for one inlet fluid temperature of 63°C and two constant flow rates of 0.41 ml/min and 0.82 ml/min. The power input was adjusted to give two average surface temperatures of 103°C and 119°C at each flow rate. High-speed images were taken periodically for water and nanofluid flow boiling after durations of 25, 75, and 125 minutes from the start of flow. The change in regime timing revealed the effect of nanoparticle suspension and deposition on the Onset of Nucleate Boiling (ONB) and the Onset of Bubble Elongation (OBE). Cycle durations and bubble frequencies are reported for different nanofluid flow boiling durations. The addition of nanoparticles was found to stabilize bubble nucleation and growth, and to limit the recession rate of the upstream and downstream interfaces, mitigating the spreading of dry spots and elongating the thin-film regions to increase thin-film evaporation.

Relevance:

30.00%

Publisher:

Abstract:

All mitochondria have integral outer membrane proteins with beta-barrel structures, including the conserved metabolite transporter VDAC (voltage-dependent anion channel) and the conserved protein import channel Tom40. Bioinformatic searches of the Trypanosoma brucei genome for either VDAC or Tom40 identified a single open reading frame, with sequence analysis suggesting that VDACs and Tom40s are ancestrally related and should be grouped into the same protein family: the mitochondrial porins. The single T. brucei mitochondrial porin is essential only under growth conditions that depend on oxidative phosphorylation. Mitochondria isolated from homozygous knockout cells did not produce adenosine triphosphate (ATP) in response to added substrates, but ATP production was restored by physical disruption of the outer membrane. These results demonstrate that the mitochondrial porin identified in T. brucei is the main metabolite channel in the outer membrane and therefore the functional orthologue of VDAC. No distinct Tom40 was identified in T. brucei. In addition to mitochondrial proteins, T. brucei imports all mitochondrial tRNAs from the cytosol. Isolated mitochondria from the VDAC knockout cells import tRNA as efficiently as those from wild-type cells. Thus, unlike the situation in plants, VDAC is not required for mitochondrial tRNA import in T. brucei.