939 results for Single-molecule detection


Relevance:

30.00%

Publisher:

Abstract:

Using CMOS transistors for terahertz detection is currently a disruptive technology, as it allows a terahertz detector to be integrated directly with video preamplifiers. The detectors are based on the resistive mixer concept, and their performance depends mainly on the following parameters: the type of antenna, the electrical parameters of the CMOS device (gate-to-drain capacitance and channel length), and the foundry process. Two different 300 GHz detectors are discussed: a single-transistor detector with a broadband antenna and a differential pair driven by a resonant patch antenna.
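The resistive-mixer principle behind these detectors can be illustrated with a minimal square-law model: the incoming RF signal modulates the channel conductance through the gate while also appearing across the drain, so low-pass filtering the drain current leaves a DC term proportional to the received power. The following Python sketch is illustrative only; the constants k and coupling are assumed values, not parameters from the paper.

```python
# Illustrative sketch (not from the paper): a FET below cutoff acts as a
# resistive self-mixer -- the same RF signal modulates both the channel
# conductance (via the gate) and the drain voltage, so the time-averaged
# drain current contains a DC term proportional to the RF power.
import numpy as np

f_rf = 300e9                            # 300 GHz carrier, as in the detectors discussed
t = np.linspace(0, 100 / f_rf, 20000)   # 100 RF cycles

def detector_dc_output(amplitude, k=1e-3, coupling=0.5):
    """DC response of an idealized resistive mixer.

    k        -- transconductance-like constant (A/V^2), assumed value
    coupling -- fraction of the RF voltage reaching the drain, assumed value
    """
    v_gate = amplitude * np.cos(2 * np.pi * f_rf * t)
    v_drain = coupling * v_gate         # antenna couples RF to both terminals
    i_ds = k * v_gate * v_drain         # conductance modulation times drain voltage
    return i_ds.mean()                  # low-pass (video) filtering keeps the DC term

for a_mv in (1, 2, 4):
    out = detector_dc_output(a_mv * 1e-3)
    print(f"A = {a_mv} mV -> DC output ~ {out:.3e} A")
# Doubling the amplitude quadruples the output: square-law (power) detection.
```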

Relevance:

30.00%

Publisher:

Abstract:

In this paper we propose an innovative method for the automatic detection and tracking of road traffic signs using an onboard stereo camera. It involves a combination of monocular and stereo analysis strategies to increase the reliability of the detections, such that it can boost the performance of any traffic sign recognition scheme. Firstly, an adaptive color- and appearance-based detection is applied at the single-camera level to generate a set of traffic sign hypotheses. In turn, stereo information allows for sparse 3D reconstruction of potential traffic signs through a SURF-based matching strategy. Specifically, the plane that best fits the cloud of 3D points traced back from feature matches is estimated using a RANSAC-based approach to improve robustness to outliers. Temporal consistency of the 3D information is ensured through a Kalman-based tracking stage. This also allows for the generation of a predicted 3D traffic sign model, which is in turn used to enhance the previously mentioned color-based detector through a feedback loop, thus improving detection accuracy. The proposed solution has been tested with real sequences under several illumination conditions, in both urban areas and on highways, achieving very high detection rates in challenging environments, including rapid motion and significant perspective distortion.
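As a rough illustration of the plane-fitting step, the sketch below fits a plane to a cloud of 3D points with RANSAC, tolerating the gross outliers that wrong SURF matches would produce. The iteration count and inlier tolerance are illustrative assumptions, not the paper's settings.

```python
# Minimal RANSAC plane fit: repeatedly fit a plane to 3 random points and
# keep the hypothesis supported by the most inliers.
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.05, rng=np.random.default_rng(0)):
    """points: (N, 3) array of 3D points. Returns (normal, d, inlier_mask)
    for the plane n.x + d = 0 supported by the most inliers."""
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                  # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ sample[0]
        dist = np.abs(points @ n + d)    # point-to-plane distances
        inliers = dist < inlier_tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Toy usage: noisy points near the plane z = 1, plus a few gross outliers
# standing in for wrong feature matches.
pts = np.column_stack([np.random.rand(100), np.random.rand(100),
                       1 + 0.01 * np.random.randn(100)])
pts[:10] += np.random.randn(10, 3) * 5
normal, d, mask = ransac_plane(pts)
print(normal, d, mask.sum())
```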

Relevance:

30.00%

Publisher:

Abstract:

The project you are about to see is based on the technologies used for object detection and recognition, especially of leaves and chromosomes. Accordingly, this document contains the typical parts of a scientific paper: an Abstract, an Introduction, sections dealing with the research area, future work, conclusions, and the references used in its elaboration. The Abstract describes what this paper covers, namely the technologies employed for pattern detection and recognition of leaves and chromosomes, and the existing work on cataloguing these objects. The Introduction explains the meanings of detection and recognition. This is necessary because many papers confuse these terms, especially those dealing with chromosomes. Detecting an object means gathering the parts of the image that are useful and eliminating the useless parts; in short, detection amounts to recognizing the object's borders. Recognition, in turn, is the process by which the computer or machine determines what kind of object it is handling. We then present a compilation of the technologies most used for object detection in general. There are two main groups in this category: those based on image derivatives and those based on ASIFT points. The methods based on image derivatives have in common that the image is processed by convolving it with a previously defined matrix. This is done to detect borders in the image, which are changes in pixel intensity. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity because they use only the first derivative, and Laplacian-based methods, which search for zero crossings of pixel intensity because they use the second derivative. We choose one option or the other depending on the level of detail needed in the final result: gradient-based methods require fewer operations, so they consume less time and fewer resources, but the quality is worse; Laplacian-based methods need more time and resources because they require more operations, but they produce a much higher-quality result. After explaining all the derivative-based methods, we review the different algorithms available for both groups. The other big group of technologies for object recognition is the one based on ASIFT points, which relies on six image parameters and compares images taking these parameters into account. The disadvantage of these methods, for our future purposes, is that they are only valid for one single object: if we are going to recognize two different leaves, even if they belong to the same species, we will not be able to recognize them with this method. It is nevertheless important to mention these technologies, since we are discussing recognition methods in general. At the end of the chapter there is a comparison of the pros and cons of all the technologies employed, first comparing them separately and then all together, with our purposes in mind. The next chapter, on recognition techniques, is not very extensive because, even though there are general steps for object recognition, every object to be recognized requires its own method, so there is no general method that can be specified in that chapter.
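As a minimal illustration of the two derivative-based families described above, the sketch below applies a first-derivative (Sobel) kernel and a second-derivative (Laplacian) kernel by convolution; edges show up as gradient-magnitude maxima in the first case and as zero crossings in the second. The toy image is an assumption for the demo.

```python
# Gradient-based (first derivative, Sobel) vs Laplacian-based (second
# derivative) edge detection, both implemented as small-kernel convolutions.
import numpy as np
from scipy.ndimage import convolve

image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0                # toy image: a bright square

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
sobel_y = sobel_x.T
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)

# Gradient-based: edge strength is the magnitude of the first derivative;
# edges appear where the magnitude peaks.
gx, gy = convolve(image, sobel_x), convolve(image, sobel_y)
grad_mag = np.hypot(gx, gy)

# Laplacian-based: edges appear at zero crossings of the second derivative.
lap = convolve(image, laplacian)

print("max gradient magnitude:", grad_mag.max())
print("Laplacian sign changes along row 16:",
      int(np.sum(np.diff(np.sign(lap[16])) != 0)))
```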
We now move on to leaf detection techniques on computers, using the image-derivative technique explained above. The next step is to turn the leaf into several parameters; depending on the document consulted, there are more or fewer of them. Some papers recommend dividing the leaf into 3 main features (shape, dent, and vein), from which mathematical operations yield up to 16 secondary features. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area, and perimeter) and extracts 12 secondary features from them. This second alternative is the most widely used, so it is the one taken as reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after the user clicks on both ends of the leaf, automatically reports the species to which the leaf belongs. To do so, it only requires a database. In the tests reported in that document, the authors claim an accuracy of 90.312% over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where we must pass from the metaphase plate, in which the chromosomes are disorganized, to the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and angle sweeping. Skeletonization consists of suppressing the interior pixels of the chromosome so as to keep only its silhouette; this method is very similar to those based on image derivatives, except that it detects not the borders but the interior of the chromosome. The second technique consists of sweeping angles from the beginning of the chromosome and, taking into account that a single chromosome cannot bend by more than some angle X, detecting the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the banding pattern (grey-scale bands) that makes each chromosome unique: the program detects the longitudinal axis of the chromosome and reconstructs its band profiles, after which the computer is able to recognize the chromosome. Concerning future work, we currently have two independent techniques that do not combine detection and recognition, so our main focus would be to prepare a program that gathers both. On the leaf side, we have seen that detection and recognition are linked, since both share the option of dividing the leaf into 5 main features. The work to be done is to create an algorithm linking both methods, because in the leaf recognition program both leaf ends have to be clicked, so it is not an automatic algorithm. On the chromosome side, we should create an algorithm that searches for the beginning of the chromosome and then starts to sweep angles, later passing the parameters to the program that searches for the band profiles. Finally, the summary explains why this type of research is needed: with global warming, many species (both animals and plants) are beginning to go extinct, which is why a large database gathering all possible species is needed. To recognize an animal species, it is enough to have its 23 chromosomes.
To recognize a plant there are several options, but the easiest way to get it into a computer is to scan one of its leaves.
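As a hedged sketch of the 5-main-feature scheme the text settles on, the code below computes diameter, physiological length, physiological width, area, and perimeter from a binary leaf mask. The exact feature definitions vary between papers, so the conventions used here (principal-axis extents, 4-neighbour perimeter) are illustrative choices, not the reference paper's definitions.

```python
# Compute the 5 main leaf features from a binary mask.
import numpy as np

def leaf_features(mask):
    """mask: 2D boolean array, True inside the leaf."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    area = float(mask.sum())                      # pixel count inside the leaf
    # Perimeter: leaf pixels with at least one 4-neighbour outside the leaf.
    padded = np.pad(mask, 1)
    nb = (padded[:-2, 1:-1].astype(int) + padded[2:, 1:-1] +
          padded[1:-1, :-2] + padded[1:-1, 2:])
    perimeter = float(np.sum(mask & (nb < 4)))
    # Diameter: largest pairwise distance, brute-forced on a subsample.
    sub = pts[:: max(1, len(pts) // 400)]
    d = np.linalg.norm(sub[:, None] - sub[None, :], axis=-1)
    diameter = float(d.max())
    # Physiological length/width: extents along the shape's principal axes.
    centered = pts - pts.mean(0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt.T
    length = float(np.ptp(proj[:, 0]))            # along the main axis
    width = float(np.ptp(proj[:, 1]))             # across it
    return dict(diameter=diameter, length=length, width=width,
                area=area, perimeter=perimeter)

# Toy usage: an ellipse standing in for a leaf.
yy, xx = np.mgrid[0:120, 0:80]
mask = ((xx - 40) / 25.0) ** 2 + ((yy - 60) / 50.0) ** 2 <= 1
print(leaf_features(mask))
```

The 12 secondary features mentioned in the text would then be derived from ratios and combinations of these 5 values.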

Relevance:

30.00%

Publisher:

Abstract:

In recent years, there has been growing interest in incorporating microgrids into electrical power networks, owing to the various advantages they present, particularly the ability to work either in autonomous mode or grid-connected, which makes them highly versatile structures for incorporating intermittent generation and energy storage. However, they pose a safety issue in that they are able to support a local island in case of utility disconnection. Thus, in the event of an unintentional islanding situation, they should be able to detect the loss of mains and disconnect for self-protection and safety reasons. Most anti-islanding schemes are implemented within the control of single generation devices, such as the dc-ac inverters used with solar electric systems, and are incompatible with the concept of microgrids because of the variety and multiplicity of sources within the microgrid. In this paper, a passive islanding detection method is presented, based on the change in the 5th harmonic voltage magnitude at the point of common coupling between the grid-connected and islanded modes of operation. Hardware test results from the application of this approach to a laboratory-scale microgrid are shown. The experimental results demonstrate the validity of the proposed method in meeting the requirements of the IEEE 1547 standard.
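A minimal sketch of the detection idea: estimate the 5th-harmonic magnitude of the PCC voltage window by window and flag an island when it jumps. The sampling rate, window length, and ratio threshold below are illustrative assumptions, not values from the paper.

```python
# Passive islanding detection from the 5th harmonic of the PCC voltage.
import numpy as np

F0 = 50.0                     # fundamental (Hz); 60 Hz in other grids
FS = 10_000.0                 # sampling rate (Hz), assumed

def fifth_harmonic_mag(v, fs=FS, f0=F0):
    """Magnitude of the 5th harmonic over one analysis window (single DFT bin)."""
    t = np.arange(len(v)) / fs
    ref = np.exp(-2j * np.pi * 5 * f0 * t)   # correlate with 250 Hz
    return 2 * abs(np.mean(v * ref))

def islanding_detector(windows, ratio_threshold=2.0):
    """Flag when the 5th-harmonic magnitude jumps between consecutive windows."""
    mags = [fifth_harmonic_mag(w) for w in windows]
    flags = [m2 > ratio_threshold * m1 for m1, m2 in zip(mags, mags[1:])]
    return mags, flags

# Toy usage: grid-connected window (small 5th harmonic) vs islanded window
# (larger 5th harmonic, since the stiff grid no longer clamps the PCC voltage).
t = np.arange(0, 0.2, 1 / FS)
v_grid = np.sin(2 * np.pi * F0 * t) + 0.01 * np.sin(2 * np.pi * 5 * F0 * t)
v_island = np.sin(2 * np.pi * F0 * t) + 0.05 * np.sin(2 * np.pi * 5 * F0 * t)
mags, flags = islanding_detector([v_grid, v_island])
print(mags, flags)            # magnitude roughly 0.01 -> 0.05, flag True
```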

Relevance:

30.00%

Publisher:

Abstract:

Video analytics play a critical role in most recent traffic monitoring and driver assistance systems. In this context, the correct detection and classification of surrounding vehicles through image analysis has been the focus of extensive research in recent years. Most of the work reported on image-based vehicle verification makes use of supervised classification approaches and resorts to techniques such as histograms of oriented gradients (HOG), principal component analysis (PCA), and Gabor filters, among others. Unfortunately, existing approaches are lacking in two respects: first, comparison between methods using a common body of work has not been addressed; second, no study of the potential of combining popular features for vehicle classification has been reported. In this study, the performance of the different techniques is first reviewed and compared using a common public database. Then, the combination capabilities of these techniques are explored and a methodology is presented for the fusion of classifiers built upon them, also taking the vehicle pose into account. The study unveils the limitations of single-feature-based classification and makes clear that fusion of classifiers is highly beneficial for vehicle verification.
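The sketch below illustrates the kind of pipeline compared in the study: HOG features feeding a supervised classifier, alongside a second feature (here a PCA projection of raw pixels), with simple score-level fusion of the two classifiers. The stand-in data, classifier choice, and fusion weights are assumptions for illustration, not the paper's protocol.

```python
# Two feature-specific classifiers (HOG and PCA features) fused at score level.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in data: 64x64 grayscale patches, label 1 = vehicle, 0 = background.
X_img = rng.random((200, 64, 64))
y = rng.integers(0, 2, 200)

# Feature 1: HOG descriptors.
X_hog = np.array([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                  for im in X_img])
# Feature 2: PCA projection of raw pixels.
X_pca = PCA(n_components=20).fit_transform(X_img.reshape(len(X_img), -1))

clf_hog = SVC(probability=True).fit(X_hog[:150], y[:150])
clf_pca = SVC(probability=True).fit(X_pca[:150], y[:150])

# Score-level fusion: weighted average of the two classifiers' probabilities.
p = 0.6 * clf_hog.predict_proba(X_hog[150:])[:, 1] \
  + 0.4 * clf_pca.predict_proba(X_pca[150:])[:, 1]
print("fused accuracy:", np.mean((p > 0.5) == y[150:]))
```

In a pose-aware variant, as the study suggests, one would train separate classifiers (or fusion weights) per vehicle pose and select them according to the estimated pose.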

Relevance:

30.00%

Publisher:

Abstract:

This thesis focuses on the analysis of two complementary aspects of cybercrime, that is, crime perpetrated over the network for financial gain. These two aspects are the infected machines used to obtain economic profit from crime through different actions (such as click fraud, DDoS, and spam) and the infrastructure of servers used to manage these machines (e.g., C&C servers, exploit servers, monetization servers, and redirectors). The first part investigates the threat exposure of victim computers. For this analysis we used the metadata contained in WINE-BR, a Symantec dataset that holds installation metadata of executable files (e.g., file hash, publisher, installation date, filename, and file version) coming from 8.4 million Windows users. We associated this metadata with the vulnerabilities in the National Vulnerability Database (NVD) and the Open Sourced Vulnerability Database (OSVDB) in order to track vulnerability decay over time and to observe how quickly users patch their systems and, therefore, their exposure to possible attacks. We identified 3 factors that can influence the patching activity of victim computers: shared code, user type, and exploits. We present 2 novel attacks against shared code and an analysis of how user knowledge and exploit availability influence patching activity. For the 80 vulnerabilities in our database that affect code shared between two applications, the time between patch releases for the different applications is up to 118 days (with a median of 11 days). The second part proposes new active probing techniques to detect and analyze malicious server infrastructures. We leverage active probing techniques to detect malicious servers on the Internet, beginning with the analysis and detection of exploit server operations; as an operation we identify the servers that are controlled by the same people and possibly take part in the same infection campaign. We analyzed a total of 500 exploit servers over a period of 1 year, in which 2/3 of the operations had a single server and 1/3 had multiple servers. We extended the exploit-server detection technique to other server types (e.g., C&C servers, monetization servers, and redirectors) and achieved Internet-scale probing for the different categories of malicious servers. These new techniques have been incorporated into a new tool called CyberProbe. To detect these servers we developed a novel technique called Adversarial Fingerprint Generation, a methodology for generating a unique request-response model that identifies the server family (that is, the type and operation to which the server belongs). Starting from a malware file and a live server of a given family, CyberProbe can generate a valid fingerprint to detect all live servers of that family. We performed 11 Internet-wide scans, detecting 151 malicious servers; of these 151 servers, 75% are unknown to public databases of malicious servers.
Another issue that arises when detecting malicious servers is that some of these servers may be hidden behind a silent reverse proxy. To identify the prevalence of this network configuration and to improve the capabilities of CyberProbe, we developed RevProbe, a new tool that detects reverse proxies by leveraging leakage in the configuration of reverse web proxies. RevProbe identifies that 16% of the active malicious IP addresses analyzed correspond to reverse proxies, that 92% of them are silent (compared with 55% of benign reverse proxies), and that they are mainly used for load balancing across multiple servers. ABSTRACT In this dissertation we investigate two fundamental aspects of cybercrime: the infection of machines used to monetize the crime and the malicious server infrastructures that are used to manage the infected machines. In the first part of this dissertation, we analyze how fast software vendors apply patches to secure client applications, identifying shared code as an important factor in patch deployment. Shared code is code present in multiple programs. When a vulnerability affects shared code, the usual linear vulnerability life cycle is no longer adequate to describe how patch deployment takes place. In this work we show the consequences of shared code vulnerabilities and we demonstrate two novel attacks that can be used to exploit this condition. In the second part of this dissertation we analyze malicious server infrastructures; our contributions are: a technique to cluster exploit server operations, a tool named CyberProbe to perform large-scale detection of different malicious server categories, and RevProbe, a tool that detects silent reverse proxies. We start by identifying exploit server operations, that is, exploit servers managed by the same people. We investigate a total of 500 exploit servers over a period of more than 13 months. We have collected malware from these servers and all the metadata related to the communication with the servers. Thanks to this metadata we have extracted different features to group together servers managed by the same entity (i.e., an exploit server operation), and we have discovered that 2/3 of the operations have a single server while 1/3 have multiple servers. Next, we present CyberProbe, a tool that detects different malicious server types through a novel technique called adversarial fingerprint generation (AFG). The idea behind CyberProbe's AFG is to run a piece of malware and observe its network communication towards malicious servers. It then replays this communication to the malicious server and outputs a fingerprint (i.e., a port selection function, a probe generation function, and a signature generation function). Once the fingerprint is generated, CyberProbe scans the Internet with it and finds all the servers of a given family. We have performed a total of 11 Internet-wide scans, finding 151 new servers starting from 15 seed servers. This gives CyberProbe a 10-fold amplification factor. Moreover, we have compared CyberProbe with existing blacklists on the Internet, finding that only 40% of the servers detected by CyberProbe were listed. To enhance the capabilities of CyberProbe we have developed RevProbe, a reverse proxy detection tool that can be integrated with CyberProbe to allow precise detection of silent reverse proxies used to hide malicious servers.
RevProbe leverages leakage-based detection techniques to determine whether a malicious server is hidden behind a silent reverse proxy and to reveal the infrastructure of servers behind it. At the core of RevProbe is the analysis of differences in the traffic obtained by interacting with a remote server.
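The abstract describes a CyberProbe fingerprint as a triple of functions: port selection, probe generation, and signature generation. The sketch below mirrors that abstraction in Python; the probe bytes, path, and response header are invented placeholders for a hypothetical family, not CyberProbe's actual code or fingerprint format.

```python
# Illustrative fingerprint-driven scanner: send a family-specific probe to
# each target and classify the response with a signature function.
import re
import socket
from dataclasses import dataclass
from typing import Callable

@dataclass
class Fingerprint:
    family: str
    select_port: Callable[[str], int]        # target -> port to probe
    make_probe: Callable[[], bytes]          # replayed request bytes
    match_response: Callable[[bytes], bool]  # signature over the reply

def scan(targets, fp, timeout=3.0):
    """Send the family's probe to each target and report signature matches."""
    hits = []
    for host in targets:
        try:
            with socket.create_connection((host, fp.select_port(host)),
                                          timeout=timeout) as s:
                s.sendall(fp.make_probe())
                reply = s.recv(4096)
            if fp.match_response(reply):
                hits.append(host)
        except OSError:
            continue                          # host down / filtered / reset
    return hits

# Hypothetical family: an HTTP-speaking C&C that echoes a telltale header.
fp = Fingerprint(
    family="example-family",
    select_port=lambda host: 80,
    make_probe=lambda: b"GET /gate.php HTTP/1.1\r\nHost: x\r\n\r\n",
    match_response=lambda r: re.search(rb"X-Bot-Task:", r) is not None,
)
print(scan(["192.0.2.1"], fp))                # documentation-range IP, no hit
```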

Relevance:

30.00%

Publisher:

Abstract:

During protein synthesis, the two elongation factors Tu and G alternately bind to the 50S ribosomal subunit at a site of which the protein L7/L12 is an essential component. L7/L12 is present in each 50S subunit in four copies organized as two dimers. Each dimer consists of distinct domains: a single N-terminal (“tail”) domain that is responsible for both dimerization and binding to the ribosome via interaction with the protein L10, and two independent globular C-terminal domains (“heads”) that are required for binding of elongation factors to ribosomes. The two heads are connected to the N-terminal domain by flexible hinge sequences. The presence of two dimers, with two heads per dimer, and their dynamic, mobile properties pose important questions about the mechanism by which L7/L12 interacts with elongation factors. In an attempt to answer these questions, we constructed a single-headed dimer of L7/L12 by using recombinant DNA techniques and chemical cross-linking. This chimeric molecule was added to inactive core particles lacking wild-type L7/L12 and shown to restore activity to a level approaching that of wild-type two-headed L7/L12.

Relevance:

30.00%

Publisher:

Abstract:

By using molecular dynamics simulations, we have examined the binding of a hexaNAG substrate and two potential hydrolysis intermediates (an oxazoline ion and an oxocarbenium ion) to a family 19 barley chitinase. We find that the hexaNAG substrate binds with all sugars in a chair conformation, unlike the family 18 chitinase, which causes substrate distortion. Glu 67 is in a position to protonate the anomeric oxygen linking sugar residues D and E, whereas Asn 199 serves to hydrogen bond with the C2′ N-acetyl group of sugar D, thus preventing the formation of an oxazoline ion intermediate. In addition, Glu 89 is part of a flexible loop region, allowing a conformational change within the active site that brings the oxocarbenium ion intermediate and Glu 89 closer by 4–5 Å. A hydrolysis product with inversion of the anomeric configuration occurs because of nucleophilic attack by a water molecule that is coordinated by Glu 89 and Ser 120. Issues important for the design of inhibitors specific to family 19 chitinases over family 18 chitinases are also discussed.

Relevance:

30.00%

Publisher:

Abstract:

A single-chain Fv (scFv) fusion phage library derived from random combinations of VH and VL (variable heavy and light chains) domains in the antibody repertoire of a vaccinated melanoma patient was previously used to isolate clones that bind specifically to melanoma cells. An unexpected finding was that one of the clones encoded a truncated scFv molecule with most of the VL domain deleted, indicating that a VH domain alone can exhibit tumor-specific binding. In this report a VH fusion phage library containing VH domains unassociated with VL domains was compared with a scFv fusion phage library as a source of melanoma-specific clones; both libraries contained the same VH domains from the vaccinated melanoma patient. The results demonstrate that the clones can be isolated from both libraries, and that both libraries should be used to optimize the chance of isolating clones binding to different epitopes. Although this strategy has been tested only for melanoma, it is also applicable to other cancers. Because of their small size, human origin and specificity for cell surface tumor antigens, the VH and scFv molecules have significant advantages as tumor-targeting molecules for diagnostic and therapeutic procedures and can also serve as probes for identifying the cognate tumor antigens.

Relevance:

30.00%

Publisher:

Abstract:

Single light-harvesting complexes LH-2 from Rhodopseudomonas acidophila were immobilized on various charged surfaces under physiological conditions. Polarized light experiments showed that the complexes were situated on the surface as nearly upright cylinders. Their fluorescence lifetimes and photobleaching properties were obtained by using a confocal fluorescence microscope with picosecond time resolution. Initially all molecules fluoresced with a lifetime of 1 ± 0.2 ns, similar to the bulk value. The photobleaching of one bacteriochlorophyll molecule from the 18-member assembly caused the fluorescence to switch off completely, because of trapping of the mobile excitations by energy transfer. This process was linear in light intensity. On continued irradiation the fluorescence often reappeared, but all molecules did not show the same behavior. Some LH-2 complexes displayed a variation of their quantum yields that was attributed to photoinduced confinement of the excited states and thereby a diminution of the superradiance. Others showed much shorter lifetimes caused by excitation energy traps that are only ≈3% efficient. On repeated excitation some molecules entered a noisy state where the fluorescence switched on and off with a correlation time of ≈0.1 s. About 490 molecules were examined.

Relevance:

30.00%

Publisher:

Abstract:

Measurements of the fluorescence lifetimes of dye-tagged DNA molecules reveal the existence of different conformations. Conformational fluctuations observed by fluorescence correlation spectroscopy give rise to a relaxation behavior that is described by “stretched” exponentials and indicates the presence of a distribution of transition rates between two conformations. Whether this is an inhomogeneous distribution, where each molecule contributes with its own reaction rate to the overall distribution, or a homogeneous distribution, where the reaction rate of each molecule is time-dependent, is not yet known. We used a tetramethylrhodamine-linked 217-bp DNA oligonucleotide as a probe for conformational fluctuations. Fluorescence fluctuations from single DNA molecules attached to a streptavidin-coated surface directly show the transitions between two conformational states. The conformational fluctuations typical of single molecules are similar to those seen in single ion channels in cell membranes.
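The “stretched” exponential relaxation referred to here is conventionally written in the standard Kohlrausch form (a textbook expression, not an equation given in this abstract):

```latex
% Stretched-exponential relaxation of the correlation decay:
% beta = 1 recovers a single-rate (two-state) exponential, while
% beta < 1 indicates a distribution of transition rates.
\[
  C(t) \;=\; C_0 \, \exp\!\left[-\left(\frac{t}{\tau}\right)^{\beta}\right],
  \qquad 0 < \beta \le 1
\]
```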

Relevance:

30.00%

Publisher:

Abstract:

Stimulation of antitumor immune mechanisms is the primary goal of cancer immunotherapy, and accumulating evidence suggests that effective alteration of the host–tumor relationship involves immunomodulating cytokines and also the presence of costimulatory molecules. To examine the antitumor effect of direct in vivo gene transfer of murine interleukin 12 (IL-12) and B7-1 into tumors, we developed an adenovirus (Ad) vector, AdIL12–B7-1, that encodes the two IL-12 subunits in early region 1 (E1) and the B7-1 gene in E3 under control of the murine cytomegalovirus promoter. This vector expressed high levels of IL-12 and B7-1 in infected murine and human cell lines and in primary murine tumor cells. In mice bearing tumors derived from a transgenic mouse mammary adenocarcinoma, a single intratumoral injection with a low dose (2.5 × 107 pfu/mouse) of AdIL12–B7-1 mediated complete regression in 70% of treated animals. By contrast, administration of a similar dose of recombinant virus encoding IL-12 or B7-1 alone resulted in only a delay in tumor growth. Interestingly, coinjection of two different viruses expressing either IL-12 or B7-1 induced complete tumor regression in only 30% of animals treated at this dose. Significantly, cured animals remained tumor free after rechallenge with fresh tumor cells, suggesting that protective immunity had been induced by treatment with AdIL12–B7-1. These results support the use of Ad vectors as a highly efficient delivery system for synergistically acting molecules and show that the combination of IL-12 and B7-1 within a single Ad vector might be a promising approach for in vivo cancer therapy.

Relevance:

30.00%

Publisher:

Abstract:

The ability to use a vital cell marker to study mouse embryogenesis will open new avenues of experimental research. Recently, the use of transgenic mice, containing multiple copies of the jellyfish gene encoding the green fluorescent protein (GFP), has begun to realize this potential. Here, we show that the fluorescent signals produced by single-copy, targeted GFP in-frame fusions with two different murine Hox genes, Hoxa1 and Hoxc13, are readily detectable by using confocal microscopy. Since Hoxa1 is expressed early and Hoxc13 is expressed late in mouse embryogenesis, this study shows that single-copy GFP gene fusions can be used through most of mouse embryogenesis. Previously, targeted lacZ gene fusions have been very useful for analyzing mouse mutants. Use of GFP gene fusions extends the benefits of targeted lacZ gene fusions by providing the additional utility of a vital marker. Our analysis of the Hoxc13GFPneo embryos reveals GFP expression in each of the sites expected from analysis of Hoxc13lacZneo embryos. Similarly, Hoxa1GFPneo expression was detected in all of the sites predicted from RNA in situ analysis. GFP expression in the foregut pocket of Hoxa1GFPneo embryos suggests a role for Hoxa1 in foregut-mediated differentiation of the cardiogenic mesoderm.

Relevance:

30.00%

Publisher:

Abstract:

Resistance to organophosphorus (OP) insecticides is associated with decreased carboxylesterase activity in several insect species. It has been proposed that the resistance may be the result of a mutation in a carboxylesterase that simultaneously reduces its carboxylesterase activity and confers an OP hydrolase activity (the “mutant ali-esterase hypothesis”). In the sheep blowfly, Lucilia cuprina, the association is due to a change in a specific esterase isozyme, E3, which, in resistant flies, has a null phenotype on gels stained using standard carboxylesterase substrates. Here we show that an OP-resistant allele of the gene that encodes E3 differs at five amino acid replacement sites from a previously described OP-susceptible allele. Knowledge of the structure of a related enzyme (acetylcholinesterase) suggests that one of these substitutions (Gly137 → Asp) lies within the active site of the enzyme. The occurrence of this substitution is completely correlated with resistance across 15 isogenic strains. In vitro expression of two natural and two synthetic chimeric alleles shows that the Asp137 substitution alone is responsible for both the loss of E3’s carboxylesterase activity and the acquisition of a novel OP hydrolase activity. Modeling of Asp137 in the homologous position in acetylcholinesterase suggests that Asp137 may act as a base to orientate a water molecule in the appropriate position for hydrolysis of the phosphorylated enzyme intermediate.

Relevance:

30.00%

Publisher:

Abstract:

Nucleic acid sequence-based amplification (NASBA) has proved to be an ultrasensitive method for HIV-1 diagnosis in plasma, even in the primary HIV infection stage. This technique was combined with fluorescence correlation spectroscopy (FCS), which enables online detection of the HIV-1 RNA molecules amplified by NASBA. A fluorescently labeled DNA probe at nanomolar concentration was introduced into the NASBA reaction mixture, hybridizing to a distinct sequence of the amplified RNA molecule. The specific hybridization and extension of this probe during the amplification reaction, resulting in an increase of its diffusion time, was monitored online by FCS. As a consequence, after a critical concentration of 0.1–1 nM (the threshold for unaided FCS detection) had been reached, the number of amplified RNA molecules in the further course of the reaction could be determined. Evaluation of the hybridization/extension kinetics allowed an estimation of the initial HIV-1 RNA concentration present at the beginning of amplification. The initial HIV-1 RNA number enables discrimination between positive and false-positive samples (caused, for instance, by carryover contamination); this possibility of discrimination is an essential requirement for all diagnostic methods using amplification systems (PCR as well as NASBA). Quantitation of HIV-1 RNA in plasma by the combination of NASBA with FCS may also be useful in assessing the efficacy of anti-HIV agents, especially in the early infection stage, when standard ELISA antibody tests often give negative results.
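One way to picture how the kinetics yield the initial concentration: under a simple exponential amplification model, the time at which the amplified RNA crosses the FCS detection threshold back-determines the starting copy number. The doubling time, threshold, and reaction volume in this sketch are assumed values for illustration, not parameters reported for NASBA in this abstract.

```python
# Back-calculate the initial template number from the threshold-crossing time,
# assuming exponential amplification N(t) = N0 * 2**(t / tau).
import math

AVOGADRO = 6.022e23

def initial_copies(t_threshold_min, doubling_time_min=2.5,
                   threshold_nM=0.1, volume_uL=20.0):
    """Estimate the initial RNA copy number N0 from the threshold time."""
    # Copies corresponding to the FCS detection threshold concentration.
    n_threshold = threshold_nM * 1e-9 * volume_uL * 1e-6 * AVOGADRO
    # Invert N(t) = N0 * 2**(t/tau):  N0 = N_threshold / 2**(t/tau).
    return n_threshold / 2 ** (t_threshold_min / doubling_time_min)

# Earlier threshold crossing implies more starting template.
for t in (40, 50, 60):
    print(f"threshold at {t} min -> ~{initial_copies(t):.2e} initial copies")
```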