891 results for Pairing pattern
Abstract:
In this paper, we analyze the performance of several well-known pattern recognition and dimensionality reduction techniques when applied to mass-spectrometry data for odor biometric identification. Motivated by the successful results of previous works capturing the odor from other parts of the body, this work attempts to evaluate the feasibility of identifying people by the odor emanating from the hands. Formulated as a machine learning task, the problem is a small-sample-size supervised classification problem in which the input data consist of mass spectrograms from the hand odor of 13 subjects captured in different sessions. The high dimensionality of the data makes it necessary to apply feature selection and extraction techniques together with a simple classifier in order to improve the generalization capabilities of the model. Our experimental results achieve recognition rates of over 85%, revealing that discriminatory information exists in hand odor and pointing to body odor as a promising biometric identifier.
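The small-sample pipeline described above (feature selection followed by a simple classifier, evaluated with cross-validation) can be sketched in a few lines. Everything below is a toy illustration with synthetic data standing in for the mass spectrograms; the feature scorer, the nearest-centroid classifier, and all sizes are assumptions, not the paper's actual method:

```python
# Toy sketch of feature selection + a simple classifier for a
# small-sample, high-dimensional classification problem, evaluated
# with leave-one-out cross-validation (hypothetical data throughout).
import random

def select_features(X, y, k):
    """Rank features by between-class mean separation and keep the top k."""
    n_feat = len(X[0])
    classes = sorted(set(y))
    scores = []
    for j in range(n_feat):
        means = [sum(x[j] for x, c in zip(X, y) if c == cl) /
                 sum(1 for c in y if c == cl) for cl in classes]
        scores.append((max(means) - min(means), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

def nearest_centroid(X_train, y_train, x, feats):
    """Classify x by the closest class centroid in the selected subspace."""
    best, best_d = None, float("inf")
    for cl in sorted(set(y_train)):
        members = [m for m, c in zip(X_train, y_train) if c == cl]
        centroid = [sum(m[j] for m in members) / len(members) for j in feats]
        d = sum((x[j] - centroid[i]) ** 2 for i, j in enumerate(feats))
        if d < best_d:
            best, best_d = cl, d
    return best

# Synthetic stand-in for spectrograms: 2 subjects, 6 samples each, 50 features.
random.seed(0)
X, y = [], []
for subject in (0, 1):
    for _ in range(6):
        x = [random.gauss(0, 1) for _ in range(50)]
        x[3] += 3 * subject          # one informative feature
        X.append(x); y.append(subject)

feats = select_features(X, y, 5)
hits = sum(nearest_centroid(X[:i] + X[i+1:], y[:i] + y[i+1:], X[i], feats) == y[i]
           for i in range(len(X)))
print(hits / len(X))  # leave-one-out accuracy
```

Note that for a real study the feature selection would be refit inside each cross-validation fold to avoid information leakage; it is done once here only to keep the sketch short.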
Abstract:
This project is based on the technologies used in object detection and recognition, especially for leaves and chromosomes. The document contains the typical parts of a scientific paper: an abstract, an introduction, sections covering the research area, future work, conclusions, and the references used in its elaboration. The abstract describes what this paper covers: the technologies employed in pattern detection and recognition for leaves and chromosomes, and the existing work on cataloguing these objects. The introduction explains the meanings of detection and recognition. This is necessary because many papers confuse these terms, especially those dealing with chromosomes. Detecting an object means gathering the parts of the image that are useful and eliminating the useless parts; in short, detection amounts to recognizing the object's borders. Recognition, in contrast, refers to the process by which the computer or machine decides what kind of object it is handling. Afterwards we present a compilation of the most widely used technologies for object detection in general. There are two main groups in this category: those based on image derivatives and those based on ASIFT points. The methods based on image derivatives have in common that the image is processed by convolving it with a previously created matrix. This is done to detect edges in the image, which are changes in pixel intensity. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity because they use only the first derivative, and Laplacian-based methods, which search for zero crossings of pixel intensity because they use the second derivative.
Depending on the level of detail wanted in the final result, one option or the other is chosen: gradient-based methods consume fewer resources and less time, since there are fewer operations, but the quality is worse; Laplacian-based methods need more time and resources, since they require more operations, but give a much better-quality result. After explaining the derivative-based methods, we review the different algorithms available for both groups. The other big group of technologies for object recognition is based on ASIFT points; these describe an image by 6 parameters and compare it with another image according to those parameters. The disadvantage of these methods, for our future purposes, is that they are only valid for one single object: if we try to recognize two different leaves, even of the same species, this method will not be able to recognize both. It is still important to mention these technologies, since we are discussing recognition methods in general. At the end of the chapter there is a comparison of the pros and cons of all the technologies employed, first comparing them separately and then comparing them all together, based on our purposes. The next chapter, on recognition techniques, is not very extensive because, even though there are general steps for object recognition, every object to be recognized needs its own method, as each is different; this is why no general method can be specified in that chapter. We then move on to leaf detection techniques on computers, using the derivative-based technique explained above. The next step is to turn the leaf into several parameters; depending on the document consulted, there will be more or fewer parameters.
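The gradient-versus-Laplacian distinction described above can be illustrated on a tiny synthetic image with a vertical edge. The kernels below are the standard Sobel mask (first derivative) and the 3x3 Laplacian mask (second derivative); as is common in image libraries, the kernel is applied without flipping (cross-correlation), which for these symmetric masks affects at most the sign:

```python
# Gradient vs. Laplacian edge response on a synthetic vertical edge.
def convolve2d(img, k):
    """Valid-mode 2D filtering (no padding, kernel not flipped)."""
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + a][j + b] * k[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]    # gradient: extrema at edges
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]    # 2nd derivative: zero crossings

# 5x6 image: dark left half, bright right half -> vertical edge in the middle.
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]

grad = convolve2d(img, SOBEL_X)
lap = convolve2d(img, LAPLACIAN)
print(grad[0])  # gradient response peaks across the edge
print(lap[0])   # Laplacian response changes sign across the edge
```

The gradient response has a maximum straddling the edge, while the Laplacian response passes through zero between a positive and a negative lobe, which is exactly the difference between searching for extrema and searching for zero crossings.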
Some papers recommend dividing the leaf into 3 main features (shape, dent and vein), from which mathematical operations yield up to 16 secondary features. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter) and extracts 12 secondary features from those. This second alternative is the most widely used, so it is taken as the reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after the user clicks on both ends of the leaf, automatically reports the species to which the leaf belongs; it only requires a database. In the tests reported in that document, the authors claim an accuracy of 90.312% over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, in which the metaphase plate, where the chromosomes are disorganized, must be converted into the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of technique for this step: the skeletonization process and angle sweeping. The skeletonization process consists of suppressing the interior pixels of the chromosome to keep only the silhouette. This method is very similar to the image-derivative ones, but the difference is that it detects not the borders but the interior of the chromosome. The second technique sweeps angles from the beginning of the chromosome and, taking into account that a single chromosome cannot bend by more than a certain angle X, detects the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the banding that chromosomes have (grey-scale bands), which makes each of them unique. The program detects the longitudinal axis of the chromosome and reconstructs the band profiles.
The computer is then able to recognize the chromosome. Concerning future work, we generally have two independent techniques that do not combine detection and recognition, so the main goal would be to build a program that brings both together. On the leaf side, we have seen that detection and recognition are linked, since both share the option of dividing the leaf into 5 main features. The work to be done is to create an algorithm linking both methods, because in the recognition program both leaf ends must be clicked, so the algorithm is not automatic. On the chromosome side, an algorithm should be created that finds the beginning of the chromosome and then starts sweeping angles, later passing the parameters to the program that searches for the band profiles. Finally, the summary explains why this type of research is needed: with global warming, many species (animals and plants) are beginning to go extinct, which is why a large database gathering all possible species is needed. To recognize an animal species, it is enough to have its 23 chromosomes, while to recognize a plant there are several options, the easiest input for a computer being a scan of a leaf of the plant.
Abstract:
The biggest problem when analyzing the brain is that its synaptic connections are extremely complex. Generally, the billions of neurons making up the brain exchange information through two types of highly specialized structures: chemical synapses (the vast majority) and so-called gap junctions (a substrate of one class of electrical synapse). Here we are interested in exploring the three-dimensional spatial distribution of chemical synapses in the cerebral cortex. Recent research has shown that the three-dimensional spatial distribution of synapses in layer III of the neocortex can be modeled by a random sequential adsorption (RSA) point process, i.e., synapses are distributed in space almost randomly, with the only constraint that they cannot overlap. In this study we hypothesize that RSA processes can also explain the distribution of synapses in all cortical layers. We also investigate whether there are differences in both the synaptic density and spatial distribution of synapses between layers. Using combined focused ion beam milling and scanning electron microscopy (FIB/SEM), we obtained three-dimensional samples from the six layers of the rat somatosensory cortex and identified and reconstructed the synaptic junctions. A total tissue volume of approximately 4500 μm³ and around 4000 synapses from three different animals were analyzed. Different samples, layers and/or animals were aggregated and compared using RSA replicated spatial point processes. The results showed no significant differences in the synaptic distribution across the different rats used in the study. We found that RSA processes described the spatial distribution of synapses in all samples of each layer. We also found that the synaptic distribution in layers II to VI conforms to a common underlying RSA process with different densities per layer. Interestingly, the results showed that synapses in layer I had a slightly different spatial distribution from the other layers.
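The RSA process named in this abstract is easy to sketch: propose points uniformly at random in a volume and accept each one only if it does not overlap any previously accepted point. The box size and exclusion radius below are arbitrary illustration values, not the paper's measured synaptic parameters:

```python
# Minimal random sequential adsorption (RSA) sketch in a 3D box:
# points are accepted only if they keep a minimum distance (no overlap)
# from all previously accepted points.
import random

def rsa_3d(n_target, box, r_min, max_tries=100000, seed=1):
    random.seed(seed)
    pts, tries = [], 0
    while len(pts) < n_target and tries < max_tries:
        tries += 1
        p = tuple(random.uniform(0, box) for _ in range(3))
        # hard-core constraint: reject if too close to any accepted point
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) >= r_min ** 2
               for q in pts):
            pts.append(p)
    return pts

points = rsa_3d(n_target=200, box=10.0, r_min=0.8)
# verify that every accepted pair respects the exclusion distance
min_d2 = min(sum((a - b) ** 2 for a, b in zip(p, q))
             for i, p in enumerate(points) for q in points[i + 1:])
print(len(points), min_d2 >= 0.8 ** 2)
```

Apart from the exclusion constraint, the accepted points are spatially random, which is the sense in which the paper says synapses are distributed "almost randomly".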
Abstract:
Interest in commercially farmed rabbit welfare has increased in recent years. As a result, new alternative housing systems have been developed, although they require evaluation in order to demonstrate their potential for improving welfare. The aim of this trial was to study the behavioural traits of rabbit does housed in 2 different types of cage (TC), conventional vs. alternative with an elevated platform, at different physiological stages (PS), lactation and gestation. Behavioural observations were carried out on 12 commercial rabbit does using continuous 24 h video recording. Independently of PS and TC, rabbit does spent most of their time on foot mats (on av. 57.7%). However, due to the use of platforms (on av. 23.0% of time), lactating does spent 36.6% less time on foot mats (P<0.001) and gestating does spent 27.0% less time on wire mesh (P<0.001) in alternative cages than in conventional cages. Alternative cages allowed for a standing posture, but this behaviour was only observed in gestating does (on av. 4.6 times a day). Frequency of drinking was higher in conventional than in alternative cages (24.6 vs. 19.1 times a day; P<0.05). Gestating does housed in conventional cages reached the highest duration and frequency of interacting with neighbours (276 s/d and 4.6 times/d; P<0.05). The frequency of interacting with kits was lower in alternative than in conventional cages (2.4 vs. 8.6 times a day; P<0.01). Doe behaviour was influenced by the time of day, with less activity during the midday hours. During dark hours, rabbit does more frequently performed restless behaviours such as hyperactivity or nursing, matching the times at which they spent more time on the platform. The platform was frequently used by rabbit does regardless of their physiological stage, and during the late lactation phase, when mothers were not receptive to nursing, does housed in alternative cages used the platform as a means to flee from kits trying to suckle.
Use of the platform might lead to hygiene problems due to faeces retained on the platform and to faeces and urine falling onto animals located in the lower part of the cage. The absence of stereotypies in the rabbit does in this trial suggested that animal welfare was not compromised by either type of housing (conventional or alternative cages).
Abstract:
Diabetes encompasses a series of metabolic diseases characterized by abnormally high blood glucose concentrations. In the case of type 1 diabetes (T1D), this situation is caused by a total absence of endogenous insulin secretion, which impedes the use of glucose by most tissues. In these circumstances, exogenous insulin supplies are necessary to maintain the patient's life, although caution is always needed to avoid acute decays in glycaemia below safe levels. In addition to insulin administration, meal intakes and physical activity are fundamental factors influencing glucose homoeostasis. Consequently, successful management of T1D should incorporate these two physiological phenomena, based on an appropriate identification and modelling of these events and their corresponding effects on the glucose-insulin balance. In particular, artificial pancreas systems, designed to perform automated control of the patient's glycaemia levels, may benefit from the integration of this type of information. The first part of this PhD thesis covers the characterization of the acute effect of physical activity on glucose profiles. With this aim, a systematic review of the literature and meta-analyses are conducted to determine responses to various exercise modalities in patients with T1D, assessed via rate-of-change magnitudes that quantify temporal variations in glycaemia.
On the other hand, a reliable identification of physical activity periods is an essential prerequisite to feed artificial pancreas systems with information concerning exercise in ambulatory, free-living conditions. For this reason, the second part of this thesis focuses on the proposal and evaluation of an automatic system devised to recognize physical activity, classifying its intensity level (light, moderate or vigorous) and, for vigorous periods, also identifying the exercise modality (aerobic, mixed or resistance); both aspects have a distinctive influence on the predominant metabolic pathway involved in fuelling exercise and, therefore, on the glycaemic responses in T1D. Various combinations of machine learning and pattern recognition techniques are applied to the fusion of multi-modal signal sources, namely accelerometry and heart rate measurements, which describe both the mechanical aspects of movement and the physiological response of the cardiovascular system to exercise. An additional temporal filtering module is incorporated after recognition in order to exploit the considerable temporal coherence (i.e. redundancy) present in the data, which stems from the fact that, in practice, physical activity trends are often maintained stably over time instead of fluctuating rapidly and repeatedly. The third block of this PhD thesis addresses meal intakes in the context of T1D. In particular, a number of compartmental models are proposed and compared in terms of their ability to describe mathematically the remote effect of exogenous plasma insulin concentrations on the disposal rates of meal-attributable glucose, an aspect which had not yet been incorporated into the prevailing T1D patient models in the literature. Data were acquired in an experiment conducted at the Institute of Metabolic Science (University of Cambridge, UK) on 16 young patients.
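The temporal-filtering step described above can be sketched as a sliding majority vote over the per-window activity labels; the label names, the window length, and the filter itself are illustrative assumptions, not necessarily the module the thesis actually uses:

```python
# Sliding majority-vote smoothing of per-window activity labels,
# exploiting the temporal coherence of physical activity (hypothetical
# labels and window size).
from collections import Counter

def majority_filter(labels, half_width=2):
    """Replace each label by the most frequent label in its neighbourhood."""
    out = []
    for i in range(len(labels)):
        lo, hi = max(0, i - half_width), min(len(labels), i + half_width + 1)
        out.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return out

raw = ["light", "light", "vigorous", "light", "light",
       "moderate", "moderate", "light", "moderate", "moderate"]
print(majority_filter(raw))  # the isolated "vigorous" spike is smoothed away
```

Because activity trends are stable over time, an isolated misclassification surrounded by consistent labels is treated as noise and voted out, while genuine transitions between sustained activities survive the filter.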
A variable-target glucose clamp replicated their individual glucose profiles, observed during a preliminary visit after ingesting either a high glycaemic-load or a low glycaemic-load evening meal. The six mechanistic models under evaluation here comprised: a) two-compartmental submodels for glucose tracer masses, b) a single-compartmental submodel for insulin’s remote effect, c) two types of activations for this remote effect (either linear or with a ‘cut-off’ point), and d) diverse forms of initial conditions.
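The model ingredients listed in a) to c) can be sketched as a toy simulation: a two-compartment tracer submodel, a single-compartment remote insulin effect, and a linear activation of that effect on the tracer disposal rate, integrated with forward Euler. All parameter values and the constant-insulin input are illustrative assumptions, not the thesis's fitted models:

```python
# Toy two-compartment tracer model with a single-compartment remote
# insulin effect x(t) and linear activation of the disposal rate
# (illustrative parameters, forward-Euler integration).
def simulate(t_end=300.0, dt=0.1):
    q1, q2 = 100.0, 0.0    # tracer masses in compartments 1 and 2 (arb. units)
    x = 0.0                # remote insulin effect
    k12, k21 = 0.05, 0.02  # inter-compartment transfer rates (1/min)
    ke0 = 0.03             # remote-effect equilibration rate (1/min)
    sx = 0.002             # linear activation: disposal rate = sx * x
    insulin = 20.0         # constant plasma insulin input (illustrative)
    t = 0.0
    while t < t_end:
        dq1 = -k12 * q1 + k21 * q2 - sx * x * q1  # disposal driven by x
        dq2 = k12 * q1 - k21 * q2                 # exchange with compartment 2
        dx = ke0 * (insulin - x)                  # remote effect equilibrates
        q1 += dq1 * dt; q2 += dq2 * dt; x += dx * dt
        t += dt
    return q1, q2, x

q1, q2, x = simulate()
print(round(q1, 2), round(q2, 2), round(x, 2))
```

The "cut-off" activation variant mentioned in c) would simply replace `sx * x` with `sx * max(0.0, x - x_threshold)` for some hypothetical threshold, so the remote effect only engages above a minimum insulin level.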
Abstract:
This thesis focuses on the generation of subharmonic surface waves in fluids subject to forced vibration in the gravity-capillary regime with liquids of small viscosity. Three different problems have been considered: a rectangular container under horizontal vibration; the same geometry under a combination of horizontal and vertical vibration; and a fully submerged, vertically vibrated obstacle in a large container. An amplitude equation is derived from first principles that fairly precisely describes the subharmonic surface waves parametrically driven by the vibration. The equation is two-dimensional, while the underlying problem is three-dimensional, and it permits spatially nonuniform forcing.
Using this equation, the three systems have been analyzed, focusing on the calculation of the threshold amplitude, the pattern orientation, and the temporal character of the spatio-temporal patterns, which can be either strictly subharmonic or quasi-periodic, showing an additional modulation frequency. Dependence on the non-dimensional parameters is also considered. The theory is compared with the experiments available in the literature.
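The abstract does not reproduce the amplitude equation itself; a generic schematic for parametrically driven subharmonic waves, with a damping rate, a dispersion term, and a forcing term proportional to the complex conjugate of the amplitude (all notation here is illustrative, not the thesis's own), looks like:

```latex
\partial_t A = -(1 + \mathrm{i}\,\alpha)\,\delta\, A
             + \mathrm{i}\,\nabla^{2} A
             + \mu\,\bar{A}
```

Here $A$ is the complex wave amplitude, $\delta$ the damping, and $\mu$ the parametric forcing amplitude; subharmonic waves set in when $\mu$ exceeds a threshold fixed by the damping, and a spatially nonuniform forcing corresponds to letting $\mu$ depend on position.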
Abstract:
The study of life history evolution in hominids is crucial for discerning when and why humans acquired their unique maturational pattern. Because the development of dentition is critically integrated into the life cycle in mammals, determining the time and pattern of dental development is an appropriate way to infer changes in the life history variables that occurred during hominid evolution. Here we present evidence derived from Lower Pleistocene human fossil remains recovered from the TD6 level (Aurora stratum) of the Gran Dolina site in the Sierra de Atapuerca, northern Spain. These hominids present a pattern of development similar to that of Homo sapiens, although some aspects (e.g., delayed M3 calcification) are not as derived as in European populations and people of European origin. This evidence, together with present knowledge of the cranial capacity of these and other late Early Pleistocene hominids, supports the view that as early as 0.8 Ma at least one Homo species shared with modern humans a prolonged pattern of maturation.
Resumo:
The formation of heteroduplex joints in Escherichia coli recombination is initiated by invasion of double-stranded DNA by a single-stranded homologue. To determine the polarity of the invasive strand, linear molecules with direct terminal repeats were released by in vivo restriction of infecting chimeric phage DNA and heteroduplex products of intramolecular recombination were analyzed. With this substrate, the invasive strand is expected to be incorporated into the circular crossover product and the complementary strand is expected to be incorporated into the reciprocal linear product. Strands of both polarities were incorporated into heteroduplex structures, but only strands ending 3′ at the break were incorporated into circular products. This result indicates that invasion of the 3′-ending strand initiates the heteroduplex joint formation and that the complementary 5′-ending strand is incorporated into heteroduplex structures in the process of reciprocal strand exchange. The polarity of the invasive strand was not affected by recD, recJ, or xonA mutations. However, xonA and recJ mutations increased the proportion of heteroduplexes containing 5′-ending strands. This observation suggests that RecJ exonuclease and exonuclease I may enhance recombination by degrading the displaced strands during branch migration and thereby causing strand exchange to be unidirectional.
Resumo:
Membrane bilayer fusion has been shown to be mediated by v- and t-SNAREs initially present in separate populations of liposomes and to occur with high efficiency at a physiologically meaningful rate. Lipid mixing was demonstrated to involve both the inner and the outer leaflets of the membrane bilayer. Here, we use a fusion assay that relies on duplex formation of oligonucleotides introduced in separate liposome populations and report that SNARE proteins suffice to mediate complete membrane fusion accompanied by mixing of luminal content. We also find that SNARE-mediated membrane fusion does not compromise the integrity of liposomes.
Resumo:
We have investigated physical distances and directions of transposition of the maize transposable element Ac in Arabidopsis thaliana. We prepared a transferred DNA (T-DNA) construct that carried a non-autonomous derivative of Ac with a site for cleavage by endonuclease I-SceI (designated the dAc-I-RS element). Another cleavage site was also introduced into the T-DNA region outside dAc-I-RS. Three transgenic Arabidopsis plants were generated, each of which had a single copy of the T-DNA at a different chromosomal location. These transgenic plants were crossed with Arabidopsis plants that carried the gene for Ac transposase, and progeny in which dAc-I-RS had been transposed were isolated. After digestion of the genomic DNA of these progeny with endonuclease I-SceI, the sizes of the resulting DNA segments were determined by pulsed-field gel electrophoresis. We also performed linkage analysis for the transposed elements and sites of mutations near the elements. Our results showed that 50% of all transposition events had occurred within 1,700 kb on the same chromosome, with 35% within 200 kb, and that the elements transposed in both directions on the chromosome with roughly equal probability. The data thus indicate that the Ac–Ds system is most useful for tagging of genes that are present within 200 kb of the chromosomal site of Ac in Arabidopsis. In addition, determination of the precise localization of the transposed dAc-I-RS element should greatly assist in map-based cloning of genes around insertion sites.
Resumo:
Following striate cortex damage in monkeys and humans there can be residual function mediated by parallel visual pathways. In humans this can sometimes be associated with a “feeling” that something has happened, especially with rapid movement or abrupt onset. For less transient events, discriminative performance may still be well above chance even when the subject reports no conscious awareness of the stimulus. In a previous study we examined parameters that yield good residual visual performance in the “blind” hemifield of a subject with unilateral damage to the primary visual cortex. With appropriate parameters we demonstrated good discriminative performance, both with and without conscious awareness of a visual event. These observations raise the possibility of imaging the brain activity generated in the “aware” and the “unaware” modes, with matched levels of discrimination performance, and hence of revealing patterns of brain activation associated with visual awareness. The intact hemifield also allows a comparison with normal vision. Here we report the results of a functional magnetic resonance imaging study on the same subject carried out under aware and unaware stimulus conditions. The results point to a shift in the pattern of activity from neocortex in the aware mode, to subcortical structures in the unaware mode. In the aware mode prestriate and dorsolateral prefrontal cortices (area 46) are active. In the unaware mode the superior colliculus is active, together with medial and orbital prefrontal cortical sites.
Resumo:
We have established a differential peptide display method, based on a mass spectrometric technique, to detect peptides that show semiquantitative changes in the neurointermediate lobe (NIL) of individual rats subjected to salt-loading. We employed matrix-assisted laser desorption/ionization mass spectrometry, using a single-reference peptide in combination with careful scanning of the whole crystal rim of the matrix-analyte preparation, to detect in a semiquantitative manner the molecular ions present in the unfractionated NIL homogenate. Comparison of the mass spectra generated from NIL homogenates of salt-loaded and control rats revealed a selective and significant decrease in the intensities of several molecular ion species of the NIL homogenates from salt-loaded rats. These ion species, which have masses that correspond to the masses of oxytocin, vasopressin, neurophysins, and an unidentified putative peptide, were subsequently chemically characterized. We confirmed that the decreased molecular ion species are peptides derived exclusively from propressophysin and prooxyphysin (i.e., oxytocin, vasopressin, and various neurophysins). The putative peptide is carboxyl-terminal glycopeptide. The carbohydrate moiety of the latter peptide was determined by electrospray tandem MS as bisected biantennary Hex3HexNAc5Fuc. This posttranslational modification accounts for the mass difference between the predicted mass of the peptide based on cDNA studies and the measured mass of the mature peptide.
Resumo:
The mouse Snrpn gene encodes the SmN protein, which is involved in RNA splicing. The gene maps to a region in the central part of chromosome 7 that is syntenic to the Prader–Willi/Angelman syndromes (PWS-AS) region on human chromosome 15q11-q13. The mouse gene, like its human counterpart, is imprinted and paternally expressed, primarily in brain and heart. We provide here a detailed description of the structural features and differential methylation pattern of the gene. We have identified a maternally methylated region at the 5′ end (DMR1), which correlates inversely with the Snrpn paternal expression. We also describe a region at the 3′ end of the gene (DMR2) that is preferentially methylated on the paternal allele. Analysis of Snrpn mRNA levels in a methylase-deficient mouse embryo revealed that maternal methylation of DMR1 may play a role in silencing the maternal allele. Yet both regions, DMR1 and DMR2, inherit the parental-specific methylation profile from the gametes. This methylation pattern is erased in 12.5-days postcoitum (dpc) primordial germ cells and reestablished during gametogenesis. DMR1 is remethylated during oogenesis, whereas DMR2 is remethylated during spermatogenesis. Once established, these methylation patterns are transmitted to the embryo and maintained, protected from methylation changes during embryogenesis and cell differentiation. Transfection of DMR1 and DMR2 into embryonic stem cells and injection into pronuclei of fertilized eggs reveal that embryonic cells lack the capacity to establish anew the differential methylation pattern of Snrpn. The fact that all PWS patients lack DMR1, together with the overall high resemblance of the mouse gene to human SNRPN, makes this an excellent experimental tool to study the regional control of this imprinted chromosomal domain.
Resumo:
Compound 1 (F), a nonpolar nucleoside analog that is isosteric with thymidine, has been proposed as a probe for the importance of hydrogen bonds in biological systems. Consistent with its lack of strong H-bond donors or acceptors, F is shown here by thermal denaturation studies to pair very poorly and with no significant selectivity among natural bases in DNA oligonucleotides. We report the synthesis of the 5′-triphosphate derivative of 1 and the study of its ability to be inserted into replicating DNA strands by the Klenow fragment (KF, exo− mutant) of Escherichia coli DNA polymerase I. We find that this nucleotide derivative (dFTP) is a surprisingly good substrate for KF; steady-state measurements indicate it is inserted into a template opposite adenine with efficiency (Vmax/Km) only 40-fold lower than dTTP. Moreover, it is inserted opposite A (relative to C, G, or T) with selectivity nearly as high as that observed for dTTP. Elongation of the strand past F in an F–A pair is associated with a brief pause, whereas that beyond A in the inverted A–F pair is not. Combined with data from studies with F in the template strand, the results show that KF can efficiently replicate a base pair (A–F/F–A) that is inherently very unstable, and the replication occurs with very high fidelity despite a lack of inherent base-pairing selectivity. The results suggest that hydrogen bonds may be less important in the fidelity of replication than commonly believed and that nucleotide/template shape complementarity may play a more important role than previously believed.
Resumo:
This paper deals with pattern recognition of the shape of the boundary of closed figures on the basis of a circular sequence of measurements taken on the boundary at equal intervals of a suitably chosen argument with an arbitrary starting point. A distance measure between two boundaries is defined so that it is zero when the two sequences of measurements can be made to coincide by shifting the starting point of one of them. Such a distance measure, which is invariant to the starting point of the sequence of measurements, is used in identification or discrimination by the shape of the boundary of a closed figure. The mean shape of a given set of closed figures is defined, and tests of significance of differences in mean shape between populations are proposed.
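The starting-point-invariant distance described in this abstract can be sketched in a few lines. This is a minimal illustration only: the function names, the use of plain Euclidean distance as the base metric, and the align-to-first averaging used for the mean shape are assumptions, not the paper's exact formulation.

```python
import numpy as np

def best_shift(a, b):
    """Cyclic shift k of b that minimizes the Euclidean distance to a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return min(range(len(a)), key=lambda k: np.linalg.norm(a - np.roll(b, k)))

def shift_invariant_distance(a, b):
    """Distance between two circular boundary sequences; zero whenever one
    sequence is a cyclic shift of the other (starting-point invariant)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.linalg.norm(a - np.roll(b, best_shift(a, b)))

def mean_shape(seqs):
    """Mean of a set of circular sequences after aligning each one
    to the first sequence by its best cyclic shift."""
    seqs = [np.asarray(s, float) for s in seqs]
    ref = seqs[0]
    aligned = [np.roll(s, best_shift(ref, s)) for s in seqs]
    return np.mean(aligned, axis=0)
```

For long sequences the brute-force minimization over all shifts can be replaced by a circular cross-correlation computed with the FFT, which reduces the cost from quadratic to O(n log n) per pair.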