841 results for Accreditation: What It Is . . .and Is Not


Relevance:

100.00%

Publisher:

Abstract:

This project deals with the technologies used for object detection and recognition, particularly of leaves and chromosomes. The document follows the typical structure of a scientific paper: an Abstract, an Introduction, sections covering the research area, future work, conclusions, and the references used in its elaboration. The Abstract describes what the paper contains: the technologies employed in pattern detection and recognition for leaves and chromosomes, and the existing work on cataloguing these objects. The Introduction explains the meanings of detection and recognition; this is necessary because many papers, especially those dealing with chromosomes, confuse the two terms. Detecting an object means isolating the parts of the image that are useful and discarding the useless parts; in short, detection amounts to finding the object's borders. Recognition, by contrast, is the process by which the computer or machine determines what kind of object it is handling. The document then compiles the technologies most commonly used for object detection in general, which fall into two main groups: those based on image derivatives and those based on ASIFT points. The derivative-based methods have in common that the image is treated by convolving it with a previously created matrix. This is done to detect borders in the image, which are changes in pixel intensity. Within this family there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity because they use only the first derivative, and Laplacian-based methods, which search for zeros of pixel intensity because they use the second derivative.
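The distinction between the two derivative-based families can be sketched in code. The following is a minimal illustration (not taken from the project itself): a hand-rolled 2-D convolution applied with Sobel kernels (gradient-based, first derivative) and with a Laplacian kernel (second derivative) on a toy image containing a single vertical edge. The kernel values are the standard ones; the image and function names are invented for the example.

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'valid' 2-D convolution: slide the flipped kernel over the image."""
    kh, kw = kernel.shape
    k = np.flipud(np.fliplr(kernel))
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# Gradient-based (first derivative): Sobel kernels; fewer operations, coarser result.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T
# Laplacian-based (second derivative): edges appear as zero crossings (sign changes).
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

# Toy image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

gx = convolve2d(img, sobel_x)
gy = convolve2d(img, sobel_y)
grad_mag = np.hypot(gx, gy)       # maxima of the gradient magnitude mark the edge
lap = convolve2d(img, laplacian)  # sign change between adjacent columns marks the edge
```

The gradient magnitude peaks on the two columns flanking the intensity step, while the Laplacian output changes sign there, which is exactly the maxima-versus-zero-crossings distinction described above.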
The choice between them depends on the level of detail wanted in the final result: as one would expect, gradient-based methods involve fewer operations, so the computer consumes less time and fewer resources, but the quality is worse; Laplacian-based methods require more operations, and therefore more time and resources, but give a much better quality result. After explaining the derivative-based methods, the document reviews the different algorithms available in each group. The other large group of technologies for object recognition is the one based on ASIFT points, which describe an image through 6 parameters and compare it with another image taking those parameters into consideration. The disadvantage of these methods, for our future purposes, is that they are only valid for a single object: if we want to recognize two different leaves, even if they belong to the same species, this method will not recognize both. It is nonetheless important to mention these technologies, since we are discussing recognition methods in general. The chapter ends with a comparison of the pros and cons of all the technologies employed, first separately and then all together in light of our purposes. The next chapter, on recognition techniques, is not very extensive: although there are general steps for object recognition, every object to be recognized requires its own method because objects differ, so no general method can be specified in that chapter. We then move on to leaf detection techniques on computers, using the derivative-based technique explained above. The next step is to turn the leaf into several parameters; depending on the document consulted, there are more or fewer of them.
Some papers recommend dividing the leaf into 3 main features (shape, dent and vein), from which mathematical operations yield up to 16 secondary features. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter), and from those extracts 12 secondary features. This second alternative is the most widely used, so it is taken as the reference. Moving on to leaf recognition, we rely on a paper that provides source code which, after the user clicks on both ends of the leaf, automatically reports the species the leaf belongs to; all it requires is a database. In the tests reported in that document, the authors claim 90.312% accuracy over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where the metaphase plate, in which the chromosomes are disorganized, must be converted into the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and angle sweeping. Skeletonization consists of suppressing the interior pixels of the chromosome to keep only its silhouette; it is very similar to the derivative-based methods, except that it detects not the borders but the interior of the chromosome. The second technique sweeps angles from one end of the chromosome and, taking into account that a single chromosome cannot bend by more than some angle X, detects the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition, using a technique based on the grey-scale banding pattern that makes each chromosome unique: the program detects the longitudinal axis of the chromosome and reconstructs its band profiles.
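The 5-main-feature scheme for leaves described above can be sketched as follows. This is a hedged illustration only: the exact set of 12 secondary features varies between papers, so the ratios below (and the function name) are illustrative examples, not the definitive set used by any cited work.

```python
import math

def leaf_features(diameter, phys_length, phys_width, area, perimeter):
    """Derive secondary shape features from the 5 main ones.

    Illustrative ratios only; real feature sets differ between papers."""
    return {
        "aspect_ratio": phys_length / phys_width,           # elongation of the leaf
        "rectangularity": area / (phys_length * phys_width),
        "circularity": 4 * math.pi * area / perimeter ** 2,  # 1.0 for a perfect circle
        "perimeter_to_diameter": perimeter / diameter,
    }

# Sanity check: a perfectly circular "leaf" of radius 5 gives circularity 1.0.
r = 5.0
feats = leaf_features(diameter=2 * r, phys_length=2 * r, phys_width=2 * r,
                      area=math.pi * r ** 2, perimeter=2 * math.pi * r)
```

Each leaf then becomes a fixed-length numeric vector, which is what makes database lookup and species matching straightforward.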
The computer is then able to recognize the chromosome. Concerning future work, we generally have two independent sets of techniques that do not combine detection and recognition, so our main goal would be a program that brings both together. On the leaf side, detection and recognition are already linked, as both share the option of dividing the leaf into 5 main features; the work to be done is an algorithm connecting the two methods, since the recognition program requires clicking on both ends of the leaf and is therefore not automatic. On the chromosome side, an algorithm should be created that finds the beginning of the chromosome, starts sweeping angles, and then passes the parameters to the program that searches for the band profiles. Finally, the summary explains why this kind of research is needed: with global warming, many species (animal and plant) are beginning to go extinct, which is why a large database gathering all possible species is needed. To recognize an animal species, it is enough to have its 23 chromosomes; to recognize a plant there are several options, of which the easiest to input into a computer is a scan of one of its leaves.

Relevance:

100.00%

Publisher:

Abstract:

ABSTRACT. Internet is changing everything, and this revolution is especially present in traditionally offline spaces such as medicine. In recent years health consumers and health service providers have been actively creating and consuming Web contents, stimulated by the emergence of the Social Web. Reliability stands out as the main concern when accessing the overwhelming amount of information available online. Along with this new way of accessing medicine, new concepts like ubiquitous or pervasive healthcare are appearing. Trustworthiness assessment is gaining relevance: open health provisioning systems require mechanisms that help evaluate individuals' reputation in pursuit of introducing safety to these open and dynamic environments. Technology Enhanced Learning (TEL) platforms, commonly known as eLearning platforms, arise as a paradigm of this Medicine 2.0. They provide open yet controlled/supervised access to resources generated and shared by users, enhancing what is being called informal learning. TEL systems also facilitate direct interactions amongst users for consultation, resulting in a good approach to ubiquitous healthcare.
The aforementioned reliability and trustworthiness problems can be faced by implementing mechanisms for the trusted recommendation of both resources and healthcare service providers. Traditionally, eLearning platforms already integrate recommendation mechanisms, although these recommendations are basically focused on providing an ordered classification of resources. For user recommendation, the implementation of trust and reputation systems appears as the best solution. Nevertheless, both approaches base the recommendation on the subjective opinions of other users of the platform regarding the resources or the users. In this PhD work a novel approach is presented for the recommendation of both resources and users within open environments focused on knowledge exchange, as is the case of TEL systems for ubiquitous healthcare. The proposed solution adds the objective evaluation of the resources to the traditional subjective personal opinions in order to estimate the reputation of the resources and of the users of the system. This combined measure, along with the reliability of that calculation, is used to provide trusted recommendations. The integration of opinions and evaluations, subjective and objective, allows the model to defend itself against misbehaviours. Furthermore, it also allows 'colouring' cold evaluation values with additional quality information, such as the educational capacities of a digital resource in an eLearning system. As a result, the recommendations are always adapted to user requirements and of the maximum technical and educational quality. To our knowledge, the combination of objective assessments and subjective opinions to provide recommendations has not been considered before in the literature. Therefore, for the evaluation of the trust and reputation model defined in this PhD thesis, a new simulation tool was developed following the agent-oriented programming paradigm.
The multi-agent approach allows easy modelling of independent and proactive behaviours for the simulated users of the system, providing a faithful resemblance of real users of TEL platforms. For the evaluation of the proposed work, an iterative approach has been followed, testing the performance of the trust and reputation model while providing recommendations in a varied range of scenarios. A comparison with two traditional recommendation mechanisms was performed: a) using only users' past opinions about a resource and/or other users; and b) not using any reputation assessment and providing the recommendation considering directly the objective quality of the resources. The results show that the developed model improves on traditional approaches at providing recommendations in Technology Enhanced Learning (TEL) platforms, presenting a higher adaptability to different situations, whereas traditional approaches only have good results under favourable conditions. Furthermore, the promotion period mechanism implemented successfully helps new users in the system, and the resources created by them, to be recommended for direct interactions. On the contrary, OnlyOpinions fails completely and new users are never recommended, while traditional approaches only work partially. Finally, the agent-oriented programming (AOP) paradigm has proven its validity at modelling users' behaviours in TEL platforms. Intelligent software agents' characteristics matched the main requirements of the simulation tool: the proactivity, sociability and adaptability of the developed agents allowed reproducing real users' actions and attitudes through the diverse situations defined in the evaluation framework. The result was independent users accessing different resources and communicating amongst themselves to fulfil their needs, basing these interactions on the recommendations provided by the reputation engine.
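The core idea of mixing an objective evaluation with subjective opinions might be sketched, under heavy simplification, as below. This is not the thesis' actual model: the real one also tracks the reliability of the estimate and a promotion period for newcomers, and the function name, plain average, and fixed weight are all illustrative assumptions.

```python
def reputation(objective_quality, opinions, weight=0.5):
    """Minimal sketch: blend an automatic quality score (0..1) with the
    mean of subjective user opinions (each 0..1).

    `weight` balances the objective and subjective components; this choice,
    like the plain average, is an illustrative assumption."""
    if not opinions:
        # No opinions yet (e.g. a brand-new resource or user): fall back to
        # the objective evaluation, so newcomers can still be recommended.
        return objective_quality
    subjective = sum(opinions) / len(opinions)
    return weight * objective_quality + (1 - weight) * subjective

established = reputation(0.8, [1.0, 0.6, 0.8])  # mixes both components
newcomer = reputation(0.9, [])                  # objective score only
```

The fallback branch mirrors the property highlighted in the abstract: an opinions-only scheme gives new entrants no reputation at all, whereas keeping an objective component lets them be recommended from the start.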

Relevance:

100.00%

Publisher:

Abstract:

Accreditation models in the international context mainly consider the evaluation of learning outcomes and the ability of programs (or higher education institutions) to achieve the educational objectives stated in their mission. However, it is not clear whether these objectives, and therefore their outcomes, satisfy real national and regional needs, a critical point for engineering master's programs, especially in developing countries. The aim of this paper is to study the importance of evaluating the local relevancy of these programs and to analyze the main models of quality assurance and accreditation bodies in the USA, Europe and Latin America, in order to ascertain whether relevancy is evaluated or not. After a literature review, we found that in a context of free-market economics and international education, the accreditation of master's programs follows an international accreditation model and in most cases does not take into account criteria and indicators for local relevancy. We conclude that both are necessary: international accreditation, to ensure the effectiveness of the program (achievement of learning outcomes), and national accreditation, through which the local relevancy of programs can be ensured, for which we provide some indicators.

Relevance:

100.00%

Publisher:

Abstract:

Existing evaluation models for higher education mainly serve accreditation purposes and evaluate the efficiency of training programs, that is to say, the degree of fit between the educational results and the objectives of the program. However, there is no guarantee that those objectives match the needs and real interests of students and stakeholders; that is, these models do not assess the relevance of the programs, a very important aspect in developing countries. Based on a review of existing experiences, this paper proposes a model for evaluating the relevance of engineering master's programs and applies it to the case of a master's degree at the University of Piura, Peru. We conclude that the proposed model is applicable to other master's programs, offers an objective way of determining whether a training program remains relevant, and identifies improvement opportunities.

Relevance:

100.00%

Publisher:

Abstract:

Primary CD8+ T cells from HIV+ asymptomatics can suppress virus production from CD4+ T cells acutely infected with either non-syncytia-inducing (NSI) or syncytia-inducing (SI) HIV-1 isolates. NSI strains of HIV-1 predominantly use the CCR5 chemokine receptor as a fusion cofactor, whereas fusion of T cell line-adapted SI isolates is mediated by another chemokine receptor, CXCR4. The CCR5 ligands RANTES (regulated on activation, normal T cell expressed and secreted), macrophage inflammatory protein 1α (MIP-1α), and MIP-1β are HIV-1 suppressive factors secreted by CD8+ cells that inhibit NSI viruses. Recently, the CXC chemokine stromal cell-derived factor 1 (SDF-1) was identified as a ligand for CXCR4 and shown to inhibit SI strains. We speculated that SDF-1 might be an effector molecule for CD8+ suppression of SI isolates and assessed several SDF-1 preparations for inhibition of HIV-1LAI-mediated cell–cell fusion, and examined levels of SDF-1 transcripts in CD8+ T cells. SDF-1 fusion inhibitory activity correlated with the N terminus, and the α and β forms of SDF-1 exhibited equivalent fusion blocking activity. SDF-1 preparations having the N terminus described by Bleul et al. (Bleul, C.C., Fuhlbrigge, R.C., Casasnovas, J.M., Aiuti, A. & Springer, T.A. (1996) J. Exp. Med. 184, 1101–1109) readily blocked HIV-1LAI-mediated fusion, whereas forms containing two or three additional N-terminal amino acids lacked this activity despite their ability to bind and/or signal through CXCR4. Though SDF-1 is constitutively expressed in most tissues, CD8 T cells contained extremely low levels of SDF-1 mRNA transcripts (<1 transcript/5,000 cells), and these levels did not correlate with virus suppressive activity. We conclude that suppression of SI strains of HIV-1 by CD8+ T cells is unlikely to involve SDF-1.

Relevance:

100.00%

Publisher:

Abstract:

Sequence-selective transcription by bacterial RNA polymerase (RNAP) requires σ factor that participates in both promoter recognition and DNA melting. RNAP lacking σ (core enzyme) will initiate RNA synthesis from duplex ends, nicks, gaps, and single-stranded regions. We have used DNA templates containing short regions of heteroduplex (bubbles) to compare initiation in the presence and absence of various σ factors. Using bubble templates containing the σD-dependent flagellin promoter, with or without its associated upstream promoter (UP) element, we demonstrate that UP element stimulation occurs efficiently even in the absence of σ. This supports a model in which the UP element acts primarily through the α subunit of core enzyme to increase the initial association of RNAP with the promoter. Core and holoenzyme do differ substantially in the template positions chosen for initiation: σD restricts initiation to sites 8–9 nucleotides downstream of the conserved −10 element. Remarkably, σA also has a dramatic effect on start-site selection even though the σA holoenzyme is inactive on the corresponding homoduplexes. The start sites chosen by the σA holoenzyme are located 8 nucleotides downstream of sequences on the nontemplate strand that resemble the conserved −10 hexamer recognized by σA. Thus, σA appears to recognize the −10 region even in a single-stranded state. We propose that in addition to its described roles in promoter recognition and start-site melting, σ also localizes the transcription start site.

Relevance:

100.00%

Publisher:

Abstract:

N-methyl-d-aspartate receptors (NMDARs) are Ca2+-permeable glutamate-gated ion channels whose physiological properties in neurons are modulated by protein kinase C (PKC). The present study was undertaken to determine the role in PKC-induced potentiation of the NR1 and NR2A C-terminal tails, which serve as targets of PKC phosphorylation [Tingley, W. G., Ehlers, M. D., Kameyama, K., Doherty, C., Ptak, J. B., Riley, C. T. & Huganir, R. L. (1997) J. Biol. Chem. 272, 5157–5166]. Serine residue 890 in the C1 cassette is a primary target of PKC phosphorylation and a critical residue in receptor clustering at the membrane. We report herein that the presence of the C1 cassette reduces PKC potentiation and that mutation of Ser-890 significantly restores PKC potentiation. Splicing out or deletion of other C-terminal cassettes singly or in combination had little or no effect on PKC potentiation. Moreover, experiments involving truncation mutants reveal the unexpected finding that NMDARs assembled from subunits lacking all known sites of PKC phosphorylation can show PKC potentiation. These results indicate that PKC-induced potentiation of NMDAR activity does not occur by direct phosphorylation of the receptor protein but rather of associated targeting, anchoring, or signaling protein(s). PKC potentiation of NMDAR function is likely to be an important mode of NMDAR regulation in vivo and may play a role in NMDA-dependent long-term potentiation.

Relevância: 100.00%

Resumo:

Prolactin (PRL) is widely considered to be the juvenile hormone of anuran tadpoles and to counteract the effects of thyroid hormone (TH), the hormone that controls amphibian metamorphosis. This putative function was concluded mainly from experiments in which mammalian PRL was injected into tadpoles or added to cultured tadpole tissues. In this study, we show that overexpression of ovine or Xenopus laevis PRL in transgenic X. laevis does not prolong tadpole life, establishing that PRL does not play a role in the life cycle of amphibians that is equivalent to that of juvenile hormone in insect metamorphosis. However, overexpression of PRL produces tailed frogs by reversing specifically some but not all of the programs of tail resorption and stimulating growth of fibroblasts in the tail. Whereas TH induces muscle resorption in tails of these transgenics, the tail fibroblasts continue to proliferate resulting in a fibrotic tail that is resistant to TH.

Relevância: 100.00%

Resumo:

γ-Aminobutyric acid type A receptors (GABAARs) are ligand-gated chloride channels that exist in numerous distinct subunit combinations. At postsynaptic membrane specializations, different GABAAR isoforms colocalize with the tubulin-binding protein gephyrin. However, direct interactions of GABAAR subunits with gephyrin have not been reported. Recently, the GABAAR-associated protein GABARAP was found to bind to the γ2 subunit of GABAARs. Here we show that GABARAP interacts with gephyrin in both biochemical assays and transfected cells. Confocal analysis of neurons derived from wild-type and gephyrin-knockout mice revealed that GABARAP is highly enriched in intracellular compartments, but not at gephyrin-positive postsynaptic membrane specializations. Our data indicate that GABARAP–gephyrin interactions are not important for postsynaptic GABAAR anchoring but may be implicated in receptor sorting and/or targeting mechanisms. Consistent with this idea, a close homolog of GABARAP, p16, has been found to function as a late-acting intra-Golgi transport factor.

Relevância: 100.00%

Resumo:

Thioredoxin 1 is a major thiol-disulfide oxidoreductase in the cytoplasm of Escherichia coli. One of its functions is presumed to be the reduction of the disulfide bond in the active site of the essential enzyme ribonucleotide reductase. Thioredoxin 1 is kept in a reduced state by thioredoxin reductase. In a thioredoxin reductase null mutant however, most of thioredoxin 1 is in the oxidized form; recent reports have suggested that this oxidized form might promote disulfide bond formation in vivo. In the Escherichia coli periplasm, the protein disulfide isomerase DsbC is maintained in the reduced and active state by the membrane protein DsbD. In a dsbD null mutant, DsbC accumulates in the oxidized form. This oxidized form is then able to promote disulfide bond formation. In both these cases, the inversion of the function of these thiol oxidoreductases appears to be due to an altered redox balance of the environment in which they find themselves. Here, we show that thioredoxin 1 attached to the alkaline phosphatase signal sequence can be exported into the E. coli periplasm. In this new environment for thioredoxin 1, we show that thioredoxin 1 can promote disulfide bond formation and, therefore, partially complement a dsbA strain defective for disulfide bond formation. Thus, we provide evidence that by changing the location of thioredoxin 1 from cytoplasm to periplasm, we change its function from a reductant to an oxidant. We conclude that the in vivo redox function of thioredoxin 1 depends on the redox environment in which it is localized.

Relevância: 100.00%

Resumo:

Binding of erythropoietin (Epo) to the Epo receptor (EpoR) is crucial for production of mature red cells. Although it is well established that the Epo-bound EpoR is a dimer, it is not clear whether, in the absence of ligand, the intact EpoR is a monomer or oligomer. Using antibody-mediated immunofluorescence copatching (oligomerizing) of epitope-tagged receptors at the surface of live cells, we show herein that a major fraction of the full-length murine EpoR exists as preformed dimers/oligomers in BOSC cells, which are human embryo kidney 293T-derived cells. This observed oligomerization is specific because, under the same conditions, epitope-tagged EpoR did not oligomerize with several other tagged receptors (thrombopoietin receptor, transforming growth factor β receptor type II, or prolactin receptor). Strikingly, the EpoR transmembrane (TM) domain but not the extracellular or intracellular domains enabled the prolactin receptor to copatch with EpoR. Preformed EpoR oligomers are not constitutively active and Epo binding was required to induce signaling. In contrast to tyrosine kinase receptors (e.g., insulin receptor), which cannot signal when their TM domain is replaced by the strongly dimerizing TM domain of glycophorin A, the EpoR could tolerate the replacement of its TM domain with that of glycophorin A and retained signaling. We propose a model in which TM domain-induced dimerization maintains unliganded EpoR in an inactive state that can readily be switched to an active state by physiologic levels of Epo.

Relevância: 100.00%

Resumo:

The fair innings argument (FIA) is frequently put forward as a justification for denying elderly patients treatment when they are in competition with younger patients and resources are scarce. In this paper I will examine some arguments that are used to support the FIA. My conclusion will be that they do not stand up to scrutiny and therefore, the FIA should not be used to justify the denial of treatment to elderly patients, or to support rationing of health care by age.

Relevância: 100.00%

Resumo:

Early metazoan development is programmed by maternal mRNAs inherited by the egg at the time of fertilization. These mRNAs are not translated en masse at any one time or at any one place, but instead their expression is regulated both temporally and spatially. Recent evidence has shown that one maternal mRNA, cyclin B1, is concentrated on mitotic spindles in the early Xenopus embryo, where its translation is controlled by CPEB (cytoplasmic polyadenylation element binding protein), a sequence-specific RNA binding protein. Disruption of the spindle-associated translation of this mRNA results in a morphologically abnormal mitotic apparatus and inhibited cell division. Mammalian neurons, particularly in the synapto-dendritic compartment, also contain localized mRNAs such as that encoding α-CaMKII. Here, synaptic activation drives local translation, an event that is involved in synaptic plasticity and possibly long-term memory storage. Synaptic translation of α-CaMKII mRNA also appears to be controlled by CPEB, which is enriched in the postsynaptic density. Therefore, CPEB-controlled local translation may influence such seemingly disparate processes as the cell cycle and synaptic plasticity.

Relevância: 100.00%

Resumo:

Hydroperoxide lyase (HPL) cleaves lipid hydroperoxides to produce volatile flavor molecules and also potential signal molecules. We have characterized a gene from Arabidopsis that is homologous to a recently cloned HPL from green pepper (Capsicum annuum). The deduced protein sequence indicates that this gene encodes a cytochrome P-450 with a structure similar to that of allene oxide synthase. The gene was cloned into an expression vector and expressed in Escherichia coli to demonstrate HPL activity. Significant HPL activity was evident when 13S-hydroperoxy-9(Z),11(E),15(Z)-octadecatrienoic acid was used as the substrate, whereas activity with 13S-hydroperoxy-9(Z),11(E)-octadecadienoic acid was approximately 10-fold lower. Analysis of headspace volatiles by gas chromatography-mass spectrometry, after addition of the substrate to E. coli extracts expressing the protein, confirmed enzyme-activity data, since cis-3-hexenal was produced by the enzymatic activity of the encoded protein, whereas hexanal production was limited. Molecular characterization of this gene indicates that it is expressed at high levels in floral tissue and is wound inducible but, unlike allene oxide synthase, it is not induced by treatment with methyl jasmonate.