Abstract:
Wind power forecasting has played a fundamental role over the last decade in the exploitation of this renewable resource, since it reduces the impact that the fluctuating nature of wind has on the activity of the various agents involved in its integration, such as the system operator or the participants in the electricity market. The high wind-penetration levels recently reached by some countries have highlighted the need to improve forecasts during events in which the power generated by a wind farm, or a group of them, varies substantially within a relatively short time (of the order of a few hours). These events, known as ramps, do not have a single cause: they can be driven by meteorological processes occurring at very different spatio-temporal scales, from the passage of large frontal systems at the macroscale to local convective processes such as thunderstorms. In addition, the wind-to-power conversion process itself plays a relevant role in the occurrence of ramps owing, among other factors, to the non-linear relation imposed by the wind turbine power curve, the misalignment of the machine with respect to the wind, and the aerodynamic interaction between wind turbines. This work addresses the application of statistical models to very short-term ramp forecasting. It also investigates the relation between this type of event and macroscale atmospheric processes. The models are used to generate point forecasts from the stochastic modelling of a time series of power generated by a wind farm. The forecast horizons considered range from one to six hours. As a first step, a methodology was developed to characterise ramps in time series. The so-called ramp function is based on the wavelet transform and provides an index at each time step. This index characterises the ramp intensity on the basis of the power gradients experienced over a given range of time scales. Three types of predictive model were implemented in order to assess the role that model complexity plays in performance: linear autoregressive (AR) models, varying-coefficient models (VCMs) and models based on artificial neural networks (ANNs). The models were trained by minimising the mean squared error, and the configuration of each one was determined through cross-validation. To analyse the contribution of the macroscale state of the atmosphere to ramp forecasting, a methodology was proposed to extract, from the outputs of meteorological models, information relevant to explaining the occurrence of these events. The methodology is based on principal component analysis (PCA) for the synthesis of atmospheric data and on mutual information (MI) for estimating the non-linear dependence between two signals. It was applied to reanalysis data generated with a general circulation model (GCM) in order to derive exogenous variables that were subsequently fed into the predictive models. The case studies considered correspond to two wind farms located in Spain.
The results show that modelling the power time series yielded a notable improvement over the reference forecasting model (persistence), and that adding macroscale information brought further improvements of the same order. These improvements were larger for ramp-down events. The results also indicate different degrees of connection between the macroscale and the occurrence of ramps at the two wind farms considered.
Abstract
One of the main drawbacks of wind energy is that it exhibits intermittent generation that greatly depends on environmental conditions. Wind power forecasting has proven to be an effective tool for facilitating wind power integration from both the technical and the economic perspective. Indeed, system operators and energy traders benefit from the use of forecasting techniques, because reducing the inherent uncertainty of wind power allows them to make optimal decisions. Wind power integration imposes new challenges as higher wind-penetration levels are attained. Wind power ramp forecasting is an example of such a recent topic of interest. The term ramp refers to a large and rapid variation (1-4 hours) observed in the wind power output of a wind farm or portfolio. Ramp events can be driven by a broad number of meteorological processes that occur at different temporal and spatial scales, from the passage of large-scale frontal systems to local processes such as thunderstorms and thermally driven flows. Ramp events may also be conditioned by features of the wind-to-power conversion process, such as yaw misalignment, wind turbine shut-down and the aerodynamic interaction between the wind turbines of a wind farm (wake effect). This work is devoted to wind power ramp forecasting, with special focus on the connection between the global scale and ramp events observed at the wind farm level. The framework of this study is the point-forecasting approach. Time-series-based models were implemented for very short-term prediction, characterised by prediction horizons of up to six hours ahead. As a first step, a methodology to characterise ramps within a wind power time series was proposed. The so-called ramp function is based on the wavelet transform and provides a continuous index related to the ramp intensity at each time step. The underlying idea is that ramps are characterised by high power-output gradients evaluated at different time scales. A number of state-of-the-art time-series-based models were considered, namely linear autoregressive (AR) models, varying-coefficient models (VCMs) and artificial neural networks (ANNs). This allowed us to gain insight into how the complexity of the model contributes to the accuracy of the wind power time series modelling. The models were trained on the basis of a mean squared error criterion, and the final set-up of each model was determined through cross-validation. In order to investigate the contribution of the global scale to wind power ramp forecasting, a methodology was proposed to identify features in raw atmospheric data that are relevant for explaining wind power ramp events. The methodology is based on two techniques: principal component analysis (PCA) for atmospheric data compression and mutual information (MI) for assessing non-linear dependence between variables. It was applied to reanalysis data generated with a general circulation model (GCM).
This allowed for the elaboration of explanatory variables meaningful for ramp forecasting, which were used as exogenous variables by the forecasting models. The study covered two wind farms located in Spain. All the models outperformed the reference model (persistence) during both ramp and non-ramp situations. Adding atmospheric information had a noticeable impact on forecasting performance, especially during ramp-down events. Results also suggested different levels of connection between ramp occurrence at the wind farm level and the global scale.
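The abstracts above describe the ramp function only qualitatively. As a minimal sketch of the idea (a signed index per time step given by the strongest mean power gradient over a range of time scales, in the spirit of Haar-wavelet details), the following Python fragment may help; the function name, scale set and threshold are illustrative assumptions, not the thesis's exact definition.

```python
import numpy as np

def ramp_function(power, scales=(1, 2, 3, 4, 5, 6)):
    """Illustrative signed ramp index: at each time step, the mean power
    gradient with the largest magnitude over a range of time scales."""
    p = np.asarray(power, dtype=float)
    index = np.zeros_like(p)
    for s in scales:                      # scales in time steps (e.g. hours)
        grad = np.zeros_like(p)
        grad[s:] = (p[s:] - p[:-s]) / s   # mean gradient over a window of s steps
        stronger = np.abs(grad) > np.abs(index)
        index[stronger] = grad[stronger]  # keep the dominant scale at each step
    return index                          # > 0 flags ramp-ups, < 0 ramp-downs

# Usage sketch: flag ramp events with an (assumed) intensity threshold.
# idx = ramp_function(hourly_power)
# ramps = np.abs(idx) > 0.2   # per-unit power per hour, illustrative value
```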
Abstract:
This work focuses on the analysis of a structural element of the MetOP-A satellite. Given the special interest in the influence of equipment installed on structural elements, the paper studies one of the lateral faces, on which the Advanced SCATterometer (ASCAT) is installed. The work is oriented towards the modal characterization of the specimen, describing the experimental set-up and the application of the results to the development of a Finite Element Method (FEM) model used to study the vibro-acoustic response. For the high frequency range, characterized by a high modal density, a Statistical Energy Analysis (SEA) model is considered, while the FEM model is used when the modal density is low. The methodology for developing the SEA model, and a compound FEM and Boundary Element Method (BEM) model that provides continuity in the medium frequency range, is presented, together with the updating, characterization and coupling between models required to achieve numerical models that match the experimental results.
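As a rough illustration of the frequency-range split described above (deterministic FEM at low modal density, SEA at high modal density, a compound FEM-BEM model in between), a hypothetical selector might look as follows; the thresholds on the number of modes per analysis band are assumptions for illustration, not values from the paper.

```python
def pick_vibroacoustic_model(modes_in_band: int) -> str:
    """Choose a modelling approach for a frequency band from the number of
    modes it contains (illustrative rule of thumb, not the paper's criterion)."""
    if modes_in_band < 5:
        return "FEM"      # low modal density: deterministic finite elements
    if modes_in_band < 30:
        return "FEM-BEM"  # medium frequencies: compound model for continuity
    return "SEA"          # high modal density: statistical energy analysis
```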
Abstract:
In order to implement accurate models for wind power ramp forecasting, ramps need to be previously characterised. This issue has typically been addressed by performing binary ramp/non-ramp classifications based on ad hoc assessed thresholds. However, recent works question this approach. This paper presents the ramp function, an innovative wavelet-based tool which detects and characterises ramp events in wind power time series. The underlying idea is to assess a continuous index related to the ramp intensity at each time step, obtained by considering large power output gradients evaluated at different time scales (up to typical ramp durations). The ramp function overcomes some of the drawbacks of the aforementioned binary classification and permits forecasters to easily reveal specific features of the ramp behaviour observed at a wind farm. As an example, the daily profiles of ramp-up and ramp-down intensity are obtained for the case of a wind farm located in Spain.
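The abstract does not reproduce the ramp function itself, but an index consistent with the description above (the largest mean power gradient over the time scales of interest, kept with its sign) can be sketched as follows, where $p(t)$ is the power output and $\tau_{\max}$ a typical maximum ramp duration; this is a plausible formulation, not necessarily the paper's exact one:

\[
r(t) = \frac{p(t) - p(t - \tau^{*})}{\tau^{*}},
\qquad
\tau^{*} = \operatorname*{arg\,max}_{0 < \tau \le \tau_{\max}}
\left| \frac{p(t) - p(t - \tau)}{\tau} \right|,
\]

so that $r(t) > 0$ flags ramp-ups, $r(t) < 0$ flags ramp-downs, and $|r(t)|$ serves as the continuous intensity index.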
Abstract:
The use of tungsten disulphide inorganic nanotubes (INT-WS2) offers the opportunity to produce novel and advanced biopolymer-based nanocomposite materials with excellent nanoparticle dispersion, without the need for modifiers or surfactants, via conventional melt blending. The study of the non-isothermal melt-crystallization kinetics provides a clear picture of the transformation of poly(L-lactic acid) (PLLA) molecules from the non-ordered to the ordered state. The overall crystallization rate, final crystallinity and subsequent melting behaviour of PLLA were controlled by both the incorporation of INT-WS2 and the variation of the cooling rate. In particular, it was shown that INT-WS2 exhibits much more prominent nucleation activity in the crystallization of PLLA than other specific nucleating agents or nano-sized fillers. These features may be advantageous for enhancing the mechanical properties and processability of PLLA-based materials. PLLA/INT-WS2 nanocomposites can be employed as low-cost biodegradable materials for many eco-friendly and medical applications, and the exceptional crystallization behaviour observed opens new perspectives for scale-up and broader applications.
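The abstract does not name the kinetic model used; for background, non-isothermal melt-crystallization data of this kind are often analysed with the Ozawa model, which relates the relative crystallinity $X(T)$ reached on cooling to temperature $T$ to the cooling rate $\Phi$:

\[
1 - X(T) = \exp\!\left( -\frac{K(T)}{\Phi^{m}} \right),
\]

where $K(T)$ is the cooling function and $m$ is the Ozawa exponent. Under this reading, a strong nucleating filler such as INT-WS2 would show up as faster kinetics (a larger $K(T)$) at a given cooling rate.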
Abstract:
Quizzes are among the most widely used resources in web-based education due to their many benefits. However, educators need suitable authoring tools that can be used to create reusable quizzes and to enhance existing materials with them. Moreover, if teachers use Audience Response Systems (ARSs) they can get instant feedback from their students and thereby enhance their instruction. This paper presents an online authoring tool for creating reusable quizzes and enhancing existing learning resources with them, together with a web-based ARS that enables teachers to launch the created quizzes and get instant feedback from the class. Both the authoring tool and the ARS were evaluated. The evaluation of the authoring tool showed that educators can easily and effectively enhance existing learning resources by creating and adding quizzes with the tool. The factors that ensure the reusability of the created quizzes are also discussed. Finally, the evaluation of the ARS showed an excellent acceptance of the system by teachers and students, and indicated that teachers found the system easy to set up and use in their classrooms.
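The abstract does not specify the quiz format, but the reusability argument is easier to picture with a data model in mind. The following is a purely hypothetical minimal sketch of a quiz object that carries no reference to any host resource, so the same object can be embedded in many learning materials; all names here are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    stem: str            # the question text shown to students
    options: list[str]   # the answer choices
    correct: int         # index of the correct option

@dataclass
class Quiz:
    title: str
    questions: list[Question] = field(default_factory=list)

# Hypothetical usage: the quiz is self-contained, so an authoring tool
# could attach the same object to any number of existing learning resources.
quiz = Quiz("Kinematics check",
            [Question("What is the SI unit of force?", ["N", "J", "W"], 0)])
```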
Abstract:
Resource analysis aims at inferring the cost of executing programs for any possible input, in terms of a given resource such as the traditional execution steps, time or memory and, more recently, energy consumption or user-defined resources (e.g., number of bits sent over a socket, number of database accesses, number of calls to particular procedures, etc.). This is performed statically, i.e., without actually running the programs. Resource usage information is useful for a variety of optimization and verification applications, as well as for guiding software design. For example, programmers can use such information to choose different algorithmic solutions to a problem; program transformation systems can use cost information to choose between alternative transformations; parallelizing compilers can use cost estimates for granularity control, which tries to balance the overheads of task creation and manipulation against the benefits of parallelization. In this thesis we have significantly improved an existing prototype implementation for resource usage analysis based on abstract interpretation, addressing a number of relevant challenges and overcoming many limitations it presented. The goal of that prototype was to show the viability of casting the resource analysis as an abstract domain, and how it could overcome important limitations of state-of-the-art resource usage analysis tools. For this purpose, it was implemented as an abstract domain in the abstract interpretation framework of the CiaoPP system, PLAI. We have improved both the design and implementation of the prototype, eventually allowing the tool to evolve towards the industrial application level. The abstract operations of such a tool depend heavily on setting up, and finding closed-form solutions of, recurrence relations that represent the resource usage behavior of program components and of the whole program. While there exist many tools able to find closed-form solutions for some types of recurrences, such as Computer Algebra Systems (CAS) and libraries, none of them alone is able to handle all the types of recurrences arising during program analysis. In addition, there are some types of recurrences that cannot be solved by any existing tool. This clearly constitutes a bottleneck for this kind of resource usage analysis. Thus, one of the major challenges we have addressed in this thesis is the design and development of a novel modular framework for solving recurrence relations, able to combine and take advantage of the results of existing solvers. Additionally, we have developed and integrated into our novel solver a technique for finding upper-bound closed-form solutions of a special class of recurrence relations that arise during the analysis of programs with accumulating parameters.
Finally, we have integrated the improved resource analysis into the CiaoPP general framework for resource usage verification, and specialized the framework for verifying energy consumption specifications of embedded imperative programs in a real application, showing the usefulness and practicality of the resulting tool.
---ABSTRACT---
Resource analysis aims to infer the cost of executing programs for any possible input, in terms of a given resource such as execution steps, time or memory and, more recently, energy consumption or user-defined resources (for example, the number of bits sent over a socket, the number of accesses to a database, the number of calls to particular procedures, etc.). This is done statically, that is, without actually running the programs. Resource usage information is very useful for a wide variety of program optimization and verification applications, as well as for assisting in program design. For example, programmers can use such information to choose between different algorithmic solutions to a problem; program transformation systems can use cost information to choose between alternative transformations; parallelizing compilers can use cost estimates to perform granularity control, which tries to balance the cost of task creation and management against the benefits of parallelization. In this thesis we have significantly improved the implementation of an existing prototype for resource usage analysis based on abstract interpretation, addressing several relevant challenges and overcoming numerous limitations it presented. The goal of that prototype was to show the viability of defining resource analysis as an abstract domain, and how the limitations of other similar state-of-the-art tools could be overcome. To that end, it was implemented as an abstract domain in the abstract interpretation framework of the CiaoPP system, PLAI. We have improved both the design and the implementation of that prototype to enable its evolution towards a tool usable at the industrial level. The abstract operations of that tool depend to a great extent on the generation, and subsequent solution in closed form, of recurrence relations, which model the resource-consumption behaviour of the program components and of the whole program. Although many tools currently exist that can find closed-form solutions for certain types of recurrences, such as Computer Algebra Systems (CAS) and programming libraries, none of them can handle, by itself, all the types of recurrences that arise during resource analysis. There are even recurrences that no current tool can solve. This clearly constitutes a bottleneck for this kind of resource usage analysis. Therefore, one of the main challenges we have addressed in this thesis is the design and development of a novel modular framework for solving recurrence relations, combining and exploiting the results of existing solvers. In addition, we have developed and integrated into our new solver a technique for obtaining closed-form upper bounds for a characteristic class of recurrence relations that arise during the analysis of logic programs with accumulating parameters. Finally, we have integrated the new resource analysis into the CiaoPP general framework for resource verification, and we have instantiated this framework for the verification of energy consumption specifications of embedded imperative programs, showing the viability and usefulness of the resulting tool in a real application.
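To make the recurrence-solving bottleneck concrete, here is a hedged illustration of the kind of recurrence such an analysis sets up, solved with one existing solver (SymPy's rsolve); the example recurrence is an assumption for illustration, and the thesis's modular framework combines several such backends precisely because no single one covers all the recurrence classes that arise.

```python
from sympy import Function, rsolve, symbols

n = symbols('n', integer=True)
C = Function('C')

# Illustrative cost recurrence for a program whose n-th step does n units
# of work (e.g. naive list reversal): C(0) = 0, C(n) = C(n-1) + n.
closed_form = rsolve(C(n) - C(n - 1) - n, C(n), {C(0): 0})
print(closed_form)   # n**2/2 + n/2, i.e. n*(n+1)/2
```

Recurrences arising from accumulating parameters are typically harder than this linear example, which is where the upper-bound technique developed in the thesis comes in.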
Abstract:
Video quality measurement remains necessary in order to define the criteria that characterise a signal meeting the viewing requirements imposed by the user. New technologies, such as stereoscopic 3D video or formats beyond high definition, impose new criteria that must be analysed to obtain the highest possible user satisfaction. Among the problems detected during the development of this doctoral thesis, phenomena were identified that affect different stages of the audiovisual production chain and a variety of content types. First, the content generation process must be controlled through parameters that prevent visual discomfort and, consequently, visual fatigue, especially with regard to stereoscopic 3D content, both animated and live-action. On the other hand, quality measurement in the video compression stage uses metrics that are sometimes not adapted to the user's perception. The use of psychovisual models and visual attention diagrams would make it possible to weight the areas of the image so that greater importance is given to the pixels the user is most likely to focus on. These two blocks are related through the definition of the term saliency. Saliency is the capacity of the visual system to characterise a viewed image by weighting the areas that are most attractive to the human eye. In the generation of stereoscopic content, saliency mainly refers to the depth simulated by the optical illusion, measured in terms of the distance from the virtual object to the human eye. In two-dimensional video, however, saliency is not based on depth but on additional elements, such as motion, level of detail, pixel position or the appearance of faces, which are the basic factors making up the visual attention model developed here. With the aim of detecting the characteristics of a stereoscopic video sequence that are most likely to generate visual discomfort, the extensive literature on this topic was reviewed and preliminary subjective tests with users were carried out. This led to the conclusion that discomfort occurred when there was an abrupt change in the distribution of simulated depths in the image, in addition to other degradations such as the so-called "window violation". Through new subjective tests focused on analysing these effects with different depth distributions, an attempt was made to pin down the parameters defining such images. The test results show that abrupt changes occur in settings with motion and large negative disparities, which interfere with the accommodation and vergence processes of the human eye and increase the focusing time of the crystalline lens. In improving quality metrics through models adapted to the human visual system, subjective tests were also carried out that helped determine the importance of each factor in masking a given degradation. The results show a slight improvement when applying weighting and visual attention masks, which bring the objective quality parameters closer to the response of the human eye.
ABSTRACT
Video quality assessment is still a necessary tool for defining the criteria that characterize a signal meeting the viewing requirements imposed by the end user. New technologies, such as 3D stereoscopic video and formats of HD and beyond, call for new analyses of video features to obtain the highest user satisfaction. Among the problems examined in this doctoral thesis, it was determined that certain phenomena affect different phases of the audiovisual production chain, as well as different types of content. First, the content generation process should be controlled through parameters that avoid the occurrence of visual discomfort in the observer's eye and, consequently, visual fatigue; this is especially necessary for stereoscopic 3D sequences, with both animation and live-action contents. On the other hand, video quality assessment related to compression processes should be improved, because some objective metrics are not adapted to the user's perception. The use of psychovisual models and visual attention diagrams allows the weighting of image regions of interest, giving more importance to the areas on which the user will most probably focus. These two fields of work are related through the definition of the term saliency. Saliency is the capacity of the human visual system to characterize an image by highlighting the areas that are most attractive to the human eye. Saliency in the generation of 3DTV contents refers mainly to the depth simulated by the optic illusion, i.e. the distance from the virtual object to the human eye. In two-dimensional video, by contrast, saliency is not based on virtual depth but on other features, such as motion, level of detail, position of pixels in the frame or face detection, which are the basic features of the visual attention model developed here, as demonstrated with tests. The extensive literature on visual comfort assessment was reviewed, and new preliminary subjective assessments with users were performed, in order to detect the features that increase the probability of discomfort. With this methodology, the conclusions drawn confirmed that one common source of visual discomfort was an abrupt change of disparity in video transitions, apart from other degradations such as window violation. New quality assessments were then performed to quantify the distribution of disparities over different sequences. The results confirmed that abrupt changes in negative-parallax environments produce accommodation-vergence mismatches, derived from the increased time the human crystalline lens needs to focus on the virtual objects. Finally, to develop metrics that adapt to the human visual system, additional subjective tests were carried out to determine the importance of each factor in masking a given distortion. The results demonstrated a slight improvement after applying visual attention to objective metrics: weighting pixels in this way brings the quality scores closer to the human eye's response.
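As a minimal sketch of how a visual attention model can be plugged into an objective metric (the general idea described above, not the thesis's exact formulation), per-pixel distortion can be pooled with a normalised saliency map so that errors in likely-fixated regions weigh more. The function below and its name are illustrative assumptions.

```python
import numpy as np

def saliency_weighted_mse(ref, dist, saliency):
    """Pool per-pixel squared error with a normalised attention map, so
    distortions in salient regions dominate the final quality score."""
    err = (np.asarray(ref, float) - np.asarray(dist, float)) ** 2
    w = saliency / saliency.sum()   # attention map normalised to sum to 1
    return float((w * err).sum())   # lower is better, as with plain MSE
```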
Abstract:
Divalent cations are thought essential for motile function of leukocytes in general, and for the function of critical adhesion molecules in particular. In the current study, under direct microscopic observation with concomitant time-lapse video recording, we examined the effects of 10 mM EDTA on locomotion of human blood polymorphonuclear leukocytes (PMN). In very thin slide preparations, EDTA did not impair either random locomotion or chemotaxis; motile behavior appeared to benefit from the close approximation of slide and coverslip ("chimneying"). In preparations twice as thick, PMN in EDTA first exhibited active deformability with little or no displacement, then rounded up and became motionless. However, on creation of a chemotactic gradient, the same cells were able to orient and make their way to the target, though often momentarily losing their purchase on the substrate. In either of these preparations without EDTA, specific antibodies to β2 integrins did not prevent random locomotion or chemotaxis, even when we added antibodies to β1 and αvβ3 integrins and to integrin-associated protein, and none of these antibodies added anything to the effects of EDTA. In the more turbulent environment of still more medium, effects of anti-β2 integrins became evident: PMN could still locomote but adhered to the substrate largely by their uropods and by uropod-associated filaments. We relate these findings to the reported independence from integrins of PMN in certain experimental and disease states. Moreover, we suggest that PMN locomotion in close quarters is not only integrin-independent, but independent of external divalent cations as well.
Abstract:
In eukaryotic cells, both lysosomal and nonlysosomal pathways are involved in degradation of cytosolic proteins. The physiological condition of the cell often determines the degradation pathway of a specific protein. In this article, we show that cytosolic proteins can be taken up and degraded by isolated Saccharomyces cerevisiae vacuoles. After starvation of the cells, protein uptake increases. Uptake and degradation are temperature dependent and show biphasic kinetics. Vacuolar protein import is dependent on cytosolic heat shock proteins of the hsp70 family and on protease-sensitive component(s) on the outer surface of vacuoles. Degradation of the imported cytosolic proteins depends on a functional vacuolar ATPase. We show that the cytosolic isoform of yeast glyceraldehyde-3-phosphate dehydrogenase is degraded via this pathway. This import and degradation pathway is reminiscent of the protein transport pathway from the cytosol to lysosomes of mammalian cells.
Abstract:
We have investigated the role of myosin in cytokinesis in Dictyostelium cells by examining cells under both adhesive and nonadhesive conditions. On an adhesive surface, both wild-type and myosin-null cells undergo the normal processes of mitotic rounding, cell elongation, polar ruffling, furrow ingression, and separation of daughter cells. When cells are denied adhesion through culturing in suspension or on a hydrophobic surface, wild-type cells undergo these same processes. However, cells lacking myosin round up and polar ruffle, but fail to elongate, furrow, or divide. These differences show that cell division can be driven by two mechanisms that we term Cytokinesis A, which requires myosin, and Cytokinesis B, which is cell adhesion dependent. We have used these approaches to examine cells expressing a myosin whose two light chain-binding sites were deleted (ΔBLCBS-myosin). Although this myosin is a slower motor than wild-type myosin and has constitutively high activity due to the abolition of regulation by light-chain phosphorylation, cells expressing ΔBLCBS-myosin were previously shown to divide in suspension (Uyeda et al., 1996). However, we suspected their behavior during cytokinesis to be different from wild-type cells given the large alteration in their myosin. Surprisingly, ΔBLCBS-myosin undergoes relatively normal spatial and temporal changes in localization during mitosis. Furthermore, the rate of furrow progression in cells expressing a ΔBLCBS-myosin is similar to that in wild-type cells.
Abstract:
Gene regulation by imposed localization was studied by using designed zinc finger proteins that bind 18-bp DNA sequences in the 5′ untranslated regions of the protooncogenes erbB-2 and erbB-3. Transcription factors were generated by fusion of the DNA-binding proteins to repression or activation domains. When introduced into cells these transcription factors acted as dominant repressors or activators of, respectively, endogenous erbB-2 or erbB-3 gene expression. Significantly, imposed regulation of the two genes was highly specific, despite the fact that the transcription factor binding sites targeted in erbB-2 and erbB-3 share 15 of 18 nucleotides. Regulation of erbB-2 gene expression was observed in cells derived from several species that conserve the DNA target sequence. Repression of erbB-2 in SKBR3 breast cancer cells inhibited cell-cycle progression by inducing a G1 accumulation, suggesting the potential of designed transcription factors for cancer gene therapy. These results demonstrate the willful up- and down-regulation of endogenous genes, and provide an additional means to alter biological systems.
Abstract:
In addition to their well-known functions in cellular energy transduction, mitochondria play an important role in modulating the amplitude and time course of intracellular Ca2+ signals. In many cells, mitochondria act as Ca2+ buffers by taking up and releasing Ca2+, but this simple buffering action by itself often cannot explain the organelle's effects on Ca2+ signaling dynamics. Here we describe the functional interaction of mitochondria with store-operated Ca2+ channels in T lymphocytes as a mechanism of mitochondrial Ca2+ signaling. In Jurkat T cells with functional mitochondria, prolonged depletion of Ca2+ stores causes sustained activation of the store-operated Ca2+ current, ICRAC (CRAC, Ca2+ release-activated Ca2+). Inhibition of mitochondrial Ca2+ uptake by compounds that dissipate the intramitochondrial potential unmasks Ca2+-dependent inactivation of ICRAC. Thus, functional mitochondria are required to maintain CRAC-channel activity, most likely by preventing local Ca2+ accumulation near sites that govern channel inactivation. In cells stimulated through the T-cell antigen receptor, acute blockade of mitochondrial Ca2+ uptake inhibits the nuclear translocation of the transcription factor NFAT in parallel with CRAC channel activity and [Ca2+]i elevation, indicating a functional link between mitochondrial regulation of ICRAC and T-cell activation. These results demonstrate a role for mitochondria in controlling Ca2+ channel activity and signal transmission from the plasma membrane to the nucleus.
Abstract:
In recent decades antenatal screening has become one of the most routine procedures of pregnancy follow-up and the subject of hot debate in bioethics circles. In this paper the rationale behind antenatal screening and the actual and potential problems it may cause will be discussed. The paper will examine the issue from the point of view of parents, health care professionals and, most importantly, the child-to-be. It will show how unreflectively antenatal screening is performed and how pregnancy has come to be treated almost as a disease since the emergence of antenatal screening. Genetic screening and the ethical problems caused by the procedure will also be addressed, and I will suggest that screening has more to do with the interests of others than with those of the child-to-be.
Abstract:
Chemotactic responses in Escherichia coli are typically mediated by transmembrane receptors that monitor chemoeffector levels with periplasmic binding domains and communicate with the flagellar motors through two cytoplasmic proteins, CheA and CheY. CheA autophosphorylates and then donates its phosphate to CheY, which in turn controls flagellar rotation. E. coli also exhibits chemotactic responses to substrates that are transported by the phosphoenolpyruvate (PEP)-dependent carbohydrate phosphotransferase system (PTS). Unlike conventional chemoreception, PTS substrates are sensed during their uptake and concomitant phosphorylation by the cell. The phosphoryl groups are transferred from PEP to the carbohydrates through two common intermediates, enzyme I (EI) and phosphohistidine carrier protein (HPr), and then to sugar-specific enzymes II. We found that in mutant strains HPr-like proteins could substitute for HPr in transport but did not mediate chemotactic signaling. In in vitro assays, these proteins exhibited reduced phosphotransfer rates from EI, indicating that the phosphorylation state of EI might link the PTS phospho-relay to the flagellar signaling pathway. Tests with purified proteins revealed that unphosphorylated EI inhibited CheA autophosphorylation, whereas phosphorylated EI did not. These findings suggest the following model for signal transduction in PTS-dependent chemotaxis. During uptake of a PTS carbohydrate, EI is dephosphorylated more rapidly by HPr than it is phosphorylated at the expense of PEP. Consequently, unphosphorylated EI builds up and inhibits CheA autophosphorylation. This slows the flow of phosphates to CheY, eliciting an up-gradient swimming response by the cell.
Abstract:
Chloroplast DNA restriction-site variation was surveyed among 40 accessions representing all 11 species of giant senecios (Dendrosenecio, Asteraceae) at all but one known location, plus three outgroup species. Remarkably little variation (only 9 variable sites out of roughly 1000 sites examined) was found among the 40 giant senecio accessions, yet as a group they differ significantly (at 18 sites) from Cineraria deltoidea, the closest known relative. This pattern indicates that the giant senecios underwent a recent dramatic radiation in eastern Africa and evolved from a relatively isolated lineage within the Senecioneae. Biogeographic interpretation of the molecular phylogeny suggests that the giant senecios originated high on Mt. Kilimanjaro, with subsequent dispersion to the Aberdares, Mt. Kenya, and the Cherangani Hills, followed by dispersion westward to the Ruwenzori Mountains, and then south to the Virunga Mountains, Mt. Kahuzi, and Mt. Muhi, but with dispersion back to Mt. Elgon. Geographic radiation was an important antecedent to the diversification in eastern Africa, which primarily involved repeated altitudinal radiation, both up and down the mountains, leading to morphological parallelism in both directions. In general, the plants on a given mountain are more closely related to each other than they are to plants on other mountains, and plants on nearby mountains are more closely related to each other than they are to plants on more distant mountains. The individual steps of the geographic radiation have occurred at various altitudes, some clearly the result of intermountain dispersal. The molecular evidence suggests that two species are extant ancestors to other species on the same or nearby mountains.